I was thinking about what prompts the need for traceability and for ensuring that the final application actually satisfies the requirements that were captured at the beginning of the project. Who needs that traceability? That got me thinking about different companies. The type of company a tester works in has a huge impact on their approach to testing. Broadly speaking (read: this is a gross generalisation) we have these types of companies:
1. Product companies selling shrink-wrap software. This is anything from operating systems, games and tax return helpers to utility apps such as Adobe Reader.
2. Consultancies developing a bespoke application for a client to address a specific business problem.
3. A hybrid of the first two, for example further developing or configuring a shrink-wrap application such as SAP or other back-office systems.
4. A large company developing a bespoke application in-house for internal use only, for example a bank or insurance company.
Let’s have a look at each from different angles.
The first type of company, producing products, is somewhat unusual in that there is no single client who determines the requirements. They need to rely on Product Managers or other staff who can work out that there is a target audience that’s likely to use and/or buy their software. Anticipating what the market needs determines the requirements.
Many products, if they sell well, will have several versions and releases. That’s good news for the automation people, as products often need to be regression tested across a variety of platforms, e.g. different operating systems and hardware configurations, or when new functionality is added over time. The quality necessary for their products depends on the market they’re selling to.
The second type is very much focussed on the one client they’re selling to. What’s in the contract will get built (and tested, if the contract specifies it). What’s not in the contract depends on the goodwill of the consultancy. If a client has just spent £3M with you and you wave the contract at them instead of absorbing a two-day RFC, that’s probably the last big contract your company will get.
Automated regression testing depends wholly on the client – have they specified that they want it, is the application large enough to warrant it for just the one version, are there likely to be more versions of the software and more contracts coming in from the client? Then it might make sense to build something automated; otherwise forget it, time is money in this setup.
The hybrid company is closer to the consultancy, but with the added complication that there is now third-party code to be considered for testing. More often than not that code has already been tested, but one has to be careful about excluding functions from testing, as they may be used differently in the context of the newly developed/configured solution.
The in-house development test effort very much depends on the company; however, there usually are requirements that have been developed by internal groups. This type is similar to the product companies in that the software developed is there to stay, probably for more than one version. That means more effort can be spent on testing it properly and with a long-term view.
To come back to the need for confirmation – if you’re working in some sort of consultancy, traceability is a no-brainer. You’ll need to confirm that what your company is delivering adheres to the requirements that were set out – and, by extension, that you fulfil the contract. From the consultancy’s point of view, satisfying the contract has to come first; otherwise the business would leave itself wide open to being sued, or simply not paid, for breaching the contract. So once the confirmation is out of the way there may or may not be time left to test the application in detail.
If you’re working in a product company or on an in-house development effort it’s a bit different. There are other factors to consider. Are there any legal requirements to adhere to certain standards? Does the software need to be certified against particular standards? GLP or GCP come to mind. Traceability is perhaps not so important in this setup compared to the consultancies, unless the PM or project owner insists on it – I’d argue out of the false hope that the system will then actually deliver what was expected when the requirements were written.
There was an argument in one of the LinkedIn groups about whether checking/testing against requirements is a hoax or whether it adds value. As always, the answer is: it depends. If you have a 1:1 relationship of tests to requirements, I’d say it’s a hoax. All requirements need more than one simple test. Saying that a particular requirement has been tested against, ticking the box and declaring it implemented gives a false picture.
Imagine you have a requirement that says you need to have several user roles in the system. Would you confirm that with one single check? You could, but would it actually give you the information about, and confidence in, the application that the stakeholders ask us to provide? I don’t think so. Why artificially tick a box and say that, yes, there are several user roles in the system, when it’s a white lie because you haven’t actually tested around that requirement?
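To make that concrete, here is a minimal sketch (in Python/pytest) of what testing around such a requirement could look like. The requirement ID, the role names and the check_access() stand-in are all invented for illustration; the point is simply that one requirement maps to several checks, not one tick-box:

```python
# A minimal sketch, not a real system: REQ-042, the role names and
# check_access() are hypothetical, invented purely for illustration.
import pytest

# Custom pytest marker used to trace each test back to a requirement.
# (Register it under [pytest] markers in pytest.ini to silence warnings.)
requirement = pytest.mark.requirement

ROLES = {"admin", "editor", "viewer"}

def check_access(role, action):
    # Stand-in for the real system under test: only admins may delete.
    return action != "delete" or role == "admin"

@requirement("REQ-042")
def test_all_expected_roles_exist():
    # The single box-ticking check: the roles are defined...
    assert ROLES == {"admin", "editor", "viewer"}

@requirement("REQ-042")
@pytest.mark.parametrize("role", sorted(ROLES - {"admin"}))
def test_non_admin_roles_cannot_delete(role):
    # ...but the same requirement also needs behavioural checks per role.
    assert not check_access(role, "delete")

@requirement("REQ-042")
def test_admin_can_delete():
    assert check_access("admin", "delete")
```

All three tests trace back to the same requirement; ticking the box after only the first one is exactly the white lie described above.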
I can see why checking against requirements is popular, as it’s easy to put into contracts. From a quality point of view, though, its effectiveness is almost non-existent. It might help the sales and legal people (and not even that, if the customer is set on suing a non-compliant company), but the PM should be looking elsewhere for confidence that the final product is actually what the customer wanted in the first place.
GCF conformance testing of mobile handsets is another example that comes to mind - the intention with these is that manufacturers can say that handsets "behave" in a certain way under certain conditions.
It's not, of course, saying that the handsets are fault-free - but then I don't think they'll claim that - and it's probably only a subset of the product manager's requirements.
So, where there are potentially legal requirements, a form of traceability (against a subset of the testing) is required.
I thought I'd written previously about conformance testing problems - but I see it's in my draft folder - time to get busy on that...
I'd pretty much agree with you, Thomas. Traceability is important, but only in a limited sense, and its importance varies depending on the nature of the development and the contract.
As far as I'm concerned traceability qualifies as being necessary, but definitely not sufficient, for good testing. But that's based on the assumption that the requirements are reasonably good quality, and it doesn't take account of all the testing that's not based on the requirements. If the requirements are rubbish then who cares about traceability? There are bigger problems, and a neat trail from the requirements through to the testing won't help much.
There are all sorts of weaselly subjective words in that paragraph, but that's because it all depends on the particular context of the company, product and project.
As a test manager I like to have traceability just to provide an initial structure to the testing; so I can see that at least I'm covering the whole application, its functionality and its environment. I also want to be able to know what testing is relevant to which requirements. That's about the management though, not the real testing. As you say, Thomas, it doesn't mean you've covered the testing for a requirement effectively.
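Something like the toy matrix below is all I mean by that initial structure; the requirement IDs and test names are invented for illustration, and note that the report deliberately says nothing about whether the linked tests are any good:

```python
# A minimal sketch of a traceability matrix; all IDs and names are
# hypothetical. It flags coverage gaps, nothing more.
TRACEABILITY = {
    "REQ-001 login":       ["test_login_ok", "test_login_bad_password"],
    "REQ-002 user roles":  ["test_roles_exist"],   # thin coverage
    "REQ-003 audit trail": [],                     # untested!
}

def report(matrix):
    """Flag requirements with no linked tests; says nothing about test quality."""
    for req, tests in matrix.items():
        status = "UNCOVERED" if not tests else f"{len(tests)} test(s)"
        print(f"{req}: {status}")

report(TRACEABILITY)
```

It tells me REQ-003 has been forgotten entirely, which is useful management information, but it can't tell me that the single test against REQ-002 is anywhere near enough.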
Traceability is more about a basic level of accountability, but providing it shouldn't buy you a huge amount of credibility.
It disappoints me when the debate about traceability becomes polarised. Some places take "testing against requirements" to ludicrous extremes. I've seen a "standard" that mandated that all testing should be traceable back to a requirement. That gave bad testers the perfect excuse to skimp testing when there were missing or poorly stated requirements.
If you warn against the dangers of obsessing about traceability, people can start to assume you're dismissing the whole concept and don't take requirements seriously. It's not a case of either "100% 2-way traceability" or "forget about traceability". Presenting the debate as a dichotomy is extremely unhelpful. It doesn't acknowledge the interesting subtleties of requirements and testing.
Thomas,
Great article. I like to be able to trace my testing back to the original requirements in broad areas - for example I want to see which functional areas of the application have been exercised by particular tests.
For me, the important thing is being realistic about what the audit trail is telling you - and what it is not telling you.
The practice that you have highlighted, where a single test condition is used as the basis for ticking a box against a 'multi-user' requirement, continues to concern me, as I see evidence of it so often.
As you and James have said, the fact you have provided evidence of traceability of coverage, for example, does not mean that you have done good testing.
Stephen
All, thanks for the comments.
Simon, the need to test for regulatory reasons, or to comply with certain standards, is nothing we can do anything about.
We can ask if it adds any value apart from helping marketing (which is important enough).
James,
I'm in two minds about traceability, and it usually comes down to the project. I've seen projects where developers simply forgot to add a function, and it only came to light when someone asked about it. For this reason alone, having a rudimentary check at the function level might be a good idea.
My case against traceability is that it is sold as something that adds value to the customer. I'd argue that, for small to medium projects, we deliver less value that way. The time and effort that we spend on traceability could have been spent testing the application and finding out more information about it.
I take your point about people not taking requirements seriously if we don't check against them. If, however, the customer is involved in the development of the product/solution, what need is there for tracking requirements? Some food for thought.
Stephen,
I'd argue that I can show good coverage of the system without tracing back to requirements. I'd even argue that I'd provide better value to the customer (internal to the PM, external to the paying client) by showing what the application is really about. Wouldn't that add more value than showing that we adhered to requirements that the customer might now feel weren't so good in the first place?
Thomas,
Absolutely. Tracing back to requirements is only one way of demonstrating that the SUT is fit for purpose.
Stephen