Monday, 2 August 2010

The need for confirmation – why checking against requirements is not going away

I was thinking about what prompts the need for traceability and for ensuring that the final application actually satisfies the requirements captured at the beginning of the project. Who needs that traceability? I then thought about different companies. The type of company a tester works in has a huge impact on their approach to testing. Broadly speaking (read: this is a gross generalisation) we have these types of companies:

1. Product companies selling shrink-wrap software. This is anything from operating systems, games and tax return helpers to utility apps such as Adobe Reader.

2. Consultancies developing a bespoke application for their clients to address a specific business problem.

3. A hybrid of the first two: further developing or configuring a shrink-wrap application, for example SAP or other back-office systems.

4. A large company developing a bespoke application in-house for internal use only, for example banks or insurance companies.

Let’s have a look at each from different angles.

The first type of company, producing products, is somewhat unusual in that there is no one, single client determining the requirements. They need to rely on Product Managers or other staff who can guess or work out that there is a target audience likely to use and/or buy their software. Anticipating what the market needs determines the requirements.
Many products, when they sell well, will have several versions and releases. That's good news for the automation people, as products often need to be regression tested across a variety of platforms, i.e. different operating systems and hardware configurations, or when new functionality is added over time. The level of quality necessary for their products depends on the market they're selling to.

The second type is very much focussed on the one client they're selling to. What's in the contract will get built (and tested, if the contract specifies it). What's not in the contract depends on the goodwill of the consultancy. If a client has just spent £3M with you and you wave the contract at them instead of absorbing a two-day RFC, that's probably the last big contract your company will get from them.
Automated regression testing depends wholly on the client: have they specified that they want it, is the application large enough to warrant it for just the one version, and are more versions of the software (and contracts) likely to come in from the client? Then it might make sense to build something automated; otherwise forget it, time is money in this setup.

The hybrid company is closer to the consultancy, but with the added complication that there is now third-party code to be considered for testing. More often than not that code has already been tested, but one has to be careful about excluding functions from testing, as they may be used differently in the context of the re-developed/configured new solution.

The in-house development test effort very much depends on the company; however, there usually are requirements that have been developed by internal groups. This type is similar to the product companies in that the software developed is there to stay, probably for more than one version. That means more effort can be spent on testing it properly and with a long-term view.

To come back to the need for confirmation: if you're working in some sort of consultancy, traceability is a no-brainer. You'll need to confirm that what your company is delivering adheres to the requirements that were set out, and by extension that you fulfil the contract. From the consultancy's point of view, satisfying the contract has to come first; otherwise the business would leave itself wide open to being sued, or simply not paid, for breaching the contract. So once the confirmation is out of the way, there may or may not be time to test the application in detail.

If you're working in a product company or on an in-house development effort it's a bit different, and there are other factors to consider. Are there any legal requirements to adhere to certain standards? Does the software need to be certified against particular standards? GLP or GCP come to mind. Traceability is perhaps not as important in this setup as it is in consultancies, unless the PM or project owner insists on it, I'd argue out of the false hope that the system will then actually deliver what was expected when the requirements were written.

There was an argument on one of the LinkedIn groups about whether checking/testing against requirements is a hoax or whether it adds value. As always, the answer is: it depends. If you have a 1:1 relationship between tests and requirements, I'd say it's a hoax. All requirements need more than one simple test. So saying that a particular requirement has been tested against, ticking the box and declaring it implemented, gives a wrong picture.
Imagine you have a requirement that says the system needs to have several user roles. Would you confirm that with one single check? You could, but would it actually give you the information about, and confidence in, the application that the stakeholders ask us to provide? I don't think so. Why artificially tick a box and say, yes, there are several user roles in the system, when you're telling a white lie if you haven't actually tested around that requirement?
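To make that concrete, here's a minimal sketch of the difference between a tick-box check and testing around the requirement. The role model and the `can` helper are entirely hypothetical, invented for illustration; the point is only that one requirement fans out into several checks:

```python
# Hypothetical role/permission model, invented for illustration only.
PERMISSIONS = {
    "admin":  {"read", "write", "delete", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

# The tick-box check: "there are several user roles". Technically it
# confirms the requirement, but tells you almost nothing on its own.
assert len(PERMISSIONS) > 1

# Testing around the requirement: the roles actually behave differently.
assert can("admin", "delete")              # admins have elevated rights
assert not can("viewer", "write")          # viewers are genuinely restricted
assert not can("editor", "manage_users")   # roles are distinct, not duplicates
assert not can("ghost", "read")            # unknown roles get no access at all
```

The single `len(PERMISSIONS) > 1` assertion is what lets you tick the traceability box; the four checks below it are what actually give stakeholders confidence that the roles do what the requirement intended.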

I can see why checking against requirements is popular: it's easy to put into contracts. Its effectiveness from a quality point of view is almost non-existent, so it might help the sales and legal people (and not even that if the customer is set on suing a non-compliant company), but the PM should be looking elsewhere for confidence that the final product is actually what the customer wanted in the first place.