The day-job has been crazy busy the last several weeks. I have several half-written entries that I want to finish and post, but between project hours and things that need doing at home, there simply has not been much time. However, I've been lurking in a couple of places, reading posts and email conversations, getting my fix of "smart people's thoughts" that way.
The interesting thing is that a couple of themes have crept back up, and I finally have the chance to look at the topic(s) myself and examine some aspects I may not have considered before.
The initial question revolved around defining requirements and establishing traceability of test plans, test cases and the like back to requirements. By extension, when executing the test cases, defects found should likewise be traceable back to said requirements.
Now, folks who have read my blog in the past will realize that I've been writing about requirements off and on for some time. Well, actually, it's more "on" than "off." I write about requirements and testing a lot. Possibly this is because of the struggles the company I work with has in defining requirements, and the subsequent struggles to adequately test the software products created from those requirements. Now, to be clear, it is not simply this company I am with that has an issue. Most places I've worked have been seriously "requirements challenged."
One thing that sends up every warning flag on the back of my neck is the idea that we can fully define the requirements before doing anything else. I know, Robin Goldsmith has an interesting book on defining "REAL requirements," and he has some good ideas. In light of the shops where I have worked over the last, oh, 25 and more years, some of these ideas simply don't apply. They are not bad ideas; in fact, I think testers should read the book and get a better understanding of it. (Look here to find it, yeah, I know it's pretty pricey - expense it.)
Having said that, how many times have we heard people (developers, testers, analysts of some flavor, project managers, et al.) complain that the "customers" either "don't know what they want" or "changed their requirements"? I've written before about how the understanding of requirements changes, and how considering one aspect of a project may inform understanding of another. When this happens in design, development, or, worse, testing, the automatic chorus is that the "users" don't know what they want and work will need to be changed. All of us have encountered this, right? This is nothing new, presumably.
My point with this revisit is that if you are looking to find the cause of this recurring phenomenon, look in a mirror. All of us have our own biases that affect everything we do - whether we intend them to or not.
So, if your shop is like some I've worked in, you get a really nice Requirements Document that formally spells out the requirements for the new system or enhancement to the existing system. The "designers" take this and work on their design. Test planners start working on planning how they will test the software and (maybe) what things they will look for when reviewing the design.
Someone, maybe a tester, maybe a developer, will notice something: maybe an inconsistency, or maybe they'll just have a hunch that the pieces don't quite go together as neatly as they should. So a question will be asked. Several things may happen. In some cases, the developer will be given a vague instruction to "handle it." In other cases, there will be much back and forth over what the system "should" do, and then the developer will be told to "handle it."
At one shop I worked at, the normal result was a boss type demanding to know why QA (me) had not found the problem earlier.
My point is, defining requirements is itself an ongoing process around which all the other functions in software development operate.
Michael Bolton recently blogged on test framing. It is an interesting read. It also dovetails nicely with a question raised in Rebecca Staton-Reinstein's book Conventional Wisdom about how frames and perspectives can be both limiting and liberating.
This brings me back to my unanswered question on Requirements: How do you show traceability and coverage in advance when it is 99.99% certain that you do not know all the requirements? Can it really be done or is it a fabled goal that can't be reached - like the city of gold?
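To make the problem concrete, here is a minimal sketch (all requirement and test-case IDs are hypothetical, and this is just one way a traceability matrix is often modeled) of mapping test cases to requirements and computing coverage. Notice the catch: the report can only ever measure coverage against the requirements you have already identified, which is exactly the gap in the question above.

```python
# Hypothetical sketch of a requirements-to-test-case traceability matrix.
# It can only report coverage of the requirements we currently know about.

known_requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Which requirements each test case claims to cover (the "traceability" links).
test_coverage = {
    "TC-101": {"REQ-1"},
    "TC-102": {"REQ-1", "REQ-2"},
}

# Union of everything any test case traces back to.
covered = set().union(*test_coverage.values())

# Known requirements with no test case at all - visible coverage gaps.
uncovered = known_requirements - covered

print(sorted(uncovered))  # -> ['REQ-3']
```

The matrix happily reports 100% coverage the moment TC-103 is added for REQ-3; what it can never report is the unknown REQ-4 that nobody has discovered yet.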
Wiser people than I may know the answer.