Sunday, February 9, 2014

Pandora's Box: Testing, Active Consideration & Process Models

Based on emails I've received, it seems I've committed an injustice in my previous posts on Pandora's Box Testing.  Some people think I'm coming down unfairly on organizations, and testers in particular, that focus their efforts on formal, written test scripts based on "the requirements."

For that, I apologize.  In no way did I mean to imply that my respected colleagues who rely strictly on documented requirements to "drive testing" are always engaging in Pandora's Box Testing. 

My choices are either to write a massive tome or to split the ideas into chunks as I sort through them in my head.  Or, perhaps more clearly stated, I write on the ideas as they form in my head, and use the writing and the consideration I give after writing to grow the ideas further.

Many experienced testers are extremely aware of the problem of Pandora's Box Testing.  Some are rigorous in their investigation and research to consider many possible realms and change "hope" to active decisions around what is and is not to be tested.

It is that recognition, that decision, that matters: examining what can be tested in a meaningful way and what cannot, and looking at the reasons why certain functions cannot, or should not, be tested.

It is in this consideration that we move away from "trust," "belief" and "hope" and into the realm of "This is the right testing to do because..."

Thus, for each project, we consider what needs to be done to serve the stakeholders.  The danger is when testers are told what the stakeholders need done.  If the product owner, business representative and/or customer representative are not in agreement or, more likely, do not understand the implications, testers need to make sure that the implications are clear to all.

This does not need to be confrontational, simply a discussion.

When I have encountered this behavior it has been the result of a few modes of behavior.  It can be that people, like the PM, development leads, etc., simply don't know any different.  It may be they are convinced that the only testing that really matters is one particular type or approach.  They have been told that such a thing is a "best practice."  Right. 

Other times, they may be suffering from their own version of Pandora's Box Testing:

Pandora's Box Software Development Model 

Hope is the greatest evil let loose from Pandora's Box.  We find software projects brimming with it.

PMs and BAs hope that by "following the process (model)" everything will work.  They hope that by creating the forms on time and having the meetings every week that everything will be fine.

In the meantime, designers have many unanswered questions and hope that the design they come up with will address them.  Developers don't understand the design and hope the designers know what they are doing.  Then they don't have time to unit test, or have been told "all testing" will be "done by QA."

Of course, because the designers and developers have other time-sensitive projects, they really can't sit down and talk things through carefully with each other or with the testers.  Or, for that matter, with the product owners or customer representatives.  So, they hope everything comes together. 

So, when testers "get the code" to test, we may hope that this time, things were done "right."  Sadly, far too often, we find they were not.  Again.

What can we do?  We're just testers, right?

We can ask questions.  We can take actions that make the things we hope for more likely to actually happen.  We can inform people of the impact of their actions:
  • We can show developers how "making their date" by delivering code that has not been unit tested will impact further testing;
  • We can show development/project management how optimistic (at best) or aggressive development timelines will limit the available time for review and unit testing when problems are encountered;
  • We can show how that limited time will impact further testing; 
  • We can show designers how "making their date" with a design that is not reviewed or understood will impact developers and testers, and ultimately the people using the software;
  • We can show how BAs "making their date" with poorly considered documented requirements impacts all of the above;
  • We can show PMs how honest, open, clear and concise communication will reduce the above risks.

THAT is how we combat and defeat the evil let loose from Pandora's Box. 

We take action to make the hopes come true.

We take positive action to change things.  

Wait! One more thing... 

To my respected colleagues who emailed me who rely strictly on documented requirements to "drive testing:"

If your organization fits the description above, and if you dutifully follow the process without variance, then I am reasonably certain that you are engaging in Pandora's Box Testing.


  1. "To my respected colleagues who emailed me who rely strictly on documented requirements to 'drive testing:'"

    Why would you respect a colleague with so little understanding of testing? Such people are toxic to our craft, like alcoholic airline pilots, or doctors who fake their credentials. Maybe you were apologizing to them in jest, but I wish the sarcasm was more pointed.

    There is no such thing as "relying strictly" on one source of information when you test. I guess the best case scenario when someone says that is that he is telling a lie. It's not even a coherent idea to speak of reliance in that way-- as if the tests were somehow determined in advance and not the result of a sincere learning and exploring process.

    When someone speaks of highly formal testing that way, I can tell you, as someone who has done testing that is among the most formal ever done in our field (on court cases, where hundreds of thousands of dollars might be lavished on a few minutes of demonstration for a jury in the presence of hostile experts) I am offended by these people whose idea of testing comes, apparently, from Saturday morning cartoons.

    Testing is a deep thought process, when done well, that draws on innumerable sources and whose course cannot be pre-determined.

    1. I believe we are in agreement.

      I keep meeting people for whom the idea of "relying strictly" on one source of information when you test is part and parcel of what they do. It also explains the outrageous behaviors found in the software they "test."

      May I quote the last line in the comment? It very neatly says what I was struggling to say in a meeting last week. I think it is going on the wall of my cubicle at the client site.