Friday, December 17, 2010

On Exploration or How You Might Be Testing and Not Know It

I had an interesting conversation earlier this week.  A colleague dropped into the cube, grabbed a handful of M&M's and muttered something about how she kept finding defects and wasn't able to get any test scripts written because of it. 

OK - That got my attention.

So, I asked her what she meant.  It seems the project she was working on was not terribly well documented, the design was unclear, the requirements were mere suggestions, and she had already received several builds.  So she was working her way through things as she understood them.

So she explained what was going on... She intended to make sure she understood the features correctly so she could document them and write her test scripts.  Then she could start testing.

The problem was, she'd try different features and they didn't work like she expected.  So, she'd call the developer and ask what she was doing wrong.  Problem: She wasn't. 

The issue, as she saw it, was that the code was so unstable that she could not work her way through it enough to understand how to exercise the application as fully as possible.  To do that, the standard process required test cases written so that they could be repeated and would "fully document" the testing process for the auditors.  Because she kept finding bugs just "checking it out," she was concerned that she was falling farther and farther behind and would never really get to testing.

More M&Ms.

So we talked a bit.  First response:  "Wow!  Welcome to Exploratory Testing!  You're going through the product, learning about it, designing tests and executing them, all without writing formal test cases or steps or anything.  Cool!"

Now, we had done some "introduction to ET" sessions in the past, and had gradually ramped up the time dedicated to ET in each major release.  The idea was to follow leads and hunches and, well, explore.  The only caveat was to keep track of the steps you followed so you could recreate "unusual responses" when they were encountered.
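
(An aside for anyone wondering what "keeping track of your steps" can look like without a formal script: here is a minimal sketch, entirely my own illustration and not something we handed out.  The file name, the note() helper and the flag field are all hypothetical; a notepad or a plain text file works just as well.)

#!/usr/bin/env python3
"""A bare-bones exploratory-testing session log (illustrative only).

One hypothetical way to keep track of the steps you followed so an
"unusual response" can be recreated later.  Adjust the file name and
fields to whatever your own context needs.
"""

from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("et-session-notes.txt")  # hypothetical location


def note(step: str, observed: str = "", flag: bool = False) -> None:
    """Append a timestamped step (and what you saw) to the session log.

    Set flag=True when the response looks unusual, so it is easy to find
    when you go back to recreate it or write it up as a defect.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    marker = "[!]" if flag else "[ ]"
    line = f"{stamp} {marker} STEP: {step}"
    if observed:
        line += f" | OBSERVED: {observed}"
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(line + "\n")


if __name__ == "__main__":
    # Example of what a few minutes of a session might leave behind.
    note("Opened the order entry screen with an existing customer")
    note("Entered a quantity of 0 and tabbed out",
         observed="Screen accepted it; total showed -$0.00", flag=True)
    note("Saved the order", observed="Unhandled exception dialog", flag=True)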

Explaining that the process she was working through actually WAS testing led to, well, more M&Ms.

The result of the conversation was that the problems she was encountering were part of testing - not delaying it.  By working through reasonable suppositions about what you would expect the software to do, you are performing a far more worthwhile effort, in my mind, than "faithfully" following a script, whether you wrote it or not.

Mind you, she still encountered many problems just surfing through various functions.  That indicated other issues - but not that she was unable to test. 

That thought prompted another handful of M&Ms, and a renewed effort in testing - without a script.

2 comments:

  1. "... muttered something about how she kept finding defects and wasn't able to get any test scripts written because of it."

    Hurray! Isn't that fantastic information? Perhaps it is the 'end of the illusion' that is causing the pain?

  2. Hey Griffin - I think that is exactly what was going on. The end of the illusion, indeed!
