Sunday, May 27, 2012

On Missing the Mark or What Bagpipe Bands Taught Me About Software Testing

For the third year in a row, I am not where I spent some 30 Memorial Day Weekends.  The Alma Highland Festival is a two-day affair on the campus of Alma College in Alma, Michigan.  At one point in my life, I took the Friday before off from work, loaded my kilt, drum(s), coolers of adult beverages, and a small cooler of fruit and sandwich makings into the car and drove the 90 minutes or so it took to get there.  I'd then camp out in the parking lot until we could get into the digs that would be ours until Sunday evening.

There is a two-day pipe band contest, one on Saturday and one on Sunday.  At one point they had enough bands to stretch from end zone to end zone on the football field, with bands lined up every 5 yards or so.  This year they have some 25 or 30, divided into 5 "grades" or levels of experience and expertise.

I got an excited text message yesterday from a band I have been helping out by teaching their drummers once a month or so this last year.  They had played really well and were looking forward to hearing the results at the end of the day.  There were a total of 10 bands in the grade they were competing in and they hoped for a good placement, at least in drumming, if not overall.

A few hours later, I got a very sad text message.  "What do I know?" wrote the sender.  "We ended 9th drums. I thought we did better."

I have a response that has become almost "canned," I have used it so often with so many beginning bands.  It goes something like this:
I would not be surprised to hear that you played well.  You have been working really hard and the improvement shows.  What we don't know is how hard the other bands have been working.  Since it is hard to listen objectively while playing yourself, then comparing yourself to every other band, how do you know you did not do the absolute best you could?  Even if you did, how do you know that the other bands did not do the same?  What if their "best" was simply better than your best for the day?  If you were pleased with how you played, accept that as part of the reward for the hard work.  Recognize that the real point is to improve your level of play and be able to know you gave nothing away for the other bands to capitalize on and beat you.  If they outdrummed you today, congratulate them, have an {adult beverage} with them and a laugh or two, then work all the harder to get ready for the next contest.

It is a model I've used for years, with every level of band I've played with or worked with, from the absolute beginners to Grade 2 - one step away from the god-like heights of Grade 1, the top of the field.  Sometimes it is hard to hear; other times, it makes things a bit easier to take.  A fair number of times, it is also true.

What does this have to do with software testing?

It is reasonably related - No matter how hard you try and no matter how carefully you work, you will not find every defect in the system.  Full Stop.

No software tester or test team can find every defect.  That is a simple fact.  Some folks feel devastated when a defect "gets away" and is found by the customer or users.  What information did you miss that led to you not exercising the exact scenario?  Was there any reason to suspect you should exercise that exact scenario?  If the choice was to exercise that scenario and not others, what would be the impact of doing so?  What bugs might have been released instead of the one that was?  How can you know?

Contrary to those who cite "defect free" as the target of good testing, you cannot possibly exercise every scenario, every environment, and every combination of variables to cover everything.

Learn from the defects that get through.  Examine your presumptions, then ask: given what you know now, and the results of the decisions made, would you have made the same decisions about testing?  Can you apply these lessons to future projects?  If so, have a nice cold {adult beverage} and move on.

When the results are less than optimal, in pipe bands or in testing, if you learned something, apply it and move on.  Berating yourself or your fellows does no good.


  1. Do you mean a difference between "relative success" and "absolute success"? Or is your comment more about context for success? Great post - thought provoking!

    1. I've never experienced "absolute success," meaning completely unqualified, no-way-it-could-have-been-better success. Therefore I believe the question is what "relative" means. I prefer to think of it as a "reasonable level of success."

  2. Hi! Good story.

    I was in a project that had a criterion that the customer would return the product if they found a single bug in the system. There was immense pressure on the testing team, taking into consideration that the system was so vast.

    What I learned from that project was the skill to negotiate. I learned to discuss with the customer and to explain what we can and can't do. It also helped me to understand what we could do.

    Currently I do a defect analysis of critical customer cases and compare them against my skills as an exploratory, heuristic tester to see which heuristics I should focus on more to find the bugs that slip away. BECAUSE BUGS DO SLIP AWAY FROM TESTING!

    We all need to be given a chance to fail; otherwise we don't learn. If we fail and get bashed for it, we will withdraw into our shells and try to protect what dignity we still have. Only by giving room to explore, experiment, and innovate can we reach higher learning and, eventually, success.

    (I want to play bagpipe! Where can I get one? My wife will kill me if I buy one, tho. :) )


    1. I like your story as well. Negotiation and managing expectations are among the more challenging tasks I can think of for most testers (QA, whatever). It doesn't seem to matter if it is their management or their customers.

      As for bagpipes, weeeeeeeell, you can buy a nice set to nail to the wall pretty inexpensively. A set to play takes a fair amount of learning ... and money. ;)