Saturday, May 14, 2011

Incomplete Complete Testing

In March, the local testing group got together to eat pizza and talk about testing.  We tend to get together each month, eat pizza, and discuss some aspect of testing.  This time we had a fun meeting where my boss and I gave a "preview" of the presentation on starting testing groups that we were slated to give at STPCon in Nashville later that month.  We had a decent-sized turnout and a lively discussion.

One portion stuck out to everyone. There was an animated discussion around whether the efforts of a testing group could result in "complete" testing.  This discussion was the result of a seemingly simple question, "Can you really have complete testing of an application?"  It took almost no time for us to realize we had a topic for the April meeting. 

The challenge was sent out: everyone interested was to bring the "proof" they had cited and make sure their arguments were well considered for the April meeting.  After indulging in yet more pizza and an introduction/ice-breaker exercise, we settled down to business.

The core question revolved around what "complete" means and what "testing" means.  Could we agree on the terms?  It seems simple enough, no?  Have you ever tried to get a dozen or so people with different backgrounds, training and experience, some testers, some designers, some programmers, to agree on something that simple?  It actually took longer than I expected.  Testing is more than "unit" testing.  Testing is more than making sure things work.  Yes?  Well, maybe.  With a bit of discussion, we succeeded in getting an understanding we could work with: that testing involves more than what many of us had individually thought before the discussion, and that it involves aspects some of us had not considered at all.

The interesting part of the conversation was around the idea of "proof" that complete testing was not only possible, but could reasonably be done.  After some discussion around what constituted "proof," it dawned on most people that a conceptual "proof" (think of a theorem from high school math class) leaves an awful lot of wiggle room.

You see, in certain limited circumstances it may be possible to test every combination of everything impacting the system.  It may be possible to cover the full range of potential valid and invalid input data.  It may be possible to exercise every path within the code, and even every combination of loops and branches along each of those paths.
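
Just to put a number on that "may be possible": here's a back-of-the-envelope sketch in Python.  The branch and loop counts below are made-up assumptions, but even a toy routine piles up paths quickly.

    # Illustrative numbers only; plug in counts from your own code.
    independent_branches = 20        # independent if/else decisions
    loop_iteration_limit = 100       # one loop that may run 0 to 100 times

    # Each independent branch doubles the path count; the loop multiplies it
    # again by the number of possible iteration counts.
    paths = (2 ** independent_branches) * (loop_iteration_limit + 1)
    print(f"{paths:,} distinct paths")    # 105,906,176 for this tiny example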

And then there is the reality of it.  Can you really do all of that?  Can you really do any of that?  Really?

How small is the system you're testing? 

The probability of actually doing those things, and the costs associated with them, are the issue.  Really. 

You may be able to cover some things.  But all of them?  Really?

You see, an awful lot of systems have fairly complex input data structures.  Lots of potential valid input values.  And lots more potential invalid values.  If you commit to "complete" testing, will you really test all of them?  Then there's Doug Hoffman's example of calculating a square root.  Simple, eh?  Something about floating point and five significant digits and unsigned integers, and if you need to be sure the routine is right, how do you do that?

I mean, it's four billion possible values, right?  (C'mon, say that like Dr. Evil with the little finger pointed out: four bil-lee-on...)  Can you test it?  It depends, right?  What kind of machine are you running on?  An XT clone?  A supercomputer?  Makes a difference, no?  Well, on one it might be completely impossible.  On another, it might take ten minutes and show that the formula works for every possible input value, except for two. 
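
For what it's worth, here's a rough sketch of what "test every input" could look like.  This is my own illustration, not Doug Hoffman's actual harness; Python's math.isqrt stands in for the routine under test, and the oracle just squares the answer and compares.  Whether the loop finishes in minutes, hours, or never is entirely about the machine and language it runs on.

    import math

    def exhaustive_sqrt_check(sqrt_under_test=math.isqrt):
        # Check the routine for every possible 32-bit unsigned input.
        # Oracle: r is the correct integer root if r*r <= n < (r+1)*(r+1).
        failures = []
        for n in range(2 ** 32):              # all 4,294,967,296 values
            r = sqrt_under_test(n)
            if not (r * r <= n < (r + 1) * (r + 1)):
                failures.append(n)
        return failures

    # A fast enough machine grinds through this in minutes; pure Python on a
    # desktop takes a great deal longer, which is exactly the point.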

Then again, there's the question of the environment itself.  If you're running in a Windows environment, what is all that stuff running in the background anyway?  What happens if some of that stuff is not running?  Does it make a difference?  How do you know?  Are you certain? 

Without knowing, how can you possibly say that you can test all the environmental configuration combinations?  Can you test everything?  If not, can you really say you can completely test your system? 
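
To see why I keep asking, try a made-up count of just the environment.  The numbers below are pure assumptions (pick your own), but the conclusion survives any reasonable choice of them.

    # Entirely assumed numbers for a single Windows test machine.
    background_services = 40     # each may or may not be running
    os_versions = 5
    patch_levels = 10

    combinations = (2 ** background_services) * os_versions * patch_levels
    print(f"{combinations:,} environment combinations")   # roughly 55 trillion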

So, you see where I'm going.  And that is kind of where the conversation went at the meeting. 

Can you test your systems completely?  Really?  Completely?

3 comments:

  1. Nice blog Pete. I was planning something similar for my next blog post. I just completed the BBST Foundations course (where we covered that example from Doug Hoffman) and this was one of the "big concepts" swimming around in my head needing to get out.

  2. Thanks! By all means DO write about it! This is one of those topics where there are so many considerations and twists and turns that I don't think it can really be /completely/ covered.

  3. OK....I got the time today to finally organize my thoughts into a (hopefully) cohesive body of words.
    http://www.chrischartier.info/?p=151
