Tuesday, August 23, 2016

On the Value of Software Testers

This was originally published under the title "Considering the Value of Software Testers" in Stickyminds, July 2014. The original, unedited version appears below. - Pete
I try hard to learn what other people think about testing and how to do it well.  If you are like me, you have as well.  In doing so, you've also heard a variety of answers from gurus telling us what to focus on. If so, then I suspect you'll find these ideas familiar:
  • Software testers find bugs;
  • Software testers verify conformance to requirements;
  • Software testers validate functions.
There are different versions of these ideas; they may be expressed in different ways.  Some people focus exclusively on one item.  Some will look at two.  Sometimes these ideas are presented as the best way (or the “right way”) to deal with questions around testing. 

Some organizations embrace one or more of these ideas. They define and direct testing based on their understanding of what these ideas mean.  They insist that testing be done in a specific way, mandating practices (or documents) in the belief that controlling practices will ensure maximum effectiveness and the best possible results. Less training time and easier switching between projects are two common reasons for doing this.

Frankly, even when they work, I find the results unsatisfying. For example, the result of “standardizing” often consists of detailed scripts.  These scripts direct people's efforts, which often results in actively discouraging questions.  The reasons for detailed scripts are often wrapped around concepts that many in the organization have a very shallow understanding of, such as Six Sigma in software development and repeatability of effort.

In Six Sigma, variation is viewed as the cause of error. A shallow understanding of Six Sigma leads to the belief that varying from the assigned steps in a test document will result in “error” in testing, making variation in test runs a cause of deep concern.

If the “expected results” explicitly state one thing, those executing the tests will soon find themselves looking only for that thing. As Matt Heusser has often said (and I’ve stolen the line time and again), “At the end of every expected result is another, undocumented statement that says ‘… and nothing else strange happened’.” 
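To make that concrete, here is a minimal sketch of a scripted check in Python. The login_page object and its methods are hypothetical, invented purely for illustration; the point is what the assertion does and does not cover.

```python
# A minimal sketch of a scripted check that verifies only the documented
# "expected result". The login_page object and its methods are hypothetical,
# invented for illustration.

def test_login_shows_welcome_message(login_page):
    login_page.submit("pat", "s3cret")

    # The scripted expected result: the welcome banner says "Welcome, pat".
    assert login_page.banner_text() == "Welcome, pat"

    # Everything else -- a stack trace in the footer, a three-second delay,
    # a warning in the browser console -- passes silently. The unwritten
    # clause "... and nothing else strange happened" is never checked.
```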

The obvious solution, then, is to direct people to look at broader aspects than what is documented as “expected results.”   This sets up a conundrum around what is and is not part of what should be looked for.

Many of us would assert that the tester should, out of responsibility and professionalism, track down apparently anomalous behavior and investigate what is going on.  Now consider a team that has reduced variation in this shallow way: each test has defined steps that take a known period of time to execute. Then add a little time pressure.  What do you think happens when testers encounter something that does not fit but is not explicitly checked for? 

The human mind ignores these types of errors, which are often the most important errors, or at the very least a hint that might lead to the most important error.  If you doubt this, then here is an exercise for you: Go to Gmail or Google and look for the banner ads.  Your mind has been ignoring these for years.  Do you notice how large and prominent they are?  Funny how you don’t notice them unless you look!

Of course management can insist that testers be “professional” and investigate off-script issues, but when the testers follow that advice, they will exceed the allotted time for the “test case.” If part of their performance review and the resultant pay/bonus is tied to those measures, can we really expect them to branch out from the documented steps?

Teams that rely on “click and get reports” automated tools for functional or UI testing are set up for a similar problem. Without careful investigation of the results in both the tool and application logs, the software will only report errors in the explicit results. That means the error has to be anticipated in advance in order for the automation code to look for it. 
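As a rough sketch of what looking beyond the explicit result might mean in automation code: the app object, its read_app_log() helper, and the ERROR_PATTERNS list below are all assumptions for illustration, not any particular tool's API.

```python
# Sketch: extending an automated check beyond the explicit expected result
# by scanning the application log for anything unanticipated.
# The app object, read_app_log(), and ERROR_PATTERNS are assumptions.
import re

ERROR_PATTERNS = [r"\bERROR\b", r"\bFATAL\b", r"Traceback", r"\bWARN\b"]

def unexpected_log_entries(log_text: str) -> list[str]:
    """Return log lines matching any pattern no script step predicted."""
    hits = []
    for line in log_text.splitlines():
        if any(re.search(pattern, line) for pattern in ERROR_PATTERNS):
            hits.append(line)
    return hits

def test_report_generation(app):
    report = app.generate_report("Q3")

    # The explicit, anticipated check:
    assert report.status == "complete"

    # The broader look: fail (or at least flag) the run if the logs
    # contain anything strange, even though no step anticipated it.
    assert unexpected_log_entries(app.read_app_log()) == []
```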

A Different Way

I’ve explored the consequences of these ideas and have tried them myself in my early career in testing.  I don’t believe they work as broadly as many say they do.  Frankly, they fail my smell test. Can I suggest a fourth definition of testing, perhaps not academically thorough, but a working definition, based on the things I have seen that actually work?

Software Testing is a systematic evaluation of the behavior of a piece of software, based on some model.

Instead of looking for bugs, what happens if we look at the software's behavior?  If we have a reasonable understanding of the intent of how the software is to be used, can we develop some models around that?  One way might be to consider possible logical flows people may follow to do what they need to do.  Noting what the software does, we can compare that behavior against the expectations of our customers. 
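To sketch what “based on some model” can look like in practice, here is a toy example: a handful of user flows expressed as a state machine, with a random walk that generates paths to drive the software along. The states and transitions are invented for illustration; a real model would come from conversations with the product owners.

```python
# Toy model of user flows through a shopping application, expressed as a
# state machine. States and transitions are invented for illustration.
import random

FLOWS = {
    "browsing":    ["view_item", "search"],
    "search":      ["view_item", "browsing"],
    "view_item":   ["add_to_cart", "browsing"],
    "add_to_cart": ["checkout", "browsing"],
    "checkout":    [],  # terminal state
}

def random_walk(start: str = "browsing", max_steps: int = 10) -> list[str]:
    """Generate one plausible user path through the model."""
    path, state = [start], start
    for _ in range(max_steps):
        choices = FLOWS[state]
        if not choices:
            break
        state = random.choice(choices)
        path.append(state)
    return path

# Each generated path is a candidate test: drive the software along it and
# compare what the software does against what the model (and the customer)
# expects at each step.
print(random_walk())
```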

These observations can serve as starting points for conversations with the product owners on their needs.   The conversations can incorporate the documented requirements, of course, along with the product owners' expectations and expertise.   This means the project team can choose the path they wish to examine next based on its significance and the likelihood of providing information the stakeholders are interested in evaluating.  

Instead of following a rote checklist, testers working with product owners, the development team, and other stakeholders can compare their understanding of the software and ask the crucial question: “Will this software meet our needs?” 

Comparing system behavior with the documented requirements means that testers can help initiate and participate in discussions around both the accuracy of the requirements (do they match the expectations?) and the way those requirements are communicated, thus helping reduce the chance of misunderstanding.  This helps the Business and Requirements Analysts do a better job writing requirements and positions us for conversations around how to make requirements better. 

By changing what we are looking for, from specific items on a checklist to overall behavior with specific touch points, we change what we do and how we are regarded, moving testing from an activity that has to happen to get the software out the door (a cost center to be minimized) to a value-add activity.  

And You 

If you have served in this industry for any length of time, you have probably felt offended, if not insulted, as a tester. Perhaps someone who had never done testing a day in their life defined a process for you, and you followed it while knowing that the work you were doing was low-value and would take too long. Perhaps worse is being given a detailed, low-variation test plan, being measured on time, and also being told to investigate: a scenario where you can't win for losing! 

If that is the case, it might be time to say something like this: “Let's talk about what I do as a tester.” I know, you may be scared, worried about your review. A few testers I know have been fired over this, but that is only a few.  Consider the alternative: keeping a job you don't really want to have. 

Sometimes, the way to be most effective at your job is to act as if you don't care about keeping it.
[Image: Piper Kenneth McKay at Waterloo]



