
Monday, June 25, 2012

On Value, Part 2: The Failure of Testers

This is the second post resulting from a simple question my lady-wife asked at a local tester meeting recently.

That blog post drew a fair number of visits, tweets, retweets and other metrics that people often use to gauge the popularity or "quality" of a post.

The comments had some interesting observations.  I agree with some of them and can appreciate the ideas expressed in others.  Some, I'm not so sure about.

Observations on Value

For example, Jim wrote "Yes, it all comes down to how well we "sell" ourselves and our services. How well we "sell" testing to the people who matter, and get their buy-in."

Generally, I can agree with this.  We as testers have often failed to do just that - sell ourselves and what we do, and the value of that.

Aleksis wrote "I really don't think there are shortcuts in this. Our value comes through our work. In order to be recognized as a catalyst for the product, it requires countless hours of succeeding in different projects. So, the more we educate us (not school) and try to find better ways to practice our craft, the more people involved in projects will see our value."

Right.  There are no shortcuts.  I'm not so certain that our value comes through our work.  If there are people who can deliver the same results for less pay (i.e., lower cost) then what does this do to our value?  I wonder if the issue is what that work is?  More on that later, back to comments.

Aleksis also wrote "A lot of people come to computer industry from universities and lower level education. They just don't know well enough testing because it's not teach to them (I think there was 1 course in our university). This is probably one of the reasons why software testing is not that well known."

I think there's something to this as well.  Alas, many of the managers, directors and other boss-types testers deal with, and work with and for, come from backgrounds other than software testing.  Most were developers - or programmers, as we were called when I did that same job.  Reasonably few did more than minimal testing, unit testing or some form of functional testing.  To them, when they were doing their testing, it was a side-activity to their "real work."  Their goal was to show they had done their development work right, and that was that.

Now, that is all well and good, except that no one is infallible in matters of software.  Everyone makes mistakes, and many deceive themselves about software behavior that does not quite match their expectations.

Jesper chimed in with "It's important that all testing people start considering how they add value for their salary. If they don't their job is on the line in the next offshoring or staff redux." 

That seems related to Jim's comment.  If people, meaning boss-types, don't see the point of your work, you will have "issues" to sort out - like finding your next gig.

The Problem: The View of Testing

Taken together, these views, and the ones expressed in the original blog post, can be summarized as this:  Convincing people (bosses) that there is value in what you do as a tester is hard.

The greater problem I see is not convincing one set of company bosses or another that you "add value."  The greater problem is what I see rampant in the world of software development:

Testers are not seen as knowledge workers by a significant portion of technical and corporate management.


I know - that is a huge, sweeping statement.  It has been gnawing at me, and I have struggled with how to express it.  Many ideas bouncing around eventually led me to this conclusion.  For example, consider these statements (goals) I have heard and read in the last several weeks, presented as highly desirable:
  • Reduce time spent executing manual test cases by X%;
  • Reduce the number of manual test cases executed by Y%;
  • Automate everything (then reduce tester headcount);
There seems to be a pervasive belief that has not been shaken or broken, no matter the logic or arguments presented against it:  Anyone can do testing if the instructions (test steps) are detailed enough.

The core tenet is that the skilled work is done by a "senior" tester writing the detailed test case instructions.  Then, the unskilled laborers (the testers) follow the scripts as written and report if their results match the documented, "expected" results.

The First Failure of Testers

The galling thing is that people working in these environments do not cry out against this - neither debating the wisdom of such practices, nor arguing that defects found in production could NOT have been found by following the documented steps they were required to follow.

Some folks may mumble and generally ask questions, but don't do more.  I know, the idea of questioning bosses when the economy is lousy is a frightening prospect.  You might be reprimanded.  You may get "written up."  You may get fired.

If you do not resist this position with every bit of your professional soul and spirit, you are contributing to the problem.

You can resist actively, as I do and as do others whom I respect.  In doing so, you confront people with alternatives.  You present logical arguments, politely, on how the model is flawed.  You engage in conversation, learning as you go how to communicate to each person you are dealing with.

Alternatively, you can resist passively, as some people I know advocate you do.  I find that to be more obstructionist than anything else.  Instead of presenting alternatives and putting yourself forward to steadfastly explain your beliefs, you simply say "No."  Or you don't say it, you just don't comply, obey, whatever.

One of the fairly common gripes that comes up every few months on various forums, including LinkedIn, is the whinge-fest over how it's not fair that developers are paid "so much more" than testers.

If you...

If you are one of the people complaining about lack of PAY or RESPECT or ANYTHING ELSE in your chosen line of work, and you do nothing to improve yourself, you have no one to blame but yourself.

If you work in an environment where bosses clearly have a commodity-view of testers, and you do nothing to convince them otherwise, you have no one to blame but yourself.

If you do something that a machine could do just as well, and you wonder why no one respects you, you have no one to blame but yourself.

If you are content to do Validation & Verification "testing" and never consider branching beyond that, you are contributing to the greater problem and have no one to blame but yourself.

I am not blaming the victims.  I am blaming people who are content to do whatever they are told as being a "best practice" and will accept everything at face value.

I am blaming people who have no interest in the greater community of software testers.  I am blaming people who have no vision beyond what they are told "good testers" do.

I am blaming the Lemmings that wrongfully call themselves Testers.

If you are in any of those descriptions above, the failure is yours.

The opportunity to correct it is likewise yours.

Thursday, June 14, 2012

You Call That Testing? Really? What is the value in THAT?

The local tester meetup was earlier this week.  As there was no formal presentation planned it was an extended round table discussion with calamari and pasta and wine and cannoli and the odd coffee.

"What is this testing stuff anyway?"

That was the official topic.

The result was folks sitting around describing testing at companies where they worked or had worked.  This was everything from definitions to war-stories to a bit of conjecture.  I was taking notes and tried hard to not let my views dominate the conversation - mostly because I wanted to hear what the others had to say.

The definitions ranged from "Testing is a bi-weekly paycheck" (yes, that was tongue-in-cheek, I think) to the more philosophical "Testing is an attempt to identify and quantify risk."  I kinda like that one.

James Bach was also quoted: "Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous."

What was interesting to me was how the focus of the discussion was experiential.  There were statements that "We only do really detailed, scripted testing.  I'm trying to get away from that, but the boss doesn't get it.  But, we do some 'exploratory' work to create the scripts.  I want to expand that but the boss says 'No.'" 

That led to an interesting branch in the discussion, prompted by a comment from the lady-wife who was listening in and having some pasta.

She asked "How do you change that?  How do you get people to see the value that you can bring the company so you are seen as an asset and not a liability or an expense?"

Yeah, that is kind of the question a lot of us are wrestling with.

How do you quantify quality?  Is what we do related to quality at all?  Really?

When we test we... 

We exercise software, based on some model.  We may not agree with the model, or charter or purpose or ... whatever.  There it is.  

If our stated mission is to "validate the explicit requirements have been implemented as described" then that is what we do, right?  

If our stated mission is to "evaluate the software product's suitability to the business purpose of the customer" then that is what we do, right?

When we exercise software to validate that the requirements we received have been fulfilled, have we done anything to exercise the suitability to purpose?  Well, maybe.  I suspect it depends on how far out of the lines we go.  

When we exercise software to evaluate the suitability to purpose, are we, by definition exercising the requirements?  Well, maybe.  My first question is, do we have any idea at all about how to judge the suitability of purpose?  At some shops, well, maybe - yes.  Others?  I think a fair number of people don't understand enough to understand that they don't understand.

So, the conversation swirled on around testing and good and bad points.

How do we do better testing?

I know reasonably few people who don't care about what kind of a job they do.  Most folks I know want to do the best work they can do.

The problem comes when we are following the instructions, mandate, orders, model, whatever, that we are told to follow, and defects are reported in production.  Sometimes by customers, sometimes by angry customers.  Sometimes by customers saying words like "withhold payment" or "cancel the contract" or "legal action" - that tends to get the attention of certain people.

Alas, sometimes it does not matter what we as testers say.  The customers can say scary words like that and get the attention of the people who define the models we lowly testers work within.  Sometimes the result is we "get in trouble" for testing within the model we are told to test within.  Of course, when we go outside the model we may get in trouble for that as well.  Maybe that never happened to you?  Ah well.

Most people want to do good work - I kinda said that earlier.  We (at least I and many people I respect) want to do the absolute best we can.  We will make mistakes.  Bugs will get out into the wild.  Customers will report problems (or not and just grumble about them until they run into someone at the user conference and they compare notes - then watch the firestorm start!)

Part of the problem is many (most) businesses look at testing and testers as expenses.  Plain and simple.  It does not seem to matter if the testers are exercising software to be used internally or commercial software to be used by paying customers.  We are an expense in their minds.


If we do stuff they do not see as "needed" then testing "takes too long" and "costs too much."  What is the cost of testing?  What is the cost of NOT testing?

I don't know.  I need to think on that.  At one company I worked for, once upon a time, the cost of not testing was bankruptcy.  Others were less dramatic, but avoiding the national nightly news was adequate incentive for one organization I worked for.

One of the participants in the meeting compared testing to some form of insurance - you buy it, don't like paying the bill, but when something happens you are usually glad you did.  Of course, if nothing bad happens, then people wonder why they "spent so much" on something they "did not need."

I don't have an answer to that one.  I need to think on that, too.

So, when people know they have an issue - like a credibility gap or perceived value gap - how do you move forward?

I don't know that either - at least not for everyone.  No two shops I've been in have followed the same path to understanding, either.  Not the "All QA does is slow things down and get in the way" shop nor the "You guys are just going through the motions and not really doing anything" shop.  Nor any of the other groups I've worked with.

Making the Change


In each of these instances, it was nothing we as testers (or QA Engineers or QA Analysts or whatever) did to convince people we had value and what we did had value.  It was a Manager catching on that we were finding things their staff would not have found.  It was a Director realizing we were working with his business staff and learning from them while we were teaching them the ins and outs of the new system so they could test it adequately.  


They went to others and mentioned the work we were doing.  They SAW what was going on and realized it was helping them - the development bosses saw the work we did as, at its essence, making them and their teams look good.  The users' bosses realized we were training people and helping them get comfortable with the system so they could explain it to others, while we were learning about their jobs - which meant we could do better testing before they got their hands on it.

It was nothing we did, except our jobs - the day-in and day-out things that we did anyway - that got managers and directors and vice-presidents and all the other layers of bosses at the various companies - to see that we were onto something.

That something cost a lot of money in the short-term, to get going.  As time went on, they saw a change in the work going on - slowly.  They began talking about it and other residents of the mahogany row began talking about it.  Then word filtered down through the various channels that something good was going on.  

The people who refused to play along before began to wander in and "check it out" and "look around for themselves." Some looked for a way to turn it to their advantage - any small error or bug would be pounced on as "SEE!  They screwed up!"  Of course, before we came along, any small errors found in production would be swept under the rug as something pending a future enhancement (that never came, of course.)

We proved the value by doing what we did, and humbly, diplomatically going about our work.  In those shops that worked wonders.

And so...

We return then to the question above.  How do we change people's perspectives about what we do? 

Can we change entire industries?  Maybe.  But what do we mean by "industries?"  Can we at least get all the developers in the world to recognize we can add value and help them?  How about their bosses? 

How about we start with the people we all work with, and go from there?  I don't know how to do that in advance.  I hope someone can figure that out and help me understand.

I'll be waiting excitedly to hear back from you.