Wednesday, August 22, 2012

Considering the Past or Testing and User Experience II

A couple of things happened recently that have stuck in my mind.  One was a webinar I decided to check out to see what the presenters had to say on "User Experience, Design and Testing."  No kidding - that was the title.

It started well enough with basic ideas like being aware of consistent look and feel between screens and functions, making sure the screens can be read and ... yeah, you get the idea.

The presenter then began addressing why these ideas were important.  "You need to make sure the users can actually use the system and it is easy for them to do so and that the screens look good while they are using it."  OK, my ears perked up a bit.  That struck me as being a little curious - "Where was she going with this?"

"Back when people first began using computers for work in the early 1980's designers and developers did not worry about usability.  All the screens had light text on dark screens and nothing was in color and you had to tab to get from field to field and they were really hard to work on for an long period of time.  Its like the designers did not care about ideas like that and simply did not consider it."

Oh dear. I was one of those people who "did not worry about usability."  I must be evil, or at best, heartless and cruel, for making people use the technology they had rather than technology that had not yet been rolled out on a scale where companies (and individuals) could afford it.  I was mean and vicious because I used light text on a dark screen when everyone knows that is the hardest thing there is to try and read.

Unless that is what you have to work with. Unless, there are no other options.

The objections raised by other attendees were brushed aside with "If you really wanted to make it easier you could have."

She then talked about how you needed to press the Tab key instead of using a mouse.

I disconnected at that point.

Those who cannot remember the past are condemned to repeat it.
 George Santayana
(Jorge Agustín Nicolás Ruiz de Santayana y Borrás)

Sometimes I wish things were as simple as people tell us they are - or should be - or can be.  I am not a utopian visionary.  I don't have an idealized view of the world.

I am old enough that some of my younger colleagues, like the recently graduated former intern in the office, ask if I'm retiring.  I am young enough to think "What an odd idea!  Why would I retire?"  I am also sensitive enough to feel a little offended at the thought I might be "nearing retirement."

I'm reminded of when my grandson was far younger than he is now.  He asked me what I talked about with King Arthur when we had lunch. 

Yet I remember those "dark days" the presenter was describing.  I also remember working with "seasoned professionals" for whom a shared CRT was an astounding luxury.  It certainly was better than filling out coding forms and punching cards up from the forms.  And that was better than hard-wiring the circuitry and flipping switches appropriately to get the computer to do what needed to be done.

I also realized that the people doing the work were doing the absolute best they could do with what they had to work with.

Could it have been better? Possibly.  I have not seen any system that could not be improved on.

It is important, however, to recognize that things can be better.  It is also important to recognize that sometimes, things are the way they are because they are familiar to the users.  Change is unpleasant as it is.  It makes us all uncomfortable.  Is there something to be gained by making things better, by some measure, in the revision or replacement process?

When looking at a revised design, a refactoring of the system or a complete rewrite, most people will throw away what is in place for what they believe should be in place.  Now, my younger self would say "Of course!  That makes sense!  They want NEW! Bright! SHINY! OOOOOOoooooh! Glitter!" 

That really makes sense to my younger self.

My current self is a little more nuanced.  "What is it they don't like? What do they like? What do they want to do that they can't do? Are there things in the system that are getting in the way of what they need to do? Are they looking at business process changes that need changes in the system?"

I'll also throw in "What do they consider the system to be?"

The questions are relevant.  They help us, as testers, form an estimate of what the user has in mind and, importantly, what their expectations are. 

Only the dead have seen the end of war.
 George Santayana
(Jorge Agustín Nicolás Ruiz de Santayana y Borrás)
(Somehow, I'm going to work that second Santayana quote in.  I promise.)

We can dive in and throw our lot in with the latest and greatest... and then in a few years someone else is going to do the same to our "perfect" work.  It is the way of software, no?

The closest thing I know to the experience of determining what is good "usable design," or design that is good for the "user experience," is GUI automation test development.  A solid set of automated tests is developed, and six to nine months later it has fallen apart.  It has not been maintained, and the changes made to the system render the automated test suite unusable.
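
To make that decay concrete, here is a minimal sketch - Python with Selenium WebDriver, against a purely hypothetical page, URL and locators of my own invention - of how a GUI test gets coupled to the very things a redesign changes:

```python
# A minimal sketch of why unmaintained GUI automation decays, using
# Selenium WebDriver in Python. The page, URL, and locators are all
# hypothetical - the point is the coupling, not the specific app.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com/orders")  # placeholder URL

# Brittle: tied to the exact layout of the page at the time the test
# was written. Move the button into a new <div> during a redesign and
# this locator silently stops matching.
submit = driver.find_element(
    By.XPATH, "/html/body/div[2]/form/table/tbody/tr[5]/td[2]/button"
)

# Less brittle: tied to a stable, intentional hook the team agrees to
# maintain. The redesign can move the button anywhere without breaking
# the test, as long as the id survives.
submit = driver.find_element(By.ID, "submit-order")

submit.click()
driver.quit()
```

Even the second locator only survives as long as someone agrees to maintain that hook.  Skip the maintenance for six months and the suite tells you nothing except "element not found."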

At some point everything we make becomes obsolete.  

The shiny, new "Absolute Best" stuff becomes... old, outdated, archaic and old-fashioned. It is kind of like hearing your favorite song played on the radio and the DJ makes a comment about "an oldie but a goodie" or (worse) "a blast from the past." 

The people who most freely criticize an application, its usability, its interface, its design and its function, are rarely the ones who worked so hard to design it, implement it and get it "right."  Go gently when commenting on an older system and its shortcomings. 

When you create your "perfect" system, it will not be long before someone else is saying the same thing of your system as you did of its predecessor. 

Remember Shelley's poem Ozymandias and its warning:

I met a traveller from an antique land
Who said: "Two vast and trunkless legs of stone
Stand in the desert. Near them on the sand,
Half sunk, a shattered visage lies, whose frown
And wrinkled lip and sneer of cold command
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed.
And on the pedestal these words appear:
'My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!'
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away".


Monday, July 23, 2012

CAST 2012, or CASTing Through the Computer Screen

CAST 2012 recently wrapped up.  The annual Conference of the Association for Software Testing was held in San Jose, California, Monday and Tuesday, July 16 and 17, 2012.  Wednesday the 18th saw tutorials.  Thursday was the Board of Directors meeting.

First, even though I had helped plan the Emerging Topics track, conflicts arose that kept me away (physically).  I was a tad disappointed that after all the work that went in, I would not be drinking in the goodness that is the CAST experience.

Why is that a big deal? 

It's a conference, right?

CAST is unlike any conference I have ever attended.  The organizers make a point of it being a participatory, engaged function - not merely sitting and listening to someone drone on while reading PowerPoint slides.  There is required time in each session for questions and discussion around the presentation.  Some of those discussions can be quite brutally honest.

This becomes an issue for some people.

When one engages with people who think, rote-learned answers do not hold up well.  Watching the interplay as people, including the presenters, learn is, in itself, a top-flight education in testing.

And I could not be there.  Bummer.

I chose the next best thing - I joined in the video stream as much as the day-job and other responsibilities allowed.  I listened to Emerging Topics presentations, keynotes, a panel discussion on the results of Test Coach Camp and CASTLive - the evening series of interviews with speakers and participants - with online chat and... stuff.

While I could not be there in person, this was a pretty decent substitute.

Other cool things

Keynotes by Tripp Babbitt and Elisabeth Hendrickson.  Great panel discussions on what people learned at Test Coach Camp, and other cool stuff.

Simply put, there are recordings to be viewed and listened to here.

Other things happened as well, like announcing the election results for AST Board of Directors.

I was elected to the Board of Directors to serve a single year, to fill a position left vacant by a Board Member who could not finish his term.

I am deeply honored to have been selected to serve in this way.

I am humbled, and looking forward to this new chapter in testing adventure.

Saturday, July 7, 2012

In the Beginning, on Testing and User Experience

Behold, an Angel of the Lord appeared before them and said "Lift up your faces, oh wandering ones, and hear the words of the One who is Most High. 

Thus says the Lord, the Keeper of the Way and Holder of Truth, 'Do not let your minds be troubled for I send a messenger unto you with great tidings.  The Path to Software Quality shall this messenger bring.  Heed the words and keep them whole. 

Worry not your hearts over the content of software releases.  Behold, one shall come bearing Requirements, functional and non-functional. And they shall be Good.  Study them and take them under your roof that you may know them wholly.

If, in your frail-mindedness, such Requirements are beyond the comprehension of lesser beings, Designers and Developers and Analysts, yea, unto the lowly Testers who are unable to comprehend such lofty concepts, fear not to come and humbly ask what these Requirements mean.  Lo, it shall be revealed unto you all that you need to know.  On asking, that which is given in answer shall be all that shall be revealed unto the lesser ones, yea, unto the lowly Testers.

Ask not more than once, for doing so will try the patience of the Great Ones who are given the task of sharing the Revelation of the Requirements.  To try the patience of these Great Ones can end with a comment in your permanent file, unto impacting your performance review and any pay raise you may long for now, and in the future.

Developers and Designers and lowly Testers shall then go forth from the place to the place of their cubicles and prepare such documents as is their task to prepare.'"

Then the Angel of the Lord spoke to them this warning with a voice that shook the Earth and the Hills and the Valleys and caused the Rivers to change from their courses.  "Seek not the counsel of those in the Help Desk nor those in Customer Support nor those in Services.  Their path will lead you astray for their knowledge is limited and perception flawed.  Avoid them in these tasks before you as they are not worthy to hear or read the words handed down to you from the Requirements Bearer.  Thus says the One who is Most High."

1st Book of Development Methodology, 2:7-45

I've seen project methodologies adopted at companies that look and read painfully close to this.  None have gone this far, perhaps - at least not in language and phrasing.  Alas, a painful number have the spirit and feeling of the above.

Rubbish.

As you sow, so shall you reap.

It does not matter what methodology your organization uses to make software.  It does not matter what approach you have to determining what the requirements are.  They shall be revealed in their own time.

If you are making software that people outside of your company will use - maybe they will pay money for using it.  Maybe that is how your company stays in business.  Maybe that is where the money coming into the company comes from.

If that is the case, I wonder where the "Requirements" are coming from.  At companies I have worked for in this situation, the "requirements" come from Sales folk.  Now, don't get me wrong, many of these Sales folk are nice enough.  I'm just not sure they all understand certain aspects of technology and what can be done with software these days.

Frankly, I'm not sure if some of them have any idea what the software they are selling does.

That's a pity.  The bigger pity is that many times the people they are working with to "get the deal" have no real idea what is needed.

They can have a grand, overall view of needs.  They can have a decent understanding of what the company wants, or at least what their bosses say the company wants, from the new or improved software.  They may know the names of some of the managers of the people who use the software every day.

This does not include the people who glance at the dashboard and say things like, "Hmmm. There seems to be a divergence between widget delivery and thing-a-ma-bob capacity.  Look at these charts.  Something is not right."

That's a good start.  Do they have any idea where the data for those charts comes from?  Do they have any idea how to drill down a bit and see what might be going on?  In some cases, they might.  I really hope that this is true in the majority of cases.  From what I have seen, though, it isn't.

The Average User

Ah yes.  What is an "average user"?  Well, some folks seem to have an idea and talk about what an "average user" of the system would do (maybe such a person exists).  When they are sales folk who sell software, I am not certain what they mean.

Do they mean an "everyman" kind of person?  Do they picture their mother trying to figure out the Internet and email and search engines?  I don't know. 

Do they mean someone who follows the script, the instructions they are given on "how to do this function" - probably copied from the user manual - for 2.5 hours (then a coffee break), then 2 hours (lunch!), then 2 hours (then an afternoon break), then 1.5 hours (then home)?  Maybe.

Have any of you reading this seen that actually happen?  I have not.

So, folks are tasked with designing a system to meet requirements derived from conversations with people who may or may not have contact with the system or process in general, and who may or may not understand the way the system is actually used at their company.  Multiply this across the "collected enhancement requests" model many companies use, and the designers must then attempt to build a narrative that addresses all of these needs, some of them competing.

This generally describes the process at four companies I know.

The designers may or may not create "user stories" to walk through scenarios to aid in the design.  They will look at aspects of the system and say "Yup, that covers all the requirements.  Good job, team."

What has not happened in that model?   Anyone?

No one has actually talked with someone who is in contact with (or is) an "average user."

When I raised the point at one company that this was an opportunity for miscommunication, I was told "the users are having problems because they are using it wrong."

Really?  Users are using the system wrong?  

REALLY?

Or are they using it in a manner our model did not anticipate? 

Are they using it in a manner they need to in order to get their job done, and our application does not support them as they need it to?  Are they using it as they always needed to, working around the foibles of our software, and their boss' boss' boss - the guy talking with the sales folks - had no clue about?

Why?

Is it because the Customer Support, Help Desk, Professional Services... whatever they are called at your company - may know more about how customers actually use the software than, say, the "product experts"?  Is it because of the difference between how people expect the software to be used and how those annoying carbon-based units actually use it?

As testers, is it reasonable to hold to testing strictly what one model of system use tells us is correct?  When we test strictly the "documented requirements," adhering to the path as designed and intended by the design team, are we confirming anything other than their bias?

Are we limiting how we discover system behavior?  Are we testing?
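
As a toy illustration of the difference - the function and its business rule are invented for the example, in Python - compare the scripted confirmation with a few questions the design model never asked:

```python
# A toy illustration of confirmation versus exploration. The function
# and its rule are invented for the example.
def ship_order(quantity, in_stock):
    """Hypothetical business rule: ship what we can, never more than stock."""
    return min(quantity, in_stock)

# The scripted check from the documented requirement: one happy path,
# exactly as the design team intended. It passes, and confirms only
# that the designed path behaves as designed.
assert ship_order(5, 10) == 5

# Questions the model never asked - the kind a user stumbles into on
# day one. Each probes behavior the requirement is silent about.
print(ship_order(0, 10))    # order of zero: is shipping 0 correct?
print(ship_order(-3, 10))   # negative quantity: returns -3. A bug?
print(ship_order(5, -1))    # corrupt stock count: returns -1. Now what?
```

The assert passes every time; the three probes are where the interesting conversations start.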

I know I need to think on this some more. 

Thursday, July 5, 2012

On Best Practices, Or, What Marketing Taught Me About Buzzwords

An eon or two ago, whilst working toward my Bachelor's Degree, I had a way interesting professor.  Perfectly deadpan, he recognized people's faces and names - but never the two together.  Crazy smart, though.

He had multiple PhDs.  Before deciding he would enter academia (PhD #2), he worked as a geologist searching out potential oil deposits (PhD #1) for a large multinational oil company.  It was when he got tired of "making a sinful amount of money" (his words) that he decided he should try something else.

He had an interesting lecture series on words and terms and what they mean.  One subset came back to me recently.  I was cleaning out my home office - going through stuff that at one point I thought I might need.  In one corner, in a box, was a collection of stuff that included the textbook and lecture notes from that class.  I started flipping through them with a smile at the long-ago youthful optimism I had recorded there.

One thing jumped out at me as I perused the notes of a young man less than half my current age - a two-lecture discourse on the word "best" when used in advertising. 

Some of this I remembered after all these years.  Some came roaring back to me.  Some made me think "What?" 

X is the Best money can buy.

Now, I've noticed a decline in TV advertising that makes claims like this:  "Our product is the best product on the market.  Why pay more when you can have the best?"  Granted, I don't watch a whole lot of TV anymore.  Not that I did then either - no time for it.  (Now, I have no patience for most of it.)

Those ads could be running on shows I don't watch.  This is entirely possible.  Perhaps some of you have seen these types of ads. 

Anyway, part of one lecture was a discussion on how so many competing products in the same market segment could possibly all claim to be the best: toothpaste, fabric softener, laundry detergent, dish detergent, soft drink, coffee, tea... whatever.  All of them were "the best."

The way this professor worked his lectures was kind of fun.  He'd get people talking, debating ideas, throwing things out and ripping the ideas apart as to why that logic was flawed or something.  He'd start with a general statement on the topic, then put up a couple of straw-men to get things going.  (I try and use the same general approach, when I can, when presenting.  It informs everyone, including the presenter.) 

The debate would rage on until he reeled everyone in and gave a summary of what was expressed.  He'd comment on what he thought was good and not so good.  Then he'd present his view and let the debate rage again.

I smiled as I read through the notes I made - and the comments he gave on the points raised.

Here, in short, is what I learned about the word "Best" in advertising: Best is a statement of equivalence.  If a product performs the function it was intended to do, and all other products do the same, one and all can claim to be "the best."

However, if a product had claims that it was "better" than the competition, they needed to be able to provide the proof they were better or withdraw the ad.

So Best Practices?

Does the same apply to that blessed, sanctified and revered phrase "Best Practice?"   

Ah, there be dragons! 

The proponents of said practices will defend them with things like, "These practices, as collected, are the best available.  So, they are Best Practices."  Others might say things like, "There are a variety of best practices.  You must use the best practice that fits the context."

What?  What are you saying?  What does that mean?

I've thought about this off and on for some time.  Then, I came across the notes from that class.

Ah-HA!  Eureka!  Zounds!  

Zounds?  Really? Yes, Zounds!  Aside from being the second archaic word I've used in the post, it does rather fit.  (I'll wait while you look up the definition if you like.)

OK, so now that we're back, consider this:  The only way for this to make any sense is to forget that these are words that look and sound like perfectly ordinary words in the English language.  They do not mean what one might think they mean.

X toothpaste and Y toothpaste cannot both be the best - how can you have TWO best items?  Unless they mean "best" as a statement of equivalence, not superiority. 

Then it makes sense.  Then I can understand what the meaning is.

The meaning is simple: Best Practices are Practices that may, or may not work in a given situation.

Therefore, they are merely practices.  Stuff that is done.

Fine.

Now find something better.





Saturday, June 30, 2012

On Value, Part 4: Testers Rising to the Challenge

This is the fourth post resulting from a simple question my lady-wife asked at a local tester meeting recently.

This blog post described problems with value in testing.  The second blog post looked squarely at perceptions of testers as unskilled, non-knowledge workers and the lack of response from a large number of testers.  The third post looked more closely at the source of such perceptions of testers as unskilled, non-knowledge workers.  This post is a discussion of what testers can do in response to such perceptions.

How many people do we know who collect a paycheck for "testing software" who cannot be bothered to do any more than their collective bosses tell them?

Story 1

I know one manager, active in the community, who went so far as to offer to pay for his team to go to any conference or training event they wanted to attend.  He offered to pay for AST's BBST courses (which absolutely rock, by the way).  He offered to send them individually wherever they wanted to go to learn, confer, whatever.  Not one person expressed any interest.

That is a problem.

Story 2 - one of my own

I remember one team I worked with.  I asked them what defects were being reported by customers of the software.  They responded with a long list of "Well, there's this and this and this and that and then this weird thing and..."  I asked if maybe we needed to reconsider how we tested.  UNIVERSAL RESPONSE:  "Why would we do that?  They are using the software wrong.  If they used the software right, they would not have those problems."

Story 3 - another one of my own

I remember another team I worked with.  That team, on the first day I was there, not just the boss, but the TEAM said "We have a problem with our testing.  We're doing the best we know to do and we catch a lot of stuff but there are still defects getting through and being found by customers.  If you have any ideas on how we can test better, let's talk about them."

Guess which team was a lot more fun to work with?  Guess which team saw broader product examination and evaluation?

These teams demonstrate precisely what I see as the problem with the majority of people in software testing today - and for as long as I have been involved in software.  One group wants to learn more; it may not know where to begin, but it dives in headfirst to learn.

The other group is perfectly justified in their minds that they need to change nothing.  The boss they had 3 bosses ago said "Just do it like this."  They do.  Now, several years on, they have not changed.  They added more people to do more of the same.

A Problem

In the stories above, the folks in story 1 demonstrated precisely one aspect of the problem.  The folks in story 2 demonstrated another.  The third group was the antithesis of both - and a hopeful sign.

While many people in many lines of work expect nothing more than they are given - in training, understanding, pay or ambition - others expect to be told precisely where they fit.  To them, anyone looking to develop them as craftsmen is a threat - upsetting the status quo is a, well, problem.

People want to walk in 2 minutes before starting time, do their thing and go home.  Ideally, they don't want to think about anything work related until it's time to walk in the office door the next morning.

Fine.  I can really empathize with that.

I'm not talking about thinking about your job; I'm talking about your profession - your chosen trade and craft.  Alas, it seems some folks don't think that way.


Upshot of That Problem

If you or people you know are in that camp, I fear you might want to look for another line of work - one that does not require you to study outside of work, on your own, at your own expense.  Off the top of my head, I suggest you not look at these lines of work, all of which require you to do precisely that if you wish to become good:

  • Teaching (anything at any level);
  • Law (anything related to law);
  • Computer Software (that one is kind of obvious);
  • Project Management (more than just software projects, lots of things need project managers);
  • Accounting/Accountancy;
  • Show Production (Music, Stage, Theater, Television, Motion Picture);
  • Advertising;
  • Distribution Center Management (that's how to run a warehouse);
  • Commercial Truck Driving (or lorry driver);
  • Commercial (other) Driver (bus, taxi, car-for-hire);
  • Restaurant Manager/Owner (that one is really harder than most folks think);
  • Chef (not a cook, but a proper Chef);
  • Carpenter/Joiner;
  • Construction Contractor/supervisor;
You get the idea.

Jobs that require no development on your own are jobs that will go away - at least from you.  They may get sent to some other country where labor rates are lower - low enough to offset the cost of transporting the components or finished product back to end-market areas.  Or they will go away because a machine will do the work instead of a human being.

We're big on buggy-whip analogies when we want to make a point about people not keeping up with changing times.  Well folks, Swiss watches fell into the same boat as buggy-whips.  Not anachronistic - just too expensive, performing a function that can be done by a commodity item instead of the product of an artist.

As I See It...

You can make all the cases you want about what value testing brings to a software organization.  You can talk about how your testing improves the quality of the software product.  You can talk about all this stuff.

It does not matter.

If someone or something can do the same thing you are doing, with about the same results, and it costs the organization less money, they will replace you.

Ask the thousands of manufacturing workers who lost their assembly-line jobs in the US starting, well, in the 1970's.  Ask the shipyard workers in Belfast and Glasgow in the 1950's and 60's.  Ask any of the miners anywhere, mining for anything, who have been displaced by machinery.

And that is the problem.

We, as a profession, have failed miserably in demonstrating that the Tayloristic management theory does not apply to software development - which includes testing.  

Why is this?

We have failed to address the solidly formed and closely held management belief that repeated practices will ensure quality products.  Concepts developed for assembly-line workers, when applied to knowledge work, fail.  Full stop.

Why is that?


Because no human being thinks in a linear manner.  We simply are not built that way.  Knowledge work, done well, requires a level of cognitive insight - and practical experience to draw on to inform that insight.

Having everyone think about a problem in the same, precise, linear way is only possible if everyone involved has the same experiences, understanding, thought processes and biases. 

Introduce one variable that is different and the wheels fall off.  The model fails.

Addressing the Problem

After great thought, many conversations, and now at least four blog posts mapping my consideration of my lady-wife's question, what has been determined?

OK - I've edited this 4 times.  This is take #6.  And it's also the most polite.

The simple fact is, we're pretty clueless when it comes to what testing is, how it is done and why we do it.  That is a fundamental failure.  The vast majority of people involved in software simply don't get it.  This is true of many people who are supposedly professional testers, their bosses, their bosses' bosses, project managers, subject matter experts, developers - anyone who works with them.

Testers MUST educate themselves.  If they rely on the company they work for, that will be a mistake.  They don't get it either.  They THINK they do, and they usually don't. 

I can give my definition for testing.  Some of you may say "Ah! He has clearly been influenced by the work of ... a bunch of people."  Some of you may say "Pete, you're wrong and here's why..."  Others might just say "Meh, whatever." 

That's fine.  But talk about it.  You don't need to agree with everything I say or what someone else says.  You do need to think.  Then you need to communicate.  Then you need to think some more.

Find many, many sources of information - then share them with others.   Talk about them - agree, disagree, whatever.  Share ideas and learn.

When we as testers limit what we do by some narrow definition or purpose, every project, every time, what we are really doing is boxing testing into a corner. 

When we do that, we limit what testing is.

When we do THAT, we fail to be true knowledge workers.  We fail to think fully, and we walk right into the self-fulfilling prophecy that started this series.  Testing is treated as a commodity.  Many testers have been participants in that disservice.

When we fail to broaden ourselves, we limit ourselves. 

Broadening Testing

We must broaden ourselves, our profession, our views, our colleagues' views, our employers' views.  We must engage directly in combating the "just test this" mindset.  We must confront the "anyone can do this" attitude that is so pervasive in so many circles.

If you are reading this, thank you.  I believe that this is a start.  Reading blogs, papers, articles and anything where people are sharing ideas and thoughts around software testing is a start. 

Then write your own.  Spread the word.  Don't worry that you are not an expert.  I'm not an expert.  Most folks I know are not experts.  Some people THINK those folks are experts or authorities; they simply don't consider themselves experts. 

They do care about what they do.

You can too.

That makes you an expert. 

If you have a blog on testing, please leave a comment below with how we can find it. Thanks.

Tuesday, June 26, 2012

On Value, Part 3: The Failure of Management

This is the third post resulting from a simple question my lady-wife asked at a local tester meeting recently.

This blog post described problems with value in testing.  The second blog post looked squarely at perceptions of testers as unskilled, non-knowledge workers and the lack of response from a large number of testers. This post looks at the source of such perceptions.

I'm going to tell you a story.  Don't be scared, it has a happy ending and everyone will be happy at the end.

Once upon a time...

...there were a group of people trying to figure out how to make stuff quickly and inexpensively.  They were making all kinds of stuff: tools, pens, desks, chairs, beds, clothes, books, eye glasses, guns, railroad cars and train engines.  Lots of stuff.

One bright fellow said "This stuff is really hard to make and it costs so much to make it that no one can afford to buy it.  If they can't afford to buy it only the really wealthy will be able to afford it.  If only the really wealthy can afford it, they won't buy enough for all of us to make a lot of money and stay in business so we can get really wealthy too!"

Another bright fellow said "This stuff takes too long to make and costs too much.  If we can figure out a way to make it less expensively we can make MORE of it for the same cost and sell each one for a little more than it costs us and LOADS of people will be able to buy this stuff.  Then we'll make LOADS of MONEY!"

A third bright fellow said, "I have an idea.  Let's look at what it takes to make each of those things.  What does it take to make a desk?  What does it take to make a chair?  What does it take to make a table?  If we figure THAT out, maybe we can find a way to make them less expensively."

The first bright fellow blinked and said, "I think that's a fantastic idea!  I can sack all those expensive cabinet makers and joiners I have making my furniture and get some riff-raff off the street, teach one to work a lathe, teach another to use a plane, another to use a saw and a fourth to use wood glue and a hammer to assemble the pieces.  Brilliant!  I can charge the same amount of money, pay a fraction of the wages and I'll make a PILE OF MONEY!  I just need to get someone to divide the steps up and I'm good to go!"

The second bright fellow said, "I had not thought about it, but you're right!  I can build railroad cars the same way!  But instead of having all these specialists and experts who know how to work with steel and iron and rubber and wood and how to build engines, I can get a BUNCH of people who don't know anything, teach them each ONE PART of assembling a car, and that will be that!  I'll make a PILE OF MONEY!"

The third bright fellow said, "Ummm, bright fellows?  Do any of you see any possible downsides to this?"

The other two laughed and said "Downsides?  Don't be daft!  We're going to make a PILE OF MONEY!"

The Downside

Where craftsmen had used years and years of their own experience, and had the experience of others they could turn to for reference, the approach these bright fellows were considering would remove the human element from the process.

Using untrained masses to replace skilled craftsmen would certainly reduce costs - you could pay them a LOT less.  Would the end result be the same?  Well - maybe.  Not at first while the new process is being sorted.  Of course, errors would not be the fault of the bright fellows, or their immediate subordinates, but the fault of the untrained masses "not knowing their job."

The thing is, the bright fellows knew they could pile no end of abuse on the unskilled, untrained masses; if the workers did not like it, they could leave, and there would be MORE unskilled masses willing to take their place - and the pay packet at the end of the week.

Because the unskilled masses were not un-smart, and could see through a brick wall in time (as they say in Bree), they knew this was exactly why the bright fellows were doing what they were, well, doing.

Just because they can does not make it right.

Except... 

We're not talking about making physical things, we're talking about making software.  Software that runs on computers that human beings will be working with.  Software that people work with every day.
The odd thing is, when these conversations, and the subsequent actions, happened around people making physical things, the result was turmoil - and "societal upheaval" does not begin to cover it.  Social revolution in some places, literal revolution in others.  Violence - physical, emotional and mental - was the result.  The political and physical fights from that time continue to color politics in the US, Canada, Britain, France, Russia, Poland, India... the list goes on.

When you treat people like unthinking machines, expect a reaction.  It may take a while, it may not be immediate.  These days, it may not be the violence of the 1890's, or 1910's or 1920's or 1930's or... yeah.  Look at the history of the labor movement.  Not taking sides - not blaming anyone.

But it happened.

When it comes to software, most managers, directors, whatever, who have people developing software reporting to them share a common background - most were software developers at one point in their career.

Most remember the skills they had to learn - at college, university, somewhere - and how hard those skills were to gain.  Then they realize that other things are needed - things they may not understand.

Customers are complaining about software and bugs and problems and ... stuff.

Management consultants come in and consult.  Except they don't understand this stuff, really, either.  So they look in their case study books for examples that look like this same basic thing - where people are making stuff.

And they come up with a model that works great on assembly lines.  Processes need to be repeatable.  Define steps and follow them the same way, every time, and you will have a "positive result."  Make sure you are following these, and other, best practices.

Sounds great, right?  And then the world looks like a Dilbert cartoon.  You know, the one (well, several) where the pointy-haired boss assumes that anything he does not understand must be easy.

Many of these same bosses know that software design does not fit neatly into repeatable processes.  They understand this.  They'll talk about following the process and using "intelligent cheating" or some such.

The Fault

... lies in the belief that by breaking things into small enough pieces, they can be done by anyone - or a machine.

That might work for things where thought is not required.  Things that can be done as well, more efficiently and at less cost by a machine.  This is patently true on the modern robotic assembly lines - which are nothing more than Henry Ford's assembly line taken to its logical conclusion.

This same logic is often applied to testing.  I believe it is at the heart of the "automate everything" and "reduce manual testing by X% per year" mandates.  On some level, it makes sense to do things as efficiently as possible.

What happens when it is not possible to reduce manual testing by whatever that magical percentage is?  Many of those same companies will tie performance measures to these other goals.

Did you reduce the testing by the magic percent?  No?  Too bad.

Did you automate all testing?  No?  There are no excuses for "that can't be automated."  Of course it can.  You did not try hard enough.

This proves to the people doing this work that no matter how well they do their work, empty meaningless slogans trump the reality of what the work is. 

If you treat your people as machines, as unthinking automatons, I pity you.

You are wasting the potential to soar - both yours and theirs.

One more thing.

If you treat your people as machines, as unthinking automatons, I will not work for or with you.

Oh.  The happy ending?  There isn't one, at least not yet.



Monday, June 25, 2012

On Value, Part 2: The Failure of Testers

This is the second post resulting from a simple question my lady-wife asked at a local tester meeting recently.

This blog post resulted in a fair number of visits, tweets, retweets and other measures that people often use to measure popularity or "quality" of a post.

The comments had some interesting observations.  I agree with some of them, can appreciate the ideas expressed in others.  Some, I'm not so sure about.

Observations on Value

For example, Jim wrote "Yes, it all comes down to how well we "sell" ourselves and our services. How well we "sell" testing to the people who matter, and get their buy-in."

Generally, I can agree with this.  We as testers have often failed to do just that - sell ourselves and what we do, and the value of that.

Aleksis wrote "I really don't think there are shortcuts in this. Our value comes through our work. In order to be recognized as a catalyst for the product, it requires countless hours of succeeding in different projects. So, the more we educate us (not school) and try to find better ways to practice our craft, the more people involved in projects will see our value."

Right.  There are no shortcuts.  I'm not so certain that our value comes through our work, though.  If there are people who can deliver the same results for less pay (i.e., lower cost), then what does this do to our value?  I wonder if the issue is what that work is.  More on that later; back to the comments.

Aleksis also wrote "A lot of people come to computer industry from universities and lower level education. They just don't know well enough testing because it's not teach to them (I think there was 1 course in our university). This is probably one of the reasons why software testing is not that well known."

I think there's something to this as well.  Alas, many managers, directors and other boss-types testers deal with, and work with and for, come from backgrounds other than software testing.  Most were developers - or programmers, as we were called when I did that job.  Relatively few did more than minimal testing, unit testing or some form of functional testing.  To them, when they were doing their testing, it was a side activity to their "real work."  Their goal was to show they had done their development work right, and that was that.

Now, that is all well and good, except that no one is infallible in matters of software.  Everyone makes mistakes, and many deceive themselves about software behavior that does not quite match their expectations.

Jesper chimed in with "It's important that all testing people start considering how they add value for their salary. If they don't their job is on the line in the next offshoring or staff redux." 

That seems related to Jim's comment.  If people, meaning boss-types, don't see the point of your work, you will have "issues" to sort out - like finding your next gig.

The Problem: The View of Testing

Taken together, these views, and the ones expressed in the original blog post, can be summarized as this:  Convincing people (bosses) that there is value in what you do as a tester is hard.

The greater problem I see is not convincing one set of company bosses or another that you "add value."  The greater problem is what I see rampant in the world of software development:

Testers are not seen as knowledge workers by a significant portion of technical and corporate management.


I know - that is a huge, sweeping statement.  It has been gnawing at me how to express it.  There are many ideas bouncing around that eventually led me to this conclusion.  For example, consider these statements (goals) I have heard and read in the last several weeks, presented as highly desirable:
  • Reduce time spent executing manual test cases by X%;
  • Reduce the number of manual test cases executed by Y%;
  • Automate everything (then reduce tester headcount);
There seems to be a pervasive belief that has not been shaken or broken, no matter the logic or arguments presented against it:  Anyone can do testing if the instructions (test steps) are detailed enough. 

The core tenet is that the skilled work is done by a "senior" tester writing the detailed test case instructions.  Then, the unskilled laborers (the testers) follow the scripts as written and report if their results match the documented, "expected" results.
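
For illustration only - this is my own invented example, in Python, not anyone's actual process - the model reduces a test to something like this:

```python
# What the "anyone can follow it" model looks like in practice - a
# hypothetical scripted test case, reduced to data. The steps and
# expected results are invented for illustration.
login_test_case = {
    "id": "TC-042",
    "title": "Log in with valid credentials",
    "steps": [
        ("Open the login page",          "Login form is displayed"),
        ("Enter 'jsmith' in Username",   "Field shows 'jsmith'"),
        ("Enter the password",           "Field shows masked characters"),
        ("Click the 'Log In' button",    "Dashboard page is displayed"),
    ],
}

# The executor's whole job under this model: do the step, compare what
# happened to the expected column, record pass or fail. Anything the
# script's author did not anticipate is, by definition, out of scope.
for action, expected in login_test_case["steps"]:
    print(f"DO: {action:35} EXPECT: {expected}")
```

Everything interesting a tester might notice along the way - the odd delay, the misleading error message, the field that accepts 500 characters - is outside the loop, and outside the job description.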

The First Failure of Testers

The galling thing is that people working in these environments do not cry out against this - either by debating the wisdom of such practices, or by arguing that defects found in production could NOT have been found by following the documented steps they were required to follow.

Some folks may mumble and generally ask questions, but don't do more.  I know, the idea of questioning bosses when the economy is lousy is a frightening prospect.  You might be reprimanded.  You may get "written up."  You may get fired.

If you do not resist this position with every bit of your professional soul and spirit, you are contributing to the problem.

You can resist actively, as I do and as do others whom I respect.  In doing so, you confront people with alternatives.  You present logical arguments, politely, on how the model is flawed.  You engage in conversation, learning as you go how to communicate to each person you are dealing with.

Alternatively, you can resist passively, as some people I know advocate you do.  I find that to be more obstructionist than anything else.  Instead of presenting alternatives and putting yourself forward to steadfastly explain your beliefs, you simply say "No."  Or you don't say it, you just don't comply, obey, whatever.

One of the fairly common gripes that comes up every few months on various forums, including LinkedIn, is the whinge-fest about how it's not fair that developers are paid "so much more" than testers.

If you...

If you are one of the people complaining about lack of PAY or RESPECT or ANYTHING ELSE in your chosen line of work, and you do nothing to improve yourself, you have no one to blame but yourself.

If you work in an environment where bosses clearly have a commodity-view of testers, and you do nothing to convince them otherwise, you have no one to blame but yourself.

If you do something that a machine could do just as well, and you wonder why no one respects you, you have no one to blame but yourself.

If you are content to do Validation & Verification "testing" and never consider branching beyond that, you are contributing to the greater problem and have no one to blame but yourself.

I am not blaming the victims.  I am blaming people who are content to do whatever they are told is a "best practice" and accept everything at face value.

I am blaming people who have no interest in the greater community of software testers.  I am blaming people who have no vision beyond what they are told "good testers" do.

I am blaming the Lemmings that wrongfully call themselves Testers.

If you are in any of those descriptions above, the failure is yours.

The opportunity to correct it is likewise yours.