Sunday, September 28, 2014

On Folly

In her book, "The March of Folly: From Troy to Vietnam," Barbara Tuchman talks about "the pursuits by governments contrary to their self interests."  It's an exceptional book.  I strongly suggest you read it.

Any software tester who has an interest in how things can be different, or at least has an inkling that things, either in their organization or in general, are not working or are simply messed up, would do well to read the first two pages.  (My paperback copy is a 1985 Ballantine Books edition of the 1984 copyright renewal.)  It's good.  You could substitute "test managers" (or "software managers" or "software development leaders") for "governments" and not change that first page one iota.

Tuchman does not throw everyone who makes a mistake, or even a serious blunder, under the bus of "folly."  No, she is more precise than that.  To be painted with her brush several specific conditions must be met.

In her words "the policy adopted must meet three criteria" (sounds better that way, doesn't it?)

First, "it must have been perceived as counter productive in its own time, not in hindsight."  That is important.  After the wheels fall off, it's pretty easy to track back to the "Oh, THIS should have been done differently."

Second, "a feasible alternative course of action must have been available."  OK, fair game that.  If you know something is likely to be a bad idea and there aren't any other options to try, then it's not "folly."  It often is categorized as "bad luck."

Third, "the policy in question must be that of a group, not an individual ruler, and should persist beyond any one political lifetime."  That one is a little more challenging.  How long is a "political lifetime?"  That rather varies, doesn't it?  It could be an "administration" in the US, it could be a Board of Directors configuration for a company.  It could be a "management team" structure.  It could be several things - all of which add up to a passage of time, not a quick "policy du jour."

And software - Antediluvian era

Some 30 years ago, almost exactly 30, I was working as a programmer.  For clarification, it was common practice to have programmers, programmer analysts and folks with similar titles do things like gather requirements, confirm requirements with users/customers, create the design, plan any file structure changes that might be needed, write the code and test.  Sometimes we worked in pairs.  Sometimes there would be three of us working the same project.

The company adopted a new process model for making software.

It relied on defining requirements in advance, and hammering down every possibility and variance up front, then getting everyone to "sign off" on a form saying "Yes, this is precisely what we want/need."  Then we would take those requirements and use them for building the design of the software.  Once we had the signatures, there was no need to "bother" the "users" again.  If there were questions we'd refer to those same requirements.

Then we would use the documented requirements to drive how we documented pretty much everything else.  Our design referenced the requirements.  If we had a new screen, report or other form of interface, we could make mock-ups to show exactly what each item would look like - and how it related to which requirements.

We even had comments in the code to reflect the section of the requirements this piece of code was addressing.  We used them when building test strategies and test plans and detailed test cases.

We could precisely identify each section of each requirement then show how every section of every requirement could be referenced in every piece of the design, the code, the file structure and then in the test plan - specifically each test case.
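The mapping described above is essentially a requirements traceability matrix.  As a rough illustration (all the requirement IDs, artifact names and test case names here are invented for the sketch, not taken from that project):

```python
# A minimal sketch of a requirements traceability matrix: every artifact
# (design item, code module, test case) points back to the requirement
# section it addresses.  All IDs and names here are invented.

# requirement section -> artifacts that reference it
traceability = {
    "REQ-4.1": {"design": ["screen-mockup-07"],
                "code": ["calc_rates"],
                "tests": ["TC-101", "TC-102"]},
    "REQ-4.2": {"design": ["report-layout-03"],
                "code": ["print_report"],
                "tests": []},  # a gap: nothing tests this requirement yet
}

def uncovered_requirements(matrix):
    """Return requirement sections with no test case referencing them."""
    return [req for req, links in matrix.items() if not links["tests"]]

print(uncovered_requirements(traceability))  # ['REQ-4.2']
```

The appeal of the scheme is visible even in this toy version: a coverage gap is a simple lookup.  The trouble, as the rest of this post describes, is that the matrix is only as good as the requirements fed into it.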

I saw this and thought - "Wow.  This will fix so many problems we have."  The very senior person on the team, his title was actually "Senior Programmer Analyst" - he was about as high as you could go without turning into a manager, had doubts.  He was not sure that everything would work as neatly as we were told to expect it would.  I shrugged and wrote his reservations off as being "old school."

And then I tried it.

Hmmm.  Things were more complicated than anyone thought.  We kept finding conditions that we did not anticipate during the weeks of developing the requirements and doing design.  We kept finding holes in our logic.

The "good" news was that since everyone had signed off and said "This is it!" I only got into a little trouble.  The project was delayed while we reworked the requirements and got the change agreements signed then changed the code and... right.  We found more stuff that needed to change.

The folks running the initiative gently patted my hand and said "As you get more experience with the process, it will be easier.  The first few projects will have problems as you learn about the process.  Once you follow it precisely, these problems will go away.  You'll see."

That seemed comfortable.  I took solace in that and tried again.

Three major projects and - somehow - the same thing happened.  Not just for me, but all the programmers, programmer analysts, senior programmer analysts - everyone who wrote code and did this stuff ran into the same problems.

Somehow, none of us were following the process "correctly."  If we were, these problems would not be happening.

Several years later...  Deja Vu

At another company, I was now the senior developer.  My title was "Information Analyst."  I was working with some very talented people on cross-platform technologies.  At the time, it was very bleeding edge.  Unix-based platforms doing some stuff, the trusty IBM mainframe filling the role of uber-data-server/host and then some Windows-based stuff all talking and working together.  Along with code stuff, I was also mentoring/helping with testing stuff.  There wasn't a 'test team' at this shop; we worked together and some folks coded, some folks tested.  On the next project, those roles swapped.  I was fortunate to have a talented, open-minded group to work with.

There was a change in leadership.  We needed structure.  We needed repeatability.  We needed to fix serious problems in how we made software.

They rolled out a new model - a new software development process.  Everyone had defined roles.

We needed to focus on getting the requirements fully identified before we did anything else.  We needed to get every possible combination of conditions identified before any design work was done.  We needed to do work around codifying how requirements looked.

Then we could design software that conformed perfectly to the requirements.  We could examine all the aspects of a given requirement and handle that in our design and then our code.  If it had a new screen, report or other form of interface, we could make mock-ups to show exactly what each item would look like - and how it related to which requirements.

Then the code could be developed according to the design, and could reference the design points and the related requirements for how the code was intended to function and what purpose it was supposed to fill.

Testing - we could build the test strategy and plan simply by reading the requirements document.  Since that was so complete, there was no reason for clarifying questions to the BAs or the users or... anyone else.  Testers could sit in their cubes and design tests and then execute them when the code was ready.  Except we did not really have testers; we had developers who also did testing for projects they did not write the code for.  Except that we sometimes had a problem.

We could map out the expected results in the testing and then ask the people doing the test scripts to check "Y" or "N" if the expected results came up.

Somehow, this company, too, ran into similar problems as the other company.  We kept finding conditions we had not accounted for in the detailed requirements gathering.  We kept finding conditions no one had anticipated.  We kept finding holes.

When we asked about it, we were told the problem was we were not following the process correctly.  If we had, we would not have these problems.

Hmmmm.... this sounds really familiar.

The "good news" for my team was that we generally avoided being in too much trouble because everyone who needed to sign off on the requirements had done so.  There was some grumbling about how we should have done a better job of identifying the requirements, but since everyone had said "Yes, these are all of them," we were able to avoid taking the fall for everyone else.

Still, it was extremely uncomfortable.

A couple years later...  Deja Vu Again

Now I was the QA Lead.  That was my title.  I was working with a small team making testing happen on code that some really talented developers were making happen.  We talked about the system, they made code and we tested it.

The customers liked it - really - the folks who used the software who did not work for the company.  They noticed the improvement and they liked it - a lot.  The Customer Service folks liked it a lot, too.  They got a lot fewer calls from angry customers and more of the "Here's what I'm trying to do and I'm not sure how to do it" sort of calls.  They tend to prefer those - at least at that company.

Things were working well - so well, in fact, that the test team was moved from that one project group to doing testing for the entire IS Development area.  That was fine, except there were two of us for some 100 developers.  Ouch.

The "approved model" for making software looked a LOT like the last one.  This time, there was the call of "repeatable process" included.  We can make everything repeatable and remove errors (I believe "drive them out" was the phrase) by being extremely consistent.

This was applied not only to the information gathering in requirements, but also in design and code development.  As one might expect, it was handed on to testing as well.  Everything needed to be repeatable.  Not only in the design process but absolutely in the execution.

So, while we strove to make these design efforts repeatable, the demand was that all tests would be absolutely repeatable.  That struck me as "I'm not sure this really makes sense," but I was game to try.  After all, a consultant was in explaining how this worked, and we were essentially hushed if we had questions or doubts.

The response was something like "The first few times you try it, you will likely have problems.  Once you get used to the process and really apply it correctly, the problems will go away and things will run smoothly."

We still had problems.  We still struggled.  Somehow, even the "golden children" - the ones who were held up as examples to the rest of the staff - had trouble making this stuff work.

A few years later...  Deja Vu All Over Again

I was working at a small company.  In the time I had been there we had shifted from a fairly dogmatic approach to testing, where precise steps were followed for each and every test, to a more open form.  Simply put, we were avoiding the problem of executing the same tests over and over again, eliminating bugs in that path and ignoring any bugs slightly off that path.

The product was getting better. We had built rules to document the steps we actually took, not the ones we planned to take.  When we found a bug, we had the steps that led to it already recorded and so we could plug them straight into the bug tracker.  The developers found this more helpful than a general description.

We had documents that were needed - requirements, etc.  They were sometimes more vague than they should have been.  So we tested those as well.  This allowed us to have meaningful conversations with people as we worked to define what precisely they expected.  Of course, we carried these conversations on as we were working through designing the software and considering how to test it.

Sure, we had to sometimes "redo" what we did - but generally, things worked pretty well.  They were getting better with each project.

Then we were bought.

After the inevitable happy-sizing, the "staff re-alignment" that left us with a fragment of the old company staff, we received instruction in the "new way" of creating software.

You start by completely defining requirements - in advance.  Nothing happens until everyone agrees that all the requirements are fully documented and are complete.  Then design happens and everyone relates the design to the requirements.  And coding is done strictly against the design to make sure everything is according to the requirements.  Test planning is done with a specific strategy created to reflect the requirements.  Then test plans are created from the strategy, and refer to the requirements.  The test cases are detailed, repeatable sets of instructions to make sure the tests conform to the requirements and can be executed many times without variation.

"Yes," we were assured, "this will take some getting used to, but once you understand the new process and follow it, you won't have any problems and the software will be great."  As projects had problems, of course it was because we were not "following the process correctly."

Looking back...

The first time I encountered a process like that, I was all over it.  In my very junior programmer mind it seemed to make perfect sense.  The next time, I was wary.  I had seen it fail before - and let's face it: if problems are the result of people "not following the process correctly" - or "not understanding" the process - then I think there may be a problem with the process.  After all, these were smart people who had gone through the training for the new way of doing things.

The third time - right.  Good luck with that.  I expressed my concerns and reasons for being unconvinced.  The last time, I rebelled.  Openly and loudly.  I broke it down with them, referenced the model they had drawn on, and showed documentation from multiple places demonstrating that the model was innately flawed.  No amount of "tweaking" would fix the central issues.

I was told, "No, this was developed for us, specifically."  I challenged that by pointing out the reference materials readily available on the internet showing this process model - complete with the same step names, artifact names and descriptions of the process.  I then explained why this model would not and could not work in their context.  (By the way, it was the same reason it was doomed in each of the previous instances I encountered it...)

Those issues are these -

* Human language is an imprecise form of communication.  People can and will misunderstand intent.
* Requirements are rarely understood, even by the people who "know" the requirements best - the people asking for the change.  People have a hard time considering all the possible paths and flows that result from a given decision.  Once they see the result, they will better understand their own needs.
* Humans do not think in a linear manner.  That is the single biggest problem I see in the "repeatable" models put forward.  At some point there is a cloud with the word "Think" present.  At that point, the linear model fails.

With each new standard model put forward, there are people working in the industry governed by the standard with practical experience around the work the standard is intended to direct and mold.

When they raise objections, dismissing them as "self-serving" is, in itself, self-serving.

Your pet project may well be ugly and unwieldy.  Admit that possibility to yourself at least, or join the list of "leaders" who commit folly - and destroy the thing they are trying to save or build.

Saturday, August 30, 2014

More Than One Way to Confer - CAST2014

In August I was in New York City for CAST 2014.  This was the Ninth installment of the Conference of the Association for Software Testing.  Like many non-profit conferences, there is a mix of staff and volunteers making sure things run smoothly.

As luck would have it, a couple of weeks before the conference began, I got a phone call asking if I was available to "help out" a little more than usual.  It seems the nice lady who normally runs the registration desk was not available this year - things going on with the family and medical issues and... life getting in the way.  I said "Of course I can help out.  Not a problem!"

The result was, even though I was at the conference, I was tracking the activity by watching twitter because I was awfully busy not being in the room.  It was really kind of fun.

Now, CAST is interesting in that for the last several years we have hosted a live webstream, recorded the keynotes and several of the track sessions, and then loaded them to YouTube as soon as we were able.

So, I really wasn't worried about missing content.  While I was kind of bumming about missing the energy in the room(s) I knew I would get the highlights from friends and colleagues later that evening.  Why did that matter?

Well, CAST is interesting.  Part of the "energy" is in the portion of each presentation (including keynotes) referred to as "open season" - a moderated Q & A session where, essentially, questions on the presentation, the experience, the theory behind it, or lessons learned are all fair game.  Discussion is aimed not to dance around issues but to get to the heart of questions that people in the room have and wish to know more about.  When the time is up, it is not unusual for the discussion to move to the hallway or to an area intended for just that sort of interaction.  For me, this is one of the main attractions of CAST: the discussion.

Another main feature is related - the chance meetings with people in the hallway or at breakfast.  Frankly, one of the best aspects for me is these meetings.  The "I just bumped into {famous tester/tester I respect}" events serve as highlights of my day and week.  This year, I admit, it was a flurry of these meetings - Fiona Charles, Erik Davis (and his crew from Hyland Software - these folks get it), Huib Schoots, James Bach, Griffon Jones, Matt Heusser, Karen Johnson, Michael Bolton, Selena Delesie, Michael Larsen.  The list kind of goes on and on.  People I knew from other places and years past and kind of looked forward to meeting again.

Then there were others I had not met in person, but whose writings I respect, like James Christie, Richard Bradshaw (whom I had met before, but never really had a chance to talk with), Smita Mishra, John Stevenson, and ... and... and ... Right - you get the idea. 

There was an impromptu gathering at my hotel bar when I happened to run into people I did not expect to see there - and then more appeared - and more - and - there were some 15 people at one point on a Sunday night, with no planning whatsoever, just having drinks and great conversation and... Frankly, I don't see that very often at other conferences.

One thing I must admit though, working at the Registration Desk, while it is a lot of work, is also a LOT of fun.  In what other way do you get to greet EVERY SINGLE PERSON who walks in the door? 

I can hear it now - "But, I'm not good at that - I am shy and kind of introverted."  HAH!  Ya know WHAT?  Very few people are comfortable doing that - walking in to a room full of strangers and greeting people and being warm and friendly - and saying hello and ...

OK - here's a secret:  I kind of suck at that.  I worked really, really hard to do that and not come off like a jerk.  Ya know how I overcame that?  I thought to myself, "Self, how would you want someone to help you feel comfortable in a strange setting, like a conference when you may not know anyone and all these 'legends' are walking around?" 

So, yes.  This is far from the full list of people I had the chance to meet and hang with at CAST.  It is simply some of the thoughts running through my head as I think back on that week in New York. 

What is my point on this?  I think it is pretty simple.  If you have the opportunity to help out a local meet up or gathering or even a conference - do it.  If they ask if you could work the sign in/registration table - DO IT.  It is kind of cool!  You will do something others won't - say hello to every person who comes to the event.

You never know who might walk through the door.  One of them might be Opportunity.

Sunday, August 24, 2014

August, 1914 and Confirmation Bias

People following me on Twitter know that I regularly, though not always, tweet something about an event that occurred in history that day.  People paying attention have noticed that this month, August, I have paid particular attention to August of 1914.  This month marks the 100th anniversary of the outbreak of World War I.  Many Americans look at this as an interesting but relatively minor footnote that only touched the US much later.  This is unfortunate.

This war tumbled empires, shattered people's concepts of surety and security, and marked the shift of the order of the world.  Former colonies soared to importance.  Australia, New Zealand, Canada and a little country called the United States all found themselves thrust into the limelight where European powers thought they alone held sway.

Much of this was due to the events one hundred years ago this month.  There are lessons we can learn today from these events as testers and as citizens of the world.


Much has been said by popular historians about the assassination of Archduke Franz Ferdinand and his wife.  On a simple timeline, this prompted demands and ultimatums and threats - and as national figures refused to back down from the brinksmanship they played a part in creating, nations declared war on each other on a scale that had not been seen since Napoleon's near conquest of Europe.

Then there was Belgium.  The young King Albert, trained in statecraft as well as military matters, feared that if war came, it would roll through his country as so many other wars had in the past, from Julius Caesar to Napoleon.  Countries that were pledged to defend Belgium's neutrality were pushing themselves and each other toward war.  Germany, France, Britain and Austria all had pledged to preserve and protect Belgium's neutrality under the Treaty of London of 1839.

When Germany ordered mobilization on August 1, Belgium ordered its forces to mobilize, with the order taking effect at midnight.  Soldiers reported to barracks, reserves were called up and vigilance was increased along all of Belgium's borders.  The standing policy and agreement was that if any country should invade Belgium, the guarantors of her neutrality would come to her aid.

Germany issued an ultimatum to King Albert demanding that German forces be allowed to pass through Belgium to invade France.  Albert refused.  His fear was simple: if Germany won the war, how likely were they to honor their promise of withdrawal after they had violated their promise not to invade?

Germany invaded Luxembourg on August 2.  Germany declared war on France on August 3.  That same day, August 3, Belgium refused Germany's demand to allow German troops to pass through Belgium. 

August 4, Germany invaded Belgium and Britain declared war on Germany for doing so.

Albert disagreed with many of his generals who insisted that offensive operations were key to victory over Germany.  Many modeled their thinking on the French plans, which called for massive assaults against German positions to drive Germany out of Alsace-Lorraine (lost during the Franco-Prussian War in 1870) and to defeat German offensive operations by attacking German bases.

Albert insisted on defending the forts on the frontier and defending key cities as long as possible, keeping the field army as an Army in Being on Belgian soil.  Thus, the German offensive would hit and be delayed by fortifications while his main forces finished equipping and preparing for battle.

Neither the Germans nor the French expected the Belgians to put up any kind of meaningful resistance.  The Belgians were, in the eyes of the "Great Powers" an inconsequential force.  They were the Hobbits of the Middle Earth of Europe in 1914.


French pride was injured in the short, painful Franco-Prussian War where the main French forces were surrounded and defeated in a massive double encirclement at Sedan.  It was a humiliating failure.  Since then, the French military establishment had looked forward to restoring their honor and the glory of France.

At the front of their minds was restoring to the French nation the provinces taken by Prussia after the French defeat.  They longed for the day they would march in triumph and retake their lost territory.  They longed for the day they could invade Germany and slice off a portion of German territory in retribution.

In doing so, their plans were all of the offensive.  Plan XVII called for a massive invasion of Alsace-Lorraine and then sending overwhelming numbers into Germany proper.  They would pull divisions from their territories in Algeria to make this happen, along with mobilizing as many reservists as possible.  Those who spoke of concerns about the defense of France and Paris in particular were viewed as defeatists if not out and out traitors.

Commanders spoke of élan and cran and the pantalon rouge as the keys to French victory.  If the Germans massed their main offensive to try and attack the French flank, then there would be fewer Germans to resist the French onslaught aimed at Metz.

The French Commander, Joffre, was so adamant in this that warnings and messages from Belgium on the size of the German attack into Belgium were dismissed as coming from unreliable sources.  People fighting the enemy in front of them were considered less reliable than people making plans to fight an enemy they were not yet ready to face.

Finally, a cavalry unit was sent forward to look for evidence of this massive invasion into Belgium that Albert and the Belgian commanders were frantically sending messages about.  The cavalry found little to support the claims.  This was mostly the result of effective screening by German cavalry to offset the efforts of the French.  In short, the French cavalry failed to recognize that they were themselves being screened by German cavalry.  If there had been no "massive invasion" happening, there would have been fewer Uhlan regiments present.

They saw, but did not observe.

The day after the French emissary told the Belgian high command that they were mistaken in the size and scope of the German incursion, Belgian cavalry units, fighting dismounted, defeated a large force of German Uhlans at Haelen.  Four days later, the last of the forts around Liege fell to the Germans who brought up massive artillery to destroy the defenses.

On August 21, before the French attack at Charleroi could begin, the Germans launched their own massive attack.  Lanrezac succeeded in saving his army of 15 divisions by withdrawing instead of following orders and attacking the 18 German divisions he was facing.


In spite of joint operation plans with France, Britain was not as willing to jump into the fray as any of the major belligerents.  Instead, the Royal Navy was mobilized to protect the English Channel and keep sea lanes open.  The stated intent was to ensure that none of the navies of nations who went to war would be in a position to harm her shipping or harbors.  In reality, the intent was to help protect the French coastline and ports in case there was a need to send troops to Europe.

Where the governments of Germany, France, Russia and Austria-Hungary were united in their desire to go to war, Britain was not.  The government faced a very real threat of a loss of confidence if war was entered into without an overwhelming reason.

The only way to ensure support for war was if Belgium was invaded.  The Germans did that on August 4.  Britain declared war and mobilized her army.  The German Chancellor was astounded that Britain would go to war over "a scrap of paper."  She did.

Of the countries in Europe, only Britain did not have conscription for military service in 1914.  Any force sent to Europe would be volunteers.  The British Expeditionary Force sent to France on August 7 consisted of some 80,000 men.

They met the German army in force at Mons, on August 23, three days after German forces occupied Brussels, the Belgian capital.  The British forces were heavily outnumbered, with German forces having twice as many pieces of artillery.  In spite of this, the British held the German advance and inflicted extremely heavy casualties on their opponents.

The Germans did not expect the British to put up much of a fight.  There is a tendency among armies and nations to judge opponents by how they behaved in their last conflict.  The idea that someone learned something seems a revolutionary concept.  Encountering a skilled, well trained and motivated opponent when one expects an inept one tends to shatter more than the idea that something will be easy.

It can also shake the confidence in your own abilities, despite whatever exhortations leaders make to the contrary.

By the end of August 24, the German infantry soldiers knew they were in for a harder time than they had been led to believe.  Within a matter of weeks, the people of Germany, France and Russia would know that the quick war they all expected was an illusion.


King Albert of Belgium steadfastly held on to the idea that a free and independent Belgium needed an army in the field, holding a tiny portion of Belgium.  Without that, they would be at the mercy of the German invaders, or possibly worse, their Allies.  He also knew that if his small army could delay the German advance, he could gather support from around the world.  If he could hold on long enough, that support would manifest itself in untold millions of soldiers.  He expected his country to be brought to the brink of utter destruction by resisting.  He had proclamations issued to all the towns and villages saying to turn in all weapons before the Germans arrived, lest the owner be killed.  He expected war to be made on the civilian population.  I am not certain if he expected war to be made in the way that it was.

Kaiser Wilhelm II, Moltke and Falkenhayn of Germany all expected Belgium not to resist.  At the most, they expected a token form of resistance and described this as soldiers at the frontier firing rifles into the air and others lining the roads as the German columns passed by.  When the Belgians fired their weapons, it was anything but in the air.  They did not behave as expected.  The brutal reaction of German forces (there is no other word to describe it) in Belgium was perhaps worse than Albert feared.  He described the potential reaction to Belgian resistance as "crushing."  It certainly was.  Additionally, they expected the British forces to fold up easily and leave the French to their fate.  They found it hard to believe that England, a fellow "Germanic" country, could really make war on Germany.

Grand Quartier Général, the French High Command - the whole thing - refused to argue against Joffre's insistence on attack.  The organizational culture refused to consider the possibility of error.  This nearly led to a complete disaster.  They were saved by a handful of officers in the field who saved their commands and France, even as they sacrificed their careers and, in some cases, their lives.

General Sir John French and H. H. Asquith of Britain managed to get a functioning and capable force into action in Europe in time to prevent a total collapse of the Allied lines.  Asquith demonstrated the stereotypical British habit of muddling through toward an end.  French had a nasty habit of not letting facts confuse him once his mind was made up.  However, both managed to get a coherent force into place and gain enough time for a larger, more substantial force to become available.

This was partly through the efforts of Lord Kitchener, the first serving officer in the Cabinet since the time of Charles II, who became Secretary of State for War (and the face of the recruiting posters, staring sternly out above the motto "Your Country Needs YOU").  He also horrified members of the War Council on the first day by saying that Britain needed 70 Divisions, not the 6 that were available in August of 1914, and that the current professional force should train the new recruits.  He also said it would take at least 3 years to get that number of adequately trained soldiers ready.

King George V of Britain played his own part in keeping focus on what was needed.  By calling for protection of "small nations" against invading hordes (actually, "Huns," which was the term Wilhelm II used in reference to his own army - he could have chosen a better word, but landed there time and again), King George demonstrated his own model of courage, even when it was his own children and relatives who went into harm's way.  (Much could be taught to today's leaders in many countries, I think.)

Lieutenant Maurice Dease, V.C., 4th Battalion, Royal Fusiliers, manned a machine gun at Mons after every soldier in his section had been killed or so severely wounded that they could not handle the weapon.  Only after being wounded five times, when he could no longer operate the gun, would he allow himself to be evacuated to a hospital.  He died of his wounds, but helped save his Battalion and gained the thing the British needed most right then: time.

Private Sidney Godley, V.C., 4th Battalion, Royal Fusiliers, took over from Lt. Dease after Dease was mortally wounded, and continued operating the machine gun for 2 hours while the Fusiliers, and the rest of the BEF, retreated.  He kept at it, despite being twice wounded, until he ran out of ammunition.  He then dismantled the gun and threw the pieces away to prevent them being captured.  He spent the rest of the war in a German POW camp.

And Testing...

Each of us has our own biases and beliefs. We have the choice of working hard to set aside those biases and examine the evidence in front of us, or we can dismiss the evidence as wrong or from unreliable or irrelevant sources.

We have the choice of looking at what is, what an impartial observer might note, or what we wish it to be.  In the end, testers are bound to observe how software is currently functioning.  Then we can ask two important and related questions:
Is this what I could reasonably expect to see?
Is this a problem?

After we consider those, we can then provide a reasonable evaluation of the software.  If people do not want us to consider these questions, they are taking on the role of the various command structures from August of 1914, who could not bring themselves to believe what was unfolding in front of them even as their plans and dreams of glory ended in the bloody mess at the Marne.

Sunday, July 27, 2014

On Testers and Code

Loads of people have weighed in on the question of testers needing to learn to code - or not.  The last several weeks have helped me develop my thoughts on it more clearly than before.

At shops where there is a distinct "automate a bunch of stuff" culture or a stand-alone "automation team," it is easy to see why it is reasonable to presume that "learning to code" is "essential" for testers.

Most of the time, my standard response is a series of questions.  People think I'm doing something Socratic or that I'm leading them down a garden path just to pounce on them and say "Ah HAH!!! What about the FIZban project?  RIGHT! What good would that do you?"

The fact is, when I'm asking questions it is because I am trying to understand the situation you are in.  Most people who assert one thing or another absolutely tend to do so without considering other people's context, or that there might be a situation where the "absolute" fails.  Most of them also look at you blankly when you question the use of the term "best practice" for their context.

Here's my current thinking on Testers Learning to Code, coming from someone who is desperately trying to dust off and clean up rusty JavaScript and even MORE rusty and limited Java skills.

Depending on what might be expected of you, there could be a reasonable expectation that you are at least conversant with the terms being used by the people you work with.  We expect developers and BAs and PMs to use the terms we use, and not randomly assign definitions to things in ways that make us cringe.  It strikes me that a fundamental understanding of what people mean by "assert" or "class" or something else might be a reasonable thing to expect.
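For instance - and this is just a toy sketch of my own, not from any particular shop - the kind of vocabulary I mean fits in a few lines:

```python
# "class": a bundle of data and the behavior that goes with it.
# "assert": a statement that must be true, or the check fails loudly.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

acct = Account(100)
acct.deposit(50)
assert acct.balance == 150  # passes; asserting 151 would blow up
```

A tester who can read that much can follow a code review conversation without anyone having to stop and translate.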

At the same time, we, testers, may be expected to assist in some way with code - reviewing production or test code, or possibly contributing to the development of test code (yes, I know that is a potential can of worms, if not spaghetti).  If so, does it not make sense to at least learn the fundamentals?

If your organization is open to pairing, the need for testers to become anything other than "almost slightly dangerous" with writing or understanding code would, I expect, decrease dramatically.  However, that minimal knowledge might help you do a better job of testing.

Having a curious mind, I am not sure why people would not want to do that - at the least.

Like many people, I bristle when those with no concept of what I do insist that I must absolutely do something they do, so I can "be of value" - or something. 

This brings up the question: are testers really failed developers?  Do testers want to be production code developers but can't handle it?

I don't think so.

Once upon a time, people developing production code also tested it.  They also worked with the people asking for the software to learn what the requirements were.  Of course, in some shops, they told the people what the requirements were because, after all, they were IT.  They were the people who developed the software that would work the magic that allowed those lesser beings to do their assigned work.

Sometimes, I wonder if the people insisting that all testers "must learn to code" are making the same kind of "best practice" argument - one that seems to defy the actual definitions of the words - that people make when they mean "just do it like this because it worked for us."  I want to believe that, and not that they are descended from those same people who, once upon a time, told everyone what they, the software experts from IT, would deliver.

The people who always know what is best.  The same ones we should not try to confuse with any other viewpoints.  And facts to the contrary of those views are simply not allowed.

My fellow Software Testers - Learning something about what production code developers do and how they do it may have great value and may help your development as a professional. 

Learn to code because you want to learn one more tool to make yourself better, if it is appropriate for what you want your career to be - not because someone is compelling you to do so.

Sunday, July 13, 2014

On Software Quality and Software Testing

The last week or so I have been deep, very deep, into considering the relationship between Quality of Software and Software Testing.  In this, the conversation has been more at the Meta level, something akin to ASQ's view on quality in general.  (Fair warning disclaimer, along with being a software tester I am also a member of ASQ - American Society for Quality - these folks.)

Interestingly, that relationship helps me when I challenge assertions - usually gratuitous, often fundamentally flawed - about something published by the ASQ or something Deming said or wrote.  It's interesting sometimes to lean into the table and say "Can you explain what that means?  I'm not making the connection between what you are asserting here and my understanding of what {insert quality buzzword} means.  It's possible we have a different understanding of the concept and I'd like to address that to avoid future problems and potential future conflict."

The response often comes back citing some authority - Six Sigma, for example, or some concept championed by ASQ.  Interestingly, that was recently coupled with the ideas of Context Driven Testing and AST - Association for Software Testing (Ummm, for those who don't know me, I'm a member of that, too.)  Oftentimes, when it is clearly an attempt to assert a position by citing authority, I will say something in as non-threatening a manner as possible along the lines of "I'm a member of ASQ and of AST.  I have read the white papers and books on Six Sigma (or whatever else they are asserting, usually out of the recommended context) and I'm not sure how they align with what you are saying.  I would like to understand what you are saying better.  Can you explain it, or would you prefer to have that discussion off-line, maybe over coffee?  I'll buy."

I find people will be much more open to such discussions if I buy the coffee and/or the bagel to go with it.

And yeah, I realize that I am doing my own version of citing authority by making the above statement.  It does serve to get their attention and blow away the smoke screen that was intended to be set up.  Well, maybe not so much removing the smoke screen as bringing high-powered radar into the mix - I can see their position in spite of the smoke screen.

Where am I going with this? 

Many people I meet use the terms "testing" or "quality assurance" or "QA" interchangeably or in conjunction with each other.  You get statements like "Let me know when this has been QA'd" when they mean "tested."  Then there is "QA Testing."  Do NOT get me started on that.

The idea of "testing improves quality" is often the response to the question "Why do we test?"  The bit that gets left out, possibly because it seems obvious or maybe because people are oblivious to the idea, is that testing improves quality only if someone acts on what is learned from the testing. 

If something is not changed as a result of testing - configuration, code, processes - maybe all three - maybe other stuff as well, then will "quality" be any better?  What is the point of testing?

If people want confirmation that things "work," then by all means - run the happy-path scenarios that may have been used for unit testing, or build confirmation testing, or checks in the CI tools - but don't confuse this with "testing."
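To make the distinction concrete, here is a toy example of my own (the function and values are hypothetical, not from any real project).  A happy-path confirmation check only confirms that the expected input gives the expected output; it raises none of the questions testing would:

```python
def parse_price(text):
    """Convert a price string like "$12.50" to a number."""
    return float(text.replace("$", ""))

# A happy-path confirmation check: expected input, expected output.
assert parse_price("$12.50") == 12.5

# Testing asks what else might happen - commas, empty strings,
# negative amounts - questions the check above never raises.
# For instance, parse_price("$1,200.00") raises ValueError.
# Is that a problem?  Someone has to decide.
```

The check passes and the build goes green; the ValueError on a perfectly ordinary-looking price string goes unnoticed until a tester asks the second question.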

The point of testing is to learn something about the system or piece of software at hand.  It is usually not to prove anything.  It is rarely done to prove something is "right."  It may be done to check certain behaviors - or to see if specific scenarios behave well enough for a demonstration - or even, in a limited sense, validate some piece of functionality. 

However - if there are any variances found, the testing identifies those variances - nothing more.

Testing does not make anything better.  Testing does not improve quality - ever.

Testing provides information for someone to decide that action needs to be taken, and then someone must act on that decision.  Then the quality may improve.

It is taking action after testing is completed that improves quality - not testing.

Saturday, July 5, 2014

On the AST and CAST and Conferring about Testing

The Association for Software Testing says this about... well, itself:

The Association for Software Testing (AST)  is an international non-profit professional association with members in over 50 countries. AST is dedicated and strives to build a testing community that views the role of testing as skilled, relevant, and essential to the production of faster, better, and less expensive software products. We value a scientific approach to developing and evaluating techniques, processes, and tools. We believe that a self-aware, self-critical attitude is essential to understanding and assessing the impact of new ideas on the practice of testing.
This is kind of a mouthful. 

I'm not going to write about that.  Well, not directly anyway.

Once upon a time I was a regular participant at SQAForums, and a newly minted "QA Lead."  I was digging for information, ideas and the like to use in my new position.  I remember threads where people posted things like "Really, there are no 'best practices'."  They sometimes went on to say something like "There may be something that works well in some circumstances or may help sometimes, but the idea of 'this is the best thing to always do' is misguided."

This made a lot of sense to me.  One guy posted something about starting a new group for testers.  This was roughly 10 years ago. I remember thinking "that sounds fantastic, but it is obviously aimed  at people more experienced in testing than I am."  I let the chance pass by. 

Two mistakes in One.  Impressive, Pete.

Fast Forward to 2009.

I was at a conference in Toronto, sitting at breakfast with a group of people I did not know, when I realized the nice lady who sat down next to me and with whom I was speaking was Fiona Charles.  The same Fiona Charles whose articles I'd read and bookmarked. WHOA!  A few minutes later, here comes Michael Bolton - no, not the singer or the guy from Office Space - Michael Bolton the tester guy.  He sits down at the same table!  WHOA^2!

We're digging into a breakfast of eggs and sausage and potatoes and fruit and coffee and tea and we're talking - I'm trying hard to be nonchalant - it's not working very well.  Two of my "testing idols" are sitting at breakfast and we are talking.  WHOA^3!

So, the conference day begins - we're doing our thing at a workshop they are conducting and I'm participating in.  I make it through the day, grab some supper, have a drink, and stumble off to bed with my mind quite melted.

The next day, in between sessions, I find myself chatting with Michael who says "Got anywhere you need to be?  How about we play some games?"  YES!  DICE!  Awesome!  So, we dive in.  Pretty soon, there is a small group standing around the table we're working at - drinking coffee and juice and tea and talking and there are some really smart people there.  A lively discussion around "metrics" and "measurements" and "expectations" and "needs." 

There was Michael and myself; Fiona joined us, as did Lynn McKee, Nancy Kelln, and Paul Carvalho.  I realized that this "hallway track" had some of the best information of the day.  I also realized I was the only participant who was not a speaker.  WHOA^4

At one point, Fiona looked at me and said "What are you doing here?  You don't really fit.  You need to go to CAST."  I responded something like "CAST? What's that?"  And got a chorus of "It's awesome! You'd love it!  It's like this conversation but bigger!"  Then Michael said something like, "You're from Michigan, you said.  CAST is in Grand Rapids next year."


I LIVE in Grand Rapids.  This way-cool conference is coming to Grand Rapids?  REALLY?  WOW!!!!!!!!!!!!!!

So when I got home, I looked it up.  I found "The Association for Software Testing" and saw the names of some of the people involved - and I said to myself "Self, these are the folks whose writing makes sense to you!  These guys rock!"

I did something I have continued to do since then - I bought myself a birthday present of a membership in AST. I have not regretted it. 

Why?  In AST, I found a community of people who are willing to share ideas and hear you out.  They don't see you as a novice, even when you are.  Instead, most of the people who really get it see you as someone who is on a journey with them to learn about more and better software testing. 

That conversation in the hallway at a conference in Toronto was only the beginning.  When CAST was in Grand Rapids the next August, I swung by the conference site the day before it began and ran into Fiona Charles, who was sitting with Griffin Jones.  I loaded them into my car and dragged them kicking and screaming to my favorite Italian place in Grand Rapids for dinner, giving them a mini tour in the process.

We landed at the restaurant, sat down on the terrazza, ordered wine, the lady-wife joined us - and we had an amazing conversation over dinner that covered nearly everything in our heads - architecture, art, the economy, US-Canadian history, software testing.  It was an amazing evening.

Every CAST since then has been like that for me - Exquisite conversation, learning, enlightenment and challenges. 

Ideas are presented - and it is strongly suggested you be able to explain and defend them - otherwise the results will be "less than ideal" for you.  People selling stuff - from tools to snake oil - are sent packing.  People with challenges are encouraged.  People looking for ideas find them.

Each year is different - and each year there are similarities. Generally, the sessions inspire conversation and discussion.  This leads to thinking and consideration.  Sometimes they result in "Interesting Encounters."

Last year, someone was presenting on failed projects and mentioned the Mars lander - the one that crashed several years ago?  Remember that?  Partway through the story a hand went up: "That's not quite what happened - I was an engineer on that project..."  Yeah. Really.

This led to a series of interesting hallway conversations - and the session she presented was very well attended.

So, what is it that, for me, AST is about? 

It helps me be better at what I do.

Friday, July 4, 2014

On Coffee and Process and Ritual and Testing

I am writing this the morning of July 4th.  In the US, this is a holiday celebrating the original 13 colonies declaring their Independence from Great Britain.  This morning is nearly perfect.  The sun is out, the air is not too hot and not too cold.  There is a gentle breeze.  I'm sitting out in the back garden reading and writing and sipping freshly made coffee.

I like a really good cup of coffee.

No, I really like a good cup of coffee.

I like a well made cup of tea as well.  Don't get me wrong - a well made cup of tea with a bit of sugar and a dollop of milk is a wonderful thing.

Still, I really like a good cup of coffee.

A couple of years ago, my lady-wife bought me a coffee press as a Christmas gift.  It was amazing for me to experiment with some of my favorite coffee beans and work out how to get the flavor and balance just right.  When I did, I was a very happy tester who really likes a good cup of coffee.

Things were great, until one crucial part went missing.  The wee tiny bit that held the screen mesh to the plunger - the part that filters the brewing coffee and separates the grounds from the stuff you want to drink.  I never proved it, but I strongly suspect that one day, after washing the press and waiting for it to dry, our orange tom cat, Pumpkin, found it an irresistible toy.

Needless to say, the choices were the stove top percolator (not bad, but still not as good as the press) or the electric drip coffee maker (ummm, ok, nuff said).  So, after struggling through what seemed an interminable period of "ok" coffee, the arrival of ANOTHER coffee press this past Father's Day was deeply appreciated.

Did I mention that I really like a good cup of coffee?  I do.  A LOT.

So, the new press was slightly larger than the previous one.  I would need to make some minor changes to my remembered favorite permutations based on which coffee I was making.  Then - I began comparing the coffee I had been drinking the week before to what I was making right then.  I mean, right - then.

Now, let us look at this.  What was different between these?  The coffee itself was the same - same roast, same grind, same water - same... Everything.  Really - that is what coffee is - a mix of ground coffee beans and water and... yeah.  That's about it.

So, what is the difference?  Maybe - the Process of making coffee?

We talk about Process Improvement - how do we make something better - like, Testing?  So, while the lady-wife chuckles at me for "my fussy coffee ritual," I find it makes things... better.  Now, if I use one coffee, like a nice dark roast, or another, like a medium or lighter roast, I may use a slightly different amount of ground coffee.  Or, I may allow the grounds to brew just a tad longer with one than the other.

The difference?  I'm not sure.  Maybe those slight variations impact the coffee.  The tool I use to make coffee and the method I use will definitely make a difference.

What does this mean?  I'm not sure.  Except for one thing. 

I know that if I blindly use the SAME amount of coffee, no matter the roast or the manner of which I plan to make it, and use the tools I plan on using - I will be disappointed. 

Here's what I mean.  If I am camping, which I really like to do, I have a handy percolator that I can make coffee in over a camp stove or over the camp fire.  It does a fine job and makes an enjoyable pot of coffee.  It is less work - well, less cleanup work anyway - than using a coffee press.  It is also less likely to break than the glass container of the coffee press.

If I am traveling on the road, like flying somewhere instead of driving, I may figure something out with making coffee in my hotel room that is less pleasing to me than my normal "at home" methods or when I am camping.  But, by changing how I make coffee, I get a much better cup than the thin stuff one normally gets from "complementary in-room coffee makers." 

The exception to this, perhaps, is the "complementary in-room coffee" I've had in Germany and Estonia.  (Hey, American hotel folks - go to Europe and check these guys out - they really GET coffee.)  Still, the in-room coffee in Europe is not as good as what I make at home.  It's not bad, just not as good as mine or a really good coffee shop's.

What is my rambling point?  Well, I think it is this:  Using the same measures for everything, without looking at the broad circumstances, the context in which you are working, and the tools and means available to the task at hand, is foolish.

It does not matter if you are looking at test practices, management practices or coffee making practices.  Applying something without examination, because it is "the best way" to do something, is folly.

It yields disappointing results in testing, management and coffee.