Friday, April 18, 2014

On Testing and Wars of Religion

I'm sitting in my rocker with a nice glass of wine next to me.  Today is Friday, the 18th of April, 2014.  In Western Christian religious calendars, today is Good Friday.  Tomorrow is the anniversary of the Battles of Lexington and Concord, which Americans may remember as the beginning of the American War of Independence.  Paul Revere and several others made their midnight ride the night before the battles.  Alas, that is another story.

The determination of how the date for Easter was to be calculated was one of the early points of contention in the Western Christian tradition.  This is an interesting point. Keep that in mind.

The granddaughter in high school had school today.  The interesting thing is that she goes to a "Christian" school.  My lady-wife found it interesting that she had school on a day of such significance in the Christian religious calendar.  I smiled and gently said "It is because of the type of Christian the school is intended to educate."

We talked about this a bit.  Simply put, I have had many conversations with people of this particular sect of Christianity.  When I was young, they were the majority of kids in the neighborhood.  There was one family who were Greek Orthodox, one family around the corner who were Jewish, my family, and then several families of this sect.  I smiled because I remember so many times being told, "You're nice and your family is nice, but you're still going to Hell when you die."

OK, so consider being 11 or so and being told you would go to Hell when you died because of the way you and your family go to church.

I remember asking my mother what was going on.  Her response was something to the effect of "They can't imagine being wrong in anything, and since we go to a different church, we must be the ones who are wrong, and that means we are going to Hell."

These conversations, so many years apart, have left me thinking this evening and finding the similarities with conversations I have had with certain testers to be notable and quite disheartening.

Testing Must...

Simply put, I have been given a list of things which "Testers Must..." do if they are "really" testers.  These seem to fall under one of several forms of fallacy, generally expressed as "No real <blah> would ever <do thing>."

One item commonly mentioned - Testers Must Verify Requirements.

Really?  Must?  In every circumstance?  If Tester 1 verifies requirements and does that in a day or so, what am I to do? 

I understand that when the contract says you must "provide traceability between tests and requirements" that you need to be able to do this.  Is there one and only one way to document tests and show traceability?

If there is one and only one way to document tests does this imply that any other way to document tests is wrong?

If it is wrong is it bad?  If tests are wrong as they are documented, how can we execute them and be certain that we are doing things right?

What if we are not wrong?

Does this mean we are right?

When we are challenged in our beliefs about testing, do we respond as 11 year old children or do we respond as thinking, mature adults?

What must testing do?  Are we certain?  Do we agree on this?

Based on conversations I recently had and articles I recently read, I am certain we do not agree.  When people condemn others for not agreeing with them, I get a little sad.  When I am condemned for not agreeing with them, I ask "What is it that makes you certain you are correct?"

I find that question to be challenging for people to answer.

If people cannot logically explain why they believe the things they believe about testing, and cannot logically discuss the implications, the result sounds much like the wars of religion from 400 years ago.

Of course, in smaller ways, those wars continued through my youth.  In some places they continue.  Likewise, the wars and condemnation over doing testing "differently" also continue.

Monday, March 31, 2014

A Little Blog Post on a Big Idea: Does the Software Work?

I don't need to be in my client's office until later this afternoon.  This gives me a chance for TWO mornings of writing in a row.  I intend to take full advantage of that!

When looking at the question of "Does this work?" have you noticed that all of us tend to have a slightly different view of what "work" means? 

OK, so it's not just me then.  Good.  I was a little concerned for a while.

In my case, I look for a variety of measures and models to determine if a piece of software "works" or not.  Under one model it might work perfectly well.  Under another model, it might not work at all.

These can have the same set of "documented requirements" and completely different models of expected behavior.  In a commercial software setting, I try to get a description of what the sales staff have been telling customers this software will do.  (If the sales folks are setting the customer expectations, I like to know what they are saying.)

If this is software that people inside the company are going to be using, I try to meet with representatives from the department where the software will be used.  Not just the "product owners" but the people using the software as part of their daily function.  These groups can, and often do, have distinctly different definitions of "it works" for software impacting their departments.

Finally, there is the view of development, individuals and the team.  On software with many points of collaboration and integration, let me make one thing really, particularly clear:

It does not matter if your part "works" (by some definition of "works") if the entire piece does not "work" (by the definition of someone who matters.)

That goes for functions within the application and the entire application itself.  Resting on "I did my bit" is not adequate for the people working with the piece of software if that piece of software does not do what it needs to do.

No matter how well the individual pieces work, if they don't work together, the software doesn't work.
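
A tiny, contrived sketch of that point, in Python - the function names and the cents-versus-dollars mix-up are invented for illustration, not taken from any real system:

  # Each piece "works" by its own team's definition, yet together they fail.
  def calculate_total_cents(prices_cents):
      """Billing team's piece: totals are in cents.  Their tests pass."""
      return sum(prices_cents)

  def format_invoice(total_dollars):
      """Invoicing team's piece: expects dollars.  Their tests pass too."""
      return f"Amount due: ${total_dollars:.2f}"

  # Integration: each part "worked," but nobody agreed on the unit.
  line_items = [1999, 2500, 350]   # cents
  print(format_invoice(calculate_total_cents(line_items)))
  # Prints "Amount due: $4849.00" instead of "$48.49" - so the software doesn't work.

Both teams can point to passing checks.  The person reading the invoice does not care.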

Tuesday, March 25, 2014

Projecting Strength and Value in Chaos

I've been talking with a number of people in my area the last couple of weeks.  Hanging with folks and being an ear when they need to vent to someone can actually help me (the ear) as well. 

When I worked for large corporations, universities, privately held companies, or even small companies, reorganizations would be announced with much fanfare and most of the time the staff shrugged and said "Oh, I work for Joe now instead of Mary."  Of course, Joe's staff now worked for Carrie or Dawn.  Carrie's staff now worked for Mary and on and on.

It seems there is something in the water - or maybe it's the long winter.  I don't know.  Something seems to be happening and the shrugs are a little more forced.  Not just at one company, but at a BUNCH of companies.  They remind me of when I was working, quite a few years ago, for a large company that decided to announce a reorg.

At this company, the bosses said things like "staff reductions" and "strategic reassessment of purpose" and other scary phrases that staff don't like to hear.  Oh, one other thing - the dates of the announcements kept getting pushed out.

People who are employees, particularly at some companies, expect to be there for many years if not their entire career.  As a contractor, I know when I go in that eventually I will leave - sometimes sooner rather than later.

When the cart gets upset, people's presumptions and expectations get tossed about and the world seems completely unpredictable.  Add to that concerns over "What is management doing?" and, to an observer of human behavior - a contractor sitting in the corner, or the person in the coffee shop or restaurant listening to people tell stories - it starts to look something like "Lord of the Flies."

The people with awesome skills feel fairly safe.  Those with more modest technical skills, or who have ensconced themselves in expertise about how things are done now, are in a little trouble - well - maybe a lot of trouble.  If things are changing, then experts in how things USED to be may have limited value.  Particularly if how things are GOING to be is undefined.

Expect to see people asserting their expertise - their value to the organization.  Expect to see people showing their understanding and how they can be proactive - how they can handle challenges.  These are fine - unless they are huge changes in behavior.  When things get really nuts, expect to hear "Look, my part works fine.  If this thing doesn't work it's because those (not nice term) people in that other area can't do their jobs."

Why would managers do this?  The thing about telling people there will be staff reductions and strategic changes and that everybody's jobs will change - that thing?  I don't know.  Really.  I'm pretty clueless.  I have some ideas and, frankly, I'd be rather depressed if they turned out to be the truth.

If you are on the staff of a company that is doing that?  I don't have a lot of suggestions for you.

Maybe I do have one.

The value you bring to the company and the strength you have are not things that can be easily put on or off at will. 

Be yourself.

Don't try to be what you are not.  Do what you do and be who you are. 

If you don't feel comfortable with the behavior you are seeing in the company, then start looking for new opportunities.  Don't let the folks asserting their expertise or trying to show how much "value" they add define who you are.

Tuesday, March 4, 2014

On Estimation and Stuff, For Chris

I was asked a question by email and responded.  This is a much fuller response.

Estimation is one of those things most of us get pinged about.  "We need estimates on testing the Blah System."  We can give really good reasons why estimates don't tell us much and tend to turn into a stick to beat us with.  Still, we're expected to come up with some idea of how long something is going to take when what is being done is foggy at best.

We make a reasonable effort and come up with what seems a reasonable estimate of the amount of work we need to do and the time and effort it will take.  Then, as we learn more, we realize that things will either a) take a lot more time and effort or b) take less time and effort, because of things we did not or could not account for.

It seems that every time I do a Test Process Improvement workshop, I start with something like "What do your team and the individuals on your team do now that you (and the members of your team) wish you did better?"  EVERY TIME the first or second answer is "We're lousy at estimates; we need to do estimation better."

I've learned in (mumble mumble) years of software development work that I am not clairvoyant.  I can't see the future nor am I a mind reader.  I've also learned that most people with absolutely accurate estimation calculators are snake oil salesmen.  Or, they are delusional.  They may be out and out liars but I'd prefer to think better of people.

Documentation may help - maybe.  If there is some reference to previous work on the same system, or similar projects, that may help.  If you are operating based on tribal knowledge, then you may have a bit of a challenge in convincing anyone else that this is something other than a wild guess.

If you look to do more than simply test "functional requirements" and look for other stuff, like how the system actually behaves or perhaps non-functional requirements, how do you plan for that, let alone come up with some level of estimate?

Here's one of my favorite tools -

Mind Maps

Huh?  Mind Maps?  Really?  No, really - I use them a couple of ways. 

First, I use them to track requirements (business/solution/whatever) and associate design ideas with the requirements.  Sometimes this leads to a LOT of dotted lines.  Sometimes it shows no lines whatsoever.  Either way, it helps me visualize the software and what needs to be considered.

Second - and here is where it gets interesting - I use them to map what areas of the application or software or whatever CAN be tested.  By this I mean show what is available to be tested or what has been delivered.  I can also show what is scheduled for delivery and what is not expected to change.  Then, as testing progresses, I can associate session reports with them.  Sometimes that takes a bubble or three pointing to the project wiki or SharePoint locations for the session reports associated with each logical piece.

THAT gives me a reference for both functional and non-functional aspects that need to be exercised. 
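
For what it's worth, here is a minimal sketch, in Python, of the kind of structure such a map captures - the node names, fields, and wiki URLs are invented for illustration, not a prescription for how to build one:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class SessionReport:
      title: str
      charter: str           # what the session set out to exercise
      duration_hours: float
      report_url: str        # where the session report lives (wiki, SharePoint, ...)

  @dataclass
  class MapNode:
      name: str                                   # a logical piece of the application
      testable_now: bool                          # delivered and available to test?
      requirements: List[str] = field(default_factory=list)
      sessions: List[SessionReport] = field(default_factory=list)
      children: List["MapNode"] = field(default_factory=list)

  # One branch of the map: an area, the requirements associated with it,
  # and the session reports recorded against it so far.
  order_entry = MapNode(
      name="Order Entry",
      testable_now=True,
      requirements=["BR-12", "BR-14"],
      sessions=[
          SessionReport("Happy path orders", "Exercise the standard order flow",
                        1.5, "https://wiki.example/sessions/oe-001"),
          SessionReport("Boundary pricing", "Probe discount and rounding boundaries",
                        2.0, "https://wiki.example/sessions/oe-002"),
      ],
  )

Whether the map lives in a tool, a wiki, or a drawing on a whiteboard matters less than the associations it captures: what can be tested, what has been tested, and where the evidence lives.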

This ties back to the question of "How does the software behave?"

In this instance, I'm not testing to validate requirements - I'm exercising the software to see how it behaves.  I can then compare the results with the expectations -  one part of which consists of documented requirements.

In the end, I have a full visual representation of what has been exercised and how - and how thoroughly.  This gives me something I can take back to stakeholders and say "This is what we did and how we did it.  These are the areas where we found interesting variations and needed to make changes.  Are you comfortable with this, or would you be more comfortable with more testing?"

Rather than talking about tests and test cases and what has been run and not run - which I've found is of really little value to most people, no matter what "best practice" folks tell us - I talk about the functions within the system we have exercised and the depth to which we exercised them.

But we were talking about estimation, right?

The next time I, or anyone else, is in a project that is changing part of that system, I know what was done before and how long it took.  After all, we have the session reports saved and referenced in the mind map, right? 

This can also help me when I need to consider regression testing - for this project or for future projects.  I have a gauge I can turn to for reference. 
 
So, with that information, a description of what the change is, and at least an idea of a portion of the risk based on the impact of the change, we can come up with something approaching an estimate - one that is perhaps better than an absolute guess.
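
As a rough illustration of how that reference information might feed an estimate - the area names, hours, and impact multipliers below are made up for the sketch, not a formula I am prescribing:

  # Use recorded session durations for each area a change touches,
  # scaled by a judgment call about the impact of the change.
  past_session_hours = {
      "Order Entry": [1.5, 2.0, 1.0],      # pulled from earlier session reports
      "Pricing": [2.5, 3.0],
      "Invoicing": [1.0, 1.5, 2.0, 1.0],
  }

  # Areas this change touches, with a subjective impact factor:
  # 1.0 ~ "about the same effort as before", 1.5 ~ "riskier / bigger change".
  change_impact = {
      "Order Entry": 1.5,
      "Invoicing": 1.0,
  }

  def rough_estimate(history, impact):
      """Sum the average past effort per touched area, scaled by judged impact."""
      total = 0.0
      for area, factor in impact.items():
          hours = history.get(area, [])
          average = sum(hours) / len(hours) if hours else 0.0
          total += average * factor
      return total

  print(f"Starting point: {rough_estimate(past_session_hours, change_impact):.1f} hours")
  # Still a guess - but a guess anchored in what the work actually took before.

It is not clairvoyance.  It is a starting point grounded in evidence, which is usually an easier conversation to have than defending a number pulled out of the air.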

Sunday, February 16, 2014

So, What's New? or The Problem of Change

Has anyone else noticed this?  There are some things that come up year after year in conferences and tech publications and blogs and ... stuff.

You see it at conferences and webinars and meet and greets and ... stuff.

There are folks who are bravely branching out from what they know and are attending their first conference or workshop or reading forums or... whatever.  They are total novices to the ideas presented.  Thus, the need for repeating the fundamentals time and time again.

These folks are looking for "new ideas" and "recent developments" and.... stuff.

People - speakers, writers, etc. - calling out fundamental ideas and presenting stuff that seems, well, "not new" get tweeted and blogged about and get press - and when others point out "It's good stuff, but not new," the result is blank stares or, sometimes, hostility.

Then there are the people pushing for what is hot now - so many folks seem to be looking for the "next big thing" or the "what comes after {blah}?"

There always seems to be a demand for the new, the cool, the next breakthrough to make things happen.  The next solution to the problems they can't seem to solve.  They read stuff and go to workshops and conferences and talk with people and get the new hotness in place and...

Somehow, the same companies/organizations/groups still have the same problems.  Year after year after year.

Why is this?

Then there are the people who have been around the block more than once and are looking for answers to their recurring problems.  The hard part, for many, is recognizing that sometimes the "solution" lies in understanding how the "problem" we each are trying to solve came about.

Usually, it was by fixing some other problem - or at least attempting to fix some other problem.  We instituted changes hoping they would fix the problem.  The changes proved hard.

Really hard. Sometimes, really incredibly hard.

When it is really incredibly hard some will stop and ask "What are we doing wrong?"

That question can be dangerous in some organizations.  Here is how I try and work through this.

We can look to see why we are doing what we currently do.  This is my preferred starting point: "How did we get here?"  Oftentimes, unfortunately, eyes get glassy - sometimes because no one recalls.  Sometimes eyes get defensive - as if people have been accused of something.  Sometimes you get an answer that starts out "Once upon a time..."

We can look to see what the intent is behind the changes we are trying to make.  Getting a clear answer to this can be as hard as it was for the last question.  Many times it seems like the intent is to fix something, but people are not clear on what needs to be fixed.  If the answer presented is "the system," then I usually take it that the frustration level has reached a point of no return.  That is, something needs to be changed, people are not sure what, but a change needs to be made - so make it.

We can look at this and see if what we are doing will actually impact what we want to change.  Alas, this is related to the question above.  If people have a hard time answering that question, then this one is impossible to answer and becomes irrelevant.  The message is clearly "Change something NOW."

Blah, Blah, Change is Hard, Blah, Blah

When we get to that level of dysfunction, forget it.  Ask a different question.

Can we change how we are making the change?  We might recognize that we, as an organization, made mistakes in looking at the cause of our problems and are changing the wrong thing. This may lead us to reconsidering our approach or evaluating what we are hoping to achieve.

Broadly speaking, I've found that people who look at problem solving as an incremental process - addressing one aspect at a time - tend to have more "luck" than the folks wanting to fix ALL the problems RIGHT NOW!

When the boss or big boss or big-big boss demands that the fix be some form of {insert major new coolness} to fix all their problems, it is time to get nervous.  Alas, I suspect that this boss, or some level of boss, if they ever had any technical chops, has found them rusted from lack of use.  So some general idea of "all your problems will be fixed if you do..." gets some traction with them.

After all, it's reasonable that if things are broken, then you can fix stuff that is wrong in one fell swoop.  And then there is the information they picked up at a conference on this - and how the speakers all talked about how the problems were fixed by doing that one thing.  And then there are the consultants who are experts in a given area who come in and consult or coach or do something.  And then there are the sales reps for the cool tools that will help maximize the synergistic effects for the enterprise by using this tool to make that change.

And these folks are often the disillusioned ones who come back time and again.  "That change did not work as expected.  What can we do instead?"

And this happens when the change did not go as easily or as well as the expert/consultant/conference speaker/sales guy said it would.  So their quest begins for a new fix to their problems.  The next big thing that will solve all their problems, including the ones left over from this "failed" change or improvement effort.  And the one before that.  And the one before that.  Yeah, you get the idea.

This, from what I have seen, drives a great number of people to return to conferences looking for solutions to their problems.  They want to change things to make things better but they want it to work and not be hard and not miss  any project deadlines and maximize the synergies of partnership for the enterprise.

Basics

Let me share three general ideas I have come to think of as givens:

1. When the person with a vested interest tells you that implementing this change is easy - there is approximately a 99.999999999999999% chance they are not being completely open.

They may not be lying; they may be naive.  Either way, it will cost you and your company money.

Lots of money.

Don't get me wrong.  Sometimes things work.  Sometimes people will proclaim the effort a success because to do otherwise would be to admit that a lot of money was spent on a project that failed to deliver its promised results.  (We can't admit that because it might impact earnings statements, and that will impact the share price, and stockholders will be mad.)

2. Know when to cut your losses.

Maybe you have heard of throwing good money after bad?   "If we tweak this piece of the process and nudge that a little, I'm certain that your results will be much, much better."

Of course, then you are back to needing to admit that the effort was not successful.  (We can't admit that because it might impact earnings statements, and that will impact the share price, and stockholders will be mad.)

As with any project, someone needs to say "I think there's a problem here."  Why not the testers?  Probably their manager/supervisor/boss would be better for that - but if no one else is willing to step up, why not the testers?

3. Sometimes things work and sometimes things don't.

Put another way, smile when something works.  If it doesn't, don't get angry and look for who is to blame.  Instead, look patiently and honestly at what contributed to it not working.

There may not be a single "root cause" for any failure.  Why?  Because in environments where a single root cause is demanded, the one root cause that can honestly be given is that human beings implemented and executed the work, and human beings are fallible.

There may be many contributing factors that led to the problem.  If one or more of them had not been present, the problem might not have developed.  Look at the interactions to see "the cause."

Finally - 

People who make things happen make things happen in spite of problems and obstacles.  Being honest about the obstacles is the first step in making things better.  The people working with you who are to implement the changes are only obstacles if you make them obstacles.

Help them understand.


Sunday, February 9, 2014

Pandora's Box: Testing, Active Consideration & Process Models

Based on emails I've received it seems I've committed an injustice in my previous posts on Pandora's Box Testing.  It seems some people think I'm coming down unfairly on organizations, and testers in particular, that focus their efforts on formal, written test scripts based on "the requirements."

For that, I apologize.  In no way did I mean to imply that my respected colleagues who rely strictly on documented requirements to "drive testing" are always engaging in Pandora's Box Testing. 

My choices are to either write a massive tome or split the idea into chunks as I sort through them in my head.  Or, perhaps more clearly stated, I write on the ideas as they form in my head, and use the writing and consideration I give after writing to grow the ideas further.

Many experienced testers are extremely aware of the problem of Pandora's Box Testing.  Some are rigorous in their investigation and research to consider many possible realms and change "hope" to active decisions around what is and is not to be tested.

It is that recognition, that decision, that matters: examining what can be tested in a meaningful way and what cannot, and looking at the reasons why certain functions cannot be tested or should not be tested.

It is in this consideration that we move away from "trust," "belief" and "hope" and into the realm of "This is the right testing to do because..."

Thus, for each project, we consider what needs to be done to serve the stakeholders.  The danger is when testers are told what the stakeholders need done.  If the product owner, business representative and/or customer representative are not in agreement or, more likely, do not understand the implications, testers need to make sure that the implications are clear to all.

This does not need to be confrontational, simply a discussion.

When I have encountered this behavior, it has been the result of a few things.  It can be that people - the PM, development leads, etc. - simply don't know any different.  It may be they are convinced that the only testing that really matters is one particular type or approach.  They have been told that such a thing is a "best practice."  Right.

Other times, they may be suffering from their own version of Pandora's Box Testing:

Pandora's Box Software Development Model 

Hope is the greatest evil let loose from Pandora's Box.  We find software projects brimming with it.

PMs and BAs hope that by "following the process (model)" everything will work.  They hope that by creating the forms on time and having the meetings every week that everything will be fine.

In the meantime, designers have many unanswered questions and hope that the design they come up with will address them.  Developers don't understand the design and hope the designers know what they are doing.  Then they don't have time to unit test, or they have been told "all testing" will be "done by QA."

Of course, because the designers and developers have other time-sensitive projects, they really can't sit down and talk things through carefully with each other or with the testers.  Or, for that matter, with the product owners or customer representatives.  So they hope everything comes together.

So, when testers "get the code" to test, we may hope that this time, things were done "right."  Sadly, far too often, we find they were not.  Again.

What can we do?  We're just testers, right?

We can ask questions.  We can take actions that make it more likely the things we hope for actually happen.  We can inform people of the impact of their actions:
 
  • We can show developers how "making their date" with delivering code that has not been unit tested will impact further testing;
  • We can show development/project management how optimistic (at best) or aggressive development timelines will limit the time available for review and unit testing when problems are encountered;
  • We can show how that limited time will impact further testing; 
  • We can show Designers how "making their date" with a design that is not reviewed or understood will impact developers and testers - and ultimately people using the software;
  • We can show how BAs "making their date" with poorly considered documented requirements impacts all of the above;
  • We can show PMs how communication - honest, open, clear, and concise - will reduce the above risks.

THAT is how we combat and defeat the evil let loose from Pandora's Box. 

We take action to make the hopes come true.

We take positive action to change things.  

Wait! One more thing... 

To my respected colleagues who emailed me who rely strictly on documented requirements to "drive testing:"

If your organization fits the description above, and if you dutifully follow the process without variance - then I suspect - no, I am reasonably certain - that you are engaging in Pandora's Box Testing.

Tuesday, February 4, 2014

Pandora's Box Testing and Requirements

Right.  Hands up - Everyone who has been told "Testing for this project is to only verify the requirements."

This is fine as far as it goes.  Where we get into trouble is what counts as a "requirement."

Most often we are told this means the documented requirements have been analyzed and considered by experts and they are firmly set and we can work from them.  Doing so is a classic example of Pandora's Box Testing.

It is a firm belief that the requirements are fixed and unchanging.  It is not proof.  Frankly, unless you are in a fairly small number of fields of work - e.g., medical, telephony, aeronautics, navigation - I might suggest the first task of a tester is to test the requirements themselves.

I have found it a reliable heuristic, if not a maxim, that if there is an opportunity for more than one interpretation of a requirement or a set of requirements, someone will take advantage of this and interpret them differently than anyone else.

I hear it now: "Pete, if they are communicating and discussing the requirements, then this doesn't happen."  And I suggest that "communicating" and "discussing" are not necessarily the same thing.  Nor are they sometimes related at all.

When "communicating" means "repeating oft-used and oft-heard buzzwords" then does everyone mean the same thing?  Are you all agreeing about the same thing?  Are you certain?

Or are you hoping your plans are based on something more than buzzwords? 

Working through the "documented requirements" is a good start.  Test them.  Do they make sense together?  When you take them as a set, does the set seem to work?  Do they describe what your understanding of the purpose is?  Do they match your understanding of the business need?

Now then.  If this is totally new development, that is pretty much where I start with evaluating documented requirements.  Let's face it - most of our projects are not totally new, green-field work.  They are updates, changes, modifications to existing software.  Existing systems.

Cool, right?

Do the requirements you just went through describe how the changes interact with what is there currently?  Do they describe what differences should be expected?  Between them, can you discern what the customers (internal or external) would expect to see?

Do they clearly describe what you should be looking for in testing?  Will your testing be able to present information in such a way that you can reveal to the stakeholders whether this is correct behavior?  Will they know if the behavior is correct?  Are they relying on the documented requirements or something else?

Perhaps they are relying on hope?  Maybe the only testing they are familiar with is Pandora's Box Testing.

That would be sad.