Sunday, February 16, 2014

So, What's New? or The Problem of Change

Has anyone else noticed this?  There are some things that come up year after year in conferences and tech publications and blogs and ... stuff.

You see it at conferences and webinars and meet and greets and ... stuff.

There are folks who are bravely branching out from what they know and are attending their first conference or workshop or reading forums or... whatever.  They are total novices to the ideas presented.  Thus, the need for repeating the fundamentals time and time again.

These folks are looking for "new ideas" and "recent developments" and.... stuff.

People - speakers, writers, etc. - calling out fundamental ideas and presenting stuff that seems, well, "not new" get tweeted and blogged about and get press - and when others point out that "It's good stuff, but not new," the result is blank stares or, sometimes, hostility.

Then there are the people pushing for what is hot now - so many folks seem to be looking for the "next big thing" or the "what comes after {blah}?"

There always seems to be a demand for the new, the cool, the next breakthrough to make things happen.  The next solution to the problems they can't seem to solve.  They read stuff and go to workshops and conferences and talk with people and get the new hotness in place and... 

Somehow, the same companies/organizations/groups still have the same problems.  Year after year after year.

Why is this?

These are people who have been around the block more than once and are looking for answers to their recurring problems.  The hard part, for many, is recognizing that sometimes the "solution" lies in understanding how the "problem" each of us is trying to solve came about.

Usually, it was by fixing some other problem - or at least attempting to fix some other problem.  We instituted changes hoping they would fix the problem.  The changes proved hard.

Really hard. Sometimes, really incredibly hard.

When it is really incredibly hard some will stop and ask "What are we doing wrong?"

That question can be dangerous in some organizations.  Here is how I try to work through this.

We can look to see why we are doing what we currently do. This is my preferred starting point: "How did we get here?"  Often, unfortunately, eyes get glassy - sometimes because no one recalls.  Sometimes eyes get defensive - as if people have been accused of something.  Sometimes you get an answer that starts out "Once upon a time..."

We can look to see what the intent is behind the changes we are trying to make. Getting a clear answer here can be as hard as it was for the last question.  Many times it seems like the intent is to fix something, but people are not clear on what needs to be fixed.  If the answer presented is "the system," then I usually take it that the frustration level has reached a point of no return.  That is, something needs to be changed, people are not sure what, but a change needs to be made - so make it.

We can look to see if what we are doing will actually impact what we want to change.  Alas, this is related to the question above.  If people have a hard time answering that question, then this one is impossible to answer and becomes irrelevant.  The message is clearly "Change something NOW."

Blah, Blah, Change is Hard, Blah, Blah

When we get to that level of dysfunction, forget it.  Ask a different question.

Can we change how we are making the change?  We might recognize that we, as an organization, made mistakes in looking at the cause of our problems and are changing the wrong thing. This may lead us to reconsider our approach or to re-evaluate what we are hoping to achieve.

Broadly speaking, I've found that people who look at problem solving as an incremental process - addressing one aspect at a time - tend to have more "luck" than the folks wanting to fix ALL the problems RIGHT NOW!

When the boss or big boss or big-big boss demands that the fix be some form of {insert major new coolness} to fix all their problems, it is time to get nervous.  Alas, I suspect that this boss, or some level of boss, if they ever had any technical chops, has found them rusted from lack of use.  So some general idea of "all your problems will be fixed if you do..." gets traction with them.

After all, it seems reasonable that if things are broken, then you can fix everything that is wrong in one fell swoop.  And then there is the information they picked up at a conference on this - and how the speakers all talked about how the problems were fixed by doing that one thing.  And then there are the consultants who are experts in a given area who come in and consult or coach or do something.  And then there are the sales reps for the cool tools that will help maximize the synergistic effects for the enterprise by using this tool to make that change.

And these folks are often the disillusioned ones who come back time and again.  "That change did not work as expected.  What can we do instead?"

And this happens when the change did not go as easily or as well as the expert/consultant/conference speaker/sales guy said it would.  So their quest begins for a new fix to their problems.  The next big thing that will solve all their problems, including the ones left over from this "failed" change or improvement effort.  And the one before that.  And the one before that.  Yeah, you get the idea. 

This, from what I have seen, drives a great number of people to return to conferences looking for solutions to their problems.  They want to change things to make things better, but they want it to work and not be hard and not miss any project deadlines and maximize the synergies of partnership for the enterprise.

Basics

Let me share three general ideas I have come to think of as givens:

1. When the person with a vested interest tells you that implementing this change is easy - there is approximately a 99.999999999999999% chance they are not being completely open.

They may not be lying; they may be naive.  Either way, it will cost you and your company money.

Lots of money.

Don't get me wrong.  Sometimes things work.  Sometimes people will proclaim the effort a success because to do otherwise would be to admit that a lot of money was spent on a project that failed to deliver its promised results.  (We can't admit that because it might impact earnings statements, and that will impact share price, and stockholders will be mad.)

2. Know when to cut your losses.

Maybe you have heard of throwing good money after bad?   "If we tweak this piece of the process and nudge that a little, I'm certain that your results will be much, much better."

Of course, then you are back to needing to admit that the effort was not successful.  (We can't admit that because it might impact earnings statements, and that will impact share price, and stockholders will be mad.)

As with any project, someone needs to say "I think there's a problem here."  Why not the testers?  Their manager/supervisor/boss would probably be better placed for that - but if no one else is willing to step up, why not the testers?

3. Sometimes things work and sometimes things don't.

Put another way, smile when something works.  If it doesn't, don't get angry and look for who is to blame.  Instead, look patiently and honestly at what contributed to it not working.

There may not be a single "root cause" to any failure. Why?  Because in environments where a single root cause is required, the only one that can honestly be attributed is that human beings implemented and executed the work - and human beings are fallible.

There may be many contributing factors that led to the problem.  If one or more of them was not present then the problem may not have developed.  Look for interactions to see "the cause."

Finally - 

People who make things happen make things happen in spite of problems and obstacles.  Being honest about the obstacles is the first step in making things better.  The people working with you who are to implement the changes are only obstacles if you make them obstacles.

Help them understand.


Sunday, February 9, 2014

Pandora's Box: Testing, Active Consideration & Process Models

Based on emails I've received, it seems I've committed an injustice in my previous posts on Pandora's Box Testing.  Some people think I'm coming down unfairly on organizations, and testers in particular, that focus their efforts on formal, written test scripts based on "the requirements."

For that, I apologize.  In no way did I mean to imply that my respected colleagues who rely strictly on documented requirements to "drive testing" are always engaging in Pandora's Box Testing. 

My choices are either to write a massive tome or to split the ideas into chunks as I sort through them in my head.  Or, perhaps more clearly stated, I write on the ideas as they form in my head, and use the writing and the consideration I give after writing to grow the ideas further.

Many experienced testers are extremely aware of the problem of Pandora's Box Testing.  Some are rigorous in their investigation and research to consider many possible realms and change "hope" to active decisions around what is and is not to be tested.

It is that recognition, that decision, that matters: examining what can be tested in a meaningful way and what cannot, and looking at the reasons why certain functions cannot be tested or should not be tested.

It is in this consideration that we move away from "trust," "belief" and "hope" and into the realm of "This is the right testing to do because..."

Thus, for each project, we consider what needs to be done to serve the stakeholders.  The danger is when testers are told what the stakeholders need done.  If the product owner, business representative and/or customer representative are not in agreement or, more likely, do not understand the implications, testers need to make sure that the implications are clear to all.

This does not need to be confrontational, simply a discussion.

When I have encountered this behavior, it has usually been the result of a few things.  It can be that people, like the PM, development leads, etc., simply don't know any different.  It may be that they are convinced that the only testing that really matters is one particular type or approach.  They have been told that such a thing is a "best practice."  Right. 

Other times, they may be suffering from their own version of Pandora's Box Testing:

Pandora's Box Software Development Model 

Hope is the greatest evil let loose from Pandora's Box.  We find software projects brimming with it.

PMs and BAs hope that by "following the process (model)" everything will work.  They hope that by creating the forms on time and having the meetings every week that everything will be fine.

In the meantime, designers have many unanswered questions and hope that the design they come up with will address them.  Developers don't understand the design and hope the designers know what they are doing.  Then they don't have time to unit test, or have been told "all testing" will be "done by QA."

Of course, because the designers and developers have other time-sensitive projects, they really can't sit down and talk things through carefully with each other or with the testers.  Or, for that matter, with the product owners or customer representatives.  So, they hope everything comes together. 

So, when testers "get the code" to test, we may hope that this time, things were done "right."  Sadly, far too often, we find they were not.  Again.

What can we do?  We're just testers, right?

We can ask questions.  We can take actions that may help the things we hope will happen actually happen.  We can inform people of the impact of their actions:
 
  • We can show developers how "making their date" with delivering code that has not been unit tested will impact further testing;
  • We can show development/project management how optimistic (at best) or aggressive timelines for development will limit the available time for review and unit testing when problems are encountered;
  • We can show how that limited time will impact further testing; 
  • We can show designers how "making their date" with a design that is not reviewed or understood will impact developers and testers - and ultimately the people using the software;
  • We can show how BAs "making their date" with poorly considered documented requirements impacts all of the above;
  • We can show PMs how honest, open, clear, and concise communication will reduce the above risks.

THAT is how we combat and defeat the evil let loose from Pandora's Box. 

We take action to make the hopes come true.

We take positive action to change things.  

Wait! One more thing... 

To my respected colleagues who emailed me who rely strictly on documented requirements to "drive testing:"

If your organization fits the description above, and if you dutifully follow the process without variance - then I suspect - no, I am reasonably certain - that you are engaging in Pandora's Box Testing.

Tuesday, February 4, 2014

Pandora's Box Testing and Requirements

Right.  Hands up - Everyone who has been told "Testing for this project is to only verify the requirements."

This is fine as far as it goes.  Where we get into trouble is what counts as a "requirement."

Most often we are told this means the documented requirements have been analyzed and considered by experts and they are firmly set and we can work from them.  Doing so is a classic example of Pandora's Box Testing.

It is a firm belief that the requirements are fixed and unchanging.  It is not proof.  Frankly, unless you are in a fairly small number of fields of work - e.g., medical, telephony, aeronautics, navigation - I might suggest the first task of a tester is to test the requirements themselves.

I have found it a reliable heuristic, if not a maxim, that if there is an opportunity for more than one interpretation of a requirement or a set of requirements, someone will take advantage of this and interpret them differently than anyone else.

I hear it now: "Pete, if they are communicating and discussing the requirements, then this doesn't happen."  And I suggest that "communicating" and "discussing" are not necessarily the same thing.  Nor are they sometimes related at all.

When "communicating" means "repeating oft-used and oft-heard buzzwords" then does everyone mean the same thing?  Are you all agreeing about the same thing?  Are you certain?

Or are you hoping your plans are based on something more than buzzwords? 

Working through the "documented requirements" is a good start.  Test them.  Do they make sense together?  When you take them as a set, does the set seem to work?  Do they describe what your understanding of the purpose is?  Do they match your understanding of the business need?

Now then.  If this is totally new development - that is pretty much where I start with evaluating documented requirements.  Let's face it - most of our projects are not totally new, greenfield work.  They are updates, changes, modifications to existing software.  Existing systems. 

Cool, right?

Do the requirements you just went through describe how the changes interact with what is there currently?  Do they describe what differences should be expected?  Between them, can you discern what the customers (internal or external) would expect to see?

Do they clearly describe what you should be looking for in testing?  Will your testing be able to present information in such a way that you can reveal to the stakeholders whether this is correct behavior?
Will they know if the behavior is correct?  Are they relying on the documented requirements or something else?

Perhaps they are relying on hope?  Maybe the only testing they are familiar with is Pandora's Box Testing.

That would be sad.