Monday, March 31, 2014

A Little Blog Post on a Big Idea: Does the Software Work?

I don't need to be at my client's office until later this afternoon.  This gives me a chance for TWO mornings of writing in a row.  I intend to take full advantage of that!

When looking at the question of "Does this work?", have you noticed that all of us tend to have a slightly different view of what "work" means?

OK, so it's not just me then.  Good.  I was a little concerned for a while.

In my case, I look for a variety of measures and models to determine if a piece of software "works" or not.  Under one model it might work perfectly well.  Under another model, it might not work at all.

These can have the same set of "documented requirements" and completely different models around expected behavior.  In a commercial software setting, I try to get a description of what the sales staff have been promising the software will do.  (If the sales folks are setting the customer expectations, I like to know what they are saying.)

If this is software that people inside the company are going to be using, I try to meet with representatives from the department where it will be used.  Not just the "product owners" but the people using the software as part of their daily work.  These groups can, and often do, have distinctly different definitions of "it works" for software impacting their departments.

Finally, there is the view of development, both individuals and the team.  On software with many points of collaboration and integration, let me make one thing really, particularly clear:

It does not matter if your part "works" (by some definition of "works") if the entire piece does not "work" (by the definition of someone who matters).

That goes for functions within the application and the entire application itself.  Resting on "I did my bit" is not adequate for the people working with the software if that software does not do what it needs to do.

No matter how well the individual pieces work, if they don't work together, the software doesn't work.
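
To make that concrete, here's a toy sketch in Python.  The function names and the two teams are invented for illustration; each piece passes its own tests, and the whole still fails:

    from datetime import date, datetime

    def export_order_date(order):
        # Team A's piece: formats dates as MM/DD/YYYY.
        # Their unit tests confirm exactly that, so "it works."
        return order["date"].strftime("%m/%d/%Y")

    def import_order_date(text):
        # Team B's piece: parses ISO dates (YYYY-MM-DD).
        # Their unit tests confirm exactly that, so "it works" too.
        return datetime.strptime(text, "%Y-%m-%d").date()

    # Put together, the two "working" pieces do not work:
    exported = export_order_date({"date": date(2014, 3, 31)})
    import_order_date(exported)  # ValueError - yet everyone "did their bit"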

Tuesday, March 25, 2014

Projecting Strength and Value in Chaos

I've been talking with a number of people in my area the last couple of weeks.  Hanging with folks and being an ear when they need to vent to someone can actually help me (the ear) as well. 

When I worked for large corporations, universities, privately held companies, or even small companies, reorganizations would be announced with much fanfare and most of the time the staff shrugged and said "Oh, I work for Joe now instead of Mary."  Of course, Joe's staff now worked for Carrie or Dawn.  Carrie's staff now worked for Mary and on and on.

It seems there is something in the water - or maybe it's the long winter.  I don't know.  Something seems to be happening and the shrugs are a little more forced.  Not just at one company, but at a BUNCH of companies.  It reminds me of when I worked for a large company quite a few years ago and they announced a reorg.

At this company, the bosses said things like "staff reductions" and "strategic reassessment of purpose" and other scary phrases that staff don't like to hear.  Oh, one other thing - the dates of the announcements kept getting pushed out.

People who are employees, particularly at some companies, expect to be there for many years if not their entire career.  As a contractor, I know when I go in that eventually I will leave - sometimes sooner rather than later.

When the cart gets upset, people's presumptions and expectations get tossed about and the world seems completely unpredictable.  Add to that concerns over "What is management doing?", and to an observer of human behavior (a contractor sitting in the corner, or someone in a coffee shop or restaurant listening to people tell stories) the result looks something like "Lord of the Flies."

The people with awesome skills feel fairly safe.  Those with more modest technical skills, or who have ensconced themselves in expertise about how things are done now, are in a little trouble - well - maybe a lot of trouble.  If things are changing, then experts in how things USED to be may have limited value.  Particularly if how things are GOING to be is undefined.

Expect to see people asserting their expertise - their value to the organization.  Expect to see people showing their understanding and how they can be proactive - how they can handle challenges.  These are fine - unless they are huge changes in behavior.  When things get really nuts, expect to hear "Look, my part works fine.  If this thing doesn't work it's because those (not nice term) people in that other area can't do their jobs."

Why would managers do this?  The thing where they tell people there will be staff reductions and strategic changes and everybody's jobs will change - that thing?  I don't know.  Really.  I'm pretty clueless.  I have some ideas, and frankly, I'd be rather depressed if they turned out to be the truth.

If you are on the staff of a company that is doing that?  I don't have a lot of suggestions for you.

Maybe I do have one.

The value you bring to the company and the strength you have are not things that can be easily put on or off at will. 

Be yourself.

Don't try to be what you are not.  Do what you do and be who you are. 

If you don't feel comfortable with the behavior you are seeing in the company, then start looking for new opportunities.  Don't let the folks asserting their expertise or trying to show how much "value" they add define who you are.

Tuesday, March 4, 2014

On Estimation and Stuff, For Chris

I was asked a question by email and responded.  This is a much fuller response.

Estimation is one of those things most of us get pinged about.  "We need estimates on testing the Blah System."  We can give really good reasons why estimates don't tell us much and tend to turn into a stick to beat us with.  Still, we're expected to come up with some idea of how long something is going to take when what is being done is foggy at best.

We make a reasonable effort and come up with a reasonable estimate of the amount of work we need to do, and the time and effort that will take.  Then, as we learn more, we realize that things will either a) take a lot more time and effort or b) take less, because of things we did not or could not account for.
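
When I'm pressed for a number anyway, one lightweight way to at least acknowledge the fog is a three-point estimate.  That's a common technique, not something unique to me, and the figures below are invented.  A quick Python sketch:

    def pert_estimate(optimistic, likely, pessimistic):
        # Weighted average that leans toward the "likely" figure
        # (the classic PERT weighting: (O + 4M + P) / 6).
        return (optimistic + 4 * likely + pessimistic) / 6

    # Testing the "Blah System": best case 3 days, likely 5, worst 12.
    print(round(pert_estimate(3, 5, 12), 1))  # 5.8 - a hedge, not a promise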

It seems that every time I do a Test Process Improvement workshop, I start with something like "What things do your team and the individuals on it do now that you (and the members of your team) wish you did better?"  EVERY TIME the first or second answer is "We're lousy at estimates; we need to do estimation better."

I've learned in (mumble mumble) years of software development work that I am not clairvoyant.  I can't see the future, nor am I a mind reader.  I've also learned that most people with absolutely accurate estimation calculators are snake oil salesmen.  Or they are delusional.  They may be out-and-out liars, but I'd prefer to think better of people.

Documentation may help - maybe.  If there is some reference to previous work on the same system, or similar projects, that may help.  If you are operating based on tribal knowledge, then you may have a bit of a challenge in convincing anyone else that this is something other than a wild guess.

If you look to do more than simply test "functional requirements" and look for other stuff, like how the system actually behaves or perhaps non-functional requirements, how do you plan for that, let alone come up with some level of estimate?

Here's one of my favorite tools:

Mind Maps

Huh?  Mind Maps?  Really?  No, really - I use them in a couple of ways.

First, I use them to track requirements (business/solution/whatever) and associate design ideas with the requirements.  Sometimes this leads to a LOT of dotted lines.  Sometimes it shows no lines whatsoever.  Either way, it helps me visualize the software and what needs to be considered.

Second - and here is where it gets interesting - I use them to map what areas of the application or software or whatever CAN be tested.  By this I mean showing what is available to be tested or what has been delivered.  I can also show what is scheduled for delivery and what is not expected to change.  Then, as testing progresses, I can associate session reports with them.  Sometimes that takes a bubble or three with the project wiki or SharePoint locations for the session reports associated with each logical piece.

THAT gives me a reference for both functional and non-functional aspects that need to be exercised. 
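
Mind-mapping tools draw this far better than text can, but the structure behind the map is roughly this.  A Python sketch; the area names, requirement IDs, and wiki URLs are invented:

    # Each bubble on the map: an area, its delivery status, the requirements
    # associated with it, and links to the session reports that exercised it.
    area_map = {
        "Order Entry": {
            "status": "delivered",      # available to be tested now
            "requirements": ["BR-12", "BR-14"],
            "session_reports": [
                "http://projectwiki.example/sessions/order-entry-001",
                "http://projectwiki.example/sessions/order-entry-002",
            ],
        },
        "Invoicing": {
            "status": "scheduled",      # not yet delivered
            "requirements": ["BR-21"],
            "session_reports": [],
        },
    }

    # One glance answers: what CAN be tested, and how much HAS been exercised?
    for area, bubble in area_map.items():
        print(area, "-", bubble["status"], "-", len(bubble["session_reports"]), "sessions")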

This ties back to the question of "How does the software behave?"

In this instance, I'm not testing to validate requirements; I'm exercising the software to see how it behaves.  I can then compare the results with the expectations, one part of which consists of documented requirements.

In the end, I have a full visual representation of what has been exercised and how - and how thoroughly.  This gives me something I can take back to stakeholders and say "This is what we did and how we did it.  These are the areas where we found interesting variations and needed to make changes as a result.  Are you comfortable with this, or would you be more comfortable with more testing?"

Rather than talking about tests and test cases and what has been run and not run (which I've found is of really little value to most people, no matter what the "best practice" folks tell us), I talk about the functions within the system we have exercised and the depth to which we exercised them.

But we were talking about estimation, right?

The next time I, or anyone else, is in a project that is changing part of that system, I know what was done before and how long it took.  After all, we have the session reports saved and referenced in the mind map, right? 

This can also help me when I need to consider regression testing - for this project or for future projects.  I have a gauge I can turn to for reference. 
 
So, with that information, a description of what the change is, and at least an idea of a portion of the risk based on the impact of the change, we can come up with something approaching an estimate, one that is perhaps better than an absolute guess.
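
Roughly, the arithmetic might look like this.  A Python sketch with invented numbers; the impact multipliers are a judgment call based on the change description, not a formula handed down from anywhere:

    # Hours per session come from the saved session reports; the impact
    # multipliers reflect how much of each area the change touches.
    past_sessions = {
        "Order Entry": [4, 3, 5, 4],   # hours per session, last project
        "Invoicing": [6, 7],
    }
    change_impact = {
        "Order Entry": 0.5,   # small, isolated change
        "Invoicing": 1.5,     # larger change touching core logic
    }

    for area, hours in past_sessions.items():
        prior = sum(hours)
        estimate = prior * change_impact[area]
        print(f"{area}: {prior}h of sessions last time, roughly {estimate:.0f}h now")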