
Monday, December 24, 2012

Farewell 2012; Rise Up and Be Strong 2013

The last couple of years I have tended to write blog posts at the change of the year.  One to summarize the year that is ending and one to list the things I am looking forward to in the coming year.  This time it is different.  It feels different.

Changes

Much has happened this year.  As I was considering how to encapsulate it, I read over the posts on changing from 2011 to 2012.  I must admit, I had to smile.  Much has happened, still much remains to be done.

What has happened? Well, in August I submitted my resignation to the company where I was working.  My "old company" had been bought by a much larger competitor and I found myself struggling to stay focused on what my goals and values were.  I was a little surprised because I had worked for large companies in the past - most of my working life, in fact, had been with large companies.

The surprising thing, to the person I was a few years ago, was that I resigned without a "company" to go to.  I went independent.  I struck out on my own with a letter of marque sailing against any and every - oh, no, umm - that is being a privateer - not a working independent test professional.  Meh, whatever.

But, that is what I did. The roots for this lie in this post I wrote late in 2011.  Looking back, it was the natural progression of where I was going from and where I was going to.

Now, I did have a contract lined up - which has since been extended.  This made the opportunity a little easier than jumping in cold-turkey - or deciding to go independent after being let go.  I concede this was an advantage.

Of course, now I am working even harder - not simply at "the day job" but in my writing, my learning and my attempts to understand things better.  The push from being sacked, as described in the blog post mentioned above, seems to have led me to the point where I hoisted my own flag, and have, so far, avoided being hoist with my own petard.

People

I have been very fortunate in my meetings and comings and goings this past year.  Given the opportunity to speak in Portland at PNSQC and then in Potsdam at Agile Testing Days, I met a massive number of people I had only read of, or whose words I had only read.  It was inspiring, encouraging and humbling all at once.  In both instances, I found it easy to not be the smartest person in the room.  I had a pile of people there I could relate to and learn from.

To each of you, I am deeply indebted.  It's a long list - let's see.  There's Matt Heusser, who is still a bundle of energy and ideas.  Michael Larsen, who is really amazingly smart.  Bernie Berger, Markus Gartner, Janet Gregory, Gojko Adzic, Huib Schoots, Sigge Birgisson, Paul Gerrard, Simon Morley, Jurgen Appelo, James Lindsay, Michael Dedolph, Linda Rising, Ben Simo, and.... the list really does kind of go on.

Then there are the people I continue to find to be wonderful teachers and gentle instructors (sometimes not so gentle as well), whether through conversation, emails, IM/Skype chats, blog posts or articles.  They include, in no particular order, Elisabeth Hendrickson, Fiona Charles, James Bach, Paul Holland, Michael Bolton, Cem Kaner, Jon Bach, Catherine Powell, Griffin Jones.  There are others, but these folks came to mind as I was writing this.

Community

Wow.  This year has been amazing.  The local group, the GR Testers, is meeting every month, with a variety of people showing up - not "the same folks every time" but people wandering in to check it out.  I find this exciting.


AST - Association for Software Testing 

What an amazing group of people this is, and is continuing to develop into.  The Education Special Interest Group (EdSIG) is continuing to be an area of interest.  Alas, my intention of participating in "more courses" has been impacted by life stuff.  I've been able to assist with a couple of Foundations sessions for the BBST course, and offered ideas on some discussions but that is about all. 

This past August I was honored to be elected to the Board of Directors of AST.  My participation continues to be as much as I can give on a regular basis - including monitoring/moderating the Forums on the AST website (a really underutilized resource, perhaps we can change this in the coming year) and the LinkedIn AST group's discussion forum (mostly whacking spam).

A new and exciting development is the Test Leadership Special Interest Group - LeadershipSIG.  This new group is looking into all sorts of interesting questions around Test Management and Test Leadership and - well - stuff - including the interesting question of the difficulty of finding and recruiting Context Driven Test leaders, managers and directors.

CAST is scheduled for August in Madison, Wisconsin.  This is going to be good.

Other Conference / Community Stuff

Conferences coming up include STPCon - in San Diego in April.  Also in April is GLSEC - Great Lakes Software Excellence Conference - that one is in Grand Rapids.  QAI's QUEST conference is also scheduled for the Spring.

There are several conferences I've considered submitting proposals to - and I suspect it is time to do more than consider. 

Writing - Oh my.  I have several projects I've been working through.  I am really excited about some of the potential opportunities.  I'm pretty geeked about this.

Overall, I am excited about what 2013 may hold.  It strikes me that things that have been set up over the last several years are coming into place.  What is in store?  I do not know.  I believe it is going to be good.

After all, I am writing this on the evening of December 23.  According to some folks, the world was supposed to end a couple of days ago.  What those folks don't understand is that everything changes.  All the time.  Marking sequences and patterns and tracking them is part of what every society does.  They don't end.  Simply turn the page.

Let us rise up together. 




Wednesday, November 21, 2012

Agile Testing Days, Day 1: Workshops

Monday in Potsdam was a lovely day.  Yeah, a little foggy, maybe a little damp outside, but hey - I was inside where there was good coffee, a variety of juices, waters and the odd snack or two.  A nice luncheon with great conversation following a very enjoyable breakfast with great conversation - Oh, and Matt and I had another opportunity to present Software Testing Reloaded - Our full day workshop.  This time in conjunction with Agile Testing Days.

As usual, we totally messed up the room - this time the staff of the hotel were more amused than horrified.  The folks wandered in after coffee and light snacks and found us playing The Dice Game - Yeah.  That one.

In retrospect, it was a great ice breaker to get people in the room, involved and thinking.  It was a good warmup for what was going to follow.  So, we chatted and conducted the first exercise, had everyone introduce themselves, asked what they were hoping to get from the workshop.

I think Matt and I were equally astounded when a couple of people said they wanted to learn how to test and how to transition from waterfall (well, V-model) to Agile.  We gently suggested that the people who wrote the book were down the hall and perhaps that might be better for them - and reassured everyone that if they were looking for something more, they could either re-evaluate their choice OR they could hang with us.

So, after a couple of folks took off, and a couple more wandered in, we settled at 11 participants.  It was a lively bunch with a lot going on - great exercises, good interaction.  Kept us on our toes and, I think, we kept them on their toes as well.

Somehow, we managed to have a complete fail in getting to every single topic that people wanted us to talk to or do exercises around.  Ummm - I think our record is perfect then.  You see, there is always more for us to talk on than there is time.  That is frighteningly like, well, life on a software project. 

We often find ourselves with more stuff to deliver in a given period of time than we can hope to deliver.  If we promise to give everyone everything, we really can't deliver anything.  Well, maybe that is a bit of a stretch.  Maybe it is closer to say we will deliver far less than people expect, and less than what we really can deliver if we prioritize our work differently in advance.

So, Matt and I try to work our way through the most commonly occurring themes and address them to the best of our ability.  Sometimes we can get most of the list in, sometimes, well, we get less than "most."

Still, we try and let people know in advance that we will probably not be able to get to every single topic.  We will do everything we can to do justice to each one, but...

This got me thinking.  How do people manage the expectations of others when it comes to work, software projects and other stuff of that ilk?

How well do we let people know what is on the cusp and may not make the iteration?  How do we let people know, honestly, that we can not get something in this release and will get it in the release after? 

I know - the answer depends on our context.

In other news, it is really dark here in Potsdam (it being Wednesday night now).

To summarize, we met some amazingly smart people who were good thinkers and generally all around great folks to meet.  My brain is melted after 3 days of conference mode - and there is one more day to go. 

I've been live blogging on Tuesday and Wednesday, and intend to do the same tomorrow.  I wonder if that has contributed to my brain melt.  Hmmmmmmmmmmm.

Auf Wiedersehen.

Sunday, November 11, 2012

What Makes Software Teams that Work, Work?

In pulling together some notes and reviewing some papers, I was struck by a seemingly simple question, and as I consider it, I pose it here.

Some software development teams are brilliantly successful.  Some teams are spectacular failures.  Most are somewhere in between.

Leaving the question of what constitutes a success or failure aside, I wonder what it is that results in which.

Some teams have strong process models in place.  They have rigorous rules guiding every step to be taken from the initial question of "What would it take for X?" through delivery of the software product.  These teams have strong control models and specific metrics in place that could be used to demonstrate the precise progress of the development effort.

Other teams have no such models.  They may have other models, perhaps "general guidelines" might be a better phrase.  Rather than hard-line metrics and measurement criteria, they have more general ideas.

Some teams schedule regular meetings, weekly, a few days a week or sometimes daily.  Some teams take copious notes to be distributed and reviewed.  Some teams have a shared model in place to track progress and others keep no records at all.

Some of each of these teams are successful - they deliver products on time that their customers want and use, happily.

Some of each of these teams are less successful.  They have products with problems that are delivered late and are not used, or used grudgingly because they have no option.

Do the models in use make a difference or is it something else?

Why do some teams deliver products on time and others do not?

I suspect that the answer does not lie in the pat, set-piece answers but somewhere else. 

I must think on this.

Monday, October 8, 2012

Testers and UX and That's Not My Job

OK.

I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me that "Look, we're not supposed to worry about that UX stuff you talk about.  We're only supposed to worry about the requirements."

If you are, let me say this:  You are soooooooooooooooo wrong.

No, really.  Even if there is someone else who will "test" that,  I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining.  "Reasonable person" being part of the polyglot that many folk label as "users."  You know - the people who are actually expected to use the software to do what they need to do?  Those folks?

It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay you to use the software you are working on. 

Your software can meet all the documented requirements there are.  If the people using it can't easily do what they need to do, then it is rubbish.

OK, so maybe I'm being too harsh.  Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" when it was 20 minutes after I was supposed to be airborne) kinda get to me.  Or, maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required by people who need the software to do their jobs.

An Example

Consider some software I recently encountered.  It is a new feature rolled out as a modeling tool for people with investments through this particular firm.

To use it, I needed to sign in to my account.  No worries.  From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had.  There was a cool feature that was available so I could track what could happen if I tweaked some allocations in fund accounts, essentially move money from one account to another - one type of fund to another - and possible impact on my overall portfolio over time.

So far, so good, right?  I open the new feature to see what it tells me.

The first screen asked me to confirm my logon id, my name and my account number.  Well, ok.  If it has the first, why does it need the other two?  (My first thought was a little less polite, but you get the idea.)

So I enter the requested information, click submit and POOF!  A screen appears asking the types of accounts I currently had with them.  (Really?  I've given you information to identify me and you still want me to identify the types of accounts I have?  This is kinda silly, but, ok.)

I open another screen to make sure I match the exact type of account I have with what is on the list of options - there are many that are similar in name, so I did not want to be confused.

It then asked me to enter the current balance I had in each of the accounts.

WHAT????  You KNOW what I have!  It is on this other screen I'm looking at!  Both screens are part of the same system for crying out loud.  (or at least typing in all caps with a bunch of question-marks.)  This is getting silly.

So, I have a thought.  Maybe, this is intended to be strictly hypothetical.  OK, I'll give that a shot.

I hit the back button until I land on the page to enter the types of accounts.  I swap some of my real accounts for accounts I don't have - hit next and "We're sorry, your selections do not agree with our records."  OK - so much for that idea.

Think on

Now, I do not want to cast disparaging thoughts on the people who obviously worked very hard on this software, by some measure.  It clearly does something.  What it does is not quite clear to me.   There is clearly some knowledge of the accounts I have in this tool - but then why do I need to enter the information?

This seems awkward, at best.

I wonder how the software came to this state.  I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.

I wonder if the objections raised were met with "This is only phase one.  We'll make those changes for phase two, ok?"  I wonder if the testers asked questions about this.  I wonder how that can be.

Actually I think I know.  I believe I have been in the same situation more than once.  Frankly it is no fun.  Here is what I have learned from those experiences and how I approach this now.

Lessons

Ask questions.

Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.

Draw them out with a mind map or decision tree or something.  They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment - paper, napkins, formal tools - whatever.  Clarify them as best you can.  Even if everyone knows what something means, make sure they all know the same thing.

Limit ambiguity - ask others if their understanding is the same as yours.

If there are buzzwords in the requirement documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).

Is any of this unique to UX?  Not really.  I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.

The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.

Even if "there is no requirement covering that" - ask a question.  Raise your hand.

I hate to say that requirements are fallible, but they are.  They cannot be your only measure for the "quality" of the software you are working on if you wish to be considered a tester.

They are a starting point.  Nothing more. 

Proceed from them thoughtfully. 

Saturday, September 22, 2012

In Defense of the Obvious, Testers and User Experience III

I have had some interesting conversations over the last few months with testers and designers and PM types and experts in a variety of fields.  I ask questions and they answer them, then they ask me a question and I answer it.

That is part of how a conversation works.  Of course, another part is that when Person B is responding to a question by Person A, it is possible, if not likely or probable, that A will respond or comment to B.

This leads to B responding to A and so forth. Most folks know this is how a conversation works.

It is not a monologue or lecture or pontification.  It is an exchange of views, ideas and thoughts.

So, do all conversations follow the same model?  Are they essentially the same in form and structure?  Do they resemble those pulp, mass-produced fiction books that follow the "formula" used by the specific publisher?  You know the ones.  Pick one up, change the name of the main characters, change the name of the town - then pick up another from the same publisher and SURPRISE!  Same Story!  Change the names of the characters in the second book to the ones you used for the first book - and see how similar they are.

OK.  Software folks -  Are your perceptions of users (you know, people who use your software to do what they need to do) as fixed as the characters in the mass-produced fiction books?  Or are your perceptions of users more like the participants in conversations?

Some Ideas I have that may seem really obvious to a fair number of folks, but I suspect are either revolutionary or heretical to others...

No Two People Are the Same

OK.  Obvious idea Number 1 for software testers: No two people are the same.  Duh.  Says so in red just above that, right?  They are the same, right?  Really?  How many differences can you spot?  (Go ahead, try.  It's OK.)

Why do we expect the people using the system to be a homogeneous group where they generally act the same?  Think of people you work with who use software - ANY software.  Do they select similar options as each other?  Do they have the same interests?

Do they like the same coffee?  Do they do the same job?  Do they want to do the same job?  No, wait.  When you read the last couple of questions, what was your answer?  Do they REALLY do the same job?  Or do they do the same general function?

Are they doing something similar to each other?  Umm - similar is not the same, right?  If these are questions you don't want to deal with - or maybe don't know the answer to - how are you designing your tests?

How are you designing your systems?

What "users" are your "user stories" emulating?

I had a bizarre chat fairly recently.  Boss-type said "We fixed this by using personas.  We can emulate people and mimic their behavior."

OK, says I to myself, reasonable idea and reasonable approach to formulating various scenarios.  They can be very powerful.  "Really," says I, out loud, "tell me about some of them.  Sounds like it could be cool."

"Sure!" says the very proud boss-type, "We have Five of them: One for each department."  Really? So, tell me more.  "Sure!  Persona 1 does the thing-a-ma-bob function.  Persona 2 does the dumaflatchey function.  Persona 3 does the whats-it function.  Persona 4 does the thing-a-ma-jig function (similar to the thing-a-ma-bob function but not the same).  Persona 5 does the whatever function."

So, a total of five personas?  OK, how many people are in each department?

"Well, the smallest department has 15 people.  The others have 75 to 100."

Really?  They are all the same?  They all do the same thing every time?  They never vary in their routine?

Do they all do the same thing your test scenarios do - in that sequence - every single time they go into the system?

Sometimes People Have Bad Days

Yeah, I know you thought that only applied to software folks.  Sometimes super-model types have bad days too.  Of course, famous folk have bad days - then they get their picture in various tabloids and their "bad day" seems not so bad, because all the attention from the tabloids is worse than the original "bad day."

Bad days can impact more than just our coding or testing or a public figure's dinner plans.  Remarkably enough they can impact people who use the software we're working on. 

Sometimes people have too much fun the night before they are in the office using our software.  Their typing is less than perfect.  They are less accurate than normal in their work - they read things wrong; they invert character sequences; they simply don't notice their own mistakes.

Sometimes they had a really bad night instead of a really good night.  Maybe they were up half the night caring for a sick child.  Maybe it wasn't a child, maybe it was a partner.  What if it was a parent? 

The results may be the same outwardly, but what about the inner turmoil? 

"Is my child/partner/parent doing better now? Do I need to check on them? What if I call and they don't answer the phone?  If they are sleeping, I may wake them.  If they can't get to the phone, why not? Something could be seriously wrong?"

Will they be more irritable than they normally are?  Will that impact others in the group and cause their productivity to drop?

Sometimes the Best People Aren't at Their Best

What?  How can that be?  Aren't they like what the Men In Black are looking for?  Aren't they "Best of the Best of the Best" (sir)?

What if they are too good?  What if they get asked questions and are interrupted and step away from their machines for a minute or lock their screens while they help someone else?  What if their user session times out?

Let's face it.  Anyone can get distracted.  Anyone can be interrupted.  Is the system time-sensitive?  How about state sensitive?  The session can time-out mid-transaction, can't it?  Someone else has a problem so the expert locks her system and helps out the guy with a problem - what happens with her session when she comes back? 

Do you know? 

What if they get called into a conference room with some boss types to answer some questions?  If she signs in from another location, what happens to her first session?

And so forth...

These are not new ideas.  Do we know what happens though? 
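
For what it is worth, some of these questions can be turned into quick, repeatable checks.  Here is a minimal sketch in Python - the application, its URLs, the form fields and the timeout value are all made up for illustration, so treat it as an idea, not as anyone's actual test suite.

    import time
    import requests

    BASE = "https://example.test/app"    # hypothetical application under test
    IDLE_TIMEOUT_SECONDS = 15 * 60       # assumed session timeout - check your own configuration

    with requests.Session() as s:
        # Sign in and start a transaction, then "walk away" to simulate the interruption.
        s.post(f"{BASE}/login", data={"user": "expert", "password": "secret"})
        s.post(f"{BASE}/orders/new", data={"step": "1", "item": "widget"})

        time.sleep(IDLE_TIMEOUT_SECONDS + 60)   # stay idle past the timeout

        # Come back and try to finish the transaction mid-stream.
        resp = s.post(f"{BASE}/orders/new", data={"step": "2", "quantity": "5"})

        # What actually happens?  A polite return to the login page, a half-saved order, a stack trace?
        print(resp.status_code, resp.url)
        print("draft order still there?", "widget" in resp.text)

The point is not the code.  The point is that "what happens when she comes back?" is a question you can actually go ask the system.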

Now some of you may be thinking to yourself "But Pete, this is not really UX kind of stuff, is it?"  That makes me wonder what kind of "stuff" they might be.

Do your test scenarios consider these possibilities?  Do they consider any one of them? 

Testing to Requirements

Ah yes.  I hear the chorus of "Our instructions are to test to the requirements.  Things that aren't in the requirements should not be tested.  They are out of scope."  Whose requirements?

The requirements that were written down by the BA (or group of them) or the ones that were negotiated and word-smithed and stated nicely?

What about the requirements that the BA did not understand, hence did not write down?  Or maybe he wrote them down but they made no sense so other folks scrapped them.

Then there are the implied requirements.  These are the ones that don't make the documented requirements because they seem so obvious.  My favorite is the one about "Saving a new or modified record in the system will not corrupt the database."

You hardly ever see that, but everyone kind of expects that.  Right?  But if you are ONLY testing to DOCUMENTED requirements, then that does not count as a bug, right?  It is out of scope.  RIGHT?

NO?  Really? 

See? That is kind of my point.  You may be considering the experience of the users already.  You just don't know it. 
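
And if you wanted to make that implied requirement explicit, it does not take much.  A minimal sketch, using Python and SQLite purely as a stand-in - the table, the save routine and the database file are all hypothetical:

    import sqlite3

    def save_record(conn, name, balance):
        # Stand-in for the application's "save a new or modified record" behavior.
        conn.execute("INSERT INTO accounts (name, balance) VALUES (?, ?)", (name, balance))
        conn.commit()

    conn = sqlite3.connect("app.db")   # hypothetical application database
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT, balance REAL)")

    save_record(conn, "O'Malley", 1234.56)   # a name with a quote in it - a classic way to find trouble

    # The implied requirement: after the save, the database is still intact...
    assert conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"

    # ...and the record round-trips correctly.
    assert conn.execute(
        "SELECT name, balance FROM accounts WHERE name = ?", ("O'Malley",)
    ).fetchone() == ("O'Malley", 1234.56)

Nobody wrote that check into a requirements document.  Everyone still expects it to pass.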

Now, broaden your field of vision.  Pan back.  Zoom out.  What else is obvious to the users that you have not considered before?

Now go test that stuff, too.



Monday, June 25, 2012

On Value, Part 2: The Failure of Testers

This is the second post which resulted from a simple question my lady-wife asked at a local tester meeting recently.

That post resulted in a fair number of visits, tweets, retweets and other measures that people often use to gauge the popularity or "quality" of a post.

The comments had some interesting observations.  I agree with some of them, can appreciate the ideas expressed in others.  Some, I'm not so sure about.

Observations on Value

For example, Jim wrote "Yes, it all comes down to how well we "sell" ourselves and our services. How well we "sell" testing to the people who matter, and get their buy-in."

Generally, I can agree with this.  We as testers have often failed to do just that - sell ourselves and what we do, and the value of that.

Aleksis wrote "I really don't think there are shortcuts in this. Our value comes through our work. In order to be recognized as a catalyst for the product, it requires countless hours of succeeding in different projects. So, the more we educate us (not school) and try to find better ways to practice our craft, the more people involved in projects will see our value."

Right.  There are no shortcuts.  I'm not so certain that our value comes through our work.  If there are people who can deliver the same results for less pay (i.e., lower cost) then what does this do to our value?  I wonder if the issue is what that work is?  More on that later, back to comments.

Aleksis also wrote "A lot of people come to computer industry from universities and lower level education. They just don't know well enough testing because it's not teach to them (I think there was 1 course in our university). This is probably one of the reasons why software testing is not that well known."

I think there's something to this as well.  Alas, many of the managers, directors and other boss-types that testers deal with, and work with and for, come from backgrounds other than software testing.  Most were developers - or programmers, as we were called when I did the same job.  Reasonably few did more than minimal testing, or unit testing or some form of functional testing.  To them, when they were doing their testing, it was a side-activity to their "real work."  Their goal was to show they had done their development work right, and that was that.

Now, that is all well and good, except that no one is infallible in matters of software.  Everyone makes mistakes, and many deceive themselves about software behavior that does not quite match their expectations.

Jesper chimed in with "It's important that all testing people start considering how they add value for their salary. If they don't their job is on the line in the next offshoring or staff redux." 

That seems related to Jim's comment.  If people, meaning boss-types, don't see the point of your work, you will have "issues" to sort out - like finding your next gig.

The Problem: The View of Testing

Taken together, these views, and the ones expressed in the original blog post, can be summarized as this:  Convincing people (bosses) that there is value in what you do as a tester is hard.

The greater problem I see is not convincing one set of company bosses or another that you "add value."  The greater problem is what I see rampant in the world of software development:

Testers are not seen as knowledge workers by a significant portion of technical and corporate management.


I know - that is a huge sweeping statement.  It has been gnawing at me, and I have wrestled with how to express it.  There are many ideas bouncing around that eventually led me to this conclusion.  For example, consider these statements (goals) I have heard and read in the last several weeks, presented as being highly desirable:
  • Reduce time spent executing manual test cases by X%;
  • Reduce the number of manual test cases executed by Y%;
  • Automate everything (then reduce tester headcount);
There seems to be a pervasive belief that has not been shaken or broken, no matter the logic or arguments presented against it: anyone can do testing if the instructions (test steps) are detailed enough.

The core tenet is that the skilled work is done by a "senior" tester writing the detailed test case instructions.  Then, the unskilled laborers (the testers) follow the scripts as written and report if their results match the documented, "expected" results.

The First Failure of Testers

The galling thing is that people working in these environments do not cry out against this - neither debating the wisdom of such practices, nor arguing that defects found in production could NOT have been found by following the documented steps they were required to follow.

Some folks may mumble and generally ask questions, but don't do more.  I know, the idea of questioning bosses when the economy is lousy is a frightening prospect.  You might be reprimanded.  You may get "written up."  You may get fired.

If you do not resist this position with every bit of your professional soul and spirit, you are contributing to the problem.

You can resist actively, as I do and as do others whom I respect.  In doing so, you confront people with alternatives.  You present logical arguments, politely, on how the model is flawed.  You engage in conversation, learning as you go how to communicate to each person you are dealing with.

Alternatively, you can resist passively, as some people I know advocate you do.  I find that to be more obstructionist than anything else.  Instead of presenting alternatives and putting yourself forward to steadfastly explain your beliefs, you simply say "No."  Or you don't say it, you just don't comply, obey, whatever.

One of the fairly common gripes that comes up every few months on various forums, including LinkedIn, is the whinge-fest about how it's not fair that developers are paid "so much more" than testers are.

If you...

If you are one of the people complaining about lack of  PAY or RESPECT or ANYTHING ELSE with your chosen line of work, and you do nothing to improve yourself, you have no one to blame but yourself.

If you work in an environment where bosses clearly have a commodity-view of testers, and you do nothing to convince them otherwise, you have no one to blame but yourself.

If you do something that a machine could do just as well, and you wonder why no one respects you, you have no one to blame but yourself.

If you are content to do Validation & Verification "testing" and never consider branching beyond that, you are contributing to the greater problem and have no one to blame but yourself.

I am not blaming the victims.  I am blaming people who are content to do whatever they are told as being a "best practice" and will accept everything at face value.

I am blaming people who have no interest in the greater community of software testers.  I am blaming people who have no vision beyond what they are told "good testers" do.

I am blaming the Lemmings that wrongfully call themselves Testers.

If you are in any of those descriptions above, the failure is yours.

The opportunity to correct it is likewise yours.

Tuesday, June 19, 2012

Testers, 1812, or What if your beliefs are out of date?

I'm writing this the evening of 17 June, 2012.  It is a Sunday.  Father's Day actually.

I am going through some notes I will need for a project meeting.  I am struck by something in them that makes me think that the world is as cyclic as we really sometimes would wish it was not.

Knowledge, Belief and Assumptions

When a project is starting off we always have certain assumptions.  Well, we may have a bunch of things that we write down and put in notes as being "truths".  Or something.

We KNOW certain things.  They may be about the project or they may be about the application or they may be about what the business intent is or the market forces or reasons behind the project.

We also have a way of dealing with things.  A process that is used to get done what we need to get done, right?  We can move forward with some level of confidence that we will be able to do what we need to do.

We have decisions to make.  We have to get moving and get going so we can show progress and get things done.

The issue, of course, is that we have based our decisions on things we knew some time ago.  They are not wrong, but are they current?  If we based our initial estimates on the last project or the one before that, does it have any real relevance to this one?

Now, you're probably saying "Pete, we don't just take the last estimate and run with that.  That would be foolish."  I've said that too.  Except, even if we introduce variance or some "outside analysis" then we can safely make some assumptions based on previous experience, right?

What if our recollection of that experience does not exactly line up with the documented experience?  What if our estimate is based on the previous project's estimate and not on the actual time spent on that project?  What if things have changed and no one thought fit to pass those "little details" along, because of course you know everything that is going on.  Right? 

We always are operating on current, up-to-date information, and the assumptions for the project always reflect reality.  Every project is this way.  If that describes your organization, I'd be very interested in observing how you work, because I have not seen that to be true on a consistent basis.

June 18, 1812

So, if you went to school in the United States and took American History like I did, you learned about the War of 1812.  You learned about how HMS Leopard had fired on, and forced the surrender of, USS Chesapeake.  You learned about how the British Navy was stopping American ships and impressing sailors from them to serve in the British Navy in their war against the French.  You learned how the British supplied and generally encouraged the "Indians" to attack American settlements.  

You then learned about the USS Constitution defeating HMS Guerriere and HMS Java.  You also learned about the Burning of Washington DC, the defense of Fort McHenry and the writing of the Star Spangled Banner.  You may have learned of the American victory at the Battle of Lake Erie and the American surrender of Detroit to the British.

You probably were not told the Chesapeake surrendered to the Leopard on June 22, 1807.  You also were probably not told that American ships going to French ports were being turned away as part of a declared blockade of France.  You also probably were not told that Britain did not recognize the "right" of British subjects to become American citizens, and those fleeing criminal prosecution, for example, for desertion from the navy, could be seized anywhere they were found.  You probably were also not told that Lord Liverpool, the Prime Minister, ordered the repeal of the Orders in Council, which covered pressing British-born sailors into British service from American ships.

Thus, the War of 1812 was actually fought after all the official reasons, save one, had been addressed.

That one reason, inciting Indian attacks against American settlements, was also true of Spain and France, until France sold the Louisiana Territory to the US in 1803.  The simple fact is, the Americans themselves were breaking the treaties with the various Indians, or First Nations, who responded in kind.

Now, some Canadian versions will tell of how the US wanted to expand into Canada.  The American version was that if the US were to invade Canada, they would have a strong bargaining point with Britain. The Northern part of the US was predominantly the stronghold of Federalist Party supporters.  They were generally opposed to the war, and opposed to the idea of invading Canada in particular.

A funny thing happened though.

The Farther Removed From the Situation You Are, the Easier it Appears.

The farther south you went, the stronger the support for the war could be found.  The Democratic-Republican Party was generally pro-war.  Some politicians spoke of how American troops would be welcomed as liberators by the people of Canada and how they would be greeted with cheers and flowers.

Except that a large portion of "Upper Canada" was inhabited by "United Empire Loyalists": the people who moved to Canada from the United States after the Revolution.  These were the "Tories" you learned about in school and were cast in the role of "the bad guys."  They had no notion of welcoming Americans as liberators or anything else.

People who were farther removed from the reality of the situation did not comprehend how difficult that situation was.  How many times have we seen software projects like that?

So invading Canada seemed like a better idea the farther away from Canada you were.  And when American militia with a handful of regulars crossed the Niagara, they ran into a mix of British regulars and Canadian militia who gave them a solid thrashing at Queenston Heights.  Several hundred captured, many dead, a complete rout as the American invaders fled across the Niagara River and back to New York.

The hero of the day was General Brock, who was killed in the action.  He had just come East to deal with this invasion threat after soundly defeating another invasion threat to the West - at Detroit - by capturing the town and forcing the surrender of the would-be invaders.

Know What the Situation Really Is Before You Make Commitments

How many software projects have had boss-types say "Oh, this can be delivered by this date," thus setting the date in stone?  Then you find out what needs to be done.

No one ever runs into that, right?

Projects where the rules are mandated and the "customer needs" are all "captured" in a three-ring binder three inches thick and these are "simplified" to a seven bullet-point list and if you have questions they need to go through the "single point of contact" who will relay those questions to the appropriate expert who will get in touch with the person who may have had some input into the bullet list however.... ummm, yeah.

Translated, you have no real way of confirming or getting clarifying answers around questions you may have and no real way of finding what needs to be done or what is involved or what is going on or...  what you are in for until you are in it.

The myths told to American students mask the reality of what the country encountered when the decision was made to go to war with Great Britain.  No one knew what they were going to encounter, but they had definite ideas, plans and goals.  And in the summer of 1812, near Detroit, Michigan, and then near Queenston, across from Lewiston, New York, those plans were quickly shredded.

They walked into a situation where they had no concept of what was involved.  The handful who did and spoke out were accused of disloyalty, and not being committed to "the cause."  Whatever that meant.

Software projects run by fiat can encounter the same problem.

Fortunately for the Americans, the British forces were extremely limited the first two years of the war.  British land forces were committed to fighting Napoleon's forces in Europe... until April, 1814 when Napoleon abdicated.  Then the full weight of the British Empire would come to bear.

Making Use of Found Opportunities

The American military learned from the initial, disastrous battles, and those that followed.  American naval forces could hold their own; single-ship actions and small squadron actions helped buy time, for both sides.  Privateers wrought havoc on both sides, but the American ones tended to disrupt the British ships to and from the Caribbean.  The Battle of Lake Erie (September, 1813) isolated British forces north of Detroit by cutting off water routes to the British positions.

In the meantime, the American army trained and trained and trained.  Regulars and militia trained.  A small cadre of young officers drove the training, based partly on their experience facing the well trained British forces.  At places like Chippawa and Lundy's Lane the training proved its worth - The American forces did not run away.  Really - I'm not making that up.  Until those battles, they tended to do that when facing British Regulars, or even militia.

As testers, when we're in the middle of something we did not expect, we have a variety of options.  We can "inform management" and "let stakeholders know" that we are encountering difficulties and wait for instruction.  Or, we can look for ways to overcome those difficulties and maybe learn something new.

In late 1813, a bright young officer of artillery named Winfield Scott saw a collection of challenges and boiled them down to a few common ideas.  He saw symptoms of a single large problem.  He then proposed a solution.   His opportunity was unplanned.  He knew the army could not take effective action, and his opponents did not take action.  He used the time then to teach his soldiers to be soldiers.  In the middle of a war, one that his government went looking for, he turned the organization on its ear.

As testers, our war is often not of our choosing.  The engagements we are in are not ones we typically go looking for.  As leaders, we need to look for opportunities to improve.
 
I know that being deep into a project that is floundering is not the best time to learn a tool or new technique.  It might be a good time to do some research on your own - to dig into questions around what is blocking you.

We need to determine what the problem is we are facing.  Now, it may not lead you to a resolution.  It may do something else - like allow you to step away from your immediate block so when you return to the problem, you are looking at it with fresh eyes.

What is the PROBLEM?

It may help you think about your problem differently.  You may be attempting to address symptoms, instead of the actual problem.  As you find a solution to one "problem," three others appear.  Are you fighting isolated, individual problems, or are these actually aspects of one greater problem?

Problem:  Troops are not firing as rapidly as their British counter parts.
Problem:  Troops do not execute field maneuvers properly, let alone quickly.
Problem:  Troop morale is low.
Problem:  When facing organized opposition, even in inferior strength, troops withdraw in confusion.

Are these four problems? 

Winfield Scott said they were not.  He said the problems described above were actually one problem:
American Regular and Militia troops do not have the proper training to be able to fight against a modern, European army.  

Scott was promptly promoted to Brigadier General and told to "fix" the problem. He did not intend to teach an army how to teach itself, but that is precisely what he did. 

His solution:  "Camps of Instruction" where for 10 hours a day, every day, he trained troops.  They in turn trained others.

He then saw that officers needed training as well.  Those that were incapable of leading troops, he replaced.  Sometimes with non-commissioned officers he promoted on the spot.

He then saw that to maintain this level of activity, he needed to make sure his men had good, healthy food - and made sure they got that.

As morale improved, he noted something else - each sub-unit (sometimes companies in a single regiment) had a different "uniform" than the other troops.  He ordered enough uniforms for his entire command to be turned out in what looked, well, uniform.

Learning to differentiate symptoms from problems is really, really hard.  Ask anyone who has tried to do that.  When you're deep into it, taking a deep breath and stopping to think is hard to do - it is also the one thing that sets great testers apart from the "anyone can test" type of testers. 

And so...

The nature of projects has remained the same for... ever.  We find ourselves in situations we did not intend to be in.

Think clearly, then act precisely.

How do you learn to think?  Some suggestions:

Attend CAST - this July in San Jose, California ;
Attend Let'sTest - a conference new this year that was a smashing success from all I have seen (next year's should be good too!) ;
Attend PSL - Problem Solving Leadership - come on - just search for it, you'll find it ;
Find a Meetup of testers near you - and go REGULARLY;
No Meetup near you?  Start one.

In general, hang with smart people and talk with them.  Don't be the smartest person in the room - ask about things you don't know or are looking for insight on.

Then act on what you have learned.   You may not achieve the "great things" the spin-meisters would have you achieve - or would tell people you have achieved.  Sometimes status quo ante bellum is the best you can hope for.

For that, you must know the costs of what you are doing and what the risks are around stopping or continuing.

That is a topic for another day.





Thursday, June 14, 2012

You Call That Testing? Really? What is the value in THAT?

The local tester meetup was earlier this week.  As there was no formal presentation planned it was an extended round table discussion with calamari and pasta and wine and cannoli and the odd coffee.

"What is this testing stuff anyway?"

That was the official topic.

The result was folks sitting around describing testing at companies where they worked or had worked.  This was everything from definitions to war-stories to a bit of conjecture.  I was taking notes and tried hard to not let my views dominate the conversation - mostly because I wanted to hear what the others had to say.

The definitions ranged from "Testing is a bi-weekly paycheck" (yes, that was tongue-in-cheek, I think) to the more philosophical "Testing is an attempt to identify and quantify risk."  I kinda like that one.

James Bach also got a mention, with "Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous."

What was interesting to me was how the focus of the discussion was experiential.  There were statements that "We only do really detailed, scripted testing.  I'm trying to get away from that, but the boss doesn't get it.  But, we do some 'exploratory' work to create the scripts.  I want to expand that but the boss says 'No.'" 

That led to an interesting branch in the discussion, prompted by a comment from the lady-wife who was listening in and having some pasta.

She asked "How do you change that?  How do you get people to see the value that you can bring the company so you are seen as an asset and not a liability or an expense?"

Yeah, that is kind of the question a lot of us are wrestling with.

How do you quantify quality?  Is what we do related to quality at all?  Really?

When we test we... 

We exercise software, based on some model.  We may not agree with the model, or charter or purpose or ... whatever.  There it is.  

If our stated mission is to "validate the explicit requirements have been implemented as described" then that is what we do, right?  

If our stated mission is to "evaluate the software product's suitability to the business purpose of the customer" then that is what we do, right?

When we exercise software to validate the requirements we received have been filled, have we done anything to exercise the suitability of purpose?  Well, maybe.  I suspect it depends on how far out of the lines we go.  

When we exercise software to evaluate the suitability to purpose, are we, by definition exercising the requirements?  Well, maybe.  My first question is, do we have any idea at all about how to judge the suitability of purpose?  At some shops, well, maybe - yes.  Others?  I think a fair number of people don't understand enough to understand that they don't understand.

So, the conversation swirled on around testing and good and bad points.

How do we do better testing?

I know reasonably few people who don't care about what kind of a job they do.  Most folks I know want to do the best work they can do.

The problem comes when we are following the instructions, mandate, orders, model, whatever, that we are told to follow, and defects are reported in production.  Sometimes by customers, sometimes by angry customers.  Sometimes by customers saying words like "withhold payment" or "cancel the contract" or "legal action" - that tends to get the attention of certain people.

Alas, sometimes it does not matter what we as testers say.  The customers can say scary words like that and get the attention of people who define the models us lowly testers work within.  Sometimes the result is we "get in trouble" for testing within the model we are told to test within.  Of course, when we go outside the model we may get in trouble for that as well.  Maybe that never happened to you?  Ah well.

Most people want to do good work - I kinda said that earlier.  We (at least I and many people I respect) want to do the absolute best we can.  We will make mistakes.  Bugs will get out into the wild.  Customers will report problems (or not and just grumble about them until they run into someone at the user conference and they compare notes - then watch the firestorm start!)

Part of the problem is many (most) businesses look at testing and testers as expenses.  Plain and simple.  It does not seem to matter if the testers are exercising software to be used internally or commercial software to be used by paying customers.  We are an expense in their minds.


If we do stuff they do not see as "needed" then testing "takes too long" and "costs too much."  What is the cost of testing?  What is the cost of NOT testing?

I don't know.  I need to think on that.  For one of the companies I worked for, once upon a time, the cost was bankruptcy.  Others were less dramatic, but avoiding the national nightly news was adequate incentive for one organization I worked for.

One of the participants in the meeting compared testing to some form of insurance - you buy it, don't like paying the bill, but when something happens you are usually glad you did.  Of course, if nothing bad happens, then people wonder why they "spent so much" on something they "did not need."

I don't have an answer to that one.  I need to think on that, too.

So, when people know they have an issue - like a credibility gap or perceived value gap - how do you move forward?

I don't know that either - at least not for everyone.  No two shops I've been in have followed the same path to understanding, either.  Not the "All QA does is slow things down and get in the way" shop nor the "You guys are just going through the motions and not really doing anything" shop.  Nor any of the other groups I've worked with.

Making the Change


In each of these instances, it was nothing we as testers (or QA Engineers or QA Analysts or whatever) did to convince people we had value and what we did had value.  It was a Manager catching on that we were finding things their staff would not have found.  It was a Director realizing we were working with his business staff and learning from them while we were teaching them the ins and outs of the new system so they could test it adequately.  


They went to others and mentioned the work we were doing.  They SAW what was going on and realized it was helping them - The development bosses saw the work we did as, at its essence, making them and their teams look good.  The users' bosses realized we were training people and helping them get comfortable with the system so they could explain it to others, while we were learning about their jobs - which meant we could do better testing before they got their hands on it.

It was nothing we did, except our jobs - the day-in and day-out things that we did anyway - that got managers and directors and vice-presidents and all the other layers of bosses at the various companies - to see that we were onto something.

That something cost a lot of money in the short-term, to get going.  As time went on, they saw a change in the work going on - slowly.  They began talking about it and other residents of the mahogany row began talking about it.  Then word filtered down through the various channels that something good was going on.  

The people who refused to play along before began to wander in and "check it out" and "look around for themselves." Some looked for a way to turn it to their advantage - any small error or bug would be pounced on as "SEE!  They screwed up!"  Of course, before we came along, any small errors found in production would be swept under the rug as something pending a future enhancement (that never came, of course.)

We proved the value by doing what we did, and humbly, diplomatically going about our work.  In those shops that worked wonders.

And so...

We return then to the question above.  How do we change people's perspectives about what we do? 

Can we change entire industries?  Maybe.  But what do we mean by "industries?"  Can we at least get all the developers in the world to recognize we can add value and help them?  How about their bosses? 

How about we start with the people we all work with, and go from there?  I don't know how to do that in advance.  I hope someone can figure that out and help me understand.

I'll be waiting excitedly to hear back from you.

Sunday, May 27, 2012

On Missing the Mark or What Bagpipe Bands Taught Me About Software Testing

For the third year in a row, I am not where I spent some 30 Memorial Day Weekends.  The Alma Highland Festival is a two day affair on the campus of Alma College, in Alma, Michigan.  At one point in my life, I took the Friday before off from work, loaded my kilt, drum(s), coolers of adult beverages, and a small cooler of fruit and sandwich makings into the car and drove the 90 minutes or so that it took to get there.  I'd then camp out in the parking lot until we could get into the digs that would be ours until Sunday evening.

There is a two-day pipe band contest, one on Saturday and one on Sunday.  At one point they had enough bands to stretch from end-zone to end-zone on the football field, with bands lined up every 5 yards or so.  This year they have some 25 or 30, divided into 5 "grades" or levels of experience and expertise.

I got an excited text message yesterday from a band I have been helping out by teaching their drummers once a month or so this last year.  They had played really well and were looking forward to hearing the results at the end of the day.  There were a total of 10 bands in the grade they were competing in and they hoped for a good placement, at least in drumming, if not overall.

A few hours later, I got a very sad text message.  "What do I know?" wrote the sender.  "We ended 9th drums. I thought we did better."

I have a response that has become almost "canned" I have used it so often with so many beginning bands.  It goes something like this:
I would not be surprised that you played well.  You have been working really hard and the improvement shows.  What we don't know is how hard the other bands have been working.  Since it is hard to listen objectively while playing yourself, then comparing yourself to every other band, how do you know you did not do the absolute best you could?  Even if you did, how do you know that the other bands did not do the same?  What if their "best" was simply better than your best for the day?  If you were pleased with how you played, accept that as part of the reward for the hard work.  Recognize that the real point is to improve your level of play and be able to know you gave nothing away for the other bands to capitalize on, and beat you.  If they outdrummed you today, congratulate them, have an {adult beverage} with them and a laugh or two, then work all the harder to get ready for the next contest.

It is a model I've used for years, with every level of band I've played with or worked with from the absolute beginners to Grade 2 - one step away from the god-like heights of Grade 1, the top of the field.  Sometimes, it is hard, other times, it makes things a bit easier to take.  A fair number of times, it is also true.

What does this have to do with software testing?

It is reasonably related - No matter how hard you try and no matter how carefully you work, you will not find every defect in the system.  Full Stop.

No software tester or test team can find every defect.  That is a simple fact.  Some folks feel devastated when a defect "gets away" and is found by the customer or users.  What information did you miss that led to you not exercising that exact scenario?  Was there any reason to suspect you should exercise that exact scenario?  If the choice was to exercise that scenario and not others, what would be the impact of doing so?  What bugs might have been released instead of the one that was?  How can you know?

Contrary to those who cite "defect free" as the target of good testing, you cannot possibly exercise every scenario, every combination of environments and every combination of variables to cover everything.

Learn from the defects that get through, examine your presumptions, then ask: given what you know now, and the results of the decisions made, would you have made the same decisions about testing?  Can you apply these lessons to future projects?  If so, have a nice cold {adult beverage} and move on.

When the results are less than optimal, in pipe bands or in testing, if you learned something, apply it and move on.  Berating yourself or your fellows does no good.


Tuesday, May 15, 2012

On Too Much Process or Too Much, Meh, Whatever

Process - noun - 
   1. a series of actions or steps taken in order to achieve
       a particular end

    (Oxford English Dictionary)

So, a fair number of people have heard me discuss excitedly (find the euphemism for rant) how too much process stuff gets in the way of actually getting things done.  A fair number of times I have been pretty well set in the idea that process should not be a controlling thing or a limiting thing, but a guiding thing.

There is a significant difference between the two.  Some people don't want to accept that.

Kind of like some people define "Agile Software Development" as 1) no documentation and 2) no testers.

When people allow the trappings of process to overtake the entire point of what is intended - typically the facilitation of what the process is supposed to, well, facilitate - and the process becomes more important than anything else, you don't have one problem, you have a collection of them.

The opposite end, well, makes bedlam look reasonable.

Having no controls and no processes in place can, and I suspect will, lead to its own problems.  The point of codifying processes, the "how we do stuff," is centered around making sure nothing gets missed: no steps get left out, nothing critical that will impact the customer experience (like .JAR files not being where they are supposed to be), and other small things like that.

A process can also help us be certain that everyone who needs to know about something actually knows before that thing happens.  People, myself included, often talk about process as something that is ungood.  I've tempered this over time to be more like, process can help us, it can also hinder us - and the specifics of the situation will no doubt have a direct impact on which one it is.

What often gets lost in the shuffle when talking about process, or some development methodology or approach, is that the point of most software processes is to get things done that need to be done, and to make sure people know - I kind of said that a bit before, I know.  But if you look carefully at the second portion of that, the bit where process helps "make sure people know," then that sounds a lot, to me at least, like communication.

People forget that communication is less about what is said and is more about what is understood - what is heard on the receiving end.

If you have good communication and people succeed in hearing A when A is said, or written in a report or email or... yeah, you get the idea, then process forms a framework to operate within.  When people don't communicate, process may help - but I suspect it just adds to the noise.  More emails and reports to ignore, more meetings to sit in and do something else whilst they are going on, more of the same Dilbert-esque pointy-haired-boss stuff.

Even companies that quite publicly talk about their lack of formal process have processes - they have rules that people work within - frameworks that guide them  and their activities.

I suspect where I draw the line for processes that are useful and those that get in the way is the willingness of the staff - the people who are directly impacted - to follow and adhere to the given processes.

I prefer processes that are organic - that grow and develop based on the experience and relationships among the people doing the work.

I object to processes which are imposed by someone, or a group of someones, who have never actually done the work (except maybe as an exercise in school or a single project in the "real world") but have read, or attended a workshop, or talked with someone at "the club" about some best practice that involved some stuff.  Whatever that is.

If people want to have a thoughtful discussion around what can work for their organization, team, company, whatever, I'd be extremely happy to participate.  If you tell me things must be done this way because of some study or best practice or whatever, don't be surprised if I ask what was studied and what practices were compared to determine which one was best. 

Monday, April 30, 2012

On Controlling Testing, or Being the Boss

I had a revelation recently that I wish I could have had some time ago, like years.  It may have made me a better employee, and in my forays at boss-dom would have made me a better boss.

While my humble outline here may not be enshrined amongst the great writings on leadership that are available for the betterment of leaders everywhere, it certainly represents the sum of experience and belief demonstrated by many boss-types I have worked for and with.  I therefore submit this for consideration toward your professional success as a controlling boss and the success of the group (we'll call them a team) over which you have control.

1.  Encourage Training.  This is important.  This is really important.  You want your testers (they like being called testers better than being called peons or serfs) to believe that you want them to get better at what they do.  Make sure they know that training is important to their career development.  You want to make sure that the training they get is company sponsored training.  Other stuff - anything that, well, encourages them to think - is to be avoided, discouraged and downplayed.

2.  Discourage Outside, Corrupting Influences. We want the people working for us to only consider the information we present to them as being relevant.  When people express an interest in something they read about, maybe on the web somewhere, let them know that it is important they "get all the facts" before deciding to learn about it.  Have them go looking for examples of companies where these wild, new-fangled ideas have actually worked.  When they come back with some examples, make sure they know that these are not really solid examples because they are from outside the industry you are in, or are multi-national, not multi-continental (other way around works as well!)  or they are in environments that are regulated differently than the environment we are in or... any number of reasons why "that won't work here." 

3.  Encourage Engagement.  This is important, too.  You want them to feel warm-fuzzy thoughts in their tummies when they think of the company.  You want them to think, "Wow. The bosses at TLA* really DO have my best interest at heart when they tell me to do something and I'll be rewarded later.  I hope I get a pony as my reward."  This is particularly effective in large urban areas, where a pony as the "reward" is a complete impossibility because the city does not permit ponies.  The idea is similar to "A rising tide lifts all boats."  That is true, as long as the boats in question have plenty of slack where they are tied up or moored.  We want to be certain there is no slack at all for the people doing our bidding - er, our team - before the tide starts rising.
*TLA: Three Letter Acronym (thanks to Matt Heusser from whom I blatantly stole that concept.)

4.  Discourage Uncomfortable Questions. 
Well, not really discourage them, just redirect them to be discussed "off-line" so people are not side-tracked by "side issues like this."  The beauty is that when people ask questions you don't want asked, you can appear to be concerned with addressing their concerns completely, and at the same time keep those questions, and the discussion around them, from causing discomfort to the rest of the laborers.  This allows the quick-thinking manager to isolate the trouble, and the trouble-maker, pat their hand, say "there, there" and reassure them that everything will be fine.  The upside for the manager is that in the next round of "synergy actions" / "staff rationalization" / happy-sizing / down-sizing, you already have at least one candidate for "change agent" status.

5.  Encourage "Extra Effort".  Getting people to get things done when most people looking at on schedules that simply can not be achieved at a mere 45 or 50 hours per week per person can be a particular management challenge.  One effective technique is to make the "casual" observation that contracts are tied to these dates and the delivery must be made on time.  Of course, if the delivery cannot be made on time, well, "other options" will need to be considered.  Then, this leaves open the carrot of the "stretch goal" set a week or two ahead of the "mandated goal" - where completing the project early may get recognized with a raise (or at least not a pay cut) IF all the other projects get done on time or early as well.  (Notice the subtle conditional statement slipped in there, its a possibility, not a certainty.)  

6.  Discourage Process Questions.  Yeah, this is kind of a big deal, too.  The Process is sacrosanct.  You are not in a position to suggest improvements to the process until you have moved through it completely, successfully at least once.  Well, maybe twice or three times (because success is a habit, after all.)  If people are having problems working through the process, it is because they are not doing it right.  If they do it right, they have no problems.

7.  Encourage Participation.  This one is important, too.  One way to handle trouble-makers is to get them involved.  If someone asks questions, like a lot of questions, ask if they'd be willing to participate in a study group that has been created to look into that very issue they ask questions about.  It is a great way to get all the people you need to keep an eye on in one place.  Additionally, because this must be done in addition to the project work (see number 5 above) it will be one more way to drain them of extra energy to make trouble.  If they still complain, ask if they have been "participating" with the study group on the issues they are complaining about.  If not, you just transferred blame to them!  That is good leadership.

8.  Discourage Independent Thought.  This is a hard one and it may take all the previous lessons to pull this one off.  You want people who can do decent testing.  That means you may have to hire people from outside the company.  That also means you may need to hire people with some level of experience.  This experience may have given them ideas of their own, or at least carried the lessons they learned from previous jobs with them.  Encourage them to "observe and learn" how things are done at your company.  Then, use the ideas from number 6 above to get them to become fully assimilated into the mindset of the company.  This will, hopefully, keep them from thinking something is wrong with the company by reinforcing the image that the problem is with them (after all, they left their old job because of why?)

Lessons Learned - I've seen these ideas applied often at various shops.  Where two or three are used effectively, the result has been promotion for the manager who did such fine work.  Of course, the fodder complained and whined until they were replaced or learned that complaining led to them "seeking new opportunities" - which pretty well stopped the complaining.  In public.  Which is as good as stopping complaining altogether.

Remember: Managers are the keepers of the Truth.  The Guardians of the Holy Flame of Knowledge.  Facts can change and shift, and using these techniques, what the Manager agreed to on Monday can be disagreed with on Tuesday - if you are effective - and explained away as the staff misunderstanding.  This technique is good for keeping them from ever really understanding what is expected, which is even better, because the money set aside for raises and bonuses will go into your pocket.



Wednesday, March 14, 2012

So Much Older Then, Or, What Was Old Is New Now

Not so long ago, a significant portion of persons of a certain gender and a certain age range (at least in the US) were completely taken up, head-over-heels, gaga-over or enthralled by a series of novels about, yes, vampires.

Yes, the un-dead, feasting on the mortals around them.  Living on the fringe of society, moving in and out with grace and ensnaring people with their obvious charms. 

Yup.  Interview with the Vampire was a really popular book and movie franchise.

Wait. 

You thought I meant the Twilight series?  Really?  My oldest granddaughter (a pre- and early teen when the Twilight books first came out) certainly was enthralled by them.  My lady-wife? nah. Me? right.  Do I LOOK like someone who would be enthralled by them?  Not likely. 

Yet, the lady-wife made an observation one night as we were sitting sipping a glass of wine, and watching the 1979 film Dracula. She said "I think every time a new vampire novel or movie comes out, people who have not read or seen the earlier ones latch onto it as if this was the first time anything like that was written."

I've had some conversations with testers and other software folk recently that have convinced me that the lady-wife's view can apply to software development as much as, well, vampire novels and movies.

Part of this is time and age - Some folks of a certain age have seen the same set of ideas come around two or three times.  Slap a new label on it and standard fare from my first programming gig in the early 1980's is all shiny and new.  Just a new buzzword.  In the meantime, there was another buzz-word label for it maybe 10 or 15 years ago. 

I've seen a fair number of "hot trends" in software design and development come and go and come back and go and... heh - kind of like vampires I guess.  They just WON'T STAY IN THEIR GRAVE!

Maybe that is because the underlying problem they were meant to solve was not solved.  The hot trend that replaced them that was absolutely going to solve the problem, did not solve the problem either.

I think back to how eager I was - how ready to change the software world I was - and how I tried to convince the older, kinda-stodgy folks I worked with that I had this bright and shiny new way of doing things.  One of them would sit back and tell a story from 15 years before.  I'd wonder what THAT had to do with Real MODERN software design and development - I mean, punch cards? Might as well be stone tablets, right?  Ah, the enthusiasm of youth.

As I was thinking about this, over a glass of medicinal single-malt scotch whisky last night following the local tester meeting, I got to wondering if the ideas that were bright and shiny-new for me in the early 1980's were retreads of other ideas.  So I dug out some of my older books - stuff that was on the shelf for some time without being disturbed.  I flipped through some of them and ... gads.  There they were!

I wonder if instead of vampires, these ideas were really closer to the immortals in the movie (not the TV series) Highlander - with Sean Connery playing an Egyptian in the service of King Charles V of Spain... with a Japanese katana. (Yeah, that one.)

Ideas don't die - they can't be killed because they are immortal.  You have to take their heads off with a sharp blade to really get rid of them.  If you don't, they'll go underground for a while, change their identity and then come back.  They are always there if you look carefully.  Most people don't look carefully though.

The reason, as near as I can tell, is that the behavior - the underlying mannerisms and actions - of the humans who are developing software have not really changed since Admiral Hopper's team discovered the first "bug" in the computer.  (I know she was not an Admiral at the time, but, you get the idea.)

Our flawed views impact our flawed behaviors, which directly impact our practices in developing software, which directly impact what we put out and what we make and how the software works and interacts with humans and their flaws and imperfections and... you get the idea.

Maybe when an enthusiastic young software person (designer, developer, tester, analyst of any sort) comes into my office with an earth-shattering idea, I may resist the urge to sit back, sip my coffee, stroke my beard and tell a story from 20 or 25 or ... more years ago that leaves them wondering what anything done in COBOL on an IBM Mainframe has to do with modern software development using ... fill in the blank for whatever technology your shop uses. 

What I have learned is that the technology changes.  It changes very quickly - far faster than I expected 30 years ago.  What has not changed in that time, and seemingly has not changed since the time of, well, ever - is how people make things - software in this case and how they interact with those things and each other. 

For those who have not seen anything like this, let me say it quite simply.  The problem with computer software has nothing to do with the computers or the software.  The core problem is the behavior of the humans around it - those who develop the software, design it, use it.

It took me a long time to understand this, and I think I am beginning to see that I, and many others, have spent most of our careers trying to solve the wrong problems. 

We have been trying to use technology to address a technology problem.  The problem is not in the technology - it is in the behavioral relationship we have with that technology (both in developing it and in using it).  Until we find a way to address that, I expect we'll continue to see bright, shiny-new labels slapped on older approaches and techniques.  We will continue to have slick snake-oil sold to bosses as THE SILVER BULLET to solve all their problems.

The fact is, none of those things will work.  We need to fix the underlying problem - what we have been looking at "fixing" are the symptoms.

I think this will be an interesting journey.

Sunday, March 4, 2012

Process and Ritual and Testing, Oh My.

I've been having some interesting conversations lately.  Well, I have a lot of interesting conversations, so that is not so unusual.  The interesting thing about these is that they have been, well, interesting. 

Not interesting in the way that some conversations on some online testing forums are interesting.  Not interesting the way that some conversations in groups on LinkedIn are interesting (you know the ones - where someone posts a question to get folks to start to answer, then the person posting the question shows how smart they are and gives the "right" answer...).

These conversations were around "Process" and "Best Practices" and things of that ilk.  Now, most of you who know me will realize that I take a dim view of 99.99999% of the "practices" that are labeled as "best."  I concede that in some situations, there may be something I am not aware of that can be considered a "best practice" - in the literal definition, not the buzz-wordy definition.

Where was I?  Ah, yes.

These conversations were debating/arguing/asserting/rejecting the need for control and repeatability and measurability in software testing.  What I apparently failed to comprehend was that sometimes these things must be done in order to make sure the product is of "good quality."  I kept asking "How is it that this practice ensures the quality of the product?  Is there another practice that could give you the same results?"

The answer reminded me of a passage from a pulp-fantasy-fiction book series I read a long time ago.  You see, there was this particular race of dwarves who weren't terribly bright.  One of them found a secret passage.  At the time she found it (yes, there are female dwarves), she was carrying a dead rat (seemingly for supper) and triggered the locking mechanism by accident.  This opened the door that let her take this "secret short-cut."

Well, she was making the main characters in the book take an oath that they would never divulge the magic of the passage.  One of them mentioned the trigger, which he had noticed; she insisted it was magic.  She pulled out the dead rat, waved it in front of the door - then stepped on the trigger.  POOF!  The door opened!

In her mind, the ritual of waving the dead rat then stepping just so on the floor was what opened the door.  The others (outside observers) noticed what the real cause for the door opening was. 

Because it did them no harm to allow her to hold on to her certainty, they let it go.

Now, in software testing, we sometimes find ourselves in the situation of the not-too-bright female dwarf.  It worked this way the first time, therefore, this is the one true way to make it work every time.

Instead of a process, it becomes a ritual.

Are the processes we are working through aiding the testing effort, or are they just gestures?  Are they helping us understand the application, or is this the ritual we must go through to get green-lights on all the test cases?

If it's the latter, would an incantation help?  Maybe something Latin-ish sounding, like in Harry Potter?

Wednesday, February 29, 2012

On Metrics & Myths or Your Facts are From the Land of Make Believe

It's been a quiet week in Lake... Oh wait.  I'm not a famous radio personality with a show centered on a town that does not exist.  I'm a tester.

Sometimes though, I feel less like someone from Lake Wobegon, MN and closer to someone from Brigadoon.  Both are fictional, mythical if you will, and both have certain charms and appeal about them.  Except for one minor point.  Out of context, they make very little sense. 

So, the last several weeks I have been working away on studying metrics and concepts around them and things of that ilk.  The cause of that was the combination of "training" required by the day-job, and getting the new set of metrics for the "Scorecard" - yup - Metrics applied to the individual, team, group and department.  Oh my.

So, I went digging through my notes and found a variety of ideas, some good and some less than good, from a variety of sources, some reliable and some less than reliable.  Some of these were just plain contradictory.  Some had ideas that, in and of themselves, seemed reasonable, until you considered the assumptions and presumptions that must be made and accepted for the numbers to actually make sense.

I found myself rereading articles by Cem Kaner, Doug Hoffman and others cautioning against misusing metrics.  I likewise found learned discussions around how metrics can be relied on if you take emotion out of the equation and look just at the hard, empirical data.

Then I saw a tweet from Michael Bolton, recommending the writings of  Laurent Bossavit as being worthy of  consideration.  So, I followed the link and began reading.  What I found was a fellow who had written an e-book that seems interesting.   Don't take my word for it.  His Twitter handle is @Morendil.  Search for him and begin reading.  Or, check out his e-book - Cool title - The Leprechauns of Software Engineering.  Find it here:  http://leanpub.com/leprechauns  You may not agree with everything, but much is worth your consideration.

Where was I?  Oh yeah.  Metrics.

Matt Heusser and I had an interesting chat last month while on a flight to New York.  He asked me my view on metrics.  I responded that my general view was that most people misuse the term and the concept. 

I believe that metrics should serve to address questions we are seeking enlightenment on (kind of like testing, no?)  A painfully large number of companies focus on stuff that is easy to count, without looking to see what that information might tell them - beyond the obvious.

I believe that most people trying to address questions with these "metrics" really don't have a good idea what the questions they want to ask are - and so they settle for what they can get easily.  Things like bug counts, test cases, test cases executed per day, failure rates and things of that ilk.  Instead of looking for things that constructively help their staff, their people, do their work better, it is easier to look for control metrics.

They'll misquote Drucker or Lord Kelvin or - heck - maybe they've just heard so many truisms (that aren't really true) and misquotes that they accept them at face value - an awful lot of us do.  They'll look to change behaviors by making a big deal about metrics and ... well, stuff.  What they get may not be what they intended to get.

Be careful in dealing with metrics - they are not always what they appear to be.

Be careful when playing with dragons for you are crunchy and good with ketchup.