Saturday, December 28, 2013

Change of Years: 2013-2014

Looking back at my previous year-ending posts and those that looked forward to the coming year is something I've come to enjoy.  Partly, I get to see how close my expectations came to what actually happened.  At times I was very close - other times, not at all. 

And so I launch into another consideration while sitting in my house on a quiet Saturday morning in December. 

Overall, this has been a good year for me.  I've grown and learned and developed in ways I did not expect to.  I have strengthened bonds of friendship, loosened some others and discovered much about myself and those around me.

I participated in several conferences this year.  I find conferences to be enlightening on multiple levels.  I know many people go to learn new things.  Some people go to enhance their reputation.  Others go simply because their company is paying for it.  For me, yes, even when I'm presenting, it is an opportunity to meet people I do not know, or have never met in person.  I try to keep an open mind, although sometimes my reactions to what appears to be rubbish get the better of me.  I try hard not to attend sessions by people I know, unless they are presenting a topic new to me - I really do try to avoid epistemological bubbles.

The contract-consulting thing is going well for me.  In 2012 I stepped away from the perceived security of being an employee of some company and became a privateer.  I'll do work for your company, but only on terms I'm OK with.  If something is stupid, I reserve the right to say "This is stupid."  Mind you, I did that before - but the sense of 'they might fire me' is gone.  I realized people can pay you for doing good work and speaking factual truth to them.

Sometimes doing so bluntly is called for.

Oh, you don't need to be a contractor or a consultant to do those things.  Realizing you are responsible for your career and your growth as a craftsman is the first step.  Speaking truth is part of that.  Removing the wall of fear about "losing your job" is a huge one.

Once that fear is gone - you are free.

So, yes.  I participated in regular meetups and some conferences and a few workshops.

Conferences & Workshops

These stand out in my mind:

STPCon in San Diego was fun.  I got an excited email from the organizers about the huge number of people who were in the room and the massive number of "5" ratings my presentation on leadership got.  Apparently more people liked it than not. 

CAST in Madison, WI was a lot of work. I found myself really busy - more busy than I expected to be. I learned much and enjoyed that conference greatly.   

Agile Testing Days in Potsdam, Germany - my second outing there - was also much work and much fun.  My workshop drew generally positive comments in the twittersphere and blogosphere - and the fact that the number of participants increased after the halfway break may speak to something - but I have a feeling none of the participants filled out the official "rate this session" web page.  The Halloween costume contest was much fun.  I took part, along with Matt Heusser, Huib Schoots and my dear lady-wife Connie.

WHOSE - the workshop on self-education in software testing - in Cleveland - Oh, my.  I have much work to do from that still.  This was an AST function/event to derive a list of skills needed for software testers.  I drove down on a Wednesday evening after work, had a good night's sleep - then did not get much more sleep until Saturday night after driving home.  Mentally exhausted does not begin to describe the state I was in.  I need to blog on that soon.  It was good - and a good deal of work was done.

Personal Learning

Loads of people have helped me learn this year.  Some were actively engaged in that learning, others in conversations (where they thought they were the ones learning), and others through their writing.  Thanks to these in particular - Matt Heusser, Robert Sabourin, Michael Bolton, Chris George, Dan Ashby, Mike McIntosh, Ben Yaroch, Ben Simo, James Bach.

There have been others, of course.  Many people contributed.  Some greatly and positively - some have shown me how not to be or act.  (I did not name any of those folks.)

I have two major areas of interest I am working on now.  One is an ongoing quest for "What information around testing is of value to the business?"  The other is one I've been dealing with in fits and starts - and for the last two or three months have been looking into more deeply - "What skills does an organization need in their software testers?" 

These are related questions.  They are tied into work I have been doing at my client company of late.  They are also things I am wrestling with in my own mind.  I can see them taking up a fair amount of my study and effort in the coming year.

The Future

Conferences - People ask me what conferences I will be attending and participating in this year.  I don't know.  The number I am considering submitting proposals to is fairly small.  My calendar is messy - I have projects at my client that need help - that is why I am there, after all.  When this contract is up, then perhaps this will change.  Sometimes, the idea of hopping on jets and flying hither and yon seems cool.  I've spent enough time waiting in airports for that idea to have lost some of its luster.  I don't feel the need to travel giving talks about some aspect of software testing.  I'd rather be doing the testing and talking a little about it.

Writing - My writing has fallen way off this last year.  I want to get back to it this coming year.  I have a bunch of projects that are in the "Outlined" stage but have not had the effort put into them to develop them into something usable.  That needs to change. 

Work - My current client is "reviewing contracts" for the future.  The projects I am on are slated to run through much of the summer.  It's interesting, but like everything else in Corporate-land, nothing is certain.  Folks, this is normal.  Every company does this regularly.  Sometimes they are dealing with contractors/consultants - sometimes they are dealing with employees.  "Job Security" is a myth for most folks doing software, or any form of Information work.

Meetups - The GR Testers are going as always.  We get together monthly and discuss topics of interest to the group.  Sometimes there are presentations; other times, it is organized chaos as we work through ideas.  Other things in the works - when I can, I get to a (fairly new) Code & Coffee meetup that happens in the morning before heading into the office.  I find it an excellent way to start the day.  Others?  The Mid-Michigan Testers Meet Down gets together in the Lansing, MI area, sporadically.  I'd like to attend more than the one time I did this year. 

All in all - I'm looking forward to 2014.  Not the wide-eyed wonder some folks have, or think they should have.  More of "I bet something interesting will happen." 

I'll leave notes along the way so folks can come along for the fun of it, if they want to. 

Cheers -

Happy New Year!

Sunday, December 22, 2013

Controlling Management or How the Grinch Stole Agile

We begin with a Poem:

No cute rhymes;
No clever gimmicks.
Domineering Managers
Really mess with things.

OK, so it doesn't rhyme.  It doesn't have any fun or funny made up words and generally sounds pretty negative.  It is.

I cannot count the number of times I've been working with a company where an experimental project is launched using some form of Agile Development, and the development managers say they'll support the experiment - and then proceed to take shots at it at every opportunity.

The common theme?  "How do you control that process?  It can't be managed."

Somehow, it amazes me when I encounter that mindset.

Consider: organizations realize that the size and nature of a project will not work well with their conventional software development approach.  They publicly state that, because of the nature of the project and the lack of surety in requirements defined in advance, a different approach is needed.  They recognize that people working on the project will need to dedicate the majority of their time to that project and that project alone - meaning they can be available for problems/support activities, but not for other projects.

The project room is set up with whiteboards, an open environment and a relaxed schedule where everyone participating agrees on "regular" hours - meaning the hours when they will be there and available - recognizing that some folks like starting earlier in the day and some folks like starting later.  The daily "catch-ups" or "stand-ups" or whatever you choose to call them are at a time everyone agrees make sense, not mandated by someone.  The room is situated so that the people who will actually use the software can stop in, ask questions, check out what is going on or maybe just have a cup of coffee.  (I really push for coffee/treats always being present even if I'm the one bringing stuff in.)

People get together and for four or five hours a day work on stuff.  Together.  Working.  Talking.  Comparing notes.

The first sprint is a little shaky - I've seen that happen most of the time: people are a bit uncertain what is expected of them and how things will work.  (Where I've seen that as a given is where "this Agile stuff" is new or an experiment at a more traditional organization.)  The first sprint wraps - those participating for the first time learn, and the second goes much better.  Stuff gets delivered and demonstrated and stuff works - OR - you've made progress and you know what it takes to finish that off and some other tasks.

About the third sprint, the management grumbling begins.

"What are they doing over there?  Is anything getting done or are they just screwing around?  Why don't we hear anything?  Why aren't they submitting the project documents like they should?"  on and on ad nauseum.

Invitations are made to sit in on meetings and reviews, or to simply come over and check it out - or, at LEAST, to attend the progress report sessions we have.  (The name varies by organization - essentially, the summary of what was finished at the end of each sprint.)  And still, the managers don't come.

Yo! Managers!

If you are invited to attend a meeting where "your people" participating in an "Agile" (by some definition of Agile) project are talking about and presenting what they have finished - then GO.  ATTEND.  PARTICIPATE.  This takes the place of the documents where you skim the "Executive Summary."

Stop for a minute.  Really.  Consider - We have an idea what the finished product is supposed to do.  We have an idea what the stuff we agree to do every sprint looks like.  We (developers, testers, business experts, customers) work together to make sure each piece does what we believe it is supposed to do.

Do the details of what we did matter?  What about "This is the work we have to do to finish this." instead?  If we focus on where we are going, and move forward in measurable chunks, does that not answer the question of "What are you doing?"


The Questions Begin...

What about the stage gate meetings?
Answer - We have them every morning.  You are welcome to attend.

What about the SDLC documents?  The design and architecture plans?  The requirements documents?  The estimates?  The test plans?  WHAT ABOUT THAT IMPORTANT STUFF?
Answer - Those are on the wall over there.  The stuff we have not started on, or that will be in future sprints, is on that other wall.  We are tracking all of that in real time.  And we talk about it every day.

But how can you measure this stuff?  Your tasks show up in the time reporting system and are closed every two weeks.  How can that be?  You can't plan anything that way!  Tasks show up on Monday - they aren't even entered before then - and time gets charged against them and then they are closed in two weeks.  That's crazy!   
Answer - Actually, since we're working on pieces as we can, based on system availability, environment readiness and our mutual commitment to deliver top quality software each sprint, what you see makes sense.  The participants figure out what needs to be done, how we can do it and what it will take to make it happen.  Then we make it happen.  Can progress be observed and measured by this?  Of course - as we complete tasks and they move from "Planned" to "In Progress" to "Done" we have a definite track of where the project is.
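If it helps to picture that, here's a minimal sketch of the idea in Python - the task names, states and function are my own illustration, not any particular tracking tool's:

```python
# A minimal sketch of tasks moving across a board -- names and structure
# are invented for illustration, not any particular tool's API.
from dataclasses import dataclass

PLANNED, IN_PROGRESS, DONE = "Planned", "In Progress", "Done"

@dataclass
class Task:
    name: str
    state: str = PLANNED   # every task starts out as planned work

def progress(board):
    """Progress falls out of the board itself - count states, no separate status report needed."""
    done = sum(1 for t in board if t.state == DONE)
    return f"{done}/{len(board)} tasks done"

# Tasks appear when the sprint is planned, move as the work moves,
# and close when the work closes - which is exactly what shows up
# in the time reporting system every two weeks.
board = [Task("build login form"), Task("wire up audit log"), Task("test password reset")]
board[0].state = DONE
board[1].state = IN_PROGRESS
print(progress(board))   # -> 1/3 tasks done
```

The point is that the "measurement" being asked for already exists: the state of the board at any moment is the status.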

But you cannot really measure stuff unless you can show what you've DONE!  This stuff you're telling me makes no sense.  How am I supposed to know what my people are doing?  If I'm not watching what they are doing they will waste time and not do what I want them to do!
Answer - Is that the crux of the issue? Are you in such need of directing every piece of work done by the staff you hired - because they were "the best" - that you must treat them as if they are novices?

But they keep making mistakes!  I can't trust them to do the right thing! 
Response - Ah.  So that IS the crux of the issue.  The staff you hired because of their expertise and experience do things differently than you would.  So you intervene, and direct their actions.  You countermand their decisions and direct the architecture and the design you hired them to be able to do.  And there are still problems?

You don't understand!  I want to trust them but I can't.  They have proven that time and again!
Response - Ah.  So you hired the best, they don't do what you want, so you direct what they are to do; they do what you tell them to do, and there are problems with the result.  Are the problems the result of the staff's efforts?

You don't understand!  Why do I waste my time with your questions? 
Answer - You asked me for suggestions and comments on the Agile practices of your organization.  Accepting or rejecting them is entirely your prerogative.

And so...

No.  There is no happy ending.

There is no singing around the spot where the Who-ville Christmas Tree stood.  There is no redemption of the Grinch in this story.  The staff continue in their drudgery and wonder and worry about the future.  The Manager(s) continue to mandate the minutiae of the work their staff does instead of allowing them to solve the problems presented to them. 

Tuesday, December 17, 2013

On Test Cases and Conversing with Unicorns

The other day, I was sitting quietly contemplating some measurement functions people were asking about.  Whilst sipping a nice coffee in a small coffee shop, I heard a voice beside me - someone clearing their throat and asking if they could join me.

"Are you Pete?  May I join you?"

Now normally, I'm not easily taken aback.  This time, I was.  It was a unicorn speaking with me.  Apparently he - I think it was a he - was waiting for a friendly griffin who did a mix of Java work for his day gig but was fluent in other languages.  Alas, the griffin was late.  You may not know it, but griffins are notorious for unpunctuality. 

We got to talking about software and software development and software testing.  The unicorn asked me what was on my mind.  This struck me as odd.  I suspect he was simply being polite.  Unicorns can read the minds of non-magical humans, you see.

I explained that, as at many companies, I was trying to help people understand something that I thought was pretty fundamental.  The issue is one that a fair number of people seem to be wrestling with these days.

People are being asked to count things.  Tests.  Bugs.  Requirements.  Effort.  Time.  Whatever.

And the unicorn looked at me and asked "Why?"

It seems that people are looking to estimate work and measure effectiveness.  Their managers are trying to find ways to measure progress and estimate the amount of work remaining.

The unicorn started laughing - no, really.  He did.  Have you ever heard a unicorn laugh?  Yeah.  It's kind of interesting.

He looked at me and said "They've always wanted to know that stuff.  It seems things haven't progressed very far.  In the old days, we looked at the work and worked together to make really good software.  It would be ready as soon as it could be, and we could tell managers when we got close to it being ready.  Now, we expect people to be able to parse tasks and effort before they even figure out everything that needs to be done?  What are the odds of that actually happening?"

We sighed and sipped coffee for a moment.

The problem, of course, is that sometimes we're not quite sure what else can be counted.  The issue with that, the whole metrics thing?  When we latch onto the easy-to-count stuff, it seems the stuff we count never really matters very much to the actual outcome of the project.  Why is that?

So, the conversation flowed.  We each had another coffee.

My thoughts focused on test cases.  Why do so many folks insist on counting test cases and the number that passed and failed?  What does that tell us about the software?  If we could logically define, for every situation, what test cases should look like, and could define instances where those definitions would always hold as guidelines, that might work.

My problem is simple:  I can't recall two projects ever conforming to the same rules.  That set of rules does not seem to work in most of the environments I've worked in.

The unicorn seemed to understand.

He said "I tend to use failure points at steps in documented test scripts when I need them.  Some people use each failure point as a test case. They get many, many more test cases than I do.  Does that make their tests better?  Are they better testers because of the way they define their test cases?"

We both agreed that simply having more test cases means almost nothing as far as the quality of testing goes.  That, in turn, tells us nothing about the quality of the software.

If "a test case" fails and there are ten or twenty bugs written up - one for each of the failure points - does that tell us something more or less and if ten or twenty test cases resulted in the same number of bugs being written - again - one for every failure point.

What does this mean? 

Why do we count test cases and all the other things we count? 

The unicorn looked at me and said that he could not answer that question.  He said that he preferred to consider more important things, like, whether or not unicorns can talk with humans.

Monday, December 16, 2013

Tea Parties, Perspective and Expectations or What Makes a Bug?

I'm writing this on the evening of December 16.  This is the anniversary of an event that gets a lot of attention in a fair number of middle school and high school American History classes.  It struck me, as I was thinking about this while walking to the office today, that while some people consider it a watershed event, in reality it was part of a continuum of tumultuous events that happened in fairly short order. 

December, 1773

Consider two descriptions: 
1. Militant Anti-government terrorists destroy massive amounts of private property in Boston Harbour.
2. Freedom-loving Patriots destroy hated symbol of unjust oppression by dumping tea in Boston Harbour.

Both describe the same event, each from a distinct perspective.  Both engage in hyperbole, ostensibly to make a point.  The facts of the matter are these:  between 30 and 130 men, some dressed as Mohawk warriors, boarded three ships owned by the British East India Company, overpowered the anchor watch and threw 342 casks of tea into the water. 

There are bits that often get left out of the narrative.  For example, the tax on tea was originally passed in 1767.  At the time, Britain was in deep financial trouble as a result of the Seven Years War - what is taught as the French and Indian War.  Much of the expense of the war was to defend these same American Colonists from the French.  It seemed reasonable that some measure of tax be levied to pay the bills of the war.  The East India Company argued against the tax, and through a series of negotiations and compromise with Parliament had it offset for a period of time. 

These offsets expired in 1772.  The taxes were modified slightly in the Tea Act of 1773.  The East India Company tried to extend these "tax breaks" again, to offset the taxes.  The government of Lord North refused.  It suspended some, but not all, of these taxes - some 10% of the value remained.  This worked out to 3 pence in taxes per pound of tea.  In doing this, there was a "minor change": the salaries of some colonial officials would be paid from these funds.

The East India Company attempted to cover the taxes themselves, to simply pay the tax and keep the retail price the same.  To put it gently, the colonists would have nothing to do with it.

There was outrage.  There was fury.  There was anger directed at individuals in the colonies and in London.  

Never mind that among the people most vocally opposed to both the tax and the East India Company's efforts to minimize the impact on the colonists, their customers, were smugglers of tea. 

Perspectives

The perspectives around the facts drive the narrative. Both of the above descriptions, the ones about "terrorists" and "patriots", are accurate depending on the perspective of the individual.

Let us consider how these same "details" impact software.  One customer likes a given feature, one does not.  Which one is right?  One complains of "bugs" and demands a fix immediately.  The other refuses to consider any change at all.

I've actually encountered that.  Two equally large customers - one likes the software as it is and the other demands changes.

Which one has the "bug"?  How do you count that?  The description of the software and the promises of the sales staff could easily be interpreted either way. 

When people demand "bug free software" I wonder if they have any idea what that means?

A bug is a bug only if everyone involved with the software agrees it is a bug.  

A bug is not a thing - it is a description of a relationship.  That relationship describes a variance between expectations, perceptions and the actual behavior of the software.
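If it helps, here is a minimal sketch of that relationship as a data structure - the field names and the two-customer example are my own illustration:

```python
# A minimal sketch of "a bug is a relationship, not a thing."
# Field names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class BugReport:
    stakeholder: str    # whose perspective this is
    expectation: str    # what they believe the software should do
    observation: str    # what the software actually does

    def is_bug(self) -> bool:
        # The same observed behavior can be a bug for one stakeholder
        # and working-as-intended for another.
        return self.expectation != self.observation

a = BugReport("Customer A", "totals round up", "totals round up")
b = BugReport("Customer B", "totals round to even", "totals round up")
print(a.is_bug(), b.is_bug())   # -> False True
```

Notice there is no "bug" field on the software itself; the variance only appears once a particular person's expectation enters the picture.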

In setting expectations, we must be able to anticipate and describe the perspective of the persons using, or responsible for using, the software we are working on.

Do we understand how people use the software?

We understand how we think they use the software.  We may understand how we think they will use it.

If our perspectives are wrong, if our expectations are wrong, we are exercising the software - and looking for "expected results" - in ways that may not be what anyone else would describe as "expected."