Friday, April 15, 2016

On Facts, Numbers, Emotions and Software Releases

A recent study published in Science Magazine looks at communication, opinions, beliefs, and how they can be influenced, in some cases over the long term, by a fairly simple technique: open communication and honest sharing.

What makes this particular study interesting is that it was conducted by two researchers who attempted to replicate the results of a previous study also published in Science Magazine on the same topic. The reason they were unable to do so was simple: The previous study had been intentionally fraudulent.

The results of the second study were, in some ways, more astounding than those of the first. In short, people can be influenced to the point of changing their opinions and views on charged, sensitive topics after engaging in non-confrontational, personal, anecdote-based conversation.

The topics covered everything from abortion to gay and transgender rights. Hugely sensitive topics, particularly in the geographic areas where the studies were conducted.

In short, when discussing sensitive topics, basing your arguments in "proven facts" does little to bring about a change in perception or understanding with people whose firmly held beliefs differ from yours.

Facts don't matter.

Well-reasoned, articulate, fact-based dissertations will often do little to change people's minds about pretty much anything. They may "agree" with you so you will go away, but they really have not been convinced. There are scores of examples currently in the media; I won't bore (or depress) anyone (including myself) by listing any of them.

Instead, consider this: Emotions have a greater impact on most people's beliefs and decision making processes than the vast majority of people want to believe.

This is as true for "average voters" as it is for people making decisions about releasing software.

That's a pretty outrageous statement, Pete. How can you honestly say that? Here's one example...

Release Metrics

Bugs: If you have ever worked at a shop, large or small, that had a rule of "No software will be released to production with known P-0 or P-1 bugs," it is likely you've encountered part of this. It is amazing how quickly a P-1 bug becomes a P-2 bug, and the fix gets bumped to the next release, if there is a "suitable" work-around available.

When I hear that, or read it, I wonder "Suitable to whom?" Sometimes I ask flat out what is meant by "suitable." Sometimes, I smile and chalk that up to the emotion of the release.
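
For concreteness, here is a minimal sketch, in Python, of how that kind of release gate is often automated. The bug records and priority labels are hypothetical, not taken from any particular tracker; the point is that the gate only sees the label, so re-classifying a P-1 as a P-2 changes nothing about the product but turns the gate from red to green.

    # Hypothetical open-bug list -- ids, priorities and summaries are illustrative only.
    open_bugs = [
        {"id": 101, "priority": "P-1", "summary": "Checkout totals wrong for multi-currency carts"},
        {"id": 102, "priority": "P-2", "summary": "Report export times out for large date ranges"},
    ]

    BLOCKING_PRIORITIES = {"P-0", "P-1"}

    def release_blockers(bugs):
        # Return the bugs that block the release under the stated rule.
        return [b for b in bugs if b["priority"] in BLOCKING_PRIORITIES]

    print("Release blocked:", bool(release_blockers(open_bugs)))   # True -- bug 101 is a P-1

    # A "suitable" work-around is found, so the same bug is re-labeled P-2...
    open_bugs[0]["priority"] = "P-2"
    print("Release blocked:", bool(release_blockers(open_bugs)))   # False -- yet the product has not changed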

Dev/Code Complete: Another favorite is "All features in the release must be fully coded and deployed to the Test Environment {X} days (or weeks) before the release date. All code tasks (stories) will be measured against this, and the quality of the release will be compared against the percentage of stories done out of all story tasks in the release." What?

That is really hard for me to say aloud and is kind of goofy in my mind. Rules like this make me wonder what happened in the past to warrant such strict guidelines. I can understand wanting to make sure there are no last-minute code changes going in. I have also found that changing people's behavior tends to work better by using the carrot - not a bigger stick to hit them with.
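
For illustration, here is a minimal sketch, in Python, of the "percentage of stories done" calculation that rule implies. The story list and status names are made up; the point is that the number is easy to compute and easy to report, and says nothing about how well any of the "done" stories actually work.

    # Hypothetical story list -- the data and status names are illustrative only.
    stories = [
        {"id": "ST-1", "status": "done"},
        {"id": "ST-2", "status": "done"},
        {"id": "ST-3", "status": "in test"},
        {"id": "ST-4", "status": "done"},
    ]

    # The metric the rule prescribes: stories done / all story tasks in the release.
    done = sum(1 for s in stories if s["status"] == "done")
    percent_done = 100.0 * done / len(stories)

    print(f"{percent_done:.0f}% of stories done")  # 75% -- nothing here measures how the software behaves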

Bugs Found in Testing: There is a fun mandate that gets circulated sometimes. "The presence of bugs found in the Test Environment indicates Unit Testing was inadequate." Hoo-boy. It might indicate that unit testing was inadequate. It might also indicate something far more complex and difficult to address by demanding "more testing." 

Alternatives?

Saying "These are bad ideas" may or may not be accurate. They may be the best ideas available to the people making "the rules." They may not have any idea on how to make them better.

Partly, this is the result of people with glossy handouts explaining to software executives how their "best practices" will work to eliminate bugs in software and eliminate release night/weekend disasters. Of course, the game there is that these "best practices" only work if the people with the glossy handouts are doing the training and giving lectures and getting paid large amounts of money to make things work.

And when they don't, more often than not the reason presented is that the company did not "follow the process correctly" or is still "learning the process." Of course, if the organization tries to follow the consultant's model on its own, based on the preliminary conversations, the effort is doomed to failure and will lead to large amounts of money going to the consultant anyway.

Consider

A practice I first encountered many years ago, before "Agile" was a cool buzzword, was enlightening. I was working on a huge project as a QA Lead. Early each morning, we had a brief touch-point meeting of project leadership (development leads and managers, me as QA Lead, the PM, other boss-types) to discuss the goals for the day in development and testing.

As we were coming close to the official implementation date, a development manager proposed a "radical innovation." At the end of one of the morning meetings, he went around the room asking the leadership folks how they felt about the state of the project. I was grateful because I was pushing hard to not be the gatekeeper for the release or the Quality Police.

How he framed the question of "state of the project" was interesting - "Give a letter grade for how you think the project is going, where 'A' is perfect and 'E' is doomed." Not surprisingly, some of the participants said "A - we should go now, everything is great..." A few said "B - pretty good but room for improvement..." A couple said "C - OK, but there are a lot of problems to deal with." Two of us said "D - there are too many uncertainties that have not been examined."

Later that day, he and I repeated the exercise in the project war-room with the developers and testers actually working on the project. The results were significantly different. No one said "A" or "B". A few said "C". Most said "D" or "E".

The people doing the work had a far more negative view of the state of the project than the leadership did. Why was that?

The leadership was looking at "Functions Coded" (completely or in some state of completion) and "Test Cases Executed" and "Bugs Reported" and other classic measures.

The rank-and-file developers and testers were more enmeshed in what they were seeing - the questions that came up each day without an easy or obvious answer; the problems that were not "bugs" but were weird behaviors and might be bugs; a strong sense of dread over how long it was taking to get "simple, daily tasks" figured out.

Upshot

Management had a fit. Gradually, the whiteboards in the project room were covered with post-its and questions written in colored dry-erase markers. Management had a much bigger fit.

Product Owner leadership was pulled in to weigh in on these "edge cases," which led to IT management having another fit. The testers were raising legitimate questions. When the scenarios were explained to the bosses of the people actually using the software, they tried the scenarios themselves. And they sided with the testers and the developers: there were serious flaws.

We reassessed the remaining tasks and worked like maniacs to address the problems uncovered. We delivered the product some two months late - but it worked. Everyone involved, including the Product Owner leadership who were now regularly in the morning meetings, felt far more comfortable with the state of the software.

Lessons

The "hard evidence" and metrics and facts all pointed to one conclusion. The "feelings" and "emotions" and "beliefs" pointed to another.

In this case, following the emotion-based decision path was correct.

Counting bugs found and fixed in the release was interesting, but did not give a real measure of the readiness of the product. Likewise, counting test cases executed gave a rough idea of progress in testing but did nothing at all to show how the software actually functioned for the people really using it.

I can hear a fair number of folks yelling "PETE! That is the point of Agile!"

Let me ask a simple question - How many "Agile" organizations are still relying on "facts" to make decisions around implementation or delivery?

1 comment:

  1. "That's a pretty outrageous statement, Pete. How can you honestly say that?"

    Notice that your interlocutor did not say "illogical" or "unsupportable" or "invalid", but "outrageous". Sounds like an emotionally charged response to me.

    ---Michael B.
