Sunday, November 18, 2018

Grand Pronouncements, Best Practices and the Certainty of Failure

Many times, those "in charge" will issue mandates or directives that seem perfectly reasonable given specific ideas, conditions and presumptions.  We've seen this loads of times on things related to software development and testing in particular.

We've seen this many, many times.

1 July, 1916.

British infantry launched a massive assault in France along the Somme River. It was a huge effort - days of artillery bombardment intended to destroy German trenches and defensive positions, as well as the barbed wire obstacles in front of them.

The best practices mandated by High Command called for forming ranks after scrambling "over the top" of the trenches, then marching across no-man's-land, overcoming what was left of the German defenses and capturing the German positions, thus breaching the lines and opening a hole miles long through which reinforcements could pour, sending the Germans reeling backward in defeat. Troops in the first two waves were promised that field kitchens would follow behind them with a hot dinner, along with supplies of ammunition and more field rations.

Brilliant plan. Conformed to all the official Best Practices of the day. In a training setting, the planners would have gotten very high marks indeed.

One very minor issue was that it was based completely on unrealistic presumptions.

It did not work. Thousands were killed on the first day. Entire battalions simply ceased to exist as viable combat units. Some, like the Newfoundland Regiment, were destroyed trying to get to their launch point.

With luck, the best practices and directives you are getting are not on the same scale of life and death.

Being Done

What I have seen time and again are mandates for a variety of things:
  • All sprints must be 2 weeks long;
  • Each team's Definition of Done MUST have provisions that ALL stories have automated tests;
  • Automated tests must be present and run successfully before a story can be considered "done;"
  • There is a demand for "increased code coverage" in tests - which means automated tests;
  • Any tests executed manually are to be automated;
  • All tests are to be included in the CI environment and into the full regression suite;
  • Any bugs in the software mean the Story is not "Done;"
  • Everyone on the team is to write production/user-facing code because we are embracing the idea of the "whole team is responsible for quality."

Let me say that again.

  • All "user stories" must have tests associated with them before they can be considered "Done;"
  • Manual tests don't count as tests unless they are "automated" by the end of the sprint;
  • All automated tests must be included in the CI tests;
  • All automated tests must be included in the Regression Suite;
  • All automated tests must increase code coverage;
  • No bugs are allowed;
  • Sprints must be two weeks;
  • Everyone must write code that goes into production;
  • No one {predominantly/exclusively} tests, because the "whole team" is responsible for quality.

It seems to me organizations with controls like these tend to have no real idea how software is actually made.

There is another possibility - the "leaders" know these will be generally ignored.

Unfortunately, when people's performance is measured against things like "automated tests for every story" and "increased code coverage in automated tests," people tend to react precisely as anyone who has considered human behavior would expect: their behavior and work change to reflect the letter of the rules whilst ignoring the intent.

What will happen?

Automated tests will be created to demonstrate the code "works" per the expectation. These will be absolutely minimalist: "Happy Path" tests that confirm the software "works" under comfortable conditions.

Rarely will you find deep, well-considered tests in these instances because they take too long to develop, exercise, test (as in, see if they are worth further effort) and then implement.

With each sprint being two weeks, and a mandate that no bugs are allowed, the team will simply not look very hard FOR the bugs.
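To make the pattern concrete, here is a minimal sketch in Python (pytest style). The apply_discount function and its numbers are invented purely for illustration - they are not from any real project - but the shape of the test is exactly the kind that satisfies the letter of these mandates.

    # Hypothetical example: a "happy path" test that is automated, runs in CI,
    # and nudges code coverage up, yet asks nothing interesting of the code.

    def apply_discount(price, percent):
        """Toy 'production' code: subtract a percentage discount from a price."""
        return price - (price * percent / 100)

    def test_apply_discount_happy_path():
        # One comfortable input, one expected output.
        # Automated? Yes. In CI? Yes. Raises coverage? Yes. Finds bugs? Almost never.
        assert apply_discount(100, 10) == 90.0

    # Nothing here asks about a negative percent, a percent over 100, a zero or
    # negative price, a None, or a string - the questions a tester would ask.

A whole suite built from tests like this will be green every day and still tell you very little about whether the product actually works.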

When these things come together, all the conditions will be met:
  • All the tests will be automated;
  • All the tests will be included in the CI environment;
  • All the tests will be included in the (automated) Regression suite; 
  • Code coverage will increase with each automated test (even if ever so slightly);
  • Any bugs found will be fixed and no new ones will be discovered;
  • Everything will be done within the 2 week sprint.
There is one other thing that is very likely: once the product gets deployed to whatever the next level is, any number of bugs will be found. If that next level is some other group in the organization, the product will be sent back to be corrected.

If it goes instead to some other group, it is probable that group will howl about the product. They will likely hound your support people and hammer on them. Expect them (or, more likely, their manager/director/big-boss) to hammer on your boss.

But the fact remains: all the conditions for "Done" were met.

And together, they ensured failure.



3 comments:

  1. Continuing your military theme, there are two dictums that my father lived by, which he took from Field Marshal Montgomery (who himself had served during WW1):

    1) A bad plan is better than no plan at all; but

    2) No plan survives first contact with the enemy.

    In our case, "the enemy" is "reality".

    1. Indeed! I've heard that same dictum credited to Moltke and Clausewitz. And Americans tend to credit Eisenhower with it!

      For us, reality is always the destroyer of nice plans.

      Thanks for reading! (There is a followup coming shortly - would have made this too long for a single post.)

  2. I have a follow-up to this post here: http://bit.ly/2QVI7l8
