Friday, November 30, 2018

The Man's the Gowd For A' That

The title here is the last line of the first verse of the Robert Burns poem and song commonly referred to as "A Man's a Man." For the late 1790s, it reflected a huge portion of the Enlightenment's understanding of humankind.


Is there for honest Poverty
That hings his head, an' a' that; 
The coward slave - we pass him by,
We dare be poor for a' that!
For a' that, an' a' that!
Our toils obscure an' a' that,
The rank is but the guinea's stamp,
The Man's the gowd for a' that.
What makes me think of this today?

Simple, a short phone call that was followed by a short conversation with my lady-wife.

I've been looking for a new software adventure for some time. Yes, I've had several opportunities come across the desk, but many have not felt right to me. Some I applied for, and things have been slow in progressing. A week or so ago, a placement specialist/recruiter/head-hunter called me. He had seen the resume I submitted for a different position, and wondered if I would be interested in one that had come into their office that morning.

He then described the job I was looking for.

We talked about the generalities and then dug down into greater specifics, as these conversations tend to go. He said he'd run the information past his manager and get back to me. An hour or so later he called again. We chatted some more.

I made a couple of minor tweaks to the cover letter and resume to tailor them better for this position (not making things up - it bugs me when people do that - but emphasizing work I took for granted that others not doing stuff with Agile or Scrum or Testing would be looking for).

We agreed on a billable rate and off we went.

We had a couple emails back and forth since then, just checking in.

Last night, as we were watching the fish in the fish tank (really, that is what we were doing) waiting for the "dinner's greatest hits" to warm up in the oven, the phone rang - it was him again.

"Hello, is this Pete?"
"Yes it is."
"Hi Pete, this is {him} we talked last week about submitting you for a position at {company}. Do you remember?"
"Of course, {him} I remember. How are you doing today?"

A simple polite nothing - small talk in some ways, but a bridge that is so important.

The change in tone and energy was immediate. From being rather mechanical, almost awkward, everything became much more human.

"I am good today, thank you for asking."

The manner of the conversation changed with that simple question. It recognized him as a person, recognized that he was trying to do good work to support his family and, incidentally, to help a client company connect with a candidate with specific skills.

We finished the business, I wished him a good evening at the end and the conversation ended.

My lady-wife was watching with great interest.

"His entire energy changed when you asked how he was doing, didn't it."

Yup. It did. At the end, you could almost hear him smiling.

Sometimes, such a small thing as asking how someone is doing, asked in a sincere manner, does more for that person than anything else you could do right then. Such "polite nothings" are similar to the honorifics that once were part of everyday society.

"Good morning, Mr Jones."
"Good afternoon, Miss Radzikowska."
"Good evening, Ms Neal."

Giving people such a greeting sometimes feels awkward today, when many people have cast off such artifice and default to first names as being more "real" or "honest."

I'm not so sure.

I prefer not to abandon them out of hand and presume a familiarity that is not honestly present. Such things help keep the wheels and cogs of society, which tend to be clunky at best, moving as smoothly as possible.

Reach out with open handed kindness to another human person. Recognize them as worthy of respect and kindness. We don't know what they are struggling with themselves and sometimes small things might help them get through the day.

Be kind, even when it is hard for you to feel kind.

As Burns wrote over 200 years ago -
Then let us pray that come it may,
(As come it will for a' that,)
That Sense and Worth, o'er a' the earth,
Shall bear the gree, an' a' that.
For a' that, an' a' that,
It's comin' yet for a' that,
That Man to Man, the world o'er,
Shall brothers be for a' that.

Monday, November 26, 2018

Testing, Limiting Failure and Improving Better

In this post, I wrote about demands and practices that lead to myriad problems in software development - even though every story (really, Backlog Item, but, whatever) is marked as "Done."

In this followup post, I wrote about things that can be done to mitigate the damage, a bit.

This post looks at the problems described in the first post (above) and how they might be tied to answering the implied question in this post.

I suspect these are all tied together. I also suspect that people have been told by experts, or read a book, or talked with a peer at a large company who heard that a cool, Silicon Valley company is now doing this - and so they are jumping in so they can attract the best talent. Or something.

Let's talk about a couple of subtle points.

Testing

That is something all of us do, every day, whether we want to admit it or not. It may not be testing software, but it may be something else, like "I wonder what happens if I do this." At one time, most people writing production-facing, customer-impacting code were expected to test it - thoroughly. Then we'd get another person on the development team to test it as well. It was a matter of professional pride to have no bugs found by that other person, and a matter of pride to find bugs in other people's work.

I remember one Senior guy telling me "Before you hand this off to someone to test for you, make sure you have tested everything you possibly can. Document what you did and how, so that if something IS found in later testing, we can see what the difference is. Then the next time, you can test that condition when you test the others. Make them work hard to find something wrong."

That stuck with me and helps guide my thinking around testing - even 30+ years later.

Things have shifted a bit. Much of what we did then can be done fairly quickly using one or more tools to assist us - Automation. Still, we need some level of certainty that what we are using to help us is actually helping us. The scripts for the "automated tests" are software, and need diligent testing just as much as the product we are testing - the product the company sells, the one that keeps customers happy and keeps us from getting sued, or worse, brought up on criminal charges.

Still, when done properly, automated test scripts can help us blow through mundane tasks and allow people to focus on the "interesting" areas that need to be examined.

OK, caveat #1 - check the logs generated by the tests - don't just check to make sure the indicator is green. There MAY be something else happening you have not accounted for. Just, do a sanity check.
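To make caveat #1 a little more concrete, here is a rough pytest sketch of "read what the run logged, not just the green bar." The `transfer_funds` function and the "payments" logger are invented for illustration; the only real machinery is pytest's `caplog` fixture, and the point is the second assertion.

```python
# A minimal sketch of caveat #1: the result can look right while the
# code quietly logged something you did not account for.
# "transfer_funds" is a made-up example; swap in whatever your test
# actually exercises and whatever log levels matter to you.
import logging

def transfer_funds(source, target, amount):
    # Stand-in implementation so the sketch runs on its own.
    log = logging.getLogger("payments")
    if amount > source["balance"]:
        log.warning("overdraft fallback used for account %s", source["id"])
    source["balance"] -= amount
    target["balance"] += amount

def test_transfer_happy_path_and_logs(caplog):
    src = {"id": "A-1", "balance": 100}
    dst = {"id": "B-2", "balance": 0}

    with caplog.at_level(logging.WARNING, logger="payments"):
        transfer_funds(src, dst, 40)

    # The usual "green bar" assertion...
    assert src["balance"] == 60 and dst["balance"] == 40

    # ...plus the sanity check on the logs: nothing unexpected happened
    # behind the scenes while the visible result looked fine.
    unexpected = [r for r in caplog.records if r.levelno >= logging.WARNING]
    assert not unexpected, f"test passed but logged: {unexpected}"
```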

Then, while code is being worked on and unit tests are being prepared (presuming you are doing TDD or something similar), have someone NOT working on that piece of code look at the story (or backlog item, or whatever), look at the tests defined in TDD, and ask "What would happen if this happened?"

Now, that could be something like an unexpected value being encountered for a variable. It could also be something more complex - for example, a related application changes the state of the data this application is working with.
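Here is a hedged sketch of what that "what would happen if...?" question can look like as tests. The `parse_quantity` function is invented for the example; the shape is what matters - someone outside the TDD loop adds the values nobody planned for, and insists on a decided behavior rather than a silent surprise.

```python
# A sketch of turning "what would happen if...?" into tests.
import pytest

def parse_quantity(raw):
    # Stand-in implementation so the sketch runs.
    value = int(raw)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

@pytest.mark.parametrize("raw,expected", [("1", 1), ("250", 250)])
def test_expected_values(raw, expected):
    assert parse_quantity(raw) == expected

@pytest.mark.parametrize("raw", ["", "  ", "-3", "1e3", "ten", None])
def test_unexpected_values(raw):
    # We don't insist on one "right" behavior here; we insist on a
    # *chosen* behavior - a clean error rather than a quiet surprise.
    with pytest.raises((ValueError, TypeError)):
        parse_quantity(raw)
```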

One approach I have used very often: look at a representation (mindmap/decision tree/state diagram) of what the software looks like before, and after, this piece is added. What types of transactions are being impacted by this change? Are there any transactions that should not be impacted? Does the test suite, as it is running, reflect these possible paths?

Has someone evaluated the paths through the code? Beyond simply line and branch coverage, how confident are you in understanding the potentially obscure relationship between the software, and say, the machine it is running on going to sleep?
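A minimal sketch of that idea, assuming you can write down the transitions your model knows about and the ones your suite actually drives - the states and transitions here are invented for illustration, and in practice the "exercised" set would come from instrumentation or test tagging rather than a hand-kept list:

```python
# A rough sketch of the "look at a representation" idea: describe the
# transactions as state transitions, note which ones the test suite
# actually drives, and see what is left in the dark.

KNOWN_TRANSITIONS = {
    ("cart", "checkout"),
    ("checkout", "paid"),
    ("checkout", "cart"),      # customer backs out
    ("paid", "refunded"),
    ("paid", "shipped"),
}

# Hand-maintained here purely for the sketch.
TRANSITIONS_EXERCISED_BY_TESTS = {
    ("cart", "checkout"),
    ("checkout", "paid"),
    ("paid", "shipped"),
}

def untested_transitions():
    """Transitions the model knows about that no test currently drives."""
    return KNOWN_TRANSITIONS - TRANSITIONS_EXERCISED_BY_TESTS

if __name__ == "__main__":
    for src, dst in sorted(untested_transitions()):
        print(f"no test covers: {src} -> {dst}")
    # Prints the backed-out checkout and the refund path - exactly the
    # sort of "should this be impacted?" question worth asking out loud.
```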

Have you explored the behavior of the software, not simply if it "works?" Are there any areas that have not been considered? Are there any "ghosts in the machine"? How do you know?

Testing in Almost-Agile

I have been told by many "experts" that there is no testing in "Agile." They base this, partly, on the language or wording in the Agile Manifesto. They base it partly on the emphasis on everyone being responsible for quality in a "whole team" environment.

Some even point to the large Silicon Valley companies mentioned earlier who state very publicly they "don't have any testers."

Yet, when pressed, there are often clarifying statements like "We don't have testers in the Scrum teams, because we rely on automation for the Function testing." Here is where things kind of break down.

No "testers" in the "Scrum teams" because people write automation code to test the functions. When asked about "Integration Testing" or Load or Performance or Security or any of the other aspects of testing that testing specialists (sometimes referred to as "Testers") can help you with, do really well, and limit exposure to future problems, and possibly future front pages and lawsuits - the response often is "We have other teams do that."

Wait - What?

The "Scrum Team" declares a piece of work "Done" and then at least one or two other teams do their thing and demonstrate it is not really "Done"?

Mayhap this is the source of "Done-Done"? Is it possible to have Done-Done-Done? Maybe, depending on how many teams outside of the people developing the software there are.

That sounds pretty Not-Agile to me - maybe Almost-Agile - certainly not Scrum. It sounds much more like one of the Command-and-Control models that impose Agile terms (often Scrum) on top of some form of "traditional software development methodology" like "Waterfall." Then they sing the praises of how awesome "Agile" is and how much everything else stinks - except they are busy in stage gate meetings and getting their "Requirement Sprint" stuff done and working on their "Hardening Sprints."

Another Way - Be Flexible

Look at what the team can work on NOW for the greatest benefit to the project.

What do I mean? Let's start with one thing that the customer (in the person of the product owner or some other proxy) really, really wants or needs - more than anything else.

Figure out the pieces to make that happen - at least the big ones;
make sure people understand and agree on what the pieces mean and what they really are.

Then pick a piece - like, the one that looks like it will deliver the biggest bang NOW -
OR - one that will set you up to deliver the biggest bang in the next iteration;

Then, figure out...
  • how to know if that piece works individually; 
  • how to know if that piece works with other pieces that are done; 
  • how to know if that piece will negatively impact the way the whole package is supposed to work;
  • how to know if that piece might open a security vulnerability;
  • if there is a way to find any unexpected behaviors in this piece or by adding this piece to what has been done.

I'm not advocating doing this all at once for every piece/ticket/story/whatever. I am suggesting these be defined and worked on and then actually executed or completed before any task is labelled "Done."

Some of these may be easily automated - most people automate the bit about each piece "works individually."
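One lightweight way (a sketch, assuming pytest; the marker and function names are invented) to keep the rest of the list from quietly disappearing is to tag tests by the question they answer, so the gaps show up the moment you filter by marker:

```python
# Tag tests by the question from the list they answer, so "works
# individually" is not the only question that ever gets asked.
# Marker names here are arbitrary; register them in pytest.ini to
# avoid unknown-marker warnings.
import pytest

@pytest.mark.unit
def test_piece_works_individually():
    assert add_line_item(order=[], item="widget") == ["widget"]

@pytest.mark.integration
def test_piece_works_with_pieces_already_done():
    # e.g. the new item survives a round trip through the existing
    # order-persistence piece (stubbed out in this sketch).
    order = add_line_item(order=[], item="widget")
    assert restore(save(order)) == order

@pytest.mark.security
def test_piece_rejects_hostile_input():
    with pytest.raises(ValueError):
        add_line_item(order=[], item="<script>alert(1)</script>")

# --- stand-ins so the sketch runs; replace with the real pieces ------
def add_line_item(order, item):
    if "<" in item:
        raise ValueError("suspicious item name")
    return order + [item]

def save(order):
    return list(order)

def restore(saved):
    return list(saved)
```

Running something like `pytest -m integration` or `pytest -m security` - or noticing there is nothing to run - makes the unanswered questions visible to the whole team.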

If your team does not have people on it with the skill sets needed to do these things, I'd suggest you are failing in a very fundamental way. Can I really say that? Consider that Scrum calls for cross-functional teams as being needed for success. Now, it might be that you also need specialists in given areas and you simply can't have 1 per team - but you can share. Through decent communication and cooperation, that can be worked out.

Still, the tasks listed above will tend to be pretty specific to the work each team is doing. The dynamics of each team will vary, as will the nature of some of the most fundamental concepts like - "does it work?"

Of these tasks, simple and complex alike, perhaps the most challenging is the last one in the list.

Is there a way to find unexpected behavior in this individual piece we are working on? Is there a way to find unexpected behavior when it gets added to the whole?

These are fundamentally different from what most people mean by "Regression Testing." They are tasks that are taken up in the hopes of illuminating the unknown. We are trying to shine a light into what is expected and show what actually IS.
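One concrete technique that leans in this direction - offered as an example, not the only way - is property-based testing, where a library like hypothesis generates the inputs nobody sat down and planned. The `normalize_name` function here is invented for the sketch:

```python
# A sketch of hunting for unexpected behavior rather than
# re-confirming expected behavior, using the hypothesis library.
from hypothesis import given, strategies as st

def normalize_name(raw: str) -> str:
    # Stand-in implementation so the sketch runs: collapse runs of
    # whitespace and trim the ends.
    return " ".join(raw.split())

@given(st.text())
def test_normalizing_twice_changes_nothing(raw):
    # A property, not an example: whatever the input, normalizing a
    # second time should be a no-op.
    once = normalize_name(raw)
    assert normalize_name(once) == once

@given(st.text())
def test_normalized_name_has_no_stray_whitespace(raw):
    assert normalize_name(raw) == normalize_name(raw).strip()
```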

But, who has those kinds of skills? 

They need to be able to understand the need of the customer or business AND understand the requirements and expectations AND understand the risks around poor performance or weak security, and the costs or trade-offs around making these less vulnerable. They need to be able to understand these things and explain them to the rest of the team. They need to be able to look beyond what is written down and see what is not - to look beyond the edge of the maps and consider "what happens if..." Then, these people need to be able to share these concepts and techniques with other people so they understand what can be done, and how it can be done differently and better.

These things are common among a certain group of professionals. These professionals work very hard honing their craft, making things better a little at a time. These people work hard to share ideas and get people to try something different.

They are often scorned and looked down upon by people who do not understand what the real purpose is.

These people have many names and titles. Sometimes they are "Quality Advocates," other times they are "Customer Advocates." However, they are commonly called Testers.




Monday, November 19, 2018

Moving From Failure to Better

In this blog post I described scenarios I have seen play out many times. Official mandates based around some understanding of Scrum, some version of "Best Practices" and fairly shallow understanding of software development and testing.

If we stop there, it appears that there is no avoiding the traps that lead to failure of the sprint and the work the sprint is supporting. But, there are options to make things a wee bit better.

Common Option 1: Hardening Sprints

I know - the point of Scrum is to produce regular product increments that can potentially be released to a customer or the production environment or some other place. For many large organizations, the idea of incremental improvements, particularly when it comes to their flagship software, seems anathema.

The result is bundling the work of many development teams from many sprints into one grand release.

When each team looks up and outside their silo for the first time after a sprint, or four, the collected product increments (new version of the software) are pulled together. The next step is often something like a "hardening sprint" to exercise all the pieces that were worked on from all the teams and make sure everything works.

As much as this violates Scrum orthodoxy, I can see where this might seem a really good idea. After all, you have the opportunity to exercise all the changes en masse and try and work it with as close to "real world activity" as possible in a test environment.

The problem I see many, many times, is each team simply reruns the same automated scripts they ran when pushing to finish the sprint and get to "Done." The interesting thing to me is that sometimes bugs are still found, even when nothing has "changed."

This can come from any number of causes, from the mundane - data that was expected to have certain values has been changed - to the interesting - when team X is running part of their tests while team Y is running part of theirs, unexpected errors are encountered by one, or both, teams.
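A tiny illustration of the mundane cause, with an invented, in-memory stand-in for the shared test environment - the names are placeholders, not a real system:

```python
# A test that leans on shared, pre-loaded data versus one that
# arranges (and cleans up) its own.

SHARED_ENVIRONMENT_DATA = {"CUST-42": {"status": "active"}}

def test_brittle_assumes_shared_record():
    # Passes during the sprint; fails in the hardening run the moment
    # another team flips CUST-42 to "suspended" for their own scenario.
    assert SHARED_ENVIRONMENT_DATA["CUST-42"]["status"] == "active"

def test_sturdier_creates_its_own_record():
    # Arrange the data this test depends on, instead of hoping nobody
    # else touched it in the weeks since the sprint ended.
    record_id = "CUST-TEST-OWNED"
    SHARED_ENVIRONMENT_DATA[record_id] = {"status": "active"}
    try:
        assert SHARED_ENVIRONMENT_DATA[record_id]["status"] == "active"
    finally:
        del SHARED_ENVIRONMENT_DATA[record_id]
```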

Another challenge I have seen often is whether people remember what was done early in the cycle - possibly months before the "Hardening Sprint" started. Some changes are small and stand alone. Some are built on by later sprints. Do people really remember which was which? When they built their automated acceptance tests, did they update the tests for work done earlier in the iteration?

In the end, someone, maybe a Release Manager, declares "Done" for the "Hardening Sprint" and the release is ready to be moved to production, or the customer, or, wherever it is supposed to go.

And more bugs are found, even when no known bugs existed.

Less Common Option 2: Integrating Testing

In a growing number of organizations, the responsibility for exercising how applications work together, how well they integrate, is not under the purview of the people making the software. The reasons for this are many, and most of them I reject out of hand as being essentially Tayloristic "Scientific Management" applied to software development.

The result is people running a series of tests against various applications, in a different environment than the one they were developed in, and sending the bugs back to the development teams. This generally happens after the development team has declared "Done" and moved on.

Now the bugs found by the next group testing the software come back, get pulled into the backlog, and presumably get selected for the next sprint. By now it is at least two weeks since they were introduced, probably four and likely six - depending on how long it takes that group to get to exercising new versions.

What if, we collaborated?

What if we recognize that having a group doing testing outside of the group that did the development work is not what the Scrum Guide means when referring to Cross-functional teams? (Really, here's the current/2017 version of The Scrum Guide)

What if we ignore the mandates and structure and cooperate to make each other's lives easier?

What if we call someone from that other team, meet for a coffee, maybe a donut as well, possibly lunch, and say something like "Look. It sucks for us that your tests find all these bugs in our stuff. It sucks for you that you get the same stuff to test over and over again. Maybe there's something to help both of us..."

"Can we get some of the scripts you run against our stuff so we can try running them and catching this stuff earlier? I know it means we'll need to configure or build some different test data, but maybe that's part of the problem? If we can get this stuff running, I think it might just save us both a lot of needless hassle. What do you think?"

Then, when you get the new tests and test data ready and you fire them off - check the results carefully. Check the logs, check the subtle stuff. THEN, take the results to the team and talk about what you found. Share the information so you can all get better.
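What that might look like, very roughly - the suite location and the words worth scanning for are placeholders, not a prescription:

```python
# A sketch of "fire off the borrowed scripts and actually read the
# output": run the other team's suite, keep the log, and pull out the
# quiet warnings before declaring victory.
import subprocess
from pathlib import Path

BORROWED_SUITE = "integration_team_tests/"   # placeholder location
RUN_LOG = Path("borrowed_suite_run.log")

def run_borrowed_suite():
    result = subprocess.run(
        ["pytest", BORROWED_SUITE, "-v"],
        capture_output=True,
        text=True,
    )
    RUN_LOG.write_text(result.stdout + result.stderr)
    return result.returncode

def quiet_trouble():
    """Lines worth bringing to the team even when the run was 'green'."""
    suspicious = ("WARNING", "Traceback", "deprecated")
    return [
        line for line in RUN_LOG.read_text().splitlines()
        if any(token in line for token in suspicious)
    ]

if __name__ == "__main__":
    code = run_borrowed_suite()
    print(f"suite exit code: {code}")
    for line in quiet_trouble():
        print("worth a closer look:", line)
```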

Not everyone is likely to go for the idea. Still, if you are willing to try, you might just make life a little better for both teams - and your customers.

Your software still won't be perfect, but it will likely be closer to better.

I've seen it.
I've done exactly that.
It can work for you, too.
Try it.


Sunday, November 18, 2018

Grand Pronouncements, Best Practices and the Certainty of Failure

Many times, those "in charge" will issue mandates or directives that seem perfectly reasonable given specific ideas, conditions and presumptions. We've seen this loads of times on things related to software development and testing in particular.

We've seen this many, many times.

1 July, 1916.

British infantry launched a massive assault in France along the Somme River. It was a huge effort - days of artillery bombardment intended to destroy German trenches and defensive positions, as well as destroy the barbed wire obstacles in front of the German positions.
The best practices mandated by High Command included forming ranks after scrambling "over the top" of the trenches, then marching across no man's land, overcoming what would be left of the German defenses and capturing the German positions, thus breaching the lines and opening a hole miles long through which reinforcements could pour, sending the Germans reeling backward in defeat. Troops in the first two waves were promised that field kitchens would follow behind them with a hot dinner, and supplies of ammunition and more field rations would follow.

Brilliant plan. Conformed to all the official Best Practices of the day. In a training setting, the planners would have gotten very high marks indeed.

One very minor issue was it was based completely on unrealistic presumptions.

It did not work. Thousands were killed on the first day. Entire battalions simply ceased to exist as viable combat units. Some, like the Newfoundland Regiment, were destroyed trying to get to their launch point.

With luck, the best practices and directives you are getting are not in the same scale of life and death.

Being Done

What I have seen time and again, are mandates for a variety of things:
  • All sprints must be 2 weeks long;
  • Each team's Definition of Done MUST have provisions that ALL stories have automated tests;
  • Automated tests must be present and run successfully before a story can be considered "done;"
  • There is a demand for "increased code coverage" in tests - which means automated tests;
  • Any tests executed manually are to be automated;
  • All tests are to be included in the CI environment and into the full regression suite;
  • Any bugs in the software means the Story is not "Done;"
  • Everyone on the team is to write production/user-facing code because we are embracing the idea of the "whole team is responsible for quality."

Let me say that again.

  • All "user stories" must have tests associated with them before they can be considered "Done;"
  • Manual tests don't count as tests unless they are "automated" by the end of the sprint;
  • All automated tests must be included in the CI tests;
  • All automated tests must be included in the Regression Suite;
  • All automated tests must increase code coverage;
  • No bugs are allowed;
  • Sprints must be two-weeks;
  • Everyone must write code that goes into production;
  • No one {predominantly/exclusively} tests, because the "whole team" is responsible for quality.

It seems to me organizations with controls like these tend to have no real idea how software is actually made.

There is another possibility - the "leaders" know these will be generally ignored.

Unfortunately, when people's performance is measured against things like "automated tests for every story" and "increased code coverage in automated tests" people tend to react precisely as most people who have considered human behavior would expect - their behavior and work changes to reflect the letter of the rules whilst ignoring the intent.

What will happen?

Automated tests will be created to demonstrate the code "works" per the expectation. These will be absolutely minimalist in nature. They will be of the "Happy Path" nature that confirms the software "works."

Rarely will you find deep, well considered tests in these instances because they take too long to develop, exercise, test (as in see if they are worth further effort) and then implement.

With each sprint being two weeks, and a mandate that no bugs are allowed, the team will simply not look very hard FOR the bugs.

When these things come together, all the conditions will be met:
  • All the tests will be automated;
  • All the tests will be included in the CI environment;
  • All the tests will be included in the (automated) Regression suite; 
  • Code coverage will increase with each automated test (even if ever so slightly);
  • Any bugs found will be fixed and no new ones will be discovered;
  • Everything will be done within the 2 week sprint.

There is one other thing that is very likely: once the product gets deployed to whatever the next level is, any number of bugs will be found. If that next level is some other group in the organization, the product will be sent back to be corrected.

If it goes instead to some group outside the organization, it is probable that group will howl about the product. They likely will hound your support people and hammer on them. Expect them (or more likely their manager/director/big-boss) to hammer on your boss.

But, the fact remains, all the conditions for "Done" were met.

And together, they ensured failure.



Why Don't Managers Understand What Good Testing Is?

Several years ago, I was sitting having a quiet drink waiting for a couple of friends to arrive when a fellow walked over and sat down. Did not ask if he could, just sat down. Rude bugger.

He was a senior boss type in his own organization. I was... not a boss type.

Still, he sits down and asks me a question. "I don't know how you deal with this. I would think you get this all the time. It seems like every few months I get called into my manager's office to answer questions about why we do what we do and why we don't do what these "experts" say we should be doing. It can be everything from metrics that are supposed to show us where all the problems are or sometimes some best practice that someone is pushing or some tool or other for test management or some requirements tracking tool that lets you show what you're testing and how much testing you're doing and how good a job testing you're doing or something. I explain to him why that stuff isn't real and how those things don't actually work. He seems OK with it, then another couple months and I'm back having the same conversation with him. Don't you find that frustrating?"

My response was something really non-committal. Something like, "Yup, that can be really frustrating and a huge energy draw."

He felt better after venting or maybe getting some form of confirmation that it really IS frustrating and went away - he went to hang with other manager boss types.

Here's what was running through my mind as I finished my beer.

Maybe the reason why this keeps coming up is in what he said to me.

Your manager or manager's manager or someone up the food chain is looking for information. If they are not seeing anything they understand or can report to THEIR manager on, they'll look for what is commonly discussed among their peers or in articles or books or webinars or conference presentations.

They are looking for information they can use that is presented in a way they and other managers can understand. Let's be realistic. They don't have time to filter through 18 pages of buzz words, technical jargon, mumbo-jumbo and falderal. They want a single-page, bullet-pointed list that summarizes all that rubbish. They would also probably like some graphic representation that presents information clearly - and accurately. They don't have time or patience to sift through a dozen footnotes explaining the graphic.

You may object strongly to some level of manager higher than you being sucked in by snake-oil salesmen or some other word for con-artists.

Still, if the con-artists and snake-oil salesmen are presenting them with a tool or a "solution" or a method that gives them something resembling what they want and need, that will seem like The Solution to them, no matter how wrong you think The Solution is.

Then again, maybe the solution people are looking for, the one that looks right based on their understanding, will work. Maybe, just maybe, people will land on something that sounds like what "experts" are talking about. They are looking at the results for "software testing tools" in their favorite search engine and wondering why their company is not using one or some of these tools.

Then they enter "best software testing tools" into the search engine and see MORE results. And these are for the BEST testing tools. Some are "Manual Testing Tools" some are "Automated Testing Tools" and when you read the ad copy on the webpage - they sound AWESOME.

Then they wonder why their company is not using one of these BEST tools.

They read articles online or in a magazine that talk about "test everything" because if you don't, bugs might get through. And they read about how they can have software with ZERO BUGS. And then there are the articles about choosing the RIGHT things to test, since there isn't really time to test everything. And then they get confused, because the stuff they read in their search results talks about testing faster and better with these tools - and delivering results that can be tracked.

So they think about how to track results, how to measure things like "improvement" and "quality," and how to see if there is any way to tell if anything is being done, let alone done right - and when changes are implemented, whether they make any difference at all.

This leads to things like methodologies, processes, process models, and Scrum and Kanban and how "Agile" is better than other ways of working and how to be Agile and how to measure how Agile you are and how to show that being Agile is better and how to Scale Agile and Disciplined Agile... and... and...

Still, the bosses want to know why we (the resident "experts" in testing) don't do the things they read about or hear about in meetings or conferences or training sessions or podcasts. We can explain how those things are not really helpful and don't really work - and yet the software being made still sucks. If more large customers don't like the software and cancel, it is likely that someday we'll run out of new customers to use the software we make, and if we can't keep more of our customers happy, we are all screwed completely.

Why don't managers understand what we are doing and what good testing is?

Why is it they keep coming back and asking fundamental questions about what we do, how to evaluate progress, how to look for improvement in quality, how to track customer satisfaction, how to know the software works the way the sales people say it works...

Why is that?

When managers, bosses, whatever, are looking for help to make things happen - make things better or find some sense of progress, how do we respond?

Do we scoff openly at them and say "that will never work?" Personally, I find it not wise to scoff openly at managers and directors and VPs of whatever, but your experience may be different than mine.

Maybe we say "This is not going to help because..." and explain why it won't do what they are hoping it will. Perhaps our more sophisticated thought leader types might patiently explain what a "good" thing is (metric or tracking tool or some other tool.)

Then maybe give some sage advice like "You need to decide what you want to learn from this and what you intend to do with what you learn."

They blink. Maybe they realize they have no idea what that means. Let's face it - an awful lot of people have no idea what that statement means.

So far, except for the scoffing part, there is not a lot to object to in my experience.

The problem, and the reason why the questions keep coming back, is we have not provided an example of a good alternative.

If our method of teaching managers about testing and test management extends only as far as what won't work and why things are a bad idea, then we have greater issues.

We have only done half of what we need to do.

If a tool will not do what is needed, is there an alternative? Maybe there will need to be more than one working in parallel.

If they are trying to discover something about quality of the software, do we suggest paths to discover what is needed? Do we offer to help them with this and work on finding the solution together?

In short, if all we do is tell them something won't work, we are not doing our job.

We have no grounds to complain if we have not worked hard to provide viable alternatives they can understand.

Maybe the great gulf and obstacle to understanding is more simply put.

People (like Managers, Developers, Product Owners, Business Analysts, Project Managers, Scrum Masters and Testers) have no shared concept of what testing itself is.

Can we blame them if they do not understand what Good Testing is?