In this blog post I described scenarios I have seen play out many times: official mandates based on some understanding of Scrum, some version of "Best Practices," and a fairly shallow understanding of software development and testing.
If we stop there, it appears that there is no avoiding the traps that lead to failure of the sprint and the work the sprint is supporting. But, there are options to make things a wee bit better.
Common Option 1: Hardening Sprints
I know - the point of Scrum is to produce regular product increments that can potentially be released to a customer, the production environment, or some other place. For many large organizations, the idea of incremental improvements, particularly when it comes to their flagship software, seems anathema.
The result is bundling the work of many development teams from many sprints into one grand release.
When each team looks up and outside their silo for the first time after a sprint, or four, the collected product increments (new version of the software) are pulled together. The next step is often something like a "hardening sprint" to exercise all the pieces that were worked on from all the teams and make sure everything works.
As much as this violates Scrum orthodoxy, I can see why this might seem a really good idea. After all, you have the opportunity to exercise all the changes en masse in a test environment, with activity as close to the "real world" as possible.
The problem I see, many, many times, is that each team simply reruns the same automated scripts they ran when pushing to finish the sprint and get to "Done." The interesting thing to me is that bugs are sometimes still found, even when nothing has "changed."
This can come from any number of causes, from the mundane - data that was expected to have certain values has been changed - to the interesting - when team X runs part of their tests while team Y runs part of theirs, one team, or both, encounters unexpected errors.
Another challenge I have seen often is getting people to remember what was done early in the cycle - possibly months before the "Hardening Sprint" started. Some changes are small and stand alone. Some are built on by later sprints. Do people really remember which was which? When they built their automated acceptance tests, did they update the tests covering work from earlier in the iteration?
In the end, someone, maybe a Release Manager, declares "Done" for the "Hardening Sprint" and the release is ready to be moved to production, or the customer, or, wherever it is supposed to go.
And more bugs are found, even when no known bugs existed.
Less Common Option 2: Integrating Testing
In a growing number of organizations, the responsibility for exercising how applications work together, how well they integrate, is not under the purview of the people making the software. The reasons for this are many, and most of them I reject out of hand as being essentially Tayloristic "Scientific Management" applied to software development.
The result is that people run a series of tests against various applications in a different environment than the one they were developed in, and send the bugs back to the development teams. This generally happens after the development team has declared "Done" and moved on.
Now the bugs found by the next group testing the software come back, get pulled into the backlog, and presumably get selected for the next sprint. By then it is at least two weeks since they were introduced, probably four, and likely six - depending on how long it takes that group to get to exercising new versions.
What if, we collaborated?
What if we recognize that having a group doing testing outside of the group that did the development work is not what the Scrum Guide means when referring to Cross-functional teams? (Really, here's the current/2017 version of The Scrum Guide)
What if we ignore the mandates and structure and cooperate to make each other's lives easier?
What if we call someone from that other team, meet for a coffee, maybe a donut as well, possibly lunch, and say something like "Look. It sucks for us that your tests find all these bugs in our stuff. It sucks for you that you get the same stuff to test over and over again. Maybe there's something to help both of us..."
"Can we get some of the scripts you run against our stuff so we can try running them and catching this stuff earlier? I know it means we'll need to configure or build some different test data, but maybe that's part of the problem? If we can get this stuff running, I think it might just save us both a lot of needless hassle. What do you think?"
Then, when you get the new tests and test data ready and you fire them off - check the results carefully. Check the logs, check the subtle stuff. THEN, take the results to the team and talk about what you found. Share the information so you can all get better.
Not everyone is likely to go for the idea. Still, if you are willing to try, you might just make life a little better for both teams - and your customers.
Your software still won't be perfect, but it will likely be closer to better.
I've seen it.
I've done exactly that.
It can work for you, too.