In this post, I wrote about demands and practices that lead to myriad problems in software development - even though every story (really, Backlog Item, but, whatever) is marked as "Done."
In this followup post, I wrote about things that can be done to mitigate the damage, a bit.
This post looks at the problems described in the first post (above) and how they might be tied to answering the implied question in this post.
I suspect these are all tied together. I also suspect that people have been told by experts, or read a book, or talked with a peer at a large company who heard that a cool Silicon Valley company is now doing this - and so they are jumping in so they can attract the best talent. Or something.
Let's talk about a couple of subtle points.
Testing
That is something all of us do, every day, whether we want to admit it or not. It may not be testing software, but it may be something else, like "I wonder what happens if I do this." At one time, most people writing production-facing, customer-impacting code were expected to test it - thoroughly. Then we'd get another person on the development team to test it as well. It was a matter of professional pride to have no bugs found by that other person, and a matter of pride to find bugs in other people's work.
I remember one Senior guy telling me "Before you hand this off to someone to test for you, make sure you have tested everything you possibly can. Document what you did and how, so that if something IS found in later testing, we can see what the difference is. Then the next time, you can test that condition when you test the others. Make them work hard to find something wrong."
That stuck with me and helps guide my thinking around testing - even 30+ years later.
Things have shifted a bit. Much of what we did then can be done fairly quickly using one or more tools to assist us - Automation. Still, we need some level of certainty that what we are using to help us is actually helping us. The scripts for the "automated tests" are software, and they need diligent testing just as much as the product itself - the product the company needs to sell so customers are happy and we don't get sued, or worse, brought up on criminal charges.
Still, when done properly, automated test scripts can help us blow through mundane tasks and allow people to focus on the "interesting" areas that need to be examined.
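To make that point concrete, here is a minimal sketch in Python/pytest of treating the test code itself as something worth testing. The comparison helper and its names are hypothetical - it stands in for whatever little utilities your automated suite leans on.

```python
# A minimal sketch (hypothetical names) of testing the test code itself.
# responses_match() is the kind of comparison helper an automated suite leans on;
# if it says "match" too eagerly, every check built on it goes green for the wrong reason.

def responses_match(expected: dict, actual: dict, ignore=frozenset()) -> bool:
    """Compare two API responses, ignoring fields (like timestamps) we expect to differ."""
    keys = (expected.keys() | actual.keys()) - set(ignore)
    return all(expected.get(k) == actual.get(k) for k in keys)

def test_match_ignores_only_the_listed_fields():
    assert responses_match({"id": 1, "ts": "a"}, {"id": 1, "ts": "b"}, ignore={"ts"})

def test_match_still_catches_a_real_difference():
    # The dangerous failure mode for helper code: false confidence.
    assert not responses_match({"id": 1}, {"id": 2}, ignore={"ts"})
```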
OK, caveat #1 - check the logs generated by the tests; don't just check that the indicator is green. There MAY be something else happening you have not accounted for. Just do a sanity check.
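As a rough illustration of that caveat (the function, logger name, and amounts are made up), a pytest check might assert on the log as well as the green return value:

```python
# Sketch of caveat #1 in pytest. process_payment() and the "payments" logger are
# hypothetical stand-ins for whatever the automated script exercises.

import logging

logger = logging.getLogger("payments")

def process_payment(amount: float) -> bool:
    if amount > 10_000:
        # a path nobody told the test author about
        logger.error("amount %s exceeded limit, falling back to manual review", amount)
    return True  # the "green indicator": the call reports success either way

def test_payment_succeeds_and_logs_cleanly(caplog):
    with caplog.at_level(logging.INFO, logger="payments"):
        assert process_payment(500.00) is True  # the usual check
    # The sanity check: nothing alarming in the log, even though the result was green.
    # Run the same flow with 50_000 and the return value alone would hide the fallback.
    alarming = [r.getMessage() for r in caplog.records if r.levelno >= logging.WARNING]
    assert not alarming, f"green result, but the log says otherwise: {alarming}"
```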
Then, while code is being worked on and unit tests are being prepared (presuming you are doing TDD or something similar), have someone NOT working on that piece of code look at the story (or backlog item, or whatever), look at the tests defined in TDD, and ask "What would happen if this happened?"
Now, that could be something simple, like an unexpected value for a variable. It could also be something more complex - for example, a related application changing the state of the data this application is working with.
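Here is a sketch of what that outside question can turn into, assuming a hypothetical parse_quantity() function and pytest - the reviewer's parametrized cases probe values the original TDD tests may never have mentioned:

```python
# Hypothetical sketch: a reviewer who did NOT write parse_quantity() adds the
# "what would happen if...?" cases the original TDD tests may have skipped.

import pytest

def parse_quantity(raw: str) -> int:
    value = int(raw.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# empty, whitespace, text, decimals, negatives, absurd length, non-ASCII digits
# (the last one is a reminder that int() happily accepts Arabic-Indic digits)
@pytest.mark.parametrize("raw", ["", "   ", "abc", "1.5", "-3", "9" * 40, "١٢"])
def test_unexpected_values_are_rejected_or_handled(raw):
    # Each case should either return a sane value or raise ValueError -
    # never blow up with an unrelated exception or quietly return garbage.
    try:
        assert parse_quantity(raw) >= 0
    except ValueError:
        pass
```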
One approach I have used very often: look at a representation (mindmap/decision tree/state diagram) of what the software looks like before, and after, this piece is added. What types of transactions are being impacted by this change? Are there any transactions that should not be impacted? Does the test suite, as it is running, reflect these possible paths?
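One rough, hypothetical way to keep that diagram honest is to write the transitions down as data and let the suite compare them against the paths it actually drives - the states and scenarios below are illustrative only:

```python
# A rough sketch of turning the before/after state diagram into data the suite can check.
# States, transitions, and exercised paths are hypothetical.

ALLOWED_TRANSITIONS = {
    ("cart", "checkout"),
    ("checkout", "paid"),
    ("checkout", "cart"),   # customer backs out
    ("paid", "shipped"),
}

# Paths the end-to-end scenarios actually drive (imagine these collected from the suite).
EXERCISED_PATHS = [
    ["cart", "checkout", "paid", "shipped"],
    ["cart", "checkout", "cart"],
]

def _exercised_transitions():
    return {(a, b) for path in EXERCISED_PATHS for a, b in zip(path, path[1:])}

def test_every_transition_on_the_diagram_is_exercised():
    missing = ALLOWED_TRANSITIONS - _exercised_transitions()
    assert not missing, f"on the diagram, never driven by a test: {missing}"

def test_no_test_drives_a_transition_that_is_not_on_the_diagram():
    undocumented = _exercised_transitions() - ALLOWED_TRANSITIONS
    assert not undocumented, f"driven by tests, missing from the diagram: {undocumented}"
```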
Has someone evaluated the paths through the code? Beyond simple line and branch coverage, how confident are you in understanding the potentially obscure relationship between the software and, say, the machine it is running on going to sleep?
Have you explored the behavior of the software, not simply whether it "works"? Are there any areas that have not been considered? Are there any "ghosts in the machine"? How do you know?
Testing in Almost-Agile
I have been told by many "experts" that there is no testing in "Agile." They base this, partly, on the language or wording in the Agile Manifesto. They base it partly on the emphasis on everyone being responsible for quality in a "whole team" environment.
Some even point to the large Silicon Valley companies mentioned earlier who state very publicly they "don't have any testers."
Yet, when pressed, there are often clarifying statements like "We don't have testers in the Scrum teams, because we rely on automation for the Function testing." Here is where things kind of break down.
No "testers" in the "Scrum teams" because people write automation code to test the functions. When asked about "Integration Testing" or Load or Performance or Security or any of the other aspects of testing that testing specialists (sometimes referred to as "Testers") can help you with, do really well, and limit exposure to future problems, and possibly future front pages and lawsuits - the response often is "We have other teams do that."
Wait - What?
The "Scrum Team" declares a piece of work "Done" and then at least one or two other teams do their thing and demonstrate it is not really "Done"?
Mayhap this is the source of "Done-Done"? Is it possible to have Done-Done-Done? Maybe, depending on how many teams outside of the people developing the software there are.
That sounds pretty Not-Agile to me - maybe Almost-Agile - certainly not Scrum. It sounds much more like one of the Command-and-Control models that impose Agile terms (often Scrum) on top of some form of "traditional software development methodology" like "Waterfall." Then they sing the praises of how awesome "Agile" is and how much everything else stinks - except they are busy in stage gate meetings and getting their "Requirement Sprint" stuff done and working on their "Hardening Sprints."
Another Way - Be Flexible
Look at what the team can work on NOW for the greatest benefit to the project.
What do I mean? Let's start with one thing that the customer (in the person of the product owner or some other proxy) really, really wants or needs - more than anything else.
Figure out the pieces to make that happen - at least the big ones;
make sure people understand and agree on what the pieces mean and what they really are.
Then pick a piece - like, the one that looks like it will deliver the biggest bang NOW -
OR - one that will set you up to deliver the biggest bang in the next iteration;
Then, figure out the following (a rough sketch of tracking these answers follows the list)...
- how to know if that piece works individually;
- how to know if that piece works with other pieces that are done;
- how to know if that piece will negatively impact the way the whole package is supposed to work;
- how to know if that piece might open a security vulnerability;
- if there is a way to find any unexpected behaviors in this piece or by adding this piece to what has been done.
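Here is the rough sketch promised above - one hypothetical way to record an answer per question, per piece of work, so "Done" can only be claimed once none are missing. The names and checks are illustrations, not a prescription:

```python
# Hypothetical sketch: record an answer (evidence) for each question, per piece of work,
# and only call it "Done" when none are missing.

from dataclasses import dataclass, field

QUESTIONS = [
    "works individually",
    "works with the pieces already done",
    "does not break how the whole package works",
    "does not open a security vulnerability",
    "explored for unexpected behavior",
]

@dataclass
class PieceOfWork:
    name: str
    answers: dict[str, str] = field(default_factory=dict)  # question -> evidence

def is_done(piece: PieceOfWork) -> tuple[bool, list[str]]:
    missing = [q for q in QUESTIONS if not piece.answers.get(q)]
    return (not missing, missing)

# Usage: two questions answered, three still open - so the piece is not "Done" yet.
piece = PieceOfWork("apply discount code at checkout")
piece.answers["works individually"] = "unit tests in test_discount.py"
piece.answers["works with the pieces already done"] = "checkout integration suite"
done, missing = is_done(piece)
assert not done and len(missing) == 3
```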
I'm not advocating doing this all at once for every piece/ticket/story/whatever. I am suggesting these be defined and worked on and then actually executed or completed before any task is labelled "Done."
Some of these may be easily automated - most people automate the bit about each piece "works individually."
If your team does not have people on it with the skill sets needed to do these things, I'd suggest you are failing in a very fundamental way. Can I really say that? Consider that Scrum calls for cross-functional teams as being needed for success. Now, it might be that you also need specialists in given areas and you simply can't have 1 per team - but you can share. Through decent communication and cooperation, that can be worked out.
Still, the tasks listed above will tend to be pretty specific to the work each team is doing. The dynamics of each team will vary, as will the nature of some of the most fundamental concepts like - "does it work?"
Of these tasks, simple and complex alike, perhaps the most challenging is the last one in the list.
Is there a way to find unexpected behavior in this individual piece we are working on? Is there a way to find unexpected behavior when it gets added to the whole?
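One concrete technique for that kind of hunting - my example, not something prescribed here - is property-based testing. The sketch below uses Python's hypothesis library against a hypothetical normalize_name() function: instead of listing expected examples, it states properties that should always hold and lets the tool search for inputs that break them.

```python
# One way (of several) to hunt for the unexpected: property-based testing with the
# hypothesis library. normalize_name() is a hypothetical function under test.

from hypothesis import given, strategies as st

def normalize_name(raw: str) -> str:
    """Hypothetical: collapse whitespace and title-case a customer name."""
    return " ".join(raw.split()).title()

@given(st.text())
def test_no_double_spaces_ever_survive(raw):
    assert "  " not in normalize_name(raw)

@given(st.text())
def test_no_leading_or_trailing_whitespace(raw):
    result = normalize_name(raw)
    assert result == result.strip()
```

The value is less in these particular properties and more in the habit: the tool will happily feed in emoji, right-to-left text, and forty-character "names" that no example-based test would have listed.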
These are fundamentally different from what most people mean by "Regression Testing." They are tasks that are taken up in the hopes of illuminating the unknown. We are trying to shine a light into what is expected and show what actually IS.
But, who has those kinds of skills?
They need to be able to understand the need of the customer or business AND understand the requirements and expectations AND understand the risks around poor performance or weak security, and the costs or trade-offs around making these less vulnerable. They need to be able to understand these things and explain them to the rest of the team. They need to be able to look beyond what is written down and see what is not - to look beyond the edge of the maps and consider "what happens if..." Then, these people need to be able to share these concepts and techniques with other people so they understand what can be done, and how it can be done differently and better.
These things are common among a certain group of professionals. These professionals work very hard honing their craft, making things better a little at a time. These people work hard to share ideas and get people to try something different.
They are often scorned and looked down upon by people who do not understand what the real purpose is.
These people have many names and titles. Sometimes they are "Quality Advocates," other times they are "Customer Advocates." However, they are commonly called Testers.