In her book "The March of Folly: From Troy to Vietnam," Barbara Tuchman talks about "the pursuits by governments contrary to their self interests." It's an exceptional book. I strongly suggest you read it. Here's a link.
Any software tester who has an interest in how things could be different, or who at least has an inkling that things - in their organization or in general - are not working or are simply messed up, would do well to read the first two pages. (My paperback copy is a 1985 Ballantine Books edition of the 1984 copyright renewal.) It's good. You could substitute "test managers" (or "software managers" or "software development leaders") for "governments" and not change that first page one iota.
Tuchman does not throw everyone who makes a mistake, or even a serious blunder, under the bus of "folly." No, she is more precise than that. To be painted with her brush, several specific conditions must be met.
In her words, "the policy adopted must meet three criteria" (sounds better that way, doesn't it?).
First, "it must have been perceived as counter productive in its own time, not in hindsight." That is important. After the wheels fall off, its pretty easy to track back to the "Oh, THIS should have been done differently."
Second, "a feasible alternative course of action must have been available." OK, fair game that. If you know something is likely to be a bad idea and there aren't any other options to try, then it's not "folly." It often is categorized as "bad luck."
Third, "the policy in question must be that of a group, not an individual ruler, and should persist beyond any one political lifetime." That one is a little more challenging. How long is a "political lifetime?" That rather varies, doesn't it? It could be an "administration" in the US, it could be a Board of Directors configuration for a company. It could be a "management team" structure. It could be several things - all of which add up to a passage of time, not a quick "policy du jour."
And software - Antediluvian era
Some 30 years ago, almost exactly 30, I was working as a programmer. For clarification, it was common practice to have programmers, programmer analysts and folks with similar titles do things like gather requirements, confirm requirements with users/customers, create the design, plan any file structure changes that might be needed, write the code and test it. Sometimes we worked in pairs. Sometimes there would be three of us working the same project.
The company adopted a new process model for making software.
It relied on defining requirements in advance, hammering down every possibility and variance up front, then getting everyone to "sign off" on a form saying "Yes, this is precisely what we want/need." Then we would take those requirements and use them for building the design of the software. Once we had the signatures, there was no need to "bother" the "users" again. If there were questions, we'd refer to those same requirements.
Then we would use the documented requirements to drive how we documented pretty much everything else. Our design referenced the requirements. If we had a new screen, report or other form of interface, we could make mock-ups to show exactly what each item would look like - and how it related to which requirements.
We even had comments in the code to reflect the section of the requirements that each piece of code was addressing. We used the requirements when building test strategies, test plans and detailed test cases. We could precisely identify each section of each requirement, then show how every section of every requirement was referenced in every piece of the design, the code, the file structure and then in the test plan - specifically in each test case.
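Just to make the mechanics concrete, here is a minimal sketch of what that wall-to-wall traceability looked like in spirit, written in modern Python. The requirement IDs, the fee rule and the matrix below are purely hypothetical illustrations - in reality all of this lived in comment blocks, paper forms and the test plan document.

```python
# A minimal sketch of requirement-to-test traceability, in modern terms.
# The requirement IDs (REQ-4.2.1 etc.), the fee rule and the mapping are
# hypothetical illustrations, not what we actually had back then.

# In the code, each routine carried a reference to the requirement section:
def calculate_late_fee(balance, days_overdue):
    """Implements REQ-4.2.1: late fee is 1.5% of balance after 30 days."""
    # hypothetical business rule, purely for illustration
    return round(balance * 0.015, 2) if days_overdue > 30 else 0.0

# In the test plan, every test case pointed back at a requirement section,
# so "coverage" could be shown by walking the matrix:
TRACEABILITY = {
    "REQ-4.2.1": ["TC-101", "TC-102"],   # late fee calculation
    "REQ-4.2.2": ["TC-103"],             # fee waiver on disputed accounts
}

def untested_requirements(matrix):
    """Return requirement IDs that have no test case attached."""
    return [req for req, cases in matrix.items() if not cases]

if __name__ == "__main__":
    assert calculate_late_fee(200.0, 45) == 3.0
    assert calculate_late_fee(200.0, 10) == 0.0
    print("Untested:", untested_requirements(TRACEABILITY))
```

The appeal is obvious: every artifact appears to point at every other artifact. The gaps, of course, show up in the requirements themselves, which no matrix can detect.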
I saw this and thought, "Wow. This will fix so many problems we have." The most senior person on the team - his title was actually "Senior Programmer Analyst," about as high as you could go without turning into a manager - had doubts. He was not sure that everything would work as neatly as we were told to expect. I shrugged and wrote his reservations off as being "old school."
And then I tried it.
Hmmm. Things were more complicated than anyone thought. We kept finding conditions that we did not anticipate during the weeks of developing the requirements and doing design. We kept finding holes in our logic.
The "good" news was that since everyone had signed off and said "This is it!" I only got into a little trouble. The project was delayed while we reworked the requirements and got the change agreements signed then changed the code and... right. We found more stuff that needed to change.
The folks running the initiative gently patted my hand and said "As you get more experience with the process, it will be easier. The first few projects will have problems as you learn about the process. Once you follow it precisely, these problems will go away. You'll see."
That seemed comfortable. I took solace in that and tried again.
Three major projects and - somehow - the same thing happened. Not just for me, but all the programmers, programmer analysts, senior programmer analysts - everyone who wrote code and did this stuff ran into the same problems.
Somehow, none of us were following the process "correctly." If we were, these problems would not be happening.
Several years later... Deja Vu
At another company, I was now the senior developer. My title was "Information Analyst." I was working with some very talented people on cross-platform technologies. At the time, it was very bleeding edge: Unix-based platforms doing some stuff, the trusty IBM mainframe filling the role of uber-data-server/host, and then some Windows-based stuff, all talking and working together. Along with code stuff, I was also mentoring/helping with testing stuff. There wasn't a 'test team' at this shop; we worked together, and some folks coded while some folks tested. The next project, those roles swapped. I was fortunate to have a talented, open-minded group to work with.
There was a change in leadership. We needed structure. We needed repeatability. We needed to fix serious problems in how we made software.
They rolled out a new model - a new software development process. Everyone had defined roles.
We needed to focus on getting the requirements fully identified before we did anything else. We needed to get every possible combination of conditions identified before any design work was done. We needed to do work around codifying how requirements looked.
Then we could design software that conformed perfectly to the requirements. We could examine all the aspects of a given requirement and handle them in our design and then our code. If a requirement called for a new screen, report or other form of interface, we could make mock-ups to show exactly what each item would look like - and how it related to which requirements.
Then the code could be developed according to the design, and could reference the design points and the related requirements for how the code was intended to function and what purpose it was supposed to fill.
Testing - we could build the test strategy and plan simply by reading the requirements document. Since that was so complete, there was no reason for clarifying questions to the BAs or the users or... anyone else. Testers could sit in their cubes and design tests, then execute them when the code was ready. Except we did not really have testers; we had developers who did the testing for projects they had not written the code for. Except that we sometimes had a problem.
We could map out the expected results in the testing and then ask the people running the test scripts to check "Y" or "N" depending on whether the expected results came up.
Somehow, this company, too, ran into the same problems as the other company. We kept finding conditions we had not accounted for in the detailed requirements gathering. We kept finding conditions no one had anticipated. We kept finding holes.
When we asked about it, we were told the problem was we were not following the process correctly. If we had, we would not have these problems.
Hmmmm.... this sounds really familiar.
The "good news" for my team was that we generally avoided being in too much trouble because everyone who needed to sign off on the requirements had done so. There was some grumbling about we should have done a better job of identifying the requirements, but since everyone had said "Yes, these are all of them" we were able to avoid taking the fall for everyone else.
Still, it was extremely uncomfortable.
A couple years later... Deja Vu Again
Now I was the QA Lead. That was my title. I was working with a small team making testing happen on code that some really talented developers were making happen. We talked about the system, they made code and we tested it.
The customers liked it - really - the folks who used the software and did not work for the company. They noticed the improvement and they liked it - a lot. The Customer Service folks liked it a lot, too. They got a lot fewer calls from angry customers and more of the "Here's what I'm trying to do and I'm not sure how to do it" sort of calls. They tended to prefer those - at least at that company.
Things were working well - so well, in fact, that the test team was moved from that one project group to doing testing for the entire IS Development area. That was fine, except there were two of us for some 100 developers. Ouch.
The "approved model" for making software looked a LOT like the last one. This time, there was the call of "repeatable process" included. We can make everything repeatable and remove errors (I believe "drive them out" was the phrase) by being extremely consistent.
This was applied not only to the information gathering in requirements, but also in design and code development. As one might expect, it was handed on to testing as well. Everything needed to be repeatable. Not only in the design process but absolutely in the execution.
So, while we strove to make those design efforts repeatable, the demand was that all tests would be absolutely repeatable. That struck me as "I'm not sure this really makes sense," but I was game to try. After all, a consultant was in, explaining how this worked, and we were essentially hushed if we had questions or doubts.
The response was something like "The first few times you try it, you will likely have problems. Once you get used to the process and really apply it correctly, the problems will go away and things will run smoothly."
We still had problems. We still struggled. Somehow, even the "golden children" - the ones who were held up as examples to the rest of the staff - had trouble making this stuff work.
A few years later... Deja Vu All Over Again
I was working at a small company. In the time I had been there, we had shifted from a fairly dogmatic approach to testing, where precise steps were followed for each and every test, to a more open form. Simply put, we were avoiding the problem of executing the same tests over and over again, eliminating bugs in that path and ignoring any bugs slightly off that path.
The product was getting better. We had built rules to document the steps we actually took, not the ones we planned to take. When we found a bug, we had the steps that led to it already recorded and so we could plug them straight into the bug tracker. The developers found this more helpful than a general description.
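A rough sketch of that "document what you actually did" idea, in Python - the names and the format are invented for illustration, and the real practice was far less formal, but it shows the shape: record each step as you take it, then emit the whole list as a ready-to-paste section for the bug report.

```python
# A rough sketch of "document the steps you actually took" - names and
# output format are invented for illustration; the real practice was
# far less formal than this.

class SessionLog:
    def __init__(self):
        self.steps = []

    def step(self, description):
        """Record a step at the moment it is performed."""
        self.steps.append(description)

    def repro_block(self, observed, expected):
        """Format the recorded steps as a paste-ready bug report section."""
        lines = [f"{i}. {s}" for i, s in enumerate(self.steps, start=1)]
        return "\n".join(
            ["Steps to reproduce:", *lines,
             f"Observed: {observed}", f"Expected: {expected}"]
        )

# Usage: log while exploring, paste straight into the tracker when a bug appears.
log = SessionLog()
log.step("Log in as a customer-service user")
log.step("Open an account with a disputed balance")
log.step("Apply a late fee from the Fees menu")
print(log.repro_block("fee applied to the disputed amount",
                      "fee applied only to the undisputed balance"))
```

The point was never the tooling; it was that the record reflected the test that actually ran, not the test someone planned weeks earlier.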
We had documents that were needed - requirements, etc. They were sometimes more vague than they should have been. So we tested those as well. This allowed us to have meaningful conversations with people as we worked to define what precisely they expected. Of course, we carried these conversations on as we were working through designing the software and considering how to test it.
Sure, we had to sometimes "redo" what we did - but generally, things worked pretty well. They were getting better with each project.
Then we were bought.
After the inevitable happy-sizing - the "staff re-alignment" that left us with a fragment of the old company staff - we received instruction in the "new way" of creating software.
You start by completely defining requirements - in advance. Nothing happens until everyone agrees that all the requirements are fully documented and complete. Then design happens, and everyone relates the design to the requirements. Coding is done strictly against the design to make sure everything is according to the requirements. Test planning is done with a specific strategy created to reflect the requirements. Then test plans are created from the strategy and refer to the requirements. The test cases are detailed, repeatable sets of instructions to make sure the tests conform to the requirements and can be executed many times without variation.
"Yes," we were assured, "this will take some getting used to, but once you understand the new process and follow it, you won't have any problems and the software will be great." As projects had problems, of course it was because we were not "following the process correctly."
Looking back...
The first time I encountered a process like that, I was all over it. In my very junior programmer mind it seemed to make perfect sense. The next time, I was wary. I had seen it fail before - and let's face it, if problems are the result of people "not following the process correctly" - or "not understanding" the process - then I think there may be a problem with the process. After all, these were smart people who had gone through the training for the new way of doing things.
The third time? Right. Good luck with that. I expressed my concerns and my reasons for being unconvinced. The last time - I rebelled. Openly and loudly. I broke it down with them, made reference to the model they had drawn on, and pointed to documentation from multiple sources demonstrating that the model was innately flawed. No amount of "tweaking" would fix the central issues.
I was told, "No, this was developed for us, specifically." I challenged that by pointing out the reference materials readily available on the internet showing this process model - complete with the same step names, artifact names and descriptions of the process. I then explained why this model would not and could not work in their context. (By the way, it was for the same reasons it was doomed in each of the previous instances I encountered it...)
Those issues are these -
* Human language is an imprecise form of communication. People can and will misunderstand intent.
* Requirements are rarely understood, even by the people who "know" the requirements best - the people asking for the change. People have a hard time considering all the possible paths and flows that result from a given decision. Once they see the result, they will better understand their own needs.
* Humans do not think in a linear manner. That is the single biggest problem I see in the "repeatable" models put forward. At some point there is a cloud with the word "Think" present. At that point, the linear model fails.
With each new standard model put forward, there are people working in the industry it would govern who have practical experience with the work the standard is intended to direct and mold.
When they raise objections, dismissing them as "self-serving" is, in itself, self-serving.
Your pet project may well be ugly and unwieldy. Admit that possibility to yourself at least, or join the list of "leaders" who commit folly - and destroy the thing they are trying to save or build.
Sunday, September 28, 2014