I was asked a question by email and responded. This is a much fuller response.
Estimation is one of those things most of us get pinged about. "We need estimates on testing the Blah System." We can give really good reasons why estimates don't tell us much and tend to turn into a stick to beat us with. Still, we're expected to come up with some idea of how long something is going to take when what is being done is foggy at best.
We make a reasonable effort and come up with a reasonable estimate of the amount of work we need to do, and the time and effort that will take. Then, as we learn more, we realize that things will a) take a lot more time/effort or b) take less time/effort because of things we did not or could not account for.
It seems that every time I do a Test Process Improvement workshop, I start with something like "What things do your team, and the individuals on it, do now that you wish you did better?" EVERY TIME the first or second answer is "We're lousy at estimates; we need to do estimation better."
I've learned in (mumble mumble) years of software development work that I am not clairvoyant. I can't see the future, nor am I a mind reader. I've also learned that most people with absolutely accurate estimation calculators are snake-oil salesmen. Or they are delusional. They may be out-and-out liars, but I'd prefer to think better of people.
Documentation may help - maybe. If there is some reference to previous work on the same system, or similar projects, that may help. If you are operating based on tribal knowledge, then you may have a bit of a challenge in convincing anyone else that this is something other than a wild guess.
If you look to do more than simply test "functional requirements" and look for other things, like how the system actually behaves or perhaps non-functional requirements, how do you plan for that, let alone come up with some level of estimate?
Here's one of my favorite tools -
Mind Maps
Huh? Mind Maps? Really? No, really - I use them in a couple of ways.
First, I use them to track requirements (business/solution/whatever) and associate design ideas with the requirements. Sometimes this leads to a LOT of dotted lines. Sometimes it shows no lines whatsoever. Either way, it helps me visualize the software and what needs to be considered.
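To make that concrete, here is a rough, hypothetical sketch of the same idea in Python - the requirement names and design notes are invented for illustration, and a real map would live in a mind-mapping tool rather than in code:

```python
# A made-up requirements-to-design map, sketched as plain data.
# Real maps live in a mind-mapping tool; this just shows the idea.
requirements = {
    "REQ-101 Customer registration": ["registration form design", "language preference handling"],
    "REQ-102 Order history export": ["CSV export job design"],
    "REQ-103 Audit logging": [],  # nothing linked yet - "no lines whatsoever"
}

# The requirements with nothing attached are the gaps worth raising.
for req, design_ideas in requirements.items():
    if not design_ideas:
        print(f"No design ideas associated with: {req}")
```

The items with nothing attached are exactly the ones worth asking questions about.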
Second - and here is where it gets interesting - I use them to map what areas of the application, or software, or whatever CAN be tested. By this I mean showing what is available to be tested or what has been delivered. I can also show what is scheduled for delivery and what is not expected to change. Then, as testing progresses, I can associate session reports with them. Sometimes that takes a bubble or three with the project wiki or SharePoint locations for the session reports associated with each logical piece.
THAT gives me a reference for both the functional and non-functional aspects that need to be exercised.
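As a hypothetical illustration, that coverage map could be sketched as data - the area names, statuses, and report locations below are all invented:

```python
# A hypothetical "what CAN be tested" map as data. Area names,
# statuses, and report locations are invented for illustration.
coverage_map = {
    "Registration": {
        "status": "delivered",   # available to be tested now
        "session_reports": [
            "https://wiki.example.com/sessions/registration-001",
            "https://wiki.example.com/sessions/registration-002",
        ],
    },
    "Order history": {
        "status": "scheduled",   # scheduled for delivery, not testable yet
        "session_reports": [],
    },
    "Audit logging": {
        "status": "unchanged",   # not expected to change this project
        "session_reports": [],
    },
}

# One line per logical piece: what it is, and how much has been exercised.
for area, info in coverage_map.items():
    print(f"{area}: {info['status']}, {len(info['session_reports'])} session report(s)")
```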
This ties back to the question of "How does the software behave?" In this instance, I'm not testing to validate requirements - I'm exercising the software to see how it behaves. I can then compare the results with expectations, one part of which is the documented requirements.
In the end, I have a full visual representation of what has been exercised and how - and how thoroughly. This gives me something I can take back to stakeholders and say "This is what we did and how we did it. These are the areas where we found interesting variations and needed to make changes. Are you comfortable with this, or would you be more comfortable with more testing?"
Rather than talking about tests and test cases and what has been run and not run - which I've found is of very little value to most people, no matter what the "best practice" folks tell us - I talk about the functions within the system we have exercised and the depth to which we exercised them.
But we were talking about estimation, right?
The next time I (or anyone else) work on a project that is changing part of that system, I know what was done before and how long it took. After all, we have the session reports saved and referenced in the mind map, right?
This can also help me when I need to consider regression testing - for this project or for future projects. I have a gauge I can turn to for reference.
So, with that information, a description of what the change is, and at least an idea of a portion of the risk based on the impact of the change, we can come up with something approaching an estimate - one that is perhaps better than an absolute guess.
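As a made-up, back-of-the-envelope example of what that can look like - every number here is invented, and the risk factor is pure judgment:

```python
# A rough estimate built from saved session reports.
# Every number here is invented; the risk factor is pure judgment.
past_effort_hours = {          # pulled from last project's session reports
    "Registration": 24,
    "Order history": 16,
    "Audit logging": 8,
}

impacted_areas = ["Registration", "Audit logging"]  # from the change description
risk_factor = 1.5              # high-impact change, so pad the history

estimate = sum(past_effort_hours[area] for area in impacted_areas) * risk_factor
print(f"Starting estimate: {estimate:.0f} hours")   # 48 - a starting point, not a promise
```

That output is a conversation starter with stakeholders, not a commitment.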
Tuesday, March 4, 2014
Interesting post Pete. I like mindmaps too and often use them to map out requirements, ideas, areas to focus on and other useful stuff related to test projects. I inevitably find, though, that over time I end up with either a bunch of loosely associated mindmaps or one huuge mindmap that I abandon because it's too much effort to keep up to date. So I wondered, have you struggled with mindmaps, or did using them just come naturally? If you struggled, how did you overcome it, and have you got some tips you can pass on?
Simon! Excellent question deserving a thought-out answer. (Translated: I think you prompted a follow-up blog post.) Short answer is "Yes." :)
It really depends on the context of the effort, no? The nature of the project will determine the nature/complexity of the mind map. Some I abandon quickly; others I break into components and only pull them together as a visual for people trying to downplay the complexity of the project (not nice, but a helpful bit of manipulation).
Some are very straightforward - easy to read and maintain. I had one that tracked where a customer's language preference, chosen when registering on a website, impacted the system. That was a simple change in the code, but the impact was massive.
As for learning, I need to think on explaining that further - hence the follow-up blog post.
Thanks for the comment!