OK.
I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me, "Look, we're not supposed to worry about that UX stuff you talk about. We're only supposed to worry about the requirements."
If you are, let me say this: You are soooooooooooooooo wrong.
No, really. Even if there is someone else who will "test" that, I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining. "Reasonable person" being part of the polyglot that many folk label as "users." You know - the people who are actually expected to use the software to do what they need to do? Those folks?
It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay you to use the software you are working on.
Your software can meet all the documented requirements there are. If the people using it can't easily do what they need to do, then it is rubbish.
OK, so maybe I'm being too harsh. Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" 20 minutes after I was supposed to be airborne) kinda get to me. Or maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required of the people who need the software to do their jobs.
An Example
Consider some software I recently encountered. It is a new feature rolled out as a modeling tool for people with investments through this particular firm.
To use it, I needed to sign in to my account. No worries. From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had. There was a cool feature that let me model what could happen if I tweaked some allocations - essentially moving money from one account to another, one type of fund to another - and see the possible impact on my overall portfolio over time.
So far, so good, right? I open the new feature to see what it tells me.
The first screen asked me to confirm my logon id, my name and my account number. Well, ok. If it has the first, why does it need the other two? (My first thought was a little less polite, but you get the idea.)
So I enter the requested information, click submit and POOF! A screen appears asking which types of accounts I currently have with them. (Really? I've given you information to identify me and you still want me to identify the types of accounts I have? This is kinda silly, but, ok.)
I open another screen to make sure I match the exact type of account I have against the list of options - there are many with similar names, and I do not want to get them confused.
It then asked me to enter the current balance I had in each of the accounts.
WHAT???? You KNOW what I have! It is on this other screen I'm looking at! Both screens are part of the same system for crying out loud. (or at least typing in all caps with a bunch of question-marks.) This is getting silly.
So, I have a thought. Maybe, this is intended to be strictly hypothetical. OK, I'll give that a shot.
I hit the back button until I land on the page to enter the types of accounts. I swap some of my real accounts for accounts I don't have - hit next and "We're sorry, your selections do not agree with our records." OK - so much for that idea.
Think on
Now, I do not want to disparage the people who obviously worked very hard on this software. It clearly does something, even if what it does is not quite clear to me. There is clearly some knowledge of my accounts in this tool - so why do I need to enter the information myself?
This seems awkward, at best.
I wonder how the software came to this state. I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.
I wonder if the objections raised were met with "This is only phase one. We'll make those changes for phase two, ok?" I wonder if the testers asked questions about this. I wonder how that can be.
Actually I think I know. I believe I have been in the same situation more than once. Frankly it is no fun. Here is what I have learned from those experiences and how I approach this now.
Lessons
Ask questions.
Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.
Draw them out with a mind map or decision tree or something - paper, napkins, formal tools, whatever (a rough sketch follows this list). They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment. Clarify the requirements as best you can. Even if everyone knows what something means, make sure they all know the same thing.
Limit ambiguity - ask others if their understanding is the same as yours.
If there are buzzwords in the requirement documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).
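For what it's worth, here is a minimal, made-up sketch of that "draw them out" idea, written as a little Python snippet. The requirement text, the questions, and the answers are all invented for illustration; the only point is that writing the questions down makes the unanswered ones impossible to ignore.

    # A made-up requirement "drawn out" as a list of clarifying questions.
    # Nothing here comes from a real project; it only illustrates forcing
    # ambiguities into the open where they can be answered.
    requirement = {
        "text": "Users can transfer funds between accounts",
        "questions": [
            {"q": "Which account types count as 'accounts'?", "answer": None},
            {"q": "Is there a limit per transfer, or per day?", "answer": "Per day, per the policy doc"},
            {"q": "What does the user see if a transfer fails?", "answer": None},
        ],
    }

    # The unanswered questions are the ones worth raising at the next review.
    for item in requirement["questions"]:
        if item["answer"] is None:
            print("Still unclear:", item["q"])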
Is any of this unique to UX? Not really. I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.
The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.
Even if "there is no requirement covering that" - . Ask a question. Raise your hand.
I hate to say that requirements are fallible, but they are. They cannot be your only measure for the "quality" of the software you are working on if you wish to be considered a tester.
They are a starting point. Nothing more.
Proceed from them thoughtfully.
Monday, October 4, 2010
Improving Processes, Part III, or, Why Don Quixote's Quest May Have Ended Better Than Yours Will
A few weeks ago, while looking for some other information, I stumbled across the PowerPoint slides of a conference session on Test Process Improvement that I had decided was "not a good fit" for me. Yeah, I walked out... about 10 minutes into it.
The premise was "If you don't have a Process, you need one. If you don't have a Process, you have Chaos and Chaos is bad." Following the obligatory introduction, and some seven minutes of what appeared to be gratuitous assertions, I said, "Enough" and walked out.
Having a Process is not a silver bullet. Simply having a Process will not magically fix your Chaotic environment. If you are trying to impose Process on the organization wearing your "Tester" white hat or the plate mail of the Quality Paladin, good luck. Most places where I've seen Chaos rule, it's because someone with a lot of scrambled eggs on their hat likes it that way. (I wonder how many metaphors I can pull into one paragraph? Better quit there.)
However, if you have a Process and no one follows it, the question should be why not? My previous blog posts (Part II and Part I of this thread) talked about how the "problem" might not be the real problem and how you need to seriously look at what you are doing before you can fix what might need fixing.
When you look long and hard and honestly at what you and your group are doing, and when you find the places where what is done varies from what The Process says, you must determine why this difference exists.
I suspect that it will boil down to a matter of relevance. The official Process has no relevance to the reality of what actually is needed in those situations. If it is a one-off, then there may be something that can be tweaked. If it is a regular occurrence, then the value of The Process comes into question. If it doesn't work, why pretend it does? Why bother having it at all?
Granted, The Process may have been relevant at one time and things may have changed since it was introduced. However, nothing is permanent. Change is inevitable. Even The Process may need to be updated from time to time.
When you do, look to the Purpose your team is to fulfill. Why do you exist? What is your Charter? What is your Mission? Do you have a Mission? I'll bet you do, even if you don't know what it is.
To start, look to what Management expects. If a boss-type is telling you that the Test Process needs improvement, try talking with them. Discuss with them what they believe needs to be improved or where the gaps are. This may become the basis of the group's Charter.
The Quest that you are expected to follow.
What are they seeing as "broken" that needs to be fixed?
If the gist is "there are too many defects being found by customers," ask if there are specific examples. Anecdotal evidence can paint a compelling story, yet without specifics you may never be able to find hard facts. Is this a hunch, or are there concrete cases? Are these genuine defects - ones that should have been found in testing?
Maybe these are aspects of the application that the customers expected to behave differently than they do? If so, why is that? How can that be? How can their expectations be so different from what you believed they would be? After all, the Design and Requirements that you based the tests on matched perfectly!
Let us ask Dulcinea how these things can be so different than what they appear to be?
Labels: Design, Don Quixote, Process Improvement, Requirements
Wednesday, September 29, 2010
Requirements, Traceability and El Dorado
The day-job has been crazy busy the last several weeks. I have several half-written entries that I want to finish and post, but between the project hours and the stuff that needs to be done at home, there simply has not been much time. However, I've been lurking in a couple of places, reading posts and email conversations, getting my fix of "smart people's thoughts" that way.
The interesting thing is that a couple of themes have crept back up and I finally have the chance to take a look at the topic(s) myself and consider some aspects I may not have considered.
The initial question revolved around defining requirements and establishing traceability of test plans, test cases and the like back to those requirements. By extension, when the test cases are executed, any defects found should likewise be traceable back to said requirements.
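To make the idea concrete, here is a rough sketch in Python of what that traceability usually boils down to: a mapping from requirements to the test cases that claim to cover them, plus a check for gaps in either direction. The requirement and test-case IDs are invented, and this is not any particular tool's format - just an illustration.

    # Invented IDs, purely for illustration: which test cases claim to
    # cover which requirements.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}
    test_cases = {
        "TC-101": {"REQ-1"},
        "TC-102": {"REQ-1", "REQ-2"},
        "TC-103": set(),  # a test with no stated requirement behind it
    }

    # Union of everything the tests claim to cover.
    covered = set().union(*test_cases.values())

    # Gaps in both directions: requirements no test touches, and tests
    # that trace back to nothing.
    print("Requirements with no test:", sorted(requirements - covered))
    print("Tests tied to no requirement:",
          sorted(tc for tc, reqs in test_cases.items() if not reqs))

Of course, as the rest of this post argues, the harder part is that the list of requirements is almost never complete in the first place.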
Now, folks who have read my blog in the past will realize that I've been writing about requirements off and on for some time. Well, actually, it's more "on" than "off." I write about requirements and testing a lot. Possibly this is because of the struggles the company I work with has in defining requirements, and the subsequent struggles to adequately test the software products created from those requirements. Now, to be clear, it is not simply this company that has an issue. Most places I've worked have been seriously "requirements challenged."
One thing that sends up every warning flag the back of my neck has is the idea that we can fully define the requirements before doing anything else. I know, Robin Goldsmith has an interesting book on defining "REAL requirements," and he has some good ideas. In light of the shops where I have worked over the last, oh, 25 and more years, some of these ideas simply don't apply. They are not bad ideas; in fact, I think testers should read the book and get a better understanding of them. (Look here to find it - yeah, I know it's pretty pricey - expense it.)
Having said that, how many times have we heard people (developers, testers, analysts of some flavor, project managers, et al.) complain that the "customers" either "don't know what they want" or "changed their requirements"? I've written before about the understanding of requirements changing, and how considering one aspect of a project may inform understanding of another. When this happens in design, development, or, worse, testing, the automatic chorus is that the "users" don't know what they want and work will need to be changed. All of us have encountered this, right? This is nothing new, presumably.
My point with this revisit is that if you are looking for the cause of this recurring phenomenon, look in a mirror. All of us have our own biases that affect everything we do - whether we intend them to or not.
So, if your shop is like some I've worked in, you get a really nice Requirements Document that formally spells out the requirements for the new system or enhancement to the existing system. The "designers" take this and work on their design. Test planners start working on planning how they will test the software and (maybe) what things they will look for when reviewing the design.
Someone, maybe a tester, maybe a developer, will notice something; maybe an inconsistency, maybe they'll just have a hunch that the pieces don't quite go together as neatly as they should. So a question will be asked. Several things may happen. In some cases, the developer will be given a vague instruction to "handle it." In some cases, there will be much back and forth over what the system "should" do, then the developer will be told to "handle it."
At one shop I worked at, the normal result was a boss type demanding to know why QA (me) had not found the problem earlier.
My point is, defining requirements itself is an ongoing process around which all the other functions in software development operate.
Michael Bolton recently blogged on test framing. It is an interesting read. It also falls nicely into a question raised by Rebecca Staton-Reinstein's book Conventional Wisdom around how frames and perspectives can be both limiting and liberating.
This brings me back to my unanswered question on Requirements: How do you show traceability and coverage in advance when it is 99.99% certain that you do not know all the requirements? Can it really be done or is it a fabled goal that can't be reached - like the city of gold?
Wiser people than me may know the answer.
Tuesday, August 10, 2010
Of Walkways and Fountains
A Story
Once upon a time there was a business person who knew exactly what she wanted. So, she explained to an analyst precisely what it was that she wanted and all of the points that she wanted addressed and precisely how she wanted it addressed. The analyst said he understood exactly what she wanted.
So, the analyst went and assembled all the requirements and looked at everything that was spelled out. He gathered everything together.
He found that he had some very usable information and some that was less than usable. So, he dug and dug and found the perfect item that would fit the needs the user had described and be pleasing to her.
Then he assembled all the components and tested the product and found that it matched exactly what the user had asked for - and everything worked perfectly. The finished product was, indeed, a thing of beauty.
So, he called the user over to see the wonderful product he had made. She looked at it and said, "What is this?"
"Its what you asked for! It has everything you wanted!"
"No, this is..."
Have you ever heard of a project that matched the requirements precisely for what was to be included in the "finished product," only to find there was a complete misunderstanding about what the real purpose was?
Tuesday, July 6, 2010
Defining and Redefining Requirements
It's been crazy-busy at work. In addition, it's summer, which means lots of stuff to do in the garden and around the house. Time for other activities has been pretty rare. Sometimes, "other things" kick in.
Last Friday I mentioned I was a bit "under the weather." The worst part was the enforced physical idleness - not "not doing anything" idle, but not able to do the things I otherwise would do or needed to do. So, I've been catching up on my reading.
One book I've been reading is by Rebecca Staton-Reinstein, called Conventional Wisdom: How Today's Leaders Plan, Perform, and Progress Like the Founding Fathers. It's been an interesting read.
Vignettes from the (United States) Constitutional Convention give a framework for the lessons provided in contemporary case studies. The book is laid out by "Articles" - mimicking the US Constitution. That got me thinking about previous teams I've been a part of, and how unlike the Framers some of them functioned.
My blog post from Friday contained, well, remembrances from a past position. One thing from that was the Forced Best Practices environment. There were a fair number of people who wanted to do good work. There were others who tried to "follow the process" come what may. This created the potential for people to derail projects they wanted to fail, or to assert their authority and dominate the process, in spite of what those tasked with running the project wished to do. In short, the project leaders/managers of a fair number of the most productive projects found a way to bypass the "official" process and focus on what needed to be done.
One of the tactics among those intending to derail the process was to bemoan "thrash" or "continually revisiting things we already decided."
On the surface, this makes a great deal of sense. After all...
Time is money, and once a decision is made we must move forward, because otherwise we're never going to make any progress. Once we have a direction we must move immediately. We must act. If we find we've acted wrongly, act again to correct it. They may well cite Ulysses S. Grant (who is eminently quotable, by the way) on always moving toward objectives and "retreating forward."
The problem is, as the framers of the Constitution knew, one decision informs another. The deliberations around one topic may shed light on other topics. If the deliberations shed light on a topic that was "settled," the framers considered it entirely reasonable to reconsider that topic and any other previously settled decision that may be impacted.
What an amazingly time-consuming process. Is it reasonable to see this as a workable approach in today's software development world? Can we really reconsider requirements that were "defined" two hours ago? What about two weeks? Is two months out of the question?
When a software project runs into an "issue" with requirements - why is that? Did the scope change? Did the requirements change? Did the "users" change what they wanted? Or did the understanding of the requirements change?
Are there presumptions in place that "everyone" knew, but no one communicated? Did the understanding match among all participants?
I'm not a world-famous testing guru. I'm not a sought-after speaker for software conferences and conventions. I'm not a famous historian or student of history. I do software testing for a living. I've seen some really good projects and some that were absolute trainwrecks. Some of those can be categorized as U.S. Grant did - "errors of judgement, not intent." Where do we lowly software testers fall?
My assertion is, requirements will almost always be revisited. Sometimes it will be in the "requirements discovery process"; other times it will be while the design is being worked on. Other times, while program code is being written, mismatches or conflicts may be found. Occasionally, software testing will find inconsistencies in requirements when the testers "put all the pieces together."
Each of these instances is a far more expensive exercise than taking the time to revisit requirements and discuss them fully. What precisely does a phrase mean? What is the intent of this? Most importantly: Do the requirements work as a whole? Do they define a complete entity?
Do they summarize what the project team is to address? Do they describe the business needs to be fulfilled? Does everyone share the vision that is needed to fill those needs?
Defining your requirements does not mean that you need to know how the team will meet them. That is what the design process is for. If you can define the needs - you can define the way to fill those needs.
Sunday, June 13, 2010
Requirements and Presumptions
A common theme at conferences and workshops is "tell a story" - so here we go.
Once upon a time, there was a software project. This project had a variety of components, some better understood than others. As sometimes happens, information was presented and decisions were made in discussions where not all the interested parties were present. In this case, the testers were wrapping up another project, and others involved in this one decided that testers weren't really "needed" yet.
So, life went on. The testers finished their project. They were given copies of the documents that had been prepared on this project. So they read the documents and worked on their own documents, like test plans and test cases. More meetings happened and conversations happened and emails were sent and read and replied to (well, certainly replied to; maybe they were even read by everyone). Some of these had everyone present or copied or included, and some did not.
When the intrepid testing folks compared notes on an item described in the requirements documentation, they realized their understandings did not match. No problem! They asked the tech lead to clarify what the correct interpretation should be. The answer? Neither was correct. And he explained why.
He then decided to verify what he had reported and brought the question to the business experts. Their answer began with "Well, everyone knows what that means..." - and then they realized that each "business expert" had a different understanding of this very basic element of their requirements.
No one had thought to define the terms. Then, to make it worse, no one had asked what the terms meant. It is possible that each participant believed they shared a common understanding. However, no one made sure that was the case.
I was not part of this project. It strikes me that most people fall into this trap at least once. Sometimes, twice.
Always question your unquestionable facts. They may not be as factual as you presume.
Thursday, May 6, 2010
Listening Vs. Hearing
Yesterday I wrote about the role of Testers and how Testers should listen. Pretty straightforward, no? Just listen. Simple.
I find the challenge to be actually listening for what is said. Not for what you think is being said or what you want to hear.
Did you ever play "The Telephone Game"? The one where a person whispers a sentence to another person, who passes the message on to the next person, and so on until it gets back to the first person.
Usually, well, pretty much always, there is no resemblance to the original sentence.
Our challenge - Our biggest problem - is making sure that the message that we are hearing is what is really being said.
I recently had the opportunity to play a testing game with some other testers. Not the dice games of Michael Bolton (and others) and not the card game of Lynn McKee and Nancy Kelln. This game was a simple word game.
I gave clues to a puzzle and then had them ask Yes/No questions around those clues to get the solution. These testers did eventually get the correct answer.
The interesting thing was that when they repeated the puzzle, they stated it differently. It wasn't wrong, just different. Things that were actually answers to their questions had been integrated into the "original clues." This changed the dynamic of the puzzle slightly, but not terribly.
It struck me that if a simple game like this can go awry, how much easier is it to get requirements or "business purpose" wrong when you may not be terribly familiar with the field or industry? Can we, as testers, test our own suppositions about what is "right" to the point where we arrive at the same understanding of the need as those on whose behalf we are working?
Wednesday, May 5, 2010
Requirements and Listening
At the QUEST conference in Dallas, there were many presentations, exercises and discussions around testers and requirements. Along with stressing the importance of requirements to project success, a regular theme was getting testers involved early in the project to help get the requirements “right.”
What was not often discussed was how the testers were to actually help get the requirements “right.” The problem, as I see it, is that there is not a clearly defined argument that can explain to me how being a good “tester” automatically makes a person a good “requirements definer.”
There were a couple of points made that people may have missed. One was part of a hall conversation - unfortunately I don't recall who made it. This fellow's point was that the testers needed to do more than simply insist on "testable" requirements. Without being able to bring something to the discussion - without being able to help define the requirements - what purpose does a tester really serve at the discussion?
Nancy Kelln gave a presentation on testing in an Agile environment. It was interesting watching some of the attendees grappling with some of the basic premises found in a variety of Agile methodologies. While talking about Stand-ups, she answered the above question very succinctly. She said, in essence, that the role of the tester in an Agile Stand-up is to listen.
Simple, no? It's what all of us are supposed to do anyway, but we usually find ourselves thinking about other things for at least part of the time.
By listening – by hearing what is being said, the tester can gain insight into some of the reasoning or logic or problems that are being encountered. If a tester is listening critically, and thinking like a tester, they can hear not only what is being said, but can hear what is not being said.
The thing is, most people who do not work in an Agile environment would argue something like "Well, that's Agile. We don't do Agile." You don't need to work in an Agile environment to do this. At Requirements reviews, or better yet, Requirements gathering/discovery meetings - the same technique can work: listen.
Listen critically, then don't be afraid to ask questions. These questions can sometimes be straightforward. For example: "We've talked about regulations changing around Y. Are there any regulations we need to consider for X?"
How many times have you been in a conversation and asked a question because you were looking for insight, and the person you asked it of had an "Ah-HA!" moment because of it? They realized that something was missing and there was an unconsidered possibility or gap.
By asking questions of the experts, the tester can clarify their own thoughts and maybe trigger others to also ask questions. Sometimes, the strength of not knowing things is asking questions and listening carefully to the answers.