It is interesting to me how many people - pundits, whatever - mutter about whatever legislative system the citizens of the country they live in have to deal with. It does not seem to make much difference where you live: the legislature - Parliament, provincial, federal, whatever - House of Representatives, US or individual state Houses or Assemblies - Senate, US or state - every legislature seems corrupt or broken or failing by some measure.
Every few years a new crop of bright-eyed idealists runs, promising Reform - throw the bums out - make a change - represent... whatever. And they start out going great guns - NO one will influence them. No "special interest" group will get their vote. No one will corrupt them.
Except that is not how a lot of legislation works - particularly in the US.
People make compromises. I'll support your proposal if you support mine. This can benefit both of our districts. And so it begins. Soon, they are being challenged by some bright-eyed newcomer talking about how corrupt they are. But they are not the corrupt ones! They fought the system and... challenged the status quo and... found out that the real world does not always work the way people want to believe it does.
In order to change the system, you must be willing to fly in the face of opposition. You must be willing to be called an unending string of names. Face accusations, and accusers, and know you are doing the right thing.
Why then, if this is what it takes to change the way legislatures work, do we not think that something similar must happen to change the way that so many organizations view testing?
We cringe at phrases like "QA this" or "as soon as this is QA'd." Ewwwwwwwwwwwwwwww.
We might object once or twice - possibly more. Eventually, how many people simply give up that fight?
Then we get heavily documented test processes - the ones where we match test scenarios to requirements from the requirements document and we record in the exception document why we need more than one test scenario to test this requirement. Then we make sure that all the steps and all the expected results for each step in each scenario align with the documented requirements.
Then we find that we are going to do more of the same. Forever. We spend more time documenting stuff than we actually spend testing. Then Managers and VPs and Directors scream about the cost of testing and about how we could have missed the defects the customer is complaining about.
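For anyone who has not lived it, the bookkeeping described above amounts to a traceability matrix. A minimal sketch in Python (requirement and scenario names invented for illustration):

```python
# Toy traceability matrix (invented names) - every scenario points back at a
# requirement, and using more than one scenario per requirement needs a
# recorded justification in the "exception document."
traceability = {
    "REQ-101": {
        "scenarios": ["TS-001", "TS-002"],
        "exception_note": "Two scenarios: one valid-input path, one invalid.",
    },
    "REQ-102": {
        "scenarios": ["TS-003"],
        "exception_note": None,
    },
}

# The audit everyone dreads: any requirement with no scenario, or extra
# scenarios with no recorded reason?
for req, entry in traceability.items():
    if not entry["scenarios"]:
        print(req, "has no test scenario")
    if len(entry["scenarios"]) > 1 and not entry["exception_note"]:
        print(req, "has extra scenarios with no exception note")
```

Maintaining that mapping by hand, forever, is the "more of the same" I mean.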
My dear testers... and QA representatives and analysts and specialists - if this describes your work world, you have no moral right to complain about legislators "selling out." You have as well. You are in the same club.
When was the last time you were proud of the work you did?
What value are you adding?
Consider this...
Sunday, January 20, 2013
Triumph of Failure
This last week has been one for the record-books. No, I did not get sacked. To the best of my knowledge, there was not even a concerted effort to terminate my contract early. Instead, I stood my ground and was overrun in the process.
I asked questions that some people would rather I had not asked. I raised the red flag. Not the flag of rebellion, as in Les Miserables, but of THERE IS A PROBLEM HERE! A fair amount of the time, I try to qualify that message - there MAY BE a problem here. There MIGHT POSSIBLY be a problem here. Nope. This one was flat out, by all known measures in the context of the organization, THERE IS A PROBLEM HERE.
I had a quiet chat with two big bosses, which ended with them nodding and saying I did the right thing. One said something to the effect of "I know this was uncomfortable for you, and I'm glad you stepped up and did what you did." I spent the rest of the week getting lambasted by people for not doing it earlier.
Yeah. It was one of those weeks where no amount of rhubarb pie will take the bad taste away. Nor will any amount of really good single malt whisky or highly drinkable brandy remove the sting of personal words said in public. Ya know what made it worth it?
I was right.
One of my colleagues asked what I was going to do. I said I would wait. I would bide my time. I learned long ago that patience is a strength.
After three days of being told by certain people that I had done everything wrong and that the fault for the project's status was entirely on me, they did what they were supposed to have done in the first place. They began communicating. They began talking with each other and talking with other people involved.
In their minds I was a complete failure. They made noises about not wanting to work with me again. They pointed to the official process model and said 'This is what you are supposed to do! This is what I am supposed to do! Do your job!'
I very quietly asked, "What happens when someone does not understand something?" "Inconceivable! There is no way this can possibly be misunderstood!" Yeah. I then got to use that line from The Princess Bride - "I do not think that means what you think it means."
There are at least three people on the project who did not understand the meaning of statement Z. Each of us asked questions and got the answer that it was all documented in the requirements. And yet reading the requirements as documented, the three of us each came away with a different understanding. None of us came away with anything like a warning that there was a problem with the way the software currently worked.
In the end, the problems were resolved and the project moved forward. In the minds of four people, I am a failure. In the minds of others, I did the right thing and confronted poor behavior with proper behavior.
My takeaway - when confronted with poor, unprofessional behavior, hold fast and do not allow yourself to respond in kind. There comes a point where some response may be required (there is one coming from this, have no doubt), but make it in an appropriate manner.
Remove emotion (as best as you can) and respond with fact.
Don't lose your temper.
Die hard the 57th, die hard.
William Inglis, Lt Col, 57th Regt of Foot, Battle of Albuhera
Monday, December 24, 2012
Farewell 2012; Rise Up and Be Strong 2013
The last couple of years I have tended to write blog posts at the change of the year. One to summarize the year that is ending and one to list the things I am looking forward to in the coming year. This time it is different. It feels different.
Changes
Much has happened this year. As I was considering how to encapsulate it, I read over the posts on changing from 2011 to 2012. I must admit, I had to smile. Much has happened, still much remains to be done.
What has happened? Well, in August I submitted my resignation to the company where I was working. My "old company" had been bought by a much larger competitor and I found myself in a struggle to keep myself focused on what my goals and values were. I was a little surprised because I had worked for large companies in the past - most of my working life, in fact, had been with large companies.
The surprising thing to the person I was a few years ago, was that I resigned without a "company" to go to. I went independent. I struck out on my own with a letter of marque sailing against any and every - oh, no, umm - that is being a privateer - not a working independent test professional. Meh, whatever.
But, that is what I did. The roots for this lie in this post I wrote late in 2011. Looking back, it was the natural progression from where I was coming from to where I was going.
Now, I did have a contract lined up - which has since been extended. This made the opportunity a little easier than jumping in cold-turkey - or deciding to go independent after being let go. I concede this was an advantage.
Of course, now I am working even harder - not simply at "the day job" but in my writing, my learning and my attempts to understand things better. The push from being sacked, as described in the blog post mentioned above, seems to have led me to the point where I hoisted my own flag, and have so far, avoided being hoist with my own petard.
People
I have been very fortunate in my meetings and comings and goings this past year. Given the opportunity to speak in Portland at PNSQC and then in Potsdam at Agile Testing Days, I met a massive number of people I had only read of, or read their words. It was inspiring, encouraging and humbling all at once. In both instances, I found it easy to not be the smartest person in the room. I had a pile of people there I could relate to and learn from.
To each of you, I am deeply indebted. It's a long list - let's see. There's Matt Heusser, who is still a bundle of energy and ideas. Michael Larsen, who is really amazingly smart. Bernie Berger, Markus Gartner, Janet Gregory, Gojko Adzic, Huib Schoots, Sigge Birgisson, Paul Gerrard, Simon Morley, Jurgen Appelo, James Lindsay, Michael Dedolph, Linda Rising, Ben Simo, and... the list really does kind of go on.
There are people I continue to find to be wonderful teachers and gentle instructors (sometimes not so gentle, as well), through conversation, emails, IM/Skype chats, blog posts and articles. They include, in no particular order, Elisabeth Hendrickson, Fiona Charles, James Bach, Paul Holland, Michael Bolton, Cem Kaner, Jon Bach, Catherine Powell, Griffin Jones. There are others, but these folks came to mind as I was writing this.
Community
Wow. This year has been amazing. The local group, the GR Testers, are meeting every month, with a variety of people showing up - not "the same folks every time" but people wandering in to check it out. I find this exciting.
AST - Association for Software Testing
What an amazing group of people this is, and is continuing to develop into. The Education Special Interest Group (EdSIG) is continuing to be an area of interest. Alas, my intention of participating in "more courses" has been impacted by life stuff. I've been able to assist with a couple of Foundations sessions for the BBST course, and offered ideas on some discussions but that is about all.
This past August I was honored to be elected to the Board of Directors of AST. My participation continues to be as much as I can give on a regular basis - including monitoring/moderating the Forums on the AST website (a really underutilized resource; perhaps we can change this in the coming year) and the LinkedIn AST group's discussion forum (mostly whacking spam).
A new and exciting development is the Test Leadership Special Interest Group - LeadershipSIG. This new group is looking into all sorts of interesting questions around Test Management and Test Leadership and - well - stuff - including the interesting question of the difficulty of finding and recruiting Context Driven Test leaders, managers and directors.
CAST is scheduled for August in Madison, Wisconsin. This is going to be good.
Other Conference / Community Stuff
Conferences coming up include STPCon - in San Diego in April. Also in April is GLSEC - Great Lakes Software Excellence Conference - that one is in Grand Rapids. QAI's QUEST conference is also scheduled for the Spring.
There are several conferences I've considered submitting proposals to - and I suspect it is time to do more than consider.
Writing - Oh my. I have several projects I've been working through. I am really excited about some of the potential opportunities. I'm pretty geeked about this.
Overall, I am excited about what 2013 may hold. It strikes me that things that have been set up over the last several years are coming into place. What is in store? I do not know. I believe it is going to be good.
After all, I am writing this the evening of December 23. According to some folks, the world was supposed to end a couple of days ago. What those folks don't understand is that everything changes. All the time. Marking sequences and patterns and tracking them is part of what every society does. They don't end. Simply turn the page.
Let us rise up together.
Sunday, December 2, 2012
Why Can't You See That? or More Than Meets the Eye
There are times when things seem so clear. Other times it is like looking through a cloud.
How many times have we stumbled across something in an area another tester had worked on and wondered just how it is that they did not see the problem? After all, we have done piles of training and exercises and we have really good process models in place and - what is WRONG with them that they are not seeing these problems?
So, anyone else run into that? It seems like there are piles of stories of people who were just "inappropriately motivated" or essentially lazy individuals, right? People just don't get it, do they?
Let's see. Something happened on Friday that made me wonder about some of those presumptions.
The last few months, my dear lady-wife has made "observations" on some "stuff" around the house. Stuff? Well, like, little fluffs of cat hair in the corner or bits of stuff on the carpeted steps or, well, yeah, stuff after I vacuumed (Hoovered for some folks). Stuff like "How can you not see that? What is going on? Aren't you paying attention?"
Well, I thought I was. I was also having a problem reading really small fonts... and kept changing the resolution on my already huge laptop to make it easier to read. Then dealing with small screws on small devices and really small screwdrivers - it just has been getting really hard.
So, I went more slowly and was more careful with what I was doing. Still, there were bits of ... fluff - like cat hair - that seemed to evade whatever I did, or tried to do, while cleaning. Man. Talk about frustrating.
Sounds kinda like what some of those "less than stellar" testers may have run into, no? No matter how careful they were, glaring huge problems still got through. Then they try harder and are as diligent as they can be, and they get in trouble for not getting through enough test cases each day.
So, folks may find themselves in a spiral without knowing what the problem is. For testers, it could be simply that they are overwhelmed by the sheer quantity of stuff they are finding. Maybe there are problems in their understanding of what is needed or expected of them or... yeah.
In my case, I went to see the very nice eye doctor - yeah, I figured part of the problem was that my eyeglass prescription needed updating. Well, that seemed reasonable - but it was wrong. Way wrong.
In fact, the very nice eye doctor said "The lens for the left eye is still perfect. That is correcting to 20/20 with no problem. The problem is the cataract in your right eye." What? No way. Yeah. She did some really simple demonstrations and showed both of us the problem wasn't my glasses (my tools) it was my eye. Funny. Hmmm. Who'd have thought?
In a flash, everything made sense. Well, not really a flash, more like an "Oh, bother" moment. So, now I know what is going on with my eye and what needs to be done to deal with that. After that problem has been addressed, we'll see what other problems may have been masked by this really BIG problem. So, I may need an updated prescription after the dust settles. But we won't know until we get to that point.
Kind of like finding bugs in software. We find a bunch of BIG stuff. It gets addressed. But we don't know what else may be there. And if time constraints get in the way, what then?
What BIG HUGE problems go undetected?
Friday, November 30, 2012
Thinking and Working Agile in an Unbending World
Yeah - that was the title of the session I presented at Agile Testing Days in Potsdam.
My presentation at Agile Testing Days went through several changes. The last major one was driven by the time slot. When I saw the slot I was assigned - the last track session on the last day - I was concerned about the path I had taken. I realized that most folks would already be tired and probably not ready for one more heavy, intense session.
Then Unicorns took over the conference.
No, really. They were everywhere. So, I made one more set of revisions - and added Unicorns to mine. (Why not?) This also gave me a way to express another idea I had been trying to get out - and unicorns helped me. So, I ran with it. Hey! It's part of being flexible, no?
Sigurdur Birgisson was kind enough to use MindMeister to record my presentation as a live blog. It's kinda cool. Thanks Sigge! Find it here.
Right -
The cool cats all do Agile. Agile rocks. But if the company does not "do Agile" then, what happens to the talented people working there? Are they doomed to being Not Cool? Can they move from the "Real" World to Unicorn Land of Agile?
There are flavours of the idea of what Agile is that, I believe, lead to their own problems. We all know about the terms laid out in the Agile Manifesto. We know how it is phrased. We know the ideas around...
- Individuals & Interactions over Processes & Tools
- Working Software over Comprehensive Documentation
- Customer Collaboration over Contract Negotiation
- Responding to Change over Following a Plan
Fine. So what does the word agile mean? Well, being an American, I looked up the definition in Webster's Dictionary. Here's what I found:
- Marked by ready ability to move with quick easy grace
- Having a quick resourceful and adaptable character
The question I have is: if this Agile... Stuff works, what makes it work? Does it work better than, say, the non-Agile stuff? Like, Waterfall?
Consider these questions -
- Why is it that some teams are successful and some are not, no matter the approach they use?
- Why is it that some teams are successful using strong command and control models?
- Why is it that some teams are successful using agile methodologies?
- Are Agile Methodologies always, well, flexible? Do they always make things better?
From what I can see, this depends a great deal on what is meant by "Agile." Now, the Agile Methodologies tend to have their own baggage and... stuff. How can this be? If we're Agile we're supposed to be "agile," right?
Alas, I fear I'm getting ahead of myself.
Consider this - the Lightweight Models and Methodologies (that became "Agile") were a response to the heavier, more formal models in place. These were considered slow and unable to respond to change. They bogged down the development of software and generally, based on some versions of the story, kept people from doing good work.
The thing is, from what I have been able to discover, no process model was ever put in place to do those things. No model was ever put in place to crush the creativity out of the talented people doing the work.
Why Are Things the Way They Are?
Rather than railing against the horrible, crushing models - which is what I did when I was younger, less experienced and generally less aware (well, maybe less understanding) than I am now - try these questions instead:
- Why are these process models in place?
- What happened that made this seem like a good solution?
- Have things changed since then?
- Have the problems resurfaced since then?
Those "horrible, crushing models" were an attempt to address something. Those questions may help you find out what that something was. Learning that may help you approach the great question of "Are these process models still needed?"
This is a slightly different question - and, I find, oftentimes a less painful one - than "Are these process models still relevant?"
Both are important. The answer to the first can inform your phrasing for the second.
Oftentimes people will fall into the trap of ritual when it comes to software. "We had this problem and we did this ever since and that problem never came back." Very good! Now, did the problem never coming back have anything at all to do with the change you made in the model? Have you encountered other problems? Has the model gotten more convoluted as you make changes to handle problems?
At what point does your house of cards topple?
Do the steps added or changed in the process model still add value? Are they worth the expense of executing them, or have they become boxes to check off?
Considering the questions presented, and the answers received from them, can help you take a step toward agile that I find terribly important. The odd thing is, when I present it as my "definition" of what makes Agile, well, agile, I get nods of agreement, scowls (and sometimes proclamations) of "you're wrong!" and everything in between.
Here is how I look at Agile in an agile way. Ready?
Doing what needs to be done to deliver the best possible quality and
high-value software product in a timely and cost effective manner.
Is something needed, really? Or are you checking a box that someone says must be checked? Does the information exist elsewhere? If so, why are you repeating it?
Now, all the Agile folks are saying "Yeah, we know this. Waterfall is bad."
Consider how many times you have heard (maybe said?) "If you are doing <blah> you are not Agile." Now, how many times have folks said "If you are not doing <blah2> you are not Agile"?
Rubbish.
If what someone is doing really adds value and makes sense for the context of the organization, and they are delivering good software, how is that wrong? How is that less-than desirable?
Consider these questions about your own software:
- Do your customers have a choice about using your product?
- If they purchased it, are they installing the upgrades/patches when you ship them?
- Are they letting them linger?
When I asked the set of questions about customers with a choice and installing the upgrades at Agile Testing Days, a number of the very confident faces changed from "We're Agile, We're Good." to somewhere between "Deer in the headlights" and "Oh, oh. How did he know?"
If the software we are making does not do what people, like our customers, need it to do, does it really matter what hoops we jump through to make it?
Consider this paraphrase of Jerry Weinberg's definition of Quality (with the Bach/Bolton Addendum) and think about all the stuff we talk about with Agile and ... yeah.
Quality is value to someone (who matters).
If your software product does not fit that definition, does it matter how you make it?
Do stuff that makes sense to do and don't do stuff that doesn't make sense to do.
Monday, November 26, 2012
At Agile Testing Days, Retrospective
I arrived in Potsdam Saturday afternoon with my lady-wife. I was planning on spending the week at Agile Testing Days while my wife visited the sights with our daughter, who was flying in separately from DC. As luck would have it, we were sitting in the airport at Amsterdam when up walks Matt Heusser. Cool. We were on the same flight into Berlin.
Suffice to say that the entire week went that way - happy meetings.
A series of happy meetings with people I had not met in person, but whom I follow on twitter and whose blogs, articles and books I read. The list is legion - Gojko Adzic, Huib Schoots, Markus Gartner, Sigge Birgisson, Lisa Crispin, Janet Gregory, Paul Gerrard, Simon Morley, Jurgen Appelo, James Lindsay and... yeah, a bunch. On top of that, reconnecting with people I already knew, like Matt. I could not ask for better company. I had many conversations with Dawn Haynes, Scott Barber, Tony Bruce and more.
There were many people I had heard of, but had little contact with their work - at least not very much, like Meike Mertsch. Then the folks I simply met - Carlos Ble, Alex Schladebeck, Anne Schuessler, Chris George.
It was inspiring, invigorating, enlightening, sometimes frustrating, and exhausting.
In my blog posts from the week, I tried to capture the excitement I felt, the energy of the people around me and my general impressions of the various presentations I attended and participated in.
The organization of the conference itself was really, really good. Jose Diaz, Madeleine Griep, Uwe and all the crew did a fantastic job of making sure people were comfortable, had what they needed and felt welcome. The events were great - the speakers dinner was astounding - with great conversation, good food and wine... and beer (Germany. Duh.)
Then - Tuesday, talking with Jose - he said "Your family is here? They must come tonight to the award ceremony. It will be wonderful!" Umm - yeah - it was.
Throughout the week, the conversations were astounding. Hanging with crazy-smart people does wonders for teaching yourself and learning. In my case, I shared some small bits of copper among the silver and gold put out by others.
I tried hard to not be the smartest person in the conversation - and was far from it, in fact. In a gathering like this, it's easy.
So, yeah. I'm sitting in my living room thinking back to last week. Frankly it amazes me all that happened.
I've been to some very good conferences and hung out with, talked with and learned from smart people. While I recognize that many of the people I talked with over the week were very like-minded to my own views on testing - not all were. In fact, quite a few were not. We talked, shared ideas and generally set up opportunities for future learning.
Yeah. This was a great conference for me.
===
Addendum: Other folks I should have mentioned and simply did not - let's see - Jean-Paul Varwijk (yeah, Arborosa on Twitter); Rob van Steenbergen (rvansteenbergen & TestEvents on twitter); Ralph Jocham, Scott Ambler, Eddie Bruin - and - yeah - a long list - too long to really put here. It was good.
Thursday, November 22, 2012
Agile Testing Days: Day 4 Live in Potsdam!
Thursday morning is here, the last day of Agile Testing Days in Potsdam, Germany. I managed to oversleep and miss the start of Lean Coffee. When I walked past, it seemed they had a good number of folks in the room, broken into two groups. Broken? Maybe "refactored" is a better term...
OK, that was a little lame. I think I may be a little tired yet.
So, here we go with another day. Today we have keynotes by Ola Ellnestam, Scott Barber and Matt Heusser. There are a variety of track talks as well. Rumor has it that at least one will combine unicorns with Harry Potter.
And, we are about to kick off with Ola Ellnestam on Fast Feedback Teams. Ready? Set? GO!
--
So, Ola launches into a story of trying to explain what he does to his kids. (Pete Comment: Yeah, if you have kids, it's one of those weird conversations to think about when you deal with software. Also, kinda digging the hand-drawn slide deck.) It was a good story about kids and understanding. It also included the idea of feedback - when things (like games) are predictable, how much fun are they? Unless you figure out the pattern and your younger brother has not...
Showers are a good example of a "feedback loop." Depending on how far the shower head is from the faucet handles, you may have a bit of delay - the farther apart the two are, the longer it takes for you to know if the water is the temperature you want.
Reminder - if you remove the feedback mechanism, you are not "closing the loop"; you are kicking it open so the loop never responds.
Reminder - never presume that "everyone knows" - when you are the one who does not know.
The velocity of the project (or aircraft) will determine the timing for feedback. One cannot trust a response loop of, oh, a couple of minutes when the feedback involves an aircraft at 33,000 feet. You kind of need it sooner - like, instantly.
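To make the delay point concrete, here is a toy simulation (my sketch, not Ola's; all numbers invented) of adjusting a shower whose temperature feedback lags behind the handle:

```python
from collections import deque

def run_shower(delay, target=38.0, steps=40, gain=0.25):
    """Adjust a shower where the water we feel now reflects the
    handle position from `delay` steps ago."""
    pipe = deque([20.0] * (delay + 1))    # water already in transit
    handle = 20.0
    history = []
    for _ in range(steps):
        felt = pipe.popleft()             # what reaches the shower head now
        handle += gain * (target - felt)  # we react to stale information
        pipe.append(handle)               # new setting enters the pipe
        history.append(felt)
    return history

print(max(run_shower(delay=0)))  # settles toward 38 with no overshoot
print(max(run_shower(delay=4)))  # same controller overshoots 38 and hunts
```

Same controller, same target - the only thing that changed is how stale the feedback is.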
Other times, consider how clear the communication is - the nature of the feedback can impact the understanding involved. Consider, if our task is to help solve problems, does the solution always involve creating software? Ola says while he likes software, he likes not creating software if there is another solution. (Pete Comment: Yeah - I can dig that.)
Instead of letting the backlog grow perpetually - which tends to freak people out when they look at it - consider paring it down - prioritize the list so the stuff that is really wanted/needed is on it. If the stuff that drops off comes back, then reconsider it. Don't let yourself get bogged down.
The problem is a bit like a bowl of candy - when the user stories are all "compelling" (Pete: by some measure) it gets really hard to choose. Limit the candy in the bowl to that which is important. This can help people understand. Allow the user stories to act as reminders of past conversations. When that conversation comes around again, perhaps the priority on that story needs to go up.
Ola tells a story about trying to pull from a build server - except there is a time difference between the time stamp on the build server and the machine he is working on. Problems resulted - like incomplete understanding / noise / in the response - which causes confusion.
Classic example of noise in response - what Ola calls "Chinese Whisper Game" - which I know as the "Telephone Game" - yeah. Start with one thing and by the time it gets told to everyone in the room and comes back to the first person, it is totally different.
Instead of looking for improvements in "outer" feedback loops, look at the inner-most loop. Feedback tends to happen in (generally) concentric loops, often centered around the life-cycle in the methodology in use. If the initial cycle takes a long time to get feed back, does it matter how efficient (or not) the outer loops are? Optimizing the inner loop as best you can, will give you opportunity to tighten the outer loops more than you can now.
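A quick back-of-the-envelope sketch of why the innermost loop dominates (my arithmetic, with invented numbers, not Ola's):

```python
# The inner loop (say, build + check) runs many times inside one pass of the
# outer loop (say, a daily integration step), so its cost multiplies through.
def daily_feedback_minutes(inner, runs_per_day, outer):
    return inner * runs_per_day + outer

before       = daily_feedback_minutes(30, 12, 60)  # 420 minutes per day
inner_halved = daily_feedback_minutes(15, 12, 60)  # 240 - a big win
outer_halved = daily_feedback_minutes(30, 12, 30)  # 390 - barely moves
print(before, inner_halved, outer_halved)
```

Halving the inner loop buys far more than halving the outer one, simply because it runs so much more often.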
This is true of classic Scrum cycles - as well as other examples, like banking. Banking tends to run multiple batch processes each day. Yeah - that happens a lot more than some folks may realize. Overlapping batches may impact each other, and that impact shapes the nature and type of the feedback.
Moving on to double feedback loops - generally stated, it is better to do the right thing wrong than the wrong thing right. For example, a radiator (a room heater, for those in the States who don't know about such things) has a thermostat to keep a room at a steady temperature. Recognizing that an open door or window may have a bearing on the results matters - if one is looking at how well (or poorly) the radiator is doing its job.
Bug Reports? Yeah - those are records of things we did wrong. We have an option, look at them and figure out what went wrong, or make sure we don't do anything wrong. Reminder - the easiest way to avoid doing something wrong is to not do anything.
To move an organization, particularly a new organization, toward success, sometimes the easiest way is to reduce stuff that does not help. It may be user stories from the backlog - or it may be existent features that are of no value that can be removed. This will close loops that may currently only add noise instead of value. It can also speed the feedback return so you can do a better job.
Interesting question - what about slow feedback loops, those that start now but where the event for the feedback will not occur for some time? Well - good question. Consider Ola's flight to the conference. He bought round-trip tickets on Scandinavian Air (SAS) - except there is a bunch of stuff going on with them right now, and his return ticket may not be "any use." So, he invested in a backup plan - specifically a 1-way ticket on Lufthansa - just in case. He'll know which one he needs when he goes home.
----
Right - so - I kinda took the morning off to practice my presentation and - well - confer with really smart people. So, after lunch - Scott Barber is up.
Scott Barber launches his keynote with a clip from 2001: A Space Odyssey - where he describes what was shown as not the dawn of man, but the beginning of development. He tracks this through manual, waterfall development, into automation - via horsepower and steam engines, with internal combustion engines to follow.
Then we get electricity - which gives us computers and the first computer bug - from there things go downhill.
Scott's assertion is that neither Agile nor Context Driven ideas are new. They are, in fact, how society, most cultures, most people, live their lives. He then wonders why so many software shops describe software development in terms of manufacturing rather than in terms of Research and Development. After all, we're not making widgets (which took a fair amount of R&D before they got to the point where they could be mass-produced).
Ummmmm - yeah - does anyone really mass produce software - other than burning a bunch of CDs and shrink-wrapping them?
So, when it comes to context driven or agile or... whatever - can we really do stuff that people say we do? Or maybe think we do?
Citing the fondue restaurant at CAST in Colorado. And the dinner with Jerry Weinberg.
And discussing what testing and development were like in the 1960s - satellites and aircraft and military stuff - you MUST know performance testing. Why? Because the tolerances were microscopic. No titles - just a bunch of smart people working together to make good stuff. Deadlines? Really? We have no idea if it will WORK, let alone when it might be delivered. Oh, and they tested on paper - because it was faster and better than testing it on the machine.
Did this work? Well, it put people on the moon and brought them back.
Then two things happened.
Before 1985 (in the US), software had no value - it could not legally be sold as a product. Before then, the software had to do something. Now it just needs to sell and make money. And if it makes money - then why not apply manufacturing principles to it?
Scott then gives an interesting version of the history of testing and development that is painfully accurate and depressing at the same time. BUT - it resolved into a rainbow of approaches that have a great deal in common, except for the terminology:
DevOps, Agile, Lean, Incremental, Spiral, Iterative, W-Model, V-Model, Waterfall.
Yeah - ewwwwwwwwwwwwwwwwwwww
So - until around 2010, stuff was this way. Then, somewhere around that rough time frame - something happened...
Software production methods experienced a shift, where they split away from each other, never to reunite. This gives us these two models:
1. Lean Cloudy Agile DevOps - Or The Unicorn Land
2. Lean-ish Traditional Regulated Auditable - Or The Real World
How do we resolve this stuff? Simple - we add value. How do we add value? We support the business needs. Yeah. OK
FLASH! Test is Dead - well - the old "heads down don't think stuff" is dead.
So, what about the test thing - no testers at Facebook, etc. So what? If there's a problem, the next patch is in 10 or 15 minutes - and the one after that will be another 10 or 15 minutes. So what? No one pays for it. Oh, the only time Facebook actually came down hard? Ya know what that was?
Justin Bieber got a haircut - and every teeny-bopper girl in the world got on all at once to scream about it.
In Scott's model, Facebook is the model. Don't worry about titles - worry about what the work is. Worry about the talented people you are working with - or not working with.
Scott predicts that the R&D / Manufacturing models will reunite - except right now we don't have the same language among ourselves.
Maybe we need to focus instead on what the Management Words are. If we speak in terms they understand - like use their words - we can get things sorted out. This helps us become an invaluable project team member - not an arrogant tester who acts like your bug is more important than $13M in lost revenue if it doesn't ship on time. (That is straight off his slide)
Help your team produce business valuable systems - faster and cheaper. Be a testing expert and a project jack-of-all-trades - Reject the Testing Union mentality.
Do not assume that you can know the entire context of business decisions. However, you can take agile testing and develop a skill in place of a role.
The ONLY reason you get paid to test is because some executive thinks it will reduce their time to a bigger yacht.
(Pete Comment: Ummmm - Yeah.)
---
And now for Huib Schoots on Changing the Context: How a Bank Changes their Software Development Methodology.
Huib, until recently, worked with Rabobank International - a bank in the Netherlands that has no shareholders - the depositors own the bank. (Pete Comment: Sounds like a Credit Union in the States.)
Huib worked with a team doing Bank Operations - doing, well, bank stuff. The problems when he came in included testing with an indefinite understanding of expected behavior - not a huge problem, unless the experts can't agree.
BANG - the gauntlet is thrown - Agile is not about KPIs and Hard Measures and Manager stuff. It's kinda scary. The manager says - you need templates and... Eewwwwwwwwww. Not for Huib.
So - the test plans are non-existent and the bosses want stuff that doesn't really work (Pete Comment: ...the junk that never seems to make sense to me). Instead, he asked if any of them had heard of Rapid Software Testing? Ummmm - No.
So Huib began working his "Change" toward Context Driven practices, RST, passion as a tester (and for other things in life), thinking - yeah, thinking is really important for testers (Pete Comment: it's a pity how many people believe they are thinking when in fact they are not) - and developing Skills over Knowledge.
With this, Agile practices came into play and acted as a "lubricant." A lubricant helps things work together when they don't really want to work together on their own - they kinda rub against each other - that is why there's motor oil in your car engine.
Story Boards helped people talk - it helped people communicate and understand (to some level) what others were working on. Before, no one was really sure. Moving on - Passion became contagious. In good form, Huib dove in to show the team that its OK to make mistakes - and he did. Loads of them. Each time it was like "OK, its good, I learned something." Good move.
These changes led to the "Second Wave" - More agile testing, including shared responsibilities and pairing and... yeah. Cool stuff. Then some Exploratory Testing was introduced - by Michael Bolton himself. The thing was, Huib was a victim of his own sucess. Some 80 testers showed up when he expected half that number. Oops. Then, a cool tool was introduced, Mind Maps. They can help visualize plans and relationships in a clear concise way. This lead to concurrent Workgroups to share work and distribute knowledge and understanding.
Yeah, some tools are needed. But use them wisely.
What is ahead? Likely Session Based Test Management - loads of Automation (as they really don't have any) - Coaching (yeah) - Practice (definitely)
What made it work? Careful steps - passion - adaptability, building community, persistence (can you please stop asking questions? Why?) and - Yeah - the QA word - Question Asking!
What did not work? Plain straight training (don't do shotgun training and then not follow up). Project pressure - yeah, you are not doing that weird stuff on this project it is too important. You can't everything at once. Out and out resistance to change. We did that and it did not work.
Huib's suggestions - RST training - Passion - THINK! - Question EVERYTHING! - Testing as a social science - Explore (boldly!) - continuous learning.
---
OK - Recovered enough from my own presentation to pick up for Matt Heusser's keynote.
PLAY IS IMPORTANT - ok that is not really the title, but hey - that was ummmm - a little hard to sneak in here.
So, we are considering play and ideas and ... stuff. and shows a clip from A Beautiful Mind with of John Nash describing game theory, whilst sitting in a bar when attractive women come in and ... well - apparently beer is a great thought motivator. Game theory presents that we can do work that will benefit us. Yeah, that much we get. Yet, Reciprocity means that we will act in the belief that in help one person, we will also be helped, by some measure.
Why are we doing this? Matt pulled up four folks and did an exercise (before mentioning Reciprocity) and moving forward and - yeah - they act against the stand alone Game Testing theory, in hopes of benefit later - apparently. And the expected outcome occurred - its just one of those things - People like good endings and it happened - Reciprocity worked in this case.
Matt is describing software testing as The Great Game of Testing. Cool observation.
He's got a picture of a kanban board up - a real one - not a make believe one - The danger of course is that sometimes, there is a problem with the way work gets done. The "rules" are set up so everyone is happy and gets stuff done within the Sprint - except QA becomes the bottleneck and why isn't QA done? Never mind that the stories were delivered the day before.
Instead, if we look at a flow process where there are "workflow limits" in place - so the QA column has spots for a few stories, no new stories can enter dev until the stories in dev get pushed - So if dev can help QA clean their plate they can then push the stories that are waiting ...
So, sometimes things can work out. Elizabeth Hendrickson's Shortcut Game is an example of what happens when you try and short circuit the activity list. It demonstrates what happens when we do "extra work" to make this sprint's goals, but may negatively impact the next sprint. That could be a problem.
The challenge of conferences, of course is to be able to implement the stuff you pick up at a conference. Sometimes you just need to do stuff. Consider this - when you go to a conference, write a two page report with three things that could be done - like - are not physically impossible. Add a fourth that would need help to get done. THEN - do the three things and try the fourth. You never know what might happen.
This ends the last day of the conference. I need to consider the overall event. Look for a summary post in the next few days. Right now, my brain hurts.
Thank you Jose, Madeleine and Uwe!
Thank you Potsdam!
Auf Wiedersehen!
Finished with engines
OK, that was a little lame. I think I may be a little tired yet.
So, here we go with another day. Today we have keynotes by Ola Ellnestam, Scott Barber and Matt Heusser. There are a variety of track talks as well. Rumor has it that at least one will combine unicorns with Harry Potter.
And, we are about to kick off with Ola Ellnestam on Fast Feedback Teams. Ready? Set? GO!
---
So, Ola launches into a story of trying to explain what he does to his kids. (Pete Comment: Yeah, if you have kids, it's one of those weird conversations to have when you work in software. Also, kinda digging the hand-drawn slide deck.) It was a good story about kids and understanding. It also included the idea of feedback - when things (like games) are predictable, how much fun are they? Unless you figure out the pattern and your younger brother has not...
Showers are a good example of "feedback loop." Depending on how far the shower head is from the faucet handles, you may have a bit of delay - like the farther away the two are, the longer it takes for you to know if the water is the temperature you want.
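(Pete Comment: here is a little sketch I knocked together to make the shower point concrete - the numbers, the 0.8 "how hard you crank the handle" factor, and the pipe_delay knob are all mine, not Ola's. It simulates someone reacting to the water they feel right now, while earlier adjustments are still traveling up the pipe.)

from collections import deque

def shower(pipe_delay, steps=12, target=38.0):
    # Naive controller: react to the temperature we feel NOW, even though
    # our last few handle adjustments have not reached the shower head yet.
    felt = 20.0                          # water currently hitting us (C)
    in_pipe = deque([0.0] * pipe_delay)  # adjustments not yet felt
    for t in range(steps):
        in_pipe.append(0.8 * (target - felt))  # crank the handle
        felt += in_pipe.popleft()              # oldest adjustment arrives
        print(f"delay={pipe_delay}  t={t:2d}  felt={felt:6.1f} C")

shower(pipe_delay=0)   # instant feedback: settles smoothly at 38 C
shower(pipe_delay=4)   # long pipe: scalding, then freezing, then scalding...

Run it and the long-pipe version swings wildly while the short-pipe one settles nicely - the feedback did not change, only how stale it was.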
Reminder - if you remove the feedback mechanism, you are not "closing the loop"; you are kicking it open so the loop never responds.
Reminder - never presume that "everyone knows" - when you are the one who does not know.
The velocity of the project (or aircraft) will determine the timing for feedback. One cannot trust a response loop of, say, a couple of minutes when the feedback involves an aircraft at 33,000 feet. You kind of need it sooner - like - instantly.
Other times, consider how clear the communication is - the nature of the feedback can impact the understanding involved. Consider, if our task is to help solve problems, does the solution always involve creating software? Ola says while he likes software, he likes not creating software if there is another solution. (Pete Comment: Yeah - I can dig that.)
Instead of letting the backlog grow perpetually - which tends to freak people out when they look at it - consider paring it down - prioritize the list so the stuff that is really wanted/needed is on it. If the stuff that drops off comes back, then reconsider it. Don't let yourself get bogged down.
The problem is a bit like a bowl of candy - when the user stories are all "compelling" (Pete: by some measure) it gets really hard to choose. Limit the candy in the bowl to that which is important. This can help people understand. Allow the user stories to act as reminders of past conversations. When that conversation comes around again, perhaps the priority on that story needs to go up.
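(Pete Comment: a tiny sketch of the candy-bowl idea - the story names, priorities, and the cap of three are all made up by me.)

import heapq

stories = [(2, "audit trail"), (5, "dark mode"), (1, "login timeout fix"),
           (4, "csv export"), (3, "nicer error pages")]
CAP = 3                                   # only so much candy fits in the bowl
bowl = heapq.nsmallest(CAP, stories)      # lowest number = most wanted
parked = [s for s in stories if s not in bowl]
print("in the bowl:", [name for _, name in bowl])
print("parked     :", [name for _, name in parked])
# If "dark mode" comes up in conversation again, bump its priority and let
# it compete for a spot - the story is a reminder, not a commitment.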
Ola tells a story about trying to pull from a build server - except there was a time difference between the time stamp on the build server and the machine he was working on. Problems resulted - incomplete understanding and noise in the response - which caused confusion.
Classic example of noise in response - what Ola calls "Chinese Whisper Game" - which I know as the "Telephone Game" - yeah. Start with one thing and by the time it gets told to everyone in the room and comes back to the first person, it is totally different.
Instead of looking for improvements in "outer" feedback loops, look at the inner-most loop. Feedback tends to happen in (generally) concentric loops, often centered around the life-cycle of the methodology in use. If the initial cycle takes a long time to get feedback, does it matter how efficient (or not) the outer loops are? Optimizing the inner loop as best you can will give you the opportunity to tighten the outer loops more than you can now.
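(Pete Comment: some back-of-the-envelope numbers - mine, not Ola's - to make the inner-loop point concrete. The loop you run two hundred times a sprint dwarfs the one you run once.)

minutes_each    = {"edit-build-test": 10, "story acceptance": 60, "sprint demo": 120}
runs_per_sprint = {"edit-build-test": 200, "story acceptance": 20, "sprint demo": 1}

for loop in minutes_each:
    total = minutes_each[loop] * runs_per_sprint[loop]
    print(f"{loop:16s} {total:5d} minutes per sprint")

# edit-build-test: 2000 min; story acceptance: 1200 min; sprint demo: 120 min.
# Shaving 2 minutes off the inner loop saves 400 minutes a sprint;
# cutting the demo in half saves 60. Tighten the inside first.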
This is true of classic Scrum cycles - as well as in other examples - like in Banking. Banking tends to run multiple batch processes each day. Yeah - that happens a lot more than some folks may realize. Recognize that overlapping batches may impact each other, and that impact shapes the nature and type of the feedback you get.
Moving on to double feedback loops - generally stated, it is better to do the right thing wrong than the wrong thing right. For example - a radiator (room heater, for those in the States who don't know about such things) has a thermostat to keep a room at a steady temperature. Recognizing that an open door or window may have a bearing on the results matters - if one is looking at how well (or poorly) the radiator is doing its job.
Bug Reports? Yeah - those are records of things we did wrong. We have an option: look at them and figure out what went wrong, or make sure we never do anything wrong. Reminder - the easiest way to avoid doing something wrong is to not do anything at all.
To move an organization, particularly a new organization, toward success, sometimes the easiest way is to reduce stuff that does not help. It may be user stories from the backlog - or it may be existing features of no value that can be removed. This will close loops that currently add only noise instead of value. It can also speed the feedback return so you can do a better job.
Interesting question - what about slow feedback loops - those that start now, where the event for the feedback will not occur for some time? Well - good question. Consider Ola's flight to the conference. He bought round-trip tickets on Scandinavian Air (SAS) - except there is a bunch of stuff going on with them right now, and his return ticket may not be of any use. So, he invested in a backup plan - specifically a one-way ticket on Lufthansa - just in case. He'll know which one he needs when he goes home.
---
Right - so - I kinda took the morning off to practice my presentation and - well - confer with really smart people. So, after lunch - Scott Barber is up.
Scott Barber launches his keynote with a clip from 2001: A Space Odyssey - where he describes what was shown as not the dawn of man, but the beginning of development. He tracks this through manual, waterfall development, into automation - via horsepower and steam engines, with internal combustion engines to follow.
Then we get electricity - which gives us computers and the first computer bug - from there things go downhill.
Scott's assertion is that neither Agile nor Context Driven ideas are new. They are, in fact, how society, most cultures, most people, live their lives. He then wonders why so many software shops describe software development in terms of manufacturing rather than in terms of Research and Development. After all, we're not making widgets (and even the widget took a fair amount of R&D before it got to the point where it could be mass-produced).
Ummmmm - yeah - does anyone really mass produce software - other than burning a bunch of CDs and shrink-wrapping them?
So, when it comes to context driven or agile or... whatever - can we really do stuff that people say we do? Or maybe think we do?
Citing the fondue restaurant at CAST in Colorado. And the dinner with Jerry Weinberg.
And discussing what testing and development were like in the 1960s - satellites and aircraft and military stuff - you MUST know performance testing. Why? Because the tolerances were microscopic. No titles - just a bunch of smart people working together to make good stuff. Deadlines? Really? We have no idea if it will WORK, let alone when it might be delivered. Oh. And they tested on paper - because it was faster and better than testing it on the machine.
Did this work? Well, it put people on the moon and brought them back.
Then two things happened.
Before 1985 (in the US), software had no value - it could not legally be sold as a product. Back then, the software had to do something. Now it just needs to sell and make money. If it makes money - then why not apply manufacturing principles to it?
Scott then gives an interesting retelling of testing and development that is painfully accurate and depressing at the same time. BUT - it resolves into a rainbow of approaches that have much in common, except for the terminology.
DevOps, Agile, Lean, Incremental, Spiral, Iterative, W-Model, V-Model, Waterfall.
Yeah - ewwwwwwwwwwwwwwwwwwww
So - until around 2010, stuff was this way. Then, somewhere around that rough timeframe - something happened...
Software production methods experienced a shift, where they split apart, seemingly never to reunite. This gives us these two models:
1. Lean Cloudy Agile DevOps - Or The Unicorn Land
2. Lean-ish Traditional Regulated Auditable - Or The Real World
How do we resolve this stuff? Simple - we add value. How do we add value? We support the business needs. Yeah. OK
FLASH! Test is Dead - well - the old "heads-down, don't-think" stuff is dead.
So, what about the test thing - the "no testers at Facebook" etc.? So what? If there's a problem, the next patch is in 10 or 15 minutes - the one after that will be another 10 or 15 minutes. So what? No one pays for it. Oh, the only time Facebook actually came down hard? Ya know what that was?
Justin Bieber got a haircut - and every teeny-bopper girl in the world got on all at once to scream about it.
In Scott's model - Facebook is the model. Don't worry about titles - worry about what the work is. Worry about the talented people you are working with - or not working with.
Scott predicts that the R&D / Manufacturing models will reunite - except right now we don't have the same language among ourselves.
Maybe we need to focus instead on what the Management Words are. If we speak in terms they understand - like use their words - we can get things sorted out. This helps us become an invaluable project team member - not an arrogant tester who acts like your bug is more important than $13M in lost revenue if it doesn't ship on time. (That is straight off his slide)
Help your team produce business valuable systems - faster and cheaper. Be a testing expert and a project jack-of-all-trades - Reject the Testing Union mentality.
Do not assume that you can know the entire context of business decisions. However, you can take agile testing and develop a skill in place of a role.
The ONLY reason you get paid to test is because some executive thinks it will reduce their time to a bigger yacht.
(Pete Comment: Ummmm - Yeah.)
---
And now for Huib Schoots on Changing the Context: How a Bank Changes their Software Development Methodology.
Huib, until recently, worked with Rabobank International - a bank in the Netherlands that has no shareholders - the depositors own the bank (Pete Comment: Sounds like a Credit Union in the States).
Huib worked with a team doing Bank Operations - doing - well, bank stuff. The problems when he came in included testing with an indefinite understanding of expected behavior - not a huge problem, unless the experts can't agree.
BANG - the gauntlet is thrown - Agile is not about KPIs and Hard Measures and Manager stuff. It's kinda scary. Manager says - You need templates and ... Eewwwwwwwwww. Not for Huib.
So - the test plans are non-existent and the bosses want stuff that doesn't really work - (Pete Comment: ...the junk that never seems to make sense to me.) Instead, he asked if any of them had heard of Rapid Software Testing. Ummmm - No.
So Huib began working his "Change" toward Context Driven practices, RST, Passion as a tester (and for other things in life), Thinking - yeah - thinking is really important for testers (Pete Comment: it's a pity how many people believe they are thinking when in fact they are not.) - and developing Skills over Knowledge.
With this, Agile practices came into play and acted as a "lubricant." Lubricant helps things work together when they don't really want to work together on their own - they kinda rub against each other - that is why there's motor oil in your car engine.
Story Boards helped people talk - they helped people communicate and understand (to some level) what others were working on. Before, no one was really sure. Moving on - Passion became contagious. In good form, Huib dove in to show the team that it's OK to make mistakes - and he did. Loads of them. Each time it was like "OK, it's good, I learned something." Good move.
These changes led to the "Second Wave" - more agile testing, including shared responsibilities and pairing and... yeah. Cool stuff. Then some Exploratory Testing was introduced - by Michael Bolton himself. The thing was, Huib was a victim of his own success. Some 80 testers showed up when he expected half that number. Oops. Then a cool tool was introduced: Mind Maps. They can help visualize plans and relationships in a clear, concise way. This led to concurrent Workgroups to share work and distribute knowledge and understanding.
Yeah, some tools are needed. But use them wisely.
What is ahead? Likely Session Based Test Management - loads of Automation (as they really don't have any) - Coaching (yeah) - Practice (definitely)
What made it work? Careful steps - passion - adaptability, building community, persistence (can you please stop asking questions? Why?) and - Yeah - the QA word - Question Asking!
What did not work? Plain straight training (don't do shotgun training and then not follow up). Project pressure - yeah, you are not doing that weird stuff on this project, it is too important. You can't do everything at once. Out-and-out resistance to change - "We did that and it did not work."
Huib's suggestions - RST training - Passion - THINK! - Question EVERYTHING! - Testing as a social science - Explore (boldly!) - continuous learning.
---
OK - Recovered enough from my own presentation to pick up for Matt Heusser's keynote.
PLAY IS IMPORTANT - ok that is not really the title, but hey - that was ummmm - a little hard to sneak in here.
So, we are considering play and ideas and ... stuff. Matt shows a clip from A Beautiful Mind of John Nash describing game theory whilst sitting in a bar when attractive women come in and ... well - apparently beer is a great thought motivator. Game theory suggests that we will do the work that benefits us. Yeah, that much we get. Yet Reciprocity means that we will act in the belief that in helping one person, we will also be helped, by some measure.
Why are we doing this? Matt pulled up four folks and did an exercise (before mentioning Reciprocity) and moving forward and - yeah - they acted against stand-alone game theory, in hopes of benefit later - apparently. And the expected outcome occurred - it's just one of those things - people like good endings and it happened - Reciprocity worked in this case.
Matt is describing software testing as The Great Game of Testing. Cool observation.
He's got a picture of a kanban board up - a real one - not a make-believe one. The danger, of course, is that sometimes there is a problem with the way work gets done. The "rules" are set up so everyone is happy and gets stuff done within the Sprint - except QA becomes the bottleneck, and why isn't QA done? Never mind that the stories were delivered the day before.
Instead, if we look at a flow process with "work-in-progress limits" in place - the QA column has spots for only a few stories, and no new stories can enter dev until the stories in dev get pushed along - then if dev can help QA clean their plate, they can push the stories that are waiting ...
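(Pete Comment: a toy model I sketched of that pull rule - the column names, the limits, and the "QA clears one story every third tick" pacing are my assumptions, not Matt's slides.)

wip_limit = {"dev": 3, "qa": 2}          # "done" has no limit
board = {"backlog": ["S1", "S2", "S3", "S4", "S5", "S6"],
         "dev": [], "qa": [], "done": []}

def move(src, dst):
    # Pull one story into dst only if dst has a free slot.
    limit = wip_limit.get(dst)
    if board[src] and (limit is None or len(board[dst]) < limit):
        board[dst].append(board[src].pop(0))
        return "moved"
    return "blocked" if board[src] else "idle"

for tick in range(8):
    if tick % 3 == 0:                    # QA is slower than dev
        move("qa", "done")
    handoff = move("dev", "qa")          # dev tries to hand off every tick
    move("backlog", "dev")
    print(f"tick {tick}: dev->qa {handoff:7s} dev={board['dev']} qa={board['qa']}")

# Around tick 4 the hand-off comes back "blocked" - QA is full, dev cannot
# push, and the sensible move is for dev to go help QA clear their plate.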
So, sometimes things can work out. Elisabeth Hendrickson's Shortcut Game is an example of what happens when you try to short-circuit the activity list. It demonstrates what happens when we do "extra work" to make this sprint's goals, but may negatively impact the next sprint. That could be a problem.
The challenge of conferences, of course, is to actually implement the stuff you pick up. Sometimes you just need to do stuff. Consider this - when you go to a conference, write a two-page report with three things that could be done - like, things that are not physically impossible. Add a fourth that would need help to get done. THEN - do the three things and try the fourth. You never know what might happen.
This ends the last day of the conference. I need to consider the overall event. Look for a summary post in the next few days. Right now, my brain hurts.
Thank you Jose, Madeleine and Uwe!
Thank you Potsdam!
Auf Wiedersehen!
Finished with engines