The last couple of years I have tended to write blog posts at the change of the year. One to summarize the year that is ending and one to list the things I am looking forward to in the coming year. This time it is different. It feels different.
Changes
Much has happened this year. As I was considering how to encapsulate it, I read over the posts I wrote at the change from 2011 to 2012. I must admit, I had to smile. Much has happened; still, much remains to be done.
What has happened? Well, in August I submitted my resignation to the company where I was working. My "old company" had been bought by a much larger competitor and I found myself in a struggle to keep myself focused on what my goals and values were. I was a little surprised because I had worked for large companies in the past - most of my working life, in fact, had been with large companies.
The surprising thing, to the person I was a few years ago, was that I resigned without a "company" to go to. I went independent. I struck out on my own with a letter of marque sailing against any and every - oh, no, umm - that is being a privateer - not a working independent test professional. Meh, whatever.
But, that is what I did. The roots for this lie in this post I wrote late in 2011. Looking back, it was the natural progression from where I was coming from to where I was going.
Now, I did have a contract lined up - which has since been extended. This made the opportunity a little easier than jumping in cold-turkey - or deciding to go independent after being let go. I concede this was an advantage.
Of course, now I am working even harder - not simply at "the day job" but in my writing, my learning and my attempts to understand things better. The push from being sacked, as described in the blog post mentioned above, seems to have led me to the point where I hoisted my own flag, and have so far, avoided being hoist with my own petard.
People
I have been very fortunate in my meetings and comings and goings this past year. Given the opportunity to speak in Portland at PNSQC and then in Potsdam at Agile Testing Days, I met a massive number of people I had only read about, or whose words I had only read. It was inspiring, encouraging and humbling all at once. In both instances, I found it easy to not be the smartest person in the room. I had a pile of people there I could relate to and learn from.
To each of you, I am deeply indebted. It's a long list - let's see. There's Matt Heusser, who is still a bundle of energy and ideas. Michael Larsen, who is really amazingly smart. Bernie Berger, Markus Gartner, Janet Gregory, Gojko Adzic, Huib Schoots, Sigge Birgisson, Paul Gerrard, Simon Morley, Jurgen Appelo, James Lindsay, Michael Dedolph, Linda Rising, Ben Simo, and.... the list really does kind of go on.
Then there are the people I continue to find to be wonderful teachers and gentle instructors (sometimes not so gentle as well), whether through conversation, emails, IM/Skype chats, blog posts or articles. They include, in no particular order, Elizabeth Hendrickson, Fiona Charles, James Bach, Paul Holand, Michael Bolton, Cem Kaner, Jon Bach, Catherine Powell, Griffin Jones. There are others, but these folks came to mind as I was writing this.
Community
Wow. This year has been amazing. The local group, the GR Testers, is meeting every month, with a variety of people showing up - not "the same folks every time" but people wandering in to check it out. I find this exciting.
AST - Association for Software Testing
What an amazing group of people this is, and continues to develop into. The Education Special Interest Group (EdSIG) continues to be an area of interest. Alas, my intention of participating in "more courses" has been impacted by life stuff. I've been able to assist with a couple of Foundations sessions for the BBST course, and offered ideas in some discussions, but that is about all.
This past August I was honored to be elected to the Board of Directors of AST. My participation continues to be as much as I can give on a regular basis - including monitoring/moderating the Forums on the AST website (a really underutilized resource; perhaps we can change this in the coming year) and the LinkedIn AST group's discussion forum (mostly whacking spam).
A new and exciting development is the Test Leadership Special Interest Group - LeadershipSIG. This new group is looking into all sorts of interesting questions around Test Management and Test Leadership and - well - stuff - including the interesting question of the difficulty of finding and recruiting Context Driven Test leaders, managers and directors.
CAST is scheduled for August in Madison, Wisconsin. This is going to be good.
Other Conference / Community Stuff
Conferences coming up include STPCon - in San Diego in April. Also in April is GLSEC - Great Lakes Software Excellence Conference - that one is in Grand Rapids. QAI's QUEST conference is also scheduled for the Spring.
There are several conferences I've considered submitting proposals to - and I suspect it is time to do more than consider.
Writing - Oh my. I have several projects I've been working through. I am really excited about some of the potential opportunities. I'm pretty geeked about this.
Overall, I am excited about what 2013 may hold. It strikes me that things that have been set up over the last several years are coming into place. What is in store? I do not know. I believe it is going to be good.
After all, I am writing this on the evening of December 23. According to some folks, the world was supposed to end a couple of days ago. What those folks don't understand is that everything changes. All the time. Marking sequences and patterns and tracking them is part of what every society does. They don't end. Simply turn the page.
Let us rise up together.
Wednesday, November 21, 2012
Agile Testing Days, Day 1: Workshops
Monday in Potsdam was a lovely day. Yeah, a little foggy, maybe a little damp outside, but hey - I was inside where there was good coffee, a variety of juices, waters and the odd snack or two. A nice luncheon with great conversation following a very enjoyable breakfast with great conversation - Oh, and Matt and I had another opportunity to present Software Testing Reloaded - Our full day workshop. This time in conjunction with Agile Testing Days.
As usual, we totally messed up the room - this time the staff of the hotel were more amused than horrified. The folks wandered in after coffee and light snacks and found us playing The Dice Game - Yeah. That one.
In retrospect, it was a great ice breaker to get people in the room, involved and thinking. It was a good warmup for what was going to follow. So, we chatted and conducted the first exercise, had everyone introduce themselves, asked what they were hoping to get from the workshop.
I think Matt and I were equally astounded when a couple of people said they wanted to learn how to test and how to transition from waterfall (well, V-model) to Agile. We gently suggested that the people who wrote the book were down the hall and perhaps that might be better for them - and reassured everyone that if they were looking for something more that they could either re-evaluate their choice OR they could hang with us.
So, after a couple of folks took off, and a couple more wandered in, we settled at 11 participants. It was a lively bunch with a lot going on - great exercises, good interaction. Kept us on our toes and, I think, we kept them on their toes as well.
Somehow, we managed to have a complete fail in getting to every single topic that people wanted us to talk to or do exercises around. Ummm - I think our record is perfect then. You see, there is always more for us to talk on than there is time. That is frighteningly like, well, life on a software project.
We often find ourselves with more stuff to deliver in a given period of time than we can hope to. If we promise to give everyone everything, we really can't deliver anything. Well, maybe that is a bit of a stretch. Maybe it is closer to say we will deliver far less than people expect, and less than what we really can deliver if we prioritize our work differently in advance.
So, Matt and I try to work our way through the most commonly occurring themes and address them to the best of our ability. Sometimes we can get most of the list in; sometimes, well, we get less than "most."
Still, we try and let people know in advance that we will probably not be able to get to every single topic. We will do everything we can to do justice to each one, but...
This got me thinking. How do people manage the expectations of others when it comes to work, software projects and other stuff of that ilk?
How well do we let people know what is on the cusp and may not make the iteration? How do we let people know, honestly, that we cannot get something into this release and will get it into the release after?
I know - the answer depends on our context.
In other news, it is really dark here in Potsdam (it being Wednesday night now).
To summarize, we met some amazingly smart people who were good thinkers and generally all around great folks to meet. My brain is melted after 3 days of conference mode - and there is one more day to go.
I've been live blogging on Tuesday and Wednesday, and intend to do the same tomorrow. I wonder if that has contributed to my brain melt. Hmmmmmmmmmmm.
Auf Wiedersehen.
Labels: Agile Testing Days, conference mode, testing, thinking
Sunday, November 11, 2012
What Makes Software Teams that Work, Work?
In pulling together some notes and reviewing some papers, I was struck by a seemingly simple question, and as I consider it, I pose it here.
Some software development teams are brilliantly successful. Some teams are spectacular failures. Most are somewhere in between.
Leaving the question of what constitutes a success or failure aside, I wonder what it is that results in which.
Some teams have strong process models in place. They have rigorous rules guiding every step to be taken from the initial question of "What would it take for X?" through delivery of the software product. These teams have strong control models and specific metrics in place that could be used to demonstrate the precise progress of the development effort.
Other teams have no such models. They may have other models, perhaps "general guidelines" might be a better phrase. Rather than hard-line metrics and measurement criteria, they have more general ideas.
Some teams schedule regular meetings, weekly, a few days a week or sometimes daily. Some teams take copious notes to be distributed and reviewed. Some teams have a shared model in place to track progress and others keep no records at all.
Some of each of these teams are successful - they deliver products on time that their customers want and use, happily.
Some of each of these teams are less successful. They have products with problems that are delivered late and are not used, or used grudgingly because they have no option.
Do the models in use make a difference or is it something else?
Why do some teams deliver products on time and others do not?
I suspect that the answer does not lie in the pat, set-piece answers but somewhere else.
I must think on this.
Monday, October 8, 2012
Testers and UX and That's Not My Job
OK.
I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me that "Look, we're not supposed to worry about that UX stuff you talk about. We're only supposed to worry about the requirements."
If you are, let me say this: You are soooooooooooooooo wrong.
No, really. Even if there is someone else who will "test" that, I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining. "Reasonable person" being part of the polyglot that many folk label as "users." You know - the people who are actually expected to use the software to do what they need to do? Those folks?
It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay you to use the software you are working on.
Your software can meet all the documented requirements there are. If the people using it can't easily do what they need to do, then it is rubbish.
OK, so maybe I'm being too harsh. Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" when it was 20 minutes after I was supposed to be airborne) kinda get to me. Or, maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required by people who need the software to do their jobs.
An Example
Consider some software I recently encountered. It is a new feature rolled out as a modeling tool for people with investments through this particular firm.
To use it, I needed to sign in to my account. No worries. From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had. There was a cool feature that was available so I could track what could happen if I tweaked some allocations in fund accounts, essentially move money from one account to another - one type of fund to another - and possible impact on my overall portfolio over time.
So far, so good, right? I open the new feature to see what it tells me.
The first screen asked me to confirm my logon id, my name and my account number. Well, ok. If it has the first, why does it need the other two? (My first thought was a little less polite, but you get the idea.)
So I enter the requested information, click submit and POOF! A screen appears asking the types of accounts I currently had with them. (Really? I've given you information to identify me and you still want me to identify the types of accounts I have? This is kinda silly, but, ok.)
I open another screen to make sure I match the exact type of account I have with what is on the list of options - there are many that are similar in name, so I did not want to be confused.
It then asked me to enter the current balance I had in each of the accounts.
WHAT???? You KNOW what I have! It is on this other screen I'm looking at! Both screens are part of the same system, for crying out loud (or at least typing in all caps with a bunch of question marks). This is getting silly.
So, I have a thought. Maybe, this is intended to be strictly hypothetical. OK, I'll give that a shot.
I hit the back button until I land on the page to enter the types of accounts. I swap some of my real accounts for accounts I don't have - hit next and "We're sorry, your selections do not agree with our records." OK - so much for that idea.
Think on
Now, I do not want to cast disparaging thoughts on the people who obviously worked very hard on this software, by some measure. It clearly does something. What it does is not quite clear to me. There is clearly some knowledge of the accounts I have in this tool - but then why do I need to enter the information?
This seems awkward, at best.
I wonder how the software came to this state. I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.
I wonder if the objections raised were met with "This is only phase one. We'll make those changes for phase two, ok?" I wonder if the testers asked questions about this. I wonder how that can be.
Actually I think I know. I believe I have been in the same situation more than once. Frankly it is no fun. Here is what I have learned from those experiences and how I approach this now.
Lessons
Ask questions.
Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.
Draw them out with a mind map or decision tree or something. They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment - paper, napkins, formal tools - whatever. Clarify them as best you can. Even if everyone knows what something means, make sure they all know the same thing.
Limit ambiguity - ask others if their understanding is the same as yours.
If there are buzzwords in the requirement documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).
Is any of this unique to UX? Not really. I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.
The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.
Even if "there is no requirement covering that" - . Ask a question. Raise your hand.
I hate to say that requirements are fallible, but they are. They cannot be your only measure for the "quality" of the software you are working on if you wish to be considered a tester.
They are a starting point. Nothing more.
Proceed from them thoughtfully.
Saturday, September 22, 2012
In Defense of the Obvious, Testers and User Experience III
I have had some interesting conversations over the last few months with testers and designers and PM types and experts in a variety of fields. I ask questions and they answer them, then they ask me a question and I answer it.
That is part of how a conversation works. Of course, another part is that when Person B is responding to a question by Person A, it is possible, if not likely, that A will respond or comment to B.
This leads to B responding to A and so forth. Most folks know this is how a conversation works.
It is not a monologue or lecture or pontification. It is an exchange of views, ideas and thoughts.
So, do all conversations follow the same model? Are they essentially the same in form and structure? Do they resemble those pulp, mass-produced fiction books that follow the "formula" used by the specific publisher? You know the ones. Pick one up, change the name of the main characters, change the name of the town - then pick up another from the same publisher and SURPRISE! Same Story! Change the name of the characters in the second book to what you changed the names from the first book - and see how similar they are.
OK. Software folks - Are your perceptions of users (you know, people who use your software to do what they need to do) as fixed as the characters in the mass-produced fiction books? Or are your perceptions of users more like the participants in conversations?
Some Ideas I have that may seem really obvious to a fair number of folks, but I suspect are either revolutionary or heretical to others...
No Two People Are the Same
OK. Obvious idea Number 1 for software testers: No two people are the same. Duh. Says so in red just above that, right? They are the same, right? Really? How many differences can you spot? (Go ahead, try. It's OK.)
Why do we expect the people using the system to be a homogenous group where they generally act the same? Think of people you work with who use software - ANY software. Do they select similar options as each other? Do they have the same interests?
Do they like the same coffee? Do they do the same job? Do they want to do the same job? No, wait. When you read the last couple of questions, what was your answer? Do they REALLY do the same job? Or do they do the same general function?
Are they doing something similar to each other? Umm - similar is not the same, right? If these are questions you don't want to deal with - or maybe don't know the answer to - how are you designing your tests?
How are you designing your systems?
What "users" are your "user stories" emulating?
I had a bizarre chat fairly recently. Boss-type said "We fixed this by using personas. We can emulate people and mimic their behavior."
OK, says I to myself, reasonable idea and reasonable approach to formulating various scenarios. They can be very powerful. "Really," says I, out loud, "tell me about some of them. Sounds like it could be cool."
"Sure!" says the very proud boss-type, "We have Five of them: One for each department." Really? So, tell me more. "Sure! Persona 1 does the thing-a-ma-bob function. Persona 2 does the dumaflatchey function. Persona 3 does the whats-it function. Persona 4 does the thing-a-ma-jig function (similar to the thing-a-ma-bob function but not the same). Persona 5 does the whatever function."
So, a total of five personas? OK, how many people are in each department?
"Well, the smallest department has 15 people. The others have 75 to 100."
Really? They are all the same? They all do the same thing every time? They never vary in their routine?
Do they all do the same thing your test scenarios do - in that sequence - every single time they go into the system?
Sometimes People Have Bad Days
Yeah, I know you thought that only applied to software folks. Sometimes super-model types have bad days too. Of course, famous folk have bad days - then they get their picture in various tabloids and their "bad day" seems not so bad because all the attention from the tabloids is worse than the original "bad day."
Bad days can impact more than just our coding or testing or a public figure's dinner plans. Remarkably enough they can impact people who use the software we're working on.
Sometimes people have too much fun the night before they are in the office using our software. Their typing is less than perfect. They are less accurate than normal in their work - they read things wrong; they invert character sequences; they simply don't notice their own mistakes.
Sometimes they had a really bad night instead of a really good night. Maybe they were up half the night caring for a sick child. Maybe it wasn't a child, maybe it was a partner. What if it was a parent?
The results may be the same outwardly, but what about the inner turmoil?
"Is my child/partner/parent doing better now? Do I need to check on them? What if I call and they don't answer the phone? If they are sleeping, I may wake them. If they can't get to the phone, why not? Something could be seriously wrong?"
Will they be more irritable than they normally are? Will that impact others in the group and cause their productivity to drop?
Sometimes the Best People Aren't at Their Best
What? How can that be? Aren't they like what the Men In Black are looking for? Aren't they "Best of the Best of the Best" (sir)?
What if they are too good? What if they get asked questions and are interrupted and step away from their machines for a minute or lock their screens while they help someone else? What if their user session times out?
Let's face it. Anyone can get distracted. Anyone can be interrupted. Is the system time-sensitive? How about state sensitive? The session can time-out mid-transaction, can't it? Someone else has a problem so the expert locks her system and helps out the guy with a problem - what happens with her session when she comes back?
Do you know?
What if they get called into a conference room with some boss types to answer some questions? If she signs in from another location, what happens to her first session?
And so forth...
These are not new ideas. Do we know what happens though?
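If it helps to make the question concrete, here is a minimal, hypothetical sketch - a toy ToySession class of my own invention, not any real product's API - of the kind of scenario I mean: unsaved work is in flight, the session quietly expires, and then somebody hits save.

# A minimal, hypothetical sketch - a toy class, not any real product's API -
# of the question above: what happens to in-flight work when the session expires?
import time

SESSION_TIMEOUT_SECONDS = 0.1  # tiny value so the example runs quickly


class ToySession:
    def __init__(self):
        self.last_activity = time.monotonic()
        self.pending_edits = []  # unsaved work, e.g. a half-finished form

    def is_expired(self):
        return time.monotonic() - self.last_activity > SESSION_TIMEOUT_SECONDS

    def save(self):
        if self.is_expired():
            # One possible behavior: reject the save and discard the pending edits.
            # Another system might queue them, or silently commit them anyway.
            # Which does yours do - and does anyone actually know?
            raise RuntimeError("session expired; pending edits discarded")
        self.pending_edits.clear()


def check_save_after_timeout():
    session = ToySession()
    session.pending_edits.append({"field": "balance", "value": "100.00"})
    time.sleep(SESSION_TIMEOUT_SECONDS * 2)  # the expert stepped away to help a colleague
    try:
        session.save()
        print("Save succeeded after timeout - is that what the business expects?")
    except RuntimeError as err:
        print(f"Save rejected: {err} - is the user told their work is gone?")


if __name__ == "__main__":
    check_save_after_timeout()

The point of the toy is not the code; it is that someone has to decide, and then check, which of those behaviors the real system actually has.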
Now some of you may be thinking to yourself, "But Pete, this is not really UX kind of stuff, is it?" That makes me wonder what kind of "stuff" these things might be.
Do your test scenarios consider these possibilities? Do they consider any one of them?
Testing to Requirements
Ah yes. I hear the chorus of "Our instructions are to test to the requirements. Things that aren't in the requirements should not be tested. They are out of scope." Whose requirements?
The requirements that were written down by the BA (or group of them) or the ones that were negotiated and word-smithed and stated nicely?
What about the requirements that the BA did not understand, and hence did not write down? Or maybe he wrote them down but they made no sense, so other folks scrapped them.
Then there are the implied requirements. These are the ones that don't make the documented requirements because they seem so obvious. My favorite is the one about "Saving a new or modified record in the system will not corrupt the database."
You hardly ever see that, but everyone kind of expects that. Right? But if you are ONLY testing to DOCUMENTED requirements, then that does not count as a bug, right? It is out of scope. RIGHT?
NO? Really?
See? That is kind of my point. You may be considering the experience of the users already. You just don't know it.
Now, broaden your field of vision. Pan back. Zoom out. What else is obvious to the users that you have not considered before?
Now go test that stuff, too.
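For what it's worth, an implied requirement like the one above is easy enough to exercise. Here is a minimal, hypothetical sketch - sqlite3 standing in purely for "the database," with made-up table and helper names - of one way to check that saving a new record leaves the records that were already there alone.

# A minimal, hypothetical sketch of exercising an implied requirement: after saving
# a new record, the records that were already there still read back intact.
# sqlite3 is used purely as a stand-in for "the database"; the table and helper
# names are invented for illustration.
import sqlite3


def setup_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
    conn.executemany(
        "INSERT INTO accounts (name, balance) VALUES (?, ?)",
        [("checking", 250.00), ("savings", 1200.00)],
    )
    conn.commit()
    return conn


def save_new_record(conn, name, balance):
    # The operation under test - whatever "save" means in the system at hand.
    conn.execute("INSERT INTO accounts (name, balance) VALUES (?, ?)", (name, balance))
    conn.commit()


def check_save_leaves_existing_records_alone():
    conn = setup_db()
    before = conn.execute("SELECT name, balance FROM accounts ORDER BY id").fetchall()

    save_new_record(conn, "money market", 5000.00)

    after = conn.execute("SELECT name, balance FROM accounts ORDER BY id").fetchall()
    assert after[: len(before)] == before, "existing records changed after an unrelated save"
    print("Existing records still intact after saving a new one.")


if __name__ == "__main__":
    check_save_leaves_existing_records_alone()

Nobody will ever hand you that as a documented requirement. Everyone expects it anyway.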
Monday, July 23, 2012
CAST 2012, or CASTing Through the Computer Screen
CAST 2012 recently wrapped up. The annual Conference of the Association for Software Testing was held in San Jose, California, Monday and Tuesday, July 16 and 17, 2012. Wednesday the 18th saw tutorials. Thursday was the Board of Directors meeting.
First, even though I had helped plan the Emerging Topics track, I had some conflicts arise that kept me away (physically). I was a tad disappointed that after the work that went in, I would not be drinking in the goodness that is the CAST experience.
Why is that a big deal?
It's a conference, right?
CAST is unlike any conference I have ever attended. It makes a point of being a participatory, engaged function - not merely sitting listening to someone drone on reading PowerPoint slides. There is required time in each session for questions and discussion around the presentation. Some of them can be quite brutally honest.
This becomes an issue for some people.
When one engages with people who think, rote-learned answers do not hold up well. Watching the interplay as people, including the presenters, learn is something that is, in itself, a top-flight education in testing.
And I could not be there. Bummer.
I chose the next best thing - I joined in the video stream as much as the day-job and other responsibilities allowed. I listened to Emerging Topics presentations, keynotes, a panel discussion on the results of Test Coach Camp and CASTLive - the evening series of interviews with speakers and participants - with online chat and... stuff.
While I could not be there in person, this was a pretty decent substitute.
Other cool things
Keynotes by Tripp Babbitt and Elizabeth Hendrickson. Great panel discussions on what people learned at Test Coach Camp, and, well, cool stuff.
Simply put, there are recordings to be viewed and listened to here.
Other things happened as well, like announcing the election results for AST Board of Directors.
I was elected to the Board of Directors to serve a single year, to fill a position left vacant by a Board Member who could not finish his term.
I am deeply honored to have been selected to serve in this way.
I am humbled, and looking forward to this new chapter in testing adventure.
Saturday, July 7, 2012
In the Beginning, on Testing and User Experience
Behold, an Angel of the Lord appeared before them and said "Lift up your faces, oh wandering ones, and hear the words of the One who is Most High.
Thus says the Lord, the Keeper of the Way and Holder of Truth, 'Do not let your minds be troubled for I send a messenger unto you with great tidings. The Path to Software Quality shall this messenger bring. Heed the words and keep them whole.
Worry not your hearts over the content of software releases. Behold, one shall come bearing Requirements, functional and non-functional. And they shall be Good. Study them and take them under your roof that you may know them wholly.
If, in your frail-mindedness, such Requirements are beyond the comprehension of lesser beings, Designers and Developers and Analysts, yea, unto the lowly Testers who are unable to comprehend such lofty concepts, fear not to come and humbly ask what these Requirements mean. Lo, it shall be revealed unto you all that you need to know. On asking, that which is given in answer shall be all that shall be revealed unto the lesser ones, yea, unto the lowly Testers.
Ask not more than once, for doing so will try the patience of the Great Ones who are given the task of sharing the Revelation of the Requirements. To try the patience of these Great Ones can end with a comment in your permanent file, unto impacting your performance review and any pay raise you may long for now, and in the future.
Developers and Designers and lowly Testers shall then go forth from the place to the place of their cubicles and prepare such documents as is their task to prepare.'"
Then the Angel of the Lord spoke to them this warning with a voice that shook the Earth and the Hills and the Valleys and caused the Rivers to change from their courses. "Seek not the counsel of those in the Help Desk nor those in Customer Support nor those in Services. Their path will lead you astray for their knowledge is limited and perception flawed. Avoid them in these tasks before you as they are not worthy to hear or read the words handed down to you from the Requirements Bearer. Thus says the One who is Most High."
1st Book of Development Methodology, 2:7-45
Rubbish.
I've seen project methodologies adapted at companies that look and read painfully close to this. None have gone this far, perhaps - at least not in language and phrasing. Alas, a painful number have the spirit and feeling of the above.
As you sow, so shall you reap.
It does not matter what methodology your organization uses to make software. It does not matter what approach you have to determining what the requirements are. They shall be revealed in their own time.
If you are making software that people outside of your company will use - maybe they will pay money for using it. Maybe that is how your company stays in business. Maybe that is where the money coming into the company comes from.
If that is the case, I wonder where the "Requirements" are coming from. At companies I have worked for in this situation, the "requirements" come from Sales folk. Now, don't get me wrong, many of these Sales folk are nice enough. I'm just not sure they all understand certain aspects of technology and what can be done with software these days.
Frankly, I'm not sure if some of them have any idea what the software they are selling does.
That's a pity. The bigger pity is that many times the people they are working with to "get the deal" have no real idea what is needed.
They can have a grand, overall needs view. They can have a decent understanding of what the company wants, or at least what their bosses say the company wants, from the new or improved software. They may know the names of some of the managers of the people who use the software every day.
This does not include the people who glance at the dashboard and say things like, "Hmmm. There seems to be a divergence between widget delivery and thing-a-ma-bob capacity. Look at these charts. Something is not right."
That's a good start. Do they have any idea where the data for those charts come from? Do they have any idea on how to drill down a bit and see what can be going on? In some cases, they might. I really hope that this is true in the majority of cases. From what I have seen though, this isn't the case.
The Average User
Ah yes. What is an "average user"? Well, some folks seem to have an idea and talk about what an "average user" of the system would do. When they are sales folk who sell software (maybe that exists) I am not certain what they mean.
Do they mean an "everyman" kind of person? Do they picture their mother trying to figure out the Internet and email and search engines? I don't know.
Do they mean someone who follows the script, the instructions they are given on "how to do this function" - probably copied from the user manual - for 2.5 hours (then a coffee break), then 2 hours (lunch!), then 2 hours (then an afternoon break), then 1.5 hours (then home)? Maybe.
Have any of you reading this seen that actually happen? I have not.
So, folks tasked with designing a system - to meet requirements derived from conversations with people who may or may not have contact with the system/process in general, and who may or may not understand the way the system is actually used at their company (and when you multiply this across the "collected enhancement requests" model many companies use) - will then attempt to build a narrative that addresses all of these needs, some of them competing.
This generally describes the process at four companies I know.
The designers may or may not create "user stories" to walk through scenarios to aid in the design. They will look at aspects of the system and say "Yup, that's covered all the requirements. Good job, team."
What has not happened in that model? Anyone?
No one has actually talked with someone who is in contact with (or is) an "average user."
When I raised the point at one company that this was an opportunity for miscommunication, I was told "the users are having problems because they are using it wrong."
Really? Users are using the system wrong?
REALLY?
Or are they using it in a manner our model did not anticipate?
Are they using it in a manner they need to in order to get their job done, and our application does not support them as they need it to? Are they using it as they always needed to, working around the foibles of our software, and their boss' boss' boss - the guy talking with the sales folks - had no clue about?
Why?
Is it because the Customer Support, Help Desk, Professional Services... whatever they are called at your company - may know more about how customers actually use the software than, say, the "product experts"? Is it because of the difference between how people expect the software to be used and how those annoying carbon-based units actually use it?
As testers, is it reasonable to hold to testing strictly what one model of system use tells us is correct? When we are to test strictly the "documented requirements" and adhere to the path as designed and intended by the design team, are we confirming anything other than their bias?
Are we limiting how we discover system behavior? Are we testing?
I know I need to think on this some more.
Labels: Expectations, Methods, thinking, User Experience
Thursday, July 5, 2012
On Best Practices, Or, What Marketing Taught Me About Buzzwords
An eon or two ago, whilst working toward my Bachelor's Degree, I had a way interesting professor. Perfectly deadpan and recognized people's faces and names - but never the two together. Crazy smart though.
He had multiple PhDs. Before deciding he would enter academia (PhD #2), he worked as a geologist searching out potential oil deposits (PhD #1) for a large multinational oil company. It was when he got tired of "making a sinful amount of money" (his words) that he decided he should try something else.
He had an interesting lecture series on words and terms and what they mean. One subset came back to me recently. I was cleaning out my home office - going through stuff that at one point I thought I might need. In one corner, in a box, was a collection of stuff that included the textbook and lecture notes from that class. I started flipping through them with a smile at the long-ago youthful optimism I had recorded there.
One thing jumped out at me as I perused the notes of a young man less than half my current age - a two-lecture discourse on the word "best" when used in advertising.
Some of this I remembered after all these years. Some came roaring back to me. Some made me think "What?"
X is the Best money can buy.
Now, I've noticed a decline in TV advertising that makes claims like this: "Our product is the best product on the market. Why pay more when you can have the best?" Granted, I don't watch a whole lot of TV anymore. Not that I did then either - no time for it. (Now, I have no patience for most of it.)
Those ads could be running on shows I don't watch. This is entirely possible. Perhaps some of you have seen these types of ads.
Anyway, part of one lecture was a discussion on how so many competing products in the same market segment could possibly all claim to be the best: toothpaste, fabric softener, laundry detergent, dish detergent, soft drink, coffee, tea... whatever. All of them were "the best."
The way this professor worked his lectures was kind of fun. He'd get people talking, debating ideas, throwing things out and ripping the ideas apart as to why the logic was flawed or something. He'd start with a general statement on the topic, then put up a couple of straw men to get things going. (I try to use the same general approach, when I can, when presenting. It informs everyone, including the presenter.)
The debate would rage on until he reeled everyone in and gave a summary of what was expressed. He'd comment on what he thought was good and not so good. Then he'd present his view and let the debate rage again.
I smiled as I read through the notes I made - and the comments he gave on the points raised.
Here, in short, is what I learned about the word "Best" in advertising: Best is a statement of equivalence. If a product performs the function it was intended to do, and all other products do the same, one and all can claim to be "the best."
However, if a product had claims that it was "better" than the competition, they needed to be able to provide the proof they were better or withdraw the ad.
So Best Practices?
Does the same apply to that blessed, sanctified and revered phrase "Best Practice?"
Ah, there be dragons!
The proponents of said practices will defend them with things like, "These practices, as collected, are the best available. So, they are Best Practices." Others might say things like, "There are a variety of best practices. You must use the best practice that fits the context."
What? What are you saying? What does that mean?
I've thought about this off and on for some time. Then, I came across the notes from that class.
Ah-HA! Eureka! Zounds!
Zounds? Really? Yes, Zounds! Aside from being the second archaic word I've used in the post, it does rather fit. (I'll wait while you look up the definition if you like.)
OK, so now that we're back, consider this: the only way for this to make any sense is to forget that these words look and sound like perfectly ordinary words in the English language. They do not mean what one might think they mean.
Just like X toothpaste and Y toothpaste cannot both be the best - how can you have TWO best items? Unless they mean "best" as a statement of equivalence, not superiority.
Then it makes sense. Then I can understand what the meaning is.
The meaning is simple: Best Practices are Practices that may, or may not work in a given situation.
Therefore, they are merely practices. Stuff that is done.
Fine.
Now find something better.
Monday, June 25, 2012
On Value, Part 2: The Failure of Testers
This is the second post which resulted from a simple question my lady-wife asked at a local tester meeting recently.
That blog post resulted in a fair number of visits, tweets, retweets and other measures that people often use to gauge the popularity or "quality" of a post.
The comments had some interesting observations. I agree with some of them, can appreciate the ideas expressed in others. Some, I'm not so sure about.
Observations on Value
For example, Jim wrote "Yes, it all comes down to how well we "sell" ourselves and our services. How well we "sell" testing to the people who matter, and get their buy-in."
Generally, I can agree with this. We as testers have often failed to do just that - sell ourselves and what we do, and the value of that.
Aleksis wrote "I really don't think there are shortcuts in this. Our value comes through our work. In order to be recognized as a catalyst for the product, it requires countless hours of succeeding in different projects. So, the more we educate us (not school) and try to find better ways to practice our craft, the more people involved in projects will see our value."
Right. There are no shortcuts. I'm not so certain that our value comes through our work. If there are people who can deliver the same results for less pay (i.e., lower cost) then what does this do to our value? I wonder if the issue is what that work is? More on that later, back to comments.
Aleksis also wrote "A lot of people come to computer industry from universities and lower level education. They just don't know well enough testing because it's not teach to them (I think there was 1 course in our university). This is probably one of the reasons why software testing is not that well known."
I think there's something to this as well. Alas, many of the managers, directors and other boss-types that testers deal with, work with and work for, come from backgrounds other than software testing. Most were developers - or "programmers," as the role was called when I did the same job. Reasonably few did more than minimal testing, or unit testing, or some form of functional testing. To them, when they were doing their testing, it was a side-activity to their "real work." Their goal was to show they had done their development work right and that was that.
Now, that is all well and good, except that no one is infallible in matters of software. Everyone makes mistakes, and many deceive themselves about software behavior that does not quite match their expectations.
Jesper chimed in with "It's important that all testing people start considering how they add value for their salary. If they don't their job is on the line in the next offshoring or staff redux."
That seems related to Jim's comment. If people, meaning boss-types, don't see the point of your work, you will have "issues" to sort out - like finding your next gig.
The Problem: The View of Testing
Taken together, these views, and the ones expressed in the original blog post, can be summarized as this: Convincing people (bosses) that there is value in what you do as a tester is hard.
The greater problem I see is not convincing one set of company bosses or another that you "add value." The greater problem is what I see rampant in the world of software development:
Testers are not seen as knowledge workers by a significant portion of technical and corporate management.
I know - that is a huge, sweeping statement. It has been gnawing at me how to express it. There are many ideas bouncing around that eventually led me to this conclusion. For example, consider these statements (goals) I have heard and read in the last several weeks, presented as highly desirable:
- Reduce time spent executing manual test cases by X%;
- Reduce the number of manual test cases executed by Y%;
- Automate everything (then reduce tester headcount).
The core tenet is that the skilled work is done by a "senior" tester writing the detailed test case instructions. Then the unskilled laborers (the testers) follow the scripts as written and report whether their results match the documented, "expected" results.
The First Failure of Testers
The galling thing is that people working in these environments do not cry out against this - either by debating the wisdom of such practices, or by arguing that defects found in production could NOT have been found by following the documented steps they were required to follow.
Some folks may mumble and generally ask questions, but don't do more. I know, the idea of questioning bosses when the economy is lousy is a frightening prospect. You might be reprimanded. You may get "written up." You may get fired.
If you do not resist this position with every bit of your professional soul and spirit, you are contributing to the problem.
You can resist actively, as I do and as do others whom I respect. In doing so, you confront people with alternatives. You present logical arguments, politely, on how the model is flawed. You engage in conversation, learning as you go how to communicate to each person you are dealing with.
Alternatively, you can resist passively, as some people I know advocate you do. I find that to be more obstructionist than anything else. Instead of presenting alternatives and putting yourself forward to steadfastly explain your beliefs, you simply say "No." Or you don't say it, you just don't comply, obey, whatever.
One of the fairly common gripes that comes up every few months on various forums, including LinkedIn, is the whinge-fest over how it's not fair that developers are paid "so much more" than testers are.
If you...
If you are one of the people complaining about lack of PAY or RESPECT or ANYTHING ELSE with your chosen line of work, and you do nothing to improve yourself, you have no one to blame but yourself.
If you work in an environment where bosses clearly have a commodity-view of testers, and you do nothing to convince them otherwise, you have no one to blame but yourself.
If you do something that a machine could do just as well, and you wonder why no one respects you, you have no one to blame but yourself.
If you are content to do Validation & Verification "testing" and never consider branching beyond that, you are contributing to the greater problem and have no one to blame but yourself.
I am not blaming the victims. I am blaming people who are content to do whatever they are told as being a "best practice" and will accept everything at face value.
I am blaming people who have no interest in the greater community of software testers. I am blaming people who have no vision beyond what they are told "good testers" do.
I am blaming the Lemmings that wrongfully call themselves Testers.
If you are in any of those descriptions above, the failure is yours.
The opportunity to correct it is likewise yours.
Thursday, June 14, 2012
You Call That Testing? Really? What is the value in THAT?
The local tester meetup was earlier this week. As there was no formal presentation planned, it was an extended round table discussion with calamari and pasta and wine and cannoli and the odd coffee.
"What is this testing stuff anyway?"
That was the official topic.
The result was folks sitting around describing testing at companies where they worked or had worked. This was everything from definitions to war-stories to a bit of conjecture. I was taking notes and tried hard to not let my views dominate the conversation - mostly because I wanted to hear what the others had to say.
The definitions ranged from "Testing is a bi-weekly paycheck" (yes, that was tongue-in-cheek, I think) to the more philosophical "Testing is an attempt to identify and quantify risk." I kinda like that one.
James Bach was also cited, with "Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous."
What was interesting to me was how the focus of the discussion was experiential. There were statements that "We only do really detailed, scripted testing. I'm trying to get away from that, but the boss doesn't get it. But, we do some 'exploratory' work to create the scripts. I want to expand that but the boss says 'No.'"
That led to an interesting branch in the discussion, prompted by a comment from the lady-wife who was listening in and having some pasta.
She asked "How do you change that? How do you get people to see the value that you can bring the company so you are seen as an asset and not a liability or an expense?"
Yeah, that is kind of the question a lot of us are wrestling with.
How do you quantify quality? Is what we do related to quality at all? Really?
When we test we...
We exercise software, based on some model. We may not agree with the model, or charter or purpose or ... whatever. There it is.
If our stated mission is to "validate the explicit requirements have been implemented as described" then that is what we do, right?
If our stated mission is to "evaluate the software product's suitability to the business purpose of the customer" then that is what we do, right?
When we exercise software to validate that the requirements we received have been fulfilled, have we done anything to exercise the suitability of purpose? Well, maybe. I suspect it depends on how far out of the lines we go.
When we exercise software to evaluate the suitability to purpose, are we, by definition, exercising the requirements? Well, maybe. My first question is, do we have any idea at all about how to judge the suitability of purpose? At some shops, well, maybe - yes. Others? I think a fair number of people don't understand enough to understand that they don't understand.
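As a small, hypothetical illustration of those two missions - the limit, the function and the names are invented for this sketch, not taken from any actual requirement:

```python
MAX_NAME_LENGTH = 50  # the "explicit requirement" - an invented limit for illustration

def save_customer_name(name: str) -> str:
    """Toy stand-in for the system under test: silently truncates at the limit."""
    return name[:MAX_NAME_LENGTH]

def test_requirement_is_met():
    # Mission 1: "names are stored up to 50 characters." The check passes.
    assert len(save_customer_name("A" * 80)) == MAX_NAME_LENGTH

def question_of_suitability():
    # Mission 2: a real customer pastes an 80-plus-character legal entity name.
    # The requirement is technically met, but is silent truncation suitable
    # to their purpose? That question is not answered by the check above.
    stored = save_customer_name("Amalgamated Consolidated Widget Holdings " * 2)
    print("stored as:", repr(stored))

if __name__ == "__main__":
    test_requirement_is_met()
    question_of_suitability()
```

The first check can pass forever while the second question goes unasked - which, I suspect, is where a lot of "suitability to purpose" problems hide.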
So, the conversation swirled on around testing and good and bad points.
How do we do better testing?
I know reasonably few people who don't care about what kind of a job they do. Most folks I know want to do the best work they can do.
The problem comes when we are following the instructions, mandate, orders, model, whatever, that we are told to follow, and defects are reported in production. Sometimes by customers, sometimes by angry customers. Sometimes by customers saying words like "withhold payment" or "cancel the contract" or "legal action" - that tends to get the attention of certain people.
Alas, sometimes it does not matter what we as testers say. The customers can say scary words like that and get the attention of people who define the models us lowly testers work within. Sometimes the result is we "get in trouble" for testing within the model we are told to test within. Of course, when we go outside the model we may get in trouble for that as well. Maybe that never happened to you? Ah well.
Most people want to do good work - I kinda said that earlier. We (at least I and many people I respect) want to do the absolute best we can. We will make mistakes. Bugs will get out into the wild. Customers will report problems (or not and just grumble about them until they run into someone at the user conference and they compare notes - then watch the firestorm start!)
Part of the problem is many (most) businesses look at testing and testers as expenses. Plain and simple. It does not seem to matter if the testers are exercising software to be used internally or commercial software to be used by paying customers. We are an expense in their minds.
If we do stuff they do not see as "needed" then testing "takes too long" and "costs too much." What is the cost of testing? What is the cost of NOT testing?
I don't know. I need to think on that. For one of the companies I worked for, once upon a time, it was bankruptcy. Others were less dramatic, but avoiding the national nightly news was adequate incentive for one organization I worked for.
One of the participants in the meeting compared testing to some form of insurance - you buy it, don't like paying the bill, but when something happens you are usually glad you did. Of course, if nothing bad happens, then people wonder why they "spent so much" on something they "did not need."
I don't have an answer to that one. I need to think on that, too.
So, when people know they have an issue - like a credibility gap or perceived value gap - how do you move forward?
I don't know that either - at least not for everyone. No two shops I've been in have followed the same path to understanding, either. Not the "All QA does is slow things down and get in the way" shop nor the "You guys are just going through the motions and not really doing anything" shop. Nor any of the other groups I've worked with.
Making the Change
In each of these instances, it was nothing we as testers (or QA Engineers or QA Analysts or whatever) did to convince people we had value and what we did had value. It was a Manager catching on that we were finding things their staff would not have found. It was a Director realizing we were working with his business staff and learning from them while we were teaching them the ins and outs of the new system so they could test it adequately.
They went to others and mentioned the work we were doing. They SAW what was going on and realized it was helping them - the development bosses saw the work we did as, at its essence, making them and their teams look good. The users' bosses realized we were training people and helping them get comfortable with the system so they could explain it to others, while we were learning about their jobs - which meant we could do better testing before they got their hands on it.
It was nothing we did, except our jobs - the day-in and day-out things that we did anyway - that got managers and directors and vice-presidents and all the other layers of bosses at the various companies - to see that we were onto something.
That something cost a lot of money in the short-term, to get going. As time went on, they saw a change in the work going on - slowly. They began talking about it and other residents of the mahogany row began talking about it. Then word filtered down through the various channels that something good was going on.
The people who refused to play along before began to wander in and "check it out" and "look around for themselves." Some looked for a way to turn it to their advantage - any small error or bug would be pounced on as "SEE! They screwed up!" Of course, before we came along, any small errors found in production would be swept under the rug as something pending a future enhancement (that never came, of course.)
We proved the value by doing what we did, and humbly, diplomatically going about our work. In those shops that worked wonders.
And so...
We return then to the question above. How do we change people's perspectives about what we do?
Can we change entire industries? Maybe. But what do we mean by "industries?" Can we at least get all the developers in the world to recognize we can add value and help them? How about their bosses?
How about we start with the people we all work with, and go from there? I don't know how to do that in advance. I hope someone can figure that out and help me understand.
I'll be waiting excitedly to hear back from you.
"What is this testing stuff anyway?"
That was the official topic.
The result was folks sitting around describing testing at companies where they worked or had worked. This was everything from definitions to war-stories to a bit of conjecture. I was taking notes and tried hard to not let my views dominate the conversation - mostly because I wanted to hear what the others had to say.
The definitions ranged from "Testing is a bi-weekly paycheck" (yes, that was tongue-in-cheek, I think) to more philosophical, " Testing is an attempt to identify and quantify risk." I kinda like that one.
James Bach was also referred to with "Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous."
What was interesting to me was how the focus of the discussion was experiential. There were statements that "We only do really detailed, scripted testing. I'm trying to get away from that, but the boss doesn't get it. But, we do some 'exploratory' work to create the scripts. I want to expand that but the boss says 'No.'"
That led to an interesting branch in the discussion, prompted by a comment from the lady-wife who was listening in and having some pasta.
She asked "How do you change that? How do you get people to see the value that you can bring the company so you are seen as an asset and not a liability or an expense?"
Yeah, that is kind of the question a lot of us are wrestling with.
How do you quantify quality? Is what we do related to quality at all? Really?
We exercise software, based on some model. We may not agree with the model, or charter or purpose or ... whatever. There it is.
If our stated mission is to "validate the explicit requirements have been implemented as described" then that is what we do, right?
If our stated mission is to "evaluate the software product's suitability to the business purpose of the customer" then that is what we do, right?
When we exercise software to validate the requirements we received have been filled, have we done anything to exercise the suitability of purpose? Well, maybe. I suspect it depends on how far out of the lines we go.
When we exercise software to evaluate the suitability to purpose, are we, by definition exercising the requirements? Well, maybe. My first question is, do we have any idea at all about how to judge the suitability of purpose? At some shops, well, maybe - yes. Others? I think a fair number of people don't understand enough to understand that they don't understand.
So, the conversation swirled on around testing and good and bad points.
How do we do better testing?
I know reasonably few people who don't care about what kind of a job they do. Most folks I know want to do the best work they can do.
The problem comes when we are following the instructions, mandate, orders, model, whatever, that we are told to follow, and defects are reported in production. Sometimes by customers, sometimes by angry customers. Sometimes by customers saying words like "withhold payment" or "cancel the contract" or "legal action" - that tends to get the attention of certain people.
Alas, sometimes it does not matter what we as testers say. The customers can say scary words like that and get the attention of people who define the models us lowly testers work within. Sometimes the result is we "get in trouble" for testing within the model we are told to test within. Of course, when we go outside the model we may get in trouble for that as well. Maybe that never happened to you? Ah well.
Most people want to do good work - I kinda said that earlier. We (at least I and many people I respect) want to do the absolute best we can. We will make mistakes. Bugs will get out into the wild. Customers will report problems (or not and just grumble about them until they run into someone at the user conference and they compare notes - then watch the firestorm start!)
Part of the problem is many (most) businesses look at testing and testers as expenses. Plain and simple. It does not seem to matter if the testers are exercising software to be used internally or commercial software to be used by paying customers. We are an expense in their minds.
If we do stuff they do not see as "needed" then testing "takes too long" and "costs too much." What is the cost of testing? What is the cost of NOT testing?
I don't know. I need to think on that. One of the companies I worked for, once upon a time, it was bankruptcy. Other were less dramatic, but avoiding the national nightly news was adequate incentive for one organization I worked for.
One of the participants in the meeting compared testing to some form of insurance - you buy it, don't like paying the bill, but when something happens you are usually glad you did. Of course, if nothing bad happens, then people wonder why they "spent so much" on something they "did not need."
I don't have an answer to that one. I need to think on that, too.
So, when people know they have an issue - like a credibility gap or perceived value gap - how do you move forward?
I don't know that either - at least not for everyone. No two shops I've been in have followed the same path to understanding, either. Not the "All QA does is slow things down and get in the way" shop nor the "You guys are just going through the motions and not really doing anything" shop. Nor any of the other groups I've worked with.
Making the Change
In each of these instances, it was nothing we as testers (or QA Engineers or QA Analysts or whatever) did to convince people we had value and what we did had value. It was a Manager catching on that we were finding things their staff would not have found. It was a Director realizing we were working with his business staff and learning from them while we were teaching them the ins and outs of the new system so they could test it adequately.
They went to others and mentioned the work we were doing. They SAW what was going on and realized it was helping them - The development bosses saw the work we did as, at its essence, making them and their teams look good. The user's bosses realized we were training people and helping them get comfortable with the system so they could explain it to others, while we were learning about their jobs - which meant we could do better testing before they got their hands on it.
It was nothing we did, except our jobs - the day-in and day-out things that we did anyway - that got managers and directors and vice-presidents and all the other layers of bosses at the various companies - to see that we were onto something.
That something cost a lot of money in the short-term, to get going. As time went on, they saw a change in the work going on - slowly. They began talking about it and other residents of the mahogany row began talking about it. Then word filtered down through the various channels that something good was going on.
The people who refused to play along before began to wander in and "check it out" and "look around for themselves." Some looked for a way to turn it to their advantage - any small error or bug would be pounced on as "SEE! They screwed up!" Of course, before we came along, any small errors found in production would be swept under the rug as something pending a future enhancement (that never came, of course.)
We proved the value by doing what we did, and humbly, diplomatically going about our work. In those shops that worked wonders.
And so...
We return then to the question above. How do we change people's perspectives about what we do?
Can we change entire industries? Maybe. But what do we mean by "industries?" Can we at least get all the developers in the world to recognize we can add value and help them? How about their bosses?
How about we start with the people we all work with, and go from there? I don't know how to do that in advance. I hope someone can figure that out and help me understand.
I'll be waiting excitedly to hear back from you.
Sunday, April 22, 2012
On Passion, or Be Careful What You Wish For
Recently I was reminded of something that was said several years ago.
The Several Years Ago part: I was in the middle of a project that was simply not going well. In fact, it was a bit of a train-wreck. Nah, not a bit. It was a complete and total train-wreck. Pick something that would go wrong and it did. In spades.
Yours truly was QA Lead and was overwhelmed. A "target rich environment" does not begin to describe what was going on. Massive effort, huge amounts of overtime to try and control the damage, stop the flooding, stop the bleeding, well, pick a metaphor.
Fact was, the testers were putting in a lot of effort and, frankly not many others were.
So, sitting having an adult beverage, or several, with one of the development managers on the project, he looked at me and said, "Pete, you have a real passion for what you do. You're better at testing and understand software better than an awful lot of people I've worked with. You are really passionate about what you do. That is great. Be careful though. If you're too passionate you can burn out."
That struck me as odd, at the time anyway. How can one be "too passionate"? Is it possible that one can be too involved? Too close to the work? Too passionate?
After all, we have a lot to do and scads of work and... whoa. Why is it that some folks are diving in and going full bore while others are, well, sliding by and doing what seems to be the minimum? Why is it that some people are just, well, not as deeply into making the project work as others?
The Reminder part: So, talking with another tester I know, she was muttering about a project where the developers just did not seem to care, about deadlines, quality of the project, impact on, well, performance reviews, raises, bonuses, and the like. She looked at me and said "Its like they just don't care!"
So, why is it that some people just, well, are not as deeply into making the project work as others? I don't know. Maybe it depends on what is expected, or what the normal approach is for the shop or company or, whatever. Maybe it depends on the nature of the project leadership. Are people being managed, or controlled and compelled?
While what is often called craftsmanship seems hard to find these days, in some places (maybe many places, I don't know), I remember hearing many people speak passionately about being, well, passionate - as a tester, as a developer, or as whatever it is that each one of us is.
I got to thinking some more Friday night and generally over the weekend about this.
When looking for places where everyone is passionate about their work, what does that look like? How do you know when you find it? I used to think I knew. I've worked at places where the majority of people were very passionate about what they did. They wrapped much of their view of their self-worth into their work - so if the project was a success, their efforts were "worth it."
Then, I started wondering what a project that was a success looked like. I suspect it rather depends on the software development group's target audience. Are the people who will be using the results of your work all working for the company you are working for? If so, "market" is a hard concept - unless the results of their work, with the new system, improve so much that the company as a whole performs better because of the many long hours and weekends in the office and ... yeah, you get it.
If the company makes software that will be bought by other companies for use in their business, the combination of sales, licenses, recurring/renewal of contracts around the software and the like will be one measure of how your efforts contributed to a successful project. Likewise, the customer-companies being able to conduct their business better, more efficiently, is another measure of the success of the project.
And so, what about the other signs? What about the places where people are not passionate about their work? What do they look like?
It's easier to find examples of those...
People use "process" as an excuse to not do something. "I'd love to do this, but I can't do X until D, F and L are in place. That is what the process is." (Whether its true or not does not seem to matter.)
People lock into rituals and stay there. Arrive 5 minutes after the "start time"; start laptop/desk-top computer; get coffee; drink coffee, eat breakfast; sign on to network; get more coffee; sign on to email (personal)... etc., leave 10 minutes before official "stop time" to "avoid the rush". Use the, "well, I work a lot of extra hours from home and over the weekend" reasoning. (Oh, laptop is still in the dock on the desk as they are heading home.)
The appearance of work counts more than actually doing work. Lots of reports being filed, status reports, progress reports, papers being shuffled up to leads and supervisors and managers and, of course, process management. This is different than using process as an excuse to not do something. This is taking the literal process and ignoring the intent.
Heroic Behavior is rewarded more than steady solid work. Now, I'm not down on heroes. I've been in that role, and was recently called a hero as well. I mean the false-heroes, the ones who dawdle and obfuscate and put things off and delay, and miss interim deadlines and miss delivery deadlines - partly by using the first three behaviors - and then work massive hours the last week of a project to pull things together and deliver something - and let everyone know how hard they worked to "make this happen."
I bet you can come up with a bunch of other examples. I stopped there simply because, well, I did.
Now, What to Do? If you find yourself working at a shop or department or company like the one described above - where it seems you are the only one who cares - what do you do about it? Ask yourself, "Has it always been this way?" Maybe something changed recently, or not so recently. Maybe the change has been gradual.
Sometimes, it takes you being the one to be burned by this behavior to notice it. Sometimes it has been going on with some people and not others and it is your turn to work with these people and - what a mess.
You can say "Maybe they learned their lesson from this and the next time will be better."
Don't bet on it. There is likely some other reward system in play that they value more than the rewards workmanship, craftsmanship and passion for doing good quality work can provide. Ironically, they may get rewarded from their supervisors for being heroes (even though they created the situation that needed heroes) or "preserving the process" or, whatever.
So, back to what to do.
Your choices are limited.
You can try to "change the culture." This is easier in small companies than in large Borg-like companies that grow by assimilating small companies into the Collective. I know people who have tried to do this. Some were successful; those dealing with the Borg Collective were less so.
You can try to "change the environment." Here I include "process" as well as the nature and flow of the work and communication. You can ask questions and field inquiries and take part in improvement task forces and, and, and... don't let the project slip. I know people who have tried this - myself included. It may work, you may feel more engaged and more aligned with improving the company. At some point you may look back ans wonder what has been accomplished.
You can stop resisting - Accept it for what it is. Turn off independent thought and go with the flow. Collect the paycheck, take the "motivational development" courses and continue to collect the paycheck.
Your nuclear option - Leave. Go somewhere else. That is what I did with the company in the first part of this post. I packed it in. I do not regret it. My other options seemed so improbable. I tried them - the engage thing, the culture change thing. I could not bring myself to stop resisting.
Please, never choose to stop resisting. Never conform that much. We are testers. We cannot be good testers if we stop questioning, and that is what that option requires.
Labels:
Passion,
Process,
Process Improvement,
Stress,
thinking
Monday, December 12, 2011
CAST 2012, The Thinking Tester - Do You Know the Way to San Jose?
This may well be the shortest blog post I've published in some time. There may be some rambling, but less than what I normally have. Don't look for a deep, thought-provoking idea buried in an apparently pointless story. It's not there.
So, here's the point. If you are a Thinking Tester then you need to know about CAST 2012. The Conference for the Association for Software Testing is scheduled for July 16 through 18 in San Jose, California.
The Call For Participation is up (here). There are three basic types of presentations:
- Interactive Workshops (140 minutes);
- Regular Track Sessions (70 minutes with at least 25 minutes for discussion);
- Emerging Topics (20 minutes with at least 5 minutes for discussion).
The information you need to know about submitting proposals is on the website at the link above.
If you are a Thinking Tester, I encourage you to consider attending CAST. If you are interested in telling people about your ideas, I encourage you to consider submitting a proposal.
Friday, November 18, 2011
Thoughts from TesTrek 2011 - Part 2
Thursday morning at TesTrek in Toronto started with a keynote presentation by Michael Mah from QSM Associates on Offshoring, Agile Methods and the idea of a "Flat World." I could not stay as I was presenting in the first track session immediately following. My presentation on Integration Testing went over reasonably well, I thought. There were a fair number of people who were willing to participate and generally engage, and some interesting discussion afterward.
To unwind, I went to Fiona Charles' session on Test Strategy. She has given this as a full-day workshop. Cramming it into a 90-minute session was challenging, but I thought it gave a reasonable idea of the challenges of looking beyond templates and boilerplate.
I had a nice lunch conversation, again with Fiona and a handful of other people sitting around a table.
The balance of the day was a rush of impressions for me. I know the afternoon sessions occurred. Still, I found myself in interesting conversations with people - many of whom I have named already. The thing is, without establishing relationships in the past, these conversations may not have happened.
Much of what I learn at conferences occurs in the "hallway track" - talking with people and discussing concepts of interest to us, whether they are on the program for the conference or not. There are a lot of people smarter than I am, with more experience than I have. The fun part for me is learning and sharing what I learn and have experienced.
The beauty of smaller conferences is that they give the intimacy that allows participants to meet a large number of people if they are willing to step outside of themselves. I can not encourage people enough to take advantage of that opportunity.
One thing that struck me was that I saw only a few people talking with other people they did not work with or know in advance. I'm always curious about that. The thing I consider to have been fortunate in is that I learned to swallow hard, overcome my shy, introspective tendencies and talk with people. Walk up, say "Hi, I'm Pete. Are you enjoying the conference? What have you been learning?" Sometimes it leads to interesting conversations.
Other times it is a little less interesting. Folks say "Oh yeah, I have a session to go to. Maybe we can talk later." OK, no worries.
The thing is, I learned some time ago, and have blogged about it, that you need to allow time to talk with other people. It is a remarkable conference that has really significant, information-packed sessions in every time slot. Now, this is not a dig at TesTrek, don't get me wrong. I just find it interesting that there was not as much socializing/networking/conferring as I would have expected. (There may have been more, in places I did not find or hear about.)
I tweeted a few times inviting people to talk about anything to do with testing. Now, I had some fantastic conversations with Fiona, Adam Goucher, Tommas, Stephen and more. But what I found interesting was that of the tweets I sent out, the invitations (including the link to the blog post inviting people to confer at TesTrek) resulted in one person saying "Are you Pete? I'm Heather! I saw your tweet!" That person was Heather Gardiner, with tulkita Technologies. We had a nice conversation, then we both had to deal with other things.
The thing is, and I think this holds for most testers, don't be afraid to meet and talk with other testers. Even folks like conference speakers, yeah, the "experts", like learning new things. You may not agree with them, and they may not agree with you. But people who are thoughtful testers with a desire to learn and to share are good sources for you to learn from as well.
This, I think, is the great opportunity for people going to conferences: meeting people with a different viewpoint and learning. Smaller conferences, like TesTrek, give you the opportunity to meet people like you and have the chance to talk with every attendee.
Meet people. Talk with them. You never know what you might learn.
Thursday, November 17, 2011
Thoughts from TesTrek 2011 - Part 1
Last week I was in Toronto for the TesTrek Symposium hosted by Quality Assurance Institute. There were, what seemed to me, some 200 to 250 testers hanging out and talking about testing. In downtown Toronto. Cool.
So, I had the opportunity to spend time with people I had met briefly over the last two years I've been there. Yeah, it seems hard to believe this was my third TesTrek. Go figure.
The advantage of returning to the same conference, particularly if it is hosted in the same city, is that you get to catch up with and get to know people you met there better than you can in a single meeting. In my case, I got to have a really nice series of conversations with both Tommas Marchese and Stephen Reiff - both of whom I had met previously, but this time we had the chance to spend time together, chat and learn.
Other people I see fairly frequently, mostly at other conferences, included Nancy Kelln, Adam Goucher and Fiona Charles. These folks are smart, capable testers. You hear a lot of marketing hype about "thought leaders" or "technical experts" or other buzzwords. You know what's really interesting? The people who are the real deal don't take those titles on themselves.
Monday and Tuesday at TesTrek consisted of a Manager's Workshop. This is an interesting model in that the participants break into groups and discuss topics of interest to, well, test managers. The times I've been involved in these workshops have been mentally invigorating, if not exhausting. This year, the day-job kind of got in the way so I could not attend and participate.
I drove to Toronto on Tuesday, checked into the hotel, then went looking for the fun. I found the folks from the conference, like Darrin Crittenden and Nancy Kastl. I had the chance to sit down and have the first of many chats with Fiona and Tommas, and with Nancy when she arrived from Calgary.
Wednesday opened with a "Pre-Keynote" by Tommas Marchese. His topic was "Heads Up Testers: Striving for Testing Excellence." In short, it was a call to action for testers to break out of the mold that some companies expect testers to stay in. He had several solid points and I thought it was an excellent start to the day.
The keynote following this (after all, that was a "pre-keynote") was a panel presentation with representatives from Microsoft, Micro Focus, HP and IBM-Rational. I did not find the format very engaging, and thought it would have been better to allow greater opportunity for audience participation, questions and the like.
The rest of the day was broken into workshop and presentation sessions. Wednesday these consisted of presentations around Test Measurement, Cloud Computing, Test Leadership, Security Testing and others. Nancy Kelln gave a workshop on Test Estimation that had originally been intended to be given along with her Partner-in-Crime/Conferences, Lynn McKee. She challenged people's expectations, just as I thought she might.
Tommas Marchese boldly gave a session on regression testing that he was not scheduled to give. Filling in and giving a presentation not your own can be a problem. He did a respectable job, I thought, and made some good points.
After the opening reception, with some more conversations, a handful of us went to the Elephant & Castle around the corner for a quiet pint and conversation. I retired early to rest for the next day and prepare for my presentation.
Thursday, August 11, 2011
CAST 2011, Day 2, A Brief Summary
Again, I had intended to write this last night. It is amazing to me how mentally and physically drained I am by the end of each day at conferences. So many smart people, it seems impossible to keep up.
Right, so, people. Had some really nice hallway conversations with Elana Houser, who was in the BBST Foundations course with me. We did not always agree with each other in the course; she is, however, a very good thinker. Lynn McKee, Nancy Kelln and Selena Delesie had nice chats with me and gave great insights on discussion topics. I also briefly met Karen Johnson - OK folks, she is smart and wise - doesn't always come in the same package.
Amazing talk(s) with Michael Hunter - yeah, the Braidy Tester guy. He really is as good and inspirational as his blog posts make him seem. Oh, now then, let's see. Had some fantastic chats with Ajay Balamurugadas. Ben Yaroch is crazy smart and a hard worker - really. Michael Larsen really DOES have as much energy as his podcasts make it seem like he does. Let's see. Also had some good visits with Justin Hunter, Paul Holland, Bill Matthews and Johan Jonasson - Phil McNealy is a good person to know as well.
One of the highlights for me was seeing the Emerging Topics track come together and be a reality. Some of the speakers had a bit of a rough go. Many had never presented outside their own company before - WHAT a daunting task! Yeah - Present a 20 minute idea in front of some of the best testers around. YEAH! Still, everyone made it through the experience, good information and ideas were shared - even if folks were a little nervous.
I had a chance to drop in on the tail end of the Open Season of the BBST Experience track. Cool Q&A session, lots of energy. The Lightning Talks, which I dropped in on after the BBST talk ended, were interesting - "quick hits" of ideas. Fun.
I ended up having an interesting conversation with Felipe Knorr Kuhn, Gary Masnica, Phil McNealy and Lanette Creamer. Job Titles, Job Roles, What to Do, How things work... highly enjoyable, mentally invigorating. This set me up for a good session in the EdSIG - Education Special Interest Group.
Michael Larsen, I, and some dozen other people talked via Skype with Rebecca Fiedler and Cem Kaner (who could not be at CAST). Good ideas, much meaty discussion - look for another blog post on that before too long.
It was an amazing day.
Oh, I did not get elected to the Board of Directors for AST. Now, some folks tried to console me; I was inconsolable. Well, technically, literally, there was nothing to console me about! I believe that each of the five candidates was eminently qualified to serve on the board, and three were selected. This is good.
So, this morning, I find myself sitting at a table (starting this blog post, actually) when Michael Hunter sat down to chat and have a little breakfast. Griffin Jones dropped his pack and went for a little breakfast, but got tied up. As it was, Michael and I had a great visit before we headed off to Michael Bolton's workshop on Test Framing. That, too, is another blog post.
Monday, May 16, 2011
Agile or You Keep Using that Word; I Do Not Think It Means What You Think It Means.
It's funny. Many of the more recent blog posts have come from ideas or thoughts or reactions to comments and discussion at the local tester group meetings. I think there's a blog post in that, but this one is triggered by an idea I've had for some time. Of course, it came together clearly during a lightning talk at the most recent meeting.
Yes, yet again the local testing group had gathered to discuss testing and eat pizza. I don't know if it is the collection of bright people sitting around munching on pizza just talking - no slides, no formal agenda - just folks talking about testing - or if it is the collection of minds engaged in thought on the same topic that I find so interesting.
The Trigger
One of the presentations discussed "The Fundamental Flaw in Agile" - and was based on the presenter's experience around Agile environments in software development shops. Her premise, which I can find no fault with, was that most shops "doing Agile" make the same mistake that most shops did with "Waterfall" and experience very similar results. That is, the belief that there is a single inerrant oracle for "user information" for software development projects.
Mind you, she is no slouch and is extremely talented. In fact, one statement she made was the key that allowed my mind to pull things together, and that, in turn, led to this blog post. You see, sometimes (like at conferences or presentations) I use twitter to take notes. Other times, I outline ideas then add ideas around that outline and that turns into a blog post. Then sometimes that blog post turns into the foundation for a presentation or longer paper.
You see, I've worked with some really bright people in agile environments. I've also worked with some really bright people in Agile environments. I've also had the pleasure of working with some really bright people in Waterfall environments.
Some of the people in the first group (agile) are also in the third group (Waterfall.)
Nah, Pete - you're kidding, right? Everyone knows that Waterfall is not agile.
Really?
I'd argue that the way most people functioned and called it "Waterfall" was anything other than "agile." It certainly had little to do with the Agile Manifesto. Now, I have some theories around that but they will wait for another time.
I might suggest that the ideas expressed in the Agile Manifesto were the extreme antithesis of how many folks "did Waterfall." I certainly would suggest that the idea of using "Agile" to fix software development practices of some shops is equivalent to the silver bullet solution that gave us project managers and business analysts and other folks getting involved in software development with limited experience in the field themselves.
Now, an aside. I do believe that some very talented people can help move a project nicely. They can be Project Managers. They can be Business Analysts. They can be Programmers and Testers and DBAs and on and on. The interesting thing, to me, is that when I got into software development, the common title for those people doing the bulk of that work was "Programmer." Anyone else remember when programmers were expected to sit down with business users or their representatives and discuss in a knowledgeable way how the software could help them do their work better? Now, avoiding images of people getting excited and yelling "I'm a people person!" why is it that we figure people who are good at technology stuff should be un-good with people stuff? I don't know either. But for now, let's leave that and consider it in another blog post. OK?
Right. Where was I? Oh, yes. Silver bullets.
Many shops where I've seen people "doing Agile" seem curious to me. In fact, I get curious about them in general. I ask questions and get answers like "No. We're Agile so we don't need documentation." A close second is "We're Agile so we don't need to do Regression testing." Third most common is something like "We're Agile so we don't track defects..." (now up to this point, no worries; the worries normally come after) "... because we don't do documentation."
Thus the thought that pops into my mind...
"I do not think it means what you think it means."
Now, I'm not the sharpest knife in the drawer. I make a lot of mistakes and I have said some really un-smart things in my time. Having said that, I sometimes hear folks selling "Agile" to people where neither the person selling nor the potential customer/client has a decent idea - or at least a more clearly formed idea than I do - of what "Agile" means. I mean, come ON!
Listen to what you are saying! "Oh, you have communication problems! That is because you use Waterfall! Agile fixes that! You have customers not getting what they need! That is because you use Waterfall! Agile fixes that too!" And on and on and on...
sorry. got excited there a moment.
Here's what I'm getting at. There are some really smart people who firmly believe that Agile methodologies are fantastic. I think there is a lot to recommend them. Really, I do. I can agree with everything listed in the Agile Manifesto - Really!
I disagree with the way some people interpret Agile. Why? Because they are missing the point. In my mind, the entire purpose - including dropping the stuff that is not needed, that does not move the project forward, etc., boils down to one thing: Simplify Communication.
By that I mean exactly that - help people communicate better by breaking down the barriers that get put in the way by process or by culture or by evil piskies.
It seems to me, that is the greatest flaw in "Agile."
Without good communication, Agile projects will fail. Full stop. If you do not have good communication, nothing else matters.
When you replace one set of burdensome processes with another and wrap it in the banner of "Agile" have you really made it better? Really? Is the process the key? Really?
Do me a favor and grab a dictionary and look up the word "agile." Go ahead, I'll wait.
OK, you're back? I bet you found something like this...
Adjective: Characterized by quickness, lightness, and ease of movement; nimble.
Wait. Did you look up "Agile Development" or "agile"? Yeah, consider what the word means - not the methodology but the word.
Now. Someone please explain to me how folks demanding that something be done because "that's what you do when you're Agile" is really agile. If they are following form over function - doing something by rote - without explaining to the rest of the team why this is important (I understand that each Scrum master, or whatever the "leader" is called, needs some leeway in approach), then will the team see any more value in this than in the "evil" methods of "Waterfall"?
Then again, in my experience, what is the difference between teams that were successful and those that were unsuccessful in Waterfall? Communication.