This year is drawing to an end. I know it is a tad lame to have a "look at the year that was" or any of the other cliche laden phrases that tend to be used to introduce these things.
The thing is, it has been an interesting year for me personally and professionally.
Let's see. General stuff. I retired the blog attached to my defunct drumming-with-bagpipe-bands website and replaced it with this one. It had been in the "thinking about" phase for a long time, and I finally decided to do it. Ya know what's interesting? As I think about other stuff - often non-testing stuff - something pops into my head about software development or testing or SOMETHING. Sometimes that results in a blog post. Other times it leads to sitting in my big green comfy chair, sipping a brandy and thinking.
There was interesting work stuff at the day-job, with interesting challenges early in the year. With a flurry of emails, I found myself and the boss registered to attend QUEST in Dallas, Texas. This was a huge surprise, as I was not expecting it at all given limited budgets and having gone to TesTrek in Toronto the previous October. QUEST was interesting in that I met a number of people whose writings I had read but whom I had never met in real life. I also got to reconnect, in person, with people I had met before.
In May I received confirmation that I COULD attend CAST, which was being held about 15 minutes from my house. Then in June it became clear that the scheduled release would conflict with attending CAST, so the company would neither pay the conference fee (something I was not too worried about) nor grant time off. That one was a problem. July rolled around and schedules shifted again: I could be granted the time to go to CAST IF I was available during the conference. COMPROMISE! COOL!
On the Sunday evening of CAST, I had a great dinner and conversation with Fiona Charles and Griffin Jones and the lady-wife at a neighborhood Italian place. Recipes from Sicily and friendly folks and good wine and great conversation - little of it around testing, but all of it applicable to testing. What a great night.
Another night brought a fantastic dinner out with a bunch of folks - yeah, I know I blogged about that shortly after the event - and it is still a great memory.
Dragged the boss in one evening to meet some of the great ones of the craft who would be there. Had a fantastic evening out with Nancy Kelln and Lynn McKee and the boss - more good wine (notice a trend?) and a great conversation.
Then, a bombshell was dropped that left me gob-smacked. It seems one of our dinner companions had a conflict and could not fulfill a speaking commitment in Toronto - would I be interested in being suggested as an alternative speaker? Holy Cow. I thought about it briefly... and said yes. One thing led to another, and I did indeed speak at TesTrek in Toronto that October. Yeah, I blogged about that, too.
Stuff at the day-job continued to be interesting - meaning, really, really, busy.
So, things progressed. I talked with the boss about some interesting emails. The result of those chats was submitting proposals to a couple of conferences. I proposed a session similar to the one at TesTrek, but from a more advanced perspective than the general view there. The exciting thing was that the boss and I also submitted a proposal for a joint presentation based on our experiences starting a QA/Testing team from scratch.
One conference said "no thanks" (although the boss was asked to consider a presentation in a different area); the other accepted both proposals! Yeah, that rocks. I get to hang with the cool kids at STPCon in Nashville this coming March.
More projects were successfully rolled out at the day job. There are some interesting things that seem to be happening there that may lead to more ideas for blog posts.
The local testing group and its attempts to spread its wings and fly have been great fun to watch and be a part of. Through it, I've met some terrific people, like Matt Heusser and Melissa Bugai, and have had fun sharing the adventure with them.
At home, it was a good year in the garden. We had a good crop of strawberries and peppers and tomatoes, although some of the other plantings were surprisingly less prolific than expected. Several big projects got done - and inspired thoughts about, then blog posts about, software and testing.
We had some sadness in our lives this year. Stuff that led to serious rounds of soul-searching for "what is this all about." We also have had some great joys in our lives this year. For that, I am grateful. I don't know what 2011 will bring, but I am looking forward to the next year.
Friday, December 31, 2010
Thursday, December 30, 2010
A Hero's Fall From Grace or Why a Big Pay Raise Is Better Than a Statue
A couple of things happened recently that made me think of things I probably would not normally think of.
I know a fellow who works for a company that has one or two (depending on needs) "major" releases each year. They also generate and distribute monthly maintenance releases to address defects, bundle hot-fixes neatly into a package and all the other good things that come from such things.
A recent release had some "issues." Lots of issues. After this fellow worked 12 straight "work" days, with a Saturday and Sunday thrown in as well, his boss commented that he had "saved them again" and was a "hero."
When thinking about this, I got to thinking about Selena Delesie's blog posts on "hero culture."
When teams pull together and work to overcome obstacles, amazing things can be achieved. Sometimes, the issues encountered require extra effort and creative thinking and good, if not great, communication between testers and developers to find solutions. This is a fantastic thing and is, I believe, the point of being a "hero" in our profession.
The part that makes me think, if not wonder, about this is when this becomes the rule rather than the exception. When the same team that pulled together, found solutions, and worked toward delivering the best product that could be delivered is expected to work long hours regularly - to repeat the same "pull out all the stops" effort for every project, big or small - there is a danger.
That danger is that the edge, the creative thinking, the "Hey, what if..." factor gets worn down.
Now, I know there are a lot of potential reasons for this to happen. Some may be within the team itself. There may be factors at work where individuals like the drama, the "rush" of the massive push at the end.
There may also be issues around the development methodology or practices. If the only unit testing done is validating the "happy path," then builds are likely to come fast and furious, depending on how quickly the detected defects are addressed.
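As an aside, the happy-path point is easy to show in code. Here is a minimal sketch - illustrative only, with a made-up divide() function rather than code from any project mentioned here - of the difference between checking only what should work and also checking what happens when it should not:
import unittest

def divide(a, b):
    return a / b

class DivideTests(unittest.TestCase):
    def test_happy_path(self):
        # The test many teams stop at: valid input, expected output.
        self.assertEqual(divide(10, 2), 5)

    def test_divide_by_zero(self):
        # The unhappy path: assert the failure mode rather than avoiding it.
        with self.assertRaises(ZeroDivisionError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()
A unit suite with only the first kind of test is exactly what lets builds "come fast and furious."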
Another possibility is that someone, possibly in the test group or outside the test group but involved in the projects, likes the chaos - the frantic push is something they thrive on.
Whatever the cause, when every project turns into a heroic stand worthy of an action movie, something is seriously wrong. When testers, or anyone else, are expected to perform super-human, or heroic, feats every project, the edge will be blunted. The probability of significant defects being missed increases with each cycle. Eventually, people will be working less effectively no matter how many hours, or how "hard" they are working.
In some shops, the option to simply say "No, I won't do this" may be a legitimate one. In shops where contractual requirements don't allow such a response, I don't have a solution. Now, I am aware of the need to keep regular releases going, I'm just not sure of a solution that works everywhere.
What I do know is that the bosses, the leaders at your shop, need to make sure they don't need their people to be heroes each and every release. Horatius held the bridge, but he had help. The lone cowboy was a myth, just like Superman.
Friday, December 17, 2010
On Exploration or How You Might Be Testing and Not Know It
I had an interesting conversation earlier this week. A colleague dropped into the cube, grabbed a handful of M&M's and muttered something about how she kept finding defects and wasn't able to get any test scripts written because of it.
OK - That got my attention.
So, I asked her what she meant. It seems the project she was working on was not terribly well documented: the design was unclear, the requirements were mere suggestions, and she had already received several builds. So, she was working her way through things as she understood them.
She explained what was going on... She intended to make sure she understood the features correctly so she could document them and write her test scripts. Then, she could start testing.
The problem was, she'd try different features and they didn't work like she expected. So, she'd call the developer and ask what she was doing wrong. Problem: She wasn't.
The issue, as she saw it, was that the code was so unstable that she could not work her way through it enough to understand how she was to exercise the application as fully as possible. To do that, the standard process required test cases written so that they could be repeated and "fully document" the testing process for the auditors. Because she kept finding bugs just "checking it out" she was concerned that she was falling farther and farther behind and would never really get to testing.
More M&Ms.
So we talked a bit. My first response: "Wow! Welcome to Exploratory Testing! You're going through the product, learning about it, designing tests and executing them, all without writing formal test cases or steps or anything. Cool!"
Now, we had done some "introduction to ET" sessions in the past, and had gradually ramped up the time dedicated to ET in each major release. The idea was to follow leads, hunches and, well, explore. The only caveat was to keep track of the steps you followed, so that "unusual responses" could be recreated when they were encountered.
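For what it's worth, that "keep track of your steps" caveat needs nothing fancy. A bare-bones sketch (the file name and the sample notes below are invented for illustration) is just a timestamped, append-only log:
from datetime import datetime

def note(action, log_path="et_session.log"):
    # One line per action, timestamped, so an "unusual response"
    # can be retraced by hand later.
    with open(log_path, "a") as log:
        log.write(f"{datetime.now():%Y-%m-%d %H:%M:%S}  {action}\n")

note("opened order entry screen as user 'clerk1'")
note("entered quantity of -1 and the app accepted it (!) follow up")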
Explaining that the process she was working through actually WAS testing led to, well, more M&Ms.
The result of the conversation was that the problems she was encountering were part of testing - not delaying it. By working through reasonable suppositions on what you would expect software to do, you are performing a far more worthwhile effort, in my mind, than "faithfully" following a script, whether you wrote it or not.
Mind you, she still encountered many problems just surfing through various functions. That indicated other issues - but not that she was unable to test.
That thought prompted another handful of M&Ms, and a renewed effort in testing - without a script.
Thursday, December 16, 2010
Measurements and Metrics, Or How One Thing Led to Another
So, once upon a time, my dear daughter and her beau gave me a combination "Christmas and New Job" present. Yeah, I was changing jobs in late December... What was I thinking? Not sure now, but it seemed like a good idea at the time.
Anyway, this gift was an M&M dispenser. Yeah. Pretty cool, eh? Turn the little thingie on the top and a handful of M&Ms would fall through a little chute and come out the bottom. Not too shabby!
So, move along to the summer of 2008. The company I was working for had a huge, big, ugly release coming out. It was the first time with a new release process and schedule, and nerves were pretty thin all the way around - developers, testers, support folks, bosses, everyone. Well, being an observant fellow, I realized that we were consuming a LOT of M&Ms. Of course, it helped that the dispenser was at my desk, in my humble cube/work-area.
So, I started keeping track of how much candy we went through. The only folks who partook of these multi-coloured delicacies were the QA/Tester group and a couple of brave developers who realized that we were not contagious and they could not catch anything from us. (They also learned that they might learn something from us and we testers might learn something from them.)
What I discovered was kind of interesting. As the stress level went up, so did the consumption of M&M's. When things were going better and looking good, consumption went down.
Using a simple Excel spreadsheet, I added up the number of bags eaten (it helps that they have the weight printed on them) as well as the partial bags each week. Then, using the cool graphing tool in Excel, I could visually represent how much we went through - and, by correlation, the level of stress the team was under.
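(For the spreadsheet-averse, the same bookkeeping is a few lines of Python. This is a sketch of the idea, not what I actually used; the mm_log.csv file, with one date,grams row per bag or partial bag, is hypothetical.)
import csv
from collections import defaultdict
from datetime import date

weekly_grams = defaultdict(float)

# Tally grams of M&Ms per ISO week so the totals line up with the release calendar.
with open("mm_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        y, m, d = (int(part) for part in row["date"].split("-"))
        week = date(y, m, d).isocalendar()[1]
        weekly_grams[week] += float(row["grams"])

for week in sorted(weekly_grams):
    # A crude text chart: one * per 50 grams consumed that week.
    print(f"week {week:2d}: {'*' * int(weekly_grams[week] // 50)}")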
After about a month, I "published" the results to the team. SHOCK! GASP! We went through HOW MUCH??????
Then the boss sat down with me and looked at the wee little chart. "What was going on during this week?" Ah-HA! The first obvious attempt to match events to what the graph was showing. I tracked usage for the rest of the year. The amount the team consumed over the six months or so that I tracked lined up remarkably well with due dates and, interestingly, defects reported in testing.
One thing led to another, and the dispenser was put away for a time. In mid-2009, for reasons which now I don't recall, the M&Ms came back out. As the crew realized this, consumption went up. And up. And up. Eventually, I noticed that the same pattern demonstrated before was coming back.
I learned two things doing this exercise (which I continue to do).
One is that it is possible to measure one thing and be informed about another. Now, I am well aware of the First and Second Order (and other) Measurements described by some of the great ones in our craft. This exercise brought the idea home to me in ways that the theoretical discussions did not.
The other: sitting at a desk and making a meal of M&M's is a really, really bad idea.
Monday, December 6, 2010
Not the Happy Path or I am a Hi-Lo
I was at a local tester meeting tonight. Tons of fun and great conversation. There was a student from a local college attending. In the course of the evening, the conversation turned to the dangers of trusting the "happy path." The student asked, "What do you mean by that?"
So, we explained about testing only for what "worked" and not investigating the other, more problematic and probably more error-prone areas. In the midst of this, a story from over 20 years ago flooded back into my memory.
Mind you, it influenced me greatly at the time. It led me to some of my early revelations about software, development, testing and "revealed truth."
When the IBM PC AT was state of the art, I worked as a developer (programmer) for a small manufacturer that had its own warehouses and distribution center for its finished product. It was a family-run company located in fairly old buildings dating from the late 1800s and early 1900s. One individual there was the nemesis of the software development folks.
He was in charge of the warehouse - both finished products and component pieces. Any software running on machines in the warehouse had to be run past him for approval. These machines were scattered around the various floors of the warehouses. Now, these warehouses were monsters. Support posts were massive beams, 24"x24". The PCs were usually located near a beam.
The very old warehouses had a very small amount of leeway for placing pallets and the like. Placing a case or pallet even a few inches away from where it was supposed to be could cause a fair amount of problems for the hi-lo operators moving material from one area to another.
The curious bit was that at least once a week, a hi-lo would hit (referred to as "bump") a support beam. This was usually the result of navigating around misplaced pallets. Sometimes it was simply the operator missing a turn. Once in a while, they'd hit the power conduit that powered a PC on an early network connection. Once in a great while, they'd "take out" the PC itself. Oops.
Back to my story.
This same nemesis of software development was finicky. Extremely finicky. He wanted to make sure that any data entered could be retrieved under any circumstance. If the user hit "enter" or "save," he expected the data to be fully retrievable.
His favorite tactic during demonstrations of changes or enhancements was to have the demonstrator enter component or finished-part information. He'd sometimes have the demonstrator repeat the process. In the middle of the repeat, after clicking "save" or going to the next page, he'd say "I'm a hi-lo" and unplug the power cord.
He'd count 20 and plug it back in.
Then he'd sit down next to the demonstrator and say "Show me what you just entered."
If you couldn't, he refused to accept the change until it could pass his "test."
How much work is it for your users to recover their work after a "that will never happen" event?
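His "test" maps directly onto a pattern developers still reach for: never overwrite the only copy of the data in place. The sketch below (the file name and record format are invented, not code from that shop) writes to a temporary file, forces it to disk, then atomically swaps it in, so a power cut mid-save leaves either the old data or the new data on disk, never a half-written mix:
import json
import os
import tempfile

def save_record(path, record):
    # Create the temp file in the same directory as the target so the
    # final rename stays on one filesystem (which keeps it atomic).
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(record, f)
            f.flush()
            os.fsync(f.fileno())  # push the bytes to disk, not just the OS cache
        # Atomic swap: a reader (or a pulled plug) sees the old file or
        # the new one, never a partial write.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise

save_record("parts.json", {"part": "1138", "qty": 42})
Pull the plug right after save_record() returns and "Show me what you just entered" still has an answer.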
Winter Testing Workshop or How to Go Sledding With No Snow
I found myself testing an application for the day job over the weekend. Thanks to the wonders of reasonably modern technology, and a decent broadband connection, I was able to do so from the comfort of home.
Here I was, Sunday afternoon, sitting at my dining room table connected to the office running tests that I needed to work through. It was a lovely day. Cold, but not terribly. The sun was even trying to peek out from the clouds that had been hiding it for most of the last week. We had some snow the Wednesday before - not quite two inches or so on the ground. By Sunday early afternoon, there was none on the pavement or sidewalks, and much that had been in the grass / yard had returned from whence it came.
So, as I was working my way through a log file, I heard an obviously frustrated child outside. I looked up and saw the wee kids across the street looking quite perplexed. They wanted to go sledding on the small bank in their yard, leading down to the sidewalk. Problem: most of the snow was gone, so the sleds/slider-thingies they had simply were not working well. Sledding was pretty much out of the question - particularly when you're between the ages of 6 and 9.
When you're stuck on a testing project without any clear way forward, what do you do? Send a terse email demanding whatever you need from whomever you believe should get it to you? I tried that when I was younger and greener in software testing than I am now. It didn't work so well.
How 'bout rail against the unfair universe? "Why do we do things like this?!? This is AWFUL!" Yeah, good luck with that, too.
Or, maybe, you could look around and see what options you have, even if they are so far outside the realm of possibility that all the "experts" would say "Don't waste your time!"
The kids across the street chose the third option. They put their two sleds next to each other at the top of the "hill" that is the bank in their yard. Then, while the youngest held them down so the wind would not blow them away, the older two used a) a garden rake and b) a snow shovel to gather enough snow from the REST of the yard to make a run wide enough for both sleds - one that ran down the bank, across the sidewalk, and ended with a small berm (of snow) to keep them from going into the street.
They then proceeded to have a good 90 minutes of fun doing something that the "experts" (grown-ups) would have told them they could not possibly do.
A 9-year-old can think that creatively. Can we?
Friday, December 3, 2010
Of WikiLeaks and Diplomats or Software and Trust
Um, unless you've been in a cave the last week or so, you've heard about the recent leak of diplomatic "cables" (what a lovely anachronism in 2010 - not the leak itself, but the idea of telegrams).
So, listening to the radio on my way home from work last night, a commentator likened the situation that the US and its diplomatic "partners" are in to a teenager's private comments about their friends getting back to those friends. All of them. At once. The commentator went on to talk about how the parties would need to rebuild trust in order to rebuild the relationship.
That got me thinking about some conversations I had a while ago, both in person and by email. The thing is, there was one really simple theme running through all of them. The entire development organization - not just the testers, not just the developers or designers or BAs or PMs, but all of them - must trust each other to be doing the best they can, the best they know how to do.
If that trust is lacking, the group will not be able to function properly. If one section of the group believes themselves superior in some way, that will show through to all the groups and will be as destructive to the overall relationship as, well, having private communications made public.
Tuesday, November 30, 2010
Through the Looking Glass or the Fear Factor of Management
I finally have a bit of time to try and catch up on some reading. I don't know how strange this is, but I tend to read the same book two or three times, particularly if I can only read small segments at a time. I'm not a widely travelled road-warrior spending long hours waiting in airport terminals, and when I am sitting in airport terminals I tend to have a stack of day-job stuff to read and respond to, so I find myself falling farther and farther behind in my reading. The result is, I sometimes get to read a paragraph or two of a book, then get called away to deal with something.
I've been working my way through two books over the Thanksgiving weekend, both on software. Which ones do not really matter. I resolved to start the smaller one over completely and see if reading entire pages at a time made it better. (It did.)
I have also been trying to catch up on my long-list of blog posts that I intended to read and see what the great ones of our craft can teach me. The answer was: "Quite a bit." One stuck out though and prompted this post.
Selena Delesie wrote some time ago about Yes Men and the managers who like them. I had read enough of that post to say "I need to look that up when I get a moment." That moment finally came. Toward the end of her entry, Selena wrote, "Have you worked with or for someone who is a dictator-type who thrives on working with ‘Yes’ Men?" This is my response:
Yes. To make matters worse, I did not even report to that manager, either directly or indirectly. I was in a completely different reporting line; my manager was his peer. However, as he managed a large development group I needed to work with (as QA and BA and PM), it presented all sorts of challenges. Forget that. It was not a "challenge." It was a "pain."
Management By Intimidation is the best way I can think of to describe his approach. Really, it was un-good. People who disagreed with him, or had an opinion that did not exactly match his own, were belittled, often publicly, or (as I learned first in private conversation, then first-hand) had their position/employment threatened. Fear was his great motivational tool.
People who did not work for him but were in meetings with him could count on having any statement challenged, any assertion questioned. "You can't prove that, you have no evidence!" Never mind that the previous 15 minutes had been spent laying out evidence to support the statement he was (gratuitously) challenging. And even when assertions were presented as "possibilities," you could be certain that anything in conflict with what he wanted done would be publicly thrashed.
Other managers were afraid of what he would say to the VP in private. He created a mystique of "getting things done" at all costs.
In a matter of months, from the time when he first joined the company to when I needed to interact with him or his people on a daily basis, this job went from the best job that I had ever had to the absolute worst one. In truth, I learned a lot during that time.
Managers in the business units were frustrated. When speaking with them, you know, doing my job, about needs and business function, several actually hung their heads and said "It doesn't really matter, Pete. No matter what I say, he's going to do what he wants to do and tell me this is the way it has to be." When I asked why they did not go to their leadership and look for support, to a man (they really all were males) the response was "If I don't put up with this, he won't have his people do what I really need to have done."
One interesting thing was if you did not have to interact with him to get your job done, and he needed you, you were the best buddy he had. Pals for life! Until there was a change or he did not get something he wanted.
This guy was also a huge believer in bell-curves. Particularly when they were applied to people. He also loved metrics. There was never a metric he did not proclaim the great value of, then manipulate to his own ends. Flagrantly. He also knew that no one would call him on it.
Finally, I did. Publicly. In a manager meeting. With the VP present. Hell hath no fury like a bully and a liar called on his behavior. I knew it would cost me. I did not care. In the end, I could not work in an environment where people were truly afraid of the consequences of speaking out. I would no longer be complicit in an environment where taking a stand on the moral high ground seemed more fearsome than what the toxicity was doing to each person as a person.
I landed a new position. I did not realize how the toxicity of the last one lingered, and I made some mistakes there. Nothing huge, but enough that my outlook had become more confrontational than it needed to be. I learned from that, too. That's another blog post, though.
I said I learned a lot there. I did. Here's part of what I learned:
- Managers may not be leaders, but they need to manage well;
- Human Mistakes are learning opportunities, not something whose outcomes should be dreaded;
- Intimidation only works if one is willing to be intimidated;
- Sometimes in the office, just like in the schoolyard, bullies will collapse when confronted by a united opposition;
- Make sure you are not the bully.
Labels: communication, leadership, lessons, manager stuff
Monday, November 15, 2010
What is Wanted and What They THINK is Wanted or Why Don't They Listen?
A funny thing happened the other day. The 17-year-old grandson was helping my lady-wife hook up a VCR (remember those?) to a TV. Now, the story is, the lady-wife has a "studio" where she makes quilts and other fabric stuff. She wanted a TV in the studio, with a VCR, so she could play instructional videos for techniques she hasn't tried. Since she has a large collection of VHS tapes, her plan was to simply hang on to them and play them on the "old" TV with the "old" VCR hooked up.
Well, the dear grandson looked at her TV and VCR and the cables and pronounced, "this will never work." It seems she needed a converter and a completely different set of cables to make the TV work. Her response was simple: "I just want to watch some videos."
"But it won't work like this. You won't get any channels."
"I don't want to watch TV, I want to watch some videos. That's all."
"But everything is digital now. This won't work without the right antenna and cables."
"I don't want to watch anything on TV. I just want to watch some videos. That's all."
"So, you don't want to watch TV? Just watch some movies? Oh."
Now, how many times do you have conversations where the IT/Development/Lords of Software "KNOW" what the users want to do? Have you ever dealt with folks who KNOW they know the business better than the people doing the business?
Yeah, I've dealt with them, too.
I remember one Very Self-Important Person who told me flat out that "We drive technology and they don't know what the technology does or why they need a new system to replace what they have." I asked, "What problems are they having that we need to change their systems and all their business processes?"
I was told that I clearly did not understand what their mission was or what my role was. I was working as a Business Analyst at the time. I knew then that I clearly did not like the idea of Lords of Software running a business because the people who are supposed to run it did not understand technology.
The moral of the story? Remember who is the servant and who is served.
Friday, November 12, 2010
On Communication and Documentation
East is East and West is West
And Ne'er the twain shall meet.
Kipling knew more about developing software than some people I can think of, or have worked with on occasion.
I had a conversation this week that reminded me about good ol' Rudyard's poetry. I was talking with some folks I know and they were rather muttering about how they can't get questions answered. The funny thing is I've had conversations like that before. They all go something like this:
Me: "Hey, I have a question about the HIJ function in the XYZ project."
Them: "The detail design has everything in it."
Me: "OK, well, I read that and the requirements doc and I'm still not sure about something."
Them: "The detail design doc has all the information you need."
Me: "Well, I read that and there's a couple things I don't understand. I wonder if we can talk about them."
Them: "We don't need to talk. Everything is in the documents you have. There's no room for questions."
Really?
Dear people,
Documents should assist communication, not replace it. Communication involves more than writing, or reading, a document. Since not everyone shares the same world-view, it seems that sometimes, when someone writes something, other people read it and may not understand completely. How can that be if "Everything is in the documents?"
Maybe it should be "Everything I think you need is in the documents." I'm not sure.
It strikes me that "communication" is something that got talked about when I was taking classes an eon or two ago. I learned that communication is a process of transferring information from one person to another. I don't recall anything about "documents."
Now, don't get me wrong. Documents are great! I've written some myself! I have read many of them written by other people. The point is that the information should be conveyed between people. Documents can record decisions. Documents can support conclusions. Documents can serve as memory aids.
Documents are not, in themselves, communication. Just like East is not West.
Friday, November 5, 2010
Perspectives and Guy Fawkes and Movies
So, working on my computer this morning, I saw a Tweet from a well regarded tester that said:
Remember, remember the 5th of November, The Gunpowder Treason and Plot,
I know of no reason Why the Gunpowder Treason Should ever be forgot.
Being of the mindset that I am, my immediate reaction was "Guy Fawkes? She's an American, what is she on about Guy Fawkes and the Bonfire Night for?"
So I asked if she had made a Guy. (Seemed perfectly reasonable, I thought.) Her response was a little, well, not at all what I expected. We were several tweets into the conversation when I realized her tweet actually had nothing at all to do with Guy Fawkes Day and was based on V for Vendetta - the movie.
From that point on, both of us understood why the other had no clue what we had been talking about.
This was pretty quick, compared to when things like this happen in the office, on projects. I sometimes wonder how it is that people can fail to realize that they are using the same terms or phrases or buzz-words and talking about completely different things.
So, Movie or Historical Event or Project?
Remember, remember the Fifth of November,
The Gunpowder Treason and Plot,
I know of no reason
Why the Gunpowder Treason
Should ever be forgot.
Guy Fawkes, Guy Fawkes, t'was his intent
To blow up the King and Parli'ment.
Three-score barrels of powder below
To prove old England's overthrow;
By God's providence he was catch'd
With a dark lantern and burning match...
Labels: communication, perception, perspective, testing
Conference Attendance 101 or Learning while Conferring
A couple weeks ago I blogged, excitedly, about my experience speaking at the TesTrek conference hosted by QAI in Toronto the week of October 18. I think this constitutes Part 2 of that post.
When I was younger and more "fill the schedule" oriented than I am now, whenever I went to a conference or user group meeting or seminar, I tried really, really hard to "get the most for the money spent" by being in a track session or workshop every single minute and moving quickly from one presentation to the next. I made a point of not getting drawn into conversations because I might miss a presentation. Even if there was not a presentation I was really interested in attending, I made a point of going anyway. I needed to get my (well, my boss') money's worth!
How foolish of me.
Several years ago, I was sent as a "technical person" to a user group meeting for a software package my employer had purchased, installed and was using. I was the senior programmer supporting the integration and customizations, and since they introduced a "technical track" that year, the software company "hosts" made a big deal about sending "technical experts" to learn about what was coming and what was going on. After a series of presentations with the same people sitting within a few seats of each other, wearing the same "you've got to be kidding" looks on their faces as I'm sure I had, a small number of us began comparing notes. We skipped the next session, grabbed some drinks from the bar in the conference center, got out our pads of paper and did our own "track."
We had a number of people with similar experiences and problems, and we decided that we knew as much as the sales people, who could not answer a single question about the product they were supposed to be giving us "technical information" on. Over the next day and a half I filled two legal pads with notes and diagrams and collected a stack of business cards from the folks sitting around the table. In my memory, we had 8 or 10 people who "snuck out" and "wasted the company's money." Except that all of us came away with solutions to problems the vendor had not been able to address - and each of those solutions had been implemented somewhere and actually worked.
A few years ago at a regional conference, I ran into a couple of people who shared the same "this presenter does not get it" look on their faces. The fact that one of them was a speaker I had listened to the day before, and been really, really impressed by, reinforced that a bit. We proceeded to have a "hallway conversation" that turned into several people sitting in comfy chairs, drinking tea and/or soft drinks, talking about the topic of the presentation we had just been in. We compared notes and war stories and anecdotes and experiences - and everyone came away with new ideas they did not have before (and did not get from the presentation we had all attended).
From that point on, at every conference I've been to, I have intentionally left holes in my schedule. Let's face it: there may not be a speaker I want to hear or a topic I "really, really want to learn something about" in every time slot.
So, instead, I may seek out other interesting people I've seen during the conference, or heard ask intelligent questions, who are milling about between sessions, and talk with them - ask questions, SOMETHING. Those conversations have been really enlightening the last couple of years, and have led to some great contacts and insights that I may not have gotten elsewhere.
Now, I know that walking up to someone can be a bit intimidating. So what - do it anyway. If they are speaking at the conference, they may well be open to a good conversation. If not, they may be equally lost about "what session to go to next..." - and maybe the right one is the one that starts in the hallway with the two of you. See what happens. More may join in, and it could last an hour, or 10 minutes.
Either way, share contact information - let them know how to get in touch with you and find out how to get in touch with them. Easiest way to do that? Give them your card! Now, don't be like the guys who were talking with me at TesTrek and had a rather sheepish look and said "Our company doesn't give us business cards so I don't have any..."
TOO BAD! Business cards are cheap! A simple black-ink-on-white card ("classic look...") can be made pretty inexpensively at most big-box office supply stores, or any small printing shop can help you. All you need is your name, something to identify what you do (in this case, "software tester" might be appropriate), an email address and a phone number. Your mailing address might be nice, but it's not needed.
So, since folks like lists, here's my list of things for conference attendees to do, bring, or leave behind:
- Business cards - lots and lots of business cards. Even if the company doesn't give you some, get some made;
- Laptop or netbook computer or smartphone - great for taking notes (or checking email if you "chose poorly") and tweeting about the good (or bad) points the speaker is making;
- An open mind - You never know what you might learn and how that might relate to your interests, both personal and professional;
- Did I mention business cards?
- Notebook / scratch pad. Yeah, I know, many conferences give out folders or portfolios, and a lot of conference centers have little note pads for jotting things down. The problem is, I find those note pads too small. The portfolios may be useful for other things - like holding all the papers/CDs/DVDs you may collect;
- An open schedule - Have lists of "must see", "good to see" and "want to see" sessions, but don't feel you need to fill every single timeslot;
- Your favorite mints or hard candy;
- Business cards (did I say that already?);
- Work. Leave the office behind. Your boss sent you there to learn, so learn. The people who are trying to suck you in now are the same ones who want all of your attention all the time anyway, including on weekends if they think they can get it. They'll still be waiting to ruin your day when you get back. You're there to learn. Learn to talk with people you don't know and learn about them. It may help you in your job.
- Work email. Yeah, I know, I mentioned bringing the laptop to check email or whatever. That work email that needs attention yesterday if not sooner can't really be dealt with while you're in a conference session, so leave it for a while. Come back to it later - another hour or two won't make that big a difference. (I know. I've broken this rule, but just to be the exception that proves the rule...)
- Extreme self-assurance in the "rightness" of your position. Put the ego in "Neutral" and you may learn something useful.
Sunday, October 24, 2010
TPI Presentation Summary
This post resulted from typing up the notes taken on flip-charts, which I promised to type and send to the participants in the workshop I did at TesTrek in Toronto. My thanks go to all the people who were there and participated in the discussion, particularly Lynn McKee, Paul Carvalho, Michael Bolton, and Michael... the other Michael, who did not have business cards and whose last name I don't recall. That this session took the path it did, and that the discussion had the quality it did, was due very largely, if not entirely, to you. I know I learned a great deal, and I was the one with the microphone.
My notes:
Test Process Improvement:
Lessons Learned from the Trenches
Flip-chart notes from TesTrek Conference, October 2010
Points made in discussion during presentation portion:
- (In looking at testing…) How do I add value? (Lynn McKee)
- Something’s wrong, Customers report “issues”
- What’s an issue?
- Issues may not be problems to everyone (Michael Bolton)
- Expectations don’t match somehow
- Problem in Requirements or Scope creep?
- Allow your team to make mistakes (Paul Carvalho)
- Nothing teaches more than failure…
- Understand why you are doing something…
SWOT is a tool to look at your team’s Strengths and Weaknesses while being aware of external Opportunities and Threats – things you may be able to take advantage of and things that may block your progress.
These items are from the ideas that were volunteered by participants.
Strengths
Technically Competent
Dedicated
Finds “Good Bugs” fast
Detail Oriented
Shows Craftsmanship / Professional Pride in work
Team Gels
Good Communication (and skills)
Understands Roles
Big Test Case Toolbox
Adaptable
Has the trust of peers and colleagues
Weaknesses
Hard to say “No”
Resistant to change
Low technical knowledge
Poor estimation skills
Staff not as good as they think they are
Lack of creativity
Summary
The conversation around these points was important and I allowed it to flow freely, believing it bore greater value than walking through a planned exercise. It was interesting to note that the strengths were drawn out very quickly, while the weaknesses took nearly twice as long and ended with far fewer items.
This is almost exactly in line with my experiences with using this technique.
It is easy to state what a person or team is good at – what their strengths are. Getting down to specifics from the more general terms can be a bit more challenging, but usually bears fruit. Saying out loud (admitting to yourself and your team) what the weaknesses and shortcomings are is far harder. We all have frames around ourselves that limit our vision. We all want to be heroes in our own minds – no one wants to be the villain. Most people want to believe they are good, if not very good, at what they do.
Getting your team together and discussing the weaknesses the team has means that at some point people must trust each other to help improve individual shortcomings. If your list of strengths includes something about “teamwork” and people are unable or unwilling to be honest with each other (yes, you can be polite and honest at the same time), then the “teamwork” strength needs to be removed from the list.
The greatest single challenge is to face yourself as you are. This is compounded when attempting to do this with, and in front of, co-workers and team members. The leader/manager may need to get help in doing this very hard task, and to break down the barriers that exist to allow frank discussion to occur. Tempers may flare and nerves will certainly be on edge. The “trick” is to allow cooling-off periods. Perhaps meeting for a couple hours each day for a couple of weeks instead of reserving three or four days in a row to do nothing but this would make it easier. This will allow people to talk privately and do their own reality-checks on what happens, or should happen.
Sometimes, the most potent force in this process is to have people thinking about these topics in the back of their minds while working on their “real” work. While focusing on a challenge, don’t be surprised if something pops into your mind related to the SWOT discussions and how that revelation can bear on the next discussion session.
AND SO, in simple language:
• To improve your Test Process, you must improve your team’s testing.
• To improve your testing, you must have a solid understanding of what your team is capable of RIGHT NOW.
• To understand your team’s capability, you must understand your team’s Strengths and Weaknesses.
• If you understand the Strengths and Weaknesses, you can consider what it is that Management or Customers are expecting.
• Recognizing what Management and Customers are expecting becomes your first Opportunity.
• Recognizing Opportunity may reveal things that will block those opportunities: Threats.
• Engaging in this process with your entire team will demonstrate how serious you are about improving the team and making the individuals better at what they do.
• When you make the testing itself better, the Testing Process will be improved.
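A side note for anyone who likes to tinker: the quadrants above lend themselves to being kept in a structured form you can revisit between sessions. Here is a minimal sketch in Python - purely illustrative, not something from the workshop - seeded with a few of the items listed above:

# A minimal, purely illustrative way to capture a SWOT session in a
# structured form. Strengths and Weaknesses are internal; Opportunities
# and Threats are external and, as in the session above, tend to get
# filled in during later discussions.
swot = {
    "Strengths": ["Technically competent", "Dedicated", "Finds 'good bugs' fast"],
    "Weaknesses": ["Hard to say 'No'", "Resistant to change"],
    "Opportunities": [],  # what Management and Customers expect - still to come
    "Threats": [],        # what may block those opportunities - still to come
}

for quadrant, items in swot.items():
    print(f"{quadrant}: {len(items)} item(s)")
    for item in items:
        print(f"  - {item}")

Nothing fancy, but even counting the items per quadrant makes the pattern from the Summary visible: the weaknesses list almost always comes out shortest.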
Saturday, October 23, 2010
Conferences and Vacations and Learning
So, my lady-wife and I don't do "anniversaries" - we do "annual honeymoons." We (try to) take a week and go be sweet-hearts. With the schedule at the day-job, getting a week off this summer was "not highly likely," which meant that a day or two here and there was the best one could hope for. However, I was able to schedule a full week off for our "honeymoon." We had planned to return to Beaver Island, in Lake Michigan, where we had gone after our wedding. Then, interesting things happened.
A colleague (well, highly regarded tester and speaker on testing and test consultant) asked if I'd be interested if she put my name forward as a possible candidate to fill her spot on the schedule at TesTrek, run by QAI. I was a little surprised, well, a lot surprised, and said I'd be happy, and honored, to be considered. The only drawback was the timing - the week slated to go to Beaver Island. That could be a problem. The week we try and reserve just for us would turn into a little bit of "us time" and a lot of "conference/work" time. Positive side, it was in Toronto.
With a little concern, I approached my Lady Wife and asked what she thought. Her response was "I LOVE Toronto!" So, away we went. As things happened, I found myself in a position to prepare an abstract and submit it to the folks at QAI. It was approved, which meant getting a presentation together that consisted of more than vague ideas.
The topic was one that I suspected might be a big draw - Test Process Improvement. That is one of the "Almost 124.7% certain to be on a Conference Agenda" topics, along with Estimation, Automation and Requirements. Now, this was not the intimidating part. The intimidating part was that there was a stack of people who were going to be there who would very probably disagree with me. I don't have a problem with that. In fact, I've gotten quite good at having people disagree with me. I can even be gracious when people can explain why clearly, and with a reasoned argument. I've been known to get along quite well with people with whom I have a professional disagreement. Mind you, some folks have a harder time with that.
The thing was, I've done lunch-and-learns and training sessions and presentations for teams I've been on and led and worked with. I've been doing drumming workshops for many years, in addition to the group and private lessons I've given. But these weren't novices or non-testers I'd be speaking to - they were testers and test managers and consultants and maybe famous testing people. Gulp. Some of them were bound to know the topic better than I did. Then I remembered the abstract and the presentation I had worked so hard on. This was sharing what I had learned - not what some expert said was the "right" way to do things. And that was my starting point.
I do not have answers, nor do I have a magic wand to reveal the secrets that need to be revealed. But I can talk about what I learned myself. And if some of the people whose writings I had read, and whose ideas I had tried, were sitting in the room - fine! My hat's off to them and I'll credit them as the source for the ideas.
Now, I had done "practice runs" with the slides and watching myself in mirrors and such - and done a dry run with volunteer co-workers. I had three possible paths planned for the exercise portion, depending on the number of people, the room layout and, frankly, how the lead up to it went. Five minutes before I was to start, I had the projector ready, a bottle of water handy, the way-cool remote clickey thing to advance the slides was hooked up - and the wireless mic was clipped to my belt. No worries.
The "Track Host" walked up to introduce me and... the next 30 seconds were a warning. The "click on" for the wireless mic didn't. The cool remote thingie... didn't. I muttered something about testing software and not hardware and dove in. The next 90 minutes flew by. I asked questions, people answered, people asked questions, I responded - then attendees responded - then all of a sudden things were cruising.
Moral of the story - If you have never tried to present on a topic, ANY topic - try it. It does not need to be at a conference where "major names" are speaking. It could be a local testing group meeting, a company lunch and learn, something. Maybe a "lightning talk" at local meeting or regional conference? It does not need to be a 60 or 90 minute presentation. But make it something, somewhere.
The fact is, you know something that may help someone else. Someone else may likely have the same kind of questions you did. If you ever wondered what you could do to improve yourself - this may be it. Do something that may help someone else and learn about yourself. It may also help you meet some really way cool people.
Oh, we had a great honeymoon, too. Toronto's a great city to visit.
Tuesday, October 12, 2010
Improving Test Processes, Part IV, or The TPI Secret of Secrets
So far, I have rambled about Improving Processes. In Part I, I wrote about how we may recognize there's a problem, but may not be sure what the problem is. In Part II, I wrote about the problem of introspection and how hard it can be to see outside ourselves and look at how we really are. In Part III, I wrote about Don Quixote and the unattainable goal of Process when the Charter and Mission are in disarray.
The simple fact is, each of these things plays a part in what makes up Test Process Improvement.
Now for the secret. TPI is not the point. TPI is not the goal.
In the end, TPI doesn't really matter except as a means to the REAL goal.
The REAL goal is this: Better Value from your Testing Effort.
The thing is, most humans don't think in a clear fashion. I know I don't think in a way that can be described as linear in any way, shape or form. That is particularly true when I'm working on a problem. If I DID, I would long ago have stopped looking into things I was testing because they did not feel right, even when there was nothing on the surface to indicate a problem. When I have one of those feelings, I sometimes will go over what I have for notes, look at the logs from the applications (not the nicely formatted versions, but the raw logs) or poke around in the database. Sometimes it's nothing. Sometimes, I sit back and think "Well, look at that. Where did that come from?" (Actually, I sometimes say that out loud.)
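For what it's worth, "poking around" in the raw logs usually starts with something very simple. Here is a minimal sketch in Python - the file name and the "timestamp level message" layout are hypothetical, purely for illustration:

# Scan a raw application log for the entries the nicely formatted views
# tend to filter out. The file name and log layout are hypothetical.
from collections import Counter

suspicious = Counter()
with open("application.raw.log") as log:
    for line in log:
        if " WARN " in line or " ERROR " in line:
            # Bucket by the start of the message to spot repeating patterns.
            message = line.split(" ", 3)[-1].strip()
            suspicious[message[:60]] += 1

for message, count in suspicious.most_common(5):
    print(f"{count:4d}  {message}")

The tool is not the point; the point is turning "this doesn't feel right" into something you can actually look at.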
That is the pay-off for me as a tester. I found something with a strong likelihood of causing grief for the users/customers, which will in turn cause grief for my company.
I don't know how to describe that in a linear fashion. I wish I did; I'd probably be able to make a pile of money from it and live comfortably for the rest of my life on the earnings. The fact is, it's too organic - organic in the sense that Jerry Weinberg used the term the first time I encountered it in this context (Becoming a Technical Leader), not in the chemistry, carbon-based sense.
The Test Script (and its companion, the formal Test Process Document) is not the Test. The Test is the part that is done by the M1-A1 Human Brain. Using that most powerful tool is the key to gaining value from testing - or improving the value you are currently getting.
You can have the best Process in the World of Software. You can have the best Charter and Mission statements. You can have the best tools money can buy.
Without encouraging your people to think when they are working, and rewarding them when they do it creatively and do it well, none of those other things matter.
Labels: Best Practices, Process Improvement, Secrets, testing
Monday, October 4, 2010
Improving Processes, Part III, or, Why Don Quixote's Quest May Have Ended Better Than Yours Will
A few weeks ago, while looking for some other information, I stumbled across the PowerPoint slides from a conference session on Test Process Improvement that I had decided was "not a good fit" for me. Yeah, I walked out... about 10 minutes into it.
The premise was "If you don't have a Process, you need one. If you don't have a Process, you have Chaos and Chaos is bad." Following the obligatory introduction, and some seven minutes of what appeared to be gratuitous assertions, I said, "Enough" and walked out.
Having a Process is not a silver bullet. Simply having a Process will not magically fix your Chaotic environment. If you are trying to impose Process on the organization wearing your "Tester" white hat or the plate mail of the Quality Paladin, good luck. Most places where I've seen Chaos rule, it's because someone with a lot of scrambled eggs on their hat likes it that way. (I wonder how many metaphors I can pull into one paragraph? Better quit there.)
However, if you have a Process and no one follows it, the question should be why not? My previous blog posts (Part II and Part I of this thread) talked about how the "problem" might not be the real problem and how you need to seriously look at what you are doing before you can fix what might need fixing.
When you look long and hard and honestly at what you and your group are doing, and you find the places where what is done varies from what The Process says, you must determine why the difference exists.
I suspect that it will boil down to a matter of relevance. The official Process has no relevance to the reality of what actually is needed in those situations. If it is a one-off, then there may be something that can be tweaked. If it is a regular occurrence, then the value of The Process comes into question. If it doesn't work, why pretend it does? Why bother having it at all?
Granted, The Process may have been relevant at one time and things may have changed since it was introduced. However, nothing is permanent. Change is inevitable. Even The Process may need to be updated from time to time.
When you do, look to the Purpose your team is to fulfill. Why do you exist? What is your Charter? What is your Mission? Do you have a Mission? I'll bet you do, even if you don't know what it is.
To start, look to what Management expects. If a boss-type is telling you that the Test Process needs improvement, try talking with them. Discuss with them what they believe needs to be improved or where the gaps are. This may become the basis of the group's Charter.
The Quest that you are expected to follow.
What are they seeing as "broken" that needs to be fixed?
If the gist is "there are too many defects being found by customers," ask if there are specific examples. Anecdotal evidence can paint a compelling story, yet without specifics you may never be able to find hard facts. Is this a hunch, or are there concrete cases? Are these actually defects, as in problems that should have been found in testing?
Maybe these are aspects of the application that the customers expected to behave differently than they do? If so, why is that? How can that be? How can their expectations be so different from what you believed they would be? After all! The Design and Requirements that you based the tests on matched perfectly!
Let us ask Dulcinea how these things can be so different from what they appear to be.
Labels: Design, Don Quixote, Process Improvement, Requirements
Saturday, October 2, 2010
Improving Processes, Part II
O would some power the giftie gie us
to see ourselves as others see us.
- Robert Burns, To a Louse
Right. So if you need that translated, Rabbie was saying this, in English:
O would some power the gift to give us
to see ourselves as others see us.
There are a couple of things that are just hard for most people to do. One is to change our perspective and see what others see in us. The really hard part is similar: to look at ourselves honestly and see what we are really like.
Any time self-examination comes into play, most folks will avoid it as much as possible. We can tell wee little fibs and stories and justify actions through many interesting machinations. Yet when we strip those aside, what are we left with?
That is the really hard part in any form of process improvement. When we look at ourselves, as individuals or as a group, the challenge is to set aside what we wish we were doing or like to think we do, and focus on our true actions.
If our real processes and the official process don't match, we need to ask ourselves, "Why not?"
Indeed, to see ourselves as we really are, or as others see us, can be something we don't want to face. Yet without doing it, we can never improve.
Friday, October 1, 2010
Improving Processes and Other Stuff, Part 1
I've been teaching a lot of pipe band drumming workshops lately. Well, "a lot" compared to the last two years anyway. They can be hard, but generally are great fun. By the time I realize how tired I am, I'm half way home - close enough where a cup of Tim Horton's coffee will get me home (yes, there are some in Michigan.)
So, this last session was a mix of absolute beginners and those that were a step or two beyond that. They all play in the same band, or aspire to anyway, and so have a common bond between them. Part of the intention of the workshop organizers is to not only teach the beginners, but teach the more advanced players how to teach.
That actually is easier than it sounds, at least with drumming. I get to present an idea, work on some exercises, see who is getting it and who isn't. If it is a physical thing, there are other exercises to try. If it is a mental thing or thought process thing, then I present the same basic idea another way - changing the context sometimes makes it easier.
This last session we were working on triplets. Cool things, those triplets. They are also bread-and-butter stuff for pipe band drumming - the kind of thing where, if you don't get them, your future as a pipe band drummer is quite limited. One guy was having a harder time with them than the other students. Mind you, these students ranged in age from 8 years to the mid-30s or so. This particular fellow was doing alright, but was having an issue getting his head around the idea of "three notes where normally there are two."
I handed out a bunch of candy/sweets to the other participants and asked this fellow to play the bit he was having a problem with. Perfect. So I asked him to do it again. Perfect. Third time was still perfect. Hmmm... it does not look like it's the actual "hard bit" that's the issue. So I had him play the exercise from the beginning. Trainwreck! Had him slow things down and do it again - same thing. The second time through, I noticed something in what he was doing. As he got closer to the "hard part," his grip tensed up (he gripped his sticks harder) and the muscles in his forearms tensed visibly - both bad things for drummers. Try as he might, the sticks simply were not going to do what he wanted them to do. When he jumped right into it, things worked fine.
If the first step to solving a problem is recognizing you have a problem, how do you know what the problem is? In this poor fellow's case, he knew he had a problem and simply could not see what it was. It wasn't the problem he thought he was having - it was something else entirely. When he stayed relaxed throughout the line of the exercise, he played it flawlessly. Problem solved. But what was the problem?
When it comes to process improvement for nearly anything, I try to apply the same approach: there may be a problem; let's see what the symptoms are and see if we can isolate the problem - instead of whacking the symptoms or results of the problem.
When looking at Test Process Improvement in particular, the problem that gets described is usually a symptom or a list of symptoms - not the actual problem. We can stomp on symptoms one at a time without really addressing the crux of the problem. That will continue to churn and bubble and fester until something else breaks.
The "problems" presented usually are presented in a handfull of ways:
- Testing is taking too long;
- Testing costs too much;
- Too many defects are being found by customers.
When I was younger and not as diplomatic as I am today, my reaction to each of those points generally ran something like "Compared to what?" Now, in my more mellow state and place of being, I only think that. Sometimes loudly, but what comes out of the mouth runs something like, "Can you explain what you mean by 'too long' so I can understand better what you expect? An awful lot of the time our original estimates on test effort or duration are based on the understanding of what the project will entail at the time the estimates are made. As the project develops, we learn more and that will impact both the revised effort estimates and the actual duration. What we are doing is based on the instructions given to us and the mandates for what critical processes must be validated. Perhaps this needs to be reconsidered."
So, I guess that's a long-winded version of "Compared to what?" but it tends to go over a bit better.
What I do wonder, however, when a boss-type says something like those statements, is "Are those symptoms or problems?" Are we running over timelines and cost estimates because of things other than lousy estimates? Are "due dates" being missed because of testing? Is there a flurry of defects keeping customers from using the new release?
Is there something else going on, and these are perceptions of problems, rather than symptoms of problems?
Labels: Drumming, Investigation, Process Improvement, testing
Wednesday, September 29, 2010
Requirements, Traceability and El Dorado
The day-job has been crazy busy the last several weeks. I have several half-written entries that I want to finish and post, but between the project hours and the stuff that needs to be done at home, there simply has not been much time. However, I've been lurking in a couple of places, reading posts and email conversations, getting my fix of "smart people's thoughts" that way.
The interesting thing is that a couple of themes have crept back up, and I finally have the chance to look at the topic(s) myself and consider some aspects I may not have before.
The initial question revolved around defining requirements and establishing traceability of test plans, test cases and the like back to requirements. By extension, when the test cases are executed, any defects found should likewise be traceable back to said requirements.
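For concreteness, here is a minimal sketch in Python of what such a requirements-to-tests mapping might look like - the requirement and test-case IDs are made up, purely for illustration:

# A minimal, purely illustrative traceability mapping. The requirement
# and test-case IDs are hypothetical.
coverage = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # a known requirement with no tests written yet
}

untraced = [req for req, tests in coverage.items() if not tests]
print(f"Requirements with no tests yet: {untraced}")

Note the catch that the rest of this post circles around: a mapping like this can only ever cover the requirements that appear as keys. Requirements no one has discovered yet never show up in any coverage report built from it.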
Now, folks who have read my blog in the past will realize that I've been writing about requirements off and on for some time. Well, actually, it's more "on" than "off." I write about requirements and testing a lot. Possibly this is because of the struggles of the company I work for in defining requirements, and the subsequent struggles to adequately test the software products created from those requirements. Now, to be clear, it is not simply this company that has an issue. Most places I've worked have been seriously "requirements challenged."
One thing that sends up every warning flag I have is the idea that we can fully define the requirements before doing anything else. I know, Robin Goldsmith has an interesting book on defining "REAL requirements" and he has some good ideas. In light of the shops where I have worked over the last, oh, 25 and more years, some of these ideas simply don't apply. They are not bad ideas; in fact, I think testers should read the book and get a better understanding of them. (Look here to find it - yeah, I know it's pretty pricey. Expense it.)
Having said that, how many times have we heard people (developers, testers, analysts of some flavor, project managers, et al.) complain that the "customers" either "don't know what they want" or "changed their requirements"? I've written before about the understanding of requirements changing, and how considering one aspect of a project may inform understanding of another. When this happens in design, development, or - worse - testing, the automatic chorus is that the "users" don't know what they want and work will need to be changed. All of us have encountered this, right? This is nothing new, presumably.
My point with this revisit is that if you are looking for the cause of this recurring phenomenon, look in a mirror. All of us have our own biases that affect everything we do - whether we intend them to or not.
So, if your shop is like some I've worked in, you get a really nice Requirements Document that formally spells out the requirements for the new system or enhancement to the existing system. The "designers" take this and work on their design. Test planners start working on planning how they will test the software and (maybe) what things they will look for when reviewing the design.
Someone, maybe a tester, maybe a developer, will notice something; maybe an inconsistency, maybe they'll just have a hunch that the pieces don't quite go together as neatly as they should. So a question will be asked. Several things may happen. In some cases, the developer will be given a vague instruction to "handle it." In some cases, there will be much back and forth over what the system "should" do, then the developer will be told to "handle it."
At one shop I worked at, the normal result was a boss-type demanding to know why QA (me) had not found the problem earlier.
My point is, defining requirements itself is an ongoing process around which all the other functions in software development operate.
Michael Bolton recently blogged on test framing. It is an interesting read. It also ties in nicely with a question raised in Rebecca Staton-Reinstein's book Conventional Wisdom, around how frames and perspectives can be both limiting and liberating.
This brings me back to my unanswered question on Requirements: How do you show traceability and coverage in advance when it is 99.99% certain that you do not know all the requirements? Can it really be done or is it a fabled goal that can't be reached - like the city of gold?
Wiser people than me may know the answer.
Monday, August 30, 2010
Learning and Teaching and Leading
One thing I learned early on when teaching drumming students, particularly beginners, is that the person who learns the most is often the teacher.
It never seems to matter whether the lesson is individual or group, focused on one style or on general drumming - the process of teaching beginners forces the instructor to reconsider the things he simply does without thinking, to find interesting foibles or potential weaknesses, and then to correct or change them as needed for working with the student.
The interesting thing is that this reflection sometimes leads to profound understanding of what the student is learning and what the instructor is conveying. When preparing for the odd lunch-and-learn or training session at the office I never really had that kind of an experience - or when presenting such sessions.
On Improvement...
This last couple of weeks something interesting happened. I've been preparing a presentation on Test Process Improvement for TesTrek in October. I wasn't scheduled to present or lead a workshop, but a couple of presenters had to cancel and - voila! - I'm on the presenters list. Then a couple of other things came to my attention.
There have been several conversations on email lists I'm a participant in, as well as forums, on the dreaded M word. Yes - Metrics.
On top of this, I had a remarkably revealing email conversation with Markus Gartner - amazingly bright guy. This came about because the questions I submitted for the "Ask the Tester" session arrived after the magic number of 10 had been reached. However, they were forwarded to Markus, and that presented me the opportunity to learn and be reminded of things I once knew and had forgotten (or channelled off into a safe place in my memory).
My question to Markus centered on his take on "Test Process Improvement" in an Agile environment. The bulk of his response was reasonably close to what I expected - in fact, reassuringly close to what I had prepared for the presentation, so my confidence in what I was saying increased dramatically. (Yes, a little reassurance is sometimes a good thing, particularly when one is a very little fish hanging out with very big fish.)
He had one idea that I did not have. And it left me gob-smacked. Tacked onto an already interesting sentence about the organization's management, Markus said "... or they don't trust testing anymore."
On Trust...
I was immediately thrown back many years, to when Developers were called Programmers and I was working as a COBOL Programmer on a large IBM mainframe. I had a Manager who did not trust his staff. Not because they were inexperienced, but because he simply did not trust them. To this day, I do not know why that was the case. I can surmise why, but it has little to do with the point. Suffice to say, it was an unhappy work environment.
Markus made an interesting observation. His point was that in Agile, the very purpose is to engender trust amongst all participants. Additionally, when management is invited to observe the meetings, they can gain an understanding of what their staff is doing, and as their understanding increases, so too should their level of trust.
When a group or a team has lost the trust of its management, the task of regaining that trust is nigh-on insurmountable. Likewise, if a manager or lead has lost the trust of the group they are to lead or manage, the results will almost certainly be dire.
On Process...
Thus, when the call comes down for "better metrics" or "process improvement" or any number of other things, what is the underlying message? What is it that someone is hoping to gain? Do they know? CAN they know? Are they guessing?
Much is debated around QUANTifiable and QUALifiable considerations, measurement and understanding. I am not nearly bright enough to join that fray fully-fledged.
What I have seen, however, is that when Managers, Directors, VPs, EVPs, and big-bosses of all varieties are looking for something, nearly anything will suffice. A depressing number of times, I have seen management groups flail around over what is wanted - then issue an edict announcing the new policy or practice or whatever it is. These tend to roll out like clockwork, every three to six months.
Each company where I have worked that followed that practice engendered a huge amount of cynicism, resentment and distrust. The sad thing is that these rather stodgy companies - including some that were quite small and prided themselves on having no Dilbert-esque Pointy-Haired-Boss behaviors - were wasting an amazing opportunity.
The first step to fixing a "problem" is figuring out what the problem is. If there is no understanding of why policies or procedures are changing, and no feedback loop on the purposes behind the changes, will the average rank-and-file worker stand up and ask, "What do you hope to change/improve/learn from this?" At some companies - maybe. But I have seen relatively few places where the policy-du-jour and staff willing to stick their necks out and ask questions both exist in the same organization.
On Leadership...
What I have learned, instead, is to look at all sources of information. Explain what the problem or perceived problem is. Ask for input - then consider it fairly. To do so is not a sign of weakness - it is a sign of strength: that the leadership of the organization has enough trust in its workers to approach them with a problem and work together toward a solution.
This, in my mind, is the essence of building a team.
Throwing a bunch of people together without a unifying factor and expecting great things is silly in the extreme. In the military, "Basic Training" serves this purpose - laying the groundwork to trust your comrades and follow the direction of officers and non-commissioned officers. In the end, though, the object is teamwork: learning to work together, using each person's strengths to offset others' weaknesses.
Why is it that so many managers miss this rather elementary point? For a team to "work" they must learn to work together. If the Lead or Manager has not built the group into one capable of working together, like a team, what, other than professional pride, will get any results at all?
Although I cannot prove this in a scientific manner, as it were, I suspect that it is the essence of the problem mentioned above. The question I do not know the answer to, though I have my suspicions, is the question of leadership in this instance.
Is it that they, the leaders, have no idea how to build a team? Is it possible that the step of instructing the fledgling team and shaping it into the needed form was too challenging? Could it be that in the process of doing so, their own closely held beliefs, habits and foibles were more dear than the building of a successful team?
If this basic lack is present, does it contribute to the selection of what is easy over what is right?
These are the ideas that have been floating through my mind while preparing the presentation and workshop lessons for the session at TesTrek. If the master knows that he is but a beginner in the craft, what of those who consider themselves experts in all aspects of our trade?
Can this be at the root of the behaviours I've seen first hand and read about? Are they feeling so insecure in their own abilities that they mistrust their own staff, the team they are charged with leading? Is it to make up for this lack, they flounder and grasp for tips or magic revelations that will show them the "path?" Is that why there is a continuing and perpetual drive for Metrics and Process Improvement?
Sunday, August 29, 2010
Music, or Testerman's Ramble
At one point in my life I played in a band that performed a variety of Irish traditional and folk music. We also played a fair amount of Scottish traditional and folk as well, however, it seems if you play or sing a single Irish song, you are labelled "Irish" and you'll be crazy-busy in March, and pretty slow the rest of the year. Unless you work really hard and play reasonably well.
So a side-effect of playing in a band that performs this stuff is that, when you get good enough for people to pay you money to come to their towns, cities, festivals, whatever, you will run into other folks who play the same type of music. When schedules permit, this often devolves into a session / seisiún / wild-music-playing party. There are certain protocols that most folks follow at these events - and the fantastic thing is that the level of play is usually quite good. Tempos are snappy, so reels drive forward, hornpipes can be lilty (and tend to run around Warp 9), and jigs are of a nature where feet rarely touch the ground.
Now, these uber-sessions are not so different than more traditional ones held in houses or coffee-shops or bars or clubs. The big difference is the recognition that there are no light-weight players and everyone has mastered their craft. This is not always the case at other sessions.
I have been out of performing trad/folk music for several years now, and in the last year began attending some of the local sessions, just to get my feet wet. I was a bit rusty on bodhran, the Irish hand frame drum, which I had played for 20 years on stage and in sessions. My limited ability on penny whistle was nigh-on vanished - I remembered tunes and could call phrases from my memory to my fingertips, but I'm effectively starting over. With a crazy work and home schedule it has been hard to find time to practice, let alone become "street legal" on whistle.
So, I show up at the Sunday night sessions and play a couple of tunes on whistle when they come up. I will also play the bodhran a bit, depending on the number of people there (it does not take many drums to become "too many" for the melody instruments - whistles, mandolins, fiddles, flutes, dulcimers and the like.)
This last Sunday there were a fair number of players: 8 or 9 "melody" players, a couple of guitars, a tenor-banjo player - who played melody when he knew the tune and vamped when he did not - and me on drum (with the occasional contribution of bones). Some of the players are quite experienced, and I have seen them around for many years. Some are between beginner and novice. Some are "in between" levels of experience.
One tune in particular would have made me stop the band - if it had been a "band" that was playing - and have them start again. That typically isn't done in sessions, so I did the "drummer equivalent" and simply stopped playing. One of the mandolin players, who knew me and has also been around the block, gave a little smile and stopped as well. We were treated to the rare sight of 6 people who were absolutely certain what the "correct" tempo was for the tune being played - and none of them would give an inch, or a click on the metronome. The guitar players seemed to play along with whichever melody instrument was closest to them, and generally the best description was "trainwreck."
That reminded me of a project I had worked on some time ago. I was not on the project originally, but was brought in as part of a desperation move to fix it. As with the tune on Sunday, each of the participants knew what the right thing to do was. The problem was that none of them agreed on what that thing was. "Blood on the Green" was an apt summation of that effort. The programmers were berated for not following instructions - but how do you follow instructions when there are multiple, conflicting sets of instructions?
Because of the "political nature" of the project, no managers or directors were willing to step up and take on some form of leadership role for fear that there would be repercussions for doing so. The PM, BA and Dev Lead floundered without some form of direction from their respective management teams. Information was contradictory at best.
In the end, a Director put his foot down, asserted control and forced the issue. Me being added to the project was part of forcing the issue. Until that point, the uncertainty of the leadership was sapping the ability of the project group to operate as an effective team. Like the music session last week, no one had a clear picture as to what was "right" and where the center of gravity was.
People can urge "Best Practices," "Standards," "Process" and "Metrics" all they want. In some contexts, those may be the right thing. However, without a clear understanding of the intent of the effort, nothing will save the project. Ulysses S. Grant, that prescient Software Oracle (well, American General turned President), warned that indecision was worse than a wrong decision. Wrong decisions can be countered by "right" decisions, but no decision from leadership leaves your group floundering, looking for a center.
Tuesday, August 10, 2010
Of Walkways and Fountains
A Story
Once upon a time there was a business person who knew exactly what she wanted. So, she explained to an analyst precisely what it was that she wanted and all of the points that she wanted addressed and precisely how she wanted it addressed. The analyst said he understood exactly what she wanted.
So, the analyst went and assembled all the requirements and looked at everything that was spelled out. He gathered everything together.
He found that he had some very usable information and some that was less than usable. So, he dug and dug and found the perfect item that would fit the needs the user described and make it pleasing to her.
Then he assembled all the components and tested the product and found that it matched exactly what the user had asked for - and everything worked perfectly. The finished product was, indeed, a thing of beauty.
So, he called the user over to see the wonderful product he had made. She looked at it and said, "What is this?"
"Its what you asked for! It has everything you wanted!"
"No, this is..."
Have you ever heard of a project that precisely matched the requirements for what was to be included in the "finished product," only for everyone to find there was a complete misunderstanding about what the real purpose was?
Monday, August 9, 2010
Cast Curtain Call - Part 2 - Conversations
I was very fortunate this last week to have had extended conversations with several people, some "movers and shakers" and some "well respected testers" and some "regular folks." Rather than sort out which is which, I'm going to focus on some of the great conversations I had, starting Sunday evening, through Thursday.
The hard part is picking out the bestest ones. So, I'm going to summarize some and the mental meanderings that resulted.
Monday, chatting with Griffin Jones, I was asked about the mission and charter for the group I work with. We had been talking about techniques and some of the on-line forum conversations around exploratory/ad-hoc/fully-scripted testing in light of Michael Bolton's blog entry on Testers: Get Out of the QA Business. He asked about this after what I thought was a tangent around the question of "what works best for what kind of environment?"
His simple question got me to wondering, other than the slogan on the company's internal wiki about the QA/Testing group, what is it that we are about? For some time, we have been working toward getting more people involved in the idea of tangible requirements, of QA helping define requirements and acting as a bridge in the design process. But that begged the question - What is our mission?
I wonder how many testing groups (or whatever each group calls itself) have a "slogan" but no "mission" or "purpose" statement that can be pointed to, where everyone knows about it. If you don't know about it, is it reasonable to expect people to work towards it? It's a goal, right? How do you achieve a goal if you don't know what it is? (I feel another blog post coming on, but not right now!)
I had several brilliant little chats with Scott Barber. It helps when you're sitting next to each other at the Registration table. We talked about a bunch of stuff. For those who have read his articles or his postings in various online forums, he really is as smart as he seems - Holy Cow!
We got onto the "mission" of testing groups and "doing things right" vs "doing things well enough." What most theory-centric folks sometimes forget is that there is a cost to "doing things 'right.'" If the product will ship 2 weeks late because you want to run a 4-week system load test costing approximately $1M, what will the company gain? What are the risks? If you're extremely likely to see significant problems within the first 8 to 12 hours and the likelihood decreases over time, what is that extra two or even three weeks going to get you - other than a delayed delivery and a dissatisfied customer? That, in itself, is one reason why testers should inform and advise but not make the final go/no-go decision.
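To put rough numbers on it - and these are strictly invented, back-of-the-envelope figures, not from any real project - a toy expected-cost comparison might look like this:

# Back-of-the-envelope sketch: 12-hour load test vs. full 4-week run.
# EVERY number here is an assumption made up for illustration.
delay_cost_per_week = 250_000   # assumed cost of slipping the ship date
escape_cost = 2_000_000         # assumed cost of a problem found in production

def expected_cost(test_cost, weeks_late, p_problem_found):
    # testing cost + delay cost + expected cost of what escapes anyway
    return (test_cost + weeks_late * delay_cost_per_week
            + (1 - p_problem_found) * escape_cost)

# Short run: cheap, no slip, and (assumed) catches most load problems early.
print("12-hour run:", expected_cost(50_000, 0, 0.90))    # -> 250000.0
# Full run: ~$1M, a 2-week slip, marginally better odds of finding problems.
print("4-week run: ", expected_cost(1_000_000, 2, 0.98))  # -> 1540000.0

With those made-up numbers the long run costs six times as much for a sliver of extra confidence. The figures are fake; the shape of the trade-off is the point - and weighing it is a business decision, not a tester's.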
Yeah - there's another blog post on that in detail.
Other people I met included Jeff Fry - where DOES he get all that energy? Then Selena Delesie was holding court on lightning talks in the lobby. WHOA! Crazy-smart and as nice as the day is long. Selena gave two really good presentations - unfortunately, while I read the abstracts and supporting papers, there were not enough of me to get to all the presentations that I wanted to get to. I think that's a sign of a fantastic conference - too many good simultaneous presentations.
Other folks I met included Michael Hunter, the Braidy Tester - What a guy, although he's now braidless. Paul Kam from DornerWorks is another really smart guy. DornerWorks was one of the sponsors of the conference. They did a lot to make this happen.
Tuesday night the "Rebel Alliance / CASTAway Social" was a hoot. Tester games and chicken-wings and varied and sundry edibles and drinkables - Thanks to Matt Heusser for making that happen. He's another one who is just crazy-smart and really friendly. If you have not seen his TWIST podcasts, check them out.
After the social, a bunch of folks went to dinner and had a fantastic time. If I recall correctly, there were 15 or 16 of us. I scored a major triumph by having Michael Bolton sit across from me at the end of the table. What an amazing time. Melissa Bugai was sitting with us as we discussed the likely causes of why the lights on the deck of the restaurant kept going out. Yes, we tested the theory when Melissa unplugged the rope light going around the railing. They all stayed on after that. WHOO-HOO!
The conversation, as any conversation with Michael, took many twists and turns. We talked on language and literacy and music and education and mental discipline and CBC radio shows and how each touched on testing. What a mind-bendingly enjoyable night.
Wednesday I had the great pleasure of dining with Lynn McKee and Nancy Kelln - and my boss. Best part is, the boss picked up the tab! WHEEEEEEEEEEEEE! Another night of fantastic conversation on testing and wine and great food. Did I mention we talked about testing?
There were so many other great conversations - How can I give a recap of all of them? As it is, there is much to think on.