Friday, December 31, 2010

This year is drawing to an end. I know it is a tad lame to do a "look at the year that was" piece, or to use any of the other cliché-laden phrases that tend to introduce these things.
The thing is, it has been an interesting year for me personally and professionally.
Let's see. General stuff. I retired the blog attached to my defunct drumming-with-bagpipe-bands website and replaced it with this one. It had been in the "thinking about" phase for a long time, and I finally decided to do it. Ya know what's interesting? As I think about other stuff - often non-testing stuff - something pops into my head about software development or testing or SOMETHING. Sometimes that results in a blog post. Other times it leads to sitting in my big green comfy chair, sipping a brandy and thinking.
There was interesting work stuff at the day-job, with interesting challenges early in the year. With a flurry of emails, I found myself and the boss registered to attend QUEST in Dallas, Texas. This was a huge surprise to me; I was not expecting it at all, given limited budgets and having gone to TesTrek in Toronto the previous October. QUEST was interesting in that I met a number of people whose writings I had read but whom I had never met in real life. I also got to reconnect in person with people I had met before.
In May, I received confirmation that I COULD attend CAST, which was being held about 15 minutes from my house. Then, in June, it became clear that the scheduled release would conflict with attending CAST, so the company would neither pay the conference fee (something I was not too worried about) nor grant time off. That one was a problem. July rolled around and schedules shifted again: I could be granted the time to go to CAST IF I was available during the conference. COMPROMISE! COOL!
Sunday evening of CAST brought a great dinner and conversation with Fiona Charles and Griffin Jones and the lady-wife at a neighborhood Italian place. Recipes from Sicily, friendly folks, good wine and great conversation - little of it around testing, but all of it applicable to testing. What a great night.
Another night there was a fantastic dinner out with a bunch of folks - yeah, I know I blogged about that shortly after the event - and it is still a great memory.
Dragged the boss in one evening to meet some of the great ones of the craft who would be there. Had a fantastic evening out with Nancy Kelln and Lynn McKee and the boss - more good wine (notice a trend?) and a great conversation.
Then a bombshell was dropped that left me gob-smacked. It seems one of our dinner companions had a conflict and could not fulfill a speaking commitment in Toronto - would I be interested in being suggested as an alternative speaker? Holy Cow. I thought about it briefly... and said Yes. One thing led to another, and I did indeed speak at TesTrek in Toronto that October. Yeah, I blogged about that, too.
Stuff at the day-job continued to be interesting - meaning, really, really, busy.
So, things progressed. I talked with the boss about some interesting emails. The result of those chats was submitting proposals to a couple of conferences. I submitted proposals for a session similar to the one at TesTrek, but with a more advanced perspective than the general view there. The exciting thing was that the boss and I also submitted a proposal for a joint presentation based on our experiences starting a QA/Testing team from scratch.
One conference said "no thanks" (although the boss was asked to consider a presentation in a different area); the other accepted both proposals! Yeah, that rocks. I get to hang with the cool kids at STPCon in Nashville this coming March.
More projects were successfully rolled out at the day job. There are some interesting things happening there that may lead to more ideas for blog posts.
The local testing group and its attempts to spread its wings and fly have been great fun to watch and be a part of. Through it, I've met some terrific people, like Matt Heusser and Melissa Bugai, and have had fun sharing the adventure with them.
At home, it was a good year in the garden. We had a good crop of strawberries and peppers and tomatoes, although some of the other plantings were surprisingly less prolific than expected. Several big projects got done - and inspired thoughts about, and then blog posts about, software and testing.
We had some sadness in our lives this year. Stuff that led to serious rounds of soul-searching for "what is this all about." We also have had some great joys in our lives this year. For that, I am grateful. I don't know what 2011 will bring, but I am looking forward to the next year.
Thursday, December 30, 2010
A Hero's Fall From Grace or Why a Big Pay Raise Is Better Than a Statue
A couple of things happened recently that made me think of things I probably would not normally think of.
I know a fellow who works for a company that has one or two (depending on needs) "major" releases each year. They also generate and distribute monthly maintenance releases to address defects, bundle hot-fixes neatly into a package, and do all the other good things that come with such a process.
A recent release had some "issues." Lots of issues. After this fellow worked 12 straight "work" days - with a Saturday and Sunday thrown in as well - his boss commented that he had "saved them again" and was a "hero."
When thinking about this, I got to thinking about Selena Delesie's blog posts on "hero culture."
When teams pull together and work to overcome obstacles, amazing things can be achieved. Sometimes, the issues encountered require extra effort and creative thinking and good, if not great, communication between testers and developers to find solutions. This is a fantastic thing and is, I believe, the point of being a "hero" in our profession.
The part that makes me think, if not wonder, about this is when this becomes the rule rather than the exception. When the same team that pulled together, found solutions, and worked toward delivering the best product that could be delivered is expected to work long hours regularly - to repeat the same "pull out all the stops" effort for every project, big or small - there is a danger.
That danger is that the edge, the creative thinking, the "Hey, what if..." factor gets worn down.
Now, I know there are a lot of potential reasons for this to happen. Some may be within the team itself. There may be factors at work where individuals like the drama, the "rush" of the massive push at the end.
There may also be issues around the development methodology or practices. If the only testing done in unit testing is validating the "happy path," then the likelihood is that builds will come fast and furious, depending on how quickly the detected defects are addressed.
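To make that concrete, here's a minimal sketch in Python - the parse_quantity helper and its rules are made up for illustration, not from any project mentioned here - of the difference between a happy-path-only unit test and tests that poke at the edges:

```python
import unittest

def parse_quantity(text):
    """Hypothetical helper: parse a quantity entered as text."""
    value = int(text)  # raises ValueError on garbage like "" or "twelve"
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class HappyPathOnly(unittest.TestCase):
    # The kind of unit testing that lets defects sail into later builds.
    def test_valid_quantity(self):
        self.assertEqual(parse_quantity("12"), 12)

class BeyondTheHappyPath(unittest.TestCase):
    # A few of the inputs real users will eventually supply.
    def test_negative_quantity_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("-3")

    def test_non_numeric_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("twelve")

    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("")

if __name__ == "__main__":
    unittest.main()
```

If only the first class exists, the garbage and negative inputs get discovered downstream - by testers, build after build.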
Another possibility is that someone, possibly in the test group or outside the test group but involved in the projects, likes the chaos - the frantic push is something they thrive on.
Whatever the cause, when every project turns into a heroic stand worthy of an action movie, something is seriously wrong. When testers, or anyone else, are expected to perform super-human, or heroic, feats on every project, the edge will be blunted. The probability of significant defects being missed increases with each cycle. Eventually, people will be working less effectively no matter how many hours, or how "hard," they are working.
In some shops, the option to simply say "No, I won't do this" may be a legitimate one. In shops where contractual requirements don't allow such a response, I don't have a solution. Now, I am aware of the need to keep regular releases going; I'm just not sure of a solution that works everywhere.
What I do know is that the bosses, the leaders at your shop, need to make sure they don't need their people to be heroes each and every release. Horatius held the bridge, but he had help. The lone cowboy was a myth, just like Superman.
Friday, December 17, 2010
On Exploration or How You Might Be Testing and Not Know It
I had an interesting conversation earlier this week. A colleague dropped into the cube, grabbed a handful of M&M's and muttered something about how she kept finding defects and wasn't able to get any test scripts written because of it.
OK - That got my attention.
So, I asked her what she meant. It seems the project she was working on was not terribly well documented: the design was unclear, the requirements were mere suggestions, and she had already received several builds. So she was working her way through things as she understood them.
She explained what was going on... She intended to make sure she understood things correctly so she could document them and write her test scripts. Then she could start testing.
The problem was, she'd try different features and they didn't work like she expected. So, she'd call the developer and ask what she was doing wrong. Problem: She wasn't.
The issue, as she saw it, was that the code was so unstable that she could not work her way through it enough to understand how to exercise the application as fully as possible. To do that, the standard process required test cases written so that they could be repeated and would "fully document" the testing process for the auditors. Because she kept finding bugs just "checking it out," she was concerned that she was falling farther and farther behind and would never really get to testing.
More M&Ms.
So we talked a bit. First response: "Wow! Welcome to Exploratory Testing! You're going through the product, learning about it, designing tests and executing them, all without writing formal test cases or steps or anything. Cool!"
Now, we had done some "introduction to ET" sessions in the past, and have gradually ramped up the time dedicated to ET in each major release. The idea was to follow leads, hunches and, well, explore. The only caveat was to keep track of the steps you followed so that "unusual responses" could be recreated when they were encountered.
Explaining that the process she was working through actually WAS testing led to, well, more M&Ms.
The result of the conversation was that the problems she was encountering were part of testing - not delaying it. By working through reasonable suppositions on what you would expect software to do, you are performing a far more worthwhile effort, in my mind, than "faithfully" following a script, whether you wrote it or not.
Mind you, she still encountered many problems just surfing through various functions. That indicated other issues - but not that she was unable to test.
That thought prompted another handful of M&Ms, and a renewed effort in testing - without a script.
Thursday, December 16, 2010
Measurements and Metrics, Or How One Thing Led to Another
So, once upon a time, my dear daughter and her beau gave me a combination "Christmas and New Job" present. Yeah, I was changing jobs in late December... What was I thinking? Not sure now, but it seemed like a good idea at the time.
Anyway, this gift was an M&M dispenser. Yeah. Pretty cool, eh? Turn the little thingie on the top and a handful of M&Ms would fall through a little chute and come out the bottom. Not too shabby!
So, move along to the summer of 2008. The company I was working for had a huge, big, ugly release coming out. It was the first time with a new release process and schedule, and nerves were pretty thin all the way around - developers, testers, support folks, bosses, everyone. Well, being an observant fellow, I realized that we were consuming a LOT of M&Ms. Of course, it helped that the dispenser was at my desk, in my humble cube/work-area.
So, I started keeping track of how much candy we went through. The only folks who partook of these multi-coloured delicacies were the QA/Tester group and a couple of brave developers who realized that we were not contagious and they could not catch anything from us. (They also learned that they might learn something from us and we testers might learn something from them.)
What I discovered was kind of interesting. As the stress level went up, so did the consumption of M&M's. When things were going better and looking good, consumption went down.
Using a simple Excel spreadsheet, I added up the number of bags eaten (it helps that they have the weight printed on them) as well as the partial bags each week. Then, using the cool graphing tool in Excel, I could visually represent how much we went through - and, by correlation, the level of stress the team was under.
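For anyone who wants to play along at home, here is a minimal sketch of the same tally-and-chart idea - in Python rather than Excel, and with made-up weekly numbers, since the real ones are long gone:

```python
# Weekly M&M consumption in grams: (full bags * printed bag weight)
# plus an eyeballed estimate for the partial bag. The numbers below
# are invented for illustration.
import matplotlib.pyplot as plt

weeks = ["wk 1", "wk 2", "wk 3", "wk 4", "wk 5", "wk 6"]
grams = [340, 510, 425, 680, 765, 390]

plt.plot(weeks, grams, marker="o")
plt.xlabel("week")
plt.ylabel("M&Ms consumed (g)")
plt.title("Candy consumption - a stand-in for team stress")
plt.show()
```

The chart itself is the point: the spikes are what invite questions.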
After about a month, I "published" the results to the team. SHOCK! GASP! We went through HOW MUCH??????
Then the boss sat down with me and looked at the wee little chart. "What was going on during this week?" Ah-HA! The first obvious attempt to match events to what the graph was showing. I tracked usage for the rest of the year. The amount the team consumed over the six months or so that I tracked lined up remarkably well with due dates and, interestingly, with defects reported in testing.
One thing led to another, and the dispenser was put away for a time. In mid-2009, for reasons I no longer recall, the M&Ms came back out. As the crew realized this, consumption went up. And up. And up. Eventually, I noticed that the same pattern as before was coming back.
I learned two things from this exercise (which I continue to do).
One is that it is possible to measure one thing and be informed about another. Now, I am well aware of the First and Second Order (and other) Measurements described by some of the great ones in our craft. This exercise brought it home to me in ways that the theoretical discussions did not.
The other: sitting at a desk and making a meal of M&M's is a really, really bad idea.
Monday, December 6, 2010
Not the Happy Path or I am a Hi-Lo
I was at a local tester meeting tonight. Tons of fun and great conversation. There was a student from a local college attending. In the course of the evening we were discussing the dangers of trusting the "happy path." The student asked, "What do you mean by that?"
So we explained: it meant looking only at what "worked" and not investigating areas that were more problematic and probably more error-prone. In the midst of this, a story from over 20 years ago flooded back into my memory.
Mind you, it influenced me greatly at the time. It led me to some of my early revelations about software, development, testing and "revealed truth."
When the IBM PC AT was state of the art, I worked as a developer (programmer) for a small manufacturer that had its own warehouses and distribution center for its finished product. It was a family-run company located in fairly old buildings, dating from the late 1800s and early 1900s. One individual was the nemesis of the software development folks.
He was in charge of the warehouse - both finished products and component pieces. Any software running on machines in the warehouse had to be run past him for approval. These machines were scattered around the various floors of the warehouses. Now, these warehouses were monsters. Support posts were massive beams, 24"x24". The PCs were usually located near a beam.
The very old warehouses left little leeway for placing pallets and the like. Placing a case or pallet even a few inches from where it was supposed to be could cause a fair amount of trouble for the hi-lo operators moving material from one area to another.
The curious bit was that at least once a week, a hi-lo would hit (referred to as "bump") a support beam. This usually resulted from navigating around misplaced pallets. Sometimes it was simply the operator missing a turn. Once in a while, they'd hit the power conduit that fed a PC on an early network connection. Once in a great while, they'd "take out" the PC itself. Oops.
Back to my story.
This same nemesis of software development was finicky. Extremely finicky. He wanted to make sure that any data entered could be retrieved under any circumstance. If the user hit "enter" or "save," he had the expectation that the data would be fully retrievable.
His favorite tactic during demos of changes or enhancements was to have the demonstrator enter component or finished-part information. He'd sometimes have the demonstrator repeat the process. In the middle of the repeat, after the demonstrator clicked "save" or moved to the next page, he'd say "I'm a hi-lo" and unplug the power cord.
He'd count to 20 and plug it back in.
Then he'd sit down next to the demonstrator and say "Show me what you just entered."
If you couldn't, he refused to accept the change until it could pass his "test."
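You can sketch a software version of his test easily enough. Here's a minimal, hypothetical Python example - save_record is invented for illustration, and reopening a file only approximates a power pull; a real "hi-lo test" would cut the power or kill the process outright:

```python
import os

def save_record(path, record):
    """Hypothetical save routine: data acknowledged as saved must be durable."""
    with open(path, "a") as f:
        f.write(record + "\n")
        f.flush()
        os.fsync(f.fileno())  # don't trust the OS cache to survive the plug

def test_record_survives_restart(path="records.txt"):
    save_record(path, "part-4711,qty=12")
    # "I'm a hi-lo." -- pretend the power cord was pulled right here.
    with open(path) as f:  # fresh handle, as if after the machine came back
        assert "part-4711,qty=12" in f.read()

test_record_survives_restart()
```

Without the flush and fsync, the write can sit in a buffer and vanish with the power - exactly the data loss he was unplugging machines to find.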
How much work is it for your users to recover their work after a "that will never happen" event?
Winter Testing Workshop or How to Go Sledding With No Snow
I found myself testing an application for the day job over the weekend. Thanks to the wonders of reasonably modern technology, and a decent broadband connection, I was able to do so from the comfort of home.
Here I was, Sunday afternoon, sitting at my dining room table, connected to the office, running tests that I needed to work through. It was a lovely day. Cold, but not terribly. The sun was even trying to peek out from the clouds that had been hiding it for most of the last week. We had some snow the Wednesday before - not quite two inches on the ground. By early Sunday afternoon, there was none on the pavement or sidewalks, and much of what had been in the grass and yard had returned from whence it came.
So, as I was working my way through a log file, I heard an obviously frustrated child outside. I looked up and saw the wee kids across the street looking quite perplexed. They wanted to go sledding on the small bank in their yard, leading down to the sidewalk. Problem: most of the snow was gone, so the sleds/slider-thingies they had simply were not working well. Sledding was pretty much out of the question - particularly when you're between the ages of 6 and 9.
When you're stuck with a testing project without any clear way forward, what do you do? Send a terse email demanding whatever you need from whomever you believe should get it to you? I tried that when I was younger and greener in software testing than I am now. Didn't work so well.
How 'bout rail against the unfair universe? "Why do we do things like this?!? This is AWFUL!" Yeah, good luck with that, too.
Or, maybe, you could look around and see what options you have, even if they are so far outside the realm of possibility that all the "experts" would say "Don't waste your time!"
The kids across the street chose the third option. They put their two sleds next to each other at the top of the "hill" that is the bank in their yard. Then, while the youngest held them down so the wind would not blow them away, the older two used a) a garden rake and b) a snow shovel to gather enough snow from the REST of the yard to make a run wide enough for both sleds - a run that went down the bank, across the sidewalk, and ended with a small berm (of snow) to keep them from going into the street.
They then proceeded to have a good 90 minutes of fun doing something that the "experts" (grown-ups) would have told them they could not possibly do.
A 9-year-old can think that creatively. Can we?
Friday, December 3, 2010
Of WikiLeaks and Diplomats or Software and Trust
Um, unless you've been in a cave the last week or so, you've heard about the recent leak of diplomatic "cables" (what a lovely anachronism in 2010 - not the leak, but the idea of telegrams).
So, listening to the radio on my way home from work last night, I heard a commentator liken the situation that the US and its diplomatic "partners" are in to a teenager's private comments about their friends getting back to those friends. All of them. At once. The commentator went on to talk about how the parties would need to rebuild their trust in order to rebuild their relationships.
That got me thinking about some conversations I had a while ago, both in person and by email. The thing is, there was one really simple theme running throughout all of them: the entire development organization - not just the testers, not just the developers or designers or BAs or PMs, but all of them - must trust each other to be doing the best work they can, and know how, to do.
If that trust is lacking, the group will not be able to function properly. If one section of the group believes themselves superior in some way, that will show through to all the groups and will be as destructive to the overall relationship as, well, having private communications made public.