Friday, December 30, 2011

Janus Part 1: Looking Back 2011

One of the benefits of taking four years of Latin is that you pick up all kinds of interesting things that many other folks may miss.  Then again, an awful lot of people don't worry too much that "i.e." is an abbreviation of "id est" or, "that is".  Just like "etc." is an abbreviation of "et cetera" - even though people may even SAY et cetera, I wonder how many know what it means?  I'm mean enough not to say here, and to say "look it up" - unless you remember your Latin as well.

Janus, for whom the month of January is named, looked both forward and back.  That is a bit of what I want to do with this post and the next.  The post I wrote last night was a precursor to these couple of posts, partly because the things described yesterday laid the foundation for this past year and the year to come.

What I wrote January 1, 2011:

The Road Ahead...

The interesting thing is I've been thinking about the future. Well, not THE future, but what lay ahead for me professionally and how that may impact the family. It would seem there are several items that are possibilities for the coming year. One path would be to look for new work opportunities, either as a contract/consultant or as a full time, permanent employee. Yeah, as if "permanent" means much.
A bunch of folks commented privately, "Dude, pretty gutsy to say you'll be looking for work when you're still employed."  What I could not say then, was that in December, the entire staff of the company I worked for was told, in essence, that the company leadership was negotiating the sale of the company.  We did not know to which other company, nor what the terms would be.  Many of us speculated that the only reason we were told at that point, was because they needed us to sign releases of our stock options in case the sale closed before the end of the year. 

It was not a bold move to make such a prediction - I simply knew there was a likelihood that I'd be looking for work. When one company assimilates, well, acquires, another company, "long term employment" prospects for the staff of the acquired company are not terribly high.

As it was, I was not let go.  I was retained.  One colleague resigned after accepting a new position.  On his last day, we had a farewell luncheon for him.  By the end of the next day, another tester and I were all that remained of our team.  One other person, a developer, had been transferred from the development staff to testing.

We are continuing, and moving forward.
Community

Another option is to become more involved in the testing community. Actually, I started working on that in 2010 as well. What I mean is that reading the blogs other folks write is a good way to learn what their thinking is. Reading and participating in on-line forums is another way to both learn and become involved. I'm doing that as much as I can right now.

Of course, more actively engaging in both of these types of activities is on my list of things to do this coming year. Ya know, the funny thing is, the more I talk with folks about things I learn and have learned, the more I learn myself.
This continues.  I've been writing.  A lot.  STP Magazine and TechTarget's SearchSoftwareQuality have both run articles I've written.  I'm writing more in my blog, and I'm more engaged in forums than ever before.

I expect this to continue and grow in the coming year.  That would be way cool.
 
Local Testing Groups

Another thing, the local testing group, GR Testers, has been going in fits and starts for a while. Meetings have been sparse of late. The most recent one, in December, was kind of fun. There were a bunch of us sitting around a table - lots of wings, good beer and folks talking about testing. A good way to spend an evening. There's another meeting coming up Monday, 3 January. That makes it the first time in quite a while that there have been back-to-back monthly meetings. Normally, they are officially held every other month. It seems that as more people show an interest, the meeting frequency will pick up.

I wonder how many other local testing groups out there have a meeting schedule based on "whenever" instead of "We meet at this time, and here are the next couple of topics we're focusing on at these meetings..." I believe that the more people know about local groups, the more they are invited to participate and the more information is available about them, the more active and the stronger the community will be.

I think that pretty well sums up what I'm looking to do with the local group. I believe that getting more people involved and talking about testing is vital to improving not only our individual tradecraft, but the abilities of the local community. Sharing well-reasoned ideas can do nothing but good, presuming all are allowed to learn and ask questions.

The GR Testers, the local testing group, is up and running strong. The group has met monthly since that January post.  I've made it to most of the meetings. The ones I missed, I was out of town, usually at a conference. Cool.



Personal Development

Now, I realize that any of the above activities can lead to improving any individual participating. What I mean here is something a bit more. I had been signed up for the BBST Foundations course offered by the Association for Software Testing for a session in the fall of 2010. Things happened and that session was cancelled. I could not take the session offered as an alternative.

The GOOD news, for me, is I am signed up to take the Foundations course this spring. YEAH! I am really looking forward to this. Everyone I know who took the course raves about it. Big-time excited.

I've continued reading blogs and articles and books and talking with people and... everything else. My goal is to continue learning and to continue to share what I learn.

For conferences, I'll be attending and presenting at STPCon in March in Nashville. I bought myself a birthday present and renewed my AST membership in October. If I can work it out, I'll be attending CAST in August in Seattle.
This happened beyond my wildest dreams.  I took and passed the BBST Foundations course.  The schedule did not permit me to take the Bug Advocacy course - that is on the list for next year.  I also took the Instructor's Course from AST.  We'll see how the schedule works out this coming year.

Conferences.  I presented at STPCon (Spring) in Nashville.  I gave a joint presentation with my (then) boss, Kristin Dukic, as well as a presentation and lightning talk on my own.  I then was flattered, and honored, to attend and participate in CAST.   With Matt Heusser, I helped organize the Emerging Topics track, where a self-organized group selected topics submitted via a wiki - sessions then ran for 20 minutes, every 25 minutes.  It was astounding.

After CAST, I had the opportunity to present at STPCon Fall in Dallas.  Matt Heusser and I did a day-long workshop (excerpts are on the Software Test Professionals site, under Podcasts) - then a joint track session on "Complete Testing".  THAT was a lot of fun. I also presented a track session on my own as well as a lightning talk.  Matt just gave a keynote.

Then since I was not busy enough, I presented at TesTrek in Toronto in November. 

Whew.

Other Stuff

Scads of people have encouraged me this year.  Among them, Matt Heusser, who put me in contact with the folks at TechTarget, and made the case that he could not do Emerging Topics at CAST on his own - which is how I got in.  Cool, heh?  THEN - Matt had so much fun with that, he asked if I'd be interested in doing a joint workshop in Dallas.  Oh yeah.  The interesting thing is that he's really a nice guy - as the folks who know him will attest. 

Also - Fiona Charles is supportive and encouraging.  She is really an amazing person who is willing to offer suggestions and ideas on how to improve articles, presentations, whatever.  She also is way cool.  She was one of the very first people that I consider a "Name" in testing to ask me for comments on a paper - then list me in the acknowledgements.  Humbling. 

Catherine Powell, whom I met in person at STPCon in Nashville, always has encouragement and good suggestions.  Michael Larson is a great guy.  He's got a great outlook on life and testing.  His blog is inspiring.  Doug Hoffman was the Head Instructor for the BBST Foundations course.  What a smart guy.  Nice as the day is long.  We had several very nice chats, both at CAST and at STPCon Fall.  If you get a chance to see him present - DO.  Cem Kaner - yes, DOCTOR Kaner - the drive behind the BBST courses.  An ongoing inspiration.

There are more - Michael Bolton, Lynn McKee, Griffin Jones, Nancy Kelln, and many more.  These are the people I look to for inspiration and mental reinvigoration.

And of course, my lady-wife, Connie. 

I do not know what the future will bring.  I will discuss what I hope for the future in the next post. 

Thursday, December 29, 2011

Rising From the Ashes or Finding Motivation in Disaster

This has been an interesting year.  So many fantastic things have happened this year that at times it seemed like I was an observer, and not the one participating.  I've presented at more conferences this year than I attended in any year before this.  People write emails asking questions, looking for insight or help with a sticky problem, as if I'm an expert. 

I've written before about not feeling like an expert.  This is not about that. 

While preparing for STPCon this past October, a couple of interesting thoughts occurred to me.  While working on the presentation, and a couple of papers, I mentioned one of these thoughts to a fellow member of the GR Testers group.  We chatted (cyberly) for a moment on how failure can be a great motivator.  We talked about people who had overcome problems and adversity to rise to great things.

Of course, there are also many examples of people who break under adversity.

I don't know what the differences are in those scenarios.  I don't know why some people crumble, others recover and come back to where they were and others rise to greater success than they have ever known.  The last group, to me, resembles space capsules, like the old Apollo capsules, that would whip around the moon to accelerate even faster than they were going.  Yeah, in Star Trek Kirk did the same thing with the Enterprise around the Sun.  Cool, no?

The second group, I kind of think of as being a bit like a rubber ball.  Not a fancy "Super Ball" that used to be sold with the assurance that it would bounce higher than where it was dropped from (and rarely did as far as I know) but a plain bouncing ball.  Comes back to where it was, but somehow not quite the same.

The first group - like I said, I don't know why people fail to recover.  They just don't, for a variety of reasons. 

Me.  Hah.  I was moving up.  I had left one company where I was simply unhappy, and joined another company as a Test Lead.  There were "issues" there.  I was hired to improve testing and change the way testing was being done.  Well, things were not working out.  I had a series of "those meetings," and the last one ended with them handing me a package and me walking out the door.  (I'll be happy to give more details over adult beverages sometime, if you really want to know.)

So, I went home, popped in a video, cracked an adult beverage and said "What happens next?" 

Short term, I knew what had to happen - I needed to get ready to teach drum lessons that evening.  So, I had a single beer, watched a movie, fried some bacon and eggs and felt sorry for myself for 3 hours.  Then I made a strong pot of tea because I had work to do. 

I made a list of what I was good at and what I was not good at (no PC here, not right then).  I went through the list of what I was good at and highlighted those things I liked to do and those I wanted to get better at doing. 

I then went through the list of what I was not good at. I split that list into "so what?", "consider improving" and "fix it".  I then considered a list of things I had read about and had done very little with or knew very little about.  I also made a list of things I knew nothing about, but I'd seen mentioned in articles and blog posts and said "this might be worth looking into." 

I then went on and read what I could, learned what I could and did some serious soul-searching on what I really wanted to do.  I then looked at how I would fix the stuff I really needed to fix.  This was hard - really, really hard.

This led me to the next step - updating the resume, looking at what I wanted to do and where I wanted to do it.  I knew that (at the time) West Michigan was not a hot-bed for top-flight testing jobs or project management jobs, and my development experience was not in technology that was in demand.  On top of that, the economy was beginning its downward slide.  So, I figured there was a good likelihood that I would need to relocate. 

I looked and I looked... and I looked some more.  One month, I applied to 158 jobs. All over the US, Scotland, Ireland and Australia.

I learned a lot.  I've been applying those lessons ever since.

First - Be involved.  Online, locally, within the company, within the team.  Look for ways to learn and improve.  If someone looks for advice, guidance or a sympathetic ear - do what you can.  If something sounds familiar to a situation you were in, talk with them about your experience.

Second - Share.  Now, in some ways, this is similar to the first lesson.  Write.  Blogs, forum posts, responses to posts or online articles.

Third - Learn.  Keep learning, keep reading, keep thinking.

Fourth - Dare.

Fifth - Repeat.

Four years ago, the foundations for these really, really simple ideas were where I started.  I landed a job after a stack of interviews.  Some I knew would not be a good fit.  Others, well, they decided it would not fit.  I was ok with that.  When I landed the gig I landed, I talked with people. I learned.  I learned their applications, their methods and their personalities.  I learned how they worked and did things. 

I shared ideas and experiences. I contributed when I could and asked questions when I did not understand. 

Then people began asking me questions - How can we learn more about... Have you ever run into...

As a result of one series of these conversations, I landed at TesTrek in Toronto, where I met Fiona Charles and Michael Bolton in person, for the first time.  I also met a whole slew of people I had never met before - Nancy Kelln, Lynn McKee and a slew of other bright folks. 

That week in Toronto resulted in me getting more involved, helping revitalize and reinvigorate the GR Testers, then scrapping my drumming blog and moving to writing on testing.  That helped with presenting at conferences... and that led to, well, this most astounding year.

Where did this come from?  Getting fired.

You don't need to get fired/sacked/down-sized/happy-sized/whatever to do the same.  If you want to grow, then do it.  If you want to get involved, do it.

The fact is, doing these things may not make you a leader or a superstar or get you called an expert.  But if the world comes tumbling down around you, and you have been doing these things, others can step up and help.  If you have established connections and a reliable cadre of people, they can help just as you can help them.

Monday, December 26, 2011

On Patterns and Blinking and Puzzles and Expectation

Our family has a lot of traditions around the winter holidays, Christmas and New Year.  One tradition is working on a massive 1,000+ piece jigsaw puzzle.  We see it as beneficial in many ways.  When the kids or grandkids are around, for them to participate in ("become engaged" with) this thing that we are doing, they need to slow down.  I've yet to take any pleasure from assembling a puzzle that can be whipped through in an hour or two.

Our puzzles tend to take a week or more to be completed.  We'll start them one evening, then each day tinker a bit as each person has a few moments.  In the evening, we try and set aside 30 or 40 minutes to work on the puzzle together.  We've found it a great extension of "dinner table conversation" where we get caught up with each other's day. 

Oh, we both like doing puzzles too, which is perhaps the biggest reason why we do them.

So, this year's puzzle was a photograph of a Scottish castle (no, I don't know which one), with hills and mountains and things in the background, and a bit of water near the castle (hard to tell if it is a river or a loch or merely a fair-sized pond).  Like a lot of the better, or harder, puzzles, there were many bits that, well, looked a lot like other bits. 

In sorting out which bits are which, you need to look for subtle differences - small changes or variances in the overall image.  So, this last week, I had a portion that I was sure was part of the castle's battlements - the tops of the walls or towers.  Then I noticed another piece - JUST like the one I had in my hand, but a little different.  There was a small line in the piece I had that was not in this new piece.

I blinked.  Literally.

The portion I was working on was indeed part of the battlements - but it was the reflection of the battlements in the water, not the actual "top of the wall" stuff.  I was reminded of a defect I had spent time trying to track down on a recent project. 

I had a set of expected results and behavior - my "oracles" - and the results, what I was actually seeing, were really, really similar, but not quite what I was expecting.  What I was expecting, based on the described behaviors and expected results, was generally what I was seeing.  It looked right, but something did not feel right.

It was kind of like the puzzle pieces.  One looked like what I expected it to look like.  The other was, well, different. 

That got me thinking about other things. 

How many times are we certain that what we expect is really what we should expect?  Is it not possible that the expectations are the "bugs"?  What is it that makes the "expected results" "right"?  Even when you are the one who created the "expected" results, how well do you really understand the software?  Do you have a certain understanding as to what the changes will result in? 

In my case, my "expected results" were what was at fault - both in the puzzle and the testing. 
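To make that concrete, here's a tiny, invented illustration (the function and the numbers are made up for this post, not from the actual project) of a case where the test's "expected result" is itself the bug:

```python
# Hypothetical example: the oracle (the expected value) is the bug,
# not the code under test.

def order_total(prices, discount_rate):
    """Total of an order after applying a percentage discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_rate), 2)

# The tester "knew" a 10% discount on 19.99 + 5.01 should come to 22.49...
expected = 22.49            # ...but 25.00 * 0.90 is actually 22.50.
actual = order_total([19.99, 5.01], 0.10)

assert actual == 22.50      # the software is right
assert expected != actual   # the hand-calculated "expected result" was wrong
```

The mismatch looks like a product defect until you re-check the arithmetic behind the expectation.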

Once I realized my mistake in the testing, it became much easier to move forward.  I will never know about the puzzle, I'm afraid.  The orange tomcat who lives in the house with us decided that he had enough of us assembling the puzzle.

I believe, but am not certain, that we found all the pieces after he scattered them from the table.

Should we try and put that puzzle together in the future, I expect we'll find out about any missing pieces.

Saturday, December 17, 2011

Coaching and Learning and Opportunity to do Both

My last post was fairly short, well, for me anyway.  This one will be, too.  No more rambling oddities that may, or may not, have anything to do with testing.  Just kind of to the point. 

My last post was on the Call for Participation being open for CAST 2012.  It turns out that the weekend before CAST,  July 14 and 15, there is another learning opportunity - Test Coach Camp.  I'm pretty excited about this. 

Test Coach Camp will be held at the same hotel where CAST will be held. 

Matt Heusser wrote about it here.  The official AST release and Call for Participation can be found here.

These folks said it better than I can. 

If you are interested in helping testers do their testing better - which is what Test Coaching is all about, right? - then I suggest you dive in to this.

It's going to be good. 

Monday, December 12, 2011

CAST 2012, The Thinking Tester - Do You Know the Way to San Jose?

This may well be the shortest blog post I've published in some time.  There may be some rambling, but less than what I normally have.  Don't look for a deep, thought-provoking idea buried in an apparently pointless story.  It's not there.

So, here's the point.  If you are a Thinking Tester then you need to know about CAST 2012.  The Conference of the Association for Software Testing is scheduled for July 16 through 18 in San Jose, California. 

The Call For Participation is up (here).  There are three basic types of presentations:
  1. Interactive Workshops (140 minutes);
  2. Regular Track Sessions (70 minutes with at least 25 minutes for discussion);
  3. Emerging Topics (20 minutes with at least 5 minutes for discussion);
The deadline for Regular Tracks and  Workshops is January 16. 

The information you need to know about submitting proposals is on the website at the link above. 

If you are a Thinking Tester, I encourage you to consider attending CAST.  If you are interested in telling people about your ideas, I encourage you to consider submitting a proposal.

Saturday, December 3, 2011

On Improvement, Or Teams and Process

I sometimes find it funny.  I mean writing a blog and posting ideas and getting comments or seeing who agrees or disagrees via twitter and other "networking" sites.

My last post, on team building, had a reasonable number of hits from unique locations, garnered a few tweets/re-tweets and one public comment. I also got a couple of emails from people I know that essentially said (paraphrasing) "We need to assign people to jobs they want to do? What if we end up with three people on one function and functions with no one assigned to them?  That's crazy! You can't operate that way!"

I kind of blinked and thought to myself, "Is that what I said?"  So I reread that post and thought about it.  I can see where someone might take that away as my point.  And still, I don't really think that was what I said.

Consider this.

When talking about process improvement, particularly test process improvement, I say flat out that no cookie-cutter model will work in every shop, and one will rarely work for every project in the same shop.  For a team to test effectively, someone must have a decent understanding of what the individuals on the team are capable of doing, what they are good at and how well they work with the other individuals on the team.

When you have some idea about the individuals, and you can look at the overall team's work (essentially looking at what the team as a whole is good at), then you can look for ways to optimize the strengths.  If you can off-set weaknesses with strengths and bring the areas of less-than-optimal performance a little closer to where you'd like them to be, you can free more time and resources (like money and training/reading material, not people) to grow your whole team's capabilities.

Pairing testers, where one is stronger in certain areas than the other, and allowing them to learn and develop skills while doing the work is one way to spread the workload and let testers experiment with mentoring other testers.  It can also help develop a closer sense of teamwork and encourage people to turn to other testers for help, if they are not doing that already.

I have found no way to do those things without getting to know what the team members like to do, want to do and are good at doing. 

I know we can't always get the fun tasks - the ones we really like doing.  We can, however, learn about other things, or maybe find ways to improve and get better at the un-fun tasks.  I know for me, some of the things I really find un-fun are the tasks I feel less than comfortable with.  Yet those un-fun, scary, frightening things, the ones where I am an absolute novice, are also the ones that, as I learn more and become better at them, become more fun than un-fun.

I think that is the key right there.  If we want to get better, sometimes we need to work on the un-fun things and learn about them.  The team leader (manager, whatever) may have some idea of what the "fun" tasks are, but if each individual on the team has a different idea of what those fun tasks are, it is almost certainly going to lead to an un-fun experience.

Team leaders, I suspect that if you look, ask, talk and communicate with your team members, you'll find their idea of fun tasks, what they like doing and what they are good at will tend to be the same things.  I expect the same will be true of the un-fun tasks: what they don't like doing and what they are not very good at will probably be the same.

By making the pool of un-fun tasks small for the individuals and the team, by getting the team members to expand their skills, I believe your core testing will improve.  I likewise believe your core team work and cooperation will improve.  Finally, I believe your team's morale and sense of working together, their joint craftsmanship, will improve. 

Those things will tend to improve the testing your team does.  Then everyone will have more fun.

Thursday, November 24, 2011

Team Building Or Putting the Fun Back in Dysfunctional

We've all seen these annoying "team building exercises" where someone dreams up something "fun" that will help everyone "learn to work together." 

They can range from the "trust building" thing where one person, blind-folded, follows the directions of someone to get them to a goal.  Then there is the slightly more dangerous (hence fun to watch when things go pear-shaped... and they are almost certainly going to go pear-shaped) version where the blind-folded person crosses their arms and falls backward to be caught by another person. 

Now, upper body strength may play a role in the success of this version.  As does, well, paying attention and at least a rudimentary understanding of physics... and gravity.  Usually things combine into a series of "oh my" type moments.  Sometimes, well, they end with a little bump on the noggin.  (Once in a while it's a big bump.)

Then there is the version of "ice-breaker/team-building" thing where two people sit on the ground, back to back with their arms linked.  Then, they work together to stand up.  The idea is they support each other while using their legs to stand up.  Things work just fine as long as both people apply equal pressure at the same rate.  If they don't, well one stands up and the other gets dragged along for the ride.  Or there's the fun alternative - where they mostly push against each other and end in a heap.

The thing is, most of these efforts strike me as artificial.  Translated, they may "work" in some form or other.  Most of the time, I see people go through the motions because the boss told them to, or HR, or someone.  They don't see the point.

Sometimes, when teams are "created" or "formed" or "built" or whatever, you see the same kind of exercises.  In these cases, they are even more artificial.  People will go through the motions because if they don't, they figure they'll be fired.  Fear is a great motivator, at least on some level.

What I don't understand is why so many people think it works. 

It's like, oh, I don't know, boss-types throw people together and expect it to work by magic.  Well, maybe not magic.  I think maybe they expect it to work like a high school chemistry "experiment."  You know the type - combine certain items in specific amounts in a specific sequence - POOF!  Stuff Happens!

Well - humans don't act that way.  And when you consider people as "resources," it strikes me as, well, demeaning at best.  So why, when we expect and need people to work together, do we act as if it will just "work"?

Most of us will try and work it out and do the "team" thing.  It's part of being a grown-up, mature professional - right?  We kind of expect it - and they kind of expect us to do it.  So we do the whole forming-storming-norming thing and figure out what we need to do to do the job. 

Then, how many times have people seen this?

About the time we sort out how to tolerate each other and actually work and get stuff done - GOOD stuff, not just the bare minimum - someone waves a wand and reorganizes the company (or department or whatever) and expects things to work at the same peak performance as before the change.

Then there is the "optimizing resources" version of that.  Most of us have seen or heard of that version of the reorg game.

Someone looks at two departments or divisions or, for the "major league" players of that game, subsidiaries, and says "look at the cost savings we can get by combining X with Y!" 

There may be real savings, as in total net savings when the dust settles.  How long it takes the dust to settle is, in my mind, the question.  This is a variation on the "team building games," but instead of a handful of people, it's a bunch of people who may very well know little or nothing about what the other people do. 

When this game gets played, I am afraid it is usually for a short-term gain - something to impact the financial statements this quarter or next quarter, with the promise of "real savings/benefits" five quarters in the future.

Then there is the really high-stakes version of the game: Borg, Inc assimilates Minuscule, Ltd.

Well, OK, that may be a slight overstatement.  And, to be fair, I've worked for companies playing both parts.  Still, for those in the company being acquired, the uncertainty of what is coming can be a bit unnerving.  When questions get asked up-stream in the new "organization" and there is no response, then a couple of messages are being sent:  1) We're too busy to respond to your meaningless request for information; 2) Your future is your own (and it probably won't be here). 

Now, those may not be what is intended to be sent, but usually that is what gets received by the "non-response."

What I see happen in those situations is a large-scale version of those team building exercise-games I was on about.  I usually see a mix that has some posturing, some maneuvering, some "hoping for the best" and some resigned to fate.  In the end, there may likely be "staff realignment" actions - meaning some folks get assigned to new groups and others get assigned to "pursuing new interests". 

And the game starts again.

So, the hard part is, how do you make that tolerable, if not palatable?  Can the people who are still there get by without gritting their teeth and going just a tad crazy?  Can the people making the decisions help the teams - whether reorganized or made up of those who survived the staff reduction/realignment - actually work well? 

Maybe, to both.

First, the players in the game - the rank and file folks like I have been for most of my working life - YOU are a crucial part of the mix.  YOU can directly impact how the game gets played.

Here are some thoughts around what I mean. 

People on other teams, groups, departments, whatever are probably not villains in their own mind.  They may well be trying their best in circumstances they find challenging, at the least.  They may very well see you as the villain.

How can you find out about that?  Can you see if they are really villains?  Can you see if they are as clueless as you have been told or come to believe? 

Maybe.  I can't be certain you ever will.  However, if they work in the same building or general location as you do, try this simple thing: walk up and introduce yourself.  "Hi, I'm Pete (well, only say that if your name is 'Pete' - try putting your name in there instead).  I work in department X.  Can I join you?"

Talk with them.  See what makes them tick.  It may take many conversations, but it is worth a start.  After all, you may also be able to dispel the myth that you have horns, a tail and cloven hooves for feet. 

If they have a similar job function that you do, try talking with them about the problems they run into and how they get over/around them.  Oh, and be willing to share what you are encountering as well.  Make it a mutual learning experience. 

Pretty simple, eh?  I found it works pretty well.  Not all the time and not 100% - but generally, it helps people see that other folks are, well, people too.  It may also give you some insights as to why Team Z can't seem to turn out anything your team can work with.  And the folks on Team Z may find out that you don't really mean to be a butt-head.

Things kind of work both ways like that. 

Managers, Leaders, Bosses - you can play a role in this, too.  Before "realigning" groups, have a thought to what the people in those groups are good at.  Find strengths that complement other people's strengths.  Combine them when you can.

I know - most of you believe you are giving it your best effort.  A fair number of you do.

Others, admit it, if only to yourself, are looking to hit the target for head-count or "employee expense" (payroll) or "employee cost-reductions" (sacking high-paid folks) that you were told to hit.  Well, whoever set those targets is a boss too.  Talk to them about this.  Try, anyway.  Don't whine.

I call that the "penny-wise/pound-foolish" management style.  Sure, Jane makes a pile more money than these three people - but what skills does she have, what knowledge does she have - do the others have those skills or knowledge?  Do they together?  What is the impact to the product (hence customers, hence sales, hence bottom line) if we sack her and leave them?  What if we sack her and one of the other three?

Of course it's not easy.  Back when I was in school, the argument was that managers, directors and bosses were paid a pile more money than other folks because they could and would make "hard decisions."  HELLO!  That concept constitutes a "hard decision."  Going with the simple "she makes more than the others, get rid of her" is not a "hard decision."  If that is how you are operating, stop!  Really.

OK, now a scary point.  This is for line managers and staff - rank and file folks like me.

When you get "reorganized" you have an option.  Agree to the terms or pack it in and find another gig.  Simple.  "But the economy is bad!  Things are really tight!  I just did my nails!" 

It is your career and your life.  Manage it.  If you wait for other people to tell you what you can do, you may well be waiting a long time.  If you don't like the terms or what you'll be doing after the reorg, update the resume and get it on the street.  Sooner rather than later.  It will be better for everyone.

Line managers and leads.  This is for you. 

Find out what your people like to do.  What makes them tick.  I know that you generally try.  The thing is, asking them flat out may be the least efficient way of finding out!  These are people who are testers, right? They analyze and think about things, right?  That means nothing is ever what it seems to them - RIGHT?  So don't be surprised if the answer they give to the question about what they like doing or what they want to be doing in X years/months/whatever, is what they think YOU want to hear and not what they really would answer if talking about this over a beverage with teammates.

Work with your people to learn what they are like.  I know, it can be really hard when you work hundreds (or more) of miles away from people who are supposed to be "direct reports".  Still, make the effort to learn them - not just about them, but learn what they are like, how they respond and how they handle different forms of pressure. 

I saw a really good example of this a couple of weeks ago.  Ironically enough, it was at a "team building" exercise.  Some two dozen people were having an outing.  They all worked in the same office and "knew" each other enough to generally associate a face with the corresponding name - at least the first name.  After lunch they had some games.

They were divided into two teams, very diplomatically: each person reached into a bag and pulled out a necklace of plastic beads.  Whatever color beads they pulled out, that was the team they were on.

The games they were to play were essentially children's games - some skill, some memory, some, a little of both.  OK, there were some basic rules - some games took 2 people from each team, some took 1.  No single person could play more than 2 games.  Everyone had to play at least 1 game.  After the teams were identified, they had 15 minutes to sort out who would do what games.

One team got together and debated a name - what they were going to call themselves.  That took several minutes.  Then one person said, "I'm really good at X and Y, but not so good at Z.  So, I'll do X, I'll be the 'partner' for Y, but (pointing at someone else) you do Z."  He then assigned other people to other games.  People just kind of blinked and did not really argue.

The other team took a different approach.  The first question was "Who is good at what games?"  Several people were good at multiple games, some were not sure, some said "It's been so long since I've done any of these, I just don't know."  So, they tried the games.  Yeah, they looked to see who really were the best players for each of the challenges.

After finding people for the hardest "skill" games, they were sorting out who would do the memory games and who would be the second players on the multiple-player games.  The interesting thing was, those who had a game selected/assigned stayed by that game to make it easier for everyone else to see what games still needed people assigned to them.

About this time, the FIRST team realized what they were doing and declared it "cheating".  Alas, it was time for the first challenge.  Team 1 had a hard time remembering who was to do that game/challenge.  They took a few minutes to get that sorted.  Then they decided that writing down who would do what was a good idea.  So, they began writing things down.

After being trounced in the first game/challenge, Team 1 had concerns over who was to do the second challenge.  It seems they had people assigned to three challenges and some people were not assigned to any.  One brave soul stepped up for the second challenge (a 1-player game) and was likewise trounced by the Team 2 player.

This pattern continued.  In one memory game, Team 1 took advantage of a mistake by the Team 2 player and won that game.  In Jenga, Team 1 got very lucky when the Team 2 player bumped the table slightly when moving a piece, sending the tower of blocks down.  These were the only 2 "wins" Team 1 had.

The other games all went to Team 2 - convincingly. 

How does this apply to real work?  I see an awful lot of knee-jerk reactions to situations - kind of like the Team 1 approach in general.  Don't do that if at all possible.  Really.

Find out what your people are good at, and find out what they like doing.  If at all possible, accommodate those skills and preferences.  If there are people who are willing to learn new skills, encourage them - let them practice the "game" they have an interest in.  Encourage others to practice "games" as well.  If there are skills that people don't have, and that are needed to do what your group is assigned - ask for the people who would be willing to learn the new skill - the new "game".  Give them the option first.

Encourage your people and encourage them to help and support each other.

The next time the "sides" get chosen for the next game, they may not end up in the same Team 1 or Team 2 that they were on this time.  Give them skills to move forward and make their new team better. 

Put the fun back in what we do.

Friday, November 18, 2011

Thoughts from TesTrek 2011 - Part 2

Thursday morning at TesTrek, in Toronto started with a keynote presentation by Michael Mah from QSM Associates on Offshoring, Agile Methods and the idea of a "Flat World." I could not stay as I was presenting in the first track session immediately following. My presentation on Integration Testing went over reasonably well, I thought. There were a fair number of people who were willing to participate and generally engage and some interesting discussion afterward.

To unwind, I went to Fiona Charles's session on Test Strategy.  She has given this as a full-day workshop.  Cramming it into a 90-minute session was challenging, but I thought it gave a reasonable idea of the challenges of looking beyond templates and boilerplate.

I had a nice lunch conversation, again with Fiona and a handful of other people sitting around a table. 

The balance of the day was a rush of impressions for me.  I know the afternoon sessions occurred.  Still, I found myself in interesting conversations with people - many of whom I have named already.  The thing is, without establishing relationships in the past, these conversations may not have happened.

Much of what I learn at conferences occurs in the "hallway track" - talking with people and discussing concepts of interest to us, whether they are on the program for the conference or not.  There are a lot of people smarter than I am, with more experience than I have. The fun part for me is learning and sharing what I learn and have experienced.

The beauty of smaller conferences is that they give the intimacy that allows participants to meet a large number of people if they are willing to step outside of themselves.  I can not encourage people enough to take advantage of that opportunity. 

One thing that struck me was that I saw only a few people talking with other people they did not work with or know in advance.  I'm always curious about that.  The thing I consider to have been fortunate in is that I learned to swallow hard, overcome my shy, introspective tendencies and talk with people.  Walk up, say "Hi, I'm Pete.  Are you enjoying the conference?  What have you been learning?"  Sometimes it leads to interesting conversations.

Other times it is a little less interesting.  Folks say "Oh yeah, I have a session to go to.  Maybe we can talk later."  OK, no worries.

The thing is, I learned some time ago, and have blogged about it, that you need to allow time to talk with other people.  It is a remarkable conference that has really significant, information-packed sessions in every time slot.  Now, this is not a dig at TesTrek, don't get me wrong.  I just find it interesting that there was not as much socializing/networking/conferring as I expected.  (There may have been more, in places I did not find, but I did not find or hear about them.)

I tweeted a few times inviting people to talk about anything to do with testing.  Now, I had some fantastic conversations with Fiona, Adam Goucher, Tommas, Stephen and more.  But what I found interesting was that of the tweets I sent out, the invitations (including the link to the blog post inviting people to confer at TesTrek) resulted in one person saying "Are you Pete?  I'm Heather!  I saw your tweet!"  That person was Heather Gardiner, with tulkita Technologies.  We had a nice conversation, then we both had to deal with other things.

The thing is, and I think this holds for more testers, don't be afraid to meet and talk with other testers.  Even folks like conference speakers, yeah, the "experts", like learning new things.  You may not agree with them, and they may not agree with you.  But, people who are thoughtful testers with a desire to learn and to share, are good sources for you to learn as well. 

This, I think, is the great opportunity for people going to conferences:  meeting people with a different viewpoint and learning.  Smaller conferences, like TesTrek, give you the opportunity to meet people like you and have the chance to talk with every attendee. 
Meet people.  Talk with them.  You never know what you might learn.

Thursday, November 17, 2011

Thoughts from TesTrek 2011 - Part 1

Last week I was in Toronto for the TesTrek Symposium hosted by Quality Assurance Institute.  There were, what seemed to me, some 200 to 250 testers hanging out and talking about testing.  In downtown Toronto.  Cool.

So, I had the opportunity to spend time with people I had met briefly during the last two years I've attended.  Yeah, it seems hard to believe this was my third TesTrek.  Go figure.

The advantage of returning to the same conference, particularly if it is hosted in the same city, is you get to catch up and get to know people you met there better than you can in a single meeting.  In my case, I got to have a really nice series of conversations with both Tommas Marchese and Stephen Reiff - both of whom I had met previously, but this time we had the chance to spend time together, chat and learn.

Other people I see fairly frequently, mostly at other conferences, included Nancy Kelln, Adam Goucher and Fiona Charles.  These folks are smart, capable testers.  You hear a lot of marketing hype about "thought leaders" or "technical experts" or other buzzwords.  You know what's really interesting?  The people who are the real deal don't take those titles on themselves.

Monday and Tuesday at TesTrek consisted of a Manager's Workshop.  This is an interesting model in that the participants break into groups and discuss topics of interest to, well, test managers.  The times I've been involved in these workshops have been mentally invigorating, if not exhausting.  This year, the day-job kind of got in the way, so I could not attend and participate.

I drove to Toronto on Tuesday, checked into the hotel in Toronto, then went looking for the fun.  I found the folks from the conference, like Darrin Crittenden and Nancy Kastl.  I had the chance to sit down and have the first of many chats with Fiona and Tommas, and Nancy when she arrived from Calgary. 

Wednesday opened with a "Pre-Keynote" by Tommas Marchese.  His topic was "Heads Up Testers: Striving for Testing Excellence."  In short, it was a call to action for testers to break out of the mold that some companies expect testers to stay in.  He had several solid points and I thought it was an excellent start to the day. 

The keynote following this (after all, that was a "pre-keynote") was a panel presentation with representatives from Microsoft, Micro Focus, HP and IBM-Rational.  I did not find this format all that effective, and thought it would have been better to have greater opportunity for audience participation, questions and the like.

The rest of the day was broken into workshop and presentation sessions.  These consisted of presentations around Test Measurement, Cloud Computing, Test Leadership, Security Testing and others.  Nancy Kelln gave a workshop on Test Estimation that had originally been intended to be given along with her Partner-in-Crime/Conferences, Lynn McKee.  She challenged people's expectations, just as I thought she might.

Tommas Marchese boldly gave a session on regression testing that he was not scheduled to give.  Filling in and giving a presentation not your own can be a problem.  He did a respectable job, I thought, and made some good points. 

After the opening reception, with some more conversations, a handful of us went to the Elephant & Castle around the corner for a quiet pint and conversation.  I retired early to rest for the next day and prepare for my presentation.

Sunday, October 30, 2011

Conference Attendance 201 - Learning While Conferring, Continued

I've written on this idea before.  Here in fact.  Many other people have written passionately about it as well. As I am fresh from presenting at STPCon Fall 2011 in Dallas and am getting my notes and reviewing my presentation for TesTrek 2011 (http://www.qaitestrek.org/2011/)  in a couple weeks in Toronto, I wanted to take a moment and beat this drum one more time.

When you are at a conference, CONFER with people.  Talk with them, ask questions.  Answer questions.  Express opinions.  Be open to learning.  If you disagree with someone, let them know politely - and why.  Maybe you are closer than you might realize and are simply stating the same thing in different ways.

One really important point.

When the "official" sessions wind down and the "official" "networking opportunities" wrap up - look around for people from the conference just hanging out.  Then ask if you can join them.  Ask what they do, where they do it, what they like about it.  You may well learn really valuable ideas you can take back to the boss.

If you see a group of people from the conference sitting in the hotel bar/lounge/whatever, a quick scan will give you some idea of the conversation(s) going on.  If it is vaguely related to software and/or testing, ASK IF YOU CAN JOIN THEM!

I know from my own experience, that if I have ANY energy left and no absolutely pressing duties elsewhere, I like to talk with other test professionals and learn.  Yeah.  I learn a lot just from talking with people.  This last conference, I had some fantastic conversations with Doug Hoffman, Fiona Charles, Tony Bruce, Scott Barber, Dawn Haynes, Lanette Creamer, Catherine Powell, Robert Walsh, Dani Almog... the list goes on - Those are the folks that popped into my mind immediately.  Testing Heavyweights all - and I gained insight, if not actionable information, from each conversation. 

So, an invitation to any TesTrek Symposium attendee: if you see me sitting in a chair in the hallway surfing the web, or in the conference center lounge, please feel free to join me.  Really.  I like meeting people and sharing ideas, experiences and viewpoints.

I'm there to learn, too.  Please help me learn.

Saturday, October 29, 2011

STPCon Fall 2011 - Part IV

Thursday was perhaps the most relaxing day I had the week in Dallas.  I enjoyed a relaxed breakfast with a large number of testers and speakers at the conference - it is easy to relax when all of our speaking commitments have been fulfilled.

Some were still not quite done however.  There were a series of talks called the "Rapid Fire Challenge" with a key idea packed into a five minute presentation.  Dawn Haynes gave an interesting presentation around Tester Personalities.  Lanette Creamer gave a fun presentation on "Tester Tricks" (where she listed Adam Goucher as her "favorite tool.")  Mark Tomlinson talked about Risks and Costs of false positives in automation testing.  Scott Barber gave a cool breakdown on ideas that were useful for determining what to test and what not to test.  He called it FIBLOTS.  Cool.

Then, Fiona Charles delivered a stunning keynote on the question of managing Testing or the Testing Process.  Ummm - wow.  I was tweeting comments from that as fast as I could.  It was good.  It was really good.

From there, I went to Doug Hoffman's presentation on computer assisted exploratory testing.  Overall, I enjoyed it and got some ideas I need to consider.  The huge drawback was that there was simply too much information to squeeze into 1 hour and 15 minutes.  It would take at least a full day to get a good survey of the ideas - and a couple of days would be better. 

From there, I ran into James Pulley - a fellow SQAForums Moderator.  We had never met in person and this was a great opportunity. 

From there, I spent some time trying to get my notes in order, get things sorted out and "filed" so I could make use of it later.  The rest of the time there (which was not much by now) I spent it chatting with people, having a light lunch with Fiona Charles, Matt Heusser and Yvette Francino.  After that, Matt, Fiona and I headed to the airport for our respective flights out.

All in all, this was a remarkable week.

Thursday, October 27, 2011

STPCon Fall 2011 - Part III

Wednesday at STPCon Fall was an interesting day.  In the morning I had signed up to participate in the "Speed Geeking Breakfast Bytes" - 8 minute mini-presentations on a topic we wanted to be sure that people could head home with at the end of the conference.  All the presenters were in a biiiiiiig room giving their presentations to a table of people all at the same time.  We gave our presentations three times before the morning keynote for the day.

The topic I presented was "Integration Testing Lessons from Pulp Fiction."  Yeah, kind of a movie theme for me this year - Harry Potter on Tuesday and Pulp Fiction on Wednesday.  Fun!  The first run-through for me was a bit rough - actually, I did not finish before time was called.  There were a couple of interruptions and, frankly, I probably needed another cup of coffee before launching into the first run.  Sorry folks.  The second and third run-throughs went pretty well and everyone had fun.  One participant in the third session was giving quotes from the movie at appropriate times!

The rest of the day was dedicated to simply going to sessions and hanging out with people I wanted to talk with.  What a fantastic way to spend a conference - Not preparing for a presentation or answering questions about the presentation, but simply going to presentations and sitting in the back row.  Cool.

I went to Dani Almog's presentation on Automated Test Oracles.  There have been presentations before on a similar topic - what made his interesting was how he developed the oracles: "neural networks" developed from the data identified as correct or incorrect.  Cool stuff.  It requires a huge amount of rigor and control, not to mention structure, but it looked interesting to me how he went about building it.

The next session I went to was by Karen Johnson on Discipline in Testing.  Ironically, I got there late.  Karen spoke to a really full room on how to keep motivated and moving forward.  There were a lot of good suggestions - and she explained how she made use of each, from time-boxing to a form of the "Pomodoro Technique" to setting small rewards for finishing a task.

One important aspect she suggested was to simply change location - literally.  Go for a walk.  Do something ELSE.  Go somewhere else.  Like, a coffee shop, a conference room and close the door.  Forward the phone to voice mail.  Find a park bench (or comparable) and try to clear your head so you can think better.  What I thought was cool about this was how many people attending the session shared their ideas on what they try and do.  It was really a fun session.

This brought us to lunch and the Lunch Keynote by Matt Heusser.   Matt's topic was "How to Reduce the Cost of Testing on Monday."  Meaning, things you can start with when you get back to the office to be able to focus on testing - not time reporting, not attending meetings, not preparing project status reports and status reports on the status reports and status reports on the status reports on the... yeah, you get the idea. 

He talked about taking steps to open up communication - to help people be able to work more effectively and spend more time and energy focused on testing - so they are really testing, not sort-of-testing.  It was really interesting.

After this, I headed up to listen to Lanette Creamer present on pairing programmers and non-programmers.  Now, she did not mean the "Paired Programming" some of the XP (and other) folks mean.  Instead, she meant more of spending some time working together to get things sorted out - either in planning, designing or executing tests - or talking about the application - or... yeah.  A cool idea I like to call "Communication".  I know it's kind of a weird concept, but it seems to have some potential.

Following this, I caught up with Catherine Powell, a crazy-smart tester, Matt Heusser (who had come down to Earth after the success of his keynote) and a handful of other folks for a quiet chat and a little relaxation.  Wonderful people.

We then headed off to the "Open Jam Sessions" - a bit of fun before dinner.  Folks split up into groups to play a variety of games and fun exercises and generally have a good light series of exercises.  Lots of fun. 

More conversations with a variety of people over dinner then a little writing wrapped up my last full day in Dallas for STPCon - BUT - there was still Thursday to look forward to.

STPCon Fall 2011 - Part II

Tuesday at STPCon in Dallas was an astounding day.  Matt Heusser and I were slated to present a session entitled "On Complete Testing" immediately following Rex Black's morning keynote address.  As I wanted to prepare the room for our presentation and make sure all our potential examples were queued up, I admit that I ducked out early. 

Our Tuesday morning session started with a simple question to the participants: "When the boss comes in and says 'We need this completely tested' or 'We need this to be bug free,' what is really meant?  What is complete testing?"  We got an answer we expected - "That we have complete coverage in our testing."  OK.  Coverage of what?  "Requirements."

We began discussing that idea, which drew out things like validation metrics, boundaries and equivalences defined within the requirements, and what can be done about undocumented requirements, assumptions and presumptions, expectations that were not communicated, and other problems. 

We moved on to state diagrams - mapping each potential state within the application and how that can be exercised.  I pulled out a way cool example from a system I had worked on a few years ago.  The basic functions worked really well.  However, by sheer accident, a memory leak in the application was found - simply by letting the application idle over the weekend.  This proved to be an example of the strange problems that can occur outside of our control, or outside of any way we might expect issues.

We moved on to the idea of code coverage as a means toward complete coverage.  This led us to the differences between statement coverage and branch coverage.  This was an interesting discussion on just what the differences could be and how we could potentially miss paths even if every branch is tested.  We may miss combinations of branches.  We agreed that the idea of "100 Percent" coverage of lines or branches still would not give us complete testing.
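The branch-versus-path gap we discussed can be shown with a tiny sketch (the function and values here are hypothetical, not from the session): two tests can exercise every branch, yet never exercise the branches in combination.

```python
def adjust(x, invert, shift):
    """Hypothetical function with two independent branches."""
    if invert:        # branch A
        x = -x
    if shift:         # branch B
        x = x + 10
    return x

# Two tests give 100% branch coverage (each branch both taken and skipped):
print(adjust(5, invert=True, shift=False))   # A taken, B skipped -> -5
print(adjust(5, invert=False, shift=True))   # A skipped, B taken -> 15

# Yet the path where BOTH branches fire was never exercised.  A bug in
# the interaction of the two branches would go undetected:
print(adjust(5, invert=True, shift=True))    # untested path -> 5
```

Four paths exist here (each branch taken or not), but two well-chosen tests cover all the branches - which is exactly how "100 percent" branch coverage can still fall short of complete testing.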

We did agree that none of these techniques would give us true complete testing in isolation.  If we made use of all of these techniques, we would have a greater likelihood of coming close to "complete" testing. 

We continued down through input combination coverage, the idea of ordering and filtering, and memory, state and interrupt problems.  All in all, we had a rolling discussion and the hour and 15 minutes flew past for both Matt and me.

We had an absolute blast! 

(Note - everyone who was in the session, we promised to send a transcript of the discussion to all who requested it.  SO, those who dropped off their business cards or gave us their email address on the "signup list" - I'm still working on that and I'll get it out as quickly as I can.  OK?)

The session wrapped up.  We headed out and got a cool beverage, then had some really interesting hallway conversations, and headed to lunch. 

The afternoon found me in a series of chats and conversations with people and just loving every minute of it.  Around 3:30, I reached into my briefcase, pulled out and put on a tie I brought specifically for my upcoming presentation on test leadership. 

My afternoon presentation was on Test Leadership Lessons from Harry Potter.  Yes, I know, a geeky-nerdy topic, but that is kinda me.

I walked in, got a very nice introduction from Fiona Charles who introduced me as "a colleague and friend" - which left me gobsmacked (not a good thing to have happen just before speaking.)  Suffice to say that the idea of technical leadership and the example of Harry Potter as a reluctant leader, one who is not appointed and does not seek to be a leader, but finds himself in that role, has similarities in the ideas of Technical Leadership expressed so well by Gerald Weinberg.

It was a fun session for me, and it seemed to me that the participants also had fun.  I encouraged them to write - for themselves, in blogs, newsletters and, well, anywhere.  Learn and share what you learn.  Experiment and share your results.  Be bold and dare greatly - (kinda like what testers are expected to do when testing, right?)

The session wrapped up and we retired to the "Welcome Reception" on the conference center's patio.  This was a great evening with nice appetizers, great conversation, meeting people and generally having a great time.  I was even interviewed by Yvette Francino for Search Software Quality!  How Cool! 

My day wrapped up with a nice relaxing evening back with another crowd of testers sharing stories and having a good day.

I can not imagine a better way I could have celebrated my 50th birthday.

:-)

Wednesday, October 26, 2011

STPCon 2011 Fall - Part 1

It has been an interesting couple of days for me.  I flew from GRR (yes, of ParkCalc fame) to Dallas with Matt Heusser.  We talked about our joint presentations, our individual presentations and what we hoped to learn and hear while at STPCon.

We landed safely, after talking for FOUR HOURS!  Yeah, in the plane to Detroit, waiting at Detroit for the lay-over, then on the flight to Dallas.  I think the people around us were exceptionally glad that the flights were over.  We had a fantastic conversation, the woman next to us said "Wow.  I've never heard two guys get so excited about something so boring."  I figure she was in marketing... or maybe upper management.

OK, so before I go on, let me just say that the turkey burger was delicious.  I found an undocumented requirement.  "Delicious AND well-done ground turkey."  Bad night Sunday after a highly enjoyable conversation with Matt, Fiona Charles, Rich Hand, Abbie Caracostas and a bunch of other people. 

Monday morning I had very little energy, my own fault in retrospect, but we still gave a fairly solid presentation.  By afternoon I was closer to "up to speed" and could contribute much more.  We learned a lot from doing the workshop in front of living, breathing, thinking people and have already begun making changes for the future.

What we presented, and the exercises we conducted, involved a series of testing ideas, problems and scenarios.  We began with Matt talking about the idea of "quick attacks" testing.  That is, doing some basic hits against an application even if you don't have much information about it.  We then applied a series of exercises around that idea.

Then, we introduced the idea of working against stated specifications and expectations, and how insight into those will change approaches to testing against the same applications, and others.  We then began discussing core ideas around bounds and equivalences in data and how they may impact our testing approach.  After lunch, we moved on to discussing a variety of topics.

Matt and I knew that there would be far more information to talk about than we possibly could get into a single day workshop.  We created a list of potential topics that could be of interest.  We presented that list to the class, allowed them to add their ideas and vote for the topics of interest to them.  Each participant was allowed three votes, we sorted based on the number of votes and began working our way down the list. 

This was a hugely fun exercise for us and resulted in some interesting discussion among all the participants in the session, as well as Matt and me.  What made an impression on some of the students is that we did not always agree.  In fact, there were cases where we made a point of showing where we differed, and how our experience and environments, the context in which we worked, impacted some of those views.

As time wound down and we came to the end of the session, we had several topics we had not addressed.  Matt pointed out to them that in testing, as in the exercise, we may not have time to test everything on "the list" to be tested.  We will then have to work on the items that are of the most interest - Just as we selected topics that showed the most interest to discuss and work through with the students. 

Monday night we settled down for a light supper of appetizers and various beverages with a host of intelligent people.  The ideas and excellent conversation flowed, although I decided to call it a day and retire fairly early to prepare for Tuesday's sessions and get some rest. 

Friday, October 14, 2011

No Black Swans Or Always Expect the Unexpected

So, I expect many, if not most reading this, have heard of Taleb's Black Swan theory.  He put this forward around those things that people write off as being so far in the extreme, or so improbable, that "no one" could predict them.  Many folks far more learned than I have discussed this many times over.  Not just around software events, but in disasters, both natural and, well, not so natural.

Fact is, I have seen so many things in software testing that other folks would write off as "improbable" or "unrealistic" or simply snort in derision over.  There was the developer who once tried to say "No user would ever run this purge process with nothing to purge."  Really?  Never?  They'd always know better because, well, they'd know never to do that?  How would they know?

I can also think of the times I walked into the same trap, unwittingly.  I learned.  I learned to be aware that I can not anticipate everything.  Now, sometimes that seems odd.  Then again, when I think about it, I ran into the same kind of problems other folks had - folks for whom a test result, or worse, an actual event in production or in the wider world, came as a complete surprise.  That problem was, and sometimes still is, perception.

My way of thinking, approaching problems or questions, is sometimes self-limiting.  The fact is, I suspect it is the same for most people.  What I believe, or maybe hope, is that my awareness of this can help me work around it and be open to multiple possibilities. 

Hmmm - that sounds kinda wishy-washy.

What I mean is that I try to be open to the possibility that I missed something.  Usually when I do miss something, it's because of my own perceptions - my way of approaching a question or scenario.  Broadly, my frames of reference.

These models of thought can be useful.  If we are not aware of their potential limitations, though, we will find ourselves in the "No user would ever do X" camp.

Now then.  Something REALLY unexpected?

My lady-wife keeps a large garden.  We also capture rain water in a couple of large barrels to water that garden.  Sometimes, when there is a lot of rain, we will line up some buckets and catch extra water from the run-off of the car-port roof. 

An interesting thing happened last week.  The lady-wife was trimming some plants.  One of the branches had some nice looking flowers on it.  She decided to put it in a rain-filled bucket until she could bring the flowers in the house.  She looked in the first bucket in line and saw a small fish. 

A FISH! 

Just a small little guppy looking thing.  No.  We did not put it there.  We have no idea how it got there, although we've bounced around some fun theories.  Have we come up with a model for how it got there?  Sure.  Several.  We don't know which, if any, is correct.

So a fish in the rain bucket is something I definitely did not expect. 

Maybe I should have.

Monday, October 10, 2011

Of Bugs and Weeds or Why Tugging Gently May Reveal More Than Pulling Hard

We have a fair sized garden for where we live.  To be fair, my wife has the garden.  I'm the laborer who makes some of the bigger chores happen.

One chore that we get to do twice a year (mid to late spring and early to mid fall) is pull Virginia Creeper vines out of the lilacs, the mock oranges, off the fence, and generally out of everywhere we can pull it out from.  Now, it's a pretty enough plant.  However, like most vines it tends to not "stay put" and grows pretty aggressively.

It can, and will, choke out other plants - it has done in a couple of ours and several of the neighbor's much loved mock orange trees.  It looks a bit like poison ivy and actually works pretty well as a deterrent for keeping some of the unscheduled visits to the garden and yard from youngsters (and oldsters) in the area to a minimum - they don't realize that vine is NOT poison ivy.

In the fall, the leaves turn a striking, astoundingly bright red. 

Did I mention that it grows REALLY fast? 

While I pulled a huge amount from the one area I worked in, I know there is more there.  I focused on the big mature stuff that would be sending out more runners in the spring.  Any smaller vines I came across I also pulled but I did not go looking for them.  (Kind of like looking for defects that really impact a system vs those that should be fixed but don't really impact anyone.)

Like many people dealing with this plant, I started out, many years ago, pulling hard and aggressively.  I was going to show it who was the boss.  I was going to WIN!  Ummm, not so much.  You see, vines tend to break off at resistance points.  So if a couple of tendrils have looped themselves around a wire in a fence  or a branch of another bush, the "stem" will break if you pull hard - the tendrils will hold the rest in place.

What I learned, and what greatly amused my lady-wife as she watched me do this, is that I can identify a large vine, gently lift it and apply even pressure to it.  That will break the tendrils off so I'll have a much larger section with little or no resistance to pulling.  I'll then look up to the top of the bushes (the lilacs along the North side are quite tall - over 10 feet) and watch what moves when I tug on the vine.  That way, if the vine DOES break, I'll know generally where it broke off.

I've also found that if I apply a gentle, consistent pressure, I'll get much more of the vine off per attempt than if I give it a good yank.  Like all heuristics, that one is fallible.  Sometimes it works, sometimes it doesn't.  Much of the time, though, it works. 

So as I was pulling vines this last week, I got to thinking about software defects.  If I dive in aggressively looking for HUGE problems, I tend to find some.  If I use a more gentle, subtle approach, I may find some of the same ones I found with the aggressive techniques.  I also find others that I perhaps may not have found.

I'll experiment some more tonight when I get home.  I need to transplant some of the smaller lilac bushes.  It should be much easier since so much of the Virginia Creeper was pulled out of that portion of the garden. 

Sunday, October 9, 2011

An Exercise in Task Prioritization Or Where Has Pete Been?

I find it astounding that the last blog entry I posted was written on August 24.  That seems a long time for me to go between postings.  That is not to say I have not thought about writing some thoughts down.  In fact, I have a bulletin board full of post-it notes saying "This would be a good topic for a blog entry."  As of today, it's quite a lengthy list. 

The simple fact is, I fell into a common trap for software people - I'd see a side project and think "I can do that in my spare time, it can't take too long."  Or, I'd say "Here's an opportunity for me to do X.  Those don't come along too often.  I can do that."

Pretty soon, I had more "side projects" than time to do them in. 

On top of that, there was a "side project" at the day-job that needed to be addressed.  Not a big deal, just a minor little thing of 40 to 60 hours and a couple months to do it.  Of course, it was not on the project schedule because it was pretty small and could be done in the middle of other things.  Until it was scheduled to be shipped to a customer.  THEN the priority ramped up BIG time.  (No one has ever seen that before, right?)

So, in the meantime...
  • I wrapped up slide decks for two presentations at two conferences;
  • Wrote up supporting articles on both; 
  • Wrote an article taking a contrarian view toward the answer to a question asked;
  • Wrote up four short essays answering other questions;
  • Wrote up notes from CAST2011 and filed them neatly for later use;
  • Took the lady-wife on an extended weekend at a music festival that had no electricity (and allowed no generators) in the camping area - and had a fantastic time;
  • Did the usual (and expected) "end of summer" family stuff;
  • Got caught up on the day-job's projects (well, relatively);
  • Slept in yesterday.
Oh, and the usual home maintenance stuff for a home built in 1888.  (A friend of ours said "The only thing that works in an old house is the people."  He's right.  We stole the line and use it ourselves now.)  Oh, and I fixed the annoying drip in the plumbing in the downstairs shower.   (But that was today.)

The result is I missed a peer conference I had intended to participate in.  I also did NOT submit any proposals in the last 6 weeks to speak at conferences coming up next year (yeah, there were a bunch of deadlines that I waved to as they whooshed by.)

The good news, and a thing I'm quite pleased about, includes a resumption of workshops on drumming (pipe band drumming, to be precise.)  I agreed to teach a bagpipe band's fledgling drum corps this year.  I had done a series of workshops, fairly intense 4 and 5 hour sessions, starting with "holding the drumstick" and ending with "playing as an ensemble."  They liked it so much that they asked me to repeat the lessons for their novice/newly joined drummers and pick up with more advanced material for last year's students.  Cool.

The day job has had several "wins" from a business view, a software view, a testing view and personally.  Things are far from perfect, but there looks to be an interesting time ahead. 

So, expect a flurry of blog posts as I try and work my way through the list.

What did I learn?  Hmmm - Jury is still out on that one.  Off the cuff, I'd say I should have learned to not take on more than I can handle.  What I may have learned instead is that sometimes the stuff we agree to do had better be fun, because we may not have a chance to do other stuff that is fun. 

Wednesday, August 24, 2011

Out of the Mouths of Babes or Testing Lessons from a Three Year Old

While recently in Seattle (spot the CAST reference!) my lady-wife and I had a little time for sight-seeing and visiting and what not.  A friend of our daughter's and her husband live in the area with their two children, aged 9 months and 3 years.  So, on touching base to say "hello" we get invited over for an evening - and jump at the chance. 

We brought over a pecan pie fresh from a Cajun place down at Pioneer Square, downtown (amazing food, by the way - and the pie was straight from the oven), a couple of bottles of wine and age appropriate presents for the two children.  While visiting with the daughter's friend and playing with the kids (waiting for her husband to come home) I found myself engaged in an informative mentoring session with the 3 year old - Aidan. 

Brilliant kid.  You can tell his parents are terribly bright and spend a ton of time with him.  

Now, most people who have children, or have ever had dealings with children, will know that there is a key word in every 2 and 3 year old's vocabulary:  "Why?"

Well, not Aidan.  He looked right at us and asked "What happens?"  Well, sometimes he said "What happened?" but he did so in the right context. 

For example, a balloon popped: "What happened?"  "Well, I think it bumped against a stick or a pricker in the grass and that popped the balloon."  (We were playing in the yard with a balloon.)  "What happens?"  "Well, sometimes if a balloon touches something sharp that can pop the balloon."  "Oh."

So then it was time for him to play with his hard-hat and be a builder.  "Can you build me a big building?"  "Yup." (he leaves then comes back)  "Is it done?"  "Yup"  "Great.  Can you build me a barn now?"  (he goes away and comes back.)  "Is it done?" "Yup."  "Great! Can you get some hay and straw and get a cow and a horse and some chickens for the barn?"  "What happens?"  "Well, then the animals can live in the nice barn you built."  "Oh.  What happens?"

About that time, Dad got home and Aidan went to go play with HIM until dinner was ready. 

The rest of the evening and on the drive back to the hotel, that stuck with me.  Not "Why" but the next best question a tester can ask:  "What happens?"

Sunday, August 14, 2011

CAST 2011 Emerging Topics and Wrap Up of Thoughts

I started out looking at my previous several posts and realized how many times in each of them I used the word "amazing."  I promise I will do my best to not let my still spinning head succumb to such a word in this post.  The thing is, I find it really hard to NOT use that word when I've been inundated with intellectually stimulating ideas. 

Emerging Topics

After opening up the opportunity for anyone attending CAST to submit an idea to speak on, we then allowed anyone who was interested to comment, rank or otherwise ask questions around the proposals.  Matt Heusser and I reviewed these comments, rankings, questions (and their answers) to pull together a program from the ideas submitted.  Many of the proposals were from people who had not spoken at a conference before.  Personally, I found that exciting.  Why?

We were opening up venues for people to speak to one of the more challenging conference audiences I have ever encountered.  People who think, and who may not agree with some of your points, are not only encouraged to speak up and ask questions (or challenge the speakers) but are expected to do so. 

When Ben Yaroch let us know that there was a strong likelihood that we'd be able to stream the ET sessions live, that got me even more excited (yeah, right, as if I could get more excited.)  Adam the Volunteer (I never did get his last name) was a big help getting things going Monday afternoon.  That left me free to make sure the presenters were ready and we had their slide decks (presuming they had some) available.  Thanks Adam!  I do appreciate it. 

When Monday rolled around and we kicked off right after lunch, then the fun began.  The ensuing afternoon was much what I expected - a variety of speakers on a broad range of topics, all packed into 15 minute slots with 5 minutes saved for questions.  Some of the speakers were a little un-polished.  We did not care - It was the crisp thoughts they had (not crisp Powerpoint skills) we were interested in. 

Personally, I liked how many speakers used no slide decks at all, instead they focused on the flip chart in the room, using markers to interact with the people in the room.  Coolness - no Death by Powerpoint here!  :)

What was the best?  Hoo boy.  How do I choose? 

Michael Larsen gave an interesting presentation on EDGE (a cool Boy Scout acronym) and how that can be applied to testing.  Anna Royzman gave an experience report on how she got a mixed community of people to work together and apply exploratory approaches to improve UX and overall testing. Lanette Creamer gave a very, very brave demonstration of testing on the fly, using tools everyone "knows" in new ways.  Neil Thompson and Felipe Knorr Kuhn both gave interesting talks (it's hard playing facilitator when the topics draw you in - not my most shining moment.)  Robert Berqvist gave an interesting comparison of the groove of music and the groove in testing - yeah, drummers love that kind of stuff.  Ben Yaroch spoke to a packed room on leadership ideas drawn from the military, and how they can be applied to testing.  Finally, the most challenging presentation for the day was Geordie Keitt's presentation on "Complexity Quandary, or Why Certified Testers Continue to be in Demand."  This seemed almost tailor-made to draw on ideas in Michael Bolton's keynote, and to serve as a bridge between James Bach and Doug Hoffman's debate on the idea of Schools of testing being divisive.  We gave him a double long session (45 minutes) and the discussion went over that.  I was too busy moderating to tweet - great stuff though.

Tuesday, Eric Jacobsen kicked things off by talking about combating Tester Fatigue (as I was still recovering from the flight and the excitement that comes with CAST, I thought it appropriate for me!)  Bill Matthews gave a good session on Myth Busting for Testers.  Frankly, I hated cutting both of them off when I did, as I thought it was good stuff, and I only wish they had time for more.  Just before lunch, I gave a short version of "Messy Integration Testing" and how things that seem to be unrelated probably were not and needed to be considered in testing.  That was well received, I thought.  

After lunch, Todd Mazierski gave a short overview of Sinatra.  This was followed by Geordie Keitt's All-Star Tester Revue (OK, I made that name up).  Geordie stood up and played guitar and sang songs around a testing theme (it helps when you write them!)  He then brought in a panel of Michael Bolton, Lanette Creamer and Dee Ann Pizzica, who did some interesting improv comedy around a testing theme.  This was capped off by Lanette singing a song, with Geordie backing her on guitar - and Geordie closing the session with another original composition.  What a great time.  Matt Heusser wrapped the ET track with a lesson in communicating with "Agilistas" drawn from his experience. 

We then turned the room over to Lightning talks - and I had the chance to go catch up with people. 

One of the people I kept running into during the conference was Adam Yuret.  No, not Adam the Volunteer mentioned before.  He and I have met cyberly for some time, banter on Twitter and various on-line forums.  All in all he's a good guy with ideas to consider. 

Keynotes 'N Stuff

I was looking forward to hearing Cem Kaner's keynote this year.  I missed him speaking last year as I was "otherwise engaged."  Unfortunately, he had to cancel and was not able to attend CAST, so the workshop he was scheduled to teach got shuffled, and Michael Bolton slid into the keynote spot where Cem was scheduled to speak.  Michael's keynote was astounding (avoiding the word "amazing," can ya tell?)  He covered things I have been trying to express for some time.  The minor issue encountered, and gamely dealt with, was that the projector simply did not work.  The result was Michael gave a very academic-like reading of his document, which was absolutely chock-full of ideas around the history of scientific thought and how it related to testing and the idea of context driven testing. 

James Bach gave a keynote that, in my mind, made a solid argument for challenging, time and time again, the processes that so many people advocate.  All in all, it was a call-to-arms to reject the set-piece examples and practices that are part of so many people's views of "best practices."

I was sitting with two different groups on Monday and Tuesday.  An amazing thing about CAST is that so many people are welcoming and willing to engage in conversation, no matter the topic or whether you are a "famous" person.  Based on comments around the tables, both keynotes were well received.

A couple of things stand out at this point in my rambling narration.  First, the hall was absolutely packed.  When the requisite question "How many are at CAST for the first time?" was asked, it seemed to me that half the people in the hall raised their hands.  It was an astounding sight.  The first time attendees I met all very readily engaged in the spirit of the conference and actively participated.  This bodes well for the future.

EdSIG - Education Special Interest Group

Tuesday night I participated in the discussions of the Education Special Interest Group.  Topics on the table included: getting more instructors for the BBST courses up and active; the upcoming next installment in the series, Test Design; ideas around why so many fewer students take Bug Advocacy than take Foundations; and branching out (reaching out?) to people who want to help but are not certain where to go to help.  So, there is a stack of issues, including creating a "what to expect in this course" video for Foundations - hopefully so that the amount of work does not overwhelm the student. 

There is more, but much (for example Michael Bolton's workshop on test framing) is worthy of its own blog post. 

I do want to thank the folks who organized the conference - I know James and Jon Bach were up to their eyebrows - but also Doug Hoffman, Ben Yaroch, Dawn Haynes (who is an all around trooper) - all the people who made the big ideas (live web streaming, for example) move from "idea" to "it's happening now."

Thursday, August 11, 2011

CAST 2011, General Observations

I'm writing this the evening of Wednesday, August 10.  This is the evening after the Workshop day of CAST 2011 in Seattle.  Overall, this has been an amazing experience.  The whole experience was highly rewarding in many ways. 

James and Jon Bach took an unconventional approach to putting together the program.  Speakers were chosen on reputation, not on submission topic.  Yeah, it was different.  At the same time, I had the opportunity to work closely with Matt Heusser on putting together the Emerging Topics track.  This was a cool idea, an experiment, and overall, it came off well.

Another experiment was the live webcasting of the keynotes, the ET sessions, lightning talks and "tester interviews."  The discussions were astounding - no experiment there; certainty was closer to it.  I personally appreciated the great conversations with Neil Thompson, Bill Matthews, Fiona Charles, Paul Holland and Dawn Haynes (who is a terrific person and hard worker who does not get nearly enough credit for making things just work).

Oh - I met more people from Sweden at a testing conference this year than I could have imagined!

OK, other people I met whom I have not mentioned - yeah, there were a stack, but these stood out... Let's see - Christin Wiedemann was in my class today, then - sitting behind me, and next to Michael Hunter, was Cathy McBride.  Oh!  And Alex Bantz was also there.

I had the pleasure of helping with the EdSIG meeting and looking for ways to get the people who were interested in helping in the SIG, and in getting involved in BBST, actually involved and active.  The thing is, this is also the central idea behind keeping any non-profit, volunteer organization going - finding the tasks that need to be done and matching them up with people who have the skills and interest to do them. 

My experience in Michael Bolton's Test Framing workshop really deserves its own post.  For now, suffice to say it was interesting.

I had intended to decompress, have a quiet dinner, then get some work done.  Instead, after finishing an adult beverage, as my "take out" dinner was about to come out, Selena Delesie, Lynn McKee and Nancy Kelln walked into the restaurant.  What could I do?  We sat down, enjoyed a meal together and had a fantastic conversation. 

Now, it is very late, I'm remarkably tired and have more thoughts running through my head from the last three days and looking forward to more general thought absorption, internalization and a little sight-seeing tomorrow before heading home.

Thank you Seattle and AST for an amazing experience. 

CAST 2011, Day 2, A Brief Summary

Again, I had intended to write this last night.  It is amazing to me how mentally and physically drained I am by the end of each day at conferences.  With so many smart people, it seems impossible to keep up.

Right, so, people.  I had some really nice hallway conversations with Elana Houser, who was in the BBST Foundations course with me.  We did not always agree with each other in the course; she is, however, a very good thinker.  Lynn McKee, Nancy Kelln, Selena Delesie and I had nice chats, and they gave great insights on discussion topics.  I also briefly met Karen Johnson - OK folks, she is smart and wise - those don't always come in the same package. 

Amazing talk(s) with Michael Hunter - yeah, the Braidy Tester guy.  He really is as good and inspirational as his blog posts make him seem.  Oh, now then, let's see - had some fantastic chats with Ajay Balamurugadas.  Ben Yaroch is crazy smart and a hard worker - really.  Michael Larsen really DOES have as much energy as his podcasts make it seem.  Let's see.  Also had some good visits with Justin Hunter, Paul Holland, Bill Matthews and Johan Jonasson - Phil McNealy is a good person to know as well. 

One of the highlights for me was seeing the Emerging Topics track come together and be a reality.  Some of the speakers had a bit of a rough go.  Many had never presented outside their own company before - WHAT a daunting task!  Yeah - Present a 20 minute idea in front of some of the best testers around.  YEAH!  Still, everyone made it through the experience, good information and ideas were shared - even if folks were a little nervous.

I had a chance to drop in on the tail end of the Open Season of the BBST Experience track.  Cool Q&A session, lots of energy.  The Lightning Talks, which I dropped in on after the BBST talk ended, were interesting - "quick hits" of ideas.  Fun.

I ended up having an interesting conversation with Felipe Knorr Kuhn, Gary Masnica, Phil McNealy and Lanette Creamer.  Job Titles, Job Roles, What to Do, How things work... highly enjoyable, mentally invigorating.  This set me up for a good session in the EdSIG - Education Special Interest Group. 

Michael Larsen, I, and some dozen other people talked via Skype with Rebecca Fiedler and Cem Kaner (who could not be at CAST.)  Good ideas, much meaty discussion - look for another blog post on that before too long. 

It was an amazing day. 

Oh, I did not get elected to the Board of Directors for AST.  Now, some folks tried to console me; I was inconsolable.  Well, technically, literally, there was nothing to console me about!  I believe that each of the five candidates was eminently qualified to serve on the board, and three were selected.  This is good. 

So, this morning, I found myself sitting at a table (starting this blog post, actually) when Michael Hunter sat down to chat and have a little breakfast.  Griffon Jones dropped his pack and went for a little breakfast, but got tied up.  As it was, Michael and I had a great visit before we headed off to Michael Bolton's workshop on Test Framing.  That, too, is another blog post.