Monday, August 30, 2010

Learning and Teaching and Leading

One thing I learned early on when teaching drumming students, particularly beginners, is that the person who learns the most is often the teacher.

It never seems to matter whether the lesson is an individual or group lesson, focused on one style or general drumming - the process of teaching beginners forces the instructor to re-examine the things he simply does without thinking, find interesting foibles or potential weaknesses, then correct or change them as needed for working with the student. 

The interesting thing is that this reflection sometimes leads to a profound understanding of what the student is learning and what the instructor is conveying.  I never really had that kind of experience when preparing for the odd lunch-and-learn or training session at the office - or when presenting such sessions. 

On Improvement...

This last couple of weeks something interesting happened.  I've been preparing a presentation on Test Process Improvement for TesTrek in October.  I wasn't scheduled to present or lead a workshop, but as a couple of presenters had to cancel, voilà!  I'm on the presenters list.  Then a couple of other things came to my attention. 

There have been several conversations, on email lists I participate in as well as forums, about the dreaded M word.  Yes - Metrics.

On top of this, I had a remarkably revealing email conversation with Markus Gartner - amazingly bright guy.  This came about because the questions I submitted for the "Ask the Tester" session arrived after the magic number of 10 had been reached.  However, they were forwarded to Markus, and that presented me with the opportunity to learn and be reminded of things I once knew and had forgotten (or channelled off into a safe place in my memory).

My question to Markus was centered on his take on "Test Process Improvement" in an Agile environment.  The bulk of his response was reasonably close to what I expected - in fact, reassuringly close to what I had prepared for the presentation, so my confidence in what I was saying increased dramatically.  (Yes, a little reassurance is sometimes a good thing, particularly when one is a very little fish hanging out with very big fish.) 

He had one idea that I did not have.  And it left me gob-smacked.  Tacked onto an already interesting sentence about the organization's management, Markus said "... or they don't trust testing anymore."

On Trust...

I was immediately thrown back many years to when Developers were called Programmers and I was working as a COBOL Programmer on a large IBM mainframe.  I had a Manager who did not trust his staff.  Not because they were inexperienced, but because he simply did not trust them.  To this day, I do not know why that was the case.  I can surmise why, but it has little to do with the point.  Suffice to say, it was an unhappy work environment. 

Markus made an interesting observation.  His point was that in Agile, the very purpose is to engender trust amongst all participants.  Additionally, when management is invited to observe the meetings, they can gain an understanding of what is being done by their staff, and as their understanding increases, so too should their level of trust. 

When a group or a team has lost the trust of its management, the task of regaining that trust is nigh-on insurmountable.  Likewise, if a manager or lead has lost the trust of the group they are to lead or manage, the results will almost certainly be dire.

On Process...

Thus, when the call comes down for "better metrics" or "process improvement" or any number of other topics, what is the underlying message?  What is it that someone is hoping to gain?  Do they know?  CAN they know?  Are they guessing? 

Much is debated around QUANTifiable and QUALifiable considerations, measurement and understanding.  I am not nearly bright enough to join that fray fully fledged. 

What I have seen, however, is that when Managers, Directors, VPs, EVPs, and big-bosses of all varieties are looking for something, nearly anything will suffice.  A depressing number of times, I have seen management groups flail around trying to work out what is wanted - then issue an edict announcing the new policy or practice or whatever it is.  These tend to roll out like clockwork, every three to six months. 

Each company where I have worked that followed that practice engendered a huge amount of cynicism, resentment and distrust.  The sad thing is that these rather stodgy companies - including some that were quite small and prided themselves on having no Dilbert-esque Pointy-Haired-Boss behaviors - were wasting an amazing opportunity.

The first step to fixing a "problem" is figuring out what the problem is.  If there is no understanding of why policies or procedures are changing and no feedback loop on the purposes behind the change, will the average rank-and-file worker stand up and say "What do you hope to change/improve/learn from this?"  At some companies - maybe.  But I have seen relatively few times where the combination of policy-du-jour and staff willing to stick their necks out and ask questions both exist in the same organization. 

On Leadership...

What I have learned, instead, is to look at all sources of information.  Explain what the problem or perceived problem is.  Ask for input - then consider it fairly.  To do so is not a sign of weakness - it is a sign of strength: that the leadership of the organization has enough trust in its workers to approach them with a problem and work together toward a solution.

This, in my mind, is the essence of building a team. 

Throwing a bunch of people together without a unifying factor and expecting great things is silly in the extreme.  In the military, "Basic Training" serves this purpose - laying the groundwork to trust your comrades and follow the direction of officers and non-commissioned officers.  In the end though, the object is teamwork:  learning to work together, using each person's strengths to offset others' weaknesses. 

Why is it that so many managers miss this rather elementary point?  For a team to "work" they must learn to work together.  If the Lead or Manager has not built the group into one capable of working together, like a team, what, other than professional pride, will get any results at all? 

Although I cannot prove this in a scientific manner, as it were, I suspect that it is the essence of the problem mentioned above.  The question I do not know the answer to, although I suspect it, is the question of leadership in this instance. 

Is it that they, the leaders, have no idea how to build a team?  Is it possible that the step of instructing the fledgling team and shaping it into the needed form was too challenging?  Could it be that in the process of doing so, their own closely held beliefs, habits and foibles were more dear than the building of a successful team?

If this basic lack is present, does it contribute to the selection of what is easy over what is right?

These are the ideas that have been floating through my mind while preparing the presentation and workshop lessons for the session at TesTrek.  If the master knows that he is but a beginner in the craft, what of those who consider themselves experts in all aspects of our trade?

Can this be at the root of the behaviours I've seen first hand and read about?  Are they feeling so insecure in their own abilities that they mistrust their own staff, the team they are charged with leading?  Is it to make up for this lack that they flounder and grasp for tips or magic revelations that will show them the "path?"  Is that why there is a continuing and perpetual drive for Metrics and Process Improvement?

Sunday, August 29, 2010

Music, or Testerman's Ramble

At one point in my life I played in a band that performed a variety of Irish traditional and folk music.  We also played a fair amount of Scottish traditional and folk; however, it seems that if you play or sing a single Irish song, you are labelled "Irish" - and you'll be crazy-busy in March and pretty slow the rest of the year.  Unless you work really hard and play reasonably well.

So a side-effect of playing in a band that performs this stuff is that, when you get good enough for people to pay you money to go to their towns, cities, festivals, whatever, you will run into other folks who play the same type of music.  When schedules permit, this often devolves into a session / seisiún / wild-music-playing party.  There are certain protocols that most folks follow at these events - and the fantastic thing is that usually the level of play is quite good.  Tempos are snappy, so reels drive forward, hornpipes can be lilty (and tend to run around Warp 9), and jigs are of a nature where feet rarely touch the ground. 

Now, these uber-sessions are not so different than more traditional ones held in houses or coffee-shops or bars or clubs.  The big difference is the recognition that there are no light-weight players and everyone has mastered their craft.  This is not always the case at other sessions. 

I have been out of performing trad/folk music for several years now, and in the last year began attending some of the local sessions, just to get my feet wet.  I was a bit rusty on bodhran, the Irish hand frame drum, which I had played for 20 years on stage and in sessions.  My limited ability on penny whistle had nigh-on vanished - I remembered tunes and could call phrases from my memory to my finger tips, but I'm effectively starting over.  With a crazy work and home schedule it has been hard to find time to practice, let alone become "street legal" on whistle.

So, I show up at the Sunday night sessions and play a couple of tunes on whistle when they come up.  I will also play the bodhran a bit, depending on the number of people there (it does not take many drums to become "too many" for the melody instruments - whistles, mandolins, fiddles, flutes, dulcimers and the like.) 

This last Sunday there were a fair number of players.  There were 8 or 9 "melody" players, a couple of guitars, a tenor-banjo player - who played melody when he knew the tune and vamped when he did not - and me on drum (with the occasional contribution of bones).  Some of the players are quite experienced and I have seen them around for many years.  Some are between beginner and novice.  Some are "in between" levels of experience. 

One tune in particular would have made me stop the band, if it were a "band" that was playing, and have them start again.  That typically isn't done in sessions - so I did the "drummer equivalent" and simply stopped playing.  One of the mandolin players, who knew me and has also been around the block, gave a little smile and stopped as well.  We were treated to a rare sight: 6 people who were absolutely certain what the "correct" tempo was for the tune being played - and none of them would give an inch, or a click on the metronome.  The guitar players seemed to play along with whichever melody instrument was closest to them, and generally the best description was "trainwreck."

That reminded me of a project I had worked on some time ago.  I was not on the project originally, but was brought in as part of a desperation move to fix it.  As with the tune on Sunday, each of the participants knew what the right thing to do was.  The problem was that none of them agreed on what that thing was.  "Blood on the Green" was an apt summation of that effort.  The programmers were berated for not following instructions - but how do you follow instructions when there are multiple, conflicting sets of instructions? 

Because of the "political nature" of the project, no managers or directors were willing to step up and take on some form of leadership role for fear that there would be repercussions for doing so.  The PM, BA and Dev Lead floundered without some form of direction from their respective management teams.  Information was contradictory at best. 

In the end, a Director put his foot down, asserted control and forced the issue.  Me being added to the project was part of forcing the issue.  Until that point, the uncertainty of the leadership was sapping the ability of the project group to operate as an effective team.  Like the music session last week, no one had a clear picture as to what was "right" and where the center of gravity was. 

People can urge "Best Practices," "Standards," "Process" and "Metrics" all they want.  In some contexts, that may be the right thing.  However, without a clear understanding of the intent of the effort, nothing will save the project.  Ulysses S. Grant, that prescient Software Oracle (well, American General turned President), warned that indecision was worse than a wrong decision.  Wrong decisions can be countered by "right" decisions, but no decision from leadership leaves your group floundering, looking for a center. 

Tuesday, August 10, 2010

Of Walkways and Fountains

A Story

Once upon a time there was a business person who knew exactly what she wanted. So, she explained to an analyst precisely what it was that she wanted and all of the points that she wanted addressed and precisely how she wanted it addressed.  The analyst said he understood exactly what she wanted. 

So, the analyst went and assembled all the requirements and looked at everything that was spelled out. He gathered everything together. 

He found that he had some very usable information and some that was less than usable. So, he dug and dug and found the perfect item that would fit the needs the user described and make it pleasing to her.   

Then he assembled all the components and tested the product and found that it matched exactly what the user had asked for - and everything worked perfectly. The finished product was, indeed, a thing of beauty.

So, he called the user over to see the wonderful product he had made. She looked at it and said, "What is this?"

"It's what you asked for! It has everything you wanted!"

"No, this is..."

Have you ever heard of a project that matched the requirements precisely for what needed to be included in the "finished product," only to find there was a complete misunderstanding about what the real purpose was?

Monday, August 9, 2010

Cast Curtain Call - Part 2 - Conversations

I was very fortunate this last week to have had extended conversations with several people, some "movers and shakers" and some "well respected testers" and some "regular folks." Rather than sort out which is which, I'm going to focus on some of the great conversations I had, starting Sunday evening, through Thursday.

The hard part is picking out the bestest ones. So, I'm going to summarize some and the mental meanderings that resulted.

Monday, chatting with Griffin Jones, he asked about the mission and charter for the group I work with. We had been talking about techniques and some of the on-line forum conversations around exploratory/ad-hoc/fully-scripted testing in light of Michael Bolton's blog entry on Testers: Get Out of the QA Business. He asked about this after what I thought was a tangent around the question of "what works best for what kind of environment?"

His simple question got me to wondering: other than the slogan on the company's internal wiki about the QA/Testing group, what is it that we are about? For some time, we have been working toward getting more people involved in the idea of tangible requirements, of QA helping define requirements and acting as a bridge in the design process. But that raised the question - What is our mission?

I wonder how many testing groups (or whatever each group calls itself) have a "slogan" but no "mission" or "purpose" statement that can be pointed to, where everyone knows about it. If you don't know about it, is it reasonable to expect people to act towards it - it's a goal, right? How do you achieve a goal if you don't know what it is? (I feel another blog post coming on, but not right now!)

I had several brilliant little chats with Scott Barber. It helps when you're sitting next to each other at the Registration table. We talked about a bunch of stuff - and for those who have read his articles or his postings in various online forums, he really is as smart as he seems - Holy Cow!

We got onto the "mission" of testing groups and "doing things right" vs "doing things well enough." What most theory-centric folks sometimes forget is that there is a cost to "doing things 'right.'" If the product will ship 2 weeks late because you want to run a 4 week duration system load test, costing approximately $1M, what will the company gain? What are the risks? If you're extremely likely to see significant problems within the first 8 to 12 hours and the likelihood decreases over time, what is that extra two or even three weeks going to get you - other than a delay to delivery and a dissatisfied customer? That, in itself, is one reason why testers should inform and advise but not make the final go/no-go decision.
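
To make the shape of that trade-off concrete, here is a tiny sketch with entirely made-up numbers - the half-life-style discovery curve, the 10-hour figure, and the time points are illustrative assumptions, not data from any real project - showing why extra weeks of load testing buy very little once the early hours are behind you.

def still_hiding(hours, half_life_hours=10):
    """Assumed fraction of significant load problems not yet found after
    'hours' of testing, if discovery tails off with a ~10-hour half-life.
    The half-life itself is a made-up illustrative number."""
    return 0.5 ** (hours / half_life_hours)

# Half a day of load testing vs. one week vs. the full four-week run.
for label, hours in [("12 hours", 12), ("1 week", 7 * 24), ("4 weeks", 28 * 24)]:
    print(f"after {label:>8}: ~{still_hiding(hours):.2e} of the likely problems still undiscovered")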

Yeah - there's another blog post on that in detail.

Other people I met included Jeff Fry - where DOES he get all that energy? Then Selena Delesie was holding court on lightning talks in the lobby. WHOA! Crazy-smart and as nice as the day is long. Selena gave two really good presentations - unfortunately, while I read the abstract and supporting paper, there were not enough of me to get to all the presentations that I wanted to get to. I think that's a sign of a fantastic conference - too many good simultaneous presentations.

Other folks I met included Michael Hunter, the Braidy Tester - What a guy, although he's now braidless. Paul Kam from DornerWorks is another really smart guy. DornerWorks was one of the sponsors of the conference. They did a lot to make this happen.

Tuesday night the "Rebel Alliance / CASTAway Social" was a hoot. Tester games and chicken-wings and varied and sundry edibles and drinkables - Thanks to Matt Heusser for making that happen. He's another one who is just crazy-smart and really friendly. If you have not seen his TWIST podcasts, check them out.

After the social, a bunch of folks went to dinner and had a fantastic time. If I recall correctly, there were 15 or 16 of us. I scored a major triumph by having Michael Bolton sit across from me at the end of the table. What an amazing time. Melissa Bugai was sitting with us as we discussed the likely causes of why the lights on the deck of the restaurant kept going out. Yes, we tested the theory when Melissa unplugged the rope light running around the railing. They all stayed on after that. WHOO-HOO!

The conversation, as any conversation with Michael, took many twists and turns. We talked on language and literacy and music and education and mental discipline and CBC radio shows and how each touched on testing. What a mind-bendingly enjoyable night.

Wednesday I had the great pleasure of dining with Lynn McKee and Nancy Kelln - and my boss. Best part is, the boss picked up the tab! WHEEEEEEEEEEEEE! Another night of fantastic conversation on testing and wine and great food. Did I mention we talked about testing?

There were so many other great conversations - How can I give a recap of all of them? As it is, there is much to think on.

Sunday, August 8, 2010

Testing and Integration and Systems, Oh, My! (Where the programmer meets the wizard)

I was asked a question by a tester in the office the other day that got me thinking on this topic. Her question was "Will we need to do any integration or system integration testing?" Mind you, with some products, that is a perfectly reasonable request. In this shop, given what we do, we're pretty much doing some aspects of that any time we're running a test. Many times, we're testing within the boundary of our charter and exercising only so far. To continue to the next step requires live connections with external companies. We have connections to emulators, but I don't consider that to be a "real" situation - simply checking for handshakes and responses.

So, I thought a bit about the possibility that I have a different understanding of "System Integration Testing." That led me to that all-knowing repository of knowledge, Wikipedia, where I found this:
System Integration Testing (SIT) is a testing process that exercises a software system's coexistence with others. System integration testing takes multiple integrated systems that have passed system testing as input and tests their required interactions. Following this process, the deliverable systems are passed on to acceptance testing.

Hmmmm. Well, I don't know if I'd buy that in total for our situation, or for most situations where I've worked.

So, I said, HEY! She has Foundation Level Certification from ISTQB.

ISTQB Glossary says:
System Integration Testing: Testing the integration of systems and packages; testing interfaces to external organizations.

That struck me a bit as "A painter is one who paints," so I went looking at the individual terms:

Test: A set of one or more test cases. (taken from IEEE 829)

Testing: The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

Integration Testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

System: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]

System Testing: The process of testing an integrated system to verify that it meets specified requirements. [from W. Hetzel (1988), The complete guide to software testing – 2nd edition, QED Information Sciences, ISBN 0-89435-242-3.]

This made me think of the very first time I encountered "integration testing" when I was testing something I had not programmed.

Some folks remember the days when programmers talked to business users/clients and met with them to discuss what they needed, then worked on the design for the solution, then talked with the client again about the design and verified what we had worked on with them. We worked up sample reports and screen shots and went over them again. Talk about time-consuming, no?

So here I was, a reasonably senior programmer being pulled into a project that was over a year late and had to be delivered - positively had to be delivered - in 2 months. The first task I was given was to "simply validate" a series of batch jobs (remember JCL on IBM mainframes?) and confirm that all the components together worked correctly and finished within the required timeframe, nightly.

No problem, I thought. I gathered what I needed to learn the systems it touched, met with the more senior programmers who wrote the programs and the JCL and what not, and made sure I knew the exact sequences and limitations that they knew of. I set up a test run on a weekend. The idea was to take over the entire test system - a clone of the production system - fire the process up, monitor the logs while it ran, then check the summary reports. If they all looked correct, run some SQL scripts to validate that the DB was correct.
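
As an aside, those validation scripts were of the usual reconciliation variety. The sketch below is only an illustration of that kind of check - the table name, the status column, and the use of Python with its built-in sqlite3 module are stand-ins chosen to keep the example self-contained, not the actual mainframe setup:

import sqlite3

def reconcile(conn, expected_rows):
    """Compare what the batch run reported against what actually landed in the DB.

    'transactions' and its 'status' column are hypothetical names; a real check
    would target whatever tables the nightly process was supposed to populate.
    """
    loaded = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]
    rejects = conn.execute(
        "SELECT COUNT(*) FROM transactions WHERE status = 'REJECT'").fetchone()[0]

    problems = []
    if loaded != expected_rows:
        problems.append(f"row count mismatch: expected {expected_rows}, found {loaded}")
    if rejects:
        problems.append(f"{rejects} rejected records need investigation")
    return problems

if __name__ == "__main__":
    # Tiny in-memory stand-in for the cloned test system's database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                     [(1, "OK"), (2, "OK"), (3, "REJECT")])
    for problem in reconcile(conn, expected_rows=3):
        print("PROBLEM:", problem)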

Only problem was, after 18 hours of running, draining the system of all available resources, the first step had not completed. That constitutes a problem in any book.

I worked all the next day on identifying the cause, literally. When the team walked into the main conference room Monday morning, I had the entire data flow for the system on the wall - all the way around the room. Bottlenecks were circled in red, pages from DB schemas were taped to the wall, and a first draft of a solution, scribbled on data flow diagrams, was on the table at my "usual" seat.

After rebuilding the various PROCs that were needed (ummm, execution sequences, for those who don't remember when green-bar ruled the computer world), the second version was tested the next weekend. This one took only 4 hours to run to completion. Better, but well outside the target window of 2 to 3 hours.

Next idea? Pull in a couple of other mainframe jockeys for ideas, grab a senior DBA and say "I need a process that will run in a variable number of initiators, up to at least 5 and ideally up to 7." They said, "Can we do that? Not sure, but it might be fun to try." And we did. It worked. It ran in 45 minutes.
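
The PROCs and initiators are long gone from most shops, but the underlying idea translates: split one long serial batch run into independent slices and let a configurable number of workers process them in parallel. Here is a minimal sketch of that idea in Python - the worker count, the process_partition stand-in, and the record-range partitioning scheme are all hypothetical, not the actual jobs from that project:

from concurrent.futures import ProcessPoolExecutor
import time

def process_partition(partition):
    """Hypothetical stand-in for one 'initiator' worth of batch work.

    In the real job this would be a PROC chewing on its slice of the nightly
    input; here it just pretends to work on a range of record IDs.
    """
    start, end = partition
    time.sleep(0.1)  # simulate the heavy lifting
    return start, end, end - start  # records "processed"

def partition_workload(total_records, partitions):
    """Split the nightly workload into roughly equal, independent slices."""
    size = total_records // partitions
    return [(i * size, total_records if i == partitions - 1 else (i + 1) * size)
            for i in range(partitions)]

if __name__ == "__main__":
    WORKERS = 5            # "at least 5 and ideally up to 7" initiators
    TOTAL_RECORDS = 1_000_000

    slices = partition_workload(TOTAL_RECORDS, WORKERS)
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        for start, end, count in pool.map(process_partition, slices):
            print(f"partition {start:>7}-{end:>7}: {count} records processed")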

What I learned then was that nothing "worked" until you had proven that all the parts worked in concert with all the other parts or components or systems it needed to work with, in the timeframe it needed to work.

I also learned something else that day. Testing rocks. From that point, I studied all I could, even though I was a "programmer." After changing jobs, I found myself in a position to branch out beyond what was then my "career" and learn various flavours of Unix and languages that did not exist when I went to college. Then, there was a reorganization at the company I worked for.

Part of it was rolling out a previously non-existent group. From scratch. I took the chance and have not looked back. Without running those integration tests, I am not sure I would have chosen this path.

CAST Curtain Call (Well, Review) - Part 1

I'm in the process (still) of recovering from an incredible experience at CAST 2010. As good fortune would have it, this year CAST was held 15 minutes from my house and about 50 minutes from my office.

CAST - the Conference for the Association of Software Testing - is an incredible experience. I had been told flat out by several people that I needed to go. As luck, and project status and work schedule and finances, would have it, at the last minute, I could.

It was very strange. I got an email from friend/colleague/fellow-tester Matt Heusser asking if I was available to help out. Now, I had previously told him that I might be able to help with some of the running-around stuff, but probably would not be able to attend because of the state of the project. It sounded like Matt needed a bit of help. I checked with the project team on the state of things, checked with the boss, and determined that since it was so close, I could "work from conference" (as opposed to work from home), thus learning as much as possible, keeping the project rolling and helping folks with their conference as much as possible.

Long story even longer, I'm in as a volunteer, helping to lug stuff and helping to make and post signs at appropriate times and generally lending a hand at the registration table and being cheerful.

Sunday, before the conference officially began and while the AST Board was meeting, I went to the conference center (Prince Conference Center at Calvin College - lovely facilities) to pick up some boxes of books that were delivered Friday.

While there, I met Dorothy Graham, a fellow native of Grand Rapids, who was looking for someone who had promised to meet her there. We introduced ourselves and I made my first uncontrolled reaction of the week: "Oh yes! I have your book!" She laughed graciously. (If you have not read her book, I suggest you do so.)

I then ran into Fiona Charles, who introduced me to Griffin Jones and a legion of other folks whose writings I had read for some time but had never met in person, including Cem Kaner.

Fiona, Griffin and I went to dinner, with my lady-wife joining us. We talked about anything and everything. A couple of bottles of a very drinkable red wine, plates mounded with pasta and a lovely terraza made for an evening that was the perfect introduction to the week.

Monday I was at the office most of the day, but swung by the conference in time to catch up with some people and meet Scott Barber and another flock of people, leaving me in awe to be walking among the mighty of our craft.

Tuesday morning, I was there bright and early, laptop, headphones and powercord in hand. Jumped in with posting sign updates and then helping out with participants arriving, meeting people and sipping coffee. About the time the keynote speaker (Tim Lister) began, I was at the table dealing with work emails and helping the odd person coming in late.

Note - if you ever find yourself in a position to help at a conference, DO IT. You get to meet an amazing number of people and usually it gets you in the conference at no (financial) charge.

One poor lady came in just at lunch time - her flight from India had been delayed, getting her in Tuesday morning instead of Monday evening. She registered with the conference, checked in for her room at the registration desk and came back to ask if the conference required formal attire. I assured her it did not, and made the observation that the fellow who came in a few minutes after her was dressed very casually and he (Michael Bolton) was speaking. She was relieved, went to her room, then came down to get lunch as I went to a conference call for work.

I was able to attend a couple of sessions Tuesday. One - Nancy Kelln's presentation on "Cutting the Mustard - Lessons Learned in Striving to be a Superstar Tester" was extremely good. Nancy is a bright, articulate, up-and-comer who stands out among a flock of other bright, articulate, up-and-comers. Solid questions were raised and addressed both by Nancy and by other participants.

If you have not been to CAST, that last bit, "by other participants," is one of the things that sets CAST apart from other conferences. In a 60 minute time slot, 20 minutes are reserved for what others would call "Questions" but at CAST is "Open Season." Colour-coded cards are distributed to each participant - Green is a "new thread" on the presentation, Yellow or light green is a comment on the current thread or previous comments - there are others, but those two are the most commonly used. A facilitator keeps the sessions in order and calls on people in turn to speak. It truly is "open season" and anyone, not just the speaker, who has something to say had better be able to defend what they say.

Both of the presentations I was able to attend on Wednesday were extremely good. Karen Johnson gave a brilliant presentation (Reporting Skills and Software Testing) on approaching testing, and meeting with business experts, the same way a "newspaper reporter" might. This went beyond the fairly obvious "communication models" one might expect. She touched on several ideas that I know I, at least, had not considered.

Karen hit one idea around fact and opinion that struck me as a brilliant observation. "When do you cross the line between fact and opinion? Emotion." That is probably worth a blog entry in itself. As it is, suffice to say that when one is attempting to sift through what is "fact" and what is "believed to be fact," you can do a reality check with yourself, or with the person you are working with, by checking the level of emotion. "I found something that did not match what I expected to see - is this right?" "Of COURSE it is! I checked it myself - it's RIGHT!" One's facts may be challenged, but expect a strong response if one's emotions or beliefs are challenged. Most importantly, when there is a strong response, recognize it as a reaction to the opinion being challenged and don't take it personally.

Finally, I rounded out my day with a presentation by Lynn McKee. Lynn, another bright, up-and-comer, gave a solid presentation on assessing your value as a tester. That she sparked a lively debate during open season is, in my mind, an indicator that she touched on something that many testers are looking for - how does one define value when value itself is subjective?