Saturday, December 28, 2013

Change of Years: 2013-2014

Looking back at my previous year-ending posts, and those that looked forward to the coming year, is something I've come to enjoy.  Partly, I get to see how close my expectations came to what actually happened.  At times I was very close - other times, not at all. 

And so I launch into another consideration while sitting in my house on a quiet Saturday morning in December. 

Overall, this has been a good year for me.  I've grown and learned and developed in ways I did not expect to.  I have strengthened bonds of friendship, loosened some others and discovered much about myself and those around me.

I participated in several conferences this year.  I find conferences to be enlightening on multiple levels.  I know many people go to learn new things.  Some people go to enhance their reputation.  Others go simply because their company is paying for it.  For me, yes, even when I'm presenting, it is an opportunity to meet people I do not know, or have never met in person.  I try and keep an open mind, although sometimes my reactions to what appears to be rubbish get the better of me.  I try hard to not attend sessions by people I know, unless they are presenting a topic new to me - I really do try and avoid epistemological bubbles.

The contract-consulting thing is going well for me.  I stepped out in 2012 away from the perceived security of being an employee of some company and became a privateer.  I'll do work for your company, but only on terms I'm OK with.  If something is stupid I reserve the right to say "This is stupid."  Mind you, I did that before - but the sense of 'they might fire me' is gone.  I realized people can pay you for doing good work and speaking factual truth to them.

Sometimes doing so bluntly is called for.

Oh, you don't need to be a contractor or a consultant to do those things.  Realizing you are responsible for your career and your growth as a craftsman is the first step.  Speaking truth is part of that.  Removing the wall of fear about "losing your job" is a huge one.

Once that fear is gone - you are free.

So, yes.  I participated in regular meetups and some conferences and a few workshops.

Conferences & Workshops

These stand out in my mind:

STPCon in San Diego was fun.  I got an excited email from the organizers about the huge number of people who were in the room and the massive number of "5" ratings my presentation on leadership got.  Apparently more people liked it than not. 

CAST in Madison, WI was a lot of work. I found myself really busy - more busy than I expected to be. I learned much and enjoyed that conference greatly.   

Agile Testing Days in Potsdam, Germany - my second outing there - was also much work and much fun.  My workshop drew generally positive comments in the twittersphere and blogosphere - and the fact that the number of participants increased after the halfway break may speak to something - but I have a feeling none of the participants filled out the official "rate this session" web page.  The Halloween costume contest was much fun.  I took part, along with Matt Heusser, Huib Schoots and my dear lady-wife Connie.

WHOSE - the workshop on self-education in software testing - in Cleveland - Oh, my.  I have much work to do from that still.  This was an AST function/event to derive a list of skills needed for software testers.  I drove down on a Wednesday evening after work, had a good night's sleep - then did not get much more sleep until Saturday night after driving home.  Mentally exhausted does not begin to describe the state I was in.  I need to blog on that soon.  It was good - and a good deal of work was done.

Personal Learning

Loads of people have helped me learn this year.  Some of these were engaged actively in that learning, others in conversations (where they thought they were the ones learning) and others through their writing.  Thanks to these in particular - Matt Heusser, Robert Sabourin, Michael Bolton, Chris George, Dan Ashby, Mike McIntosh, Ben Yaroch, Ben Simo, James Bach.

There have been others, of course.  Many people contributed.  Some greatly and positively - some have shown me how not to be or act.  (I did not name any of those folks.)

I have two major areas of interest I am working on now.  One is an ongoing quest for "What information around testing is of value to the business?"  The other is one I've been dealing with in fits and starts - and for the last two or three months have been looking into more deeply - "What skills does an organization need in their software testers?" 

These are related questions.  They are tied into work I have been doing at my client company of late.  They are also things I am wrestling with in my own mind.  I expect them to occupy a fair share of my study and effort into the coming year.

The Future

Conferences - People ask me what conferences I will be attending and participating in this year.  I don't know.  The number I am considering submitting proposals to is fairly small.  My calendar is messy - I have projects at my client that need help - that is why I am there, after all.  When this contract is up, then perhaps this will change.  Sometimes, the idea of hopping on jets and flying hither and yon seems cool.  I've spent enough time waiting in airports for that idea to have lost some of its luster.  I don't feel the need to travel around giving talks about some aspect of software testing.  I'd rather be doing the testing and talking a little about it.

Writing - My writing has fallen way off this last year.  I want to get back to that some more this coming year.  I have a bunch of projects that are in the "Outlined" stage but have not had the effort given to them to actually develop them into something usable.  That needs to change. 

Work - My current client is "reviewing contracts" for the future.  The projects I am on are slated to run through much of the summer.  It's interesting, but like everything else in Corporate-land, nothing is certain.  Folks, this is normal.  Every company does this regularly.  Sometimes they are dealing with contractors/consultants - sometimes they are dealing with employees.  "Job Security" is a myth for most folks doing software, or any form of Information work.

Meetups - The GR Testers are going as always.  We get together monthly and discuss topics of interest to the group.  Sometimes there are presentations; other times it is organized chaos as we work through ideas.  Other things in the works - when I can, I get to a (fairly new) Code & Coffee meetup that happens in the morning before heading in to the office.  I find it an excellent way to start the day.  Others?  The Mid-Michigan Testers Meet Down gets together in the Lansing, MI area, sporadically.  I'd like to attend more than the one time I managed this year. 

All in all - I'm looking forward to 2014.  Not the wide-eyed wonder some folks have, or think they should have.  More of "I bet something interesting will happen." 

I'll leave notes along the way so folks can come along for the fun of it, if you want to. 

Cheers -

Happy New Year!

Sunday, December 22, 2013

Controlling Management or How the Grinch Stole Agile

We begin with a Poem:

No cute rhymes;
No clever gimmicks.
Domineering Managers
Really mess with things.

OK, so it doesn't rhyme.  It doesn't have any fun or funny made up words and generally sounds pretty negative.  It is.

I cannot count the number of times I've been working with a company where an experimental project is launched using some form of Agile development, and the development managers say they'll support the experiment - and then proceed to take shots at it at every opportunity.

The common theme?  "How do you control that process?  It can't be managed."

Somehow, it amazes me when I encounter that mindset.

Consider: an organization realizes that the size and nature of a project will not work well with its conventional software development approach.  It publicly states that, because of the nature of the project and the lack of surety in requirements defined in advance, a different approach is needed.  It recognizes that people working on the project will need to dedicate the majority of their time to that project and that project alone - meaning they can be available for problems/support activities, but not other projects.

The project room is set up with whiteboards, an open environment and a relaxed schedule where everyone participating agrees on "regular" hours - meaning the hours when they will be there and available - recognizing that some folks like starting earlier in the day and some folks like starting later.  The daily "catch-ups" or "stand-ups" or whatever you choose to call them are at a time everyone agrees makes sense, not one mandated by someone.  The room is situated so that the people who will actually use the software can stop in, ask questions, check out what is going on or maybe just have a cup of coffee.  (I really push for coffee/treats always being present even if I'm the one bringing stuff in.)

People get together and for four or five hours a day work on stuff.  Together.  Working.  Talking.  Comparing notes.

The first sprint is a little shaky - I've seen that happen most of the time.  People are a bit uncertain about what is expected of them and how things will work, so the first sprint is almost always a bit shaky.  (Where I've seen that as a given is where "this Agile stuff" is new or an experiment at a more traditional organization.)  The first sprint wraps - those participating for the first time learn, and the second goes much better.  Stuff gets delivered and demonstrated and stuff works - OR - you've made progress and you know what it takes to finish it off, along with some other tasks.

About the third sprint, the management grumbling begins.

"What are they doing over there?  Is anything getting done or are they just screwing around?  Why don't we hear anything?  Why aren't they submitting the project documents like they should?"  on and on ad nauseum.

So invitations are made to sit in on meetings and reviews, or to simply come over and check it out - or, at LEAST, to actually attend the progress report sessions we have.  (The name varies by organization - essentially, the summary of what was finished at the end of each sprint.)

Yo! Managers!

If you are invited to attend a meeting where "your people" participating in an "Agile" project (by some definition of Agile) are talking about and presenting what they have finished - then GO.  ATTEND.  PARTICIPATE.  This takes the place of the documents where you skim the "Executive Summary."

Stop for a minute.  Really.  Consider - We have an idea what the finished product is supposed to do.  We have an idea what the stuff we agree to do every sprint looks like.  We (developers, testers, business experts, customers) work together to make sure each piece does what we believe it is supposed to do.

Do the details of what we did matter?  What about "This is the work we have to do to finish this." instead?  If we focus on where we are going, and move forward in measurable chunks, does that not answer the question of "What are you doing?"


The Questions Begin...

What about the stage gate meetings?
Answer - We have them every morning.  You are welcome to attend.

What about the SDLC documents?  The design and architecture plans?  The requirements documents?  The estimates?  The test plans?  WHAT ABOUT THAT IMPORTANT STUFF?
Answer - Those are on the wall over there.  The stuff we have not started on, or that is slated for future sprints, is on that other wall.  We are tracking all of that in real time.  And we talk about them every day.

But how can you measure this stuff?  Your tasks show up in the time reporting system and are closed every two weeks.  How can that be?  You can't plan anything that way!  Tasks show up on Monday - they aren't even entered before then - and time gets charged against them and then they are closed in two weeks.  That's crazy!   
Answer - Actually, since we're working on pieces as we can, based on system availability, environment readiness and our mutual commitment to deliver top quality software each sprint, what you see makes sense.  The participants figure out what needs to be done, how we can do it and what it will take to make it happen.  Then we make it happen.  Can progress be observed and measured this way?  Of course - as we complete tasks and they move from "Planned" to "In Progress" to "Done" we have a definite track of where the project is.
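That "definite track" can be reduced to something as simple as counting cards by state.  A minimal sketch of the idea - the tasks and states here are made up for illustration, not from any real board:

```python
from collections import Counter

# Illustrative task states, matching the board described above.
PLANNED, IN_PROGRESS, DONE = "Planned", "In Progress", "Done"

# A made-up sprint board: (task, current state).
board = [
    ("Build login form", DONE),
    ("Wire up order service", IN_PROGRESS),
    ("Test checkout flow", IN_PROGRESS),
    ("Write release notes", PLANNED),
]

# Progress is just the distribution of tasks across states.
counts = Counter(state for _, state in board)
print(f"{counts[DONE]}/{len(board)} done, "
      f"{counts[IN_PROGRESS]} in progress, {counts[PLANNED]} planned")
# -> 1/4 done, 2 in progress, 1 planned
```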

But you cannot really measure stuff unless you can show what you've DONE!  This stuff you're telling me makes no sense.  How am I supposed to know what my people are doing?  If I'm not watching what they are doing they will waste time and not do what I want them to do!
Answer - Is that the crux of the issue? Are you in such need of directing every piece of the work done by the staff you hired - because they were "the best" - that you must treat them as if they were novices?

But they keep making mistakes!  I can't trust them to do the right thing! 
Response - Ah.  So that IS the crux of the issue.  The staff you hired because of their expertise and experience do things differently than you would.  So you intervene, and direct their actions.  You countermand their decisions and direct the architecture and the design you hired them to be able to do.  And there are still problems?

You don't understand!  I want to trust them but I can't.  They have proven that time and again!
Response - Ah.  So you hired the best, they don't do what you want, so you direct what they are to do - and they do what you tell them, and there are problems with the result.  Are the problems the result of the staff's efforts?

You don't understand!  Why do I waste my time with your questions? 
Answer - You asked me for suggestions and comments on the Agile practices of your organization.  Accepting or rejecting them is entirely your prerogative.

And so...

No.  There is no happy ending.

There is no singing around the spot where the Who-ville Christmas Tree stood.  There is no redemption of the Grinch in this story.  The staff continue in their drudgery and wonder and worry about the future.  The Manager(s) continue to mandate the minutiae of the work their staff does instead of allowing them to solve problems presented. 

Tuesday, December 17, 2013

On Test Cases and Conversing with Unicorns

The other day, I was sitting quietly contemplating some measurement functions people were asking about.  Whilst sipping a nice coffee in a small coffee shop, I heard a voice beside me - someone clearing their throat and asking if they could join me.

"Are you Pete?  May I join you?"

Now normally, I'm not easily taken aback.  This time, I was.  It was a unicorn speaking with me.  Apparently he - I think it was a he - was waiting for a friendly griffin who did a mix of Java work for his day gig but was fluent in other languages.  Alas, the griffin was late.  You may not know it, but griffins are notorious for unpunctuality. 

We got to talking about software and software development and software testing.  The unicorn asked me what was on my mind.  This struck me as odd.  I suspect he was simply being polite.  Unicorns can read the minds of non-magical humans, you see.

I explained that, as at many companies, I was trying to help people understand something I thought was pretty fundamental.  The issue was one that a fair number of people seem to be wrestling with these days.

People are being asked to count things.  Tests.  Bugs.  Requirements.  Effort.  Time.  Whatever.

And the unicorn looked at me and asked "Why?"

It seems that people are looking to estimate work and measure effectiveness.  Their managers are trying to find ways to measure progress and estimate the amount of work remaining.

The unicorn started laughing - no, really.  He did.  Have you ever heard a unicorn laugh?  Yeah.  It's kind of interesting.

He looked at me and said "They've always wanted to know that stuff.  It seems things haven't progressed very far.  In the old days, we looked at the work and worked together to make really good software.  It would be ready as soon as it could and we could tell managers when we got close to it being ready.  Now, we expect people to be able to parse tasks and effort before they even figure out everything that needs to be done?  What are the odds of that actually happening?"

We sighed and sipped coffee for a moment.

The problem, of course, is that sometimes we're not quite sure what else can be counted.  The issue with that, the whole metrics thing?  When we latch onto the easy-to-count stuff, it seems the stuff we count never matters very much to the actual outcome of the project.  Why is that?

So, the conversation flowed.  We each had another coffee.

My thoughts focused on test cases.  Why do so many folks insist on counting test cases and the number that passed and failed?  What does that tell us about the software?  If we could logically define, for every situation, what test cases should look like - and define instances where those definitions would always hold - that might work.

My problem is simple:  I can't recall two projects ever conforming to the same rules.  That set of rules does not seem to work in most of the environments I've worked in.

The unicorn seemed to understand.

He said "I tend to use failure points at steps in documented test scripts when I need them.  Some people use each failure point as a test case. They get many, many more test cases than I do.  Does that make their tests better?  Are they better testers because of the way they define their test cases?"

We both agreed that simply having more test cases means almost nothing as far as the quality of testing.  That in turn, tells us nothing about the quality of the software.

If "a test case" fails and there are ten or twenty bugs written up - one for each of the failure points - does that tell us something more or less and if ten or twenty test cases resulted in the same number of bugs being written - again - one for every failure point.

What does this mean? 

Why do we count test cases and all the other things we count? 

The unicorn looked at me and said that he could not answer that question.  He said that he preferred to consider more important things, like, whether or not unicorns can talk with humans.

Monday, December 16, 2013

Tea Parties, Perspective and Expectations or What Makes a Bug?

I'm writing this the evening of December 16.  This is the anniversary of an event that gets a lot of attention in a fair number of middle school and high school American History classes.  It struck me, as I was thinking about this while walking to the office today, that while some people consider this a watershed event, in reality it was part of a continuum of tumultuous events that happened in fairly short order. 

December, 1773

Consider two descriptions: 
1. Militant Anti-government terrorists destroy massive amounts of private property in Boston Harbour.
2. Freedom-loving Patriots destroy hated symbol of unjust oppression by dumping tea in Boston Harbour.

Both describe the same event, each from a distinct perspective.  Both engage in hyperbole, ostensibly to make a point.  The facts of the matter are these:  between 30 and 130 men, some dressed as Mohawk warriors, boarded three ships owned by the British East India Company, overpowered the anchor watch and threw 342 chests of tea into the water. 

There are bits that often get left out of the narrative.  For example, the tax on tea was originally passed in 1767.  At the time, Britain was in deep financial trouble as a result of the Seven Years War - what is taught as the French and Indian War.  Much of the expense of the war was to defend these same American Colonists from the French.  It seemed reasonable that some measure of tax be levied to pay the bills of the war.  The East India Company argued against the tax, and through a series of negotiations and compromise with Parliament had them offset for a period of time. 

These expired in 1772.  The taxes were modified slightly in the Tea Act of 1773.  The East India Company tried to extend these "tax breaks" again, and offset these taxes.  The government of Lord North refused.  They suspended some, but not all of these taxes - some 10% of the value remained.  This worked out to be 3 pence in taxes per pound of tea.  In doing this, there was a "minor change." The salaries of some colonial officials would be paid from these funds.

The East India Company attempted to cover the taxes themselves, to simply pay the tax and keep the retail price the same.  To put it gently, the colonists would have nothing to do with it.

There was outrage.  There was fury.  There was anger directed at individuals in the colonies and in London.  

Never mind that among those most vocally opposed to both the tax and the East India Company's attempts to minimize the impact on the colonists - their customers - were smugglers of tea. 

Perspectives

The perspectives around the facts drive the narrative. Both of the above descriptions, the ones about "terrorists" and "patriots", are accurate depending on the perspective of the individual.

Let us consider how these same "details" impact software.  One customer likes a given feature; another does not.  Which one is right?  One complains of "bugs" and demands a fix immediately.  The other refuses to consider any change at all.

I've actually encountered that.  Two equally large customers - one likes the software as it is and the other demands changes.

Which one has the "bug"?  How do you count that?  The description of the software and the promises of the sales staff could easily be interpreted either way. 

When people demand "bug free software" I wonder if they have any idea what that means?

A bug is a bug only if everyone involved with the software agrees it is a bug.  

A bug is not a thing - it is a description of a relationship.  That relationship describes a variance between expectations, perceptions and the actual behavior of the software.

In setting expectations, we must be able to anticipate and describe the perspective of the persons using, or responsible for using, the software we are working on.

Do we understand how people use the software?

We understand how we think they use the software.  We may understand how we think they will use it.

If our perspectives are wrong, if our expectations are wrong, then we are exercising the software - and looking for "expected results" - in ways that may not be what anyone else would describe as "expected."

Tuesday, November 19, 2013

On Brevity and Simplicity and Lean

There is much talk, discussion and debate over Lean - everything.  What does Lean look like?  What is "just enough" and "just in time" of anything?

I'm writing this the evening of 19 November.  It marks the 150th anniversary of the consecration of the National Cemetery at Gettysburg, Pennsylvania.  The battle at the same town was fought in July of 1863.  At the time of the dedication and consecration ceremony, the dead from that three-day bloodletting were being transferred from the shallow battlefield graves, dug where they fell, to the new cemetery near the existing cemetery along what became known as "Cemetery Ridge".  That was the site of the climax of the battle - Pickett's Charge.

Along with the dead Federal soldiers being disinterred and reburied, there were thousands of horses and other animals that died on the field that still needed to be dealt with.  Many of the Confederate dead were buried in mass graves.  Many are still there.

The ceremony included bands playing, a chorus singing and an address.  The featured oration was 13,607 words and took roughly two hours to deliver.  It was a stirring work of oratory, considered a masterpiece by all who heard it.  The speaker was Edward Everett - a pastor, educator, diplomat and at one time a member of the US House of Representatives and the US Senate.  He was a master of his craft. 

At the end of this stirring epic, the President gave "a few appropriate remarks."  He spoke for around two minutes, speaking 270 words. 

Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battlefield of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.

But, in a larger sense, we can not dedicate, we can not consecrate, we can not hallow this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.

In this, the political process kicked in.  Newspapers that leaned toward Republican sentiments sang the praises of Mr. Lincoln.  Those that leaned toward Democratic sentiments slammed Mr. Lincoln and his sentiments.

Simplicity wins.  Precision overwhelms artifice. 

Friday, November 8, 2013

Agile Testing Days 2013: Tester Roundtable Workshop

At Agile Testing Days in Potsdam, I ran a workshop Tuesday afternoon demonstrating an approach to solving problems.  These may range across a variety of types of problems, but all of them are problems that are bothering people now.  I had run similar exercises before at conferences (most recently at CAST this past August), user groups (like at GR Testers earlier in August) and as an exercise for test organizations in companies.

Considering problems.
The real purpose at conferences is not so much to find solutions as it is to demonstrate the process so the participants can take the exercise back to their day-jobs and try it there.

The format is simple. Each participant describes problems or pain points they are dealing with; they are written down; participants vote on the problem they are most interested in discussing; when there is a single item selected, that is the one discussed.
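The mechanics need nothing fancier than a dot-vote tally.  A toy sketch of the selection step, with invented pain points and votes:

```python
from collections import Counter

# Invented pain points, one vote per dot a participant placed.
votes = [
    "2nd/3rd order ignorance", "Flaky test environments",
    "2nd/3rd order ignorance", "No time to test",
    "2nd/3rd order ignorance", "Flaky test environments",
]

# Tally the dots; the top item becomes the discussion topic.
tally = Counter(votes)
topic, count = tally.most_common(1)[0]
print(f"Selected for discussion: {topic} ({count} votes)")
```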

And we're off...

We begin by describing the selected problem in greater depth, including what is the impact, what possible contributing factors may be involved and other aspects around it.

The list of problems. 


Voting!
The list of pain points and concerns was fairly extensive.  That some people contributed more than one certainly helped.  After a brief summary of the problems, we began voting.  The consensus landed on Potential 2nd/3rd Order Ignorance.  The explanation: instead of "knowing there are things we don’t know," people don’t know there might be things that are not known.

Describe the problem. In explaining the situation, the participant whose problem it was walked through a series of steps, including the Deliberate Discovery of problems and the Accidental Discovery of the same problems.  (He had been in Dan North's tutorial the day before, and this helped him frame the problem nicely.)  The difference seems to be the level of ignorance of stakeholders and of the problems to be discovered. 

Discussing the Selected Problem
The question at the root is: how can the people who will be impacted by problems not found in testing be made to recognize there may be a problem in the software that is, as yet, undetected?  The real issue is how we can help stakeholders understand the risks of not recognizing there may be problems lurking.

Describing the problem is extremely challenging and can be a problem for many people.  Without being able to do this, we will have an even greater challenge to sort out the problems in making software better.  

In describing the problem, several things became clear.  There are a number of references available for consideration that may give people food for thought for addressing this and similar issues. 

Consider Nassim Nicholas Taleb’s The Black Swan (this was mentioned in Matt Heusser’s keynote Wednesday afternoon).  Looking at this from a slightly different angle, particularly on thinking and thought processes, de Bono's Six Thinking Hats gives help in considering how people think.  This can help form what we do and how we approach problems. 

Consider, in addition, Michael Bolton’s work around frames and test framing.  His blog is a good starting point - or simply engage the wonders of Google with a broader search.  Additionally, work on focusing/defocusing approaches may have value in looking to answer the question “What else is going on?”

Define Possible Solutions.  Several ideas came up in the meeting.  

First Option:  Professional Experience/Authority 

The first possible solution drew a significant portion of the time available.   “In my professional opinion there are things we don’t know that we don’t know.”  

This may have mixed results.  The context of the organization, and the situation specific to the cause of concerns, may make this extremely difficult.  The real issue here appears to be “How do we get the attention of the stakeholders?”  This will be problematic.  In some companies, an aggressive approach may work.  In others, such an approach may not - and could have significant consequences. 

Another option:  Change Expectations.

By demonstrating there are defects in production as well as defects/bugs found in each sprint, it may be possible to build a case.  Noting that not finding problems does not mean no problems are there is only the beginning.    The question to consider becomes “Based on what we know, as established fact, what else might be going on?”
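What "building a case" might look like with numbers - a small sketch; the defect counts here are invented:

```python
# Invented tallies: bugs found during each sprint vs. bugs later
# reported from production against that sprint's "finished" work.
found_in_sprint = {"Sprint 1": 14, "Sprint 2": 9, "Sprint 3": 11}
escaped = {"Sprint 1": 3, "Sprint 2": 5, "Sprint 3": 2}

for sprint, found in found_in_sprint.items():
    missed = escaped[sprint]
    # Every escape existed while testing was reporting "nothing found" -
    # direct evidence that unfound problems were there all along.
    share = missed / (found + missed)
    print(f"{sprint}: {found} found in sprint, {missed} escaped ({share:.0%})")
```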

This brings us to the final stage of the exercise – the chance to identify these ideas and, more importantly, put them on paper.  Somehow, putting things on paper (or some other recorded form) in such instances helps us visualize the components we need to consider, if not address. 

Identify the Constraints (or Roadblocks) / Assumptions / Stakeholders / Tasks.  

Where the same item appears in more than one area, you have identified a key point to address.  HOW you address it will need to vary on the specifics.  For this exercise, we found this:

Constraints/Roadblocks:
- Product Owner
- "Team is 'OK' with it."

Assumptions:
- People want a better product

Tasks:
- Retrospectives:
  - Invite the PO (and her boss)
  - Invite Legal
  - Invite anyone else who may contribute
  - Get the tech staff to stay after the demo

Stakeholders:
- Product Owner
- Legal Team
- Head of Channels


A final suggestion for ideas on considering options with problems like this: Jerry Weinberg’s “Are Your Lights On?”

AND - we were out of time.  

Thursday, November 7, 2013

Agile Testing Days 2013, Looking Back

Agile Testing Days wrapped up one week ago today.  I am sitting here in my living room, typing away on my trusty laptop, wondering how I can summarize what happened over the course of the week.

There were many conversations - many very good conversations.  Chris George, Markus Gaertner, Lisa Crispin, Huib Schoots, Gitte Ottosen, Matt Heusser, Eddy Bruin, Seb Rose, Carlos Ble,  Ajay Balamurugadas, J. B. Rainsberger, Lars Sjödahl, Janet Gregory, George Thomas, Sara Rainsberger, Mary Gorman - and the list goes on.

There were a couple of themes that were driven home to me - One: the need for open, clear communication;  Two: Trust.

Without one, the other won't happen.  There needs to be a level of trust between the participants in a project, or, anything - for communication, real communication to happen.  Without that, nothing else seems to work or matter.

The images I have in my mind of Potsdam are of an eclectic city, a variety of building styles.  Friendly people with an interest in others.  The feeling I have is a city with an active, vibrant community. 

There are other things from the conference itself.

The Lean Coffee sessions were excellent ways to start each day.  The energy and interest of the participants helped me get engaged and get ready for the day.  The Lean Beer on Monday was a fun way to wrap the formal sessions and blend into the more relaxed evening entertainment.

Part of me was a little disappointed in the energy - as one might expect, some of the people at Lean Beer were generally more interested in the beer than in the conversation and exchange of ideas.  Still, I thought it was a fun, relaxing exercise.  

Games night - what a fun event.  After the sponsor's reception, "Agile Games" with pizza, beer, learning, ideas and getting to meet people.  I'm not sure how many take-aways people ended the evening with, but most people seemed to find it relaxing as well. 

Thursday, the "Lean Cocktail" event - frankly, I was a little concerned.  What I really enjoyed about it was the chance for people who did not quite want the week to end to get together for another "meet and greet" type event.  This was a good exercise run by Matt Heusser in getting people to share their personal take-aways - the "thing I want to try at the office on Monday."  We also went around sharing about testing books and resources and ideas.

The cool thing - at many conferences, those whose flights head out the day after the conference end often wander about looking rather lost.  This gave a fair number a chance to hang with others, make plans for the evening, dinner and decompress together. 

That was pretty cool.  I liked the idea.  Oh, and the outing we had for dinner grew from 9 to 23.  Which led to way more conversation over dinner, then drinks after - until we headed out to our respective destinations the next morning. 

Thank you Diaz-Hilterscheid for organizing an extremely good learning event.

Wednesday, November 6, 2013

Almost Live! Agile Testing Days 2013 - Day 0 - Tutorials!

Matt Heusser and I presented a full day tutorial on Exploratory Testing for the Monday tutorials at Agile Testing Days.  We had a reasonable number of people - with 8 signed up and a total of 10 in the room.  This was one of the smaller groups we've done this type of workshop with - however, given the number of tutorials at the conference this year, it was a respectable number of participants. 

One of those participants, a very highly regarded tester named Ajay Balamurugadas, made a mind map of his notes and posted it as a blog post.

This is a fair representation of the day's events.  We started with people re-arranging the room to suit the needs of the group.  While we were waiting for people to arrive, we pulled out dice and gradually drew people in with a problem-solving, rule-determination game.  This is always an interesting process, as participants vary in their willingness to engage - some are shy and a little afraid to speak up or ask questions.  Others are a little nervous, possibly concerned about being wrong.  Some think that taking 20 or 30 minutes is too long and a sign of failure.  When I tell them honestly that it took nearly an hour the first time I played it, they don't seem to believe me.

Two lessons from that: people are allowed to make mistakes, if they learn from them; persistence wins.

The next step was to split the participants into two groups.  One group was instructed to define an approach to testing for a given application - a game, actually.  Then, when the approach was described, they were to script the tests.  The OTHER group was given some instruction around the ideas of quick attacks and other ideas on framing and defining test approaches.  Then both groups were brought together to test the app.  After a given period of time, we looked at the results for that exercise - then swapped roles.

In the end both groups went through defining test approaches, then scripting them.  Also, both groups were given some ideas on how to apply various other approaches to testing problems.

One thing Matt and I try and do is to make sure participants get what they hope to by, well, asking what it is they are interested in learning.  We then list these, have participants vote on them, then start down the line.  Agile workshop instruction.

The drawback, for us, is we need to be able to present information on a whole PILE of topics.  We spent much of the balance of the day working through various ideas and how they can be applied.  In the end, we had one more exercise.

We gave one more task - another application - and asked "How would you test this?"  The rule was simple: this app needs testing - how would you do it?  And we turned them loose on it.  No other rules.

What excited me, personally, was how the participants latched onto and tried the ideas we had presented during the day.  That was fantastic.

Following this, we repaired to the Fritze Pub, one of the bars in the conference center/hotel, where we launched into the inaugural Agile Testing Days Lean Beer!  Yeah!  Like Lean Coffee, but in the evening, with BEER!

LeanBeer!



As we wrapped up, participants headed out for a couple of outings.  One was a pub tour of Potsdam - this actually sounded fantastic.  Potsdam is an amazing town, with a variety of architectures, each reflecting a period in Potsdam's history.  Like I said - pretty cool.

However, I repaired to a dinner hosted by Diaz and Hilterscheid (the organizers of the conference) for the speakers.  It was amazing - extremely good food, lovely wine and fantastic conversations.

We wrapped up and headed to the conference center for more networking and more, umm, beer.

Thursday, October 31, 2013

LIVE! Agile Testing Days 2013 - Day 3! In Potsdam!

Thursday morning - breakfast & Lean Coffee.  Loads of tired people sitting in the Fritze at the Dorint Sanssouci in Potsdam.  Good energy though - the coffee helps!

Setting up for today's opening keynote -

NOTE!  I will try REALLY HARD to clearly flag my own reactions to assertions made in the presentations I attend.  Going as fast as my tired fingers & brain allow... Will clean the blog post up later.

===

Keynote: David Evans - Visualizing Quality

The product of testing is..... what?  Evans launches into "value" questions - by inserting the idea of "more and better testing will do what for the product?"

Product of Testing is Confidence - in the choices and Decisions we have to make in the steps to understand the product - not the product itself.

The testing service is not the judge, and not on trial - it is the expert witness.  How credible are we when it comes to providing information around the product?  In the end, we must balance the risks and benefits of implementing the project - launching, as it were.

Evans then describes/launches into the Challenger shuttle disaster (28 Jan, 1986).  In this he describes the meeting the night before - the subject of which was "Will the O-Rings suffer a Catastrophic Failure Due to the Cold Temperatures."  Of course, now, we know the answer was "yes."

Many pages of actual copies of the meeting agenda and technical notes - yeah, these guys really were rocket scientists, so there are loads of technical data here.  They launched - the shuttle blew up. 

"Failures in communication... resulted in a decision to launch based on incomplete and sometimes misleading information, a conflict between engineering data and management judgements." Wllm Rodgers, Investigator

Evans - We need to make sure that we are making it as clear as possible what the information we are presenting means and what the likely consequences are to the decisions based on that information.

Consider the McGurk Effect - when there is a conflict between what we see and what we hear, the tendency is to rely on what is seen, not heard.  Is this what happened with Challenger?  Is this what happens with software projects?

Now - the US military budget in 2008 was $607 billion.  (Pete: that's a bunch of cash.)  However, a single data point conveys not terribly much information.  Adding information on other countries gives more information.  And when comparing spending to GDP - the total output of a country - while US military spending, in gross terms, is the sum of the next 8 countries' national spending, the picture looks quite different.

BUG COUNTS! Any negative must be expressed in the context of "what are we getting that is positive."

In response, Evans posts the first clearly identifiable infographic - with charts, lines, numbers, etc.  This was a graph made in the 1860s of Napoleon's invasion of Russia in 1812.  The lines represent the size (numbers) of the Grande Armée (Napoleon's army) - at full strength starting out with a very wide swath, and gradually narrowing over time as the army shrinks through attrition (death).

Consider how we are building/using/exercising test suites compared to the actual documentation.

This is in contrast to the London Tube map - which is fantastic for giving an idea of how to get where you're going in London, yet tells you nothing about the actual street layout.

US election maps - red state/blue state - look like Obama should not have won - except the land doesn't vote.  Adjusting at the STATE level you get something DIFFERENT.  When you look at each state by COUNTY, you see something different again - straight "results" versus results adjusted geographically by population density give us a series of interesting views for consideration.

Then there is the Penfield homunculus, where the difference between sensory and motor representation is - remarkable.

All of these boil down to 1 important thing - a diagram is a REPRESENTATION of a system - NOT THE SYSTEM.  There are other considerations around how that same system can be represented FOR A DIFFERENT PURPOSE.

Be careful of what your data points are PERCEIVED to represent.

Information radiators can help us visualize the process.  Simple low-tech stuff is still - visualization.  He suggests representing people on the board AS WELL AS tasks - so, not just what is in flight, but who is working, or taking point, on each task.  (Pete: Good addition.)

Citing James Bach's "low-tech testing dashboard" to express state of testing - and note IT IS NOT COMPUTER GENERATED!

Remember this:
* What is the thing I am doing - what do I want to represent (and why)?
* Stay focused on the information you wish to present - what message is given by a bar chart vs. a "mountain range" of the same data?
* Mind your transitions - if a unit is a unit, how do you know what the unit is?  Save the X-axis for time - most people presume that is what it is.
* Order your data so people understand the point YOU WANT TO MAKE - the sequence on the chart need not match the sequence in the app/software dashboard/whatever.  (A small sketch of this follows.)
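A quick sketch of that last point - the same invented data, plotted as it arrived and then ordered to carry a message:

```python
import matplotlib.pyplot as plt

# Invented defect counts per component.
defects = {"Billing": 4, "Checkout": 19, "Search": 7, "Login": 2, "Reports": 11}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

# Left: bars in whatever order the data happened to arrive.
ax1.bar(list(defects), list(defects.values()))
ax1.set_title("As stored - no message")

# Right: the same data, ordered to make the point you want to make.
ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
ax2.bar([k for k, _ in ordered], [v for _, v in ordered])
ax2.set_title("Ordered - 'Checkout hurts most'")

plt.tight_layout()
plt.show()
```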

Remember - Testing is Decision-Support - it does not do "quality" - it gives information for people to make decisions on the product.

===

Track Session - Ajay Balamurugadas - Exploratory Testing in Agile Projects: The Known Secret

Ajay is a testing guru (OK - that was my personal comment) with interests in many things.  He begins by asking if the Agile Manifesto has a bearing on testing in projects in an Agile environment.  He then presents a definition of ET from Cem Kaner - slightly different from James Bach's definition.  He then discusses what this means -

Important point:  If your next test is influenced by the learning from your previous test, then you are doing exploratory testing.

AND PROCEEDS to LAUNCH INTO A MINDMAP based on stuff (words) from the audience.  (Pete - Nice - mob mind mapping?)

This process is an interesting exercise in "testing" the session while he is presenting it. By getting people to contribute aspects of ET, based on their understanding, this is drawing people into the conversation.  (Pete: photo taken from Ajay's tablet, to be tweeted or otherwise - Hey - LOOK!)



Whatever you do, if the customer is not happy, oh dear.

Problem - much of what is on the picture is "positive" - what about problems?

* Mis-applied/misunderstood how to apply
* Mis-represented
* Hard to explain to management
* Hard to ensure coverage
* Miss important content (and context)
* Perceived as "monkey testing"

Skills related to ET:

Modelling; Chartering; Generating/Elaborating; Recording; Resourcing; Observing; Refocusing; Reporting; Questioning; Manipulating; Alternating; Pairing; Branching/Backtracking; Conjecturing

Consider that "Exploratory Testing is a mindset using this skillset." (Jon Bach)

The Secret of ET -

Skills
Experience
Customers/Context
Risks
Exposure/Exploration
Test

=====

Track session: Vencat Moncompu - Transforming Automation Frameworks to Enable Agile Testing: a Case Study

(Pete: I'm missing points.  This guy is speaking really fast - bang, bang, bang - challenging to keep up!)

Agile automation by itself can't bring about faster time to market.  Traditional automation approaches (not in an Agile environment) are representative: problems include being (usually) UI-dependent, accruing tech debt, and building up waste. 

Function/behavior attributes are central to ensuring software quality.  Challenges in automation include variances in language from tool to tool - and automation is often restricted to "regression."

The question becomes: how do we make this stuff work?

Make the features scriptless and the tests self-documenting;
Develop scripts before the UI is ready;
Intelligent maintenance;
Options to run tests across multiple layers;
Intuitive, business-friendly behavioral testing.

Presenting known/reasonably common (Pete: at least to me) multi-layer coverage ideas.  (Pete: some of the slides are really hard to read given the font & background colour; points spoken but not on the slides are fine, but referring to stuff on the slide makes it challenging.)

Flexibility in test automation is needed to reduce/minimize two concerns: 1. Beizer's pesticide paradox, where methods to find bugs will continue finding the same types of bugs; 2. James Bach's minefield analogy - where if you follow the same path, time after time, you clear a single path of mines (bugs) but learn nothing about any others.

Balancing Automation & ET - the challenge is to keep things in touch with each other.  (Pete: there seems to be a growing trend that there is a clear dichotomy between automation and ET. I'm not convinced this is still the case w/ more modern tools.  Need to think on this.)

Cites "No More Teams" - Act of Collaborationis an act of shared creation and/or discovery.

Keeping the balance between behavior and function - and reflecting this in the test scripts - may help make this testing clear and bring value to the test process.  (Pete: I'm not certain the dichotomy between "Agile" and "Traditional" automation is as clear - or as valid - as the claims some people make about it.)

===

LUNCH!

===

Keynote: J B Rainsberger -

BANG BANG BANG - If we're so good at this why aren't we rich yet?  (Kent Beck, 2003)

The fact is, we tend to sound zen-monk-like when people ask what Agile is about.  Well, it's like ... and it's a mindset ... and we need to stay focused on...   OK.  We get it.

Part of the problem is the mindset that improvements will cost something.  This is partly down to us being a pain in the butt with our own touchy-feely thing.  We argue against restrictive dogmatic rules and we land in the thing of "we need to change this and focus on the mindset" - followed by papers, books and whatnot that date back to 2005 or 2006.

Etudes - pieces of music intended to incorporate specific exercises (Pete: in the context of music).  Why don't we do that with Agile?  Practice skills while doing our work?

For years, we argue stuff - and still need to make things work and - somehow - something is missing.

The problem is "They" have no real reason to change - so "They" will work to the Rule - Translated, they'll do the absolute minimum based on the "rules."

Citing "New Strategic Selling" for the reasons why people don't buy.  The idea of perceived cost vs pperceived value  is the crux of the problem.  We fail in that manner.

Cites Dale Emery - a person will feel motivated to do something as long as a series of elements is present and they see value in what they want to do.  Sustaining change is HARD -

The most we can really do is support each other - we know the answers and what needs to be done - so hang in there and help, listen.  We can either storm off and never deal with the idiots again.  Or - We can pity ourselves.  Or...

We can look at things in other ways.  Shows a video from Mad TV with Bob Newhart as a doctor of some sort, and a "patient" who fears being buried in a box.  His solution - STOP IT!  In fact - every problem has a solution of STOP IT!

Let us consider what our most "well advertised" popular practices are... stupid.

If you haven't read "Waltzing With Bears" - then do so.  There's a chapter/section to the effect of "Agile Project Management is Risk Management" - which is kind of part of what ET does.  Why? maybe for the same reason that we have daily stand ups and we manage to completely miss stuff that gets /stated/ - can't get resolved in a few seconds - it gets lost.  MAYbe this is what

Cucumber - what most people associate with BDD.  Consider this... GAH!  THIS JUNK ENDS UP LOOKING LIKE COBOL!!!  BAD!!!  We get so much baggage that stuff gets lost, because we're worried about the baggage -

Rule - INVOLVE THE CUSTOMER - DUH!  (And Joe says he's been saying that for 15 years.)

DeMarco/Lister's "Lost but making good time" Excellent point - AND the Swing cartoon (meh, I'll dig a copy out and post it here.)

RULE - Talk in examples.  E.g., lost luggage: "Which of these bags is closest to your bag?" followed by "How is your bag different from the one in the picture?"  This allows us to get common examples present, and trigger information to surface important details that might otherwise be missed/overlooked.

One problem with this is we forget to have the conversation - we want to record the conversation, but forget to have the conversation in the first place.  (Cites Gojko Adzic's book on communication.)

The problem with recording stories on a card and doing the "this is how it's done" thing: cards are fantastic for figuring out how to create stories - yet many years later we have not examined any other way to do things - we are simply shifting the form of the heavily documented requirements.  Translated - you should not still be limited to "As a..., I want..., so that..." to describe stories.

This gives us some really painful problems in how to...

Promiscuous Pairing and Beginner's Mind: Embrace Inexperience

Awesome paper on beginning/learning to do pairing.

Angela Harms - "Its totally ok for you to suck... that means you can walk around and listen to people criticize your work and tell you it sucks.  It means 'that's a stupid name' is OK to hear and to say."

The point of a RETROSPECTIVE is to be part of a continuous improvement process.  Usually that part gets ignored.  The reason it gets ignored is lack of trust.  The thing is - trust is the key to things working - and it comes from the willingness to open yourself up to vulnerability.

Consider James Shore's paper that continuous integration is an attitude, not a tool.  CI is NOT a BUILD!!!

When we impose Scrum on the world without them understanding WHY and THE POINT - we get re-labeled stage/stop-gate stuff.

Part of the problem is EGO - WE DON'T LIKE TO FEEL ANYTHING OTHER THAN AWESOME!

AND - since he's running out of time, Joe is BLOWING through the remaining handful of slides.  (Pete: he's got loads of visuals - I hope the slide deck becomes available, cuz there is no way I am capturing more than 1% of the balance.)

One thing - consider throwing test ideas up on the wall and playing the "This is what I hate about this test idea" game.  See what happens.

AND WE'RE DONE!!!!!!!!!!!!

====

Consensus Talks -

OK - Lost connection to my blog's hosting server, and THAT is a problem.

--

Pete note: I finally am reconnected.  I got in late to a very nice woman's presentation - I met her in line for lunch and wanted to hear what she had to say - but I could get nothing recorded except one really, really important point she made.

Monika Januszek - Automation - a Human Thing (its for people)

Tools, processes, approaches, process models, STUFF that we want people to do differently or change - including automation - in order to be adopted and accepted - and then used - must address ONE BIG QUESTION: "What's in it for me?" 

If we can't address that - INCLUDING FOR AUTOMATION MODELS - don't expect to see any meaningful change.

--

Next up - Lindsey Prewer -

Pete note: So busy getting connected and following up on the last talk - now trying to catch up here!

Here we go.  You can't fix problems by throwing people into the fray.  When hiring runs to several people a month, and the goal is to make things happen cleanly, THEN there is a cycle of hire/train/start testing/expand training.

Because they could not bring people in fast enough, smart automation approaches were needed -

Start with the values and goals
Make hiring scalable
Have clear priorities
(Pete: at least 1 more I missed)
Mind the points in the Agile Manifesto - Particularly the FIRST one - the idea of PEOPLE.

Without people - you are doomed.

--

Fairl Rizal
Agile Performance Testing

Started with the difference between stress testing and load testing.  (Pete: OK - I get that.)
Made the significant point that not everything needs to be, or should be, automated.  (Pete: Almost an afterthought?  Possibly the result of trying to fit too much in from a much larger presentation.)


---

Anahit Asatryan -
Continuous Delivery

SAFe - Scaled Agile Framework

Tool - AtTask -

--

Pete: And I finally have connection to my blog again. (hmmm)

Lars Sjödahl - Communication

Loads of stuff that is really good - so here's a summary (lost connection thru most of the presentation)

Realize that how positively questions are framed may impact results - e.g., "Would you like coffee?" vs. "Would anyone like coffee?" vs. "I'm getting coffee - can I get some for anyone else?"

Silence may imply a false acceptance - OR the presence of groupthink - encourage full discussion.

Major fail - 1991 - Christmas cards from a company called Locum.
Someone wanted to change the "o" in the name to a heart - reasons unknown.
No one said anything - and they became famous on the interwebs as a result.

---

Eddy Bruin

Gojko's Hierarchy of Software quality

Verify assumptions - except first you must learn/find/discover the assumptions, yes?
And the Tree Swing cartoon shows up AGAIN!

Without getting inside the heads of the customers, you may not know the real expectations/requirements.

"You have to start with the customer experience and work backward to technology." - Steve Jobs

If you have multiple paths, with a channel each, consider cross-channel testing.  (Pete - but don't cross the streams - Ghostbusters)

Consider user based analytics - who/what is your customer - then you can begin to derive possible paths. 

====

Pete: So now that I can get to the blog - yup - lost connection AGAIN.  Trying to resume for the final keynote of the whole show - bare-bones notes on that, as the connection is sporadic.

Lisa Crispin and Janet Gregory - On Bridging gaps.

After a tremendous skit bringing testers and developers together with customers - which included a narration performance by a tester who happens to live in Michigan (ahem) - they launch into a series of experience reports around false communication models, restrictive models within the organization, and the lack of any recognition of contractual understanding.

Simply put - without Trust, this stuff does not happen.

They do a fine job of referring to keynotes and selected track talks that have been presented during the week. 

-- Lost connection again - and it's back --

Dividing work into small pieces, baby-steps, is one way of influencing work and making things happen.  It makes it a bit more palatable. It makes it easier to work on small pieces.  It makes it easier to simply do stuff.

And there is a new book coming out from the Agile Testing Wonder Twins.

===

This wraps up the conference.  I have more notes coming from Day 0 - the Tutorial Day.  We have some interesting notes to finish...

AND - The conference organizers announced that next year's conference (November, 2014) will have a kindergarten to mind the kids of attendees. (Cool idea.)

===

Good night -

Finished with Engines.

Wednesday, October 30, 2013

LIVE! Agile Testing Days 2013 - Day 2! In Potsdam!

Wednesday dawned bright and early (well, it dawned about the same time it always does) on a group of very tired conference participants.  Last night there was the "Most Influential Agile Testing Professional" awards banquet (congratulations to Markus Gaertner who won!)  This also featured a Halloween theme, complete with costumes and ghoulish decorations.

Loads of fun, but made getting to Lean Coffee almost an impossibility and cost me time getting into the "Early Keynote" by Chaehan So.

So, here we go!

The "Early Keynote" title is "Business First, Then Test" - which pretty well sums up the core ideas presented.  Begins with a fair description of product owner and tester having potential areas of conflict and the problems that result from that.  A simple (well, maybe not simple - common perhaps?) approach to addressing this is to share experiences and discuss the intent in an safe environment.  Chaehan's example was "drink beer" (Pete: yup, I can agree!)

Instead of mapping use cases/user stories to abstract buzz-wordy terms, use the same use case or user story name/identifier the Product Owner is familiar with.  Pretty solid idea, not new (to some of us) but important to state.
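Pete: a tiny sketch of that naming idea - the story ID and domain here are invented by me, not from the talk. The test carries the Product Owner's own story identifier instead of an abstract internal term:

# Hypothetical story "US-1042: Returning customer reorders a saved cart".
# The test name and docstring use the PO's identifier verbatim.

class Cart:
    def __init__(self, items):
        self.items = items

def reorder(cart):
    # Stand-in for the real ordering logic under test.
    return "placed" if cart.items else "rejected"

def test_us_1042_returning_customer_reorders_saved_cart():
    """US-1042: Returning customer reorders a saved cart."""
    saved_cart = Cart(items=["coffee beans"])
    assert reorder(saved_cart) == "placed"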

References coming from the use cases/user story, including data relationships, can result in complexities not obvious to the technical staff, often caused by abstraction to "simplify" the representation.  However, sometimes the representation itself is the issue.  (I'm not capturing this idea well, but I think this covers the gist of it.) 

The idea of relationships and abstraction argues against the common "IT/Geek" approach to simplify for them - DON'T DO THIS.  Keep the reductions at a business intent level.  Chaehan suggests doing THIS by mapping the user story across multiple channels - not redefining the stories to track to the channels themselves.

If you are working on a web-based ordering system, the "story" is replicated in each use channel.  This makes for a complex (and difficult to execute) test path and representation of needs, process and the presentation of information - even if the implementation of this is complex.

Keep the information as simple as possible!  This is the point of ALL reporting to Management! 

Design to Community - D2C - create a simple design that reflects what needs to be done.  Like many things this allows for multiple levels of abstraction - and avoids the itchy-scratchy feeling that some people have in relation to having tests/progress reported to them in terms they don't use.

Discusses how the cost curve of correcting problems in the application is usually presented in a manner appropriate to "waterfall" and not so much to Agile.  This is an interesting view, if the commonly referenced hockey-stick graph/chart is used (yeah, the same one shot to pieces in "Leprechauns of Software").

==

Second Keynote - Christian Hassa on "Live it - or leave it! Returning your investment into Agile"

Describing his presentation with Matt Heusser at the Agile Conference in Nashville, Matt made the observation that "scaling Agile" was interesting - but how does that relate to testing?  (Pete Comment: gulp)

Scaling Agile is often presented as AgileWaterScrumFall - OR Disciplined Agile Delivery (DAD).  He then draws comparisons to the "Underpants Gnomes," who have a business plan something like:
Phase 1 - collect underpants;
Phase 2 - ??;
Phase 3 - profit.

Except the problem is that phase 2 thing.  Most people mistake "get ready to produce" for phase 2 - it actually is part of phase 1.

Scaled Agile Framework - not so different from the Underpants Gnomes.  There are still gaps in the model.  There are holes that seem present in phase 2.

If we fail to focus on unlocking value, and instead focus on costs, we miss opportunity.

SAP "Business by Design" model is not that far from this either.  The published estimations from 2003 simply failed to materialize.  The problem was related to attemptign to model the product on current clients/users of SAP, not on what the intent was. 

Presents an example of applying (mis-applying?) Scrum to a given list.  As the team worked forward, the backlog of requirements grew.  How?  The team dove in and worked on the project aggressively and diligently - and yet the backlog grew.

After a high-level meeting with "what is wrong?" as the theme, it dawned on the Product Owner that the problem with the backlog was attempting to identify all the possible requirements rather than focusing on the core aspects that were needed/wanted so the product could be delivered /finished/ on time.  The additional ideas may be incorporated into future versions/upgrades - but get the stuff out there so the product can be used; then people can figure out what is really needed.

"Your job as developers is not to develop software, your job is to change the world." Jeff Patton

Assertion: "Your job as tester is NOT to verify software, job is to verify the world is actually changing (fast enough.)"

Yup.  The problem we in Dev (including testing) have is that we're a bit like Pinky & the Brain - we want to change the world/take over the world, but we fail to do so - we don't look long enough - we focus on the minutiae and not the big picture.  (Pete Comment: OK, I'll reserve judgement, though I like the P&B reference!)

Turns to Scaling TDD for an enterprise.  Cyclical feedback loops (loops within loops) can provide insight within each pass/iteration. (Pete note: ok - seems interesting - consideration needed here on my part.)

Turns to Impact Maps as a tool to facilitate communication / transparency with stakeholders. Interesting example walk through (but it sounds a bit hypothetical to me) on applying the core ideas to this.  Goals/Actors/Impacts/Deliverables - (Pete: OK - I get that.)

Pete: The question is, does this translate well to people who may not recognize the intent?  I suspect it does - by forcing the consideration that seems "obvious" (by some measure) to someone (who may or may not matter).

By using impact maps, we can then apply "5 whys" to features - (Pete: that is an interesting idea I had not considered.  I kinda like it.)
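Pete: here's how I picture an impact map in data form - the product, names and goal below are invented by me:

# Goal -> Actors -> Impacts -> Deliverables, as nested data.

impact_map = {
    "goal": "grow monthly active users by 10%",
    "actors": {
        "casual reader": {
            "reads more articles": ["related-articles widget", "weekly digest email"],
        },
    },
}

# Asking "why?" up the map, deliverable by deliverable - a lightweight "5 whys":
for actor, impacts in impact_map["actors"].items():
    for impact, deliverables in impacts.items():
        for deliverable in deliverables:
            print(f"{deliverable}: why? -> so the {actor} {impact}"
                  f" -> why? -> to {impact_map['goal']}")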

Working on scaling /anything/ tends to get bogged down in goals - create a roadmap of goals to define what it is, and where it is, you'd like to go.  Predicting the future is not the point of defining goals - instead look to see what you'd like to achieve.

Test Goals & impacts are similar in that they can act as guides for Scale, Measure and Range of each goal/activity.  Finally - Deliverables - Smaller slices delivered to production make it actually easier to the  get the product out there and improve the product while still developing it (Pete: fair point.)

Story Maps allow us to examine what it is that we are trying to implement, no?  Mapping the story can make aspects clear we have not considered.  Rather than "aligning to business goal" we can align to "actor goal" - This can help us view our model - and see flaws, holes or conflict.

By defining a "likely order of events" we can see what the experience of the user will be, without defining what the software does.  It allows us to remain true to the spirit of the purpose through each potential path. 

This, in combination with the other tools described, helps measure progress and check scope creep.  If we can identify this, we can then identify the purpose more clearly and spot potential problems being introduced.

We can also use Story maps to control flow and define relationships between components and find potential conflict.  As we get more information we can define the higher/lower priority decisions around the story maps.   The higher the priority, the finer/more detailed the story maps become.  The lower the priority, the chunkier, more nebulous the story maps become. 
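Pete: a sketch of that priority-driven granularity as data - the domain and stories are invented by me:

# Higher-priority activities get fine-grained story slices; lower-priority
# ones stay chunky and nebulous until they matter.

story_map = {
    "browse catalogue": {   # high priority - sliced fine
        "priority": 1,
        "stories": ["search by name", "filter by price", "sort results"],
    },
    "manage account": {     # low priority - chunky, nebulous
        "priority": 3,
        "stories": ["some kind of profile page"],
    },
}

for activity, detail in sorted(story_map.items(), key=lambda kv: kv[1]["priority"]):
    print(f"{activity} (priority {detail['priority']}): " + ", ".join(detail["stories"]))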

WOW! a Real example! (as opposed to hypothetical) 



Sprints expanded to 4 weeks in this case.  The first sprint had issues (OK, not uncommon), yet by the end of Sprint 2 the core functions were in place.  By focusing on the MOST IMPORTANT features, the top-priority story/story maps could be implemented cleanly, expanding ideas/needs as the project developed to include the lower-priority needs.

Pete: OK - completely lost the thread of his last points but I got pictures!!



General Gist - COMBINE TOOLS and TECHNIQUES to make things work.  A SINGLE tool or technique may have value; by combining them we can balance things a bit better.

Book Recommendations -

How to Measure Anything - Douglas W Hubbard
Impact Maps - ???

And BREAK TIME!

==

Track Session - Gitte Ottosen - Making Test-Soup on a Nail - Getting from Nothing to Something

Gitte is a Sogeti consultant speaking on Exploratory Testing.  OK. Here we go With a Unicorn!!



Starts with James Bach's (classic) definition of Exploratory Testing.  (Pete: yeah, the one on the Satisfice page)

Describing fairly common challenges in project limitations, liabilities and personality conflicts and potential for problems.  PM does not want "too many hours" used - views testing as overhead.  And the Test Management Org wants "everything documented... in HP QC."

Fairly obvious solution - keep it simple.  When people pretend to be Agile, it is a challenge for everyone involved.  The challenge is to make things work in a balanced way, no?  Gitte was not an "early adopter" of mind maps, and described how she created bullet lists and converted them later - OK - I can appreciate this.  Then there were issues with the documented structure of the app - which did not exist.  This is something we all get to play with sometimes, no?

So what's available?  Boundary analysis, pair-wise (orthogonal arrays - same thing, diff name), classification trees, etc.  (Pete: Yup - all good approaches).  AND she plugs Hexawise (Pete: yeah, way cool product!)
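Pete: boundary analysis in a nutshell - a tiny sketch using an invented 1..100 "quantity" field:

# For an inclusive [low, high] range, test on, just inside, and just
# outside each boundary.

def boundary_values(low, high):
    """Classic boundary values for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts_quantity(qty, low=1, high=100):
    # Stand-in for the real validation logic under test.
    return low <= qty <= high

for qty in boundary_values(1, 100):
    print(qty, "->", "accepted" if accepts_quantity(qty) else "rejected")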

On examination - it was possible to look at "cycles" and how users/customers are expecting the new app to work.  The "documented requirements" did not exist - and maybe they were never discussed and understood.  So - the question becomes: when expectations differ between dev/design folks and customers/product owners - what happens?  "Learning opportunity."

Decision trees and process flows can help with this - examine what the customer/user (or their representatives) expect to happen and compare those with development's view - as a whole.  Then exercise the software.  See what happens.  Exercise the things of interest THEN.

The testers (her) worked to support the team by "translating" the user stories into English - well, because of the team distribution, writing them in Danish was kind of a problem - some folks spoke/wrote Danish (Danish company) but others did not - ewww

The good news is, when she exercised the software rather than documenting what she was going to test, she found problems.  The product owner noted this and thanked her.  By focusing on testing, she found she enjoyed testing again (Pete note - yeah - that helps)

Interesting variation on mind maps - Use them to document testing - instead of step-by-step approach, simply mind map of function points to be tested.  (Pete Note: I do something similar to define sessions and charters for the sessions.)

==

Track Session: Myths of Exploratory Testing Louis Fraile and Jose Aracil

Starts with a fairly common BANG - "Who is doing Exploratory Testing?"  Loads of hands go up.  (Pete note: ET by what model? Are they doing what I think of as ET?) (Note - they also did a pitch that they are looking for people to join the company - boys - is that cricket?)

To do ET well, you need to...
"Inspect and Adapt" - change your thinking and observe what is going on around you. 
"Be creative/Take advantage of you're team's  creativity" - let people do their thing
"Additional to other testing" - don't just do ET - do other testing "like automation"
"Quickly finds defects" -  wait - is that a key to success or an attribute of ET?
"Add value to your customer" - hmmmmm what does this mean?
"Test Early! Test Often!" - what?

Myths...

Myth 1 - ET is the same as Ad-hoc Testing
"Good ET Must be planned and documented" -
You must know -
what has been tested;
when it was tested;
what defects were logged.
(Pete: a minimal session-record sketch follows, after the ideas list below.)

Some ideas -
Testing Tours - Whittaker
Session Based Testing - Bach/Bolton
Something Else (Huib suggests Mike Kelly's ideas and thrashing Whittaker's tour ideas)
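Pete: those three "must know" items map naturally onto a session record (SBTM-style).  A sketch - the fields and values here are invented by me for illustration:

# The minimal session record implied by the "must know" list above:
# what was tested, when, and which defects were logged.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestSession:
    charter: str                                  # what has been tested
    started: datetime                             # when it was tested
    minutes: int                                  # how long the session ran
    defects: list = field(default_factory=list)   # what defects were logged

session = TestSession(
    charter="Explore checkout with expired payment cards",
    started=datetime(2013, 10, 30, 14, 0),
    minutes=90,
    defects=["BUG-481: expired card accepted on retry"],
)
print(session)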

Myth 2 - ET Can't be measured
Multiple measurements available - SBTM, etc.,
Pete comment - blah - what?

Myth 3 - ET is endless
Pete comment - no idea what their point here is.  sorry

Myth 4 - ET Can't reproduce defects
Be an explorer - really?
Be like David Livingstone from the video/computer game -
    Guys, he was a real person ( http://en.wikipedia.org/wiki/David_Livingstone )
         not just a guy in a video game
    Record video, use screen capture, analog recording (pen & paper)
Empower developers - adopt one.
   Was that video really needed?

Myth 5 - ET is Only for Agile Team
Pete comments 
   - what?
   - CMMi works with ET?  REALLY?  By what definition of "CMMi Works?"

Myth 6 - ET is not documented 
Some testers do things by NOT following the "Lonely Planet" guide
And then there are the ones who only DO the things in "Lonely Planet"

Pete comments - here to end
  - stretching the metaphor from Whittaker's tours just a little?

What?  " They don't do TDD with ET?"
Boys - TDD is a DEVELOPMENT tool - not a TEST TECHNIQUE
ET is an APPROACH not a TECHNIQUE

DIFFERENCES MATTER.  (shouting in my blog - not the room)

===

Keynote - Dan North (@tastapod) - Accelerating Agile Testing - beyond automation

Opening assertion - Testing is not a role, it is a capability.

The question is - How do Agile teams do testing and how does testing happen?

Much effort is put into things that may, or may not, move us along.  The idea of backlog grooming is anathema to Dan North.  (Pete - something to that)   The thing is, in order to improve practices, we need to improve capabilities.  When people are capable of doing something, it makes it easier for them to actually do that thing.

We can divide effort into smaller pieces; sometimes this makes sense, sometimes there are problems.  Sometimes there is a complete breakdown in the economic balance sheet of the software.  When they shift to "short waterfalls" you get "rapids."  Rapids are not the same as "rapid development."  Sometimes things don't make it better.

"User Experience is the Experience a user has." (OK - that was a direct quote.)  Translated - people will have an emotional reaction (experience) when they use the software/app/whatever.  Thus, people line up all night and around the corner to buy the newest Apple device.

"Don't automate things until they are boring."  If you are 6 sprints into something and have not delivered anything to the product owner/customer/etc., you are failing.  They can have developed all the cool interface stuff, test engine, internal structure - but if the product is not being produced - you are failing.

You have to decide the stuff you want to do - and base that on the stuff you choose not to do -

Opportunity cost - all the other things you could be doing if you weren't doing what you are.

The problem of course is we may not actually know what those things are.  The question of what can be tested, and the actual cost of doing that testing, is a problem we may find hard to pin down, let alone understand.

When there are problems, we need to consider a couple of things: Is it likely to happen again?  What is the chance of that happening again?  How bad will it be if it happens again?  These lead to "if that happens here, how bad will it be?"

Thus Netflix (more traffic than porn, by the way) does not worry too much if a server is down - they (and Chaos Monkey) may be interested in what portion of their thousands of servers are down right now.  How much of the total is not available?  Since failure of some portion is a given, why do we pretend it must be avoided?
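Pete: the mindset shift in toy form - a sketch with numbers I made up (a 1% per-server failure chance; nothing to do with Netflix's real figures):

# With thousands of servers, some fraction is always down - so measure
# the fraction rather than panicking over any single failure.

import random

def fraction_down(servers=1000, p_failure=0.01, trials=100):
    down_counts = (
        sum(random.random() < p_failure for _ in range(servers))
        for _ in range(trials)
    )
    return sum(down_counts) / (trials * servers)

print(f"average fraction down: {fraction_down():.3%}")  # ~1%, by construction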

Cites xkcd AND the Leprechauns of Software book - stuff we "know" that is bogus.  Many of the things we believe have little or no evidence supporting them.

Discusses coverage - look at the important bits of the product, then see what makes sense.  The stuff that is high-impact and high-likelihood of failure had better get a whole pile more test effort than the stuff that no one will notice or care about if it DOES fail.
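Pete: that likelihood-times-impact ranking, sketched - the features and scores below are invented by me:

# Rank test effort by likelihood-of-failure x impact-of-failure.

features = {
    # feature: (likelihood 1-5, impact 1-5)
    "payment processing": (4, 5),
    "search autocomplete": (3, 2),
    "admin report footer": (2, 1),
}

ranked = sorted(features.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{name}: risk score {likelihood * impact}")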



The question around this is CONTEXT - the context drives us - if it doesn't, we are wasting time and money and losing credibility amongst thinking people.  We can get stuff worked out so we get 80% coverage of something in testing, but if it is a context that is irrelevant, it doesn't matter.


Stakeholders, product owners, etc., MUST be part of the team for the project - they are also part of the context.  However - we must know enough about something to know if it is important, OR by what model it is important or not.  Without these things we cannot appreciate the context.

Doing these things increases the chances that we have a clean, solid implementation - which makes ops folks happy.  They should be excited in a good way that we are implementing something - and looking forward to working with us to get it in.  If they are excited in a bad way about our deployments, we are doing it wrong.









TEST DELIBERATELY.

===

After spending time chilling in the hallway with people, conversing on a variety of topics - a needed afternoon off - it is time for Matt Heusser's keynote.  The scheduled talk is "Who Says Agile Can't Be Faster?"

Brief introduction of himself... developer/programmer - tester - agile guy - and ... author and - stuff.

After giving people a choice of topics - he launches into "Cool New Ideas and some old ones too."

And he gives away money... until he smacks the entire audience (except Seb Rose and those of us who heard him choose which game to try at the start).  We become complacent - relaxed - and fall into "automatic" responses.  A cool video on attention awareness (or lack thereof) launches him into his main theme.

We miss things unless we think about what we are not looking for in particular - like "and nothing else goes wrong."  Except that takes really hard work.

Presents Taleb's Black Swan work - risk at casinos - they protect themselves from cheating and fraud and ... stuff.  Except then the tiger mauled Roy of Siegfried and Roy.  There was insurance on the performer, who recovered - except the point of the show was to bring people into the casino to spend money.  They didn't come, so the casino lost a bundle.

Walks through several examples - some more dramatic than others.  A brief survey of problems and examples of types of testing.  (Pete: my favorite is "soap opera" testing, where you run through elaborate stories that "no user would ever do" - except this one does... what happens?)

Consider - coverage decays over time, but we're never sure which parts decay at what rate.  We become complacent with automated tests or scripted manual tests (regression or whatever), and the more complacent we become, the greater the odds that something will go horribly wrong.

This is the issue we all face whether we are aware of it or not.

Minefields!  (with a picture of a minefield)  We get complacent and forget about stuff.  It's so easy, because this always works - until something goes boom.

We MUST remember and keep solidly in mind that this is a risk.  Awareness of a problem does not eliminate it, BUT it helps us keep it in the foreground and not slip into "System 1" thinking (autopilot mode).

Presents and discusses a kanban board he used explicitly for test process/planning - the only thing on the board was testing stuff.  Thus - anyone can see what is being worked on in testing, AND anyone can ask about it.  When people ask "where are we?" - they can look at the board.

OK - Matt has moved on to his Titanic story ... (Pete: I need to talk with him about this... there are some... issues.)  BUT he gets his Boat into the presentation!!

===

Break - and Game night!

Signing off from Potsdam for the day -

PS:  Evening testing/agile games night was loads of fun.  Matt did his Agile Planning session game, I did a collection of games around estimation and pattern recognition - gave away Scrabble Flash and puzzles made from erasers.  Then more beer and conversation at the conference hotel's bar.