Wednesday, November 21, 2012

Agile Testing Days, Day 1: Workshops

Monday in Potsdam was a lovely day.  Yeah, a little foggy, maybe a little damp outside, but hey - I was inside where there was good coffee, a variety of juices, waters and the odd snack or two.  A nice luncheon with great conversation followed a very enjoyable breakfast with more great conversation - Oh, and Matt and I had another opportunity to present Software Testing Reloaded - our full-day workshop.  This time in conjunction with Agile Testing Days.

As usual, we totally messed up the room - this time the staff of the hotel were more amused than horrified.  The folks wandered in after coffee and light snacks and found us playing The Dice Game - Yeah.  That one.

In retrospect, it was a great ice breaker to get people in the room, involved and thinking.  It was a good warmup for what was going to follow.  So, we chatted and conducted the first exercise, had everyone introduce themselves, asked what they were hoping to get from the workshop.

I think Matt and I were equally astounded when a couple of people said they wanted to learn how to test and how to transition from waterfall (well, V-model) to Agile.  We gently suggested that the people who wrote the book were down the hall and perhaps that might be better for them - and reassured everyone that if they were looking for something more, they could either re-evaluate their choice OR they could hang with us.

So, after a couple of folks took off, and a couple more wandered in, we settled at 11 participants.  It was a lively bunch with a lot going on - great exercises, good interaction.  Kept us on our toes and, I think, we kept them on their toes as well.

Somehow, we managed a complete fail in getting to every single topic that people wanted us to talk about or do exercises around.  Ummm - I think our record is perfect then.  You see, there is always more for us to talk on than there is time.  That is frighteningly like, well, life on a software project.

We often find ourselves with more stuff to deliver in a given period of time than we can hope to deliver.  If we promise to give everyone everything, we really can't deliver anything.  Well, maybe that is a bit of a stretch.  Maybe it is closer to say we will deliver far less than people expect, and less than what we really could deliver if we prioritized our work differently in advance.

So, for Matt and me, we try to work our way through the most commonly occurring themes and address them to the best of our ability.  Sometimes we can get most of the list in, sometimes, well, we get less than "most."

Still, we try to let people know in advance that we will probably not be able to get to every single topic.  We will do everything we can to do justice to each one, but...

This got me thinking.  How do people manage the expectations of others when it comes to work, software projects and other stuff of that ilk?

How well do we let people know what is on the cusp and may not make the iteration?  How do we let people know, honestly, that we cannot get something in this release and will get it in the release after?

I know - the answer depends on our context.

In other news, it is really dark here in Potsdam (it being Wednesday night now.)

To summarize, we met some amazingly smart people who were good thinkers and generally all around great folks to meet.  My brain is melted after 3 days of conference mode - and there is one more day to go. 

I've been live blogging on Tuesday and Wednesday, and intend to do the same tomorrow.  I wonder if that has contributed to my brain melt.  Hmmmmmmmmmmm.

Auf Wiedersehen.

Agile Testing Days: Day 3 - LIVE in Potsdam!

My Wednesday begins in Lean Coffee - yeah there was one yesterday, and I missed it completely (yeah, I overslept.) So, here we are, roughly a dozen people sitting around a table.  Folks suggest a topic they'd like discussed and everyone votes.

This format is not unique, in fact it is used fairly often at various conferences, and, well, meet-ups.
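If you have never seen the mechanics, the whole format boils down to a vote-ranked queue of topics.  Here is a minimal sketch in Python - the topic names and vote counts are entirely made up, just to show the shape of it:

```python
# Minimal sketch of the Lean Coffee queue: propose topics, vote,
# then discuss in descending vote order.  Topics and counts are invented.
proposed = {
    "Training testers": 5,
    "Training in soft skills": 4,
    "Automation ROI": 2,
}

# The agenda is simply the topics sorted by votes, highest first.
agenda = sorted(proposed.items(), key=lambda kv: kv[1], reverse=True)

for topic, votes in agenda:
    print(f"{votes} votes: {topic}")
```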

So far, it is far too fascinating to be able to describe adequately, so I'm going to not even try.  It's good stuff.
---
Right, so sitting in the main hall before Jurgen Appelo's keynote this morning.  I have a moment to summarize the Lean Coffee discussion.  So here we go:

Brilliant.

Yeah, that probably is too short, but you get the idea.  Conversation was happening fast and furious around a handful of topics (we got through three I think - they kept getting extended...)  We talked on such topics as the challenges in training testers, training in soft skills and ... hey wait - two on training?  Yeah.  And they are related.  These are questions that all of us struggle with - mostly because it is hard.  Deep discussion with a lot of good ideas.

And now it is time for Jurgen's keynote.

---

Let's Help Melly: Changing Work Into Life

Right, starts out with some interesting stats.  A recent study concluded that over half of American workers hate their jobs.  It seems that this is a global problem as over half the people in the world hate Americans as well.  Wait.  No.  He was kidding.  (I think.)  The problem is that most workers in the world hate their jobs.  They seem to think this is a new phenomenon.  Jurgen suggests that this problem began some time back, like roughly, oh, 3000 BC (he may have said BCE, don't know - there is a guy with a veritable planetoid for a head sitting in front of me and I can't see the screen so well.)

He is going through some common ideas and tossing them aside.  Gotta love it when people do that.

Let's see - BPR, TQM, Six Sigma and the like, are extensions of the good old days of, oh, the Pharaohs.  There are loads of studies that show they don't really work, but that does not seem to change the attempts to "make things happen."

Partly because organizations tend to be organic - they are forms of communities that are ever changing.  Until people look into reinventing SOLUTIONS (not processes) things will never really get better.  This is essentially what the Agile "Community" is supposed to do (per Jurgen).

He's moved into setting aside SCRUM and Kanban - and is discussing Beyond Budgeting, Lean Startup.  OK - I'm liking the logical progression I'm seeing... This leads to this which conceptually leads to this which... yeah - and now Design Thinking.

The thing is, these are all attempts to correct problems where things are not working.  How do we make them work?  As a label, the stuff really is not that important.  The ideas are important.

How do we make things succeed?  Jurgen suggests we run safe-to-fail experiments, we steal (collect?) and then tweak ideas, collaborate on models and shorten feedback cycles.

The PROCESS is what we see - scrum/kanban boards, discussions, people working together and collaborating.  The stuff in binders, sitting in the closet, are MODELS.  All that formal book stuff does not promise success!

Jurgen is now citing Drucker, and is suggesting that just as "everyone should be involved in testing" then "everyone should be involved in management" (which by the way is similar to the word in Dutch, German and other languages meaning a place to keep horses.)

These ideas are core to his work on Management 3.0 (which is pretty cool by the way.)

For example, he cites a company with "Kudo Box" where people can drop a note to recognize someone's work, effort, help... something.  This can help lead to recognition - and acknowledgement of individuals (yeah - groups are made of individuals - so are teams and companies and... everything.)

Yeah, I can't type as fast as Jurgen is throwing out good ideas - too fast to really do him justice.

People are creative when they are playing - that is where they learn and get ideas - and wow - a highly modified football table (foosball for my former colleagues at ISD).  Lasers to detect scores, electronic score keeping, card readers to identify the players (which allows for cool score metrics) to video cameras to record goals and play back the BEST goals in slow motion.  Yeah - there's a company in Norway that does that.  Cool.  They also get really bright engineers.

Short feedback cycles are crucial - not just from customers but also from staff.  Jurgen now throws out a slide with a "Happiness Board" that is updated DAILY - so folks (like managers) can have a sense of the way folks are feeling RIGHT THEN - THAT DAY.

As opposed to "Yes, we are interested in how our people feel,  so we will send out surveys by inter office mail every 3 months."  Riiiiiiiiiiiiiiiiiiiiiiiiight.

So, in short - we CAN address problems, and make life and work-life better.  But we need to try, not just whinge about it.
---
In Dawn Haynes' presentation on Agile and Documentation... with UNICORNS.

Dawn is opening with a brief story of her professional development progression. "I was a massive test team of ONE... surrounded by lots of hardware.  It felt good."  Many folks I know have been in that situation.  Some do really well.  Some folks have a harder time.

It is frightening when Dawn's stories are soooooooooooooooo similar to mine.  Oh yeah, this sounds familiar: "When will this be done?" "Done?  NEVER!!!!!!!!!"

Well, ok, maybe that is a little harsh.  It just struck me as an important point.  Most folks need to learn to figure out how much has been done, and generally what still remains, according to some model.  Dawn then applied some thinking (way cool step!) and realized that you need to be able to remember more than one or two things at a time.

Her memory sounds a bit like mine - you can handle two or three things in the "memory stack" and anything more than that scrolls off.  So, she began with a simple spreadsheet that contained information on stuff to be tested - in a prioritized order.  (Pete Comment: Yeah, this is getting crazy spooky because she's describing some of the stuff I did early on as a tester.)

Her documentation allowed her to remember what she had done and what she still needed to do and sometimes dream up NEW things to try.  Now, this sounds good, except she managed to "melt" Unix devices.  Yeah, she rocks.

This is what documentation can do - help us remember and explain it to other people.

Now, some Agile pundits advocate NO documentation.  Real world people (not unicorn-land people) advocate "There is no need for documentation" and things of that ilk.  Dawn explains, quite clearly and simply, that sometimes you need to be able to demonstrate what you did.  Hence, documenting work that needs to be done.

Mind Maps, simple spreadsheets, and other tools can record what is done - what the thought process behind it was - and then the results.  Hey Folks!  There is a lightweight test plan/test result combination.  Keep an eye on what REALLY is needed.  You know, to communicate and ADD VALUE - that is kind of a big part of what Agile is supposed to be all about, no?

OK - I really liked this explanation.
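To make the idea concrete, here is a minimal sketch of the kind of lightweight plan/result record Dawn was describing - the field names and entries are my own invention, not her actual spreadsheet:

```python
# A lightweight, prioritized test-tracking record - the spreadsheet-style
# plan/result combination described above.  Fields and entries are invented.
from dataclasses import dataclass

@dataclass
class TestItem:
    area: str          # what is to be tested
    priority: int      # 1 = highest
    idea: str          # the thinking behind the test
    result: str = ""   # what happened, once the test is tried

plan = [
    TestItem("export", 2, "try a very large file"),
    TestItem("login", 1, "expired-password path"),
]

# Work the highest-priority items first; record results as you go.
for item in sorted(plan, key=lambda t: t.priority):
    item.result = "not yet run"
    print(item.priority, item.area, "-", item.idea)
```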

And now... the questions/answers are getting interesting... Can these techniques be used by more than one person?  Yes.  Yeah.  They can.

Q: How does this work for inexperienced, maybe unskilled testers that don't know the application?
A: Non-skilled, untrained testers? I don't do that!

yup.
---
So I scooted over one room to pick up Simon Morley's (yeah, @YorkyAbroad) take on CI and multiple teams and people.  And he's underway NOW.

You get complex systems that are difficult to test which gives us slow feedback, because there is so much and it takes a lot of time and ... yeah. You get the idea.

When stuff gets really hard to deal with, you invite other problems.  Simon is pulling in the idea of "Broken Windows" - which was a popular measure in crime prevention circles in the late 1980's and well into the 1990's.  Simply put, if a window is broken and not fixed, or at least not patched, then the message is that no one cares about it.  This leads to more vandalism, property damage, etc.  If this is not addressed, then more violent crime will creep into the area and things quickly spiral downward.  So, fix the windows, even in vacant houses, paint over the graffiti, etc., quickly.  Deal with them when they crop up.

In Software, the equivalent is "It was broken when we got it" or "That was broken 2 (or 3 or 10) releases ago."  If they do not get fixed, the message is "No one cares."  If no one cares about something, then getting it fixed later, and getting similar things addressed WHEN they happen, will likely get kicked down the road a bit, until it is 10 releases later and obviously, no one cared or it would have been fixed when it cropped up. (right?)  (Pete Comment: Some folks seemed a little confused over what the Broken Windows thing had to do with software quality in general or testing in particular. Ummm, think it through, please?)

So, when you have many people/teams working on interconnected products - or maybe multiple aspects of the same product, sometimes, folks need a tool.  Simon developed a series of spreadsheet-like tools to track progress and effort and results of tests.  These summary pages allowed a user to drill down into the individual reports that then gave you more information... yeah.  Pretty cool.
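Something in this spirit, perhaps - a rough sketch of the summary-plus-drill-down idea, not Simon's actual tool (team names and numbers are invented):

```python
# Rough sketch of a summary page with drill-down: each team files a
# detailed report, and the rollup shows one line per team.
# Team names and numbers are invented.
detailed_reports = {
    "team-billing":  {"passed": 40, "failed": 2, "notes": "timeouts in export"},
    "team-payments": {"passed": 55, "failed": 0, "notes": ""},
}

def summary(reports):
    """One line per team; the full report stays one step away."""
    return {team: ("GREEN" if r["failed"] == 0 else "RED")
            for team, r in reports.items()}

print(summary(detailed_reports))           # the summary page
print(detailed_reports["team-billing"])    # the drill-down
```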

Then - by doing this, everyone in the organization could see the state of testing and (important point here) the Delivery Readiness of the product.  Thus, integrating information - and changing the overall message can help direct thinking and people acting on cues.  Sort of like saying "We are not doing Continuous Integration because we are exercising Delivery Readiness artifacts."  Yeah, it might be considered semantics, but I like it.  It brings to mind that the real key is to get people thinking in advance "When will we be ready?"

That is an idea I find lacking in many organizations.  It is not that we don't ask that question, but we ask that question well after we should.  We ask it too late (in many organizations) to address the stuff that REALLY needs to be considered before any code is written.

Why?  Just as a tool (like CI) is not THE solution (although it can help) the "Complex System" is more than just software - it is the PEOPLE that interact and impact the development of the software that makes a difference.  It is the people who are the solution, not the tools.

(Pete Comment:  That last point is something that can bear some thought, and that is probably worthy of more consideration.  I find this particularly true at "technical" conferences.)

---
LUNCH TIME!!!!
---
This afternoon, Markus Gaertner kicks off his keynote with Unicorns & Rainbows... then shifts to Matrix memes.  Cool - of course his theme is Adaptation and Improvisation.

Right - All good rules should not be followed on faith.  He's 3 minutes into it and people are going "O Wow."

Using a common theme on mastery, Markus presents the concept of 10,000 hours of deliberate practice to achieve mastery.  Reasonable, no?  Add to that the self-discovery needed to work through ideas that enable you to write meaningfully, engage in various exercises and ... well, thinking.

Citing ideas from Weinberg's Becoming a Technical Leader (Pete Comment: excellent book) on how to be successful:
1. Learn from Failure;
2. Learn by copying other people's successful approaches;
3. Combine ideas in new ways.

Now citing Computer Programming Fundamentals - of which Jerry was a co-author in 1961:
"If we have mastered a few of these fundamentals along with the habit of curious exploration, we can rediscover special techniques as we need them." 

(Pete Comment:  OK, re-read that statement then check the date - yeah - 51 years old. Yeah.  I'm not sure we have reached that level yet.)

Markus slides into questions on Management and the difference between Managing and Controlling (Pete Comment: and Dictating).  The problem of course is that some folks have a hard time dealing with this because - well - with Controlling practices we may not actually allow people to learn and master and grow; these controls are self-limiting.  The negative feedback loop that ensues will doom growth - personal, professional & organizational.  (See Jurgen's keynote.)

Clarity, Trust, Power and the ability to Search/look are all needed to resolve problems and allow the team to grow.

Consider that delegation relies on intrinsic motivation, pride of workmanship (Pete Comment: Yeah, you can be a craftsman, really) and desire to please customers.

Allowing for accountability is the key to allowing teams and organizations to grow.  Yeah.  That is pretty significant.  The approaches to testing reflect the culture of the organization and the world-view of the management.

You MUST have trust among and between the team members.  Without this, there is no improvement.  Fear of Conflict is the next concern.  Conflict CAN be good - it helps us understand and resolve questions around our understanding.  Lack of Commitment - not the Scrum thing, but the question around what we do and commitment (professionalism?) to getting the best work done we can.  Related to that is Avoidance of Accountability - yeah - the "It's not my fault I did this wrong."  This is - well - silly.  Finally, the inattention to Results - the question of "What did you DO?"

In an Agile environment, this kills chances of success.  If your team has these things (any of them) going pear-shaped, you get dysfunction.

Consider the idea of Testers need to be able to Code/Program.  If we can grow a mixed-skills body, where people have similar skills but in different combinations, we can move from Testers & Programmers to Programming Testers and Testing Programmers.

The difference between them can be negligible.  When we are open to it.  Communication and mutual support are needed.

In the end, we each must decide whether we will take the Red Pill, and become fully Agile (or agile) or the Blue Pill and stay as we are.

---
After taking a break, chatting with people and getting a bit of writing done, BACK ONLINE - in Henrik Andersson's presentation on Excelling as an Agile Tester.  Yeah.  Sounds like fun AND WE'RE OFF!!!!!

Henrik is a good guy - Member of AST, has attended/presented at CAST, participated in Problem Solving Leadership and AYE - and presented at a pile of conferences.  Smart, outgoing and a good thinker.

In discussing Agile Testing, Henrik tells us that the acronyms tend to carry a fair amount of baggage with them.  The TDD, BDD, ATDD, and other acronyms tend to fall into the realm of Checking - in the model described by Michael Bolton.  In his estimation, the techniques embodied by those terms are actually design principles to facilitate clean, clear, simple design - and inspire confidence.  Frankly, it helps programmers know what to do and when to stop coding.

Why take those away from programmers?  Why add an extra feedback loop to get a tester in the middle?

Henrik's key point :  DON'T TAKE THE VALUE AWAY FROM THE PROGRAMMER.

But don't worry, there is still a need for testers in an Agile environment (Pete Comment: or any environment for that matter.)

Citing the works of Weinberg - A Tester is one who knows that things can be different.  Citing the works of Bach and Bolton - Testing helps us answer the question "Is there a problem here?"

A tester should be driven by questions that haven't been answered, or asked, before.

(Pete Comment - OK: Henrik is currently describing my views really closely, I am hoping there is not some confirmation bias on my part.)

So, what can an Agile Tester do?  How do you become EXCELLENT?

Pair: with a product owner on the design of acceptance tests; with the PO when doing ET sessions; with the PO to understand the customer; with a programmer on checking; with a programmer to understand the program.

(Pete Comment: Yeah, I find nothing to fault in this logic - see my above comment.)

And out comes James Bach's comment - "I'm here to make you look good."  And Henrik's corollary: "I'm here to make us look good."

It is a subtle distinction, but an important one.  A tester in an Agile environment can cross multiple conceptual walls (Pete's description) across the entire team.  By working with the Product Owner as well as the Programmer(s) and the rest of the team to gain understanding, they can also help spread understanding and help manage expectations and guide the team toward success.

Sometimes gently, sometimes, well, kinda like Mr T on the A-Team television show.  (Pete Comment: Is that still running somewhere?)

When testers are participating in Sprint planning - they also need to remember to engage in Daily Test planning.  Henrik recommends planning these by Sessions (described by Bach) of 90 minute blocks of uninterrupted, guided/dedicated testing.  With Charters to guide each session - guidelines for what functions/portions are intended to be exercised each session - these charters can be placed on the Scrum board (or Kanban or... whatever) as you plan them.  Then also - put your BUGS on the same board.
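A minimal sketch of what one of those charter cards might hold - the structure and field names are assumptions on my part, following the session-based approach Henrik described:

```python
# A session charter as a card for the board.  Structure and field names
# are assumptions, following the session-based idea described above:
# a 90-minute time box, a charter, and bugs surfaced onto the same board.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionCharter:
    charter: str                # what this session intends to exercise
    minutes: int = 90           # the uninterrupted time box
    bugs: List[str] = field(default_factory=list)  # bugs go on the board too

session = SessionCharter("Explore invoice rounding near currency limits")
session.bugs.append("Rounding differs between screen and PDF")
print(session)
```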

Don't let them be HIDDEN - particularly in a bug system with limited visibility.  Look into recording it accurately - Then get them out in front of the world.  Other people may notice something or may possibly be able to provide insight into the results.  Most important, it lets everyone know what you are finding.

Taking the results/findings of the sessions back to the group helps us understand the state of the product and the state of the project.  This can be done at the daily Scrum (in addition to the notes on the board.)  Grouping the information needs to make sense.  The point is to give a nutshell - not the whole tree.  Make things clear without bogging things down in detail.

Understand how the User Stories relate to which Functional Areas and the corresponding sessions for each, along with the "Health" of the system.

Generally, Henrik finds that giving information on the Health of the system is of greater value than the typical "this is what I did, this is what I plan to do, these are the problems..."  This report may include session burndown charts and the TBS (Test, Bug Investigation and Setup) metric - along with the time spent in learning, performance testing, business facing work and integration.  These pieces of information give a reasonable image of the state of the testing right then.  Getting to know the rhythm (Pete: there's that word) of the application and the project is crucial to understanding the application and the speed at which information is being obtained.
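For the arithmetic-minded, the TBS breakdown is just proportions of session time - a tiny sketch with invented numbers:

```python
# The TBS (Test / Bug investigation / Setup) breakdown is just the
# proportion of session time spent on each activity.  Numbers are invented.
session_minutes = {"test": 55, "bug_investigation": 20, "setup": 15}

total = sum(session_minutes.values())
tbs = {activity: round(100 * minutes / total)
       for activity, minutes in session_minutes.items()}

print(tbs)  # -> {'test': 61, 'bug_investigation': 22, 'setup': 17}
```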

Whew.
----
Closing Keynote for the day by Gojko Adzic - Reinventing Software Quality.

Opening Gambit - We are collecting all the wrong data, reporting it in all the wrong ways and wasting time with both when there is much more reasonable, better data to collect.  (Pete Comment: That big CLANG was the armoured gauntlet being thrown down.)

Using his Specification by Example book as an example, Gojko launched into a story of how he found 27 problems that he considered P1s - as in parts of words could not be read or stuff was mangled or... yeah.  Problems.  SO... He spent MONTHS going through the book looking for every possible problem and recorded them.  Then, he went back to the publisher with his "bugs."

"Don't worry about it."

WHAT? Look at all these bugs!  - And they showed him the reviews posted online, etc.  All but one gave it full marks and rave reviews.  The one negative review had nothing to do with the production quality of the book itself.

Why?  Because he was looking at this in a linear way - looking at bugs in the PROCESS - not performance in the marketplace.  Not customer satisfaction - not whether they liked the product or it did what they needed it to.  In his estimation, ET, SBT and the TDD, ATDD stuff are all detecting bugs in the process - not bugs in market viability.

Now - one MAY impact the other - but do we have enough information to make that decision? Maybe - Maybe not.

His comparison then moved to driving a car.  The lane markers and seat belts help us move the car safely and make sure that we are not injured if something goes wrong.  They don't provide any information on whether we are going the right way.  Other things may - road signs for example - and sometimes those are missing or obscured.  GPS systems can also help - and generally show your distance from your destination - IF you know where you want to go in the first place.

Another Example - Blood pressure is generally an indicator of human health.  Generally, reducing blood pressure, if the blood pressure is too high, may be a good idea.  The way you reduce it may or may not help.  For example - a little red wine may help.  Maybe some medication may help.  Chopping off the head will certainly lower blood pressure, but may negatively impact overall health.  (Pete Comment: I wish I had dreamed that up.)

We talk about using User Stories and ... stuff to promise "good software."  Yeah, well, some user stories are good, some are... rubbish (Pete edited that word.)  A good User Story talks about what is going to be different. And how it will be different. Measure the change!

And moving on to the self-actualization table - as he applies it to Software.

Deployable & Functional - Yeah - the stuff does not break.  Good.  Does it do what we want it to do?  That is another question.  Is it a little over-engineered?  Being capable of scaling to 1,000,000,000 concurrent users may take more work than really needed.

Performant & Secure - Yeah.  Does it work so it does not spring leaks everywhere?  OK.  Consider Twitter - ummm, Gojko is pretty sure they do some performance testing - how much, who knows.  Still, the fail-whale shows up pretty frequently.

Usable - OK, so it is usable and does not blow up.

Useful - Does it actually do anything anyone wants?  If not - WHY DID YOU DO IT???

Contribute to Success - The Hoover-Airline example is one to consider.  Some years ago, Hoover Vacuum ran a promotion where if you bought certain models of their vacuum, they'd give you airline tickets.  Except they made it pretty open ended - like - intercontinental.  And they were giving away airline tickets worth far more than the vacuums they were selling.  On top of that, they were getting scads of complaints - and between trying to buy airline tickets and dealing with problems, the UK subsidiary went broke - and they were in court for years.

Another problem - the "success rates" for training.  Typically we simply do it wrong and look for the wrong things to give us the information.  He cites Brinkerhoff & Gill's The Learning Alliance: Systems Thinking in Human Resource Development.  Generally, we fail to look at where the impacts are felt by the company and what the actual results, of training and software, actually are.  We are deluding ourselves if we are tracking stuff without knowing what data we need.

So, Gojko provides a new Acronym for this consideration:  UNICORN:  Understanding Needed Impacts Captures Objectives Relatively Nicely.

The Q/A is interesting and way too fast moving to blog - check out twitter.... ;)

whew - done with engines for the day.

Thanks folks.



Tuesday, November 20, 2012

Agile Testing Days: Day 2 - LIVE in Potsdam!

Right.  Here we are in Potsdam, Germany for Agile Testing Days, after a day of workshops.

Here we go!

Scott Ambler is opening his keynote throwing down the gauntlet, planting his flag and a couple other euphemisms.  Starts out with the challenge that much of the rhetoric around the Agile community is incorrect.  How bad is "Waterfall"?  Try finding a way to do a financial transaction that does not encounter, at some point in its life, a software system created by a waterfall methodology.

Next point - interesting - Don't worry about Process, worry about getting work done.  Process stuff is BORING!  Worse, most of the folks who were early into Process were speaking from theory and not from practice.  Agreed, Scott.

Instead of starting with something big and moving to the middle, or starting with something little and moving to the middle - skip the rubbish and start in the middle (duh.)  Yeah, I agree with that as well.  Data Technical Debt is the killer in the world today, introduced by people who are not enterprise aware.  Frankly, Scott is hitting a lot of points - many of which I agree with, and interestingly enough, Matt and I touched on in our workshop yesterday.  He touched on being at least Context Aware and recognizing that some practices don't work in every situation.  He also challenged the notion that some of the "rules" of Agile are as agile as folks would make them out to be.  OK - he has me now.

One point, if you look at how Scrum actually works, it is linear, right?  You Plan, then Do, then... wait. That sounds like Waterfall.  Hold on, how can that be?  That sounds like not agile!  Advocating improving activities, software design and development is core to delivering in "the real world."  (Real world count is 4, maybe 5 at this point?)

His DAD process lays claim to taking the good stuff from SCRUM and RUP and making sure people are able to deliver good software.  Others in the room are noting that the presentation demands "real data" without presenting any "real data" - OK - I get that criticism.

Can Agile Scale?  Can a team of 50 make the same decisions as a team of 5?  (I can hear rumblings about how 50 is "too many" to be Agile.)  Yeah, I get that too.  Except large companies often act that way in practice, instead of what they should do or say they do.

In practice, organizational culture will always get in the way if given an opportunity.  Groups of 5 will have different dynamics than a team of 50.  OK.  He then talked about being "co-located" which most (maybe all?) forms of methodologies call "preferred."  The question is, are they still co-located if the group is working in cubicles?  Are they really co-located if they are in the same building but not in the same area?  What about when you have people from different departments and/or divisions involved or ... yeah, you get the idea.

OK - He just scored major points with me - "I don't trust any survey that does not share its source data."  He goes on to explain that the data from his surveys, and supporting his assertions are available for review, and free to download if you contact him.

Scott moves on to talking about scaling Agile and Whole Team and - yeah.  He observes that sometimes there are regulatory compliance situations where independent test teams are needed and asserts that you should read and understand the regulations yourself - and NOT take the word of a consultant whose income is based on... interpreting regulations for you and then testing the stuff you need to do.

Scott moved on to questions of parallel testing for integration testing, then leaped back to his Agile Testing survey results.  He has an interesting slide on the adoption of "Agile Practices" like Code Analysis, TDD, etc.  One thing I found interesting was the observation on the number of groups that kind of push testing off - like maybe to a different group?  Is it hard?  (well, yeah, maybe)

Another infographic showed the number of developers not doing their own testing.  (Wait, isn't the point of "whole team" REALLY that we are all developers?)  I am not certain that some of the results that Scott is presenting as "worrying" are really that bad.  Matt Heusser reports that his last client did Pairing & Reviews - so are informal reviews being done by "only" 68% a problem?  Don't know.  In my experience, Pairing is an organic review environment.

Overall, I quite enjoyed the keynote.  NOTE: You don't need to agree to learn something!  :)
 ----
Moving on to the first track session after a lovely round of hallway conversations.  Peter Varhol presenting a talk named "Moneyball and the Science of Building Great Agile Teams."

Opening is much what one might expect from a (fellow) American explaining the essentials of Moneyball (anyone remember how bad Oakland was a few years ago?) and the impact of inappropriate measures.  Yeah, (American) baseball has relationships to Agile - people need to respond to a variety of changes and - frankly, many people do a poor job of it.  Both in software and in sports.  Essentially - to win games (or have successful implementations) you need to get on base, any way you can.

Thinking Fast and Slow (Kahneman) being referenced - yeah - good book.  

Peter is giving a reasonable summation of biases, impressions and how thinking and actions can be influenced by various stimuli.  For example "This is the best software we have ever produced" vs. "We are not sure how good this software is."  Both may be true statements about the same release!

One may cause people to only brush over testing.  One may lead you to aggressively test.  Both statements may impact your approach depending on the nature of your thinking model.

He's moved on to being careful with over-reliance on heuristics - which are by their nature, fallible.  They are useful, and you need to be aware of when you need to abandon one given the situation you are in.

He also warns against Kahneman's "Anchoring Effect" where one thing influences your perception of another, even when they are totally unrelated.  In this case, roll dice and come up with a number, then answer the question "How many countries are on the continent of Africa?"  The study he cites showed that when people had no idea, and were guessing, the higher the number rolled with the dice, the higher the number of countries were guessed.

OK - Really solid point: We want things to be better.  The result is that many people will tend to wish away problems that were in earlier releases.  Folks tend to HOPE that problems are gone, without confirming.

--

Leaving Peter's presentation I was stopped by, well, Peter.  We had a nice chat on his presentation.  Alas, the ending was a bit rushed, and some of the "pulling together" ideas were a challenge for some of the audience, but it was solid.

I then ran into Janet Gregory - very nice lady - who was kind enough to autograph my copy of Agile Testing.  THEN!  I ran into yet another great mind - Markus Gaertner - who was good enough to sign the copy of How to Reduce the Cost of Software Testing - to which he contributed a chapter.

Then I realized I was late for the next session, where I am now - Cecile Davis' "Agile Manifesto Dungeons: Let's go really deep this time!"  As I'm late, I missed the introduction.  Alas, one exercise was wrapping up.  Bummer.

However, she was using children's blocks as an example of people working together, testing early, collaborating.  Cool idea.  I am going to need to remember this.

I like how this conversation is resolving.  I wish I had caught the beginning.

The Principles behind the Agile Manifesto boil down to fairly simple concepts, that can be a challenge to understand.  We need to communicate clearly so everyone understands what the goal is.  We need to have mutual understanding on what "frequent" means - how often do we meet/discuss? how often is code delivered?

What do we mean by simplicity?  If we reduce documentation, or eliminate formal documentation, how do we ensure we all understand what we have agreed to?  These are thoughts we must consider for our own organization - no single solution will fit everyone, or every group.

"When individuals feel responsible, they will communicate."

Yeah, that is kind of important.  Possible problem - when people feel responsible for the team, instead of for themselves, they turn into Managers, who are likely no longer doing (directly) what they really like doing - Or they burn out and fade away.

In the end, it is about the team.

To get people functioning as a team, one must help then feel responsible as a team - not a collection of individuals.  Then, they can communicate.

--
LUNCH TIME!

--

After a lovely lunch and wonderful conversations, we are BACK!

Next up is Lisa Crispin and Janet Gregory - yeah, the authors of Agile Testing.  The book on... yeah.  Cool.  Their topic: Debunking Agile Myths.  And they start with a slide of a WEREWOLF!  Then they move to a slide of MEDUSA - and they put on "medusa headbands."

Let's see....

Testing is Dead - Maybe in some contexts.  When Whittaker said that, it addressed some contexts, not all.  The zombie testers (thanks Michael Kelly), unthinking drones, need to be buried once and for all.  The others?  Ummm, not so much.

ATDD/SBE tests only confirm behavior - yeah, unless you are looking to distinguish between Checks & Tests (as defined by Michael Bolton.)  Knowing that difference is crucial.

Testers must be able to program (write production code). - And their UNICORN (#1) appears!  Do all testers need to write code?  Well, maybe at some companies.  Except in some circumstances, what is really needed is an understanding of code - even if what is needed is not really programming.  Maybe it is broad sets of skills.  Maybe the real need is understanding multiple crafts - testing and... other things.  One cannot be an expert in all things.  No matter how much you want to be.

T-Shaped skills - the whole breadth/depth balance is crucial.  Maybe technical awareness is a better description?

Agile teams are dazzled by tools - OOOOOoooohhh!! Look!  Bright! Shiny! New! Wow!  We need to have MORE TOOLS - or do we?  What is it that fascinates people with tools?  The best ones help us explore wider and deeper.  We can look into things we might otherwise not be able to look into.

As long as they can help foster communication, in a reasonable way (I keep using those two words) tools rock.  Just don't abuse them!

Agile teams always deliver software faster. - With a DRAGON!  Looks like a blue dragon to be precise... yeah, the buzzwords are killer... sprint, iteration, stuff people really don't understand.  Let's be real though.  Sometimes - like always - when you change something - like a team's behavior - the team is likely going to slow down.  It takes a while to learn how to do these new things.

Alas, sometimes it takes longer to UNLEARN old behavior (like all of them) than it does to LEARN new behavior.

Allowing people to learn by experimentation is important, and can help them learn to perform well - you know in a flexible, responsive way.

The result of doing agile well is better quality software.  Speed is a byproduct!

---

Another break and I wandered into the Testing Lab where I found people diligently testing a Robot, which reacts to colo(u)rs - with the goal being to determine the rules behind how the robot responded.  There was a version of Michael Bolton's coin game going, and a couple of interesting apps being tested - one by none other than Matt Heusser!

Wandering back to the main reception area where I stumbled onto a handful of folks who were excitedly discussing, well, testing.  Trying to get caught up with email and twitter feed, I realized that Lisa Crispin was in the "Consensus Talks" (think Lightning Talks). 

Stephan Kamper gave a nice summary of how he uses PRY to get good work done.  This is a Ruby tool that simply seems to do everything he needs to do.  "Extract till you drop" is probably the tag line of the day (other than the Unicorn meme running thru things).  Pretty Cool.

Uwe Tewes from Gemalto gave a summary of how his organization engages in Regression, Integration and other tests including UI testing.  It was pretty cool stuff.

---
And after a break - Sigge Birgisson's presentation - the last track session of the day on Developer Exploratory Testing: Raising the Bar.  He starts out well, with the "Developers Can't Test" myth, into which he manages to include the now seemingly mandatory Unicorn image.  Yeah, Unicorns started showing up after the opening keynote.

His gist is that Developers want to be able to be proud of their work - to show that they are producing stuff that is not only good, but great.  The problem is, most developers have little or no training in testing methods, let alone Exploratory Testing.

The solution he used was to introduce paired work, with training on ET and Session Based Testing.  The people working together build understanding of what the others are good at, helping garner respect.  It was a huge gain for team cohesiveness ("togetherness" is the word Sigge used) along with encouraging developers to build a more holistic view of the product.  Good stuff, man.

Using a fairly straightforward Session (Time Box) Method, testers pairing with developers are able to do some really solid work.  It also helped developers understand the situation, and be able to do good work.  Frankly, it is hard for people to keep their skills sharp if they are not engaged fairly frequently in an activity.  For Sigge, this meant there might be some significant breaks between when developers can actually be engaged in testing from one project to another - meaning they do testing work for a while and a sprint or two later, they are diving into testing again.

So with some simple mind maps (Smoke Testing Description for example) and a little guidance, they are able to be up and running quickly after each break.  Cool.

He's reminding us that we need to keep focused on the needs/concerns/value of the project.  How we do that will need to vary by the Context.

And in talking about Stakeholder Involvement, he flashes up a picture of a roller-coaster, and talks about keeping people involved, in the loop, and taking them for a ride.  (Groan)  But really, it's a pretty good idea.

He describes the involvement of stakeholders, their participation in "workshops" (and not calling them "test sessions"), and focusing on paths that are reasonably clean, then branching out from there.  Yeah, there may be a bit of a possibility of confirmation bias, BUT - this allows them to start from a safe place and move forward.

With testers working with developers and stakeholder/customers, there is less thrash, and the ability to communicate directly, and manage expectations.  Again - a cool thought (I wonder how many more cool thoughts he'll have before the end of this session.)

Yeah, whining stakeholders - the ones that latch onto one thing that is not quite right - can slow things down.  Sometimes keeping people on track is a great challenge.  (Pete's comment: That is not unlike any other team in my experience.)

So, Sigge comes away from this convinced that Developers really CAN test.  Business Stakeholders/customers can test very well.  He reports no experience with Unicorns testing his applications.  (Pete Comment:  To me this is an incomplete consideration.  I would expect this covered in future releases/versions now that Sigge has seen the significance of the Unicorn Factor.)

--

Closing keynote of the day is just starting with Lasse Koskela speaking on "Self Coaching."  Yeah.  Interesting idea. When a speaker states "This doesn't really exist.  If you Google it, you get one book that has nothing to do with what I am talking about."  OK.  I'm interested.

And he got in the obligatory "Finland is in Europe" joke.

After admitting he is a certified Scrum Master, he claimed to be a nice guy.  (Pete Comment: OK, we'll give him the benefit of the doubt.)

Lasse begins with a series of skills needed for self coaching.

Understand the Brain - He talks about basic brain function - like if the brain detects a large carnivorous animal, a reasonable response might be the brain sending a message that says "RUN!"  There may also be a message that says "Be afraid."  At some point after running and after being afraid, another portion kicks in with a cognitive response, and maybe that will tell us to look and see if the carnivorous animal is still chasing us and whether we should still be afraid.

After verifying the animal is no longer chasing us, a retrospective might inform/influence our threat/reward models.  This in turn can be informed by several considerations:
Status (is it current RIGHT NOW?)
Certainty (is it plausible that this is not as definite or maybe more definite than we think?)
Autonomy (are we in control?)
Relatedness (what is our relationship to this situation - have we been there before?  What about other people we know?)
Fairness (pretty much what you might think)

The issue is that perceived threats tend to outweigh rewards in this - so it takes many good experiences to outweigh a single bad one.  This may be a challenge.

Reduce the Noise

In looking to overcome obstacles, we need to reduce - if not eliminate - the noise emanating from our own head.

Encourage good behavior - the visualization thing - and frame it in what you want to do, not what you are afraid of/do not want to have happen.  Funny thing - in golf, when folks tee up a shot - if that one goes bad and they need to tee up another, the result is rarely the same mistake.  It actually tends to be the opposite.

Part of the problem is once a bad thing happens, we tend to focus on that - a combination of "I can't believe I did that" (damaged self ego) and "DON'T DO THAT!" (undermining intent by focusing on what you do not want.)

Ernie Els, the golfer, is a good example of how to sort this out.  He went from having a terrible year to rebounding back and leaving every other golfer in the dust.

Stopping the Brain - 

Cognitive Dissonance

That icky feeling when two pieces of information conflict.  For example "You did it all wrong!" when you believe that you completed the task perfectly. 

When our (Ladder of Inference) Reality & Facts / Selected Reality / Interpreted Reality / Assumptions / Conclusions / Beliefs & Actions are not based in fact & are essentially false in nature, we are setting ourselves up for failure.  Getting out of this conflicting "box" reality is a huge problem - and is a significant portion of the problem set.

Changing that requires us to be aware of what is going on - we have something to do that may conflict with what we really want to do, and then we are faced with a choice - we can either address one level of "right" or go with what is a negative reaction.

He is describing a "Frame" model similar to Michael Bolton's Frame model.

So - to summarize - Pause frequently and evaluate your own reasons; Check your own thinking; Obey the sense you have of right and wrong.

AND THAT WRAPS UP DAY 2!!!!

Sunday, November 11, 2012

What Makes Software Teams that Work, Work?

In pulling together some notes and reviewing some papers, I was struck by a seemingly simple question, and as I consider it, I pose it here.

Some software development teams are brilliantly successful.  Some teams are spectacular failures.  Most are somewhere in between.

Leaving the question of what constitutes a success or failure aside, I wonder what it is that leads to which result.

Some teams have strong process models in place.  They have rigorous rules guiding every step to be taken from the initial question of "What would it take for X?" through delivery of the software product.  These teams have strong control models and specific metrics in place that could be used to demonstrate the precise progress of the development effort.

Other teams have no such models.  They may have other models, perhaps "general guidelines" might be a better phrase.  Rather than hard-line metrics and measurement criteria, they have more general ideas.

Some teams schedule regular meetings, weekly, a few days a week or sometimes daily.  Some teams take copious notes to be distributed and reviewed.  Some teams have a shared model in place to track progress and others keep no records at all.

Some of each of these teams are successful - they deliver products on time that their customers want and use, happily.

Some of each of these teams are less successful.  They have products with problems that are delivered late and are not used, or used grudgingly because they have no option.

Do the models in use make a difference or is it something else?

Why do some teams deliver products on time and others do not?

I suspect that the answer does not lie in the pat, set-piece answers but somewhere else. 

I must think on this.

Wednesday, November 7, 2012

Weird in Portland, PNSQC Part II

For reference, I had never been to Portland before my trip out to PNSQC this past October.  I searched for hotels and eateries and transportation information and (being who I am) Irish Traditional Music Sessions and Pipe Bands.  I had been on the MAX - the light rail system - for 10 minutes when I realized I had failed to search for one key thing....

Weird.

Really.  Any city that has a website, bumper stickers, billboards and ... stuff dedicated to Keep X Weird gets huge bonus points in my book.  Massive points.

I met a whole passel of dedicated, creative, passionate, exuberant people who were excited to be in a place talking about software and quality and engineering and the like.  I've been to conferences before (yeah, the blog has a fair number of references to them) and have seen energy like this and have fed off it to get through a long week.  This was different.

Let's see, how was it different?  Well, I hung out a lot with a bunch of people I had met but did not really know.  I also met people I had never met in person before - but had read their writings, blogs, articles and the like.

This was folks like Michael Larsen, crazy smart and all around nice guy with way more energy than I can muster.  Ben Simo, yeah, Quality Frog - another crazy smart guy who has lots of cool ideas and thoughts and is also way nice.  The three of us went to lunch the Sunday before the conference started.  It was one of the most amazing conversations I can remember having in some time.

We covered the range of good quality Mexican food (we were eating at a place with fairly few non-Hispanics eating) to Tex-Mex to South-Western to - Oh yeah.  Software testing, software development, cool tech stuff to bands to music to ... what WAS that stuff Michael was drinking?  (a lightly carbonated fruit juice bevvy - pretty good actually.)

We experimented with artisanal chocolates (it's Portland, EVERYTHING is made by an artist) on the walk back to our hotel, while discussing the amazing variety of food wagons that were parked (some looking like they were more permanent than others).

Included in this was a discussion on the huge variety of opportunities for exquisite food and remarkably enjoyable people watching and meeting.  I know.  Weird, right?

The Conference

Instead of a blow-by-blow description of events and activities, I'd suggest checking out Michael Larsen's way-cool live blog posts.  The weird thing was that it seemed like every time I looked around in the session I was in - there he was typing away and way into the topic.

Michael -Dude - you are so inspirational it is crazy.  Here's what Michael had to say...

Day 1 
Day 2
Day 3

OK, so my personal remembrances of the conference - I had a nice coffee and walked to the conference site from my hotel - not the conference hotel, but nice and not too far.  I found myself focusing on nothing at all and simply drinking in how walkable the city is and how good the coffee was and ... why was I 4 blocks beyond the conference center?  Really.  Cool city with lots of things to see and small, comfortable parks.  Nice.

My Day 1.

So I scurried back to where I was supposed to be, grabbed another coffee, registered at the front desk and promptly met Michael Dedolph and then Terri Moore and then - a bunch of very friendly people.

It was pretty exciting - the auditorium was packed for the opening keynote by friend, colleague and sometime partner-in-crime Matt Heusser.  The fact is, there was not a seat left in the room; a bunch of people (myself included) were sitting just outside sipping wireless, power and coffee, and listening to Matt over speakers set up for that purpose.

I quite enjoyed Matt's presentation on Quality and manufacturing and software and... stuff.  I was astounded (and humbled) when he mentioned me toward the end.

I found myself in some great conversations, bailed over to the "birds of a feather" lunch session on testing in the cloud, hosted/moderated by Michael Dedolph.  I then wandered off to a couple of other presentations - then was drawn into Doc Norton's presentation on "Growing into Excellence" - it was really solid, dealing with encouraging and growing people to do better and... yeah.  Good stuff.

This set up Linda Rising's presentation on Agile Mindsets and Quality.  It was... Wow.  Consider the similarities between people who are willing and able to adopt multiple views, consider a variety of information and approaches and select the appropriate solution based on the needs of the project.  Pretty agile, no?  How we communicate with people, starting in primary school and running through until finishing - high school or beyond - colors their expectations based entirely on what we praise and reward.  Thus, the question is, what do we want to reward?  Yeah.  I quite liked that.

Day one ended with a session with Matt on metrics.  It was kind of a fun discussion on "Santa Claus, the Easter Bunny and Quality Metrics."  It also was a good way to wrap up the day.

My Day Two.

Day Two started out similarly to day one.  Had a nice breakfast with Matt Heusser and Michael Larsen and Ben Simo, grabbed a coffee to go and headed out for a very nice walk to the conference center.  It was quite nice - and remarkably short when one pays attention to where one is going.

The morning keynote was by Dale Emory on "Testing Quality In."  Yeah.  It's one of those topics we dance around at best, and generally reject as naive and simplistic.  Unless one considers another oft-cited testing truism "Everything can be tested" - including requirements, design, everything.  Test stuff early and the product we get to "test" will be better.  In short, get testers involved early and participating as much, and as constructively and professionally, as possible - and things can be better.

Ben Simo gave a really solid talk on Software Investigation.  It was interesting in the way that I tweeted very little - instead, I simply listened and observed.  Ben has a style of presentation that I enjoy and appreciate.  Frankly, I suggest that you read his stuff.  It's good.  Find his blog here.

The "Birds of a Feather" session I went to was a roundtable discussion on a variety of topics.  Everything from Agile to Metrics to "How do we do things better."

I found myself in a conversation with a variety of bright people on software and perceptions and intent and goals.  Essentially - why do we do this and how do we know we're done?  More on that in another post.

I had a listen to Venkat Moncompo's talk on UX.  As we were speaking on similar topics, I was curious to hear what he had to say.  Now, Michael Larsen gave a really nice summary in his live blog post, above.

I then got up and spoke on my take on UX and testing and stuff.  The gist of my presentation was - everything touches on UX.  Tests you can think of, interactions.  Most importantly, the reasonable expectations of a person who intends to use your software for a specific purpose - if they can't do that, the rest of the touchy-feely-kum-ba-ya stuff does not matter.  It was a good conversation and exchange of ideas - which is what I try to get going.

That is perhaps the greatest thing I ran into at this conference - folks were willing to discuss a point, debate an idea and examine concepts while asserting themselves, politely.

I know, weird.

My Day Three.

The third and final day for me in Portland was an all-day workshop on Software Testing.  Really simple, eh?

Software Testing Reloaded.  This is a session developed with Matt Heusser, that we have run several times now, that looks at what makes testing, well, testing.

We look at some of the truisms that get bounced around.  We look at the "things you must do to do good testing" assertions and, well, test them.  We come in with a set of goals and intentions, then add to them topics that the participants have as areas they want to explore, discuss and generally spend some time diving into.

The first time we presented this - it was a leap of faith.  Now, it is just fun.  The weird thing is (yeah, there's that word again) that when we present it at conferences, we always end up with more people in the room at the end of the day than were there at the beginning of the day.  It's kinda cool.  (Shameless plug: if you are not doing anything the week of November 19, Matt and I are doing this workshop in Potsdam, Germany at Agile Testing Days.  Check it out.)

That evening, Matt and I met up with the SQAUG group - a new, home-grown testing and QA discussion meetup type group in Portland.  We had a great time with them, talking about Complete Testing and sharing ideas and doing some exercises around that. Good times.

Home Again Home Again Jiggity Jig

Thursday I needed to fly home so I could be at meetings at my client site on Friday. 

There were a couple of things I found particularly enjoyable about this conference.

First, and I've already mentioned this, the sessions, hallway conversations and round-tables were very enjoyable.  People were happy to discuss ideas and share views and be nice about it.  There were very few people I met where I thought "What a prat."  In fact, everyone was very nice and polite.  I kinda grooved on that.

The other thing that I really liked was how relaxed everything was.  Now, that is not to say it was not intense.  I came away each day with a feeling of "My brain is full."  I was mentally drained and exhilarated at the same time.  At many conferences I find myself physically exhausted and just wanting to curl up in a corner.  Here, at the end of each day, I felt I could go a bit longer, even when the conversations around the table with adult beverages went into the wee hours.

Yeah it was weird, in a really good way.

Sunday, October 28, 2012

Old Northwest to the Pacific Northwest: At PNSQC, Part 1

I live in Michigan.

Michigan is one of the states made up of what was once called the Northwest Territory.  Well, yeah, this was back in the late 1780s and early 1790s, but no matter.  If you are an American and ever took an American History class, you may possibly remember something about the Northwest Ordinance.

Brief History Lesson

What I remembered from my history courses was how it divided the territory into grid-like sections and mapped out some basic boundaries and things of that ilk.  It did things like establish baselines from which survey measurements were to be taken and mandated that there would be schools available and whatnot.  It's kind of a blur, but that's OK.

I came across something in Gordon Wood's massive book Empire of Liberty, which covers US history from 1789 to 1815.  It is part of the Oxford History of the United States and comes in at a mere 738 pages, not counting the index and bibliographic essay.  Wood put forth that the Northwest Ordinance was the single most important piece of legislation passed by Congress before the adoption of the Constitution.  It defined a process for how territories could eventually join the Union as full-fledged States.

It is kind of a daunting idea when one thinks of it. 

How do you make a plan for bettering society when you know that most of the people who will benefit will be living their lives long after you are dead and gone and most likely forgotten?  

For example - who accepted the legislation for the Northwest Ordinance that was passed into law?  The President, of course.  But George Washington was not yet President.  So, the President of Congress was the one who signed the law and he was, ummm, ah, hmmm. Yeah. That guy.

These were among my thoughts as I boarded a plane and flew west to Portland a couple of weeks ago for the 30th Annual Pacific Northwest Software Quality Conference - PNSQC.

The Beginning

I had been contacted about being an invited speaker for the conference and joining my colleague and sometime partner-in-crime Matt Heusser in presenting a full-day workshop as part of the conference.  This was kind of a big deal for me.  While it is a regional conference, I looked over the list of previous invited speakers and workshop hosts and thought "Whoa.  Those are some huge shoes to fill.  What can I bring that will be on a similar level to what those folks have done?"

I admit, I had a brief moment of questioning myself.  Well, not so brief.  It kind of kept coming back.  I had a couple of ideas on topics to present - other than the workshop, that is.  I drew on some thoughts of what I could say to address the theme of Engineering Quality, and considered where the ideas led me.  So I submitted two proposals and essentially said "Pick one."

This resulted in some delightful emails and discussions.  It seems one of my submissions had a title similar to the proposed keynote being given by Dale Emery.  It might have been fun, but alas, I reconsidered the topic and we agreed on the second one - on User Experience as a model for test development.

My Approach

People who have heard me present at the local meetup or conferences or company lunch-and-learn type things know that I tend to avoid the "All your problems will be solved if you do ___" talk.  Partly this is because I never believe people when they tell me stuff like that.

They can give examples of how they did it and were successful, but I tend to think "Great.  That is one or two times.  How many times have you done this?  Total?"

The result is I tend to prefer presenting around projects that were a total train-wreck (do software long enough and you have a lot of those examples), what I learned from them and how I would (and sometimes did) do things differently the next go-round for that software.  I also try and talk about how I've applied those lessons more broadly, looking for truths I can carry with me, possibly as models for heuristics.

Then I try and encourage discussion - get people in the room involved.  Why do I do that?  Because sometimes they have great insights from their own experience.  Sometimes they have comments or thoughts or observations that leave me gobsmacked.

What I Learned

I sometimes have my doubts about that approach, particularly when I'm presenting at a conference or meeting or whatever that I have never presented at before.  I have memories of sessions that were themselves train-wrecks.  The anticipated "discussion" never happened - or was a total of two or three comments.

People did not want to discuss.  They wanted a lecture.  They wanted a PowerPoint slide deck with answers, not with things that made them think.  They wanted the spoken words to match the words on the slide deck and they wanted them to reaffirm their beliefs.

(Yo.  If that is the case, do you really want to go to a session where the word "discussion" appears at least twice in the abstract?)

I was assured that people would be willing to discuss pretty much anything during the conference sessions.  So, I took a deep breath and planned the session around that.

Ya know, when you get a bunch of people together who are smart and passionate about what they do, sometimes all you need to do to get them going is say something and then ask "What do you think?"  Then look out - they will most likely tell you.

The sessions I attended where conversation and questions-and-answers were part of the plan were quite enjoyable.  There was a fair amount of good discussion that continued into the hallway.  Other sessions were more conventional - presentation, lecture, a few questions and answers.  Generally, these were informative and well presented.

Overall - I had a marvelous experience.  I learned a lot and met some astounding people.  I'll describe that more in another post. 

Monday, October 8, 2012

Testers and UX and That's Not My Job

OK.

I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me that "Look, we're not supposed to worry about that UX stuff you talk about.  We're only supposed to worry about the requirements."

If you are, let me say this:  You are soooooooooooooooo wrong.

No, really.  Even if there is someone else who will "test" that, I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining.  "Reasonable person" being part of the motley collection that many folks label as "users."  You know - the people who are actually expected to use the software to do what they need to do?  Those folks?

It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay to use the software you are working on.

Your software can meet all the documented requirements there are.  If the people using it can't easily do what they need to do, then it is rubbish.

OK, so maybe I'm being too harsh.  Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" 20 minutes after I was supposed to be airborne) kinda get to me.  Or, maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required of the people who need the software to do their jobs.

An Example

Consider some software I recently encountered.  It is a new feature rolled out as a modeling tool for people with investments through this particular firm.

To use it, I needed to sign in to my account.  No worries.  From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had.  There was a cool feature that let me model what could happen if I tweaked some allocations in fund accounts - essentially moving money from one account to another, one type of fund to another - and the possible impact on my overall portfolio over time.

So far, so good, right?  I open the new feature to see what it tells me.

The first screen asked me to confirm my logon id, my name and my account number.  Well, ok.  If it has the first, why does it need the other two?  (My first thought was a little less polite, but you get the idea.)

So I enter the requested information, click submit and POOF!  A screen appears asking what types of accounts I currently had with them.  (Really?  I've given you information to identify me and you still want me to identify the types of accounts I have?  This is kinda silly, but, ok.)

I open another screen to make sure I match the exact type of account I have with what is on the list of options - there are many that are similar in name, so I did not want to be confused.

It then asked me to enter the current balance I had in each of the accounts.

WHAT????  You KNOW what I have!  It is on this other screen I'm looking at!  Both screens are part of the same system for crying out loud.  (or at least typing in all caps with a bunch of question-marks.)  This is getting silly.

So, I have a thought.  Maybe, this is intended to be strictly hypothetical.  OK, I'll give that a shot.

I hit the back button until I land on the page to enter the types of accounts.  I swap some of my real accounts for accounts I don't have - hit next and "We're sorry, your selections do not agree with our records."  OK - so much for that idea.

Think on

Now, I do not want to cast disparaging thoughts on the people who obviously worked very hard, by some measure, on this software.  It clearly does something.  What it does is not quite clear to me.  There is obviously some knowledge of my accounts in this tool - so why do I need to enter the information?

This seems awkward, at best.
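To make the point concrete, here is a tiny sketch in Python of the two flows.  Everything in it - names, data, structure - is invented for illustration; I obviously have no idea what the firm's actual code looks like.

```python
# Hypothetical sketch - invented names and data, not the firm's actual system.

# What the backend already knows once someone has signed in.
KNOWN_ACCOUNTS = {
    "session-123": [
        {"type": "Index Fund", "balance": 10000.00},
        {"type": "Bond Fund", "balance": 5000.00},
    ],
}

def modeling_form_asking_again(session_id):
    """The flow I hit: ignore what we know and make the person type it all in."""
    return {
        "confirm": ["logon id", "name", "account number"],  # already known
        "account_types": "please select from this list",    # already known
        "balances": "please enter current balances",        # already known
    }

def modeling_form_prefilled(session_id):
    """What a reasonable person would expect: start from what we already know."""
    accounts = KNOWN_ACCOUNTS[session_id]  # identity was established at sign-in
    return {
        "accounts": accounts,  # pre-filled, editable for what-if modeling
        "note": "Adjust any balance or allocation to model a change.",
    }

if __name__ == "__main__":
    print(modeling_form_asking_again("session-123"))
    print(modeling_form_prefilled("session-123"))
```

The contrast is the whole point: once a person has signed in, the system already holds their identity and their accounts.  A what-if tool can start from that and let them change things, rather than making them re-type what the backend already knows.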

I wonder how the software came to this state.  I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.

I wonder if the objections raised were met with "This is only phase one.  We'll make those changes for phase two, ok?"  I wonder if the testers asked questions about this.  I wonder how that can be.

Actually I think I know.  I believe I have been in the same situation more than once.  Frankly it is no fun.  Here is what I have learned from those experiences and how I approach this now.

Lessons

Ask questions.

Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.

Draw the requirements out with a mind map or decision tree or something - paper, napkins, formal tools, whatever (a rough sketch follows below).  They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment.  Clarify them as best you can.  Even if everyone knows what something means, make sure they all know the same thing.

Limit ambiguity - ask others if their understanding is the same as yours.

If there are buzzwords in the requirement documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).
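For what it's worth, here is one rough way to do that drawing-out in code rather than on a napkin - a hypothetical requirement broken into the questions I would ask about it, printed as an indented outline.  The requirement and the questions are made up; only the shape matters.

```python
# A made-up requirement, broken into the questions I would ask about it.
requirement = {
    "Users can model fund reallocation": {
        "Who counts as 'users'?": {},
        "What data is pre-filled vs. entered by hand?": {
            "If entered by hand - why, when we already know it?": {},
        },
        "What does 'model' actually show?": {
            "Over what time period?": {},
        },
    },
}

def print_outline(node, depth=0):
    """Walk the nested dict and print each question, indented by its depth."""
    for text, children in node.items():
        print("  " * depth + "- " + text)
        print_outline(children, depth + 1)

if __name__ == "__main__":
    print_outline(requirement)
```

Paper and napkins work just as well - the tool does not matter.  The questions do.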

Is any of this unique to UX?  Not really.  I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.

The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.

Even if "there is no requirement covering that" - .  Ask a question.  Raise your hand.

I hate to say that requirements are fallible, but they are.  They cannot be your only measure of the "quality" of the software you are working on if you wish to be considered a tester.

They are a starting point.  Nothing more. 

Proceed from them thoughtfully.