Sunday, October 28, 2012

Old Northwest to the Pacific Northwest: At PNSQC, Part 1

I live in Michigan.

Michigan is one of the states carved out of what was once called the Northwest Territory.  Well, yeah, this was back in the late 1780s and early 1790s, but no matter.  If you are an American and ever took an American History class, you may remember something about the Northwest Ordinance.

Brief History Lesson

What I remembered from my history courses was how it divided up the territory into grid-like sections and mapped out some basic boundaries and things of that ilk.  It did things like establish baselines from which survey measurements were to be taken and mandated that there would be schools available and whatnot.  It's kind of a blur, but that's OK.

I came across something in Gordon Wood's massive book Empire of Liberty, which covers US history from 1789 to 1815.  It is part of the Oxford History of the United States and comes in at a mere 738 pages, not counting the index and bibliographic essay.  Wood put forth that the Northwest Ordinance was the single most important piece of legislation passed by Congress before the adoption of the Constitution. It defined a process for how territories could eventually join the Union as full-fledged States.

It is kind of a daunting idea when one thinks of it. 

How do you make a plan for bettering society when you know that most of the people who will benefit will be living their lives long after you are dead and gone and most likely forgotten?  

For example - who accepted the legislation for the Northwest Ordinance that was passed into law?  The President, of course.  But George Washington was not yet President.  So, the President of Congress was the one who signed the law and he was, ummm, ah, hmmm. Yeah. That guy.

These were among my thoughts as I boarded a plane and flew West to Portland a couple of weeks ago for the 30th Annual Pacific Northwest Software Quality Conference - PNSQC.

The Beginning

I had been contacted about being an invited speaker for the conference and joining my colleague and sometime partner-in-crime Matt Heusser in presenting a full-day workshop as part of the conference.  This was kind of a big deal for me.  While it is a regional conference, I looked over the list of previous invited speakers and workshop hosts and thought "Whoa.  Those are some huge shoes to fill.  What can I bring that will be on a similar level to what those folks have done?"

I admit, I had a brief moment of questioning myself.  Well, not so brief.  It kind of kept coming back.  I had a couple of ideas on topics to present - other than the workshop that is.  I drew on some thoughts of what I could address to the theme of Engineering Quality, and considered where the ideas led me.  So I submitted two proposals and essentially said "Pick one."

This resulted in some delightful emails and discussions. It seems one of my submissions had a similar title to the proposed keynote being given by Dale Emery.  It may have been fun, but alas, I reconsidered the topic and we agreed on the second one - on User Experience as a model for test development.

My Approach

People who have heard me present at the local meetup or conferences or company lunch-and-learn type things know that I tend to avoid the "All your problems will be solved if you do <insert practice here>" approach.  Partly this is because I never believe people when they tell me stuff like that.

They can give examples of how they did and were successful, but I tend to think "Great. That is one or two times. How many times have you done this?  Total?"  

The result is I tend to prefer presenting around times that were a total train-wreck (do software long enough and you have a lot of those examples), what I learned from that and how I would (and sometimes did) do things differently the next go-round for that software.  I also try and talk about how I've applied those lessons more broadly beyond that, looking for truths I can carry with me, possibly as models for heuristics. 

Then I try and encourage discussion - get people in the room involved.  Why do I do that?  Because sometimes they have great insights from their own experience.  Sometimes they have comments or thoughts or observations that leave me gobsmacked.

What I Learned

I sometimes have my doubts with that approach, particularly when I'm presenting at a conference or meeting I have never presented at before.  I have memories of sessions that were themselves train-wrecks.  The anticipated "discussion" never happened - or was a total of two or three comments.

People did not want to discuss.  They wanted a lecture.  They wanted a PowerPoint slide deck with answers, not with things that made them think.  They wanted the spoken words to match the words on the slide deck and they wanted them to reaffirm their beliefs.

(Yo.  If that is the case, do you really want to go to a session where the word "discussion" appears at least twice in the abstract?)

I was assured that people would be willing to discuss pretty much anything during the conference sessions.  So, I took a deep breath and planned the session around that.

Ya know, when you get a bunch of people together who are smart and passionate about what they do, sometimes all you need to do to get them going is say something and then ask "What do you think?"  Then look out - they will most likely tell you.

The sessions I attended where conversation and questions-and-answers were part of the plan were quite enjoyable.  There was a fair amount of good discussion that continued into the hallway.  Other sessions were more conventional - presentations, lecture, a few questions and answers.  Generally, these were informative and well presented.

Overall - I had a marvelous experience.  I learned a lot and met some astounding people.  I'll describe that more in another post. 

Monday, October 8, 2012

Testers and UX and That's Not My Job

OK.

I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me that "Look, we're not supposed to worry about that UX stuff you talk about.  We're only supposed to worry about the requirements."

If you are, let me say this:  You are soooooooooooooooo wrong.

No, really.  Even if there is someone else who will "test" that, I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining.  "Reasonable person" being part of the varied crowd that many folks label as "users."  You know - the people who are actually expected to use the software to do what they need to do?  Those folks?

It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay you to use the software you are working on. 

Your software can meet all the documented requirements there are.  If the people using it can't easily do what they need to do, then it is rubbish.

OK, so maybe I'm being too harsh.  Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" when it was 20 minutes after I was supposed to be airborne) kinda get to me.  Or, maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required by people who need the software to do their jobs.

An Example

Consider some software I recently encountered.  It is a new feature rolled out as a modeling tool for people with investments through this particular firm.

To use it, I needed to sign in to my account.  No worries.  From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had.  There was a cool feature available so I could track what could happen if I tweaked some allocations in fund accounts - essentially move money from one account to another, one type of fund to another - and see the possible impact on my overall portfolio over time.

So far, so good, right?  I open the new feature to see what it tells me.

The first screen asked me to confirm my logon id, my name and my account number.  Well, ok.  If it has the first, why does it need the other two?  (My first thought was a little less polite, but you get the idea.)

So I enter the requested information, click submit and POOF!  A screen appears asking for the types of accounts I currently had with them.  (Really?  I've given you information to identify me and you still want me to identify the types of accounts I have?  This is kinda silly, but, ok.)

I open another screen to make sure I match the exact type of account I have with what is on the list of options - there are many that are similar in name, so I did not want to be confused.

It then asked me to enter the current balance I had in each of the accounts.

WHAT????  You KNOW what I have!  It is on this other screen I'm looking at!  Both screens are part of the same system for crying out loud.  (or at least typing in all caps with a bunch of question-marks.)  This is getting silly.

So, I have a thought.  Maybe, this is intended to be strictly hypothetical.  OK, I'll give that a shot.

I hit the back button until I land on the page to enter the types of accounts.  I swap some of my real accounts for accounts I don't have - hit next and "We're sorry, your selections do not agree with our records."  OK - so much for that idea.
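The annoyance in the walkthrough above boils down to one design decision: the server re-asks for data it already holds and then validates the answers against its own records, so the user can neither skip the re-entry nor model hypotheticals. A minimal sketch of the alternative - prefilling the modeling form from what the system already knows about the signed-in user (all names here are hypothetical, not from the actual tool):

```python
# Hypothetical sketch: build a modeling form prefilled from account data
# the system already has for the authenticated user, instead of asking
# the user to re-enter account types and balances.

def build_modeling_form(known_accounts):
    """known_accounts: dict of account type -> current balance,
    as already stored for the signed-in user."""
    form_rows = []
    for acct_type, balance in known_accounts.items():
        form_rows.append({
            "account_type": acct_type,        # prefilled, not re-asked
            "current_balance": balance,       # prefilled from records
            "proposed_allocation": balance,   # the ONLY field the user edits
        })
    return form_rows

# Example: two accounts the system already knows about.
accounts = {"IRA": 12000.00, "Index Fund": 8500.00}
form = build_modeling_form(accounts)
```

The user then only touches the one value that is genuinely hypothetical - the proposed allocation - which is the point of a "what if" tool in the first place.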

Think on

Now, I do not want to cast aspersions on the people who obviously worked very hard on this software, by some measure.  It clearly does something.  What it does is not quite clear to me.  There is clearly some knowledge of the accounts I have in this tool - but then why do I need to enter the information?

This seems awkward, at best.

I wonder how the software came to this state.  I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.

I wonder if the objections raised were met with "This is only phase one.  We'll make those changes for phase two, ok?"  I wonder if the testers asked questions about this.  I wonder how that can be.

Actually I think I know.  I believe I have been in the same situation more than once.  Frankly it is no fun.  Here is what I have learned from those experiences and how I approach this now.

Lessons

Ask questions.

Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.

Draw them out with a mind map or decision tree or something - paper, napkins, formal tools - whatever.  They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment.  Clarify them as best you can.  Even if everyone knows what something means, make sure they all know the same thing.

Limit ambiguity - ask others if their understanding is the same as yours.

If there are buzzwords in the requirement documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).

Is any of this unique to UX?  Not really.  I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.

The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.

Even if "there is no requirement covering that," ask a question.  Raise your hand.

I hate to say that requirements are fallible, but they are.  They cannot be your only measure for the "quality" of the software you are working on if you wish to be considered a tester.

They are a starting point.  Nothing more. 

Proceed from them thoughtfully.