Monday, October 8, 2012

Testers and UX and That's Not My Job

OK.

I don't know if you are one of the several tester types I've talked with over the last couple of months who keep telling me, "Look, we're not supposed to worry about that UX stuff you talk about. We're only supposed to worry about the requirements."

If you are, let me say this:  You are soooooooooooooooo wrong.

No, really. Even if there is someone else who will "test" that, I suggest, gently, that you consider what a reasonable person would expect while you are examining whatever process it is that you are examining. "Reasonable person" being part of the mixed group that many folk label as "users." You know - the people who are actually expected to use the software to do what they need to do? Those folks?

It does not matter, in my experience at least, if those people (because that is what they are) work for your company or if they (or their company) pay you to use the software you are working on. 

Your software can meet all the documented requirements there are.  If the people using it can't easily do what they need to do, then it is rubbish.

OK, so maybe I'm being too harsh. Maybe, just maybe, I'm letting the events of yesterday (when I was sitting in an airport, looking at a screen with my flight number displayed and a status of "On Time" when it was 20 minutes after I was supposed to be airborne) kinda get to me. Or maybe I've just run into a fair number of systems where things were designed - intentionally designed - in such a way that extra work is required of the people who need the software to do their jobs.

An Example

Consider some software I recently encountered.  It is a new feature rolled out as a modeling tool for people with investments through this particular firm.

To use it, I needed to sign in to my account. No worries. From there, I could look up all sorts of interesting stuff about me generally, and about some investments I had. There was a cool feature available so I could track what could happen if I tweaked some allocations in fund accounts - essentially moving money from one account to another, one type of fund to another - and see the possible impact on my overall portfolio over time.

So far, so good, right?  I open the new feature to see what it tells me.

The first screen asked me to confirm my logon id, my name and my account number.  Well, ok.  If it has the first, why does it need the other two?  (My first thought was a little less polite, but you get the idea.)

So I enter the requested information, click submit and POOF! A screen appears asking for the types of accounts I currently have with them. (Really? I've given you information to identify me and you still want me to identify the types of accounts I have? This is kinda silly, but, ok.)

I open another screen to make sure I match the exact type of account I have with what is on the list of options - there are many that are similar in name, and I don't want to get confused.

It then asked me to enter the current balance I had in each of the accounts.

WHAT???? You KNOW what I have! It is on this other screen I'm looking at! Both screens are part of the same system, for crying out loud (or at least typing in all caps with a bunch of question marks). This is getting silly.

So, I have a thought. Maybe this is intended to be strictly hypothetical. OK, I'll give that a shot.

I hit the back button until I land on the page to enter the types of accounts. I swap some of my real accounts for accounts I don't have, hit next, and get "We're sorry, your selections do not agree with our records." OK - so much for that idea.

Think on

Now, I do not want to disparage the people who, by some measure, obviously worked very hard on this software. It clearly does something. What it does is not quite clear to me. The tool clearly has some knowledge of the accounts I have - but then why do I need to enter the information?

This seems awkward, at best.

I wonder how the software came to this state.  I wonder if the requirements handed off left room for the design/develop folks to interpret them in ways that the people who were in the requirements discussions did not intend.

I wonder if the objections raised were met with "This is only phase one.  We'll make those changes for phase two, ok?"  I wonder if the testers asked questions about this.  I wonder how that can be.

Actually, I think I know. I believe I have been in the same situation more than once. Frankly, it is no fun. Here is what I have learned from those experiences and how I approach this now.

Lessons

Ask questions.

Challenge requirements when they are unclear.
Challenge requirements when they are clear.
Challenge requirements when there is no mention of UX ideas.
Challenge requirements when there are mentions of UX ideas.

Draw them out with a mind map or decision tree or something. They don't need to be fancy, but they can help you focus your thinking and may give you an "ah-HA" moment - paper, napkins, formal tools - whatever. Clarify them as best you can. Even if everyone knows what something means, make sure they all know the same thing.

Limit ambiguity - ask others if their understanding is the same as yours.

If there are buzzwords in the requirements documents, ask for them to be defined clearly (yeah, this goes back to the thing about understanding being the same).

Is any of this unique to UX?  Not really.  I have a feeling that some of the really painful stuff I've run into lately would have been less painful if someone had argued more strongly early on in the projects where that software was developed.

The point of this rant - If, in your testing, you see behavior that you believe will negatively impact a person attempting to use the software, flag it.

Even if "there is no requirement covering that" - .  Ask a question.  Raise your hand.

I hate to say that requirements are fallible, but they are. They cannot be your only measure of the "quality" of the software you are working on if you wish to be considered a tester.

They are a starting point.  Nothing more. 

Proceed from them thoughtfully. 

6 comments:

  1. There is a rule of thumb here which is that systems should never ask for data they already know about. It's a total turn-off for the user.

    Also, whenever the user has to perform some mental overhead, such as typing a credit card number into four separate four-digit fields when the system could otherwise interpret it correctly in a millisecond, that is a bar to usability.

    In other words, whenever the user has to think like a computer, that's a red flag for usability.

    Asking the user to confirm (or change) existing data at the point where it is needed is correct, however.

    1. Excellent point, Paul.

      The concern I have is less about confirming who the user is, or should be, or something. Once you are past the "validate who you are" stage, why should I, as a user, have to enter information into a system when that same system already has it? Maybe I have different expectations than the folks who designed the software.

  2. Great blog post!

    Great example too. And I totally agree with you...
    It's essential to question and analyse requirements as much as possible, as early as possible in a development cycle... especially when it comes to functionality crossover between 2 development phases!

  3. Hi Pete,
    I agree that testers must question requirements. But testers must not fall into the trap of 'fighting against windmills' if those requirements corrections are dismissed by the project manager. Testers have to put those changes in the bug tracking system and clearly communicate that their advice is to implement those corrections. And this is where their job is finished.

    1. True enough, depending on the organization. Still, if we, as testers, accept the "this is how it is supposed to be" at face value, and never ask "Why?", are we doing anything other than rubber-stamping other people's biases?

  4. What I try to encourage testers to do is find other well-known commercial applications with the same or similar UI / behaviour.
    Then document this information as an "issue" in the bug tracking system, or even request the change as an enhancement.
    This way there is evidence to support the "bug", and it avoids the notion that it is a "personal preference / bias" of the tester.
    I agree with Pete's implication that it's a tester's job to "poke their nose into everything".
