When looking at the question of "Does this work?" have you noticed that all of us tend to have a slightly different view of what "work" means?
OK, so it's not just me then. Good. I was a little concerned for a while.
In my case, I look for a variety of measures and models to determine if a piece of software "works" or not. Under one model it might work perfectly well. Under another model, it might not work at all.
These can have the same set of "documented requirements" and completely different models around expected behavior. In a commercial software setting, I try to get a description of what the sales staff have been promising the software will do. (If the sales folks are setting the customer expectations, I like to know what they are saying.)
If this is software that people inside the company are going to be using, I try to meet with representatives from the department where it will be used. Not just the "product owners" but the people using the software as part of their daily work. These groups can, and often do, have distinctly different definitions of "it works" for software impacting their departments.
Finally, there is the view of development, both individuals and the team. On software with many points of collaboration and integration, let me make one thing particularly clear:
It does not matter if your part "works" (by some definition of "works") if the entire piece does not "work" (by the definition of someone who matters.)
That goes for functions within the application and for the application as a whole. Resting on "I did my bit" is not adequate for the people working with the software if that software does not do what it needs to do.
No matter how well the individual pieces work, if they don't work together, the software doesn't work.
I was just thinking about this last week. Internally I phrased it as "My testing has an audience."
I was thinking about how I'm not just testing for the 'end user'. I'm not even just testing for different kinds of end users. I'm also testing for people like the internal testers who determine acceptance for the customer.
To me, this means the difference between a user who might be used to closing and re-opening a window to see refreshed data, versus a tester who sees that as a genuine refresh problem that might indicate sloppiness.
I guess one challenge this poses is whose definition of working you use if you can't use them all, or if there are opposing definitions of 'works'. I've worked at places where the IT guys make the software decisions, not the users, and they seemed to be more concerned about how and where software could be installed than how the users actually used the software.