Friday, October 24, 2014

On Data and Testing and Inquiry: StarWest Retrospective pt 1

I was at StarWest last week, deep in conversation and thought with a huge number of people.  Back at the office this week, many people asked if it was a good conference.

Yes - it was!  I found a large number of people who were good thinkers and conversationalists.  Among these were some of the usual suspects - Michael Bolton, Jon Bach, Griffin Jones, Rob Sabourin, James Christie, Jon Hagar and more.  Added this time were Paco Hope, Lee Copeland, and more.  I had the extreme pleasure of meeting Rob's wife Anna.

We had lovely conversations, shared some nice wine and some good beer, and exchanged excellent ideas.

Some of them were about testing.

One interesting thing: there were some awesome discussions around testing-ish stuff.  For example, we had some discussions around motivating people - and how that does not work.  It was an interesting conversation, with points worth considering.  Among other things: can we actually "motivate" people, or can we only remove the obstacles so people can discover their own motivation?

That may be worth consideration another time.

What had me thinking this week was not that conversation - nor the conversation around how teams can so easily retreat within their cells and fail to work together in a meaningful way.

No, what had me thinking was a comment made during a keynote, then a conversation later that day.  The comment was something to the effect of "needing production data" to do good testing.  Check out my Day 1 blog with the live blog notes here: http://bit.ly/1uhiLei

When I was a mainframe jockey, making systems do things I was told were impossible with the technology we had available, I used "production data" for testing.  I still advocate using "production data" in some contexts for testing.  What I caution people about is relying on said "production data" for all of their testing.

I find it can be helpful to mimic production data for some instances of acceptance testing.  If we want to emulate what is actually happening in production - for example, to compare the "old system" with the "new system" - it has proven helpful.
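When that is the goal, the mechanics can be quite simple.  Here is a minimal sketch in Python, assuming hypothetical old_system() and new_system() callables and a CSV extract of production-like records - your actual interfaces will differ:

    import csv

    def load_records(path):
        # Read production-like records from a CSV extract (header row expected).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def compare_systems(records, old_system, new_system):
        # Feed the same record to both systems and collect any mismatches.
        mismatches = []
        for record in records:
            old_result = old_system(record)   # hypothetical old-system call
            new_result = new_system(record)   # hypothetical new-system call
            if old_result != new_result:
                mismatches.append((record, old_result, new_result))
        return mismatches

    # diffs = compare_systems(load_records("extract.csv"), old_system, new_system)

A sanitized extract works just as well as the raw data here, and keeps sensitive values out of the test environment.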

In other instances, it can be less than helpful.  It can give a false sense of security: production data only contains the values and combinations that have already occurred, and tells us nothing about the ones that have not happened yet.

Instead, I strongly recommend people consider what the data itself looks like.  What data is needed to exercise the application?  What happens to other applications or systems using that data when there are changes?

What happens when we make changes to how the data gets handled?  What happens to other systems that use the data made or updated by the system we are testing?  Sure - there was an impact analysis done - but those are not as thorough as we'd like to think, right?
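One small, concrete guard against that: encode the assumptions downstream systems make about the data and check them directly.  A minimal sketch, with entirely hypothetical field names and rules - substitute whatever your real consumers actually depend on:

    # Each rule is a predicate over the field's string value.
    EXPECTED_CONTRACT = {
        "account_id": lambda v: v.isdigit() and len(v) == 10,
        "status":     lambda v: v in {"OPEN", "CLOSED", "FROZEN"},
        "balance":    lambda v: v.lstrip("-").replace(".", "", 1).isdigit(),
    }

    def check_contract(rows):
        # Flag rows that would break a downstream consumer's assumptions.
        violations = []
        for i, row in enumerate(rows):
            for field, rule in EXPECTED_CONTRACT.items():
                value = row.get(field)
                if value is None or not rule(str(value)):
                    violations.append((i, field, value))
        return violations

Run that over the output of the system under test and you learn something the impact analysis may have missed.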

So, what about production data?  In those situations, does it really help us?  Is there something more we can do? 

Absolutely. 

Examine what the data elements are - their characteristics - and how they are used. 

How do we do that?  Well, that is another blog post.
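In the meantime, a small taste of the idea: a minimal sketch that profiles a CSV extract to learn each data element's characteristics - null counts, length ranges, most common values.  Nothing here is specific to any real system:

    import csv
    from collections import Counter

    def profile(path):
        # Summarize each column: null count, length range, most common values.
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        report = {}
        if not rows:
            return report
        for column in rows[0]:
            values = [row[column] for row in rows]
            lengths = [len(v) for v in values if v]
            report[column] = {
                "nulls": sum(1 for v in values if not v),
                "min_len": min(lengths, default=0),
                "max_len": max(lengths, default=0),
                "top_values": Counter(values).most_common(5),
            }
        return report

Armed with a profile like that, we can design test data that covers the boundaries and combinations production has never produced.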

My point? 

The idea of production data being a cure-all for testing and the "go-to best practice" is a fallacy.  Don't be lured by the simplicity of it.  The world is rarely that simple. 

Unfortunately.

