Tuesday, May 17, 2016

On Quality Engineering and Testing and Defect Prevention

Some time ago, I wrote a response to a post I read extolling the virtues of "Quality Engineering" over mere testing. (You can read my response here.) Since then I have received some emails and been in some conversations on the topic. I've also seen a variety of threads on Twitter related to the discussions I've had with others.

This, then, is some more of my thinking around the topic, based on what people have said to me - mostly trying to convince me of the error of my thinking.

It was explained to me that Quality Engineering is, at its heart, the prevention of bugs and problems in software. Thus, a Quality Engineer is not looking for bugs; instead, a Quality Engineer focuses on bug prevention - keeping them from being created in the first place.

"A good QE works to avoid bugs in software."

That was precisely what I was told by a very nice young lady. It struck me a bit like "A good AO (Automobile Operator) works to avoid potholes in the road." Apparently I was not amusing to her (maybe she lived in a city, as I do, where there are myriad potholes on nearly every road.)

There were several examples presented.

One involved a QE finding a problem in a planned change to a DB table. The QE prevented a problem by identifying the flaw in the development group's intended change. Their workflow consists of proposing DB changes, reviewing them with the development team, then with the full Scrum team, and then presenting them to the DBAs for review. It was in the Scrum team review that the QE identified the problem.

Another involved a QE identifying a problem in the design of some changes to an application. Again, the QE spoke up and raised an issue during review of the design with the Scrum team.

The third example was a QE speaking out over requirements that seemed contradictory. The reason was simple: they had not been understood and had been noted down incorrectly.

Each of these were presented to me as examples of what a good Quality Engineer does. They prevented bugs from being created.

Except...

My response was that, in each of these cases, the QE found a problem or inconsistency and raised the issue. They did not so much prevent a bug as find a problem (a bug) somewhere other than the working code. They found the problem earlier in the course of software development.

This, to me, is part of the role of testing and why testers need to be involved in the early discussions.

Taking the next logical step, including a tester who is familiar with the application in the initial discussions could benefit the entire process by helping other participants think critically about what the story/change/new feature is about.

By engaging in these discussions and exploring the intent and nuances around the request, the recorded notes and the conversations on the work, a tester might be able to head off issues while they are in the "bounce ideas around" mode - while discussions are happening around what terms or concepts mean.

In an Agile team (whatever flavour your group uses) if people are engaged in working toward better quality software, the role of a critical thinker is necessary - whatever you call it.

Some folks tend to get rather, emmm, pedantic over how words get used. Here's what I mean...

Each person in a team is trained to do something. Usually, they are better at that than at the other activities that need to be done. Ideally, each person can contribute to each task that needs doing - but their expertise in certain areas is needed to support and lead the team when it comes to the tasks and activities they are particularly trained in.

Some people are trained, and very good, at eliciting and discovering requirements. Some are trained in building a usable design. Some are trained in developing production code. Some are trained in database design. Some people are trained in assembling components together into a working, functioning build and/or release.

Testers have a role in each of these tasks.

Testers can help requirements be defined better.
Testers can help the design be better.
Testers can help the person writing production code write better code and execute unit tests better.
Testers can help with DB work (this may shock some people.)
Testers can help verify and validate the builds are as good as they can be.

Testers can test each of these things. It is what we do.

Getting to a position where testers are trusted, welcome and encouraged to participate fully in each of these tasks takes time, effort and gaining the trust of others on the team.

People tell me that testers only test code.

Those people have no idea what testing can be in their organization.

What some people are calling Quality Engineering tasks are, from what I have been told (very patiently in some cases), testing functions.

Think.

Test.

Saturday, May 14, 2016

On Releases and Making Decisions

I've gotten some interesting feedback in conversation and in email on this blog post.

It generally consisted of "Pete, that's fine for a small team or small organization. My team/department/organization is way too big for that to possibly work. We have very set processes documented and we rely on them to make sure each team with projects going in has met the objectives so we have a quality release."

To begin, I'm not suggesting you have no criteria around making decisions about what is in a release or if the release is ready to be distributed to customers. Instead, what if we reconsidered what it means to be "ready" to be distributed to customers?

In most organizations doing some form of "Agile" development, there is a product owner acting on behalf of the customers, looking after their needs, desires and expectations to the best of their ability. They are acting as the proxy for the customers themselves.

If they are involved in the regular discussions around progress of the development work, testing and results from the testing, and if they are weighing in on the significance of bugs found, is it not appropriate to have them meet and discuss the state of all the projects (stories) each team is working on for a given release?

Rather than IT representatives demanding certain measures be met, what if we were to have the representatives of our customers meet and discuss their criteria, their measures that need to be met for that release?

If each team is working on the most important items for their customers first, then does it matter if less important items are not included in the release, and are moved to the next? Does it matter if a team, working with the product owner, decides to spend more time on a given task than originally scheduled, as new information is discovered while working on it?

As we approach the scheduled release date, as the product owners from the various teams meet to discuss progress being made, is it really the place of IT to impose its own measures over the measures of the customers and their representatives?

I would suggest that doing so is a throw-back to the time when IT controlled everything, and customers got what they got and had to be content with it - or they would never get any other work done... ever.

I might gently suggest that whether your customers are internal or external, we, the people who are involved in making software, should give the decision on readiness to the customers and their representatives - the Product Owners. We can offer guidance. We can cajole and entreat. We should not demand.

Who is it, after all, that we are making the software for?

Friday, April 15, 2016

On Facts, Numbers, Emotions and Software Releases

A recent study published in Science Magazine looks at communication, opinion, beliefs and how they can be influenced, in some cases over very long terms, by a fairly simple technique: open communication and honest sharing.

What makes this particular study interesting is that it was conducted by two researchers who attempted to replicate the results of a previous study also published in Science Magazine on the same topic. The reason they were unable to do so was simple: The previous study had been intentionally fraudulent.

The results of the second study were, in some ways, more astounding than those of the first. In short, people can be influenced to the point of changing opinions and views on charged, sensitive topics after engaging in non-confrontational, personal, anecdote-based conversation.

The topics covered included everything from abortion to gay and transgender rights. Hugely sensitive topics, particularly in the geographic areas where the studies were conducted.

In short, when discussing sensitive topics, basing your arguments in "proven facts" does little to bring about a change in perception or understanding with people with firmly held and different beliefs.

Facts don't matter.

Well-reasoned, articulate, fact-based dissertations will often do little to change people's minds about pretty much anything. They may "agree" with you so you will go away, but they really have not been convinced. There are scores of examples currently in the media; I won't bore (or depress) anyone (including myself) by listing any of them.

Instead, consider this: Emotions have a greater impact on most people's beliefs and decision making processes than the vast majority of people want to believe.

This is as true for "average voters" as it is for people making decisions about releasing software.

That's a pretty outrageous statement, Pete. How can you honestly say that? Here's one example...

Release Metrics

Bugs: If you have ever worked at a shop, large or small, that had a rule of "No software will be released to production with known P-0 or P-1 bugs" it is likely you've encountered part of this. It is amazing how quickly a P-1 bug becomes a P-2 bug, and the fix gets bumped to the next release, if there is a "suitable" work-around for it.

When I hear that, or read it, I wonder "Suitable to whom?" Sometimes I ask flat out what is meant by "suitable." Sometimes, I smile and chalk that up to the emotion of the release.

Dev/Code Complete: Another favorite is "All features in the release must be fully coded and deployed to the Test Environment {X} days (or weeks) before the release date. All code tasks (stories) will be measured against this, and the quality of the release will be compared against the percentage of stories done out of all the stories in the release." What?

That is really hard for me to say aloud and is kind of goofy in my mind. Rules like this make me wonder what has happened in the past to require such strict guidelines. I can understand wanting to make sure there are no last-minute code changes going in. I have also found that changing people's behavior tends to work better by using the carrot - not a bigger stick to hit them with.

Bugs Found in Testing: There is a fun mandate that gets circulated sometimes. "The presence of bugs found in the Test Environment indicates Unit Testing was inadequate." Hoo-boy. It might indicate that unit testing was inadequate. It might also indicate something far more complex and difficult to address by demanding "more testing." 

Alternatives?

Saying "These are bad ideas" may or may not be accurate. They may be the best ideas available to the people making "the rules." They may not have any idea on how to make them better.

Partly, this is the result of people with glossy handouts explaining to software executives how their "best practices" will work to eliminate bugs in software and eliminate release night/weekend disasters. Of course, the game there is that these "best practices" only work if the people with the glossy handouts are doing the training and giving lectures and getting paid large amounts of money to make things work.

And when they don't, more often than not the reason presented is that the company did not "follow the process correctly" or is still "learning the process." Of course, if the organization tries to follow the consultant's model based on the preliminary conversations alone, the effort is doomed to failure and will lead to large amounts of money going to the consultant anyway.

Consider

A practice I encountered for the first time many years ago, before "Agile" was a cool buzzword, was enlightening. I was working on a huge project as a QA Lead. Each morning, early, we had a brief touch-point meeting of project leadership (development leads and managers, me as QA Lead, the PM, other boss-types) to discuss the goal for the day in development and testing.

As we were coming close to the official implementation date, a development manager proposed a "radical innovation." At the end of one of the morning meetings, he went around the room asking the leadership folks how they felt about the state of the project. I was grateful because I was pushing hard to not be the gatekeeper for the release or the Quality Police.

How he framed the question of the "state of the project" was interesting - "Give a letter grade for how you think the project is going, where 'A' is perfect and 'E' is doomed." Not surprisingly, some of the participants said "A - we should go now, everything is great..." A few said "B - pretty good but room for improvement..." A couple said "C - OK, but there are a lot of problems to deal with." Two of us said "D - there are too many uncertainties that have not been examined."

Later that day, he and I repeated the exercise in the project war-room with the developers and testers actually working on the project. The results were significantly different. No one said "A" or "B". A few said "C". Most said "D" or "E".

The people doing the work had a far more negative view of the state of the project than the leadership did. Why was that?

The leadership was looking at "Functions Coded" (completely or in some state of completion) and "Test Cases Executed" and "Bugs Reported" and other classic measures.

The rank-and-file developers and testers were more enmeshed in what they were seeing - the questions that were coming up each day that did not have an easy or obvious answer; the problems that were not "bugs" but were weird behaviors and might be bugs; a strong sense of dread of how long it was taking to get "simple, daily tasks" figured out.

Upshot

Management had a fit. Gradually, the whiteboards in the project room were covered with post-its and questions written in colored dry-erase markers. Management had a much bigger fit.

Product owner leadership was pulled in to weigh in on these "edge cases," which led to IT management having another fit. The testers were raising legitimate questions. When the scenarios were explained to the bosses of the people actually using the software, they tried them out - and sided with the testers and the developers: there were serious flaws.

We reassessed the remaining tasks and worked like maniacs to address the problems uncovered. We delivered the product some two months late - but it worked. Everyone involved, including the Product Owner leadership who were now regularly in the morning meetings, felt far more comfortable with the state of the software.

Lessons

The "hard evidence" and metrics and facts all pointed to one conclusion. The "feelings" and "emotions" and "beliefs" pointed to another.

In this case, following the emotion-based decision path was correct.

Counting bugs found and fixed in the release was interesting, but did not give a real measure of the readiness of the product. Likewise, counting test cases executed gave a rough idea of progress in testing and did nothing at all to look at how the software actually functioned for the people really using it.

I can hear a fair number of folks yelling "PETE! That is the point of Agile!"

Let me ask a simple question - How many "Agile" organizations are still relying on "facts" to make decisions around implementation or delivery?

Saturday, March 5, 2016

On Visions and Things Not There

When I was playing in an Irish folk band, one thing we did each March was visit elementary schools and play music and talk a bit about Ireland in an attempt to get away from the image of dancing leprechauns and green beer and "traditional Irish food" like corned beef and cabbage.

One year, we were playing for a room full of kindergartners when one of them asked "Are leprechauns real?" The teacher smiled and chuckled a bit and, for some reason, the other four guys in the band looked at me and one said "This one is yours, Pete."

I looked at the little girl who asked the question and said "Just because you don't see something does not mean it is not there." This made the teacher smile and nod. It also got us out of a pickle.

A few days ago, our tomcat, Pumpkin, was staring intently at something neither my lady-wife nor I could see. He was clearly watching something, and it was moving. He looked precisely as if he was stalking something. My lady-wife asked if I knew what he was watching - I had no idea.

Now, we live with three cats in the house. All of them, at different times, will watch something very intently. The fact that the humans could not see anything did not matter in the least.

Software is a bit like that. You know something is wonky and you can stare at that bit all day knowing something isn't right. And not see a blasted thing.

You know something is there. You see bits that don't seem right. No one else seems to see it. You see odd behavior and sometimes you can recreate it - but often, you repeat the same steps and ... nothing is there.

So you keep looking. You might find it. You might lose interest and move on. I find it a good idea to write myself a note on what I saw and what I thought might be factors in the behavior.

Because it is likely to come back again.


Monday, February 29, 2016

On Testing and Quality Engineering

The other day I read an article on how Quality Engineering was something beyond testing. In the course of reading it, it struck me that the author had a totally different understanding of those two terms than I do.

Here then, is my response...



On Testing and Quality Engineering

A common view of testing, perhaps what some consider is the "real" or "correct" view, is that testing validates behavior. Tests "pass" or "fail" based on expectations and the point of testing is to confirm those expectations.

The challenge of introducing the concept of “Quality” with this conception of testing brings in other problems. It seems the question of "Quality" is often tied to a "voice of authority.” For some people that "authority" is the near-legendary Jerry Weinberg: "Quality is value to some person." For others the “authority” is Joseph Juran: "fitness for use."

How do we know about the software we are working on? What is it that gives us the touch points to be able to measure this?

There are the classic measures used by advocates of testing as validation or pass/fail:
- percentage of code coverage;
- proportion of function coverage;
- percentage of automated vs. manual tests;
- number of test cases run;
- number of passing test cases;
- number of failing test cases;
- number of bugs found or fixed.

For some organizations, these may shed some light on testing or on the perceived progress of testing. But they say nothing about the software itself or the quality of the software being tested, in spite of the claims made by some people.
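To make that point concrete, here is a minimal sketch (Python, with made-up numbers - none of this comes from any real project) showing that every one of these classic measures reduces to counting and division. Nothing in the arithmetic ever looks at how the software behaves for the people who will use it.

```python
# Hypothetical release-dashboard figures - illustrative only, not from a real project.
test_cases = {"run": 412, "passed": 398, "failed": 14}
lines_total, lines_executed = 25_000, 19_750
bugs_found, bugs_fixed = 57, 49

pass_rate = test_cases["passed"] / test_cases["run"] * 100
code_coverage = lines_executed / lines_total * 100
fix_rate = bugs_fixed / bugs_found * 100

# Each figure is counting and division. None of it observes the behavior of
# the software itself, or what that behavior means to the people using it.
print(f"Pass rate:     {pass_rate:.1f}%")
print(f"Code coverage: {code_coverage:.1f}%")
print(f"Bug fix rate:  {fix_rate:.1f}%")
```

The numbers are easy to produce and easy to report, which is exactly why they get mistaken for statements about quality.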

One response, a common one, is that the question of the “quality of the software” is not a concern of “testing,” that it is a concern for “quality engineering.” Thus, testing is independent of the concerns of overall quality.

My view of this? 

Hogwash.

Rubbish. 

 When people ask me what testing is, my working definition is:

Software testing is a systematic evaluation of the behavior of a piece of software,
based on some model.

By using models that are relevant to the project, epic or story, we can select appropriate methods and techniques in place of relying on organizational comfort-zones. If one model we use is “conformance to documented requirements” we exercise the software one way. If we are interested in aspects of performance or load capacity, we’ll exercise the software in another way.

There is no rule limiting a tester to using a single model. Most software projects will need multiple models to be considered in testing. There are some concepts that are important to making this work.

What does this mean?

Good testing takes disciplined, thoughtful work. Following precisely the steps that were given is not testing, it is following a script. Testing takes consideration beyond the simple, straightforward path.

As for the idea of “documented requirements,” they serve as information points, possibly starting points for meaningful testing.

Good testing requires communication. Real communication is not documents being emailed back and forth. Communication is bi-directional. It is not a lecture or a monologue. Good testing requires conversation to help make sure all parties are in alignment.

Good testing looks at the reason behind the project, the change that is intended to be seen. Good testing looks to understand the impact within the system to the system itself and to the people using the software.

Good testing looks at these reasons and purposes for the changes and compares them to the team and company purpose and values. Are they in alignment with the mission, purpose and core values of the organization? Good testing includes a willingness to report variances in these fundamental considerations, beyond requirements and code.

Good testing can exercise the design before a single line of code is written. Good testing can help search out implied or undocumented requirements to catch variances before design is finalized.

Good testing can help product owners, designers and developers in demonstrating the impact of changes on people who will be working with the software. Good testing can help build consensus within the team as to the very behavior of the software.

Good testing can navigate from function-level testing to broader aspects of testing, by following multiple roles within the application and evaluating what people using or impacted by the change will experience.

Good testing can help bring the voice of the customer, internal and external, to the conversation when nothing or no one else does.

Good testing does not assure anything. Good testing challenges assurances. It investigates possibilities and asks questions about what is discovered.

Good testing challenges assumptions and presumptions. It looks for ways in which those assumptions and presumptions are not valid or are not appropriate in the project being worked on.

Good testing serves the stakeholders of the project by being in service to them.

What some people describe as “quality engineering” is, in my experience, part of good software testing.

 

Monday, November 23, 2015

On Motivation, part 2

As the discussion I was having with the Unicorn at the coffee shop was winding up, a fellow I worked with a few years ago came in looking rather frazzled. He joined us, although he looked rather askance at the unicorn. We made small talk for a bit. He had been promoted some 6 months before to a manager position and seemed frustrated.

The reason he seemed frustrated eventually seeped out. He was trying to get "his resources" to "engage" in some new methods of doing things.

About this time, the unicorn bowed out and excused himself. I'm not sure this fellow even noticed him sitting at the table.

When we had worked together, he struck me as one who was perpetually looking to make a mark in some way. He always acted as if he knew better than anyone on the team, or in a discussion, how to address any problem. He made sure that he offered advice to team leads and managers on how to address a problem - which normally involved wholesale changes to bring whatever was under discussion in line with whatever his set of beliefs was at the moment.

Funny though - his "beliefs" tended to shift. I'm not sure why.

It almost was as if he looked at whatever the situation was - and decided it needed to be different. Why things were the way they were or how they got that way did not seem to matter.

He deemed them valueless and in need of being completely replaced.

I got pretty tired of it after a while. When I was moved to a different group following a reorganization (yeah, these guys did that every 6 months or so) I did not miss the turmoil or drama of someone ranting about how screwed up things were.

Back to the coffee shop...

So, the fellow was trying to get "his resources" to "engage" in new methods of doing things. The challenge was that people were pushing back. They had always grumbled. Now, they were refusing "to cooperate."

And he was frustrated.

So I took a deep breath and tilted my head, just so, and asked "The processes that were in place before, the ones you replaced. Why were they implemented?"

I think he wanted to glare at me. Actually, I suspect he wanted to punch me. Instead, he said, "Look. This is stupid. I know what needs to be done and how things should be. And they just don't want to do it."

And I sipped my coffee and asked, "Remember when we used to complain about the 'policy du jour' and every 6 months everything changed, unless a new manager rolled in sooner than that? Remember how we used to kvetch about things changing for no apparent reason?"

He glared at me. Frankly, I think he wanted to hit me. (That is funny to people who have met both of us.) "Look," he said, "the problem is these people just don't want to embrace anything new. It is not me or my problem - it is them."

He left the coffee shop. I suspect it may be a while before he goes to that coffee shop again.

The Problem

I suspect that is a pretty good summation of the view of people - managers, directors, VPs, dictators, whatever - "It is not me, it is them."

The irony is, in my experience, the first and foremost rule of anyone looking to change or improve things is - Learn and Understand how things got the way they are.

It is rarely as straight-forward as some would have it. Problems exist - Processes exist - Processes are normally introduced to address specific problems. Other problems may not be addressed by the changes, but, these are usually judged to be lower priority than the ones that are being addressed.

So, new Managers, Leads, VPs, Directors, Bosses... whatever - before you make changes, I have found it to be a really good idea to take the time to learn how the organization got where it is. Even if you "watched" the "mistakes" happen - it is unlikely you were in the discussions that looked at the needs, the problems and the alternatives that got you to where you are.

Motivation?

If you want your "resources" to "get on the bus" and support you, I suggest you take the time to learn these things. Without doing so, it is almost certain that the people you expect to do the things you are mandating will give your direction and instructions the appropriate level of effort and dedication.

None at all.

Because, when you move on, all these changes will be changed, and nothing will really change.

So, what is the motivation you have to make changes? Are you trying to "make your mark?" Or are you trying to do what is right for the organization?

Friday, October 30, 2015

On Motivation, part 1

I recently wandered into a neighborhood coffee shop for a little defocusing - and some of their Kenyan roast coffee and a fresh scone. While in line to place my order, my friend the unicorn walked in.

We had not intended to meet, it was just a happy chance. We sat down with our respective coffee and began talking. As happens sometimes, the 'catching up' developed into talking about something of interest. In this case, we found ourselves talking about motivation. We quickly set aside the stuff about "motivating people" and turned to forms of motivation - what motivates, maybe inspires, people to do work.

Most technical people we know who seek advancement and promotion into leadership or management positions fall into a few groups. Now, this isn't a terribly scientific study, just what the unicorn and I have seen.

There are the folks who really don't want to manage people and like getting their hands dirty - they like the technical challenges that come with bigger titles and pay grades.

Then there is the other major group - they want to lead beyond a technical perspective. They want to be "in charge."

The first type - These are the same type you find in very technical enlisted roles in the military - they soar through ranks at lightning speed. They display astounding prowess at tasks that others cannot comprehend. They show others how to do things, then dive in next to them in the doing - teaching their juniors what they are doing, how and why. They leave officers shaking their heads at how astoundingly well they do their jobs.

Until they get to the level where they "supervise" others. Then they don't get to do what they really like doing. Then they watch other people do what they want to be doing. And the longer they are in, the higher the rank they achieve and the further they get from doing what they truly want to do. So they leave - they don't reenlist.

In Corporate-Land, these same people, if they get assigned or promoted beyond "getting their hands dirty" and doing what they like doing, tend to resign and take another job. 

The second type - These are the folks who want to get into "leadership" positions. They are the movers and shakers and the up-and-comers in the organization.

Some folks have a negative view of everyone who is in this second, broad group. Neither the unicorn nor I can really fault people for having ambitions or desires. Nor could we really find fault with people wanting to get ahead and move up the ladder.

After all, if they are reasonably competent in technical roles, maybe - just maybe - they will remember what it was like in those roles as they move up in the organization chart.

For me, when dealing with managers or directors or other boss-types, I find it helpful if they have some appreciation of the challenges of the work done by technical folks, be it developers, DBAs, testers, whatever. While they may not be able to help from a technical perspective, they may be able to offer assistance in other ways, for example, running interference with other, less technical managers or functionaries.

People growing into roles that challenge them is an excellent thing. It is a desirable thing in my mind. Granted, the roles I have moved into have not been management ones. My forays into management have convinced me that I do not have the right "makeup" for managing others.

I salute those who do have that makeup and make full use of it. Indeed, I salute those managers who are motivated to manage others well, and help those they manage discover what it is that motivates them.

A third type - These are the folks who want to get into "leadership" positions for reasons I find to be less than honorable. Maybe you have heard that "Power Corrupts." I find the question of why one seeks power to perhaps shine a light on just how true that is, or is not.

Some people have something less than altruistic motives. Some desire high rank for achieving their own ends - their own self-aggrandizement. In these instances, I suspect the corruption has already occurred - and the quest for power is, in fact, the motivation.

The unicorn blinked at me.

He said something to the effect that people have their own motivations. He chuckled (a scary sound, frankly) at the thought that some of these sounded like Death Eaters. I did stop a moment and consider.

I was reminded that individual people are motivated by different things and these generally are internal to each of them. Their motivation drives their choices and how they work, just as mine do.

I can accept or reject those motivations and actions based on my values and what I hold important. I can also choose not to associate with those whom I find I cannot support.