Right. Here we are in Potsdam, Germany for Agile Testing Days, after a day of workshops.
Here we go!
Scott Ambler is opening his keynote by throwing down the gauntlet, planting his flag, and a couple of other metaphors. He starts with the challenge that much of the rhetoric around the Agile community is incorrect. How bad is "Waterfall"? Try finding a way to do a financial transaction that does not encounter, at some point in its life, a software system created by a waterfall methodology.
Next point - interesting - don't worry about Process, worry about getting work done. Process stuff is BORING! Worse, most of the folks who got into Process early were speaking from theory and not from practice. Agreed, Scott.
Instead of starting with something big and moving to the middle, or starting with something little and moving to the middle, skip the rubbish and start in the middle (duh). Yeah, I agree with that as well. Data technical debt is the killer in the world today, introduced by people who are not enterprise aware. Frankly, Scott is hitting a lot of points - many of which I agree with, and which, interestingly enough, Matt and I touched on in our workshop yesterday. He touched on being at least Context Aware and recognizing that some practices don't work in every situation. He also challenged the notion that some of the "rules" of Agile are really not as agile as folks make them out to be. OK - he has me now.
One point: if you look at how Scrum actually works, it is linear, right? You Plan, then Do, then... wait. That sounds like Waterfall. Hold on, how can that be? That sounds like not agile! Advocating for improving activities, software design, and development is core to delivering in "the real world." ("Real world" count is 4, maybe 5 at this point?)
His DAD process lays claim to taking the good stuff from Scrum and RUP and making sure people are able to deliver good software. Others in the room are noting that the presentation demands "real data" without presenting any "real data" - OK - I get that criticism.
Can Agile scale? Can a team of 50 make the same decisions as a team of 5? (I can hear rumblings about how 50 is "too many" to be Agile. Yeah, I get that too.) Except large companies often act that way in practice, instead of doing what they should do or say they do.
In practice, organizational culture will always get in the way if given an opportunity. Groups of 5 will have different dynamics than a team of 50. OK. He then talked about being "co-located," which most (maybe all?) methodologies call "preferred." The question is: are they still co-located if the group is working in cubicles? Are they really co-located if they are in the same building but not in the same area? What about when people from different departments and/or divisions are involved, or... yeah, you get the idea.
OK - He just scored major points with me - "I don't trust any survey that does not share its source data." He goes on to explain that the data from his surveys, and supporting his assertions are available for review, and free to download if you contact him.
Scott moves on to talking about scaling Agile and Whole Team and - yeah. He observes that sometimes there are regulatory compliance situations where independent test teams are needed, and asserts that you should read and understand the regulations yourself - and NOT take the word of a consultant whose income is based on... interpreting regulations for you and then testing the stuff you need to do.
Scott moved on to questions of parallel testing for integration testing, then leaped back to his Agile Testing survey results. He has an interesting slide on the adoption of "Agile Practices" like Code Analysis, TDD, etc. One thing I found interesting was the observation on the number of groups that kind of push testing off - maybe to a different group? Is it hard? (Well, yeah, maybe.)
Another infographic showed the number of developers not doing their own testing. (Wait, isn't the point of "whole team" REALLY that we are all developers?) I am not certain that some of the results Scott is presenting as "worrying" are really that bad. Matt Heusser reports that his last client did Pairing & Reviews - so are informal reviews being done by "only" 68% a problem? Don't know. In my experience, Pairing is an organic review environment.
Overall, I quite enjoyed the keynote. NOTE: You don't need to agree to learn something! :)
Moving on to the first track session after a lovely round of hallway conversations. Peter Varhol presenting a talk named "Moneyball and the Science of Building Great Agile Teams."
Opening is much what one might expect from a (fellow) American explaining the essentials of Moneyball (anyone remember how bad Oakland was a few years ago?) and the impact of inappropriate measures. Yeah, (American) baseball has relationships to Agile - people need to respond to a variety of changes, and, frankly, many people do a poor job of it. Both in software and in sports. Essentially - to win games (or have successful implementations) you need to get on base, any way you can.
Thinking Fast and Slow (Kahneman) being referenced - yeah - good book.
Peter is giving a reasonable summation of biases, impressions and how thinking and actions can be influenced by various stimuli. For example "This is the best software we have ever produced" vs. "We are not sure how good this software is." Both may be true statements about the same release!
One may cause people to only brush over testing. One may lead you to aggressively test. Both statements may impact your approach depending on the nature of your thinking model.
He's moved on to warning about over-reliance on heuristics - which are, by their nature, fallible. They are useful, but you need to be aware of when to abandon one given the situation you are in.
He also warns against Kahneman's "Anchoring Effect" where one thing influences your perception of another, even when they are totally unrelated. In this case, roll dice and come up with a number, then answer the question "How many countries are on the continent of Africa?" The study he cites showed that when people had no idea, and were guessing, the higher the number rolled with the dice, the higher the number of countries were guessed.
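The anchoring effect described above can be illustrated with a toy simulation. This is NOT the cited study's data - the "true" answer (54 African countries), the anchor weight, and the noise level are all made-up assumptions purely to show the mechanism: a guess gets pulled toward an irrelevant number.

```python
import random

def anchored_guess(anchor, true_value=54, anchor_weight=0.5):
    """Hypothetical model of the anchoring effect: the guess is pulled
    partway toward the irrelevant anchor instead of the true value.
    All parameters are illustrative assumptions, not study data."""
    noise = random.gauss(0, 5)
    return (1 - anchor_weight) * true_value + anchor_weight * anchor + noise

random.seed(42)
# Two dice give anchors from 2 to 12; compare a low roll vs. a high roll.
low_anchor_guesses = [anchored_guess(anchor=2) for _ in range(1000)]
high_anchor_guesses = [anchored_guess(anchor=12) for _ in range(1000)]

avg_low = sum(low_anchor_guesses) / len(low_anchor_guesses)
avg_high = sum(high_anchor_guesses) / len(high_anchor_guesses)
print(avg_low < avg_high)  # higher anchor -> higher average guess
```

Under this model the dice roll has nothing to do with geography, yet the average guess tracks it - exactly the pattern the study reported.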
OK - Really solid point: We want things to be better. The result is that many people will tend to wish away problems that were in earlier releases. Folks tend to HOPE that problems are gone, without confirming.
Leaving Peter's presentation I was stopped by, well, Peter. We had a nice chat about his presentation. Alas, the ending was a bit rushed, and some of the "pulling together" ideas were a challenge for parts of the audience, but it was solid.
I then ran into Janet Gregory - very nice lady - who was kind enough to autograph my copy of Agile Testing. THEN! I ran into yet another great mind - Markus Gaertner - who was good enough to sign my copy of How to Reduce the Cost of Software Testing - to which he contributed a chapter.
Then I realized I was late for the next session, where I am now - Cecile Davis' "Agile Manifesto Dungeons: Let's go really deep this time!" As I'm late, I missed the introduction. Alas, one exercise was wrapping up. Bummer.
However, the exercise used children's blocks to demonstrate people working together, testing early, and collaborating. Cool idea. I am going to need to remember this.
I like how this conversation is resolving. I wish I had caught the beginning.
The Principles behind the Agile Manifesto boil down to fairly simple concepts that can still be a challenge to understand. We need to communicate clearly so everyone understands what the goal is. We need to have mutual understanding of what "frequent" means - how often do we meet/discuss? How often is code delivered?
What do we mean by simplicity? If we reduce documentation, or eliminate formal documentation, how do we ensure we all understand what we have agreed to? These are thoughts we must consider for our own organization - no single solution will fit everyone, or every group.
"When individuals feel responsible, they will communicate."
Yeah, that is kind of important. Possible problem: when people feel responsible for the team, instead of for themselves, they turn into Managers, who are likely no longer doing (directly) what they really like doing - or they burn out and fade away.
In the end, it is about the team.
To get people functioning as a team, one must help them feel responsible as a team - not a collection of individuals. Then, they can communicate.
After a lovely lunch and wonderful conversations, we are BACK!
Next up is Lisa Crispin and Janet Gregory - yeah, the authors of Agile Testing. The book on... yeah. Cool. Their topic: Debunking Agile Myths. And they start with a slide of a WEREWOLF! Then they move to a slide of MEDUSA - and they put on "medusa headbands."
Testing is Dead - Maybe in some contexts. When Whittaker said that, it addressed some contexts, not all. The zombie testers (thanks, Michael Kelly) - unthinking drones - need to be buried once and for all. The others? Ummm, not so much.
ATDD/SBE tests only confirm behavior - yeah, unless you are looking to distinguish between Checks & Tests (as defined by Michael Bolton). Knowing that difference is crucial.
Testers must be able to program (write production code). - And their UNICORN (#1) appears! Do all testers need to write code? Well, maybe at some companies. Except in some circumstances, what is really needed is an understanding of code - even if what is needed is not really programming. Maybe it is broad sets of skills. Maybe the real need is understanding multiple crafts - testing and... other things. One cannot be an expert in all things. No matter how much you want to be.
T-shaped: the whole breadth/depth balance is crucial. Maybe "technical awareness" is a better description?
Agile teams are dazzled by tools - OOOOOoooohhh!! Look! Bright! Shiny! New! Wow! We need to have MORE TOOLS - or do we? What is it that fascinates people with tools? The best ones help us explore wider and deeper. We can look into things we might otherwise not be able to look into.
As long as they can help foster communication, in a reasonable way (I keep using those two words) tools rock. Just don't abuse them!
Agile teams always deliver software faster. - With a DRAGON! Looks like a blue dragon, to be precise... yeah, the buzzwords are killer... sprint, iteration, stuff people really don't understand. Let's be real, though. Sometimes - like always - when you change something - like a team's behavior - the team is likely going to slow down. It takes a while to learn how to do these new things.
Alas, sometimes it takes longer to UNLEARN old behavior (like all of them) than it does to LEARN new behavior.
Allowing people to learn by experimentation is important, and can help them learn to perform well - you know in a flexible, responsive way.
The result of doing agile well is better quality software. Speed is a byproduct!
Another break and I wandered into the Testing Lab where I found people diligently testing a Robot, which reacts to colo(u)rs - with the goal being to determine the rules behind how the robot responded. There was a version of Michael Bolton's coin game going, and a couple of interesting apps being tested - one by none other than Matt Heusser!
Wandering back to the main reception area where I stumbled onto a handful of folks who were excitedly discussing, well, testing. Trying to get caught up with email and twitter feed, I realized that Lisa Crispin was in the "Consensus Talks" (think Lightning Talks).
Stephan Kamper gave a nice summary of how he uses PRY (http://pryrepl.org ) to get good work done. This is a Ruby tool that simply seems to do everything he needs to do. "Extract till you drop" is probably the tag line of the day (other than the Unicorn meme running thru things). Pretty cool.
Uwe Tewes, from Gemalto, gave a summary of how his organization engages in regression, integration, and other tests, including UI testing. It was pretty cool stuff.
And after a break - Sigge Birgisson's presentation - the last track session of the day on Developer Exploratory Testing: Raising the Bar. He starts out well, with the "Developers Can't Test" myth, and he manages to include the now seemingly mandatory Unicorn image. Yeah, Unicorns started showing up after the opening keynote.
His gist is that Developers want to be able to be proud of their work - to show that they are producing stuff that is not only good, but great. The problem is, most developers have little or no training in testing methods, let alone Exploratory Testing.
The solution he used was to introduce paired work, with training on ET and Session Based Testing. The people working together build understanding of what the others are good at, helping garner respect. It was a huge gain for team cohesiveness ("togetherness" is the word Sigge used), along with encouraging developers to build a more holistic view of the product. Good stuff, man.
Using a fairly straightforward Session (Time Box) Method, testers pairing with developers are able to do some really solid work. It also helped developers understand the situation and be able to do good work. Frankly, it is hard for people to keep their skills sharp if they are not engaged fairly frequently in an activity. For Sigge, this meant there could be significant breaks between when developers are actually engaged in testing from one project to another - they do testing work for a while, and then a sprint or two later they are diving into testing again.
So with some simple mind maps (Smoke Testing Description for example) and a little guidance, they are able to be up and running quickly after each break. Cool.
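The session-based approach Sigge describes can be sketched as a simple data structure: a charter, a fixed time box, the pair working it, and running notes. This is a minimal illustrative sketch, not Sigge's actual tooling - all names and fields here are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class TestSession:
    """Hypothetical record for one session-based-testing time box.
    Field names are illustrative, not from Sigge's talk."""
    charter: str            # the mission, e.g. "smoke-test checkout flow"
    duration: timedelta     # the fixed time box for the session
    pair: tuple             # (tester, developer) working together
    notes: list = field(default_factory=list)

    def log(self, note: str) -> None:
        """Record an observation or issue found during the session."""
        self.notes.append(note)

# Example: a 90-minute paired session guided by a smoke-testing charter.
session = TestSession(
    charter="Smoke-test checkout flow per mind map",
    duration=timedelta(minutes=90),
    pair=("tester", "developer"),
)
session.log("Coupon field accepts expired codes")
print(len(session.notes))
```

Keeping the charter and mind map attached to each session record is what lets a developer pick testing back up quickly after a gap of a sprint or two.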
He's reminding us that we need to keep focused on the needs/concerns/value of the project. How we do that will need to vary by the Context.
And in talking about Stakeholder Involvement, he flashes up a picture of a roller-coaster, talking about keeping people involved, in the loop, and taking them for a ride. (Groan.) But really, it's a pretty good idea.
He describes the involvement of stakeholders, their participation in "workshops" (and not calling them "test sessions"), and focusing on paths that are reasonably clean, then branching out from there. Yeah, there may be a bit of a possibility of confirmation bias, BUT - this allows them to start from a safe place and move forward.
With testers working with developers and stakeholder/customers, there is less thrash, and the ability to communicate directly, and manage expectations. Again - a cool thought (I wonder how many more cool thoughts he'll have before the end of this session.)
Yeah, whining stakeholders - the ones that latch onto one thing that is not quite right - can slow things down. Sometimes keeping people on track is a great challenge. (Pete's comment: That is not unlike any other team in my experience.)
So, Sigge comes away from this convinced that Developers really CAN test. Business Stakeholders/customers can test very well. He reports no experience with Unicorns testing his applications. (Pete Comment: To me this is an incomplete consideration. I would expect this covered in future releases/versions now that Sigge has seen the significance of the Unicorn Factor.)
Closing keynote of the day is just starting with Lasse Koskela speaking on "Self Coaching." Yeah. Interesting idea. When a speaker states "This doesn't really exist. If you Google it, you get one book that has nothing to do with what I am talking about." OK. I'm interested.
And he got in the obligatory "Finland is in Europe" joke.
After admitting he is a certified Scrum Master, he claimed to be a nice guy. (Pete Comment: OK, we'll give him the benefit of the doubt.)
Lasse begins with a series of skills needed for self coaching.
Understand the Brain - He talks about basic brain function - like if the brain detects a large carnivorous animal, a reasonable response might be the brain sending a message that says "RUN!" There may also be a message that says "Be afraid." At some point, after running and after being afraid, another portion kicks in with a cognitive response: maybe that will tell us to look and see if the carnivorous animal is still chasing us, and whether we should still be afraid.
After verifying the animal is no longer chasing us, a retrospective might inform/influence our threat/reward models. This in turn can be informed by several considerations:
Status (is it current RIGHT NOW)
Certainty (is it possible this is less definite - or more definite - than we think?)
Autonomy (are we in control?)
Relatedness (what is our relationship to this situation - have we been there before? What about other people we know?)
Fairness (pretty much what you might think)
The issue is that perceived threats tend to outweigh rewards in this - so it takes many good experiences to outweigh a single bad one. This may be a challenge.
Reduce the Noise
In looking to overcome obstacles, we need to reduce - if not eliminate - the noise emanating from our own head.
Encourage good behavior - the visualization thing - and frame it in terms of what you want to do, not what you are afraid of / do not want to have happen. Funny thing - in golf, when folks tee up a shot, if that one goes bad and they need to tee up another, the result is rarely the same mistake. It actually tends to be the opposite.
Part of the problem is that once a bad thing happens, we tend to focus on it - a combination of "I can't believe I did that" (damaged ego) and "DON'T DO THAT!" (undermining intent by focusing on what you do not want).
Ernie Els, the golfer, is a good example of how to sort this out. He went from having a terrible year to rebounding back and leaving every other golfer in the dust.
Stopping the Brain -
That icky feeling when two pieces of information conflict. For example "You did it all wrong!" when you believe that you completed the task perfectly.
When our (Ladder of Inference) Reality & Facts / Selected Reality / Interpreted Reality / Assumptions / Conclusions / Beliefs & Actions are not based in fact and are essentially false in nature, we are setting ourselves up for failure. Getting out of this conflicting "box" reality is a huge problem - and is a significant portion of the problem set.
Changing that requires us to be aware of what is going on - when we have something to do that conflicts with what we really want to do, we are faced with a choice: address one level of "right," or give in to a negative reaction.
He is describing a "Frame" model similar to Michael Bolton's Frame model.
So - to summarize - Pause frequently and evaluate your own reasons; Check your own thinking; Obey the sense you have of right and wrong.
AND THAT WRAPS UP DAY 2!!!!