Thursday morning is here, the last day of Agile Testing Days in Potsdam, Germany. I managed to oversleep and miss the start of Lean Coffee. When I walked past, it seemed they had a good number of folks in the room, broken into two groups. Broken? Maybe "refactored" is a better term...
OK, that was a little lame. I think I may be a little tired yet.
So, here we go with another day. Today we have keynotes by Ola Ellnestam, Scott Barber and Matt Heusser. There are a variety of track talks as well. Rumor has it that at least one will combine unicorns with Harry Potter.
And, we are about to kick off with Ola Ellnestam on Fast Feedback Teams. Ready? Set? GO!
So, Ola launches into a story of trying to explain what he does to his kids. (Pete Comment: Yeah, if you have kids, it's one of those weird conversations to think about when you deal with software. Also, kinda digging the hand drawn slide deck.) It was a good story about kids and understanding. It also included the idea of feedback - when things (like games) are predictable, how much fun are they? Unless you figure out the pattern and your younger brother has not...
Showers are a good example of "feedback loop." Depending on how far the shower head is from the faucet handles, you may have a bit of delay - like the farther away the two are, the longer it takes for you to know if the water is the temperature you want.
Reminder - if you remove the feedback mechanism, you are not "closing the loop" you are kicking it open so the loop never responds.
Reminder - never presume that "everyone knows" - when you are the one who does not know.
The velocity of the project (or aircraft) will determine the timing for feedback. One cannot trust a response loop of, say, a couple of minutes, when the feedback involves aircraft at 33,000 feet. You kind of need it sooner - like - instantly.
Other times, consider how clear the communication is - the nature of the feedback can impact the understanding involved. Consider, if our task is to help solve problems, does the solution always involve creating software? Ola says while he likes software, he likes not creating software if there is another solution. (Pete Comment: Yeah - I can dig that.)
Instead of letting the backlog grow perpetually - which tends to freak people out when they look at it - consider paring it down - prioritize the list so the stuff that is really wanted/needed is on it. If the stuff that drops off comes back, then reconsider it. Don't let yourself get bogged down.
The problem is a bit like a bowl of candy - when the user stories are all "compelling" (Pete: by some measure) it gets really hard to choose. Limit the candy in the bowl to that which is important. This can help people understand. Allow the user stories to act as reminders of past conversations. When that conversation comes around again, perhaps the priority on that story needs to go up.
Ola tells a story about trying to pull from a build server - except there is a time difference between the time stamp on the build server and the machine he is working on. Problems resulted - noise and incomplete understanding in the response - which causes confusion.
Classic example of noise in response - what Ola calls "Chinese Whisper Game" - which I know as the "Telephone Game" - yeah. Start with one thing and by the time it gets told to everyone in the room and comes back to the first person, it is totally different.
Instead of looking for improvements in "outer" feedback loops, look at the inner-most loop. Feedback tends to happen in (generally) concentric loops, often centered around the life-cycle in the methodology in use. If the initial cycle takes a long time to get feedback, does it matter how efficient (or not) the outer loops are? Optimizing the inner loop as best you can will give you the opportunity to tighten the outer loops more than you can now.
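The concentric-loops point can be sketched with a toy cost model: the inner loop runs many times inside each outer loop, so shrinking it pays off everywhere. All numbers and names below are my own invention for illustration, not anything from Ola's talk.

```python
# Toy model of nested ("concentric") feedback loops. The inner loop
# (say, a test run) executes many times per build, and builds execute
# many times per sprint, so inner-loop time dominates the total.

def cycle_time(inner_minutes, inner_runs_per_build, builds_per_sprint,
               build_overhead_minutes=30):
    """Total feedback time, in minutes, for one outer (sprint-level) loop."""
    build = inner_minutes * inner_runs_per_build + build_overhead_minutes
    return build * builds_per_sprint

slow = cycle_time(inner_minutes=10, inner_runs_per_build=20, builds_per_sprint=10)
fast = cycle_time(inner_minutes=1,  inner_runs_per_build=20, builds_per_sprint=10)

print(slow)  # 2300 minutes per sprint cycle
print(fast)  # 500 minutes: a 10x faster inner loop cuts ~78% of the total
```

Notice that optimizing the fixed per-build overhead (the outer loop) could save at most 300 of those 2300 minutes; the inner loop is where the leverage is.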
This is true of classic Scrum cycles - as well as in other examples - like in Banking. Banking tends to run multiple batch processes each day. Yeah - that happens a lot more than some folks may realize. Recognize that the results of overlapping batches may impact each other, and that interaction shapes the nature and type of the feedback.
Moving on to double feedback loops - Generally stated - it is better to do the right thing wrong than the wrong thing right. For example - a radiator (room heater, for those in the States who don't know about such things) has a thermostat to keep a room at a steady temperature. Recognize that an open door or window may have a bearing on the results - if one is looking at how well (or poorly) the radiator is doing its job.
Bug Reports? Yeah - those are records of things we did wrong. We have options: look at them and figure out what went wrong, or make sure we don't do anything wrong. Reminder - the easiest way to avoid doing something wrong is to not do anything.
To move an organization, particularly a new organization, toward success, sometimes the easiest way is to reduce stuff that does not help. It may be user stories from the backlog - or it may be existing features of no value that can be removed. This will close loops that may currently only add noise instead of value. It can also speed the feedback return so you can do a better job.
Interesting question - What about slow feedback loops - those that start now, but the event for the feedback will not occur for some time? Well - good question. Consider Ola's flight to the conference. He bought round trip tickets on Scandinavian Air (SAS) - except there is a bunch of stuff going on with them right now, and his return ticket may not be "any use." So, he invested in a backup plan - specifically a 1-way ticket on Lufthansa- just in case. He'll know which one he needs when he goes home.
Right - so - I kinda took the morning off to practice my presentation and - well - confer with really smart people. So, after lunch - Scott Barber is up.
Scott Barber launches his keynote with a clip from 2001: A Space Odyssey - where he describes what was shown as not the dawn of man, but the beginning of development. He tracks this through manual, waterfall development, into automation - via horsepower and steam engines, with internal combustion engines to follow.
Then we get electricity - which gives us computers and the first computer bug - from there things go downhill.
Scott's assertion is that neither Agile nor Context Driven ideas are new. They are, in fact, how society, most cultures, most people, live their lives. He then wonders why so many software shops describe software development in terms of manufacturing rather than in terms of Research and Development. After all, we're not making widgets (which took a fair amount of R&D before they got to the point where they could be mass-produced).
Ummmmm - yeah - does anyone really mass produce software - other than burning a bunch of CDs and shrink-wrapping them?
So, when it comes to context driven or agile or... whatever - can we really do stuff that people say we do? Or maybe think we do?
Citing the fondue restaurant at CAST in Colorado. And the dinner with Jerry Weinberg.
And discussing what testing and development were like in the 1960s - like in satellites and aircraft and military stuff - you MUST know performance testing. Why? Because the tolerances were microscopic. No titles - just a bunch of smart people working together to make good stuff. Deadlines? Really? We have no idea if it will WORK let alone when it might be delivered. Oh. And they tested on paper - because it was faster and better than testing it on the machine.
Did this work? Well, it put people on the moon and brought them back.
Then two things happened.
Before 1985 (in the US), software had no value - it could not legally be sold as a product. Before then, the software had to do something - now it just needs to sell and make money. If it makes money - then why not apply manufacturing principles to it?
Scott then gives an interesting version of testing and development that is painfully accurate and - depressing at the same time. BUT - it resolved in a rainbow of things that are broadly in common, except for the terminology.
DevOps, Agile, Lean, Incremental, Spiral, Iterative, W-Model, V-Model, Waterfall.
Yeah - ewwwwwwwwwwwwwwwwwwww
So - until around 2010 stuff was this way. After that rough time frame - something happened...
Software production methods experienced a shift, where they split away, never to reunite. This gives us these two models:
1. Lean Cloudy Agile DevOps - Or The Unicorn Land
2. Lean-ish Traditional Regulated Auditable - Or The Real World
How do we resolve this stuff? Simple - we add value. How do we add value? We support the business needs. Yeah. OK
FLASH! Test is Dead - well - the old "heads down don't think stuff" is dead.
So, what about the test thing - the no testers at Facebook, etc. So what? If there's a problem, the next patch is in 10 or 15 minutes - the one after that will be another 10 or 15 minutes. So what? No one pays for it. Oh, the only time Facebook actually came down hard? Ya know what that was?
Justin Bieber got a haircut - and every teeny-bopper girl in the world got on all at once to scream about it.
In Scott's model - Facebook is the model. Don't worry about titles - worry about what the work is. Worry about the talented people you are working with - or not working with.
Scott predicts that the R&D / Manufacturing models will reunite - except right now we don't have the same language among ourselves.
Maybe we need to focus instead on what the Management Words are. If we speak in terms they understand - like use their words - we can get things sorted out. This helps us become an invaluable project team member - not an arrogant tester who acts like your bug is more important than $13M in lost revenue if it doesn't ship on time. (That is straight off his slide)
Help your team produce business valuable systems - faster and cheaper. Be a testing expert and a project jack-of-all-trades - Reject the Testing Union mentality.
Do not assume that you can know the entire context of business decisions. However, you can take agile testing and develop a skill in place of a role.
The ONLY reason you get paid to test is because some executive thinks it will reduce their time to a bigger yacht.
(Pete Comment: Ummmm - Yeah.)
And now for Huib Schoots on Changing the Context: How a Bank Changes their Software Development Methodology.
Huib, until recently, worked with Rabobank International - a bank in the Netherlands that has no shareholders - the depositors own the bank (Pete Comment: Sounds like a Credit Union in the States).
Huib worked with a team doing Bank Operations - doing - well, bank stuff. The problems when he came in included testing with indefinite understanding of expected behavior -- not a huge problem, unless the experts can't agree.
BANG - Gauntlet is thrown - Agile is not about KPIs and Hard Measures and Manager stuff. It's kinda scary. Manager says - You need templates and ... Eewwwwwwwwww. Not for Huib.
So - the test plans are non-existent and the bosses want stuff that doesn't really work - (Pete Comment: ...and the junk that never seems to make sense to me.) Instead, he asked if any of them had heard of Rapid Software Testing? Ummmm - No.
So Huib began working his "Change" toward Context Driven practices, RST, Passion as a tester (and for other things in life), Thinking - yeah - thinking is really important for testers (Pete Comment: it's a pity how many people believe they are thinking when in fact they are not.) - and to develop Skills over Knowledge.
With this, Agile practices came into play and acted as a "lubricant." Lubricant help things work together when they don't automatically really want to work together - they kinda rub against each other - that is why there's motor oil in your car engine.
Story Boards helped people talk - it helped people communicate and understand (to some level) what others were working on. Before, no one was really sure. Moving on - Passion became contagious. In good form, Huib dove in to show the team that it's OK to make mistakes - and he did. Loads of them. Each time it was like "OK, it's good, I learned something." Good move.
These changes led to the "Second Wave" - More agile testing, including shared responsibilities and pairing and... yeah. Cool stuff. Then some Exploratory Testing was introduced - by Michael Bolton himself. The thing was, Huib was a victim of his own success. Some 80 testers showed up when he expected half that number. Oops. Then, a cool tool was introduced, Mind Maps. They can help visualize plans and relationships in a clear, concise way. This led to concurrent Workgroups to share work and distribute knowledge and understanding.
Yeah, some tools are needed. But use them wisely.
What is ahead? Likely Session Based Test Management - loads of Automation (as they really don't have any) - Coaching (yeah) - Practice (definitely)
What made it work? Careful steps - passion - adaptability, building community, persistence (can you please stop asking questions? Why?) and - Yeah - the QA word - Question Asking!
What did not work? Plain straight training (don't do shotgun training and then not follow up). Project pressure - yeah, you are not doing that weird stuff on this project, it is too important. You can't do everything at once. Out and out resistance to change. We did that and it did not work.
Huib's suggestions - RST training - Passion - THINK! - Question EVERYTHING! - Testing as a social science - Explore (boldly!) - continuous learning.
OK - Recovered enough from my own presentation to pick up for Matt Heusser's keynote.
PLAY IS IMPORTANT - ok that is not really the title, but hey - that was ummmm - a little hard to sneak in here.
So, we are considering play and ideas and ... stuff. Matt shows a clip from A Beautiful Mind of John Nash describing game theory, whilst sitting in a bar when attractive women come in and ... well - apparently beer is a great thought motivator. Game theory presents that we can do work that will benefit us. Yeah, that much we get. Yet, Reciprocity means that we will act in the belief that in helping one person, we will also be helped, by some measure.
Why are we doing this? Matt pulled up four folks and did an exercise (before mentioning Reciprocity) and moving forward and - yeah - they acted against stand-alone game theory, in hopes of benefit later - apparently. And the expected outcome occurred - it's just one of those things - People like good endings and it happened - Reciprocity worked in this case.
Matt is describing software testing as The Great Game of Testing. Cool observation.
He's got a picture of a kanban board up - a real one - not a make believe one - The danger of course is that sometimes, there is a problem with the way work gets done. The "rules" are set up so everyone is happy and gets stuff done within the Sprint - except QA becomes the bottleneck and why isn't QA done? Never mind that the stories were delivered the day before.
Instead, if we look at a flow process where there are "workflow limits" in place - so the QA column has spots for a few stories, no new stories can enter dev until the stories in dev get pushed - So if dev can help QA clean their plate they can then push the stories that are waiting ...
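The WIP-limit mechanism above can be sketched in a few lines of code: a QA column with a fixed number of slots, where dev cannot push a new story until QA frees one - exactly the pressure that gets developers helping QA clean their plate. The class and method names here are my own invention, a minimal sketch rather than any real kanban tool.

```python
# Minimal sketch of a kanban column with a work-in-progress (WIP) limit.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.stories = []

    def can_accept(self):
        # Upstream (dev) may only push work while a slot is free.
        return len(self.stories) < self.wip_limit

    def pull(self, story):
        if not self.can_accept():
            raise RuntimeError(f"{self.name} is at its WIP limit; "
                               "help clear it before pulling more work")
        self.stories.append(story)

    def finish(self, story):
        self.stories.remove(story)

qa = Column("QA", wip_limit=2)
qa.pull("story-1")
qa.pull("story-2")
print(qa.can_accept())   # False: dev is blocked from pushing more
qa.finish("story-1")
print(qa.can_accept())   # True: a slot opened, flow resumes
```

The point of the limit is that the blocked state is visible and uncomfortable, so the whole team swarms the bottleneck instead of piling more "done" stories in front of it.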
So, sometimes things can work out. Elisabeth Hendrickson's Shortcut Game is an example of what happens when you try to short-circuit the activity list. It demonstrates what happens when we do "extra work" to make this sprint's goals, but may negatively impact the next sprint. That could be a problem.
The challenge of conferences, of course is to be able to implement the stuff you pick up at a conference. Sometimes you just need to do stuff. Consider this - when you go to a conference, write a two page report with three things that could be done - like - are not physically impossible. Add a fourth that would need help to get done. THEN - do the three things and try the fourth. You never know what might happen.
This ends the last day of the conference. I need to consider the overall event. Look for a summary post in the next few days. Right now, my brain hurts.
Thank you Jose, Madeleine and Uwe!
Thank you Potsdam!
Finished with engines