Thursday morning - Breakfast & Lean Coffee. Loads of tired people sitting in the Fritze at the Dorint Sanssouci in Potsdam. Good energy though - the coffee helps!
Setting up for today's opening keynote -
NOTE! I will try REALLY HARD to clearly flag my own reactions to assertions made in the presentations I attend. Going as fast as my tired fingers & brain allow... Will clean the blog post up later.
===
Keynote: David Evans - Visualizing Quality
The Product of testing is..... ? Evans launches into the "value" questions - what will more and better testing do for the product?
The product of testing is Confidence - in the choices and decisions we have to make in the steps to understand the product - not in the product itself.
Testing services are not the judge, not on trial - It is the expert witness. How credible are we when it comes to providing information around the product. In the end, we must balance the risks and benefits of implementing the project - launching as it were.
Evans then describes/launches into the Challenger shuttle disaster (28 Jan, 1986). In this he describes the meeting the night before - the subject of which was "Will the O-rings suffer a catastrophic failure due to the cold temperatures?" Of course, now, we know the answer was "yes."
Many pages of actual copies of the meeting agenda and technical notes - yeah, these guys really were rocket scientists, so there are loads of technical data here. They launched - the shuttle blew up.
"Failures in communication... resulted in a decision to launch based on incomplete and sometimes misleading information, a conflict between engineering data and management judgements." William Rogers, Investigator
Evans - We need to make sure that we are making it as clear as possible what the information we are presenting means and what the likely consequences are to the decisions based on that information.
Consider - the McGurk Effect - when there is a conflict between what we see and what we hear, the tendency is to rely on the seen, not the heard. Is this what happened with Challenger? Is this what happens with software projects?
Now - the US military budget in 2008 was $607 billion. (Pete: that's a bunch of cash.) However, a single data point conveys not terribly much information. Adding information on other countries gives more. But compare the spending to GDP - the total output of a country - and while US military spending, in gross terms, is the sum of the next 8 countries' spending combined, as a share of GDP the picture looks rather different.
BUG COUNTS! Any negative must be expressed in the context of "what are we getting that is positive."
In response, Evans posts the first clearly identifiable infographic - with charts, lines, numbers, etc... This was a graph made in the 1860s of Napoleon's invasion of Russia in 1812. The lines represent the size (numbers) of the Grande Armée (Napoleon's army), starting out at full strength with a very wide swath and gradually narrowing over time as the army shrinks through attrition (death).
Consider how we are building/using/exercising testing suites compared to the actual documentation
This is in contrast to the London Tube map - which is fantastic for giving an idea of how to get where in London, yet bears little relation to the actual street map.
US Election maps - red-state/blue state - looks like Obama should not have won - except the land doesn't vote. Adjusting by STATE level you get something DIFFERENT. When you look at each State by COUNTY, you see something different again - straight "results" and the results adjusted geographically by population density - gives us a series of interesting information for consideration.
Then there is the Penfield Homunculus, where the difference between sensory and motor control is - remarkable.
All of these boil down to 1 important thing - a diagram is a REPRESENTATION of a system - NOT THE SYSTEM. There are Other considerations around how that same system can be represented FOR A DIFFERENT PURPOSE.
Be careful of what your data points are PERCEIVED to represent.
Information Radiators can help us visualize the process. Simple low-tech stuff is still - visualization. Suggests representing people on the board - AS WELL AS TASKS. So - not just what is in flight, but who is working, or taking point, on that task. (Pete: Good addition.)
Citing James Bach's "low-tech testing dashboard" to express state of testing - and note IT IS NOT COMPUTER GENERATED!
Remember this:
* What is the thing I am doing - what do I want to represent (and why)
* Stay focused on the information you wish to present - what message is given by a bar chart vs a "mountain range" of the same data;
* Transitions - if a unit is a unit, how do you know what the unit is? Save the X-axis for time - most people presume that is what it is. Order your data so people understand the point YOU WANT TO MAKE - the sequence on the chart need not match the sequence in the app/software dashboard/whatever.
Remember - Testing is Decision-Support - it does not do "quality" - it gives information for people to make decisions on the product.
===
Track Session - Ajay Balamurugadas - Exploratory Testing in Agile Projects: The Known Secret
Ajay is a testing guru (ok - that was my personal comment) - with interests in many things. He begins by asking if the Agile manifesto has a bearing on testing in projects in an Agile environment. He then presents a definition of ET from Cem Kaner - slightly different than James Bach's definition. He then discusses what this means -
Important point: If your next test is influenced by the learning from your previous test, then you are doing exploratory testing.
AND PROCEEDS to LAUNCH INTO A MINDMAP based on stuff (words) from the audience. (Pete - Nice - mob mind mapping?)
This process is an interesting exercise in "testing" the session while he is presenting it. By getting people to contribute aspects of ET, based on their understanding, this is drawing people into the conversation. (Pete: photo taken from Ajay's tablet, to be tweeted or otherwise - Hey - LOOK!)
Whatever you do, if the customer is not happy, oh dear.
Problem - much of what is on the picture is "positive" - what about problems?
* Mis-applied/misunderstood how to apply
* Mis-represented
* Hard to explain to management
* Hard to ensure coverage
* Miss important content (and context)
* Perceived as "monkey testing"
Skills related to ET:
Modelling; Chartering; Generating/Elaborating; Recording
Resourcing; Observing; Refocusing; Reporting
Questioning; Manipulating; Alternating;
Pairing; Branching/Backtracking
Conjecturing
Consider that "Exploratory Testing is a mindset using this skillset." (Jon Bach)
The Secret of ET -
Skills
Experience
Customers/Context
Risks
Exposure/Exploration
Test
=====
Track session: Venkat Moncompu - Transforming Automation Frameworks to Enable Agile Testing - a Case Study
(Pete: I'm Missing points. This guy is speaking really fast - bang, bang, bang - challenging to keep up!)
Agile automation by itself can't bring about faster time to market. Traditional automation (built outside an Agile environment) is representative of the problems, which include: (usually) UI dependent, accrues tech debt, builds up waste.
Function / behavior attributes are central to ensuring software quality. Challenges in automation include variances in language from tool to tool - and automation is often restricted to "regression."
The question becomes how do we make this stuff work -
Make the features scriptless and self documenting tests;
Develop scripts before the UI is ready;
Intelligent Maintenance;
Options to run tests across multiple layers;
Intuitive, business-friendly - behavioral testing
Presenting known/reasonably common (Pete: at least to me) multi-layer coverage ideas. (Pete note: some of the slides are really hard to read between the font & background colour; points spoken but not on the slides are fine, but referring to stuff on the slide makes it challenging.)
Flexibility in test automation is needed to reduce/minimize two concerns: 1. Beizer's pesticide paradox, where methods to find bugs will continue finding the same types of bugs; 2. James Bach's minefield analogy - where if you follow the same path, time after time, you clear a single path of mines (bugs) but learn nothing about any others.
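Neither point came with code in the talk, but one lightweight mitigation for both is simply to vary the path on every run. A minimal sketch, assuming a hypothetical list of (name, callable) tests - `run_suite`, `failing_test` and the test names are my own invention, not from the session:

```python
import random

def failing_test():
    # Hypothetical always-failing test, purely for illustration.
    raise AssertionError("boom")

def run_suite(tests, seed=None):
    """Run (name, callable) tests in a randomized order, so repeated
    runs walk a different path through the product each time; passing
    a fixed seed keeps any failing run reproducible."""
    rng = random.Random(seed)
    order = list(tests)
    rng.shuffle(order)
    results = []
    for name, test in order:
        try:
            test()
            results.append((name, "pass"))
        except AssertionError:
            results.append((name, "fail"))
    return results

tests = [
    ("adds", lambda: None),       # stand-ins for real checks
    ("subtracts", lambda: None),
    ("divides", failing_test),
]
```

Running with `seed=None` gives a fresh order every time; recording the seed of a failing run lets you replay the exact same path.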
Balancing Automation & ET - the challenge is to keep things in touch with each other. (Pete: there seems to be a growing trend that there is a clear dichotomy between automation and ET. I'm not convinced this is still the case w/ more modern tools. Need to think on this.)
Cites "No More Teams" - the act of collaboration is an act of shared creation and/or discovery.
Keeping the balance between behavior and function - and reflecting this in the test scripts may help with making this testing clear and bring the value to the test process. (Pete: I'm not certain the dichotomy between "Agile" and "Traditional" automation is as clear - or valid - as the claims some people make about it.)
===
LUNCH!
===
Keynote: J B Rainsberger -
BANG BANG BANG - If we're so good at this why aren't we rich yet? (Kent Beck, 2003)
The fact is, we tend to sound zen-monk-like when people ask what Agile is about. Well, it's like ... and it's a mindset ... and we need to stay focused on... OK. We get it.
Part of the problem is the mindset that improvements will cost something. This is kind of done with us being a pain in the butt with our own touchy-feely thing. We argue against restrictive dogmatic rules and we land in the thing of "we need to change this and focus on the mindset" - followed by papers, books and whatnot that date back to 2005 or 2006.
Etudes - a particular piece of music that is intended to incorporate specific exercises (Pete: in context of music). Why don't we do that with Agile? Practice skills while doing our work?
For years, we argue stuff - and still need to make things work and - somehow - something is missing.
The problem is "They" have no real reason to change - so "They" will work to the Rule - Translated, they'll do the absolute minimum based on the "rules."
Citing "New Strategic Selling" for the reasons why people don't buy. The idea of perceived cost vs perceived value is the crux of the problem. We fail in that manner.
Cites Dale Emery - A person will feel motivated to do something as long as they have a series of elements and see value in what they want to do. Sustaining change is HARD -
The most we can really do is support each other - we know the answers and what needs to be done - so hang in there and help, listen. We can either storm off and never deal with the idiots again. Or - We can pity ourselves. Or...
We can look at things in other ways. Shows a video from Mad TV with Bob Newhart as a Doctor of some sort. And a "patient" who fears being buried in a box - His solution - STOP IT! In fact - every problem has a solution of STOP IT!
Let us consider what our most "well advertised" popular practices are... stupid.
If you haven't read "Waltzing With Bears" - then do so. There's a chapter/section to the effect of "Agile Project Management is Risk Management" - which is kind of part of what ET does. Why? Maybe for the same reason that we have daily stand-ups and manage to completely miss stuff that gets /stated/ - if it can't get resolved in a few seconds, it gets lost.
Cucumber - what most people associate with BDD. Consider this...GAH! THIS JUNK ENDS UP LOOKING LIKE COBOL!!!!!!!!!! BAD!!!!!!!!!! We get so much baggage that stuff gets lost because we're worried about the baggage -
Rule - INVOLVE THE CUSTOMER - DUH! (And Joe says he's been saying that for 15 years.)
DeMarco/Lister's "Lost but making good time" - excellent point - AND the tree swing cartoon (meh, I'll dig a copy out and post it here.)
RULE - Talk in examples - e.g., lost luggage: "Which of these bags is closest to your bag?" followed by "How is your bag different from the one in the picture?" This allows us to get common examples present, and triggers information that surfaces important details that might otherwise be missed/overlooked.
One problem with this is we forget to have the conversation - we want to record the conversation but forget to have it in the first place. (Cites Gojko Adzic's book on communication.)
The problem with recording stories on a card and doing the "this is how it's done" thing: cards are fantastic for figuring out how to create stories - but many years later we have not examined any other way to do things; we are simply shifting the form of the heavily documented requirements. Translated - you should not still be limited to "As a, I want, so that" to describe stories.
This gives us some really painful problems.
Promiscuous Pairing and Beginner's Mind: Embrace Inexperience (Arlo Belshee)
Awesome paper on beginning/learning to do pairing.
Angela Harms - "Its totally ok for you to suck... that means you can walk around and listen to people criticize your work and tell you it sucks. It means 'that's a stupid name' is OK to hear and to say."
The point of a RETROSPECTIVE is to be part of a continuous improvement process. Usually that part gets ignored. The reason it gets ignored is lack of trust. The thing is - trust is the key to things working - and this comes from the willingness to open yourself up to vulnerability.
Consider - James Shore's paper that Continuous Integration Is An Attitude, Not a Tool. CI is NOT a BUILD!!!!!!!!!!!!!!!!!
When we impose Scrum on the world without them understanding WHY and THE POINT - we get relabeled stage/stop-gate stuff.
Part of the problem is EGO - WE DON'T LIKE TO FEEL SOMETHING OTHER THAN AWESOME!
AND - since he's running out of time, Joe is BLOWING through the handful of slides - (Pete: he's got loads of visuals - I hope the slide deck becomes available cuz there is no way I am capturing more than 1% of the balance.)
One thing - consider throwing test ideas up on the wall and playing the "This is what I hate about this test idea" game. See what happens.
AND WE'RE DONE!!!!!!!!!!!!
====
Consensus Talks -
OK -Lost connection to my blog's hosting server and THAT is a problem.
--
Pete note: I finally am reconnected - I got in late in a very nice woman's presentation - I met her in line for lunch and wanted to hear what she had to say - except I've forgotten her name and could get nothing recorded except one really, really important point she made.
Monika Januszek - Automation - a Human Thing (its for people)
Tools, processes, approaches, process models, STUFF that we want people to do differently or change - including automation - in order to be adopted and accepted - and then used - address ONE BIG QUESTION - "What's in it for me?"
If we can't address that - INCLUDING FOR AUTOMATION MODELS - don't expect to see any meaningful change.
--
Next up - Lindsey Prewer -
Pete note: So busy getting connected and following up on the last talk - now trying to catch up here!
Here we go. You can't fix problems by throwing people into the fray. When you're hiring several people a month, and the goal is to make things happen cleanly - THEN - there is a cycle of hire/train/start testing/expand training.
Because they could not simply keep bringing people in - smart automation approaches were needed -
Start with the values and goals
Make hiring scalable
Have clear priorities
(Pete: at least 1 more I missed)
Mind the points in the Agile Manifesto - Particularly the FIRST one - the idea of PEOPLE.
Without people - you are doomed.
--
Fairl Rizal
Agile Performance testing
Started with the difference between stress and load testing. (Pete: OK - I get that.)
Made the significant point that not everything needs to or should be automated. (Pete: Almost an afterthought? Possibly the result of trying to fit too much in from a much larger presentation.)
---
Anahit Asatryan -
Continuous Delivery
SAFe - Scaled Agile Framework
Tool - AtTask -
--
Pete: And I finally have connection to my blog again. (hmmm)
Lars Sjödahl - Communication
Loads of stuff that is really good - so here's a summary (lost connection thru most of the presentation)
Realize that positive questions may impact results - e.g., Would you like coffee? vs Would anyone like coffee? vs I'm getting coffee can I get some for anyone else?
Silence may imply a false acceptance - OR - presence of groupthink - encourage full discussion
Major fail - 1991 - Christmas cards from a company called Locum -
Except someone wanted to change the o in the name to a heart - reasons unknown -
And no one said anything - and - they became famous on the interwebs as a result.
---
Eddy Bruin
Gojko's Hierarchy of Software quality
Verify assumptions - except first you must learn/find/discover the assumptions?
And the Tree Swing cartoon shows up AGAIN!
Without getting inside the heads of the customers, you may not know the real expectations/requirements.
"You have to start with the customer experience and work backward to technology." - Steve Jobs
If you have multiple paths, with a channel each, consider cross channel testing - (Pete - but don't cross the streams - Ghostbusters)
Consider user based analytics - who/what is your customer - then you can begin to derive possible paths.
====
Pete: So now that I can get to the blog - yup - lost connection - AGAIN. Trying to resume for the final keynote of the whole show - bare bones notes on that as the connection is sporadic.
Lisa Crispin and Janet Gregory - On Bridging gaps.
After a tremendous skit of bringing testers and developers together with customers which included a tremendous narration performance by a tester who happens to live in Michigan (ahem) - they launch into a series of experience reports around false communication models, restrictive models within the organization - the lack of any recognition of contractual understanding.
Simply put - without Trust, this stuff does not happen.
They do a fine job of referring to keynotes and selected track talks that have been presented during the week.
-- Lost connection again - and its back --
Dividing work into small pieces, baby-steps, is one way of influencing work and making things happen. It makes it a bit more palatable. It makes it easier to work on small pieces. It makes it easier to simply do stuff.
And there is a new book coming out from the Agile Testing Wonder Twins.
===
This wraps up the conference. I have more notes coming from Day 0 - the Tutorial Day. We have some interesting notes to finish...
AND - The conference organizers announced that next year's conference (November, 2014) will have a kindergarten to mind the kids of attendees. (Cool idea.)
===
Good night -
Finished with Engines.
Thursday, October 31, 2013
Wednesday, October 30, 2013
LIVE! Agile Testing Days 2013 - Day 2! In Potsdam!
Wednesday dawned bright and early (well, it dawned about the same time it always does) on a group of very tired conference participants. Last night there was the "Most Influential Agile Testing Professional" awards banquet (congratulations to Markus Gaertner, who won!). This also featured a Halloween theme, complete with costumes and ghoulish decorations.
Loads of fun, but made getting to Lean Coffee almost an impossibility and cost me time getting into the "Early Keynote" by Chaehan So.
So, here we go!
The "Early Keynote" title is "Business First, Then Test" - which pretty well sums up the core ideas presented. Begins with a fair description of product owner and tester having potential areas of conflict and the problems that result from that. A simple (well, maybe not simple - common perhaps?) approach to addressing this is to share experiences and discuss the intent in a safe environment. Chaehan's example was "drink beer" (Pete: yup, I can agree!)
Instead of mapping use cases/user stories to abstract buzz-wordy terms, use the same use case or user story name/identifier the Product Owner is familiar with. Pretty solid idea, not new (to some of us) but important to state.
References coming from the use cases/user story, including data relationships, can result in complexities not obvious to the technical staff, often caused by abstraction to "simplify" the representation. However, sometimes the representation itself is the issue. (I'm not capturing this idea well, but I think this covers the gist of it.)
The idea of relationships and abstraction argues against the common "IT/Geek" approach of simplifying for themselves - DON'T DO THIS. Keep the reductions at a business-intent level. Chaehan suggests doing THIS by mapping the user story across multiple channels - not redefining the stories to track to the channels themselves.
If you are working on a web-based ordering system, the "story" is replicated in each use channel. This makes for a complex (and difficult to execute) test path and representation of needs, process and the presentation of information - even if the implementation of this is complex.
Keep the information as simple as possible! This is the point of ALL reporting to Management!
Design to Community - D2C - create a simple design that reflects what needs to be done. Like many things this allows for multiple levels of abstraction - and avoids the itchy-scratchy feeling that some people have in relation to having tests/progress reported to them in terms they don't use.
Discusses how the cost curve of correcting problems in the application is usually presented in a manner appropriate to "waterfall" and not so much to Agile. This is an interesting view of the commonly referenced hockey-stick graph/chart (yeah, the same one shot to pieces in "The Leprechauns of Software Engineering").
==
Second Keynote - Christian Hassa on "Live it - or leave it! Returning your investment into Agile"
Describing his presentation with Matt Heusser at the Agile Conference in Nashville, Matt made the observation that "scaling Agile" was interesting - but how does that relate to testing? (Pete comment: gulp)
Scaling Agile is often presented as AgileWaterScrumFall - OR Disciplined Agile Delivery (DAD). He then draws comparisons to the "Underpants Gnomes," who have a business plan something like:
Phase 1 - collect underpants;
Phase 2 - ??;
Phase 3 - profit.
Except the problem is that phase 2 thing. Most people confuse the "get ready to produce" as phase 2 - it actually is in phase 1.
Scaled Agile Framework - not so different than the Underpants Gnomes. There are still gaps in the model - holes that seem present in phase 2.
If we fail to focus on unlocking value, and instead focus on costs, we miss opportunity.
The SAP "Business by Design" model is not that far from this either. The published estimations from 2003 simply failed to materialize. The problem was related to attempting to model the product on current clients/users of SAP, not on what the intent was.
Presents an example of applying (mis-applying?) Scrum. As the team worked forward, the backlog of requirements grew. How? The team dove in and worked on the project diligently - and still the backlog grew.
After a high-level meeting with "what is wrong?" as the theme, it dawned on the Product Owner that the problem was attempting to identify all the possible requirements instead of focusing on the core aspects that were needed/wanted so the product could be delivered /finished/ on time. The additional ideas may be incorporated into future versions/upgrades, but get the stuff out there so the product can be used - then people can figure out what is really needed.
"Your job as developers is not to develop software, your job is to change the world." Jeff Patton
Assertion: "Your job as tester is NOT to verify software, job is to verify the world is actually changing (fast enough.)"
Yup. The problem we in Dev (including testing) have is that we're a bit like Pinky & the Brain - we want to change the world/take over the world, but we fail to do so - we don't look long enough - we focus on the minutiae and not the big picture. (Pete comment: OK, I'll reserve judgement, though I like the P&B reference!)
Turns to Scaling TDD for an enterprise. Cyclical feedback loops (loops within loops) can provide insight within each pass/iteration. (Pete note: ok - seems interesting - consideration needed here on my part.)
Turns to Impact Maps as a tool to facilitate communication / transparency with stakeholders. Interesting example walk through (but it sounds a bit hypothetical to me) on applying the core ideas to this. Goals/Actors/Impacts/Deliverables - (Pete: OK - I get that.)
Pete question is, does this translate well to people who may not recognize the intent? I suspect it does - by forcing the consideration that seems "obvious" (by some measure) to someone (who may or may not matter.)
By using impact maps, we can then apply "5 whys" to features - (Pete: that is an interesting idea I had not considered. I kinda like it.)
Working on scaling /anything/ tends to get bogged down in goals - create a roadmap of goals to define what it is / where it is you'd like to go. Predicting the future is not the point of defining goals - instead look to see what you'd like to achieve.
Test goals & impacts are similar in that they can act as guides for the scale, measure and range of each goal/activity. Finally - deliverables: smaller slices delivered to production make it easier to get the product out there and improve it while still developing it (Pete: fair point.)
Story Maps allow us to examine what it is that we are trying to implement, no? Mapping the story can make aspects clear we have not considered. Rather than "aligning to business goal" we can align to "actor goal" - This can help us view our model - and see flaws, holes or conflict.
By defining a "likely order of events" we can see what the experience of the user will be, without defining what the software does. It allows us to remain true to the spirit of the purpose through each potential path.
This, in combination with the other tools described, help measure the progress and check scope creep. If we can identify this, we can then identify the purpose more clearly and identify potential and problems being introduced.
We can also use Story maps to control flow and define relationships between components and find potential conflict. As we get more information we can define the higher/lower priority decisions around the story maps. The higher the priority, the finer/more detailed the story maps become. The lower the priority, the chunkier, more nebulous the story maps become.
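The priority-slicing idea above can be sketched in a few lines. A minimal sketch, assuming a hypothetical web-shop story map - every activity and story name here is invented for illustration, not from the talk:

```python
# Hypothetical story map: activities as keys, stories beneath each,
# tagged with a priority (1 = the thin end-to-end walking skeleton).
story_map = {
    "browse": [("view catalog", 1), ("filter by size", 2), ("save favourites", 3)],
    "buy":    [("add to cart", 1), ("checkout", 1), ("gift wrap", 3)],
    "track":  [("order status", 2), ("delivery notifications", 3)],
}

def release_slice(story_map, max_priority):
    """Return the stories at or above a priority band, per activity -
    the slice to deliver first."""
    return {activity: [story for story, prio in stories if prio <= max_priority]
            for activity, stories in story_map.items()}
```

`release_slice(story_map, 1)` yields only the walking-skeleton stories; raising the band later pulls in the chunkier, lower-priority items as they get refined.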
WOW! a Real example! (as opposed to hypothetical)
Sprints expanded to 4 weeks in this case. The first sprint had issues (ok, not uncommon) Yet by the end of Sprint 2, the core functions were in place. By focusing on the MOST IMPORTANT features - the top priority story/story maps could be implemented cleanly, expanding ideas/needs as project developed to include the lower priority needs.
Pete: OK - completely lost the thread of his last points but I got pictures!!
General Gist - COMBINE TOOLS and TECHNIQUES to make things work. A SINGLE tool or technique may have value, by combining them we can balance things a bit better.
Book Recommendations -
How to Measure Anything - Douglas W Hubbard
Impact Maps - ???
And BREAK TIME!
==
Track Session - Gitte Ottosen - Making Test-Soup on a Nail - Getting from Nothing to Something
Gitte is a Sogeti consultant speaking on Exploratory Testing. OK. Here we go With a Unicorn!!
Starts with James Bach's (classic) definition of Exploratory Testing. (Pete: yeah, the one on the Satisfice page)
Describing fairly common challenges in project limitations, liabilities and personality conflicts and the potential for problems. The PM does not want "too many hours" used - views testing as overhead. And the Test Management Org wants "everything documented... in HP QC."
Fairly obvious solution - keep it simple. When people pretend to be Agile, it is a challenge for everyone involved. The challenge is to make things work in a balanced way, no? Gitte was not an "early adopter" of mind maps, and described how she created bullet lists and converted them later - OK, I can appreciate this. Then there were issues with the documented structure of the app - which was non-existent. This is something we all get to play with sometimes, no?
So what's available? Boundary analysis, pairwise (orthogonal arrays - same thing, diff name), classification trees, etc. (Pete: Yup - all good approaches). AND she plugs Hexawise (Pete: yeah, way cool product!)
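As an aside, the core pairwise idea those tools implement can be sketched in code. This is a naive greedy sketch, nothing like the optimized arrays Hexawise produces - `all_pairs` and the example parameters are my own invention:

```python
from itertools import combinations, product

def all_pairs(params):
    """Naive greedy pairwise selection: repeatedly pick the full
    combination that covers the most not-yet-seen (param, value)
    pairs, until every pair appears in at least one test."""
    names = sorted(params)
    # Every (param, value) pair that must appear together in some test.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    candidates = [dict(zip(names, values))
                  for values in product(*(params[n] for n in names))]
    tests = []
    while uncovered:
        def gain(c):
            return sum(1 for pair in combinations(sorted(c.items()), 2)
                       if pair in uncovered)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break
        uncovered -= set(combinations(sorted(best.items()), 2))
        tests.append(best)
    return tests
```

For three two-valued parameters (say browser, OS, language) this yields a handful of tests instead of the full 8-way product, while every value pair still shows up somewhere.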
On examination - it was possible to look at "cycles" and how users/customers are expecting the new app to work. The "documented requirements" did not exist - and maybe they were not discussed and understood. So - the question becomes: when expectations differ between dev/design folks and customers/product owners - what happens? "Learning opportunity"
Decision trees and process flows can help with this - examine what the customer/user (or their representatives) expect to happen and compare those with developments - as a whole. Then exercise the software. See what happens. Exercise the things of interest THEN.
Testers (her) worked to support the team by "translating" the user stories into English - because of the team distribution, writing them in Danish was kind of a problem, as some folks spoke/wrote Danish (Danish company) but others did not - ewww
The good news is, when she exercised the software rather than documenting what she was going to test, she found problems. The product owner noted this and thanked her. By focusing on testing, she found she enjoyed testing again (Pete note - yeah - that helps)
Interesting variation on mind maps - Use them to document testing - instead of step-by-step approach, simply mind map of function points to be tested. (Pete Note: I do something similar to define sessions and charters for the sessions.)
==
Track Session: Myths of Exploratory Testing Louis Fraile and Jose Aracil
Starts with a fairly common BANG - "Who is doing Exploratory Testing?" Loads of hands go up. (Pete note: ET by what model? Are they doing what I think of as ET?) (Note - they also did a pitch that they are looking for people to join the company - boys - is that cricket?)
To do ET well, you need to...
"Inspect and Adapt" - change your thinking and observe what is going on around you.
"Be creative / take advantage of your team's creativity" - let people do their thing
"Additional to other testing" - don't just do ET - do other testing "like automation"
"Quickly finds defects" - wait - is that a key to success or an attribute of ET?
"Add value to your customer" - hmmmmm what does this mean?
"Test Early! Test Often!" - what?
Myths...
Myth 1 - ET is the same as Ad-hoc Testing
"Good ET Must be planned and documented" -
You must know -
what has been tested;
When it was tested;
what defects were logged.
Some ideas -
Testing Tours - Whittaker
Session Based Testing - Bach/Bolton
Something Else (Huib suggests Mike Kelly's ideas and thrashing Whittaker's tour ideas)
Myth 2 - ET Can't be measured
Multiple measurements available - SBTM, etc.
Pete comment - blah - what?
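For what it's worth, a session-based approach gives you numbers almost for free. A minimal sketch, assuming hypothetical session records with a test/bug/setup (TBS) time split - the charters and minutes here are invented for illustration:

```python
# Hypothetical session records in the spirit of session-based test
# management: minutes spent on Testing, Bug investigation, Setup.
sessions = [
    {"charter": "Explore login error handling", "test": 60, "bug": 25, "setup": 15},
    {"charter": "Explore search filters",       "test": 80, "bug": 5,  "setup": 15},
]

def tbs_breakdown(sessions):
    """Aggregate the TBS split across sessions as percentages -
    one simple, reportable measurement of exploratory work."""
    totals = {"test": 0, "bug": 0, "setup": 0}
    for s in sessions:
        for key in totals:
            totals[key] += s[key]
    grand = sum(totals.values())
    return {key: round(100 * value / grand, 1) for key, value in totals.items()}
```

Alongside session counts and charters completed, a breakdown like this gives management something concrete without turning ET into scripted testing.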
Myth 3 - ET is endless
Pete comment - no idea what their point here is. sorry
Myth 4 - ET Can't reproduce defects
Be an explorer - really?
Be like David Livingstone from the video/computer game -
Guys he was a real person ( http://en.wikipedia.org/wiki/David_Livingstone )
not just a guy in a video game
Record video, use screen capture, analog recording (pen & paper)
Empower developers - adopt one.
Was that video really needed?
Myth 5 - ET is Only for Agile Teams
Pete comments
- what?
- CMMi works with ET? REALLY? By what definition of "CMMi Works?"
Myth 6 - ET is not documented
Testers do things by not doing things in "Lonely Planet"
And then there are the ones who DO things in "Lonely Planet"
Pete comments - here to end
- stretching the metaphor from Whittaker's tours just a little?
What? "They don't do TDD with ET?"
Boys - TDD is a DEVELOPMENT tool - not a TEST TECHNIQUE
ET is an APPROACH not a TECHNIQUE
DIFFERENCES MATTER. (shouting in my blog - not the room)
===
Keynote - Dan North (@tastapod) - Accelerating Agile Testing - beyond automation
Opening assertion- Testing is not a role it is a capability.
The question is - How do Agile teams do testing and how does testing happen?
Much effort is put into things that may, or may not, move us along. The idea of backlog grooming is anathema to Dan North. (Pete - something to that) The thing is, in order to improve practices, we need to improve capabilities. When people are capable of doing something, it makes it easier for them to actually do that thing.
We can divide effort into smaller pieces - sometimes this makes sense, sometimes there are problems. Sometimes there is a complete breakdown in the economic balance sheet of the software. When teams shift to "short waterfalls" you get "rapids" - and rapids are not the same as "rapid development." Sometimes slicing smaller doesn't make things better.
"User Experience is the Experience a user has." (OK - that was a direct quote.) Translated - people will have an emotional reaction (experience) when they use the software/app/whatever. Thus, people line up all night and around the corner to buy the newest Apple device.
"Don't automate things until they are boring." If you are six sprints into something and have not delivered anything to the product owner/customer/etc., you are failing. The team may have developed all the cool interface stuff, test engine, internal structure - but if the product is not being delivered - you are failing.
You have to decide the stuff you want to do - and base that on the stuff you choose not to do -
Opportunity cost - all the other things you could be doing if you weren't doing what you are.
The problem of course is we may not actually know what those things are. The question of what can be tested, and the actual cost of doing that testing, is a problem we may find hard to pin down, let alone understand.
When there are problems, we need to consider a couple of things - Is it likely to happen again? What is the chance of that happening again? How bad will that be if it happens again? These lead to "If that happens here" how bad will it be?
Thus Netflix (more traffic than porn, by the way) does not worry too much if a server is down - they (and Chaos Monkey) are interested in what portion of their thousands of servers are down right now. How much of the total is not available? Since failure of some portion is a given, why do we pretend it must be avoided?
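North's Netflix point can be made concrete with a little arithmetic: with a large fleet, some individual failure is near-certain, while losing a meaningful fraction of capacity is vanishingly unlikely. A sketch - the fleet size and per-server uptime are invented for illustration:

```python
from math import comb

def prob_at_least_up(n, k, p_up):
    """Probability that at least k of n independent servers are up."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
               for i in range(k, n + 1))

# 1000 servers, each up 99% of the time:
p_no_failures = 0.99 ** 1000          # ~4e-5: pretending failure is avoidable is silly
p_enough = prob_at_least_up(1000, 950, 0.99)  # >=95% of capacity available

assert p_no_failures < 0.001   # "no server ever fails" basically never happens
assert p_enough > 0.999        # "enough of the fleet is up" basically always holds
```

That asymmetry is exactly why chasing "no failures" is the wrong target for a fleet.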
Cites xkcd AND The Leprechauns of Software Engineering book - stuff we know is bogus. Many of the things we believe have little or no evidence supporting them.
Discusses coverage - look at the important bits of the product, then see what makes sense. The stuff with a high impact of failure and a high likelihood of failure had better get a whole pile more test effort than the stuff that no one will notice or care about if it DOES fail.
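That coverage argument is essentially a risk calculation: rank areas by likelihood times impact and spend test effort from the top. A toy sketch - the areas and scores are invented:

```python
# Toy risk-based prioritisation: (likelihood of failure, impact of failure), scored 1-5.
areas = {
    "payment processing": (4, 5),
    "splash-screen easter egg": (3, 1),
    "report footer alignment": (2, 1),
    "session handling": (4, 4),
}

ranked = sorted(areas, key=lambda a: areas[a][0] * areas[a][1], reverse=True)

assert ranked[0] == "payment processing"        # score 20: gets the pile of effort
assert ranked[-1] == "report footer alignment"  # score 2: nobody cares if it fails
```

Crude, but it makes the "whole pile more test effort" decision explicit and discussable.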
The question around this is CONTEXT - the context drives us - and if it doesn't, we are wasting time and money and losing credibility amongst thinking people. We can get stuff worked out so we get an 80% coverage of something in testing, but if the context is irrelevant, it doesn't matter.
Stakeholders, product owners, etc., MUST be part of the team for the project - they are also part of the context. However - we must know enough about something to know if it is important OR by what model it is important or not. Without these things we can not appreciate the context.
Doing these things increases the chances that we have a clean, solid implementation - which makes ops folks happy. They should be excited in a good way that we are implementing something - and looking forward to working with us to get it in. If they are excited in a bad way about our deployments, we are doing it wrong.
TEST DELIBERATELY.
===
After spending time chilling in the hallway, conversing with people on a variety of topics - a needed afternoon off - it is time for Matt Heusser's keynote. The scheduled talk is "Who Says Agile Can't Be Faster?"
Brief introduction of himself... developer/programmer - tester - agile guy - and ... author and - stuff.
After giving people a choice of topics - he launches into "Cool New Ideas and some old ones too."
And he gives away money... until he smacks the entire audience (except Seb Rose and those of us who heard him choose which game to try at the start). We become complacent - relaxed - and fall into "automatic" responses. A cool video on attention awareness (or lack thereof) launches him into his main theme.
We miss what we are not specifically looking for. Consider the implicit clause in every test: "And nothing else goes wrong." Checking that takes really hard work.
Presents Taleb's Black Swan work - risk at casinos - they protect themselves from cheating and fraud and... stuff. Except when the tiger mauled Roy of Siegfried and Roy. There was insurance on the performer, who recovered - but the point of the show was to bring people into the casino to spend money. They stopped coming, so the casino lost a bundle.
Walks through several examples - some more dramatic than others. A brief survey of problems and examples of types of testing. (Pete: my favorite is "soap opera," where you run through elaborate stories that "no user would ever do" - except this one does... what happens?)
Consider - Coverage decays over time, but we're never sure what parts decay at what rate. We become complacent with automated tests or scripted manual (regression or whatever) and the more complacent we become the greater the odds that something will go horribly wrong.
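Heusser's "coverage decays" point can be modelled crudely: if each release carries some chance of making a given check meaningless (still green, no longer testing anything real), confidence in an untouched suite erodes exponentially. The numbers below are invented for illustration:

```python
# Crude decay model: assume each release has a 5% chance of quietly
# invalidating any given automated check.
p_survives_release = 0.95

def still_meaningful(releases: int) -> float:
    """Probability a check still tests something real after n releases."""
    return p_survives_release ** releases

assert still_meaningful(0) == 1.0     # fresh check, full confidence
assert still_meaningful(20) < 0.40    # after 20 releases, odds are against it
```

The exact rate is unknowable in practice - which is the point: the decay is real even when we cannot measure it, so complacency about a green suite is unearned.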
This is the issue we all face whether we are aware of it or not.
Minefields! (with a picture of a minefield) We get complacent and forget about stuff. It's so easy, because this always works - until something goes boom.
We MUST remember and keep solidly in mind that this is a risk. (Awareness of a problem does not eliminate it, BUT it helps us keep the problem in the foreground and not slip into "system 1" thinking - autopilot mode.)
Presents and discusses a kanban board he used explicitly for test process/planning - the only thing on the board was testing stuff. Thus anyone can see what is being worked on in testing AND anyone can ask about it. When people ask "where are we?" they can look at the board.
OK - Matt has moved on to his Titanic story ... (Pete: I need to talk with him about this... there are some... issues.) BUT he gets his Boat into the presentation!!
===
Break - and Game night!
Signing off from Potsdam for the day -
PS: Evening testing/agile games night was loads of fun. Matt did his Agile Planning session game, I did a collection of games around estimation and pattern recognition - gave away Scrabble Flash and puzzles made from erasers. Then more beer and conversation at the conference hotel's bar.
Loads of fun, but made getting to Lean Coffee almost an impossibility and cost me time getting into the "Early Keynote" by Chaehan So.
So, here we go!
The "Early Keynote" title is "Business First, Then Test" - which pretty well sums up the core ideas presented. Begins with a fair description of product owner and tester having potential areas of conflict and the problems that result from that. A simple (well, maybe not simple - common, perhaps?) approach to addressing this is to share experiences and discuss the intent in a safe environment. Chaehan's example was "drink beer" (Pete: yup, I can agree!)
Instead of mapping use cases/user stories to abstract buzz-wordy terms, use the same use case or user story name/identifier the Product Owner is familiar with. Pretty solid idea, not new (to some of us) but important to state.
References coming from the use cases/user story, including data relationships, can result in complexities not obvious to the technical staff, often caused by abstraction to "simplify" the representation. However, sometimes the representation itself is the issue. (I'm not capturing this idea well, but I think this covers the gist of it.)
The idea of relationships and abstraction argues against the common "IT/Geek" approach of simplifying things for themselves - DON'T DO THIS. Keep the reductions at a business-intent level. Chaehan suggests doing THIS by mapping the user story across multiple channels - not redefining the stories to track to the channels themselves.
If you are working on a web-based ordering system, the "story" is replicated in each use channel. This makes for a complex (and difficult to execute) test path and representation of needs, process and the presentation of information - even if the implementation of this is complex.
Keep the information as simple as possible! This is the point of ALL reporting to Management!
Design to Community - D2C - create a simple design that reflects what needs to be done. Like many things this allows for multiple levels of abstraction - and avoids the itchy-scratchy feeling that some people have in relation to having tests/progress reported to them in terms they don't use.
Discusses how the cost curve of correcting problems in the application is usually presented in a manner appropriate to "waterfall" and not so much to Agile. This is an interesting view - especially if the commonly referenced hockey-stick graph/chart is used (yeah, the same one shot to pieces in "The Leprechauns of Software Engineering").
==
Second Keynote - Christian Hassa on "Live it - or leave it! Returning your investment into Agile"
Describing his presentation with Matt Heusser at the Agile Conference in Nashville, Matt made the observation that "scaling Agile" was interesting - but how does that relate to testing? (Pete Comment: gulp)
Scaling Agile is often presented as AgileWaterScrumFall - OR Disciplined Agile Delivery (DAD). He then draws comparisons to the "Underpants Gnomes," who have a business plan something like:
Phase 1 - collect underpants;
Phase 2 - ??;
Phase 3 - profit.
Except the problem is that phase 2 thing. Most people take "get ready to produce" to be phase 2 - it actually belongs in phase 1.
Scaled Agile Framework - not so different from the Underpants Gnomes. There are still gaps in the model - holes that seem to sit in phase 2.
If we fail to focus on unlocking value, and instead focus on costs, we miss opportunity.
SAP's "Business by Design" model is not that far from this either. The published estimations from 2003 simply failed to materialize. The problem was related to attempting to model the product on current clients/users of SAP, not on what the intent was.
Presents an example of applying (mis-applying?) Scrum to a given project. As the team worked forward, the backlog of requirements grew. How? The team dove in and aggressively, diligently worked on the project. And yet the backlog grew.
After a high-level meeting with "what is wrong?" as the theme, it dawned on the Product Owner that the problem was attempting to identify all the possible requirements, instead of focusing on the core aspects that were needed/wanted so the product could be delivered /finished/ on time. The additional ideas may be incorporated into future versions/upgrades, but get the stuff out there so the product can be used - then people can figure out what is really needed.
"Your job as developers is not to develop software, your job is to change the world." Jeff Patton
Assertion: "Your job as tester is NOT to verify software; your job is to verify the world is actually changing (fast enough)."
Yup. The problem we in Dev (including testing) have is that we're a bit like Pinky & the Brain - we want to change/take over the world, but we fail to do so. We don't look long enough - we focus on the minutiae and not the big picture. (Pete Comment: OK, I'll reserve judgement, though I like the P&B reference!)
Turns to Scaling TDD for an enterprise. Cyclical feedback loops (loops within loops) can provide insight within each pass/iteration. (Pete note: ok - seems interesting - consideration needed here on my part.)
Turns to Impact Maps as a tool to facilitate communication / transparency with stakeholders. Interesting example walk through (but it sounds a bit hypothetical to me) on applying the core ideas to this. Goals/Actors/Impacts/Deliverables - (Pete: OK - I get that.)
Pete: the question is, does this translate well to people who may not recognize the intent? I suspect it does - by forcing consideration of what seems "obvious" (by some measure) to someone (who may or may not matter).
By using impact maps, we can then apply "5 whys" to features - (Pete: that is an interesting idea I had not considered. I kinda like it.)
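An impact map is just a tree - Goal, then Actors, then Impacts, then Deliverables - and "asking why" of a feature means walking from a deliverable back up toward the goal. A sketch with an entirely invented example map, not one from the talk:

```python
# Impact map as nested dicts: Goal -> Actor -> Impact -> [Deliverables].
impact_map = {
    "grow online orders 10%": {
        "returning customer": {
            "reorders in fewer clicks": ["one-click reorder button", "saved baskets"],
        },
        "support agent": {
            "resolves order issues faster": ["order-history search"],
        },
    },
}

def why(deliverable):
    """Trace a deliverable back to its impact, actor and goal - the chain of 'whys'."""
    for goal, actors in impact_map.items():
        for actor, impacts in actors.items():
            for impact, deliverables in impacts.items():
                if deliverable in deliverables:
                    return [impact, actor, goal]
    return None  # a feature with no path to any goal deserves hard questions

assert why("saved baskets") == ["reorders in fewer clicks",
                                "returning customer", "grow online orders 10%"]
assert why("blockchain everything") is None
```

A feature that `why()` cannot connect to a goal is exactly the kind the "5 whys" should flush out.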
Working on scaling /anything/ tends to get bogged down in goals - create a roadmap of goals to define what it is, and where it is, you'd like to go. Predicting the future is not the point of defining goals - instead look to see what you'd like to achieve.
Test goals & impacts are similar in that they can act as guides for the scale, measure and range of each goal/activity. Finally - deliverables. Smaller slices delivered to production make it actually easier to get the product out there and improve the product while still developing it (Pete: fair point.)
Story Maps allow us to examine what it is that we are trying to implement, no? Mapping the story can make aspects clear we have not considered. Rather than "aligning to business goal" we can align to "actor goal" - This can help us view our model - and see flaws, holes or conflict.
By defining a "likely order of events" we can see what the experience of the user will be, without defining what the software does. It allows us to remain true to the spirit of the purpose through each potential path.
This, in combination with the other tools described, helps measure progress and check scope creep. If we can identify this, we can then identify the purpose more clearly and spot potential problems being introduced.
We can also use Story maps to control flow and define relationships between components and find potential conflict. As we get more information we can define the higher/lower priority decisions around the story maps. The higher the priority, the finer/more detailed the story maps become. The lower the priority, the chunkier, more nebulous the story maps become.
WOW! a Real example! (as opposed to hypothetical)
Sprints were expanded to 4 weeks in this case. The first sprint had issues (OK, not uncommon), yet by the end of Sprint 2 the core functions were in place. By focusing on the MOST IMPORTANT features, the top-priority story/story maps could be implemented cleanly, expanding ideas/needs as the project developed to include the lower-priority needs.
Pete: OK - completely lost the thread of his last points but I got pictures!!
General Gist - COMBINE TOOLS and TECHNIQUES to make things work. A SINGLE tool or technique may have value, by combining them we can balance things a bit better.
Book Recommendations -
How to Measure Anything - Douglas W Hubbard
Impact Maps - ???
And BREAK TIME!
==
Track Session - Gitte Ottosen - Making Test-Soup on a Nail - Getting from Nothing to Something
Gitte is a Sogeti consultant speaking on Exploratory Testing. OK. Here we go With a Unicorn!!
Starts with James Bach's (classic) definition of Exploratory Testing. (Pete: yeah, the one on the Satisfice page)
Describing fairly common challenges in project limitations, liabilities and personality conflicts and the potential for problems. The PM does not want "too many hours" used - views testing as overhead. And the Test Management Org wants "everything documented... in HP QC."
Fairly obvious solution - keep it simple. When people pretend to be Agile, it is a challenge for everyone involved. The challenge is to make things work in a balanced way, no? Gitte was not an "early adopter" of mind maps, and described how she created bullet lists and converted them later - OK, I can appreciate this. Then there were issues with the documented structure of the app - which was non-existent. This is something we all get to play with sometimes, no?
So what's available? Boundary analysis, pairwise (orthogonal arrays - same thing, different name), classification trees, etc. (Pete: yup - all good approaches). AND she plugs Hexawise (Pete: yeah, way cool product!)
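For readers who haven't met pairwise testing: the idea is to cover every pair of parameter values in far fewer tests than the full cross-product. Below is a naive greedy sketch of that idea - my own illustration, not Hexawise's algorithm, and the parameters are invented:

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedy all-pairs test selection - a naive sketch of the pairwise idea."""
    names = list(params)
    # every value-pair across every pair of parameters must appear in some test
    needed = {(a, va, b, vb)
              for a, b in combinations(names, 2)
              for va, vb in product(params[a], params[b])}
    candidates = list(product(*params.values()))
    tests = []
    while needed:
        def pairs_hit(t):
            row = dict(zip(names, t))
            return {(a, row[a], b, row[b]) for a, b in combinations(names, 2)} & needed
        best = max(candidates, key=lambda t: len(pairs_hit(t)))  # covers most new pairs
        needed -= pairs_hit(best)
        tests.append(dict(zip(names, best)))
    return tests

params = {"browser": ["FF", "IE", "Chrome"],
          "os": ["Win", "Mac", "Linux"],
          "lang": ["de", "en"]}
suite = pairwise_tests(params)

# sanity: every browser/os combination really appears somewhere in the suite
assert all(any(t["browser"] == b and t["os"] == o for t in suite)
           for b in params["browser"] for o in params["os"])
assert len(suite) < 18   # far fewer than the 3*3*2 = 18 full combinations
```

Real tools do this far better; the point is the shape of the trade: full combination coverage of pairs, not of everything.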
On examination, it was possible to look at "cycles" and how users/customers are expecting the new app to work. The "documented requirements" did not exist - and maybe they were never discussed and understood. So the question becomes: when expectations differ between dev/design folks and customers/product owners - what happens? "Learning opportunity."
Decision trees and process flows can help with this - examine what the customer/user (or their representatives) expect to happen and compare those with developments - as a whole. Then exercise the software. See what happens. Exercise the things of interest THEN.
Testers (her) worked to support the team by "translating" the user stories into English - well, because of the team distribution, writing them in Danish was kind of a problem - as some folks spoke/wrote Danish (Danish company) but others did not - ewww
The good news is, when she favored exercising the software over documenting what she was going to test, she found problems. The product owner noticed this and thanked her. By focusing on testing, she found she enjoyed testing again. (Pete note - yeah, that helps)
Interesting variation on mind maps - Use them to document testing - instead of step-by-step approach, simply mind map of function points to be tested. (Pete Note: I do something similar to define sessions and charters for the sessions.)
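That mind-map variation amounts to flattening a tree of function points into a list of charters. A sketch - the map content and naming scheme are my own invention, not Gitte's:

```python
# A mind map as a nested dict; the leaves become session charters.
mind_map = {
    "Ordering": {
        "Basket": ["add item", "remove item", "quantity limits"],
        "Payment": ["card", "invoice"],
    },
}

def charters(node, path=()):
    """Flatten mind-map leaves into 'Explore <branch path>: <leaf>' charters."""
    out = []
    for branch, child in node.items():
        if isinstance(child, dict):
            out += charters(child, path + (branch,))
        else:
            out += [f"Explore {' / '.join(path + (branch,))}: {leaf}"
                    for leaf in child]
    return out

assert charters(mind_map)[0] == "Explore Ordering / Basket: add item"
assert len(charters(mind_map)) == 5
```

The map stays the living document; the charter list is just a view of it for running sessions.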
==
Tuesday, October 29, 2013
LIVE: Agile Testing Days 2013 - Day 1
After a brilliant day of full-day tutorials yesterday (sorry - busy presenting, will blog on that later), this morning the first FULL day of the conference begins with a lively breakfast conversation with Matt Heusser, Mohinder Khosla and Lisa Crispin, followed by Lean Coffee.
And We're Off!
We've divided into two groups - which given the numbers involved is a very good idea!
First topic for the group I'm sitting in is "Teaching Soft Skills to Nerds." Challenging - Lisa suggests avoiding the "touchy-feely" stuff - possibly by framing it as "interpersonal" skills instead of other terms?
Great question - "Why do we hire nerds without soft skills in the first place?" Consensus - we're looking for technical skills and sacrifice something. Since we tend to sacrifice one thing for another, this is the result. (OK, I missed a bunch of the conversation - not firing on all cylinders yet? need more coffee!)
Next topic: Why have test managers in an Agile environment? If testers are "all in" and part of the team, why DO we need test managers?
There are skills they may have that are not addressable in a "whole team" environment - when the "team" consists of 7 people, this looks and acts completely differently than when the "team" is 70 people. Some organizations use the term "manager" to mean "orchestrator" - where they are helping to coach people and build their skills in an environment where these personal-development considerations are maybe not fully represented. The same issues apply to test and development. A good manager can make sure that skills are resident in the larger organization, even if they are not on EVERY team; if someone has a mix of certain skills, they may be able to engage on /their/ team - and sometimes be a support person for other teams when they need their help.
Next topic: A Continuation of this? (Actually the group "moved on" but the topic is similar to the previous and I could not actually hear what it was...) Note - it is REALLY WARM in here! Whoa!
Managers can help bridge cross-departmental challenges - e.g., environments and sandboxes - and make sure people have what they need. (Huge point!) One comment - an interesting one - at some organizations, because teams have different needs, people can be migrated from one location to another. As they are needed on one project, people can be moved as effort/workload allows and requires. This is a really interesting idea.
One aspect that was raised - small companies have such a different dynamic than large companies that things like office space, cubicle size and workspace location (windows, etc.) matter - the question is how to balance the politics.
OK - frankly - there is too much happening to record reasonably well. These folks have interesting ideas - by the time I get one thing typed, they've thrown another one out and I miss it. This isn't doing it justice - BUT - I think it gives an idea, no?
Like this - the point is not being important, or the title on your business card, or whatever, because you are a manager - the point is: how can you serve the team BECAUSE you are a manager? One important aspect is to nurture more novice testers - both technical skills and the softer communication/people skills. A manager needs to nurture their staff to make things happen well. It is up to the manager to help people grow by getting juniors aware of ideas and skills to learn/look into, and it is up to the junior testers to actually do the work to learn.
** OK - I blew it by not charging my laptop overnight and my half-charged battery from yesterday's workshop is now flat-lining. MUST FIND POWER! **
First Keynote of the conference - Andrea Tomasini - Agile Testing IS Testing. Right. Interesting. I tend to agree with the title - that may not be a welcome attitude - BUT there are certain things that many experienced testers would agree with, among them that blind adherence to locked process models will not result in good testing. You'll get a fair amount of /checking/ (see Bolton's checking vs. testing work) but precious little testing.
He makes an interesting point right out the gate - What matters more is the QUESTION - sometimes the answer matters, but the question is what generates thought. Excellent point!!! (Waves to @jbrains / J.B.Rainsberger).
Interesting demonstration - how many cars pass a line projected across a video of traffic on a highway. Figuring things out is the easy part - unless it is hard. For example - how many cars cross a certain point in 10 seconds? Go... Ummm, both directions. Cars, not vans or trucks.
Sound familiar?
Clients want problems solved - not a proxy for solving the problem. We need to consider whether the assumption made is validated by an increment of the activity or not. How can we learn /together/ what is needed if we don't communicate around that? Building trust is required for that communication to happen.
Testing is not about validating that people did their job. It is about examining what was done and considering that against the customer's desires. Looking into the product is looking into assumptions around the application. (OK - I'm editing this heavily as I'm writing. The testers are not part of the team? Sounds far more lock-step/SDLC/waterfall than Agile - so I'm ignoring those references.)
We are looking for fast feedback, but sometimes the results can take longer to receive. The description given is around "luck" - as in, try something once and it seems to work - and you stop. Yet without proper investigation, the result is misleading.
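That "luck" point is easy to quantify: a defect that only manifests intermittently will often sail through a single run, which is exactly why stopping after one success is misleading. A sketch with invented numbers:

```python
# A defect that manifests in 10% of runs: chance that n runs all look fine.
p_fail = 0.10

def looks_fine_after(n_runs: int) -> float:
    """Probability the intermittent defect never shows in n_runs runs."""
    return (1 - p_fail) ** n_runs

assert abs(looks_fine_after(1) - 0.9) < 1e-12   # one green run proves very little
assert looks_fine_after(30) < 0.05              # repetition turns luck into evidence
```

"It worked once" is a 90% chance of being fooled here; proper investigation is what shrinks that number.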
We must build a culture of trust which requires a reduction in social risk for the product team. People not trained in how "whole team" process models function will have great reluctance in diving in whole hog when it is the opposite of what their experiences have been.
The challenge is to examine costs and business risks - the relationship between them. If we can validate ideas in an incremental way, short feedback loops - blah - blah - OK - So far there is not much in the way of "new ideas" here.
The promised "controversy" has not yet appeared. Wait - hold on - Best Practices were cited - in that sometimes they don't work.
Creating a vision with Stakeholders - ok - promise there - Discussing collaboration with customers - often starts with /testing/ - mostly of ideas. For example - if we have an idea - it may be a great idea - but if no one wants to buy the product that results from that idea - it is just an idea. If we act on that and sink loads of money, time and effort into something - we could easily be wasting time and building a bunch of stuff that will gather dust.
OK - I can live with that.
The idea of "user stories" gives context and content. They can be a declaration of intent and interest. This is something a customer wants. This needs to be considered and defined - this takes work to define what it is that people want / need.
Sorry. I'm not representing this well. Much stuff being asserted. Commitments can not be compelled to achieve success.
Classic Scrum stuff being described. OK, but not quite what I expected in this talk.(OK - that was an editorial comment - still...)
Moved on to Lean - and how Lean got very distorted in the version often presented in the US. Little to disagree with here - in a manufacturing model. When people don't understand the reasons for things, the result will be lower quality. However - that is fairly normal. Over burdening is bad. Normalized flow is good. Unnecessary variation is bad. Wasteful activity is bad.
Cutting stuff that looks like work, but does not contribute to the overall effort, may be a good idea. However, from project to project - sprint to sprint - the context changes. The change in context may give results that one does not expect in the process model (whatever that may be.)
I recognize that some of these ideas are potentially new to some people here. Having said that, it seems a very compressed set of ideas being glossed over instead of a limited number of ideas dealt with in depth. Many conference speakers attempt to put out EVERYTHING THEY KNOW in one presentation. I've done it - and learned from the feedback.
There may be ideas of value - but they are lost in the noise.
==
Track Session - Sami Söderblom - "Flying Under the Radar - How Conventional Testing Was Secretly Turned into Sapient Testing." Right - fresh coffee - and in a track session with a promising abstract. G. R. Stephenson described how corporate (or any) culture develops over time. Essentially, something happens that is BAD. People see what happens - and associate whatever they were doing just before the BAD thing with it, linking the two. To avoid BAD things, don't do that one thing.
Simple.
Unless they have no relationship to each other OR - there is no true association between them. Cites Thinking Fast and Slow - Real effort requires thinking.
Classic example (well, to me) of a requirement that is "testable" or not. The question is "what does this mean? In what context is this a requirement? Does this requirement make sense for that context?"
Invokes Mindmaps as tools to evaluate software - and thinking in general. And to track progress in testing - testing sessions, session charters and session reports. OK - I like that. (Question - how many people in here are familiar with session based testing?)
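(For anyone in the room who isn't familiar: a session charter is just a short, timeboxed mission statement that the session report is written against. A hypothetical sketch - the product, mission, and field names here are invented for illustration, not taken from Sami's talk:)

```
CHARTER: Explore the order-history page with malformed date filters
         to discover how failures are reported to the user
TIMEBOX: 60 minutes
TESTER:  (name)
NOTES:   running observations, questions, test ideas...
BUGS:    defects found during the session
ISSUES:  anything that blocked or slowed the session
```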
I'm really liking Sami's presentation style after the keynote - relaxed - laid back - staying on task and describing what he did - nice - comfortable style. Some folks may be missing context though (like the guy next to me.)
Citing Hendrickson's Explore It!: "Explore <target> with <resources> to discover <information>". The objective is the INFORMATION - not a coloured button. This is crucial - I hope it is understood by people in the room. The fact that I agree with him may be impacting my perception.
OK - interesting slides on info-graphics. He flashes up a Weibull distribution diagram - a common display/model for project management. He then flashes up a coloured bar chart and asks if it means anything... people said "SURE!" - except he had made the graphs from characters of The Simpsons. Really? Do they mean anything?
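That Weibull curve is trivially easy to produce, which is part of why it looks so authoritative on a slide. A minimal Python sketch - the shape/scale parameters here are made up to stand in for "task duration" data, not drawn from any real project:

```python
import math
import random

random.seed(42)

# Hypothetical parameters - nothing here comes from real project data.
# shape > 1 gives the right-skewed hump commonly shown on PM slides.
shape, scale = 1.5, 10.0

# Draw 10,000 simulated "task durations" from a Weibull distribution.
durations = [random.weibullvariate(scale, shape) for _ in range(10_000)]

sample_mean = sum(durations) / len(durations)
theoretical_mean = scale * math.gamma(1 + 1 / shape)  # roughly 9.03 for these parameters
```

The point of his demo stands either way: the chart looks equally convincing whether the inputs are real measurements or Simpsons characters.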
Without understanding what the intent is, without understanding what is meant - simply taking information at face value is a problem, and dangerous.
And then Prezi blows up the machine running the presentation when he tries to open a linked video. Oops. Bummer.
He recovers with an important point: To be successful, remove the obstacles to success. Simple, no? What if you do not see those obstacles as obstacles? Are they "important and needed tools" that are standard to your context or environment? So what? If they do not add value RIGHT THEN - get rid of them. They may have value another time, but if they do not this time - drop them.
If they are obstacles to your THINKING - they are not needed and are harmful!!!!!!!!!!!
===
Talk 2 - Tony Bruce on Be a Real Team Member -
Starts out with some interesting questions - like... What makes a real team member? Asks for people's ideas on that - and gets reasonable answers - open, shares information, contributes openly, etc.,
According to Belbin's study - team work is a model where people assume a role in the team that supports the team (in some way.) The roles may be action, people or thought oriented roles. These are reasonable - BUT - there may be things missing. (He promises to revisit this in a bit.)
AND the SECOND Prezi take down of the day! Well done, Tony!
Are PMs "shapers" or what? If a Shaper is one who drives forward the ideas that will enable success - who (or what) is the kind of person that is? Steve Jobs, perhaps?
While walking through various types of roles (rather - personalities?) Tony makes an important observation in an off-hand manner. "People can be good in these roles/functions sometimes - in different contexts these can help or hinder." You need to know which one is which - and when.
BING! Off hand comment on models - IMPORTANT FOR EVERYONE! "Take what you can use and leave the rest." Some things will be of value sometimes - other times they will not. Learn to recognize what is going on.
Communication/respect techniques discussed. The question of how people interact - including the balance of individual worth - is crucial. Discussed the problems people encounter - or inflict - based on biases, behaviors, etc.
Interesting ideas - a bit fast to record fairly.
OK - This room is stinking hot. Sweat is streaming down - wow.
===
Lunch time -
===
OK - Food eaten! Costume (for the party this evening) Fitted! Slide deck finished/updated!
NOW! Mary Gorman presenting on Interdependency for Agile Testing.
I met Mary two days ago and had a lovely conversation with her over dinner - remarkably bright. She starts with a horror story from a previous project where a vendor (the one that was needed for people to get their work done) went dark - as far as developers, etc., were concerned. They simply stopped communicating with them - bad news...
OK - So - she starts with some definitions and differentiation around dependence and interdependence.
When you work together - supporting each other, is that dependent or interdependent? Is your work related? So, Mary is describing comprehensive interdependence - where activities are dependent on each other - and on each of the individuals doing the actions/activities (yeah, there is a difference...)
Discusses Thompson's work describing interdependence as "the degree members interact with, and rely on each other, to accomplish their work" - this is fairly important. However - most people fail to appreciate this OR resist the opportunity. (pete note: maybe a control thing???)
Interdependence is linked inextricably to trust. The two must be tied together for any measure of (commonly identified) success.
Types of trust ...
Contractual - Clear expectations/meet commitments.
DO you meet the commitments and expectations of the people you are interdependent with?
What about the other way around?
Really?
Communication - Trust of disclosure... Who knows what & when? Is feedback present? Are you and the "other" open to it? Is it direct and constructive?
Competence - Trust of capability - do people trust the other person's/team's competence and ability? What about respect?
Right -
So communication is dependent on Trust - to make that WORK though we must "shed light without generating heat." (Cool graphic of structured conversation)
We explore ideas - which leads to value & evaluation - which in turn yields confirmation. The trick is these must be done openly - bi-directionally - so each thing is understood. (pete comment)
In testing, we need to discover & deliver cyclically. Each round gives us another path to consider. We have internal interdependencies between the "user" (or tester) and actions which impact users which... ok - you get it.
--
Running out to set up my workshop....
---
Wednesday Morning -
Right - following up on this. My session was a 2 hour workshop, a roundtable on problem solving. As with similar exercises, we went through a list of problems people are dealing with/encountering at work, described them, then presented ideas to consider for the one that was voted on (a dot vote by the participants). The "solutions" portion was the most complex part of the discussion.
There was a swirling of ideas and suggestions that shifted over an hour of discussion.
By the time the "grid" was introduced, there seemed to be some solid ideas on what could be done to address the problem described. (Look for more on this in a later blog post - I'll add a link when I get it finished.)
It took a fair amount of time to tear down the room, and by the time I had gathered up my materials, I had missed a fair amount of the closing keynote for the day.
===
Ajay Balamrugadas' notes from the same day: http://t.co/UhgZujVwcr
And We're Off!
We've divided into two groups - which given the numbers involved is a very good idea!
First topic for the group I'm sitting in is "Teaching Soft Skills to Nerds." Challenging - Lisa suggests avoiding the "touchy feely" stuff - possibly framing it as "interpersonal" instead of other terms?
Great question - "Why do we hire nerds without soft skills in the first place?" Consensus - we're looking for technical skills and sacrifice something. Since we tend to sacrifice one thing for another, this is the result. (OK, I missed a bunch of the conversation - not firing on all cylinders yet? need more coffee!)
Next topic: Why have test managers in an Agile environment? If testers are "all in" and part of the team, why DO we need test managers?
There are skills they may have that are not addressable in a "whole team" environment - when the "team" consists of 7 people, this looks and acts completely differently than when the "team" is 70 people. Some organizations use the term "manager" to mean "orchestrator" - helping to coach people and build their skills in an environment where these personal development considerations are maybe not fully represented. The same issues apply to test and development. A good manager can make sure that skills are resident in the larger organization, even if they are not on EVERY team; if someone has a mix of certain skills, they may be able to engage on /their/ team - and sometimes be a support person for other teams when they need their help.
Next topic: A Continuation of this? (Actually the group "moved on" but the topic is similar to the previous and I could not actually hear what it was...) Note - it is REALLY WARM in here! Whoa!
Managers can help bridge cross-departmental challenges - e.g., environments and sandboxes - and make sure people have what they need. (Huge point!) One comment - an interesting one - at some organizations, because teams have different needs, people can be migrated from one location to another. As they are needed on one project, people can be moved as effort/workload allows and requires. This is a really interesting idea.
One aspect that was raised - small companies have such a different dynamic than large companies that things like office space, cubicle size, and workspace location (windows, etc.) become political; the question is how to balance those politics.
OK - frankly - there is too much happening to record reasonably well. These folks have interesting ideas - by the time I get one thing typed, they've thrown another one out and I miss it. This isn't doing justice - BUT - I think it gives an idea, no?
Like this - the point is not being important, or the title on your business card, or whatever, because you are a manager - the point is how you can serve the team BECAUSE you are a manager. One important aspect is to nurture more novice testers - both technical skills and softer communication/people skills. A manager needs to nurture their staff to make things happen well. It is up to the manager to help people grow by getting juniors aware of ideas and skills to learn/look into, and it is up to the junior testers to actually do the work to learn.
** OK - I blew it by not charging my laptop overnight and my half-charged battery from yesterday's workshop is now flat-lining. MUST FIND POWER!
**
Saturday, October 5, 2013
Conferring at Agile Testing Days or Sharing Ideas to Find Solutions
Later this month at Agile Testing Days in Potsdam, Germany, I'll be collaborating with Matt Heusser on a full-day tutorial on Exploratory Testing practices and how they can be applied to speed information from testing to stakeholders. Frankly, I'm looking forward to that. This is going to be really good.
Matt gave a good explanation of what we're doing, and why, in the video linked above. In short, we'll be talking about how you can give fast and informative feedback to the people who care (and matter) on projects. Yeah, I know we're talking about this at Agile Testing Days. The thing is, these approaches can work, and have worked at shops Matt and I have worked in, no matter what development methodology is in use.
The next day I'll be conducting a two hour workshop we're calling a "Tester Round Table". The idea is testers meet and talk about problems they are facing or dealing with right now.
This is something similar to a SWOT analysis (whatever cool cred I have just took a significant hit by referencing that) where people talk about the problems they have, what possible things they can do about them, what steps they might need to take to fix these issues, and who (or what roles) they need to get on board and gain support from to make this work.
Simple, no?
We ran a similar exercise as an evening/extra activity at CAST. One thing I learned, or re-learned, is that some things will not be able to be fixed from the grass-roots level. If you are the only one who sees a problem, maybe that is, in itself, the problem? Are you not understanding the context of the organization or the mission? Excellent point for introspection.
Other folks participating at CAST came away with suggestions to try at their company. These generally work by introducing them in a small way. Going in with trumpets and slogans and big "motivational pitches" and "kick-off celebrations" and "bright! shiny! NEW!" ways of doing things tends to get rejected as the "cool process du jour."
The goal at CAST, and at ATD, is to present the model so people can take it back to their company and try it with their team or a subset of the team.
Why do I think there is value in this?
Simple. I've seen it work.
I'm not much into the philosophical considerations around varied and sundry aspects of software and software testing. I'm concerned with getting problems addressed and corrected, and getting good software out the door so people can do their jobs better.
This works.
Another example of this was presented recently.
At my local tester meetup, GRTesters (Grand Rapids Testers), we ran this same exercise for the August meeting. The results were interesting. Three folks from a single company showed up and presented an interesting problem. After voting, that problem was selected as the one to work on. We discussed aspects around it, including the roadblocks and how to deal with them. It was a really good conversation. (Check out the notes here.)
At the September meeting, the same guys from the same company came back and excitedly reported they took the ideas from the meeting and presented them to their management, who were convinced enough to try it. The notes from the September meetup read:
"We start with a follow-up from last month. The group that was having the issues we talked through last month - have actually ACTED on the discussion and are seeing improvements. Coolness!"
They were seeing positive change in less than a month!
Can it work? Absolutely. Maybe not that quickly for you, but it does work when people are open to communication and understanding.
If you are going to Agile Testing Days, please check these two things out. They are going to be pretty cool. If you are considering going to Agile Testing Days, just do it. Sign up and go. Tell the Boss you NEED to go. Get the boss to go!
This is going to be good.