Sunday, July 27, 2014

Loads of people have weighed in on the question of testers needing to learn to code - or not. The last several weeks have helped me develop my thinking more clearly than before.
At shops where there is a distinct "automate a bunch of stuff" culture, or a stand-alone "automation team," it is easy to see why it seems reasonable to presume that "learning to code" is "essential" for testers.
Most of the time, my standard response is a series of questions. People think I'm doing something Socratic or that I'm leading them down a garden path just to pounce on them and say "Ah HAH!!! What about the FIZban project? RIGHT! What good would that do you?"
The fact is, when I'm asking questions it is because I am trying to understand the situation you are in. Most people who assert absolutely one thing or another tend to do so without considering other people's context, or the possibility that there is a situation where the "absolute" fails. Most of them also give you a blank look when you question the use of the term "best practice" for their context.
Here's my current thinking on Testers Learning to Code, coming from someone who is desperately trying to dust off and clean up rusty JavaScript and even MORE rusty, limited Java skills.
Depending on what might be expected of you, there could be a reasonable expectation that you are at least conversant with the terms being used by the people you work with. We expect developers and BAs and PMs to use the terms we use, and not to randomly assign definitions to things in ways that make us cringe. It strikes me that a fundamental understanding of what people mean by "assert" or "class" or something else might be a reasonable thing to expect.
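For what it's worth, here is a tiny sketch of what I mean - the class and method names are entirely made up, and it assumes plain JUnit 4, but it shows the two words in their natural habitat: a "class" groups related checks, and an "assert" states what we expect to be true.

```java
// A made-up example for illustration only - assumes JUnit 4 on the classpath.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A "class" here simply groups related checks together.
public class DiscountCalculatorTest {

    @Test
    public void tenPercentDiscountReducesPriceByTenth() {
        // An "assert" states an expectation; the test fails if it does not hold.
        double discounted = applyDiscount(100.00, 0.10);
        assertEquals(90.00, discounted, 0.001);
    }

    // Stand-in for the production code a tester might be reading or reviewing.
    private double applyDiscount(double price, double rate) {
        return price * (1.0 - rate);
    }
}
```

You don't have to be able to write that from scratch to benefit; being able to read it and talk about it is the point.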
At the same time, if we testers are expected to assist in some way with code - reviewing production or test code, or possibly contributing to the development of test code (yes, I know that is a potential can of worms, if not spaghetti) - does it not make sense to at least learn the fundamentals?
If your organization is open to pairing, the need for testers to become anything other than "almost slightly dangerous" with writing or understanding code would, I expect, decrease dramatically. However, that minimal knowledge might help you do a better job of testing.
Having a curious mind, I am not sure why people would not want to do that - at the least.
Like many people, I bristle when those with no concept of what I do insist that I must absolutely do something they do, so I can "be of value" - or something.
This raises the question: are testers really failed developers? Do testers want to be production code developers but can't handle it?
I don't think so.
Once upon a time, people developing production code also tested it. They also worked with the people asking for the software to understand what the requirements were. Of course, in some shops, they told the people what the requirements were because, after all, they were IT. They were the people who developed the software that would work the magic that allowed those lesser beings to do their assigned work.
Sometimes I wonder if the people insisting that all testers "must learn to code" are making the same "this is a 'best practice'" argument - the one that defies the actual definitions of the words and really means "just do it like this because it worked for us." I want to believe that, and not that they are descended from those same people who, once upon a time, told people what they, the software experts from IT, would deliver.
The people who always know what is best. The ones we should not try to confuse with any other viewpoints. The ones for whom facts contrary to those views are simply not allowed.
My fellow Software Testers - learning something about what production code developers do, and how they do it, may have great value and may help your development as a professional.
Learn to code because you want to learn one more tool to make yourself better, if it is appropriate for what you want your career to be - not because someone is compelling you to do so.
Sunday, July 13, 2014
On Software Quality and Software Testing
The last week or so I have been deep, very deep, into considering the relationship between the Quality of Software and Software Testing. The conversation has been more at the meta level, something akin to ASQ's view on quality in general. (Fair-warning disclaimer: along with being a software tester, I am also a member of ASQ - the American Society for Quality - these folks.)
Interestingly, that relationship helps me when I challenge assertions - usually gratuitous, often fundamentally flawed - about something published by the ASQ or something Deming said or wrote. It's interesting sometimes to lean into the table and say "Can you explain what that means? I'm not making the connection between what you are asserting here and my understanding of what {insert quality buzzword} means. It's possible we have a different understanding of the concept and I'd like to address that to avoid future problems and potential future conflict."
The response often comes back citing some authority - for example, Six Sigma or some concept championed by ASQ. Interestingly, that was recently coupled with the ideas of Context-Driven Testing and AST - the Association for Software Testing. (Ummm, for those who don't know me, I'm a member of that, too.) Oftentimes, when it is clearly an attempt to assert a position by citing authority, I will say, in as non-threatening a manner as possible, something along the lines of "I'm a member of ASQ and of AST. I have read the white papers and books on Six Sigma (or whatever else they are asserting, usually out of the recommended context) and I'm not sure how they align with what you are saying. I would like to understand what you are saying better. Can you explain it, or would you prefer to have that discussion off-line, maybe over coffee? I'll buy."
I find people will be much more open to such discussions if I buy the coffee and/or bagel to go with it.
And yeah, I realize that I am doing my own version of citing authority by making the above statement. It does serve to get their attention and blow away the smoke screen being set up. Well, maybe not so much removing the smoke screen as bringing high-powered radar into the mix - I can see their position in spite of the smoke screen.
Where am I going with this?
Many people I meet use the terms "testing" or "quality assurance" or "QA" interchangeably or in conjunction with each other. You get statements like "Let me know when this has been QA'd" when they mean "tested." Then there is "QA Testing." Do NOT get me started on that.
The idea of "testing improves quality" is often the response to the question "Why do we test?" The bit that gets left out, possibly because it seems obvious or maybe because people are oblivious to the idea, is that testing improves quality only if someone acts on what is learned from the testing.
If nothing is changed as a result of testing - configuration, code, processes, maybe all three, maybe other stuff as well - then will "quality" be any better? What is the point of testing?
If people want confirmation that things "work" then by all means - run the happy-path scenarios that possibly were used for unit testing, or build confirmation testing, or maybe in the CI tools - but don't confuse this with "testing."
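To make the distinction concrete, here is a rough sketch of what such a happy-path confirmation check often looks like - everything here is invented for illustration, not taken from any real project, and it assumes JUnit 4:

```java
// A hedged sketch of a happy-path confirmation check - assumes JUnit 4.
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class LoginSmokeCheck {

    @Test
    public void knownGoodCredentialsCanLogIn() {
        // One known-good input, one expected outcome. This confirms the build
        // is not obviously broken; it says nothing about locked accounts,
        // expired passwords, odd characters, or anywhere bugs actually hide.
        FakeAuthService auth = new FakeAuthService();
        assertTrue(auth.login("known.user", "correct-password"));
    }

    // Stand-in for a real authentication service, so the sketch is runnable.
    private static class FakeAuthService {
        boolean login(String user, String password) {
            return "known.user".equals(user) && "correct-password".equals(password);
        }
    }
}
```

Checks like this have their place - in CI they catch obvious breakage quickly - but confirming one known-good path is not the same as testing.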
The point of testing is to learn something about the system or piece of software at hand. It is usually not to prove anything. It is rarely done to prove something is "right." It may be done to check certain behaviors - or to see if specific scenarios behave well enough for a demonstration - or even, in a limited sense, validate some piece of functionality.
However - if there are any variances found, the testing identifies those variances - nothing more.
Testing does not make anything better. Testing does not improve quality - ever.
Testing provides information for someone to decide that action needs to be taken - and then someone must act on that decision. Then the quality may improve.
It is taking action after testing is completed that improves quality - not testing.
Saturday, July 5, 2014
On the AST and CAST and Conferring about Testing
The Association for Software Testing says this about... well, itself:

The Association for Software Testing (AST) is an international non-profit professional association with members in over 50 countries. AST is dedicated and strives to build a testing community that views the role of testing as skilled, relevant, and essential to the production of faster, better, and less expensive software products. We value a scientific approach to developing and evaluating techniques, processes, and tools. We believe that a self-aware, self-critical attitude is essential to understanding and assessing the impact of new ideas on the practice of testing.

This is kind of a mouthful.
I'm not going to write about that. Well, not directly anyway.
Once upon a time I was a regular participant at SQAForums, and a newly minted "QA Lead." I was digging for information, ideas and the like to use in my new position. I remember threads where people posted things like "Really, there are no 'best practices'." They sometimes went on to say something like "There may be something that works well in some circumstances or may help sometimes, but the idea of 'this is the best thing to always do' is misguided."
This made a lot of sense to me. One guy posted something about starting a new group for testers. This was roughly 10 years ago. I remember thinking "that sounds fantastic, but it is obviously aimed at people more experienced in testing than I am." I let the chance pass by.
Two mistakes in One. Impressive, Pete.
Fast Forward to 2009.
I was at a conference in Toronto, sitting at breakfast with a group of people I did not know, when I realized the nice lady who sat down next to me and with whom I was speaking was Fiona Charles. The same Fiona Charles whose articles I'd read and bookmarked. WHOA! A few minutes later, here comes Michael Bolton - no, not the singer or the guy from Office Space - Michael Bolton the tester guy. He sits down at the same table! WHOA^2!
We're digging into a breakfast of eggs and sausage and potatoes and fruit and coffee and tea, and we're talking. I'm trying hard to be nonchalant - it's not working very well. Two of my "testing idols" are sitting at breakfast and we are talking. WHOA^3!
So, the conference day begins - we're doing our thing at a workshop they are conducting and I'm participating in. I make it through the day, grab some supper, have a drink, and stumble off to bed with my mind quite melted.
The next day, in between sessions, I find myself chatting with Michael who says "Got anywhere you need to be? How about we play some games?" YES! DICE! Awesome! So, we dive in. Pretty soon, there is a small group standing around the table we're working at - drinking coffee and juice and tea and talking and there are some really smart people there. A lively discussion around "metrics" and "measurements" and "expectations" and "needs."
There is Michael and myself; Fiona joined us, as did Lynn McKee, Nancy Kelln, and Paul Carvalho. I realized that this "hallway track" had some of the best information of the day. I also realized I was the only participant who was not a speaker. WHOA^4
At one point, Fiona looked at me and said "What are you doing here? You don't really fit. You need to go to CAST." I responded with something like "CAST? What's that?" and got a chorus of "It's awesome! You'd love it! It's like this conversation but bigger!" Then Michael said something like, "You're from Michigan, you said. CAST is in Grand Rapids next year."
Gobsmacked.
I LIVE in Grand Rapids. This way-cool conference is coming to Grand Rapids? REALLY? WOW!!!!!!!!!!!!!!
So when I got home, I looked it up. I found "The Association for Software Testing" and saw the names of some of the people involved - and I said to myself "Self, these are the folks whose writing makes sense to you! These guys rock!"
I did something I have continued to do since then - I bought myself a birthday present of a membership in AST. I have not regretted it.
Why? In AST, I found a community of people who are willing to share ideas and hear you out. They don't see you as a novice, even when you are. Instead, most of the people who really get it see you as someone who is on a journey with them to learn about more and better software testing.
That conversation in the hallway at a conference in Toronto was only the beginning. When CAST was in Grand Rapids the next August, I swung by the conference site the day before it began and ran into Fiona Charles, who was sitting with Griffin Jones. I loaded them into my car and dragged them kicking and screaming to my favorite Italian place in Grand Rapids for dinner, giving them a mini tour in the process.
We landed at the restaurant, sat down on the terrazza, and ordered wine; the lady-wife joined us, and we had a conversation over dinner that covered nearly everything in our heads - architecture, art, the economy, US-Canadian history, software testing. It was an amazing evening.
Every CAST since then has been like that for me - exquisite conversation, learning, enlightenment, and challenges.
Ideas are presented - and it is strongly suggested you be able to explain and defend them - otherwise the results will be "less than ideal" for you. People selling stuff - from tools to snake oil - are sent packing. People with challenges are encouraged. People looking for ideas find them.
Each year is different - and each year there are similarities. Generally, the sessions inspire conversation and discussion. This leads to thinking and consideration. Sometimes they result in "Interesting Encounters."
Last year, someone was presenting on failed projects and mentioned the Mars lander - the one that crashed several years ago. Remember that? Partway through the story a hand went up: "That's not quite what happened - I was an engineer on that project..." Yeah. Really.
This led to a series of interesting hallway conversations - and the session she presented was very well attended.
So, what is it that, for me, AST is about?
It helps me be better at what I do.
Friday, July 4, 2014
On Coffee and Process and Ritual and Testing
I am writing this on the morning of July 4th. In the US, this is a holiday celebrating the original 13 colonies declaring their independence from Great Britain. This morning is nearly perfect. The sun is out, the air is not too hot and not too cold. There is a gentle breeze. I'm sitting out in the back garden reading and writing and sipping freshly made coffee.
I like a really good cup of coffee.
No, I really like a good cup of coffee.
I like a well made cup of tea as well. Don't get me wrong, a well made cup of tea with a bit of sugar and a dollop of milk, its a wonderful thing.
Still, I really like a good cup of coffee.
A couple of years ago, my lady-wife bought me a coffee press as a Christmas gift. It was amazing for me to experiment with some of my favorite coffee beans and work out how to get the flavor and balance just right. When I did, I was a very happy tester who really likes a good cup of coffee.
Things were great, until one crucial part went missing: the wee tiny bit that held the screen mesh to the plunger - the piece that filters the water, now coffee, and separates the coffee grounds from the stuff that you want to drink. I never proved it, but I strongly suspect that one day, after washing the press and waiting for it to dry, our orange tom cat Pumpkin found it an irresistible toy.
Needless to say, that left the choices of the stove-top percolator (not bad, but still not as good as the press) or the electric drip coffee maker (ummm, ok, 'nuff said). So, after struggling through what seemed an interminable period of "ok" coffee, the arrival of ANOTHER coffee press this past Father's Day was deeply appreciated.
Did I mention that I really like a good cup of coffee? I do. A LOT.
So, this press was slightly larger than the previous one. I would need to make some minor changes to my remembered favorite permutations based on which coffee I was making. Then I began comparing the coffee I had been drinking the week before to what I was making right then. I mean, right - then.
Now, let us look at this. What was different between these? The coffee itself was the same - same roast, same grind, same water - same... Everything. Really - that is what coffee is - a mix of ground coffee beans and water and... yeah. That's about it.
So, what is the difference? Maybe - the Process of making coffee?
We talk about Process Improvement - how do we make something better, like Testing? So, while the lady-wife chuckles at my "fussy coffee ritual," I find it makes things... better. Now, if I use one coffee, like a nice dark roast, or another, maybe a medium or lighter roast, I may use a slightly different amount of ground coffee. Or I may allow the grounds to brew just a tad longer with one than the other.
The difference? I'm not sure. Maybe those slight variations impact the coffee. The tool I use to make coffee and the method I use to make it will definitely make a difference.
What does this mean? I'm not sure. Except for one thing.
I know that if I blindly use the SAME amount of coffee, no matter the roast or the manner in which I plan to make it, and whatever tools I plan on using - I will be disappointed.
Here's what I mean. If I am camping, which I really like to do, I have a handy percolator that I can make coffee in over a camp stove or the camp fire. It does a fine job and makes an enjoyable pot of coffee. It is less work - well, less cleanup work, anyway - than using a coffee press. It is also less likely to break than the glass container of the coffee press.
If I am traveling - flying somewhere instead of driving - I may have to improvise making coffee in my hotel room in ways less pleasing to me than my normal "at home" methods or my camping routine. But by changing how I make coffee, I still get a much better cup than the thin stuff one normally gets from "complimentary in-room coffee makers."
The exception to this, perhaps, is the "complimentary in-room coffee" I've had in Germany and Estonia. (Hey, American hotel folks - go to Europe and check these guys out - they really GET coffee.) Still, the in-room coffee in Europe is not as good as what I make at home. It's not bad, just not as good as mine or a really good coffee shop's.
What is my rambling point? Well, I think it is this: Using the same measures for everything, without looking at the broad circumstances, the context in which you are working, and the tools and means available to the task at hand, is foolish.
It does not matter if you are looking at test practices, management practices or coffee making practices. Applying something without examination, because it is "the best way" to do something, is folly.
It yields disappointing results in testing, management and coffee.