The requirements are completely unambiguous. Brown sides. White ring. What could be simpler? So let's design a development process that will make our pancakes come out right:
- Developer pours pancake batter.
- Developer flips pancakes at a rate based on the schedule.
- Developer throws "done" pancake over the wall to QA.
- QA inspects pancake. Rejects are thrown back over the wall to the developer to "fix".
OK, so let's design a different process. Based on our observations of pancake cooking, we notice that the perfect time to flip the pancake is when the bubbles on the top have just popped, and the surface starts to look slightly dry.
- Developer pours pancake batter.
- Developer applies the test to determine when to flip the pancake (sketched in code after this list).
- Developer sees pancakes are perfect and ships them.
- QA simply observes the pancakes on a statistical basis.
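To put that "test" in software terms: the flip test is an explicit, executable check that the developer applies while the work is in progress, not a judgment deferred to an inspector at the end. Here is a minimal, playful sketch in Java; every name in it (shouldFlip, bubblesPopped, surfaceLooksDry) is hypothetical, invented for illustration.

```java
// A playful sketch of an executable "flip test". All names are
// hypothetical; the point is that the doneness criteria are
// explicit, automated, and applied by the developer.
public class FlipTest {

    // The flip test from our observation: the bubbles on top have
    // popped AND the surface looks slightly dry.
    static boolean shouldFlip(boolean bubblesPopped, boolean surfaceLooksDry) {
        return bubblesPopped && surfaceLooksDry;
    }

    public static void main(String[] args) {
        System.out.println(shouldFlip(true, false)); // false -- too early, keep waiting
        System.out.println(shouldFlip(true, true));  // true -- flip it
    }
}
```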
Moral of the story: Developers are not done until the acceptance tests pass!
Who designs the acceptance tests? QA, of course.
I recently read a blog post by Dennis Stevens entitled "We Are Doing QA All Wrong". He is, of course, quite right. We are, and have been, doing QA all wrong for years. Indeed, the current role of QA only exists because developers have been so bad at doing their jobs.
QA is at the end of the process because development never learned when to flip the pancakes. Decades ago, frustrated by the terrible quality coming out of development, managers created an inspection step at the end of the process. QA (or QC as Stevens would have it) was born. This role for QA reinforced the bad behavior of development that spawned it. Because QA was at the end, developers didn't need to care about getting things right. Developers no longer had to worry about bugs; that was now QA's role. All developers needed to do was to "say" that the code worked as far as they were concerned, and then throw it over the wall. Deadlines are a lot easier to make when you don't have to make the code actually work.
Management, in order to justify the existence of QA to the accountants, who were very concerned about the cost, began to measure QA's efficiency. The more bugs QA found, the better the job they were doing. Notice how insane that is! The only way for QA to be measured well is for development to screw up royally. The more bugs developers create, the better QA looks.
And so an unholy alliance of blame avoidance was born. Developers can appear to meet deadlines by delivering crap. QA is measured highly because they find lots of bugs. Everybody is happy -- except the end customer, and the accountants who are very concerned about costs.
Look, this isn't rocket science. If QA's input is primarily at the back end of the process, you are going to have lots of waste, rework, delay, and angry customers. I mean: Duh! (Pronounced "DUUU-uuuh")
So where do we put QA? How do we break the unholy alliance, and stop avoiding the blame? Simple! Move QA to the front.
What if QA's job were not to test the product, but to design the tests that developers use to know when to flip the pancake? If QA created suites of automated acceptance tests using tools like FitNesse, then developers would know when they were done. Developers would continue working until the acceptance tests all passed. Indeed, it would be the developers who executed those tests.
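As a hedged sketch of what that might look like with FitNesse's SLiM decision tables: QA expresses the acceptance criteria as a wiki table, and developers back it with a small fixture class. The table layout follows FitNesse's standard decision-table conventions (input columns map to setters, a column ending in "?" maps to a method of the same name), but the page, table, and class names here are hypothetical, invented for this example.

```java
// Hypothetical SLiM decision-table fixture. The FitNesse wiki
// table QA writes might look like this (names invented):
//
//   |should flip pancake              |
//   |bubbles popped|surface dry|flip? |
//   |true          |true       |true  |
//   |true          |false      |false |
//   |false         |true       |false |
//
// SLiM feeds each row's input cells to the setters below, calls
// flip(), and marks the row green when the result matches the
// "flip?" cell.
public class ShouldFlipPancake {
    private boolean bubblesPopped;
    private boolean surfaceDry;

    public void setBubblesPopped(boolean bubblesPopped) {
        this.bubblesPopped = bubblesPopped;
    }

    public void setSurfaceDry(boolean surfaceDry) {
        this.surfaceDry = surfaceDry;
    }

    public boolean flip() {
        return bubblesPopped && surfaceDry;
    }
}
```

When every row in every acceptance table is green, the developer is done; until then, the developer keeps working. "Done" stops being a matter of opinion.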
This is how good agile teams are organized. QA (and development) works with the business to define the requirements as a suite of automated tests that developers execute to know when they are done. When all the tests pass, QA makes a final exploratory pass over the product, and sends it on to production.
That last step is a little more complicated than I've made it sound, but the details are beyond the scope of this article. Suffice it to say that exploratory testing is a craft in its own right that needs to be part of the process on an ongoing basis.
So, in the end, when do you flip a pancake? What is the definition of "done"? Developers are done when the automated acceptance tests designed by QA all execute and all pass.