Saturday, December 21, 2013

QA is Dead. Long live QA!

This isn't specific to startups but it still applies.  I was recently asked for advice on how to go from two-week sprints to one.  The conversation was one I've had several times.

Client: "We are a scrum shop that has two-week sprints.  We'd like to release faster.  Any suggestions?"
Me: "Do you have a QA handoff during the sprint?"
Client: "Sure.  We basically do waterfall during the sprint."
Me: "I've got it!"
Client: "Great!"
Me: "Fire your testers."
Client: "..."

I'm only half joking.  

I used to think having a QA person was the essential fourth technical hire, adding more as needed as the organization grew.  For close to ten years that's how I managed teams, ensuring each one had access to at least one.  That changed last year.  We were pushing for faster and faster releases with a client and something didn't feel right.  As it happened, we were having trouble keeping our QA role filled, and the vacancy was wreaking havoc on our release schedule.  It needed to stop.

We held a series of meetings to discuss our needs and what could be done to address them.  We all agreed we needed tests.  We all agreed we needed someone to own testing.  We also agreed that devs should own unit tests, but whether, and how much, integration testing they should do was a matter of intense debate.  Should we have a QA person who only does spot testing?  Should a human ever repeat a test by hand?  Should we forgo human testing and instead have a QA Engineer who was chiefly a programmer?  If so, how could we cleanly divide their work from the other engineers'?

It was a lot to process.  During this time we plowed through countless testing resources: books, blogs, and tweets.  We hit the jackpot when we ran into How Google Tests Software, a great book on how testing evolved during the early days at the Goog, and it gave us the answers we were looking for.  The sky opened.  We had been looking at QA all wrong.

I'm paraphrasing, but the problem is essentially in thinking that any part of QA is somebody else's job.  We weren't so far gone as to think that engineers didn't own any of it, but we certainly weren't owning enough.  Engineers write a few unit tests and figure that's it.  Managers jam a QA person between the engineers and each release and call their job done.  The reality is that if you want to avoid waterfalls entirely, you've got to bake your testing completely into your code effort - not some of the tests, but all of them.  The code isn't done until the testing is.

We were skeptical at first.  I mean, when you're used to seeing a net below you when you cross the high wire, it's a little unnerving when it's gone, right?  Once we realized that having devs own the whole process meant the wire was actually a bridge, there was no fear.  The need for a safety net was an illusion perpetuated by our own bad behavior.

How do you know when you've done it right?  You won't need any testers.  Having a tester from the get-go creates an artificial dependence on someone else to do your testing for you.  It also creates an unnecessary step in your release process.  Be your own tester first.  Separate QA roles should only exist once your QA needs involve a strategic planning component that can no longer be distributed throughout the development team.  It depends somewhat on your dev team and your product, but for most places this isn't until the third or fourth year.

Do the work yourself.  Design a workflow that requires developers to wipe their own behinds by writing automated tests for, and testing, their own code.  Your devs make smarter decisions.  You can stop paying for people you don't need.  You can finally get the waterfall out of your scrum.  I would go so far as to suggest that Continuous Delivery can't be achieved without this approach.  You can do without dedicated QA.  Start now.  Your code, your process, your developers, your timeline, and your budget will thank you for it.
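To make "the code isn't done until the testing is" concrete, here's a minimal sketch (in Python, a language choice of mine; the feature and its names are hypothetical) of a developer shipping a change together with the automated checks that gate it - no separate QA handoff:

```python
# Hypothetical example: the developer who writes the feature also writes
# the automated checks that gate its release.

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Happy path
    assert apply_discount(100.0, 10) == 90.0
    # Boundary cases the author, not a downstream tester, is responsible for
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
    # Invalid input fails fast
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

if __name__ == "__main__":
    test_apply_discount()
    print("all checks passed")
```

Run as part of every commit or CI build, checks like these replace the manual handoff step entirely: if they fail, the code simply isn't done.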


  1. Is this trolling or are you serious?

    Fire the testers?! I bet this opinion goes down a storm with the CIOs looking to shave a few k off the 'resource' budget, but it smacks of short-termism.

    As a dev, I have worked with a lot of testers - some good, some not so good. I can say 100% that when you get a good tester they add immeasurably to the team. They don't just sit there in isolation & 'do testing'. More often than not they have come up from the business and have a ton of domain knowledge, so they are useful in the planning stage & for talking through with the product owner.
    They also don't think like developers, and no matter how well I reckon I've done my unit tests, those testers come up with something I may not have considered. Or, as part of my "moving it to done" review, we talk through exactly what I have done, thereby unearthing bugs or any misunderstandings I may have had.
    You have to remember, most organisations don't have IT staff who are at a Google level - else they would be at Google.
    I have a problem with how organisations use testers - like some kind of dumb code police - but just because Google says so doesn't mean it's going to work everywhere. Not all of Google's ideas are good ones - ahem - Google Wave.

  2. I'm totally serious. The initial dialog was meant as hyperbole but my point remains. By and large if developers change the way they think about their role in testing by accepting a much larger share of responsibility, the tester's role, if any remains, is significantly diminished. Two important things to achieve this are focusing on integration tests rather than unit tests, and using stories (see BDD) to describe and implement requirements and tests. There are many more but those two are a good start. Testing for a dedicated tester then becomes all about automation and helping the team define the strategy and process to achieve it. I'm not certain you can achieve continuous delivery without making this mental shift.

    I would challenge your opinion of Google staff. Or rather, I would challenge your opinion of the people staffing most organizations. In this context, the difference has less to do with talent and more to do with culture and process. Every person has the capacity for greatness. You just have to give them the right environment and resources to succeed.

    As for why we chose this approach, we read and read and then read some more before we found something that really resonated with what we were trying to accomplish. It wasn't so much that the approach they suggested was revelatory so much as it confirmed what our own experiments were already showing us. We could do without a tester. And we would outperform the version of us that couldn't.

    I hope this addresses your main points. I can suggest a ton of further reading or point out books/blogs/anecdotes that can help if you want to dig deeper.
    Feel free to hit me up on twitter (@nullsync). I LOVE talking about this stuff.
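    The story-driven (BDD) style mentioned in the reply above can be sketched in plain Python. The Cart class and the scenario are invented for illustration, not from any real codebase; the point is that the requirement reads as Given/When/Then and the test follows it line for line:

```python
# Hypothetical sketch of a story-driven (BDD-style) test: the scenario's
# Given/When/Then steps appear as comments, and the code mirrors them.

class Cart:
    """Toy shopping cart used only for this example."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_checkout_story():
    # Given an empty cart
    cart = Cart()
    # When the customer adds two items
    cart.add("coffee", 4.50)
    cart.add("bagel", 2.25)
    # Then the total reflects both items
    assert cart.total() == 6.75

if __name__ == "__main__":
    test_checkout_story()
    print("story passed")
```

    Because the test is phrased in the language of the requirement rather than of the implementation, it doubles as an integration-level check and as living documentation - the two shifts the reply argues for.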

  3. I think that you're making a good point and also missing a good point.

    Automated tests are better than humans imitating robots. ALWAYS. Robots are better robots.

    "Testers" actually are three things:
    1) Experts on how to CONFIGURE and USE the system.
    2) A human brain that can MISUNDERSTAND or DISLIKE the interface.
    3) The perspective of "how to use this" (value) over "what this does" (mechanics).

    A tester can check the *experience* of using the system, of configuring and running it, and even the human efficiency of the layout. Scripts don't.

    You always put humans where DNA, logic, experience, and ambiguity are the requirement.

    Where consistency and repetition are the goal, you use software.

  4. I agree. There are situations where humans are a better fit. If you're scaling, for example, at some point dedicated human testers are essential. Why, how, and when is very much context-dependent. As an owner working with seed or A round capital, $40k-80k for a dedicated tester is hard to justify when, for most start-ups, I'd argue you can more than make do with customer interviews/surveys/etc. or your own internal use of the product for feedback. There are always exceptions, but those are just that - exceptions. Even if you can afford them and they make sense, I would still argue that you should invest heavily in automation and make hiring a human a fallback position. The payoff to the approach is hard to ignore. Anecdotal evidence has so far borne this out. I suppose your mileage may vary.