Saturday, December 21, 2013

QA is Dead. Long live QA!

This isn't specific to startups, but it still applies.  I was recently asked for advice on how to go from two-week sprints to one.  It's a conversation I've had several times.

Client: "We are a scrum shop that has two week sprints.  We'd like to release faster.  Any suggestions?"
Me: "Do you have a QA handoff during the sprint?"
Client: "Sure.  We basically do waterfall during the sprint."
Me: "I've got it!"
Client: "Great!"
Me: "Fire your testers."
Client: "..."

I'm only half joking.  

I used to think a QA person was the essential fourth technical hire, with more added as the organization grew.  For close to ten years that's how I'd managed teams, ensuring each one had access to at least one tester.  That changed last year.  We were pushing for faster and faster releases with a client, and something didn't feel right.  As it happened, we were having trouble keeping our QA role filled, and the gap was wreaking havoc with our release schedule.  It needed to stop.

We held a series of meetings to discuss our needs and what could be done to address them.  We all agreed we needed tests.  We all agreed we needed someone to own testing.  We also agreed that devs should own unit tests, but whether, and how much, integration testing they should own was a matter of intense debate.  Should we have a QA person who only does spot testing?  Should a human ever repeat a test by hand?  Should we forgo human testing and instead have a QA Engineer who was chiefly a programmer?  If so, how could we cleanly divide their work from the other engineers'?

It was a lot to process.  During this time we plowed through countless testing resources: books, blogs, and tweets.  We hit the jackpot when we ran into How Google Tests Software, a great book on how testing evolved during the early days at the Goog, and it gave us the answers we were looking for.  The sky opened.  We had been looking at QA all wrong.

I'm paraphrasing, but the problem is essentially in thinking that any part of QA is somebody else's job.  We weren't so far gone as to think that engineers didn't own any of it, but we certainly weren't owning enough.  Engineers write a few unit tests and figure that's it.  Managers jam a QA person between the engineers and each release and call their job done.  The reality is that if you want to avoid waterfalls entirely you've got to bake your testing completely into your code effort - not some of the tests, but all of them.  The code isn't done until the testing is.

We were skeptical at first.  I mean, when you're used to seeing a net below you when you cross the high wire, it's a little unnerving when it's gone, right?  Once we realized that having devs own the whole process meant the wire was actually a bridge, there was no fear.  The need for a safety net was an illusion perpetuated by our own bad behavior.

How do you know when you've done it right?  You won't need any testers.  Having a tester from the get-go creates an artificial dependence on someone else to do your testing for you. It also creates an unnecessary step in your release process. Be your own tester first. Separate QA roles should only exist once your QA needs involve a strategic planning component that can no longer be distributed throughout the development team.  It depends somewhat on your dev team and your product, but for most places that point doesn't arrive until the third or fourth year.

Do the work yourself.  Design a workflow that requires developers to wipe their own behinds by writing automated tests for, and testing, their own code.  Your devs will make smarter decisions.  You can stop paying for people you don't need.  You can finally get the waterfall out of your scrum.  I would go so far as to suggest that Continuous Delivery can't be achieved without this approach.  You can do without dedicated QA.  Start now.  Your code, your process, your developers, your timeline, and your budget will thank you for it.
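To make "the code isn't done until the testing is" concrete, here's a minimal sketch of what a dev-owned change can look like, assuming a Python shop running pytest.  Every name in it is invented for illustration; the point is that the feature, its unit test, and its integration-style test land together in one change, with no handoff.

    # A minimal sketch of dev-owned testing, assuming a Python shop
    # running pytest.  Every name here is invented for illustration.

    def apply_discount(price, percent):
        """Feature code under test: reduce price by a percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class InMemoryOrders:
        """Test double standing in for the real order store."""
        def __init__(self):
            self.saved = []

        def save(self, order):
            self.saved.append(order)

    def checkout(orders, price, percent):
        """The integration path: compute the discount, persist the order."""
        total = apply_discount(price, percent)
        orders.save({"total": total})
        return total

    # Unit test: written by the developer who wrote the feature.
    def test_apply_discount_rounds_to_cents():
        assert apply_discount(19.99, 10) == 17.99

    # Integration-style test: same developer, same change, no handoff.
    def test_checkout_persists_discounted_total():
        orders = InMemoryOrders()
        assert checkout(orders, 100.00, 25) == 75.00
        assert orders.saved == [{"total": 75.00}]

None of this is exotic.  The only process change is that the tests are written by the same person, in the same commit, as the feature they cover.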

Monday, December 9, 2013

Joe's Rule

Nobody likes to be in debt, but money and time are scarce when you're getting started, and whether it's student loans or technical debt most of us have a hole to climb out of. You can ignore it, but debt is a beast that feeds on inattention. The challenge for a start-up is that even when you're paying attention it's hard to know how to prioritize the debts you're facing.  When a client's team was struggling for yet another week to determine which tickets on the backlog were the most important, I was determined to help them find a way through.

Conversationally the problem seemed simple: pick the tasks that provide the most value. But in practice it is much more challenging. What is value? Value to whom? How do you define it? How do you explain it? What's valuable to engineering may not be valuable to the business. We needed a simple definition that we could all agree on.

One difficulty is that engineers tend to prioritize technically interesting problems over tasks that provide business-facing value.  It's natural.  Don't deny it.  When I'm wearing my developer hat I do it too. The other challenge is that when you care about code every refactor feels important. So how do you decide?

Time for some soul searching. I had recently read First Fire All the Managers, a piece on flat management and intrinsic motivators. It argued that the way to get the best out of your people was to make them all managers. While chewing on that it occurred to me: take it one step further. What if everyone was an owner or investor? Typical investors won't even talk about a deal unless the return is at least 3x the initial investment, so why should we? What are we investing? Time.

The currency in engineering is time, but devs don't usually consider the ROI on that time concretely when prioritizing.  So I took this back to the client: "Don't do anything that doesn't have at least a 3x return. Prioritize what remains by what provides the greatest return in time or dollars for that time." In our next meeting I suggested each engineer put on an investor hat and prioritize tasks accordingly. I was optimistic, but even I was surprised by how well this was received. The payoff was immediate.  Priorities ceased to be arbitrary. Many tasks ceased to be relevant at all. It changed the context of everything we were doing, from the way we approached what to refactor to whether we wrote tests at all. Discussions of refactors with the business became much more productive, and the challenges we'd had defending which technical debt to pay off, and when, fell away.  Having a concrete threshold made all the difference.
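For the sake of the arithmetic, here's a toy sketch of the rule applied to a backlog, assuming you can put rough numbers on time invested and time returned.  The tasks and estimates are invented; the only real parts are the 3x threshold and the sort by return.

    # Toy illustration of Joe's Rule: drop anything under a 3x return
    # on time invested, then rank what's left by its return ratio.
    # The backlog and the estimates are invented for the example.

    THRESHOLD = 3.0  # minimum acceptable return multiple

    backlog = [
        # (task, hours invested, hours saved over the next quarter)
        ("refactor build pipeline", 20, 120),
        ("rewrite logging layer",   40,  60),
        ("cache the pricing call",   8,  80),
        ("migrate to the new ORM",  80, 100),
    ]

    def roi(task):
        _, invested, returned = task
        return returned / invested

    # Joe's Rule: anything below the threshold doesn't get done at all.
    worth_doing = [t for t in backlog if roi(t) >= THRESHOLD]

    # Prioritize the survivors by the size of their return.
    for name, invested, returned in sorted(worth_doing, key=roi, reverse=True):
        print(f"{name}: {returned / invested:.1f}x return")

    # Prints:
    #   cache the pricing call: 10.0x return
    #   refactor build pipeline: 6.0x return

The estimates will always be rough, and that's fine.  The rule's value is in forcing the estimate to be made at all, and in giving everyone the same bar to argue against.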

This spread quickly from a way to deal with technical debt to the way the team approached everything they did. The irony is that despite having introduced the rule myself, it was easy to forget. Joe, one of the lead developers, took it up as his banner and became its consistent champion. Time and time again it cut debates short about what to do next, and on many occasions it saved the team from sinking effort into tasks without sufficient value. It's become a permanent tool in my belt, and I have Joe to thank for that. Here's to you, Joe.