Why We Stopped Using OKRs

Objectives and Key Results (OKRs) are a tool for setting and measuring goals.

The framework was invented by Andy Grove at Intel and described in his influential book, High Output Management, and it has been heavily popularized by big tech companies like Google, Microsoft, and Atlassian.

In theory, OKRs are better than just regular goals because they force you to think in terms of outcomes rather than outputs.

Too often, we set out to finish projects and measure our success by whether or not a project was complete, even if that project didn’t move the needle on our business.

Let’s use a surgeon as an example. In the traditional goal-setting model, a surgeon might measure success by whether or not they completed the surgery and the patient didn’t die on the operating table.

But did the patient actually get better? Or was there perhaps a less invasive way to achieve that same result?

Why OKRs are great (in theory)

OKRs force you to set an Objective that describes a business outcome, and Key Results that measure whether or not that objective was achieved.

In the surgeon example, perhaps a key result is getting the patient’s cholesterol levels from 6.2 down to 5.2.

The key result needs to map to the objective so that once achieved it makes the objective true.

If OKRs are properly set, they leave a lot of freedom for teams to figure out the initiatives or projects to achieve the OKR. This can empower teams and make them responsible for setting and achieving goals that move the business forward without management being overly prescriptive.

Why did we stop using OKRs then?

When we rolled out OKRs, I ignored the good advice that is contained within most OKR literature, which is ‘Do not try to implement them across the entire organization all at once.’

Because OKRs require a more abstract, high-level way of thinking, they can be tricky to get right, and it can take multiple quarters, sometimes years, before they’re really working.

The usual recommendation is to try them with one team for a while and then gradually roll them out across multiple teams.

Otherwise, every team does it a different way, OKRs never get achieved, they don’t produce business results, and people just stop using them because they don’t believe in them.

Another reason our implementation of OKRs failed is that teams didn’t really know how to use them: they would retrofit initiatives into objectives, set unrealistic goals that could never be achieved, or set inappropriate key results that were disconnected from the outcome.

Let’s go through these one at a time:

Retrofit initiatives into objectives

If a team doesn’t really think in outcomes, they just know that project X needs to get finished, and they will shape an initiative to fit an objective. So if a team knows they want to build a company wiki, they’ll phrase their objective as “Launch Company Wiki”.

Then they’ll measure their key results based on whether this gets done. It might look something like this:

KR: Publish 100 pages in the wiki.

But wait, what is the outcome there? What does launching a wiki do? What problem are we trying to solve with a wiki? If we know that, then maybe there’s another way to solve it.

Set unrealistic goals

OKRs are typically set each quarter. This makes sense because humans can only really plan in 90-day chunks; most of us don’t even remember things we committed to more than 90 days ago.

What happens though is that teams will want to achieve a big outcome and try to force it into a quarterly OKR.

For example, “Make a product that delights our customers”. Perhaps they measure it with the KR: 95% customer satisfaction score.

Now, this is a properly set OKR except for just one thing: there is no way in hell it can get achieved in 90 days.

So teams will set out with this wild, ambitious goal and then when they get to the end of 90 days they’ll be demoralized that they didn’t even get close.

OKR literature says to set stretch goals, and that if you get to even 70%, the team should celebrate the progress.

Unfortunately, teams often take this too far and set stretch targets that they can’t even reach 5% of in a quarter.

Inappropriate key results

Even if a team can think in business outcomes AND set a realistic objective for a quarter, there is one more pitfall they run into: not knowing how to set a key result.

The whole idea of KRs is that they make the objective true.

But what you often see in practice is that teams don’t know how to measure success, so they resort to measuring project status.

Using the previous example, let’s say a team set the totally appropriate objective “Ensure teams have access to up-to-date process documentation”.

That seems like a reasonable objective, especially if the problem we’re trying to solve is that nobody knows where anything is, so our processes are followed inconsistently, resulting in a bad customer experience and lower profitability. That is a real business problem!

Let’s even assume that a company wiki is the right solution to that problem.

How do you measure that teams have access to the documentation?

Many teams will still force project work into their OKRs.

KR: 100 wiki pages published.

Does that really make the objective true? Well, what if we publish the pages but nobody knows where they are? Just publishing them alone doesn’t move the needle on the objective!

The other problem is that key results should be progressive. If the KR starts the quarter at 0%, we want to aim for 100% and measure our progress weekly. Are we getting closer, even if we end the quarter at 70% of the goal?

But this isn’t how teams often set KRs. 

You see in practice a lot of binary 0/1 goals. Achieved or not achieved.

In many cases, teams never establish a baseline, so we have no idea where we’re starting from or where we’re trying to get to. I’ve seen KRs like “Reduce X by 40%”. Without a baseline, that doesn’t tell me anything. These percentage-based KRs make it really hard to know what is actually being measured.
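To make the point concrete: a KR only becomes measurable once both the baseline and the target are explicit. This is a minimal sketch (the function name and all the numbers are hypothetical, not from any OKR tool) of the arithmetic behind weekly progress tracking:

```python
def kr_progress(baseline: float, target: float, current: float) -> float:
    """Fraction of the way from baseline to target.

    Works whether the KR is an increase or a decrease, because the
    distance travelled and the total distance share the same sign.
    """
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / (target - baseline)

# Hypothetical KR: "Reduce open support tickets from 500 to 300."
# At 420 tickets we are 40% of the way there.
print(round(kr_progress(baseline=500, target=300, current=420), 2))  # 0.4
```

Note that a bare “Reduce X by 40%” can’t be plugged into this at all: without the baseline there is nothing to compute progress against.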

Overhead of OKRs

There’s one more issue with OKRs, which is the overhead of creating and managing them.

For one, measuring and tracking OKRs generally requires special software like Perdoo, which we used. Perdoo is really good software for what it is, but it’s another expensive per-seat tool that people need accounts for, have to learn, and have to remember to go in and update every week.

There’s also the time spent having each team come up with OKRs.

Teams would plan offsites to set their OKRs for the upcoming quarter, spend an entire day in a room, and then come out with OKRs to present to management that are not at all tied to company goals (for the aforementioned reasons).

Then the management team would need to provide feedback on the OKRs to get them to a state where everyone is bought in.

Which raises the question: why didn’t management just tell the team the goal they want achieved and let the team figure out the how?

What we’re using instead

We’re back to good old-fashioned “Big Rocks”.

They are very similar to OKRs, except simpler - you can put a measurable target within a rock, or not. It’s up to you.

A big rock could be “Close 20 new $10K accounts” or it could be “Launch the new product 2.0 and get 20 customers adopting”.

The management team generally comes up with these company and department rocks and lets the team figure out the how.

Of course, this framework assumes that management is setting the right rocks that will achieve the outcome, but in my experience, leaders and executives are better at thinking in terms of business outcomes than individual teams. 

Individual contributors focus a lot on the ‘how’ - that’s why they make great individual contributors. 

You want them to be bought in, of course, so they see the value that the “big rock” will create, but often they just want to be told what is important rather than be given vague, abstract goals and then have to figure out what is expected of them.

Kyle Racki