Your KPIs are probably backfiring (here’s how to fix them)


Hello there,

Quick question:
Have you ever designed a metric that created exactly the opposite behavior you wanted?

If you’re nodding, you’ve discovered Goodhart's Law in action:

"When a measure becomes a target, it ceases to be a good measure."

This points to a fundamental truth about human nature: as soon as people know a number is being watched or used to make decisions, they start optimizing for that number - often at the expense of what it was meant to represent.

Here’s why this matters to your daily work:
Every time you create a KPI, propose a metric, or get measured by someone else’s numbers, you’re either encouraging the right behavior or accidentally rewarding the wrong behavior.

And the wrong behavior is way more common than you think.


Why This Happens (And Why It's Hurting Your Work)

We measure what's easy to count instead of what actually matters. Response time instead of customer satisfaction. Error rates instead of analytical insight. Completion dates instead of value-adding solutions.

Once people know a number is being tracked, they optimize for that number. Teams start rushing to hit time targets. Analysts avoid complex work that might generate errors. Project managers only take on guaranteed wins.

The numbers look great. But the actual work suffers.

Most of the time, this isn't malicious gaming - it's human nature. People naturally optimize for whatever gets measured, even when it undermines the real goal.

Here's how this hurts your daily work: You either waste time optimizing for metrics that don't improve your actual performance, or you're stuck in systems that reward the wrong behavior while you're trying to do meaningful work.


The Three Most Common Metric Traps

The Speed Trap

What it looks like: Any metric focused on how fast something gets done - response time, processing speed, time to completion, turnaround time

What happens: Quality drops as people rush to hit time targets

Better approach: Pair speed with outcome metrics. For example, track both "average response time" and "issues resolved on first contact."

The Volume Game

What it looks like: Any metric that counts outputs - reports delivered, tickets closed, projects completed, tasks finished

What happens: Quality plummets as people focus on hitting quantity targets

Better approach: Add impact indicators. For example, measure "reports delivered" alongside "decisions influenced" or "stakeholder follow-up requests."

The Perfection Paradox

What it looks like: Any metric that penalizes mistakes - error rates, accuracy scores, compliance percentages, defect counts

What happens: People avoid challenging work that might not be perfect

Better approach: Reward learning alongside accuracy. For example, track "experiments attempted" and "lessons documented" plus your accuracy metrics.
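
If you want to make these pairings concrete in your own reporting, one option is to compute the gameable metric and its counterbalancing metric in the same query or dataframe, so neither number ever gets reported alone. Here's a minimal Python/pandas sketch of that idea for the Speed Trap; the `tickets` dataframe and its columns (`team`, `response_minutes`, `resolved_first_contact`) are hypothetical stand-ins for whatever your tracking system actually exports.

```python
import pandas as pd

# Hypothetical ticket export - swap in your own data source and column names.
tickets = pd.DataFrame({
    "team": ["A", "A", "B", "B", "B"],
    "response_minutes": [12, 45, 8, 9, 7],
    "resolved_first_contact": [True, True, False, False, True],
})

# Report the speed metric and its counterbalancing outcome metric side by side,
# so a team can't look good by rushing responses that don't resolve anything.
scorecard = tickets.groupby("team").agg(
    avg_response_minutes=("response_minutes", "mean"),
    first_contact_resolution_rate=("resolved_first_contact", "mean"),
    tickets_handled=("resolved_first_contact", "size"),
)

print(scorecard)
```

The same pattern covers the other two traps: put "reports delivered" next to "stakeholder follow-up requests," or an error rate next to "experiments attempted," in one scorecard instead of separate reports.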


Design Metrics Like They’ll Be Gamed

Don’t just measure what’s easy. Before creating or accepting any metric, ask these questions:

What behavior will this create?
If someone wanted to hit this number without caring about the underlying goal, what would they do? If the gaming strategy undermines your real objective, fix the metric.

What tradeoffs does this ignore?
Most metrics encourage focus on one thing at the expense of others. Speed vs. quality. Volume vs. impact. Individual performance vs. team collaboration. Make these tradeoffs visible.

What would someone do just to hit the number?
This is your gaming test. If the most efficient path to hitting your metric completely misses the point, you're measuring the wrong thing.


One Thing to Remember

Good metrics should make the work better, not just make the work look better.

Design them like they’ll be gamed - catch the issues before they even happen.

That’s it for this week. Hope you found this issue helpful.

Until next time,
Donabel


Have you checked out these resources?

🔨

Build to Sell - 7 Days to Your First Digital Data Product

Check out the course →

🤖

Teach Data with AI

View the Newsletter →

🎗️

The Introverted Analyst's 5 Day Guide to Professional Visibility

Start Getting Recognition at Work →

