Your north-star metric is up. Everyone's happy. You shipped the feature, engagement moved, and the metric moved with it. You feel validated. Then three months later, the metric plateaus. Or worse, it inverts. And you have no idea why, because you never understood what was actually moving it in the first place.
A north-star metric without diagnostic levers is just a vanity number with fancy charts.
The problem with outcomes without levers
Most teams define their north-star metric as an outcome: Monthly Active Users. Customer Lifetime Value. Engagement Hours. These are real and important. But they're not actionable.
When MAU is down, what do you ship? Do you ship features? Improve onboarding? Reduce price? Expand to new markets? All of these could move the needle. Or none of them. You have no idea, because the outcome doesn't tell you which lever to pull.
BCG research shows that 74% of companies struggle to extract value from their metrics. Not because the metrics are wrong. But because they measure outcomes without building the diagnostic framework to understand what drives those outcomes.
The disconnect gets worse in product-engineering conversations. Product brings the north-star metric and says "move it." Engineering asks what's actually broken and needs fixing. Product doesn't know. The conversation devolves into arguing about which feature might move the needle. Everyone ships, hopes, and checks the metric at the end of the quarter.
What actually works: The three-tier metric system
Instead of a single outcome metric, build a system of three tiers (sketched in code right after the list):
Tier 1: North-star metric (the outcome). This is your long-term destination. For a productivity app, it might be "Daily Active Users." For a marketplace, "Gross Transaction Value." This doesn't change often. It's the scorecard.
Tier 2: Leading indicators (the predictors). These are the metrics that actually predict whether your north-star moves. They lead the outcome by weeks or months. For a DAU metric, leading indicators might be: "Time-to-first-save," "Average session length," "Feature adoption rate," "Retry rate after error." These are the metrics that tell you if your changes are working before the north-star catches up.
Tier 3: Diagnostic levers (the actions). These are the specific metrics you can influence directly through product decisions. For "Time-to-first-save," diagnostic levers might be: "Onboarding completion rate," "UI clarity score," "Time from signup to first feature access." These are things you can ship against.
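To make the hierarchy concrete, here's a minimal Python sketch of how the three tiers might hang together. The Metric and MetricSystem classes and all of the metric names are hypothetical, drawn from the productivity-app examples above; your own tiers will differ:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    cadence: str    # how often it refreshes: "realtime", "daily", or "weekly"
    good: str       # which direction is good: "up" or "down"

@dataclass
class MetricSystem:
    north_star: Metric                          # Tier 1: the outcome
    leading_indicators: list[Metric]            # Tier 2: the predictors
    diagnostic_levers: dict[str, list[Metric]]  # Tier 3: levers, keyed by the indicator they drive

# Hypothetical wiring for the productivity app described above.
system = MetricSystem(
    north_star=Metric("daily_active_users", cadence="weekly", good="up"),
    leading_indicators=[
        Metric("time_to_first_save", cadence="daily", good="down"),
        Metric("avg_session_length", cadence="daily", good="up"),
        Metric("feature_adoption_rate", cadence="daily", good="up"),
    ],
    diagnostic_levers={
        "time_to_first_save": [
            Metric("onboarding_completion_rate", cadence="realtime", good="up"),
            Metric("time_signup_to_first_feature", cadence="realtime", good="down"),
        ],
    },
)
```

The keying of diagnostic_levers is the point: every lever has to name the leading indicator it's supposed to move. A lever that maps to nothing is a vanity metric.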
"You can't manage what you can't measure. And you can't move what you don't understand. North-star metrics give you the destination. Diagnostic levers give you the map."
John Cutler, on leading and lagging indicators
How to build this framework for your product
Start with your north-star, then reverse-engineer it:
What has to happen for this metric to move? For DAU to go up, users have to come back more often. For users to come back more often, they need to find value in the product. For them to find value, they need to succeed at their core task faster. Work backwards. This is your Tier 2.
What can we actually influence to move Tier 2? For "time-to-first-save," you can influence: onboarding clarity, default settings, template availability, UI complexity, error recovery. These become your diagnostic levers.
Now set up the measurements: real-time tracking of diagnostic levers, daily tracking of leading indicators, and a weekly north-star check with a monthly review. When a diagnostic lever moves, you should see movement in leading indicators within days or weeks. When leading indicators move consistently, the north-star should follow within weeks or months.
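A minimal sketch of that sanity check, assuming your metrics live in a daily pandas table (a DatetimeIndex, one column per metric). The lagged_lift function and the dates are hypothetical, and a before/after average is a first-pass signal, not causal proof:

```python
import pandas as pd

def lagged_lift(metrics: pd.DataFrame, metric: str,
                ship_date: str, window_days: int = 14) -> float:
    """Fractional change in a metric's average around a ship date."""
    series = metrics[metric]
    before = series.loc[:ship_date].tail(window_days).mean()
    after = series.loc[ship_date:].head(window_days).mean()
    return (after - before) / before

# If a lever shipped on March 1, the leading indicator should move first:
# lagged_lift(metrics, "time_to_first_save", "2024-03-01", window_days=7)
# and the north-star should follow on a longer window:
# lagged_lift(metrics, "daily_active_users", "2024-03-01", window_days=30)
```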
Why this matters more than measurement frequency
Most teams check their north-star monthly. Some check quarterly. The frequency isn't the point. What matters is having a diagnostic framework that explains why the metric moves, before it moves.
This creates a feedback loop where you're not flying blind. Product ships a feature to improve onboarding clarity. Within 24 hours you see time-to-first-save drop. Within a week you see retry rate improve. Within two weeks you see session length extend. At the end of the month, you see DAU nudge up. You know exactly what worked.
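If you want that loop to police itself, encode the expectations and flag breaks in the chain. A sketch, with hypothetical lag windows mirroring the timeline above:

```python
from datetime import date, timedelta

# Hypothetical expected lags: metric -> max days after ship before it should move.
EXPECTED_LAG_DAYS = {
    "time_to_first_save": 1,
    "retry_rate_after_error": 7,
    "avg_session_length": 14,
    "daily_active_users": 30,
}

def overdue(ship_date: date, moved: set[str], today: date) -> list[str]:
    """Metrics whose lag window has closed without the expected movement."""
    return [m for m, lag in EXPECTED_LAG_DAYS.items()
            if m not in moved and today > ship_date + timedelta(days=lag)]

# overdue(date(2024, 3, 1), moved={"time_to_first_save"}, today=date(2024, 3, 10))
# -> ["retry_rate_after_error"]: the chain broke at step two, so dig in there.
```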
Compare that to the typical flow: "Ship feature. Check north-star next month. Metric moved. Ship more of the same thing." It's luck dressed up as strategy.
The dashboard that actually works
Your dashboard should have three sections:
Section 1: Leading Indicators (updated daily). Is time-to-first-save trending down? Is session length trending up? These move fast and predict your outcome.
Section 2: Diagnostic Levers (updated in real time). Onboarding completion rate: 45%. Perceived UI clarity: 7.2/10. Template usage in first 7 days: 62%. These are what engineering shipped against, and the team should check them multiple times a day to confirm changes landed.
Section 3: North-Star Metric (updated weekly, reviewed monthly). The outcome. The scorecard. The thing that matters.
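As a sketch, the whole layout fits in a small config. Section names, metric names, and cadences are illustrative, not any real dashboard API:

```python
# Illustrative dashboard layout; section order mirrors the text above.
DASHBOARD = {
    "leading_indicators": {            # Section 1
        "refresh": "daily",
        "metrics": ["time_to_first_save", "avg_session_length"],
    },
    "diagnostic_levers": {             # Section 2
        "refresh": "realtime",
        "metrics": ["onboarding_completion_rate",
                    "ui_clarity_rating",
                    "template_usage_first_7_days"],
    },
    "north_star": {                    # Section 3
        "refresh": "weekly",           # reviewed monthly
        "metrics": ["daily_active_users"],
    },
}
```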
This structure keeps teams focused on what they can control while still tracking what matters.
The hard part: Patience
Building this framework takes work. You have to spend weeks mapping what actually drives your north-star. You have to instrument diagnostic levers. You have to be disciplined about not adding vanity metrics.
But the payoff is velocity. Once you have this system in place, you ship with conviction. You know what you're trying to move. You know how to measure if it moved. You know what to adjust if it didn't.
Teams with solid diagnostic frameworks ship 2-3x faster because they're not guessing. They're diagnosing.