
ROAS Is Not the Goal. It's a Sensor on the Dashboard.

ROAS, revenue, and scale aren't the goal of a business. They're sub-KPIs that mislead the moment you start optimizing for them instead of contribution margin.

Jordan Glickman · May 8, 2026
Strategy

The metric you celebrate is rarely the metric that pays you

I've sat in a lot of rooms where a CMO proudly walks through a deck and the headline number is a 4.2x ROAS. Everyone nods. The slide gets a checkmark.

Then I open the bank account and the business is bleeding cash.

This happens because ROAS is not a goal. It's a sensor. And the moment you start steering by a sensor, the sensor stops telling you the truth.

What a business is actually trying to do

Strip away the dashboards and the Slack channels and the Looker Studio screenshots, and a business is doing one thing: producing something people want for less than it costs to deliver it, then keeping the difference.

That's the whole job.

Not ROAS. Not revenue. Not a vanity GMV number on a year-end deck. Those are all proxies — derivatives of the real thing — and the further down the derivative chain you go, the easier it is to game.

At Impremis we manage over $250M a year in paid spend across our portfolio. The brands that scale durably almost never optimize for the metric on the report. They optimize for contribution dollars on the bank statement. Everything else is a leading indicator they triangulate against.

Goodhart's Law, but for marketers

Charles Goodhart was a British economist watching his country try to control inflation by targeting money supply. He noticed something. The moment the Bank of England declared M3 the official target, M3 stopped behaving like the underlying economy. Banks rerouted activity to keep M3 happy and the actual economy did its own thing.

He wrote it down: when a measure becomes a target, it ceases to be a good measure.

It is the single most useful sentence I know in performance marketing.

A media buyer hears "we need 3x ROAS this quarter" and within three weeks the campaign mix has shifted toward retargeting, branded search, and returning customers. ROAS goes up. New-customer revenue goes down. Six months later the brand is in trouble and nobody can explain why the dashboard still looks fine.

The sensor told the truth right up until you turned it into a steering wheel.

Every stakeholder over-optimizes for their favorite proxy

Different capital structures. Different favorite metric. Different way to wreck a business.

| Stakeholder | Favorite proxy | What gets sacrificed |
|---|---|---|
| Private equity | EBITDA / profit margin | Brand investment, R&D, future cohort quality |
| Venture capital | Top-line revenue, growth rate | Unit economics, payback period |
| Performance agency | ROAS / CPA inside the platform | Incremental new customers, blended P&L |
| Founder on Twitter | Revenue announcements | Literally everything else |
| CFO | Cash on hand | Anything that takes more than a quarter to mature |

None of these people are wrong to track their metric. They're wrong to optimize for it without checking what's happening one layer up.

Where this shows up inside an ad account

Let me make this concrete. Below are four real patterns I've watched destroy good accounts. Every one of them was driven by a team that turned a sensor into a target.

Pattern 1: "Make more ads"

A brand decides creative volume is the bottleneck. Targets 80 new ads per month. The team hits it. Six months later the account is producing 80 near-identical variations of three working concepts, the algorithm can't tell them apart, and the genuinely net-new angles that used to break through are buried under a pile of mediocre permutations.

Volume is a sensor. It tells you whether you're testing enough. It is not a goal. The goal is tested concepts, which is a very different number.

Pattern 2: "Hit a 25% hook rate"

A team decides hook rate predicts winners. Sets a 25% threshold. Within a month every ad opens with a shock cut, a screaming creator, or a thumb-stopping pattern interrupt that has nothing to do with the actual product.

Hook rate goes up. CTR goes up. Conversion rate collapses because the hook is now selecting for an audience that has zero buying intent. The sensor told the team to do exactly the wrong thing.

Pattern 3: "Kill anything below 1.5x ad-level ROAS"

This one looks responsible. It is not.

In any healthy account, the top-spending winners are propped up by a quiet supporting cast. Top-of-funnel ads that drive view-through value. Older creatives that anchor the algorithm. Variations that absorb cheap impressions and feed the conversion path. Kill them individually and the headline winners start choking — sometimes within a week, sometimes within a quarter — and nobody connects it back to the kill rule.

Pattern 4: "Pause anything older than 90 days"

Same disease, different symptom. Aged creatives often have the cleanest delivery, the most learning, and the lowest CPMs in the account. Cycle them out on a calendar rule and you watch your blended efficiency drop while the dashboard insists everything is "healthy."

The biggest accounts suffer the most

Here's the part most operators miss. The bigger and better an account performs, the more it relies on system-level optimization that looks ugly at the unit level.

A healthy eight-figure account is full of ads that, viewed in isolation, you would pause. Their job is not to win on their own. Their job is to make the winners possible.

This is why agencies that take over a great account often blow it up in 60 days. They walk in, run a kill rule on "underperformers," and gut the supporting structure that made the headline ROAS achievable in the first place.

Pause one bad ad and you save $400 of spend. Pause the wrong bad ad and you cost the account $40,000 in compounding inefficiency. Most teams cannot tell the difference. Almost no dashboard helps them.

At &you we explicitly track which assets we expect to underperform on a unit basis but contribute on a system basis. They live on a different sheet than the kill list.

The sensors-vs-steering-wheel test

Here's the rule I run any new metric against before letting it influence a decision.

Ask: if a clever person on my team gamed this metric — moved it 30% in the right direction without changing anything real about the business — would the headline P&L metric move in the same direction?

If yes, the metric can be a target.

If no, it can only be a sensor.

| Metric | Game-able without real improvement? | Use it as |
|---|---|---|
| Ad count produced | Yes — produce slop | Sensor only |
| Hook rate | Yes — pattern interrupt anything | Sensor only |
| Average creative age | Yes — kill old winners | Sensor only |
| Platform-reported ROAS | Yes — over-attribute branded retargeting | Sensor only |
| Tested concepts (distinct angles) | Much harder to fake | Acceptable target |
| Cost per validated winner | Hard to fake | Acceptable target |
| Blended new-customer CAC | Very hard to fake | Acceptable headline target |
| Contribution margin in dollars | Cannot be faked | The actual goal |

Notice the pattern. The further you move toward dollars in the bank, the harder the metric is to game, and the closer it sits to the real objective.

What a real headline metric looks like

At the businesses I run and advise, the headline number sitting at the top of the weekly is almost never ROAS.

It is one of three things:

  1. Blended MER (marketing efficiency ratio) — total revenue divided by total marketing spend. Cuts through platform-attribution lies.
  2. New-customer CAC vs. payback target — what it actually costs to bring in a customer who has never bought before, measured against the LTV math the business is built on.
  3. Contribution margin dollars — the literal money the business produces after variable costs, including paid media.
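To make the three headline numbers concrete, here is a minimal sketch of the arithmetic. All of the dollar figures below are purely illustrative inputs, not numbers from any real account.

```python
# Illustrative weekly P&L inputs (every figure here is hypothetical).
revenue = 500_000              # total revenue, all channels
marketing_spend = 125_000      # total paid media + agency + creative
new_customers = 1_800          # first-time buyers this week
acquisition_spend = 95_000     # spend attributable to new-customer campaigns
variable_costs = 210_000       # COGS, shipping, payment fees (excl. marketing)

# 1. Blended MER: total revenue / total marketing spend
blended_mer = revenue / marketing_spend

# 2. New-customer CAC: what a never-before buyer actually costs
new_customer_cac = acquisition_spend / new_customers

# 3. Contribution dollars: what's left after variable costs AND paid media
contribution_dollars = revenue - variable_costs - marketing_spend

print(f"Blended MER:          {blended_mer:.2f}x")
print(f"New-customer CAC:     ${new_customer_cac:,.2f}")
print(f"Contribution dollars: ${contribution_dollars:,.0f}")
```

Note that contribution dollars subtracts marketing spend as a variable cost, which is what keeps it honest: you cannot inflate it by shifting budget between channels the way you can with platform ROAS.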

Everything else — ROAS, CPA, CTR, hook rate, thumbstop, ad count, creative age — sits underneath. They get watched, they get diagnosed, but they do not get optimized against directly. The instant you let a sub-KPI become a target, you've handed your team a steering wheel disguised as a sensor.

How I implement this in practice

At Impremis we run a tiered metric system on every account. It looks like this.

  • Tier 1 — Goal: contribution dollars, blended MER, new-customer CAC. Reviewed weekly with the founder. Decisions get made here.
  • Tier 2 — Diagnostic: platform ROAS, CPA by campaign, creative-level efficiency. Reviewed daily by the buying team. Decisions never get made from this tier alone.
  • Tier 3 — Operational: hook rate, CTR, thumbstop, ad count, win rate. Tracked by the creative and ops team to flag where to look, not what to do.
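The tier structure above can be sketched as a small metric registry. The tier names, cadences, and escalation responses follow the article; the metric keys and the function itself are illustrative, not a description of any actual Impremis tooling.

```python
# A minimal sketch of the tiered metric system described above.
# Metric keys are illustrative placeholders.
METRIC_TIERS = {
    "tier1_goal": {
        "metrics": ["contribution_dollars", "blended_mer", "new_customer_cac"],
        "cadence": "weekly, with the founder",
    },
    "tier2_diagnostic": {
        "metrics": ["platform_roas", "cpa_by_campaign", "creative_efficiency"],
        "cadence": "daily, buying team",
    },
    "tier3_operational": {
        "metrics": ["hook_rate", "ctr", "thumbstop", "ad_count", "win_rate"],
        "cadence": "continuous, creative and ops",
    },
}

def response_to_move(metric: str) -> str:
    """Map a metric that moved to the escalation the tier prescribes."""
    for tier, cfg in METRIC_TIERS.items():
        if metric in cfg["metrics"]:
            if tier == "tier1_goal":
                return "pod stops until understood"
            if tier == "tier2_diagnostic":
                return "investigate and write a hypothesis"
            return "investigate"
    return "unknown metric"
```

The point of encoding it is the asymmetry: Tier 3 movement only ever flags where to look, while a Tier 1 movement halts everything.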

If a Tier 3 metric moves, somebody investigates. If a Tier 2 metric moves, somebody investigates and writes a hypothesis. If a Tier 1 metric moves, the whole pod stops what it's doing until we understand why.

That single piece of structure has saved more accounts than any creative framework I've ever built.

The headline you actually want

A business does not exist to produce a 4.2x ROAS. A business exists to produce something valuable for less than it costs to make it, find the people who want it, and sell it to them at a price they're willing to pay.

ROAS is just one of fifteen sensors on the dashboard that tell you whether the engine is running clean.

Don't grip the sensor. Grip the wheel.

FAQ

Isn't ROAS still useful?

Absolutely. ROAS is one of the most useful diagnostic numbers in performance marketing. It just isn't a goal. It tells you whether a specific channel or campaign is converting attention into revenue at the rate you expected, and that's incredibly valuable. The mistake is letting it walk upstairs and become the thing the company is trying to maximize. Use it the way a pilot uses an altimeter — not the way a pilot uses a destination.

What about blended ROAS?

Better. Blended ROAS, or blended MER, captures the entire P&L instead of letting platform-reported attribution play games with you. I use it as the headline efficiency number on most accounts. But even blended ROAS has the same Goodhart problem at extreme scale — push it too hard and you'll starve the new-customer engine. So it gets paired with a new-customer CAC target underneath.

How do I get my CFO to stop demanding ROAS targets?

Give them something better. Most CFOs are not married to ROAS — they're married to predictability. Build a contribution-margin model with payback period and a new-customer CAC ceiling, and most CFOs will switch over within a quarter because it actually maps to the cash flow statement they care about.

What's the right kill rule then?

Kill rules at the campaign and concept level, not the ad level. And always paired with a budget rule — if killing this thing freed up $X, where does the $X go and is the next-best home likely to perform better? Single-asset kill rules are how good accounts get destroyed.
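That pairing of a performance floor with a next-best-home check can be sketched as a single predicate. The 1.5x floor and the function name are assumptions for illustration, not the author's exact thresholds.

```python
def should_kill_concept(concept_roas: float,
                        next_best_expected_roas: float,
                        floor_roas: float = 1.5) -> bool:
    """Concept-level (not ad-level) kill check, paired with the budget
    question: would the freed-up dollars likely perform better elsewhere?

    Kill only if BOTH conditions hold. A concept below the floor survives
    when there is no better home for its budget.
    """
    underperforming = concept_roas < floor_roas
    better_home_exists = next_best_expected_roas > concept_roas
    return underperforming and better_home_exists
```

A concept at 1.2x with a 2.0x next-best home gets killed; the same concept survives if the next-best option only projects to 1.0x, because freeing the budget would make things worse.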

Is contribution margin the only "real" goal?

For a profit-seeking business, basically yes — modulated by cash position and growth stage. A pre-Series-A startup might temporarily prioritize topline because that's what unlocks the next round. A mature DTC brand should be ruthless about contribution dollars. The point isn't that contribution margin is sacred. The point is that something tied to dollars in the bank has to be the goal, or you'll end up optimizing for a number that doesn't pay rent.

Doesn't this contradict "set a North Star metric"?

Not at all. Set a North Star. Just make sure your North Star is actually the thing you want, not a proxy for it. "Weekly active users" is a fine sensor and a terrible North Star for a paid product. "Net revenue retention" is a much better North Star because it can't be faked by a notification spam campaign. Pick the metric that, if you 10x'd it tomorrow, would unambiguously mean the business won.

How does this apply to AdFuse and tooling?

When we built AdFuse, the question we kept asking was: which metrics do we surface front and center? Whatever a tool puts on the home screen is what teams will optimize against. We deliberately put blended efficiency and new-customer numbers above platform ROAS in the default view. The UI is a value statement. So is your dashboard.


The metric you put on the wall becomes the metric the team optimizes for. Choose it like it's a hire.

If the number can be gamed, it will be. If it can't be tied to dollars, it shouldn't be a goal. And if your dashboard is full of green checkmarks while the bank account shrinks, your dashboard is lying to you in the most expensive way possible.

For more on the operating systems behind these decisions, see the attribution stack guide and the lessons that almost broke me building Impremis.
