
12 Metrics That Matter More Than ROAS for DTC Brands

ROAS tells you what already happened. These 12 leading indicators tell you what's about to happen next. The operator dashboard every ecommerce brand needs.

Jordan Glickman · December 4, 2025
Frameworks

ROAS Is a Lagging Indicator Pretending to Be a Compass

ROAS tells you what your ads did last week. It does not tell you what your ads are about to do.

Most DTC brands operate as if those are the same thing. They're not. By the time ROAS moves, the cause has been baking for 30 to 60 days. The fix usually has to be made before the metric shows the problem.

At Impremis, across $250M+ in annual ad spend, we've stopped staffing meetings around ROAS. We staff them around the 12 metrics that predict ROAS. You can't manage a number that's already happened. You can only manage the upstream behavior that produced it.

The Two Buckets

The 12 metrics split cleanly into two operational systems. Both have to be functioning. A failure in either one will eventually surface as a ROAS collapse.

| System | What it measures | Failure mode |
|---|---|---|
| Media Buying Operations | How well the human running the account makes decisions | Erratic CPA, algorithm confusion, wasted spend |
| Creative Supply Chain | How well the brand produces and selects creative | Fatigue, plateau, declining new customer rate |

Bucket 1 is process. Bucket 2 is output. We measure both every week on every account. The second a metric drifts, we know which lever needs attention before the customer-facing number breaks.

Bucket 1: Media Buying Operations (Six Metrics)

1. Change Cadence

Number of edits per week, normalized by spend.

A $250K/month account should see roughly 60-150 edits per week of strategic activity. Below that and the account is on autopilot. Above 200 and the buyer is over-tinkering and probably destabilizing the algorithm.

This is the single fastest tell on whether you have an active operator or a passive one.
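If you want this on a dashboard, here's a minimal pandas sketch. The change-log export and its `edit_time` column are illustrative, not Meta's actual export schema:

```python
import pandas as pd

# Hypothetical change-log export: one row per account edit.
changes = pd.DataFrame({
    "edit_time": pd.to_datetime([
        "2025-11-03", "2025-11-04", "2025-11-04", "2025-11-06",
        "2025-11-10", "2025-11-12", "2025-11-13",
    ]),
})
monthly_spend = 250_000  # this account's monthly spend

# Raw edits per calendar week.
weekly_edits = changes.set_index("edit_time").resample("W").size()
# Normalize to a per-$250K/month basis so accounts are comparable.
cadence = weekly_edits / (monthly_spend / 250_000)
print(cadence)  # healthy at this spend level: roughly 60-150/wk
```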

2. Change Magnitude Distribution

The shape of the curve when you bin edits by % size.

Good: most changes 5-25%, a few 25-40%, almost nothing above 50%. Bad: bimodal at +50% and -50%, with a hollow middle. The bimodal pattern means the buyer doesn't have nuanced confidence in their data — they're swinging between full-go and full-stop.
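A quick way to see the shape, sketched with numpy on invented edit sizes:

```python
import numpy as np

# Hypothetical absolute % sizes of a month's budget/bid edits.
edit_sizes = np.abs([8, 12, -15, 22, -10, 35, -50, 55, 9, -18, 6, 14])

bins = [0, 5, 25, 40, 50, np.inf]
labels = ["<5%", "5-25%", "25-40%", "40-50%", ">50%"]
counts, _ = np.histogram(edit_sizes, bins=bins)

for label, n in zip(labels, counts):
    print(f"{label:>7}: {n}")
# Healthy: mass in 5-25%, tapering above. Spikes at the extremes with
# a hollow middle is the full-go/full-stop pattern described above.
```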

3. Account Composition

The ratio of testing-tier ads (low spend, mixed CPA) to scaling-tier ads (high spend, target CPA), and the spend share of each.

A healthy account has a clear two-tier structure. Testing tier is 10-20% of spend. Scaling tier is 70-80%. The middle — high-spend ads with bad CPA, or low-spend ads with great CPA — should be small and decreasing. A swollen middle is a sign of decisions that didn't get made.
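One way to operationalize the tiers, as a sketch; the thresholds (`TARGET_CPA`, `SCALE_SPEND`, `TEST_SPEND`) are assumptions for illustration, not universal values:

```python
import pandas as pd

# Hypothetical per-ad rollup.
ads = pd.DataFrame({
    "daily_spend": [1500, 2200, 90, 120, 800, 60],
    "cpa":         [38,   41,   70, 55,  95,  42],
})
TARGET_CPA, SCALE_SPEND, TEST_SPEND = 45, 1000, 150

def tier(row):
    if row.daily_spend >= SCALE_SPEND and row.cpa <= TARGET_CPA:
        return "scaling"
    if row.daily_spend <= TEST_SPEND:
        return "testing"
    return "middle"  # undecided ads: should be small and shrinking

ads["tier"] = ads.apply(tier, axis=1)
print(ads.groupby("tier")["daily_spend"].sum() / ads.daily_spend.sum())
```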

4. Predictive Accuracy

When the buyer makes a change, did the next-day metric move the way they predicted?

This is the most damning metric in the whole set. We track it with rolling 30-day cohorts. A buyer who can predict their own changes' impact will show measurable next-day CPA improvement on >55% of meaningful edits. A buyer who can't will show it on under 40%. The ones below 40% are flipping coins with your money.
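A minimal sketch of the rolling hit rate, assuming you log each meaningful edit along with whether next-day CPA actually improved. The logging format here is invented; you'd build it from the change history plus daily CPA:

```python
import pandas as pd

# Hypothetical edit log: one row per meaningful edit.
log = pd.DataFrame({
    "edit_date": pd.to_datetime(["2025-11-01", "2025-11-05", "2025-11-12",
                                 "2025-11-20", "2025-11-28"]),
    "next_day_cpa_improved": [True, False, True, True, True],
})

# Rolling 30-day hit rate: >55% = a buyer with real signal,
# <40% = coin-flipping with the budget.
rolling = (log.set_index("edit_date")["next_day_cpa_improved"]
              .astype(float).rolling("30D").mean())
print(rolling)
```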

5. Daily Balance

Distribution of edit days across the week.

A buyer making 90% of their changes on Mondays is reading the account on Mondays. That means Tuesday through Sunday is unattended. We see this constantly on accounts inherited from underpriced agencies. Daily balance should be roughly flat across the five business days, with weekend touches when needed.
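The check itself is a few lines, assuming the same hypothetical change-log export as above:

```python
import pandas as pd

# One timestamp per edit, from the change-log export.
edits = pd.Series(pd.to_datetime([
    "2025-11-03", "2025-11-03", "2025-11-04", "2025-11-06",
    "2025-11-07", "2025-11-10", "2025-11-12", "2025-11-13",
]))

share_by_day = edits.dt.day_name().value_counts(normalize=True)
print(share_by_day)
# Flag the Monday-only operator: per the dashboard later in this piece,
# no single day should carry more than ~30% of the week's edits.
print("unbalanced:", (share_by_day > 0.30).any())
```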

6. Retest Discipline

What percentage of paused ads get re-evaluated within 30 days?

Killed ads aren't dead; they're paused with information attached. A great operator revisits 30-50% of paused ads within a month, either to reactivate them in a new context or to use their data to brief the next concept. A weak operator pauses them, forgets them, and goes blind to a library of past learnings.
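A sketch of the retest-rate calculation. The `revisited_at` field is hypothetical; it's something you'd maintain yourself, recording when a paused ad was reactivated or mined for a brief:

```python
import pandas as pd

# Hypothetical pause log.
paused = pd.DataFrame({
    "paused_at":    pd.to_datetime(["2025-10-01", "2025-10-05", "2025-10-12"]),
    "revisited_at": pd.to_datetime(["2025-10-20", None, "2025-11-30"]),
})

window = pd.Timedelta(days=30)
# NaT (never revisited) safely evaluates to False here.
retested = (paused.revisited_at - paused.paused_at) <= window
print(f"retest rate: {retested.mean():.0%}")  # strong operator: 30-50%
```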

Bucket 2: Creative Supply Chain (Six Metrics)

7. Creative Cohort Replacement Rate

Of the top-spending ads this month, what % were launched in the last 90 days?

A healthy creative pipeline replaces 40-60% of top spend every quarter. A pipeline replacing under 20% is fragile — the account is depending on aging assets that will eventually fatigue all at once. We've seen $2M/month brands collapse 30% in a single month because their three top-spending ads aged out of the algorithm in the same window. Always feed the top of the funnel with fresh creative.
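Here's the calculation as a sketch, on invented launch dates and spend:

```python
import pandas as pd

# Hypothetical rollup of this month's top-spending ads.
top_ads = pd.DataFrame({
    "launched": pd.to_datetime(["2025-09-15", "2025-11-01", "2025-06-10",
                                "2025-10-20", "2025-03-05"]),
    "spend":    [90_000, 60_000, 55_000, 40_000, 35_000],
})
as_of = pd.Timestamp("2025-12-01")

# Share of top-ad spend carried by creative launched in the last 90 days.
fresh = top_ads.launched >= as_of - pd.Timedelta(days=90)
replacement_rate = top_ads.loc[fresh, "spend"].sum() / top_ads.spend.sum()
print(f"{replacement_rate:.0%}")  # healthy: 40-60%; fragile: <20%
```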

8. Production-to-Scale Conversion

Of ads launched, what % reached scale (>$1K/day for 7+ days)?

This is the creative team's hit rate. For a strong team running a 50/40/10 production mix (see the power law of ad creative), expect 8-15%. Below 5% and your team is producing volume without insight. Above 20% and they're being too safe.

If you launch 50 ads in a month and zero reach scale, the issue is not your media buyer. The issue is your creative brief. Stop blaming the algorithm for a broken supply chain.
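A simplified sketch of the hit-rate calculation. Note two shortcuts: it counts total days above $1K rather than strictly consecutive days, and the toy data is far too small to read a real rate from:

```python
import pandas as pd

# Hypothetical daily spend per launched ad.
daily = pd.DataFrame({
    "ad_id": ["a1"] * 10 + ["a2"] * 10,
    "spend": [1200] * 8 + [400] * 2 + [300] * 10,
})

# Days each ad spent above the $1K/day scale threshold.
days_at_scale = (daily[daily.spend > 1000]
                 .groupby("ad_id").size()
                 .reindex(daily.ad_id.unique(), fill_value=0))
hit_rate = (days_at_scale >= 7).mean()
print(f"production-to-scale: {hit_rate:.0%}")  # strong team: 8-15%
```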

9. Angle Distribution

Tag every ad by underlying insight — not format, not visual style, but the customer truth being addressed.

A healthy library has 8-15 distinct angles in active spend. A weak library has 2-3 angles with 40+ executions of each. The latter is what most brands accidentally build because the team optimizes for production efficiency rather than creative diversity.
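If your library is tagged, the distribution falls out of a short sketch; the `angle` tags here are invented for illustration:

```python
import pandas as pd

# Hypothetical ad library tagged by underlying customer insight.
library = pd.DataFrame({
    "ad_id": range(6),
    "angle": ["time-saver", "time-saver", "status", "time-saver",
              "pain-relief", "time-saver"],
    "spend": [20_000, 15_000, 12_000, 9_000, 7_000, 5_000],
})

by_angle = library.groupby("angle")["spend"].sum().sort_values(ascending=False)
print(f"active angles: {by_angle.size}")   # healthy: 8-15
print(by_angle / by_angle.sum())           # watch for one angle dominating
```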

10. Audience Fit

Does your creative production reflect your actual customer demographic?

We see this disconnect constantly. Brands producing Gen Z content while their LTV-positive customer is a 47-year-old woman. Or producing aspirational lifestyle content while their actual customer is a problem-solver looking for relief.

Pull your post-purchase data. Match it against your creative cast and angle distribution. The mismatch is the budget you're wasting on the wrong audience.

11. Creative Spend Ratio

How much of your media budget goes back into producing the next round of creative?

Under 3% and you're starving the engine. The account will plateau within two quarters. Healthy in-house brands invest 5-10%. Agencies running creative production typically invest 10-15% on behalf of the brand. Underfunding creative out of founder reluctance is the slowest, most invisible way to kill a Meta account.

12. Format Allocation vs. Format Performance

What % of spend is in video, static, carousel, and what's the actual CPA in each?

Most brands over-produce in their preferred format and under-produce in the format that's actually working. We recently audited a brand where 80% of production was static, yet every single top-10 scaling ad was video. The format mix was an artifact of the production team's comfort, not the algorithm's preference.

Let performance data drive format ratio, not internal habits.
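The pivot itself, sketched on an invented per-ad export:

```python
import pandas as pd

# Hypothetical per-ad export with format, spend, and conversions.
ads = pd.DataFrame({
    "format":      ["static", "static", "video", "video", "carousel"],
    "spend":       [40_000, 35_000, 15_000, 8_000, 2_000],
    "conversions": [500,    420,    450,    260,   30],
})

pivot = ads.groupby("format").agg(spend=("spend", "sum"),
                                  conversions=("conversions", "sum"))
pivot["spend_share"] = pivot.spend / pivot.spend.sum()
pivot["cpa"] = pivot.spend / pivot.conversions
print(pivot[["spend_share", "cpa"]])
# If the cheapest-CPA format has the smallest spend share, the format
# mix is tracking team habit, not performance.
```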

A Real Account Snapshot

Here's what a healthy dashboard for a $400K/month brand looks like, monthly:

| Metric | Healthy range | What yellow looks like |
|---|---|---|
| Change cadence | 80-150/wk | Below 50/wk |
| Change magnitude (median) | 8-18% | 25%+ |
| Predictive accuracy | 55%+ | Below 45% |
| Daily balance | <30% on any day | 70%+ on one day |
| Cohort replacement rate | 40-60%/qtr | Below 25% |
| Production-to-scale | 8-15% | Below 5% |
| Active angles | 8-15 | Below 5 |
| Creative spend ratio | 5-10% | Below 3% |

When any three of these go yellow simultaneously, ROAS is going to drop within 30-45 days. We've watched it play out so many times we can almost set a calendar reminder.
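If you want the three-yellow rule as an automated check, here's a minimal sketch. The thresholds mirror the table above; the current values are invented for illustration:

```python
# metric: (current_value, yellow-flag predicate)
checks = {
    "change_cadence_wk":    (46,   lambda v: v < 50),
    "median_magnitude_pct": (27,   lambda v: v >= 25),
    "predictive_accuracy":  (0.48, lambda v: v < 0.45),
    "cohort_replacement":   (0.22, lambda v: v < 0.25),
    "production_to_scale":  (0.04, lambda v: v < 0.05),
    "active_angles":        (4,    lambda v: v < 5),
    "creative_spend_ratio": (0.02, lambda v: v < 0.03),
}

yellow = [metric for metric, (value, flag) in checks.items() if flag(value)]
print("yellow:", yellow)
if len(yellow) >= 3:
    print("ROAS drop likely within 30-45 days -- intervene upstream now")
```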

Why This Stack Beats Watching ROAS

ROAS is a single number summarizing thousands of decisions. By the time it moves, you've lost the ability to identify which decision broke. The 12 metrics let you see the breakage upstream, in the system that produced it.

  • If change cadence drops, your buyer disengaged.
  • If predictive accuracy collapses, your buyer is guessing.
  • If cohort replacement falls, your creative pipeline stopped shipping.
  • If production-to-scale rate craters, your briefs got bad.
  • If active angles drop, your team is repeating itself.

Each failure has its own fix. ROAS doesn't tell you which one to make. The 12 metrics do.

The Operating Cadence

Weekly review on the buyer-side metrics. Monthly review on the creative-side metrics. Quarterly review on the system itself.

The weekly review takes 20 minutes. The monthly takes 45. The quarterly takes a half-day and includes the production team.

Brands that run this cadence don't get blindsided by ROAS drops. Brands that don't, do.

For more on the buyer-side specifically, see how to audit your media buyer. For why ROAS is a misleading north star in the first place, see why ROAS isn't the goal.

FAQ

Do I need software to track all 12, or can I do this in spreadsheets?

Spreadsheets work for the first nine. The change log exports give you the buyer-side data. Manual tagging in a Google Sheet handles the angle distribution. The last three (creative spend ratio, audience fit, format allocation vs. performance) need a basic pivot of your ads manager export. AdFuse exists because doing this monthly across 100+ accounts is unworkable in spreadsheets — but for a single brand with one account, spreadsheets are fine.

Which of these are most predictive of ROAS drops?

In rank order: cohort replacement rate, predictive accuracy, production-to-scale rate, change cadence. The first one is the most reliable 30-day leading indicator we've seen. When it drops below 25%, we mark the calendar.

How small a brand can use this framework?

The principles work at any scale. The cadence and tooling have to scale down. A $20K/month brand can track these monthly with a simple sheet. A $500K/month brand should be tracking weekly with at least one piece of software. The rules don't change — only the resolution.

What about LTV, CAC, payback period?

Those are business-level metrics, separate from ad-account-level metrics. They belong in the CFO's dashboard, not the media buyer's. The 12 here are about whether the ad function itself is healthy. A healthy ad function still needs a healthy business model around it.

Are any of these metrics platform-specific?

The principles transfer to TikTok, YouTube, and Google. The exact thresholds shift — TikTok's predictive accuracy benchmark is lower because the algorithm is more volatile. The structural framework holds.

How long until I see results from running this dashboard?

The diagnostic value shows up the first week — you'll see exactly which lever is broken. The performance impact takes 30-60 days because you're rebuilding upstream behaviors that had been drifting. ROAS rebuilds last because it's the lagging indicator.

Can my agency object to me tracking predictive accuracy?

They can object; they just don't have good grounds to. Every action in your account is already logged with a timestamp, and the next-day effect window is right there in the data. If an agency pushes back on transparency around its own decision quality, that pushback is itself your audit answer.

The Bottom Line

ROAS is the result. The 12 metrics are the inputs. Watch the inputs, manage the inputs, and the result takes care of itself.

Most brands don't lose because their ads stop working. They lose because the system that produces the ads quietly broke 60 days earlier and nobody was reading the right gauges.

Read the right gauges, or learn the lesson the expensive way.
