
The 90-Day Cohort Analysis That Predicts Whether Your Paid Media Can Actually Scale

Most brands scale paid media using blended LTV averages that hide which channels produce customers worth keeping. Here's the 90-day cohort framework.

Jordan Glickman · May 10, 2026
Frameworks

Most eCommerce brands are making scaling decisions with the wrong data.

They optimize for platform ROAS. They celebrate conversion rate improvements. They scale ad spend when dashboards turn green. Then six to twelve months later, margins are compressing, repeat purchase rates are declining, and the business that looked healthy in the weekly reporting is visibly deteriorating on the quarterly P&L.

The problem is not creative or targeting. It is that the brands making those scaling decisions never ran a cohort analysis before pouring fuel on the fire. They scaled blended LTV averages that disguised which acquisition channels were building durable customer value and which were producing one-and-done buyers with a destructive payback window.

The 90-day new customer cohort analysis is the framework that surfaces that distinction before it is too late to act on it.

Image brief: Five-row table — Cohort Source, 30-Day Rev/Customer, 60-Day Rev/Customer, 90-Day Rev/Customer, Repeat Purchase Rate. Email row highlighted green (highest), TikTok Paid highlighted yellow (lowest). Clean minimal design. alt: "90-day cohort analysis by acquisition channel." caption: "Blended LTV hides the channels quietly destroying your payback window. Cohort analysis makes them visible before the damage compounds."

What a 90-day cohort analysis actually reveals

A cohort analysis groups customers by the month they made their first purchase and tracks their subsequent behavior over time. The 90-day window is the critical diagnostic period: long enough to capture meaningful repeat purchase behavior, short enough to stay actionable, and close enough in time that you can still course-correct before a degraded cohort compounds into a structural business problem.

Four metrics matter across those 90 days:

  • Repeat purchase rate by acquisition channel
  • Average order value progression from first order to second and beyond
  • Revenue per customer at 30, 60, and 90 days
  • Margin contribution per cohort after factoring in the acquisition cost

The distinction that makes cohort analysis powerful is the channel-level view. Most brands track blended LTV averages. Blended averages hide the fact that the Meta campaign that produced the best platform ROAS last quarter may have acquired a disproportionate share of customers who never purchased again, while the higher-CAC channel with less impressive attributed performance was building a much more valuable customer base.

Blended LTV tells you the average. Cohort analysis by channel tells you where to actually invest.

Why attribution complicates this analysis

A cohort analysis is only as reliable as the data feeding it. And if your brand is running paid media at scale, that data has a structural problem.

Meta reports on a 7-day click, 1-day view attribution window by default. GA4 attributes conversions to the session in which they occur. A customer who clicked a Meta ad on Monday, saw a Google Shopping ad on Thursday, and converted on Friday will be claimed by both platforms. Meta's dashboard says the campaign worked. GA4 says a different campaign worked. Your cohort table says you spent $140 to acquire a customer with a 90-day LTV of $82.

That gap — between what platforms report and what business data shows — is the scaling decision risk that destroys margin at volume. The attribution discrepancy is not a technical problem to solve. It is a structural feature of multi-platform advertising. Your cohort framework needs to account for it rather than pretend it will be resolved.

For brands with meaningful TikTok Shop volume, the complication deepens. TikTok's in-app checkout frequently bypasses Shopify's attribution entirely. Conversions report cleanly inside TikTok's platform but arrive in Shopify without UTM data, ending up miscategorized as direct or organic traffic in your cohort table. The practical response is to treat TikTok Shop customers as a separate cohort from the start and measure them in isolation — new customer rate, 30-day repurchase rate, and AOV per first order — rather than attempting reconciliation with the main attribution model.
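That quarantining can happen at import time. Here is a minimal sketch of classifying orders into channels with TikTok Shop as its own bucket; the field names (`sales_channel`, `utm_source`) are illustrative, not a specific platform's export schema:

```python
# Sketch: quarantine TikTok Shop orders into their own cohort at import time,
# rather than letting their missing UTMs fall into "direct". Field names are
# assumptions for illustration, not a real Shopify export schema.

def classify_channel(order: dict) -> str:
    """Assign an acquisition channel, treating TikTok Shop as its own bucket."""
    # TikTok Shop in-app checkout often arrives with no UTM data at all,
    # so key off the sales-channel field first, before looking at UTMs.
    if order.get("sales_channel") == "tiktok_shop":
        return "tiktok_shop"
    utm_source = (order.get("utm_source") or "").lower()
    if utm_source:
        return utm_source
    return "direct_or_organic"  # genuinely untagged traffic

orders = [
    {"sales_channel": "tiktok_shop", "utm_source": None},
    {"sales_channel": "online_store", "utm_source": "facebook"},
    {"sales_channel": "online_store", "utm_source": None},
]
channels = [classify_channel(o) for o in orders]
```

Checking the sales channel before the UTM is the whole point: it prevents TikTok Shop volume from silently inflating the direct/organic cohort.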

The five-step framework

Step 1: Pull clean first-order data

Export every new customer order from the past 12 months. Required fields: order date, order value, acquisition channel from UTM or your attribution tool, and customer email or ID for downstream repeat purchase tracking.

If UTM hygiene is inconsistent — campaign names that fragment into dozens of unrecognizable variants in your analytics — fix that before proceeding. The cohort analysis will reflect the data quality it receives.
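The core of this step is reducing a flat order export to one first-order record per customer. A minimal sketch, with invented field names and dates for illustration:

```python
# Sketch: derive the first order per customer from a flat order export.
# Field names (customer_id, order_date, channel, value) are assumptions,
# not a specific platform's schema.
from datetime import date

orders = [
    {"customer_id": "c1", "order_date": date(2025, 1, 5),
     "channel": "meta_prospecting", "value": 60.0},
    {"customer_id": "c1", "order_date": date(2025, 2, 20),
     "channel": "email", "value": 45.0},
    {"customer_id": "c2", "order_date": date(2025, 1, 12),
     "channel": "google_shopping", "value": 72.0},
]

first_orders: dict[str, dict] = {}
for o in sorted(orders, key=lambda o: o["order_date"]):
    first_orders.setdefault(o["customer_id"], o)  # keep earliest order only

# Each customer is now cohorted by first-order month and acquisition channel.
cohort_keys = {
    cid: (o["order_date"].strftime("%Y-%m"), o["channel"])
    for cid, o in first_orders.items()
}
```

Note that the customer's cohort channel is fixed by the *first* order; later orders from other channels count as repeat revenue, not new cohort membership.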

Step 2: Segment by acquisition channel

Group customers into cohorts by channel: Meta paid prospecting, Meta paid retargeting, Google Shopping, Google Search, TikTok paid, email acquisition, organic search, and direct. The more precisely you can segment, the more useful the diagnostic.

This is where most operators discover their "best-performing" Meta campaigns acquired a significant share of customers who never returned.
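Messy UTM values can be collapsed into these buckets with a small rule table. A sketch, assuming the naming patterns below — real campaign naming varies by account, so treat the regexes as placeholders:

```python
# Sketch: collapse messy UTM campaign values into the channel buckets above.
# The regex patterns are illustrative assumptions, not a standard taxonomy.
import re

CHANNEL_RULES = [
    (re.compile(r"(facebook|meta).*(prospect|broad)"), "meta_paid_prospecting"),
    (re.compile(r"(facebook|meta).*(retarget|rtg)"), "meta_paid_retargeting"),
    (re.compile(r"google.*shopping|pmax"), "google_shopping"),
    (re.compile(r"google.*search"), "google_search"),
    (re.compile(r"tiktok"), "tiktok_paid"),
    (re.compile(r"email|klaviyo"), "email_acquisition"),
]

def bucket(utm_value: str) -> str:
    """Map a raw UTM campaign/source string to a channel cohort."""
    key = utm_value.lower()
    for pattern, channel in CHANNEL_RULES:
        if pattern.search(key):
            return channel
    return "other"  # group the long tail until volume justifies breaking it out
```

The ordering matters: prospecting and retargeting rules run before the generic platform match, so the more specific intent profile wins.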

Step 3: Calculate 30/60/90-day revenue per customer

For each cohort, calculate the average revenue per customer at 30, 60, and 90 days by looking up all subsequent orders placed by customers in that cohort.
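The calculation itself is a windowed sum per customer, averaged across the cohort. A sketch, assuming each customer record carries its orders as (days since first order, order value) pairs:

```python
# Sketch: 30/60/90-day revenue per customer for one cohort. The data shape
# (a list of (days_since_first_order, order_value) pairs per customer) is an
# assumption for illustration.

def revenue_per_customer(cohort: list[dict], window_days: int) -> float:
    """Average cumulative revenue per customer within window_days of first order."""
    totals = [
        sum(value for days, value in c["orders"] if days <= window_days)
        for c in cohort
    ]
    return round(sum(totals) / len(totals), 2)

email_cohort = [
    {"orders": [(0, 70.0), (25, 40.0), (80, 50.0)]},  # repeat buyer
    {"orders": [(0, 66.0)]},                          # one-time buyer
]

rev_30 = revenue_per_customer(email_cohort, 30)  # (110 + 66) / 2 = 88.0
rev_90 = revenue_per_customer(email_cohort, 90)  # (160 + 66) / 2 = 113.0
```

One-time buyers stay in the denominator at every window; that is what makes the metric honest about cohort quality rather than rewarding a few heavy repeaters.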

| Cohort Source | 30-Day Rev/Customer | 60-Day Rev/Customer | 90-Day Rev/Customer | Repeat Purchase Rate |
|---|---|---|---|---|
| Email acquisition | $88 | $143 | $182 | 41% |
| Organic search | $81 | $126 | $158 | 37% |
| Google Shopping | $72 | $109 | $138 | 29% |
| Meta paid prospecting | $65 | $86 | $101 | 21% |
| TikTok paid | $51 | $59 | $65 | 13% |

When a table like this is visible, the scaling conversation changes. You are no longer just optimizing for CAC. You are optimizing for which channel delivers customers with staying power — and those are frequently not the same channel.

Step 4: Overlay blended CAC per channel

Divide total channel spend in a given period by the number of new customers attributed to that channel. That is your blended CAC for the cohort.

Compare it against the 90-day LTV. The ratio is your payback health score. If 90-day LTV is less than 1.5x CAC, the program is a treadmill — sufficient to maintain the business at current scale, insufficient to justify growth capital. If LTV is consistently above 2x CAC across the primary channels, the program supports aggressive scaling.
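Those bands translate directly into a classification rule. A sketch using the thresholds above — the "hold" label for the 1.5x–2x middle band is my own, and the channel figures are illustrative:

```python
# Sketch: payback health score per channel, using the article's bands
# (< 1.5x = treadmill, >= 2x = scale). The middle "hold" label and all
# CAC/LTV figures are illustrative assumptions.

def payback_health(ltv_90: float, cac: float) -> str:
    """Classify a channel by its 90-day LTV:CAC ratio."""
    ratio = ltv_90 / cac
    if ratio >= 2.0:
        return "scale"      # supports aggressive growth capital
    if ratio >= 1.5:
        return "hold"       # in between: watch, don't pour capital yet
    return "treadmill"      # maintains current scale, can't fund growth

channels = {
    "email_acquisition": (182, 35),   # (90-day LTV, blended CAC)
    "google_shopping": (138, 80),
    "tiktok_paid": (65, 70),
}
scores = {ch: payback_health(ltv, cac) for ch, (ltv, cac) in channels.items()}
```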

Step 5: Set channel-specific scaling thresholds

Based on your contribution margins and payback tolerance, define scaling triggers per channel and document them before the next planning cycle. For a brand with 55% gross margins and a 90-day payback window, a reasonable scaling threshold might be: scale any channel where 90-day LTV exceeds CAC by 1.8x or more.

This removes emotion from the scaling decision. The media buyer is not making judgment calls about creative fatigue or platform momentum. They are operating against a quantified system.
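Documenting the triggers as data, not prose, is what makes them operational. A minimal sketch — the per-channel thresholds are examples for a brand with roughly 55% gross margin, not recommendations:

```python
# Sketch: channel-specific scaling triggers documented as data, so the media
# buyer executes a rule rather than a judgment call. All thresholds below are
# illustrative examples, not recommendations.

SCALING_TRIGGERS = {
    "email_acquisition": 1.6,      # owned channel, cheaper CAC, lower bar
    "meta_paid_prospecting": 1.8,
    "tiktok_paid": 2.0,            # noisier attribution, demand a wider margin
}

def scaling_decision(channel: str, ltv_90: float, cac: float) -> str:
    """Return 'scale' when 90-day LTV clears the channel's documented multiple of CAC."""
    threshold = SCALING_TRIGGERS.get(channel, 1.8)  # default floor
    return "scale" if ltv_90 >= threshold * cac else "hold"

decision = scaling_decision("meta_paid_prospecting", ltv_90=101, cac=52)
```

Raising the bar for channels with messier attribution (TikTok here) is one way to encode the measurement uncertainty discussed earlier directly into the allocation rule.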

What high-LTV creative actually looks like

When cohort analysis reveals that a specific acquisition channel is producing low-LTV customers, the instinct is to diagnose it as a channel problem. It is often a creative problem.

Discount-forward hooks at the top of funnel filter for deal-seekers. Deal-seekers have lower repeat purchase rates, higher return rates, and lower margin contribution than customers who purchased because they genuinely wanted the product. The cohort profile is downstream of the creative strategy that defined who clicked.

High-LTV creative leads with product utility, brand story, or genuine transformation — not percentage discounts. It attracts buyers who understand what they are buying and why it addresses their specific situation, which produces higher satisfaction, lower return rates, and higher repeat purchase probability.

This is a testable hypothesis. If your creative testing program tags each variation by hook type and tracks cohort outcomes by hook, the data will show which creative approach is not just converting better at day one but producing better customers at day 90. That learning is worth more than any single winning creative — it is the signal that trains the entire subsequent creative strategy.
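The tagging itself can be trivial. A sketch of rolling 90-day revenue up by hook type rather than by ad — the hook labels and figures are invented for illustration:

```python
# Sketch: tag each acquired customer with the hook type of the creative that
# converted them, then compare 90-day revenue by hook. Data and hook labels
# are invented for illustration.
from collections import defaultdict

customers = [
    {"hook": "discount", "rev_90": 58.0},
    {"hook": "discount", "rev_90": 44.0},
    {"hook": "product_utility", "rev_90": 120.0},
    {"hook": "product_utility", "rev_90": 96.0},
]

by_hook: dict[str, list[float]] = defaultdict(list)
for c in customers:
    by_hook[c["hook"]].append(c["rev_90"])

# Discount-led vs utility-led hooks now compare at day 90, not day 1.
avg_rev_by_hook = {hook: sum(v) / len(v) for hook, v in by_hook.items()}
```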

The KPI framework for cohort-driven scaling

Stop using platform ROAS as the primary scaling metric. The metrics that actually drive sound scaling decisions:

  • New customer CAC by channel (not blended) — the acquisition cost you are actually paying per new buyer in each channel, separately
  • 90-day LTV by channel cohort — what those customers are actually worth after 90 days of purchase history
  • LTV:CAC ratio by channel — the payback health score; 1.8x at 90 days is a reasonable floor for most DTC brands
  • 60-day repeat purchase rate — the early signal for cohort health before the 90-day data is available
  • Margin contribution per cohort — LTV minus CAC minus variable fulfillment and processing costs
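The last metric is the one most often skipped, so here is a sketch of it. The cost figures and the 3% processing rate are illustrative placeholders, not benchmarks:

```python
# Sketch: margin contribution per cohort, the final KPI above. All cost
# figures and the 3% processing rate are illustrative assumptions.

def margin_contribution(ltv_90: float, cac: float, fulfillment: float,
                        processing_rate: float = 0.03) -> float:
    """90-day LTV minus CAC minus variable fulfillment and payment processing."""
    return round(ltv_90 - cac - fulfillment - ltv_90 * processing_rate, 2)

contribution = margin_contribution(ltv_90=138.0, cac=80.0, fulfillment=18.0)
```

A channel can clear the LTV:CAC floor and still contribute almost nothing once variable costs are netted out, which is why this metric belongs on the same dashboard.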

Building a dashboard that surfaces these metrics monthly — not as a once-a-year exercise but as a standing operational view — is what converts cohort analysis from an interesting retrospective into a prospective allocation tool.

FAQ

How many months of purchase data do we need to run a valid cohort analysis? Twelve months at minimum to capture at least one full seasonal cycle and to give the earliest cohorts enough time to accumulate meaningful repeat purchase data. With less than 12 months, you can still compute complete 90-day LTV figures for any cohort that is at least 90 days old, but you lose the seasonal context that tells you whether a strong or weak cohort reflects a real trend or a calendar effect.

What if our attribution data is too unreliable to segment by channel accurately? Start with the channels where your data is most reliable — typically owned channels like email acquisition and organic search — and treat paid channels as directional rather than precise. Even a rough channel segmentation reveals patterns that blended LTV completely hides. Improve attribution over time but do not let data quality concerns prevent running the analysis at all.

Should we run cohort analysis at the campaign level or the channel level? Channel level is the minimum. Campaign level is better if your attribution tool and data volume support it. The most useful granularity is the campaign type level — prospecting vs. retargeting, brand-keyword vs. non-brand, discount vs. non-discount creative — because these categories represent genuinely different audience and intent profiles that produce meaningfully different cohort outcomes.

How do we handle cohorts that are too small to be statistically meaningful? Group smaller channels or campaign types into broader categories until each cohort has at least 200 to 300 first-time customers. Below that threshold, variance in individual behavior will dominate the cohort statistics and produce unreliable signals. Add smaller channels to a combined "other paid" category and wait until volume is sufficient to break them out.

Closing

The brands that scale paid media profitably are not the ones that optimized for the best platform dashboard numbers last quarter. They are the ones that understood which customers they were acquiring and whether those customers were worth what they paid to get them.

The 90-day cohort analysis does not require sophisticated tooling or a dedicated data team. It requires clean first-order data, channel-level segmentation, and the discipline to look at LTV-to-CAC ratios before making scaling decisions rather than after the margin damage is already visible.

Run the analysis before you scale. Run it quarterly while you scale. Let the cohort data — not the platform dashboard — make the allocation decisions.
