The 8 Attribution Models DTC Brands Use, and the 3 That Matter
Attribution isn't one model. It's a stack of imperfect ones that check each other. Here's the system we use across $250M+ in annual spend to stay accurate.
You are running a portfolio of attribution lies
Every DTC operator I talk to is looking for the same thing. The right attribution model. The number they can trust. The dashboard that finally tells them where to put the next dollar.
That thing does not exist.
Not because the tooling is bad. Not because measurement is hopeless. Because every attribution model is structurally biased toward whoever built it, and the operators who win at scale are the ones who learn to triangulate across multiple imperfect models rather than chasing a single perfect one.
Meta inflates Meta's numbers. Google inflates Google's numbers. Triple Whale's MTA tilts toward whatever it has clean tracking on. GA4 systematically undercounts paid social. Last-click underweights TOF. Marketing mix modeling overweights baseline. Every single tool is grading its own homework, and almost every tool is grading on a curve that benefits the tool.
At Impremis we run measurement across $250M+ in annual spend. We use eight different attribution models. We pay for three of them. The other five exist as cross-checks.
Here is the entire stack, what each one is good for, and the three you should actually optimize against.
The 8 attribution models
Let me lay out the full set first, then we'll talk about which ones to use.
| Model | What It Does | Bias |
|---|---|---|
| Platform-reported (Meta, Google, TikTok) | Click and view-through credit inside the platform | Inflates the platform's contribution |
| Last-click (GA4, Shopify) | Credits the final touch before purchase | Undervalues TOF, ignores view-through |
| Multi-touch attribution (MTA) | Distributes credit across the whole journey | Only sees what it can track; iOS gaps |
| Marketing mix modeling (MMM) | Statistical model on aggregated spend and revenue | Needs scale; over-weights baseline at low spend |
| Incrementality / geo-lift testing | Holdouts measuring true causal lift | Expensive, slow, requires geographic spread |
| Post-purchase surveys | Asks the customer where they heard about you | Memory bias; great for narrative truth |
| Cohort LTV by source | Tracks long-term value by acquisition channel | Slow signal; takes 60-90 days minimum |
| Blended business metrics (MER, nCAC) | Total revenue / total spend; aggregate truth | No channel attribution; just the floor |
Each of these is useful. None of them is right alone.
Why precision is the trap
The biggest mistake operators make is buying tools that produce precise-looking numbers on top of biased underlying data. If your Pixel is firing inconsistently, no MTA tool can fix that. The tool will give you a confident, decimal-place ROAS attribution number. The number will be wrong. It will just be wrong with more decimal places.
Precision is not accuracy.
Most attribution debates inside DTC teams are arguments about which precise number to trust. The honest answer is none of them, individually. The useful answer is to triangulate across at least three.
We lost about $400K of efficiency on a telehealth account I run because we trusted MTA's daily channel allocations during a period when the iOS opt-in pool had shifted underneath us. The numbers looked clean. They were precise. They were also wrong, and we didn't catch it for almost three weeks because no other model was disagreeing with MTA loudly enough to trigger an audit. We now require disagreement audits whenever any single model contradicts the other two by more than 20%.
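If you want that rule as code, here is a minimal sketch of a disagreement check. The 20% threshold matches the one above; the model names and numbers are hypothetical, and you would wire it to whatever your three models actually report for the same channel and window.

```python
def needs_disagreement_audit(model_reads: dict[str, float], threshold: float = 0.20) -> bool:
    """Flag an audit when any single model's read diverges from the
    average of the other models by more than `threshold` (default 20%).

    `model_reads` maps model name -> attributed revenue (or share) for
    the same channel and window, e.g.
    {"mta": 182_000, "survey": 140_000, "mmm": 151_000}.
    """
    for name, value in model_reads.items():
        others = [v for k, v in model_reads.items() if k != name]
        if not others:
            continue
        baseline = sum(others) / len(others)
        if baseline and abs(value - baseline) / baseline > threshold:
            return True
    return False
```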
A hierarchy that actually works
The operating model we run at Impremis is what I think of as a hierarchy of measurement, working top-down from least manipulable to most.
Tier 1: Blended business metrics (the floor)
MER (revenue / total marketing spend) and nCAC (total spend / new customers acquired) cannot be manipulated by any single platform. The math is forced by your bank account. These are the numbers I look at first, every morning, on every account.
If MER and nCAC are healthy, the rest of the stack is allowed to argue about details. If they're not, no amount of platform-level attribution gymnastics will save the business.
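For reference, the two formulas as code. This is a sketch, not a product: pull revenue and spend from finance- or bank-level exports rather than platform dashboards, and treat the example numbers as made up.

```python
def blended_mer(total_revenue: float, total_marketing_spend: float) -> float:
    """MER = total revenue / total marketing spend (all channels, all fees)."""
    return total_revenue / total_marketing_spend

def ncac(total_marketing_spend: float, new_customers: int) -> float:
    """nCAC = total marketing spend / new customers acquired in the period."""
    return total_marketing_spend / new_customers

# Illustrative weekly numbers:
# blended_mer(420_000, 120_000) -> 3.5
# ncac(120_000, 1_500)          -> 80.0
```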
Tier 2: Quarterly truth (MMM and incrementality)
Once a quarter, we run marketing mix modeling and a geo-lift incrementality test. MMM tells us the directional contribution of each channel over a long enough window for noise to wash out. Geo-lift tells us the true causal incrementality of specific campaigns or channels.
MMM is meaningful at $500K+/year in ad spend. Below that, the model has too few data points to converge. For brands at that scale or larger, this is the layer that resolves arguments between platforms.
Tier 3: Daily allocation (post-purchase survey, MTA)
For day-to-day decisions, post-purchase surveys are the most underused tool in DTC. A single question on the Shopify thank-you page ("Where did you first hear about us?") with a handful of channel options returns more decision-relevant signal than any third-party attribution platform we have ever paid for.
We pair surveys with MTA at the ad-set level. Surveys tell us about channel allocation. MTA tells us which specific ads inside Meta are doing the work.
Tier 4: Long-term validation (cohort LTV)
Finally, we cohort all new customers by acquisition source and track them for 90+ days. The ad sets and channels that look great on day-1 ROAS sometimes look terrible on day-90 LTV. The reverse is also true.
This is also where most brands die quietly. They optimize for cheap CAC, scale a channel that produces low-LTV customers, and find out six months later that the cohort never repurchases. The cohort table is what catches that drift before it becomes existential.
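A minimal sketch of the cohort table, assuming an orders export with customer_id, first_order_source, order_date, and revenue columns (those column names are placeholders for whatever your export actually uses):

```python
import pandas as pd

def cohort_ltv_by_source(orders: pd.DataFrame, horizon_days: int = 90) -> pd.Series:
    """Average revenue per customer within `horizon_days` of their first
    purchase, grouped by acquisition source."""
    orders = orders.copy()
    orders["order_date"] = pd.to_datetime(orders["order_date"])
    # Each customer's first order date, aligned to every order row.
    first_order = orders.groupby("customer_id")["order_date"].transform("min")
    in_window = orders[orders["order_date"] <= first_order + pd.Timedelta(days=horizon_days)]
    # Revenue per customer inside the window, then averaged by source.
    per_customer = in_window.groupby(["first_order_source", "customer_id"])["revenue"].sum()
    return per_customer.groupby(level="first_order_source").mean().rename(f"ltv_{horizon_days}d")
```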
The three to actually optimize against
If you do nothing else with attribution, do these three:
1. Blended MER / nCAC at the business level
This is the only number nobody can lie to you about. Calculate it weekly. Set a target. Hold the entire marketing organization accountable to it.
A reasonable starting target: blended MER of 3.5x for established DTC brands, 4.5x for early-stage. nCAC should be at most 30-35% of 60-day revenue per customer. These are starting heuristics, not laws.
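If you want those heuristics as a guardrail in the same dashboard, a short sketch (the thresholds are the starting targets above, not laws):

```python
def mer_target_met(mer: float, early_stage: bool) -> bool:
    """Blended MER starting target: ~4.5x early-stage, ~3.5x established."""
    return mer >= (4.5 if early_stage else 3.5)

def ncac_target_met(ncac: float, revenue_60d_per_customer: float, ceiling: float = 0.35) -> bool:
    """nCAC should stay at or below ~30-35% of 60-day revenue per customer."""
    return ncac <= ceiling * revenue_60d_per_customer
```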
2. Post-purchase survey at the channel level
Install it on Shopify. Ask one question. Show six options including organic, friend referral, podcast, search, social, and other. Track responses weekly.
What surveys catch that platforms cannot: brand-search lift driven by paid social, podcast-attributable traffic that GA4 routes to direct, the gap between platform-reported "view-through" conversions and actual customer-reported recall.
We use survey data to set channel mix targets. Platforms get a budget that's correlated to their share of survey-attributed first-touch traffic, not their share of platform-reported conversions.
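A sketch of what survey-weighted allocation might look like, assuming weekly survey tallies per channel and a total paid budget. The channel names and numbers are illustrative, and the function uses strict proportionality for simplicity; in practice we correlate budget to survey share rather than map it one-to-one.

```python
def budget_by_survey_share(survey_counts: dict[str, int], total_budget: float) -> dict[str, float]:
    """Allocate paid budget in proportion to each channel's share of
    survey-attributed first-touch responses."""
    total = sum(survey_counts.values())
    return {channel: total_budget * count / total for channel, count in survey_counts.items()}

# Example:
# budget_by_survey_share({"meta": 420, "podcast": 180, "search": 150, "other": 250}, 100_000)
# -> {"meta": 42000.0, "podcast": 18000.0, "search": 15000.0, "other": 25000.0}
```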
3. Geo-lift incrementality at the campaign level
Quarterly, minimum. The cleanest causal evidence available outside of a randomized experiment. Pause a channel or campaign in a matched set of geos, run it in another, measure the lift.
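A simplified sketch of the lift read, assuming a matched test/control geo split and revenue totals per group over the test window. A real analysis would use a synthetic-control approach or an open-source tool like Meta's GeoLift; this only shows the underlying arithmetic.

```python
def geo_lift(test_revenue: float, control_revenue: float,
             baseline_ratio: float, spend: float) -> dict[str, float]:
    """Incremental lift from a matched-geo test.

    baseline_ratio: test/control revenue ratio from the pre-test period,
    used to scale control geos into a counterfactual for the test geos.
    """
    counterfactual = control_revenue * baseline_ratio
    incremental = test_revenue - counterfactual
    return {
        "incremental_revenue": incremental,
        "incremental_roas": incremental / spend if spend else 0.0,
    }
```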
Geo-lift is what gave us the confidence to over-scale TOF Meta creative on multiple Impremis accounts past what last-click said was profitable. The geo tests showed true ROAS was 35-50% higher than last-click claimed. We funded accordingly. Blended profitability went up.
This is also why I talk so much about why ROAS is not the goal. If your only number is platform ROAS, you can never make a confident over-scale decision, which means the brands that can run incrementality will always have a structural advantage over the brands that can't.
When the models disagree
The three measurements above will disagree with each other. This is the feature, not the bug.
When they all align, scale aggressively. When they conflict, pause and investigate before making allocation decisions.
| Pattern | Likely Cause | Action |
|---|---|---|
| MER good, MTA flat, surveys flag a channel | Platform is under-claiming credit for that channel | Increase budget on the channel |
| MTA bullish, surveys flat, MER flat | Platform is over-claiming credit | Hold or decrease |
| MER declining, all platform ROAS healthy | Cannibalization between channels | Reduce overlap, lean on incrementality |
| Surveys spiking on a channel, MTA flat | Late-stage attribution gap on TOF spend | Run geo-lift to confirm before scaling |
| Day-1 ROAS strong, cohort LTV weak | Customer-quality drift | Reset audience or messaging on that source |
Why most attribution stacks are broken
The most common failure mode I see in audits: brands invest heavily in MTA software, treat the daily attribution numbers as truth, ignore surveys, never run incrementality, and forget to track cohort LTV. They have one tool, one number, and they make every allocation decision against it.
The MTA number is precise. It is also the most biased and most volatile of the three options I would actually use. Building an entire decision-making process on it is the analytical equivalent of betting the company on a single witness whose memory is correlated with whoever is paying them.
The second most common failure: brands install all eight tools, generate ten dashboards, and never use any of them to make a decision. Measurement infrastructure that doesn't change behavior is just expensive analytics theater.
The 90-day attribution upgrade
If you are starting from scratch or rebuilding a broken stack, here is the order I recommend:
- Week 1. Install a post-purchase survey on Shopify. One question, six options. Start collecting.
- Week 2-3. Build a blended MER and nCAC dashboard. Pull from your bank account, not from platforms. Update weekly.
- Week 4-6. If you're at $500K+/year in spend, scope an MMM project. If not, defer.
- Week 6-8. Plan a geo-lift incrementality test for your largest paid channel. Most testing tools will run this for you for $5-15K.
- Week 8-12. Set up cohort LTV tracking by acquisition source. 60 and 90-day repurchase rates by channel. Every brand should have this regardless of size.
This is also the foundation for the three-lever Meta operating model. Without measurement infrastructure that catches platform bias, you cannot make confident structural or budget decisions on Meta. The lever-pulling and the measurement stack have to work together.
FAQ
Do I really need MMM if I'm under $500K/year in spend?
No. Below that scale, MMM has too little data to converge on stable estimates. Use blended metrics, surveys, and a small geo-lift test instead. Reassess MMM when you cross $500K.
Are post-purchase surveys reliable? People forget where they heard about brands.
They are biased toward memorable touches. That's the whole point. Memory-weighted attribution catches things last-click can't. It is not a replacement for incrementality, but as a directional read on channel allocation it is the highest-ROI instrument in the stack.
What about LLM-driven attribution tools?
Most of them are MTA with a UX skin. The underlying tracking limitations are the same. Some are useful for ad-set-level pattern recognition. None of them solve the precision-vs-accuracy problem unless they're combined with incrementality.
How often should I run incrementality tests?
Quarterly minimum on your largest paid channel. Monthly if you're testing major creative shifts or new channels. Anything less than quarterly and your view of true causal lift is too stale to drive decisions.
Is GA4 attribution worth using at all?
As a baseline truth-check, yes. GA4 is conservative and that's useful as a floor. As a primary decision tool, no. It systematically undercounts paid social and over-credits direct.
Why do my Meta numbers look so different from my Shopify numbers?
Meta uses 7-day click + 1-day view by default. Shopify only sees last-click within session. Both are technically correct; they're measuring different things. Use blended numbers to reconcile, never platform numbers in isolation.
Can I just use one tool and call it a day?
If you are under $1M/year in spend, blended MER plus a post-purchase survey is probably enough. Above that threshold, you need at least three measurement methods that disagree productively; below it, the cost of additional tooling exceeds the decision quality it buys.
Build the cross-check, not the certainty
Attribution is not a search for truth. It is a search for productive disagreement. The brands that win at measurement are the ones that build a stack of imperfect models that check each other, then teach the operating team to act on the consensus and investigate the conflicts.
MER is the floor. Surveys are the compass. Incrementality is the proof.
Every other number in your stack is a useful witness. None of them are the judge.
You are the judge.
Keep reading
Pieces I've written on related topics that pair well with this one:
- How I Actually Build an Attribution Stack for $30M+ in Spend — The exact tools, formulas, and weekly workflow I use to run attribution at a scale where platform reports lie to you constantly.
- The Attribution Problem That's Costing You Real Money — Attribution in Meta Ads is distorting budget decisions across channels.
- The Post-iOS 14 Playbook: How High-Performing Agencies Rebuilt Attribution from the Ground Up — iOS 14 broke the attribution model most agencies were built on.
- 12 Metrics That Matter More Than ROAS for DTC Brands — ROAS tells you what already happened. These 12 leading indicators tell you what's about to. The operator dashboard for ecommerce brands.
- Why Your Meta ROAS Looks Great But Margins Don't — Meta reports 5x ROAS while your margins compress. It's not a glitch — it's structural. Here's the three-signal attribution system I run at Impremis.