How I Actually Build an Attribution Stack for $30M+ in Spend

The exact tools, formulas, and weekly workflow I use to run attribution at a scale where platform reports lie to you constantly. No theory — just the system.

Jordan Glickman · April 24, 2026
Attribution

Most attribution decks are theater

I have read maybe two hundred attribution decks in my career. Almost all of them are useless.

They describe an idealized world where you stand up an MMM, light incrementality candles, and a unified truth descends from the cloud. The actual operator running $1M, $5M, or $30M a month does not have a unified truth. They have a Slack DM at 9pm asking why MER fell two points and a CFO asking whether to push more spend tomorrow.

This post is the actual stack I use — the tools, the spreadsheet columns, the formulas, and the weekly workflow that runs across the $250M+ in annual ad spend we manage at Impremis. No theater.

The shape of the stack — top down, not bottom up

The biggest mistake I see in attribution work is starting with platform-level precision ("how much credit does this ad get?") and trying to roll up to account-level truth ("is the business healthy?").

It's backwards. Build top-down. Start with the number on the bank statement and only descend to ad-level granularity once the layer above is trustworthy.

My stack has four layers, and they get added in this order, never out of sequence.

| Layer | Tool | What it answers | Spend tier |
|---|---|---|---|
| 1. Blended truth | Google Sheet (MER tracker) | Is the whole business profitable? | All |
| 2. Channel attribution | Post-purchase survey (Fairing/KnoCommerce) | Where do new customers actually come from? | $30K+/mo |
| 3. Multi-touch | Triple Whale or Northbeam | Which campaigns and ads are pulling weight? | $100K+/mo |
| 4. Causal | Geo holdouts + MMM | What is the true incremental contribution? | $500K+/mo |

If you skip Layers 1 and 2, Layer 3 will lie to you and you won't have anything to check it against.

Layer 1 — The MER tracker (build this in 20 minutes)

This is a Google Sheet. Not Looker. Not a dashboard tool. A Google Sheet, because everyone on the team can edit it and it's allowed to be ugly.

It has nine columns.

  1. Date
  2. Total revenue (from Shopify)
  3. New-customer revenue (Shopify, filtered to first-order)
  4. Total orders
  5. New-customer orders
  6. Total ad spend (all channels combined)
  7. Acquisition spend (prospecting only — exclude branded search and retargeting)
  8. Gross margin % (your variable margin, including shipping and CC fees)
  9. Contribution after ads (computed)

Then a few formulas that earn their keep.

MER         = Total Revenue / Total Ad Spend
nMER        = New-Customer Revenue / Acquisition Spend
CAC         = Total Ad Spend / Total Orders
nCAC        = Acquisition Spend / New-Customer Orders
Contribution$ = (Total Revenue * Gross Margin %) - Total Ad Spend
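
If you'd rather sanity-check the sheet in code, here's a minimal Python sketch of the same five formulas. The column names and the sample day are mine, not from any real account.

```python
# Minimal sketch of the MER-tracker math. Keys mirror the sheet's columns;
# the sample numbers below are hypothetical.

def mer_metrics(row: dict) -> dict:
    """Compute the five derived metrics from one daily row of the tracker."""
    return {
        "MER": row["total_revenue"] / row["total_ad_spend"],
        "nMER": row["new_customer_revenue"] / row["acquisition_spend"],
        "CAC": row["total_ad_spend"] / row["total_orders"],
        "nCAC": row["acquisition_spend"] / row["new_customer_orders"],
        "contribution": row["total_revenue"] * row["gross_margin_pct"]
                        - row["total_ad_spend"],
    }

# Hypothetical day: $50K revenue, $18K of it new-customer.
day = {
    "total_revenue": 50_000,
    "new_customer_revenue": 18_000,
    "total_orders": 620,
    "new_customer_orders": 180,
    "total_ad_spend": 12_000,
    "acquisition_spend": 9_000,
    "gross_margin_pct": 0.62,
}
print(mer_metrics(day))  # e.g. nCAC = 9000 / 180 = $50
```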

The two numbers I actually live by: nMER and nCAC. Total MER lies to you because it bundles in repeat customers and branded search and brand-loyal returns. New-customer MER cuts through it.

The gap between CAC and nCAC is the most informative single number in the sheet. If your blended CAC is $40 and your nCAC is $95, your account is being subsidized by repeat purchases — and the moment new-customer volume slips, you'll feel it 30 days later when the cohort wave breaks.

Layer 2 — Post-purchase survey (the truth serum)

Platform-reported attribution is a confidence trick. Meta wants to take credit for everything Meta touched. Google wants the same. Their numbers will sum to 130% of your actual revenue and you will believe them because the dashboards are pretty.

The post-purchase survey is the simplest way to break the spell.

I use Fairing on most accounts. KnoCommerce works fine too. The single question to ask, on the order confirmation page, is:

Where did you first hear about us?

Not "how did you find out about this purchase" (last-touch). Not "what convinced you to buy" (mushy). First touch. The thing that put the brand into the customer's head.

Options, randomized: Facebook/Instagram, TikTok, YouTube, Google search, podcast, friend or family, influencer, TV/streaming, email, other.

What to do with the data. Survey response rate is usually 40-60%. Take each channel's share of total survey responses (its channel survey %) and multiply by total new customers in the period. That's your survey-implied new customers per channel.

Survey-implied CAC by channel = Channel spend / (channel survey % * total new customers)

Now you have a number to compare against the platform-reported CAC.
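
A minimal Python sketch of that reconciliation, assuming you export raw response counts from the survey tool. Channel names, spend, and counts are hypothetical.

```python
# Survey share per channel -> implied new customers -> implied CAC.

def survey_implied_cac(responses: dict, total_new_customers: int,
                       spend: dict) -> dict:
    total_responses = sum(responses.values())
    implied = {}
    for channel, count in responses.items():
        share = count / total_responses              # channel survey %
        implied_customers = share * total_new_customers
        if channel in spend:                         # only paid channels get a CAC
            implied[channel] = round(spend[channel] / implied_customers, 2)
    return implied

responses = {"meta": 410, "google": 190, "tiktok": 120, "friend_family": 180}
spend = {"meta": 120_000, "google": 40_000, "tiktok": 35_000}
print(survey_implied_cac(responses, total_new_customers=3_000, spend=spend))
# e.g. meta: 410/900 share * 3000 = ~1367 implied customers -> ~$88 CAC
```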

| Channel | Platform-reported CAC | Survey-implied CAC | Inflation factor |
|---|---|---|---|
| Meta | $52 | $84 | 1.6x |
| Google (non-brand) | $48 | $61 | 1.3x |
| TikTok | $73 | $94 | 1.3x |
| YouTube | $108 | $122 | 1.1x |

This table has the shape of a real one from a recent supplements brand audit. Meta was claiming a CAC 1.6x more efficient than the survey indicated. That's a $32 gap on every customer. At their volume, $1.1M a year of misallocated decision-making.

The survey doesn't have to be precise. It has to be directionally honest. It will tell you which platform is bullshitting you the hardest, and by how much.

Layer 3 — Multi-touch (Triple Whale, Northbeam, or roll your own)

Once you have a blended truth and a survey check, you've earned the right to look at ad-level attribution.

I use Triple Whale on most Impremis accounts. Northbeam on some. The choice matters less than what you do with the data.

The thing I actually use the MTA tool for is the account control chart. Plot every campaign on a 2x2 (a minimal classification sketch follows the list):

  • Y-axis: CAC (lower is better)
  • X-axis: spend (the size of the bet)
  • Quadrant 1 (low CAC, high spend): these are your engines. Protect them.
  • Quadrant 2 (low CAC, low spend): scaling candidates. Increase budget cautiously.
  • Quadrant 3 (high CAC, low spend): test bench. Acceptable losses.
  • Quadrant 4 (high CAC, high spend): the killers. Cut or restructure today.
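
A minimal sketch of the quadrant logic, assuming per-campaign CAC and spend pulled from the MTA export. The target CAC and spend cutoff are the knobs you'd set per account; the campaigns here are made up.

```python
def quadrant(cac: float, spend: float,
             cac_target: float, spend_cutoff: float) -> int:
    """Map one campaign onto the 2x2: 1=engine, 2=scale, 3=test, 4=kill."""
    low_cac = cac <= cac_target
    high_spend = spend >= spend_cutoff
    if low_cac and high_spend:
        return 1  # engines: protect
    if low_cac:
        return 2  # scaling candidates: increase budget cautiously
    if not high_spend:
        return 3  # test bench: acceptable losses
    return 4      # high CAC, high spend: cut or restructure today

campaigns = [("prospecting_a", 42, 30_000), ("retargeting_b", 95, 28_000)]
for name, cac, spend in campaigns:
    print(name, "-> Q", quadrant(cac, spend, cac_target=60, spend_cutoff=15_000))
```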

The rule on every account: nothing sits in Quadrant 4 longer than 14 days. If a campaign has high spend and bad CAC, it's either going to a new structure, a new creative slate, or it's getting paused. Quadrant 4 is where bad accounts hide their losses.

The other thing the MTA tool earns its keep on: cross-channel discrepancies. If your Meta-reported CAC says $50 and the MTA-blended CAC for that same campaign says $90, that's a 1.8x platform inflation factor and you should be deeply skeptical of any decision you'd make from the platform number alone.

Layer 4 — Geo holdouts and MMM

This is where most teams get philosophically interested and operationally lost. So let me be blunt.

You don't need MMM until you're spending $500K/month. Below that, the noise inside the model is bigger than the signal it's trying to extract. You'll spend three months building it and the margin of error will be wider than the decisions you're trying to make.

What you can do at much smaller scale is run geo holdouts. Pick a state or DMA where you currently advertise. Pause Meta in that geo for 4 weeks. Compare new-customer volume in the holdout geo to a matched control geo. The lift is your incremental contribution from Meta in that market.
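
A minimal sketch of the lift math, done as a difference-in-differences so the control geo absorbs seasonality. It assumes you have new-customer counts for both geos before and during the pause; the numbers are hypothetical.

```python
def holdout_lift(holdout_during: float, holdout_before: float,
                 control_during: float, control_before: float) -> float:
    """How far did the holdout geo fall vs. what the control implies
    it would have done with the channel still on?"""
    expected = holdout_before * (control_during / control_before)
    return (holdout_during - expected) / expected

# Holdout geo: 500 -> 350 new customers per 4 weeks after pausing the
# channel; control geo roughly flat (480 -> 470).
print(holdout_lift(350, 500, 470, 480))  # ~ -0.29: the channel drove ~29% of volume
```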

It's crude. It's also one of the only causal tests available to a brand under $1M/month, and it will tell you within a quarter whether the platform numbers are telling the truth at a level the survey can't.

At &you we ran a geo holdout on a paid channel last year and discovered the platform was overstating its true incremental contribution by about 35%. We rebudgeted. Saved a meaningful amount of money. The MMM consultant we'd been talking to wanted $80K to tell us roughly the same thing.

The weekly workflow that actually runs

Here is the operational rhythm I run at Impremis on every account above $200K/month.

Daily (15 minutes, head buyer)

  • Pull yesterday's row into the MER sheet
  • Flag any nMER deviation > 15% from 7-day average (a minimal check is sketched after this list)
  • Note any campaign that crossed into Quadrant 4
  • Log every ad-level kill with reason
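
The nMER flag from that checklist, as a minimal sketch. It assumes a list of trailing daily nMER values from the sheet, newest last.

```python
def nmer_flag(daily_nmer: list[float], threshold: float = 0.15) -> bool:
    """Flag if yesterday's nMER deviates more than 15% from the
    prior 7-day average."""
    yesterday = daily_nmer[-1]
    prior_week = daily_nmer[-8:-1]
    baseline = sum(prior_week) / len(prior_week)
    return abs(yesterday - baseline) / baseline > threshold

history = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 1.6]  # yesterday dropped
print(nmer_flag(history))  # True: 1.6 is ~23% below the 7-day average
```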

Weekly (90 minutes, full pod)

  • Review MER, nMER, nCAC against weekly target
  • Look at survey response distribution — has the channel mix shifted?
  • Account control chart review — what moved between quadrants?
  • Top 5 winners and top 5 losers, both with hypotheses
  • One scaling decision and one cutting decision, both written

Monthly (3 hours, with founder/CFO)

  • Cohort analysis — is the LTV holding for last month's new customers?
  • Survey-implied CAC by channel reconciled against platform CAC
  • Any geo holdouts running, status update
  • One structural change for the next month, written and committed

Quarterly (a full day, leadership)

  • Backtest survey predictions against cohort revenue
  • Audit the kill log — what came back when retested?
  • Re-baseline targets based on the trailing quarter
  • Decide whether to add the next layer of the stack

This cadence is not optional. The stack is worthless without the rhythm. A spreadsheet that nobody opens on Monday morning is just a worse Notion page.

What I'd tell a $50K/month brand vs. a $1M/month brand

| Spend tier | Stack | Time investment | Expected payoff |
|---|---|---|---|
| $10K-50K/mo | MER sheet only | 30 min/week | 5-10% efficiency gain just from honest measurement |
| $50K-200K/mo | MER sheet + Fairing | 90 min/week | Catch one over-credited channel, save 10-20% of spend |
| $200K-500K/mo | + Triple Whale or Northbeam | 4 hours/week + analyst | Quadrant discipline, real testing budget |
| $500K-1M/mo | + Server-side tracking, geo holdouts | Dedicated analyst | Causal validation of channel mix |
| $1M+/mo | + MMM, cohort LTV by channel | Analytics team | True portfolio optimization |

The most expensive mistake I watch brands make is over-tooling at small spend. A $40K/month brand does not need Northbeam. It needs the Google Sheet, the Fairing survey, and one human who looks at both every Monday morning. The tooling can wait.

The second-most expensive mistake is under-tooling at scale. A $1M/month brand running attribution off Meta's reported numbers is gambling with millions a year. You can afford the team. Hire it.

The principle underneath all of this

Never let a platform grade its own homework.

That's the entire philosophy in eight words. Meta will tell you Meta is great. Google will tell you Google is great. TikTok will tell you TikTok is the future. Each one is technically true and operationally misleading.

Your job is to build a stack of independent, cross-checking measurements where the platform's number is just one input among several. When the platform number agrees with the survey and agrees with the geo holdout, you can trust it. When they disagree, you investigate, and the disagreement itself is the most valuable data you have.

Attribution is not a quest for a single number. It is a triangulation problem. Build the triangle.

FAQ

Do I need server-side tracking?

Above ~$100K/month, yes. The data loss from iOS and ad blockers is large enough that platform optimization starts degrading without server-side conversions firing back. Below that, focus on the survey and the MER sheet — server-side has a real implementation cost and won't pay off at small scale.

How accurate is the post-purchase survey?

Directionally extremely accurate. Numerically rough. Customers will misremember the channel they first heard about you on, and certain channels (TV, podcast) chronically get under-credited. But the delta between platform-reported and survey-reported CAC is consistent enough across audits that it reliably surfaces over-attribution.

What if my survey response rate is low?

Anything above 30% is workable. Below that, the sample skews toward customers who are already engaged, which over-represents owned channels (email, brand search). Test the question wording, the placement, and the option list before you give up on it.

Should I use Triple Whale or Northbeam?

For most accounts, either works. Triple Whale is faster to set up and has a better default UI for buyers. Northbeam tends to be stronger on the model side for sophisticated multi-channel accounts. Pick one and commit — switching mid-stream destroys your trend data.

How do I run a geo holdout if my product ships nationally?

Pause one platform (usually Meta) in 5-10 matched DMAs for 4 weeks while keeping all other channels live. Track new-customer rate in those DMAs vs. matched control DMAs. The difference is your incremental contribution. Match on baseline conversion volume, demographics, and seasonality.

What's the fastest way to get a CFO on board with this?

Show them one survey-vs-platform CAC table from your own data. The first time they see Meta claiming a $52 CAC while the survey implies $84, they'll fund the rest of the stack themselves.

Is AdFuse part of this stack?

AdFuse focuses on ad-ops execution — the layer below attribution. The two complement each other: AdFuse handles the "what to do" once attribution has told you where the truth is. Tools that conflate the two end up doing both poorly.


Attribution is not a software purchase. It is a weekly habit and a top-down architecture and a stubborn refusal to let any one tool tell you the whole story.

Build the sheet first. Add the survey second. Layer the platforms in the order their answers can be checked. And read why ROAS isn't a goal before you decide what target to put at the top of the stack.
