The Attribution Problem That's Costing You Real Money

Attribution in Meta Ads is distorting budget decisions across channels. Learn why ROAS is misleading and how MER, multi-touch, and incrementality fix it.

Jordan Glickman · May 10, 2026 · 9 min read
Attribution

Every performance marketer has sat in a meeting where the numbers on the dashboard don't match the numbers in the bank account.

Revenue looks strong on paper. Platform-reported ROAS is hitting target. And yet margins are compressing, CAC is creeping up, and the business feels harder to run than it did six months ago at half the spend.

The attribution problem is not a data collection issue. It's a decision-making issue. And the way most brands and agencies are currently structured, the wrong decisions are being made every single day on data that sounds credible but paints a fundamentally distorted picture of reality.

Here's what's actually happening, why it's getting worse, and what the most sophisticated operators are doing about it. (For the Meta-specific version, see the three-signal attribution system.)

Image brief: Horizontal flow — one customer path (TikTok view → branded search → SMS code → purchase) with all three paid channels claiming the same conversion above, and one actual-revenue figure below. alt: "Multi-channel attribution overclaim diagram." caption: "Every channel claims credit. The bank account doesn't reconcile."

The multi-channel blind spot

When a brand is running paid search, TikTok, programmatic display, email, and SMS simultaneously, each of those channels has its own tracking logic, its own attribution window, and its own incentive to claim as much conversion credit as possible.

None of them coordinate with each other.

A customer discovers a product through a TikTok video. Three days later they search the brand name on Google and click a branded search ad. That evening they get an SMS with a discount code and convert.

Google claims the conversion. SMS claims the conversion. TikTok gets nothing because there was no direct click from that session to purchase. And the brand's reporting dashboard shows a combined attributed revenue figure that's 60% higher than actual revenue for the period.

This isn't a fringe scenario. It's the default state of multi-channel attribution for every brand spending meaningfully across more than two paid channels.

The question isn't whether this is happening to you. It's whether your budget allocation decisions are accounting for it.
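The overclaim in the scenario above can be made concrete with a quick reconciliation. This is a minimal sketch with hypothetical figures (the per-platform revenue numbers and the actual-revenue total are illustrations, not benchmarks), comparing the sum of each platform's self-reported attributed revenue against revenue from your own order system:

```python
# Sketch: reconciling platform-reported revenue against actual revenue.
# All dollar figures here are hypothetical illustrations.

platform_attributed = {            # each platform's self-reported revenue
    "google_branded_search": 120_000,
    "sms": 95_000,
    "tiktok": 15_000,
}

actual_revenue = 144_000           # from your own order system, same period

claimed_total = sum(platform_attributed.values())
overclaim_pct = (claimed_total / actual_revenue - 1) * 100

print(f"Platforms claim ${claimed_total:,}; the bank shows ${actual_revenue:,}")
print(f"Combined attribution overstates revenue by {overclaim_pct:.0f}%")
```

Run against your own numbers, the gap between `claimed_total` and `actual_revenue` is the size of your attribution overclaim; with these illustrative figures it lands at roughly the 60% overstatement described above.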

Why TikTok's attribution gap is especially expensive right now

TikTok has become one of the highest-impact discovery channels for consumer brands — particularly in fashion, beauty, food, and lifestyle. The platform's algorithm is extraordinarily good at surfacing products to audiences who didn't know they wanted them.

But TikTok has a structural attribution problem that causes most brands to systematically undervalue it.

The view-through disconnect. TikTok's standard attribution model gives conversion credit to ads that were viewed but never clicked. In theory, that captures the awareness impact of video advertising. In practice, it creates significant credit inflation that makes TikTok's self-reported numbers unreliable as a standalone performance metric.

More importantly, because TikTok is primarily a discovery and awareness channel, the conversions it influences often happen on other channels. Someone watches a TikTok, saves the product, buys it through a Google search two days later. TikTok drove the intent. Google gets the credit.

When brands evaluate TikTok purely on last-click — or even platform-reported ROAS — it consistently looks weaker than it actually is. That leads to under-investment in a channel that is genuinely building pipeline and demand.

The brands that have figured this out are increasing their TikTok budget while their competitors are cutting it, and their branded search volume and email-driven revenue are growing as a direct result.

The channels that always look better than they are

The attribution problem doesn't just cause brands to undervalue certain channels. It causes them to overvalue others.

Branded paid search is the most common example. Branded search campaigns show extraordinary ROAS because they're capturing customers who were already going to convert. These customers discovered the brand through some other channel, developed enough intent to search the brand name, and then clicked a paid search ad that appeared above the organic listing they would have clicked anyway.

The paid search click gets full credit. The discovery channel that created the intent gets nothing.

This is not an argument against running branded search. Protecting your branded terms has real value. But when branded search ROAS is being used to justify overall paid media investment — or to argue that paid search is your highest-performing channel — that analysis is built on a false foundation.

The same dynamic applies to email and SMS, which consistently report outsized ROAS because they're primarily converting customers who were already acquired and nurtured by other channels.

Attribution efficiency by channel type

| Channel | Typical last-click bias | Likely true contribution | Common mistake |
|---|---|---|---|
| Branded paid search | Heavily overstated | Moderate (capture, not creation) | Over-investing to protect ROAS metrics |
| Email and SMS | Heavily overstated | High for retention, low for acquisition | Attributing new-customer revenue to owned channels |
| TikTok (organic + paid) | Heavily understated | High for discovery and upper funnel | Cutting spend due to weak last-click data |
| Programmatic display | Understated | Moderate, context-dependent | Eliminating the channel based on view-through skepticism |
| Affiliate and influencer | Inconsistently measured | High when tracked properly | Under-investing due to tracking gaps |
| Paid search (non-brand) | Moderately accurate | High for intent-driven categories | Treating it as a standalone acquisition channel |

Incrementality testing: the only attribution data you can actually trust

Every attribution model — including the sophisticated multi-touch models inside third-party tools — is making an educated guess about what caused a conversion.

Incrementality testing is different. It measures what would have happened without your advertising.

The methodology is straightforward. You take a defined audience and split it into two groups. One group sees your ads normally. The other is held out and doesn't see them for the duration of the test. At the end, you compare conversion rates between the two.

The difference is your true incremental lift: the portion of conversions that would not have happened without your paid media. Everything else is organic demand you would have captured anyway.

A practical incrementality framework:

  1. Pick one channel and one time window. Don't try to test everything at once. Choose your second- or third-highest-spend channel and run a two- to four-week test.
  2. Define your holdout size. A 10–20% holdout group is typically enough to generate statistically meaningful results without sacrificing too much potential revenue during the test.
  3. Measure at the business level, not the platform level. Compare total revenue and conversion rate for the exposed versus holdout group using your own first-party data — not the platform's reporting. This is the only way to eliminate self-attribution bias.
  4. Calculate true incremental CPA. Take the incremental conversions generated by the exposed group and divide total channel spend by that number. This is your actual cost to acquire an incremental customer, not the platform's reported CPA.
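
Steps 2 through 4 reduce to a few lines of arithmetic. The sketch below uses hypothetical test numbers (group sizes, conversion counts, and spend are all illustrations) to show how incremental lift and true incremental CPA fall out of a holdout comparison:

```python
# Sketch of steps 2-4: incremental lift and true incremental CPA from a
# holdout test. All numbers are hypothetical illustrations.

exposed_users, exposed_conversions = 90_000, 2_700   # saw ads normally
holdout_users, holdout_conversions = 10_000, 240     # held out, no ads
channel_spend = 50_000.0                             # total spend during test

exposed_rate = exposed_conversions / exposed_users   # 3.0%
holdout_rate = holdout_conversions / holdout_users   # 2.4%
lift = exposed_rate - holdout_rate                   # incremental conversion rate

# Conversions that would not have happened without the ads
incremental_conversions = lift * exposed_users       # 540 here

platform_cpa = channel_spend / exposed_conversions   # what the dashboard implies
true_cpa = channel_spend / incremental_conversions   # actual incremental cost

print(f"Lift: {lift:.2%} ({incremental_conversions:.0f} incremental conversions)")
print(f"Platform-implied CPA: ${platform_cpa:.2f}  True incremental CPA: ${true_cpa:.2f}")
```

Note the gap in this illustration: the platform-level math implies roughly $18.50 per conversion, while the true incremental cost is closer to $92.60, because most exposed-group conversions would have happened anyway.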

The results of a well-run incrementality test almost always reveal that at least one channel is significantly less efficient than reported, and that at least one is significantly more valuable than the platform data suggests.

Marketing Efficiency Ratio: the metric that cannot lie

The single most useful attribution metric for strategic decision-making is also the simplest.

Marketing Efficiency Ratio (MER) is total revenue divided by total paid media spend. No platform pixels. No attribution windows. No cross-channel deduplication logic. Just your actual revenue from your own systems divided by the actual dollars you spent on paid media.

MER is immune to the attribution games platforms play because it doesn't rely on platform data at all. It tells you whether your overall paid media investment is generating efficient revenue growth at the business level.

The practical application is straightforward. Set a target MER threshold based on your unit economics. Track it weekly. Use it as the primary gate for scaling or pulling back total paid media investment. Use channel-level data and incrementality results to allocate budget within that overall spend envelope.

When MER is healthy and stable, you have room to scale. When MER compresses despite flat or declining spend, you have an efficiency problem somewhere in the mix that needs diagnosing before you add budget.
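
The weekly gate described above is simple enough to sketch directly. In this illustration the target threshold and the revenue and spend figures are hypothetical; the point is only the shape of the check, with the actual threshold coming from your own unit economics:

```python
# Sketch: weekly MER as a scaling gate. The 3.5 threshold and the weekly
# figures are hypothetical; derive the real target from your unit economics.

MER_TARGET = 3.5  # hypothetical threshold from contribution-margin math

def mer(total_revenue: float, total_paid_spend: float) -> float:
    """Marketing Efficiency Ratio: all revenue over all paid media spend."""
    return total_revenue / total_paid_spend

def scaling_signal(weekly_mer: float, target: float = MER_TARGET) -> str:
    if weekly_mer >= target:
        return "healthy: room to scale total spend"
    return "compressed: diagnose efficiency before adding budget"

this_week = mer(total_revenue=310_000, total_paid_spend=82_000)
print(f"MER {this_week:.2f} -> {scaling_signal(this_week)}")
```

Because the inputs come from your own revenue and spend records rather than platform pixels, this number cannot be inflated by attribution overlap between channels.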

This is the approach the most sophisticated DTC brands and performance agencies are using. It isn't flashy. It doesn't require expensive analytics infrastructure. It produces better budget allocation decisions than any dashboard built on platform-reported attribution.

The agency obligation: honest attribution conversations

If you run a performance marketing agency, attribution clarity isn't just a client-service issue. It's a trust issue that directly affects account longevity.

Clients evaluating your performance through last-click dashboards are getting a distorted view of what your work is actually contributing. Channels you manage that operate higher in the funnel will perpetually underperform on their reporting view, even when they're generating significant downstream revenue.

The answer is not to show clients a different number that makes your channels look better. The answer is to build attribution fluency into the client relationship from the beginning.

Introduce MER as a shared success metric before the first campaign launches. Present monthly results with a blended view alongside channel-level data. Run at least one incrementality test per year and bring the results — including the ones that challenge your own channel's contribution — to the client directly.

Agencies that do this consistently retain clients longer because they're operating as a strategic partner rather than a media vendor managing dashboards.

The performance marketing landscape is only getting more complex. More channels, more touchpoints, more platforms competing for attribution credit. The agencies and brands that build real attribution discipline now will have a structural advantage over everyone still optimizing toward numbers that were never telling the full story.

FAQ

What's a healthy MER target for a DTC brand? Most brands need a blended MER of 3.0–4.0 to support healthy contribution margins. Below 2.5 usually signals a structural efficiency problem; above 5.0 typically means you're under-spending and leaving growth on the table.

How often should I run incrementality tests? Once per quarter at minimum if you're running 3+ paid channels. Sooner after major creative or audience shifts.

Should I cut TikTok if its last-click ROAS is weak? Almost never on last-click alone. Hold the budget steady and run an incrementality test first. The undervaluation pattern is real and consistent.

Is multi-touch attribution worth the cost? Useful as one signal in a stack. Not a replacement for incrementality. Treat MTA as a hypothesis generator, not a truth source.

Closing

Attribution will never be perfect. The customer journey is too fragmented, too cross-device, and too influenced by offline factors that leave no digital footprint.

But perfect attribution is not the goal. The goal is attribution that is consistently directional enough to produce better budget decisions than your competitors are making with their distorted data.

Build the measurement stack. Run incrementality tests. Manage to MER at the strategic level. Have honest conversations with clients about what the data actually shows.

The brands and agencies doing this aren't just making better decisions today. They're building the institutional knowledge and data infrastructure that compounds into a durable competitive advantage over time.

Start now. The cost of waiting is higher than it looks on any dashboard.
