Why Your Meta ROAS Looks Great But Margins Don't
Meta reports 5x ROAS while your margins compress. It's not a glitch — it's structural. Here's the three-signal attribution system I run at Impremis.
I've audited hundreds of ad accounts over the past eight years. The pattern I see more often than any other goes like this: a brand is running Meta at a reported 4x or 5x ROAS, the media buyer is confident, the client is happy — and then we pull the actual revenue from the backend and the numbers don't reconcile.
The platform says one thing. The bank account says another.
This is not a glitch. It's a structural feature of how Meta measures and reports conversions. Understanding it is not optional if you're spending serious money on paid social. At the scale I operate at Impremis — managing $250M+ in annual ad spend across 300+ brands — the gap between reported and real ROAS is one of the most expensive blind spots I fix.
Here's what's actually happening, why it matters, and how to build a measurement system that tells you the truth. (For the shorter, principle-level version, I covered the gap here.)
Image brief: Triangle diagram showing the three attribution signals (Platform-Reported, Backend Last-Click, Incrementality). alt: "Three-signal attribution diagram." caption: "The three signals I triangulate before I trust a ROAS number."
The three reasons Meta ROAS lies to you
1. Attribution windows claim conversions the ad never caused
Meta's default attribution setting is a 7-day click and 1-day view window. If someone sees your ad today and buys anything in the next 24 hours, Meta claims the purchase. If someone clicks today and buys within the next 7 days, Meta claims that too — regardless of what actually drove the decision.
The problem is that Meta is taking credit whether or not the ad caused the order.
A buyer who already had your product in their cart, scrolled past your ad, and then converted six hours later would have converted without the ad. Meta still records it as a Meta conversion. A buyer who clicked, left, got an email reminder three days later, and then bought is still a "Meta conversion" — the email gets nothing. The ad gets full credit.
This isn't fraud. It's just how view-through and click-through models work. But if you're making budget decisions on those numbers, you're optimizing against a metric that doesn't reflect incremental revenue.
2. Cross-device, cross-platform journeys break clean attribution
A customer sees your ad on their phone at lunch. They don't buy. They get home, open their laptop, search the brand, and convert through Google Shopping.
Google claims the conversion. Meta claims the conversion. Your Shopify backend records one order.
Two platforms claim a sale that produced one transaction. Your blended ROAS across both channels looks strong. Your actual revenue does not support the combined spend. This is the double-attribution problem, and it's endemic to any brand running more than one paid channel at the same time.
At small spend, the distortion is manageable. At $100K/mo and above, the compounding effect of double-counted conversions makes your real CAC essentially invisible behind the platform numbers.
3. View-through attribution inflates volume without proving causation
View-through is the most aggressive form of overclaim in the Meta ecosystem. A 1-day view conversion means: a user saw your ad for one second while scrolling, didn't stop, didn't click, and then bought within 24 hours. Meta records that as a campaign-driven conversion.
Most brands leave view-through on because it makes the numbers look better. Turning it off — or narrowing the window — reveals how many of those conversions were genuinely influenced by the ad versus how many were organic purchases that happened to occur in the same window.
I've seen accounts where disabling view-through dropped reported ROAS by 30–40% with no corresponding drop in actual revenue. The revenue was always real. The attribution wasn't.
What the gap actually costs you
The practical consequence of over-attributed ROAS is misallocated budget.
If you believe a campaign is hitting 5x when its true incremental contribution is 2.5x, you'll keep scaling a campaign that's quietly destroying margin. You'll cut the channels that are actually driving conversions because they can't compete with Meta's inflated self-reported numbers. You'll make creative and targeting decisions on signals that don't reflect reality.
This is how brands end up spending themselves into negative profitability while staring at green numbers in Ads Manager.
The brands I audit that are most exposed share one trait: they treat Meta's native reporting as their primary — and often only — source of attribution truth. No triangulation. No backend reconciliation. No incrementality testing. Just Ads Manager and a dashboard that pulls from it.
The three-signal attribution system
The fix is not to stop using Meta. The fix is to stop trusting any single source as the complete picture. The framework I run across accounts at Impremis:
Signal 1 — Platform-reported (Meta Ads Manager). Your hypothesis, not your conclusion. Useful for directional trends and creative comparison within the platform. Not useful as a standalone revenue truth.
Signal 2 — Backend last-click (GA4 / Shopify). Your sanity check. Pull revenue from the actual order management system and compare it to what platforms are claiming for the same window. The gap between Signal 1 and Signal 2 tells you the magnitude of inflation. If Meta is claiming $80K in revenue and your Shopify backend shows $55K in total store revenue for the same period, you have a 45% overclaim problem.
Signal 3 — Incrementality testing. Your truth. Holdout testing measures what would have happened to revenue if the campaign hadn't run at all. A geo holdout splits the addressable market into exposed and unexposed regions and compares conversion rates between them. The difference, adjusted for baseline differences, is the actual incremental lift the ads generated.
This is the only method that answers the question that actually matters: how much revenue did this campaign cause, not just correlate with.
Most brands under $50K/mo in spend don't need a full incrementality program. But if you're spending six figures monthly and making scaling decisions on Meta's reported ROAS alone, you're flying with one instrument in a three-instrument cockpit.
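To make the Signal 1 versus Signal 2 reconciliation concrete, here's a minimal sketch of the arithmetic in Python, using the $80K/$55K example above. The variable names and figures are illustrative placeholders, not pulled from any real export or API, and dividing reported ROAS by the overclaim multiplier is the same rough adjustment described in the reconciliation step later in this piece; a holdout (Signal 3) is still what tells you how much of that revenue the ads actually caused.

```python
# Minimal sketch: reconcile platform-reported revenue (Signal 1) against
# backend revenue (Signal 2) for the same window, then derive the
# overclaim multiplier and an adjusted ROAS.
# All figures are illustrative placeholders, not real account data.

meta_reported_revenue = 80_000   # Signal 1: what Ads Manager claims for the window
backend_revenue = 55_000         # Signal 2: total store revenue (Shopify/GA4) for the same window
meta_spend = 16_000              # Meta spend for the same window

reported_roas = meta_reported_revenue / meta_spend            # what the dashboard shows
overclaim_multiplier = meta_reported_revenue / backend_revenue
adjusted_roas = reported_roas / overclaim_multiplier          # reported ROAS deflated by the overclaim

print(f"Reported ROAS:        {reported_roas:.2f}x")
print(f"Overclaim multiplier: {overclaim_multiplier:.2f}x")
print(f"Adjusted ROAS:        {adjusted_roas:.2f}x")
```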
Attribution model comparison
| Method | What it measures | Overclaim risk | Best used for |
|---|---|:---:|---|
| Meta 7-day click + 1-day view (default) | Platform-reported conversions | High | Creative comparison within Meta |
| Meta 7-day click only | Click-driven conversions | Medium | Cleaner in-platform benchmarking |
| GA4 last-click | Session-level attribution | Medium | Cross-channel sanity check |
| Shopify backend revenue | Actual recorded orders | Low | Revenue reconciliation baseline |
| Multi-touch attribution (MTA) | Weighted path contribution | Medium | Channel contribution modeling |
| Geo holdout (incrementality) | True causal lift | Very low | Incremental revenue measurement |
Image brief: Same comparison table styled as a stacked overclaim-risk visualization. alt: "Attribution model comparison chart." caption: "What each method measures — and what it doesn't."
How to start fixing this within the week
You don't need a data science team to start building a more accurate picture. Three things you can do this week:
- Pull a revenue reconciliation report. Compare Meta-reported conversions and revenue for the last 30 days against your actual Shopify or backend order data for the same window. If Meta is claiming 1.4x the revenue that actually came in, your real ROAS is your reported ROAS divided by 1.4. That ratio is your overclaim multiplier, and it's the most useful number you don't currently track.
- Narrow your attribution window. Switch Meta campaigns from 7-day click + 1-day view to 7-day click only. Track what happens to reported ROAS. The conversions that disappeared were view-through claims you were probably crediting to Meta incorrectly. The real number is now closer to the surface.
- Run a simple geo holdout test. Pick a mid-sized market where you have meaningful spend. Pause all Meta activity in that region for two weeks while maintaining full spend everywhere else. Compare conversion rates and revenue between the holdout and active regions. The gap, adjusted for any pre-existing differences between the regions, is your incremental lift estimate (the sketch after this list shows the arithmetic). It isn't perfect, but it's directionally far more honest than Ads Manager.
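Here's a minimal sketch of that holdout comparison, assuming you can pull conversion counts and audience sizes for both region groups across a matching pre-period and the test period. The region sizes and counts are hypothetical, and the adjustment is a simple difference-in-differences, not a full geo-experiment model.

```python
# Minimal sketch: estimate incremental lift from a geo holdout.
# Conversion rates per region group, for a pre-period (ads live everywhere)
# and the test period (Meta paused in the holdout regions).
# All numbers are hypothetical placeholders.

def conv_rate(conversions: int, audience: int) -> float:
    return conversions / audience

# Pre-period: captures any baseline difference between the two region groups
pre_active = conv_rate(conversions=4_200, audience=600_000)
pre_holdout = conv_rate(conversions=2_050, audience=300_000)

# Test period: Meta paused in holdout regions, full spend elsewhere
test_active = conv_rate(conversions=4_350, audience=600_000)
test_holdout = conv_rate(conversions=1_750, audience=300_000)

# Difference-in-differences: the change in the active regions minus the change
# in the holdout regions, expressed relative to the counterfactual rate.
baseline_gap = pre_active - pre_holdout            # pre-existing regional difference
expected_active = test_holdout + baseline_gap      # what the active regions would have done without ads
incremental_rate = test_active - expected_active
incremental_lift = incremental_rate / expected_active

print(f"Estimated incremental lift from Meta: {incremental_lift:.1%}")
```

With these placeholder numbers the estimate lands around 20% lift; the point isn't the figure, it's that the comparison bakes in the pre-existing regional gap instead of crediting it to the ads.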
These three steps don't solve attribution completely. They move you from a single-signal system to a triangulated one. That alone changes the quality of every budget decision you make.
The CEO-level reality
Attribution accuracy is not just a measurement problem. At the agency level, it's a client-relationship problem.
When a brand is making budget calls on inflated ROAS, they're often scaling in ways that compress operating margin without realizing it. The moment actual profitability gets measured and the numbers don't support the platform story, trust in the agency erodes fast.
The brands I retain longest are the ones I introduce to multi-signal attribution early — even when the honest numbers look worse than what they were seeing before. An accurate 2.5x ROAS is a better foundation for scaling than a false 5x. You can make real decisions on the first number. The second one will eventually catch up with you.
Building this measurement infrastructure also changes how you hire. A media buyer who only knows how to read Ads Manager is optimizing against a fiction. The operators who become genuinely valuable are the ones who can triangulate across sources, understand correlation versus causation in conversion data, and make budget arguments that hold up against backend revenue — not just platform dashboards.
That skill set is what separates junior execution from senior media buying. It's increasingly the bar sophisticated brands set when they evaluate agency partners.
FAQ
What's a healthy overclaim multiplier to expect from Meta? Most accounts I audit run between 1.2x and 1.6x — meaning Meta is claiming 20–60% more revenue than the backend confirms. Above 1.7x is a red flag that view-through is doing too much work.
Should I just turn off view-through entirely? Tighten before you turn off. Move from 7-day click + 1-day view to 7-day click only first. If reported numbers don't collapse, the campaigns were never view-through dependent in the first place.
How often should I re-run incrementality? Quarterly at minimum if you're spending six figures monthly. Sooner if you've made significant creative or audience changes.
Is MMM a replacement for incrementality? No. Marketing mix modeling is a useful complement but answers a different question — channel-level contribution at a portfolio level. Incrementality answers campaign-level causality. Both belong in a mature stack.
Closing
Meta ROAS is a useful input. It is not a source of truth.
The brands that scale profitably are the ones that build measurement systems that force their platform numbers to reconcile with reality. They run holdouts. They pull backend data. They know their overclaim multiplier and adjust scaling decisions accordingly.
The ones that don't eventually find themselves wondering why strong ROAS numbers aren't showing up in their margins. The gap was always there. They didn't have a system designed to surface it.
Build the system. Trust the triangulation. Make decisions on numbers your bank account can confirm.
Keep reading
Pieces I've written on related topics that pair well with this one:
- Why Meta and Google Analytics Never Agree (And How to Reconcile Them) — Meta says $87K. GA4 says $41K. Both are technically correct.
- The Attribution Problem That's Costing You Real Money — Attribution in Meta Ads is distorting budget decisions across channels.
- What Split-Testing on Meta Actually Requires to Produce Statistically Valid Results — Most Meta split tests produce noise, not signal. Here's the four-condition framework for valid creative testing — and what to do with the results.
- How to Diagnose a Declining ROAS Without Touching the Campaign Structure — When Meta ROAS drops, the instinct is to change the campaign structure. That instinct is usually wrong. Here's how to diagnose the root cause first.
- The 8 Attribution Models DTC Brands Use, and the 3 That Matter — Attribution isn't one model. It's a stack of imperfect ones that check each other. Here's the system we use at $250M+ in annual spend.