Reading cross-platform ROAS across Meta, Google, and TikTok
Reading return on ad spend across Meta, Google Ads, and TikTok together is harder than reading any single platform in isolation, and it's also where most of the interesting signal lives. Attribution windows differ. Creative behaviour differs: the same creative fatigues at different rates on different platforms. Unifying them into an honest read requires cross-platform creative fingerprinting, attribution normalisation, and awareness of how each platform's delivery engine clusters creatives differently.
This guide walks through why platform silos mislead you, the platform-specific gotchas on each of the three, the fingerprinting technique that unifies the creative view, and four normalisations that make the cross-platform read trustworthy.
Why platform silos mislead you
Three structural problems with reading each platform in isolation.
Attribution windows don't align. Meta's 7-day click, 1-day view default differs from Google Ads' 30-day click default and TikTok's 28-day click default. A conversion that attributes to Meta within a 7-day window may also be attributed by Google within a 30-day window and by TikTok within a 28-day window. Each platform claims credit for overlapping conversions because each measures in its own frame.
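A toy illustration of the overlap, with invented click and conversion timestamps: each platform checks only its own click against its own window, so all three can legitimately claim the same purchase.

```python
from datetime import datetime, timedelta

# Invented example: one purchase, three prior ad clicks.
click_times = {
    "meta":   datetime(2024, 11, 1, 10, 0),
    "google": datetime(2024, 10, 20, 9, 0),
    "tiktok": datetime(2024, 10, 25, 18, 0),
}
# Window lengths follow the defaults described above.
windows = {
    "meta":   timedelta(days=7),
    "google": timedelta(days=30),
    "tiktok": timedelta(days=28),
}
conversion_time = datetime(2024, 11, 5, 12, 0)

for platform, clicked_at in click_times.items():
    claimed = conversion_time - clicked_at <= windows[platform]
    print(platform, "claims this conversion:", claimed)  # all three print True
```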
No standard cross-platform incrementality. The gold standard for answering "what did my ads actually cause" is a randomised holdout test. Platforms run these internally — Meta's Conversion Lift, Google's Brand Lift — but they're not designed to compare across platforms. Advertisers rarely run cross-platform lift studies because the coordination overhead is high.
Creative-level aggregation is hidden. The same creative running on all three platforms shows up as three separate line items in each platform's reporting. Aggregating manually by filename or campaign name is brittle; filenames change, and the same creative often has multiple formats (16:9, 9:16, 4:5) with different names.
The net effect: summing ROAS across three siloed dashboards systematically over-counts the return on spend. And reading each platform in isolation misses the patterns that only appear when you see the full cross-platform picture.
Platform-specific gotchas
Meta Advantage+ and Andromeda
Meta's Andromeda algorithm assigns a similarity score to every creative in an account and groups similar creatives under a single internal entity ID. Budget flows to the top-performing member of each cluster. Clustered creatives above a 0.6 Andromeda similarity typically trigger delivery suppression — you may launch 20 creatives and only see five or six get meaningful impressions.
This means Meta's account-level ROAS is effectively the ROAS of your top-clustered creatives, not the average of everything you launched. Reading ROAS without understanding the clustering behaviour will mislead you about which angles and hooks actually work: what you see is whatever Andromeda happened to serve, not the full field you tested.
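Meta doesn't expose Andromeda's internals, so the sketch below is a toy reconstruction of the behaviour described above, not Meta's algorithm: creatives are greedily grouped wherever embedding similarity crosses the 0.6 threshold, and only each cluster's top scorer keeps delivery. The embeddings and the score field are assumed inputs.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def served_after_clustering(creatives, threshold=0.6):
    """Toy model: greedy single-link clustering on embedding similarity,
    then suppress everything except each cluster's top-scoring member.
    creatives: [{"id": str, "embedding": np.ndarray, "score": float}, ...]"""
    clusters = []
    for creative in creatives:
        for cluster in clusters:
            if cosine(creative["embedding"], cluster[0]["embedding"]) >= threshold:
                cluster.append(creative)
                break
        else:
            clusters.append([creative])
    return {max(cluster, key=lambda c: c["score"])["id"] for cluster in clusters}
```

Launch 20 creatives through a model like this and the served set collapses to one ID per cluster, which is the five-or-six-get-impressions pattern described above.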
Google Performance Max
PMax hides asset-level performance almost entirely. The Insights tab shows asset-group rollups but not per-asset metrics. The only way to infer individual creative contribution is via asset-group-level experiments — split your assets into test groups, run them in parallel, compare.
Worse, PMax's 30-day default attribution means the conversions for any given week's spend keep materialising for a full month afterwards. Reading PMax ROAS weekly means reading a number whose long tail is still arriving.
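One mitigation is a conversion-lag adjustment: measure, on fully matured historical cohorts, what share of eventual conversions has typically landed by day N, then scale younger cohorts up accordingly. A minimal sketch, assuming you can export cumulative conversions by click-date cohort:

```python
import numpy as np

def lag_curve(mature_cohorts):
    """mature_cohorts: equal-length arrays of cumulative conversions per
    cohort, indexed by days since click (day 0 through full maturity).
    Returns the average fraction of final conversions landed by each day."""
    return np.mean([c / c[-1] for c in mature_cohorts], axis=0)

def maturity_adjusted(observed_conversions, cohort_age_days, curve):
    """Scale an immature cohort up by the share of conversions that
    history says should have arrived by its age."""
    landed = curve[min(cohort_age_days, len(curve) - 1)]
    return observed_conversions / landed if landed > 0 else observed_conversions
```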
TikTok Smart+
Smart+ is the most aggressive of the three at delivery clustering — even more than Meta Andromeda, in practice. Creatives that pass clustering on Meta often get suppressed on TikTok because the platform's internal creative embeddings are tuned differently (TikTok weights audio and motion patterns more heavily).
Smart+ also has the shortest learning phase of the three, but the shortest fatigue cycle, too. Creatives that ran successfully for three weeks on Meta often have only a 10–12-day healthy window on TikTok.
Creative fingerprinting across platforms
The same creative running on Meta, Google, and TikTok should be matched across all three as a single logical asset. Perceptual hashing (pHash) handles visual matching: frames are converted to low-resolution fingerprints that tolerate format differences. CLIP 512-dimension embeddings handle semantic matching: two creatives that feel similar get grouped even when they differ pixel-for-pixel. For audio, MFCC feature extraction over a short window catches voiceover and music reuse.
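A minimal sketch of the three signals. The library choices (imagehash for pHash, the sentence-transformers CLIP checkpoint for 512-dim embeddings, librosa for MFCCs) and the match thresholds are our assumptions, not a canonical stack; tune the cutoffs against a labelled sample of known duplicates.

```python
import numpy as np
import imagehash
import librosa
from PIL import Image
from sentence_transformers import SentenceTransformer

clip_model = SentenceTransformer("clip-ViT-B-32")  # 512-dim image embeddings

def visual_fingerprint(frame_path):
    """Perceptual hash of a key frame; tolerant of resizing and re-encoding."""
    return imagehash.phash(Image.open(frame_path))

def semantic_embedding(frame_path):
    """CLIP embedding; similar-feeling frames land close even if pixel-different."""
    return clip_model.encode(Image.open(frame_path), normalize_embeddings=True)

def audio_fingerprint(audio_path, seconds=10.0):
    """Mean MFCC vector over a short window; catches VO and music reuse."""
    y, sr = librosa.load(audio_path, sr=22050, duration=seconds)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

def same_creative(frame_a, frame_b, phash_max=8, clip_min=0.90):
    """Match on either signal; thresholds are illustrative starting points."""
    if visual_fingerprint(frame_a) - visual_fingerprint(frame_b) <= phash_max:
        return True  # small Hamming distance between perceptual hashes
    sim = float(np.dot(semantic_embedding(frame_a), semantic_embedding(frame_b)))
    return sim >= clip_min
```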
Without cross-platform matching, your analytics treats the same creative as three separate items. You can't answer "what did creative X do across all platforms" because X doesn't exist as a unified object in your data. With matching, you can aggregate performance, compare platform behaviour for the same creative, and identify when a creative fatiguing on one platform still has runway on another.
How to normalise
Four normalisations make cross-platform ROAS comparable.
Attribution window. Pick a single comparison window (7-day click is a reasonable default) and request that view from each platform's reporting API. All three platforms support custom attribution windows in their reporting APIs. Don't compare Meta's 7-day click against Google's 30-day; you'll over-weight Google.
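The request mechanics differ per API (Meta's Insights API, for instance, takes an action_attribution_windows parameter), so the useful shared discipline is a guardrail in your own pipeline: tag every read with the window it was pulled under and refuse to blend mismatches. A sketch, with the dataclass and window strings as our own convention:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoasRead:
    platform: str
    creative_id: str
    spend: float
    revenue: float
    attribution_window: str  # e.g. "7d_click", as requested from the API

def comparable_roas(reads, window="7d_click"):
    """Return {(platform, creative_id): roas}, refusing to blend reads
    pulled under different attribution windows."""
    mismatched = [r for r in reads if r.attribution_window != window]
    if mismatched:
        raise ValueError(
            f"{len(mismatched)} reads use a different window; "
            "re-request them from the platform API before comparing."
        )
    return {(r.platform, r.creative_id): r.revenue / r.spend for r in reads}
```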
Sample size. Require a minimum of 50 conversions per creative-platform pair before drawing conclusions. Below that, natural variance dominates and ROAS differences are noise. For budget-level conclusions, 200+ conversions is safer.
Confidence. Each ROAS read should carry a confidence band: report the point estimate plus or minus its standard error. A creative showing ROAS of 3.2 ± 0.1 is materially different from one showing 3.2 ± 1.4, even though the point estimates are identical.
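The two gates compose naturally: refuse a read below the sample-size floor, and attach a standard error to everything above it. A sketch using a bootstrap over per-conversion revenue, which is one reasonable way to get the band (it ignores uncertainty in the conversion count itself):

```python
import numpy as np

MIN_CONVERSIONS = 50  # per creative-platform pair, per the gate above

def roas_with_confidence(order_values, spend, n_boot=2000, seed=0):
    """order_values: revenue per attributed conversion for one
    creative-platform pair. Returns (point_estimate, standard_error),
    or None if the read fails the sample-size gate."""
    values = np.asarray(order_values, dtype=float)
    if len(values) < MIN_CONVERSIONS:
        return None  # variance dominates; don't report a read at all
    rng = np.random.default_rng(seed)
    point = values.sum() / spend
    resampled = rng.choice(values, size=(n_boot, len(values))).sum(axis=1) / spend
    return point, float(resampled.std())  # report as point ± SE
```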
Seasonal deltas. Remove week-over-week baseline drift before comparing performance. Retail spikes in Q4; B2B slumps in July–August; consumer packaged goods ramps around product launches. Raw ROAS comparisons that don't control for seasonality will attribute market movement to creative performance, and you'll chase creative changes that were actually market changes.
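A simple way to strip the drift is to index each week against a trailing baseline of the account's own recent performance, on the assumption that account-level movement approximates market movement. A sketch with pandas:

```python
import pandas as pd

def deseasonalise(weekly_roas: pd.Series, baseline_weeks: int = 8) -> pd.Series:
    """weekly_roas: account-level ROAS indexed by week. Divide each week by
    a trailing-median baseline so market drift is removed before creatives
    are compared; values above 1.0 beat the account's recent norm."""
    baseline = weekly_roas.rolling(baseline_weeks, min_periods=4).median().shift(1)
    return weekly_roas / baseline
```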
Common misreads
"Creative X is winning on Meta." Often it's winning in its cluster, not overall. The cluster may have been under-served by other accounts, which is a delivery artefact, not a creative quality signal.
"Creative Y failed on TikTok but worked on Meta." Might be audio-off vs audio-on fit, or duration fit (a 30-second ad for Meta Feed often fails at TikTok's short-form attention profile), not creative quality. Before concluding the creative is bad, verify the format and audio match the platform.
"Account-level ROAS is X." Nearly meaningless without creative-level attribution. An account averaging ROAS of 2.5 could be one creative doing 6.0 and nineteen doing 2.0 — the aggregate hides where the value lives. Always decompose to creative level.
"Cross-platform ROAS is the sum of each platform's claimed ROAS." It isn't. Each platform over-claims its share. True incremental cross-platform ROAS is always lower than the sum, sometimes by 20–40%.
Where Omniscia fits
Omniscia Nexus runs the creative fingerprinting, attribution normalisation, and ROAS unification described above across Meta, Google Ads, and TikTok together. Cortex-validated reads carry confidence bands and minimum-sample-size gates inline, so you never draw conclusions from noise. Fatigue risk is read at creative level across all three platforms, so fatigue on one platform doesn't trigger false fatigue signals on another.