Why Google Ads, GA4 and CRM numbers never match

Are you planning your PPC channel budgets by comparing Google Ads, Meta Ads, GA4, and your CRM/CMS data? Since those numbers never align, what do you report on? And how do you make sure you're optimizing for real impact?

If you think you need better tracking, cleaner UTMs, and maybe a more sophisticated analytics setup, you’re not alone. But more often than not, the issue is something else entirely. Let’s call it the attribution trap.

The main problem is that an entire generation of marketers has been taught to be data-driven. If configured correctly, analytics tools are supposed to tell you what’s working. Just follow the data.

But attribution can quickly become misleading. Without the right framework, marketers end up allocating budgets based on incomplete insights, often with damaging business consequences.

Let’s step back for a moment: Attribution allocates conversion credit to channels. That’s useful. However, attribution can’t tell you which of those conversions your channels actually caused.

Does this sound overly academic? It isn’t. Understanding this distinction is key to fixing the measurement problem. So let’s look at why attribution fails, how to triangulate your existing data, and whether incrementality testing is the right next step for your client.

Why ads, analytics, and CRM numbers never match

Before fixing anything, you need to understand that aligning ad networks, GA4, and your CRM simply isn’t possible. These systems were built for different purposes, use different methodologies, and measure different moments in the customer journey.

Your customer journey as a framework

Say someone clicked a Meta Ads ad, got retargeted on YouTube, then searched for your client’s brand on Google before converting — all within seven days.

Using the default attribution windows, Meta Ads and Google Ads will each report one conversion, so the platforms claim two in total for a single sale. GA4 and your CRM will show only one, most likely credited to Google Ads paid search.

Did Meta Ads invent that “duplicate” conversion? No. Meta Ads has no visibility into Google Ads interactions. How could it know the conversion was supposedly a duplicate?

Conversely, GA4 and your CRM will almost certainly ignore Meta Ads. Should you follow those “insights” and reallocate Meta Ads budget to Google Ads branded search? Probably not.

Structural differences that widen the gap

Unfortunately, it doesn’t stop there:

  • Attribution date: Ad platforms attribute conversions to the day the click occurred, while GA4 and CRMs typically report on the day the conversion happened. If your customer journey is long, that creates additional discrepancies.
  • Cross-device behavior: A user who clicks a Google Ads ad on mobile, returns on desktop through SEO, and converts will generate a conversion across ad, analytics, and CRM tools. So far, so good. But Google Ads and your CRM will disagree on the source because your CRM won’t have “merged” the mobile and desktop visitors into one user.
  • Privacy restrictions: Ad blockers, browser-level tracking prevention, and cookie consent banners often mean a large share of conversions isn’t measured. Sometimes ad networks fill that gap with modeled conversions, but your CRM still won’t see the actual source.

The latter two issues can be mitigated through better configuration, especially server-side tagging, offline conversion imports, and consistent UTMs. But the structural divergence remains, so you can't expect those tools to ever fully reconcile.

Your single source of truth: The attribution trap

Once teams accept that the numbers differ, the next move is often choosing a single source of truth, usually GA4 or the CRM, and sticking with it. That's where the attribution trap closes.

Every tool follows an attribution model. And whatever the model — first-click, last-click, linear, time decay, or data-driven — it’s fundamentally limited.

Every attribution model has blind spots

  • Last-click. The easiest model to understand. Also the easiest to game. It rewards the final touchpoint, typically branded search, and systematically undervalues demand generation.
  • First-click. The opposite. It rewards discovery and ignores the touchpoints that moved someone from interested to converted.
  • Linear and time-decay. They feel more balanced, right? True. But they’re also largely arbitrary. Why should equal credit go to every touchpoint? Why should recency determine value? Customer journeys don’t follow strict rules.
  • Data-driven. This model is often presented as the most sophisticated option. Trust the ad network or analytics platform to identify the attribution model that best reflects reality. In practice, it’s still a black box. If it were truly that reliable, platforms would provide more visibility into how it works.
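To make those blind spots tangible, here's a toy Python sketch that splits one conversion's credit across a hypothetical three-touch journey under each model. The journey, the half-life, and the weighting logic are illustrative assumptions, not any platform's actual implementation.

```python
# Toy attribution: split one conversion's credit across touchpoints
# under different models. Illustrative only; real platforms use far
# more complex (and opaque) logic.

def attribute(touchpoints, model, half_life_days=7):
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        # More recent touchpoints get exponentially more credit.
        raw = [0.5 ** (tp["days_before_conversion"] / half_life_days)
               for tp in touchpoints]
        total = sum(raw)
        weights = [w / total for w in raw]
    else:
        raise ValueError(f"unknown model: {model}")
    return {tp["channel"]: round(w, 3) for tp, w in zip(touchpoints, weights)}

# Hypothetical journey matching the earlier example.
journey = [
    {"channel": "Meta Ads", "days_before_conversion": 6},
    {"channel": "YouTube", "days_before_conversion": 3},
    {"channel": "Google Ads branded search", "days_before_conversion": 0},
]

print(attribute(journey, "last_click"))
print(attribute(journey, "linear"))
print(attribute(journey, "time_decay"))
```

Run all four models against the same journey and you'll see each one tell a different budget story about identical data, which is the whole problem.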

What happens depending on your source of truth

Hopefully, you now have a better grasp of the deeper issue. Attribution answers this question: Given that a conversion happened, which touchpoints should get credit? By narrowing your decision-making process to a single tool, you can’t escape the blind spots of whichever attribution model it follows.

If you rely solely on your CRM, you’ll be driven by last-click attribution, meaning you’ll mostly focus on branded search. A few years later, you may realize demand has dried up despite strong results according to your single source of truth.

On the opposite end of the spectrum, relying only on ad platform data means reporting inflated results. Think 2x, 3x, or even 4x more revenue than what the finance team actually reports. You end up increasing marketing budgets while finance tells you to stop — rightfully so.

GA4 may sound like the grown-up in the room. Not quite: it only measures the on-site portion of the customer journey. What about awareness campaigns designed to generate views or ad recall? They don't necessarily generate website visits.

Once you realize all these tools have fundamental flaws and blind spots, someone will inevitably suggest incrementality. In other words: Did this campaign cause conversions that otherwise wouldn’t have happened? Let’s look at that for a moment.

Incrementality tests: The perfect solution?

Incrementality measures the results generated because of your campaign — conversions that wouldn’t have existed without the ad. 

Think of two parallel universes: the gap between the world where the ad ran and the world where it didn’t is your incremental impact. Everything else is activity you would’ve captured anyway.

Attribution vs. incrementality

This matters more than it might seem. A significant share of reported campaign conversions — especially in retargeting and branded search — comes from people who would’ve converted regardless. They were already in-market, already familiar with your brand, and already close to a decision.

Showing them an ad and then claiming credit for the conversion is what attribution does. Incrementality testing measures how much of that credit is real.

For budget decisions, that distinction is everything.

A retargeting campaign reporting strong ROAS through attribution might deliver almost no incremental value. Cut it, and conversions barely move. Keep it, and you’re paying for the illusion of performance in that “single source of truth.”

How to test for incrementality

Incrementality testing requires experiments with two groups: one that sees the ad and one that doesn’t. Then you measure the difference in outcomes. Here are the most common approaches:

  • Geo holdout. Divide your market into comparable geographic regions, run campaigns in some while going dark in others, and measure the difference in conversions. It’s practical, reliable, and relatively easy to set up.
  • Audience holdout. Platforms like Google and Meta let you create a holdout group — a percentage of your target audience intentionally excluded from seeing ads. From there, the process mirrors geo holdout testing. One major caveat: It relies on ad platform data. That means you should only compare incrementality across campaigns within the same ad network. Otherwise, it’s pointless.
  • Time-based testing. Pause a campaign for a defined period and measure what happens to overall conversion volume. If performance holds, the campaign likely wasn’t incremental. This approach is high-risk: seasonality, competitors, and external events can blur the results. And if the campaign was incremental, you’ve just hurt performance during the test period.
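To make the geo holdout readout concrete, here's a minimal sketch of the lift calculation: compare per-capita conversions in regions where ads ran against regions that went dark. The regions, populations, and conversion counts are invented, and a real test would also need matched markets and a significance check.

```python
# Geo-holdout readout sketch. Region data is hypothetical.
# Each entry: region -> (conversions, population)
test_regions = {"north": (520, 1_000_000), "east": (410, 800_000)}
control_regions = {"south": (300, 900_000), "west": (280, 850_000)}

def conversion_rate(regions):
    conversions = sum(c for c, _ in regions.values())
    population = sum(p for _, p in regions.values())
    return conversions / population

control_rate = conversion_rate(control_regions)

# Baseline: what the test regions would have converted without ads,
# assuming the control regions reflect organic demand.
test_population = sum(p for _, p in test_regions.values())
expected_baseline = control_rate * test_population

observed = sum(c for c, _ in test_regions.values())
incremental = observed - expected_baseline
lift = incremental / expected_baseline

print(f"incremental conversions: {incremental:.0f}, lift: {lift:.1%}")
```

Everything above the baseline is the campaign's incremental impact; everything below it would have happened anyway.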

Is incrementality right for you?

If you’re running larger budgets — think roughly €1 million per month or more — you’re probably already familiar with these concepts. So let’s assume you’re operating at a smaller scale.

In that case, incrementality often isn’t actionable. Reliable tests require meaningful differences between test and control groups, which means large amounts of data. And generating that data requires significant spend.

That said, you can still use shortcuts for likely problem areas, especially branded search. Check the auction insights report to see whether competitors are heavily bidding on your brand. If they are, you probably need branded search campaigns to capture the demand you created. If they aren’t, you can likely pause those campaigns, let SEO capture the demand, and save some ad spend.

Triangulation: The actionable decision-making process

So if attribution is fundamentally flawed and incrementality is mostly reserved for top-tier advertisers, what’s left? Triangulation.

Use the tools you already have while staying aware of their inherent flaws. And educate clients or leadership teams so they don’t blindly follow a “single source of truth.” Here’s what it looks like in practice.

Start with your CRM/CMS

Those systems record actual deals and revenue. Treat every other number as an attempt to explain them.

When Google Ads and Meta Ads report a combined $50K in revenue while Shopify shows "only" $35K, Shopify reflects reality.

Better yet, it’s the only system that can reliably tell you whether a conversion came from a new or existing customer. Ad platforms don’t make that distinction reliably. That lets you measure nCAC (new customer acquisition cost), anchoring budget decisions around customers who otherwise wouldn’t have found you.
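As a sketch, nCAC is simply spend divided by the CRM's count of new customers, while the blended figure the platforms imply divides by all converters. The numbers below are hypothetical.

```python
# nCAC sketch: anchor budget decisions on new customers only,
# using the CRM's new-vs-existing split. Figures are hypothetical.

ad_spend = 20_000.0                            # total paid media spend
crm_customers = {"new": 80, "existing": 120}   # conversions per the CRM

blended_cac = ad_spend / sum(crm_customers.values())  # what platforms imply
ncac = ad_spend / crm_customers["new"]                # what actually matters

print(f"blended CAC: ${blended_cac:.2f}")  # $100.00
print(f"nCAC: ${ncac:.2f}")                # $250.00
```

The gap between the two figures is the share of spend going to customers who already knew you.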

Then superimpose your customer journey onto ad platform results. That $15K gap represents the ad platforms’ interpretation of their contribution. Your job is to understand each campaign in the context of the customer journey and identify where deduplication is needed.

For example, if you run both Demand Gen and Meta retargeting campaigns, there's almost certainly audience overlap, and the reported results will overlap too. That's when time-based incrementality tests, if feasible, can help determine which channel performs better.
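One naive way to handle that overlap is proportional deduplication: scale each platform's reported revenue so the channel mix sums to what the CRM/CMS actually recorded. The per-channel split below is hypothetical, and proportional scaling is only one rough approach, not a substitute for proper testing.

```python
# Proportional deduplication sketch. The per-channel split is invented;
# only the combined-vs-CRM gap mirrors the example above.
platform_reported = {"Google Ads": 30_000, "Meta Ads": 20_000}  # sums to $50K
crm_revenue = 35_000                                            # Shopify reality

# Shrink every channel by the same factor so totals match the CRM.
dedup_factor = crm_revenue / sum(platform_reported.values())
deduped = {ch: rev * dedup_factor for ch, rev in platform_reported.items()}

for channel, revenue in deduped.items():
    print(f"{channel}: ${revenue:,.0f}")
```

This preserves the platforms' relative ranking while anchoring absolute figures to revenue that actually exists.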

Improve on triangulation

Attribution windows: Long customer journeys make performance harder to interpret. Try segmenting campaigns around specific stages of the customer journey and adjust attribution windows and micro-conversions accordingly. Smaller attribution windows are often better at driving the right outcomes when configured properly.

Track ratios: The gaps between ad platform conversions and CRM/CMS data should remain relatively stable. Build a simple report that tracks those relationships over time. If the ratios hold, your measurement framework is stable. If they break, investigate — there may be an incrementality insight hiding there.
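A minimal version of that ratio report can live in a few lines of Python: compute the platform-to-CRM ratio per month and flag months that deviate from the baseline. The data, the baseline window, and the 15% threshold are all invented for illustration.

```python
# Ratio-tracking sketch: ad-platform conversions vs. CRM conversions
# per month. A stable ratio means the framework holds; a break is a
# signal to investigate. Data and threshold are hypothetical.

monthly = [
    ("2024-01", 130, 100),  # (month, platform conversions, CRM conversions)
    ("2024-02", 125, 98),
    ("2024-03", 132, 101),
    ("2024-04", 180, 102),  # ratio breaks here
]

ratios = [(month, platform / crm) for month, platform, crm in monthly]
baseline = sum(r for _, r in ratios[:3]) / 3  # average of the stable months

flags = []
for month, ratio in ratios:
    deviation = abs(ratio - baseline) / baseline
    status = "INVESTIGATE" if deviation > 0.15 else "ok"
    flags.append((month, status))
    print(f"{month}: ratio {ratio:.2f} ({status})")
```

A flagged month doesn't tell you what changed, only that something did: a tracking break, a consent banner update, or a genuine incrementality shift worth digging into.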

Triangulation won’t give you a single clean number. But it will give you a defensible, consistent framework for making decisions. That’s far more valuable than false precision.

Welcome to the real world

The teams that waste the most time on measurement are the ones trying to force three systems to produce the same number, or searching for the attribution model that finally feels fair.

The teams that make the best decisions accept that reality is more complex than a single source of truth and build the data skills needed to reflect that complexity.

So make sure your decision-making process is as close to reality as possible — and embrace the question marks.
