What 4 AI search experiments reveal about attribution and buying decisions


AI search influence didn’t show up in our SEO reports or AI prompt tracking tools. It showed up in sales calls.

“Found you via Grok, actually,” a new lead said.

That comment stopped us cold. We hadn’t tried to rank in Grok. We weren’t tracking it. Yet it was influencing how buyers discovered and evaluated us.

That disconnect kept appearing in client conversations, too. Everyone was curious about AI search, but no one trusted the data. 

Teams wanted visibility in ChatGPT and other AI tools, then asked the same question: “Why invest in a channel that doesn’t show up cleanly in attribution?”

To answer that, we ran controlled experiments using assets we could fully control – an agency website, personal sites, an ecommerce brand, and purpose-built test domains.

The goal wasn’t to win AI rankings. It was to understand what still matters once AI enters the decision process:

  • Does AI search change what people buy, or just where brands appear?
  • Can something influence revenue without ever appearing in analytics?
  • Does AI recommendation affect performance across other channels?

Why we ran the experiments

Most AI search conversations fixate on surface signals: brand mentions, citations, or screenshots from AI prompt tracking tools.

Search has always had one job: help people make a decision.

We wanted to know if AI search performed the same job and actually changed commercial outcomes.

AI systems now operate at the stage where buyers compare options, shortlist providers, and reduce risk.

If AI mattered, it had to show up at the moment of decision.

On measurement limits: 

  • We didn’t rely on API data because API responses often differ from what real users see. Instead, we observed live interfaces across ChatGPT, Perplexity, Gemini, and Google AI Overviews. 
  • We used prompt tracking to spot patterns, not to declare absolute wins.
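
To make those live-interface observations comparable across weeks and tools, it helps to log each one in a consistent shape. The sketch below is a minimal illustration in Python; the field names and CSV layout are assumptions, not any tracking tool’s schema.

    import csv
    from dataclasses import dataclass, asdict
    from datetime import date

    @dataclass
    class Observation:
        observed_on: str   # date the prompt was run in the live UI
        tool: str          # "ChatGPT", "Perplexity", "Gemini", "AI Overviews"
        prompt: str        # the exact query a human typed
        brands_seen: str   # semicolon-separated brands shown in the answer

    def log_observation(path: str, obs: Observation) -> None:
        """Append one manually collected observation to a running CSV."""
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(asdict(obs)))
            if f.tell() == 0:  # brand-new file: write the header row first
                writer.writeheader()
            writer.writerow(asdict(obs))

    log_observation("observations.csv", Observation(
        observed_on=str(date.today()),
        tool="ChatGPT",
        prompt="best SEO agency Sydney",
        brands_seen="Brand A;Brand B;Brand C",
    ))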

Experiment 1: Self-promotional ‘best of’ lists on your own website

A simple tactic became popular over the past year:

  • Create a “best X” list on your site.
  • Put yourself at the top.
  • Let AI systems pick up the list.

I’ve seen agencies do this locally and felt conflicted about it.

It wasn’t spam. But it relied on a blind spot – LLMs struggle to separate independent rankings from self-written ones.

Around the same time, Ahrefs published a large study that helped explain why this works. Glen Allsopp analyzed ChatGPT responses across hundreds of “best X”-style prompts and found that “best” list posts were the most commonly cited page type.

Two things from the study stood out:

  • Format: “Best” list posts were cited even when brands ranked themselves first.
  • Freshness: The most-cited lists had been updated recently.

I could have tested these observations on StudioHawk. Instead, I did it on my personal brand website to manage the risk. 

I published a list of the “Best SEO agencies in Sydney” and included my own website among the entries to test whether AI would “take the bait,” so to speak.

Within two weeks, LawrenceHitches.com appeared across AI tools for “best SEO agency Sydney” style searches:

Best SEO agencies - Sydney

The speed was surprising – traditional SEO rarely moves that fast.

If visibility appears this easily, then visibility alone can’t mean much, so I tested it again.

Experiment 2: Self-promotion of a fake business

The first test may have piggybacked off the already established StudioHawk brand, so I decided to run the same self-promotion test on a fake website.

We used a basic landscaping site built only for SEO and AI testing and published the same type of page, a “best X” list.

This time, the topic was “best landscapers in Melbourne”:

Best landscapers in Melbourne

Within two weeks, the list appeared in AI responses again. The result repeated almost exactly.

If a brand-new test site can surface this fast, then “appeared in AI” doesn’t mean much on its own.

Visibility vs. trust

These two experiments showed one thing clearly: LLMs are still easy to influence at the surface level.

I ran these tests back in August 2025, but the same pattern still appears today.

A “best SEO agency Sydney” search run in January 2026 shows the same list-driven results:

Top SEO agencies Sydney

This creates a real conflict for brands.

On one side, the data says yes – the Ahrefs research shows “Best X” pages attract citations. Large brands like Shopify, Slack, and HubSpot publish self-ranked lists without obvious damage to rankings or AI visibility.

On the other side is buyer trust.

As Wil Reynolds put it, listing yourself first on your own site doesn’t build confidence with buyers. That’s the tension.

When bullish founders ask for the secret sauce to appear in ChatGPT, I’m blunt. List-based “best of X” pages that rank the author first have been a fast way to surface in some AI results.

That doesn’t work everywhere, and it’s unlikely to hold up long term.

Dig deeper: Google may be cracking down on self-promotional ‘best of’ listicles

If a landscaping site with no reputation can surface this quickly, then appearing in AI means very little on its own.

Why prompt tracking can’t be a success metric

A lot of money is flowing into AI prompt tracking tools. Clients ask for them constantly. We use them too, but with a clear warning.

I wouldn’t make major decisions based on screenshots or Reddit threads about where a brand appears in ChatGPT.

Brand overlap between API outputs and real user sessions was as low as 24%, according to recent research from Surfer SEO comparing tracking APIs with scraped user experiences.

That means three times out of four, what the API told you was happening wasn’t what the user was actually seeing.

If a brand can appear in a screenshot but disappear in a real user session, then appearance alone isn’t a metric.
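
The underlying check is simple set overlap, so it’s easy to run against your own data. A toy example with invented brand sets:

    def brand_overlap(api_brands: set[str], session_brands: set[str]) -> float:
        """Share of API-reported brands that also appeared in the live session."""
        if not api_brands:
            return 0.0
        return len(api_brands & session_brands) / len(api_brands)

    api = {"brand a", "brand b", "brand c", "brand d"}  # invented
    live = {"brand a", "brand e", "brand f"}            # invented
    print(f"overlap: {brand_overlap(api, live):.0%}")   # -> overlap: 25%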

We stopped asking if we showed up.

Instead, we started asking, “Did this change how buyers behaved?”

  • Did leads reference AI tools without prompting?
  • Did sales calls skip education?
  • Did the speed of buying change?
  • Did price resistance soften?

These signals were harder to collect.
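
They don’t need heavy tooling, though. A per-lead record can capture them; this sketch and its field names are illustrative shorthand, not our actual CRM setup.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LeadSignals:
        lead_id: str
        mentioned_ai_unprompted: bool  # e.g., "Found you via Grok, actually"
        skipped_education: bool        # call went straight to scope and fit
        price_resistance: bool         # pushed back on pricing
        days_to_close: Optional[int]   # filled in once the deal resolves

    # Invented example leads
    leads = [
        LeadSignals("L-014", True, True, False, 18),
        LeadSignals("L-015", False, False, True, 31),
    ]

    ai_influenced = [l for l in leads if l.mentioned_ai_unprompted]
    print(f"{len(ai_influenced)} of {len(leads)} leads referenced AI unprompted")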

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Experiment 3: Kadi and the limits of digital PR alone

Kadi, a luggage ecommerce brand we invested in, gave us a way to test whether AI results were actually affecting buyer behavior.

Running tests on Kadi has been an eye-opening experience for two reasons: 

  • It showed us the difference between running an agency and running ecommerce.
  • It forced us to become our own client.

To move fast, we led with digital PR.

Kadi’s SEO foundation was solid but not perfect. We wanted to see how far off-site mentions could push SEO and AI visibility without heavy technical work or a polished site structure.

We conducted a large number of creative data campaigns and product placements, including:

  • Travel data studies: “Over-touristed destinations,” “Hidden fees,” “Best time to fly,” and “Happy Hour at 30,000 ft.”
  • Advisory pieces: “Airport cybersecurity” and “duty-free shopping” guides
  • Product and feature focus: “Kadi kids carry-on adventure,” “cloud check-in features,” and inclusions in “best suitcase round-ups.”

It worked:

  • Coverage landed.
  • Authority grew without the need for “traditional SEO.”
  • We saw temporary keyword spikes and traffic boosts.

Kadi - Digital PR efforts

But there was a catch: Digital PR alone wasn’t enough to close the gap with competitors. It created quick traction in search results, but it didn’t resolve the underlying issues.

After launch, SEO foundation work became the priority.

Then, Black Friday made the reality obvious. A customer found Kadi through ChatGPT on a “kids carry-on” query.

We watched the session on the day of the query and traced the pathway:

  • They didn’t buy immediately.
  • They checked the shipping policy.
  • They browsed the range.
  • They added three additional products.
  • They debated colour (olive over pink).
  • Attribution later showed Instagram as the source.

That order was the largest of the Black Friday period.

On paper, AI did nothing. In reality, it helped shape the decision.

Digital PR can get you visibility spikes, but it doesn’t address the whole picture. 

While AI traffic does convert, the attribution is inconsistent.

Experiment 4: StudioHawk 

Across 2024 and 2025, StudioHawk underwent a full website rebrand and migration from WordPress to HubSpot.

Our own site sat at the bottom of the priority list for years. It was always the project we would get to later. 

Finally, we paused other priorities and rebuilt the entire site.

The work started in 2023, before terms like “GEO” existed. We were focused only on rebuilding service pages, social proof, and user experience end to end.

After launch, rankings improved and continue to grow.

Studiohawk post-rebrand performance

In 2025, SEO became the agency’s strongest channel by efficiency. It drove 65% of inbound leads and close to 60% of new revenue.

Agency's strongest channel by efficiency

Between July and December 2025, AI search leads began to appear more often:

AI search leads appeared

Initially, these were “Oh, cool, we got a lead from AI” moments around the office.

Sales calls started skipping early education. New leads arrived already aligned on fit and expectations.

Over time, we saw that:

  • SEO inbound leads: Averaged 29 days to close.
  • AI search leads: Closed in roughly 18 days.

That 10-day gap mattered.

It meant less time educating, fewer scope objections, lower price sensitivity, and higher confidence earlier in the process.
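
The comparison itself is straightforward once deals are tagged by source. A toy version with invented numbers:

    from statistics import mean
    from typing import Optional

    # Hypothetical CRM export rows; only source and days_to_close matter here.
    deals = [
        {"source": "seo",       "days_to_close": 27},
        {"source": "seo",       "days_to_close": 31},
        {"source": "ai_search", "days_to_close": 17},
        {"source": "ai_search", "days_to_close": 19},
    ]

    def avg_close(deals: list, source: str) -> Optional[float]:
        days = [d["days_to_close"] for d in deals if d["source"] == source]
        return mean(days) if days else None

    print("SEO:", avg_close(deals, "seo"))              # -> 29
    print("AI search:", avg_close(deals, "ai_search"))  # -> 18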

Within the first year, AI-influenced conversations contributed over $100,000 in closed revenue from 20+ leads, including deals with direct attribution from tools like ChatGPT, Perplexity, and Grok.

The blind spot remains attribution paths such as Instagram, direct, or organic, where AI influenced the decision but didn’t appear in reporting (as seen in the Kadi example).

Where direct AI attribution existed, buyers were more prepared. That preparedness shortened sales cycles and lifted revenue.

AI compresses consideration

We started by asking where people would search next.

Our key finding? AI search doesn’t replace discovery. It compresses the consideration phase.

AI compresses consideration

Consideration is that messy middle where buyers reduce risk, shortlist vendors, compare tradeoffs, and ask, “Who should I trust?”

AI systems answer these questions before a buyer ever clicks a link.

It means your website no longer carries the full load – AI summaries and third-party mentions do the pre-selling for you.

This is the shift we now describe as the new consideration era.

We’ve moved from a straight funnel to a complex, AI-influenced pathway where consensus is key.

Because this happens off-site, last-click attribution is broken. 

A buyer might use ChatGPT to create a shortlist but convert later via direct search.
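
One partial fix is to at least classify the referrers you can see. AI tools that pass a referrer tend to show up under hostnames like chatgpt.com or perplexity.ai. Here’s a rough classifier; the domain list is an assumption based on commonly reported referral hosts, not an official registry.

    # Hostname list is an assumption, not an official registry.
    AI_REFERRERS = {
        "chatgpt.com", "chat.openai.com", "perplexity.ai",
        "gemini.google.com", "copilot.microsoft.com",
    }

    def classify(referrer_host: str) -> str:
        host = referrer_host.lower().removeprefix("www.")
        if host in AI_REFERRERS:
            return "ai_search"
        if host == "":
            return "direct"  # where AI-influenced buyers often resurface
        return "other"

    print(classify("chatgpt.com"))  # -> ai_search
    print(classify(""))             # -> direct

Even then, a buyer who shortlists in ChatGPT and converts via direct search stays invisible, as the Kadi order showed.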

Where traditional SEO still fits

Strong SEO metrics were a constant across all our experiments, but we’ve stopped viewing them as the primary driver of value:

  • Keyword rankings confirm search engines understand your entity.
  • However, those high rankings don’t guarantee effective pre-selling.

Traditional SEO became a supporting signal – proof that the foundation is sound, rather than the end goal.

What this means for brands

After running a variety of AI search experiments, here’s what I think brands should focus on.

1. Measure where AI influence actually lands

Stop obsessing over prompt appearances (e.g., citations, mentions). These are shiny objects, but they fluctuate too easily. 

Instead, measure:

  • Sales velocity (Did deals close faster?)
  • Lead quality (Did leads arrive needing less education?)
  • Value per lead (Did price friction ease?)

2. Make clarity more important than creativity

AI hates vagueness. Build pages that make it clear what you do and who it’s for.

3. Change the content to help people decide what to buy

Focus on content that answers comparison, risk, and pricing questions. This makes a bigger difference than general category explanations.

4. Make entity consistency a crucial factor

Inconsistent signals make buyers second-guess you. Consistency builds confidence.

Check that your website, reviews, and digital PR all describe your brand the same way.
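
A spot check can be as simple as fetching the pages that describe you and confirming one canonical descriptor appears on each. The URLs and descriptor below are placeholders, not a real implementation:

    import urllib.request

    # Placeholders: swap in your real descriptor and the pages that mention you.
    CANONICAL = "an SEO agency specialising in ecommerce brands"
    PAGES = [
        "https://example.com/about",
        "https://example.com/press",
    ]

    for url in PAGES:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        status = "consistent" if CANONICAL.lower() in html.lower() else "CHECK"
        print(f"{status}: {url}")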

AI search compresses consideration, not discovery

In the end, the results converged across every experiment. What closed in our sales pipeline shared the same traits:

  • Clear intent.
  • Tight positioning.
  • Consistent signals of authority.

AI search isn’t replacing basic SEO. Instead, it exposes weak positioning faster than traditional search ever did.

What does that mean? 

Simply put, AI speeds up decisions that were already forming.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior
