GEO myths – This article may contain lies

Less than 200 years ago, scientists were ridiculed for suggesting that hand washing might save lives.

In the 1840s, it was shown that hygiene reduced death rates, but the underlying explanation was missing.

Without a clear mechanism, adoption stalled for decades, leading to countless preventable deaths.

The joke of the past becomes the truth of today. The inverse also holds: follow misleading guidance, and today’s “truth” becomes tomorrow’s joke.

Bad GEO advice (I don’t like this acronym, but will use it because it seems to be the most popular) will not literally kill you. 

That said, it can definitely cost money, cause unemployment, and lead to economic death.

Not long ago, I wrote about a similar topic and explained why unscientific SEO research is dangerous and acts as a marketing instrument rather than real scientific discovery. 

This article is a continuation of that work and provides a framework to make sense of the myths surrounding AI search optimization.

I will highlight three concrete GEO myths, examine whether they are true, and explain what I would do if I were you.

If you’re pressed for time, here’s a TL;DR:

  • We fall for bad GEO and SEO advice because of ignorance, stupidity, cognitive biases, and black-and-white thinking.
  • To evaluate any advice, you can use the ladder of misinference – statement vs. fact vs. data vs. evidence vs. proof.
  • You become more knowledgeable if you seek dissenting viewpoints, consume with the intent to understand, pause before you believe, and rely less on AI.
  • You currently:
    • Don’t need an llms.txt.
    • Should leverage schema markup even if AI chatbots don’t use it today.
    • Have to keep your content fresh, especially if it matters for your queries.

Before we dive in, I will recap why we fall for bad advice.

Recap: Why we fall for bad GEO and SEO advice

The reasons are:

  • Ignorance, stupidity, and amathia (voluntary stupidity).
  • Cognitive biases, such as confirmation bias.
  • Black-and-white thinking.

We are ignorant because we don’t know better yet. We are stupid if we can’t know better. Both are neutral. 

We suffer from amathia when we refuse to know better, which is why it’s the worst of the three.

We all suffer from biases. When it comes to articles and research, confirmation bias is probably the most prevalent. 

We refuse to see flaws in our own views and instead seek out flaws in rival theories, often with great effort, or simply remain blind to those theories altogether.

Lastly, we struggle with black-and-white thinking. Everything is this or that, never something in between. A few examples:

  • Backlinks are always good.
  • Reddit is always important for AI search.
  • Blocking AI bots is always stupid.

The truth is, the world consists of many shades of gray. This idea is captured well in the book “May Contain Lies” by Alex Edmans.

He says something can be moderate, granular, or marbled:

  • Backlinks are not always good or important, as they lose their impact after a certain point (moderate).
  • Reddit isn’t always important for AI search if it’s not cited at all for the relevant prompt set (granular).
  • Blocking some AI bots isn’t always stupid because, for some business models and companies, it makes perfect sense (marbled).

The first step to getting better is always awareness. And we are all sometimes ignorant, (voluntarily or involuntarily) stupid, biased, or stuck in black-and-white thinking.

Let’s get more practical now that we know why we fall for bad advice.

Dig deeper: Most SEO research doesn’t lie – but doesn’t tell the truth either

How I evaluate GEO (and SEO) advice and protect myself from being stupid

One way to save yourself is the ladder of misinference, once again borrowing from Edmans’ book. It looks like this:

The ladder of misinference

To accept something as proof, it needs to climb the rungs of the ladder. 

On closer inspection, many claims fail at the last rung when it comes to evidence versus proof. 

To give you an example:

  • Statement: “User signals are an important factor for better organic performance.”
  • Fact: Better CTR performance can lead to better rankings.
  • Data: You can directly measure this on your own site, and several experiments showed the impact of user signals long before it became common knowledge.
  • Evidence: There are experiments demonstrating causal effects, and a well-known portion of the 2024 Google leak focuses on evaluating user signals.
  • Proof: Court documents in Google’s DOJ monopoly trial confirmed the data and evidence, making this universally true.

Fun fact: Rand Fishkin and Marcus Tandler both said that user signals matter many years ago and were laughed at, much like scientists in the 1800s. 

At the time, the evidence wasn’t strong enough. Today, their “joke” is now the truth.

If I were you, here’s what I would do:

  • Seek dissenting viewpoints: You only truly understand a position when you can argue in its favor. The best defense of your own argument is to steelman the opposing one, and to do that, you need to fully understand the other side.
  • Consume with the intent to understand: Too often, we listen to reply, which means we don’t listen at all and instead converse with ourselves in our own heads. We focus on our own arguments and what we will say next. To understand, you need to listen actively.
  • Pause before you share and believe: False information is highly contagious, so sharing half-truths or lies is dangerous. You also shouldn’t believe something simply because a well-known person said it or because it’s repeated over and over again.
  • Don’t use AI to summarize (perhaps controversial): AI has significant flaws when it comes to summarization. For example, prompts that ask for brief summaries increase hallucinations, and source material can put a veil of credibility and trust over the response.

We will see why the last point is a big problem in a second.

The prime example: Blinding AI workslop

I decided against finger-pointing, so there is no link or mention of who this is about. With a bit of research, you might find the example yourself.

This “research” was promoted in the following way:

  • “How AI search really works.”
  • Requiring a time investment of weeks.
  • 19 studies and six case studies analyzed.
  • Validated, reviewed, and stress-tested.

To quote Edmans:

  • “It’s not for the authors to call their findings groundbreaking. That’s for the reader to judge. If you need to shout about the conclusiveness of your proof or the novelty of your results, maybe they’re not strong enough to speak for themselves. … It doesn’t matter what fancy name you give your techniques or how much data you gather. Quantity is no substitute for quality.”

Just because something took a long time does not mean the results are good. 

Just because the author or authors say so does not mean the findings are groundbreaking.

According to the HBR, AI workslop is:

  • “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”

I don’t have proof this work was AI-generated. It’s simply how it felt when I read it myself, with no skimming or AI summaries. 

Here are a few things that caught my attention:

  • It doesn’t deliver what it claims. It purports to explain how AI search works, but instead draws false correlations between studies that analyzed something different from what the analysis claims.
  • Reported sample sizes are inaccurate.
  • Studies and articles are mishmashed.
  • One source is a chain of “someone said that someone said that someone said.”
  • Cited research didn’t analyze or conclude what is claimed in the meta-analysis.
  • The “correlation coefficient” isn’t a correlation coefficient, but a weighted score (the sketch after this list shows the difference).
  • To be specific, it misdates the GEO study as 2024 instead of 2023 and claims the research “confirms” that schema markup, lists, and FAQ blocks significantly improve inclusion in AI responses. A review of the study shows that it makes no such claims.
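
To make that last distinction concrete, here is a minimal Python sketch with made-up toy numbers (not the meta-analysis’ data) showing that a Pearson correlation coefficient and a weighted score measure entirely different things:

    # Toy illustration only: a Pearson correlation coefficient and a weighted
    # score are not interchangeable. All numbers below are invented.
    visibility = [0.2, 0.5, 0.9, 0.4]   # hypothetical AI-visibility values
    has_factor = [0, 1, 1, 0]           # hypothetical presence of a page factor
    weights = [0.4, 0.3, 0.2, 0.1]      # arbitrary importance weights

    # Pearson r: normalized covariance, always between -1 and 1.
    n = len(visibility)
    mean_v = sum(visibility) / n
    mean_f = sum(has_factor) / n
    cov = sum((v - mean_v) * (f - mean_f) for v, f in zip(visibility, has_factor)) / n
    var_v = sum((v - mean_v) ** 2 for v in visibility) / n
    var_f = sum((f - mean_f) ** 2 for f in has_factor) / n
    pearson_r = cov / (var_v ** 0.5 * var_f ** 0.5)

    # Weighted score: just a weighted average. Its scale depends on the weights
    # and says nothing about how two variables move together.
    weighted_score = sum(w * v for w, v in zip(weights, visibility)) / sum(weights)

    print(f"Pearson r:      {pearson_r:.2f}")
    print(f"Weighted score: {weighted_score:.2f}")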

This analysis looks convincing on the surface and masquerades as good work, but on closer inspection, it crumbles under scrutiny.

Disclaimer: I specifically wanted to highlight one example because it reflects everything I wrote about in my last article and serves as a perfect continuation. 

This “research” was shared in newsletters, news sites, and roundups. It got a lot of eyeballs.

Let’s now take a look at what are, in my opinion, the three most pervasive recommendations for influencing your rate of AI citations.

Dig deeper: Forget the Great Decoupling – SEO’s Great Normalization has begun

The most common GEO myths: Claims vs. reality

‘Build an llms.txt’

The claims for why this should help:

  • AI chatbots have a centralized source of important information to use for citations.
  • It’s a lightweight file that makes it easier for AI crawlers to evaluate your domain.

When viewed through the ladder of misinference, the llms.txt claim is a statement. 

Some parts are factual – for example, Google and others crawl these files, and Google even indexes and ranks them for keywords – and there is data to support that. 

However, there is no data or evidence showing that llms.txt files boost AI inclusion. There is certainly no proof.

The reality is that llms.txt is a proposal from 2024 that gained traction largely because it was amplified by influencers. 

It was repeated often enough to become one of the more tiring talking points in black-and-white debates.

One side dismisses it entirely, while the other promotes it as a secret holy grail that will solve all AI visibility problems.

The original proposal also stated:

  • “We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with .md appended.”

This approach would lead to internal competition, duplicate content, and an unnecessary increase in total crawl volume. 

The only scenario where llms.txt makes sense is if you operate a complex API that AI agents can meaningfully benefit from.

(There’s a small experiment showing that neither llms.txt nor .md files have an impact on AI citations.)

So, if I were you, here’s what I would do:

  • On a quarterly basis:
    • Check whether companies such as OpenAI, Anthropic, and Google have openly announced support.
    • Review log files to see how crawl volume to llms.txt changes over time (a log-parsing sketch follows this list). You can do this without providing an llms.txt file.
  • If it is officially supported, create one according to published documentation guidelines.
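
For the log review, a minimal Python sketch like the one below is enough. The log path and the bot names are assumptions – a standard combined access log and the crawler user agents documented by OpenAI, Anthropic, Perplexity, and Google – so adjust both to your own stack:

    # Minimal sketch: count requests to /llms.txt per crawler from a server
    # access log. "access.log" and the bot list are placeholders.
    from collections import Counter

    AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

    hits = Counter()
    with open("access.log", encoding="utf-8", errors="ignore") as log:
        for line in log:
            if "/llms.txt" not in line:
                continue
            for bot in AI_BOTS:
                if bot in line:
                    hits[bot] += 1
                    break
            else:
                hits["other"] += 1

    for agent, count in hits.most_common():
        print(f"{agent}: {count}")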

At the moment, no one has evidence – or proof – that an llms.txt meaningfully influences your AI presence.

‘Use schema markup’

The claims for why this should help:

  • Machines love structured data.
  • Generally, the advice “make it as easy as possible” holds true.
  • Microsoft said so.

The last point is egregious. No one has a direct quote from Fabrice Canel or the exact context in which he supposedly said this.

For this recommendation, there is no solid data or evidence.

The reality is this:

  • For training
    • Text is extracted and HTML elements are stripped (see the extraction sketch after this list).
    • Tokenization before pretraining breaks up any coherent markup that makes it through to this step.
    • The existence of LLMs is based on structuring unstructured content.
    • They can handle schema and write it because they are trained to do so.
    • That doesn’t mean your individual markup plays a role in the knowledge of the foundation model.
  • For grounding
    • There is no evidence that AI chatbots access schema markup.
    • Correlation studies show that websites with schema markup have better AI visibility, but there are many rival theories that could explain this.
    • Recent experiments (including this and this) showed the opposite. The tools AI chatbots can access don’t use the HTML.
    • I recently tested this in Perplexity Comet. Even with an open DOM, it hallucinated schema markup on the page that didn’t match what was actually there.
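
To illustrate the extraction point from the training list above, here is a toy Python sketch (using beautifulsoup4, with an invented HTML snippet) of a typical extraction step that drops script and style tags – and with them any JSON-LD – before the plain text is kept. Real training pipelines differ, so treat this purely as an illustration:

    # Toy illustration of a common text-extraction step: script/style tags
    # (including JSON-LD schema) are removed before the plain text is kept.
    # Requires beautifulsoup4; the HTML snippet is made up for the example.
    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <script type="application/ld+json">{"@type": "Article", "headline": "..."}</script>
      <h1>GEO myths</h1>
      <p>Schema markup lives in a script tag.</p>
    </body></html>
    """

    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # typical cleanup before text extraction

    print(soup.get_text(separator=" ", strip=True))
    # -> "GEO myths Schema markup lives in a script tag."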

Also, when someone says they use structured data, that can – but does not have to – mean schema. 

All schema is structured data, but not all structured data is schema. In most cases, they mean proper HTML elements such as tables and lists. 

So, if I were you, here’s what I would do:

  • Use schema markup for supported rich results (a minimal markup sketch follows this list).
  • Use all relevant properties in your schema markup.
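
As a minimal sketch of what that can look like, here is Python that assembles Article markup with the relevant properties filled in. Every value is a placeholder – swap in your own page data and validate it against Google’s rich results documentation:

    # Minimal sketch: build Article markup with relevant properties.
    # All values are placeholders.
    import json

    article_schema = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "GEO myths - what the evidence actually says",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "publisher": {"@type": "Organization", "name": "Example Publisher"},
        "datePublished": "2025-01-15",
        "dateModified": "2025-06-01",  # keep in sync with on-page date and sitemap lastmod
        "image": "https://www.example.com/images/geo-myths.png",
        "mainEntityOfPage": "https://www.example.com/geo-myths",
    }

    # Paste the output into a <script type="application/ld+json"> tag on the page.
    print(json.dumps(article_schema, indent=2))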

You might ask why I recommend this. To me, solid schema markup is a hygiene factor of good SEO. 

Just because AI chatbots and agents don’t use schema today doesn’t mean they won’t in the future.

“One could say the same for llms.txt.” That’s true. However, llms.txt has no SEO benefits.

Schema markup doesn’t directly change how AI systems process your content. 

Instead, it helps improve signals they frequently look at, such as search rankings – both in the top 10 and beyond, for fan-out queries.

‘Provide fresh content’

The claims for why this should help:

  • AI chatbots prefer fresh content.
  • Fresh content is important for some queries and prompts.
  • Newer or recently updated content should be more accurate.

Compared with llms.txt and schema markup, this recommendation stands on a much more solid foundation in terms of evidence and data.

The reality is that foundation models only contain content up to their training cutoff. 

After digesting that information, they need fresh content, which means cited sources, on average, have to be more recent.

If freshness is relevant to a query – OpenAI, Anthropic, and Perplexity use freshness as a signal to determine whether to use web search – then finding fresh sources matters.

There is research supporting this hypothesis from Ahrefs, Generative Pulse, and Seer Interactive.

More recently, a scientific paper also supported these claims.

A few words of caution about that paper:

  • The researchers used API results, not the user interface (see the sketch after this list). Results differ because of chatbot system prompts and API settings. Surfer recently published a study showing how large those differences can be.
  • Asking a model to rerank is not how the model or chatbot actually reranks results in the background.
  • The way dates were injected was highly artificial, with a perfect inverse correlation that may exaggerate the results.
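
To see why the first caveat matters, here is a minimal sketch of an API call using the openai Python package (the model name, prompt, and temperature are placeholders): everything the model sees is supplied by the caller, whereas the chatbot UI adds its own, non-public system prompt and settings on top:

    # Sketch: an API call only contains the system prompt and settings the
    # caller supplies, so its output can differ from the chatbot UI.
    # Requires the openai package; model and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a research assistant."},  # our prompt, not the UI's
            {"role": "user", "content": "Rank these three sources by freshness: ..."},
        ],
        temperature=0,  # chosen by the caller; the UI uses its own settings
    )
    print(response.choices[0].message.content)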

That said, this recommendation appears to have the strongest case for meaningfully influencing AI visibility and increasing citations.

So, if I were you, here’s what I would do:

  • Add a relevant date indicating when your content was last updated.
  • Keep update dates consistent (a consistency-check sketch follows this list):
    • On-page.
    • Schema markup.
    • Sitemap lastmod.
  • Update content regularly, especially for queries where freshness matters. Fan-out queries from AI chatbots often signal freshness when a date is included.
  • Never artificially update content by changing only the date. Google stores up to 20 past versions of a web page and can detect manipulation.
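
A small consistency check along the lines of the sketch below can catch drift between these dates. The URLs are placeholders, and it assumes requests, beautifulsoup4, and lxml are installed:

    # Sketch: compare a page's schema dateModified with its sitemap <lastmod>.
    # URLs are placeholders; requires requests, beautifulsoup4, and lxml.
    import json
    import requests
    from bs4 import BeautifulSoup

    PAGE_URL = "https://www.example.com/geo-myths"
    SITEMAP_URL = "https://www.example.com/sitemap.xml"

    # Date from the page's JSON-LD markup.
    page = BeautifulSoup(requests.get(PAGE_URL, timeout=10).text, "html.parser")
    schema_date = None
    for tag in page.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict) and data.get("dateModified"):
            schema_date = data["dateModified"][:10]

    # Date from the sitemap.
    sitemap = BeautifulSoup(requests.get(SITEMAP_URL, timeout=10).text, "xml")
    lastmod = None
    for url in sitemap.find_all("url"):
        if url.loc and url.loc.text.strip() == PAGE_URL and url.lastmod:
            lastmod = url.lastmod.text.strip()[:10]

    print("schema dateModified:", schema_date)
    print("sitemap lastmod:    ", lastmod)
    print("consistent" if schema_date == lastmod else "mismatch - align the dates")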

In other words, this one appears to be legitimate.

Dig deeper: The rise of ‘like hat’ SEO: When attention replaces outcomes

Escaping the vortex of AI search misinformation

We have to avoid shoveling AI search misinformation into the walls of our industry. 

Otherwise, it will become the asbestos we eventually have to dig out.

An attention-grabbing headline should always raise red flags. 

I understand the allure of believing what appears to be the consensus or using AI to summarize. It’s easier. We’re all busy.

The issue is that there was already too much content to consume before AI. Now there’s even more because of it. 

We can’t consume and analyze everything, so we rely on the same tools not only to generate content, but also to consume it.

It’s a snake-biting-its-own-tail problem. 

Our compression culture risks creating a vortex of AI search misinformation that feeds back into the training data of the AI chatbots we both love and hate. 

We’re already there. AI chatbots sometimes answer GEO questions from model knowledge.

Take the time to think for yourself and get your hands dirty. 

Try to understand why something should or shouldn’t work. 

And never take anything at face value, no matter who said it. Authority isn’t accuracy.

P.S. This article may contain lies.
