What blog posts should you write to be mentioned in ChatGPT?


Across 90 prompts we tested in ChatGPT, commercial prompts triggered web searches 78.3% of the time. Informational prompts did so just 3.1% of the time.

That gap changes what you should write if you want to appear in a ChatGPT answer.

ChatGPT doesn’t pull every response from the same place. Some answers come from training data; others use live web search — a behavior called query fan-out. The model expands your prompt into multiple background searches, then retrieves and synthesizes across those subtopics. If your page isn’t on those branches, it won’t be pulled in.

So the question is no longer just how to rank. It’s which pages open the fan-out door in the first place.

In our sample, informational pages didn’t. Read on to discover where the system went instead.

We tested 90 prompts across three industries: beauty, legaltech/regtech, and IT. We analyzed prompt intent, downstream query expansion, and the intent those expansions reflected.

Here’s the breakdown and the core finding: most fan-out queries aligned with commercial intent, not with the purely informational prompts that dominated the sample.

Why this question matters now and how query fan-outs come into play

Query fan-outs change the content game because the system isn’t limited to the literal prompt.

It expands the request into multiple background searches, then retrieves and synthesizes across those subtopics.

Fan-outs trigger parallel web searches tied to the initial prompt, creating opportunities for retrieval, mention, and link citation.

Multi-query expansion is a core design pattern in modern generative search systems. Google describes AI Mode this way: it breaks a question into subtopics, searches them in parallel across multiple sources, then combines the results into a single response.
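The fan-out pattern described above can be sketched in a few lines. Everything here is illustrative: the real expansion and retrieval steps are internal to these systems and not publicly specified, so `expand` and `search` are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def expand(prompt: str) -> list[str]:
    # Placeholder expansion: a real system would use a model to generate
    # evaluative subqueries ("best X", "X comparison", "X pricing").
    return [f"best {prompt}", f"{prompt} comparison", f"{prompt} pricing"]

def search(query: str) -> str:
    # Stand-in for a live web search call.
    return f"results for: {query}"

def fan_out(prompt: str) -> list[str]:
    # Run the expanded subqueries in parallel, then hand the combined
    # results to the model for synthesis into one response.
    subqueries = expand(prompt)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search, subqueries))
```

Calling `fan_out("accounting software for small business")` would retrieve three subquery result sets in parallel; your page is only a citation candidate if it matches one of those branches.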

That raises a strategic SEO question: should you invest more in top-of-funnel educational content, or in lower-funnel comparison, shortlist, and recommendation content?

This experiment framed that problem.

The objective was to test, across selected industries, where fan-out appears by intent category: informational, commercial, transactional, or branded.

The initial hypothesis was direct: informational prompts wouldn’t trigger fan-out, while commercial prompts would, and those fan-outs would stay at the same funnel level or move lower.

We found that ChatGPT-generated fan-outs are overwhelmingly associated with commercial intent.

Disclaimer: This experiment measures observed prompt expansion behavior in ChatGPT. Google AI Mode is cited only as context to show multi-query expansion as a broader pattern in generative search, not as proof of ChatGPT’s internal architecture.

The setup: what we tested

The core sample includes 90 numbered prompts, heavily weighted toward informational intent.

Prompt intent   Prompts   Share of sample   Prompts with fan-out   Fan-out rate
Informational   65        72.2%             2                      3.1%
Commercial      23        25.6%             18                     78.3%
Branded         1         1.1%              0                      0.0%
Transactional   1         1.1%              0                      0.0%

The sample skews heavily toward informational prompts, with some commercial ones and minimal branded and transactional queries.

We structured the experiment around the sectors in the brief: beauty/personal care, legaltech/regtech, and IT/tech.

The result: commercial prompts triggered almost everything

The main finding is clear.

Out of 90 prompts, 20 triggered fan-out. Of those, 18 were commercial and 2 informational.

Informational prompts made up about 10% of fan-out triggers (2 of 20). When they did trigger expansion, they were rewritten into more evaluative, solution-seeking subqueries.

In other words, 90% of fan-out-triggering prompts in the core sample came from commercial intent.

The contrast is stronger than the raw totals suggest. Commercial prompts triggered fan-out 78.3% of the time; informational prompts did so just 3.1%.

This supports the working hypothesis: in this sample, fan-out was overwhelmingly a commercial phenomenon.

Those 20 prompts produced 42 fan-out queries — an average of 2.1 per triggered prompt.

Of those 42 fan-out queries:

  • 39 were commercial.
  • 2 were branded.
  • 1 was informational.
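The headline figures can be reproduced from the counts above. A minimal sketch, using only the numbers reported in this article:

```python
# Counts reported in the article, per prompt-intent class:
# (prompts in sample, prompts that triggered fan-out)
counts = {
    "informational": (65, 2),
    "commercial": (23, 18),
    "branded": (1, 0),
    "transactional": (1, 0),
}

total_prompts = sum(n for n, _ in counts.values())   # 90
triggered = sum(f for _, f in counts.values())       # 20
fanout_queries = 42                                  # observed expansion queries

for intent, (n, f) in counts.items():
    print(f"{intent}: fan-out rate {f / n:.1%}")

print(f"avg fan-out queries per triggered prompt: {fanout_queries / triggered:.1f}")
print(f"share of triggers that were commercial: {counts['commercial'][1] / triggered:.0%}")
```

This yields the 78.3% and 3.1% rates, the 2.1 average, and the 90% commercial share quoted above.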

Even when a prompt triggered expansion, the system usually shifted toward comparison, product evaluation, feature filtering, shortlist creation, or brand-specific exploration — not broad educational discovery.

Methodology: how we performed the analysis

The experiment used 90 prompts across three industries, mostly informational, with a smaller set of commercial prompts and minimal branded and transactional queries.

In the analysis, we:

  • Selected a representative battery of prompts.
  • Identified the fan-outs.
  • Classified each fan-out by intent.
  • Observed the distribution by prompt metadata.

The analysis then followed three steps:

  1. Each prompt was classified according to prompt-intent labels.
  2. We counted the prompts triggering fan-out (at least one).
  3. We inspected the observed expansion queries and their assigned fan-out intent labels.

That produced two distinct but complementary views:

  • A prompt-level view, asking whether a given prompt triggered fan-out at all.
  • A fan-out-query view, asking what kind of intent the downstream expansion actually took.

That distinction matters: the first shows which prompts open the fan-out path, while the second shows where the system goes once it opens.
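The two views above can be sketched as a small aggregation. The records and field names here are hypothetical; the article's raw prompt data isn't published, so this only illustrates the counting logic:

```python
from collections import Counter

# Hypothetical records mirroring the analysis structure: each prompt has an
# intent label and a list of intent labels for its fan-out queries (empty
# when no fan-out was triggered).
prompts = [
    {"intent": "commercial", "fanout_intents": ["commercial", "commercial"]},
    {"intent": "informational", "fanout_intents": []},
    {"intent": "informational", "fanout_intents": ["commercial"]},
]

# Prompt-level view: which prompt intents triggered fan-out at all?
triggered_by_intent = Counter(
    p["intent"] for p in prompts if p["fanout_intents"]
)

# Fan-out-query view: what intent did the downstream expansions take?
expansion_intents = Counter(
    intent for p in prompts for intent in p["fanout_intents"]
)

print(triggered_by_intent)  # Counter({'commercial': 1, 'informational': 1})
print(expansion_intents)    # Counter({'commercial': 3})
```

Note how the second view can skew commercial even when an informational prompt opens the door, which is exactly the pattern the results showed.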

Interpreting the results: fan-out tends to move down-funnel

The cleanest interpretation is that, in this sample, fan-outs behave less like open-ended topic expansion and more like assisted decision support.

Commercial prompts almost always opened the door.

Once they did, fan-outs usually stayed commercial.

The system expanded into comparisons, feature-based filtering, product lists, pricing-adjacent queries, and brand-specific evaluations.

A few examples make that concrete.

  • “Suggest the best accounting software for small business and explain why” expanded into a commercial comparison query around features.
  • “What are the top AI document management systems for lawyers?” expanded into multiple product-oriented legaltech queries.
  • “What are the best products for skin care?” expanded into a shortlist-style query around product categories and reviews.

The two informational exceptions are even more revealing than the rule.

  • “I need an open-source document management system. What can you suggest?” was labeled informational at prompt level, but the resulting fan-out moved into solution recommendation.
  • “AI tools for legal research and document automation” also moved into a clearly commercial/evaluative downstream query.

So, even when the prompt starts broad, fan-out often translates that breadth into a lower-funnel retrieval path.

What this means for content strategy

The takeaway isn’t to stop writing informational content.

It’s this: informational content alone is unlikely to align consistently with fan-out expansion, at least in this dataset.

If your goal is visibility in AI answers tied to product selection, vendor discovery, or option narrowing, you need stronger coverage of pages and passages that match those downstream commercial branches.

That may include:

  • best-of and shortlist pages
  • comparison pages
  • "which tool should I choose" pages
  • feature-led category explainers
  • alternatives pages
  • evaluation FAQs
  • recommendation-oriented paragraphs embedded inside broader educational pages

In practical terms, your content model shouldn’t be just ToFU or BoFU, but ToFU with commercial bridges.

A broad article can still help, but it should include passages the system can easily reformulate into decision-support subqueries.

A purely educational piece that explains a category without naming products, tradeoffs, features, use cases, pricing logic, or selection criteria is much less likely to align with the fan-out paths seen here.

Put simply: Don’t just answer the obvious question — anticipate the next evaluative step the system is likely to generate in the background.

Limitations

This result is directional, not universal.

  • 90 prompts reveal a pattern, but not a stable law of AI retrieval behavior.
  • The prompt mix is uneven. Informational prompts dominate the sample, while branded and transactional prompts are barely represented (one each), so their 0% fan-out rates aren't proof of absence.
  • The dataset spans industries but isn’t normalized by brand, wording style, or use case. Some sectors may be easier to express in product-discovery language.
  • This is an observational analysis of recorded fan-outs, not a controlled platform-level test. It shows what happened in this prompt set, not how ChatGPT always behaves.
  • Google’s description of fan-out provides context, but this isn’t a Google AI Mode test. It’s a ChatGPT-focused prompt and fan-out dataset. The takeaway is strategic, not architectural.

What to test next

The next version of this experiment should isolate the question more aggressively and expand the dataset.

A follow-up should map triggered fan-outs back to specific content formats.

The goal isn’t just to confirm that commercial intent wins. It’s to identify which page templates and passage structures best cover the fan-out branches AI systems prefer.
