The AI engine pipeline: 10 gates that decide whether you win the recommendation


AI recommendations are inconsistent for some brands and reliable for others because of cascading confidence: entity trust that accumulates or decays at every stage of an algorithmic pipeline.

Addressing that reality requires a discipline that spans the full algorithmic trinity through assistive agent optimization (AAO). It also demands three structural shifts: the funnel moves inside the agent, the push layer returns, and the web index loses its monopoly.

The mechanics behind that shift sit inside the AI engine pipeline. Here’s how it works.

The AI engine pipeline: 10 gates and a feedback loop

Every piece of digital content passes through 10 gates before it becomes an AI recommendation. I call this the AI engine pipeline, DSCRI-ARGDW, which stands for:

  • Discovered: The bot finds you exist.
  • Selected: The bot decides you’re worth fetching.
  • Crawled: The bot retrieves your content.
  • Rendered: The bot translates what it fetched into what it can read.
  • Indexed: The algorithm commits your content to memory.
  • Annotated: The algorithm classifies what your content means across dozens of dimensions.
  • Recruited: The algorithm pulls your content to use.
  • Grounded: The engine verifies your content against other sources.
  • Displayed: The engine presents you to the user.
  • Won: The engine gives you the perfect click at the zero-sum moment in AI.

After “won” comes an 11th gate that belongs to the brand, not the engine: served. What happens after the decision feeds back into the AI engine pipeline as entity confidence, making the next cycle stronger or weaker.

DSCRI is absolute. Are you creating a friction-free path for the bots?

ARGDW is relative. How do you compare to your competition? Are you creating a situation in which you’re relatively more “tasty” to the algorithms?

Cascading confidence is multiplicative

Both sides of the AI engine pipeline are sequential. Each gate feeds the next.

Content entering DSCRI through the traditional pull path passes through every gate. Content entering through structured feeds or direct data push can skip some or all of the infrastructure gates entirely, arriving at the competitive phase with minimal attenuation.

Skipped gates are a huge win, so take that option wherever and whenever you can. You “jump the queue” and start at a later stage without the degraded confidence of the previous ones. That changes the economics of the entire pipeline, and I’ll come back to why.

Why the four-step model falls short

The four-step model the SEO industry inherited from 1998 — crawl, index, rank, display — collapses five distinct infrastructure processes into “crawl and index” and five distinct competitive processes into “rank and display.”

It might feel like I’m overcomplicating this, but I’m not. Each gate has nuance that merits its standalone position. If you have empathy for the bots, algorithms, and engines, remove friction, and make the content digestible, they’ll move you through each gate cleanly and without losing speed.

Each gate is an opportunity to fail, and each point of potential failure needs a different diagnosis. The industry has been optimizing a four-room house when it lives in a 10-room building, and the rooms it never enters are the ones where the pipes leak the worst.

Most SEO advice operates at the selection, crawling, and rendering gates. Most GEO advice operates at “displayed” and “won,” which is why I’m not a fan of the term. 

Most teams aren’t yet working on annotation and recruitment, which are actually where the biggest structural advantages are created.

Three audiences you need to cater to and three acts you need to master

The AI engine pipeline has an entry condition — discovery — and nine processing gates organized in three acts of three, each with a different primary audience.

Act I: Retrieval (selection, crawling, rendering)

  • The primary audience is the bot, and the optimization objective is frictionless accessibility.

Act II: Storage (indexing, annotation, recruitment)

  • The primary audience is the algorithm, and the optimization objective is being worth remembering: verifiably relevant, confidently annotated, and worth recruiting over the competition.

Act III: Execution (grounding, display, won)

  • The primary audience is the engine and, by extension, the person using the engine, where the optimization objective is being convincing enough that the engine chooses and the person acts.

Frictionless for bots, worth remembering for algorithms, and convincing for people. Content must pass every machine gate and still persuade a human at the end.

The audiences are nested, not parallel. Content can only reach the algorithm through the bot and can only reach the person through the algorithm. You can have the most impeccable expertise and authority credentials in the world. If the bot can’t process your page cleanly, the algorithm will never see it.

This is the nested audience model: bot, then algorithm, then person. Every optimization strategy should start by identifying which audience it serves and whether the upstream audiences are already satisfied.

Discovery: The system learns you exist

Discovery is binary. Either the system has encountered your URL or it hasn’t. Fabrice Canel, principal program manager at Microsoft responsible for Bing’s crawling infrastructure, confirmed:

  • “You want to be in control of your SEO. You want to be in control of a crawler. And IndexNow, with sitemaps, enable this control.”
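As a concrete example of taking that control, here is a minimal sketch of a bulk IndexNow submission body, following the public protocol. The host, key, and URL are placeholders, and note that the key must also be served as a plain-text file on your own domain so the endpoint can verify ownership.

```python
import json

def indexnow_payload(host, key, urls, key_location=None):
    """Build the body for a bulk IndexNow submission: POST it as JSON to
    https://api.indexnow.org/indexnow (Content-Type: application/json).
    The same key must be reachable at https://<host>/<key>.txt unless a
    custom keyLocation is supplied."""
    payload = {"host": host, "key": key, "urlList": list(urls)}
    if key_location:
        payload["keyLocation"] = key_location  # optional custom key file URL
    return payload

# Placeholder values for illustration -- substitute your own host and key.
payload = indexnow_payload(
    "www.example.com",
    "0123456789abcdef",
    ["https://www.example.com/new-page"],
)
body = json.dumps(payload)  # what actually goes over the wire
```

One submission like this tells every participating engine about a new or changed URL, instead of waiting for each bot to rediscover it on its own schedule.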

The entity home website, the canonical web property you control, is the primary discovery anchor. The system doesn’t just ask, “Does this URL exist?” It asks, “Does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.

The push layer — IndexNow, MCP, structured feeds — changes the economics of this gate entirely. A later piece in this series is dedicated to what changes when you stop waiting to be found.

Act I: The bot decides whether to fetch your content

Selection: The system decides whether your content is worth crawling

Not everything that’s discovered gets crawled. The system makes a triage decision based on countless signals, including entity authority, freshness, crawl budget, perceived value, and predicted cost.

Selection is where entity confidence first translates into a concrete pipeline advantage. The system already has an opinion about you before it crawls a single page. That opinion determines how many of your pages it bothers to look at.

Crawling: The bot arrives and fetches your content

Every technical SEO understands this gate. Server response time, robots.txt, redirect chains. Foundational, but not differentiating.

What most practitioners miss is that the bot doesn’t arrive in a vacuum. Canel confirmed that context from the referring page can be carried forward during crawling. With highly relevant links, the bot carries more context than it would from a link on an unrelated directory.

Rendering: The bot builds the page the algorithm will see

This is where everything changes and where most teams aren’t yet paying attention. The bot executes JavaScript if it chooses to, builds the Document Object Model (DOM), and produces the full rendered page. 

But here’s a question you probably haven’t considered: how much of your published content does the bot actually see after this step? If bots don’t execute your code, your content is invisible. More subtly, if they can’t parse your DOM cleanly, that content loses significant value.

Google and Bing have extended a favor for years: they render JavaScript. Most AI agent bots don’t. If your content sits behind client-side rendering, a growing proportion of the systems that matter simply never see it.
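A quick way to sanity-check your exposure is to ask whether your key content is present in the HTML the server actually delivers, before any JavaScript runs. This sketch uses only Python's standard library and is a crude proxy for a non-JS bot, not a full rendering test:

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect the text a non-JavaScript bot would see: everything
    except the contents of <script> and <style> elements."""
    def __init__(self):
        super().__init__()
        self._skip = 0     # nesting depth inside script/style
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def visible_without_js(html, phrase):
    """True if `phrase` is readable in the server-delivered HTML itself,
    i.e. a bot that never executes JavaScript could still see it."""
    parser = VisibleText()
    parser.feed(html)
    return phrase in " ".join(parser.chunks)

# Two hypothetical pages with the same message, delivered differently.
server_rendered = "<main><h1>Pricing</h1><p>Plans from $9/mo.</p></main>"
client_rendered = '<div id="app"></div><script>render("Plans from $9/mo.")</script>'
```

If a phrase that matters to you only appears after client-side rendering, every bot that declines the JavaScript favor never sees it.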

Representatives from both Google and Bing have also discussed the efforts they make to interpret messy HTML. Here’s one way to look at it: search was built on favors, and those favors aren’t being offered by the new players in AI.

Importantly, content lost at rendering can’t be recovered at any downstream gate. Every annotation, grounding decision, and display outcome depends on what survives rendering. If rendering is your weakest gate, it’s your F on the report card. Everything downstream inherits that grade.

Act II: The algorithm decides whether your content is worth remembering

This is where most brands are losing out because most optimization advice doesn’t address the next two gates. And remember, if your content fails to pass any single gate, it’s no longer in the race.

Indexing: Where HTML stops being HTML

Rendering produces the full page as the bot sees it. Indexing then transforms that DOM into something the system can store. Two things happen here that the industry often misses:

  • The system strips the navigation, header, footer, and sidebar — elements that repeat across multiple pages on your site. These aren’t stored per page. The system’s primary goal is to identify the core content. This is why I’ve talked about the importance of semantic HTML5 for years. It matters at a mechanical level: <nav>, <header>, <footer>, <aside>, <main>, and <article> tell the system where to cut. Without semantic markup, it has to guess. Gary Illyes confirmed at BrightonSEO in 2017, possibly 2018, that this was one of the hardest problems they had at the time.
  • The system chunks and converts. The core content is broken into blocks or passages of text, images with associated text, video, and audio. Each chunk is transformed into a proprietary internal format. Illyes described the result as something like a folder with subfolders, each containing a typed chunk. The page becomes a hierarchical structure of typed content blocks.
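The mechanical role of semantic HTML5 described above can be illustrated with a toy extractor. This is not any engine's actual, proprietary process, just a sketch of why explicit <nav>, <header>, <footer>, and <aside> boundaries make the cut trivial instead of a guessing game:

```python
from html.parser import HTMLParser

# Elements treated as site furniture rather than core content.
BOILERPLATE = {"nav", "header", "footer", "aside", "script", "style"}

class CoreContent(HTMLParser):
    """Toy core-content extractor: semantic tags tell us where to cut.
    Text inside boilerplate elements is dropped; the rest is kept."""
    def __init__(self):
        super().__init__()
        self._depth = 0   # how many boilerplate elements we're inside
        self.text = []
    def handle_starttag(self, tag, attrs):
        if tag in BOILERPLATE:
            self._depth += 1
    def handle_endtag(self, tag):
        if tag in BOILERPLATE and self._depth:
            self._depth -= 1
    def handle_data(self, data):
        if self._depth == 0 and data.strip():
            self.text.append(data.strip())

# A hypothetical page: only the <main>/<article> text should survive.
page = ("<header>Logo</header><nav>Home | About</nav>"
        "<main><article><h1>The real story</h1>"
        "<p>This is the core content.</p></article></main>"
        "<footer>Copyright 2024</footer>")
parser = CoreContent()
parser.feed(page)
# parser.text now holds only the core content chunks
```

Strip the semantic tags from that page and the same extractor, like the system, has nothing to cut on.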

I call this conversion fidelity: how much semantic information survives the strip, chunk, convert, and store sequence. Rendering fidelity (Gate 3) measures whether the bot could consume your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away.

Both fidelity losses are irreversible, but they fail differently. Rendering fidelity fails when JavaScript doesn’t execute or content is too difficult for the bot to parse. Conversion fidelity fails when the system can’t identify which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships between elements don’t survive the format conversion.

Something we often overlook is that even after a successful crawl, indexing isn’t guaranteed. Content that passes through crawl and render may still not be indexed.

That might sound bad enough, but here’s a distinction that should concern you: indexing and annotation are separate processes. Content may be indexed but poorly annotated — stored in the system but semantically misclassified. Non-indexed content is invisible. Misannotated content actively confuses the system about who you are, which can be worse.

Annotation: Where entity confidence is built or broken

This is the gate most of the industry has yet to address.

Think of annotations as sticky notes on the indexed “folders” created at the indexing gate. Indexing algorithms add multiple annotations to every piece of content in the index.

I identified 24 annotation dimensions I felt confident sharing with Canel. When I asked him, his response was, “Oh, there is definitely more.” 

Those 24 dimensions were organized across five annotation layers: 

  • Gatekeepers (scope classification).
  • Core identity (semantic extraction).
  • Selection filters (content categorization).
  • Confidence multipliers (reliability assessment).
  • Extraction quality (usability evaluation).

There are certainly more layers, and each layer likely includes more dimensions than I’ve mapped. Hundreds, probably thousands. This is an open model. The community is invited to map the dimensions I’ve missed.

Annotation is where the system decides the facts: 

  • What your content is about.
  • Where it fits into the wider world.
  • How useful it is.
  • Which entity it belongs to.
  • What claims it makes.
  • How those claims relate to claims from other sources. 

Credibility signals — notability, experience, expertise, authority, trust, transparency — are evaluated here. Topical authority is assessed here, too, along with much more.

Annotation operates on what survives rendering and conversion. If critical information was lost at either gate, the annotation system is working with degraded raw material. It annotates what the annotation engine received, not what you originally published.

Canel confirmed a principle I suggested that should reshape how we think about this gate: “The bot tags without judging. Filtering happens at query time.” Annotation quality determines your eligibility for every downstream triage.

I have a full piece coming on annotation alone. For now, annotation is the gate where most brands silently lose and the one most worth working on.

Recruitment: Where the algorithmic trinity decides whether to absorb you

This is the first explicitly competitive gate. After annotation, the pipeline feeds into three systems simultaneously. 

  • Search engines recruit content for results pages (the document graph). 
  • Knowledge graphs recruit structured facts for entity representation (the entity graph). 
  • Large language models recruit patterns for training data and grounding retrieval (the concept graph).

Before recruitment, the system found, crawled, stored, and classified your content. At recruitment, it decides whether your content is worth keeping over alternatives that serve the same purpose.

Being recruited by all three elements of the algorithmic trinity gives you a disproportionate advantage at grounding because the grounding system can find you through multiple retrieval paths, and at display because there are multiple opportunities for visibility.

Recruitment is the structural advantage that separates brands with consistent AI visibility from brands that appear inconsistently.

Act III: The engine presents and the decision-maker commits

Grounding: Where AI checks its confidence in the content against real-time evidence

This is the gate that separates traditional search from AI recommendations.

Ihab Rizk, who works on Microsoft’s Clarity platform, described the grounding lifecycle this way:

  • The user asks a question. 
  • The LLM checks its internal confidence. If it’s insufficient, it sends cascading queries, multiple angles of intent designed to triangulate the answer, which many people call fan-out queries. 
  • Bots are dispatched to scrape selected pages in real time. 
  • The answer is generated from a combination of training data and fresh retrieval.

But grounding isn’t just search results, as many people believe. The other two technologies in the algorithmic trinity play a role.

The knowledge graph is used to ground facts. AI Overviews explicitly showed information grounded in the knowledge graph. It’s reasonable to assume specialized small language models are used to ground user-facing large language models.

The takeaway is that your content’s performance from discovery through recruitment determines whether your pages are in the candidate pool when grounding begins. If your content isn’t indexed, isn’t well annotated, or isn’t associated with a high-confidence entity, it won’t be in the retrieval set for any part of the trinity. The engine will ground its answer on someone else’s content instead.

You can’t optimize for grounding if your content never reaches the grounding stage.

Display: The output of the pipeline

Display is where most AI tracking tools operate. They measure what AI says about you. But by the time you’re measuring display, the decisions were already made upstream, from discovery through grounding.

Brands with high cascading confidence appear consistently. Brands with low cascading confidence appear intermittently, the same phenomenon Rand Fishkin demonstrated.

Display is where AI meets the user. It also covers the acquisition funnel, which is easy to understand and meaningful for marketers. This is where most businesses focus because it’s visible and sits just before the click. I’ll write a full article on that later in this series.

Won: The moment the decision-maker commits

Won is the terminal processing gate in the AI engine pipeline. Ten gates of processing, three acts of audience satisfaction, and it comes down to this: Did the system trust you enough to commit?

The accumulated confidence at this gate is called “won probability,” the system’s calculated likelihood that committing to you is the right decision. Three resolutions are possible, and they form a spectrum. To understand why that spectrum matters, you need to understand the 95/5 rule.

Professor John Dawes at the Ehrenberg-Bass Institute demonstrated that at any given moment, only about 5% of potential buyers are actively in-market. The other 95% aren’t ready to purchase. You sell to the 5%, but the real job of marketing is staying top of mind for the other 95% so that when they decide to move to purchase, on their schedule, not yours, you’re the brand they think of.

The three scenarios that follow show how AI takes over the job of being top of mind at the critical moment for the 95%. I call this top of algorithmic mind.

  • The imperfect click: The person browses a list of options, pogo-sticks between results, and decides. This is traditional search, what Google called the zero moment of truth. The system doesn’t know who is ready. It shows everyone the same list and hopes. The 95/5 efficiency is low. You’re hitting and hoping, and so is the engine.
  • The perfect click: The AI recommends one solution and the person takes it. I call this the zero-sum moment in AI. This is where we are right now with assistive engines like ChatGPT, Perplexity, and AI Mode. The system has filtered for intent, context, and readiness. It presents one answer to a person moving from the 95% into the 5% with much higher precision.
  • The agential click: The agent commits, either after pausing for human approval (“Shall I book this?”) or autonomously. The agent caught the moment of readiness, did the work, and closed it. Maximum precision. This is the ultimate solution to the 95/5 problem: AI catches the exact moment and acts.

The Won Spectrum

Search won’t disappear. Most people will always want to browse some of the time. Window shopping is fun, and emotionally charged decisions aren’t something people will always delegate.

The trajectory, however, moves from imperfect to perfect to agential. Brands need to optimize for all three outcomes on that spectrum, starting now. Optimizing for agents should already be part of your strategy, as should optimizing for assistive engines and search engines. AAO covers them all.

Search engines, AI assistive engines, and assistive agents are your untrained salesforce. Your job is to train them well enough that you’re top of algorithmic mind at the moment the 95% become the 5%, and the AI either:

  • Offers you as an option.
  • Recommends you as the best solution.
  • Actively makes the conversion for you.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Served: The pipeline remembers

After conversion, the brand takes over, and the post-won feedback gate deserves optimization like any other. The processing pipeline, the DSCRI-ARGDW spine, gets you to the decision. Served sits outside that spine as the gate that closes the loop, turning the line into a circle.

Every “won” that produces a positive outcome strengthens the next cycle’s cascading confidence. Every “won” that produces a negative outcome weakens it. Ten gates get you to the decision. The 11th, served, determines whether the decision repeats and your advantage compounds.

This is where the business lives. Acquisition without retention is a leak, both directly and indirectly through the AI engine pipeline feedback loop.

Brands that engineer their post-won experience to generate positive evidence (reviews, repeat engagement, low return rates, and completion signals) build a flywheel. Brands that neglect post-won burn confidence with every cycle.

Diagnosing failure in the pipeline

The three acts describe who you’re speaking to: the bot, the algorithm, and the engine (with the person behind it). The two phases describe what kind of test you’re taking.

  • Phase 1: Infrastructure, discovery through indexing
    • Absolute tests. You either pass or fail. A page that can’t be rendered doesn’t get partially indexed. Infrastructure gates are binary: pass or stall.
  • Phase 2: Competitive, annotation through won
    • Relative tests. Winning depends not just on how good your content is but on how good the competition is at the same gate.

The practical implication is infrastructure first, competitive second. If your content isn’t being found, rendered, or indexed correctly, fixing annotation quality is wasted effort. You’re decorating a room the building inspector hasn’t cleared.

In practice, brands tend to fail in three predictable ways.

  • Opportunity cost (Act I: Bot failures)
    • Your content isn’t in the system, so you have zero opportunity. Cheapest to fix, most expensive to ignore.
  • Competitive loss (Act II: Algorithm failures) 
    • Your content is in the system, but competitors’ content is preferred. The brand believes it’s doing everything right while AI systems consistently choose a competitor at recruitment, grounding, and display.
  • Conversion leak (Act III: Engine failures)
    • Your content is presented, but the system hedges or fumbles the recommendation. In short, you lose the sale.

The AI engine pipeline - DSCRI-ARGDW-Sv

Every gate you pass still costs you signal

In 2019, I published How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Illyes about how Google calculates ranking bids by multiplying individual factor scores. A zero on any factor kills the entire bid.

Darwin’s natural selection works the same way: fitness is the product across all dimensions, and a single zero kills the organism. Brent D. Payne made this analogy: “Better to be a straight C student than three As and an F.” 

As with Google’s bidding system, cascading confidence is multiplicative, not additive. Here’s what that means:

Per-gate confidence    Surviving signal at the won gate
90%                    34.9%
80%                    10.7%
70%                    2.8%
60%                    0.6%
50%                    0.1%

Illustrative math, not a measurement. The principle is what matters: strengths don’t compensate for weaknesses in a multiplicative chain.

A single weak gate destroys everything. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, the surviving signal collapses to under 4%. A near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
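The multiplicative principle above takes three lines of Python to verify. These are illustrative numbers, not measurements:

```python
from math import prod

def surviving_signal(gate_confidences):
    """Cascading confidence is multiplicative: the signal that reaches
    the won gate is the product of every per-gate confidence."""
    return prod(gate_confidences)

uniform_90 = surviving_signal([0.90] * 10)          # ten gates at 90% -> ~34.9%
one_weak   = surviving_signal([0.90] * 9 + [0.50])  # one gate at 50%  -> ~19.4%
near_zero  = surviving_signal([0.90] * 9 + [0.10])  # one gate at 10%  -> ~3.9%
```

Swap in your own per-gate estimates; the point is that no amount of strength elsewhere can buy back a weak gate.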

This is competitive math. If your competitors are all at 50% per gate and you’re at 60%, you win: 0.6% surviving signal against their 0.1%. Not because you’re excellent, but because you’re less bad. 

Most brands aren’t at 90%. The worse your gates are, the bigger the gap a small improvement opens. Here’s an example.

Gate                Your Brand    Competitor
Discovered          75%           65%
Selected            80%           60%
Crawled             70%           65%
Rendered            85%           70%
Indexed             75%           60%
Annotated           5%            60%
Recruited           80%           65%
Grounded            70%           60%
Displayed           75%           65%
Won                 80%           60%
Surviving signal    0.4%          1.0%

I chose annotated as the “F” grade in this example for demonstrative purposes.

Annotation is the phase-boundary gate. It’s the hinge of the whole pipeline. If the system doesn’t understand what your content is, nothing downstream matters.

Applying this Darwinian principle across a 10-gate pipeline, where confidence is measurable at every transition, is my diagnostic model. I recently filed a patent for the mechanical implementation.

Improving gates versus skipping them

There are two ways to increase your surviving signal through the pipeline, and they aren’t equal.

Improving your gates

Better rendering, cleaner markup, faster servers, and schema help the system classify your content more accurately. These are real gains, single-digit to low double-digit percentage improvements in surviving signal.

For many brands and SEOs, this is maintenance rather than transformation. It matters, and most brands aren’t doing it well, but it’s incremental.

Skipping gates entirely

Structured feeds, such as Google Merchant Center and the OpenAI Product Feed Specification, bypass discovery, selection, crawling, and rendering altogether, delivering your content to the competitive phase with minimal attenuation. 

MCP connections skip even further, making data available from recruitment onward with triple-digit percentage advantages over the pull path.

If you’re only improving gates, you’re leaving an order of magnitude on the table.

The highest-value target is always the weakest gate

Improving your best gate from 95% to 98% is nearly invisible in the pipeline math. Improving your worst gate from 50% to 80% transforms your entire surviving signal. That’s the Darwinian principle at work: fitness is multiplicative, the weakest dimension determines the outcome, and strengths elsewhere can’t compensate.
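To see why the weakest gate is the highest-value target, compare the two fixes from the paragraph above in code. The gate scores here are hypothetical:

```python
from math import prod

# Hypothetical per-gate confidences for one brand.
gates = {
    "discovered": 0.90, "selected": 0.90, "crawled": 0.85,
    "rendered": 0.50,   # the weakest gate
    "indexed": 0.80, "annotated": 0.90, "recruited": 0.85,
    "grounded": 0.90,
    "displayed": 0.95,  # the best gate
    "won": 0.90,
}

def signal(g):
    """Surviving signal: the product of all gate confidences."""
    return prod(g.values())

baseline = signal(gates)

# Polishing the best gate (95% -> 98%) barely moves the product...
gain_best = signal(dict(gates, displayed=0.98)) / baseline   # ~1.03x

# ...while repairing the weakest gate (50% -> 80%) transforms it.
gain_worst = signal(dict(gates, rendered=0.80)) / baseline   # 1.6x
```

The ratio of each gain is exactly new score over old score, which is why the lowest-scoring gate always offers the most headroom.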

Most teams are optimizing the wrong gate. Technical SEO, content marketing, and GEO each address different gates. Each is necessary, but none is sufficient because the pipeline requires all 10 to perform. Teams pouring budget into the two or three gates they understand are ignoring the ones that are actually killing their signal.

Then there’s the single-system mistake. At recruitment, the pipeline feeds into three graphs, the algorithmic trinity. Missing one graph means one entire retrieval path doesn’t include you.

You can be perfectly optimized for search engine recruitment and completely absent from the knowledge graph and the LLM training corpus. In a multiplicative system, that gap compounds with every cycle.

Most of the AI tracking industry is measuring outputs without diagnosing inputs, tracking what AI says about you at display when the decisions were already made upstream. That’s like checking your blood pressure without diagnosing the underlying condition.

The tools to do this properly are emerging. Authoritas, for example, can inspect the network requests behind ChatGPT to understand which content is actually formulating answers. But the real work is at the gates upstream of display, where your content either passed or stalled before the engine ever opened its mouth.

Audit your pipeline: Earliest failure first

The correct audit order is pipeline order. Start at discovery and work forward.

If content isn’t being discovered, nothing downstream matters. If it’s discovered but not selected for crawling, rendering fixes are wasted effort. If it’s crawled but renders poorly, every annotation and grounding decision downstream inherits that degradation.

This is your new plan: Find the weakest gate. Fix it. Repeat.

The inconsistency Fishkin documented is a training deficit. The AI engine pipeline is trainable. The training compounds. The walled gardens increase their lock-in with every cycle.

The brand that trains its AI salesforce better than the competition doesn’t just win the next recommendation. It makes the next one easier to win, and the one after that, until the gap widens to the point where competitors can’t close it without starting from scratch.

Without entity understanding, nothing else in this pipeline works. The system needs to know who you are before it can evaluate what you publish. Get that right, build from the brand up through the funnel, and the compounding does the rest.

Next: The five infrastructure gates the industry compressed into ‘crawl and index’

The next piece opens the infrastructure gates in full: rendering fidelity, conversion fidelity, JavaScript as a favor, not a standard, structured data as the native language of the infrastructure phase, and the investment comparison that puts numbers on improving gates versus skipping them entirely. 

The sequential audit shows where your content is dying before the algorithm ever sees it, and once you see the leaks, you can start plugging them in the order that moves your surviving signal the most.

This is the third piece in my AI authority series. The first, “Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it,” introduced cascading confidence. The second, “AAO: Why assistive agent optimization is the next evolution of SEO,” named the discipline. 
