



Why surface-level SEO tactics won’t build lasting AI search visibility



A recent Harvard Business Review piece echoes the shift we’re seeing in the SEO industry: at a macro level, LLMs and Google’s AI-powered SERP features, such as AI Overviews, aren’t just creating a zero-click environment, but also changing user journeys and behavior.

They’re collapsing what used to be multi-touch customer journeys into a single synthesized answer.

For a more visual and emphatic metaphor, the monolith of “Search” is crumbling.


When that happens, brands lose many of the touchpoints they once owned, and your marketing strategy must change accordingly. HBR captures this moment well, arguing that marketing now has a new audience and that algorithms increasingly shape first impressions.

That said, while the article points in the right direction on the broader trend, its tactical advice is generic and falls back on shallow tactics.

Much of the guidance returns to familiar marketing playbook ideas that sound strategic and innovative but lack real operational depth. That gap matters for the longevity and sustainability of visibility.

The narrative may be easy for you to understand and repeat at the executive level, but it glosses over the deeper structural changes you must actually make to adapt to the new search ecosystem.

The problem with flock tactics

The HBR article centers on schema, authorship signals, and branded concepts. These recommendations risk becoming what I call “flock tactics.”

These ideas spread quickly because they’re easy to explain, but they offer little lasting competitive advantage once everyone adopts them.

Schema 

Schema has been one of the most debated topics in LLM and AI optimization. Microsoft Bing confirmed it uses schema for its LLMs, but the relationship between Google’s models and third-party LLMs isn’t as straightforward.

While it isn’t necessarily wrong to recommend schema as part of your overall search optimization activities (SEO and AI), positioning it as a table-stakes tactic ignores diminishing returns once competitors implement similar markup and it becomes standard.

Another gap is the role of external knowledge systems, such as Wikidata or authoritative publishers. Much of the information LLMs rely on comes from those sources rather than a single company’s website.
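To make this concrete, here is a minimal sketch of Organization markup that uses schema.org’s sameAs property to tie a site’s entity to external knowledge systems. The company name, URLs, and the Wikidata ID are all hypothetical placeholders, not real identifiers:

```python
import json

# Hypothetical Organization markup. The company, URLs, and the Wikidata
# ID (Q00000000) are placeholders, not real identifiers.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "url": "https://www.example.com",
    # sameAs links your entity to external knowledge systems such as
    # Wikidata -- the kind of third-party sources LLMs often draw on
    # beyond your own website.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/company/acme-analytics",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

The markup itself is the easy part; the harder, more durable work is making sure the external records it points at actually exist and stay accurate.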

This is harder to understand, explain, and demonstrate as a single line item on an activity tracker, but these are nuances you now have to deal with, whether you like it or not.

What’s also missing is any exploration of, or even a nod to, how models ingest and prioritize structured data compared with the many unstructured signals they rely on.

E-E-A-T: shallow authorship signals

Attaching the names, credentials, and biographies of real experts follows familiar E-E-A-T logic and represents reasonable hygiene.

The problem is that the treatment remains superficial. It risks pushing you to focus on cosmetic signals such as bios, headshots, and credential lists without strengthening the underlying expertise pipeline.

There is a meaningful difference between placing an author bio on a page and cultivating a genuine expert entity whose work appears in conferences, third-party publications, standards committees, or academic collaborations.

Only the latter produces signals that models are more likely to recognize and trust.

Vanity concepts

The article also suggests creating branded frameworks or concepts (for example, something like “The Acme Index”) to help models associate ideas with your company. In theory this sounds appealing, but in practice it’s extremely difficult to execute.

Unless those ideas spread into the trusted datasets LLMs tend to prioritize, they rarely gain traction.

You need those concepts and frameworks adopted and discussed by entities other than yourself, including academic journals, technical standards, widely used software ecosystems, and other prominent entities in your category.

What often results instead is a proliferation of branded labels that remain largely invisible to the models they were meant to influence.

The structural blind spots

Beyond these tactical issues, the analysis overlooks deeper structural challenges. It treats AI primarily as an external platform shift.

The implication is that you must simply adapt to it rather than actively shaping your own environment.

Internalizing AI infrastructure

HBR never seriously considers the possibility of building AI into your own infrastructure. You can deploy assistants, RAG systems, and domain-specific agents within your own products and customer experiences.

These systems operate in logged-in, transactional contexts where first-party data and controlled interfaces still matter enormously.

In those environments, traditional concerns such as site architecture, structured data, and product design remain deeply relevant, though they operate differently from public search optimization.
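The core RAG pattern behind those in-product experiences is simple: retrieve relevant first-party content, then ground the model’s answer in it. A toy sketch, using bag-of-words similarity in place of a real embedding model and vector store, and an invented three-document knowledge base:

```python
from collections import Counter
import math

# Toy first-party knowledge base. In production this would be your
# product docs, indexed with a real embedding model and vector store.
DOCS = {
    "returns": "Items can be returned within 30 days with a receipt",
    "shipping": "Standard shipping takes about five business days",
    "warranty": "All hardware carries a two-year limited warranty",
}

def _vec(text: str) -> Counter:
    # Crude tokenization; stands in for a proper embedding.
    return Counter(t.strip(".,?!").lower() for t in text.split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the most relevant document (the 'R' in RAG)."""
    qv = _vec(query)
    return max(DOCS.values(), key=lambda d: _cosine(qv, _vec(d)))

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved first-party context."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many days do I have to return items?"))
```

The point of the sketch is the architecture: in a logged-in product, you control both the knowledge base and the prompt, so site structure and data quality directly shape the answer, no external search engine involved.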

It’s not just SEO

The discussion also frames SEO primarily as a page-ranking problem tied to discovery.

That perspective misses the broader shift toward entity-level knowledge management (things, not strings).

Visibility within LLMs increasingly depends on how well you structure entities, taxonomies, and knowledge graphs, and on how those systems connect with external data sources.
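The “things, not strings” model boils down to representing facts as entity relationships rather than keyword strings. A minimal sketch of that idea, storing facts as subject–predicate–object triples; the entity IDs and facts are hypothetical:

```python
# Entity-level knowledge as subject-predicate-object triples: the
# "things, not strings" model knowledge graphs are built on.
# All entity IDs and facts here are hypothetical placeholders.
triples = [
    ("acme:AcmeAnalytics", "rdf:type", "schema:Organization"),
    ("acme:AcmeAnalytics", "schema:makesOffer", "acme:InsightPlatform"),
    ("acme:InsightPlatform", "rdf:type", "schema:SoftwareApplication"),
    ("acme:AcmeAnalytics", "owl:sameAs", "wd:Q00000000"),
]

def query(subject=None, predicate=None, obj=None):
    """Pattern-match triples; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# What does the company offer, and where is it linked to external
# knowledge bases?
offers = query("acme:AcmeAnalytics", "schema:makesOffer")
links = query(predicate="owl:sameAs")
print(offers, links)
```

Real systems use RDF stores and SPARQL rather than Python lists, but the discipline is the same: every product, person, and concept becomes a uniquely identified entity with explicit relationships, which is what both Google’s knowledge graph and external datasets can actually consume.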

Most LLMs don’t process data at the petabyte scale Google uses to understand entity relationships. In practice, when something ranks well on Google, third-party LLMs often appear to “trust” Google’s judgment about which brands to show, for what, and when.

HBR’s phrase “engineering recall” points directly to this deeper data engineering work, yet the article never expands on its implications.

LLM model heterogeneity

Another major omission is the diversity of AI systems themselves.

Different AI assistants and models rely on different training datasets, refresh cycles, retrieval mechanisms, and safety layers.

That heterogeneity means you can’t assume a single optimization strategy will work across all AI surfaces.
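One practical response to that heterogeneity is to audit brand representation across models systematically rather than assume uniform behavior. A hypothetical harness sketch: each “model” is just a callable here, standing in for real API clients with their own training data, retrieval, and refresh cycles:

```python
# Hypothetical harness for auditing brand representation across
# heterogeneous AI models. Each "model" is a plain callable here;
# in practice these would wrap real API clients.

def audit_brand(models, prompt, brand):
    """Ask each model the same question and record whether the brand
    appears, surfacing per-model differences in representation."""
    report = {}
    for name, ask in models.items():
        answer = ask(prompt)
        report[name] = {
            "mentioned": brand.lower() in answer.lower(),
            "answer": answer,
        }
    return report

# Stand-in responses simulating two models trained on different data.
models = {
    "model_a": lambda p: "Acme Analytics is a leading vendor here.",
    "model_b": lambda p: "Popular options include BetaCorp and GammaSoft.",
}

report = audit_brand(
    models, "Which analytics vendors should I consider?", "Acme Analytics"
)
print(report)
```

Even this toy version makes the structural point: the same prompt yields different brand visibility per model, so measurement and strategy have to be per-surface, not one-size-fits-all.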

The article also doesn’t explore the risk of broad-stroke approaches. If you try to increase visibility within AI models without accounting for safety filters, attribution errors, or hallucinations, you may gain visibility in ways that are inaccurate or reputationally damaging.

Surface-level tactics won’t build AI visibility

HBR’s article works well as a high-level explanation of how AI is changing marketing. It helps you understand that traditional SEO alone is no longer enough and that you must consider how AI systems see and describe your brand.

As a practical guide, however, the advice is thin. Most recommendations focus on surface-level tactics that many companies will quickly copy, reinforcing the echo chamber of flock tactics that are easy to sell and quantify, but risk narrowing your focus to short-term wins at the expense of longer-term strategy.

The real challenge is deeper. You need clear entity definitions, structured knowledge systems, reliable data in trusted sources AI models use, testing across how different models represent you, and AI-powered experiences within your own products.

“Winning” in the AI era will depend less on cosmetic SEO improvements and more on the harder structural work behind the scenes.
