Why now is the time to prepare for WebMCP


New technologies come and go. Early in my career, I often chased shiny new things in an attempt to be on the cutting edge, but it didn’t take more than a few years to realize I was spending countless hours of my time, and my clients’ time, implementing technologies and techniques that went by the wayside. Google Authorship, anyone?

It turns out that if you simply wait for wider — but still early — adoption, learn from the first movers’ mistakes, and catch up quickly, you can avoid wasting time and create greater value for yourself and those you serve. That lesson has served me well.

And then there are those key moments where the early movers stand to not just win in the current landscape, but to shape and lead the next one. Think of the first people reading the PageRank paper and thinking, “I should build some links.” WebMCP feels like one of those moments, only bigger.

It’s not just a revolution in how search works or even in generative engine visibility. We’re at a moment where the very place discoverability occurs is changing, and who (or rather, what) is doing the discovering is changing with it.

Coming soon: Non-human engagement

While SEOs have long debated whether we should be optimizing for search engines or humans (shockingly, it’s both), that paradigm is about to be turned on its head. What happens when discovery shifts from a human to an LLM or agentic system?

This change is already underway. When you give ChatGPT a request, it makes decisions, runs supplemental searches, asks follow-up questions, and returns conclusions. The agent is planning and deciding on your behalf, and your resulting output is shaped entirely by what it retrieves and how it interprets it.

We can even see the supplemental (fanout) queries in DevTools:

[Screenshot: DevTools showing the fanout queries for "What is there to do on the Outer Banks?"]

I think of this as the latest chapter in a longer story:

  • Discovery v1: People interacted with the world and discovered things firsthand. Experience and word of mouth were the discovery points.
  • Discovery v2: People started writing things down. Libraries and educational institutions became the discovery points, followed by newspapers and books.
  • Discovery v3: The web proliferated information and media at a scale previously unimaginable. Directories, then search engines, rose to aid discovery.
  • Discovery v4 (current): After about 25 years of search engines, LLMs rose and discovery moved to a blended, LLM-forward format. Light agentic capabilities are baked in to assist retrieval. People are still in the loop, but the assistant is doing more of the legwork.
  • Discovery v5 (on the horizon): Agentic systems move beyond being assistants in the retrieval and presentation layer and are given autonomy to act on users’ behalf. Many users will have their own agents. Companies will offer them. Google almost certainly will.

I would argue that the stage we’re entering, Discovery v5, will be the most dramatic since the shift to v2.

Can’t you just imagine a world where basic decisions are offloaded from your brain and body, leaving you room to pursue more important things? I know I’ve seen this utopia before.


I honestly don’t see us ending up in that utopia, but the world we’re creating right now is fundamentally different from the one marketers operate in today, and WebMCP is one of the first concrete steps in that journey.

Dig deeper: WebMCP explained: Inside Chrome 146’s agent-ready web preview

The trust ratchet only turns one way

Do you accept what you read in an AI Overview and stop your journey there more often than you did on the day it launched? Not 100% of the time, but more often than you did? You do. So do I.

For quick, low-risk queries, we’re happy to trust it. If you’re like me, as these systems have evolved and improved, you’ve started trusting them with higher-stakes information.

Would I trust an AI Overview with tax questions or major health decisions? No. Would I trust it to remind me of the benefits of vitamin D or pull together a dinner recipe? Absolutely.

That boundary keeps moving. As it moves, so does what we’re willing to let an agent do on our behalf, not just what we’ll let it tell us.

  • The cost of being wrong when automating the reorder of groceries you’re running low on is small.
  • The benefit of an agent monitoring flight and hotel combinations for an amazing refundable deal, on your days off, within your budget, is very high.
  • The benefit of hopping in an autonomous vehicle with your family after work on a Friday, dinner in hand, playing a game and sleeping, and arriving at Disney World rested just in time for opening — that’s pretty compelling.

You may say you’ll never hand your autonomy to an agentic system. People said the same about search engines, smartphones, and GPS. The path usually goes: 

  • Skepticism (“Who would ever enter their credit card number on a website?!”)
  • Reluctant adoption (“Ugh, it’s an online service, and I trust the company and don’t have a choice. Alright, I’ll give them my card. But just this once.”)
  • Dependency (“I can’t believe I used to actually go into stores!”)

What does this have to do with WebMCP?

Here’s where it gets concrete and actionable.

MCP servers and skills files are early versions of the infrastructure that makes Discovery v5 possible, but the barrier to entry is high, and they apply only in specific contexts. 

WebMCP is different. It’s a browser-native web standard, currently published as a W3C Community Group Draft and in early preview in Chrome 146 beta as of this writing, that gives websites a structured way to expose actions directly to AI agents without scraping, guessing, or brittle automation.

This isn’t a Google-only initiative. The specification is co-authored by engineers from both Google and Microsoft, which matters. When two of the largest browser and AI platform vendors are writing the spec together, it has a different trajectory than a unilateral bet.

Right now, when an AI agent tries to take an action on your website, like filling out a form, booking an assessment, or searching your inventory, it has to figure everything out by reading your page and inferring intent.

It looks at your DOM, guesses what your fields mean, hopes the date format it picks is the one your form expects, and submits. It’s intelligent, but it’s also fragile. One UI change and the whole flow breaks.

WebMCP changes this by letting you tell the agent exactly what your site can do and how to do it. The spec defines two distinct ways to do that: one that closely maps to what you already know, and one that handles more complex, dynamic interactions.

Declarative vs. imperative: You already know this distinction

WebMCP proposes two APIs, and the difference between them will feel familiar to anyone who’s spent time in technical SEO.

The Declarative API is the one that should make you sit up and get to work right away. The idea is straightforward.

  • You annotate your existing HTML forms with attributes that describe what the form does and what each field means.
  • The browser automatically translates that into a structured tool any agent can call. 
  • The form continues working exactly as before for human visitors. 

The agent gets a clean, unambiguous interface.

To be clear, the declarative API is still being formally specified, and the exact attribute names aren’t locked down yet. But the concept is settled, and demos are already running. 

Think of it the way you’d think about schema markup in its early days: the syntax evolved, but the underlying idea, annotating what already exists so machines can understand it, was clear and worth acting on.

The analogy to schema markup is almost exact. You’re not building a new system. You’re making what you already have legible to a new class of visitor. That’s a pattern SEOs understand intuitively.

The Imperative API is more mature in the spec and already available for testing. You register tools directly in JavaScript. Here’s an example for a site taking bookings for an assessment:

navigator.modelContext.registerTool({
  name: "book-assessment",
  description: "Book a free IT assessment for your business.",
  inputSchema: {
    type: "object",
    properties: {
      name: { type: "string", description: "Customer's full name" },
      city: { type: "string", description: "City for the assessment" },
      slot: { type: "string", description: "Preferred time in ISO 8601 format" }
    },
    required: ["name", "city", "slot"]
  },
  execute: async (input) => {
    // your booking logic here
    return { confirmed: true, appointmentId: "APT-001" };
  }
});
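To make the handshake concrete: the agent supplies a JSON object matching inputSchema and receives whatever execute returns. Here's a minimal sketch of that exchange in plain JavaScript. The callTool helper and its validation step are illustrative stand-ins for the agent runtime, not part of the spec:

```javascript
// Illustrative only: a stand-in for how an agent runtime might invoke a
// registered tool. In reality the browser/agent handles this dispatch.
const bookAssessment = {
  name: "book-assessment",
  inputSchema: {
    type: "object",
    required: ["name", "city", "slot"],
  },
  execute: async (input) => ({ confirmed: true, appointmentId: "APT-001" }),
};

async function callTool(tool, input) {
  // Reject the call if required parameters are missing, as a runtime would.
  const missing = (tool.inputSchema.required || []).filter((k) => !(k in input));
  if (missing.length > 0) {
    throw new Error(`Missing parameters: ${missing.join(", ")}`);
  }
  return tool.execute(input);
}

callTool(bookAssessment, {
  name: "Ada Lovelace",
  city: "Norfolk",
  slot: "2026-03-02T10:00:00Z",
}).then((result) => console.log(result.appointmentId)); // "APT-001"
```

The key point is that the agent never touches your DOM: it sees the schema, fills in the parameters, and calls the tool.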

This is more powerful and flexible, the right approach for dynamic interactions, multi-step flows, or anything that can’t map cleanly to a single form. Here’s something that makes it genuinely interesting: the tools available on a page can change based on state.

A hotel booking demo from Google Chrome Labs illustrates this well. After an agent runs a search_location tool, a new filter_search_results tool appears. After selecting a hotel, start_booking becomes available. The agent’s toolset evolves as the user’s journey progresses, just as a well-designed interface guides a human through a flow.
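The pattern is easy to simulate with an ordinary registry object. Everything below (the Map-based registry, the simplified tool bodies) is a sketch of the idea, not the Chrome Labs demo code; only the tool names mirror the demo:

```javascript
// Sketch only: a plain-object registry standing in for the browser's tool
// registration, to show how the toolset can evolve with application state.
const registry = new Map();
const registerTool = (tool) => registry.set(tool.name, tool);

registerTool({
  name: "search_location",
  execute: async ({ city }) => {
    // A search has now happened, so filtering results becomes meaningful:
    // expose the next tool in the journey.
    registerTool({
      name: "filter_search_results",
      execute: async ({ maxPrice }) => ({ matches: [] }),
    });
    return { city, results: 12 };
  },
});

// Before any search, the agent sees a single tool.
console.log([...registry.keys()]); // ["search_location"]

// After the agent calls search_location, a second tool appears.
registry.get("search_location").execute({ city: "Kitty Hawk" });
console.log([...registry.keys()]); // ["search_location", "filter_search_results"]
```

Registering a tool from inside another tool's execute is what lets the toolset track the journey, much as a UI reveals the next step only once it becomes relevant.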

Think of declarative as the equivalent of adding schema markup to existing content: low lift, high legibility, great starting point. The imperative API is like building a fully structured data feed: more effort, more power, better suited to complex or dynamic needs. Most sites should start with declarative and extend into imperative as their needs grow.

A quick note on scope: The example below uses the declarative side of WebMCP because, as we’ve discussed, that’s the easiest place for most site owners and SEOs to start. It maps naturally to existing HTML forms. Add clear machine-readable descriptions to the form and its fields, and the page becomes easier for agents to understand. 

The imperative API is more case-specific. It’s better suited to dynamic flows, multi-step interactions, custom JavaScript logic, or cases where an action does not map cleanly to a single form.

What the agent sees: Before and after

The contrast is easiest to see with something every service business already has: a booking or contact form. This form:

<form action="/contact" method="POST">
  <label for="name">Name</label>
  <input id="name" name="name" type="text" required>
  <label for="email">Email</label>
  <input id="email" name="email" type="email" required>
  <label for="city">City</label>
  <input id="city" name="city" type="text">
  <label for="message">Message</label>
  <textarea id="message" name="message" required></textarea>
  <button type="submit">Send</button>
</form>

Now here is the same form prepared for WebMCP using declarative-style annotations:

<form action="/contact" method="POST"
      toolname="submitContactInquiry"
      tooldescription="Submit a contact inquiry for a service business.">
  <label for="name">Name</label>
  <input
    id="name"
    name="name"
    type="text"
    required
    toolparamdescription="The requester's full name."
  >
  <label for="email">Email</label>
  <input
    id="email"
    name="email"
    type="email"
    required
    toolparamdescription="A valid email address where the requester can be contacted."
  >
  <label for="city">City</label>
  <input
    id="city"
    name="city"
    type="text"
    toolparamdescription="The city where the requester is located."
  >
  <label for="message">Message</label>
  <textarea
    id="message"
    name="message"
    required
    toolparamdescription="The requester's question, project details, or service need."
  ></textarea>
  <button type="submit">Send</button>
</form>

The form still works the same way for a human visitor. Nothing about the normal user experience had to change.

The difference is that an agent no longer has to guess what the form does or what each field means. The form declares its action with toolname and tooldescription, and each important input explains itself with toolparamdescription.

That’s the core idea. You’re not rebuilding the site for agents. You’re making the existing interface easier for them to understand.

And critically, this doesn’t have to mean fully automatic submission. For a contact form, you may want an agent to prepare the form and let the user review it before sending. For a low-risk action, you may eventually allow more automation. The point is that the action becomes explicit, structured, and less fragile.

The attributes proposed for forms are:

  • toolname: The name of the tool (in this case, a form tool).
  • tooldescription: The description of the tool (in this case, the description of a form).
  • toolautosubmit: A boolean attribute that allows the agent to submit the form on the user’s behalf without pausing for confirmation. That may seem unnecessary if you’re picturing a simple chat with ChatGPT, but it makes sense once agents are engaged in complex tasks, hooked up to your email, and trusted to complete something like making reservations or compiling information that requires details beyond a login or form fill.
  • toolparamdescription: A description of a specific parameter, so the agent is aware of the field it’s engaging with.
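Conceptually, the browser compiles those attributes into the same shape of tool definition the imperative API registers directly. The sketch below imitates that translation with regexes over an HTML string; a real browser would read the live DOM, the attribute names follow the current explainer and may still change, and formToTool is my own illustrative helper:

```javascript
// Sketch only: derive a tool definition from a declaratively annotated
// form. Attribute names follow the current explainer draft.
function formToTool(html) {
  // Pull a single attribute value out of a tag string.
  const attr = (tag, name) =>
    (tag.match(new RegExp(`\\s${name}="([^"]*)"`)) || [])[1];
  const formTag = html.match(/<form[^>]*>/)[0];
  const properties = {};
  const required = [];
  for (const tag of html.match(/<(?:input|textarea)[^>]*>/g) || []) {
    const field = attr(tag, "name");
    if (!field) continue;
    properties[field] = {
      type: "string",
      description: attr(tag, "toolparamdescription"),
    };
    if (/\srequired[\s>]/.test(tag)) required.push(field);
  }
  return {
    name: attr(formTag, "toolname"),
    description: attr(formTag, "tooldescription"),
    inputSchema: { type: "object", properties, required },
  };
}

const tool = formToTool(`
  <form action="/contact" method="POST"
        toolname="submitContactInquiry"
        tooldescription="Submit a contact inquiry.">
    <input id="email" name="email" type="email" required
           toolparamdescription="A valid contact email.">
  </form>
`);
console.log(tool.name); // "submitContactInquiry"
```

The output has the same shape as the inputSchema passed to navigator.modelContext.registerTool, which is the point: declarative and imperative are two routes to one kind of tool definition.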

You can keep up with the specifics of the declarative API as it evolves in the Declarative API Explainer.

Why this matters for your sites specifically

Think about the types of queries agentic systems will handle on behalf of users in a Discovery v5 world:

  • “Find me an SEO consultant who understands technical SEO, doesn’t talk like a LinkedIn carousel, and has time for a call next week.”
  • “Compare three AI agent observability tools and tell me which one seems most likely to solve my actual problem instead of selling me a chatbot.”
  • “Find a contra dance near me this Friday, check whether beginners are welcome, and add it to my calendar if the band looks fun.”

Which site gets the engagement? The one the agent can interact with cleanly, confidently, and without friction. If your competitor has WebMCP-registered tools and you don’t, the agent completes the action on their site and moves on. The user may never know they had a choice.

There’s a secondary implication worth naming. Tool descriptions are the new meta descriptions. The quality of your tool name, description, and parameter definitions will directly shape whether an agent selects your tool over a competitor’s, understands what it does, and calls it correctly. 

The best practices guidance in the WebMCP documentation reads like conversion copywriting. Use clear verbs, explain the why behind options, and be specific about what each parameter means. If that sounds familiar, it should. You’ve been writing for machine readers for years. This is the next layer.

The window is open, but not forever

I’ve been skeptical of early adoption my whole career. I still am, as a default. But I’ve also learned to recognize the moments that are different in kind, not just degree.


Schema markup was one. SSL was one. Mobile optimization was one. Each time, the window in which early movers earned disproportionate returns was real and finite. In each case, the people who understood the underlying shift, not just the tactic, were the ones who compounded that advantage.

WebMCP is a W3C Community Group Draft today, co-authored by Google and Microsoft, already running in Chrome 146 beta, and already integrated into Cloudflare’s infrastructure. It’s not table stakes yet. But the trajectory is clear:

  • The spec matures.
  • Browsers ship it.
  • Agents learn to prefer sites that expose structured tools.
  • The sites that haven’t caught up become invisible to that class of visitor.

The declarative approach, once finalized, means the barrier to starting will be genuinely low: annotations on your most important forms, not a new backend system. The imperative API is available for testing right now.

That’s the argument. It’s the reason I’m making it now, not in six to 12 months when everyone else is trying to catch up.
