No-JavaScript fallbacks in 2026: Less critical, still necessary


Google can render JavaScript. That’s no longer up for debate. But that doesn’t mean it always does — or that it does so instantly or perfectly.

Since Google’s 2024 comments suggesting it renders all HTML pages, many developers have questioned whether no-JavaScript fallbacks are still necessary. Two years later, the answer is clearer and more nuanced.

Google’s stance on JavaScript rendering

In July 2024, Google sparked debate during an episode of Search Off the Record titled “Rendering JavaScript for Google Search.” Raising the question of how Google decides which pages to render, Martin Splitt asked:

  • “If it’s so expensive, how do we decide which page should get rendered and which one doesn’t?” 

Zoe Clifford, from Google’s rendering team, replied: 

  • “We just render all of them, as long as they’re HTML, and not other content types like PDFs.”

That comment quickly led developers, especially those building JavaScript-heavy or single-page applications, to argue that no-JavaScript fallbacks were no longer necessary.

Many SEOs weren’t convinced. The remark was informal, untested at scale, and lacking detail. It wasn’t clear:

  • How rendering fit into Googlebot’s process.
  • Whether pages were queued for later execution.
  • How the system behaved under resource constraints.
  • Whether Google might fall back to non-rendered crawling under load.

Without clarity on timing, consistency, and limits, removing fallbacks entirely still felt risky.

What Google’s documentation actually says

Google’s documentation now gives us a much clearer picture of how JavaScript rendering actually works. Let’s start with the “JavaScript SEO basics” page:

[Screenshot: Google’s “JavaScript SEO basics” documentation]

What Google says:

  • “Googlebot queues all pages with a 200 HTTP status code for rendering, unless a robots meta tag or header tells Google not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Google’s resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Google also uses the rendered HTML to index the page.”

Google clearly states that JavaScript rendering doesn’t necessarily happen on the initial crawl. Once resources allow, a headless Chromium instance renders the page and executes its JavaScript.

Googlebot doesn’t interact with pages the way a user does, so rendering likely covers only scripts that fire without user interaction; it won’t click, tap, or scroll to trigger additional code.

This is important because it tells us Google may make some basic determinations before JavaScript is rendered, via subsequent execution queues. 

If content is generated behind elements (content tabs, etc.) that Google doesn’t click, it likely won’t be discovered without no-JavaScript fallbacks.
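One way to surface this risk in an audit is to compare the initial HTML response against the rendered DOM and flag critical content that only exists after JavaScript runs. A minimal sketch, assuming you already have both HTML strings (the function name and sample markup are illustrative, not part of any real crawler):

```python
# Sketch: flag critical phrases that appear only after JavaScript rendering.
# "raw_html" would come from a plain HTTP fetch; "rendered_html" from a
# headless browser. Both are stand-ins here, not a real crawler integration.

def js_only_content(raw_html: str, rendered_html: str,
                    critical_phrases: list[str]) -> list[str]:
    """Return the critical phrases present in the rendered DOM but missing
    from the initial HTML response, i.e. content with no no-JS fallback."""
    return [p for p in critical_phrases
            if p in rendered_html and p not in raw_html]

raw = "<html><body><div id='tabs'></div></body></html>"
rendered = "<html><body><div id='tabs'><p>Pricing details</p></div></body></html>"

print(js_only_content(raw, rendered, ["Pricing details", "Contact us"]))
# → ['Pricing details']
```

Anything this check returns is content a non-rendering crawler will never see, and a candidate for a no-JavaScript fallback.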

Looking at Google’s “How Search works” documentation:

[Screenshot: Google’s “How Search works” documentation]

The language is much simpler. Google states it will attempt, at some point, to execute any discovered JavaScript. There’s nothing here that directly contradicts what we’ve seen so far in other Google documentation.

On March 31, Google published a post titled “Inside Googlebot: demystifying crawling, fetching, and the bytes we process,” which further clarifies JavaScript crawling.

[Screenshot: Google’s “Inside Googlebot” post on partial fetching]

The notes on partial fetching are particularly interesting. Google will only crawl up to 2MB of HTML. If a page exceeds this, Google won’t discard it entirely, but instead examines only the first 2MB of returned code.

Google explicitly states that extreme resource bloat, including large JavaScript modules, can still be a problem for indexing and ranking. 

If your JavaScript approaches 2MB and appears at the top of the page, it may push HTML content far enough down that Google won’t see it. The 2MB limit also applies to individual resources pulled into a page. If a CSS file, image, or JavaScript module exceeds 2MB, Google will ignore it.
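The size audit this implies is simple to sketch. Assuming the 2MB-per-resource limit described above, a check over measured byte sizes might look like this (the URLs and sizes are made up for illustration):

```python
# Sketch of a size audit based on the 2MB fetch limit described above.
# The limit applies per resource: HTML beyond 2MB is truncated, and
# individual sub-resources (CSS, JS, images) over 2MB are ignored.

LIMIT = 2 * 1024 * 1024  # 2MB in bytes

def over_limit(resources: dict[str, int]) -> list[str]:
    """Return resource URLs whose byte size exceeds the 2MB cap."""
    return [url for url, size in resources.items() if size > LIMIT]

pages = {
    "/index.html": 350_000,
    "/bundle.js": 2_400_000,   # a bloated JS module Google would skip
    "/styles.css": 90_000,
}
print(over_limit(pages))  # → ['/bundle.js']
```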

We’re beginning to see that Google’s claim that it renders all pages comes with important caveats. 

In practice, it seems unlikely that a page with no consideration for server-side rendering (SSR) or no-JavaScript fallbacks would be handled optimally. This highlights why it’s risky to take comments from Googlers at face value without following how the details evolve over time.

The question we opened with is also evolving. It’s less “Do I need blanket no-JavaScript fallbacks in 2026?” and more “Do I still need critical-path fallbacks and resilient HTML within my application?”

Google’s recent search documentation updates add more context:

[Screenshot: Google’s updated search documentation on JavaScript]

Google has recently softened its language around JavaScript. It now says it has been rendering JavaScript for “multiple years” and has removed earlier guidance that suggested JavaScript made things harder for Search. 

It also notes that more assistive technologies now support JavaScript than in the past. 

Within that same documentation, Google still recommends pre-rendering approaches, such as server-side rendering and edge-side rendering.

[Screenshot: Google’s pre-rendering recommendations]

So while the language is softer, Google isn’t suggesting developers can ignore how JavaScript affects SEO.
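The practical difference behind that pre-rendering recommendation can be sketched in miniature. This is illustrative only, not tied to any specific framework; the function names and markup are invented for the example:

```python
# Illustrative contrast: a client-rendered shell ships no content in the
# initial HTML, while a server-rendered response includes the critical
# content up front and lets JavaScript hydrate it afterwards.

def client_shell() -> str:
    # Crawlers that skip JavaScript see only an empty mount point.
    return "<div id='app'></div><script src='/app.js'></script>"

def server_rendered(title: str, body: str) -> str:
    # The same content is present in the initial HTML response.
    return (f"<div id='app'><h1>{title}</h1><p>{body}</p></div>"
            "<script src='/app.js'></script>")

html = server_rendered("Pricing", "Plans start at $10/month.")
print("Pricing" in client_shell(), "Pricing" in html)  # → False True
```

Either way the page can behave identically for users with JavaScript enabled; the difference is what a non-rendering crawler receives.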

Looking again at the December 2025 updates:

[Screenshot: Google’s December 2025 documentation updates]

Google states that non-200 pages may not receive JavaScript execution. This suggests no-JavaScript fallbacks for internal linking within custom 404 pages may still be important.

Google also notes that canonical tags are processed both before and after JavaScript rendering. If source HTML canonicals and JavaScript-modified canonicals don’t match, this can cause significant issues. Google suggests either omitting canonical directives from the source HTML so they’re only evaluated after rendering, or ensuring JavaScript doesn’t modify them.

These updates reinforce an important point: even as Google becomes more capable at rendering JavaScript, the initial HTML response and status code still play a critical role in discovery, canonical handling, and error processing.

Dig deeper: Google removes accessibility section from JavaScript SEO section

What the data shows

JavaScript rendering is introducing new inconsistencies across the web, according to recent HTTP Archive data:

[Chart: HTTP Archive data on pages with valid canonical links]

We can see that since November 2024, the percentage of crawled pages with valid canonical links has dropped.

Via the HTTP Archive’s 2025 Web Almanac:

[Chart: HTTP Archive 2025 Web Almanac, pages with changed canonical URLs after rendering]

About 2-3% of rendered pages exhibit a “changed” canonical URL, something Google’s documentation explicitly states can be confusing for its indexing and ranking systems. That 2-3% doesn’t explain the larger drop in valid canonical deployment since November 2024.

Other factors are likely at play, such as the adoption of new CMS platforms that don’t properly handle canonicals. The rise of vibe-coded websites using tools like Cursor and Claude Code may also be contributing to these issues across the web.

In July 2024, Vercel published a study to help demystify Google’s JavaScript rendering process:

[Screenshot: Vercel’s study of Googlebot rendering]

It analyzed more than 100,000 Googlebot fetches and found that all resulted in full-page renders, including pages with complex JavaScript. However, 100,000 fetches is a relatively small sample given Googlebot’s scale. 

The study was also limited to sites built on specific frameworks, so it’s unwise to assume Google always renders pages perfectly. It’s also unclear how deeply those renders were analyzed.

It does suggest that Google attempts to fully render most pages it encounters. Broadly speaking, Google can generate JavaScript-modified renders, but the quality of those renders is still up for debate. As noted earlier, the 2MB page and resource limits still apply.

Because this study dates to mid-2024, Google’s updated 2025–2026 documentation should take precedence wherever the two conflict.

Vercel also published a notable finding:

  • “Most AI crawlers don’t execute JavaScript. We tested the major ones (ChatGPT, Claude, and others), and the results were consistent: none of them render client-side content. If your Next.js site ships critical pages as JavaScript-dependent SPAs, those pages are inaccessible to the systems shaping how people discover information.”

So even if Google is far more capable with JavaScript than it used to be, that’s not true across the broader web ecosystem. Many systems still rely on HTML-first delivery. That’s why you shouldn’t rush to remove no-JavaScript fallbacks — they may still be critical to your future visibility.

Cloudflare’s 2025 review is also worth noting:

[Chart: Cloudflare’s 2025 review of crawler traffic]

Cloudflare reported that Googlebot alone accounted for 4.5% of HTML request traffic. While this doesn’t directly explain how Google handles JavaScript, it does highlight the scale at which Google continues to crawl the web.

Dig deeper: How the DOM affects crawling, rendering, and indexing

No-JavaScript fallbacks in 2026

The question we set out to answer was whether no-JavaScript fallbacks are required in 2026.

Google is far more capable with JavaScript than in previous years. Its documentation shows that pages are queued for rendering, and that JavaScript is executed and used for indexing. For many sites, heavy reliance on JavaScript is no longer the red flag it once was.

However, the details of Google’s rendering process still matter. Rendering isn’t always immediate. There are resource constraints, and not all behaviors are supported.

At the same time, the broader web ecosystem hasn’t necessarily kept pace with Google. The risk of removing all no-JavaScript fallbacks hasn’t disappeared — it’s just changed shape.

Key takeaways:

  • Google doesn’t necessarily render JavaScript on the first crawl. There’s a rendering queue, and execution happens when resources allow.
  • Technical limits still exist, including a 2MB HTML and resource cap, and limited interaction with user-triggered elements.
  • Non-200 responses may not receive rendering treatment, which keeps basic HTML and linking important in some cases.
  • Differences between raw HTML and rendered output still exist at scale across the web.
  • Google’s guidance still leans toward SSR (server-side rendering), pre-rendering, and resilient HTML for critical content.
  • Other crawlers, especially AI-driven ones, often don’t execute JavaScript at all. As these systems become more important, the need for fallbacks may increase again.
  • Blanket, site-wide no-JavaScript fallbacks aren’t universally required in 2026, but critical content, links, and signals shouldn’t depend entirely on JavaScript. Many modern crawlers still rely on HTML-first delivery.

For now, no-JavaScript fallbacks for critical architecture, links, and content are still strongly recommended, if not required going forward.
