React SPAs have an SEO problem. Here's how to solve it.
React is the most popular frontend framework in the world. Millions of applications are built with it. And the vast majority of them — every Create React App project, every Vite + React setup, every app generated by Lovable, Bolt.new, or Base44 — share the same fundamental SEO limitation.
They render content in the browser, not on the server. Most crawlers can't see that content.
This guide covers everything you need to know to fix this — without migrating to Next.js.
Why React SPAs are invisible to most crawlers
A standard React SPA works like this:
1. The server sends a minimal HTML file containing `<div id="root"></div>`
2. The browser downloads and executes your JavaScript bundle
3. React renders your components into the DOM
4. The user sees the complete page

The problem: most crawlers skip steps 2 and 3. They read the HTML from step 1 and move on.
The crawler compatibility matrix
| Crawler | Renders JavaScript | Sees React content |
|---|---|---|
| Googlebot | Yes (with delays) | Usually yes |
| GPTBot (ChatGPT) | No | No |
| ClaudeBot | No | No |
| PerplexityBot | No | No |
| Bingbot | Partially | Often no |
| Social bots (LinkedIn, X, Facebook) | No | No |
| DuckDuckBot | No | No |
Only Googlebot reliably renders JavaScript — and even then, it uses a secondary rendering queue that can delay indexing by days or weeks for new sites.
The 8 things you need to get right
1. Serve rendered HTML to crawlers
This is the single most impactful fix. Everything else is secondary if crawlers can't read your content.
Your options:
- Pre-rendering middleware like CrawlReady — deploys in minutes, zero code changes, serves rendered HTML to bots while humans get the normal SPA
- Server-side rendering via Next.js — requires a complete migration but is the cleanest long-term architecture
- Static export — works for content-only sites but not for dynamic applications
For existing React SPAs, pre-rendering is the pragmatic choice. For new projects, consider starting with Next.js.
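Whichever option you choose, the heart of dynamic rendering is a User-Agent check that routes bots to rendered HTML. A minimal sketch in plain JavaScript; the bot list is illustrative rather than exhaustive, and production middleware should also verify crawler IP ranges to avoid spoofing:

```javascript
// Detect known crawlers by User-Agent substring. The list below is
// illustrative, not exhaustive; real middleware should also verify bot
// IP addresses, since User-Agent strings can be spoofed.
const BOT_AGENTS = [
  'googlebot', 'bingbot', 'gptbot', 'claudebot', 'perplexitybot',
  'linkedinbot', 'twitterbot', 'facebookexternalhit', 'slackbot',
];

function isBot(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return BOT_AGENTS.some((bot) => ua.includes(bot));
}

// A server or edge function would branch on this result, e.g. (hypothetical
// helpers): isBot(ua) ? servePrerenderedHtml() : serveSpaShell()
console.log(isBot('Mozilla/5.0 (compatible; GPTBot/1.2)')); // true
console.log(isBot('Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0')); // false
```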
2. Manage meta tags with React Helmet
React Helmet (or React Helmet Async) lets you set `<title>`, `<meta>`, and other head tags from within your components:
```jsx
import { Helmet } from 'react-helmet-async'

function ProductPage() {
  return (
    <>
      <Helmet>
        <title>Product Name — Your Company</title>
        <meta name="description" content="A clear, compelling description under 160 characters." />
        <link rel="canonical" href="https://your-site.com/product" />
        <meta property="og:title" content="Product Name" />
        <meta property="og:description" content="A clear, compelling description." />
        <meta property="og:image" content="https://your-site.com/og-image.jpg" />
        <meta property="og:url" content="https://your-site.com/product" />
        <meta property="og:type" content="website" />
      </Helmet>
      {/* Page content */}
    </>
  )
}
```
Critical caveat: React Helmet injects these tags via JavaScript. Non-JS crawlers won't see them unless you pre-render. This is why point #1 is foundational.
3. Set up proper heading structure
Search engines and AI crawlers use headings to understand page structure:
- One `<h1>` per page — your main topic or product name
- `<h2>` tags for major sections
- `<h3>` tags for subsections
- Never skip levels (don't go h1 → h3)
```html
<h1>AI-Powered Music Coaching</h1>
<h2>Features</h2>
<h3>Real-Time Pitch Analysis</h3>
<h3>Timing Feedback</h3>
<h2>Pricing</h2>
<h2>FAQ</h2>
```
4. Implement canonical URLs
Every page should declare its canonical URL to prevent duplicate content issues:
```jsx
<Helmet>
  <link rel="canonical" href="https://your-site.com/current-page" />
</Helmet>
```
Watch for common React SPA issues:
- Trailing slash inconsistency (`/page` vs `/page/`)
- Query parameters creating duplicates (`/page?ref=twitter`)
- Old subdomains still being referenced
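One way to guard against these issues is to derive the canonical URL in a single helper instead of hand-writing it per page. A small sketch; the origin constant and normalization rules are assumptions to adapt to your own routing:

```javascript
// Normalize a pathname into a canonical URL: drop the query string and
// hash, strip any trailing slash (keeping the bare root "/").
// ORIGIN is an assumption; substitute your production domain.
const ORIGIN = 'https://your-site.com';

function canonicalUrl(pathname) {
  const path = pathname.split(/[?#]/)[0].replace(/\/+$/, '') || '/';
  return ORIGIN + path;
}

console.log(canonicalUrl('/page/?ref=twitter')); // "https://your-site.com/page"
console.log(canonicalUrl('/'));                  // "https://your-site.com/"
```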
5. Add Open Graph tags for social sharing
When someone shares your link on LinkedIn, X, Facebook, Slack, or Discord, those platforms fetch your page and look for OG tags to build the preview card.
Required OG tags:
```html
<meta property="og:title" content="Your Page Title" />
<meta property="og:description" content="A compelling description." />
<meta property="og:image" content="https://your-site.com/og-image.jpg" />
<meta property="og:url" content="https://your-site.com/page" />
<meta property="og:type" content="website" />
```
Image requirements:
- Minimum 1200x630px for best display
- Use absolute URLs (not relative paths)
- No spaces or special characters in filenames
- Test with Facebook's Sharing Debugger and Twitter's Card Validator
6. Generate and submit a sitemap
Your React SPA needs an XML sitemap listing all important routes:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://your-site.com/</loc>
    <lastmod>2026-03-15</lastmod>
  </url>
  <url>
    <loc>https://your-site.com/features</loc>
    <lastmod>2026-03-15</lastmod>
  </url>
  <url>
    <loc>https://your-site.com/pricing</loc>
    <lastmod>2026-03-15</lastmod>
  </url>
</urlset>
```
For SPAs, you typically need to generate this statically (at build time) or serve it from your server/middleware. React Router doesn't automatically create sitemaps.
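A build-time generator can be as simple as mapping a route list to XML. A sketch of the idea; the route list, date, and output path are placeholder assumptions to wire into your own build script:

```javascript
// Build-time sitemap sketch: turn a hand-maintained route list into XML.
// ORIGIN and the routes are assumptions for illustration.
const ORIGIN = 'https://your-site.com';

function buildSitemap(routes, lastmod) {
  const urls = routes
    .map((route) =>
      `  <url>\n    <loc>${ORIGIN}${route}</loc>\n    <lastmod>${lastmod}</lastmod>\n  </url>`)
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>\n`;
}

const xml = buildSitemap(['/', '/features', '/pricing'], '2026-03-15');
console.log(xml);
// In a real build step you would write the result out, e.g.:
// require('fs').writeFileSync('public/sitemap.xml', xml);
```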
Submit your sitemap to Google Search Console at https://search.google.com/search-console.
7. Add structured data (JSON-LD)
Structured data helps search engines and AI systems understand your content at a semantic level:
```jsx
function OrganizationSchema() {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    name: "Your Company",
    url: "https://your-site.com",
    description: "What your company does in one sentence.",
    logo: "https://your-site.com/logo.png",
  }
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
    />
  )
}
```
Useful schema types for SaaS products:
- Organization — who you are
- Product — what you sell
- FAQPage — frequently asked questions (great for rich snippets)
- Article — blog posts and guides
- BreadcrumbList — navigation hierarchy
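As a second example, FAQ content uses the FAQPage type, rendered through the same `<script type="application/ld+json">` pattern as the Organization component. The question and answer below are placeholders; they should mirror the FAQ text actually visible on the page:

```javascript
// Sketch: FAQPage JSON-LD. The question/answer text is a placeholder and
// should match the visible FAQ content on the page.
const faqSchema = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'Do I need to migrate to Next.js for SEO?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'No. Pre-rendering can serve crawlers rendered HTML without a migration.',
      },
    },
  ],
};

console.log(JSON.stringify(faqSchema, null, 2));
```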
Again: JSON-LD injected via React is invisible to non-JS crawlers without pre-rendering.
8. Configure robots.txt
Create a robots.txt in your public directory that explicitly allows AI crawlers:
```txt
User-agent: *
Allow: /

User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://your-site.com/sitemap.xml
```
Check your current robots.txt by visiting https://your-site.com/robots.txt. Many hosting platforms add default restrictions you might not know about.
Common React SPA SEO mistakes
Mistake 1: Assuming Google sees everything
Googlebot does render JavaScript, but through a delayed queue. New pages can take days to weeks to be rendered. Low-authority sites get lower priority. And even Google recommends server-side rendering for critical content.
Mistake 2: Relying on react-snap or react-snapshot
These tools pre-render at build time, which works for static content but breaks for dynamic routes, authenticated pages, and content that changes frequently. They're also no longer actively maintained.
Mistake 3: Ignoring AI crawlers
In 2026, AI search is no longer optional. GPTBot traffic grew 305% year-over-year. If your React app is invisible to AI crawlers, you're missing a significant and growing discovery channel.
Mistake 4: Using hash routing
React Router's `HashRouter` uses URLs like `your-site.com/#/page`. Crawlers typically ignore everything after the `#`. Always use `BrowserRouter` for SEO-relevant routes.
Mistake 5: Forgetting about client-side redirects
If your React app handles redirects in JavaScript (e.g., `useNavigate()` or `<Navigate />`), crawlers won't follow them. Server-side redirects (301/302) are required for SEO.
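What a server-side redirect looks like depends on your host (a Netlify `_redirects` file, Cloudflare rules, nginx config, and so on). A framework-free Node-style sketch of the idea, with an illustrative route mapping:

```javascript
// Sketch: a server-side 301 that crawlers can follow, unlike a
// client-side <Navigate /> redirect. The route mapping is illustrative.
const REDIRECTS = { '/old-pricing': '/pricing' };

function handleRedirect(req, res) {
  const target = REDIRECTS[req.url];
  if (!target) return false; // fall through to the normal SPA handler
  res.writeHead(301, { Location: target }); // permanent, SEO-friendly
  res.end();
  return true;
}

// Plugged into a plain Node server (serveSpa is a hypothetical helper):
// http.createServer((req, res) => {
//   if (!handleRedirect(req, res)) serveSpa(req, res);
// });
```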
The React SPA SEO checklist
- Crawlers receive rendered HTML (not empty shell)
- Every page has a unique `<title>` under 60 characters
- Every page has a `<meta name="description">` under 160 characters
- Every page has a `<link rel="canonical">` pointing to the correct URL
- One `<h1>` per page with no heading level skips
- Open Graph tags set for social sharing
- OG image is at least 1200x630px with absolute URL
- XML sitemap generated and submitted to GSC
- robots.txt allows all important crawlers including AI bots
- JSON-LD structured data for Organization, Product, or relevant types
- BrowserRouter used (not HashRouter)
- No JavaScript-only redirects for SEO-important routes
- Google Search Console verified and sitemap submitted
- Tested with `curl` to confirm what the raw HTML looks like
Next steps
- Run a CrawlReady audit — see your current visibility gap and specific issues
- Read our comparison of dynamic rendering vs SSR vs pre-rendering to choose your fix
- Check our guide on structured data for SPAs for the JSON-LD deep-dive
This guide applies to React SPAs built with Create React App, Vite, or any client-side rendering setup. If you're already using Next.js with SSR, most of these issues don't apply — but you should still verify with an audit.
Ready to fix your visibility?
CrawlReady deploys in minutes on your Cloudflare account. No code changes. No proxy. Starting at $9/mo.
See Pricing