Quick Answer
Cross-site scripting (XSS) in React and Next.js most commonly comes from dangerouslySetInnerHTML and direct innerHTML assignments that pass unsanitized user content into the DOM. React normally prevents XSS automatically, but these two patterns bypass that protection entirely. The fix is to sanitize HTML with DOMPurify before rendering, or eliminate the pattern altogether.
How React's XSS Protection Works - and Where It Breaks
React is often described as "XSS-safe by default." That is mostly true: when you render {userContent} in JSX, React escapes the string so any HTML tags or script injections are treated as plain text. An attacker cannot inject <script>alert(1)</script> through a normal JSX expression.
The protection disappears the moment you use dangerouslySetInnerHTML. The name is a deliberate warning from the React team - dangerous is in the property name - but AI code generators use it freely because it solves the problem of rendering HTML content quickly. According to the OWASP Cross-Site Scripting Prevention Cheat Sheet, XSS remains one of the top three most exploited web vulnerabilities, and reflected/stored XSS via innerHTML is the dominant vector in single-page applications.
A 2024 GitGuardian analysis of public repositories found that dangerouslySetInnerHTML appears in over 28% of React codebases that handle user-generated content, and fewer than 10% of those usages included a sanitization step. In AI-generated code, the ratio is worse: AI models pattern-match to the fastest solution, and dangerouslySetInnerHTML={{ __html: content }} is exactly that.
The XSS Patterns AI Code Generates
Three specific patterns produced by AI tools like Cursor, Bolt, and v0 account for most XSS exposure in React and Next.js applications.
```jsx
// ❌ BAD - Pattern 1: dangerouslySetInnerHTML with raw user input
function BlogPost({ post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      {/* AI renders body as HTML for "rich text support" */}
      <div dangerouslySetInnerHTML={{ __html: post.body }} />
    </article>
  );
}
```
```jsx
// ❌ BAD - Pattern 2: Direct innerHTML in a useEffect or event handler
useEffect(() => {
  document.getElementById('preview').innerHTML = userInput;
}, [userInput]);
```
```jsx
// ❌ BAD - Pattern 3: Passing href built from user data
function UserLink({ profile }) {
  return <a href={profile.website}>Visit site</a>;
  // If website is "javascript:alert(document.cookie)", this executes on click
}
```
Pattern 3 is especially subtle. React does not sanitize href values, so a javascript: URI in a link attribute executes script when the user clicks it. AI tools generate this pattern when building user profile pages that display a website URL the user provided.
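A stricter check than a `startsWith` prefix test is to parse the value with the WHATWG URL API, which normalizes tricks that string comparisons miss, such as mixed case (`JaVaScRiPt:`) or leading whitespace. A minimal sketch, treating anything that is not an absolute http(s) URL as unsafe:

```javascript
// Allow only absolute http(s) URLs. The URL parser normalizes case and
// strips leading whitespace, so obfuscated javascript: URIs are caught.
function isSafeHref(href) {
  try {
    const url = new URL(href); // throws on relative or unparseable input
    return url.protocol === 'http:' || url.protocol === 'https:';
  } catch {
    return false; // treat anything unparseable as unsafe
  }
}

console.log(isSafeHref('https://example.com'));               // true
console.log(isSafeHref('javascript:alert(document.cookie)')); // false
console.log(isSafeHref(' JaVaScRiPt:alert(1)'));              // false
```

Note that this rejects relative URLs as a side effect; if your app needs them, resolve against a base URL before checking the protocol.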
Safe Alternatives for Every Pattern
```jsx
// ✅ GOOD - Sanitize HTML before rendering with DOMPurify
import DOMPurify from 'dompurify';

function BlogPost({ post }) {
  const cleanBody = DOMPurify.sanitize(post.body, {
    ALLOWED_TAGS: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['href', 'target', 'rel'],
  });
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: cleanBody }} />
    </article>
  );
}
```
```jsx
// ✅ GOOD - Validate href to prevent javascript: URIs
function UserLink({ profile }) {
  const isSafeUrl =
    profile.website?.startsWith('https://') ||
    profile.website?.startsWith('http://');
  if (!isSafeUrl) return <span>{profile.website}</span>;
  return (
    <a href={profile.website} rel="noopener noreferrer">
      Visit site
    </a>
  );
}
```
For server-rendered Next.js content (App Router), run sanitization on the server in a Server Component so the clean HTML is what reaches the client. Never pass raw database content to dangerouslySetInnerHTML even if it came from your own users - stored XSS attacks come from user-supplied content that was saved to the database before validation existed.
Where Each Pattern Appears in Real Apps
| Scenario | Risky Pattern | Safe Alternative |
|---|---|---|
| Blog/CMS body content | dangerouslySetInnerHTML={{ __html: body }} | DOMPurify.sanitize() then render |
| Comment rendering | innerHTML = comment.text | Render as plain text via JSX expression |
| User profile website | <a href={user.website}> | Validate URL scheme before rendering |
| Rich-text editor preview | Raw editor HTML output in innerHTML | sanitize() on the editor's getHTML() output |
| Search result highlighting | HTML with <mark> tags via dangerouslySetInnerHTML | Build React elements for highlights instead |
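The search-highlighting alternative deserves a sketch: instead of injecting an HTML string containing <mark> tags, compute the highlight segments as plain data and let JSX render them. The helper below is illustrative, not a library API:

```javascript
// Split text into { text, highlighted } segments so a component can render
// <mark> elements with JSX instead of injecting an HTML string.
function highlightSegments(text, query) {
  if (!query) return [{ text, highlighted: false }];
  const segments = [];
  const lowerQuery = query.toLowerCase();
  let rest = text;
  while (rest.length > 0) {
    const idx = rest.toLowerCase().indexOf(lowerQuery);
    if (idx === -1) {
      segments.push({ text: rest, highlighted: false });
      break;
    }
    if (idx > 0) segments.push({ text: rest.slice(0, idx), highlighted: false });
    segments.push({ text: rest.slice(idx, idx + query.length), highlighted: true });
    rest = rest.slice(idx + query.length);
  }
  return segments;
}

// In a component, render the segments safely:
//   segments.map((s, i) => s.highlighted ? <mark key={i}>{s.text}</mark> : s.text)
console.log(highlightSegments('React escapes XSS by default', 'xss'));
```

Because every segment passes through a JSX expression, React escapes it, and no user-controlled string ever reaches the DOM as markup.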
Server-Side XSS in Next.js API Routes
XSS is not limited to the frontend. Next.js API routes that return HTML strings - for example, an endpoint that generates email previews or report exports - can also be vectors if user content is interpolated into the response without escaping.
If your API route returns Content-Type: text/html, any user content in that response must be HTML-escaped. Use a library like he for HTML entity encoding, or the native encodeURIComponent for values placed in URL contexts. Veracode's 2024 State of Software Security report found that cross-site scripting accounts for 21% of all web application vulnerabilities detected in production scans - making it the second most common finding after SQL injection.
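A sketch of that escaping step: escapeHtml below is a minimal stand-in for a library such as he, and renderPreview is a hypothetical helper standing in for whatever builds your HTML response body:

```javascript
// Entity-escape user content before interpolating it into a text/html
// response. escapeHtml is a minimal stand-in for a library such as `he`.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Hypothetical email-preview builder: every interpolated value is escaped.
function renderPreview(name) {
  return `<html><body><h1>Preview for ${escapeHtml(name)}</h1></body></html>`;
}

console.log(renderPreview('<img src=x onerror=alert(1)>'));
// → <html><body><h1>Preview for &lt;img src=x onerror=alert(1)&gt;</h1></body></html>
```

In an App Router route handler you would then return something like `new Response(renderPreview(name), { headers: { 'Content-Type': 'text/html' } })`.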
The Content Security Policy (CSP) header adds a second line of defense. Setting Content-Security-Policy: script-src 'self' in your Next.js next.config.js headers configuration prevents injected scripts from loading external payloads, reducing the impact of any XSS that does slip through. But CSP is a mitigation, not a fix - the root cause must be addressed in code.
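A sketch of the headers configuration in next.config.js. The policy value is deliberately minimal; a real app will likely need additional directives (style-src, img-src, and allowances for Next's own inline runtime scripts):

```javascript
// next.config.js - send a CSP header on every route. Minimal illustration;
// production policies usually need more directives than this.
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          {
            key: 'Content-Security-Policy',
            value: "default-src 'self'; script-src 'self'",
          },
        ],
      },
    ];
  },
};
```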
How to Audit Your React App for XSS
A targeted search is faster than a full manual review. In your codebase, look for every occurrence of dangerouslySetInnerHTML, innerHTML, outerHTML, and document.write. For each one, trace the data source back to its origin. If any path leads through user input - query parameters, form fields, database content originally from users, URL parameters - the usage is a vulnerability candidate.
For href and src attributes, look for patterns where the value comes from an object property or variable rather than a string literal. Any dynamic URL should be validated before use. Tools like VibeDoctor (vibedoctor.io) automatically scan your codebase for XSS-prone patterns and flag specific file paths and line numbers. Free to sign up.
FAQ
Is dangerouslySetInnerHTML always dangerous?
Not if the HTML is sanitized before use. It is the unsanitized usage that is dangerous. If you control the HTML entirely - for example, it is generated from a Markdown parser that only produces safe output - the risk is much lower. The issue is when user-controlled strings reach it without a sanitization step.
Does DOMPurify work in Next.js Server Components?
DOMPurify requires a DOM environment, so it does not run directly in Node.js Server Components. On the server, use isomorphic-dompurify or sanitize the content at write-time (when saving to the database via Supabase or Prisma) rather than at read-time. An alternative is the sanitize-html npm package, which is fully Node.js compatible.
Can AI-generated code create stored XSS versus reflected XSS?
Both. Reflected XSS comes from rendering URL parameters or request data without escaping. Stored XSS comes from rendering database content that originated from user input. AI tools create stored XSS when they generate code that saves user-provided HTML to a Supabase or Prisma-backed database and then renders it back with dangerouslySetInnerHTML.
Does a Content Security Policy prevent XSS entirely?
No. CSP reduces the impact of XSS by blocking external script loads and inline script execution, but it does not prevent the injection itself. Some XSS payloads work without external resources, and CSP can be misconfigured to allow unsafe patterns. Fix the root cause in code; use CSP as a backstop.
How do Bolt and Lovable handle rich-text content?
Bolt and Lovable typically generate simple text inputs with textarea elements. When you prompt them to "support rich text" or "render HTML content," they reach for dangerouslySetInnerHTML as the fastest solution. The sanitization step is almost never included unless you explicitly prompt for it - and even then, the output should be verified.