Introduction to Writing Content Security Policy Tests

The CSP test suite uses the standard W3C testharness.js framework, but even if you're already an expert at writing W3C tests, there are a few additional things you'll need to do because of the unique way CSP works. These tests require the wptserve server (included in the web-platform-tests repository) to operate correctly.

What's different about writing CSP tests?


Content Security Policy is preferentially set through an HTTP header. This means our tests can't be just a simple set of static HTML+CSS+JS files. Luckily the wptserve framework provides an easy way to add headers to a file.

If my file is named example.html then I can create a file example.html.headers to define the headers that will be served with it. If I need to do template substitutions in the headers, I can instead create a file named example.html.sub.headers.
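For example, if example.html needed to be served with a policy, a hypothetical example.html.headers (the policy value here is purely illustrative) could contain a single line:

```
Content-Security-Policy: script-src 'self'
```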

Negative Test Cases and Blocked Script Execution

Another interesting feature of CSP is that it prevents things from happening; it can even prevent script from running. How do we write tests that detect that something didn't happen?

Checking Reports

CSP also has a feature to send a report. We ideally want to check that whenever a policy is enforced, a report is sent. This also helps us with the previous problem - if it is difficult to observe something not happening, we can still check that a report fired.

Putting it Together

Here's an example of a simple test (ignore the highlights for now; they're explained below). This file lives in the /content-security-policy/script-src/ directory.


    <script src='/resources/testharness.js'></script>
    <script src='/resources/testharnessreport.js'></script>
    <h1>Inline script should not run without 'unsafe-inline' script-src directive.</h1>
    <div id='log'></div>

    <script>
    test(function() {
        assert_unreached('Unsafe inline script ran.');
    }, 'Inline script in a script tag should not run without an unsafe-inline directive');
    </script>
    <img src='doesnotexist.jpg' onerror='test(function() { assert_false(true, "Unsafe inline event handler ran.") }, "Inline event handlers should not run without an unsafe-inline directive");'>

    <script async defer src='../support/checkReport.sub.js?reportField=violated-directive&reportValue=script-src%20%27self%27'></script>


This code includes three tests. The first, in the script block, will generate a failure if it runs. The second, in the onerror handler of an img that does not exist, will also generate a failure if it runs. In a correct CSP implementation, neither of these runs. The final test is run by the inclusion of ../support/checkReport.sub.js. It loads some script into the page (make sure it's not blocked by your policy!) which contacts the server asynchronously to see whether the expected report was sent. This test should always run and generate a positive or negative result, even when the inline tests are blocked as we expect.

Now, to actually exercise these tests against a policy, we'll need to set headers. In the same directory, we'll place a matching .sub.headers file (it uses template substitutions):


    Expires: Mon, 26 Jul 1997 05:00:00 GMT
    Cache-Control: no-store, no-cache, must-revalidate
    Cache-Control: post-check=0, pre-check=0, false
    Pragma: no-cache
    Set-Cookie: script-src-1_1={{$id:uuid()}}; Path=/content-security-policy/script-src/
    Content-Security-Policy: script-src 'self'; report-uri ../support/{{$id}}

This sets some headers to prevent caching (so we're more likely to see our latest changes while actively developing the test), sets a cookie (more on that later), and sets the relevant Content-Security-Policy header for our test case.

What about those highlights?

In production code we don't like to repeat ourselves. For this test suite, we'll relax that rule a little bit. Why? It's easier to have many people contribute "safe" static files using some template substitutions than to require every file to be executable content like Python or PHP, which would demand much more careful code review. The highlights show where you must take care to repeat yourself consistently across these more limited static files.

The YELLOW highlighted text is information that must be the same in both files for report checking to work correctly. In the HTML file, we're telling checkReport.sub.js to check the value of the violated-directive key in the report JSON, so it needs to match (after URL encoding) the directive we set in the header.
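To make the correspondence concrete: the reportValue parameter in the example test is simply the URL-encoded form of the enforced directive. A quick sanity check in plain JavaScript:

```javascript
// The reportValue GET parameter passed to checkReport.sub.js in the example:
const reportValue = 'script-src%20%27self%27';

// URL-decoding it yields exactly the directive from the
// Content-Security-Policy header: script-src 'self'
const decoded = decodeURIComponent(reportValue);
console.log(decoded);
```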

The PINK highlighted text is information that must be repeated from the path and filename of your test file in the headers file. The cookie's name must match the name of the test file without its extension, the cookie's path must be correct, and the relative path to the report-uri must also be adjusted if you nest your tests more than one directory deep.

Check Your Effects!

A good test case should also verify the state of the DOM in addition to checking the report - after all, a browser might send a report without actually blocking the banned content. Note that in a browser without CSP support there will be three failures on the example page as the inline script executes.

How exactly you check your effects will depend on the directive, but don't hesitate to use script to check whether computed styles are as expected, whether layout changed, or whether certain elements were added to the DOM. Checking that the report also fired is just the final step of verifying correct behavior.

Note that avoiding inline script is good style and good habits, but not 100% necessary for every test case. Go ahead and specify 'unsafe-inline' if it makes your life easier.

Report Existence Only and Double-Negative Tests

If you want to check that a report exists, or verify that a report wasn't sent for a double-negative test case, you can pass ?reportExists=[true|false] to checkReport.sub.js instead of reportField and reportValue.
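For instance, a double-negative test that verifies no report was sent might include the checker like this (markup patterned on the earlier example):

```html
<script async defer src='../support/checkReport.sub.js?reportExists=false'></script>
```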

How does the magic happen?

Behind the scenes, a few things are going on in the framework.

  1. The {{$id:uuid()}} templating marker in the headers file tells the wptserve HTTP server to create a new unique id and assign it to a variable, which we can reuse as {{$id}}.
  2. We'll use this UUID in two places:
    1. As a GET parameter to our reporting script, to uniquely identify this instance of the test case so our report can be stored and retrieved.
    2. As a cookie value associated with the filename, so script in the page context can learn what UUID the report was sent under.
  3. The report listener is a simple Python file that stashes the report value under its UUID and allows it to be retrieved again, exactly once.
  4. checkReport.sub.js then grabs the current path information and uses it to find the cookie holding the report UUID. It deletes that cookie (otherwise the test suite would eventually overrun the maximum allowed size of the Cookie header), then makes an XMLHttpRequest to the report listener to retrieve the report and verify its contents against the parameters it was loaded with.
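Step 4's cookie lookup boils down to simple string matching. A hypothetical, stripped-down version (the real checkReport.sub.js does more, and the names here are illustrative):

```javascript
// Given a Cookie-header-style string and the test file's name (without
// extension), extract the report UUID stored by the Set-Cookie header.
function extractReportID(cookieString, testName) {
    var match = cookieString.match(new RegExp(testName + '=([^;]+)'));
    return match ? match[1] : null;
}

// In the browser, the script would derive testName from location.pathname,
// call extractReportID(document.cookie, testName), expire that cookie, and
// then XHR the report listener with the UUID it recovered.
```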

Why all these gymnastics? CSP reports are delivered by an anonymous fetch, which means the browser does not process the response headers or body, and no state changes are allowed as a result. So we can't pull a trick like echoing the report contents back in a Set-Cookie header or writing them to local storage.

Luckily, you shouldn't have to worry about this magic much, as long as you get the incantation correct.