I recently posted the following tweet regarding the current state of the third-party security problem in the JavaScript ecosystem:

I wanted to fill in some of the background to this, drawing on my own work on Node.js modules and security concepts, on the Agoric SES and compartment models, and on a growing feeling that the Node.js, Deno and browser runtimes are inadequate for supporting the third-party security needs of the ecosystem.

TL;DR: I think we need to think about new, more secure runtimes for JS, and it should be a collaborative effort, with the components being modules, isolated scopes added to import maps, and a careful security model, plus compatibility with the existing ecosystem. Skip ahead to the proposal here.

Update: Since posting this, I see that Endo and LavaMoat provide techniques very close to these directions, although neither has yet taken the leap that I argue is necessary: integrating such a security system into the primary runtime itself.

#The Third-Party Security Problem

The underlying issue is npm install itself. As the registry and our dependence on it continue to expand, the security gap grows with the sheer amount of untrusted code we are running on a daily basis.

Maintainers giving their time freely now find themselves obliged to respond to regular security issues or risk having unpatchable advisories released for their packages, which may or may not even be genuine escalations of privilege. We engage in security theatre to create the illusion of safety, and yet all the while everything remains highly insecure.

Rather than simply accepting the status quo, many companies are actively working on mitigating these security risks. The problem is that they end up creating side ecosystems or patches to the existing ecosystem: security measures that are never fundamentally designed into the ecosystem itself. Third-party security remains a huge, if not impossible, effort that only dedicated teams can afford to tackle, as we see for example with the initiatives by Figma or Salesforce.

The Realms proposal may give us the tools for constructing a secure runtime, but the JavaScript ecosystem conventions themselves work against supporting security restrictions.

The general view from Chrome/v8 is that this type of third-party, per-package security within the same process isn't possible:

Now I admit I have fully bought into the elegance of the OCAP, SES and compartment models, the ideas shared by those at Agoric (who are long-time members of TC39). I gave a session on these concepts at the Node.js Collaboration Summit.

For all the tremendous benefits of the concept of modular security, there are certainly important open questions, but I believe we should actively tackle this work and those questions, and not abandon same-process modular security models unless they can be fully disproved.

#The Compartment Model

The gist of the compartment model, which builds on top of SES (Secure ECMAScript) as proposed by Agoric, is something like the following:

  1. All capabilities are imported through the module system (import fetch from 'fetch' kind of thing, as in the sketch after this list) - the module resolver acts as the capability system, enforcing permissions.
  2. The consequence of (1) is that all global capabilities should be disabled / carefully controlled.
  3. JavaScript needs a whole bunch of patching to prevent prototype mutations and unintentional side channels such as return { toString() {} } object hooks. You have to manage package interfaces very carefully and freeze the entire global object against prototype mutation.
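
As a rough sketch of points (1) and (2), capability access might look something like this (the 'fetch' specifier and the resolver behaviour here are purely illustrative, not a defined API):


// No ambient authority: globalThis.fetch, Date, Worker and friends do not exist.
// Network access is a capability the module resolver chooses to grant to this package.
import fetch from 'fetch';
export async function loadConfig (url) {
  const res = await fetch(url);
  return res.json();
}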

See the talk by Mark Miller on Extremely Modular Distributed JavaScript, or my presentation from the Node.js Collaboration Summit, Security, Modules and Node.js, for a more in-depth coverage of the full model.

The result of this model is, in theory, the ability to restrict destructive code. The date time library you npm install cannot install a trojan horse on your computer, which seems a pretty useful property to have.

Towards (3), we already shipped the `--frozen-intrinsics` flag in Node.js. (1) and (2) clearly require breaking changes to what we have in any of the existing runtimes today.
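
As a rough illustration of what frozen intrinsics buy us (module code runs in strict mode, so the mutation attempt throws rather than silently failing; the file name is just for the example):


// sketch.mjs, run as: node --frozen-intrinsics sketch.mjs
// A dependency can no longer silently poison Array for the whole process.
try {
  Array.prototype.includes = () => true;
} catch (err) {
  console.log('prototype mutation rejected:', err.message);
}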

#Criticisms

The criticisms of this model include the Spectre class of vulnerabilities, the difficulty of providing secure cross-package interfaces, and the view that these ideas might sound good in theory but are impractical in real JS environments.

#Spectre

The Spectre class of attacks means that code running in the same process can use CPU reverse engineering and timing information to read secret information used by other, separate code in that same process. Think passwords, secure tokens, etc.

The first thing to note is that Spectre is the ability to steal secrets, not the ability to install a trojan horse on your computer. Even if we can't fully mitigate Spectre (and we can certainly try), we are still limiting destructive capabilities such as handing full disk and network access to random people on the internet, which is a huge win. What we are comparing this model against is having no separate security for third-party libraries at all, which is the case in Node.js, Deno and browsers today. In the case of an attack, it is better to just lose a credit card than to lose a credit card AND have your house burnt down.

The second thing to note here is that if you have a true capability system and can carefully control network access, then the capability to exfiltrate (basically to use fetch), can itself be treated as a critical permission. Secrets might be discovered but not as easily shared.

The counterargument to controlling the capability to exfiltrate is that there are always side channels to be found - the blinking of a light through whatever complex window to share the information of the secret token. It's a complex boundary to mitigate.

Finally, in terms of genuine Spectre mitigations, Cloudflare have this same problem for their same-process deployment of Cloudflare Workers, which they recently discussed here - Mitigating Spectre and Other Security Threats: The Cloudflare Workers Security Model.

Their mitigations are summarized at the end, and roughly involve:

  • Restricting Date.now() and multi-threading via new Worker (which would allow custom timer creation) to attempt to disable the time measurements necessary to initiate the attack (see the sketch after this list).
  • Proactively detecting the attack behaviour based on monitoring and initiating full isolation.
  • Exploring memory shuffling techniques so that secret information does not remain static.
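
As a sketch of what the first of these could look like in a capability-based runtime, the imported date capability could hand out a coarsened clock rather than the raw high-resolution timer (the 'date' module name and the 100ms resolution are illustrative choices, not part of any existing API):


// A hypothetical 'date' capability module that only exposes coarse timestamps,
// degrading the timer precision that Spectre-style attacks depend on.
const RESOLUTION_MS = 100;
export function now () {
  return Math.floor(Date.now() / RESOLUTION_MS) * RESOLUTION_MS;
}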

As Cloudflare mention, this is an active mitigation space that can continue to be developed. In theory, similar mitigations could apply to new runtime development as well.

The important thing to note is that these mitigation techniques do not apply to the Web platform at all, as they are simply not possible there (at least not without Realms). The Google / v8 position makes complete sense from this angle, but my focus here is on new JavaScript runtimes, like successors to Node.js such as Deno and others, which should really be exploring these security properties today.

#Insecure Module Interfaces

The next major problem comes down to the complex interface boundary between third-party packages. For example, consider the following code:


import { renderer } from 'renderer';
import { renderGraph } from 'graph';
import { renderTitle } from 'title';

renderer.render([renderGraph, renderTitle]);

In theory, renderGraph doesn't need any capabilities other than the ability to call into the renderer, so it can be treated as low-trust code.

But now consider a malicious implementation of renderGraph:


export function renderGraph () {
  this[1].setTitle('Changed the title');
}

renderGraph knows the renderer will call it via renderArray[i](), which in JavaScript will set the this binding to the array itself, thus giving access to the title component from the graph component.

Yes, it's a contrived example, but it demonstrates how easily you can get capability spillage in JavaScript, and that's before we even get to information spillage, e.g. via toString().

Locking down these sorts of inadvertent side channels means making all package interfaces out of SafeFunction and SafeObject objects that don't have these sorts of awful flaws, and it's not an easy problem to solve - this is where the bulk of the effort needs to be made.

The other side of this to consider is that WebAssembly module interfaces don't have the same sorts of capability and information spillage that we have in JavaScript, which certainly gives hope for future ecosystems dealing with these problems.

#Impractical Constraints

The third argument is that the security requirements are simply too much of a constraint on JavaScript and its ecosystems: that there exists no path from today's ecosystems to this kind of secure ecosystem, and that, as a result, secure runtimes will always be a fringe effort adopted by the few who can invest the time and effort to support them.

This, I believe, is the most crucial problem to solve. The ability to run third-party libraries with less risk should be fully democratized.

#Secure Modular Runtime Proposal

I'd like to propose a hypothetical runtime for JavaScript, as a strawman, and to invite scrutiny as to whether this solves the following problems:

  1. That this runtime can fully restrict high-level capability access for third-party packages running in the same process, something we do not have in Node.js, Deno and browsers today.
  2. That this runtime can support an onramp from the existing JavaScript ecosystems, which is crucial for adoption.

The proposal is based on a secure runtime because that is the logical conclusion of designing security in from the start. The JavaScript ecosystem is shaped by its runtimes, and only by providing a secure runtime target can we even begin to shape the ecosystem towards more secure properties.

The form of the runtime is a direct implementation of the SES compartment model:

  • The global object should have no capabilities (no fetch, Worker, Date globals), only intrinsics, with all those intrinsics provided as safe intrinsic instantiations. Instead, all capabilities are imported.
  • The permissions model should use import maps, with an isolated scope implementation where scopes do not have fallbacks at all, and packages cannot import anything outside of their scope unless it is explicitly defined in the map. This treats import maps as the single source of truth for both resolution and per-package capability permissions, enabled by the scope mappings.
  • The interfaces between all packages should use SafeObject, SafeFunction and SafeClass implementations - a careful language subset for communication that the module system itself ensures packages adhere to. This could be a dynamic wrapping and unwrapping, or it could be more static, or even user-defined.
  • The existing npm ecosystem should be supported via codemods that can run at least 90% of existing code within this new secure model.

#Isolated Scopes

The Isolated Scopes proposal is an extension to Import Maps that allows import maps to comprehensively define what can and cannot be imported.

This proposal has grown out of the observation that Node.js Policies and Import Maps have ended up converging: in SystemJS we've needed import maps to support integrity, while in Node.js we've needed Policies to support import-map-style scopes and mappings.

The clear technical congruence here happened completely naturally, but points to a path: Import Maps are a natural home for defining the integrity of capabilities. This solves the "security as afterthought" problem if we can combine goals here, since a user constructing an import map doesn't care about security, but gets it as a side-effect of the workflow itself (if they choose to enable strong capability enforcement).

The idea is that in a capabilities model, you end up defining permissions something like:


{
  "packageA": {
    "capabilities": ["packageB"]
  },
  "packageB": {
    "capabilities": "fs?local"
  }
}

where a package cannot import anything outside of the package unless explicitly granted access via the capability system.

Yet, the import map for this same application looks something like:


{
  "imports": {
    "packageA": "/path/to/packageA/main.js"
  },
  "scopes": {
    "/path/to/packageA/": {
      "packageB": "/path/to/packageB/main.js"
    },
    "/path/to/packageB/": {
      "fs": "core:fs?local"
    }
  }
}

The capability information is already naturally defined in the import map - that is, a separate capability configuration would be redundant. And again, on the other side, Node.js Policies look a lot like an import map.

The changes to import maps to support this are very minor and can be done as an extension proposal:

  1. Provide a new "isolatedScope": true option for import maps, enabled by a top-level property, flag or otherwise.
  2. Restrict scopes to not permit imports of URLs outside of that scope, unless that URL is explicitly defined in the mappings.
  3. Disable scope fallbacks from applying.

With these small tweaks we have the potential to turn import maps into the primary modular workflow for application development that is easily auditable, readable and manageable, and where capability definitions are built in from the start.
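
For example, under an isolated-scope version of the map above, resolution inside packageA would behave roughly as follows (a sketch of the intended semantics rather than of any implemented runtime):


// /path/to/packageA/main.js
import 'packageB'; // resolves: explicitly mapped in packageA's scope
import 'fs';       // fails: not mapped in this scope, and fallback to the
                   // top-level "imports" mappings is disabled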

#Package Interfaces

In terms of the package interfaces, the exported package bindings (e.g. Node.js "main" / "exports" field module exports) would use the safe interface system.

We would convert the outward-facing components of existing packages to this safe form; for example:


export function renderGraph () {
  this[1].setTitle('Changed the title');
}

would be converted to be executed in the runtime as:


export const renderGraph = SafeFunction(function renderGraph () {
  this[1].setTitle('Changed the title');
});

The SafeFunction implementation would ensure no rebinding of this by callers. All capability references would thus be made fully explicit to the software creator. Advisories are still necessary, but within a well-defined and constrained permissions model that clearly defines what an escalation really means.

SafeObject applies recursively and SafeFunction in turn applies this same sanitization to its return values dynamically at runtime. Live export binding assignments could be replaced with a SafeValue base class reassignment operation. Primitives remain untouched.
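
A minimal sketch of what this dynamic wrapping could look like, assuming the SafeFunction / SafeObject names from above (this is one possible shape, not a settled design):


const safeValue = (value) =>
  typeof value === 'function' ? SafeFunction(value)
  : value !== null && typeof value === 'object' ? SafeObject(value)
  : value; // primitives pass through untouched
function SafeFunction (fn) {
  // Ignore whatever `this` the caller tries to supply, and sanitize the
  // return value before it crosses the package boundary.
  return Object.freeze((...args) => safeValue(Reflect.apply(fn, undefined, args)));
}
function SafeObject (obj) {
  const safe = {};
  for (const key of Object.keys(obj)) safe[key] = safeValue(obj[key]);
  return Object.freeze(safe);
}

Under a wrapping like this, the malicious renderGraph above never sees the renderer's array as its this binding, so the title capability is no longer leaked.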

There could be various ways to apply these safety functions:

  1. Explicitly requiring users to use these interfaces, perhaps with nice sugar global names like Fn, Obj, Cls - export Fn(() => {}) - as variants of Agoric's harden.
  2. These safety wrappers could be applied as a runtime module wrapper, requiring no work from the user at all.
  3. A sort of pre-compilation phase could automatically inject the safety interfaces.
  4. Engine work could make these first-class primitives, and a new runtime could in theory upstream its own directions here over time.

The above options vary in where they place the performance overhead of the wrapping, but with careful thought to use cases, it should be possible to optimize for the necessary performance properties while maintaining these security guarantees.

This is the most critical part of the model, and there are likely some quite complex cases to tackle here, but I have yet to hear of any major blockers to implementing these well-defined interface scenarios.

#Ecosystem Compatibility

Existing JavaScript support could be provided using codemods that convert packages into a form that can be executed in the secure runtime. This isn't easy, but it should be possible in over 90% of cases to provide ecosystem compatibility. For example:


export async function getCurrentResource () {
  return fetch(`${globalThis.resourceUrl}/${Date.now()}`);
}

can be converted into:


import fetch from 'fetch';
import { now } from 'date';
export const getCurrentResource = Fn(async function getCurrentResource () {
  return fetch(`${import.meta.local.resourceUrl}/${now()}`);
});

where fetch and date are the controlled capability permissions, and import.meta.local represents a package-level global that can be set at the application level to support the unknown-global-access cases.

In this way we can fully codemod existing third-party packages from npm into the secure package convention.

If this sounds like setting the bar too high, just remember that we already codemod all npm code today every time we use our current build tooling, and these techniques are also exactly what jspm does to support browser imports.

#Summary

So long as modular security holds hope for JavaScript, now is the time to initiate the work here, as it seems like our best bet for securely running third-party code in the future. It cannot be Node.js, Deno or the browser: for them, the ship has sailed on supporting the properties described here, and their ingrained conventions continue to actively work against restricting third-party package capabilities.

If it does turn out that safe package interfaces are truly not practical or possible for JavaScript, then moving these ideas into the Wasm side of things and ensuring we can start to obtain these properties for future Wasm runtimes would be a worthwhile approach.

But please, let's not dismiss the potential of working on these security problems for JavaScript, even if success isn't completely certain. Unless we actively work on transitioning to secure JavaScript ecosystems now, we will only continue to treat security as an afterthought later.