
How Your SSH Session Is Isolated From Every Other Customer's

When you SSH into our server, you don't actually get the server — you get a sandbox shaped exactly like a server. Here's how flame-bubble uses Linux namespaces, bwrap, and cgroups to keep customers fully separated on shared infrastructure.

Last week we wrote about SSH access — how to use the site CLI, how to get a shell on your account, what commands work. This post is the layer below: when you ssh in, what actually happens on our server, and what stops you (or anyone else) from seeing other customers' files.

The short version: every SSH session runs inside an isolation environment we call a bubble. Each bubble is a Linux container that looks complete from the inside but is rigorously separated from the host and from every other customer's bubble. Here's how it works.

What you see vs what's there

When you SSH into your account, your shell looks like it has a normal Linux filesystem: /home, /tmp, /usr, /var. You can run ls /etc, ps, df -h. It feels like a server.

What's actually happening: that filesystem is a view assembled from pieces — your account's data is mounted at the right paths, system binaries are read-only-bind-mounted from the host, sensitive paths simply don't exist inside your view. You can't cd /home/some-other-customer because there is no /home/some-other-customer from inside your bubble. The path doesn't exist; you'd get "No such file or directory."

Every customer gets a different view. Yours sees only your data.
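Under the hood, those pieces are ordinary kernel bind mounts. Here is a minimal sketch of the two kinds just mentioned, a read-only system path and your own writable data, with hypothetical paths; it assumes root privileges and a fresh, private mount namespace, which is exactly what the mechanism below provides.

```go
package main

import (
	"log"
	"syscall"
)

// bindReadOnly makes src visible at dst, read-only. A plain MS_BIND ignores
// MS_RDONLY, so the mount has to be remounted a second time to flip the flag.
func bindReadOnly(src, dst string) error {
	if err := syscall.Mount(src, dst, "", syscall.MS_BIND, ""); err != nil {
		return err
	}
	return syscall.Mount("none", dst, "",
		syscall.MS_REMOUNT|syscall.MS_BIND|syscall.MS_RDONLY, "")
}

func main() {
	// Hypothetical layout; the target directories are assumed to exist.
	// System binaries: shared with the host, but never writable from inside.
	if err := bindReadOnly("/usr", "/bubble/root/usr"); err != nil {
		log.Fatal(err)
	}
	// The customer's own data, mounted read-write at the path they expect.
	if err := syscall.Mount("/srv/customers/alice", "/bubble/root/home/alice",
		"", syscall.MS_BIND, ""); err != nil {
		log.Fatal(err)
	}
}
```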

The mechanism

We build the bubble using bubblewrap (commonly written as bwrap), a small open-source utility from the GNOME project that wraps the Linux kernel's namespace and mount-isolation primitives. It's the same machinery that runs Flatpak apps in their sandbox.

Specifically, every SSH session creates fresh:

- a mount namespace, which holds the assembled filesystem view described above and is invisible to the host and to every other bubble
- a PID namespace, so ps inside the bubble shows only your own session's processes

These primitives are kernel features. They don't depend on a hypervisor or a separate VM. The cost of creating a bubble is in the millisecond range, which is why we can afford one per SSH session rather than a shared "everyone in /customers/" arrangement.
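Put together, a launch looks roughly like this. It's a sketch rather than our actual launcher: the paths, the shell, and the exact flag set are illustrative, but every flag shown is a real bwrap option.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Illustrative paths and flags only; the real layout differs.
	customerHome := "/srv/customers/alice" // hypothetical host-side path
	args := []string{
		"--unshare-pid", "--unshare-ipc", // fresh namespaces for this session
		"--die-with-parent",         // the bubble dies when the launcher does
		"--ro-bind", "/usr", "/usr", // system binaries, read-only
		"--ro-bind", "/etc", "/etc",
		"--proc", "/proc", // private /proc, so ps sees only this PID namespace
		"--dev", "/dev",   // minimal device nodes
		"--tmpfs", "/tmp", // throwaway /tmp per session
		"--bind", customerHome, "/home/alice", // only this customer's data exists
		"--chdir", "/home/alice",
		"/bin/bash", "-l", // the customer's shell
	}
	cmd := exec.Command("bwrap", args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```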

cgroups: the resource side

Isolation isn't only about visibility. It's also about resources. A customer running a heavy find or a runaway script shouldn't be able to slow down other customers on the same host.

Each bubble is also placed in a Linux control group (cgroup) that caps:

- memory: the total RAM the bubble's processes can use
- CPU: the share of processor time the bubble gets when the host is busy
- process count: how many processes the bubble can have alive at once

Hit any of those limits and the kernel itself enforces them: your process gets OOM-killed if you blow past memory, your CPU gets throttled if you exceed your share, and your forks fail with EAGAIN if you exceed the process count. The host stays healthy regardless of what any individual bubble does.
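
For a sense of what those caps look like at the kernel interface, here is a minimal sketch against the cgroup v2 filesystem. The directory layout and the limit values are invented, and it assumes the memory, cpu, and pids controllers are already enabled on the parent cgroup; the file names themselves (memory.max, cpu.max, pids.max, cgroup.procs) are the real cgroup v2 ones.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// placeBubble puts pid into its own cgroup with hard caps.
// The limit values here are made up; the cgroup v2 file names are real.
func placeBubble(pid int, name string) error {
	cg := filepath.Join("/sys/fs/cgroup/bubbles", name)
	if err := os.MkdirAll(cg, 0o755); err != nil {
		return err
	}
	limits := map[string]string{
		"memory.max": "536870912",     // 512 MiB, then the OOM killer steps in
		"cpu.max":    "100000 100000", // one CPU's worth of time per period
		"pids.max":   "256",           // fork() past this fails with EAGAIN
	}
	for file, val := range limits {
		if err := os.WriteFile(filepath.Join(cg, file), []byte(val), 0o644); err != nil {
			return err
		}
	}
	// Moving the launcher's PID in means everything it execs inherits the caps.
	return os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(fmt.Sprint(pid)), 0o644)
}

func main() {
	if err := placeBubble(os.Getpid(), "session-example"); err != nil {
		log.Fatal(err)
	}
}
```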

Setuid, briefly

The bubble launcher itself is a small setuid program. It has to be, because creating namespaces and setting up bind mounts requires root capabilities, while sshd invokes it as an ordinary, customer-level user. We've kept it small (under a few hundred lines of Go) precisely because setuid programs deserve scrutiny. It does one thing: parse arguments, set up the bubble, drop privileges, exec the customer's shell. No network, no file writes outside the bubble, no surprises.
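
The "drop privileges, exec the customer's shell" step is the part we're most pedantic about, so here is roughly what that pattern looks like, assuming a Go runtime of 1.16 or later (where Setuid applies to every thread). The UIDs and the shell path are hypothetical, not our launcher's.

```go
package main

import (
	"log"
	"os"
	"syscall"
)

// dropToCustomer gives up root for good. Order matters: supplementary groups
// first, then gid, then uid, because after Setuid we no longer have the
// privilege to change the others.
func dropToCustomer(uid, gid int) error {
	if err := syscall.Setgroups([]int{gid}); err != nil {
		return err
	}
	if err := syscall.Setgid(gid); err != nil {
		return err
	}
	return syscall.Setuid(uid) // Go 1.16+ applies this to every thread
}

func main() {
	// Hypothetical IDs; a real launcher would look them up for the SSH user.
	if err := dropToCustomer(10042, 10042); err != nil {
		log.Fatal(err)
	}
	// exec replaces this process with the customer's shell, so nothing of the
	// privileged launcher survives inside the bubble.
	shell := "/bin/bash"
	if err := syscall.Exec(shell, []string{shell, "-l"}, os.Environ()); err != nil {
		log.Fatal(err)
	}
}
```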

The setuid binary is also signed via our manifest system, the same system we use to sign every binary in our fleet. If anyone tried to swap it out for a malicious version, the signature check would fail and the bubble would refuse to launch. It's a small but important link in the chain.
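
We haven't covered the manifest system's internals here, so the sketch below only shows the general shape of a launch-time check like that: verify a detached signature over the binary before anything is allowed to exec it. The key and signature paths are invented; the verification itself is the standard library's ed25519.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
	"log"
	"os"
)

// verifyBinary checks a detached ed25519 signature over the binary's SHA-256
// digest before it is allowed to run. The file layout is invented; only the
// crypto primitives come from the standard library.
func verifyBinary(path string, pubKey ed25519.PublicKey, sig []byte) error {
	if len(pubKey) != ed25519.PublicKeySize {
		return fmt.Errorf("bad public key length %d", len(pubKey))
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	digest := sha256.Sum256(data)
	if !ed25519.Verify(pubKey, digest[:], sig) {
		return fmt.Errorf("signature check failed for %s: refusing to launch", path)
	}
	return nil
}

func main() {
	// Hypothetical locations; error handling elided for brevity.
	pubKey, _ := os.ReadFile("/etc/fleet/manifest.pub")
	sig, _ := os.ReadFile("/usr/local/bin/bubble-launcher.sig")
	if err := verifyBinary("/usr/local/bin/bubble-launcher", pubKey, sig); err != nil {
		log.Fatal(err)
	}
	fmt.Println("launcher signature verified")
}
```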

What this means in practice

For you, almost nothing visible. The shell feels normal. Files that should be there are. Files that shouldn't aren't. Performance is consistent because no other customer can starve you of resources. If you fork-bomb yourself, you take yourself down, not the host.

For us, this is the part of the system we worry about most. The bubble is what makes shared hosting actually safe to share. We over-engineer it because the cost of getting it wrong is one customer reading another customer's data, and that has to be impossible — not unlikely, not difficult, impossible — for the host to be worth running at all.


Most hosting companies have this layer in some form. Shared hosts use suEXEC and chroot; container-based hosts use Docker or systemd-nspawn; we use bubblewrap and cgroups directly. The specific tools matter less than the property: the kernel itself enforces the boundaries, not the application code, not goodwill, not luck.
