Why I Care About Structure on a Single VPS

I don’t want my VPS to be “just a box that runs containers”.

I want it to feel like a small, real infrastructure:

  • clear separation between roles (proxy, vault, apps, logging),
  • least privilege instead of “everything as root”,
  • and rootless Podman as the default way to run containers.

This post is me thinking out loud:
What kind of structure do I actually want on this VPS?
How do Unix users, Podman rootless, and security fit together?


From “root runs everything” to a proper layout

The classic homelab pattern looks like this:

root@vps runs:
 - docker daemon (root)
 - all containers (as root)
 - all config in /root/somewhere

That’s simple, but:

  • every container escape = full root on VPS,
  • logs, configs, TLS keys all under /root,
  • no real separation between environments (test, prod, whatever).

I want to move away from that and design something like:

Systemd + Unix users + rootless Podman
--------------------------------------
root:
 - manages systemd unit files
 - manages high-level network/firewall
 - does initial Vault bootstrap & offline PKI

dedicated users:
 - vaulttest  -> runs Vault (test environment)
 - vaultprod  -> runs Vault (prod environment)
 - proxytest  -> runs test reverse proxy
 - proxyprod  -> runs prod reverse proxy
 - appuser    -> runs app containers (Nextcloud etc.)
 - loguser    -> runs logging/monitoring stack

Each of these users uses its own rootless Podman context:

/home/vaulttest/.local/share/containers/...
/home/proxytest/.local/share/containers/...
/home/appuser/.local/share/containers/...

No central root daemon that owns all containers.


What root actually does (and what it doesn't)

I still need root for some things – but I want them to be one-time or rare tasks:

  • Creating Unix users.
  • Installing packages (podman, firewalld, etc.).
  • Configuring nftables / firewalld rules.
  • Setting up systemd user services for each service-user.
  • Managing the offline PKI root (files under /root/vault/offline-root/...).
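The user-creation part of that list can be sketched as a short script. The user names come from this post; the nologin shell and the linger step are my assumptions about how I'd set it up, and by default the script only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: one-time root bootstrap for the service users in this post.
# RUN is a dry-run switch: the default "echo" just prints each command;
# set RUN="" and run as root to actually apply them.
RUN="${RUN:-echo}"
SERVICE_USERS="vaulttest vaultprod proxytest proxyprod appuser loguser"
for u in $SERVICE_USERS; do
  # No interactive login shell: these are service accounts, not people.
  $RUN useradd --create-home --shell /usr/sbin/nologin "$u"
  # Linger makes systemd start this user's --user services at boot,
  # without anyone logging in.
  $RUN loginctl enable-linger "$u"
done
```

Lingering is the piece that makes "rootless containers that start on boot" work at all: without it, a user's `systemd --user` instance only exists while that user has a session.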

But root should not be the runtime engine of my services.

At runtime, I want:

  • Vault containers running as vaulttest / vaultprod.
  • Reverse proxies (nginx/caddy/whatever) running as proxytest / proxyprod.
  • Apps running as appuser.
  • Logging stack running as loguser.

So if one service is compromised, the attacker gets:

  • that user,
  • that user’s containers & volumes,
  • but not instant full control of the entire VPS or Vault.

Why rootless Podman, and why daemonless matters

Docker classic

  • One root daemon (dockerd).
  • All containers share one daemon and usually run as root.
  • Container escape == game over.

Podman rootless

  • No central daemon.
  • Each user runs containers in their own user namespace.
  • Processes are normal user processes from the host’s point of view.

So instead of:

root -> docker daemon -> all containers

I want:

vaulttest -> podman (rootless) -> vault (test)
vaultprod -> podman (rootless) -> vault (prod)
proxytest -> podman (rootless) -> nginx test
proxyprod -> podman (rootless) -> nginx prod
appuser -> podman (rootless) -> apps

Podman is daemonless, which means:

  • No big root service controlling everything.
  • Each user can have their own systemd --user units to run containers on boot.

Example mental model for a single service:

[Systemd system]
  └─ login/session or linger for user "proxytest"
      └─ systemd --user
          └─ podman container: mainproxy-test
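One way to wire the bottom of that tree together is a Quadlet file (Podman 4.4+), which Quadlet turns into a generated `systemd --user` service. The image, port, and volume path below are placeholders, not my real proxy config:

```ini
# ~/.config/containers/systemd/mainproxy-test.container
# Quadlet unit for the "proxytest" user; with linger enabled,
# systemd --user starts this container at boot.

[Unit]
Description=Test reverse proxy (rootless, user proxytest)

[Container]
# Image, port and volume are placeholders, not my real config.
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/proxy-conf:/etc/nginx/conf.d:Z

[Install]
WantedBy=default.target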

VPS structure I’m aiming for

High-level, I imagine the VPS like this:

                    Internet
                        │
                        ▼
              [ main reverse proxy ]
              user: proxyprod (rootless podman)
                        │
          ┌─────────────┴─────────────┐
          ▼                           ▼
[ internal revproxy test ]   [ internal revproxy prod ]
 user: proxytest               user: proxyprod
      │                              │
      ▼                              ▼
[ Vault test ]                 [ Vault prod ]
 user: vaulttest                user: vaultprod
      │                              │
      ▼                              ▼
[ test apps ]                  [ prod apps ]
 user: appuser                 user: appuser

[ logging / monitoring ]
 user: loguser (collects logs from all)

On disk, something like:

/root/vault/offline-root/...    # OFFLINE root keys, only root touches this
/root/vault/ca/...              # exported intermediate CA certs (no private key)

/home/vaulttest/                # Vault test containers, volumes, configs
/home/vaultprod/                # Vault prod containers, volumes, configs
/home/proxytest/                # internal test reverse proxy
/home/proxyprod/                # public/main reverse proxy
/home/appuser/                  # application containers (Nextcloud, etc.)
/home/loguser/                  # Graylog/Prometheus/etc.
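To make that separation real on disk, the home directories need tight modes, so the service users can't read each other's trees. A safe sketch that builds the same layout in a scratch directory (mode 700 for the service homes is my assumption):

```shell
#!/bin/sh
# Sketch of the on-disk layout above, using a scratch directory
# instead of /home so it can be tried without root.
BASE="${BASE:-$(mktemp -d)}"
for u in vaulttest vaultprod proxytest proxyprod appuser loguser; do
  mkdir -p "$BASE/$u"
  chmod 700 "$BASE/$u"   # only the owner (and root) can enter
done
```

With 700 homes, a compromised appuser can't even list /home/vaultprod, let alone read Vault's data directory.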

And Vault-specific TLS directories per user, for example:

/home/vaulttest/tls-test/       # server certs from PKI-TEST for Vault test
/home/vaultprod/tls-prod/       # server certs from PKI-PROD for Vault prod
/home/proxytest/tls/            # client certs to talk to Vault test
/home/proxyprod/tls/            # client certs to talk to Vault prod
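The TLS directories deserve even stricter treatment than the rest of the home: the directory itself closed, certs world-readable (they're public anyway), keys owner-only. A sketch with placeholder file names, not the real cert layout:

```shell
#!/bin/sh
# Sketch: restrictive permissions for one per-user TLS directory.
# File names are placeholders; defaults to a scratch dir for safety.
TLS_DIR="${TLS_DIR:-$(mktemp -d)/tls-test}"
mkdir -p "$TLS_DIR"
: > "$TLS_DIR/vault.crt"
: > "$TLS_DIR/vault.key"
chmod 700 "$TLS_DIR"            # only the service user may enter
chmod 644 "$TLS_DIR/vault.crt"  # the certificate itself is public
chmod 600 "$TLS_DIR/vault.key"  # private key: owner read/write only
```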

How this differs from a simple root setup

Classic “root + Docker”:

  • One user: root.
  • One daemon: dockerd.
  • Everything runs as root or effectively root.
  • One big /root/docker-compose.yml with all services mixed:
      • Vault, proxy, apps, logging – everything together.

My target model:

  • Multiple Unix users, one per functional block.
  • Rootless Podman, no central daemon.
  • Vault test/prod separated by user + data dir + PKI.
  • Clear TLS structure:
      • offline root under /root/vault/offline-root,
      • intermediates under /root/vault/ca,
      • service TLS under the relevant /home/<user>/tls-* directories.

Security upside:

  • Compromise of an app → access to the appuser context, but not automatically to Vault.
  • Compromise of proxytest → not instant access to Vault prod.
  • Offline root key physically separated (not live in any container, not live inside Vault).

Why this matters for my Vault & PKI journey

I want this VPS to be more than a “playground”:

  • It should behave like a small, serious infrastructure,
    so that everything I learn (PKI, mTLS, Vault, bug bounty, logging)
    lives in a realistic environment.
  • The structure with Unix users + rootless Podman forces me
    to think seriously about boundaries, roles and trust.

In future posts, I’ll go deeper into:

  • How I create the Unix users and home layouts.
  • How I use systemd user services to manage Podman containers.
  • How Vault, proxies and apps actually talk to each other over mTLS.
  • And what I learn from all the “I just locked myself out of my own Vault again” moments. 😅