Build portable, reproducible systemd-nspawn container images with variant support, unit tests, S3 export/import, and GHCR publishing.
# 1. Build the default image (ubuntu-noble)
sudo ./build.sh
# 2. Or build a specific variant
sudo ./build.sh --variant ubuntu-noble
sudo ./build.sh --variant ubuntu-noble-nvidia-560
# 3. Build all variants at once
sudo ./build.sh --all
# 4. Validate
sudo ./validate.sh --variant ubuntu-noble
# 5. Run tests
sudo ./tests/run-tests.sh --variant ubuntu-noble
# 6. Export to S3 or GHCR (see below)

Variants define different image configurations. Each variant has its own packages and customization scripts.
# List available variants
./build.sh --list-variants

| Variant | Architecture | Image Name | Description |
|---|---|---|---|
| ubuntu-noble | amd64 | nspawn-ubuntu-noble-amd64 | Minimal Ubuntu 24.04 (Noble) system with essential utilities |
| ubuntu-noble | arm64 | nspawn-ubuntu-noble-arm64 | Minimal Ubuntu 24.04 (Noble) system with essential utilities |
| ubuntu-noble-nvidia-560 | amd64 | nspawn-ubuntu-noble-nvidia-560-amd64 | Ubuntu 24.04 (Noble) + NVIDIA 560 userspace drivers (container-friendly, no kernel modules) |
| ubuntu-noble-nvidia-560 | arm64 | nspawn-ubuntu-noble-nvidia-560-arm64 | Ubuntu 24.04 (Noble) + NVIDIA 560 userspace drivers (container-friendly, no kernel modules) |
NVIDIA 560 driver supported GPUs: Ada Lovelace (RTX 40 series), Hopper (H100, H200), Grace Hopper, Blackwell (B100, B200, GB200), as well as older architectures including Ampere (RTX 30 series, A100), Turing (RTX 20 series, T4), and Volta (V100). For a full compatibility list, see the NVIDIA Driver Documentation.
Host requirements: The host machine must have a compatible NVIDIA GPU and a matching or newer kernel-mode driver (≥ 560.x) installed. The container image includes only userspace libraries and utilities (no kernel modules). At runtime, the host's GPU devices (/dev/nvidia*) and driver files must be bind-mounted into the nspawn container, for example via systemd-nspawn --bind=/dev/nvidia0 --bind=/dev/nvidiactl --bind=/dev/nvidia-uvm. The host kernel driver handles hardware access; the container's userspace components communicate with it.
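The bind-mount requirement above can be sketched as follows. This is illustrative only: the rootfs path and device list are assumptions, and run.sh may already handle this for you.

```shell
#!/bin/bash
# Sketch: collect whichever NVIDIA device nodes exist on the host and build
# the bind arguments. The machine path below is an assumption; adjust it to
# wherever your rootfs is unpacked.
BINDS=()
for dev in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm /dev/nvidia-uvm-tools; do
  if [ -e "$dev" ]; then BINDS+=("--bind=$dev"); fi
done

# Printed as a dry run; drop the leading "echo" to launch for real (needs root).
cmd="sudo systemd-nspawn -D /var/lib/machines/nspawn-ubuntu-noble-nvidia-560-amd64 ${BINDS[*]} --boot"
echo "$cmd"
```

On a host without the GPU devices present, the loop simply adds no binds, which makes missing device nodes easy to spot in the printed command.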
- Create variants/<name>.conf with your configuration:

  IMAGE_NAME="nspawn-myvariant"
  BASE_DISTRO="noble"
  BASE_MIRROR="http://archive.ubuntu.com/ubuntu"
  EXTRA_PACKAGES="curl wget nginx"

- Optionally add customization scripts in variants/<name>.d/:

  # variants/myvariant.d/00-setup.sh
  #!/bin/bash
  set -euo pipefail
  systemctl enable nginx
  touch /etc/nspawn-customized

- Optionally add variant-specific tests in tests/suites/variant-<name>.sh.
The base customize.d/ scripts always run first, then variant-specific ones.
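That ordering can be illustrated with a throwaway directory tree. This is a sketch of the sequencing only; build.sh's internals may differ.

```shell
#!/bin/bash
# Demonstrate the execution order: base customize.d/ scripts first, then the
# variant's .d/ directory, each set sorted lexicographically by filename.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/customize.d" "$demo/variants/myvariant.d"
touch "$demo/customize.d/10-base.sh" "$demo/customize.d/05-early.sh" \
      "$demo/variants/myvariant.d/00-setup.sh"

order=""
for script in "$demo"/customize.d/*.sh "$demo"/variants/myvariant.d/*.sh; do
  name=${script#"$demo"/}
  echo "run: $name"   # in the real build, each script runs via chroot in the rootfs
  order="$order $name"
done
rm -rf "$demo"
```

Note that 05-early.sh runs before 10-base.sh even though it was created second: ordering comes from the filename sort, not creation time.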
A TAP-format test framework validates built images.
# Run all tests for a variant
sudo ./tests/run-tests.sh --variant ubuntu-noble
# Run a specific test suite
sudo ./tests/run-tests.sh --variant ubuntu-noble-nvidia-560 --suite packages
# List available test suites
./tests/run-tests.sh --variant ubuntu-noble --list

| Suite | What it checks |
|---|---|
| rootfs | Directory structure, permissions, clean /tmp, clean apt cache |
| packages | All configured packages installed, no broken packages, binaries executable |
| services | systemd-networkd/resolved enabled, no broken units |
| security | No world-writable files, no unexpected SUID, shadow permissions, no empty passwords |
| nspawn | Container execution, /proc and /sys mounted, DNS resolution, os-release readable |
| variant-* | Variant-specific checks (e.g., hostname, NVIDIA repo/packages for nvidia variant) |
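The suites emit TAP output. A minimal sketch of that pattern looks like the following; the helper name and the specific checks are illustrative, not the framework's actual API.

```shell
#!/bin/bash
# Minimal TAP-style helper: each check prints "ok N - desc" or "not ok N - desc",
# and a plan line "1..N" closes the run.
count=0
tap_check() {   # usage: tap_check "description" command [args...]
  count=$((count + 1))
  local desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok $count - $desc"
  else
    echo "not ok $count - $desc"
  fi
}

tap_check "/tmp is a directory" test -d /tmp
tap_check "os-release is readable" test -r /etc/os-release
echo "1..$count"
```

Because TAP is plain line-oriented text, the same output can be consumed by any TAP harness (prove, CI log parsers) without changes to the suites.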
Images can be published to GHCR as OCI artifacts, either via CI or manually.
The included workflow automatically:
- Discovers all variants and builds them for both amd64 and arm64 architectures in parallel (matrix strategy)
- Validates each image with validate.sh
- Runs the full test suite for each variant/architecture combination
- Uploads each tarball as a GitHub Actions artifact (retained 30 days)
- On pushes to main: publishes each variant+architecture to GHCR with commit-SHA and latest tags
Use workflow_dispatch to build a specific variant or architecture, or to publish with a custom tag.
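A manual dispatch might look like the following via the GitHub CLI. The workflow file name and input names (variant, arch, tag) are assumptions here; check .github/workflows/ for the real ones.

```shell
#!/bin/bash
# Hypothetical workflow_dispatch invocation, echoed as a dry run.
# Remove the "echo" wrapper (and fix the inputs) to actually dispatch.
cmd="gh workflow run build.yml -f variant=ubuntu-noble-nvidia-560 -f arch=arm64 -f tag=v1.0.0"
echo "$cmd"
```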
# Push
export GHCR_TOKEN="ghp_..."
./ghcr-push.sh --tag v1.0.0
# Pull and deploy
sudo GHCR_TOKEN="ghp_..." ./ghcr-pull.sh --extract
sudo ./run.sh

./export.sh s3://my-bucket/images/
sudo ./import.sh s3://my-bucket/images/nspawn-base.tar.zst
sudo ./run.sh
sudo ./run.sh --shell # interactive shell

Add shell scripts to customize.d/. They run inside the rootfs via chroot during build, sorted by filename. Example:
# customize.d/10-install-app.sh
#!/bin/bash
apt-get install -y myapp
touch /etc/nspawn-customized

| File | Description |
|---|---|
| config.env | Default image configuration |
| variants/ | Variant configs (.conf) and customize scripts (.d/) |
| customize.d/ | Base customize scripts (run for all variants) |
| build.sh | Builds rootfs + packs tarball (supports --variant, --all) |
| validate.sh | Quick validation of built image |
| tests/ | Test framework with TAP output |
| tests/run-tests.sh | Test runner (supports --variant, --suite, --list) |
| tests/suites/ | Individual test suites |
| export.sh | Upload tarball to S3 |
| import.sh | Download from S3 + extract |
| ghcr-push.sh | Push tarball to GHCR as OCI artifact |
| ghcr-pull.sh | Pull tarball from GHCR |
| run.sh | Launch container via systemd-nspawn |
| .github/workflows/ | CI: build all variants, test, publish |
Required:
- debootstrap: Bootstrap Debian/Ubuntu base systems
- systemd-container: systemd-nspawn container manager
- zstd: Compression for tarball images
For cross-architecture builds:
- qemu-user-static: QEMU user-mode emulation
- binfmt-support: Binary format support for foreign architectures
Optional:
- awscli or mc: S3 export/import
- oras: GHCR push/pull (auto-installed by scripts if missing)
# Ubuntu/Debian
sudo apt-get install debootstrap systemd-container zstd
# For cross-architecture support
sudo apt-get install qemu-user-static binfmt-support
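After installing the cross-architecture packages, a quick check like this confirms that foreign-binary emulation is actually registered. The binfmt_misc path is the standard kernel location; entry names vary by distro.

```shell
#!/bin/bash
# Check whether qemu binfmt handlers are registered (needed so amd64 hosts can
# run arm64 binaries inside the rootfs during a cross-architecture build).
if [ -d /proc/sys/fs/binfmt_misc ]; then
  count=$(ls /proc/sys/fs/binfmt_misc 2>/dev/null | grep -c '^qemu-')
  status="qemu binfmt entries registered: $count"
else
  status="binfmt_misc not mounted; cross-architecture builds will not work"
fi
echo "$status"
```

A count of zero with binfmt_misc mounted usually means qemu-user-static is installed but its handlers were not registered; reinstalling the package or rebooting typically fixes it.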