k8e.sh: Open Source Agentic AI Sandbox Matrix. A CNCF-conformant Kubernetes distribution in a single binary under 100MB, purpose-built for secure, isolated AI agent execution at scale. Up and running in 60 seconds. Inspired by K3s.

```bash
curl -sfL https://k8e.sh/install.sh | sh -
```

That's it. Your agentic sandbox matrix is ready.
K8E is the Open Source Agentic AI Sandbox Matrix: a Kubernetes-native platform for running secure, isolated AI agent workloads at scale, packaged as a single binary under 100MB.

As autonomous AI agents increasingly generate and execute untrusted code, robust sandboxing infrastructure is no longer optional. K8E ships everything needed to spin up a production-grade cluster in under 60 seconds, with first-class primitives for agent isolation, resource governance, and ephemeral execution environments, purpose-built for the AI era.

> One cluster. Many agents. Zero trust between them.
| Capability | Description |
|---|---|
| Hardware Isolation | Pluggable runtimes: gVisor (default), Kata Containers, Firecracker microVM |
| Network Policies | Cilium eBPF `toFQDNs` egress control: per-session, no proxy process needed |
| Resource Quotas | CPU/memory caps per agent session to prevent runaway costs |
| Ephemeral Workspaces | Auto-cleanup after agent session ends |
| Warm Pool | Pre-booted sandbox pods for sub-500ms session claim latency |
| agent-sandbox compatible | Works with kubernetes-sigs/agent-sandbox |
| MCP / A2A ready | Any MCP-compatible agent (kiro, claude, gemini) connects via `k8e sandbox-mcp` |
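The warm-pool row above is the key latency trick: sessions are claimed from a queue of already-booted pods instead of cold-booting one per request. A minimal sketch of the idea (the `WarmPool` class is illustrative, not K8E's actual implementation):

```python
import collections
import itertools

class WarmPool:
    """Illustrative warm pool: claim a pre-booted sandbox instead of cold-booting."""

    def __init__(self, size: int):
        self._ids = itertools.count(1)
        # Pre-boot `size` sandboxes up front so claims are instant.
        self._ready = collections.deque(self._boot() for _ in range(size))

    def _boot(self) -> str:
        # In a real system this creates a gVisor/Kata pod (hundreds of ms).
        return f"sandbox-{next(self._ids)}"

    def claim(self) -> str:
        # Constant-time pop when the pool is warm; here the refill is done
        # synchronously for simplicity (K8E would refill in the background).
        sandbox = self._ready.popleft() if self._ready else self._boot()
        self._ready.append(self._boot())  # keep the pool at target size
        return sandbox

pool = WarmPool(size=2)
print(pool.claim())  # sandbox-1 (pre-booted, claimed instantly)
print(pool.claim())  # sandbox-2
```

The claim path never pays the boot cost as long as the pool keeps up with demand, which is how sub-500ms claim latency is achievable even with VM-backed runtimes.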
```
┌──────────────────────────────────────────────────────────────┐
│                         K8E CLUSTER                          │
│                                                              │
│  ┌────────────────────────────────────────────────────────┐  │
│  │              CONTROL PLANE (Server Node)               │  │
│  │  ┌────────────┐  ┌───────────┐  ┌──────┐               │  │
│  │  │ API Server │  │ Scheduler │  │ etcd │               │  │
│  │  └────────────┘  └───────────┘  └──────┘               │  │
│  │  ┌────────────────┐  ┌──────────────────────────┐      │  │
│  │  │ Controller Mgr │  │ SandboxMatrix Controller │      │  │
│  │  └────────────────┘  └──────────────────────────┘      │  │
│  └────────────────────────────────────────────────────────┘  │
│                              │                               │
│              ┌───────────────┴───────────┐                   │
│  ┌───────────▼──────────┐    ┌───────────▼──────────┐        │
│  │     WORKER NODE      │    │     WORKER NODE      │        │
│  │  ┌────────────────┐  │    │  ┌────────────────┐  │        │
│  │  │ sandbox-matrix │  │    │  │ sandbox-matrix │  │        │
│  │  │  grpc-gateway  │  │    │  │  grpc-gateway  │  │        │
│  │  │  :50051 (TLS)  │  │    │  │  :50051 (TLS)  │  │        │
│  │  └───────┬────────┘  │    │  └───────┬────────┘  │        │
│  │          │           │    │          │           │        │
│  │  ┌───────▼────────┐  │    │  ┌───────▼────────┐  │        │
│  │  │ Isolated Pods  │  │    │  │ Isolated Pods  │  │        │
│  │  │ gVisor/Kata/FC │  │    │  │ gVisor/Kata/FC │  │        │
│  │  └────────────────┘  │    │  └────────────────┘  │        │
│  │  Cilium CNI (eBPF)   │    │  Cilium CNI (eBPF)   │        │
│  └──────────────────────┘    └──────────────────────┘        │
└──────────────────────────────────────────────────────────────┘
                           ▲
                           │ gRPC (TLS)
                  ┌────────┴────────┐
                  │ k8e sandbox-mcp │  ← MCP stdio bridge
                  └────────┬────────┘
                           │ stdin/stdout
                  ┌────────┴────────┐
                  │    AI Agent     │  (kiro / claude / gemini / any MCP client)
                  └─────────────────┘
```
| Component | Version | Purpose |
|---|---|---|
| Kubernetes | v1.35.x | Core orchestration engine |
| Cilium | Latest | eBPF networking & per-session egress policy |
| Containerd | v1.7.x | Container runtime |
| etcd | v3.5.x | Distributed key-value store |
| CoreDNS | v1.11.x | Cluster DNS |
| Helm Controller | v0.16.x | GitOps & chart management |
| Metrics Server | v0.7.x | Resource metrics |
| Local Path Provisioner | v0.0.30 | Persistent storage |
| gVisor / Kata / Firecracker | — | Pluggable sandbox isolation runtimes |
| Sandbox MCP Server | built-in | `k8e sandbox-mcp`, the agent tool bridge |
Install the runtime shim before K8E so it is auto-detected on first startup. gVisor is recommended: no KVM required.
```bash
curl -fsSL https://gvisor.dev/archive.key | gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] \
  https://storage.googleapis.com/gvisor/releases release main" \
  > /etc/apt/sources.list.d/gvisor.list
apt-get update && apt-get install -y runsc
```

K8E detects `runsc` at startup and automatically injects the gVisor stanza into its containerd config (`/var/lib/k8e/agent/etc/containerd/config.toml`). Do not run `runsc install`; K8E manages its own containerd configuration.
Need stronger isolation? See Sandbox Runtime Setup for Kata Containers and Firecracker.
```bash
curl -sfL https://k8e.sh/install.sh | sh -
export KUBECONFIG=/etc/k8e/k8e.yaml

kubectl get nodes
kubectl get runtimeclass              # should show: gvisor
kubectl -n sandbox-matrix get pods    # Sandbox Matrix starts automatically
```

`sandbox-install-skill` does two things at once:

- Writes the `k8e-sandbox` MCP server entry into the agent's config file
- Copies the sandbox skill files from `/var/lib/k8e/server/skills/` into the agent's skills directory

> K8E server must have started at least once before running this command (it stages the skill files on first boot).

```bash
k8e sandbox-install-skill all   # installs into kiro, claude, gemini at once
```

Then ask your agent naturally:

> "Run this Python snippet in a sandbox"

That's it. The agent calls `sandbox_run` automatically; no session management needed.
K8E auto-detects installed runtimes and registers the corresponding RuntimeClass. Choose based on your isolation requirements:
| Runtime | Isolation | Requirement | Boot time |
|---|---|---|---|
| gVisor | Syscall interception (userspace kernel) | None | ~10ms |
| Kata Containers | VM-backed (QEMU) | Nested virt or bare metal | ~500ms |
| Firecracker | Hardware microVM (KVM) | `/dev/kvm` | ~125ms |
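Auto-detection boils down to probing each runtime's prerequisites and registering a RuntimeClass for every match. A hedged sketch of that decision (the function name and exact probes are illustrative, not K8E source):

```python
import os
import shutil

def detect_runtimes(has_kvm=None):
    """Illustrative probe: which sandbox RuntimeClasses could be registered."""
    if has_kvm is None:
        has_kvm = os.path.exists("/dev/kvm")
    available = []
    if shutil.which("runsc"):
        # gVisor: userspace kernel, no virtualization requirement.
        available.append("gvisor")
    if shutil.which("kata-runtime") and has_kvm:
        # Kata: VM-backed, needs nested virt or bare metal.
        available.append("kata")
    if has_kvm and shutil.which("firecracker-containerd"):
        # Firecracker: hardware microVM, requires /dev/kvm.
        available.append("firecracker")
    return available

print(detect_runtimes())
```

On a laptop without KVM this would typically return `["gvisor"]`, which matches the table: gVisor is the only runtime with no hardware requirement.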
### gVisor (recommended)

```bash
curl -fsSL https://gvisor.dev/archive.key | gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] \
  https://storage.googleapis.com/gvisor/releases release main" \
  > /etc/apt/sources.list.d/gvisor.list
apt-get update && apt-get install -y runsc
```

> Do not run `runsc install`; K8E manages its own containerd config at `/var/lib/k8e/agent/etc/containerd/config.toml` and auto-injects the gVisor stanza on startup.
### Kata Containers
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/kata-containers/main/utils/kata-manager.sh) install-packages"
kata-runtime check
```

### Firecracker

```bash
ls /dev/kvm   # verify KVM is available
# Install firecracker-containerd shim + devmapper snapshotter
# See: https://github.com/firecracker-microvm/firecracker-containerd
mkdir -p /var/lib/firecracker-containerd/runtime
# Place hello-vmlinux.bin and default-rootfs.img here
```

Install runtimes before starting K8E for zero-restart setup. If K8E is already running, restart it after installing a new runtime shim:

```bash
systemctl restart k8e
kubectl get runtimeclass
# NAME          HANDLER       AGE
# gvisor        runsc         10s
# kata          kata-qemu     10s
# firecracker   firecracker   10s   ← only if /dev/kvm present
```

`k8e sandbox-mcp` is a built-in MCP server that bridges any MCP-compatible AI agent to K8E's sandbox infrastructure over gRPC: no extra binaries, no manual endpoint config.
```
AI Agent (kiro / claude / gemini)
        │ stdin/stdout
        ▼
k8e sandbox-mcp
        │ gRPC (TLS, auto-discovered)
        ▼
sandbox-grpc-gateway:50051
        │
        ▼
Isolated Pod (gVisor / Kata / Firecracker)
```
`sandbox-install-skill` does two things in one command:

- Writes the `k8e-sandbox` MCP server entry into the agent's config file
- Copies skill files from `/var/lib/k8e/server/skills/` into the agent's skills directory

> K8E server must have started at least once before running this; it stages the skill files to `/var/lib/k8e/server/skills/` on first boot.
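The first step, writing the MCP entry, amounts to a JSON merge that preserves any servers already configured. A sketch of that merge, assuming the `mcpServers` structure shown in the manual-setup snippet (the `install_mcp_entry` helper is hypothetical):

```python
import json
import pathlib
import tempfile

def install_mcp_entry(config_path, server_name="k8e-sandbox"):
    """Illustrative merge: add the k8e-sandbox MCP entry, keeping existing servers."""
    path = pathlib.Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})[server_name] = {
        "command": "k8e",
        "args": ["sandbox-mcp"],
    }
    path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a throwaway file rather than a real agent config
with tempfile.TemporaryDirectory() as d:
    cfg = install_mcp_entry(f"{d}/settings.json")
    print(json.dumps(cfg, indent=2))
```

Using `setdefault` rather than assignment is what makes the write non-destructive: an existing `mcpServers` map is extended, never replaced.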
```bash
# All supported agents at once
k8e sandbox-install-skill all

# Or per agent
k8e sandbox-install-skill kiro     # MCP config → .kiro/settings.json (workspace)
                                   # Skills     → .kiro/skills/k8e-sandbox-skill/
k8e sandbox-install-skill claude   # MCP config → ~/.claude.json
                                   # Skills     → ~/.claude/skills/k8e-sandbox-skill/
k8e sandbox-install-skill gemini   # MCP config → ~/.gemini/settings.json
                                   # Skills     → ~/.gemini/skills/k8e-sandbox-skill/
```

Manual setup: add to your agent's MCP config:
```json
{
  "mcpServers": {
    "k8e-sandbox": {
      "command": "k8e",
      "args": ["sandbox-mcp"]
    }
  }
}
```

For Claude Code:

```bash
claude mcp add k8e-sandbox -- k8e sandbox-mcp
```

Verify the server responds:

```bash
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","clientInfo":{"name":"test","version":"1.0"},"capabilities":{}}}' \
  | k8e sandbox-mcp
```

| Tool | Description |
|---|---|
| `sandbox_run` | Run code/commands; auto-manages full session lifecycle |
| `sandbox_status` | Check if sandbox service is available |
| `sandbox_create_session` | Create an isolated sandbox pod |
| `sandbox_destroy_session` | Destroy session and clean up |
| `sandbox_exec` | Run a command in a specific session |
| `sandbox_exec_stream` | Run a command, get streaming output |
| `sandbox_write_file` | Write a file into `/workspace` |
| `sandbox_read_file` | Read a file from `/workspace` |
| `sandbox_list_files` | List files modified since a timestamp |
| `sandbox_pip_install` | Install Python packages via pip |
| `sandbox_run_subagent` | Spawn a child sandbox (depth ≤ 1) |
| `sandbox_confirm_action` | Gate irreversible actions on user approval |
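The depth limit on `sandbox_run_subagent` is what stops runaway recursion: a sandbox created by a subagent cannot spawn another. A minimal sketch of such a gate (names and return shape are illustrative, not K8E's wire format):

```python
class SubagentDepthError(RuntimeError):
    pass

MAX_DEPTH = 1  # per the tool table: depth <= 1

def run_subagent(parent_depth, task):
    """Illustrative gate: refuse to spawn beyond the allowed nesting depth."""
    child_depth = parent_depth + 1
    if child_depth > MAX_DEPTH:
        raise SubagentDepthError(
            f"subagent depth {child_depth} exceeds limit {MAX_DEPTH}"
        )
    # A real implementation would create a child sandbox session here.
    return {"task": task, "depth": child_depth}

print(run_subagent(0, "summarize logs"))  # allowed: top-level agent spawns one child
try:
    run_subagent(1, "spawn grandchild")   # rejected: a child may not spawn
except SubagentDepthError as exc:
    print("blocked:", exc)
```

Enforcing the limit server-side (rather than trusting the agent) matters because the code being run is untrusted by definition.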
The MCP server auto-discovers the local cluster. Override when needed:

```bash
K8E_SANDBOX_ENDPOINT=10.0.0.1:50051 k8e sandbox-mcp   # remote cluster
K8E_SANDBOX_CERT=/path/to/ca.crt k8e sandbox-mcp      # custom TLS cert
k8e sandbox-mcp --endpoint 10.0.0.1:50051 --tls-cert /path/to/ca.crt
```

Auto-discovery probe order:

1. `K8E_SANDBOX_ENDPOINT` env var
2. `K8E_SANDBOX_CERT` / `K8E_SANDBOX_KEY` env vars
3. `/var/lib/k8e/server/tls/serving-kube-apiserver.crt` (server node, root)
4. `/etc/k8e/k8e.yaml` kubeconfig CA (agent node / non-root)
5. `127.0.0.1:50051` with system CA pool
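The probe order above is a first-match resolver: each source is tried in turn and the first hit wins. A sketch under that assumption (the `resolve_endpoint` helper is hypothetical; the injectable `path_exists` is only there to make the logic testable):

```python
import os

SERVER_CRT = "/var/lib/k8e/server/tls/serving-kube-apiserver.crt"
KUBECONFIG = "/etc/k8e/k8e.yaml"

def resolve_endpoint(env, path_exists=os.path.exists):
    """Illustrative first-match resolver mirroring the documented probe order."""
    if env.get("K8E_SANDBOX_ENDPOINT"):
        # 1. Explicit endpoint wins outright.
        return env["K8E_SANDBOX_ENDPOINT"], env.get("K8E_SANDBOX_CERT")
    if env.get("K8E_SANDBOX_CERT"):
        # 2. Explicit cert, default local endpoint.
        return "127.0.0.1:50051", env["K8E_SANDBOX_CERT"]
    if path_exists(SERVER_CRT):
        # 3. Server node running as root: use the serving cert.
        return "127.0.0.1:50051", SERVER_CRT
    if path_exists(KUBECONFIG):
        # 4. Agent node / non-root: use the kubeconfig CA.
        return "127.0.0.1:50051", KUBECONFIG
    # 5. Last resort: system CA pool.
    return "127.0.0.1:50051", None

print(resolve_endpoint({"K8E_SANDBOX_ENDPOINT": "10.0.0.1:50051"}, lambda p: False))
# ('10.0.0.1:50051', None)
```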
The Python SDK talks directly to the sandbox gRPC gateway; no MCP process spawn, no stdio handshake (~1–5 ms vs ~500 ms for MCP stdio).

```bash
python3 -m pip install grpcio grpcio-tools protobuf
```

```bash
python3 -m grpc_tools.protoc -I proto \
  --python_out=sdk/python \
  --grpc_python_out=sdk/python \
  proto/sandbox/v1/sandbox.proto

# make the generated package importable
touch sdk/python/sandbox/__init__.py sdk/python/sandbox/v1/__init__.py
```

Run code (session auto-managed):
```python
from sandbox_client import SandboxClient

with SandboxClient() as client:
    result = client.run("print('hello')", language="python")
    print(result.stdout)     # hello
    print(result.exit_code)  # 0
```

Generate 10 random numbers and compute the average:

```python
from sandbox_client import SandboxClient

code = (
    "import random; nums = [random.randint(1,100) for _ in range(10)]; "
    "print('numbers:', nums); print('average:', sum(nums)/len(nums))"
)

with SandboxClient() as client:
    result = client.run(code, language="python")
    print(result.stdout)
# numbers: [39, 60, 50, 24, 53, 32, 85, 10, 81, 3]
# average: 43.7
```

Multi-step workflow (shared session):

```python
with SandboxClient() as client:
    client.run("pip install pandas", "bash")           # session created
    result = client.run("python3 analyze.py", "bash")  # same session reused
```

Explicit session with custom options:

```python
from sandbox_client import sandbox_session

with sandbox_session(runtime_class="kata", allowed_hosts=["github.com"]) as (client, sid):
    client.write_file(sid, "/workspace/main.py", code)
    result = client.exec(sid, "python3 /workspace/main.py")
```

SDK source: `sdk/python/sandbox_client.py`
The TypeScript SDK talks directly to the sandbox gRPC gateway; no MCP process spawn, no stdio handshake (~1–5 ms vs ~500 ms for MCP stdio).

```bash
npm install @grpc/grpc-js @grpc/proto-loader
```

Run code (session auto-managed):

```typescript
import { SandboxClient } from "./sandbox_client";

const client = new SandboxClient();
const result = await client.run("print('hello')", "python");
console.log(result.stdout); // hello
await client.close();
```

Generate 10 random numbers and compute the average:

```typescript
const client = new SandboxClient();
const code =
  "import random; nums=[random.randint(1,100) for _ in range(10)]; " +
  "print('numbers:',nums); print('average:',sum(nums)/len(nums))";
const result = await client.run(code, "python");
console.log(result.stdout);
// numbers: [39, 60, 50, 24, 53, 32, 85, 10, 81, 3]
// average: 43.7
await client.close();
```

Multi-step workflow (shared session):

```typescript
const client = new SandboxClient();
await client.run("pip install pandas", "bash");                // session created
const result = await client.run("python3 analyze.py", "bash"); // same session reused
await client.close();
```

Explicit session with custom options:

```typescript
const sid = await client.createSession({ runtimeClass: "kata", allowedHosts: ["github.com"] });
await client.writeFile(sid, "/workspace/main.py", code);
const result = await client.exec(sid, "python3 /workspace/main.py");
await client.destroySession(sid);
```

Streaming output:

```typescript
for await (const chunk of client.execStream(sid, "python3 train.py")) {
  process.stdout.write(chunk);
}
```

One-shot helper:

```typescript
import { sandboxRun } from "./sandbox_client";

const { stdout } = await sandboxRun("echo hello");
```

SDK source: `sdk/typescript/sandbox_client.ts`
```bash
# Get token from server node
cat /var/lib/k8e/server/node-token

# On worker machine
curl -sfL https://k8e.sh/install.sh | \
  K8E_TOKEN=<token> \
  K8E_URL=https://<server-ip>:6443 \
  INSTALL_K8E_EXEC="agent" \
  sh -
```

To run a plain cluster without the Sandbox Matrix:

```bash
curl -sfL https://k8e.sh/install.sh | INSTALL_K8E_EXEC="server --disable-sandbox-matrix" sh -
```

```bash
K8E_TOKEN=<secret>              # cluster join token
K8E_URL=https://<server>:6443   # server URL (agent nodes)
K8E_KUBECONFIG_OUTPUT=<path>    # kubeconfig output path
```

| Feature | K8E | K3s | K8s (vanilla) | MicroK8s |
|---|---|---|---|---|
| Install time | ~60s | ~90s | ~20min | ~5min |
| Binary size | <100MB | ~70MB | ~1GB+ | ~200MB |
| Agentic Sandbox | ✅ Native | ❌ No | ❌ No | ❌ No |
| eBPF networking | ✅ Cilium | ❌ No | — | — |
| MCP skill built-in | ✅ Yes | ❌ No | ❌ No | ❌ No |
| HA embedded etcd | ✅ Yes | ✅ Yes | ✅ Yes | — |
| CNCF conformant | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Multi-arch | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
```bash
git clone https://github.com/<your-username>/k8e.git && cd k8e
git checkout -b feat/my-feature
make && make test
git push origin feat/my-feature
```

- Bug Reports
- Feature Requests
- Open PRs
Report vulnerabilities via GitHub Security Advisories. Do not open public issues for security bugs.
Apache License 2.0; see LICENSE.
| Project | Contribution |
|---|---|
| K3s | Lightweight Kubernetes foundation that inspired K8E |
| Kubernetes | The orchestration engine everything is built on |
| Cilium | eBPF-powered networking and per-session egress control |
| agent-sandbox | Kubernetes-native agent sandboxing primitives |
| CNCF | Fostering the open-source cloud native ecosystem |