dstack

The open framework for confidential AI.

Documentation · Examples · Community


What is dstack?

dstack is the open framework for confidential AI: deploy AI applications with cryptographic privacy guarantees.

AI providers ask users to trust them with sensitive data. But trust doesn't scale, and trust can't be verified. With dstack, your containers run inside confidential VMs (Intel TDX) with native support for NVIDIA Confidential Computing (H100, Blackwell). Users can cryptographically verify exactly what's running: private AI with your existing Docker workflow.

Features

Zero-friction onboarding

  • Docker Compose native: Bring your docker-compose.yaml as-is. No SDK, no code changes.
  • Encrypted by default: Network traffic and disk storage encrypted out of the box.

Hardware-rooted security

  • Private by hardware: Data encrypted in memory, inaccessible even to the host.
  • Reproducible OS: Deterministic builds mean anyone can verify the OS image hash.
  • Workload identity: Every app gets an attested identity users can verify cryptographically.
  • Confidential GPUs: Native support for NVIDIA Confidential Computing (H100, Blackwell).

Trustless operations

  • Isolated keys: Per-app keys are derived in the TEE, survive hardware failure, and are never exposed to operators.
  • Code governance: Updates follow predefined rules (e.g., multi-party approval). Operators can't swap code or access secrets.

Getting Started

Try it now: Chat with LLMs running in TEE at chat.redpill.ai. Click the shield icon to verify attestations from Intel TDX and NVIDIA GPUs.

Deploy your own:

# docker-compose.yaml
services:
  vllm:
    image: vllm/vllm-openai:latest
    runtime: nvidia                             # NVIDIA container runtime for GPU access
    command: --model Qwen/Qwen2.5-7B-Instruct   # arguments passed to the vLLM server
    ports:
      - "8000:8000"                             # OpenAI-compatible API port

Deploy to any TDX host with the dstack-nvidia-0.5.x base image, or use Phala Cloud for managed infrastructure.

Want to deploy a self-hosted dstack? Check our full deployment guide →

Architecture

Your container runs inside a Confidential VM (Intel TDX) with optional GPU isolation via NVIDIA Confidential Computing. The CPU TEE protects application logic; the GPU TEE protects model weights and inference data.

Core components:

  • Guest Agent: Runs inside each CVM. Generates TDX attestation quotes so users can verify exactly what's running. Provisions per-app cryptographic keys from KMS. Encrypts local storage. Apps interact via /var/run/dstack.sock.

  • KMS: Runs in its own TEE. Verifies TDX quotes before releasing keys. Enforces authorization policies defined in on-chain smart contracts — operators cannot bypass these checks. Derives deterministic keys bound to each app's attested identity (see the conceptual sketch after this list).

  • Gateway: Terminates TLS at the edge and provisions ACME certificates automatically. Routes traffic to CVMs. All internal communication uses RA-TLS for mutual attestation.

  • VMM: Runs on bare-metal TDX hosts. Parses docker-compose files directly — no app changes needed. Boots CVMs from a reproducible OS image. Allocates CPU, memory, and confidential GPU resources.
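
dstack's actual key-derivation scheme is defined by its KMS implementation. Purely as a conceptual illustration, the sketch below uses HKDF (RFC 5869) over a hypothetical KMS root secret and app measurement to show how deterministic per-app keys can be bound to an attested identity:

# Conceptual sketch only, NOT dstack's actual KDF. Shows how a KMS can
# derive a deterministic per-app key from a root secret plus the app's
# attested identity: the same app always gets the same key, and a
# different measurement yields an unrelated key.
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) built from the standard library."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Hypothetical inputs: the root secret lives inside the KMS TEE; the
# measurement commits to the app's compose file and OS image.
kms_root_secret = b"\x00" * 32  # placeholder value
app_measurement = hashlib.sha256(b"compose-hash || os-image-hash").digest()

app_key = hkdf_sha256(kms_root_secret, salt=app_measurement, info=b"dstack-app-key")
print(app_key.hex())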

Full security model →

SDKs

Apps communicate with the guest agent via HTTP over /var/run/dstack.sock. Use the HTTP API directly with curl, or use a language SDK:

Language     Install                                       Docs
Python       pip install dstack-sdk                        README
TypeScript   npm install @phala/dstack-sdk                 README
Rust         cargo add dstack-sdk                          README
Go           go get github.com/Dstack-TEE/dstack/sdk/go    README
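
For the direct-HTTP route mentioned above, any language that can speak HTTP over a Unix domain socket works. Here is a minimal standard-library Python sketch; the /info path is a placeholder rather than the agent's documented route, so check the HTTP API docs for the real endpoints:

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix domain socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/var/run/dstack.sock")
conn.request("GET", "/info")  # placeholder endpoint; see the HTTP API docs
print(conn.getresponse().read().decode())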

Documentation

For Developers

For Operators

Reference

Security

FAQ

Why not use AWS Nitro / Azure Confidential VMs / GCP directly?

You can — but you'll build everything yourself: attestation verification, key management, Docker orchestration, certificate provisioning, and governance. dstack provides all of this out of the box.

Approach                     Docker native   GPU TEE   Key management   Attestation tooling   Open source
dstack                       Yes             Yes       Built-in         Built-in              Yes
AWS Nitro Enclaves           -               -         Manual           Manual                -
Azure Confidential VMs       -               Preview   Manual           Manual                -
GCP Confidential Computing   -               -         Manual           Manual                -

Cloud providers give you the hardware primitive. dstack gives you the full stack: reproducible OS images, automatic attestation, per-app key derivation, TLS certificates, and smart contract governance. No vendor lock-in.

How is this different from SGX/Gramine?

SGX requires porting applications to enclaves. dstack uses full-VM isolation (Intel TDX), so you can bring your Docker containers as-is. It also supports GPU TEEs, which SGX doesn't offer.

What's the performance overhead?

Minimal. Intel TDX adds ~2-5% overhead for CPU workloads. NVIDIA Confidential Computing has negligible impact on GPU inference. The main cost is memory encryption, which is hardware-accelerated on supported CPUs.

Is this production-ready?

Yes. dstack powers production AI infrastructure at OpenRouter and NEAR AI. The framework has been audited by zkSecurity and is a Linux Foundation Confidential Computing Consortium project.

Can I run this on my own hardware?

Yes. dstack runs on any Intel TDX-capable server. See the deployment guide for self-hosting instructions. You can also use Phala Cloud for managed infrastructure.

What TEE hardware is supported?

Currently: Intel TDX (4th/5th Gen Xeon) and NVIDIA Confidential Computing (H100, Blackwell). AMD SEV-SNP support is planned.

How do users verify my deployment?

Your app exposes attestation quotes via the SDK. Users verify these quotes using dstack-verifier, dcap-qvl, or the Trust Center. See the verification guide for details.

Trusted by

  • OpenRouter - Confidential AI inference providers powered by dstack
  • NEAR AI - Private AI infrastructure powered by dstack

dstack is a Linux Foundation Confidential Computing Consortium open source project.

Community

Telegram · GitHub Discussions · Examples

Cite

If you use dstack in your research, please cite:

@article{zhou2025dstack,
  title={Dstack: A Zero Trust Framework for Confidential Containers},
  author={Zhou, Shunfan and Wang, Kevin and Yin, Hang},
  journal={arXiv preprint arXiv:2509.11555},
  year={2025}
}

Media Kit

Logo and branding assets: dstack-logo-kit

License

Apache 2.0