- Overview
- Test Types and Build Tags
- Module Structure
- Prerequisites & Setup
- Quick Start: Running Tests
- Quick Decision Guide
- Unit Tests
- Integration Tests via t/ Runner
- Running Integration Tests
- Integration v2 Tests (integration2 build tag)
- Upgrade Tests (upgrade build tag)
- Writing Tests in Go (Dgraph Conventions)
- Fuzz Tests
- Future Improvement Ideas
Dgraph employs a sophisticated testing framework with extensive test coverage. The codebase
contains more than 200 test files with over 2,000 test and benchmark functions across multiple
packages and modules.
This guide helps engineers navigate testing in the Dgraph codebase.
If you're making a change, you should be able to:
- Choose the appropriate test type for your change (unit, integration, systest, upgrade, or fuzz)
- Identify where to add new tests in the repo
- Write Go tests that follow Dgraph's patterns
- Execute tests locally: a single test, a whole package, or a named suite
- Debug CI failures by reproducing them on your own machine
The testing framework uses Go build tags to conditionally compile tests that are more costly to run.
We distinguish the following types of tests:
- Purpose: Test individual functions and components in isolation
- Examples: `dql/dql_test.go`, `types/value_test.go`, `schema/parse_test.go`
- Build tags: none (standard Go unit tests)
- Unit tests run without any cluster and are usually fast.
- Purpose: Test component interactions and full system workflows
- Examples: `acl/acl_test.go`, `worker/worker_test.go`, `query/query0_test.go`
- Build tag: `//go:build integration`
- Purpose: Test database upgrade scenarios and migrations
- Examples: `acl/upgrade_test.go`, `worker/upgrade_test.go`
- Build tag: `//go:build upgrade`
- Function prefix: `Benchmark`
- Purpose: Performance testing and optimization
- Examples: `query/benchmark_test.go`, `dql/bench_test.go`
- Purpose: Test cloud-specific functionality
- Examples: `query/cloud_test.go`, `systest/cloud/cloud_test.go`
- Build tag: `//go:build cloud`
Integration, Upgrade and Benchmark tests require a running Dgraph cluster (Docker) and come in two
forms: tests driven by the t/ runner, and tests using the dgraphtest package, which provides
programmatic control over local Dgraph clusters. Most newer integration2 and upgrade tests rely on
dgraphtest.
Note: The `testutil` package is being phased out. For new tests, prefer `dgraphtest` (cluster
management) and `dgraphapi` (client operations). The `testutil` package is maintained for
backward compatibility with existing tests only.
The main module is github.com/hypermodeinc/dgraph
The codebase is organized into several key packages:
| Package | Description |
|---|---|
| `acl` | Access Control Lists and authentication |
| `algo` | Algorithms and data structures |
| `audit` | Audit logging functionality |
| `backup` | Backup and restore operations |
| `chunker` | Data chunking and parsing |
| `codec` | Encoding/decoding utilities |
| `conn` | Connection management |
| `dgraph` | Main Dgraph binary and commands |
| `dgraphapi` | Dgraph API client |
| `dgraphtest` | Testing utilities |
| `dql` | Dgraph Query Language parser and processor |
| `edgraph` | GraphQL endpoint |
| `filestore` | File storage abstraction |
| `graphql` | GraphQL implementation |
| `lex` | Lexical analysis |
| `posting` | Posting list management |
| `query` | Query processing engine |
| `raftwal` | Raft write-ahead log |
| `schema` | Schema management |
| `systest` | System integration tests |
| `testutil` | Testing utilities |
| `tok` | Tokenization and text processing |
| `types` | Data type definitions |
| `upgrade` | Database upgrade utilities |
| `worker` | Worker processes |
| `x` | Common utilities |
Before running tests, ensure you have the following installed and configured.
TL;DR: On a fresh checkout, run `make setup` to auto-install tool dependencies, then
`make install` followed by `make test`. The build system automatically handles OS detection,
builds the correct binaries, and validates dependencies.
The test framework includes scripts that check for required dependencies and can optionally auto-install them:
# Auto-install all missing tool dependencies (recommended for first-time setup)
make setup
# Check dependencies without installing (reports what's missing)
make check-deps
# Same as 'make check-deps' but auto-installs anything missing
make check-deps AUTO_INSTALL=true

The check scripts validate:
- Go version (matches go.mod requirement)
- Docker and Docker Compose versions
- Docker available memory (warns if < 8GB, with auto-fix on macOS)
- gotestsum installation
- ack installation
- Cross-compiler for non-Linux hosts (macOS)
- protoc installation (Linux only)
- Dgraph binary existence and correct architecture
Note: You do not need to install these manually. Running `make setup` or
`make check-deps AUTO_INSTALL=true` from the repo root automatically checks and installs all
missing dependencies. The commands below are listed only as reference for what gets installed.
go version # Verify Go is installed

docker --version
docker compose version
# Allocate sufficient memory: 4GB minimum, 8GB recommended
# Docker Desktop → Settings → Resources → Memory

# Set GOPATH (if not already set)
export GOPATH=$(go env GOPATH)
echo $GOPATH # Should output something like /Users/you/go
# Add to your shell profile (~/.zshrc, ~/.bashrc)
export GOPATH=$(go env GOPATH)
export PATH=$PATH:$GOPATH/bin

go install gotest.tools/gotestsum@latest
# Verify installation
gotestsum --version

brew install ack

# Build and install Dgraph binary to $GOPATH/bin
make install
# Verify installation
which dgraph # Should show $GOPATH/bin/dgraph
dgraph version

Note: The `t/` runner's Docker Compose files mount the dgraph binary into containers at startup.
On macOS, binaries are read from `$GOPATH/linux_<arch>/dgraph`; on Linux, from
`$GOPATH/bin/dgraph`. Simply run `make install` after code changes; no Docker image rebuild is
needed.
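The lookup convention in the note can be sketched as a small helper. This is illustrative only: `dgraphBinaryPath` is a hypothetical name, not a function in the repo.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// dgraphBinaryPath mirrors the documented convention: Docker Compose files
// mount the Linux binary from $GOPATH/linux_<arch> on macOS hosts, and from
// $GOPATH/bin on Linux hosts.
func dgraphBinaryPath(gopath, hostOS, arch string) string {
	if hostOS == "darwin" {
		return filepath.Join(gopath, "linux_"+arch, "dgraph")
	}
	return filepath.Join(gopath, "bin", "dgraph")
}

func main() {
	fmt.Println(dgraphBinaryPath("/Users/you/go", "darwin", "arm64"))
	// /Users/you/go/linux_arm64/dgraph
	fmt.Println(dgraphBinaryPath("/home/you/go", "linux", "amd64"))
	// /home/you/go/bin/dgraph
}
```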
The build system now handles most setup automatically. On both Linux and macOS:
# Auto-install tool dependencies (gotestsum, ack, etc.)
make setup
# Build dgraph binary (automatically handles Linux binary on macOS)
make install
# Run tests (builds Docker image and runs test suite)
make test

That's it! The `make install` command:

- On Linux: installs dgraph to `$GOPATH/bin/dgraph`
- On macOS: installs the native binary to `$GOPATH/bin/dgraph` AND a Linux binary to `$GOPATH/linux_<arch>/dgraph`
The Docker Compose files automatically use the correct binary path via the LINUX_GOBIN environment
variable.
The build system now automatically handles cross-compilation for macOS users:

- `make install` builds both native macOS and Linux binaries automatically
- Linux binaries are stored in `$GOPATH/linux_<arch>/dgraph`
- Docker Compose files use `${LINUX_GOBIN:-$GOPATH/bin}` to find the correct binary
- No manual binary swapping required!
After code changes, simply run make install again — it handles everything.
Background: Bulk and live loader tests (systest/bulk_live/) execute dgraph bulk and
dgraph live commands locally on your machine (not inside Docker).
Good news: Since `make install` now builds both binaries on macOS, you have:

- A native macOS binary at `$GOPATH/bin/dgraph` (used for local commands)
- A Linux binary at `$GOPATH/linux_<arch>/dgraph` (used by Docker containers)
Use `go test` to run a single quick test in the `types` package:
go test -v ./types/... -run TestConvert
# Or with make (runs all unit tests, not just one)
make test-unit

Expected output:
=== RUN TestConvertToDefault
--- PASS: TestConvertToDefault (0.00s)
...
=== RUN TestConvertToGeoJson_PolyError2
--- PASS: TestConvertToGeoJson_PolyError2 (0.00s)
PASS
ok github.com/dgraph-io/dgraph/v25/types (cached)
? github.com/dgraph-io/dgraph/v25/types/facets [no test files]
Note: Start Docker Desktop before running integration or upgrade tests
cd t && go build . && ./t --test=TestGQLSchema
# Or with make
make test TEST=TestGQLSchema

If both pass, you're ready to run all test types!
The simplest way to run tests is make test (default: integration suite + integration2). Each
test-* target is a shortcut for make test with specific arguments. The table below shows all
three ways to run each test type.
| Target | `make test` equivalent | Without make |
|---|---|---|
| `make test` | (default) | `cd t && ./t --suite=integration`, then `go test -v --tags=integration2 ./...` |
| `make test-unit` | `make test SUITE=unit` | `cd t && ./t --suite=unit` |
| `make test-integration` | `make test SUITE=integration` | `cd t && ./t --suite=integration` |
| `make test-core` | `make test SUITE=core` | `cd t && ./t --suite=core` |
| `make test-systest` | `make test SUITE=systest` | `cd t && ./t --suite=systest` |
| `make test-vector` | `make test SUITE=vector` | `cd t && ./t --suite=vector` |
| `make test-integration-heavy` | `make test SUITE=systest-heavy,ldbc,load` | `cd t && ./t --suite=systest-heavy,ldbc,load` |
| `make test-integration2` | `make test TAGS=integration2` | `go test -v --tags=integration2 ./...` |
| `make test-upgrade` | `make test TAGS=upgrade` | `go test -v --tags=upgrade ./...` |
| `make test-fuzz` | `make test FUZZ=1` | `go test -v -fuzz=Fuzz -fuzztime=300s ./dql/...` |
| `make test-benchmark` | (no equivalent) | `go test -bench=. -benchmem ./...` |
| `make test-all` | (no equivalent) | Runs SUITE=all + integration2 + upgrade + fuzz sequentially |
Tip: All targets accept `PKG=`, `TEST=`, and `TIMEOUT=` variables. For example:
`make test-systest PKG=systest/plugin TEST=TestPasswordReturn TIMEOUT=60m`
Run make help to see all available targets, variables, and dynamically discovered SUITE/TAGS
values.
For more control, pass variables to make test:
| Variable | Purpose | Example |
|---|---|---|
| `SUITE` | Select t/ runner suite | `make test SUITE=integration` |
| `TAGS` | Go build tags (bypasses t/ runner) | `make test TAGS=integration2` |
| `PKG` | Limit to specific package | `make test PKG=systest/export` |
| `TEST` | Run specific test function | `make test TEST=TestGQLSchema` |
| `TIMEOUT` | Per-package test timeout | `make test TIMEOUT=90m` |
| `FUZZ` | Enable fuzz testing | `make test FUZZ=1` |
| `FUZZTIME` | Fuzz duration per package | `make test FUZZ=1 FUZZTIME=60s` |
Precedence: TAGS > FUZZ > SUITE > default (first match wins). When no variable is set,
make test runs integration suite (via t/ runner) plus integration2.
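The first-match precedence can be modeled as a tiny selector. This is an illustrative sketch of the documented rule, not code from the Makefile:

```go
package main

import "fmt"

// selectMode models the documented precedence: TAGS > FUZZ > SUITE > default.
// The first variable that is set wins; the rest are ignored.
func selectMode(tags string, fuzz bool, suite string) string {
	switch {
	case tags != "":
		return "go test --tags=" + tags
	case fuzz:
		return "fuzz run"
	case suite != "":
		return "t/ runner suite " + suite
	default:
		return "integration suite + integration2"
	}
}

func main() {
	// TAGS beats SUITE when both are set.
	fmt.Println(selectMode("integration2", false, "core"))
	// Nothing set: the default pipeline runs.
	fmt.Println(selectMode("", false, ""))
}
```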
# Run integration2 tests for vector package
make test TAGS=integration2 PKG=systest/vector
# Run upgrade tests for ACL with specific test
make test TAGS=upgrade PKG=acl TEST=TestACL
# Run fuzz tests with custom duration
make test FUZZ=1 PKG=dql FUZZTIME=30s
# Run systest for backup package
make test SUITE=systest PKG=systest/backup/filesystem
# Benchmark a specific package
make test-benchmark PKG=posting

Use this section to quickly determine what test to write and where to place it.
Cover as many scenarios as possible. A good PR includes tests for:
- The happy path (expected behaviour)
- Edge cases (empty inputs, boundary values, special characters)
- Error conditions (invalid inputs, failure modes)
Use a layered testing approach. Aim for broad coverage with unit tests to validate individual functions and quickly identify failures, and complement them with integration and end-to-end tests for cluster-dependent behavior and real-world scenarios. Each test type is important and they should be mutually reinforcing.
Unit tests run without a Dgraph cluster. They test pure logic in isolation.
- Place it in the same package as the code you changed
- File name: `*_test.go`, next to the source file
- No build tag needed
Example: Changing worker/export.go → add test in worker/export_test.go
Testing individual functions and components in isolation is usually not enough. Integration tests exercise component interactions and full system workflows, and they require a running Dgraph cluster.
Go build tags are special comments at the top of a file (for example, //go:build integration) that
instruct the Go toolchain when to compile that file. When you run tests with
go test -tags=integration, only test files without a build tag (default) or with a matching tag
are compiled and executed.
We use build tags to exclude expensive or environment-dependent tests (like integration,
integration2, and upgrade) from the default go test ./... run, while allowing you to opt in to
them when needed.
| Build Tag | Purpose |
|---|---|
| `integration` | Standard integration tests requiring a Docker cluster |
| `integration2` | Integration tests using the Docker Go client via the `dgraphtest` package |
| `upgrade` | Tests for upgrade scenarios between dgraph versions |
| `cloud` (deprecated) | Tests running against a cloud environment |
| If you're testing... | Test type | Build tag | Where to place |
|---|---|---|---|
| Query or mutation logic | Integration | `integration` | Existing package or `systest/` |
| Backup / Restore | Integration | `integration` | `systest/backup/` or `systest/online-restore/` |
| Export | Integration | `integration` | `systest/export/` |
| Live loader / Bulk loader | Integration | `integration` | `systest/bulk_live/` or `systest/loader/` |
| Multi-tenancy / Namespaces | Integration | `integration` | `systest/multi-tenancy/` |
| Vector / Embeddings | Integration | `integration` | `systest/vector/` |
| GraphQL schema or endpoints | Integration | `integration` | `graphql/e2e/` |
| ACL / Auth | Integration | `integration` | `acl/` or `systest/acl/` |
| Upgrade from older version | Upgrade | `upgrade` | Same package with `//go:build upgrade` |
| Fine-grained cluster control (start/stop nodes) | Integration2 | `integration2` | `systest/integration2/` or relevant package |
- I fixed a bug in query parsing (no cluster needed to fully validate) → Unit test in `query/*_test.go`, no build tag
- I fixed a bug in export that affects vector data → Integration test in `systest/vector/`, use `dgraphtest.LocalCluster`, tag: `//go:build integration`
- I changed backup behaviour → Integration test in `systest/backup/`, tag: `//go:build integration`
- I need to test behaviour after upgrading from v23 to main → Upgrade test in the relevant package, tag: `//go:build upgrade`
- I changed GraphQL admin endpoint → Integration test in `graphql/e2e/`, tag: `//go:build integration`
- Maximize unit test coverage. If you can fully test it without a cluster, write unit tests only. If it can't be tested at all without a cluster, write integration tests only. Otherwise add a mix of both: unit tests for the parts that can be tested in isolation and integration tests for the remainder.
- Cover multiple scenarios. Don't just test the happy path; include edge cases and error conditions.
- Use table-driven tests. One test function with multiple cases beats many separate functions.
- No flaky tests. Avoid `time.Sleep()`; use polling, retries, or explicit waits with timeouts.
- Follow existing patterns. Look at nearby `*_test.go` files and match their style.
go test [flags] [package] [test-filter]

- `-v` (verbose): Shows detailed output for each test
- `-run <pattern>`: Run only tests matching the pattern (regex)
- `./types/`: Single package
- `./types/...`: Package and all subpackages recursively
# Run all tests in types package
go test ./types/
# Run all tests in types and subpackages
go test ./types/...
# Run specific test with verbose output
go test -v ./types/... -run TestConvert

With make:
# Run all unit tests (no Docker, no build tags)
make test-unit
# Run unit tests for a specific package
make test-unit PKG=types
# Run a specific unit test
make test-unit PKG=types TEST=TestConvert

- No `//go:build` tag at the top of the file = unit test
- Files with `//go:build integration` are NOT unit tests
Place *_test.go next to the code being tested:
| Code in | Test in |
|---|---|
| `types/conversion.go` | `types/conversion_test.go` |
| `dql/parser.go` | `dql/parser_test.go` |
| `schema/parse.go` | `schema/parse_test.go` |
- Parsing logic
- Data conversions
- Utility functions
- Algorithms
- Any code that doesn't need cluster access
The t/ runner orchestrates Docker-based integration tests. It spins up Dgraph clusters using
Docker Compose and runs tests tagged with integration.
- Uses the Dgraph binary from `$GOPATH/bin/dgraph`
- Spins up a cluster via `docker-compose.yml` (package-specific or default)
- Runs tests with `--tags=integration`
- Tears down the cluster after completion
A suite is a named group of test packages that can be run together with the --suite flag.
| Suite | Purpose | Packages/Tests Included |
|---|---|---|
| `unit` | True unit tests only | All packages except ldbc/load; no Docker, no `--tags=integration` |
| `integration` | Default suite: all integration tests except heavy | Everything except ldbc, load, and systest-heavy (replaces old unit) |
| `core` | Core Dgraph functionality | Query, mutation, schema, GraphQL e2e, ACL, TLS, worker |
| `systest` | All system integration tests | Both systest-baseline + systest-heavy (backward compatible) |
| `systest-baseline` | Lean systest for daily dev | backup/filesystem, export, multi-tenancy, audit, CDC, group-delete, plugin, ... |
| `systest-heavy` | Resource-intensive systests | backup/minio*, backup/encryption, backup/advanced-scenarios, tracing, online-restore |
| `vector` | Vector search functionality | Vector index, similarity search, HNSW |
| `ldbc` | Benchmark queries | LDBC benchmark suite |
| `load` | Heavy data loading scenarios | 21million, 1million, bulk_live, bgindex, bulkloader |
| `all` | Everything in t/ runner | All packages |
The runner looks for docker-compose.yml:
- First in the test package directory (e.g., `systest/export/docker-compose.yml`)
- Falls back to the default: `dgraph/docker-compose.yml`
Tests with custom compose files run in isolated clusters.
# Build the runner first
cd t && go build .
# Run a suite
./t --suite=core
# Run specific package
./t --pkg=systest/export
# Run single test
./t --test=TestExportAndLoadJson
# Keep cluster after test (for debugging)
./t --pkg=systest/export --keep
# Cleanup all test containers
./t -r

With make:
# Run a suite
make test SUITE=core
# Run specific package
make test SUITE=integration PKG=systest/export
# Run single test
make test TEST=TestExportAndLoadJson

| Flag | Description |
|---|---|
| `--suite=X` | Select test suite(s): all, ldbc, load, unit, integration, systest, systest-baseline, systest-heavy, vector, core |
| `--pkg=X` | Run specific package |
| `--test=X` | Run specific test function |
| `--timeout=X` | Per-package timeout (e.g. 60m, 2h). Default: 30m (180m with `--race`) |
| `-j=N` | Concurrency (default: 1) |
| `--keep` | Keep cluster running after tests |
| `-r` | Remove all test containers |
| `--skip-slow` | Skip slow packages |
The t/ runner manages cluster lifecycle automatically.
# Build runner
cd t && go build .
# Run all tests in a package
./t --pkg=systest/export
# Run single test
./t --test=TestExportAndLoadJson
# Keep cluster running after tests (for debugging)
./t --pkg=systest/export --keep

With make:
# Run all tests in a package (make builds the runner automatically)
make test SUITE=integration PKG=systest/export
# Run single test
make test TEST=TestExportAndLoadJson

For fine-grained control, manually start a cluster and run tests against it.
# Start default cluster with a custom prefix
docker compose -f dgraph/docker-compose.yml -p mytest up -d
# Or start package-specific cluster
docker compose -f systest/export/docker-compose.yml -p mytest up -d

# Set the prefix (tells testutil which cluster to use)
export TEST_DOCKER_PREFIX=mytest
# Run all tests in package
go test -v --tags=integration ./systest/export/...
# Run single test
go test -v --tags=integration --run '^TestExportAndLoadJson$' ./systest/export/
# Run multiple specific tests
go test -v --tags=integration --run 'TestExport.*' ./systest/export/

docker compose -f dgraph/docker-compose.yml -p mytest down -v

# Start cluster manually first
docker compose -f dgraph/docker-compose.yml -p myprefix up -d
# Run tests against it (no cluster restart)
cd t && ./t --prefix=myprefix --pkg=systest/export
# Cluster stays running after tests

Using go test regex:
export TEST_DOCKER_PREFIX=mytest
# All tests matching pattern
go test -v --tags=integration --run 'TestExport' ./systest/export/
# Multiple test names
go test -v --tags=integration --run 'TestExportAndLoad|TestExportSchema' ./systest/export/

Using t/ runner:
# Run all tests in multiple packages
./t --pkg=systest/export,systest/backup/filesystem
# Run entire suite
./t --suite=systest

With make:
# Run all systest packages
make test-systest
# Run specific systest package
make test SUITE=systest PKG=systest/export
# Run specific test by name
make test TEST=TestExportAndLoadJson

| Variable | Purpose | Set by |
|---|---|---|
| `TEST_DOCKER_PREFIX` | Docker Compose prefix for cluster | t/ runner or manual |
| `TEST_DATA_DIRECTORY` | Path to test data files | t/ runner |
| `GOPATH` | Required for finding the dgraph binary | User |
Uses dgraphtest package for programmatic cluster control via Docker Go client.
- Problem: t/ runner can't handle upgrade tests, individual node control, or version switching
- Solution: Full programmatic control over cluster lifecycle through Go API
- Use when: Testing upgrades, node failures, or needing precise cluster state control
Important: `dgraphtest` and `dgraphapi` are the future direction for Dgraph testing. New tests
should use these packages instead of `testutil`. The `testutil` package is being retired and is
maintained only for backward compatibility with existing tests.
| Feature | t/ runner | integration2 |
|---|---|---|
| Cluster management | docker-compose | Docker Go client |
| Version switching | No | Yes |
| Individual node control | No | Yes (Start/Stop/Kill per node) |
| Upgrade testing | No | Yes |
| Build tag | `integration` | `integration2` |
# Build your local binary first
make install
# Run tests
go test -v --tags=integration2 ./systest/integration2/
go test -v --tags=integration2 --run '^TestName$' ./pkg/

With make:
# Run all integration2 tests
make test-integration2
# Run integration2 tests for a specific package
make test TAGS=integration2 PKG=systest/vector
# Run a specific integration2 test
make test TAGS=integration2 PKG=systest/vector TEST=TestVectorSearch

Automatic version handling:

- Clones the Dgraph repo to /tmp/dgraph-repo-* on first run
- Checks out the requested version (tag/commit)
- Builds the binary with `make dgraph` (GOOS=linux)
- Caches it in `dgraphtest/binaries/dgraph_<version>`
Version formats:
"local"- uses$GOPATH/bin/dgraph(default)"v23.0.1"- git tag"4fc9cfd"- commit hash
First run is slow (builds binaries), subsequent runs reuse cache.
dgraphapi provides high-level client wrappers for interacting with Dgraph in tests.
Two client types:
- Wraps `dgo.Dgraph`
- Handles queries, mutations, schema operations
- Login/authentication
- Namespace operations
- UID assignment
- Backup operations (full and incremental)
- Restore operations
- Namespace management (add/delete)
- Snapshot management
- Health checks
- State endpoint queries
- GraphQL operations
- Export operations
Both clients support authentication and multi-tenancy (namespace-aware operations).
- `$GOPATH/bin/dgraph` must exist for the "local" version
- The `GOPATH` environment variable must be set
- Docker with sufficient resources (4GB+ memory)
- Defer cleanup after cluster creation
- Defer client cleanup after getting clients
- Cleanup removes containers, networks, volumes
- Both clients need login for ACL-enabled clusters
- Default credentials: `groot` / `password`
- Must specify a namespace (typically the root namespace, 0)
- First run: 3-5 minutes (clone + build)
- Subsequent runs: normal test speed (reuses binaries)
- Binary cache shared across parallel tests safely
- Simple query/mutation tests → use t/ runner (faster)
- Don't need version switching → use t/ runner
- Don't need individual node control → use t/ runner
The dgraphapi package can work with any running Dgraph cluster. If no Docker prefix is detected
(no TEST_DOCKER_PREFIX env var), it falls back to localhost ports.
Default fallback ports:
- Alpha gRPC: `localhost:9080`
- Alpha HTTP: `localhost:8080`
- Zero gRPC: `localhost:5080`
- Zero HTTP: `localhost:6080`
Use case: Write quick Go scripts to interact with your local development cluster instead of using Postman for repetitive tasks.
Benefits:
- Automate repetitive admin operations
- Test admin workflows quickly
- Reuse test helpers for local development
- Type-safe operations instead of manual JSON crafting
This is especially useful for testing backup/restore, namespace operations, or complex mutation sequences during development.
Example test:
func TestLocalCluster(t *testing.T) {
c := dgraphtest.NewComposeCluster()
gc, cleanup, err := c.Client()
require.NoError(t, err)
defer cleanup()
require.NoError(t, gc.SetupSchema(testSchema))
numVectors := 9
rdfs, _ := dgraphapi.GenerateRandomVectors(0, numVectors, 100, pred)
mu := &api.Mutation{SetNquads: []byte(rdfs), CommitNow: true}
_, err = gc.Mutate(mu)
require.NoError(t, err)
}

Tests that verify Dgraph behaviour when upgrading from one version to another.
- Ensure backward compatibility across versions
- Catch breaking changes in data format, schema, or behaviour
- Validate that existing data survives upgrades
- Test real-world upgrade workflows customers use
//go:build upgrade
package main
func TestUpgradeFromV23(t *testing.T) {
// Start with old version
conf := dgraphtest.NewClusterConfig().WithVersion("v23.0.1")
// ... test upgrade to "local" ...
}

| Strategy | How it works | Use case |
|---|---|---|
| `BackupRestore` | Take a backup on the old version, restore on the new | Most common customer upgrade path |
| `InPlace` | Stop cluster, swap binary, restart | Fast upgrade; tests binary compatibility |
| `ExportImport` | Export from old, import to new | Migration across major versions |
Specified when calling c.Upgrade():
c.Upgrade("local", dgraphtest.BackupRestore)
c.Upgrade("local", dgraphtest.InPlace)
c.Upgrade("local", dgraphtest.ExportImport)Controlled by DGRAPH_UPGRADE_MAIN_ONLY environment variable:
DGRAPH_UPGRADE_MAIN_ONLY=true (default):
- Tests only latest stable → HEAD
- Example: v24.0.0 → local
- Runs in PR CI (fast)
DGRAPH_UPGRADE_MAIN_ONLY=false:
- Tests many historical versions → HEAD
- Includes v21, v22, v23, v24 releases
- Includes specific cloud commits
- Runs in scheduled CI (comprehensive but slow)
Run all upgrade tests:
# Build your local binary first
make install
# Run with main combos only (fast)
go test -v --tags=upgrade ./...
# Run with all version combos (slow, 30min+)
DGRAPH_UPGRADE_MAIN_ONLY=false go test -v --tags=upgrade ./...

Run specific package:
go test -v --tags=upgrade ./systest/mutations-and-queries/
go test -v --tags=upgrade ./acl/
go test -v --tags=upgrade ./worker/Run single test:
go test -v --tags=upgrade -run '^TestUpgradeName$' ./pkg/

With make:
# Run all upgrade tests
make test-upgrade
# Run upgrade tests for a specific package
make test TAGS=upgrade PKG=acl
# Run a specific upgrade test
make test TAGS=upgrade PKG=acl TEST=TestACL

| Package | Tests |
|---|---|
| `systest/mutations-and-queries/` | Data preservation across upgrades |
| `systest/multi-tenancy/` | Namespace/ACL upgrade behaviour |
| `systest/plugin/` | Custom plugin compatibility |
| `acl/` | ACL schema and permissions |
| `worker/` | Worker-level upgrade logic |
| `query/` | Query behaviour consistency |
Dgraph follows standard Go testing patterns with specific conventions.
Function names:

- Start with `Test`: `TestParseSchema`, `TestQueryExecution`
- Use camelCase: `TestBackupAndRestore`, not `Test_Backup_And_Restore`
- Be descriptive: `TestVectorIndexRebuilding`, not `TestVector`

File names:

- End with `_test.go`: `parser_test.go`, `backup_test.go`
- Match the source file: `schema.go` → `schema_test.go`
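The naming rules above can be captured mechanically. This illustrative check is not part of the repo; it just shows which names the convention accepts and rejects:

```go
package main

import (
	"fmt"
	"regexp"
)

// testNameRE encodes the conventions above: a Test prefix, an uppercase
// letter next, then camelCase letters and digits with no underscores.
var testNameRE = regexp.MustCompile(`^Test[A-Z][A-Za-z0-9]*$`)

func main() {
	for _, name := range []string{
		"TestBackupAndRestore",      // good: camelCase
		"Test_Backup_And_Restore",   // bad: underscores
		"TestVectorIndexRebuilding", // good: descriptive
	} {
		fmt.Println(name, testNameRE.MatchString(name))
	}
}
```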
Used extensively in Dgraph for testing multiple scenarios:
func TestConversion(t *testing.T) {
tests := []struct {
name string
input Val
output Val
wantErr bool
}{
{name: "string to int", input: Val{...}, output: Val{...}},
{name: "invalid type", input: Val{...}, wantErr: true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got, err := Convert(tc.input, tc.output.Tid)
if tc.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.output, got)
})
}
}

Benefits:
- Test multiple cases in one function
- Easy to add new test cases
- Clear failure messages with `t.Run`
Dgraph uses the testify library:
require.* (fail immediately):
require.NoError(t, err) // Stops test if err != nil
require.Equal(t, expected, actual)
require.True(t, condition)

When to use: Setup, critical checks, integration tests
assert.* (continue on failure):
assert.NoError(t, err) // Logs error but continues
assert.Equal(t, expected, actual)

When to use: Rarely in Dgraph; prefer require for clarity
Convention: Use require by default.
Creates isolated subtests with individual names:
func TestCluster(t *testing.T) {
t.Run("start nodes", func(t *testing.T) {
// subtest 1
})
t.Run("health check", func(t *testing.T) {
// subtest 2
})
}

Benefits:
- Run a specific subtest: `go test --run TestCluster/health`
- Better failure isolation
- Clearer test output
Always defer cleanup operations:
func TestWithCluster(t *testing.T) {
c, err := dgraphtest.NewLocalCluster(conf)
require.NoError(t, err)
defer func() { c.Cleanup(t.Failed()) }() // Cleanup even if test fails
gc, cleanup, err := c.Client()
require.NoError(t, err)
defer cleanup() // Close client connections
}

Why: Ensures resources are freed even on test failure.
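The reason for deferring cleanup can be demonstrated outside the testing package too: a deferred function runs even when the enclosing function fails part-way through. This sketch uses a hypothetical flag rather than a real cluster:

```go
package main

import "fmt"

// cleanupRan records whether the deferred cleanup executed.
var cleanupRan bool

// runWithResource acquires a hypothetical resource and defers its release
// immediately, mirroring the defer-after-NewLocalCluster pattern above.
func runWithResource() {
	defer func() {
		cleanupRan = true // the "Cleanup" still happens
		// Recover so the panic below doesn't kill this demo program;
		// in real tests the testing framework plays this role.
		_ = recover()
	}()
	panic("test failed mid-way") // deferred cleanup still runs
}

func main() {
	runWithResource()
	fmt.Println("cleanup ran:", cleanupRan)
}
```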
Mark helper functions so failures point to actual test line:
func setupTestData(t *testing.T, gc *GrpcClient) {
t.Helper() // Failures show caller line, not this line
err := gc.SetupSchema(`name: string .`)
require.NoError(t, err)
}
func TestSomething(t *testing.T) {
setupTestData(t, gc) // If this fails, error points here
// ...
}

// BAD
time.Sleep(5 * time.Second) // Flaky!
// GOOD
require.NoError(t, c.HealthCheck(false)) // Wait for actual condition

// BAD
var sharedClient *Client // Tests interfere with each other
// GOOD
func TestX(t *testing.T) {
client := newClient() // Each test gets its own
}

// BAD - Test2 depends on Test1 running first
func TestInsertData(t *testing.T) { /* insert */ }
func TestQueryData(t *testing.T) { /* assumes data exists */ }
// GOOD - Each test is independent
func TestQuery(t *testing.T) {
setupData(t) // Set up what you need
// ... test query
}

// BAD
client.Mutate(mutation) // Ignoring error
// GOOD
_, err := client.Mutate(mutation)
require.NoError(t, err)

Use with caution:
func TestIndependent(t *testing.T) {
t.Parallel() // Can run in parallel with other tests
// Only if test doesn't share resources
}

Don't use for:
- Integration tests sharing clusters
- Tests modifying global state
- Tests using same ports/resources
Dgraph uses testify/suite for tests needing shared setup/teardown across multiple test methods.
When to use:
- Multiple related test methods sharing the same cluster
- Need setup/teardown hooks (`SetupTest`, `TearDownTest`)
- Upgrade tests that run the same tests across version combinations
- Sharing test logic between integration and upgrade tests
Benefits:
- Reduces boilerplate for shared setup
- Each test method is independent (new setup/teardown)
- Same test methods run for both integration and upgrade tests
- Excellent for upgrade tests (run same tests across version combos)
Key pattern: Shared test logic across build tags
Dgraph uses suites to run identical test methods for both integration and upgrade tests:
Integration suite (//go:build integration):
- Creates cluster once
- Runs test methods
- Tests current version behaviour
Upgrade suite (//go:build upgrade):
- Creates cluster with old version
- Runs test methods (validates data works on old version)
- Calls the `Upgrade()` method
- Runs the same test methods again (validates data still works after upgrade)
Available hooks:
- `SetupSuite()`: once before all tests
- `SetupTest()`: before each test method
- `SetupSubTest()`: before each subtest
- `TearDownTest()`: after each test method
- `TearDownSuite()`: once after all tests
How to run:
# Run entire test suite (all test methods)
go test -v --tags=integration ./systest/plugin/
# Run specific test method from suite
go test -v --tags=integration --run 'TestPluginTestSuite/TestPasswordReturn' ./systest/plugin/
# Run specific subtest within a test method
go test -v --tags=integration --run 'TestPluginTestSuite/TestPasswordReturn/subtest' ./systest/plugin/
# Run same tests in upgrade mode
go test -v --tags=upgrade --run 'TestPluginTestSuite/TestPasswordReturn' ./systest/plugin/

With make:
# Run the plugin systest package via t/ runner
make test SUITE=systest PKG=systest/plugin
# Run a specific test
make test SUITE=systest PKG=systest/plugin TEST=TestPluginTestSuite/TestPasswordReturn
# Run in upgrade mode
make test TAGS=upgrade PKG=systest/plugin TEST=TestPluginTestSuite/TestPasswordReturn

When NOT to use:

- Simple one-off tests → use a regular `func TestX(t *testing.T)`
- No shared setup needed → suites add unnecessary complexity
- Unit tests → keep simple
Examples in Dgraph codebase:
- `acl/integration_test.go` + `acl/acl_integration_test.go`: ACL suite
- `systest/plugin/`: Integration + Upgrade suites sharing test methods
- `systest/mutations-and-queries/`: Integration + Upgrade suites
Fuzzing tests parser and validation logic with random inputs to find edge cases.
Go's native fuzzing generates random inputs to find crashes, panics, or unexpected behaviour.
- `dql/parser_fuzz_test.go`: DQL query parser fuzzing
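A fuzz target follows Go's native fuzzing shape: seed inputs via `f.Add`, then a property that must hold for any generated input. This sketch uses a hypothetical `parseIntList` parser to illustrate the shape, not the real DQL parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"testing"
)

// parseIntList is a stand-in parser used only to illustrate the target shape.
func parseIntList(s string) ([]int, error) {
	if s == "" {
		return nil, nil
	}
	var out []int
	for _, part := range strings.Split(s, ",") {
		n, err := strconv.Atoi(strings.TrimSpace(part))
		if err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, nil
}

// FuzzParseIntList is the fuzz target: seed the corpus with f.Add, then
// assert a property (here: never panic) for every generated input. In a real
// repo this lives in a *_test.go file and runs with `go test -fuzz=Fuzz`.
func FuzzParseIntList(f *testing.F) {
	f.Add("1,2,3") // seed corpus
	f.Add("")
	f.Fuzz(func(t *testing.T, s string) {
		_, _ = parseIntList(s) // must not panic; returning an error is fine
	})
}

// main only demonstrates the parser; a real fuzz test file has no main.
func main() {
	vals, err := parseIntList("1, 2, 3")
	fmt.Println(vals, err)
}
```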
# Run fuzz test for 5 minutes
go test -v ./dql -fuzz=Fuzz -fuzztime=5m
# Run with custom timeout
go test -v ./dql -fuzz=Fuzz -fuzztime=300s -fuzzminimizetime=120s

With make:
# Run all fuzz tests (default 300s per package)
make test-fuzz
# Fuzz a specific package with custom duration
make test FUZZ=1 PKG=dql FUZZTIME=5m

- `ci-dgraph-fuzz.yml` (runs on PRs)
- Runs: `go test -v ./dql -fuzz="Fuzz" -fuzztime="300s"`
- Catches parser crashes early
- Parsers (DQL, GraphQL, RDF)
- Input validation
- Decoders/deserializers
- Any code accepting untrusted input
The following improvements could still enhance the developer experience:
- Extend t/ runner: Have the `t/` runner also handle unit and integration2 tests, providing a consistent interface for all test types.