Get your first Stellar data pipeline running in 2 minutes.
- Go 1.21+ - required to build flowctl (install Go first if it is missing)
- Git - for cloning the repository
- Docker - for running containerized components
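Before building, you can confirm that the prerequisites are on your PATH (a minimal sketch using only standard shell built-ins):

```shell
# Report which prerequisites are installed; `command -v` checks the PATH
for tool in go git docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: NOT found - install it before continuing"
  fi
done
```

Note that a `docker` binary on the PATH does not guarantee the Docker daemon is running; `docker ps` (shown in the troubleshooting section below) checks that.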
```bash
# Clone the repository
git clone https://github.com/withobsrvr/flowctl.git
cd flowctl

# Build flowctl
make build

# Verify installation
./bin/flowctl version
```

Initialize a pipeline:

```bash
./bin/flowctl init
```

Follow the prompts:
- Network: Select `testnet` (recommended for learning) or `mainnet`
- Destination: Select where to store data:
  - `duckdb` - embedded analytics database (easiest)
  - `postgres` - PostgreSQL database
  - `csv` - CSV files
This creates a `stellar-pipeline.yaml` file.
For automation or CI/CD:
```bash
# Create a testnet pipeline with DuckDB sink
./bin/flowctl init --non-interactive --network testnet --destination duckdb

# Create a mainnet pipeline with PostgreSQL sink
./bin/flowctl init --non-interactive --network mainnet --destination postgres -o mainnet-pipeline.yaml
```

Run the pipeline:

```bash
./bin/flowctl run stellar-pipeline.yaml
```

What happens:
- flowctl downloads required components from Docker Hub (first run only)
- Starts the embedded control plane
- Launches components: source → contract-events-processor → sink
- Data flows from Stellar network to your chosen destination
Press Ctrl+C to stop.
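If you chose the CSV destination, a quick row count confirms that events are actually being written (a minimal sketch; `data/contract_events.csv` is the output path assumed here, matching the CSV example below; one line is subtracted for the header):

```shell
# Count data rows in the CSV sink output (header line excluded)
csv=data/contract_events.csv
if [ -f "$csv" ]; then
  echo "$(( $(wc -l < "$csv") - 1 )) events written to $csv"
else
  echo "No CSV output yet at $csv"
fi
```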
If you chose DuckDB:

```bash
# Query the DuckDB file for contract events
duckdb stellar-pipeline.duckdb "SELECT * FROM contract_events LIMIT 5"
```

If you chose PostgreSQL:

```bash
# Connect and query contract events
psql -h localhost -U postgres -d stellar_events -c "SELECT * FROM contract_events LIMIT 5"
```

If you chose CSV:

```bash
# Check the CSV files
ls -la data/
head data/contract_events.csv
```

This directory contains sample pipeline configurations generated by `flowctl init`:
| File | Network | Sink | Description |
|---|---|---|---|
| testnet-duckdb-pipeline.yaml | testnet | DuckDB | Easiest setup for learning |
| testnet-postgres-pipeline.yaml | testnet | PostgreSQL | Production-like setup |
A typical generated pipeline looks like:
```yaml
apiVersion: flowctl/v1
kind: Pipeline
metadata:
  name: stellar-pipeline
  description: Process stellar contract events on testnet
spec:
  driver: process
  sources:
    - id: stellar-source
      type: stellar-live-source@v1.0.0
      config:
        network_passphrase: "Test SDF Network ; September 2015"
        backend_type: RPC
        rpc_endpoint: https://soroban-testnet.stellar.org
        start_ledger: 54000000
  processors:
    - id: contract-events
      type: contract-events-processor@v1.0.0
      config:
        network_passphrase: "Test SDF Network ; September 2015"
      inputs: ["stellar-source"]
  sinks:
    - id: duckdb-sink
      type: duckdb-consumer@v1.0.0
      config:
        database_path: ./stellar-pipeline.duckdb
      inputs: ["contract-events"]
```

Key points:

- `driver: process` runs components as local processes
- Components are automatically downloaded from Docker Hub
- Pipeline has three stages: source → processor → sink
- The `contract-events` processor extracts Soroban events from ledgers
- `inputs` connects each component to its upstream data source
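Switching destinations only means swapping the sink entry. As an illustration, a CSV variant might look like the fragment below; the `csv-consumer` type name and the `output_dir` key are assumptions, so treat a config generated by `flowctl init --destination csv` as authoritative:

```yaml
sinks:
  - id: csv-sink
    # NOTE: the type and config keys below are illustrative assumptions;
    # a config generated by `flowctl init --destination csv` is authoritative
    type: csv-consumer@v1.0.0
    config:
      output_dir: ./data
    inputs: ["contract-events"]
```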
Check Docker is running:

```bash
docker ps
```

Manually pull the image:

```bash
docker pull docker.io/withobsrvr/stellar-live-source:v1.0.0
```

Ensure port 8080 is available:

```bash
lsof -i :8080
```

- Check component logs:

  ```bash
  ./bin/flowctl run stellar-pipeline.yaml --log-level=debug
  ```

- Verify network connectivity to Stellar Horizon

Check the working directory and ensure the path is writable:

```bash
ls -la .
```

- Add processors: Transform data with processors between source and sink
- Monitor pipelines: Use `flowctl dashboard` for real-time monitoring
- Deploy to production: Use `flowctl translate` for Docker Compose or Kubernetes