When you need things done NOW — not in 30 minutes.
OpenVox's agent-server model is great for convergence (gradually bringing systems into compliance), but sometimes you need orchestration — running commands across your fleet right now, deploying code immediately, or executing complex multi-step workflows.
This guide covers:
- OpenBolt — Agentless orchestration
- r10k — Code deployment from Git
- Tasks and Plans — Reusable automation
Bolt — now OpenBolt (v5.3.0) in the OpenVox ecosystem — is an agentless orchestration tool. It connects to remote nodes via SSH (or WinRM for Windows) and runs commands, scripts, tasks, and plans — without requiring a Puppet agent on the target. Think of it as the "do it now" complement to Puppet's "keep it this way forever" model.
```shell
# RHEL/CentOS/Rocky (from the Vox Pupuli repo)
sudo yum install -y puppet-bolt

# Debian/Ubuntu
sudo apt-get install -y puppet-bolt

# macOS
brew install puppet-bolt

# Verify installation
bolt --version
# 5.3.0
```

Create a Bolt project:

```shell
mkdir myproject && cd myproject
bolt project init myproject
```

This creates:

```
myproject/
├── bolt-project.yaml   ← Project configuration
├── inventory.yaml      ← Target definitions
├── Puppetfile          ← Module dependencies
├── plans/              ← Bolt plans
└── tasks/              ← Bolt tasks
```
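The generated bolt-project.yaml starts out nearly empty. A sketch of what it commonly grows into (the `modules` list here is illustrative, not something `bolt project init` writes for you):

```yaml
---
# bolt-project.yaml (sketch — values are illustrative)
name: myproject
# Modules for Bolt to manage; install them with `bolt module install`
modules:
  - name: puppetlabs/stdlib
    version_requirement: '9.6.0'
```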
Define your targets in inventory.yaml:
```yaml
---
groups:
  - name: webservers
    targets:
      - uri: web1.example.com
      - uri: web2.example.com
    config:
      ssh:
        user: deploy
        private-key: ~/.ssh/id_ed25519
        host-key-check: false
        run-as: root
  - name: databases
    targets:
      - uri: db1.example.com
      - uri: db2.example.com
    config:
      ssh:
        user: deploy
        run-as: root
  - name: all_servers
    groups:
      - webservers
      - databases
  - name: local
    targets:
      - uri: localhost
    config:
      transport: local
```

```shell
# Run a command on all webservers
bolt command run 'systemctl status httpd' --targets webservers

# Run on specific hosts
bolt command run 'df -h' --targets web1.example.com,db1.example.com

# Run on all servers
bolt command run 'uptime' --targets all_servers

# Using PuppetDB for target discovery
bolt command run 'hostname' \
  --query 'nodes[certname] { facts.os.name = "Rocky" }'

# Limit concurrency (don't overwhelm your network)
bolt command run 'yum update -y openssl' --targets all_servers --concurrency 5
```

```shell
# Run a local script on remote targets
bolt script run ./scripts/health_check.sh --targets webservers

# Pass arguments to the script (they're positional, after the script path)
bolt script run ./scripts/deploy.sh 2.0 production --targets webservers
```

```shell
# Upload a file
bolt file upload ./configs/nginx.conf /etc/nginx/nginx.conf --targets webservers

# Download files (great for log collection)
bolt file download /var/log/messages ./collected_logs/ --targets all_servers
```

```shell
# Apply a manifest without requiring an agent
bolt apply manifest.pp --targets web1.example.com

# Apply inline Puppet code
bolt apply -e 'package { "vim": ensure => installed }' --targets all_servers

# Apply with module support
bolt apply --modulepath ./modules manifest.pp --targets webservers
```

Tasks are single-action scripts (Bash, Python, PowerShell, Ruby) with structured metadata. They're like scripts, but with parameter validation, documentation, and discoverability.
```shell
#!/bin/bash
# tasks/restart_service.sh
# Restart a service and verify it's running
SERVICE=$PT_service_name  # Parameters are passed as environment variables with a PT_ prefix
systemctl restart "$SERVICE"
sleep 2
if systemctl is-active --quiet "$SERVICE"; then
  echo "{\"status\": \"success\", \"service\": \"$SERVICE\", \"state\": \"running\"}"
else
  echo "{\"status\": \"failed\", \"service\": \"$SERVICE\", \"state\": \"stopped\"}" >&2
  exit 1
fi
```

The metadata file, tasks/restart_service.json:

```json
{
  "description": "Restart a system service and verify it started successfully",
  "parameters": {
    "service_name": {
      "description": "The name of the service to restart",
      "type": "String"
    }
  },
  "input_method": "environment"
}
```

Run it:

```shell
bolt task run myproject::restart_service service_name=httpd --targets webservers
```

Plans are multi-step workflows written in the Puppet language or YAML. They can run commands, tasks, and other plans, and include logic (conditionals, error handling, etc.).
```puppet
# plans/rolling_deploy.pp
plan myproject::rolling_deploy (
  TargetSpec $targets,
  String     $version,
  Integer    $batch_size = 2,
) {
  # Get the targets
  $all_targets = get_targets($targets)

  # Deploy in batches of $batch_size
  $all_targets.slice($batch_size).each |$batch| {
    out::message("Deploying version ${version} to batch...")

    # 1. Drain the batch from the load balancer
    run_task('myproject::lb_drain', $batch)

    # 2. Deploy the new version
    run_task('myproject::deploy', $batch,
      version => $version
    )

    # 3. Run health checks
    $results = run_task('myproject::health_check', $batch)

    # 4. Fail fast if health checks fail
    $results.each |$result| {
      unless $result['healthy'] {
        fail_plan("Health check failed on ${result.target.name}")
      }
    }

    # 5. Re-enable in the load balancer
    run_task('myproject::lb_enable', $batch)
    out::message("Batch complete!")
  }
  return "Deployed version ${version} to ${all_targets.length} targets"
}
```

```yaml
# plans/update_packages.yaml
---
description: "Update specific packages across fleet"
parameters:
  targets:
    type: TargetSpec
    description: "Targets to update"
  packages:
    type: Array[String]
    description: "Packages to update"
steps:
  - name: check_current
    command: "rpm -qa ${packages.join(' ')}"
    targets: $targets
  - name: update_packages
    command: "yum update -y ${packages.join(' ')}"
    targets: $targets
  - name: verify_update
    command: "rpm -qa ${packages.join(' ')}"
    targets: $targets
return: $verify_update
```

Run them:

```shell
bolt plan run myproject::rolling_deploy \
  targets=webservers version=2.1.0 batch_size=2

bolt plan run myproject::update_packages \
  targets=all_servers packages='["openssl","curl"]'
```

r10k (v5.0.2) is the standard tool for deploying Puppet code from Git. It maps Git branches to Puppet environments and installs the modules declared in a Puppetfile. The name references Star Wars' assassin droids — because this tool is killer at deployment.
```
Git Repository                          Puppet Server
──────────────                          ─────────────
main branch     ──── r10k deploy ────►  production/ environment
staging branch  ──── r10k deploy ────►  staging/ environment
feature-x       ──── r10k deploy ────►  feature_x/ environment
```
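Before any of the deploy commands below will work, r10k needs to know where your control repo lives. A minimal /etc/puppetlabs/r10k/r10k.yaml sketch (the `remote` URL and source name are placeholders for your own control repo):

```yaml
---
# /etc/puppetlabs/r10k/r10k.yaml (sketch — remote is a placeholder)
cachedir: /opt/puppetlabs/r10k/cache
sources:
  myorg:
    remote: git@github.com:myorg/control-repo.git
    basedir: /etc/puppetlabs/code/environments
```

Each branch of the repo at `remote` becomes a directory under `basedir`, which is exactly the mapping the diagram above shows.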
```shell
# 1. Make changes in your control repo
git checkout -b feature/new-monitoring
# ... edit manifests, Hiera data, Puppetfile ...
git add . && git commit -m "feat: add new monitoring profile"
git push origin feature/new-monitoring

# 2. Deploy on the Puppet server
sudo r10k deploy environment --verbose

# 3. Test on a single node (r10k normalizes branch names:
#    slashes and hyphens become underscores)
sudo puppet agent -t --environment feature_new_monitoring

# 4. Merge to production
git checkout main && git merge feature/new-monitoring
git push origin main

# 5. Deploy production
sudo r10k deploy environment production --modules --verbose
```

```shell
#!/bin/bash
# .git/hooks/post-receive (on your Git server)
r10k deploy environment --verbose
```

```shell
#!/bin/bash
# /usr/local/bin/r10k-deploy.sh
# Called by your CI/CD pipeline or webhook
cd /etc/puppetlabs/code

# Deploy all environments
r10k deploy environment --verbose --modules 2>&1 | tee /var/log/r10k-deploy.log

# Flush the environment cache
curl -i --cert "$(puppet config print hostcert)" \
  --key "$(puppet config print hostprivkey)" \
  --cacert "$(puppet config print localcacert)" \
  -X DELETE \
  "https://$(puppet config print certname):8140/puppet-admin-api/v1/environment-cache"

echo "Deployment complete at $(date)"
```

```ruby
# Puppetfile — manages module dependencies

# Forge modules (pinned versions — always pin in production!)
mod 'puppetlabs-stdlib',     '9.6.0'
mod 'puppetlabs-apache',     '12.1.0'
mod 'puppetlabs-mysql',      '15.0.0'
mod 'puppetlabs-postgresql', '10.0.3'
mod 'puppetlabs-firewall',   '8.0.3'
mod 'puppetlabs-concat',     '9.0.2'
mod 'puppetlabs-ntp',        '10.1.0'
mod 'puppet-nginx',          '6.0.1'

# Git modules
mod 'custom_profiles',
  git: 'git@github.com:myorg/custom_profiles.git',
  tag: 'v1.5.0'

# Branch tracking (for development — not recommended in production!)
mod 'experimental_module',
  git: 'git@github.com:myorg/experimental_module.git',
  branch: 'development'
```

Warning: Always pin module versions in production Puppetfiles! Using `:latest` or branch tracking in production is like juggling chainsaws — impressive until it goes wrong.
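r10k has no built-in lint for unpinned modules, but a grep heuristic can flag risky entries before they ship. This sketch writes a sample Puppetfile to /tmp so it runs self-contained; in practice, point the greps at your real Puppetfile:

```shell
# Sample Puppetfile so this snippet is self-contained (use your real one)
cat > /tmp/Puppetfile.example <<'EOF'
mod 'puppetlabs-stdlib', '9.6.0'
mod 'puppet-nginx'
mod 'experimental_module',
  git: 'git@github.com:myorg/experimental_module.git',
  branch: 'development'
EOF

# Single-line mods with no version pin: "mod 'name'" and nothing after it
grep -nE "^mod '[^']+'[[:space:]]*$" /tmp/Puppetfile.example

# Git mods tracking a branch instead of a tag
grep -n 'branch:' /tmp/Puppetfile.example
```

The first grep only catches the one-line `mod 'name'` form with no version; multi-line Git entries are covered by the second. It's a heuristic, not a parser, so treat hits as prompts to review, not hard failures.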
- Start small: Test orchestration commands on one node before running against the fleet
- Use `--noop` and `--concurrency`: Always dry-run first, and limit concurrency to avoid overwhelming your infrastructure
- Pin module versions: In your Puppetfile, always pin to specific versions or tags
- Use Bolt for ad-hoc work, Puppet for convergence: They complement each other
- Version your plans and tasks: Treat them like code — they ARE code
- Log everything: r10k deployments, Bolt runs, and plan outputs should all be logged
Next up: Troubleshooting & FAQ →
This document was created with the assistance of AI (Grok, xAI). All technical content has been reviewed and verified by human contributors.