Environment Setup
This guide covers everything you need to install before the course. It is organized in two phases:
- Day 1 Setup — Docker, Kubernetes tools, an AI coding tool, and the reference app. Required before Module 1.
- Day 2–3 Setup (Hermes) — The Hermes agent framework. Required before Module 7. Install this after Day 1.
Estimated time:
- Day 1 Setup: 30–45 minutes
- Hermes Setup: 10–15 minutes
Workshop participants: Your instructor will tell you which steps to complete before Day 1. Complete Day 1 Setup before arriving. Hermes Setup will be done as a group at the start of Day 2.
Udemy learners: Work through both phases before starting Module 7. Complete Day 1 Setup before Module 1, then return here for Hermes Setup before Module 7.
Day 1 Setup
Step 1: Docker
All Kubernetes labs run containers locally. Docker is the required container runtime.
Install Docker Desktop from docker.com/get-started. Docker Desktop includes the Docker daemon, CLI, and Docker Compose.
macOS ARM (Apple Silicon): Download the Apple Silicon installer, not the Intel one.
Allocate enough resources. Open Docker Desktop → Settings → Resources:
- CPUs: 4+
- Memory: 6 GB minimum (Prometheus + app services + PostgreSQL need ~4 GB combined)
- Disk: 30 GB minimum
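You can sanity-check the allocation from the terminal instead of the GUI. A minimal sketch, assuming the Docker daemon is running (NCPU and MemTotal are standard `docker info` template fields):

```shell
# Print Docker's allocated CPUs and memory, and flag under-provisioning.
cpus=$(docker info --format '{{.NCPU}}')
mem_bytes=$(docker info --format '{{.MemTotal}}')
mem_gib=$(( mem_bytes / 1024 / 1024 / 1024 ))
echo "CPUs: ${cpus}, Memory: ${mem_gib} GiB"
if [ "$cpus" -lt 4 ] || [ "$mem_gib" -lt 6 ]; then
  echo "Increase CPUs/Memory under Docker Desktop -> Settings -> Resources"
fi
```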
Verify:
docker --version
# Expected: Docker version 26.x.x
Minimum version: Docker 24 or later.
Step 2: Kubernetes Tools
KIND (Kubernetes IN Docker)
KIND runs a complete Kubernetes cluster as Docker containers on your laptop. No cloud account required.
macOS:
brew install kind
macOS (direct binary, if not using Homebrew):
# Apple Silicon
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-darwin-arm64
chmod +x kind && sudo mv kind /usr/local/bin/
# Intel
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-darwin-amd64
chmod +x kind && sudo mv kind /usr/local/bin/
Linux:
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-linux-amd64
chmod +x kind && sudo mv kind /usr/local/bin/
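If you are unsure which macOS binary applies, `uname -m` distinguishes Apple Silicon from Intel. A small sketch that picks the matching download (same release URL as above):

```shell
# Select the right KIND binary: uname -m prints arm64 on Apple Silicon,
# x86_64 on Intel Macs.
arch=$(uname -m)
case "$arch" in
  arm64)  suffix=darwin-arm64 ;;
  x86_64) suffix=darwin-amd64 ;;
esac
curl -Lo kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-${suffix}"
chmod +x kind && sudo mv kind /usr/local/bin/
```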
Verify:
kind version
# Expected: kind v0.27.0 (go version and os/arch suffix vary by platform)
Minimum version: KIND v0.27 or later.
kubectl
macOS:
brew install kubectl
Docker Desktop also bundles kubectl, so you may already have it installed.
Linux:
curl -LO "https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Verify:
kubectl version --client
# Expected: Client Version: v1.32.0
Helm
Helm is the Kubernetes package manager used to deploy the reference application and observability stack.
macOS:
brew install helm
Linux:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify:
helm version
# Expected: version.BuildInfo{Version:"v3.18.4", ...}
Step 3: AWS CLI (Optional)
The course includes labs that work against real AWS services. If you have an AWS account, install the AWS CLI. If not, skip this step — all labs have a mock data fallback that works without credentials.
macOS:
brew install awscli
Linux:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install
Configure (if you have an AWS account):
aws configure
# Enter your Access Key ID, Secret Access Key, region (e.g., us-east-1), output format (json)
No AWS account? All AWS labs have a local mock fallback. Labs that require AWS note the mock path at the top.
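The fallback pattern can be sketched roughly as follows: `aws sts get-caller-identity` succeeds only when credentials resolve, so it makes a cheap "do I have real AWS?" probe. The mock file path here is illustrative, borrowed from the verification checklist later in this guide:

```shell
# Use real AWS when credentials work; otherwise read the local mock JSON.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws cloudwatch describe-alarms
else
  cat mock-data/cloudwatch/describe-alarms-clean.json
fi
```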
Step 4: AI Coding Tool
The course supports two terminal-based AI coding tools. Choose one — both work for all coding labs (Modules 1–6).
Path A: Claude Code (recommended if you have Claude Pro/Team)
Claude Code is Anthropic's terminal AI coding agent. It uses your existing Claude subscription with no additional API billing.
Requirements: Active Claude Pro ($20/month) or Claude Team subscription at claude.ai.
Install Node.js first (v18 or later required):
node --version # must be v18 or later
# Install if needed:
brew install node # macOS
# or: https://github.com/nvm-sh/nvm
Install Claude Code:
npm install -g @anthropic-ai/claude-code
Verify:
claude --version
# Expected: 1.x.x (claude-code)
Authenticate:
claude
This opens a browser for OAuth authentication. Sign in with your Claude account. After authentication, the terminal shows your Claude workspace.
Known issue — January 2026 OAuth block: Anthropic temporarily blocked OAuth authentication in some regions in January 2026. If you hit an auth error: (1) update to the latest version with npm update -g @anthropic-ai/claude-code, (2) check status.anthropic.com, or (3) use an API key: export ANTHROPIC_API_KEY=sk-ant-... (key from console.anthropic.com). This issue was resolved for most users by February 2026.
Path B: OpenCode (free — no subscription required)
OpenCode (opencode.ai) is a terminal AI coding agent from the SST team. It supports 75+ LLM providers including free-tier options.
Important: This is sst/opencode from opencode.ai — not the opencode-ai/opencode project, which was archived in September 2025.
Install:
# macOS
brew install sst/tap/opencode
# Linux
curl -fsSL https://opencode.ai/install.sh | sh
Verify:
opencode --version
# Expected: opencode 0.x.x
Connect a free provider:
opencode
Inside OpenCode, run /connect to open the provider selector.
Recommended free providers:
| Provider | Free Limit | Notes |
|---|---|---|
| Gemini 2.5 Flash (Google AI Studio) | 500 req/day | Recommended default — ample for all labs |
| Groq (llama-3.1-8b-instant) | 14,400 req/day | Fastest inference |
| OpenRouter (:free models) | Free credits on signup | Flexible fallback |
Quick setup for Gemini 2.5 Flash:
- Go to aistudio.google.com, sign in with your Google account
- Click "Get API Key" → "Create API Key", copy the key
- In OpenCode: /connect → select Google → paste your API key → select gemini-2.5-flash
A typical lab uses 5–15 requests. 500 requests/day is more than sufficient.
Step 5: Clone the Course Repository
git clone https://github.com/YOUR_ORG/agentic-devops-course.git
cd agentic-devops-course
Replace the URL with the actual repository URL provided by your instructor. Workshop participants receive this via email before Day 1.
Verify you are in the right place:
ls
# Should show: modules/ reference-app/ infrastructure/ setup/ skills/ agents/ governance/
Step 6: Deploy the Reference Application
The reference application is a microservices system that serves as the lab environment for Modules 1–6. It includes:
- Three Rust backend services (api-gateway, catalog, worker)
- A Svelte health dashboard
- PostgreSQL database
- Prometheus + Grafana for observability
Deploy:
cd reference-app
make deploy
What this does:
- Creates a KIND cluster named lab
- Installs PostgreSQL via Helm
- Installs Prometheus + Grafana (kube-prometheus-stack)
- Builds and deploys the three Rust services and dashboard
Expected time: 5–10 minutes on first run (Docker image pulls).
Expected output:
Creating KIND cluster 'lab'...
Installing PostgreSQL...
Installing Prometheus stack...
Deploying reference app...
Deployment complete!
Dashboard: http://localhost:30080
Grafana: http://localhost:30090 (admin/admin)
Verify: Open http://localhost:30080. You should see the health dashboard showing api-gateway, catalog, and worker, all green.
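Beyond the browser check, you can confirm the deployment from the terminal. A sketch, assuming the namespace and context names used by the verification script (app and kind-lab):

```shell
# Show cluster endpoints, list the app pods, then block until all
# pods in the 'app' namespace report Ready (or 5 minutes elapse).
kubectl cluster-info --context kind-lab
kubectl get pods -n app --context kind-lab
kubectl wait --for=condition=Ready pods --all \
  -n app --context kind-lab --timeout=300s
```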
Step 7 (Optional): Datadog
The default observability stack is Prometheus + Grafana, installed in Step 6. Datadog's free tier works alongside the lab environment if you want exposure to commercial SaaS observability tooling.
This step is entirely optional. All observability labs have a Prometheus path.
- Sign up at app.datadoghq.com — free tier includes up to 5 hosts
- Copy your API key from account settings
- Install the Datadog Agent on the KIND cluster:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-agent datadog/datadog \
--set datadog.apiKey=<YOUR_DATADOG_API_KEY> \
--set datadog.site=datadoghq.com \
--namespace monitoring \
--create-namespace
- Verify the agent appears in your Datadog Infrastructure page within 2–3 minutes.
Step 8: Verify Your Day 1 Environment
Run the course verification script:
bash setup/verify.sh
Expected output:
=== Agentic DevOps Course — Environment Verification ===
--- Required CLI Tools ---
PASS Docker daemon running
PASS Docker version >= 24
PASS kubectl installed
PASS kind installed
PASS kind version >= 0.27
PASS Helm installed
PASS Helm version >= 3
PASS Node.js installed
PASS Node.js version >= 18
--- AI Coding Tools (at least one required) ---
PASS Claude Code installed (or)
PASS OpenCode installed
--- KIND Cluster ---
PASS KIND cluster 'lab' exists
PASS kubectl context 'kind-lab' configured
PASS kubectl can reach KIND cluster (nodes ready)
--- Reference App ---
PASS Reference app Cargo workspace exists
PASS Helm chart exists
PASS Makefile exists
--- Mock Data Files ---
PASS mock-data/cloudwatch/describe-alarms-clean.json
PASS mock-data/ec2/describe-instances.json
--- Deployment Status ---
PASS App pods running in namespace 'app'
PASS Dashboard accessible at localhost:30080
=== Results: 26 passed, 0 failed ===
Ready for labs!
If any checks fail, see the Troubleshooting section at the bottom of this page.
Day 2–3 Setup: Hermes
Hermes is the agent framework used in Modules 7–13. Install it before starting Module 7.
Prerequisites:
- macOS 12+ or Ubuntu 22.04+
- Python 3.11 or later (check with python3 --version)
- Docker Desktop running (from Day 1 Setup)
Step 1: Install Hermes
Method A: uv (Recommended)
uv is a fast Python package manager. Install it if you do not have it:
curl -LsSf https://astral.sh/uv/install.sh | sh
Install Hermes:
uv tool install hermes-agent
Method B: pip
pip install hermes-agent
# Use pip3 if plain pip points to a different Python installation
Verify:
hermes --version
# Expected: hermes v0.7.0
If you see command not found, add the install location to your PATH:
export PATH="$HOME/.local/bin:$PATH"
Add this to your ~/.zshrc or ~/.bashrc to persist across sessions.
Step 2: Connect Hermes to an LLM Provider
Pick one of the four options below. All four work for all lab exercises.
Option A: Claude Code OAuth
For: Participants with an active Claude Pro or Claude Team subscription.
Hermes borrows the OAuth token that Claude Code stores after login — no separate API key needed.
# Ensure Claude Code is authenticated
claude auth status
# If not logged in: claude auth login
# Connect Hermes
hermes login --provider claude-code
# Verify
hermes run "say: OK"
# Expected: OK
Option B: Google AI Studio (Free)
For: Participants without a Claude subscription.
- Go to aistudio.google.com, sign in with your Google account
- Click "Get API key" → "Create API key", copy the key (starts with AIza…)
hermes login --provider google-ai-studio --api-key YOUR_API_KEY
hermes config set model gemini-2.5-flash
hermes run "say: OK"
# Expected: OK
Option C: Hugging Face Inference (Free)
For: Participants who want an open-weight model.
- Sign up at huggingface.co
- Go to Settings → Access Tokens → New token, select Read scope, name it hermes-lab
hermes login --provider huggingface --api-key YOUR_HF_TOKEN
hermes config set model meta-llama/Llama-3.1-8B-Instruct
hermes run "say: OK"
# Expected: OK
Response latency on the free HF tier is 2–5 seconds per request. This is normal.
Option D: OpenRouter
For: Participants who want flexibility across model families.
- Sign up at openrouter.ai — free credits added on first signup
- Go to Keys → Create Key, copy the key
hermes login --provider openrouter --api-key YOUR_OPENROUTER_KEY
hermes config set model anthropic/claude-haiku-4-5
hermes run "say: OK"
# Expected: OK
Append :free to a model name to use the free-tier version: meta-llama/llama-3.1-8b-instruct:free
Step 3: Set the Default Model
All lab exercises are designed for a fast model. Set Haiku as your default (adjust for your provider):
hermes config set model claude-haiku-4-5
Or edit ~/.hermes/config.yaml directly:
model: claude-haiku-4-5
Skills give Hermes structured instructions and mock data is compact JSON, so Haiku produces correct results at near-zero token cost. Upgrade to Sonnet only for the complex reasoning scenarios in Module 10.
Note on HERMES_LAB_MODE: This variable (mock or live) controls whether Hermes uses pre-baked mock data or real infrastructure. It is set per-lab in each module's instructions — do not set it globally.
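As an illustration of what "per-lab" means in practice, the variable can be scoped to a single command rather than exported in your shell profile (the prompt text here is made up):

```shell
# Scopes HERMES_LAB_MODE to this one invocation only; it is not
# visible to later commands in the same shell session.
HERMES_LAB_MODE=mock hermes run "summarize the current alarms"

# Avoid this: a global export silently affects every later lab.
# export HERMES_LAB_MODE=mock
```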
Step 4: Verify the Full Hermes Setup
bash setup/verify.sh
Expected output (Hermes section):
[PASS] hermes --version: v0.7.0
[PASS] LLM connectivity: OK
[PASS] Docker: running
[PASS] KIND: v0.27.0 or later
[PASS] All checks passed — ready for labs
Troubleshooting
Docker
Docker not running:
Start Docker Desktop, wait 30 seconds, then retry bash setup/verify.sh.
Pods stuck in Pending: Docker Desktop may need more memory. Open Docker Desktop → Settings → Resources → increase Memory to 8 GB.
KIND Cluster
KIND cluster creation fails:
kind delete cluster --name lab
cd reference-app && make deploy
Dashboard not accessible at localhost:30080:
kubectl get pods -n app --context kind-lab
All pods should show Running. If pods are in Pending or CrashLoopBackOff, check Docker resource allocation.
Port 30080 already in use:
lsof -i :30080
Find the process and stop it, or adjust the port in infrastructure/kind/cluster-config.yaml.
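A quick way to free the port, assuming lsof is installed (-t prints only PIDs, suitable for scripting):

```shell
# Kill whatever process is listening on 30080, if any.
pid=$(lsof -ti :30080)
if [ -n "$pid" ]; then
  kill "$pid"
fi
```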
Claude Code
Claude Code authentication fails:
- Update: npm update -g @anthropic-ai/claude-code
- Check status.anthropic.com
- Use an API key: export ANTHROPIC_API_KEY=sk-ant-...
Node.js version too old:
node --version # must be v18 or later
brew install node # macOS
# or: nvm install 18 && nvm use 18
OpenCode
OpenCode provider connection fails:
- Re-copy your API key from the provider console
- Check the provider's status page for rate limit resets
- Try a different provider (e.g., switch from Gemini to Groq)
Hermes
| Problem | Likely cause | Fix |
|---|---|---|
| hermes: command not found | Install location not in PATH | Add ~/.local/bin to PATH and restart terminal |
| hermes run "say: OK" returns auth error | Provider login not completed | Re-run hermes login --provider <your-provider> |
| LLM connectivity fails | Wrong API key or expired token | Re-run hermes login … with a fresh key |
| Google AI Studio rate limit error | Free tier: 500 req/day | Wait until next UTC day or switch provider |
| HF Inference: model not found | Model name changed | Check huggingface.co/models for current Llama-3.1-8B model ID |
General
kind or kubectl not found after installation:
The binary may not be in your PATH. Check:
echo $PATH
Ensure /usr/local/bin is included. Add if missing: export PATH="$PATH:/usr/local/bin".
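A small sketch that locates the binaries and checks the PATH entry in one pass (command -v prints the resolved path, or nothing when a tool is missing):

```shell
# Report where each tool resolves from, then check /usr/local/bin.
for tool in kind kubectl helm; do
  command -v "$tool" || echo "$tool: not found"
done
case ":$PATH:" in
  *:/usr/local/bin:*) echo "/usr/local/bin is on PATH" ;;
  *) echo 'missing: run export PATH="$PATH:/usr/local/bin"' ;;
esac
```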
Windows Notes (WSL2)
All lab commands run inside WSL2.
- Install WSL2 with Ubuntu 22.04+: open PowerShell as Administrator and run wsl --install
- Install Docker Desktop for Windows with WSL2 integration enabled (Settings → Resources → WSL Integration → enable your Ubuntu distro)
- Inside WSL2, install KIND, kubectl, and Helm using the Linux commands above
- Claude Code and OpenCode both work natively in PowerShell or WSL2
- All bash setup/*.sh scripts must be run from a WSL2 terminal
Why WSL2? Lab shell scripts and the Makefile use bash. Windows PowerShell is not compatible with bash scripts.
What's Next
Once bash setup/verify.sh shows all PASS for Day 1 Setup:
- Open course-site/docs/module-01-foundations/
- Read the Module 1 overview for learning objectives
- Start with the Module 1 lab
Once Hermes Setup is complete, continue from where you left off in Module 7.