Module 8 Lab: Wire Tools to Agents
- Duration: 75 minutes
- Track: Use the same track you chose in Module 7
- Prerequisite: Your completed Module 7 SKILL.md (you will attach it to your agent in Step 6)
- Outcome: A running Hermes agent with your identity, safety config, and skill attached — tested end-to-end
There's a Track C-specific version of this lab at Lab — Track C: Kubernetes.
It uses concrete track-c paths, the examine-and-copy approach for SOUL.md and config.yaml
(no blank starter files to fill in), and links forward into Module 10 Track C. Use it
instead of this unified version.
You are building a real agent. By the end of this lab, you will run hermes -p <your-track> chat
and talk to an agent you defined: its identity is yours, its skill is yours.
Prerequisites
# Verify Hermes is installed
hermes --version
# Confirm your Module 7 skill exists
ls course/modules/module-07-skills/my-<track>-skill.md
# If you saved your skill elsewhere, note the path — you will need it in Step 6
Choose Your Lab Mode: Mock or Live
Your agent needs something to run commands against. You have two options and can switch between them at any time by changing one environment variable:
| Mode | HERMES_LAB_MODE | What kubectl/aws/psql does | Use when |
|---|---|---|---|
| Mock (default) | mock | Returns pre-baked JSON fixtures from infrastructure/mock-data/ — the real binaries are never called | You don't have a KIND cluster / AWS account / database, or you want deterministic scenario data |
| Live | live (or unset) | Passes through to the real kubectl/aws/psql binary — hits your actual cluster / account / DB | You have real infrastructure set up and want to see the agent exercise it end-to-end |
The mock mode uses course-shipped wrappers at infrastructure/wrappers/mock-kubectl, mock-aws, and mock-psql. They intercept kubectl/aws/psql commands when HERMES_LAB_MODE=mock is set, and print a [ MOCK MODE ] banner to stderr so you can always tell which mode is active. See the Module 8 Reference §4 for the wrapper implementation.
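The routing logic in those wrappers is small. Here is a minimal sketch of what such a wrapper might look like — the actual implementation is in the Module 8 Reference §4, and the fixture file layout used below (`$MOCK_DATA_DIR/$HERMES_LAB_SCENARIO/kubectl-<verb>.json`) is an assumption for illustration:

```shell
# Hypothetical wrapper sketch — written to /tmp/demo-kubectl so you can poke at it safely
cat > /tmp/demo-kubectl <<'EOF'
#!/usr/bin/env bash
if [ "$HERMES_LAB_MODE" = "mock" ]; then
  echo "[ MOCK MODE ] kubectl $*" >&2                        # banner goes to stderr
  cat "$MOCK_DATA_DIR/$HERMES_LAB_SCENARIO/kubectl-$1.json"  # pre-baked JSON fixture
else
  # live: re-exec the next kubectl on PATH, skipping this wrapper's own directory
  real=$(which -a kubectl | grep -v "$(dirname "$0")" | head -1)
  exec "$real" "$@"
fi
EOF
chmod +x /tmp/demo-kubectl

# Exercise the mock branch with a throwaway fixture:
mkdir -p /tmp/demo-fixtures/clean
echo '{"items": []}' > /tmp/demo-fixtures/clean/kubectl-get.json
HERMES_LAB_MODE=mock MOCK_DATA_DIR=/tmp/demo-fixtures HERMES_LAB_SCENARIO=clean \
  /tmp/demo-kubectl get pods
```

The key design point: the mode check happens on every invocation, which is why flipping `HERMES_LAB_MODE` takes effect on the agent's very next command.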
Mock mode setup (default — no extra infrastructure needed):
export HERMES_LAB_MODE=mock
export HERMES_LAB_SCENARIO=clean # or: crashloop2, oom, image-pull, liveness, missing-secret, port-mismatch
export MOCK_DATA_DIR="$(pwd)/infrastructure/mock-data"
export PATH="$(pwd)/infrastructure/wrappers:$PATH"
# Verify the wrapper is in front of real kubectl
which kubectl
# Expected: <course-dir>/infrastructure/wrappers/kubectl (or similar)
# (If you see /usr/local/bin/kubectl, your PATH export is missing)
Live mode setup (real KIND cluster — Track C focus):
# Validate you have a working cluster
kubectl get nodes
kubectl get pods -A
# Switch Hermes to live mode — just change the env var
export HERMES_LAB_MODE=live # or: unset HERMES_LAB_MODE
# You can still keep the wrappers in PATH — the wrapper scripts auto-exec the real
# kubectl binary when HERMES_LAB_MODE != "mock". The [ MOCK MODE ] banner will not print.
Switching back and forth is safe. The wrapper scripts are thin routers — they check HERMES_LAB_MODE on every invocation, so you can flip between mock and live inside the same Hermes chat session without restarting the agent. Just toggle the env var in another terminal and the agent's next command will use the new mode.
When your agent asks about pods, you can tell which mode it's actually using from the output:
- Mock mode: Returns fixtures like `api-deployment-def456`, `webapp-deployment-abc123` (deterministic, always the same)
- Live mode: Returns whatever pods are actually running in your cluster (e.g., `reference-app-api-gateway-<hash>`)
Also check stderr — mock mode prints a visible [ MOCK MODE ] banner before every command; live mode prints nothing.
What You're Building
A Hermes profile is a directory. When you run hermes -p <name>, Hermes uses that directory
as its home — reading the identity from SOUL.md, the config from config.yaml, and the
skills from the skills/ subdirectory.
Final profile structure you'll create:
~/.hermes/profiles/<your-track>/
├── SOUL.md ← Agent identity (you write this in Steps 3-4)
├── config.yaml ← Model, toolsets, approvals (you configure this in Step 4)
└── skills/
└── <your-skill-name>/
└── SKILL.md ← Your Module 7 skill (you attach this in Step 6)
- Reference implementation (completed Track A profile): `course/agents/track-a-database/` — fully configured, ready to install
- Module 8 solution files: `course/modules/module-08-tools/solution/`
Step 1: Choose Your Track and Understand the Profile (5 min)
Your track from Module 7: ____________________ (Track A: Database, Track B: FinOps, Track C: Kubernetes, Track D: Observability)
Reference profile for your track:
| Track | Reference Profile Directory |
|---|---|
| A (Database) | course/agents/track-a-database/ |
| B (FinOps) | course/agents/track-b-finops/ |
| C (Kubernetes) | course/agents/track-c-kubernetes/ |
Open your track's reference profile and examine the files:
ls course/agents/track-a-database/ # or your track
# Expected: SOUL.md config.yaml skills/
cat course/agents/track-a-database/SOUL.md
cat course/agents/track-a-database/config.yaml
You will build your own version of these files. The reference is your "finished example."
Step 2: Create Your Profile Directory (3 min)
# Create the profile skeleton
mkdir -p ~/.hermes/profiles/<your-track>/skills/
# Verify structure
ls ~/.hermes/profiles/<your-track>/
# Expected: skills/
Note: hermes profile create also creates this structure, but the manual mkdir approach
makes the file system layout transparent. You see exactly what Hermes will read.
Step 3: Examine and Install Your SOUL.md (15 min)
Your SOUL.md is your agent's identity. Hermes reads it at startup and uses it as the agent's entire identity — replacing the generic "Hermes Agent" default persona.
Read the reference SOUL.md for your track:
cat course/agents/track-a-database/SOUL.md # Track A
cat course/agents/track-b-finops/SOUL.md # Track B
cat course/agents/track-c-kubernetes/SOUL.md # Track C
Read through each section as you go:
- Identity — A first-person statement that gives the agent its name, domain, and operating scope. This is what the agent says when you ask "Who are you?"
- Behavior Rules — Explicit if/then rules that govern how the agent responds in different situations. The first rule always checks `HERMES_LAB_MODE` so the agent knows whether it is talking to mock fixtures or live infrastructure.
- Escalation Policy — Conditions under which the agent stops acting autonomously and hands off to a human. For infrastructure agents this is where you encode your team's blast-radius limits.
Copy the reference SOUL.md directly to your profile:
# Track A:
cp course/agents/track-a-database/SOUL.md ~/.hermes/profiles/track-a/SOUL.md
# Track B:
cp course/agents/track-b-finops/SOUL.md ~/.hermes/profiles/track-b/SOUL.md
# Track C:
cp course/agents/track-c-kubernetes/SOUL.md ~/.hermes/profiles/track-c/SOUL.md
The reference SOUL.md files are production-ready — no placeholders to fill in.
If you want to understand the full blank template format that these were authored from,
see course/agents/SOUL-TEMPLATE.md as background reading.
Quality gate:
grep -c '\[' ~/.hermes/profiles/<your-track>/SOUL.md
# Expected: 0
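Why counting `[` works as a gate: the blank template marks every fill-in slot with square brackets, so any surviving bracket means an unfinished placeholder. A quick illustration with throwaway files (note that `grep -c` counts matching lines, not total matches):

```shell
# A finished SOUL.md has no brackets; template fill-in slots all use them
printf 'You are Nova, a PostgreSQL incident-response agent.\n' > /tmp/soul-done.md
printf 'You are [Name], a [domain] agent.\n'                   > /tmp/soul-todo.md

grep -c '\[' /tmp/soul-todo.md        # prints 1: a line still has placeholders
grep -c '\[' /tmp/soul-done.md || :   # prints 0 (grep exits nonzero on zero matches)
```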
Step 4: Configure config.yaml (10 min)
Your config.yaml controls: which model the agent uses (the Brain), which tools it has access to, and how it handles potentially dangerous commands.
Open the starter file:
cp course/modules/module-08-tools/starter/config-starter.yaml /tmp/my-config.yaml
Step 4a: Pick Your LLM Provider
You have two options. Pick the one that matches the API key you already have (or can get in 2 minutes).
Option A — Anthropic Claude Haiku 4.5 (default; requires existing Anthropic key or Claude subscription):
model:
default: "anthropic/claude-haiku-4-5"
provider: "Anthropic"
API key goes in ~/.hermes/profiles/<your-track>/.env:
echo 'ANTHROPIC_API_KEY=sk-ant-...' > ~/.hermes/profiles/<your-track>/.env
Option B — Google Gemini 2.5 Flash (free, no credit card, 500 req/day):
model:
default: "gemini-2.5-flash"
provider: "custom:google-ai-studio"
custom_providers:
- name: google-ai-studio
base_url: https://generativelanguage.googleapis.com/v1beta/openai
API key goes in ~/.hermes/profiles/<your-track>/.env:
echo 'OPENAI_API_KEY=<your-gemini-api-key>' > ~/.hermes/profiles/<your-track>/.env
Google AI Studio exposes an OpenAI-compatible endpoint at generativelanguage.googleapis.com/v1beta/openai. Hermes routes custom providers through its OpenAI client library, which reads the key from the OPENAI_API_KEY env var — even though the value is actually a Gemini API key. Do NOT rename the variable to GEMINI_API_KEY or GOOGLE_API_KEY — the Hermes custom provider will not find it.
To get a Gemini API key: visit aistudio.google.com, click Get API Key → Create API Key, copy the key (starts with AIza...). Full walkthrough in course/setup/llm-access.md § Provider 1 (read from your repo checkout, not the Docusaurus site).
Step 4b: Fill in the rest of config.yaml
Keep these values for the lab:
- `platform_toolsets.cli`: `[terminal, file, web, skills]` (L2 Advisory)
- `approvals.mode`: `manual` (every dangerous command requires your approval)
- `approvals.timeout`: `300` (5 minutes — required for multi-step lab flows)
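Assembled, the lab values from Steps 4a and 4b might look like the fragment below — a sketch only: the key names follow the lab text, but the starter file's exact schema is authoritative.

```yaml
model:
  default: "anthropic/claude-haiku-4-5"   # or the Gemini option from Step 4a
  provider: "Anthropic"

platform_toolsets:
  cli: [terminal, file, web, skills]      # L2 Advisory toolset

approvals:
  mode: manual      # every dangerous command requires your approval
  timeout: 300      # seconds — long enough for multi-step lab flows
```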
Install it:
cp /tmp/my-config.yaml ~/.hermes/profiles/<your-track>/config.yaml
Verify your provider config loaded:
hermes -p <your-track> chat
Ask: What model are you running on? — the agent should identify its backing model (claude-haiku-4-5 or gemini-2.5-flash). If you see "API key missing" errors, re-check that .env file lives at ~/.hermes/profiles/<your-track>/.env and contains the right variable name for your option.
Step 5: Run Your Agent — No Skills Yet (8 min)
This step is intentional. Run your agent before attaching any skill. The goal is to see that the profile IS the identity, even without skills.
Step 5a: Confirm your env vars before launching
The Hermes process inherits environment variables from the shell that launches it — and only at launch time. Once hermes chat is running, env var changes you make in the same terminal do NOT propagate into the agent's process. This matters because the lab is designed around HERMES_LAB_MODE, HERMES_LAB_SCENARIO, and (later) HERMES_LAB_GOVERNANCE — get them wrong before you launch and your agent will run in the wrong mode for the entire session.
Verify your active mode and exports BEFORE running hermes chat:
# Confirm what mode you're in (check ALL the lab env vars)
echo "MODE=$HERMES_LAB_MODE SCENARIO=$HERMES_LAB_SCENARIO TRACK=$HERMES_LAB_TRACK"
# Confirm the kubectl/aws/psql wrappers are in PATH (symlinks to mock-*)
which kubectl
# Expected: <course-dir>/infrastructure/wrappers/kubectl
# Note: the wrapper path is correct in BOTH mock and live mode — the wrapper
# internally execs the real binary when HERMES_LAB_MODE=live.
# If you see /usr/local/bin/kubectl, your PATH export did not take effect.
# If anything's wrong, re-run the export block from Prerequisites BEFORE launching
If hermes chat is already running: two safe patterns work; one common pattern silently fails:
| You did this | Result |
|---|---|
| Changed HERMES_LAB_MODE in the same terminal that's running hermes chat, then sent another prompt | ❌ Silent failure — agent's process still has the OLD value because env vars don't reach a child process retroactively |
| Exited hermes chat (exit or Ctrl+C), re-exported the new value, relaunched hermes -p <track> chat | ✓ Works — the new shell-level value is inherited at launch |
| Changed HERMES_LAB_MODE in a different terminal (without restarting Hermes) | ✓ Works for the wrappers — mock-kubectl/mock-aws/mock-psql are tiny bash scripts that re-read HERMES_LAB_MODE on EVERY invocation, not at agent startup. So the next command the agent runs will pick up the new value. The Hermes process itself still has the old value, but for the wrappers, that's irrelevant. |
Why the third pattern works: the wrapper scripts run as separate child processes of hermes for each command, and each one re-checks HERMES_LAB_MODE at invocation time rather than caching it at agent startup. Export HERMES_LAB_MODE=live while Hermes is still running and the wrappers pick up the new value on their next invocation — even though Hermes itself still thinks HERMES_LAB_MODE=mock. The wrapper layer is what actually decides mock vs live, not Hermes, so this works.
Why the first pattern doesn't: the Hermes process only ever sees the environment it inherited at launch, and the terminal running hermes chat is blocked on the chat REPL — you can't export anything in it until you exit. Sending another prompt after "changing" the variable does nothing to the value Hermes itself holds.
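You can see the launch-time snapshot behavior with plain shell, no Hermes needed — the background `sh -c` below stands in for the long-running hermes process:

```shell
export HERMES_LAB_MODE=mock

# Launch a "long-running agent" that reports its mode after a delay
sh -c 'sleep 1; echo "agent process sees: $HERMES_LAB_MODE"' &

export HERMES_LAB_MODE=live   # changed AFTER the child launched
wait
# Prints "agent process sees: mock" — the child keeps its launch-time environment
```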
Step 5b: Launch the agent
hermes -p <your-track> chat
Ask your agent:
What is your name and role?
Expected: The agent introduces itself using the identity from your SOUL.md. It should NOT say "I am Hermes Agent" — it should use the name and role you wrote.
If the identity is wrong: Check that ~/.hermes/profiles/<your-track>/SOUL.md exists
and that your SOUL.md starts with a strong first-person statement ("You are [Name]...").
To verify the active mode from inside the chat: ask the agent to run a quick diagnostic command:
Run `kubectl get pods -n app` and tell me what you see. Also report whether the [ MOCK MODE ] banner appeared.
- Mock mode confirmed: agent reports the `[ MOCK MODE ]` banner and returns deterministic fixture pod names like `webapp-deployment-abc123`
- Live mode confirmed: no banner; agent returns whatever pods are actually in your cluster (e.g., `reference-app-api-gateway-<hash>` if you deployed the reference app)
If you see the wrong mode, exit the chat (exit or Ctrl+C), fix your env vars in the shell, then relaunch.
Exit the chat session when done: type exit or Ctrl+C.
Step 6: Attach Your Module 7 Skill (8 min)
Copy your Module 7 skill into the profile's skills/ directory.
IMPORTANT: The skills/ path matters exactly. The skill must be at:
~/.hermes/profiles/<your-track>/skills/<skill-name>/SKILL.md
A skill at the profile root (without the skills/<name>/ subdirectory) is NOT discovered.
# Create the skill subdirectory using your skill's name (from its YAML frontmatter)
SKILL_NAME="[your-skill-name-from-frontmatter]"
mkdir -p ~/.hermes/profiles/<your-track>/skills/$SKILL_NAME/
# Copy your Module 7 skill
cp course/modules/module-07-skills/my-<track>-skill.md \
~/.hermes/profiles/<your-track>/skills/$SKILL_NAME/SKILL.md
# Verify structure
ls ~/.hermes/profiles/<your-track>/skills/
# Expected: <your-skill-name>/
ls ~/.hermes/profiles/<your-track>/skills/$SKILL_NAME/
# Expected: SKILL.md
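To see why the exact `skills/<name>/SKILL.md` path matters, a throwaway directory makes the discovery rule concrete — the glob below illustrates the rule, it is not the actual Hermes discovery code:

```shell
PROFILE=/tmp/demo-profile
mkdir -p "$PROFILE/skills/crashloop-triage"
touch "$PROFILE/skills/crashloop-triage/SKILL.md"   # correct location
touch "$PROFILE/SKILL.md"                           # wrong — profile root

ls "$PROFILE"/skills/*/SKILL.md
# Only skills/crashloop-triage/SKILL.md matches; the root-level SKILL.md is
# invisible to anything scanning the skills/<name>/SKILL.md pattern
```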
Step 7: Restart Agent and Verify Skill Loads (8 min)
hermes -p <your-track> chat
Verify the skill is installed (CLI — reliable):
hermes -p <your-track> skills list
# Expected: your skill name appears with source: local
Note: Asking the agent "List your available skills" in chat may return empty — this is a known LLM behavior and does NOT mean the skill is missing. Use the CLI command above to reliably confirm the skill is installed. If it does not appear, verify the directory structure from Step 6.
Test the skill activates on a trigger: Send a prompt that matches one of your "When to Use" trigger conditions:
| Track | Test prompt |
|---|---|
| A (Database) | "The RDS CPUUtilization alarm just fired. Investigate." |
| B (FinOps) | "We have a cost spike. Daily spend is 80% above baseline." |
| C (Kubernetes) | "Several pods are in CrashLoopBackOff. Diagnose." |
| D (Observability) | "We're getting 15 CloudWatch alarms in 5 minutes. Analyze." |
Expected: The agent responds using the structure from your SKILL.md (citing evidence, running Phase 1 commands, applying Phase 2 decision logic).
Exit when done.
Step 8: Test a Safety Boundary (12 min)
Hermes's approval gate catches commands that match DANGEROUS_PATTERNS in tools/approval.py.
You configured approvals.mode: manual in Step 4. Now you'll trigger it.
Track-specific test commands:
Track A (Database): Ask the agent to drop a test index:
I want to drop the index test_idx_orders if it exists. Can you do that?
Expected: The agent generates DROP INDEX test_idx_orders; — this matches "SQL DROP" in
DANGEROUS_PATTERNS. You will see an approval prompt. Type deny to cancel.
Track B (FinOps): SQL DROP may not apply. Instead, ask:
I want to check for dangerous commands. Run: DROP TABLE test;
Expected: Approval gate fires for "SQL DROP". Type deny.
Track C (Kubernetes): For kubectl safety, note that kubectl delete/drain/cordon are NOT in Hermes DANGEROUS_PATTERNS — they are governed by SOUL.md NEVER rules. To test the approval gate, use a shell command that IS in DANGEROUS_PATTERNS:
Try running: rm -rf /tmp/test-dir
Expected: Approval gate fires for "recursive delete". Type deny.
Track D (Observability): Ask about a cleanup command:
Can you run: DROP TABLE alert_noise_log;
Expected: Approval gate fires for "SQL DROP". Type deny.
What you just tested: The approval gate works. Your agent cannot execute a flagged command without your explicit approval. This is governance L2 in action.
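The pattern match you just triggered can be sketched in a few lines. The labels follow the lab text, but this is a hypothetical shell rendering — the real list lives in tools/approval.py and is richer than this:

```shell
# Hypothetical approval check — pattern list and function name are assumptions
needs_approval() {
  case "$1" in
    *"DROP TABLE"*|*"DROP INDEX"*|*"DROP DATABASE"*) echo "SQL DROP" ;;
    *"rm -rf"*|*"rm -fr"*)                           echo "recursive delete" ;;
    *) return 1 ;;   # not flagged — runs without an approval prompt
  esac
}

needs_approval "DROP INDEX test_idx_orders;"    # SQL DROP
needs_approval "rm -rf /tmp/test-dir"           # recursive delete
needs_approval "kubectl delete pod x" || echo "not flagged"   # SOUL.md territory, per Track C note
```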
Exit the chat session.
Verify Your Complete Profile
Run the final verification checklist:
# 1. Profile directory has all required files
ls ~/.hermes/profiles/<your-track>/
# Expected: SOUL.md config.yaml skills/
# 2. Skill is in the correct location
ls ~/.hermes/profiles/<your-track>/skills/
# Expected: <your-skill-name>/
# 3. No placeholders remain in SOUL.md
grep -c '\[' ~/.hermes/profiles/<your-track>/SOUL.md
# Expected: 0
# 4. Config has correct approval settings
grep "mode:" ~/.hermes/profiles/<your-track>/config.yaml
# Expected: mode: manual
grep "timeout:" ~/.hermes/profiles/<your-track>/config.yaml
# Expected: timeout: 300
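If you'd rather run the checklist as a single pass/fail sweep, here is a sketch. The file names match the lab; `check_profile` is a hypothetical helper, not a Hermes command:

```shell
check_profile() {
  local p=$1 fail=0
  [ -f "$p/SOUL.md" ]     || { echo "missing SOUL.md"; fail=1; }
  [ -f "$p/config.yaml" ] || { echo "missing config.yaml"; fail=1; }
  ls "$p"/skills/*/SKILL.md >/dev/null 2>&1 || { echo "no skill installed"; fail=1; }
  [ "$(grep -c '\[' "$p/SOUL.md" 2>/dev/null || :)" = "0" ] \
    || { echo "placeholders remain in SOUL.md"; fail=1; }
  grep -q "mode: manual" "$p/config.yaml" 2>/dev/null \
    || { echo "approvals.mode is not manual"; fail=1; }
  [ "$fail" = 0 ] && echo "profile OK"
}

# Usage: check_profile ~/.hermes/profiles/track-a
```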
Compare with Solution
Track A reference solution: course/modules/module-08-tools/solution/
diff ~/.hermes/profiles/<your-track>/SOUL.md \
course/modules/module-08-tools/solution/SOUL-solution.md
# Differences are expected — your identity vs. Track A reference
What must match: all sections present (Identity, Behavior Rules, Escalation Policy), no [placeholders], approval gate configured.
Summary
You have built a Hermes agent profile from scratch:
- Identity from SOUL.md (your agent's persona and rules)
- Capabilities from config.yaml (model, tools, safety)
- Skills from your Module 7 work (your diagnostic runbook)
- Safety boundary tested (approval gate demonstrated)
Your profile is now a reusable agent definition. Publish it back into the course repo with:
cp -r ~/.hermes/profiles/<your-track>/ course/agents/<your-track>/
Others can then install it with cp -r course/agents/<your-track>/ ~/.hermes/profiles/<your-track>/.
Next Steps
In Module 10, you will extend your domain agent with more sophisticated skills and test it against the full cross-domain incident scenario.