
# Superpowers for IaC — Quick Reference

## The Superpowers Cycle

| Phase | Goal | Time Guidance |
|---|---|---|
| Phase 0: Context | Write CLAUDE.md — system state, gaps, constraints, exact vocabulary | 10 min |
| Phase 1: Brainstorm | AI reviews context, produces a prioritized gap analysis (plan, not code) | 15 min |
| Phase 2: TDD — RED | Write tests FIRST. Run them. They MUST fail. | 20 min |
| Phase 3: Implement — GREEN | Prompt the AI to generate code that passes the tests | 20 min |
| Phase 4: Debug | Fix AI generation errors using the 4-phase methodology | 10 min |
| Phase 5: Verify + Review | Evidence-based verification + 5-dimension code review | 15 min |

Iron Law: No production code before a failing test exists.


## TDD Commands by Tool

| Tool | RED (Test First) | GREEN (Make Pass) | REFACTOR |
|---|---|---|---|
| Helm | Run verify-chart.sh — expect FAIL for missing resources | `helm lint` exits 0, and piping `helm template` into `kubectl apply --dry-run=client -f -` succeeds | `helm lint` stays clean, dry-run stays green |
| Terraform | `terraform test` — expect "Reference to undeclared resource" (no .tf files yet) | `terraform validate` + `terraform test` — all assertions pass | `terraform test` stays green |
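The RED column can be enforced mechanically before any implementation exists. A minimal sketch in POSIX shell — `assert_red` is a hypothetical helper, not part of either toolchain:

```shell
# assert_red: confirm that a test command FAILS (the RED phase of TDD).
# Usage: assert_red terraform test    or    assert_red ./verify-chart.sh
# Hypothetical helper — not part of Helm or Terraform.
assert_red() {
  if "$@" > /dev/null 2>&1; then
    # A passing test with no implementation means the test checks nothing.
    echo "VIOLATION: '$*' passed before any implementation exists"
    return 1
  fi
  echo "RED confirmed: '$*' fails as expected"
}
```

Running `assert_red terraform test` in an empty module should print the confirmation; a reported violation means the tests are not actually exercising anything.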

## Common AI Generation Errors — Helm

| Error | What AI Generates | Correct Value | Why |
|---|---|---|---|
| Wrong HPA apiVersion | `autoscaling/v2beta2` | `autoscaling/v2` | v2beta2 was removed in Kubernetes 1.26 |
| Wrong PDB apiVersion | `policy/v1beta1` | `policy/v1` | v1beta1 was removed in Kubernetes 1.25 |
| matchLabels mismatch | `app: reference-app` | Must match the Deployment's `spec.selector.matchLabels` | HPA/PDB must target selectors that actually exist |
| ServiceMonitor wrong port | `port: 8080` | The `name` field from the Service spec | ServiceMonitor requires the port name, not the number |
| NOTES.txt missing dot prefix | `{{ Release.Name }}` | `{{ .Release.Name }}` | Helm built-in objects require the dot prefix |
| HPA targetRef hardcoded | `name: api-gateway` | `{{ include "reference-app.fullname" . }}` | Use the chart helper for naming consistency |
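The two apiVersion rows lend themselves to a mechanical check on rendered output. A sketch — `check_api_versions` is an illustrative helper; pipe `helm template <release> <chart-dir>` into it:

```shell
# check_api_versions: read rendered manifests on stdin and flag the
# deprecated apiVersions listed in the table above. Illustrative helper.
check_api_versions() {
  # grep prints any matching line (with its line number) and succeeds,
  # which here means a deprecated apiVersion was found.
  if grep -nE 'apiVersion: *(autoscaling/v2beta[12]|policy/v1beta1)'; then
    echo "FAIL: deprecated apiVersion — use autoscaling/v2 and policy/v1"
    return 1
  fi
  echo "OK: no deprecated apiVersions found"
}
```

Usage: `helm template test-release ./mychart | check_api_versions` (chart path is an example).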

## Common AI Generation Errors — Terraform

| Error | What AI Generates | Correct Value | Why |
|---|---|---|---|
| Wrong metric name | `cpu_utilization` | `CPUUtilization` | CloudWatch metric names are case-sensitive — an exact match is required |
| Wrong comparison operator | `GreaterThanOrEqualToThreshold` | `GreaterThanThreshold` | The standard CPU alarm uses "greater than", not "greater than or equal" |
| Wrong SNS protocol | `"lambda"` or `"sqs"` | `"email"` | The free-tier setup requires the email protocol (SMS costs money) |
| Wrong AMI owners | `["self"]` or `["aws"]` | `["amazon"]` | Amazon-published AMIs use the `"amazon"` owner alias |
| Missing data source | `ami = "ami-12345abc"` | A `data "aws_ami"` block with filters | AMI IDs are region-specific — always resolve them through a data source |
| Resource name mismatch | `aws_instance.this` | Must match the test file (e.g., `aws_instance.app`) | `terraform test` references resources by name — a mismatch fails the test |
| Variable not used | `threshold = 80` (hardcoded) | `threshold = var.alarm_threshold` | Hardcoded values cannot be overridden by callers |
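Two of these rows (hardcoded AMI IDs, hardcoded thresholds) can be caught with a plain-text scan before `terraform validate` even runs. A sketch — the `scan_tf` name and the patterns are assumptions; tune them to your module:

```shell
# scan_tf: flag hardcoded AMI IDs and literal thresholds in the given
# .tf files. A static pre-check, not a substitute for terraform test.
scan_tf() {
  status=0
  if grep -nE '"ami-[0-9a-f]+"' "$@"; then
    echo 'FAIL: hardcoded AMI ID — use a data "aws_ami" source'
    status=1
  fi
  if grep -nE 'threshold[[:space:]]*=[[:space:]]*[0-9]+' "$@"; then
    echo 'WARN: literal threshold — prefer threshold = var.alarm_threshold'
    status=1
  fi
  return $status
}
```

Usage: `scan_tf *.tf` from the module root.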

## Systematic Debugging Phases

Phase 1 — Investigate: Read the error output completely. Do not guess. Identify the exact resource, attribute, and value named in the error. For Helm: `helm lint --debug`. For Terraform: `terraform test 2>&1`.

Phase 2 — Pattern match: Find a working example. Check the error table above. Compare the failing template or resource against a known-good reference.

Phase 3 — Hypothesize: Form one specific hypothesis, e.g. "the matchLabels selector uses `app` but the Deployment uses `app.kubernetes.io/name`." Apply the smallest change that tests this hypothesis.

Phase 4 — Fix and verify: Apply the minimal fix. Run the verification command immediately. Confirm the error is gone. Move to the next error.

3-Fix Rule: After 3 failed fix attempts on the same error, stop. Give the AI the full file content, the failing test or check, and the exact error message. At that point the root cause is likely missing context, not the code itself.
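The 3-Fix Rule can be wired into the verification loop itself. A sketch in POSIX shell — in real use you apply one minimal fix between attempts; this illustrative helper only counts failures and escalates:

```shell
# run_with_3fix_rule: run a check command up to 3 times; after the third
# failure, print an escalation prompt instead of looping forever.
# Usage: run_with_3fix_rule terraform test    or    run_with_3fix_rule ./verify-chart.sh
run_with_3fix_rule() {
  attempt=1
  log=$(mktemp)
  while [ "$attempt" -le 3 ]; do
    if "$@" > "$log" 2>&1; then
      echo "PASS on attempt $attempt"
      rm -f "$log"
      return 0
    fi
    echo "attempt $attempt failed"
    attempt=$((attempt + 1))
  done
  echo "ESCALATE: 3 failed attempts — give the AI the file, the check, and this error:"
  tail -5 "$log"   # the last lines of the captured error output
  rm -f "$log"
  return 1
}
```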


## Code Review Dimensions for IaC

| Dimension | What to Check | Common Finding |
|---|---|---|
| Code Quality | Naming conventions, DRY (no hardcoded values), consistent patterns | Literal values that should be variables; inconsistent resource naming |
| Architecture | Resource relationships, data flow, dependency ordering | HPA targeting the wrong Deployment; alarm not referencing the EC2 instance; SNS subscription hardcoding the topic ARN |
| Testing | Assertions cover all resources; edge cases (zero replicas, empty values, default thresholds) | `terraform test` checks resource existence but not attribute values; verify-chart.sh missing a resource-limit check |
| Requirements | All CLAUDE.md constraints present; nothing omitted | Missing `monitoring = false` handling; wrong apiVersion; resource limits absent |
| Production Readiness | Free-tier compliance; variable defaults; documentation; tags | Missing tags; no `treat_missing_data`; empty NOTES.txt; PDB blocks single-replica drain |
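Parts of the Production Readiness row are mechanizable. A sketch that flags two of the common findings in Terraform files — `pre_review` and its patterns are assumptions, not a complete review:

```shell
# pre_review: flag files with no tags and alarms without treat_missing_data,
# two Production Readiness findings from the table above. Illustrative only;
# a text grep cannot replace the human (or AI) review pass.
pre_review() {
  for f in "$@"; do
    grep -q 'tags' "$f" || echo "$f: no tags found"
    if grep -q 'aws_cloudwatch_metric_alarm' "$f"; then
      grep -q 'treat_missing_data' "$f" || echo "$f: alarm without treat_missing_data"
    fi
  done
}
```

Usage: `pre_review *.tf` before opening the review.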

## Verification Commands

```shell
# ── Helm ──────────────────────────────────────────────
# Lint: syntax and schema validation
helm lint <chart-dir>

# Template + dry-run: validate rendered output against the cluster API
helm template <release> <chart-dir> -f <values-file> | kubectl apply --dry-run=client -f -

# Install dry-run: full release simulation including NOTES.txt
helm install <release> <chart-dir> --dry-run

# Debug lint failures: get the exact file and line number
helm lint <chart-dir> --debug

# ── Terraform ─────────────────────────────────────────
# Schema validation: syntax and type checking
terraform validate

# Unit tests with mock provider (no AWS credentials needed)
terraform test

# Format check: consistent code style
terraform fmt -check

# Plan (requires real AWS credentials): shows actual resource changes
terraform plan
```

## CLAUDE.md Template for IaC Projects

Use this template as your starting point. The more specific each section is, the better the AI generation quality.

```markdown
# [Project Name]

## System State
- [What tool/chart/module exists today]
- [Current versions and configurations]
- [What is already working correctly]

## What Is Missing (Production Gaps)
- [Specific gap 1 — be concrete, not "improve the chart"]
- [Specific gap 2]

## Constraints
- [Hard requirement 1 — API versions, naming conventions, free-tier limits]
- [Hard requirement 2 — test toolchain, verification method]
- [Performance or cost constraints]

## Architecture
- [Key design decisions already made]
- [Exact resource types and attribute names needed]
- [Resource relationships and data flow]
```

Architecture section tip: Include the exact attribute names and values your provider requires. For Terraform:

```markdown
## Architecture
- Resource: aws_cloudwatch_metric_alarm
  - metric_name = "CPUUtilization" (not cpu_utilization)
  - comparison_operator = "GreaterThanThreshold"
  - namespace = "AWS/EC2"
  - evaluation_periods = 2, period = 300
```

Encoding the exact vocabulary in CLAUDE.md pre-corrects the most common AI generation errors before code generation begins.
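Whether generation actually picked up that vocabulary can be spot-checked mechanically. A sketch — the `vocab_check` name and the `main.tf` target are assumptions:

```shell
# vocab_check: confirm a generated Terraform file uses the exact CloudWatch
# vocabulary encoded in CLAUDE.md. Illustrative helper.
vocab_check() {
  f="$1"
  grep -q '"CPUUtilization"' "$f" \
    || { echo "FAIL: metric_name does not match CLAUDE.md vocabulary"; return 1; }
  grep -q '"GreaterThanThreshold"' "$f" \
    || { echo "FAIL: comparison_operator does not match CLAUDE.md vocabulary"; return 1; }
  echo "OK: vocabulary matches CLAUDE.md"
}
```

Usage: `vocab_check main.tf` right after generation, before running the test suite.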


## Quick Reference: `mock_provider` for Terraform Testing

```hcl
# versions.tf — requires Terraform 1.7+ (mock_provider was added in 1.7)
terraform {
  required_version = ">= 1.7"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

```hcl
# unit.tftest.hcl — write this BEFORE main.tf
mock_provider "aws" {}

variables {
  notification_email = "test@example.com"
  instance_name      = "test-instance"
}

run "resource_exists" {
  command = plan
  assert {
    condition     = length(aws_instance.app) > 0
    error_message = "EC2 instance must be created"
  }
}

run "alarm_threshold_correct" {
  command = plan
  assert {
    condition     = aws_cloudwatch_metric_alarm.cpu.threshold == 80
    error_message = "Alarm threshold should default to 80"
  }
}
```

Run the tests:

```shell
terraform init
terraform test   # must fail before main.tf exists (RED)
# ... generate main.tf, variables.tf, outputs.tf from CLAUDE.md context
terraform test   # must pass after implementation (GREEN)
```

## Quick Reference: Helm Verification Script Pattern

```bash
#!/bin/bash
# pipefail so the kubectl dry-run failure below is not masked by tail;
# no set -e, because this script accumulates failures instead of aborting.
set -o pipefail

CHART_DIR="<your-chart-dir>"
ERRORS=0

echo "=== Chart Verification ==="

helm lint "$CHART_DIR" || { echo "FAIL: helm lint"; ERRORS=$((ERRORS+1)); }

RENDERED=$(helm template test-release "$CHART_DIR" 2>/dev/null) \
  || { echo "FAIL: helm template"; ERRORS=$((ERRORS+1)); }

echo "$RENDERED" | grep -q "kind: HorizontalPodAutoscaler" \
  || { echo "FAIL: no HPA"; ERRORS=$((ERRORS+1)); }

echo "$RENDERED" | grep -q "kind: PodDisruptionBudget" \
  || { echo "FAIL: no PDB"; ERRORS=$((ERRORS+1)); }

echo "$RENDERED" | kubectl apply --dry-run=client -f - 2>&1 | tail -3 \
  || { echo "FAIL: kubectl dry-run"; ERRORS=$((ERRORS+1)); }

[ $ERRORS -eq 0 ] && echo "ALL CHECKS PASS" || { echo "$ERRORS check(s) failed"; exit 1; }
```

Run the script:

```shell
chmod +x verify-chart.sh
./verify-chart.sh   # must fail before templates exist (RED)
# ... generate templates from CLAUDE.md context
./verify-chart.sh   # must pass after implementation (GREEN)
```