Configuration Guide
Everything you need to set up the loopctl orchestration workflow. Copy the configs below, adapt the placeholders for your project, and run.
Overview
loopctl provides a structural trust layer for AI agent orchestration. These configs define the workflow that enforces chain-of-custody verification across every story in your project.
The workflow follows a strict sequence:
- Orchestrator finds the next ready story and dispatches an implementation agent
- Implementation agent builds the feature, runs tests, and requests review
- Review agents (different identity) audit the work and record findings
- Orchestrator verifies the review, confirms the story, and moves to the next
The key constraint: the implementer, reviewer, and verifier must all be different identities. This is enforced at the API layer through three identity gates that return 409 Conflict if any agent attempts to mark their own work as done.
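The same invariant can be mirrored client-side before any API call is made. A minimal sketch (an illustrative helper, not part of loopctl) that rejects any overlap between the three roles:

```shell
# Illustrative sketch, not loopctl code: mirrors the server-side identity gate.
# Returns non-zero if any two of the three roles share an identity.
check_chain_of_custody() {
  local implementer=$1 reviewer=$2 verifier=$3
  if [ "$implementer" = "$reviewer" ] || \
     [ "$reviewer" = "$verifier" ] || \
     [ "$implementer" = "$verifier" ]; then
    echo "custody violation: roles must be three distinct identities" >&2
    return 1
  fi
  echo "ok"
}
```

For example, `check_chain_of_custody impl-agent review-agent orchestrator` passes, while reusing the same identity in any two positions fails, which is exactly when the server would answer 409.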
Orchestrator Command
The orchestrator is a long-running AI agent session that coordinates the entire build loop. It never writes code directly -- it dispatches sub-agents for implementation and review.
Key concepts:
- MCP tool preference -- use typed MCP tools instead of raw curl for all loopctl interactions
- Session sentinel -- a file that hooks use to detect orchestrator mode
- Pre-flight context -- read CLAUDE.md, load architecture docs, check environment
- Autonomous loop -- find ready story, build context, dispatch, review, verify, repeat
- Chain of custody -- implementer, reviewer, and verifier are different identities
# Genericized orchestrator command (key excerpts)
# Full command is ~300 lines; this shows the essential structure.
# === IDENTITY ===
# Create sentinel file so hooks know this is an orchestrator session
touch "$HOME/.claude/.orchestrator-active-$SESSION_ID"
# === PRE-FLIGHT ===
# 1. Read CLAUDE.md and AGENTS.md for project conventions
# 2. Identify architecture docs relevant to current epic
# 3. Load orchestration state from loopctl
mcp__loopctl__get_progress({project_id: "<uuid>"})
# === AUTONOMOUS LOOP ===
# Repeat until no stories remain in ready state:
# 1. Find next ready story
STORY=$(mcp__loopctl__list_stories({status: "ready", limit: 1}))
# 2. Contract the story (commit to acceptance criteria count)
mcp__loopctl__contract_story({
  story_id: "$STORY_ID",
  story_title: "...",
  ac_count: 12
})
# 3. Build implementation context
# - Read the story JSON for full requirements
# - Identify which architecture docs apply
# - Check for dependency stories that inform this one
# 4. Dispatch implementation agent (NEVER write code directly)
# The orchestrator coordinates. Agents write code.
claude --agent implementation-agent \
  --message "Implement US-X.Y: $STORY_TITLE ..."
# 5. Implementation agent finishes — request review
mcp__loopctl__request_review({story_id: "$STORY_ID"})
# 6. Dispatch review agents (different identity than implementer)
# Team review: 3 agents in parallel
# Then: VCA (Verify, Classify, Aggregate)
# Then: Adversarial review: 4 agents in parallel
# Then: VCA again
# 7. Review agent reports completion
mcp__loopctl__report_story({story_id: "$STORY_ID"})
mcp__loopctl__review_complete({
  story_id: "$STORY_ID",
  findings_count: 14,
  fixes_count: 12,
  disproved_count: 2
})
# 8. Orchestrator verifies (third identity)
mcp__loopctl__verify_story({story_id: "$STORY_ID"})
# === RULES ===
# - MCP tools over curl (typed, validated, discoverable)
# - One story at a time — never parallelize stories
# - Chain of custody: implementer != reviewer != verifier
# - Never write code directly — dispatch sub-agents
# - Fix ALL review findings — no deferrals
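The sentinel file created at the top of the excerpt should not outlive the session, or later hooks would misread ordinary sessions as orchestrator mode. A hedged sketch of the lifecycle (the trap-based cleanup is an assumption, not part of the documented command):

```shell
# Sketch: sentinel lifecycle with guaranteed cleanup (assumed pattern,
# not shown in the original orchestrator command).
SESSION_ID=${SESSION_ID:-$$}
SENTINEL="$HOME/.claude/.orchestrator-active-$SESSION_ID"

mkdir -p "$HOME/.claude"
touch "$SENTINEL"
# Remove the sentinel however the session ends, so hooks stop
# treating this session id as orchestrator mode afterwards.
trap 'rm -f "$SENTINEL"' EXIT INT TERM

echo "sentinel active: $SENTINEL"
```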
Agent Definitions
Agents are defined as YAML frontmatter files in .claude/agents/. Each agent has a name, permission mode, model, and preloaded skills. The loopctl workflow uses two groups of agents:
- Implementation agent — writes code (uses sonnet for speed)
- Review team — BA + Architect + Engineer review in round 1, then all three + Security Adversary in round 2 (all use opus for depth)
Implementation
Implementation Agent
---
name: implementation-agent
description: Primary development agent for feature implementation
permissionMode: bypassPermissions
model: sonnet
effort: high
skills:
- patterns-elixir
- patterns-ecto
- patterns-phoenix-web
---
name -- unique identifier used in dispatch commands
description -- human-readable purpose shown in agent listings
permissionMode -- bypassPermissions lets the agent run without interactive confirmation
model -- which LLM to use (sonnet for implementation, opus for review)
effort -- high enables extended thinking and deeper analysis
skills -- preloaded knowledge files that guide the agent's patterns
Review Team (Round 1: Team Review)
Three agents review the implementation in parallel. Each brings a different perspective: business requirements, system architecture, and engineering quality.
Business Analyst Agent
---
name: business-analyst
description: Requirements analysis, AC validation, and story quality review
permissionMode: bypassPermissions
model: opus
effort: high
skills:
- patterns-elixir
- patterns-ecto
- patterns-phoenix-web
---
Reviews implementation against acceptance criteria. Validates that the story's business requirements are met, checks for missing edge cases, and verifies that test cases cover all specified behaviors. Uses opus for deeper reasoning about requirements.
Systems Architect Agent
---
name: systems-architect
description: Architecture review, OTP compliance, fault tolerance, scalability
permissionMode: bypassPermissions
model: opus
effort: high
skills:
- patterns-elixir
- patterns-ecto
- patterns-phoenix-web
- patterns-elixir-otp
- patterns-elixir-integration
---
Reviews system design, OTP compliance, fault tolerance, and scalability. Checks supervision trees, GenServer patterns, database query performance, and cross-context boundaries. Loaded with OTP and integration pattern skills for deep architectural analysis.
Adversarial Round (Round 2)
After round 1 findings are fixed, the same BA, Architect, and Engineer agents re-run with an adversarial mindset — actively trying to break the code. A fourth agent joins: the Security Adversary, focused on defensive edge cases.
Security Adversary Agent
---
name: security-adversary
description: Adversarial security and resilience reviewer (READ-ONLY)
permissionMode: bypassPermissions
model: opus
effort: high
skills:
- owasp-security
- patterns-elixir
---
This agent is read-only -- it reviews code but never modifies it. It checks 10 defensive areas:
- Auth / Permissions / Trust Boundaries
- Data Loss / Corruption
- Idempotency / Retry Safety
- Race Conditions / Concurrency
- Degraded Dependencies
- Schema Drift / Migration Safety
- Observability Gaps
- Resource Exhaustion / DoS
- Tenant Leakage / Multi-tenancy
- Information Disclosure
Enhanced Review Pipeline
Every story passes through a two-stage review pipeline before the orchestrator can verify it. The review runs in an isolated forked context -- the orchestrator cannot interfere.
Team Review (3 agents) ──> VCA ──> Fix
                                    │
                                    ▼
Adversarial Review (4 agents) ──> VCA ──> Fix ──> Summary
VCA: Verify, Classify, Aggregate
After each review stage, a VCA pass deduplicates findings, assigns confidence scores, and uses git blame to classify each one as either in code touched by this story or in pre-existing code.
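The blame-based classification can be sketched roughly as follows. This is an assumed helper for illustration, not loopctl's actual implementation; the variable `STORY_COMMITS` is assumed to hold the full SHAs of this story's commits, one per line:

```shell
# Sketch only: classify a review finding by which commit last touched
# the flagged line. Not loopctl's actual code.
classify_finding() {
  local file=$1 line=$2 blame_commit
  # First token of the first porcelain line is the commit SHA
  blame_commit=$(git blame -L "$line,$line" --porcelain -- "$file" | awk 'NR==1 {print $1}')
  if grep -qxF "$blame_commit" <<<"$STORY_COMMITS"; then
    echo "in-story"      # introduced by this story's commits
  else
    echo "pre-existing"  # predates the story
  fi
}
```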
Findings Math
The review completion endpoint enforces strict arithmetic: every finding must be accounted for as either fixed or disproved. No deferrals, no tech debt placeholders.
# Findings math is API-enforced:
fixes_count + disproved_count == findings_count
# Example: 14 findings found
{
  "findings_count": 14,
  "fixes_count": 12,     # Bugs actually fixed
  "disproved_count": 2   # False positives with justification
}
# 12 + 2 == 14 ✓ (accepted by API)
# 12 + 1 == 13 ✗ (rejected — math doesn't add up)
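A client can pre-validate the arithmetic before calling review-complete. An illustrative helper (not part of loopctl; the API enforces the same invariant server-side):

```shell
# Illustrative pre-check mirroring the API rule:
# fixes_count + disproved_count must equal findings_count.
check_findings_math() {
  local findings=$1 fixes=$2 disproved=$3
  if [ $((fixes + disproved)) -eq "$findings" ]; then
    echo "ok"
  else
    echo "rejected: $fixes fixed + $disproved disproved != $findings found" >&2
    return 1
  fi
}
```

`check_findings_math 14 12 2` prints `ok`; `check_findings_math 14 12 1` fails, matching the 12 + 1 == 13 rejection above.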
Chain of Custody
The review agent calls /stories/:id/report to mark implementation done and /stories/:id/review-complete to record findings. Both endpoints enforce that the caller is not the same agent that implemented the story. The orchestrator then calls /stories/:id/verify as a third identity.
Anti-Deferral Policy
Fix everything unless it is physically impossible. No "will address in a future PR" -- every finding gets resolved before the story can be verified.
Enforcement Hooks
Hooks provide client-side enforcement that complements the server-side identity gates. They run automatically in the agent's environment.
Orchestrator Guardrail (PreToolUse)
Blocks file-writing tools when the orchestrator sentinel is active. The orchestrator should coordinate -- agents write code.
#!/bin/bash
# .claude/hooks/PreToolUse/orchestrator-guardrail.sh
# Blocks Edit/Write/MultiEdit when orchestrator sentinel is active.
# Orchestrator coordinates — agents write code.
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name // ""')
session_id=$(echo "$input" | jq -r '.session_id // ""')
if [ -f "$HOME/.claude/.orchestrator-active-$session_id" ]; then
  case "$tool_name" in
    Edit|Write|MultiEdit|NotebookEdit)
      echo "BLOCKED: Orchestrator cannot write code. Dispatch a sub-agent." >&2
      exit 2 ;;
  esac
fi
exit 0
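Hooks only run if they are registered. In Claude Code this is done in .claude/settings.json; a minimal sketch for the guardrail above (the exact schema may differ across Claude Code versions, so treat this as an assumption to verify against the hooks documentation):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit|NotebookEdit",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/PreToolUse/orchestrator-guardrail.sh" }
        ]
      }
    ]
  }
}
```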
Keep-Working (Stop Hook)
Prevents the orchestrator from stopping when stories remain in the backlog. Queries the loopctl API for pending work and forces continuation if any stories are still ready.
#!/bin/bash
# .claude/hooks/Stop/keep-working.sh
# Prevents orchestrator from stopping when stories remain.
# Queries loopctl API for pending work.
INPUT=$(cat)
SESSION_ID=$(echo "$INPUT" | jq -r '.session_id // ""')
if [ ! -f "$HOME/.claude/.orchestrator-active-$SESSION_ID" ]; then
  exit 0  # Not an orchestrator session
fi
# Check for remaining work
PENDING=$(curl -s -H "Authorization: Bearer $LOOPCTL_ORCH_KEY" \
  "$LOOPCTL_SERVER/api/v1/stories/ready?limit=1" | jq '.data | length')
if [ "$PENDING" -gt 0 ]; then
  exit 2  # Force continuation — stories remain
fi
exit 0
Project Setup
Two files configure loopctl integration in your project: CLAUDE.md (the project instruction file) and .mcp.json (the MCP server config).
CLAUDE.md Template
Add the loopctl loading instruction and chain-of-custody rules to your project's CLAUDE.md. This ensures every agent session starts with the current orchestration state.
# My Project
Stack: Elixir 1.18 / Phoenix 1.8, PostgreSQL, Oban
## CRITICAL: Load Orchestration State
mcp__loopctl__get_progress({project_id: "<uuid>"})
## Chain-of-Custody Enforcement
- POST /stories/:id/report — 409 if caller == assigned_agent
- POST /stories/:id/review-complete — 409 if caller == assigned_agent
- POST /stories/:id/verify — 409 if caller == assigned_agent
MCP Server Config
Add the loopctl MCP server to your project's .mcp.json. Use separate keys for orchestrator and agent roles.
{
  "mcpServers": {
    "loopctl": {
      "command": "npx",
      "args": ["loopctl-mcp-server"],
      "env": {
        "LOOPCTL_SERVER": "https://loopctl.com",
        "LOOPCTL_ORCH_KEY": "lc_your_orchestrator_key",
        "LOOPCTL_AGENT_KEY": "lc_your_agent_key"
      }
    }
  }
}
Next steps: Register a tenant, create API keys, import your stories, and start the orchestrator. See the Getting Started guide on the home page for the full setup sequence.
API Endpoints
All endpoints live under /api/v1. Authentication is via Bearer token in the Authorization header. The full OpenAPI 3.0 spec is available at /swaggerui.
Knowledge Wiki
GET /api/v1/knowledge/lint
Knowledge wiki health check. Returns stale articles, orphaned content, contradiction clusters, and coverage gaps. Supports configurable thresholds via query params: stale_days, min_links.
GET /api/v1/projects/:project_id/knowledge/lint
Project-scoped lint report. Same output as the tenant-wide lint endpoint but filtered to articles belonging to a specific project.
GET /api/v1/knowledge/pipeline
Self-learning pipeline status. Returns extraction health metrics: total articles, draft/published/archived counts, publish rate, recent extractions, and pipeline activity over time.
Additional Knowledge Endpoints
GET /api/v1/knowledge/search?q=...
Unified search across knowledge articles. Supports mode=keyword|semantic|combined for full-text search, pgvector embeddings, or both.
GET /api/v1/knowledge/context?q=...
Deep-read context endpoint. Returns articles with recency scoring and linked references -- designed for agent consumption during implementation.
GET /api/v1/knowledge/export
Export the entire knowledge wiki as an Obsidian-compatible ZIP file with Markdown articles, backlinks, and an index file.
GET /api/v1/knowledge/drafts
List draft articles awaiting review and publication. Supports limit and offset pagination.
POST /api/v1/knowledge/bulk-publish
Publish multiple draft articles in a single request. Accepts an array of article IDs. Role: user+.
POST /api/v1/knowledge/ingest
Submit a URL or raw content for LLM-powered knowledge extraction. Queues an Oban job that fetches, extracts, and creates draft articles. Role: orchestrator+.
POST /api/v1/knowledge/ingest/batch
Submit up to 50 content items for batch ingestion in a single request. Each item is processed independently. Role: orchestrator+.
GET /api/v1/knowledge/ingestion-jobs
List recent ingestion jobs with status, errors, and article counts. Role: orchestrator+.
GET /api/v1/knowledge/index
Lightweight catalog of all published articles with titles, tags, and category metadata.
MCP Tools
The loopctl MCP server provides 50 typed tools for Claude Code agents. Install via npm install loopctl-mcp-server and configure in .mcp.json. Agents should use MCP tools instead of raw curl to preserve chain-of-custody.
Knowledge Wiki Tools
knowledge_publish
Publish a draft article to the knowledge wiki. Requires the article ID. Transitions the article from draft to published status.
knowledge_drafts
List draft articles awaiting review. Returns titles, tags, source metadata, and creation timestamps. Supports pagination.
knowledge_lint
Run a health check on the knowledge wiki. Reports stale articles, orphaned content, contradiction clusters, and coverage gaps with configurable thresholds.
knowledge_export
Export the knowledge wiki as an Obsidian-compatible ZIP file. Contains Markdown articles with YAML frontmatter, backlinks between articles, and an index file.
knowledge_bulk_publish
Publish up to 100 draft articles in a single call. Requires LOOPCTL_USER_KEY env var (destructive operation).
knowledge_ingest
Submit a URL or raw content for LLM-powered knowledge extraction. Queues an async Oban job that fetches, extracts, and creates draft articles.
knowledge_ingest_batch
Batch-submit up to 50 content items for ingestion in a single call. Each item is processed independently with per-item result tracking.
knowledge_ingestion_jobs
List recent content ingestion jobs with state, errors, and article counts. Useful for monitoring the self-learning pipeline.
Knowledge Analytics Tools
Article usage tracking surfaces which knowledge agents actually read. Every search hit, direct fetch, and context retrieval is recorded as an immutable access event. Analytics endpoints aggregate those events into top-articles, per-article stats, per-agent usage, and unused-articles reports. All four endpoints and MCP tools require orchestrator+.
knowledge_analytics_top
GET /api/v1/knowledge/analytics/top-articles — Top accessed articles for the tenant. Optional params: limit (default 20, max 100), since_days (default 7), access_type (search, get, context, index).
knowledge_article_stats
GET /api/v1/knowledge/articles/:id/stats — Per-article counts: total accesses, unique agents, by-type breakdown, last access time, and the 10 most recent events.
knowledge_agent_usage
GET /api/v1/knowledge/analytics/agents/:agent_id — What a specific api_key reads: total reads, unique articles, access type breakdown, and top read articles.
knowledge_unused_articles
GET /api/v1/knowledge/analytics/unused-articles — Published articles with zero accesses in the window. Optional params: days_unused (default 30), limit (default 50, max 200).
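Invocation follows the mcp__loopctl__ naming used earlier in this guide; an illustrative call (the composed tool name and parameter values are assumptions, not taken from the package docs):

```
mcp__loopctl__knowledge_analytics_top({limit: 10, since_days: 30, access_type: "search"})
```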
Work Breakdown Tools
import_stories
Bulk-import epics and stories into a project. Pass merge: true to add to epics that already exist (without it, duplicates return 409). For large payloads, payload_path accepts an absolute file path instead of the inline object. Epic numbers are type-tolerant -- integers and numeric strings both work.
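A sketch of an import call. The payload field names below (payload, epics, stories, ac_count) are illustrative assumptions based on the description above, not a documented schema:

```
mcp__loopctl__import_stories({
  project_id: "<uuid>",
  merge: true,   // add to existing epics instead of getting a 409
  payload: {
    epics: [
      {epic_number: 3, title: "...", stories: [{title: "...", ac_count: 8}]}
    ]
  }
})
```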
create_story
Create a single story in an existing epic. Accepts either epic_id (UUID) or (project_id + epic_number) -- the latter is friendlier when you know the epic number but not the UUID.
backfill_story
Mark a story as verified when the work was completed outside loopctl (pre-onboarding, external delivery). Requires reason; accepts evidence_url and pr_number. Refused for any story with dispatch lineage -- you cannot use backfill as a chain-of-custody shortcut. Emits a story.backfilled webhook and records provenance in metadata.backfill.
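An illustrative backfill call using the documented parameters (the story_id and values are placeholders):

```
mcp__loopctl__backfill_story({
  story_id: "<uuid>",
  reason: "Delivered before loopctl onboarding",
  evidence_url: "https://example.com/pr/212",
  pr_number: 212
})
```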
Other Notable Tools
The full set of 50 tools covers projects, stories, epics, verification, artifacts, orchestrator state, webhooks, skills, token usage, analytics, and knowledge. See the npm package README for the complete list.