Do not edit anything in this directory. It is regenerated by
npm run build from the source skill files in the repo root (SKILL.md and skills/*.md). Any local edits here will be lost on the next build.
Each skill below declares its own tier via the tier: field in its source
frontmatter. The build groups them into sections; this file is rendered from
those declarations so the table of contents cannot drift from the actual
skill set on disk.
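
For illustration, a skill's source frontmatter might look like the sketch below. Only the tier: field itself is guaranteed by the build; the surrounding fields follow standard skill frontmatter, and the tier value shown is an assumed example, not a real tier name from this repo.

```yaml
---
name: verification-loop
description: Comprehensive verification system for agent coding sessions.
tier: core   # assumed example value; the build reads this field to group the skill into a section
---
```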
continuous-improvement — Installs structured self-improvement loops with instinct-based learning into Claude Code — research, plan, execute, verify, reflect, learn, iterate. On-demand or weekly analysis to save tokens. Supports multi-agent parallel analysis.
proceed-with-the-recommendation ⭐ — Orchestrator for all 7 Laws of AI Agent Discipline. Walks an agent-emitted recommendation list top-to-bottom under the 7 Laws — restate, route per item, verify before advancing, reflect at the end, close with the mandatory three-section block. Standalone with inline fallbacks; trigger phrases are matched by the companion hook, not enumerated here.
gateguard — Enforces Law 1 (Research Before Executing) of the 7 Laws of AI Agent Discipline. Fact-forcing gate that blocks Edit/Write/Bash (including MultiEdit) and demands concrete investigation (importers, data schemas, user instruction) before allowing the action. Measurably improves output quality by +2.25 points vs ungated agents. (A plausible hook wiring is sketched after the skill entries below.)

para-memory-files — Enforces Law 5 (Reflect After Every Session) and Law 7 (Learn From Every Session) of the 7 Laws of AI Agent Discipline by giving the agent a durable file-based memory it can read on resume and write at session end. File-based memory system using Tiago Forte's PARA method. Use this skill whenever you need to store, retrieve, update, or organize knowledge across sessions. Covers three memory layers: (1) Knowledge graph in PARA folders with atomic YAML facts (an example fact is sketched after the skill entries below), (2) Daily notes as raw timeline, (3) Tacit knowledge about user patterns. Also handles planning files, memory decay, weekly synthesis, and recall via qmd. Trigger on any memory operation: saving facts, writing daily notes, creating entities, running weekly synthesis, recalling past context, or managing plans.

tdd-workflow — Enforces Law 3 (One Thing at a Time) and Law 4 (Verify Before Reporting) of the 7 Laws of AI Agent Discipline. Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.

verification-loop — Enforces Law 4 (Verify Before Reporting) of the 7 Laws of AI Agent Discipline. A comprehensive verification system for agent coding sessions covering build, types, lint, tests, security, and diff with a PASS/FAIL report.
safety-guard — Enforces Law 3 (One Thing at a Time) of the 7 Laws of AI Agent Discipline by scoping edits to a directory and blocking destructive shell commands. Use this skill to prevent destructive operations when working on production systems or running agents autonomously.

strategic-compact — Enforces Law 5 (Reflect After Every Session) of the 7 Laws of AI Agent Discipline at phase boundaries. Suggests manual context compaction at logical intervals to preserve context through task phases rather than arbitrary auto-compaction.

token-budget-advisor — Enforces Law 2 (Plan Is Sacred) of the 7 Laws of AI Agent Discipline by making token-budget tradeoffs explicit before the response is composed. Offers the user an informed choice about how much response depth to consume before answering. Use this skill when the user explicitly wants to control response length, depth, or token budget. TRIGGER when: "token budget", "token count", "token usage", "token limit", "response length", "answer depth", "short version", "brief answer", "detailed answer", "exhaustive answer", "respuesta corta vs larga", "cuántos tokens", "ahorrar tokens", "responde al 50%", "dame la versión corta", "quiero controlar cuánto usas", or clear variants where the user is explicitly asking to control answer size or depth. DO NOT TRIGGER when: user has already specified a level in the current session (maintain it), the request is clearly a one-word answer, or "token" refers to auth/session/payment tokens rather than response size.

wild-risa-balance — Enforces Law 2 (Plan Is Sacred) of the 7 Laws of AI Agent Discipline. Decision-framing lens that pairs WILD generation with RISA execution when emitting recommendation lists. Not a runtime hook.
ralph — Enforces Law 6 (Iterate Means One Thing) of the 7 Laws of AI Agent Discipline at PRD scale. Ralph is an autonomous AI agent loop that runs repeatedly until all PRD items are complete. Converts PRDs to executable JSON (a sample item is sketched after the skill entries below), implements stories iteratively with quality checks, and tracks progress.

superpowers — Law activator for the 7 Laws of AI Agent Discipline. Routes tasks to the correct Law-aligned specialist (brainstorming → Law 2, writing-plans → Law 2, test-driven-development → Law 3+4, verification-before-completion → Law 4, etc.) so the right discipline fires automatically instead of the agent skipping a step. Not a peer skill — a dispatcher for the others.

workspace-surface-audit — Enforces Law 1 (Research Before Executing) of the 7 Laws of AI Agent Discipline. Audits the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommends the highest-value continuous-improvement-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.
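
For the gateguard entry above, one plausible wiring is a Claude Code PreToolUse hook in settings.json. This is a sketch under that assumption: the matcher and hook shape follow Claude Code's documented hooks format, but the script path is hypothetical and gateguard's actual mechanism may differ.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|MultiEdit|Write|Bash",
        "hooks": [
          {
            "type": "command",
            "command": "node scripts/gateguard-check.js"
          }
        ]
      }
    ]
  }
}
```

Here the command would inspect the pending tool call and exit with Claude Code's blocking status to deny it until the required investigation has been shown; the real gate may instead live inside the skill itself.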
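The para-memory-files entry mentions atomic YAML facts stored in PARA folders. A minimal sketch of one such fact, with hypothetical path and field names (the skill's real schema may differ):

```yaml
# areas/claude-code/facts/docs-are-generated.yaml  (hypothetical path and fields)
fact: The generated docs directory is rebuilt by npm run build; local edits are lost.
entity: build-pipeline
source: daily-note        # where the fact was first captured
confidence: high          # plausible input for memory decay and weekly synthesis
```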
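And for ralph, a rough sketch of what a PRD converted to executable JSON might contain. The structure and field names here are guesses based only on the description above, not ralph's actual format:

```json
{
  "prd": "user-onboarding",
  "stories": [
    {
      "id": "US-1",
      "title": "Email signup flow",
      "status": "pending",
      "qualityChecks": ["tests pass", "lint clean", "types check"]
    },
    {
      "id": "US-2",
      "title": "Welcome email",
      "status": "pending",
      "qualityChecks": ["tests pass"]
    }
  ]
}
```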