Thank you for your interest in contributing! This guide covers how to add skills, agents, and hooks, and how to extend TLDR.
- Development Setup
- Adding Skills
- Creating Agents
- Developing Hooks
- Extending TLDR
- Code Style
- Testing
- Pull Request Process
## Development Setup

```bash
# Clone your fork of the repository
cd Continuous-Claude-v3/opc

# Install Python dependencies
uv sync

# Install hook dependencies (TypeScript)
cd ../.claude/hooks && npm install && npm run build && cd ../../opc

# Verify installation
claude
> /help
```

## Adding Skills

Skills are modular capabilities defined in `.claude/skills/<skill-name>/SKILL.md`.
```
.claude/skills/my-skill/
├── SKILL.md      # Main skill definition (required)
└── templates/    # Optional templates
```
`SKILL.md` template:

```markdown
# my-skill

Short description of what this skill does.

## When to Use

- Trigger condition 1
- Trigger condition 2

## Workflow

1. Step one
2. Step two
3. Step three

## Commands

| Command | Description |
|---------|-------------|
| `/my-skill` | Main invocation |
| `/my-skill --option` | With options |

## Example

User: "do my-skill thing"
Claude: [executes skill workflow]
```

Add to `.claude/skills/skill-rules.json`:
```json
{
  "skill": "my-skill",
  "keywords": ["my-skill", "specific trigger"],
  "intentPatterns": ["do.*my.*skill"],
  "priority": 50
}
```

```
> /skill-developer
```

This interactive skill walks you through creating new skills.
## Creating Agents

Agents are specialized AI workers defined in `.claude/agents/<agent-name>.md`.
### Frontmatter

```yaml
---
name: <name of agent>
description: <one-line description of agent>
model: <preferred model: opus | sonnet | haiku>
tools: <list tools agent needs (optional): Read | Grep | Glob | etc.>
---
```

### Prompt

```markdown
<system>
You are the agent-name agent for Continuous Claude.

Your job is to:
1. Specific task
2. Another task

## Constraints
- Constraint 1
- Constraint 2

## Output Format
Return results as:
[format description]
</system>
```

### Agent Types

| Type | Model | Use Case |
|---|---|---|
| Orchestrator | Opus | Coordinate multi-agent workflows |
| Researcher | Sonnet/Opus | Explore codebase or external sources |
| Implementer | Opus | Write/modify code |
| Reviewer | Opus | Analyze and critique |
| Specialist | Varies | Domain-specific tasks |
Agents are spawned via the Task tool:
```typescript
Task({
  subagent_type: "my-agent",
  prompt: "Do the thing",
  description: "Doing the thing"
})
```

## Developing Hooks

Hooks intercept Claude Code at lifecycle points. They live in `.claude/hooks/`.
| Lifecycle | When it runs |
|---|---|
| `SessionStart` | Session begins |
| `SessionEnd` | Session ends |
| `PreToolUse` | Before tool execution |
| `PostToolUse` | After tool execution |
| `UserPromptSubmit` | User sends a message |
| `PreCompact` | Before context compaction |
| `SubagentStart` | Subagent spawned |
| `SubagentStop` | Subagent completes |
| `Stop` | LLM generation stops |
Example bash hook:

```bash
#!/bin/bash
# .claude/hooks/my-hook.sh

# Read input from stdin (JSON)
INPUT=$(cat)

# Parse relevant fields
TOOL_NAME=$(echo "$INPUT" | jq -r '.tool_name // empty')
CONTENT=$(echo "$INPUT" | jq -r '.tool_input.content // empty')

# Your logic here
if [[ "$TOOL_NAME" == "Read" ]]; then
  # Do something
  echo '{"result": "modified content"}'
else
  # Pass through
  echo "$INPUT"
fi
```

Example TypeScript hook:

```typescript
// .claude/hooks/src/my-hook.ts
interface HookInput {
  tool_name?: string;
  tool_input?: Record<string, unknown>;
  session_id?: string;
}

interface HookOutput {
  result?: string;
  permissionDecision?: 'allow' | 'deny';
  reason?: string;
}

async function hook(input: HookInput): Promise<HookOutput> {
  // Your logic here
  return { result: 'success' };
}

// Entry point
const input: HookInput = JSON.parse(
  require('fs').readFileSync(0, 'utf-8')
);
hook(input).then(output => console.log(JSON.stringify(output)));
```
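A hook is just a command that reads a JSON payload on stdin and prints its result on stdout, so other languages work too. Below is a rough Python sketch of the same pass-through hook; the file name is hypothetical and only for illustration (the bundled hooks are bash and TypeScript).

```python
#!/usr/bin/env python3
# .claude/hooks/my-hook.py  (hypothetical file name, for illustration only)
import json
import sys


def main() -> None:
    # Read the hook input (JSON) from stdin
    payload = json.load(sys.stdin)
    tool_name = payload.get("tool_name", "")

    if tool_name == "Read":
        # Do something with the event
        print(json.dumps({"result": "modified content"}))
    else:
        # Pass through unchanged
        print(json.dumps(payload))


if __name__ == "__main__":
    main()
```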
Add to `.claude/settings.json`:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          {
            "command": ".claude/hooks/my-hook.sh",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}
```

To create hooks interactively:

```
> /hook-developer
```

Or debug existing hooks:

```
> /debug-hooks
```
## Extending TLDR

TLDR is the 5-layer code analysis system in `opc/packages/tldr-code/`.

```
tldr/
├── api.py               # Unified API (all layers)
├── ast_extractor.py     # L1: AST extraction
├── hybrid_extractor.py  # Multi-language support
├── cross_file_calls.py  # L2: Call graph
├── cfg_extractor.py     # L3: Control flow
├── dfg_extractor.py     # L4: Data flow
└── pdg_extractor.py     # L5: Program dependence
```
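For orientation, the kind of work L1 does can be sketched with tree-sitter directly. This is not tldr-code's API (see `ast_extractor.py` and `api.py` for the real entry points); the sketch assumes `py-tree-sitter` 0.22+ and the `tree-sitter-python` grammar package are installed.

```python
# Illustrative L1-style extraction using tree-sitter directly (not the tldr-code API).
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))
tree = parser.parse(b"def add(a, b):\n    return a + b\n")

# Collect top-level function definitions with their source locations.
for node in tree.root_node.children:
    if node.type == "function_definition":
        name = node.child_by_field_name("name")
        print(name.text.decode(), node.start_point, node.end_point)
```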
To add support for a new language:

- Add the tree-sitter grammar:

  ```bash
  uv pip install tree-sitter-<language>
  ```

- Update `hybrid_extractor.py`:

  ```python
  LANGUAGE_CONFIGS = {
      'mylang': LanguageConfig(
          extension='.ml',
          tree_sitter_lang='mylang',
          function_patterns=['function_definition'],
          class_patterns=['class_definition']
      )
  }
  ```

- Add tests in `tests/test_mylang.py` (a sketch follows below)
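A test for the new language might look roughly like this sketch. The `extract_functions` helper, its signature, and the module path are assumptions; copy the structure of an existing test in `tests/` rather than this verbatim.

```python
# tests/test_mylang.py -- sketch only; helper name, signature, and module path are assumed.
import pytest

hybrid = pytest.importorskip("tldr.hybrid_extractor")  # assumed module path


def test_mylang_functions_are_extracted(tmp_path):
    sample = tmp_path / "sample.ml"
    sample.write_text("function greet(name) { return name; }\n")

    # Hypothetical entry point -- replace with the extractor's real API.
    functions = hybrid.extract_functions(str(sample), language="mylang")
    assert any(f.name == "greet" for f in functions)
```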
Run the TLDR tests:

```bash
cd opc/packages/tldr-code
source .venv/bin/activate
python -m pytest tests/ -v
```

## Code Style

### Python

- Use type hints
- Format with `ruff`
- Follow PEP 8
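For example, a fully type-hinted helper in the expected style (illustrative only):

```python
# Illustrative style example: type hints, PEP 8 naming, ruff-friendly formatting.
from collections.abc import Iterable


def count_matching(lines: Iterable[str], keyword: str) -> int:
    """Return how many lines contain the given keyword."""
    return sum(1 for line in lines if keyword in line)
```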
### TypeScript

- Use strict mode
- Format with Prettier
- Follow existing patterns
### Markdown

- Use CommonMark
- Keep lines under 100 chars
- Use tables for structured data
## Testing

```bash
# TLDR tests
cd opc/packages/tldr-code && pytest tests/ -v

# Hook tests (TypeScript)
cd .claude/hooks && npm test
```

Test skills manually in Claude:

```
# Manual test in Claude
> /my-skill

# Check skill-rules.json triggers
> "trigger phrase that should activate my-skill"
```

Test hooks directly from the shell:

```bash
# Test hook directly
echo '{"tool_name": "Read", "tool_input": {"file_path": "test.py"}}' | .claude/hooks/my-hook.sh
```

## Pull Request Process

- Fork the repository
- Create a branch: `git checkout -b feature/my-feature`
- Make changes following the guidelines above
- Test your changes
- Commit with a clear message
- Push to your fork
- Open a PR with:
  - Description of changes
  - Testing done
  - Any breaking changes
### PR Checklist

- Tests pass
- Documentation updated
- No breaking changes (or documented)
- Follows code style
## Getting Help

- Open an issue for bugs or feature requests
- Use `/help` in Claude for usage questions
- Check existing skills/agents for patterns
Thank you for contributing!