Problem
- Claude provider workflows can only reference built-in Claude model names directly.
- Teams routing Claude Code through Anthropic-compatible gateways need stable friendly aliases, provider-specific base URLs, credentials, and optional headers without hardcoding long model IDs in every workflow.
- This comes up for users who run multiple Claude-compatible gateways or local providers and want behavior similar to Pi agent custom model aliases.
Proposed Solution
Add a Claude custom model registry loaded from ~/.archon/claude-models.json. The registry maps provider/name aliases to provider model IDs and injects Claude Code environment variables for the selected provider.
Supported non-sensitive config shape:
{
  "providers": {
    "gateway": {
      "baseUrl": "https://llm-gateway.example.com",
      "apiKey": "YOUR_GATEWAY_API_KEY",
      "headers": {
        "X-Team": "platform"
      },
      "models": [
        { "id": "openai/gpt-5.4", "name": "gpt" }
      ]
    }
  }
}
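As a minimal sketch of how alias resolution might work against the config shape above (the `resolve_alias` function and its return shape are illustrative assumptions, not the final API):

```python
def resolve_alias(alias: str, registry: dict) -> tuple[str, dict]:
    """Resolve a "provider/name" alias to (provider model ID, provider config)."""
    provider_key, _, model_name = alias.partition("/")
    provider = registry["providers"][provider_key]
    for model in provider["models"]:
        if model["name"] == model_name:
            return model["id"], provider
    raise KeyError(f"unknown model alias: {alias}")

# Inline copy of the example registry; in practice this would be parsed
# from ~/.archon/claude-models.json with json.load().
registry = {
    "providers": {
        "gateway": {
            "baseUrl": "https://llm-gateway.example.com",
            "apiKey": "YOUR_GATEWAY_API_KEY",
            "headers": {"X-Team": "platform"},
            "models": [{"id": "openai/gpt-5.4", "name": "gpt"}],
        }
    }
}

model_id, provider = resolve_alias("gateway/gpt", registry)
print(model_id)  # openai/gpt-5.4
```

Workflows keep referencing the short alias (`gateway/gpt`), while the long provider model ID lives in one place.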
User Flow
Before (current)
User Archon Claude provider Claude Code
──── ────────────────────── ───────────
sets model ───▶ passes model string directly ───▶ uses default Claude endpoint
[!] gateway base URL, credentials, and aliases must be handled outside Archon
After (proposed)
User Archon Claude provider Claude Code
──── ────────────────────── ───────────
defines ~/.archon/claude-models.json
uses gateway/gpt ───────────────▶ resolves alias to model ID
[injects provider env] ───────▶ calls gateway endpoint
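The "injects provider env" step could translate the matched provider entry into the environment variables named in the Definition of Done. A sketch, assuming `apiKey` maps to `ANTHROPIC_API_KEY` and headers are encoded as newline-separated `Name: value` pairs for `ANTHROPIC_CUSTOM_HEADERS` (the header encoding is an assumption, not a confirmed Claude Code contract):

```python
def provider_env(provider: dict) -> dict[str, str]:
    """Build the env vars to inject into the Claude Code subprocess."""
    env = {"ANTHROPIC_BASE_URL": provider["baseUrl"]}
    if "apiKey" in provider:
        # A token-based provider could instead set ANTHROPIC_AUTH_TOKEN.
        env["ANTHROPIC_API_KEY"] = provider["apiKey"]
    if provider.get("headers"):
        # Assumed encoding: one "Name: value" pair per line.
        env["ANTHROPIC_CUSTOM_HEADERS"] = "\n".join(
            f"{name}: {value}" for name, value in provider["headers"].items()
        )
    return env

env = provider_env({
    "baseUrl": "https://llm-gateway.example.com",
    "apiKey": "YOUR_GATEWAY_API_KEY",
    "headers": {"X-Team": "platform"},
})
```

Only the provider matched by the alias contributes variables, so switching providers per node is just a different alias.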
Alternatives Considered
| Alternative | Pros | Cons | Why not chosen |
| --- | --- | --- | --- |
| Put every custom provider env var directly in workflow nodes | No new registry file | Repeats secrets and model IDs across workflows | Harder to maintain and easier to leak sensitive data |
| Use only global `.env` variables | Simple for one provider | Cannot switch providers per node/model alias | Does not support multi-provider workflows cleanly |
| Add aliases to `.archon/config.yaml` | Single config file | Mixes provider credentials into general config | Separate model catalog mirrors the Pi pattern and keeps docs clearer |
Scope
- Package(s) likely affected: `providers`, `docs`, `tests`
- Breaking change? No
- Database changes needed? No
- New external dependencies? No
Security Considerations
- New permissions/capabilities? No
- New external network calls? No
- Secrets/tokens handling? Yes. The feature reads credentials from a user-owned global config file and injects them only into the Claude Code subprocess for matched custom models. Documentation and example files must use placeholders only and must not include real keys or URLs.
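To keep the injected credentials scoped to the matched Claude Code invocation, the launcher can pass an explicit env dict to the subprocess instead of mutating the parent environment. A runnable sketch (it demonstrates the mechanism with the Python interpreter as the child; in Archon the command would be the Claude Code CLI):

```python
import os
import subprocess
import sys

def run_with_provider_env(cmd: list[str], provider_env: dict[str, str]) -> subprocess.CompletedProcess:
    """Run a command with provider secrets visible only to that child process."""
    # Merge for the child only; os.environ itself is never mutated,
    # so other workflow nodes never see these credentials.
    child_env = {**os.environ, **provider_env}
    return subprocess.run(cmd, env=child_env, capture_output=True, text=True)

result = run_with_provider_env(
    [sys.executable, "-c", "import os; print(os.environ['ANTHROPIC_BASE_URL'])"],
    {"ANTHROPIC_BASE_URL": "https://llm-gateway.example.com"},
)
print(result.stdout.strip())  # https://llm-gateway.example.com
```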
Definition of Done
- `provider/name` aliases from `~/.archon/claude-models.json` resolve to provider model IDs
- `ANTHROPIC_BASE_URL`, `ANTHROPIC_API_KEY` or `ANTHROPIC_AUTH_TOKEN`, and optional `ANTHROPIC_CUSTOM_HEADERS` are passed to Claude Code