Prompt Compiler

License: Apache 2.0 · Python 3.10+

Prompt Compiler Cover

Prompt Compiler turns vague requests into structured prompts, execution plans, and policy-checked workflows — so you can go from idea to safe, usable AI output in seconds.

Try it now at prcompiler.com | VS Code extension | GitHub artifacts


What It Does

Type any request — a feature idea, a bug report, a research question — and Prompt Compiler produces:

  • System Prompt — persona, role, constraints, output format
  • User Prompt — structured task definition
  • Execution Plan — step-by-step decomposition
  • Expanded Prompt — ready to paste into any LLM
  • Policy Layer — risk level, allowed tools, execution mode, data sensitivity

Key Features

Core Prompt Compiler

The engine analyzes your intent and produces four output layers:

  • System Prompt: persona, role, constraints, and output format rules for the target AI
  • User Prompt: structured task definition derived from your input
  • Execution Plan: decomposed steps based on your request
  • Expanded Prompt: a combined prompt ready to paste into chat-based LLMs

Switch between the output tabs in the UI to inspect each layer, and copy any result with one click.

Prompt Compiler - Main View


Conservative Mode

The Conservative toggle controls how aggressively the compiler interprets your input.

| State | Behavior |
| --- | --- |
| ON (default) | Stays grounded in what you actually wrote. No invented libraries, fake APIs, or made-up requirements. Missing information leads to clarification instead of fabrication. |
| OFF | Expands more aggressively, fills gaps, and leans into richer prompt optimization. |

The toggle is available in both the web app and the browser extension, and its state is persisted locally.
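The difference between the two states can be sketched as a toy function (illustrative only; `compile_request` and its single-gap heuristic are assumptions for this example, not the project's actual API):

```python
def compile_request(text: str, conservative: bool = True) -> dict:
    """Toy illustration of the Conservative toggle, not the real compiler."""
    # Pretend the only detectable gap is a missing language hint.
    questions = []
    if "language" not in text.lower():
        questions.append("Which programming language should be used?")
    if conservative and questions:
        # Conservative: surface the gap instead of inventing an answer.
        return {"status": "needs_clarification", "questions": questions}
    # Non-conservative: proceed and record what was assumed.
    return {"status": "ok", "assumptions": questions}
```

The point is where the gap ends up: conservative mode routes it back to you as a question, while the default mode fills it and moves on.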

Offline Compiler - Heuristics Mode


Policy-Aware Prompt Workflows

Prompt Compiler now also exposes a structured IR and policy layer for more sensitive or execution-heavy requests.

  • Risk Level: low, medium, high
  • Execution Mode: advice_only, human_approval_required, auto_ok
  • Data Sensitivity: public, internal, confidential, restricted
  • Allowed / Forbidden Tools
  • Sanitization Rules

This is not a separate product. It is a new capability inside Prompt Compiler that helps you inspect risky requests before you run them downstream in coding, research, or automation flows.
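These policy fields map naturally onto small enumerations. A minimal sketch of how a consumer might gate on them (the enum values come from the list above; `PolicyLayer` and `requires_review` are illustrative, not the project's actual models):

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class ExecutionMode(Enum):
    ADVICE_ONLY = "advice_only"
    HUMAN_APPROVAL_REQUIRED = "human_approval_required"
    AUTO_OK = "auto_ok"


class DataSensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"


@dataclass
class PolicyLayer:
    risk_level: RiskLevel
    execution_mode: ExecutionMode
    data_sensitivity: DataSensitivity
    allowed_tools: list = field(default_factory=list)
    forbidden_tools: list = field(default_factory=list)

    def requires_review(self) -> bool:
        # Anything that is not low-risk, public, and auto-approved
        # deserves a human look before running downstream.
        return not (
            self.risk_level is RiskLevel.LOW
            and self.execution_mode is ExecutionMode.AUTO_OK
            and self.data_sensitivity is DataSensitivity.PUBLIC
        )
```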


Agent Generator

Describe a role or autonomous task, and the Agent Generator produces a complete, constraint-driven system prompt for an AI agent.

  • Single Agent: generates a focused, single-role agent prompt with boundary conditions
  • Multi-Agent Swarm: toggle the multi-agent mode to generate a cooperative swarm-style prompt for specialized workers

Export Button

After generating an agent, the Export section can turn the output into framework-ready code:

| Framework | Output |
| --- | --- |
| Claude SDK | Python code using the anthropic client |
| LangChain | Python agent with ChatPromptTemplate |
| LangGraph | Python graph definition with node/edge structure |

Agent Generator


Skill & Tool Generator

Describe a capability in plain English, and the Skill Generator translates it into a structured tool definition.

  • Produces Input Schema and Output Schema in valid JSON
  • Generates a stringified skill definition ready for LangChain, OpenAI functions, or custom agent frameworks
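For illustration, a generated skill definition might look like the following (the example skill and its field names are hypothetical, not the generator's literal output):

```python
import json

# Hypothetical skill definition; field names are illustrative.
skill = {
    "name": "summarize_url",
    "description": "Fetch a web page and return a short summary.",
    "input_schema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"],
    },
    "output_schema": {
        "type": "object",
        "properties": {"summary": {"type": "string"}},
    },
}

# The "stringified" form mentioned above is simply the JSON-serialized
# definition, which frameworks can parse back into a tool spec.
stringified = json.dumps(skill)
```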

Export Button

After generating a skill, the Export section can wrap the output in framework-specific code:

| Format | Output |
| --- | --- |
| LangChain Tool | Python @tool function plus JSON schema |
| Claude tool_use | JSON config compatible with Anthropic's tools parameter |
| Claude MCP Tool Stub | Runnable FastMCP server.py + README.md, ready to register with Claude Code, Cursor, or Claude Desktop |

Skills Generator Interface


Claude Agent Packs Beta

The Agent Packs sidebar turns a short project brief (project type, stack, goal) into a runnable, repo-ready bundle of Claude assets — not just a prompt. Pick a pack type, preview the files, copy individual snippets, or download the whole thing as a .zip.

This feature is currently in beta: it is designed to give you a strong starting point quickly, but you should still review every generated file before using it in production.

What the beta means in practice:

  • Fast scaffolding, not blind automation - expect useful repo memory, settings, agents, and workflow files, then adjust them for your own policies and edge cases.
  • Best for early repo setup and internal experimentation - especially when you want to bootstrap Claude Code conventions without hand-writing every asset.
  • Human review is required - check prompts, permissions, deny rules, CI assumptions, and generated documentation before shipping.
  • Optional client API key fallback in the web UI - if PROMPTC_SERVER_API_KEY is already configured on the web server, that server-side key wins. The UI field is mainly for local debugging or fallback calls when the proxy intentionally has no server key.

Four pack types are available out of the box, all served from a single Claude-first endpoint:

| Pack Type | What It Emits | Use It For |
| --- | --- | --- |
| Project Pack | CLAUDE.md, .claude/settings.json, .github/workflows/claude.yml | Bootstrapping Claude Code in a new repo with policy, deny rules, and CI on day one |
| Subagent Bundle | One or more .claude/agents/&lt;role&gt;.md files with name, description, and tools frontmatter | Giving Claude Code a team of specialized reviewers / builders that it can dispatch to |
| PR Reviewer Pack | A reviewer subagent + .github/workflows/claude.yml | Wiring Claude into pull request review automation |
| MCP Tool Stub | A FastMCP server.py + README.md scaffolded from a skill definition | Standing up an MCP server that exposes a custom tool to any MCP client |

Risk-aware generation. Each request takes a risk_mode:

| Mode | Behavior |
| --- | --- |
| balanced (default) | Sensible defaults: typical deny list, common allowed tools, gentle ask gates for destructive commands |
| strict | Tightens deny lists, narrows allowed tools, drops optimistic defaults. Pick this when adopting Claude Code in a repo with sensitive data or untrusted contributors |
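The effect of risk_mode can be sketched as a policy transform. The deny-list entries below echo this repo's own deny rules; the tool names and ask gates are illustrative assumptions, not the generator's actual defaults:

```python
# Balanced defaults (deny entries echo this repo's own rules; tool
# names and ask gates are illustrative assumptions).
BALANCED = {
    "deny": [".env*", "secrets/**"],
    "allowed_tools": ["read", "edit", "bash", "web_search"],
    "ask_before": ["git push"],
}


def apply_risk_mode(mode: str = "balanced") -> dict:
    policy = {key: list(value) for key, value in BALANCED.items()}
    if mode == "strict":
        policy["deny"] += ["users.db", "**/*.pem"]    # widen the deny list
        policy["allowed_tools"] = ["read", "edit"]    # narrow tool access
        policy["ask_before"] += ["fly deploy"]        # add more ask gates
    return policy
```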

Provider-agnostic core. The pack generator is built around an AgentPackAdapter Protocol. Claude-specific logic lives in app/adapters/claude_code.py; the IR layer in app/adapters/agent_packs.py stays neutral. Cursor / Codex / other-provider adapters can plug in later without touching the core.
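In Python terms, that seam can be sketched roughly like this (the method name and signature are assumptions for illustration, not the project's actual Protocol):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class AgentPackAdapter(Protocol):
    """Sketch of the provider seam; the real interface may differ."""

    def render(self, manifest: dict) -> dict:
        """Turn a neutral pack manifest into provider-specific files."""
        ...


class ClaudeCodeAdapter:
    """Toy Claude adapter; the real one emits full settings and workflows."""

    def render(self, manifest: dict) -> dict:
        project = manifest.get("project_type", "Project")
        return {"CLAUDE.md": f"# {project}\n"}
```

Because the core only depends on the Protocol, a Cursor or Codex adapter is just another class with a matching `render`, registered without touching the IR layer.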

API surface (both endpoints require an API key):

```bash
# Generate the manifest (preview-friendly: file paths, contents, kinds, preview order)
curl -X POST https://api.example.com/agent-packs/claude \
  -H "x-api-key: $PROMPTC_API_KEY" \
  -H "content-type: application/json" \
  -d '{
    "project_type": "FastAPI service",
    "stack": "Python 3.12, FastAPI, PostgreSQL",
    "goal": "Add a Claude-reviewed PR workflow with deny rules for .env",
    "pack_type": "project-pack",
    "risk_mode": "strict"
  }'

# Same payload, returns a deflate-compressed .zip ready to drop into a repo
curl -X POST https://api.example.com/agent-packs/claude/download \
  -H "x-api-key: $PROMPTC_API_KEY" \
  -H "content-type: application/json" \
  -d '{...same body...}' \
  --output claude-project-pack.zip
```

Repo-native adoption. This repo eats its own dog food: the Compiler itself ships a CLAUDE.md, a hardened .claude/settings.json (denies .env*, secrets/**, users.db, web/.env.local; gates git push, fly:, railway: behind explicit confirmation), four ready-to-dispatch subagents in .claude/agents/ (compiler-architect, frontend-polisher, mcp-integrator, prompt-safety-reviewer), and a claude.yml workflow for hosted Claude Code review on PRs.


Token Optimizer

Compresses your prompt by roughly 20-30% without losing meaning, logic, or variables. Useful near context-window limits.
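As a crude illustration of the idea (not the Optimizer's actual algorithm), a compression pass might collapse whitespace and drop filler words while preserving the task words and variables:

```python
import re

# Filler vocabulary is an illustrative assumption.
FILLER = {"please", "kindly", "basically", "actually", "really", "very"}


def toy_compress(prompt: str) -> str:
    # Collapse runs of whitespace, then drop common filler words while
    # keeping the task content, numbers, and punctuation structure.
    words = re.sub(r"\s+", " ", prompt).strip().split(" ")
    return " ".join(w for w in words if w.lower().strip(",.") not in FILLER)
```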

Token Optimizer Interface


Benchmark Playground

A/B test raw prompts against compiled versions:

  • Raw vs. Compiled side-by-side comparison
  • Auto-Judge scoring for relevance, quality, and clarity
  • Visual Metrics including improvement percentages
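The improvement percentage in the metrics can be computed as a relative gain over the raw prompt's judge score (the exact formula is an assumption about the playground's math, shown for illustration):

```python
def improvement_pct(raw_score: float, compiled_score: float) -> float:
    # Relative gain of the compiled prompt's judge score over the raw one.
    if raw_score == 0:
        return float("inf") if compiled_score > 0 else 0.0
    return (compiled_score - raw_score) / raw_score * 100.0
```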

Benchmark Playground Interface


RAG & Knowledge Base

Upload project files such as PDF, Markdown, text, or code to ground Prompt Compiler in your own domain context.

  • Context Manager for drag-and-drop reference files
  • Strategist/Critic flow for injecting grounded context and catching hallucinated claims
  • Local SQLite-backed retrieval for fast reuse without re-uploading
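The shape of that retrieval layer can be sketched with stdlib sqlite3 and an FTS5 virtual table (table and column names here are illustrative, not the project's actual schema; this also assumes your SQLite build ships FTS5, as CPython's bundled one usually does):

```python
import sqlite3

# In-memory index for the sketch; the real index persists on disk.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE chunks USING fts5(path, content)")
con.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [
        ("docs/setup.md", "Install the backend with pip and run uvicorn."),
        ("docs/rag.md", "Uploaded files are chunked and indexed for retrieval."),
    ],
)


def search(query: str, limit: int = 5) -> list:
    # FTS5 MATCH does the full-text lookup; rank orders by relevance.
    rows = con.execute(
        "SELECT path FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return [row[0] for row in rows]
```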

GitHub Workflow Artifacts

Prompt Compiler can render deterministic markdown artifacts from natural language requests:

  • Issue Brief
  • Implementation Checklist
  • PR Review Brief

Example:

python -m cli.main github render --type pr-review-brief --from-file prompt.txt

VS Code Extension

The VS Code package lives in integrations/vscode-extension. Once the Marketplace publisher is claimed and the first vscode-v* tag is pushed, the extension will be installable directly from the VS Code Marketplace.

Until then, install from source (see the extension README) or use the .vsix artifact attached to the Publish VS Code Extension workflow run.

Features:

  • Commands: PromptC: Compile Selection, PromptC: Compile File, PromptC: Recompile Last
  • Activity Bar sidebar for backend status, latest compile summary, history, and favorites
  • Panel tabs: Intent, Policy, Prompts, Raw JSON
  • Artifact actions: copy, insert into editor, save favorite
  • Settings: promptc.apiBaseUrl, promptc.conservativeMode, promptc.requestTimeoutMs, promptc.autoOpenPanel, promptc.historySize

API keys are stored in VS Code secret storage, not workspace settings.


Installation

```bash
git clone https://github.com/madara88645/Compiler.git
cd Compiler

# Backend: pyproject.toml is the source of truth
python -m pip install -e .[dev,docs]

# Frontend
cd web
npm ci
cd ..
```

Environment Setup

cp .env.example .env

Core variables:

```bash
OPENAI_API_KEY=sk-your-actual-key
OPENAI_BASE_URL=https://api.openai.com
GROQ_API_KEY=gsk_your_groq_key

# Prompt compiler mode: "conservative" (the default) or "default"
PROMPT_COMPILER_MODE=conservative

# Optional auth hardening
ADMIN_API_KEY=replace-me
PROMPTC_REQUIRE_API_KEY_FOR_ALL=false

# Optional RAG storage controls
PROMPTC_UPLOAD_DIR=.promptc_uploads
PROMPTC_RAG_ALLOWED_ROOTS=
```

Notes:

  • Leave PROMPTC_REQUIRE_API_KEY_FOR_ALL=false for backwards-compatible local development.
  • /compile/fast, generator routes, and RAG mutation routes require an API key.
  • If you set PROMPTC_RAG_ALLOWED_ROOTS, only files inside those roots can be ingested by path.
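The intent of that roots check can be sketched with pathlib (illustrative logic, not the project's actual implementation):

```python
from pathlib import Path


def is_allowed(path: str, allowed_roots: list) -> bool:
    # A file may only be ingested by path if it resolves inside one of
    # the allowed roots; an empty setting means no restriction.
    if not allowed_roots:
        return True
    resolved = Path(path).resolve()
    for root in allowed_roots:
        try:
            resolved.relative_to(Path(root).resolve())
            return True
        except ValueError:
            continue
    return False
```

Resolving before comparing is what defeats `../` traversal: a path that textually starts under a root but escapes it after resolution is rejected.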

Running the App

Windows (one-click): double-click start_app.bat

Manual:

```bash
# Terminal 1 - Backend
python -m uvicorn api.main:app --reload --port 8080

# Terminal 2 - Frontend
cd web
npm run dev
```

Open http://localhost:3000.


How To Use

  1. Type your idea, prompt, task, or workflow request into the input box.
  2. Click Generate.
  3. Review the output tabs: Intent, System, User, Plan, Expanded, JSON, Quality.
  4. Use Conservative mode when you want grounded output.
  5. If the task is sensitive, inspect the policy layer before using the result downstream.
  6. Use Agent, Skill, Optimizer, Benchmark, and RAG surfaces as needed.

Project Structure

```
api/            FastAPI endpoints (compile, agent-generator, skills-generator, optimize, rag)
app/
  compiler.py       Core compiler pipeline
  emitters.py       Prompt rendering layer
  models_v2.py      IR v2 and policy contract
  llm_engine/       HybridCompiler and provider logic
  heuristics/       Offline parsing, safety, risk, and policy inference
  rag/              SQLite FTS5 RAG index and retrieval
  testing/          Regression runner
  github_artifacts.py
web/
  app/
    page.tsx                    Main compiler UI
    agent-generator/            Agent Generator page
    skills-generator/           Skill Generator page
    benchmark/                  Benchmark Playground
    optimizer/                  Token Optimizer
    components/                 Shared UI components
cli/            CLI entrypoints
integrations/
  vscode-extension/
extension/      Browser extension
tests/          Offline-safe test suite
docs/           Product, pattern, and workflow docs
```

Docs


License

Copyright © 2026 Mehmet Özel. All rights reserved.

Licensed under the Apache License 2.0.

For managed/hosted service inquiries: mehmet.ozel2701@gmail.com

Self-hosting is free and always will be.

About

A tool that compiles messy natural language prompts into a structured intermediate representation (IR) and optionally sends them to LLMs like ChatGPT for cleaner, more reliable responses.
