⚠️ EXPERIMENTAL — not yet ready for production audits. Findings should be reviewed by a human auditor.
preflight is a CLI tool that audits Aiken smart contracts for vulnerabilities. It parses your validator code into an AST, then runs a configurable set of vulnerability skills against it using either local heuristics or an AI provider (OpenAI, Anthropic, Ollama). Output is a markdown report plus a structured JSON state file for tooling.
- Source discovery — recursively scans the working directory for `.ak` files (skipping `target/`, `build/`, `.git/`, `.tx3/`).
- AST generation — parses each source file with `aiken-lang`, builds a per-validator context (handlers, parameters, source spans), and caches the snapshot to `.tx3/audit/aiken-ast.json`.
- Skill loop — for each vulnerability skill (markdown files with YAML frontmatter), invokes the configured provider with the skill prompt and AST context. The provider returns findings (or asks for additional file reads via a tool-call loop).
- Persistence — after every skill iteration, the analysis state is written to `.tx3/audit/state.json` so partial progress survives interruptions.
- Report — once all skills finish, a markdown report is rendered to `.tx3/audit/vulnerabilities.md`.
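Because state is flushed after every skill iteration, a partially completed run can be inspected or resumed from `.tx3/audit/state.json`. A minimal sketch of reading such a file — the entry schema shown here is an assumption for illustration, not preflight's actual format:

```python
import json

# Hypothetical state.json shape: one entry per skill iteration.
# The real schema written by preflight may differ.
sample_state = {
    "entries": [
        {"skill_id": "strict-value-equality-001", "status": "done", "findings": 2},
        {"skill_id": "missing-address-validation-002", "status": "done", "findings": 0},
        {"skill_id": "unvalidated-datum-003", "status": "pending", "findings": 0},
    ]
}

def completed_skills(state: dict) -> list[str]:
    """IDs of skills whose iteration already finished (safe to skip on resume)."""
    return [e["skill_id"] for e in state["entries"] if e["status"] == "done"]

# In practice you would json.load(open(".tx3/audit/state.json"));
# here we round-trip the sample to keep the sketch self-contained.
state = json.loads(json.dumps(sample_state))
print(completed_skills(state))
```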
Download the binary for your platform from the Releases page and place it on your PATH.
Alternatively, build from source:

```shell
git clone https://github.com/tx3-lang/preflight
cd preflight
cargo install --path .
```

Run `preflight` from the root of an Aiken project:

```shell
preflight --provider openai
```

By default the tool writes:
| Path | Description |
|---|---|
| `.tx3/audit/state.json` | Incremental analysis state (one entry per skill iteration). |
| `.tx3/audit/vulnerabilities.md` | Final markdown report. |
| `.tx3/audit/aiken-ast.json` | Cached AST snapshot. |
Select a provider with `--provider`:

| Provider | Description | Required env var |
|---|---|---|
| `scaffold` | No-op stub — useful for smoke testing the pipeline. | — |
| `heuristic` | Local pattern-matching detectors (no network). | — |
| `openai` | Uses the OpenAI Responses API. Default model: `gpt-4.1-mini`. | `OPENAI_API_KEY` |
| `anthropic` | Uses the Anthropic Messages API. Default model: `claude-sonnet-4-6`. | `ANTHROPIC_API_KEY` |
| `ollama` | OpenAI-compatible local Ollama endpoint. Default model: `llama3.1`. | — |
Override the endpoint, model, or API-key env var with `--endpoint`, `--model`, and `--api-key-env`.

`--reasoning-effort` hints to thinking-capable models how much to deliberate. Accepted values and wiring differ by provider:
| Provider | Accepted values | How it's wired |
|---|---|---|
| `openai` | `low` \| `medium` \| `high` | Sets `reasoning.effort` (or `reasoning_effort`) on the Responses API request. Multiple request shapes are tried for compatibility. |
| `ollama` | `low` \| `medium` \| `high` | Same as OpenAI, plus `think: true` for Ollama's reasoning toggle. |
| `anthropic` | `low` \| `medium` \| `high` \| `max` | Enables adaptive extended thinking (`thinking: {type: "adaptive", display: "summarized"}`) plus `output_config.effort`. `max` is Opus-tier only — Sonnet 4.6 rejects it; Haiku 4.5 rejects thinking entirely. |
| `scaffold`, `heuristic` | — | Ignored. |
If the API rejects the thinking variant (e.g., `max` on Sonnet, or any value on Haiku), the Anthropic provider automatically falls back to a request without thinking. With `--ai-logs` you'll see `⚠️ Streaming attempt 1 failed` followed by a successful retry.
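The fallback amounts to a catch-and-strip retry. A minimal sketch of the pattern — the exception type, payload keys, and simulated API here are illustrative stand-ins, not the provider's actual implementation:

```python
class ApiRejection(Exception):
    """Stands in for the API rejecting an unsupported thinking config."""

def fake_send(payload: dict) -> str:
    # Simulates a model (e.g. a Haiku-tier one) that rejects any thinking config.
    if "thinking" in payload:
        raise ApiRejection("thinking is not supported on this model")
    return "ok"

def send_with_thinking_fallback(send, payload: dict) -> str:
    """Attempt 1 with the thinking fields; on rejection, retry once without them."""
    try:
        return send(payload)
    except ApiRejection:
        stripped = {k: v for k, v in payload.items()
                    if k not in ("thinking", "output_config")}
        return send(stripped)

result = send_with_thinking_fallback(
    fake_send,
    {"model": "haiku", "thinking": {"type": "adaptive"},
     "output_config": {"effort": "high"}},
)
print(result)  # the retry without thinking succeeds
```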
**Smoke test with no AI**

```shell
preflight --provider scaffold
```

**Local heuristic scan**

```shell
preflight --provider heuristic
```

**OpenAI with progress logs**

```shell
export OPENAI_API_KEY=sk-...
preflight --provider openai --ai-logs
```

**Anthropic with a specific model**

```shell
export ANTHROPIC_API_KEY=...
preflight --provider anthropic --model claude-opus-4-7
```

**Anthropic with adaptive thinking**

```shell
export ANTHROPIC_API_KEY=...
preflight --provider anthropic --reasoning-effort high --ai-logs

# Opus-tier maximum reasoning:
preflight --provider anthropic --model claude-opus-4-7 --reasoning-effort max
```

**Ollama (local)**

```shell
preflight --provider ollama --model llama3.1
```

When the AI provider asks to read additional files, requests are gated:
- `--read-scope workspace` (default) — any path under the project root may be read.
- `--read-scope strict` — only the source files discovered up front are readable.
- `--interactive-permissions` — prompt the user before each AI-requested read.
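One way to picture the gating logic — a sketch under assumed semantics, where the function and parameter names are invented for illustration:

```python
from pathlib import Path

def read_allowed(requested: Path, project_root: Path, scope: str,
                 discovered: set[Path]) -> bool:
    """Decide whether an AI-requested file read should proceed.

    scope="workspace": anything under the project root is readable.
    scope="strict":    only the sources discovered up front are readable.
    """
    target = (project_root / requested).resolve()
    root = project_root.resolve()
    if scope == "strict":
        return target in discovered
    return target == root or root in target.parents

root = Path("/work/my-validator")
sources = {(root / "validators/spend.ak").resolve()}

print(read_allowed(Path("validators/spend.ak"), root, "strict", sources))   # True
print(read_allowed(Path("aiken.toml"), root, "strict", sources))            # False
print(read_allowed(Path("aiken.toml"), root, "workspace", sources))         # True
print(read_allowed(Path("../secrets.txt"), root, "workspace", sources))     # False: escapes the root
```

Resolving the path before checking containment is what stops `../` traversal out of the workspace.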
Each skill is a markdown file with YAML frontmatter. Place additional skills in `skills/vulnerabilities/` (or pass `--skills-dir <path>`). Frontmatter schema:
```markdown
---
id: my-vuln-001       # required, unique
name: my-vuln         # required
severity: low|medium|high|critical
description: Short summary used by the audit prompt.
prompt_fragment: Concrete detection objective for the model.
examples: []          # optional
false_positives: []   # optional — what NOT to flag
references: []        # optional
tags: []              # optional
confidence_hint: medium  # optional
---

# Detailed guidance (markdown body)
```

The body is passed to the model as authoritative guidance for this skill.

If `--skills-dir` is left at its default (`skills/vulnerabilities`) and the directory does not exist on disk, preflight falls back to three embedded seed skills.
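For tooling around skill files, the frontmatter can be split from the body mechanically. A minimal sketch that handles only flat `key: value` lines — a real implementation would use a proper YAML parser:

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a skill file into (frontmatter dict, markdown body).

    Only flat `key: value` lines are handled; nested YAML is out of scope.
    """
    if not text.startswith("---"):
        raise ValueError("skill file must start with YAML frontmatter")
    _, fm, body = text.split("---", 2)
    meta = {}
    for line in fm.strip().splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline `# ...` comments
        if not line:
            continue
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill = """---
id: my-vuln-001
name: my-vuln
severity: high
description: Flags strict value equality.
---
# Detailed guidance

Flag equality comparisons against full output values.
"""

meta, body = parse_skill(skill)
print(meta["id"], meta["severity"])
```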
| ID | Severity | What it detects |
|---|---|---|
| `strict-value-equality-001` | high | Strict equality checks on ADA or full output values that can become unsatisfiable due to minUTxO / datum / reference-script changes. |
| `missing-address-validation-002` | high | Outputs selected by datum/token/value but never checked against the destination address. |
| `unvalidated-datum-003` | high | Outputs at script addresses whose datum is missing or never validated. |
```
preflight [OPTIONS]

  --provider <PROVIDER>        scaffold | heuristic | openai | anthropic | ollama [default: scaffold]
  --endpoint <URL>             Override the provider endpoint.
  --model <MODEL>              Override the provider model.
  --api-key-env <VAR>          Override the env var read for the API key.
  --reasoning-effort <LEVEL>   Reasoning effort hint. OpenAI/Ollama: low|medium|high.
                               Anthropic: low|medium|high|max (max is Opus-tier only; rejected by Sonnet/Haiku).
  --ai-logs                    Print chat-style progress while auditing.
  --skills-dir <PATH>          Vulnerability skills directory. [default: skills/vulnerabilities]
  --state-out <PATH>           Incremental state JSON path. [default: .tx3/audit/state.json]
  --report-out <PATH>          Final markdown report path. [default: .tx3/audit/vulnerabilities.md]
  --ast-out <PATH>             AST snapshot JSON path. [default: .tx3/audit/aiken-ast.json]
  --no-ast-cache               Force regeneration of the AST snapshot.
  --read-scope <SCOPE>         workspace | strict [default: workspace]
  --interactive-permissions    Confirm every AI-requested read.
  --main-source <PATH>         Fallback source file when no .ak files are discovered.
```
Releases are produced via cargo-dist. The Release GitHub Action triggers on tag pushes matching `v*.*.*` and builds binaries for:

- `aarch64-apple-darwin`
- `x86_64-unknown-linux-gnu`
- `aarch64-unknown-linux-gnu`
- `x86_64-pc-windows-msvc`
Cutting a release locally:
```shell
# One-time
cargo install cargo-release

# From main, working tree clean
cargo release patch --execute   # 0.1.0 → 0.1.1
git push --follow-tags origin main
```

cargo-release config lives in `release.toml`. Tag pushes trigger `.github/workflows/release.yml`, which compiles the binaries and creates the GitHub Release.
Apache-2.0. See LICENSE.