The best AEO/GEO skill for Claude Code. Audit, fix, and monitor your website's visibility across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews. 100+ optimization techniques, 33 evidence collectors, 4-vector composite GEO Score. Built on the peer-reviewed Princeton KDD 2024 paper. Free, MIT-licensed.
best-aeo-skill is an open-source Claude Code skill that measures and improves how your website is cited by AI search engines. It runs 33 evidence collectors against any URL, computes a 0–100 composite GEO Score across 4 weighted vectors (Technical Accessibility, Content Citability, Structured Data, Entity Signals), labels every finding with a Confidence rubric (Confirmed / Likely / Hypothesis), and applies one-command fixes to llms.txt, robots.txt, JSON-LD schema, and on-page content. Everything runs locally with zero external API dependencies.
- Why this matters in 2026
- Quickstart
- How it works
- What is GEO? What is AEO?
- Compared to alternatives
- What's inside the repo
- Research basis
- FAQ
- Citation (academic)
- Credits & attribution
- Supported agents
- License
In May 2026:
- 25.11% of Google searches trigger an AI Overview (Semrush AI Overviews tracker, Q1 2026)
- 87% of AI-referral traffic to publishers flows through ChatGPT alone (Similarweb, March 2026)
- AI traffic converts at 14.2% vs 2.8% for traditional organic — 5.07× higher (Adobe Digital Insights, 2026 AI Commerce Report)
- 62% of B2B SaaS buyers now start product research in an AI assistant before Google (Gartner CMO Survey, 2026)
The companies that get cited by AI engines win. The ones that get filtered out lose 100% of that traffic — there is no "page 2" of an AI answer.
Existing GEO/AEO Claude skills each solve part of the problem. best-aeo-skill combines the strongest features of the top 5 open-source predecessors into one production-ready skill.
/plugin marketplace add metawhisp/best-aeo-skill
/plugin install best-aeo-skill

npx skills add metawhisp/best-aeo-skill

git clone https://github.com/metawhisp/best-aeo-skill.git ~/.claude/skills/best-aeo-skill

> Run a full GEO audit on https://yoursite.com
You get a 0–100 GEO Score, ranked findings with Confidence labels (Confirmed / Likely / Hypothesis), and a one-command auto-fix path.
bestaeo fix --url https://yoursite.com --apply

Auto-generates llms.txt, JSON-LD schema, content rewrites, and robots.txt patches.
bestaeoskill.com/audit — the same engine running on Cloudflare Workers.
┌─────────────────────────────────────────────────────────────────┐
│ best-aeo-skill │
├─────────────────────────────────────────────────────────────────┤
│ │
│ CLI ──→ ┌─ audit ──┐ ┌─ fix ──────┐ ┌─ monitor ─┐ │
│ │ +score │ │ +rewrite │ │ +regress │ │
│ └─────┬────┘ └──────┬─────┘ └─────┬─────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Composite GEO Scorer (4 vectors) │ │
│ │ Technical 20% │ Citability 35% │ Schema 20% │ E. 25% │ │
│ └────────────────────────┬─────────────────────────────┘ │
│ │ │
│ ┌────────────┼────────────┐ │
│ ▼ ▼ ▼ │
│ 33 evidence 5 specialist 4 frameworks │
│ collectors agents (CORE-EEAT, │
│ (Python) (parallel) CITE, Princeton, AutoGEO) │
│ │
└─────────────────────────────────────────────────────────────────┘
Step 1 — Audit. 33 evidence collectors run in parallel: page fetch, robots.txt, AI bot access matrix (23 bots: GPTBot, ClaudeBot, PerplexityBot, etc.), JSON-LD schema validation, statistic density, citation analysis, quote extraction, freshness, llms.txt presence, entity & author markup, and 22 more.
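The AI bot access matrix can be sketched with Python's standard library alone, consistent with the zero-dependency claim. This is an illustrative reimplementation, not the repo's actual ai_bot_access.py; the bot list and robots.txt content are sample data:

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt: GPTBot partially restricted, ClaudeBot open,
# PerplexityBot fully blocked.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /private/

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Disallow: /
"""

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]  # the real skill checks 23

def bot_access_matrix(robots_txt: str, url: str) -> dict:
    """Return {bot: True/False} for whether each AI bot may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_BOTS}

matrix = bot_access_matrix(ROBOTS_TXT, "https://example.com/blog/post")
# GPTBot and ClaudeBot may fetch the page; PerplexityBot is blocked
```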
Step 2 — Score. Findings feed a 4-vector composite scorer derived from the Princeton KDD 2024 paper on Generative Engine Optimization. Default weights: Technical 20% / Citability 35% / Schema 20% / Entity 25%. Weights are profile-adaptive (SaaS, e-commerce, publisher, local, agency, devtools, academic).
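The composite step is a weighted average of the four vector scores. A minimal sketch using the default weights above (the vector names are illustrative; the real composite_scorer.py may structure this differently):

```python
# Default weights from the docs: Technical 20% / Citability 35% / Schema 20% / Entity 25%
DEFAULT_WEIGHTS = {"technical": 0.20, "citability": 0.35, "schema": 0.20, "entity": 0.25}

def composite_geo_score(vectors: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Combine per-vector scores (each 0-100) into a 0-100 composite."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(vectors[name] * w for name, w in weights.items()), 1)

score = composite_geo_score({"technical": 90, "citability": 60, "schema": 80, "entity": 70})
# 90*0.20 + 60*0.35 + 80*0.20 + 70*0.25 = 72.5
```

Profile-adaptive scoring then amounts to swapping in a different weights dict per business profile.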
Step 3 — Fix. bestaeo fix --apply writes structured changes:
| Fix | What it does |
|---|---|
| fix-llmstxt | Generates /llms.txt and /llms-full.txt from sitemap + content |
| fix-robotstxt | Adds 27 explicit Allow directives for AI bots |
| fix-schema | Emits JSON-LD: SoftwareApplication, FAQPage, Article, Organization, Author |
| fix-content | Adds inline citations, statistics, quotes, author bylines |
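For llms.txt, the generated artifact follows the community llms.txt proposal (llmstxt.org): an H1 title, a blockquote summary, and H2 sections of annotated links. A minimal illustrative example (all names and URLs are placeholders, not the generator's literal output):

```markdown
# Example Site

> One-sentence summary of what the site offers, written for AI consumers.

## Docs

- [Installation](https://example.com/docs/install): setup and requirements
- [API reference](https://example.com/docs/api): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog): release history
```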
Step 4 — Monitor. Re-audit on a schedule, track GEO Score deltas, alert on regressions, compare to competitors.
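A minimal regression check over two audit runs might look like this (the 5-point alert threshold is an assumption for illustration, not the skill's documented default):

```python
def score_delta(prev: float, curr: float, alert_threshold: float = -5.0):
    """Compare two GEO Scores; flag a regression when the drop
    meets or exceeds the (negative) alert threshold."""
    delta = round(curr - prev, 1)
    return delta, delta <= alert_threshold

delta, alert = score_delta(prev=78.0, curr=71.5)
# delta = -6.5, alert = True (dropped more than 5 points)
```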
Generative Engine Optimization (GEO) is the practice of optimizing web content so it gets cited and quoted by generative AI engines (ChatGPT, Claude, Perplexity, Gemini, Google AI Overviews) when they answer user queries. The term and methodology were introduced in the peer-reviewed paper "GEO: Generative Engine Optimization" (Aggarwal et al., KDD 2024).
Answer Engine Optimization (AEO) is the practice of structuring content so it can be extracted as a direct answer by question-answering systems — featured snippets, voice assistants, and AI Overviews. AEO predates GEO (the term traces to 2018) but the two now overlap heavily; the industry uses them interchangeably.
Difference vs traditional SEO: SEO optimizes for ranked link lists. GEO/AEO optimizes for being cited inside an answer. The optimization signals are different: AI engines weight statistical density, authoritative external citations, structured data, entity consistency, and direct-answer phrasing far more heavily than backlinks or anchor-text variety.
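Statistical density, for instance, is easy to approximate: count numeric tokens per 100 words. A rough sketch (the repo's statistic_density.py is presumably more nuanced about what counts as a statistic):

```python
import re

def statistic_density(text: str) -> float:
    """Numeric tokens (counts, percentages, years) per 100 words --
    a crude proxy for the 'statistic density' signal."""
    words = text.split()
    stats = re.findall(r"\d[\d,.]*%?", text)
    return round(100 * len(stats) / max(len(words), 1), 1)

density = statistic_density("AI traffic converts at 14.2% vs 2.8% for organic.")
# 2 statistics in 9 words -> 22.2 per 100 words
```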
| Feature | best-aeo-skill | claude-seo | seo-geo-claude-skills | geo-optimizer-skill | Agentic-SEO-Skill |
|---|---|---|---|---|---|
| Composite GEO Score (0–100) | ✓ 4-vector | ✓ | partial | ✓ | partial |
| Confidence labels (Confirmed/Likely/Hypothesis) | ✓ unique | ✗ | ✗ | ✗ | ✓ |
| Princeton KDD 2024 framework | ✓ full | partial | partial | ✓ | partial |
| Auto-fix (fix --apply) | ✓ | partial | partial | ✓ | ✗ |
| Multi-engine scoring (ChatGPT, Claude, Perplexity, Gemini, AIO) | ✓ | ✓ | ✓ | ✓ | ✓ |
| MCP server | ✓ | ✓ | ✓ | ✓ | ✗ |
| Adaptive vector weights (per business profile) | ✓ | partial | ✗ | partial | ✗ |
| llms.txt generation | ✓ | ✓ | ✓ | ✓ | ✓ |
| 35+ agent compatibility (npx skills) | ✓ | ✓ | ✓ | ✓ | ✓ |
| Zero external dependencies | ✓ | ✗ | ✓ | partial | partial |
| Live web audit (no install) | ✓ | ✗ | ✗ | ✗ | ✗ |
Full feature matrix → bestaeoskill.com/skill
best-aeo-skill/
├── SKILL.md Main manifest (skill entry point, 874 lines)
├── README.md Public-facing docs (you are here)
├── LICENSE MIT
│
├── skills/ 7 sub-skills
│ ├── audit/SKILL.md
│ ├── fix-content/SKILL.md
│ ├── fix-schema/SKILL.md
│ ├── fix-llmstxt/SKILL.md
│ ├── fix-robotstxt/SKILL.md
│ ├── compare/SKILL.md
│ └── monitor/SKILL.md
│
├── agents/ 5 specialist agents (run in parallel)
│ ├── technical.md
│ ├── citability.md
│ ├── schema.md
│ ├── entity.md
│ └── monitor.md
│
├── scripts/ 33 evidence collectors (Python, zero deps)
│ ├── audit.py ─ orchestrator
│ ├── composite_scorer.py ─ 4-vector composite
│ ├── fetch_page.py ─ page fetch + canonicalization
│ ├── robots_check.py ─ robots.txt parser
│ ├── ai_bot_access.py ─ 23-bot allow/block matrix
│ ├── schema_validate.py ─ JSON-LD validator
│ ├── statistic_density.py ─ stats/100w detector
│ ├── citation_check.py ─ external/internal/authoritative
│ ├── quote_extractor.py ─ direct-quote finder
│ ├── freshness_check.py ─ Article.dateModified parser
│ ├── llms_txt_check.py ─ llms.txt presence + spec compliance
│ ├── llms_txt_generate.py ─ auto-generates llms.txt
│ ├── entity_extractor.py ─ Organization, Person, sameAs
│ └── ... (33 total)
│
├── frameworks/ 4 reference frameworks
│ ├── core-eeat.md 80-item content quality benchmark
│ ├── cite.md 40-item authority benchmark
│ ├── confidence-labels.md Evidence rubric
│ └── princeton-kdd-2024.md Research basis
│
├── templates/ Output templates
│ ├── audit-report.md
│ ├── llms.txt.tmpl
│ └── schema/ (FAQPage, Article, Organization, etc.)
│
├── examples/ Sample audits and fixes
│
└── docs/
├── installation.md
├── methodology.md (skill-by-skill attribution)
└── architecture.md
This skill is built on peer-reviewed and industry-validated research:
- GEO: Generative Engine Optimization — Aggarwal et al., KDD 2024 (arXiv:2311.09735). Nine optimization methods tested across 10k queries. Citation-likelihood lifts: +40% from authoritative quotation, +115% from inline source citation, +30–40% from statistic density.
- AutoGEO: Automatic Generative Engine Optimization — ICLR 2026 (arXiv:2502.13392). Automatic rule extraction beats human-engineered rules by +50.99% over the Princeton baseline.
- C-SEO Bench: Conversational SEO Benchmark — 2025 (arXiv:2506.11097). Confirms that infrastructure (schema, llms.txt, robots.txt) outranks prose tweaks for citation rate.
- Google's E-E-A-T framework — December 2025 update (Search Quality Rater Guidelines, v2025.12). Experience-Expertise-Authoritativeness-Trustworthiness now applies to all competitive queries, not just YMYL.
Full research deep-dive → bestaeoskill.com/research
Traditional SEO tools optimize for ranked link lists (10 blue links). best-aeo-skill optimizes for citations inside an AI-generated answer. The signals are different: AI engines weight statistical density, authoritative external citations, structured data, entity consistency, and direct-answer phrasing far more heavily than backlinks or anchor variety. Use both — they are complementary, not substitutes.
Yes, free under MIT. No API key required. The audit runs entirely locally using Python's standard library — no external services, no telemetry, no rate limits. The optional MCP server is also local-only.
ChatGPT (OpenAI), Claude (Anthropic), Perplexity, Google Gemini, Google AI Overviews, Microsoft Copilot, You.com, and Brave Search. Each engine has a profile of weighted signals; the composite GEO Score is a weighted average across all eight, configurable per use case.
The score is grounded in direct evidence (HTTP responses, parsed HTML, validated JSON-LD) — not LLM judgment. Every finding carries a Confidence label: Confirmed = directly observed by a collector, Likely = inferred from 2+ collectors, Hypothesis = LLM judgment, flagged for human review. No recommendation appears without a label.
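The rubric maps naturally to a small decision function. This sketch encodes the thresholds exactly as described above; the function name and signature are illustrative, not the skill's actual API:

```python
def confidence_label(direct_evidence: bool, corroborating_collectors: int) -> str:
    """Assign a Confidence label per the rubric: Confirmed = directly observed,
    Likely = inferred from 2+ collectors, Hypothesis = LLM judgment only."""
    if direct_evidence:
        return "Confirmed"
    if corroborating_collectors >= 2:
        return "Likely"
    return "Hypothesis"
```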
It writes four artifacts: (1) /llms.txt and /llms-full.txt derived from your sitemap, (2) updated /robots.txt with 27 explicit Allow: directives for AI bots, (3) JSON-LD schema blocks (SoftwareApplication, FAQPage, Article, Organization, Person), and (4) inline content additions — citations, statistics, author bylines — emitted as a unified diff for your review. Nothing is pushed without confirmation.
Yes. Three install paths: (1) Claude Code via /plugin install, (2) Cursor, Codex, OpenCode, OpenClaw, Gemini CLI, Qwen Code, Amp, Kimi, CodeBuddy, Windsurf, and 25+ others via npx skills add, (3) manual via git clone for any agent that reads SKILL.md. There is also a browser-based audit running on Cloudflare Workers that requires no install.
~1–2 seconds per page on Cloudflare Workers. ~3–5 seconds locally on a typical laptop (zero deps means no JIT warm-up). A site-wide audit of 100 pages: ~2 minutes locally.
claude-seo is comprehensive but fragmented — 12 agents with overlapping responsibilities. seo-geo-claude-skills has good multi-platform reach but generic scoring. geo-optimizer-skill has Princeton depth but limited fix actions. Agentic-SEO-Skill introduced confidence labels but has narrow scope. best-aeo-skill integrates the strongest piece from each into a single skill — see the comparison table above and the per-technique attribution in docs/methodology.md.
If this skill helps your research or industry report, please cite it:
@software{bestaeoskill_2026,
author = {MetaWhisp},
title = {best-aeo-skill: A Composite GEO/AEO Optimizer for Claude Code},
year = {2026},
url = {https://github.com/metawhisp/best-aeo-skill},
note = {MIT License}
}

For citing the underlying methodology, please cite the Princeton KDD 2024 paper directly.
Each technique in this skill is traceable to its origin. Full per-rule attribution: docs/methodology.md.
| Inspired by | Took |
|---|---|
| AgriciDaniel/claude-seo (5.8k ⭐) | Multi-extension architecture, 12-agent parallelism |
| aaron-he-zhu/seo-geo-claude-skills | CORE-EEAT (80-item) + CITE (40-item) frameworks |
| Bhanunamikaze/Agentic-SEO-Skill | Confidence labels, 33 evidence collectors |
| Auriti-Labs/geo-optimizer-skill | Princeton KDD framework, MCP server, 47 methods |
| 199-biotechnologies/claude-skill-seo-geo-optimizer | IndexNow, freshness monitoring, entity extraction |
Claude Code · Cursor · Codex CLI · OpenCode · OpenClaw · Gemini CLI · Qwen Code · Amp · Kimi · CodeBuddy · Windsurf · Continue · Aider · Cline · Roo Code · Devin · 35+ via npx-skills protocol.
MIT — use, fork, ship. Star ⭐ if it helps. Issues and PRs welcome.
Built and maintained by bestaeoskill.com · Documentation · Live audit · Research · FAQ