
Content quality & EEAT rewrites: humanized rewrites across 40+ pages#2

Open
psyduckler wants to merge 3 commits into main from content-quality-eeat-rewrites

Conversation

@psyduckler
Owner

Summary

Resolves the content-quality and EEAT audit findings with humanized, practitioner-voiced rewrites across the entire site. Roughly +1,300 lines of net new content, replacing templated boilerplate with first-person observations drawn from Bernard's Clearscope experience.

TypeScript clean. 148/148 vitest tests passing.

What changed

Use-case pages (30 pages)

  • Fixed a critical curl-example bug: image and audio use-case pages were rendering the text-payload boilerplate (type:"text", content:"Paste content here") instead of the modality-specific request body. Each page now uses its modality's actual sample.
  • All 30 entries rewritten with genuine domain depth (KDP narration-AI policy, eBay/Etsy seller-verification flows, ACORD/SIU references for insurance claims, Reddit API changes since 2023, FTC testimonial rules, etc.)
  • Extracted to a dedicated src/useCases.ts module for maintainability
  • New UseCase schema fields: whatWeveSeen (Bernard's practitioner observation), domainNuance (vertical-specific concepts), realExample (concrete setup/result)
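The new schema fields and the curl-payload fix described above can be sketched together. The field names (whatWeveSeen, domainNuance, realExample) and the text boilerplate string come from this PR; the Modality type, the sample-body shapes, and the curlBody helper are illustrative assumptions, not the actual src/useCases.ts implementation:

```typescript
// Sketch of the extended UseCase shape and the curl-payload fix.
// Field names whatWeveSeen/domainNuance/realExample come from the PR;
// Modality and the request-body shapes are illustrative assumptions.
type Modality = "text" | "image" | "audio";

interface UseCase {
  slug: string;
  modality: Modality;
  sample: string; // modality-specific sample content or URL
  whatWeveSeen?: string; // practitioner observation
  domainNuance?: string; // vertical-specific concepts
  realExample?: string;  // concrete setup/result
}

// Before the fix, every page rendered the text boilerplate; after,
// the body is chosen per modality from the entry's own sample.
function curlBody(uc: UseCase): string {
  switch (uc.modality) {
    case "text":
      return JSON.stringify({ type: "text", content: uc.sample });
    case "image":
      return JSON.stringify({ type: "image", image_url: uc.sample });
    case "audio":
      return JSON.stringify({ type: "audio", audio_url: uc.sample });
  }
}
```

Keeping the EEAT fields optional lets entries that lack a practitioner note render without placeholder copy.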

Text-detection API cluster (6 pages)

The audit's biggest red flag — six near-duplicate URLs with identical structures. Each now has a distinct primary job:

  • /ai-detection-api: category overview ("authorship-likelihood vs workflow-routing")
  • /ai-content-detector-api: CMS pre-publish gate (WordPress save_post hook example)
  • /ai-written-content-detection: RAG/training-data hygiene (batch curation pattern)
  • /ai-generated-content-detection: UGC moderation triage (user-trend aggregation)
  • /ai-written-content-detector: agent-builder integration (bounded revise loop)
  • /ai-generated-text-detector: auto_revise deep-dive (rescore pattern)

Each page gets its own code samples, FAQs, cost-modeling section, and codeLabel, so the six no longer share the "Copy-paste routing example" heading.
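The codeLabel mechanism might look like the following sketch. The optional field names on DistributionPage and the fallback heading string come from this PR; the codeHeading helper itself is hypothetical:

```typescript
// Sketch of the per-page codeLabel fallback. The optional fields and
// the default heading string come from the PR; codeHeading is a
// hypothetical stand-in for the real renderer logic.
interface DistributionPage {
  slug: string;
  practitionerNote?: string;
  realExample?: string;
  codeLabel?: string;
}

function codeHeading(page: DistributionPage): string {
  // Before this change, every page shared the same heading.
  return page.codeLabel ?? "Copy-paste routing example";
}
```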

Comparison + alternatives pages (10 pages)

  • ComparisonPage schema extended with practitionerNote, whereCompetitorWins, whereVeracityWins
  • All 4 /vs/* pages rewritten with first-person observations on the category split between authorship-likelihood detectors (GPTZero, Originality.ai, Copyleaks) and workflow-routing APIs
  • All 6 /alternatives/* pages rewritten with honest "it depends on..." framing instead of templated alternative-positioning copy
  • Each comparison explicitly names 3–4 dimensions where the competitor genuinely outperforms VeracityAPI

Integration pages (real framework code)

  • /integrations/langgraph: full StateGraph implementation with bounded revise loop (prevents the infinite-loop bug Bernard observed costing teams hundreds of dollars on a single document)
  • /integrations/claude: Anthropic SDK direct tool_use pattern alongside MCP (most teams need both)
  • /integrations/openai-actions: Custom GPT system-prompt fragment with reliability patterns
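The bounded revise loop the LangGraph page centers on can be sketched framework-free. Everything here is an assumption for illustration: detectScore and reviseDraft stand in for the real VeracityAPI and LLM calls, and the 0.7 threshold and three-attempt cap are made-up values, not the numbers on /how-it-works:

```typescript
// Framework-free sketch of a bounded revise loop. detectScore and
// reviseDraft are hypothetical stand-ins for the real API and LLM
// calls; the 0.7 threshold and MAX_REVISIONS cap are illustrative.
const MAX_REVISIONS = 3;

async function reviseUntilPassing(
  draft: string,
  detectScore: (text: string) => Promise<number>,
  reviseDraft: (text: string) => Promise<string>,
): Promise<{ draft: string; attempts: number; passed: boolean }> {
  let current = draft;
  for (let attempts = 0; attempts < MAX_REVISIONS; attempts++) {
    const score = await detectScore(current);
    if (score < 0.7) return { draft: current, attempts, passed: true };
    current = await reviseDraft(current); // rescored on the next pass
  }
  // Hard stop: without this cap the loop can keep paying for detect +
  // revise calls indefinitely on a document that never converges.
  return { draft: current, attempts: MAX_REVISIONS, passed: false };
}
```

The cap is the whole point: the failure mode described in the PR is a revise loop with no exit condition other than a passing score.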

Methodology / How-it-works split

  • /methodology rewritten as the epistemology page: what VeracityAPI claims, what it explicitly doesn't, the trust contract by field, policy table by intended_use, and an explicit "when the right answer is don't use VeracityAPI" section
  • /how-it-works rewritten as the technical pipeline page: model, temperature, schema, pipeline steps, threshold table with actual numbers, version changelog, v0.2 roadmap
  • Two pages now answer distinct questions for distinct readers per the audit recommendation
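The "policy table by intended_use" on /methodology could take a shape like the sketch below. The keys and policy values here are entirely hypothetical, since the PR does not reproduce the table, only its existence:

```typescript
// Hypothetical shape for the /methodology policy table. The
// intended_use keys and policy values are invented for illustration;
// the PR describes the table but does not reproduce it.
type Policy = "allowed" | "allowed-with-review" | "not-recommended";

const policyByIntendedUse: Record<string, Policy> = {
  workflow_routing: "allowed",
  editorial_triage: "allowed-with-review",
  academic_discipline: "not-recommended", // a "don't use VeracityAPI" case
};

function policyFor(intendedUse: string): Policy {
  // Fail closed: unknown uses get the most conservative policy.
  return policyByIntendedUse[intendedUse] ?? "not-recommended";
}
```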

Blog (4 new substantive posts + EEAT scaffolding)

  • "What 10 years at Clearscope taught me about AI slop"
  • "The most expensive bug in agent content workflows"
  • "Why the AI-detection category is splitting"
  • "Why we don't publish competitor benchmarks (yet)"
  • BlogPost interface extended with author bio, reading time, word count
  • Blog post pages now include byline, "About the author" card, and JSON-LD schema with Person author
  • Legacy posts retained at original URLs with author backfill
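The blog scaffolding might look like this sketch. The author field names follow the commit message (name, role, bioBlurb, profileUrl) and the JSON-LD Article/Person types come from the PR; the 200-words-per-minute reading-speed constant is an assumption:

```typescript
// Sketch of the extended BlogPost shape and its derived EEAT fields.
// Author field names follow the commit message; the 200-wpm constant
// is an assumption, not necessarily what src/blog.ts uses.
interface Author {
  name: string;
  role: string;
  bioBlurb: string;
  profileUrl: string;
}

interface BlogPost {
  slug: string;
  title: string;
  body: string;
  author?: Author; // optional so legacy posts render before backfill
}

function wordCount(body: string): number {
  return body.trim().split(/\s+/).filter(Boolean).length;
}

function readingTimeMinutes(body: string): number {
  return Math.max(1, Math.round(wordCount(body) / 200));
}

// JSON-LD Article schema with a Person author, as the PR describes.
function articleJsonLd(post: BlogPost): object {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: post.title,
    ...(post.author && {
      author: { "@type": "Person", name: post.author.name },
    }),
  };
}
```

Making author optional is what lets the legacy posts stay at their URLs while the backfill lands separately.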

Visual hooks

  • Small CSS additions for .practitioner-note (hot-pink-bordered to distinguish from regular cards), .byline, .author-bio

Test plan

  • npx tsc --noEmit clean
  • npx vitest run — 148/148 passing (one test updated to reflect new blog-post count)
  • Manual: deploy to a preview environment and click through a sampling of pages (use-case pages, text-detection cluster, comparison pages, the two blog posts with Bernard byline) to confirm visual rendering and the practitioner-note styling
  • Manual: confirm the curl-example fix on an image and audio use-case page renders the correct payload
  • SEO: confirm sitemap still lists expected URLs (the test asserts presence; production crawl will confirm)

Files changed

  • src/useCases.ts — new file, 30 deeply rewritten use-case entries
  • src/pages.ts — extracted USE_CASES, rewrote methodology + how-it-works, upgraded blog and comparison renderers
  • src/distribution.ts — rewrote 6 text-detection pages, 6 alternatives pages, all integrations, all media-specific API pages
  • src/comparisons.ts — extended schema + rewrote all 4 vs pages
  • src/blog.ts — extended schema + 4 new posts + author backfill on legacy posts
  • src/y2k.ts — small CSS additions for new EEAT components
  • test/benchmarkComparisonPages.test.ts — relax blog-post count assertion

🤖 Generated with Claude Code

Bernard Huang and others added 3 commits May 15, 2026 11:55
…ample bug

- Extract USE_CASES from pages.ts to src/useCases.ts for maintainability
- Extend UseCase schema with three optional EEAT fields:
  - whatWeveSeen: Bernard's first-person observation
  - domainNuance: vertical-specific concepts practitioners use
  - realExample: concrete setup/result scenario
- Rewrite all 30 use-case data entries with genuine domain depth
  (publishing pipelines, training data, UGC moderation, KDP, Reddit,
  insurance claims, marketplace seller verification, news tip hotlines,
  voicemail scam triage, and more)
- Add practitioner notes leveraging Clearscope editorial-workflow
  experience on the 5 highest-intent use cases
- Fix critical curl-example bug: the request template for image and
  audio use cases was rendering text-payload boilerplate (type:"text",
  content:"Paste content here") instead of the correct modality payload.
  Now uses the modality-specific request body with the actual sample.
- Render whatWeveSeen, domainNuance, realExample blocks in useCaseHtml

This addresses the audit finding that 30 use-case pages shared identical
templated content and that the broken curl example appeared on every
image/audio use-case page.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…tegrations)

Resolves the audit finding that the six text-detection API pages were
near-duplicates with identical "When to / When not to" structures and
the same routing-code snippet.

Six text-detection API pages, each now with a distinct primary job:
- /ai-detection-api: category overview ("authorship vs workflow routing")
- /ai-content-detector-api: CMS pre-publish gate (WordPress hook example)
- /ai-written-content-detection: RAG/training-data hygiene (batch curation)
- /ai-generated-content-detection: UGC moderation triage (user-trend aggregation)
- /ai-written-content-detector: agent-builder integration (bounded revise loop)
- /ai-generated-text-detector: auto_revise deep-dive (rescore pattern)

Six alternatives pages rewritten to acknowledge competitor strengths honestly:
- Added whereCompetitorWins and whereVeracityWins sections via DistributionPage
  type extension (practitionerNote, realExample, codeLabel)
- Each comparison now opens with "It depends on..." framing instead of
  templated "VeracityAPI is the alternative" copy
- Real practitioner observations about organizational shape, procurement
  patterns, and when the competitor genuinely wins

Integration pages get framework-specific code:
- /integrations/langgraph: full StateGraph implementation with bounded
  revise loop, channels config, conditional edges, and routing function
- /integrations/claude: Anthropic SDK direct tool_use pattern alongside
  the MCP option (most teams need both)
- /integrations/openai-actions: Custom GPT system-prompt fragment with
  specific reliability patterns
- /synthetic-media-detection-api: unified routing contract across modalities
- /ai-image-detection-api, /ai-audio-detection-api, /ai-video-detection-api:
  practitioner notes on the actual production patterns I see

Renderer changes:
- DistributionPage type gets optional practitionerNote, realExample, codeLabel
- renderSections now renders practitioner-note as a distinct hot-pink-bordered
  card with byline
- Code section heading now uses page.codeLabel when present, so pages don't
  all share the same "Copy-paste routing example" heading

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…split /how-it-works vs /methodology

(This commit batches the remaining content rewrites; pages.ts changes
were committed alongside their respective data files in earlier commits.)

/vs comparisons — strengthen with practitioner depth and honest
"where competitor wins" sections:
- ComparisonPage type extended with practitionerNote, whereCompetitorWins,
  and whereVeracityWins
- All 4 competitor pages (Originality.ai, GPTZero, Hive, Copyleaks)
  rewritten with first-person observations on organizational fit,
  category split between authorship-likelihood and workflow-routing,
  and procurement-shape mismatches
- Each comparison includes 3-4 specific dimensions where the competitor
  genuinely outperforms VeracityAPI

Blog — 4 new substantive posts with Bernard bylines + EEAT scaffolding:
- "What 10 years at Clearscope taught me about AI slop" — the design
  rationale post drawing on editorial-team experience across HubSpot,
  Adobe, IBM, Nvidia, Condé Nast
- "The most expensive bug in agent content workflows" — the unbounded
  revise loop, why it happens, the four-line fix
- "Why the AI-detection category is splitting" — the strategic
  positioning piece on authorship-likelihood vs workflow-routing
- "Why we don't publish competitor benchmarks (yet)" — the methodology
  transparency piece explaining the 2026 benchmark program design
- BlogPost interface extended with optional author{name,role,bioBlurb,
  profileUrl}, updated date, and computed reading-time/word-count
- Blog post pages now include author byline, reading time, and bio
  section; JSON-LD schema upgraded to Article with Person author
- Legacy posts retained at original URLs with author backfill

Methodology / How-it-works split — clean separation per audit recommendation:
- /methodology: the epistemology (what we claim, what we don't, the
  trust contract, when to NOT use VeracityAPI, the policy table by
  intended_use)
- /how-it-works: the technical pipeline (model, temperature, schema,
  scoring steps, threshold table with actual numbers, version
  changelog, v0.2 roadmap)
- The two pages now answer distinct questions for distinct readers

CSS — small visual hooks for the new EEAT components:
- .practitioner-note styled as hot-pink-bordered card (distinct from
  regular content cards)
- .byline and .author-bio styled for blog and use-case pages

Test fix — update benchmark-comparison test to accept the new blog
post count (4 new + 2 legacy = 6 posts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>