
Commit 1878c66

v8.2.0 Discriminating Epicurus — feature discovery, protocol separation, flowr 0.5
Release v8.2.0 (Discriminating Epicurus): feature discovery state, todo-driven execution with anchor protocol, convention boundary, skill/protocol separation (41 skills cleaned), flowr 0.5 JSON-first output adoption, @id format, silent quality gates.
1 parent 480da82 commit 1878c66

65 files changed

Lines changed: 604 additions & 243 deletions


.flowr/flows/discovery-flow.yaml

Lines changed: 25 additions & 2 deletions
```diff
@@ -1,5 +1,5 @@
 flow: discovery-flow
-version: 4.0.0
+version: 5.0.0
 exits:
   - complete

@@ -101,11 +101,34 @@ states:
       - delivery_order
       - quality_attributes
       - deployment
+    next:
+      done: feature-discovery
+      needs_reinterview: stakeholder-interview
+
+  - id: feature-discovery
+    attrs:
+      description: "PO synthesizes analysis artifacts into coherent feature boundaries with scoped business rules and constraints"
+      owner: PO
+      git: main
+    skills:
+      - discover-features
+    in:
+      - product_definition.md
+      - domain_model.md
+      - glossary.md
+      - interview-notes/*.md
+      - technical_design.md
+    out:
+      - features/<feature_name>.feature:
+          - title
+          - description
+          - rules_business
+          - constraints
     conditions:
       committed_to_main_locally:
         committed_to_main_locally: ==verified
     next:
       done:
         to: complete
         when: committed_to_main_locally
-      needs_reinterview: stakeholder-interview
+      needs_reinterview: stakeholder-interview
```
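
The new state's `out:` spec names four fields for each discovered feature file. As a rough, hypothetical sketch of how those fields might land in a `features/<feature_name>.feature` during discovery — the feature name, rule wording, and constraint comments below are invented for illustration; only the four field names come from the spec above:

```gherkin
# Hypothetical discovery-phase feature file — names and wording invented for
# illustration; only the four fields (title, description, rules_business,
# constraints) come from the state's out: spec.
Feature: Order cancellation                               # title
  A customer can cancel an order before it ships,
  releasing reserved stock and refunding payment.         # description

  # constraints (scoped during discovery):
  #   - cancellation is blocked once the order has shipped
  #   - refunds must complete within 5 business days

  # rules_business — coarse one-line hypotheses, refined later during breakdown:
  Rule: An order can be cancelled only while it is in a pre-shipment state
  Rule: Cancelling an order releases any stock reserved for it
  Rule: A cancelled prepaid order triggers a refund to the original payment method
```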

.flowr/flows/feature-development-flow.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -10,10 +10,10 @@ exits:
 states:
   - id: planning
     attrs:
-      description: "Plan feature scope, specification, breakdown, and BDD scenarios"
+      description: "Plan feature breakdown, BDD scenarios, and development readiness"
       git: main
       flow: planning-flow
-      flow-version: "^5"
+      flow-version: "^6"
     next:
       complete: development
       needs_architecture: needs_architecture
```

.flowr/flows/planning-flow.yaml

Lines changed: 9 additions & 27 deletions
```diff
@@ -1,5 +1,5 @@
 flow: planning-flow
-version: 5.0.0
+version: 6.0.0
 params: [feature_name]
 exits:
   - complete
@@ -9,7 +9,7 @@ exits:
 states:
   - id: feature-selection
     attrs:
-      description: "PO picks the next feature to develop based on business priority and delivery order, verifying that architecture covers it"
+      description: "PO selects the next feature — by delivery order for first feature, by WSJF for subsequent features"
       owner: PO
       git: main
     skills:
@@ -19,31 +19,13 @@ states:
       - technical_design.md
     out: []
     next:
-      selected: feature-specification
+      selected: feature-breakdown
       needs_architecture: needs_architecture
       no_features: no_features

-  - id: feature-specification
-    attrs:
-      description: "PO conducts a targeted conversation with stakeholders to capture feature-specific behavioral rules, scenarios, and acceptance criteria"
-      owner: PO
-      git: main
-    skills:
-      - specify-feature
-    in:
-      - product_definition.md
-      - domain_model.md
-      - glossary.md
-      - technical_design.md
-    out:
-      - interview-notes/<session>.md
-    next:
-      done: feature-breakdown
-      needs_architecture: needs_architecture
-
   - id: feature-breakdown
     attrs:
-      description: "PO decomposes the selected feature into Rule blocks (user stories) within the feature file based on specification interview and domain constraints"
+      description: "PO refines coarse Rules from discovery into full Rule blocks with INVEST validation, adding detail through targeted clarification"
       owner: PO
       git: main
     skills:
@@ -66,11 +48,11 @@ states:
         testable: ==acceptance_criteria_defined
     next:
       done:
-        to: bdd-features
+        to: feature-examples
         when: invest_passed
-      needs_respecification: feature-specification
+      needs_respecification: feature-breakdown

-  - id: bdd-features
+  - id: feature-examples
     attrs:
       description: "PO writes concrete Given/When/Then Example blocks for each Rule in the feature file using ubiquitous language from the glossary"
       owner: PO
@@ -108,7 +90,7 @@ states:
       done:
         to: create-py-stubs
         when: examples_complete
-      needs_respecification: feature-specification
+      needs_respecification: feature-breakdown

   - id: create-py-stubs
     attrs:
@@ -168,4 +150,4 @@ states:
         to: complete
         when:
           - feature_baselined
-          - committed_to_main_locally
+          - committed_to_main_locally
```
Lines changed: 40 additions & 0 deletions
```diff
@@ -0,0 +1,40 @@
+---
+domain: requirements
+tags: [feature-discovery, story-mapping, backlog-creation, gap-analysis]
+last-updated: 2026-05-04
+---
+
+# Feature Discovery
+
+## Key Takeaways
+
+- Feature discovery synthesizes multiple analysis artifacts (domain model, event map, interview notes, delivery order, technical design) into coherent feature boundaries with scoped business rules. It is a genuine analysis step, not mechanical transcription.
+- Each feature captures coarse business rules — one-line statements of behavior that the feature must enforce or enable. These are behavioral hypotheses to be validated and refined during breakdown.
+- The PO must identify feature boundaries that respect bounded context borders, aggregate transactional boundaries, and module dependency order. Features that span aggregate boundaries or cross dependency lines are flagged for splitting.
+- Gaps discovered during feature discovery (a bounded context with no feature, a quality attribute with no enforcing feature, a domain event with no corresponding rule) are flagged, not silently filled.
+- When artifacts are ambiguous, contradictory, or incomplete, the PO asks targeted clarification questions using the same interview techniques (CIT, laddering) as discovery interviews, but scoped to the specific feature boundary or rule under consideration.
+- Features enter `Status: ELICITING` during discovery and advance to `BASELINED` after planning (breakdown, example writing, and baseline confirmation).
+
+## Concepts
+
+**Feature Boundary Identification**: Deciding where one feature ends and another begins is a design judgment, not a mechanical step. Bounded contexts provide coarse boundaries, but the PO must decide granularity — too coarse and the feature is unmanageable; too fine and you lose cohesion. Patton (2014) recommends mapping the user's narrative flow as a backbone, then slicing vertically into releasable increments. Each slice should be independently deliverable and testable. Cross-reference the domain model's aggregate boundaries and the delivery order's dependency graph to validate that each feature is self-contained.
+
+**Rule Discovery as Hypothesis**: Coarse rules are hypotheses about what the system must do, derived by cross-referencing three sources: domain events ("what must happen when X occurs"), entity invariants ("what must always be true about Y"), and stakeholder goals ("what the user needs to accomplish"). These hypotheses will be validated and refined during breakdown. This two-phase approach — coarse hypotheses during discovery, validated rules during breakdown — prevents premature commitment to story-level detail while ensuring comprehensive coverage across the whole product before any single feature is developed (Cohn, 2004; Patton, 2014).
+
+**Targeted Clarification During Discovery**: When synthesizing analysis artifacts into feature boundaries, gaps and contradictions naturally emerge. A delivery step may map to multiple aggregates with unclear ownership. An entity invariant may contradict what the interview notes say. A quality attribute may have no obvious enforcing mechanism. These are not failures of earlier interviews — they are expected consequences of zooming from domain-level understanding to feature-level specificity. Targeted questions use the same techniques as discovery interviews (CIT for specific failure incidents, laddering for "why does this matter?") but are narrower, focused on resolving a specific boundary question rather than exploring the whole domain.
+
+**Gap Analysis**: Systematically verify coverage across three dimensions: (1) every bounded context from the domain model is covered by at least one feature, (2) every quality attribute from the product definition is enforced by at least one feature's constraints, and (3) every critical domain event is traceable to at least one business rule. Uncovered areas indicate missing features or gaps in the domain model itself — flag both.
+
+**Feature Lifecycle**: Features follow a lifecycle of increasing specificity across phases:
+1. **Discovery**: Feature boundaries identified, coarse business rules written, constraints scoped. Status: ELICITING.
+2. **Breakdown**: Coarse rules expanded into full Rule blocks with As a/I want/So that format. INVEST validation applied. Targeted clarification may refine rules. Status remains ELICITING.
+3. **Example Writing and Baseline**: Given/When/Then Examples written, pre-mortems applied, baseline confirmed. Status advances to BASELINED.
+
+## Related
+
+- [[requirements/invest]] — story quality criteria applied during breakdown
+- [[requirements/wsjf]] — feature prioritization applied to BASELINED features
+- [[requirements/gherkin]] — Examples written during planning
+- [[requirements/interview-techniques]] — interview methods used during discovery and clarification
+- [[requirements/decomposition]] — splitting Rules during breakdown
+- [[requirements/pre-mortem]] — adversarial analysis applied during breakdown
```
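
To make the lifecycle described in this new file concrete, here is a hypothetical walk-through of a single rule across the three phases — the wording is invented for illustration; the phase structure, user-story format, and `@id:3a7f1b2c` example value follow this commit's own conventions:

```gherkin
# Hypothetical walk-through of one rule across the lifecycle phases described
# above — wording invented for illustration; the phase structure, user-story
# format, and @id convention come from the knowledge files in this commit.

# Phase 1 — Discovery (Status: ELICITING): coarse one-line rule
Rule: A cancelled prepaid order triggers a refund

# Phase 2 — Breakdown (Status: ELICITING): full Rule block, INVEST-validated
Rule: A cancelled prepaid order triggers a refund
  As a customer
  I want my payment returned when I cancel before shipment
  So that I am not charged for goods I never receive

# Phase 3 — Example writing and baseline (Status: BASELINED): declarative Example with @id
  @id:3a7f1b2c
  Example: Cancelling a prepaid order refunds the original payment
    Given a prepaid order that has not shipped
    When the customer cancels the order
    Then the payment is refunded to the original payment method
```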

.opencode/knowledge/requirements/gherkin.md

Lines changed: 6 additions & 5 deletions
```diff
@@ -9,7 +9,7 @@ last-updated: 2026-04-29
 ## Key Takeaways

 - Write declarative Examples that describe behaviour, not UI steps; use `Example:` not `Scenario:` (BDD — North, 2006).
-- Each Example must have an `@id` tag (format `@id:<unique-id>`) for traceability from test to acceptance criterion.
+- Each Example must have an `@id` tag (format `@id:<unique-id>`, e.g. 8-char hex like `@id:3a7f1b2c`) for traceability from test to acceptance criterion.
 - `Then` must be a single, observable, measurable outcome; no "and" combining multiple behaviours in one `Then`.
 - Bug Examples use `@bug` and require both a specific feature test and a Hypothesis property test.
 - After criteria commit, Examples are frozen; changes require `@deprecated` on the old Example and a new Example with a new `@id`.
@@ -18,7 +18,7 @@ last-updated: 2026-04-29

 **Declarative vs Imperative Gherkin**: Declarative Examples describe behaviour, not UI steps (BDD — North, 2006). "Given a registered user Bob / When Bob logs in / Then Bob sees a personalized welcome" is correct. "Given I type 'bob' in the username field / When I click the Login button / Then I see 'Welcome, Bob'" is imperative and wrong. Declarative Examples express what the user observes, not how the system implements it.

-**Example Format and @id Tags**: Each Example uses the `Example:` keyword (not `Scenario:`), includes `Given/When/Then` in plain English, and must have an `@id` tag for traceability. The format is `@id:<unique-id>` where the unique ID is assigned when the feature is baselined. Each Example must be observably distinct from every other Example in the same Rule.
+**Example Format and @id Tags**: Each Example uses the `Example:` keyword (not `Scenario:`), includes `Given/When/Then` in plain English, and must have an `@id` tag for traceability. The format is `@id:<unique-id>` (e.g., 8-char hex like `@id:3a7f1b2c`). IDs are assigned during example writing if not already set; the agent respects the existing format if present. Stakeholder may define a different ID format — agents must honour the established convention. Each Example must be observably distinct from every other Example in the same Rule.

 **Single Observable Outcome per Then**: `Then` must be a single, observable, measurable outcome. No "and" combining multiple behaviours in one `Then` — split into separate Examples instead. Observable means observable by the end user, not by a test harness.

@@ -47,14 +47,15 @@ last-updated: 2026-04-29

 ### @id Tag Format

-- Format: `@id:<unique-id>`
-- Assigned when the feature is baselined
+- Format: `@id:<unique-id>` (e.g., 8-char hex like `@id:3a7f1b2c`)
+- Assigned during example writing if not already set; respects existing format if present
+- Stakeholder may define a different ID format — agents must honour the established convention
 - Globally unique across all feature files
 - Enables traceability from test to acceptance criterion

 ### Frozen Examples Rule

-After criteria commit, Examples are frozen. This rule is stated explicitly in the feature template and enforced by the `bdd-features` conditions in the planning flow:
+After criteria commit, Examples are frozen. This rule is enforced by the flow's example-writing conditions:

 - `all_examples_have_ids: ==true`
 - `all_examples_have_gherkin: ==true`
```
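
Taken together, the @id and frozen-Example rules look roughly like this in a feature file — a minimal sketch reusing the document's own "registered user Bob" example; the Rule wording and the replacement @id value are invented:

```gherkin
# Minimal sketch of the @id and frozen-Example conventions — Rule wording and
# the replacement @id value are invented; the @deprecated/new-@id mechanics
# follow the rules stated above.
Rule: A registered user sees a personalized welcome after logging in

  @deprecated @id:3a7f1b2c
  Example: Registered user sees a welcome message
    Given a registered user Bob
    When Bob logs in
    Then Bob sees a welcome message

  @id:9c42de10
  Example: Registered user sees a personalized welcome
    Given a registered user Bob
    When Bob logs in
    Then Bob sees a personalized welcome
```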

.opencode/knowledge/requirements/interview-techniques.md

Lines changed: 1 addition & 3 deletions
```diff
@@ -28,9 +28,7 @@ last-updated: 2026-04-29

 **Funnel Technique**: Start with broad open-ended questions before narrowing to specifics. Priming bias (Tversky & Kahneman, 1974) is structural: any category name the interviewer introduces activates a schema that filters what the interviewee considers worth reporting. The funnel sequences questions so the interviewee's own categories emerge first.

-**Discovery Interview Structure**: Discovery interviews follow a three-level funnel aligned with the Funnel technique. Level 1 — General: seven standard questions (Who, What, Why, When/Where, Success, Failure, Out-of-scope) establish the big picture. Level 2 — Cross-cutting: behaviour groups, bounded contexts, integration points, and lifecycle events structure the domain. Level 3 — Feature identification: feature names and rough boundaries are identified; detailed feature specification (stories, criteria) happens later during planning interviews, not here. The purpose of the discovery interview is to understand the domain and identify what features exist, not to specify each feature in detail.
-
-**Feature Specification Interview**: Planning interviews focus on one feature at a time. The goal is to elicit behavioral rules and scenarios that will become user stories and acceptance criteria. CIT is used to probe for specific past failures related to this feature — "Tell me about a time when [feature behavior] went wrong." Laddering is used when a stated requirement lacks a clear scenario — "Why is that important? What would break if it weren't available?" A pre-mortem is applied when hidden failure modes are suspected. Feature specification interviews are narrower and deeper than discovery interviews — they already know what the feature is and need to define exactly how it behaves.
+**Discovery Interview Structure**: Discovery interviews follow a three-level funnel aligned with the Funnel technique. Level 1 — General: seven standard questions (Who, What, Why, When/Where, Success, Failure, Out-of-scope) establish the big picture. Level 2 — Cross-cutting: behaviour groups, bounded contexts, integration points, and lifecycle events structure the domain. Level 3 — Feature identification: feature names and rough boundaries are identified; detailed feature specification (stories, criteria) happens later, not here. The purpose of the discovery interview is to understand the domain and identify what features exist, not to specify each feature in detail.

 ## Content

```

.opencode/knowledge/requirements/invest.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -12,7 +12,7 @@ last-updated: 2026-04-29
 - Each letter has a specific FAIL action: split or reorder dependencies (I), remove over-specification (N), reframe or drop (V), split or add discovery (E), split into smaller Rules (S), rewrite with observable outcomes (T).
 - Common mistakes: "As the system, I want..." has no business value; stories containing "and" should be split into two Rules; duplicate stories should be merged or differentiated.
 - Self-declare INVEST-I, INVEST-V, INVEST-S, and INVEST-T before committing stories; every DISAGREE is a hard blocker.
-- In the planning flow, the `invest_passed` condition on `feature-breakdown.done` requires all six letters to be `==true`.
+- In the flow, the INVEST condition on the breakdown-done transition requires all six letters to be `==true`.

 ## Concepts

@@ -22,7 +22,7 @@ last-updated: 2026-04-29

 **Self-Declaration**: Before committing stories, declare INVEST-I (each Rule is Independent), INVEST-V (each Rule delivers Value to a named user), INVEST-S (each Rule is Small enough for one development cycle), and INVEST-T (each Rule is Testable). Every DISAGREE is a hard blocker — fix before committing.

-**Flow Condition Gate**: The `invest_passed` condition on the `feature-breakdown.done` transition requires `independent: ==true`, `negotiable: ==true`, `valuable: ==true`, `estimable: ==true`, `small: ==true`, and `testable: ==true`. All six must pass before the flow advances to BDD features.
+**Flow Condition Gate**: The INVEST condition on the breakdown-done transition requires `independent: ==true`, `negotiable: ==true`, `valuable: ==true`, `estimable: ==true`, `small: ==true`, and `testable: ==true`. All six must pass before the flow advances to Example writing.

 ## Content

@@ -57,7 +57,7 @@ Every DISAGREE is a hard blocker — must be fixed before committing.

 ### Flow Condition Gate

-In `planning-flow.yaml`, the `feature-breakdown.done` transition is guarded by `when: invest_passed`, which requires:
+In the flow, the breakdown-done transition is guarded by an INVEST condition, which requires:

 ```yaml
 invest_passed:
````

.opencode/knowledge/requirements/moscow.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -11,7 +11,7 @@ last-updated: 2026-04-29
 - Classify each candidate Example as Must (required for correctness), Should (high value but deferrable), or Could (nice-to-have edge case). This classification is for internal triage only — it must NOT appear as Gherkin tags or in the .feature file.
 - If Musts alone exceed 8 Examples or the Rule spans more than 2 concerns, split the Rule immediately.
 - Musts cannot exceed 60% of total effort at the story level (DSDM); if a story has 12 Examples and only 3 are Musts, the remaining 9 can be deferred.
-- MoSCoW triage is applied during criteria writing (planning-flow `bdd-features` state), not during discovery.
+- MoSCoW triage is applied when writing Examples (after INVEST qualification and pre-mortem analysis), not during discovery.

 ## Concepts

@@ -46,7 +46,7 @@ At the story level, Musts should not exceed 60% of total effort (DSDM). If a sto

 ### When to Apply

-MoSCoW triage is applied during criteria writing in the `bdd-features` state of the planning flow, after INVEST qualification in `feature-breakdown` and pre-mortem analysis. Each candidate Example receives a Must/Should/Could classification for internal triage — to decide which Examples to include and which to defer. MoSCoW labels must NOT appear as Gherkin tags, in `@id` tags, or anywhere in the .feature file.
+MoSCoW triage is applied when writing Examples, after INVEST qualification and pre-mortem analysis. Each candidate Example receives a Must/Should/Could classification for internal triage — to decide which Examples to include and which to defer. MoSCoW labels must NOT appear as Gherkin tags, in `@id` tags, or anywhere in the .feature file.

 ## Related

```
