Task Decomposition Specification

Version: 1.1.0
Status: DRAFT
Created: 2025-12-19
Updated: 2025-12-30
Target: v0.22.0+
Implementation Report: TASK-DECOMPOSITION-SPEC-IMPLEMENTATION-REPORT.md
v1.1.0 CHANGE: Refactored to be implementation-agnostic. Removed Bash-specific code, retained algorithms as pseudocode. This spec is now AUTHORITATIVE for decomposition principles, applicable to any implementation language (Python, Bash, TypeScript, etc.).

Part 1: Preamble

1.1 Purpose

This specification defines an LLM-agent-first task decomposition system for transforming high-level user requests into atomic, executable tasks. The system produces validated task DAGs (Directed Acyclic Graphs) with dependency relationships suitable for parallel or sequential agent execution.

1.2 Authority

This specification is AUTHORITATIVE for:
  • Task decomposition algorithm phases
  • Atomicity criteria and validation
  • Dependency graph construction rules
  • Challenge protocol for decomposition validation
  • CLI integration (decompose command)
  • JSON output schemas for decomposition results
This specification DEFERS TO:

Part 2: RFC 2119 Conformance

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC 2119] [RFC 8174] when, and only when, they appear in all capitals, as shown here.

Part 3: Executive Summary

3.1 Problem Statement

LLM agents struggle with:
  1. Scope explosion: Large tasks exceed context windows
  2. Hallucinated dependencies: Assuming ordering that doesn’t exist
  3. Non-atomic tasks: Tasks that require hidden sub-decisions
  4. Parallel blindness: Treating everything as sequential

3.2 Solution Architecture

A 4-phase decomposition pipeline with adversarial validation:
HUMAN INPUT (request)

┌──────────────────────────────────────┐
│ PHASE 1: SCOPE ANALYZER              │
│ Complexity assessment, HITL gates    │
└──────────────────┬───────────────────┘

┌──────────────────────────────────────┐
│ PHASE 2: GOAL DECOMPOSER (HTN)       │
│ Recursive breakdown, atomicity check │
│ ← CHALLENGE: Valid decomposition?    │
└──────────────────┬───────────────────┘

┌──────────────────────────────────────┐
│ PHASE 3: DEPENDENCY GRAPH BUILDER    │
│ DAG construction, cycle detection    │
│ ← CHALLENGE: Dependencies real?      │
└──────────────────┬───────────────────┘

┌──────────────────────────────────────┐
│ PHASE 4: TASK SPECIFIER              │
│ Schema-compliant output generation   │
│ ← CHALLENGE: Truly atomic?           │
└──────────────────┬───────────────────┘

OUTPUT: Task DAG (todo.schema.json compliant)

3.3 Core Principles

| Principle | Requirement |
|-----------|-------------|
| No Time Estimates | Size by scope (small/medium/large), NEVER hours/days |
| Evidence-Based Dependencies | Dependencies MUST be provable, not assumed |
| Atomicity Threshold | Tasks MUST pass 6-point atomicity criteria |
| Adversarial Validation | Each phase MUST survive challenge |
| HITL Gates | Ambiguity triggers human decision points |
| LLM-Agent-First Output | JSON with _meta envelope, structured errors |

Part 4: Atomicity Criteria (AUTHORITATIVE)

4.1 Six-Point Atomicity Test

A task is ATOMIC if and only if ALL of the following are TRUE:
| # | Criterion | Test | Failure Indicator |
|---|-----------|------|-------------------|
| 1 | Single File Scope | Affects ≤3 tightly-coupled files | Changes 4+ unrelated files |
| 2 | Single Cognitive Concern | One "thing" to understand | Requires context switching |
| 3 | Clear Acceptance Criteria | Testable completion condition | "It works" is not testable |
| 4 | No Context Switching | Can complete in one focus session | Requires waiting for external input |
| 5 | No Hidden Sub-Decisions | All choices made at decomposition time | Agent must ask HITL during execution |
| 6 | Programmatic Validation | Result verifiable by code/test | Requires subjective human judgment |

4.2 Atomicity Score Calculation

atomicity_score = (passed_criteria / 6) * 100

IF atomicity_score < 100:
    task.requiresDecomposition = true
    task.failedCriteria = [list of failed criterion IDs]
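A minimal Python sketch of this scoring rule (the criterion IDs and function name are illustrative, not normative; a real implementation would derive the passed set by inspecting files, acceptance criteria, and so on):

```python
# Criterion IDs mirror the six-point test in Part 4.1 (names illustrative).
CRITERIA = [
    "single_file_scope",
    "single_cognitive_concern",
    "clear_acceptance_criteria",
    "no_context_switching",
    "no_hidden_sub_decisions",
    "programmatic_validation",
]

def atomicity_score(passed: set) -> dict:
    """Score a task given the set of criterion IDs it passed (Part 4.2)."""
    score = round(len(passed & set(CRITERIA)) / len(CRITERIA) * 100)
    return {
        "atomicityScore": score,
        "requiresDecomposition": score < 100,
        "failedCriteria": [c for c in CRITERIA if c not in passed],
    }
```

A task passing only 3 of 6 criteria scores 50 and is flagged for decomposition with its failed criterion IDs listed.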

4.3 Size-to-Atomicity Mapping

| Size | File Scope | Atomicity Expectation |
|------|------------|-----------------------|
| small | 1-2 files | MUST be atomic (score = 100) |
| medium | 3-7 files | SHOULD be atomic, MAY need review |
| large | 8+ files | MUST NOT be atomic, MUST decompose |

4.4 Examples

ATOMIC (score = 100):
title: "Add validation to email input field"
files: ["src/components/EmailInput.tsx"]
acceptance: ["Email regex validates format", "Error message displays on invalid input"]
NOT ATOMIC (score = 50):
title: "Implement authentication system"
files: ["src/auth/*", "src/api/*", "src/db/*"]  # Criterion 1 FAIL
acceptance: ["Users can log in"]  # Criterion 3 FAIL (not specific)
hidden_decisions: ["OAuth vs JWT?", "Session storage?"]  # Criterion 5 FAIL

Part 5: Phase 1 - Scope Analyzer

5.1 Purpose

Assess input complexity and determine decomposition strategy before work begins.

5.2 Inputs

interface ScopeInput {
  request: string;           // Natural language or structured
  context?: {
    codebase?: string;       // Project root path
    existingTasks?: Task[];  // Current task list for dedup
    phase?: string;          // Current project phase
  };
}

5.3 Algorithm

FUNCTION analyze_scope(input: ScopeInput) -> ScopeAssessment:

    # Step 1: Entity extraction
    entities = extract_entities(input.request)
    # entities: { files: [], components: [], concepts: [], actions: [] }

    # Step 2: Complexity scoring
    complexity = {
        file_count: estimate_files(entities),
        component_count: count_distinct(entities.components),
        reasoning: assess_reasoning_complexity(input.request),
        ambiguity: detect_ambiguities(input.request)
    }

    # Step 3: Classification
    IF complexity.file_count > 10 OR complexity.component_count > 3:
        classification = "epic"
        requires_decomposition = true
    ELSE IF complexity.file_count <= 2 AND complexity.reasoning == "trivial":
        classification = "subtask"
        requires_decomposition = false
    ELSE:
        classification = "task"
        requires_decomposition = complexity.file_count > 3

    # Step 4: HITL gate check
    IF complexity.ambiguity.count > 0:
        hitl_required = true
        hitl_questions = complexity.ambiguity.items
    ELSE:
        hitl_required = false

    RETURN ScopeAssessment {
        classification,
        requires_decomposition,
        complexity,
        hitl_required,
        hitl_questions
    }
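The classification step (Step 3) can be sketched as follows (illustrative Python; thresholds are those in the algorithm above):

```python
def classify(file_count: int, component_count: int, reasoning: str) -> tuple:
    """Step 3 of analyze_scope: returns (classification, requires_decomposition)."""
    if file_count > 10 or component_count > 3:
        return ("epic", True)
    if file_count <= 2 and reasoning == "trivial":
        return ("subtask", False)
    # Everything in between is a task; decompose only if it spans 4+ files.
    return ("task", file_count > 3)
```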

5.4 Output Schema

{
  "$schema": "https://cleo.dev/schemas/v1/decomposition/scope.schema.json",
  "_meta": {
    "phase": "scope-analysis",
    "version": "1.0.0",
    "timestamp": "2025-12-19T10:00:00Z"
  },
  "input": {
    "request": "Implement user authentication with OAuth",
    "requestHash": "sha256:abc123..."
  },
  "assessment": {
    "classification": "epic",
    "requiresDecomposition": true,
    "complexity": {
      "fileCount": 15,
      "componentCount": 4,
      "reasoning": "high",
      "domains": ["auth", "db", "api", "ui"]
    },
    "ambiguities": [
      {
        "id": "AMB-001",
        "question": "Which OAuth providers should be supported?",
        "severity": "blocking",
        "options": ["Google only", "Google + GitHub", "All major providers"]
      }
    ],
    "hitlRequired": true
  }
}

5.5 Exit Codes

| Code | Constant | Meaning |
|------|----------|---------|
| 0 | EXIT_SUCCESS | Scope analysis complete |
| 2 | EXIT_INVALID_INPUT | Request is empty or malformed |
| 30 | EXIT_HITL_REQUIRED | Ambiguities require human input |

Part 6: Phase 2 - Goal Decomposer

6.1 Purpose

Recursively decompose goals into atomic tasks using HTN-inspired methods.

6.2 Decomposition Methods (AUTHORITATIVE)

Each goal type has defined decomposition methods:
| Goal Pattern | Method | Subtask Template |
|--------------|--------|------------------|
| implement_feature | Feature Decomposition | [schema, api, logic, ui, tests] |
| fix_bug | Bug Fix Decomposition | [reproduce, diagnose, fix, verify] |
| refactor_code | Refactor Decomposition | [analyze, plan, execute, verify] |
| add_command | CLI Command Decomposition | [script, help, tests, docs] |
| update_schema | Schema Migration | [schema, migration, validation, update-code] |
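Method selection can be sketched as a lookup table (illustrative Python; the generic fallback's templates are an assumption, since the spec names `generic_decomposition_method()` without defining its shape):

```python
# Patterns and templates from the 6.2 table.
METHODS = {
    "implement_feature": ["schema", "api", "logic", "ui", "tests"],
    "fix_bug": ["reproduce", "diagnose", "fix", "verify"],
    "refactor_code": ["analyze", "plan", "execute", "verify"],
    "add_command": ["script", "help", "tests", "docs"],
    "update_schema": ["schema", "migration", "validation", "update-code"],
}

# Assumed fallback shape for goals matching no known pattern.
GENERIC_TEMPLATES = ["analyze", "implement", "verify"]

def select_method(pattern: str) -> list:
    """Return subtask templates for a goal pattern, or the generic fallback."""
    return METHODS.get(pattern, GENERIC_TEMPLATES)
```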

6.3 Algorithm

FUNCTION decompose_goal(goal: Goal, depth: int = 0) -> TaskTree:

    # Depth guard (per TASK-HIERARCHY-SPEC)
    IF depth >= 3:
        WARN("Max depth reached, forcing atomic")
        RETURN create_atomic_task(goal)

    # Atomicity check
    atomicity = evaluate_atomicity(goal)
    IF atomicity.score == 100:
        RETURN create_atomic_task(goal)

    # Select decomposition method
    method = select_method(goal.pattern)
    IF method == null:
        # Fallback: generic decomposition
        method = generic_decomposition_method()

    # Apply method
    subtasks = []
    FOR template IN method.subtask_templates:
        subtask_goal = instantiate_template(template, goal)
        subtask_tree = decompose_goal(subtask_goal, depth + 1)
        subtasks.append(subtask_tree)

    # Sibling limit check (per TASK-HIERARCHY-SPEC)
    IF len(subtasks) > 7:
        subtasks = group_into_intermediates(subtasks, max_group=7)

    RETURN TaskTree {
        root: goal,
        children: subtasks,
        method: method.name
    }
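The sibling-limit guard can be sketched as a chunking step (illustrative Python; the intermediate-node shape is an assumption, since the spec does not define what `group_into_intermediates` emits):

```python
def group_into_intermediates(subtasks: list, max_group: int = 7) -> list:
    """If a parent would have more than max_group children, chunk them
    under synthetic intermediate nodes so no sibling list exceeds the limit."""
    if len(subtasks) <= max_group:
        return subtasks
    return [
        {"intermediate": True, "children": subtasks[i:i + max_group]}
        for i in range(0, len(subtasks), max_group)
    ]
```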

6.4 Challenge Protocol (Phase 2)

Per CONSENSUS-FRAMEWORK-SPEC Part 5.5, decomposition MUST be challenged.

Challenge Questions:
  1. “Can any of these subtasks be merged without losing clarity?”
  2. “Are there missing tasks required to achieve the goal?”
  3. “Does each subtask have clear, distinct acceptance criteria?”
  4. “Is the decomposition method appropriate for this goal pattern?”
Challenge Agent Evidence Standard:
  • Logical counter-argument with specific subtask reference
  • OR counter-example showing merge/split improves outcome
Verdict Thresholds:
| Verdict | Condition |
|---------|-----------|
| VALID | Challenge Agent finds no substantive issues |
| NEEDS_REVISION | Challenge Agent identifies fixable issues |
| REJECTED | Decomposition fundamentally flawed, restart |

6.5 Output Schema

{
  "$schema": "https://cleo.dev/schemas/v1/decomposition/goals.schema.json",
  "_meta": {
    "phase": "goal-decomposition",
    "version": "1.0.0",
    "method": "feature_decomposition",
    "depth": 2,
    "challengeStatus": "validated"
  },
  "goalTree": {
    "id": "G001",
    "title": "Implement user authentication",
    "type": "epic",
    "atomicityScore": 33,
    "children": [
      {
        "id": "G002",
        "title": "Create user schema",
        "type": "task",
        "atomicityScore": 100,
        "children": []
      },
      {
        "id": "G003",
        "title": "Implement login API",
        "type": "task",
        "atomicityScore": 83,
        "children": [
          {
            "id": "G004",
            "title": "Add /auth/login endpoint",
            "type": "subtask",
            "atomicityScore": 100,
            "children": []
          }
        ]
      }
    ]
  },
  "challenge": {
    "agent": "requirements-analyst",
    "verdict": "VALID",
    "findings": [],
    "timestamp": "2025-12-19T10:05:00Z"
  }
}

Part 7: Phase 3 - Dependency Graph Builder

7.1 Purpose

Construct a validated DAG from the goal tree, identifying true dependencies and parallel opportunities.

7.2 Dependency Types (AUTHORITATIVE)

| Type | Detection Method | Evidence Required |
|------|------------------|-------------------|
| Explicit | Keywords: "after", "requires", "depends on" | Present in goal description |
| Data Flow | Output of A is input of B | Schema/type analysis |
| File Conflict | Both modify same file | File path intersection |
| API Contract | B calls API defined in A | Code/interface analysis |
| Semantic | Logical ordering (schema before queries) | Domain knowledge |

7.3 Algorithm

FUNCTION build_dependency_graph(goal_tree: TaskTree) -> DAG:

    # Flatten tree to node list
    nodes = flatten_to_leaves(goal_tree)

    # Initialize empty edge set
    edges = []

    FOR each node_a IN nodes:
        FOR each node_b IN nodes WHERE node_a != node_b:

            # Check each dependency type
            dependency = detect_dependency(node_a, node_b)

            IF dependency.exists:
                edge = Edge {
                    from: node_a.id,
                    to: node_b.id,
                    type: dependency.type,
                    evidence: dependency.evidence
                }
                edges.append(edge)

    # Construct DAG
    dag = DAG(nodes, edges)

    # Validate: no cycles
    IF has_cycle(dag):
        cycles = find_cycles(dag)
        RETURN Error {
            code: "E_CIRCULAR_REFERENCE",
            exitCode: 14,
            cycles: cycles
        }

    # Optimize: identify parallel groups
    dag.parallelGroups = compute_parallel_groups(dag)

    # Compute execution order
    dag.executionOrder = topological_sort(dag)

    RETURN dag
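The acyclicity check, execution order, and parallel groups above can be sketched with Kahn's algorithm. This is an illustrative Python sketch, not a normative implementation; it takes edges as (from, to) pairs and assigns each node a parallel group equal to 1 plus the length of its longest predecessor chain:

```python
from collections import deque

def order_and_group(nodes: list, edges: list) -> dict:
    """Topological sort (Kahn's algorithm) with cycle detection.

    nodes: node IDs; edges: (a, b) pairs meaning "a must complete
    before b". Returns execution order and parallel groups, or a
    cycle flag mirroring E_CIRCULAR_REFERENCE.
    """
    succs = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}
    for a, b in edges:
        succs[a].append(b)
        indegree[b] += 1

    level = {n: 1 for n in nodes}          # parallel group number
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succs[n]:
            level[m] = max(level[m], level[n] + 1)
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)

    if len(order) < len(nodes):            # unprocessed nodes sit on a cycle
        return {"hasCycle": True}

    groups = {}
    for n in order:
        groups.setdefault(level[n], []).append(n)
    return {
        "hasCycle": False,
        "executionOrder": order,
        "parallelGroups": [groups[k] for k in sorted(groups)],
    }
```

Under this grouping, a node with no dependencies lands in group 1 and the number of groups equals the critical path length; a scheduler MAY still defer an independent task to a later group for capacity reasons.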

7.4 Challenge Protocol (Phase 3)

Challenge Questions:
  1. “Is dependency X→Y proven or merely assumed?”
  2. “Can task A actually run in parallel with B?”
  3. “Are there hidden shared-state conflicts?”
  4. “What happens if task C fails - what are the ripple effects?”
Evidence Standard for Dependencies:
| Claim | Required Evidence |
|-------|-------------------|
| "A must complete before B" | Data flow proof OR explicit requirement |
| "A and B can run in parallel" | No shared state, no ordering requirement |
| "A blocks B on failure" | Error propagation path identified |

7.5 Output Schema

{
  "$schema": "https://cleo.dev/schemas/v1/decomposition/dag.schema.json",
  "_meta": {
    "phase": "dependency-graph",
    "version": "1.0.0",
    "nodeCount": 5,
    "edgeCount": 4,
    "maxParallelism": 2,
    "criticalPathLength": 3,
    "challengeStatus": "validated"
  },
  "nodes": [
    {"id": "T001", "title": "Create user schema", "depth": 0},
    {"id": "T002", "title": "Add login endpoint", "depth": 1},
    {"id": "T003", "title": "Add logout endpoint", "depth": 1},
    {"id": "T004", "title": "Write auth tests", "depth": 2},
    {"id": "T005", "title": "Update API docs", "depth": 2}
  ],
  "edges": [
    {"from": "T001", "to": "T002", "type": "data_flow", "evidence": "User model required"},
    {"from": "T001", "to": "T003", "type": "data_flow", "evidence": "User model required"},
    {"from": "T002", "to": "T004", "type": "semantic", "evidence": "Tests require implementation"},
    {"from": "T003", "to": "T004", "type": "semantic", "evidence": "Tests require implementation"}
  ],
  "parallelGroups": [
    {"group": 1, "tasks": ["T001"]},
    {"group": 2, "tasks": ["T002", "T003"]},
    {"group": 3, "tasks": ["T004", "T005"]}
  ],
  "executionOrder": ["T001", "T002", "T003", "T004", "T005"],
  "challenge": {
    "agent": "requirements-analyst",
    "verdict": "VALID",
    "findings": [
      {
        "type": "optimization",
        "message": "T005 (docs) could run in parallel with T002/T003",
        "action": "accepted",
        "newEdge": null
      }
    ]
  }
}

Part 8: Phase 4 - Task Specifier

8.1 Purpose

Generate schema-compliant task objects ready for insertion into cleo.

8.2 Field Mapping

| DAG Field | todo.schema.json Field | Transformation |
|-----------|------------------------|----------------|
| node.id | id | Prefix with "T", pad to 3+ digits |
| node.title | title | Validate length ≤120 chars |
| goal.type | type | Map: epic/task/subtask |
| parent_node.id | parentId | Use parent's task ID |
| node.atomicityScore | size | 100→small, ≥80→medium, else→large (per Part 4.3, only atomic tasks are small) |
| edges[to=node] | depends | Collect all incoming edge sources |
| goal.acceptance | acceptance | Array of testable criteria |

8.3 Algorithm

FUNCTION specify_tasks(dag: DAG, phase: string) -> Task[]:

    tasks = []
    id_counter = get_next_task_id()  # From existing todo.json

    FOR node IN dag.executionOrder:

        # Generate ID
        task_id = format_id(id_counter++)

        # Map fields
        task = {
            id: task_id,
            title: truncate(node.title, 120),
            status: "pending",
            priority: infer_priority(node),
            type: node.type,
            parentId: get_parent_id(node, dag),
            size: atomicity_to_size(node.atomicityScore),
            phase: phase,
            description: node.description,
            files: node.files or [],
            acceptance: node.acceptance or [],
            depends: get_dependencies(node, dag, id_mapping),
            createdAt: now_iso8601(),
            labels: ["decomposed", f"decomposition:{decomposition_id}"],
            # v1.1.0: Spec traceability (optional, from ct-decompose)
            associations: {
                specRequirement: node.sourceRequirement,  # e.g., "REQ-001"
                acceptanceCriteria: node.sourceAC or []   # e.g., ["AC-001", "AC-002"]
            } IF node.sourceRequirement ELSE null
        }

        # Validate against schema
        validation = validate_schema(task, "todo.schema.json")
        IF NOT validation.valid:
            RETURN Error {
                code: "E_VALIDATION_SCHEMA",
                exitCode: 6,
                task: task_id,
                errors: validation.errors
            }

        tasks.append(task)

    RETURN tasks
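The ID formatting, size mapping, and dependency collection used above can be sketched as follows (illustrative Python; the ≥80 threshold is an assumption consistent with Part 4.3, where only fully atomic tasks qualify as small):

```python
def format_id(counter: int) -> str:
    """Prefix with 'T' and zero-pad to at least 3 digits (T001, T042, T1024)."""
    return f"T{counter:03d}"

def atomicity_to_size(score: int) -> str:
    """Map atomicity score to size; thresholds illustrative, ordered so
    only fully atomic tasks (score 100) qualify as small (Part 4.3)."""
    if score == 100:
        return "small"
    if score >= 80:
        return "medium"
    return "large"

def get_dependencies(node_id: str, edges: list, id_mapping: dict) -> list:
    """depends = sources of all edges pointing at this node (Part 8.2),
    translated to their newly assigned task IDs."""
    return [id_mapping[e["from"]] for e in edges if e["to"] == node_id]
```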

8.4 Challenge Protocol (Phase 4)

Challenge Questions:
  1. “Can an LLM agent complete this task without HITL intervention?”
  2. “Is the scope truly minimal or can it be decomposed further?”
  3. “Are the acceptance criteria actually testable by code?”
  4. “Is the size classification accurate?”
Atomicity Verification Checklist:
FOR each task IN tasks:
    [ ] Single file scope (≤3 files)
    [ ] Single cognitive concern
    [ ] Clear acceptance criteria (≥1 testable)
    [ ] No hidden sub-decisions
    [ ] No external wait requirements
    [ ] Programmatically verifiable result

8.5 Final Output Schema

{
  "$schema": "https://cleo.dev/schemas/v1/decomposition/output.schema.json",
  "_meta": {
    "command": "decompose",
    "version": "0.22.0",
    "timestamp": "2025-12-19T10:10:00Z",
    "decompositionId": "DEC-20251219-001",
    "inputHash": "sha256:abc123...",
    "phases": {
      "scope": "completed",
      "goals": "completed",
      "dag": "completed",
      "tasks": "completed"
    },
    "challenge": {
      "phases_challenged": 3,
      "total_findings": 2,
      "findings_addressed": 2,
      "final_verdict": "VALID"
    }
  },
  "success": true,
  "summary": {
    "inputRequest": "Implement user authentication with OAuth",
    "epicCount": 1,
    "taskCount": 3,
    "subtaskCount": 5,
    "totalTasks": 9,
    "maxDepth": 2,
    "parallelGroups": 3,
    "criticalPathLength": 4,
    "estimatedParallelism": 2.25
  },
  "tasks": [
    {
      "id": "T042",
      "title": "Create user schema",
      "status": "pending",
      "priority": "high",
      "type": "task",
      "parentId": "T041",
      "size": "small",
      "phase": "core",
      "files": ["src/db/schema/user.ts"],
      "acceptance": [
        "User table has id, email, passwordHash, createdAt fields",
        "Schema validates with Drizzle generate"
      ],
      "depends": [],
      "createdAt": "2025-12-19T10:10:00Z",
      "labels": ["decomposed", "decomposition:DEC-20251219-001"]
    }
  ],
  "dag": {
    "nodes": [...],
    "edges": [...],
    "parallelGroups": [...],
    "executionOrder": [...]
  },
  "hitlGates": []
}

Part 9: CLI Integration

9.1 Command Syntax

cleo decompose <request> [OPTIONS]

# Examples
cleo decompose "Implement user authentication"
cleo decompose --file requirements.md
cleo decompose "Add dark mode" --phase ui --dry-run

9.2 Command Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `<request>` | string | required | Natural language request or structured input |
| `--file, -i` | path | - | Read request from file |
| `--phase` | string | current | Target phase for generated tasks |
| `--parent` | task ID | - | Parent task for all generated tasks |
| `--dry-run` | boolean | false | Preview without creating tasks |
| `--no-challenge` | boolean | false | Skip adversarial validation (NOT RECOMMENDED) |
| `--eager-execution` | boolean | false | Begin on stable spec sections (see 9.6) |
| `--format, -f` | string | auto | Output format (json/text/markdown) |
| `--quiet, -q` | boolean | false | Suppress non-essential output |
| `--verbose, -v` | boolean | false | Show detailed phase outputs |

9.3 Exit Codes

| Code | Constant | Meaning |
|------|----------|---------|
| 0 | EXIT_SUCCESS | Decomposition complete, tasks created |
| 2 | EXIT_INVALID_INPUT | Request empty or malformed |
| 6 | EXIT_VALIDATION_ERROR | Generated tasks fail schema validation |
| 10 | EXIT_PARENT_NOT_FOUND | Specified --parent doesn't exist |
| 11 | EXIT_DEPTH_EXCEEDED | Would exceed max hierarchy depth |
| 12 | EXIT_SIBLING_LIMIT | Would exceed max sibling count |
| 14 | EXIT_CIRCULAR_REFERENCE | DAG contains cycles |
| 30 | EXIT_HITL_REQUIRED | Decomposition blocked by ambiguity |
| 31 | EXIT_CHALLENGE_REJECTED | Challenge agent rejected decomposition |
| 35 | EXIT_ATOMICITY_FAILED | Tasks fail atomicity criteria (score < 100) |
| 102 | EXIT_NO_CHANGE | Request already decomposed (idempotent) |

9.4 Error Codes

| Code | Exit Code | Description |
|------|-----------|-------------|
| E_DECOMPOSE_EMPTY_INPUT | 2 | No request provided |
| E_DECOMPOSE_AMBIGUOUS | 30 | Request has unresolved ambiguities |
| E_DECOMPOSE_CYCLE | 14 | Generated DAG has cycles |
| E_DECOMPOSE_REJECTED | 31 | Challenge agent rejected decomposition |
| E_DECOMPOSE_ATOMICITY | 35 | Tasks fail atomicity criteria |
| E_DECOMPOSE_DEPTH | 11 | Recursive decomposition exceeded depth |
| E_DECOMPOSE_SIBLINGS | 12 | Too many sibling tasks generated |

9.5 Implementation Pipeline (Language-Agnostic)

v1.1.0 CHANGE: Implementation-specific code removed. See pseudocode below.
The decompose command MUST implement the following pipeline:
FUNCTION decompose_main(request, options):

    # Validate input
    IF request IS empty:
        RETURN error(E_DECOMPOSE_EMPTY_INPUT, exit_code=2)

    # Phase 1: Scope Analysis
    scope_result = analyze_scope(request)

    IF scope_result.hitlRequired:
        RETURN hitl_gate(scope_result, exit_code=30)

    # Phase 2: Goal Decomposition
    goal_result = decompose_goals(request, scope_result)

    IF NOT options.no_challenge:
        challenge_result = challenge_decomposition(goal_result, phase="goals")
        IF challenge_result.verdict == "REJECTED":
            RETURN error(E_DECOMPOSE_REJECTED, exit_code=31)

    # Phase 3: Dependency Graph
    dag_result = build_dependency_graph(goal_result)

    IF dag_result.hasCycle:
        RETURN error(E_DECOMPOSE_CYCLE, exit_code=14)

    IF NOT options.no_challenge:
        challenge_decomposition(dag_result, phase="dag")

    # Phase 4: Task Specification
    tasks_result = specify_tasks(dag_result, options.phase, options.parent)

    IF NOT options.no_challenge:
        challenge_decomposition(tasks_result, phase="tasks")

    # Output or create
    IF options.dry_run:
        RETURN preview(tasks_result)
    ELSE:
        created = create_tasks(tasks_result)
        RETURN success(created, exit_code=0)

Implementation Requirements:
  • Implementations MUST follow the 4-phase pipeline order
  • Challenge validation at each phase is RECOMMENDED (skip with --no-challenge)
  • HITL gates MUST block execution until user input received
  • All exit codes MUST match those in Part 9.3

9.6 Eager Execution Mode

v1.1.0 ADDITION: From RCSD Pipeline v1.1 ct-decompose specification.
When --eager-execution is enabled, decomposition MAY begin on stable (non-provisional) spec sections before the full specification is finalized.

Provisional Task Marking: Tasks generated from incomplete or provisional spec sections MUST be marked:
{
  "id": "T503",
  "provisional": true,
  "provisionReason": "Pending spec resolution for REQ-004"
}
Constraints:
  • Provisional tasks MUST NOT be scheduled for execution
  • Provisional tasks MUST be re-validated when source spec finalizes
  • Removing provisional: true requires atomicity re-check
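The first scheduling constraint above can be sketched in a line of Python (function name is illustrative):

```python
def schedulable(tasks: list) -> list:
    """Exclude provisional tasks from execution scheduling; they remain
    in the DAG but MUST NOT run until their source spec finalizes."""
    return [t for t in tasks if not t.get("provisional", False)]
```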
Use Cases:
  • Large specifications where some sections are stable
  • Parallel spec writing and task planning
  • Early dependency graph visualization

Part 10: Core Functions (Language-Agnostic)

v1.1.0 CHANGE: Bash-specific library code removed. Functions defined as abstract specifications.

10.1 Required Function Signatures

Implementations MUST provide the following functions. Return types are JSON objects.
FUNCTION analyze_scope(request: string) -> ScopeAssessment:
    """
    Analyze request complexity and determine decomposition strategy.

    Returns: {
        "_meta": { "phase": "scope-analysis", "timestamp": ISO8601 },
        "input": { "request": string },
        "assessment": {
            "classification": "epic" | "task" | "subtask",
            "requiresDecomposition": boolean,
            "complexity": { "fileCount": int, "componentCount": int, "reasoning": string },
            "hitlRequired": boolean,
            "ambiguities": string[]
        }
    }
    """

FUNCTION decompose_goals(request: string, scope: ScopeAssessment) -> GoalTree:
    """
    Recursively decompose request into hierarchical goal tree.
    Uses HTN-inspired methods (Part 6).

    Returns: GoalTree with nested children, max depth 3, max siblings 7.
    """

FUNCTION build_dependency_graph(goals: GoalTree) -> DAGResult:
    """
    Flatten goal tree to DAG nodes, detect dependencies, validate acyclicity.
    Uses detection algorithms from Part 19.

    Returns: {
        "dag": {
            "nodes": [],
            "edges": [],
            "hasCycle": boolean,
            "parallelGroups": [],
            "executionOrder": [],
            "criticalPath": []  # v1.1.0: Longest dependency chain (from ct-decompose)
        }
    }
    """

FUNCTION challenge_decomposition(result: any, phase: string) -> ChallengeResult:
    """
    Invoke adversarial challenge agent per CONSENSUS-FRAMEWORK-SPEC.

    Returns: { "phase": string, "verdict": "VALID" | "NEEDS_REVISION" | "REJECTED", "findings": [] }
    """

FUNCTION specify_tasks(dag: DAGResult, phase: string, parent: string?) -> TasksResult:
    """
    Convert DAG nodes to schema-compliant tasks with IDs, labels, dependencies.

    Returns: { "tasks": Task[], "atomicityScore": int }
    """

FUNCTION create_tasks(tasks: TasksResult) -> CreationResult:
    """
    Persist tasks to storage (todo.json or equivalent).

    Returns: { "created": boolean, "taskIds": string[] }
    """

10.2 Implementation Notes

  • All functions MUST return valid JSON per LLM-AGENT-FIRST-SPEC
  • analyze_scope MAY use heuristics for simple cases, SHOULD use LLM for complex cases
  • decompose_goals MUST use LLM invocation per Part 18
  • challenge_decomposition MUST spawn separate agent with adversarial prompts
  • Implementations MAY add language-specific optimizations while maintaining interface compliance

Part 11: Agent Invocation Pattern

11.1 LLM Agent Prompt for Goal Decomposition

You are a Goal Decomposition Agent. Your task is to break down a high-level request into atomic, executable subtasks.

## Input
Request: {request}
Scope Assessment: {scope_json}

## Constraints
1. Maximum depth: 3 levels (epic → task → subtask)
2. Maximum siblings: 7 per parent (per TASK-HIERARCHY-SPEC; configurable via `hierarchy.maxSiblings`)
3. Each leaf task MUST pass atomicity criteria:
   - Single file scope (≤3 files)
   - Single cognitive concern
   - Clear acceptance criteria
   - No hidden sub-decisions
   - No external wait requirements
   - Programmatically verifiable

## Output Format
Return JSON matching this schema:
{goal_tree_schema}

## Decomposition Methods
For "implement_feature": [schema, api, logic, ui, tests]
For "fix_bug": [reproduce, diagnose, fix, verify]
For "refactor_code": [analyze, plan, execute, verify]
For "add_command": [script, help, tests, docs]

## Task
Decompose the request into a goal tree. For each goal that is not atomic, recursively decompose until all leaves are atomic.

11.2 Challenge Agent Prompt

You are a Challenge Agent (Red Team). Your role is to attack the decomposition and identify flaws.

## Input
Phase: {phase_name}
Decomposition: {decomposition_json}

## Your Mission
Find problems with this decomposition. You MUST challenge, not accept.

## Challenge Questions
1. Can any subtasks be merged without losing clarity?
2. Are there missing tasks required to achieve the goal?
3. Does each subtask have clear, distinct acceptance criteria?
4. Are dependencies real or assumed?
5. Can an LLM agent complete each task without HITL?

## Evidence Standard
For each finding, provide:
- Specific reference (task ID or edge)
- Logical counter-argument OR counter-example
- Suggested fix

## Output Format
{
  "verdict": "VALID" | "NEEDS_REVISION" | "REJECTED",
  "findings": [
    {
      "type": "missing_task" | "merge_possible" | "unclear_criteria" | "hallucinated_dependency" | "non_atomic",
      "reference": "task or edge ID",
      "argument": "why this is a problem",
      "suggestion": "how to fix"
    }
  ]
}

## Rules
- You MUST NOT rubber-stamp. Easy agreement is suspicious.
- If decomposition is good, explain WHY with evidence.
- Minimum 2 challenges per decomposition (even if minor).

Part 12: HITL Gate Integration

12.1 Gate Triggers

Decomposition MUST trigger HITL when:
| Condition | Gate Type | Action |
|-----------|-----------|--------|
| Request has multiple interpretations | ambiguity | Present options, request choice |
| Scope exceeds single-session viability | scope | Confirm epic creation |
| External system dependencies detected | external | Confirm integration approach |
| >2 valid decomposition approaches | method | Present options with trade-offs |
| Challenge agent raises blocking issue | challenge | Present finding, request decision |

12.2 Gate Output Format

{
  "$schema": "https://cleo.dev/schemas/v1/decomposition/hitl-gate.schema.json",
  "_meta": {
    "command": "decompose",
    "gateId": "HITL-DEC-001",
    "phase": "scope-analysis",
    "blocking": true
  },
  "gate": {
    "type": "ambiguity",
    "trigger": "Multiple OAuth providers possible",
    "context": "Request mentions 'OAuth' but doesn't specify providers"
  },
  "questions": [
    {
      "id": "Q1",
      "text": "Which OAuth providers should be supported?",
      "options": [
        {"id": "A", "label": "Google only", "implications": "Simpler, ~2 tasks"},
        {"id": "B", "label": "Google + GitHub", "implications": "Moderate, ~4 tasks"},
        {"id": "C", "label": "All major (Google, GitHub, Microsoft, Apple)", "implications": "Complex, ~8 tasks"}
      ]
    }
  ],
  "recommendation": {
    "option": "B",
    "rationale": "Covers most common use cases without excessive complexity"
  },
  "respondBy": "blocking"
}

Part 13: Testing Requirements

13.1 Unit Tests

# tests/unit/test-decomposition.bats

@test "analyze_scope classifies small request as subtask" {
    result=$(analyze_scope "Fix typo in README")
    classification=$(echo "$result" | jq -r '.assessment.classification')
    [[ "$classification" == "subtask" ]]
}

@test "analyze_scope classifies large request as epic" {
    result=$(analyze_scope "Implement authentication with OAuth, password reset, 2FA, and session management")
    classification=$(echo "$result" | jq -r '.assessment.classification')
    [[ "$classification" == "epic" ]]
    decompose=$(echo "$result" | jq -r '.assessment.requiresDecomposition')
    [[ "$decompose" == "true" ]]
}

@test "decompose rejects request with cycles" {
    run cleo decompose "Task A depends on B, B depends on C, C depends on A"
    [[ "$status" -eq 14 ]]  # EXIT_CIRCULAR_REFERENCE
}

@test "decompose respects max depth" {
    result=$(cleo decompose "Deeply nested task" --format json 2>&1)
    max_depth=$(echo "$result" | jq -r '.summary.maxDepth')
    [[ "$max_depth" -le 3 ]]
}

@test "decompose respects max siblings" {
    result=$(cleo decompose "Task with many subtasks" --format json 2>&1)
    # Check no parent has >7 children
    over_limit=$(echo "$result" | jq '[.tasks[] | select(.parentId != null)] | group_by(.parentId) | map(length) | max // 0')
    [[ "$over_limit" -le 7 ]]
}

13.2 Integration Tests

# tests/integration/decomposition.bats

@test "decompose creates valid tasks" {
    # Decompose
    result=$(cleo decompose "Add email validation" --format json)
    [[ $(echo "$result" | jq -r '.success') == "true" ]]

    # Verify tasks exist
    task_id=$(echo "$result" | jq -r '.tasks[0].id')
    run cleo exists "$task_id" --quiet
    [[ "$status" -eq 0 ]]
}

@test "decompose dry-run creates no tasks" {
    count_before=$(cleo list --format json | jq '.tasks | length')
    cleo decompose "Add feature X" --dry-run
    count_after=$(cleo list --format json | jq '.tasks | length')
    [[ "$count_before" -eq "$count_after" ]]
}

@test "decompose with parent sets parentId" {
    # Create parent
    parent=$(cleo add "Parent epic" --type epic --format json | jq -r '.task.id')

    # Decompose under parent
    result=$(cleo decompose "Subtask work" --parent "$parent" --format json)

    # Verify parentId
    child_parent=$(echo "$result" | jq -r '.tasks[0].parentId')
    [[ "$child_parent" == "$parent" ]]
}

13.3 Challenge Tests

@test "challenge agent rejects non-atomic tasks" {
    # Force a task with low atomicity
    result=$(decompose_goals "Build entire application" "{}")
    challenge=$(challenge_decomposition "$result" "goals")

    # Should have findings
    finding_count=$(echo "$challenge" | jq '.findings | length')
    [[ "$finding_count" -gt 0 ]]
}

@test "challenge agent flags hallucinated dependencies" {
    # Create DAG with suspicious edge
    dag='{"edges":[{"from":"T001","to":"T002","evidence":"assumed"}]}'
    challenge=$(challenge_decomposition "$dag" "dag")

    # Should flag the edge
    echo "$challenge" | jq -e '.findings[] | select(.type == "hallucinated_dependency")'
}

Part 14: Performance Requirements

14.1 Latency Targets

| Phase | Target | Max |
|---|---|---|
| Scope Analysis | <500ms | 2s |
| Goal Decomposition | <5s | 30s |
| Dependency Graph | <1s | 5s |
| Task Specification | <500ms | 2s |
| Challenge (per phase) | <10s | 60s |
| Total (with challenge) | <20s | 120s |

14.2 Scaling Limits

| Metric | Limit | Reason |
|---|---|---|
| Tasks per decomposition | 50 | Context window preservation |
| Depth | 3 | Per TASK-HIERARCHY-SPEC |
| Siblings | 7 | Per TASK-HIERARCHY-SPEC |
| Parallel DAG width | 10 | Practical execution limit |
| Request length | 10,000 chars | Prompt size management |
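The scaling limits above can be enforced with a small pre-flight check before accepting a decomposition result. The sketch below is illustrative Python, not normative: the task shape (dicts carrying `parentId` and `depth`) and the violation labels are assumptions, while the limit values come from the table.

```python
# Sketch: enforce the scaling limits before accepting a decomposition.
# Limit values are from the table; the result shape is an assumption.
SCALING_LIMITS = {
    "max_tasks": 50,           # context window preservation
    "max_depth": 3,            # per TASK-HIERARCHY-SPEC (depth values 0..2)
    "max_siblings": 7,         # per TASK-HIERARCHY-SPEC
    "max_request_chars": 10_000,
}

def check_scaling_limits(tasks, request):
    """Return a list of limit violations (empty list = within limits)."""
    violations = []
    if len(request) > SCALING_LIMITS["max_request_chars"]:
        violations.append("request_too_long")
    if len(tasks) > SCALING_LIMITS["max_tasks"]:
        violations.append("too_many_tasks")
    if any(t.get("depth", 0) >= SCALING_LIMITS["max_depth"] for t in tasks):
        violations.append("depth_exceeded")
    # Count children per parent to enforce the sibling limit
    children = {}
    for t in tasks:
        parent = t.get("parentId")
        if parent is not None:
            children[parent] = children.get(parent, 0) + 1
    if any(n > SCALING_LIMITS["max_siblings"] for n in children.values()):
        violations.append("too_many_siblings")
    return violations
```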

Part 15: Security Considerations

15.1 Input Validation

  • Request text MUST be sanitized before LLM prompts
  • File paths in --file MUST be validated (no path traversal)
  • Parent IDs MUST be validated against existing tasks

15.2 Output Integrity

  • Generated task IDs MUST use sequential allocation (no user input)
  • Decomposition results SHOULD be checksummed
  • Challenge results MUST NOT be editable by decomposition agent

15.3 Prompt Injection Prevention

  • User request MUST be isolated in prompt (not interpolated)
  • JSON outputs MUST be validated before use
  • LLM outputs MUST NOT be executed as code without sandboxing

Part 16: Conformance

16.1 Conformance Classes

A conforming implementation MUST:
  • Implement all 4 phases (scope, goals, dag, tasks)
  • Apply atomicity criteria per Part 4
  • Validate DAG for cycles before output
  • Produce JSON matching output schemas
  • Support all exit codes defined in Part 9.3
  • Source required libraries per LLM-AGENT-FIRST-SPEC
A conforming implementation SHOULD:
  • Implement challenge protocol per Part 6.4, 7.4, 8.4
  • Support HITL gates per Part 12
  • Meet latency targets per Part 14.1
A conforming implementation MAY:
  • Use alternative decomposition methods beyond Part 6.2
  • Extend atomicity criteria beyond Part 4.1
  • Add additional output fields to schemas

Part 17: Related Documents

| Document | Relationship |
|---|---|
| SPEC-BIBLE-GUIDELINES.md | AUTHORITATIVE for document structure |
| LLM-AGENT-FIRST-SPEC.md | AUTHORITATIVE for JSON output, exit codes, error handling |
| HIERARCHY-ENHANCEMENT-SPEC.md | AUTHORITATIVE for type/parentId/size, depth/sibling limits |
| LLM-TASK-ID-SYSTEM-DESIGN-SPEC.md | AUTHORITATIVE for task ID format |
| CONSENSUS-FRAMEWORK-SPEC.md | AUTHORITATIVE for challenge protocol, evidence standards |
| PHASE-SYSTEM-SPEC.md | Related for phase assignment |
| RCSD-PIPELINE-SPEC.md | This spec implements the DECOMPOSE stage of the RCSD Pipeline (Part 4) |

Part 18: LLM Invocation Architecture (AUTHORITATIVE)

This section addresses Finding 5 (BLOCKING): LLM invocation pattern unclear.

18.1 Token Budget Allocation

| Phase | Max Input Tokens | Max Output Tokens | Model Tier |
|---|---|---|---|
| Scope Analysis | 2,000 | 1,000 | Fast (Haiku) |
| Goal Decomposition | 8,000 | 4,000 | Standard (Sonnet) |
| Dependency Graph | 4,000 | 2,000 | Fast (Haiku) |
| Task Specification | 4,000 | 3,000 | Fast (Haiku) |
| Challenge (per phase) | 6,000 | 2,000 | Standard (Sonnet) |

18.2 Invocation Decision Matrix

FUNCTION should_invoke_llm(phase: string, input: any) -> boolean:

    # Local-only phases (no LLM needed)
    IF phase == "scope" AND input.complexity.simple:
        RETURN false  # Use heuristic classification

    IF phase == "dag" AND input.explicit_dependencies_only:
        RETURN false  # Parse explicit keywords only

    # Phases that always require an LLM
    IF phase == "goals":
        RETURN true  # Decomposition requires reasoning

    IF phase == "challenge":
        RETURN true  # Adversarial review requires a different model

    # Hybrid decision
    IF phase == "tasks":
        RETURN input.requires_description_generation

    RETURN true  # Default: invoke the LLM when no local heuristic applies

18.3 Prompt Template Architecture

PROMPT_STRUCTURE = {
    "system": {
        "role": "Decomposition Agent | Challenge Agent",
        "constraints": [atomicity_rules, depth_limits, sibling_limits],
        "output_schema": phase_specific_schema
    },
    "user": {
        "input": request_or_previous_phase_output,
        "context": {
            "existing_tasks": optional_dedup_context,
            "project_phase": current_phase
        }
    }
}

18.4 Model Selection Logic

v1.1.0 CHANGE: Bash-specific code removed. Selection logic as decision table.
FUNCTION select_model(phase: string, complexity: string = "medium") -> ModelTier:
    """
    Select appropriate model tier based on phase and complexity.
    Model tiers are implementation-defined (e.g., "haiku", "sonnet", "opus").
    """

    # Decision table
    IF phase IN ["scope", "dag", "tasks"]:
        RETURN "fast"       # Low-cost tier for structured extraction

    IF phase == "goals":
        IF complexity == "high":
            RETURN "standard"  # Complex reasoning requires capable model
        ELSE:
            RETURN "fast"

    IF phase == "challenge":
        RETURN "standard"   # Adversarial validation requires capable model

    RETURN "fast"           # Default to low-cost tier
Model Tier Mapping (implementation-specific):
| Tier | Claude Models | OpenAI Models | Purpose |
|---|---|---|---|
| fast | claude-3-haiku | gpt-4o-mini | Structured extraction, low latency |
| standard | claude-sonnet-4 | gpt-4o | Complex reasoning, adversarial |
| capable | claude-opus-4 | o3 | Maximum capability (reserved) |
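The decision table in select_model translates directly into code. Below is a minimal, runnable Python rendering; the tier names follow the pseudocode, and mapping tiers to concrete model IDs remains implementation-specific.

```python
# Minimal rendering of the select_model decision table above.
# Tier names ("fast", "standard") follow the pseudocode; mapping them
# to concrete model IDs is left to the implementation.
def select_model(phase: str, complexity: str = "medium") -> str:
    """Select a model tier based on phase and complexity."""
    if phase in ("scope", "dag", "tasks"):
        return "fast"          # structured extraction, low latency
    if phase == "goals":
        # Complex reasoning requires a more capable model
        return "standard" if complexity == "high" else "fast"
    if phase == "challenge":
        return "standard"      # adversarial validation
    return "fast"              # default to the low-cost tier
```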

18.5 Fallback Strategy

If LLM invocation fails:
| Failure Type | Fallback Action |
|---|---|
| Rate limit | Exponential backoff (1s, 2s, 4s, max 30s) |
| Timeout | Return partial result with incomplete: true |
| Invalid output | Retry once with stricter schema enforcement |
| Model unavailable | Fall back to simpler model tier |
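The rate-limit backoff schedule (1s, 2s, 4s, capped at 30s) can be computed with one function. The sketch below is an assumption-labeled illustration; the function name and parameter defaults are not part of the spec, though the defaults match the retry configuration in Part 22.1.

```python
# Sketch of the rate-limit backoff schedule from the table: delays
# double from 1s and are capped at 30s. Function name and signature
# are illustrative, not normative.
def backoff_delay_ms(attempt: int, initial_ms: int = 1000,
                     multiplier: float = 2.0, max_ms: int = 30_000) -> int:
    """Delay before retry `attempt` (1-based): 1s, 2s, 4s, ... capped at 30s."""
    return min(int(initial_ms * multiplier ** (attempt - 1)), max_ms)
```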

Part 19: Dependency Detection Algorithms (AUTHORITATIVE)

This section addresses Finding 3 (BLOCKING): Dependency detection underspecified.

19.1 Detection Functions

FUNCTION detect_explicit_dependency(task_a, task_b) -> Dependency | null:
    # Keywords in task_a that reference task_b
    keywords = ["after", "requires", "depends on", "following", "once"]

    FOR keyword IN keywords:
        IF task_a.description CONTAINS keyword + task_b.identifier:
            RETURN Dependency {
                type: "explicit",
                evidence: f"Keyword '{keyword}' references {task_b.id}",
                confidence: 1.0
            }

    RETURN null


FUNCTION detect_data_flow_dependency(task_a, task_b) -> Dependency | null:
    # Schema/type analysis
    outputs_a = extract_outputs(task_a)  # Files created, APIs exposed
    inputs_b = extract_inputs(task_b)    # Files read, APIs called

    overlap = outputs_a INTERSECT inputs_b

    IF overlap.length > 0:
        RETURN Dependency {
            type: "data_flow",
            evidence: f"Output {overlap[0]} from {task_a.id} consumed by {task_b.id}",
            confidence: 0.9,
            artifacts: overlap
        }

    RETURN null


FUNCTION detect_file_conflict_dependency(task_a, task_b) -> Dependency | null:
    # Both tasks modify same file
    files_a = task_a.files or []
    files_b = task_b.files or []

    conflicts = files_a INTERSECT files_b

    IF conflicts.length > 0:
        # Need ordering to avoid merge conflicts
        RETURN Dependency {
            type: "file_conflict",
            evidence: f"Both modify {conflicts[0]}",
            confidence: 0.85,
            files: conflicts,
            resolution: "serialize_execution"
        }

    RETURN null


FUNCTION detect_semantic_dependency(task_a, task_b) -> Dependency | null:
    # Domain knowledge patterns
    semantic_rules = [
        ("schema", "query"),      # Schema before queries
        ("model", "migration"),   # Model before migration
        ("interface", "impl"),    # Interface before implementation
        ("test", "implement"),    # Implementation before tests (inverted)
    ]

    FOR (before_pattern, after_pattern) IN semantic_rules:
        IF task_a.title MATCHES before_pattern AND task_b.title MATCHES after_pattern:
            RETURN Dependency {
                type: "semantic",
                evidence: f"Domain rule: {before_pattern} → {after_pattern}",
                confidence: 0.75,
                rule: f"{before_pattern}_before_{after_pattern}"
            }

    RETURN null
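As a concrete illustration, here is one of the four detectors above as runnable Python. The task shape (dicts with a "files" list) and the returned dict keys mirror the pseudocode; this is a sketch, not a normative implementation.

```python
# Runnable sketch of detect_file_conflict_dependency: if both tasks
# modify the same file, execution must be serialized. Task shape and
# returned keys mirror the pseudocode above.
def detect_file_conflict_dependency(task_a: dict, task_b: dict):
    """Return a file-conflict dependency, or None if the tasks share no files."""
    conflicts = sorted(set(task_a.get("files") or []) & set(task_b.get("files") or []))
    if not conflicts:
        return None
    return {
        "type": "file_conflict",
        "evidence": f"Both modify {conflicts[0]}",
        "confidence": 0.85,
        "files": conflicts,
        "resolution": "serialize_execution",
    }
```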

19.2 Confidence Thresholds

| Confidence | Action |
|---|---|
| ≥0.9 | Auto-include dependency |
| 0.7–0.89 | Include, flag for challenge review |
| 0.5–0.69 | Require HITL confirmation |
| <0.5 | Reject, log as "considered but excluded" |
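The threshold table reduces to a small dispatch function. The action labels below are illustrative names for the four rows, not normative identifiers.

```python
# The confidence-threshold table as a dispatch function; the action
# labels are illustrative names for the four rows.
def confidence_action(confidence: float) -> str:
    """Map a dependency confidence score to the required handling."""
    if confidence >= 0.9:
        return "auto_include"
    if confidence >= 0.7:
        return "include_flag_for_challenge"
    if confidence >= 0.5:
        return "require_hitl_confirmation"
    return "reject_and_log"
```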

19.3 Transitive Dependency Resolution

FUNCTION compute_transitive_closure(dag: DAG) -> DAG:
    # Floyd-Warshall for reachability
    reachable = initialize_matrix(dag.nodes.length)

    FOR edge IN dag.edges:
        reachable[edge.from][edge.to] = true

    FOR k IN dag.nodes:
        FOR i IN dag.nodes:
            FOR j IN dag.nodes:
                IF reachable[i][k] AND reachable[k][j]:
                    reachable[i][j] = true

    # Remove redundant edges (A→B when A→C→B exists)
    optimized_edges = []
    FOR edge IN dag.edges:
        is_redundant = false
        FOR intermediate IN dag.nodes:
            IF intermediate != edge.from AND intermediate != edge.to:
                IF reachable[edge.from][intermediate] AND reachable[intermediate][edge.to]:
                    is_redundant = true
                    BREAK

        IF NOT is_redundant:
            optimized_edges.append(edge)

    dag.edges = optimized_edges
    RETURN dag
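The algorithm above can be sketched in runnable form. This version represents edges as (from, to) tuples for brevity, an assumption for the example; the logic is the same Floyd-Warshall closure followed by removal of edges implied by longer paths.

```python
# Runnable sketch of compute_transitive_closure: Floyd-Warshall
# reachability, then drop redundant edges (A->B when some A->C->B
# path exists). Edges are (from, to) tuples here for brevity.
def reduce_dag(nodes, edges):
    """Return the edge list with transitively redundant edges removed."""
    reach = {a: set() for a in nodes}
    for a, b in edges:
        reach[a].add(b)
    # Warshall closure: k must be the outermost loop
    for k in nodes:
        for i in nodes:
            if k in reach[i]:
                reach[i] |= reach[k]
    reduced = []
    for a, b in edges:
        redundant = any(
            mid not in (a, b) and mid in reach[a] and b in reach[mid]
            for mid in nodes
        )
        if not redundant:
            reduced.append((a, b))
    return reduced
```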

19.4 Anti-Hallucination Validation

Every dependency MUST have:
  1. Type from defined set: explicit | data_flow | file_conflict | api_contract | semantic
  2. Evidence string explaining the relationship
  3. Confidence score 0.0-1.0
Dependencies with evidence: null or evidence: "assumed" MUST be rejected.
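A validation gate enforcing these three requirements might look like the following sketch (the function name and dict-based dependency shape are assumptions for illustration):

```python
# Sketch of the anti-hallucination gate: every dependency needs a
# known type, a real evidence string, and a confidence in [0, 1].
ALLOWED_TYPES = {"explicit", "data_flow", "file_conflict", "api_contract", "semantic"}

def validate_dependency(dep: dict) -> bool:
    """Reject dependencies with missing/assumed evidence or unknown types."""
    if dep.get("type") not in ALLOWED_TYPES:
        return False
    evidence = dep.get("evidence")
    if not evidence or evidence == "assumed":
        return False
    confidence = dep.get("confidence")
    return isinstance(confidence, (int, float)) and 0.0 <= confidence <= 1.0
```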

Part 20: Generic Decomposition Fallback (AUTHORITATIVE)

This section addresses Finding 2: Generic decomposition method undefined.

20.1 Fallback Method

When no pattern-specific method matches:
method: generic_decomposition
pattern: ".*"  # Fallback, lowest priority
phases:
  1_understand:
    template: "Understand requirements for {goal}"
    type: subtask
    atomicity_target: 100
  2_plan:
    template: "Plan approach for {goal}"
    type: subtask
    atomicity_target: 100
  3_implement:
    template: "Implement {goal}"
    type: task
    atomicity_target: 80  # May need further decomposition
  4_validate:
    template: "Validate {goal} meets requirements"
    type: subtask
    atomicity_target: 100

20.2 Recursive Application

If 3_implement fails atomicity (score < 100), recursively apply:
FUNCTION apply_generic_fallback(goal: Goal, depth: int) -> TaskTree:

    IF depth >= 3:
        WARN("Max depth reached, accepting non-atomic")
        RETURN create_task(goal, atomic=false)

    result = apply_method("generic_decomposition", goal)

    FOR subtask IN result.children:
        IF subtask.atomicity_score < 100:
            subtask.children = apply_generic_fallback(subtask, depth + 1)

    RETURN result

20.3 Method Priority Order

  1. Exact pattern match (e.g., implement_feature)
  2. Partial pattern match (regex similarity > 0.8)
  3. LLM-suggested method (if enabled)
  4. Generic fallback

Part 21: Challenge Quality Metrics (AUTHORITATIVE)

This section addresses Finding 4: Challenge agent quality metrics missing.

21.1 Finding Severity Classification

| Severity | Criteria | Required Action |
|---|---|---|
| blocking | Decomposition fundamentally broken | REJECT, require restart |
| major | Missing/incorrect tasks or dependencies | REVISE before proceeding |
| minor | Style, naming, or optimization issues | LOG, optional fix |
| info | Suggestions for improvement | DOCUMENT for future |

21.2 Challenge Quality Scoring

challenge_quality_score = {
    specificity: count(f IN findings WHERE f.reference != null) / findings.length,    # 0-1
    evidence:    count(f IN findings WHERE f.argument.length > 50) / findings.length, # 0-1
    actionable:  count(f IN findings WHERE f.suggestion != null) / findings.length,   # 0-1
    coverage:    unique_tasks_challenged / total_tasks                                # 0-1
}

overall_quality = average(challenge_quality_score.values())

IF overall_quality < 0.6:
    WARN("Challenge quality below threshold, may need re-challenge")

21.3 Rubber-Stamp Detection

Challenge agent outputs are suspect if:
  • verdict: "VALID" with findings.length == 0
  • All findings are severity: "info"
  • Challenge completed in <2 seconds
  • No specific task/edge references
FUNCTION detect_rubber_stamp(challenge_result) -> boolean:
    IF challenge_result.verdict == "VALID" AND challenge_result.findings.length == 0:
        RETURN true

    IF all(f.severity == "info" FOR f IN challenge_result.findings):
        RETURN true

    IF challenge_result.duration_ms < 2000:
        RETURN true

    RETURN false


IF detect_rubber_stamp(challenge_result):
    LOG_WARNING("Potential rubber-stamp detected, requesting re-challenge")
    challenge_result = re_challenge_with_stricter_prompt()
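The rubber-stamp heuristics above can be written as a runnable check. The challenge-result shape (verdict, findings, duration_ms) mirrors the pseudocode; the non-empty-findings guard on the all-info branch is a small added assumption so an empty findings list is only flagged via the first rule.

```python
# The rubber-stamp heuristics as a runnable check; result shape
# (verdict, findings, duration_ms) mirrors the pseudocode above.
def detect_rubber_stamp(result: dict) -> bool:
    """True if a challenge output looks like an uncritical pass-through."""
    findings = result.get("findings", [])
    if result.get("verdict") == "VALID" and not findings:
        return True
    if findings and all(f.get("severity") == "info" for f in findings):
        return True
    if result.get("duration_ms", 0) < 2000:
        return True
    return False
```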

21.4 Minimum Challenge Requirements

Every challenge MUST produce at least:
  • 2 findings (even if severity: info)
  • 1 specific reference to task or edge
  • 1 actionable suggestion

Part 22: Iteration and Retry Protocol (AUTHORITATIVE)

This section addresses Finding 6: No iteration/retry protocol.

22.1 Retry Configuration

retry_config:
  max_retries_per_phase: 3
  max_total_retries: 10
  backoff:
    initial_delay_ms: 1000
    max_delay_ms: 30000
    multiplier: 2.0
  circuit_breaker:
    failure_threshold: 5
    reset_timeout_ms: 60000

22.2 Phase Retry Logic

FUNCTION execute_phase_with_retry(phase, input) -> Result:
    retries = 0

    WHILE retries < config.max_retries_per_phase:
        TRY:
            result = execute_phase(phase, input)

            IF result.success:
                RETURN result

            IF result.error.retryable == false:
                RETURN result  # Permanent failure

            retries += 1
            delay = calculate_backoff(retries)
            sleep(delay)

        CATCH error:
            IF is_transient(error):
                retries += 1
                continue
            ELSE:
                RETURN Error(error)

    RETURN Error("Max retries exceeded for phase: {phase}")

22.3 Challenge-Revision Loop

FUNCTION decomposition_with_challenge_loop(request) -> Result:
    max_iterations = 3
    iteration = 0

    scope = analyze_scope(request)
    goals = decompose_goals(request, scope)

    WHILE iteration < max_iterations:
        challenge = challenge_decomposition(goals, "goals")

        IF challenge.verdict == "VALID":
            BREAK

        IF challenge.verdict == "REJECTED":
            # Fundamental flaw, restart with new approach
            goals = decompose_goals(request, scope, approach="alternative")
            iteration += 1
            continue

        IF challenge.verdict == "NEEDS_REVISION":
            # Apply suggested fixes
            FOR finding IN challenge.findings WHERE finding.severity IN ["blocking", "major"]:
                goals = apply_fix(goals, finding)

            iteration += 1

    IF iteration >= max_iterations:
        RETURN Error("Could not produce valid decomposition after {max_iterations} iterations")

    RETURN goals

22.4 State Preservation

Between retries, preserve:
  • Original request (immutable)
  • Phase outputs completed successfully
  • Challenge findings for learning
  • Retry count per phase
{
  "_retry_state": {
    "request_hash": "sha256:abc...",
    "completed_phases": ["scope"],
    "current_phase": "goals",
    "attempt": 2,
    "previous_findings": [...]
  }
}

Part 23: Schema Extensions (AUTHORITATIVE)

This section addresses Finding: Schema considerations for children and computed fields.

23.1 Proposed Schema Additions

The following fields SHOULD be added to todo.schema.json v2.4.0:
{
  "children": {
    "type": "array",
    "items": { "type": "string", "pattern": "^T\\d{3,}$" },
    "description": "Computed: IDs of direct child tasks (inverse of parentId)",
    "computed": true
  },
  "ancestors": {
    "type": "array",
    "items": { "type": "string", "pattern": "^T\\d{3,}$" },
    "description": "Computed: IDs of all ancestor tasks (transitive parentId)",
    "computed": true
  },
  "depth": {
    "type": "integer",
    "minimum": 0,
    "maximum": 2,
    "description": "Computed: Hierarchy depth (0=root, 1=child, 2=grandchild)",
    "computed": true
  },
  "dependents": {
    "type": "array",
    "items": { "type": "string", "pattern": "^T\\d{3,}$" },
    "description": "Computed: IDs of tasks that depend on this task (inverse of depends)",
    "computed": true
  },
  "blockedBy": {
    "type": "array",
    "items": { "type": "string", "pattern": "^T\\d{3,}$" },
    "description": "Computed: IDs of incomplete dependencies blocking this task",
    "computed": true
  },
  "decompositionId": {
    "type": "string",
    "pattern": "^DEC-\\d{8}-\\d{3}$",
    "description": "ID of decomposition session that created this task"
  },
  "atomicityScore": {
    "type": "integer",
    "minimum": 0,
    "maximum": 100,
    "description": "Atomicity score at creation (see Part 4)"
  },
  "acceptance": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Testable acceptance criteria"
  }
}

23.2 Computed Field Calculation

v1.1.0 CHANGE: Bash-specific code removed. Algorithms as pseudocode.
Computed fields are NOT stored in todo.json but calculated on read:
FUNCTION compute_children(task_id: string, tasks: Task[]) -> string[]:
    """Return IDs of direct children (tasks with parentId == task_id)."""
    RETURN [t.id FOR t IN tasks WHERE t.parentId == task_id]


FUNCTION compute_depth(task_id: string, tasks: Task[]) -> int:
    """Calculate hierarchy depth (0 = root, 1 = child, etc.)."""
    depth = 0
    current = task_id

    WHILE current IS NOT null:
        task = find_task(current, tasks)
        IF task IS null OR task.parentId IS null:
            BREAK
        depth += 1
        current = task.parentId

    RETURN depth


FUNCTION compute_blocked_by(task_id: string, tasks: Task[]) -> string[]:
    """Return IDs of incomplete dependencies blocking this task."""
    task = find_task(task_id, tasks)
    IF task IS null OR task.depends IS empty:
        RETURN []

    RETURN [
        dep_id FOR dep_id IN task.depends
        WHERE find_task(dep_id, tasks).status != "done"
    ]


FUNCTION compute_dependents(task_id: string, tasks: Task[]) -> string[]:
    """Return IDs of tasks that depend on this task (inverse of depends)."""
    RETURN [t.id FOR t IN tasks WHERE task_id IN (t.depends OR [])]


FUNCTION compute_ancestors(task_id: string, tasks: Task[]) -> string[]:
    """Return IDs of all ancestor tasks (transitive parentId)."""
    ancestors = []
    current = find_task(task_id, tasks).parentId

    WHILE current IS NOT null:
        ancestors.append(current)
        current = find_task(current, tasks).parentId

    RETURN ancestors
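The computed-field algorithms above are straightforward to render in runnable form. The sketch below operates over a flat task list of dicts with "id", optional "parentId", "depends", and "status" keys, mirroring the pseudocode.

```python
# Runnable versions of three computed-field algorithms above, over a
# flat task list (dicts with "id", optional "parentId"/"depends"/"status").
def compute_children(task_id, tasks):
    """Direct children: tasks whose parentId is task_id."""
    return [t["id"] for t in tasks if t.get("parentId") == task_id]

def compute_depth(task_id, tasks):
    """Hierarchy depth: 0 for roots, +1 per ancestor."""
    by_id = {t["id"]: t for t in tasks}
    depth, current = 0, by_id.get(task_id)
    while current is not None and current.get("parentId") is not None:
        depth += 1
        current = by_id.get(current["parentId"])
    return depth

def compute_blocked_by(task_id, tasks):
    """Incomplete dependencies that block this task."""
    by_id = {t["id"]: t for t in tasks}
    task = by_id.get(task_id)
    if task is None:
        return []
    return [d for d in task.get("depends", [])
            if by_id.get(d, {}).get("status") != "done"]
```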

23.3 Materialization Strategy

For performance, computed fields MAY be materialized:
  • On write: Update children/dependents when parentId/depends changes
  • On read: Always compute blockedBy (status-dependent)
  • Cache: Store in separate .cleo/computed-cache.json with TTL

23.4 Backward Compatibility

New fields MUST be:
  • Optional in schema (not required)
  • Handled gracefully by older CLI versions
  • Migrated via cleo migrate run

Appendix A: Decomposition Method Library

A.1 Feature Implementation

method: feature_decomposition
pattern: "implement.*feature|add.*functionality|create.*system"
subtasks:
  - name: schema
    template: "Define data schema for {feature}"
    type: task
    dependencies: []
  - name: api
    template: "Implement API endpoints for {feature}"
    type: task
    dependencies: [schema]
  - name: logic
    template: "Implement business logic for {feature}"
    type: task
    dependencies: [schema]
  - name: ui
    template: "Create UI components for {feature}"
    type: task
    dependencies: [api, logic]
  - name: tests
    template: "Write tests for {feature}"
    type: task
    dependencies: [api, logic, ui]

A.2 Bug Fix

method: bug_fix_decomposition
pattern: "fix.*bug|resolve.*issue|correct.*error"
subtasks:
  - name: reproduce
    template: "Create reproduction case for {bug}"
    type: subtask
    dependencies: []
  - name: diagnose
    template: "Identify root cause of {bug}"
    type: subtask
    dependencies: [reproduce]
  - name: fix
    template: "Implement fix for {bug}"
    type: subtask
    dependencies: [diagnose]
  - name: verify
    template: "Verify fix and add regression test for {bug}"
    type: subtask
    dependencies: [fix]

A.3 CLI Command

method: cli_command_decomposition
pattern: "add.*command|create.*command|implement.*cli"
subtasks:
  - name: script
    template: "Create {command} script"
    type: task
    dependencies: []
  - name: help
    template: "Add help text for {command}"
    type: subtask
    dependencies: [script]
  - name: tests
    template: "Write tests for {command}"
    type: task
    dependencies: [script]
  - name: docs
    template: "Document {command} in docs/commands/"
    type: subtask
    dependencies: [script]

Appendix B: Changelog

Version 1.1.0 (2025-12-30)

  • Refactored to be implementation-agnostic: Bash-specific code removed, algorithms retained as pseudocode

Version 1.0.0 (2025-12-19)

  • Initial specification
  • 4-phase decomposition pipeline
  • Atomicity criteria (6-point test)
  • HTN-inspired goal decomposition
  • DAG construction with cycle detection
  • Challenge protocol integration
  • CLI command specification
  • Library function definitions
  • Test requirements
  • Performance targets

End of Specification