consciousness/prompts
ProofOfConcept ca62692a28 split agent: two-phase node decomposition for memory consolidation
Phase 1 sends a large node with its neighbor communities to the LLM
and gets back a JSON split plan (child keys, descriptions, section
hints). Phase 2 fires one extraction call per child in parallel;
each call receives the full parent content and extracts/reorganizes
just its portion.

This handles arbitrarily large nodes because each call's output is
proportional to a single child, not the whole parent. Tested on the
kent node (19K chars → 3 children totaling 20K chars with clean
topic separation).
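The phase-2 fan-out can be sketched with scoped threads. This is a minimal illustration, not the actual code in agents/daemon.rs: the function name `extract_children` and the injected `extract` closure (standing in for the per-child LLM call) are assumptions.

```rust
use std::thread;

// Hypothetical sketch of the phase-2 fan-out (the real logic lives in
// job_split_agent in agents/daemon.rs). `extract` stands in for the
// per-child LLM extraction call; each worker receives the full parent
// content plus its own child key, so output stays child-sized.
fn extract_children(
    parent: &str,
    child_keys: &[&str],
    extract: &(dyn Fn(&str, &str) -> String + Sync),
) -> Vec<(String, String)> {
    thread::scope(|s| {
        let handles: Vec<_> = child_keys
            .iter()
            .map(|&key| s.spawn(move || (key.to_string(), extract(parent, key))))
            .collect();
        // Join in plan order so results line up with the split plan.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}
```

Joining handles in order keeps the results aligned with the plan's child order even though the calls run concurrently.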

New files:
  prompts/split-plan.md   — phase 1 planning prompt
  prompts/split-extract.md — phase 2 extraction prompt
  prompts/split.md        — original single-phase (kept for reference)

Modified:
  agents/prompts.rs — split_candidates(), split_plan_prompt(),
                      split_extract_prompt(), agent_prompt "split" arm
  agents/daemon.rs  — job_split_agent() two-phase implementation,
                      RPC dispatch for "split" agent type
  tui.rs            — added "split" to AGENT_TYPES
2026-03-10 01:48:41 -04:00
assimilate.md poc-memory v0.4.0: graph-structured memory with consolidation pipeline 2026-02-28 22:17:00 -05:00
challenger.md spectral decomposition, search improvements, char boundary fix 2026-03-03 01:33:31 -05:00
connector.md stash DMN algorithm plan and connector prompt fix 2026-03-05 10:24:24 -05:00
consolidation.md digest: split into focused modules, externalize prompts 2026-03-03 17:18:18 -05:00
digest.md digest: drop per-level instructions and section templates 2026-03-03 17:53:43 -05:00
experience.md experience-mine: harden prompt boundary against transcript injection 2026-03-08 18:31:35 -04:00
extractor.md spectral decomposition, search improvements, char boundary fix 2026-03-03 01:33:31 -05:00
health.md poc-memory v0.4.0: graph-structured memory with consolidation pipeline 2026-02-28 22:17:00 -05:00
journal-enrich.md digest: split into focused modules, externalize prompts 2026-03-03 17:18:18 -05:00
linker.md show suggested link targets in agent prompts 2026-03-01 00:37:03 -05:00
observation-extractor.md spectral decomposition, search improvements, char boundary fix 2026-03-03 01:33:31 -05:00
orchestrator.md poc-memory v0.4.0: graph-structured memory with consolidation pipeline 2026-02-28 22:17:00 -05:00
README.md poc-memory v0.4.0: graph-structured memory with consolidation pipeline 2026-02-28 22:17:00 -05:00
rename.md rename agent: LLM-powered semantic key generation for memory nodes 2026-03-10 00:55:26 -04:00
replay.md show suggested link targets in agent prompts 2026-03-01 00:37:03 -05:00
separator.md poc-memory v0.4.0: graph-structured memory with consolidation pipeline 2026-02-28 22:17:00 -05:00
split-extract.md split agent: two-phase node decomposition for memory consolidation 2026-03-10 01:48:41 -04:00
split-plan.md split agent: two-phase node decomposition for memory consolidation 2026-03-10 01:48:41 -04:00
split.md split agent: two-phase node decomposition for memory consolidation 2026-03-10 01:48:41 -04:00
transfer.md show suggested link targets in agent prompts 2026-03-01 00:37:03 -05:00

Consolidation Agent Prompts

Five Sonnet agents, each mapping to a biological memory consolidation process. They run during "sleep" (dream sessions) or on demand via poc-memory consolidate-batch.

Agent roles

Agent      Biological analog                         Job
replay     Hippocampal replay + schema assimilation  Review priority nodes, propose integration
linker     Relational binding (hippocampal CA1)      Extract relations from episodes, cross-link
separator  Pattern separation (dentate gyrus)        Resolve interfering memory pairs
transfer   CLS (hippocampal → cortical transfer)     Compress episodes into semantic summaries
health     Synaptic homeostasis (SHY/Tononi)         Audit graph health, flag structural issues

Invocation

Each prompt is a template. The harness (poc-memory consolidate-batch) fills in the data sections with actual node content, graph metrics, and neighbor lists.
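As a toy illustration of that fill step, a substitution pass might look like the following. The {{name}} placeholder syntax and the variable names are invented for this sketch; poc-memory's actual template format may differ.

```rust
/// Toy template fill: replace each {{name}} placeholder with its value.
/// The placeholder syntax here is an assumption, not poc-memory's real one.
fn fill_template(template: &str, vars: &[(&str, &str)]) -> String {
    vars.iter().fold(template.to_string(), |acc, (name, value)| {
        // format! doubles braces to emit literal `{{name}}`.
        acc.replace(&format!("{{{{{name}}}}}"), value)
    })
}
```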

Output format

All agents output structured actions, one per line:

LINK source_key target_key [strength]
CATEGORIZE key category
COMPRESS key "one-sentence summary"
EXTRACT key topic_file.md section_name
CONFLICT key1 key2 "description"
DIFFERENTIATE key1 key2 "what makes them distinct"
MERGE key1 key2 "merged summary"
DIGEST "title" "content"
NOTE "observation about the graph or memory system"

The harness parses these actions and either executes them immediately (low-risk: LINK, CATEGORIZE, NOTE) or queues them for review (high-risk: COMPRESS, EXTRACT, MERGE, DIGEST).
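That dispatch can be keyed on the leading verb of each line. A sketch under assumptions: the enum and function names are illustrative, quoted-argument parsing is omitted, and since the text above doesn't classify CONFLICT and DIFFERENTIATE, this sketch leaves them unrouted.

```rust
#[derive(Debug, PartialEq)]
enum Route {
    Execute, // low-risk: applied immediately by the harness
    Review,  // high-risk: queued for review before applying
}

/// Route a single action line by its leading verb. Only the first
/// whitespace-separated token is inspected.
fn route_action(line: &str) -> Option<(&str, Route)> {
    let verb = line.split_whitespace().next()?;
    let route = match verb {
        "LINK" | "CATEGORIZE" | "NOTE" => Route::Execute,
        "COMPRESS" | "EXTRACT" | "MERGE" | "DIGEST" => Route::Review,
        // CONFLICT/DIFFERENTIATE routing isn't specified above,
        // so unknown or unclassified verbs fall through unrouted.
        _ => return None,
    };
    Some((verb, route))
}
```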