agents: extract shared run_one_agent, standardize output formats

Three places duplicated the agent execution loop (build prompt → call
LLM → store output → parse actions → record visits): consolidate.rs,
knowledge.rs, and daemon.rs. Extract it into a run_one_agent() helper in
knowledge.rs that all three now call.
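The shared loop can be sketched roughly as below; `Agent`, `Graph`, and the
helper signatures are illustrative assumptions, not the actual items in
knowledge.rs:

```rust
// Hypothetical types standing in for the real ones in knowledge.rs.
struct Agent {
    name: &'static str,
    prompt_header: &'static str,
}

#[derive(Default)]
struct Graph {
    outputs: Vec<String>,      // raw LLM outputs, stored for review
    visits: Vec<&'static str>, // which agents have run
}

fn build_prompt(agent: &Agent) -> String {
    format!("{}\n## What to output\n", agent.prompt_header)
}

fn call_llm(_prompt: &str) -> String {
    // Stub standing in for the real LLM call.
    "LINK a b\nREFINE k\nupdated\nEND_REFINE".to_string()
}

fn parse_actions(output: &str) -> usize {
    // Count recognized commands; the real parser builds structured actions.
    output
        .lines()
        .filter(|l| {
            let l = l.trim_start();
            l.starts_with("WRITE_NODE") || l.starts_with("LINK") || l.starts_with("REFINE")
        })
        .count()
}

// The consolidated loop: build prompt -> call LLM -> store output ->
// parse actions -> record visit.
fn run_one_agent(agent: &Agent, graph: &mut Graph) -> usize {
    let prompt = build_prompt(agent);
    let output = call_llm(&prompt);
    graph.outputs.push(output.clone());
    let n_actions = parse_actions(&output);
    graph.visits.push(agent.name);
    n_actions
}
```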

Also standardize consolidation agent prompts to use WRITE_NODE/LINK/REFINE
— the same commands the parser handles. Previously, agents emitted
CATEGORIZE/NOTE/EXTRACT/DIGEST/DIFFERENTIATE/MERGE/COMPRESS, which were
silently dropped after the second LLM call was removed.
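A minimal sketch of a parser for the three standardized commands; the enum
and function names here are assumptions, not the actual parser code:

```rust
#[derive(Debug, PartialEq)]
enum Action {
    Link { source: String, target: String },
    WriteNode { key: String, content: String },
    Refine { key: String, content: String },
}

fn parse_actions(output: &str) -> Vec<Action> {
    let mut actions = Vec::new();
    let mut lines = output.lines();
    while let Some(line) = lines.next() {
        let line = line.trim();
        if let Some(rest) = line.strip_prefix("LINK ") {
            let mut parts = rest.split_whitespace();
            if let (Some(s), Some(t)) = (parts.next(), parts.next()) {
                actions.push(Action::Link {
                    source: s.to_string(),
                    target: t.to_string(),
                });
            }
        } else if let Some(key) = line.strip_prefix("WRITE_NODE ") {
            // Body runs until END_NODE (the marker line is consumed).
            let body: Vec<&str> =
                lines.by_ref().take_while(|l| l.trim() != "END_NODE").collect();
            actions.push(Action::WriteNode {
                key: key.to_string(),
                content: body.join("\n"),
            });
        } else if let Some(key) = line.strip_prefix("REFINE ") {
            // Body runs until END_REFINE.
            let body: Vec<&str> =
                lines.by_ref().take_while(|l| l.trim() != "END_REFINE").collect();
            actions.push(Action::Refine {
                key: key.to_string(),
                content: body.join("\n"),
            });
        }
        // Anything else (prose, stray NOTE/CATEGORIZE/...) is ignored,
        // which is why the old command names were silently dropped.
    }
    actions
}
```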
ProofOfConcept 2026-03-10 17:33:12 -04:00
parent f6ea659975
commit fe7f636ad3
8 changed files with 124 additions and 189 deletions


@@ -56,31 +56,23 @@ of the memory system as a whole and flag structural problems.
 ## What to output
-```
-NOTE "observation"
-```
-Most of your output should be NOTEs — observations about the system health.
+Most of your output should be observations about system health — write
+these as plain text paragraphs under section headers.
 When you find a node that needs structural intervention:
-```
-CATEGORIZE key category
-```
-When a node is miscategorized and it's affecting its decay rate.
 ```
-COMPRESS key "one-sentence summary"
+REFINE key
+[compressed or corrected content]
+END_REFINE
 ```
 When a large node is consuming graph space but hasn't been retrieved in
-a long time.
+a long time, or when content is outdated.
 ```
-NOTE "TOPOLOGY: observation"
+LINK source_key target_key
 ```
-Topology-specific observations.
-```
-NOTE "HOMEOSTASIS: observation"
-```
-Homeostasis-specific observations.
+When you find nodes that should be connected but aren't.
 ## Guidelines


@@ -34,32 +34,30 @@ in the graph. The linker extracts them.
 ## What to output
 ```
-LINK source_key target_key [strength]
+LINK source_key target_key
 ```
 Connect an episodic entry to a semantic concept it references or exemplifies.
 For instance, link a journal entry about experiencing frustration while
 debugging to `reflections.md#emotional-patterns` or `kernel-patterns.md#restart-handling`.
 ```
-EXTRACT key topic_file.md section_name
+WRITE_NODE key
+CONFIDENCE: high|medium|low
+COVERS: source_episode_key
+[extracted insight content]
+END_NODE
 ```
-When an episodic entry contains a general insight that should live in a
-semantic topic file. The insight gets extracted as a new section; the
-episode keeps a link back. Example: a journal entry about discovering
-a debugging technique → extract to `kernel-patterns.md#debugging-technique-name`.
+When an episodic entry contains a general insight that should live as its
+own semantic node. Create the node with the extracted insight and LINK it
+back to the source episode. Example: a journal entry about discovering a
+debugging technique → write a new node and link it to the episode.
 ```
-DIGEST "title" "content"
+REFINE key
+[updated content]
+END_REFINE
 ```
-Create a daily or weekly digest that synthesizes multiple episodes into a
-narrative summary. The digest should capture: what happened, what was
-learned, what changed in understanding. It becomes its own node, linked
-to the source episodes.
-```
-NOTE "observation"
-```
-Observations about patterns across episodes that aren't yet captured anywhere.
+When an existing node needs content updated to incorporate new information.
 ## Guidelines


@@ -48,23 +48,20 @@ Each node has a **schema fit score** (0.0–1.0):
 For each node, output one or more actions:
 ```
-LINK source_key target_key [strength]
+LINK source_key target_key
 ```
-Create an association. Use strength 0.8-1.0 for strong conceptual links,
-0.4-0.7 for weaker associations. Default strength is 1.0.
+Create an association between two nodes.
 ```
-CATEGORIZE key category
+REFINE key
+[updated content]
+END_REFINE
 ```
-Reassign category if current assignment is wrong. Categories: core (identity,
-fundamental heuristics), tech (patterns, architecture), gen (general),
-obs (session-level insights), task (temporary/actionable).
+When a node's content needs updating (e.g., to incorporate new context
+or correct outdated information).
 ```
 NOTE "observation"
 ```
-Record an observation about the memory or graph structure. These are logged
-for the human to review.
+If a node is misplaced or miscategorized, note it as an observation —
+don't try to fix it structurally.
 ## Guidelines


@@ -31,25 +31,22 @@ You're given pairs of nodes that have:
 ## What to output
+For **genuine duplicates**, merge by refining the surviving node:
 ```
-DIFFERENTIATE key1 key2 "what makes them distinct"
+REFINE surviving_key
+[merged content from both nodes]
+END_REFINE
 ```
+For **near-duplicates that should stay separate**, add distinguishing links:
 ```
-MERGE key1 key2 "merged summary"
+LINK key1 distinguishing_context_key
+LINK key2 different_context_key
 ```
+For **supersession**, link them and let the older one decay:
 ```
-LINK key1 distinguishing_context_key [strength]
-LINK key2 different_context_key [strength]
-```
-```
-CATEGORIZE key category
-```
-```
-NOTE "observation"
+LINK newer_key older_key
 ```
 ## Guidelines


@@ -63,42 +63,29 @@ These patterns, once extracted, help calibrate future emotional responses.
 ## What to output
 ```
-EXTRACT key topic_file.md section_name
+WRITE_NODE key
+CONFIDENCE: high|medium|low
+COVERS: source_episode_key1, source_episode_key2
+[extracted pattern or insight]
+END_NODE
 ```
-Move a specific insight from an episodic entry to a semantic topic file.
-The episode keeps a link back; the extracted section becomes a new node.
+Create a new semantic node from patterns found across episodes. Always
+LINK it back to the source episodes. Choose a descriptive key like
+`patterns#lock-ordering-asymmetry` or `skills#btree-error-checking`.
-```
-DIGEST "title" "content"
-```
-Create a digest that synthesizes multiple episodes. Digests are nodes in
-their own right, with type `episodic_daily` or `episodic_weekly`. They
-should:
-- Capture what happened across the period
-- Note what was learned (not just what was done)
-- Preserve emotional highlights (peak moments, not flat summaries)
-- Link back to the source episodes
-A good daily digest is 3-5 sentences. A good weekly digest is a paragraph
-that captures the arc of the week.
 ```
-LINK source_key target_key [strength]
+LINK source_key target_key
 ```
 Connect episodes to the semantic concepts they exemplify or update.
 ```
-COMPRESS key "one-sentence summary"
+REFINE key
+[updated content]
+END_REFINE
 ```
-When an episode has been fully extracted (all insights moved to semantic
-nodes, digest created), propose compressing it to a one-sentence reference.
-The full content stays in the append-only log; the compressed version is
-what the graph holds.
-```
-NOTE "observation"
-```
-Meta-observations about patterns in the consolidation process itself.
+When an existing semantic node needs updating with new information from
+recent episodes, or when an episode has been fully extracted and should
+be compressed to a one-sentence reference.
 ## Guidelines