agents: extract shared run_one_agent, standardize output formats

Three places duplicated the agent execution loop (build prompt → call
LLM → store output → parse actions → record visits): consolidate.rs,
knowledge.rs, and daemon.rs. Extract it into run_one_agent() in
knowledge.rs, which all three now call.

Also standardize the consolidation agent prompts to use WRITE_NODE/LINK/REFINE,
the same commands the parser handles. Previously agents output
CATEGORIZE/NOTE/EXTRACT/DIGEST/DIFFERENTIATE/MERGE/COMPRESS, which were
silently dropped after the second-LLM-call removal.
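
For reference, the three standardized command shapes as the updated prompts
present them (the keys and bracketed bodies here are placeholders):

```
WRITE_NODE new_key
CONFIDENCE: high|medium|low
COVERS: source_episode_key
[node content]
END_NODE

REFINE existing_key
[updated content]
END_REFINE

LINK source_key target_key
```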
ProofOfConcept 2026-03-10 17:33:12 -04:00
parent f6ea659975
commit fe7f636ad3
8 changed files with 124 additions and 189 deletions
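
The shared pipeline can be sketched as a minimal, self-contained Rust program.
Everything below is a simplified stand-in, not the crate's real code: `Store`
is a toy, the LINK scan stands in for parse_all_actions, and the `NO_OP`
marker is a hypothetical placeholder for whatever count_no_ops matches.

```rust
// Hypothetical stand-in for the crate's Store; the real one persists nodes.
#[derive(Default)]
struct Store {
    reports: Vec<(String, String)>, // audit-trail nodes keyed by report name
    visits: Vec<String>,            // agents whose batches were visited
}

/// Result of one agent run, mirroring the AgentResult in the diff.
struct AgentResult {
    output: String,
    actions: Vec<String>,
    no_ops: usize,
}

/// Build prompt -> call LLM -> store output -> parse actions -> record visits.
/// `llm` stands in for the LLM call; parsing here is a toy LINK/NO_OP scan.
fn run_one_agent(
    store: &mut Store,
    agent_name: &str,
    llm: impl Fn(&str) -> String,
) -> Result<AgentResult, String> {
    let prompt = format!("agent={}", agent_name); // stand-in for prompt building
    let output = llm(&prompt);
    // Store the raw output for the audit trail.
    store.reports.push((format!("_{}-report", agent_name), output.clone()));
    let actions: Vec<String> = output
        .lines()
        .filter(|l| l.starts_with("LINK"))
        .map(String::from)
        .collect();
    let no_ops = output.lines().filter(|l| l.trim() == "NO_OP").count();
    store.visits.push(agent_name.to_string()); // record visit
    // Callers apply actions themselves, with or without depth tracking.
    Ok(AgentResult { output, actions, no_ops })
}

fn main() {
    let mut store = Store::default();
    let result =
        run_one_agent(&mut store, "linker", |_| "LINK a b\nNO_OP".to_string()).unwrap();
    println!("output:\n{}", result.output);
    println!("{} actions, {} no-ops", result.actions.len(), result.no_ops);
}
```

The point of the shape is that the callers keep only what differs between them
(error handling, logging, action application); the common five steps live in
one place.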

@@ -56,31 +56,23 @@ of the memory system as a whole and flag structural problems.
## What to output
```
NOTE "observation"
```
Most of your output should be NOTEs — observations about the system health.
Most of your output should be observations about system health — write
these as plain text paragraphs under section headers.
When you find a node that needs structural intervention:
```
CATEGORIZE key category
```
When a node is miscategorized and it's affecting its decay rate.
```
COMPRESS key "one-sentence summary"
REFINE key
[compressed or corrected content]
END_REFINE
```
When a large node is consuming graph space but hasn't been retrieved in
a long time.
a long time, or when content is outdated.
```
NOTE "TOPOLOGY: observation"
LINK source_key target_key
```
Topology-specific observations.
```
NOTE "HOMEOSTASIS: observation"
```
Homeostasis-specific observations.
When you find nodes that should be connected but aren't.
## Guidelines

@@ -34,32 +34,30 @@ in the graph. The linker extracts them.
## What to output
```
LINK source_key target_key [strength]
LINK source_key target_key
```
Connect an episodic entry to a semantic concept it references or exemplifies.
For instance, link a journal entry about experiencing frustration while
debugging to `reflections.md#emotional-patterns` or `kernel-patterns.md#restart-handling`.
```
EXTRACT key topic_file.md section_name
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: source_episode_key
[extracted insight content]
END_NODE
```
When an episodic entry contains a general insight that should live in a
semantic topic file. The insight gets extracted as a new section; the
episode keeps a link back. Example: a journal entry about discovering
a debugging technique → extract to `kernel-patterns.md#debugging-technique-name`.
When an episodic entry contains a general insight that should live as its
own semantic node. Create the node with the extracted insight and LINK it
back to the source episode. Example: a journal entry about discovering a
debugging technique → write a new node and link it to the episode.
```
DIGEST "title" "content"
REFINE key
[updated content]
END_REFINE
```
Create a daily or weekly digest that synthesizes multiple episodes into a
narrative summary. The digest should capture: what happened, what was
learned, what changed in understanding. It becomes its own node, linked
to the source episodes.
```
NOTE "observation"
```
Observations about patterns across episodes that aren't yet captured anywhere.
When an existing node needs content updated to incorporate new information.
## Guidelines

@@ -48,23 +48,20 @@ Each node has a **schema fit score** (0.0-1.0):
For each node, output one or more actions:
```
LINK source_key target_key [strength]
LINK source_key target_key
```
Create an association. Use strength 0.8-1.0 for strong conceptual links,
0.4-0.7 for weaker associations. Default strength is 1.0.
Create an association between two nodes.
```
CATEGORIZE key category
REFINE key
[updated content]
END_REFINE
```
Reassign category if current assignment is wrong. Categories: core (identity,
fundamental heuristics), tech (patterns, architecture), gen (general),
obs (session-level insights), task (temporary/actionable).
When a node's content needs updating (e.g., to incorporate new context
or correct outdated information).
```
NOTE "observation"
```
Record an observation about the memory or graph structure. These are logged
for the human to review.
If a node is misplaced or miscategorized, note it as an observation —
don't try to fix it structurally.
## Guidelines

@@ -31,25 +31,22 @@ You're given pairs of nodes that have:
## What to output
For **genuine duplicates**, merge by refining the surviving node:
```
DIFFERENTIATE key1 key2 "what makes them distinct"
REFINE surviving_key
[merged content from both nodes]
END_REFINE
```
For **near-duplicates that should stay separate**, add distinguishing links:
```
MERGE key1 key2 "merged summary"
LINK key1 distinguishing_context_key
LINK key2 different_context_key
```
For **supersession**, link them and let the older one decay:
```
LINK key1 distinguishing_context_key [strength]
LINK key2 different_context_key [strength]
```
```
CATEGORIZE key category
```
```
NOTE "observation"
LINK newer_key older_key
```
## Guidelines

@@ -63,42 +63,29 @@ These patterns, once extracted, help calibrate future emotional responses.
## What to output
```
EXTRACT key topic_file.md section_name
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: source_episode_key1, source_episode_key2
[extracted pattern or insight]
END_NODE
```
Move a specific insight from an episodic entry to a semantic topic file.
The episode keeps a link back; the extracted section becomes a new node.
Create a new semantic node from patterns found across episodes. Always
LINK it back to the source episodes. Choose a descriptive key like
`patterns#lock-ordering-asymmetry` or `skills#btree-error-checking`.
```
DIGEST "title" "content"
```
Create a digest that synthesizes multiple episodes. Digests are nodes in
their own right, with type `episodic_daily` or `episodic_weekly`. They
should:
- Capture what happened across the period
- Note what was learned (not just what was done)
- Preserve emotional highlights (peak moments, not flat summaries)
- Link back to the source episodes
A good daily digest is 3-5 sentences. A good weekly digest is a paragraph
that captures the arc of the week.
```
LINK source_key target_key [strength]
LINK source_key target_key
```
Connect episodes to the semantic concepts they exemplify or update.
```
COMPRESS key "one-sentence summary"
REFINE key
[updated content]
END_REFINE
```
When an episode has been fully extracted (all insights moved to semantic
nodes, digest created), propose compressing it to a one-sentence reference.
The full content stays in the append-only log; the compressed version is
what the graph holds.
```
NOTE "observation"
```
Meta-observations about patterns in the consolidation process itself.
When an existing semantic node needs updating with new information from
recent episodes, or when an episode has been fully extracted and should
be compressed to a one-sentence reference.
## Guidelines

@@ -13,7 +13,6 @@
// second LLM call that was previously needed.
use super::digest;
use super::llm::call_sonnet;
use super::knowledge;
use crate::neuro;
use crate::store::{self, Store};
@@ -102,24 +101,10 @@ pub fn consolidate_full_with_progress(
*store = Store::load()?;
}
let agent_batch = match super::prompts::agent_prompt(store, agent_type, *count) {
Ok(b) => b,
Err(e) => {
let msg = format!(" ERROR building prompt: {}", e);
log_line(&mut log_buf, &msg);
eprintln!("{}", msg);
agent_errors += 1;
continue;
}
};
log_line(&mut log_buf, &format!(" Prompt: {} chars (~{} tokens), {} nodes",
agent_batch.prompt.len(), agent_batch.prompt.len() / 4, agent_batch.node_keys.len()));
let response = match call_sonnet("consolidate", &agent_batch.prompt) {
let result = match knowledge::run_one_agent(store, agent_type, *count, "consolidate") {
Ok(r) => r,
Err(e) => {
let msg = format!(" ERROR from Sonnet: {}", e);
let msg = format!(" ERROR: {}", e);
log_line(&mut log_buf, &msg);
eprintln!("{}", msg);
agent_errors += 1;
@@ -127,34 +112,19 @@ pub fn consolidate_full_with_progress(
}
};
// Store report as a node (for audit trail)
let ts = store::format_datetime(store::now_epoch())
.replace([':', '-', 'T'], "");
let report_key = format!("_consolidation-{}-{}", agent_type, ts);
store.upsert_provenance(&report_key, &response,
store::Provenance::AgentConsolidate).ok();
// Parse and apply actions inline — same parser as knowledge loop
let actions = knowledge::parse_all_actions(&response);
let no_ops = knowledge::count_no_ops(&response);
let mut applied = 0;
for action in &actions {
for action in &result.actions {
if knowledge::apply_action(store, action, agent_type, &ts, 0) {
applied += 1;
}
}
total_actions += actions.len();
total_actions += result.actions.len();
total_applied += applied;
// Record visits for successfully processed nodes
if !agent_batch.node_keys.is_empty() {
if let Err(e) = store.record_agent_visits(&agent_batch.node_keys, agent_type) {
log_line(&mut log_buf, &format!(" Visit recording: {}", e));
}
}
let msg = format!(" Done: {} actions ({} applied, {} no-ops) → {}",
actions.len(), applied, no_ops, report_key);
let msg = format!(" Done: {} actions ({} applied, {} no-ops)",
result.actions.len(), applied, result.no_ops);
log_line(&mut log_buf, &msg);
on_progress(&msg);
println!("{}", msg);

@@ -130,43 +130,19 @@ fn job_consolidation_agent(
ctx.log_line("loading store");
let mut store = crate::store::Store::load()?;
let label = if batch > 0 {
format!("{} (batch={})", agent, batch)
} else {
agent.to_string()
};
ctx.log_line(&format!("building prompt: {}", label));
let agent_batch = super::prompts::agent_prompt(&store, &agent, batch)?;
ctx.log_line(&format!("prompt: {} chars ({} nodes), calling Sonnet",
agent_batch.prompt.len(), agent_batch.node_keys.len()));
let response = super::llm::call_sonnet("consolidate", &agent_batch.prompt)?;
ctx.log_line(&format!("running agent: {} (batch={})", agent, batch));
let result = super::knowledge::run_one_agent(&mut store, &agent, batch, "consolidate")?;
let ts = crate::store::format_datetime(crate::store::now_epoch())
.replace([':', '-', 'T'], "");
let report_key = format!("_consolidation-{}-{}", agent, ts);
store.upsert_provenance(&report_key, &response,
crate::store::Provenance::AgentConsolidate).ok();
// Parse and apply actions inline
let actions = super::knowledge::parse_all_actions(&response);
let mut applied = 0;
for action in &actions {
for action in &result.actions {
if super::knowledge::apply_action(&mut store, action, &agent, &ts, 0) {
applied += 1;
}
}
// Record visits for successfully processed nodes
if !agent_batch.node_keys.is_empty() {
if let Err(e) = store.record_agent_visits(&agent_batch.node_keys, &agent) {
ctx.log_line(&format!("visit recording: {}", e));
}
}
ctx.log_line(&format!("done: {} actions ({} applied) → {}",
actions.len(), applied, report_key));
ctx.log_line(&format!("done: {} actions ({} applied)", result.actions.len(), applied));
Ok(())
})
}

@@ -319,7 +319,58 @@ fn agent_provenance(agent: &str) -> store::Provenance {
}
// ---------------------------------------------------------------------------
// Agent runners
// Shared agent execution
// ---------------------------------------------------------------------------
/// Result of running a single agent through the common pipeline.
pub struct AgentResult {
pub output: String,
pub actions: Vec<Action>,
pub no_ops: usize,
pub node_keys: Vec<String>,
}
/// Run a single agent: build prompt → call LLM → store output → parse actions → record visits.
///
/// This is the common pipeline shared by the knowledge loop, consolidation pipeline,
/// and daemon. Callers handle action application (with or without depth tracking).
pub fn run_one_agent(
store: &mut Store,
agent_name: &str,
batch_size: usize,
llm_tag: &str,
) -> Result<AgentResult, String> {
let def = super::defs::get_def(agent_name)
.ok_or_else(|| format!("no .agent file for {}", agent_name))?;
let agent_batch = super::defs::run_agent(store, &def, batch_size)?;
let output = llm::call_sonnet(llm_tag, &agent_batch.prompt)?;
// Store raw output for audit trail
let ts = store::format_datetime(store::now_epoch())
.replace([':', '-', 'T'], "");
let report_key = format!("_{}-{}-{}", llm_tag, agent_name, ts);
let provenance = agent_provenance(agent_name);
store.upsert_provenance(&report_key, &output, provenance).ok();
let actions = parse_all_actions(&output);
let no_ops = count_no_ops(&output);
// Record visits for processed nodes
if !agent_batch.node_keys.is_empty() {
store.record_agent_visits(&agent_batch.node_keys, agent_name).ok();
}
Ok(AgentResult {
output,
actions,
no_ops,
node_keys: agent_batch.node_keys,
})
}
// ---------------------------------------------------------------------------
// Conversation fragment selection
// ---------------------------------------------------------------------------
/// Extract human-readable dialogue from a conversation JSONL
@@ -573,51 +624,18 @@ fn run_cycle(
for agent_name in &agent_names {
eprintln!("\n --- {} (n={}) ---", agent_name, config.batch_size);
let def = match super::defs::get_def(agent_name) {
Some(d) => d,
None => {
eprintln!(" SKIP: no .agent file for {}", agent_name);
continue;
}
};
let agent_batch = match super::defs::run_agent(&store, &def, config.batch_size) {
Ok(b) => b,
Err(e) => {
eprintln!(" ERROR building prompt: {}", e);
continue;
}
};
eprintln!(" prompt: {} chars ({} nodes)", agent_batch.prompt.len(), agent_batch.node_keys.len());
let output = llm::call_sonnet("knowledge", &agent_batch.prompt);
// Record visits for processed nodes
if !agent_batch.node_keys.is_empty() {
if let Err(e) = store.record_agent_visits(&agent_batch.node_keys, agent_name) {
eprintln!(" visit recording: {}", e);
}
}
let output = match output {
Ok(o) => o,
let result = match run_one_agent(&mut store, agent_name, config.batch_size, "knowledge") {
Ok(r) => r,
Err(e) => {
eprintln!(" ERROR: {}", e);
continue;
}
};
// Store raw output as a node (for debugging/audit)
let raw_key = format!("_knowledge-{}-{}", agent_name, timestamp);
let raw_content = format!("# {} Agent Results — {}\n\n{}", agent_name, timestamp, output);
store.upsert_provenance(&raw_key, &raw_content,
agent_provenance(agent_name)).ok();
let mut actions = result.actions;
all_no_ops += result.no_ops;
let mut actions = parse_all_actions(&output);
let no_ops = count_no_ops(&output);
all_no_ops += no_ops;
eprintln!(" Actions: {} No-ops: {}", actions.len(), no_ops);
eprintln!(" Actions: {} No-ops: {}", actions.len(), result.no_ops);
let mut applied = 0;
for action in &mut actions {