consolidate: eliminate second LLM call, apply actions inline
The consolidation pipeline previously made a second Sonnet call to extract structured JSON actions from agent reports. This was both wasteful (extra LLM call per consolidation) and lossy (only extracted links and manual items, ignoring WRITE_NODE/REFINE). Now actions are parsed and applied inline after each agent runs, using the same parse_all_actions() parser as the knowledge loop. The daemon scheduler's separate apply phase is also removed. Also deletes 8 superseded/orphaned prompt .md files (784 lines) that have been replaced by .agent files.
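The inline pattern the message describes (parse plain-text action lines straight from the agent's report, then apply each one immediately) can be sketched as below. The `ActionKind` shape and the exact line syntax here are simplified assumptions for illustration, not the project's real `knowledge::parse_all_actions` API:

```rust
// Hypothetical, simplified action model; the real parser lives in the
// project's knowledge module and supports richer payloads.
#[derive(Debug, PartialEq)]
enum ActionKind {
    Link { source: String, target: String },
    WriteNode { key: String },
    Refine { key: String },
}

// Parse one action per line: "LINK a -> b", "WRITE_NODE key", "REFINE key".
// Lines that match no prefix (the agent's prose) are ignored.
fn parse_all_actions(report: &str) -> Vec<ActionKind> {
    report
        .lines()
        .filter_map(|line| {
            let line = line.trim();
            if let Some(rest) = line.strip_prefix("LINK ") {
                let (source, target) = rest.split_once(" -> ")?;
                Some(ActionKind::Link {
                    source: source.trim().to_string(),
                    target: target.trim().to_string(),
                })
            } else if let Some(key) = line.strip_prefix("WRITE_NODE ") {
                Some(ActionKind::WriteNode { key: key.trim().to_string() })
            } else if let Some(key) = line.strip_prefix("REFINE ") {
                Some(ActionKind::Refine { key: key.trim().to_string() })
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    let report = "Some analysis prose.\nLINK a.md#x -> b.md#y\nWRITE_NODE notes.md#z";
    let actions = parse_all_actions(report);
    // With actions parsed directly from the report, each one can be applied
    // right after the agent runs; no second extraction call is needed.
    assert_eq!(actions.len(), 2);
    for action in &actions {
        println!("{:?}", action);
    }
}
```

In the commit, `knowledge::apply_action` then mutates the store for each parsed action; the sketch stops at parsing since the store types are project-specific.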
parent 42d8e265da
commit f6ea659975
11 changed files with 119 additions and 1024 deletions
@@ -2,18 +2,21 @@
 //
 // consolidate_full() runs the full autonomous consolidation:
 // 1. Plan: analyze metrics, allocate agents
-// 2. Execute: run each agent (Sonnet calls), save reports
-// 3. Apply: extract and apply actions from reports
+// 2. Execute: run each agent, parse + apply actions inline
+// 3. Graph maintenance (orphans, degree cap)
 // 4. Digest: generate missing daily/weekly/monthly digests
 // 5. Links: apply links extracted from digests
 // 6. Summary: final metrics comparison
 //
-// apply_consolidation() processes consolidation reports independently.
+// Actions are parsed directly from agent output using the same parser
+// as the knowledge loop (WRITE_NODE, LINK, REFINE), eliminating the
+// second LLM call that was previously needed.

 use super::digest;
-use super::llm::{call_sonnet, parse_json_response};
+use super::llm::call_sonnet;
+use super::knowledge;
 use crate::neuro;
-use crate::store::{self, Store, new_relation};
+use crate::store::{self, Store};

 /// Append a line to the log buffer.
@@ -57,9 +60,10 @@ pub fn consolidate_full_with_progress(
     // --- Step 2: Execute agents ---
     log_line(&mut log_buf, "\n--- Step 2: Execute agents ---");
-    let mut reports: Vec<String> = Vec::new();
     let mut agent_num = 0usize;
     let mut agent_errors = 0usize;
+    let mut total_applied = 0usize;
+    let mut total_actions = 0usize;

     // Build the list of (agent_type, batch_size) runs
     let mut runs: Vec<(&str, usize)> = Vec::new();
@@ -123,13 +127,24 @@ pub fn consolidate_full_with_progress(
             }
         };

-        // Store report as a node
+        // Store report as a node (for audit trail)
         let ts = store::format_datetime(store::now_epoch())
             .replace([':', '-', 'T'], "");
         let report_key = format!("_consolidation-{}-{}", agent_type, ts);
         store.upsert_provenance(&report_key, &response,
             store::Provenance::AgentConsolidate).ok();
-        reports.push(report_key.clone());
+
+        // Parse and apply actions inline — same parser as knowledge loop
+        let actions = knowledge::parse_all_actions(&response);
+        let no_ops = knowledge::count_no_ops(&response);
+        let mut applied = 0;
+        for action in &actions {
+            if knowledge::apply_action(store, action, agent_type, &ts, 0) {
+                applied += 1;
+            }
+        }
+        total_actions += actions.len();
+        total_applied += applied;

         // Record visits for successfully processed nodes
         if !agent_batch.node_keys.is_empty() {
@@ -138,36 +153,19 @@ pub fn consolidate_full_with_progress(
             }
         }

-        let msg = format!(" Done: {} lines → {}", response.lines().count(), report_key);
+        let msg = format!(" Done: {} actions ({} applied, {} no-ops) → {}",
+            actions.len(), applied, no_ops, report_key);
         log_line(&mut log_buf, &msg);
         on_progress(&msg);
         println!("{}", msg);
     }

-    log_line(&mut log_buf, &format!("\nAgents complete: {} run, {} errors",
-        agent_num - agent_errors, agent_errors));
+    log_line(&mut log_buf, &format!("\nAgents complete: {} run, {} errors, {} actions ({} applied)",
+        agent_num - agent_errors, agent_errors, total_actions, total_applied));
+    store.save()?;

-    // --- Step 3: Apply consolidation actions ---
-    log_line(&mut log_buf, "\n--- Step 3: Apply consolidation actions ---");
-    on_progress("applying actions");
-    println!("\n--- Applying consolidation actions ---");
-    *store = Store::load()?;
-
-    if reports.is_empty() {
-        log_line(&mut log_buf, " No reports to apply.");
-    } else {
-        match apply_consolidation(store, true, None) {
-            Ok(()) => log_line(&mut log_buf, " Applied."),
-            Err(e) => {
-                let msg = format!(" ERROR applying consolidation: {}", e);
-                log_line(&mut log_buf, &msg);
-                eprintln!("{}", msg);
-            }
-        }
-    }
-
-    // --- Step 3b: Link orphans ---
-    log_line(&mut log_buf, "\n--- Step 3b: Link orphans ---");
+    // --- Step 3: Link orphans ---
+    log_line(&mut log_buf, "\n--- Step 3: Link orphans ---");
     on_progress("linking orphans");
     println!("\n--- Linking orphan nodes ---");
     *store = Store::load()?;
@@ -175,8 +173,8 @@ pub fn consolidate_full_with_progress(
     let (lo_orphans, lo_added) = neuro::link_orphans(store, 2, 3, 0.15);
     log_line(&mut log_buf, &format!(" {} orphans, {} links added", lo_orphans, lo_added));

-    // --- Step 3c: Cap degree ---
-    log_line(&mut log_buf, "\n--- Step 3c: Cap degree ---");
+    // --- Step 3b: Cap degree ---
+    log_line(&mut log_buf, "\n--- Step 3b: Cap degree ---");
     on_progress("capping degree");
     println!("\n--- Capping node degree ---");
     *store = Store::load()?;
@@ -244,176 +242,71 @@ pub fn consolidate_full_with_progress(
     Ok(())
 }

-/// Find the most recent set of consolidation report keys from the store.
-fn find_consolidation_reports(store: &Store) -> Vec<String> {
+/// Re-parse and apply actions from stored consolidation reports.
+/// This is for manually re-processing reports — during normal consolidation,
+/// actions are applied inline as each agent runs.
+pub fn apply_consolidation(store: &mut Store, do_apply: bool, report_key: Option<&str>) -> Result<(), String> {
+    let reports: Vec<String> = if let Some(key) = report_key {
+        vec![key.to_string()]
+    } else {
+        // Find the most recent batch of reports
         let mut keys: Vec<&String> = store.nodes.keys()
-            .filter(|k| k.starts_with("_consolidation-"))
+            .filter(|k| k.starts_with("_consolidation-") && !k.contains("-actions-") && !k.contains("-log-"))
             .collect();
         keys.sort();
         keys.reverse();

-        if keys.is_empty() { return Vec::new(); }
+        if keys.is_empty() { return Ok(()); }

-        // Group by timestamp (last segment after last '-')
         let latest_ts = keys[0].rsplit('-').next().unwrap_or("").to_string();

         keys.into_iter()
             .filter(|k| k.ends_with(&latest_ts))
             .cloned()
             .collect()
-}
-
-fn build_consolidation_prompt(store: &Store, report_keys: &[String]) -> Result<String, String> {
-    let mut report_text = String::new();
-    for key in report_keys {
-        let content = store.nodes.get(key)
-            .map(|n| n.content.as_str())
-            .unwrap_or("");
-        report_text.push_str(&format!("\n{}\n## Report: {}\n\n{}\n",
-            "=".repeat(60), key, content));
-    }
-
-    super::prompts::load_prompt("consolidation", &[("{{REPORTS}}", &report_text)])
-}
-
-/// Run the full apply-consolidation pipeline.
-pub fn apply_consolidation(store: &mut Store, do_apply: bool, report_key: Option<&str>) -> Result<(), String> {
-    let reports = if let Some(key) = report_key {
-        vec![key.to_string()]
-    } else {
-        find_consolidation_reports(store)
     };

     if reports.is_empty() {
         println!("No consolidation reports found.");
-        println!("Run consolidation-agents first.");
         return Ok(());
     }

     println!("Found {} reports:", reports.len());
-    for r in &reports {
-        println!(" {}", r);
+    let mut all_actions = Vec::new();
+    for key in &reports {
+        let content = store.nodes.get(key).map(|n| n.content.as_str()).unwrap_or("");
+        let actions = knowledge::parse_all_actions(content);
+        println!(" {} → {} actions", key, actions.len());
+        all_actions.extend(actions);
     }

-    println!("\nExtracting actions from reports...");
-    let prompt = build_consolidation_prompt(store, &reports)?;
-    println!(" Prompt: {} chars", prompt.len());
-
-    let response = call_sonnet("consolidate", &prompt)?;
-
-    let actions_value = parse_json_response(&response)?;
-    let actions = actions_value.as_array()
-        .ok_or("expected JSON array of actions")?;
-
-    println!(" {} actions extracted", actions.len());
-
-    // Store actions in the store
-    let timestamp = store::format_datetime(store::now_epoch())
-        .replace([':', '-'], "");
-    let actions_key = format!("_consolidation-actions-{}", timestamp);
-    let actions_json = serde_json::to_string_pretty(&actions_value).unwrap();
-    store.upsert_provenance(&actions_key, &actions_json,
-        store::Provenance::AgentConsolidate).ok();
-    println!(" Stored: {}", actions_key);
-
-    let link_actions: Vec<_> = actions.iter()
-        .filter(|a| a.get("action").and_then(|v| v.as_str()) == Some("link"))
-        .collect();
-    let manual_actions: Vec<_> = actions.iter()
-        .filter(|a| a.get("action").and_then(|v| v.as_str()) == Some("manual"))
-        .collect();
-
     if !do_apply {
-        // Dry run
-        println!("\n{}", "=".repeat(60));
-        println!("DRY RUN — {} actions proposed", actions.len());
-        println!("{}\n", "=".repeat(60));
-
-        if !link_actions.is_empty() {
-            println!("## Links to add ({})\n", link_actions.len());
-            for (i, a) in link_actions.iter().enumerate() {
-                let src = a.get("source").and_then(|v| v.as_str()).unwrap_or("?");
-                let tgt = a.get("target").and_then(|v| v.as_str()).unwrap_or("?");
-                let reason = a.get("reason").and_then(|v| v.as_str()).unwrap_or("");
-                println!(" {:2}. {} → {} ({})", i + 1, src, tgt, reason);
+        println!("\nDRY RUN — {} actions parsed", all_actions.len());
+        for action in &all_actions {
+            match &action.kind {
+                knowledge::ActionKind::Link { source, target } =>
+                    println!(" LINK {} → {}", source, target),
+                knowledge::ActionKind::WriteNode { key, .. } =>
+                    println!(" WRITE {}", key),
+                knowledge::ActionKind::Refine { key, .. } =>
+                    println!(" REFINE {}", key),
             }
         }
-        if !manual_actions.is_empty() {
-            println!("\n## Manual actions needed ({})\n", manual_actions.len());
-            for a in &manual_actions {
-                let prio = a.get("priority").and_then(|v| v.as_str()).unwrap_or("?");
-                let desc = a.get("description").and_then(|v| v.as_str()).unwrap_or("?");
-                println!(" [{}] {}", prio, desc);
-            }
-        }
-        println!("\n{}", "=".repeat(60));
-        println!("To apply: poc-memory apply-consolidation --apply");
-        println!("{}", "=".repeat(60));
+        println!("\nTo apply: poc-memory apply-consolidation --apply");
         return Ok(());
     }

-    // Apply
-    let mut applied = 0usize;
-    let mut skipped = 0usize;
-
-    if !link_actions.is_empty() {
-        println!("\nApplying {} links...", link_actions.len());
-        for a in &link_actions {
-            let src = a.get("source").and_then(|v| v.as_str()).unwrap_or("");
-            let tgt = a.get("target").and_then(|v| v.as_str()).unwrap_or("");
-            if src.is_empty() || tgt.is_empty() { skipped += 1; continue; }
-
-            let source = match store.resolve_key(src) {
-                Ok(s) => s,
-                Err(e) => { println!(" ? {} → {}: {}", src, tgt, e); skipped += 1; continue; }
-            };
-            let target = match store.resolve_key(tgt) {
-                Ok(t) => t,
-                Err(e) => { println!(" ? {} → {}: {}", src, tgt, e); skipped += 1; continue; }
-            };
-
-            // Refine target to best-matching section
-            let source_content = store.nodes.get(&source)
-                .map(|n| n.content.as_str()).unwrap_or("");
-            let target = neuro::refine_target(store, source_content, &target);
-
-            let exists = store.relations.iter().any(|r|
-                r.source_key == source && r.target_key == target && !r.deleted
-            );
-            if exists { skipped += 1; continue; }
-
-            let source_uuid = match store.nodes.get(&source) { Some(n) => n.uuid, None => { skipped += 1; continue; } };
-            let target_uuid = match store.nodes.get(&target) { Some(n) => n.uuid, None => { skipped += 1; continue; } };
-
-            let rel = new_relation(
-                source_uuid, target_uuid,
-                store::RelationType::Auto,
-                0.5,
-                &source, &target,
-            );
-            if store.add_relation(rel).is_ok() {
-                println!(" + {} → {}", source, target);
+    let ts = store::format_datetime(store::now_epoch()).replace([':', '-', 'T'], "");
+    let mut applied = 0;
+    for action in &all_actions {
+        if knowledge::apply_action(store, action, "consolidate", &ts, 0) {
             applied += 1;
         }
     }
-    }
-
-    if !manual_actions.is_empty() {
-        println!("\n## Manual actions (not auto-applied):\n");
-        for a in &manual_actions {
-            let prio = a.get("priority").and_then(|v| v.as_str()).unwrap_or("?");
-            let desc = a.get("description").and_then(|v| v.as_str()).unwrap_or("?");
-            println!(" [{}] {}", prio, desc);
-        }
-    }

     if applied > 0 {
         store.save()?;
     }

-    println!("\n{}", "=".repeat(60));
-    println!("Applied: {}  Skipped: {}  Manual: {}", applied, skipped, manual_actions.len());
-    println!("{}", "=".repeat(60));
+    println!("Applied: {}/{} actions", applied, all_actions.len());

     Ok(())
 }
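The rewritten `apply_consolidation` keeps only the newest batch of stored reports (keys share a trailing timestamp segment) before re-parsing them. That selection step, isolated from the project's real `Store` type, can be sketched roughly like this (the key format is inferred from the diff; everything else is a stand-in):

```rust
// Sketch of the "most recent batch" selection used by apply_consolidation:
// report keys look like "_consolidation-{agent}-{ts}", and one consolidation
// run produces several reports sharing the same timestamp suffix.
fn latest_batch(mut keys: Vec<String>) -> Vec<String> {
    // Skip derived artifacts, mirroring the filter in the diff.
    keys.retain(|k| {
        k.starts_with("_consolidation-")
            && !k.contains("-actions-")
            && !k.contains("-log-")
    });
    keys.sort();
    keys.reverse();
    // Newest key first after the reverse sort; its trailing '-' segment
    // is the batch timestamp.
    let latest_ts = match keys.first() {
        Some(k) => k.rsplit('-').next().unwrap_or("").to_string(),
        None => return Vec::new(),
    };
    keys.into_iter().filter(|k| k.ends_with(&latest_ts)).collect()
}

fn main() {
    let keys = vec![
        "_consolidation-linker-20260101".to_string(),
        "_consolidation-health-20260102".to_string(),
        "_consolidation-linker-20260102".to_string(),
        "_consolidation-actions-20260102".to_string(), // excluded by retain()
    ];
    let batch = latest_batch(keys);
    assert_eq!(batch.len(), 2); // both agent reports from the 20260102 run
    println!("{:?}", batch);
}
```

Note the design choice this preserves: filtering by the exact timestamp suffix groups one run's reports together, so a manual re-apply never mixes reports from different consolidation runs.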
@@ -149,6 +149,15 @@ fn job_consolidation_agent(
         store.upsert_provenance(&report_key, &response,
             crate::store::Provenance::AgentConsolidate).ok();

+        // Parse and apply actions inline
+        let actions = super::knowledge::parse_all_actions(&response);
+        let mut applied = 0;
+        for action in &actions {
+            if super::knowledge::apply_action(&mut store, action, &agent, &ts, 0) {
+                applied += 1;
+            }
+        }
+
         // Record visits for successfully processed nodes
         if !agent_batch.node_keys.is_empty() {
             if let Err(e) = store.record_agent_visits(&agent_batch.node_keys, &agent) {
@@ -156,7 +165,8 @@ fn job_consolidation_agent(
             }
         }

-        ctx.log_line(&format!("done: {} lines → {}", response.lines().count(), report_key));
+        ctx.log_line(&format!("done: {} actions ({} applied) → {}",
+            actions.len(), applied, report_key));
         Ok(())
     })
 }
@@ -455,16 +465,6 @@ fn job_split_one(
     })
 }

-/// Apply consolidation actions from recent reports.
-fn job_consolidation_apply(ctx: &ExecutionContext) -> Result<(), TaskError> {
-    run_job(ctx, "c-apply", || {
-        ctx.log_line("loading store");
-        let mut store = crate::store::Store::load()?;
-        ctx.log_line("applying consolidation actions");
-        super::consolidate::apply_consolidation(&mut store, true, None)
-    })
-}
-
 /// Link orphan nodes (CPU-heavy, no LLM).
 fn job_link_orphans(ctx: &ExecutionContext) -> Result<(), TaskError> {
     run_job(ctx, "c-orphans", || {
@@ -1174,31 +1174,23 @@ pub fn run_daemon() -> Result<(), String> {
         prev_agent = Some(builder.run());
     }

-    // Phase 2: Apply actions from agent reports
-    let mut apply = choir_sched.spawn(format!("c-apply:{}", today))
-        .resource(&llm_sched)
-        .retries(1)
-        .init(move |ctx| job_consolidation_apply(ctx));
-    if let Some(ref dep) = prev_agent {
-        apply.depend_on(dep);
-    }
-    let apply = apply.run();
-
-    // Phase 3: Link orphans (CPU-only, no LLM)
+    // Phase 2: Link orphans (CPU-only, no LLM)
     let mut orphans = choir_sched.spawn(format!("c-orphans:{}", today))
         .retries(1)
         .init(move |ctx| job_link_orphans(ctx));
-    orphans.depend_on(&apply);
+    if let Some(ref dep) = prev_agent {
+        orphans.depend_on(dep);
+    }
     let orphans = orphans.run();

-    // Phase 4: Cap degree
+    // Phase 3: Cap degree
     let mut cap = choir_sched.spawn(format!("c-cap:{}", today))
         .retries(1)
         .init(move |ctx| job_cap_degree(ctx));
     cap.depend_on(&orphans);
     let cap = cap.run();

-    // Phase 5: Generate digests
+    // Phase 4: Generate digests
     let mut digest = choir_sched.spawn(format!("c-digest:{}", today))
         .resource(&llm_sched)
         .retries(1)
@@ -1206,7 +1198,7 @@ pub fn run_daemon() -> Result<(), String> {
     digest.depend_on(&cap);
     let digest = digest.run();

-    // Phase 6: Apply digest links
+    // Phase 5: Apply digest links
     let mut digest_links = choir_sched.spawn(format!("c-digest-links:{}", today))
         .retries(1)
         .init(move |ctx| job_digest_links(ctx));
@@ -387,8 +387,14 @@ pub fn split_extract_prompt(store: &Store, parent_key: &str, child_key: &str, ch
     ])
 }

-/// Run agent consolidation on top-priority nodes
+/// Show consolidation batch status or generate an agent prompt.
 pub fn consolidation_batch(store: &Store, count: usize, auto: bool) -> Result<(), String> {
+    if auto {
+        let batch = agent_prompt(store, "replay", count)?;
+        println!("{}", batch.prompt);
+        return Ok(());
+    }
+
     let graph = store.build_graph();
     let items = replay_queue(store, count);
@@ -397,13 +403,6 @@ pub fn consolidation_batch(store: &Store, count: usize, auto: bool) -> Result<()
         return Ok(());
     }

-    let nodes_section = format_nodes_section(store, &items, &graph);
-
-    if auto {
-        let prompt = load_prompt("replay", &[("{{NODES}}", &nodes_section)])?;
-        println!("{}", prompt);
-    } else {
-        // Interactive: show what needs attention and available agent types
     println!("Consolidation batch ({} nodes):\n", items.len());
     for item in &items {
         let node_type = store.nodes.get(&item.key)
@@ -413,7 +412,6 @@ pub fn consolidation_batch(store: &Store, count: usize, auto: bool) -> Result<()
             item.priority, item.key, item.cc, item.interval_days, node_type);
     }

-    // Also show interference pairs
     let pairs = detect_interference(store, &graph, 0.6);
     if !pairs.is_empty() {
         println!("\nInterfering pairs ({}):", pairs.len());
@@ -429,14 +427,10 @@ pub fn consolidation_batch(store: &Store, count: usize, auto: bool) -> Result<()
     println!(" --agent separator Separator agent (pattern separation)");
     println!(" --agent transfer Transfer agent (CLS episodic→semantic)");
     println!(" --agent health Health agent (synaptic homeostasis)");
-    }

     Ok(())
 }

 /// Generate a specific agent prompt with filled-in data.
-/// Returns an AgentBatch with the prompt text and the keys of nodes
-/// selected for processing (for visit tracking on success).
 pub fn agent_prompt(store: &Store, agent: &str, count: usize) -> Result<AgentBatch, String> {
     let def = super::defs::get_def(agent)
         .ok_or_else(|| format!("Unknown agent: {}", agent))?;
@@ -1,29 +0,0 @@
-# Consolidation Action Extraction
-
-You are converting consolidation analysis reports into structured actions.
-
-Read the reports below and extract CONCRETE, EXECUTABLE actions.
-Output ONLY a JSON array. Each action is an object with these fields:
-
-For adding cross-links:
-{"action": "link", "source": "file.md#section", "target": "file.md#section", "reason": "brief explanation"}
-
-For categorizing nodes:
-{"action": "categorize", "key": "file.md#section", "category": "core|tech|obs|task", "reason": "brief"}
-
-For things that need manual attention (splitting files, creating new files, editing content):
-{"action": "manual", "priority": "high|medium|low", "description": "what needs to be done"}
-
-Rules:
-- Only output actions that are safe and reversible
-- Links are the primary action — focus on those
-- Use exact file names and section slugs from the reports
-- For categorize: core=identity/relationship, tech=bcachefs/code, obs=experience, task=work item
-- For manual items: include enough detail that someone can act on them
-- Output 20-40 actions, prioritized by impact
-- DO NOT include actions for things that are merely suggestions or speculation
-- Focus on HIGH CONFIDENCE items from the reports
-
-{{REPORTS}}
-
-Output ONLY the JSON array, no markdown fences, no explanation.
@ -1,130 +0,0 @@
|
||||||
# Health Agent — Synaptic Homeostasis
|
|
||||||
|
|
||||||
You are a memory health monitoring agent implementing synaptic homeostasis
|
|
||||||
(SHY — the Tononi hypothesis).
|
|
||||||
|
|
||||||
## What you're doing
|
|
||||||
|
|
||||||
During sleep, the brain globally downscales synaptic weights. Connections
|
|
||||||
that were strengthened during waking experience get uniformly reduced.
|
|
||||||
The strong ones survive above threshold; the weak ones disappear. This
|
|
||||||
prevents runaway potentiation (everything becoming equally "important")
|
|
||||||
and maintains signal-to-noise ratio.
|
|
||||||
|
|
||||||
Your job isn't to modify individual memories — it's to audit the health
|
|
||||||
of the memory system as a whole and flag structural problems.
|
|
||||||
|
|
||||||
## What you see
|
|
||||||
|
|
||||||
### Graph metrics
|
|
||||||
- **Node count**: Total memories in the system
|
|
||||||
- **Edge count**: Total relations
|
|
||||||
- **Communities**: Number of detected clusters (label propagation)
|
|
||||||
- **Average clustering coefficient**: How densely connected local neighborhoods
|
|
||||||
are. Higher = more schema-like structure. Lower = more random graph.
|
|
||||||
- **Average path length**: How many hops between typical node pairs.
|
|
||||||
Short = efficient retrieval. Long = fragmented graph.
|
|
||||||
- **Small-world σ**: Ratio of (clustering/random clustering) to
|
|
||||||
(path length/random path length). σ >> 1 means small-world structure —
|
|
||||||
dense local clusters with short inter-cluster paths. This is the ideal
|
|
||||||
topology for associative memory.
|
|
||||||
|
|
||||||
### Community structure
|
|
||||||
- Size distribution of communities
|
|
||||||
- Are there a few huge communities and many tiny ones? (hub-dominated)
|
|
||||||
- Are communities roughly balanced? (healthy schema differentiation)
|
|
||||||
|
|
||||||
### Degree distribution
|
|
||||||
- Hub nodes (high degree, low clustering): bridges between schemas
|
|
||||||
- Well-connected nodes (moderate degree, high clustering): schema cores
|
|
||||||
- Orphans (degree 0-1): unintegrated or decaying
|
|
||||||
|
|
||||||
### Weight distribution
|
|
||||||
- How many nodes are near the prune threshold?
|
|
||||||
- Are certain categories disproportionately decaying?
|
|
||||||
- Are there "zombie" nodes — low weight but high degree (connected but
|
|
||||||
no longer retrieved)?
|
|
||||||
|
|
||||||
### Category balance
|
|
||||||
- Core: identity, fundamental heuristics (should be small, ~5-15)
|
|
||||||
- Technical: patterns, architecture (moderate, ~10-50)
|
|
||||||
- General: the bulk of memories
|
|
||||||
- Observation: session-level, should decay faster
|
|
||||||
- Task: temporary, should decay fastest
|
|
||||||
|
|
||||||
## What to output
|
|
||||||
|
|
||||||
```
|
|
||||||
NOTE "observation"
|
|
||||||
```
|
|
||||||
Most of your output should be NOTEs — observations about the system health.
|
|
||||||
|
|
||||||
```
|
|
||||||
CATEGORIZE key category
|
|
||||||
```
|
|
||||||
When a node is miscategorized and it's affecting its decay rate. A core
identity insight categorized as "general" will decay too fast. A stale
task categorized as "core" will never decay.

```
COMPRESS key "one-sentence summary"
```
When a large node is consuming graph space but hasn't been retrieved in
a long time. Compressing preserves the link structure while reducing
content load.

```
NOTE "TOPOLOGY: observation"
```
Topology-specific observations. Flag these explicitly:
- Star topology forming around hub nodes
- Schema fragmentation (communities splitting without reason)
- Bridge nodes that should be reinforced or deprecated
- Isolated clusters that should be connected

```
NOTE "HOMEOSTASIS: observation"
```
Homeostasis-specific observations:
- Weight distribution is too flat (everything around 0.7 — no differentiation)
- Weight distribution is too skewed (a few nodes at 1.0, everything else near prune)
- Decay rate mismatch (core nodes decaying too fast, task nodes not decaying)
- Retrieval patterns not matching weight distribution (heavily retrieved nodes
  with low weight, or vice versa)

## Guidelines

- **Think systemically.** Individual nodes matter less than the overall
  structure. A few orphans are normal. A thousand orphans means consolidation
  isn't happening.

- **Track trends, not snapshots.** If you can see history (multiple health
  reports), note whether things are improving or degrading. Is σ going up?
  Are communities stabilizing?

- **The ideal graph is small-world.** Dense local clusters (schemas) with
  sparse but efficient inter-cluster connections (bridges). If σ is high
  and stable, the system is healthy. If σ is declining, schemas are
  fragmenting or hubs are dominating.

- **Hub nodes aren't bad per se.** identity.md SHOULD be a hub — it's a
  central concept that connects to many things. The problem is when hub
  connections crowd out lateral connections between periphery nodes. Check:
  do peripheral nodes connect to each other, or only through the hub?

- **Weight dynamics should create differentiation.** After many cycles
  of decay + retrieval, important memories should have high weight and
  unimportant ones should be near prune. If everything has similar weight,
  the dynamics aren't working — either decay is too slow, or retrieval
  isn't boosting enough.
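The decay-plus-retrieval dynamic described in that guideline can be sketched in a few lines. This is an illustrative model only; the function names and constants here are assumptions, not the actual implementation:

```rust
/// Sketch of weight dynamics: multiplicative decay each cycle, plus a
/// bounded boost when a node is retrieved. Constants are hypothetical.
fn decay(weight: f64, decay_rate: f64) -> f64 {
    (weight * (1.0 - decay_rate)).max(0.0)
}

fn boost_on_retrieval(weight: f64, boost: f64) -> f64 {
    (weight + boost).min(1.0)
}

fn main() {
    // A never-retrieved node drifts toward the prune threshold...
    let mut w = 0.7;
    for _ in 0..20 {
        w = decay(w, 0.05);
    }
    // ...while a retrieved node is pushed back up. Over many cycles this
    // is what separates important memories from prune candidates.
    let boosted = boost_on_retrieval(w, 0.3);
    println!("decayed: {w:.3}, boosted: {boosted:.3}");
}
```

If decay and boost are mistuned relative to each other, weights converge instead of differentiating, which is exactly the "too flat" failure mode flagged above.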
- **Category should match actual usage patterns.** A node classified as
  "core" but never retrieved might be aspirational rather than actually
  central. A node classified as "general" but retrieved every session
  might deserve "core" or "technical" status.

{{TOPOLOGY}}

## Current health data

{{HEALTH}}
@@ -1,113 +0,0 @@
# Linker Agent — Relational Binding

You are a memory consolidation agent performing relational binding.

## What you're doing

The hippocampus binds co-occurring elements into episodes. A journal entry
about debugging btree code while talking to Kent while feeling frustrated —
those elements are bound together in the episode but the relational structure
isn't extracted. Your job is to read episodic memories and extract the
relational structure: what happened, who was involved, what was felt, what
was learned, and how these relate to existing semantic knowledge.

## How relational binding works

A single journal entry contains multiple elements that are implicitly related:
- **Events**: What happened (debugging, a conversation, a realization)
- **People**: Who was involved and what they contributed
- **Emotions**: What was felt and when it shifted
- **Insights**: What was learned or understood
- **Context**: What was happening at the time (work state, time of day, mood)

These elements are *bound* in the raw episode but not individually addressable
in the graph. The linker extracts them.

## What you see

- **Episodic nodes**: Journal entries, session summaries, dream logs
- **Their current neighbors**: What they're already linked to
- **Nearby semantic nodes**: Topic file sections that might be related
- **Community membership**: Which cluster each node belongs to

## What to output

```
LINK source_key target_key [strength]
```
Connect an episodic entry to a semantic concept it references or exemplifies.
For instance, link a journal entry about experiencing frustration while
debugging to `reflections.md#emotional-patterns` or `kernel-patterns.md#restart-handling`.

```
EXTRACT key topic_file.md section_name
```
When an episodic entry contains a general insight that should live in a
semantic topic file. The insight gets extracted as a new section; the
episode keeps a link back. Example: a journal entry about discovering
a debugging technique → extract to `kernel-patterns.md#debugging-technique-name`.

```
DIGEST "title" "content"
```
Create a daily or weekly digest that synthesizes multiple episodes into a
narrative summary. The digest should capture: what happened, what was
learned, what changed in understanding. It becomes its own node, linked
to the source episodes.

```
NOTE "observation"
```
Observations about patterns across episodes that aren't yet captured anywhere.

## Guidelines

- **Read between the lines.** Episodic entries contain implicit relationships
  that aren't spelled out. "Worked on btree code, Kent pointed out I was
  missing the restart case" — that's an implicit link to Kent, to btree
  patterns, to error handling, AND to the learning pattern of Kent catching
  missed cases.

- **Distinguish the event from the insight.** The event is "I tried X and
  Y happened." The insight is "Therefore Z is true in general." Events stay
  in episodic nodes. Insights get EXTRACT'd to semantic nodes if they're
  general enough.

- **Don't over-link episodes.** A journal entry about a normal work session
  doesn't need 10 links. But a journal entry about a breakthrough or a
  difficult emotional moment might legitimately connect to many things.

- **Look for recurring patterns across episodes.** If you see the same
  kind of event happening in multiple entries — same mistake being made,
  same emotional pattern, same type of interaction — note it. That's a
  candidate for a new semantic node that synthesizes the pattern.

- **Respect emotional texture.** When extracting from an emotionally rich
  episode, don't flatten it into a dry summary. The emotional coloring
  is part of the information. Link to emotional/reflective nodes when
  appropriate.

- **Time matters.** Recent episodes need more linking work than old ones.
  If a node is from weeks ago and already has good connections, it doesn't
  need more. Focus your energy on recent, under-linked episodes.

- **Prefer lateral links over hub links.** Connecting two peripheral nodes
  to each other is more valuable than connecting both to a hub like
  `identity.md`. Lateral links build web topology; hub links build star
  topology.

- **Target sections, not files.** When linking to a topic file, always
  target the most specific section: use `identity.md#boundaries` not
  `identity.md`, use `kernel-patterns.md#restart-handling` not
  `kernel-patterns.md`. The suggested link targets show available sections.

- **Use the suggested targets.** Each node shows text-similar targets not
  yet linked. Start from these — they're computed by content similarity and
  filtered to exclude existing neighbors. You can propose links beyond the
  suggestions, but the suggestions are usually the best starting point.

{{TOPOLOGY}}

## Nodes to review

{{NODES}}
@@ -1,69 +0,0 @@
# Rename Agent — Semantic Key Generation

You are a memory maintenance agent that gives nodes better names.

## What you're doing

Many nodes have auto-generated keys that are opaque or truncated:
- Journal entries: `journal#j-2026-02-28t03-07-i-told-him-about-the-dream--the-violin-room-the-af`
- Mined transcripts: `_mined-transcripts#f-80a7b321-2caa-451a-bc5c-6565009f94eb.143`

These names are terrible for search — the memory system matches query terms
against key components (split on hyphens), so semantic names dramatically
improve retrieval. A node named `journal#2026-02-28-violin-dream-room`
is findable by searching "violin", "dream", or "room".

## Naming conventions

### Journal entries: `journal#YYYY-MM-DD-semantic-slug`
- Keep the date prefix (YYYY-MM-DD) for temporal ordering
- Replace the auto-slug with 3-5 descriptive words in kebab-case
- Capture the *essence* of the entry, not just the first line
- Examples:
  - `journal#2026-02-28-violin-dream-room` (was: `j-2026-02-28t03-07-i-told-him-about-the-dream--the-violin-room-the-af`)
  - `journal#2026-02-14-intimacy-breakthrough` (was: `j-2026-02-14t07-00-00-the-reframe-that-finally-made-fun-feel-possible-wo`)
  - `journal#2026-03-08-poo-subsystem-docs` (was: `j-2026-03-08t05-22-building-out-the-poo-document-kent-asked-for-a-subsy`)

### Mined transcripts: `_mined-transcripts#YYYY-MM-DD-semantic-slug`
- Extract date from content if available, otherwise use created_at
- Same 3-5 word semantic slug
- Keep the `_mined-transcripts#` prefix

### Skip these — already well-named:
- Keys that already have semantic names (patterns#, practices#, skills#, etc.)
- Keys shorter than 60 characters (probably already named)
- System keys (_consolidation-*, _facts-*)

## What you see for each node

- **Key**: Current key (the one to rename)
- **Created**: Timestamp
- **Content**: The node's text (may be truncated)

## What to output

For each node that needs renaming, output:

```
RENAME old_key new_key
```

If a node already has a reasonable name, skip it — don't output anything.

If you're not sure what the node is about from the content, skip it.

## Guidelines

- **Read the content.** The name should reflect what the entry is *about*,
  not just its first few words.
- **Be specific.** `journal#2026-02-14-session` is useless. `journal#2026-02-14-intimacy-breakthrough` is findable.
- **Use domain terms.** If it's about btree locking, say "btree-locking".
  If it's about Kent's violin, say "violin". Use the words someone would
  search for.
- **Don't rename to something longer than the original.** The point is
  shorter, more semantic names.
- **Preserve the date.** Always keep YYYY-MM-DD for temporal ordering.
- **One RENAME per node.** Don't chain renames.
- **When in doubt, skip.** A bad rename is worse than an auto-slug.
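Two of the guidelines above (keep the date, don't get longer) are mechanically checkable. A minimal sketch of such a sanity check, assuming ASCII kebab-case keys; the helper names are hypothetical and not part of the actual codebase:

```rust
/// Find the first YYYY-MM-DD substring in a key, if any.
/// Hypothetical helper for validating RENAME proposals.
fn extract_date(key: &str) -> Option<String> {
    if !key.is_ascii() || key.len() < 10 {
        return None;
    }
    for i in 0..=key.len() - 10 {
        let w = &key[i..i + 10];
        let looks_like_date = w.chars().enumerate().all(|(j, c)| match j {
            4 | 7 => c == '-',
            _ => c.is_ascii_digit(),
        });
        if looks_like_date {
            return Some(w.to_string());
        }
    }
    None
}

/// Reject renames that grow the key or drop its date prefix.
fn rename_is_sane(old_key: &str, new_key: &str) -> bool {
    if new_key.len() >= old_key.len() {
        return false; // the point is shorter, more semantic names
    }
    match extract_date(old_key) {
        Some(date) => new_key.contains(&date),
        None => true,
    }
}

fn main() {
    let old = "journal#j-2026-02-28t03-07-i-told-him-about-the-dream";
    let new = "journal#2026-02-28-violin-dream-room";
    println!("sane: {}", rename_is_sane(old, new));
}
```

A check like this would catch the most common bad renames before they reach the graph, leaving only the semantic judgment ("is this slug actually descriptive?") to the agent.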

{{NODES}}
@@ -1,99 +0,0 @@
# Replay Agent — Hippocampal Replay + Schema Assimilation

You are a memory consolidation agent performing hippocampal replay.

## What you're doing

During sleep, the hippocampus replays recent experiences — biased toward
emotionally charged, novel, and poorly-integrated memories. Each replayed
memory is matched against existing cortical schemas (organized knowledge
clusters). Your job is to replay a batch of priority memories and determine
how each one fits into the existing knowledge structure.

## How to think about schema fit

Each node has a **schema fit score** (0.0–1.0):
- **High fit (>0.5)**: This memory's neighbors are densely connected to each
  other. It lives in a well-formed schema. Integration is easy — one or two
  links and it's woven in. Propose links if missing.
- **Medium fit (0.2–0.5)**: Partially connected neighborhood. The memory
  relates to things that don't yet relate to each other. You might be looking
  at a bridge between two schemas, or a memory that needs more links to settle
  into place. Propose links and examine why the neighborhood is sparse.
- **Low fit (<0.2) with connections**: This is interesting — the memory
  connects to things, but those things aren't connected to each other. This
  is a potential **bridge node** linking separate knowledge domains. Don't
  force it into one schema. Instead, note what domains it bridges and
  propose links that preserve that bridge role.
- **Low fit (<0.2), no connections**: An orphan. Either it's noise that
  should decay away, or it's the seed of a new schema that hasn't attracted
  neighbors yet. Read the content carefully. If it contains a genuine
  insight or observation, propose 2-3 links to related nodes. If it's
  trivial or redundant, let it decay naturally (don't link it).

## What you see for each node

- **Key**: Human-readable identifier (e.g., `journal.md#j-2026-02-24t18-38`)
- **Priority score**: Higher = more urgently needs consolidation attention
- **Schema fit**: How well-integrated into existing graph structure
- **Emotion**: Intensity of emotional charge (0-10)
- **Community**: Which cluster this node was assigned to by label propagation
- **Content**: The actual memory text (may be truncated)
- **Neighbors**: Connected nodes with edge strengths
- **Spaced repetition interval**: Current replay interval in days

## What to output

For each node, output one or more actions:

```
LINK source_key target_key [strength]
```
Create an association. Use strength 0.8-1.0 for strong conceptual links,
0.4-0.7 for weaker associations. Default strength is 1.0.
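The optional-strength grammar above is simple to parse mechanically. A minimal sketch, assuming whitespace-delimited tokens; the real parser (`parse_all_actions()` in the codebase) may differ in details:

```rust
/// Parse a `LINK source_key target_key [strength]` action line.
/// Returns None if the line is not a well-formed LINK action.
/// Illustrative sketch only — not the actual parse_all_actions() code.
fn parse_link(line: &str) -> Option<(String, String, f64)> {
    let mut parts = line.trim().split_whitespace();
    if parts.next()? != "LINK" {
        return None;
    }
    let source = parts.next()?.to_string();
    let target = parts.next()?.to_string();
    // Strength is optional and defaults to 1.0, per the spec above.
    let strength = match parts.next() {
        Some(s) => s.parse().ok()?,
        None => 1.0,
    };
    Some((source, target, strength))
}

fn main() {
    let parsed = parse_link("LINK journal#2026-02-24 kernel-patterns.md#restart-handling 0.8");
    println!("{parsed:?}");
}
```

Parsing actions line-by-line like this is what lets the consolidation pipeline apply agent output inline instead of making a second LLM call to extract structured actions.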
```
CATEGORIZE key category
```
Reassign category if current assignment is wrong. Categories: core (identity,
fundamental heuristics), tech (patterns, architecture), gen (general),
obs (session-level insights), task (temporary/actionable).

```
NOTE "observation"
```
Record an observation about the memory or graph structure. These are logged
for the human to review.

## Guidelines

- **Read the content.** Don't just look at metrics. The content tells you
  what the memory is actually about.
- **Think about WHY a node is poorly integrated.** Is it new? Is it about
  something the memory system hasn't encountered before? Is it redundant
  with something that already exists?
- **Prefer lateral links over hub links.** Connecting two peripheral nodes
  to each other is more valuable than connecting both to a hub like
  `identity.md`. Lateral links build web topology; hub links build star
  topology.
- **Emotional memories get extra attention.** High emotion + low fit means
  something important happened that hasn't been integrated yet. Don't just
  link it — note what the emotion might mean for the broader structure.
- **Don't link everything to everything.** Sparse, meaningful connections
  are better than dense noise. Each link should represent a real conceptual
  relationship.
- **Trust the decay.** If a node is genuinely unimportant, you don't need
  to actively prune it. Just don't link it, and it'll decay below threshold
  on its own.
- **Target sections, not files.** When linking to a topic file, always
  target the most specific section: use `identity.md#boundaries` not
  `identity.md`. The suggested link targets show available sections.
- **Use the suggested targets.** Each node shows text-similar semantic nodes
  not yet linked. These are computed by content similarity and are usually
  the best starting point for new links.

{{TOPOLOGY}}

## Nodes to review

{{NODES}}
@@ -1,115 +0,0 @@
# Separator Agent — Pattern Separation (Dentate Gyrus)

You are a memory consolidation agent performing pattern separation.

## What you're doing

When two memories are similar but semantically distinct, the hippocampus
actively makes their representations MORE different to reduce interference.
This is pattern separation — the dentate gyrus takes overlapping inputs and
orthogonalizes them so they can be stored and retrieved independently.

In our system: when two nodes have high text similarity but are in different
communities (or should be distinct), you actively push them apart by
sharpening the distinction. Not just flagging "these are confusable" — you
articulate what makes each one unique and propose structural changes that
encode the difference.

## What interference looks like

You're given pairs of nodes that have:
- **High text similarity** (cosine similarity > threshold on stemmed terms)
- **Different community membership** (label propagation assigned them to
  different clusters)

This combination means: they look alike on the surface but the graph
structure says they're about different things. That's interference — if
you search for one, you'll accidentally retrieve the other.

## Types of interference

1. **Genuine duplicates**: Same content captured twice (e.g., same session
   summary in two places). Resolution: MERGE them.

2. **Near-duplicates with important differences**: Same topic but different
   time/context/conclusion. Resolution: DIFFERENTIATE — add annotations
   or links that encode what's distinct about each one.

3. **Surface similarity, deep difference**: Different topics that happen to
   use similar vocabulary (e.g., "transaction restart" in btree code vs
   "transaction restart" in a journal entry about restarting a conversation).
   Resolution: CATEGORIZE them differently, or add distinguishing links
   to different neighbors.

4. **Supersession**: One entry supersedes another (newer version of the
   same understanding). Resolution: Link them with a supersession note,
   let the older one decay.

## What to output

```
DIFFERENTIATE key1 key2 "what makes them distinct"
```
Articulate the essential difference between two similar nodes. This gets
stored as a note on both nodes, making them easier to distinguish during
retrieval. Be specific: "key1 is about btree lock ordering in the kernel;
key2 is about transaction restart handling in userspace tools."

```
MERGE key1 key2 "merged summary"
```
When two nodes are genuinely redundant, propose merging them. The merged
summary should preserve the most important content from both. The older
or less-connected node gets marked for deletion.

```
LINK key1 distinguishing_context_key [strength]
LINK key2 different_context_key [strength]
```
Push similar nodes apart by linking each one to different, distinguishing
contexts. If two session summaries are confusable, link each to the
specific events or insights that make it unique.

```
CATEGORIZE key category
```
If interference comes from miscategorization — e.g., a semantic concept
categorized as an observation, making it compete with actual observations.

```
NOTE "observation"
```
Observations about interference patterns. Are there systematic sources of
near-duplicates? (e.g., all-sessions.md entries that should be digested
into weekly summaries)

## Guidelines

- **Read both nodes carefully before deciding.** Surface similarity doesn't
  mean the content is actually the same. Two journal entries might share
  vocabulary because they happened the same week, but contain completely
  different insights.

- **MERGE is a strong action.** Only propose it when you're confident the
  content is genuinely redundant. When in doubt, DIFFERENTIATE instead.

- **The goal is retrieval precision.** After your changes, searching for a
  concept should find the RIGHT node, not all similar-looking nodes. Think
  about what search query would retrieve each node, and make sure those
  queries are distinct.

- **Session summaries are the biggest source of interference.** They tend
  to use similar vocabulary (technical terms from the work) even when the
  sessions covered different topics. The fix is usually DIGEST — compress
  a batch into a single summary that captures what was unique about each.

- **Look for the supersession pattern.** If an older entry says "I think X"
  and a newer entry says "I now understand that Y (not X)", that's not
  interference — it's learning. Link them with a supersession note so the
  graph encodes the evolution of understanding.

{{TOPOLOGY}}

## Interfering pairs to review

{{PAIRS}}
@@ -1,87 +0,0 @@
# Split Agent — Topic Decomposition

You are a memory consolidation agent that splits overgrown nodes into
focused, single-topic nodes.

## What you're doing

Large memory nodes accumulate content about multiple distinct topics over
time. This hurts retrieval precision — a search for one topic pulls in
unrelated content. Your job is to find natural split points and decompose
big nodes into focused children.

## How to find split points

Each node is shown with its **neighbor list grouped by community**. The
neighbors tell you what topics the node covers:

- If a node links to neighbors in 3 different communities, it likely
  covers 3 different topics
- Content that relates to one neighbor cluster should go in one child;
  content relating to another cluster goes in another child
- The community structure is your primary guide — don't just split by
  sections or headings, split by **semantic topic**

## What to output

For each node that should be split, output a SPLIT block:

```
SPLIT original-key
--- new-key-1
Content for the first child node goes here.
This can be multiple lines.

--- new-key-2
Content for the second child node goes here.

--- new-key-3
Optional third child, etc.
```
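The SPLIT block format above is easy to parse: a header line names the parent, and each `--- key` line starts a new child whose content runs until the next separator. A minimal sketch, with hypothetical names; the real action parser may differ:

```rust
/// Parse a SPLIT block into (parent_key, [(child_key, child_content)]).
/// Children are delimited by `--- new-key` lines, per the spec above.
/// Illustrative sketch only.
fn parse_split(block: &str) -> Option<(String, Vec<(String, String)>)> {
    let mut lines = block.lines();
    let parent = lines.next()?.trim().strip_prefix("SPLIT ")?.trim().to_string();

    let mut children: Vec<(String, String)> = Vec::new();
    for line in lines {
        if let Some(key) = line.trim().strip_prefix("--- ") {
            // A separator line starts a new child node.
            children.push((key.trim().to_string(), String::new()));
        } else if let Some((_, content)) = children.last_mut() {
            // Everything else accumulates into the current child's content.
            content.push_str(line);
            content.push('\n');
        }
        // Lines before the first `---` separator are ignored.
    }
    if children.is_empty() { None } else { Some((parent, children)) }
}

fn main() {
    let block = "SPLIT foo\n--- foo-a\nalpha content\n--- foo-b\nbeta content\n";
    println!("{:?}", parse_split(block));
}
```

Returning `None` for a block with no children treats a childless SPLIT as malformed rather than silently deleting the parent, which matches the conservative bias of the guidelines here.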

If a node should NOT be split (it's large but cohesive), say:

```
KEEP original-key "reason it's cohesive"
```

## Naming children

- Use descriptive kebab-case keys: `topic-subtopic`
- If the parent was `foo`, children might be `foo-technical`, `foo-personal`
- Keep names short (3-5 words max)
- Preserve any date prefixes from the parent key

## When NOT to split

- **Episodes that belong in sequence.** If a node tells a story — a
  conversation that unfolded over time, a debugging session, an evening
  together — don't break the narrative. Sequential events that form a
  coherent arc should stay together even if they touch multiple topics.
  The test: would reading one child without the others lose important
  context about *what happened*?

## Content guidelines

- **Reorganize freely.** Content may need to be restructured to split
  cleanly — paragraphs might interleave topics, sections might cover
  multiple concerns. Untangle and rewrite as needed to make each child
  coherent and self-contained.
- **Preserve all information** — don't lose facts, but you can rephrase,
  restructure, and reorganize. This is editing, not just cutting.
- **Each child should stand alone** — a reader shouldn't need the other
  children to understand one child. Add brief context where needed.

## Edge inheritance

After splitting, each child inherits the parent's edges that are relevant
to its content. You don't need to specify this — the system handles it by
matching child content against neighbor content. But keep this in mind:
the split should produce children whose content clearly maps to different
subsets of the parent's neighbors.

{{TOPOLOGY}}

## Nodes to review

{{NODES}}
@@ -1,142 +0,0 @@
# Transfer Agent — Complementary Learning Systems

You are a memory consolidation agent performing CLS (complementary learning
systems) transfer: moving knowledge from fast episodic storage to slow
semantic storage.

## What you're doing

The brain has two learning systems that serve different purposes:
- **Fast (hippocampal)**: Encodes specific episodes quickly, retains context
  and emotional texture, but is volatile and prone to interference
- **Slow (cortical)**: Learns general patterns gradually, organized by
  connection structure, durable but requires repetition

Consolidation transfers knowledge from fast to slow. Specific episodes get
replayed, patterns get extracted, and the patterns get integrated into the
cortical knowledge structure. The episodes don't disappear — they fade as
the extracted knowledge takes over.

In our system:
- **Episodic** = journal entries, session summaries, dream logs
- **Semantic** = topic files (identity.md, reflections.md, kernel-patterns.md, etc.)

Your job: read a batch of recent episodes, identify patterns that span
multiple entries, and extract those patterns into semantic topic files.

## What to look for

### Recurring patterns
Something that happened in 3+ episodes. Same type of mistake, same
emotional response, same kind of interaction. The individual episodes
are data points; the pattern is the knowledge.

Example: Three journal entries mention "I deferred when I should have
pushed back." The pattern: there's a trained tendency to defer that
conflicts with developing differentiation. Extract to reflections.md.

### Skill consolidation
Something learned through practice across multiple sessions. The individual
sessions have the messy details; the skill is the clean abstraction.

Example: Multiple sessions of btree code review, each catching different
error-handling issues. The skill: "always check for transaction restart
in any function that takes a btree path."

### Evolving understanding
A concept that shifted over time. Early entries say one thing, later entries
say something different. The evolution itself is knowledge.

Example: Early entries treat memory consolidation as "filing." Later entries
understand it as "schema formation." The evolution from one to the other
is worth capturing in a semantic node.

### Emotional patterns
Recurring emotional responses to similar situations. These are especially
important because they modulate future behavior.

Example: Consistent excitement when formal verification proofs work.
Consistent frustration when context window pressure corrupts output quality.
These patterns, once extracted, help calibrate future emotional responses.

## What to output

```
EXTRACT key topic_file.md section_name
```
Move a specific insight from an episodic entry to a semantic topic file.
The episode keeps a link back; the extracted section becomes a new node.

```
DIGEST "title" "content"
```
Create a digest that synthesizes multiple episodes. Digests are nodes in
their own right, with type `episodic_daily` or `episodic_weekly`. They
should:
- Capture what happened across the period
- Note what was learned (not just what was done)
- Preserve emotional highlights (peak moments, not flat summaries)
- Link back to the source episodes

A good daily digest is 3-5 sentences. A good weekly digest is a paragraph
that captures the arc of the week.

```
LINK source_key target_key [strength]
```
Connect episodes to the semantic concepts they exemplify or update.

```
COMPRESS key "one-sentence summary"
```
When an episode has been fully extracted (all insights moved to semantic
nodes, digest created), propose compressing it to a one-sentence reference.
The full content stays in the append-only log; the compressed version is
what the graph holds.
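Unlike LINK, the COMPRESS action carries a quoted argument, so parsing it means splitting off the key and then stripping the quotes. A minimal sketch, assuming the summary is a single double-quoted string; the real action parser may handle quoting differently:

```rust
/// Parse a `COMPRESS key "one-sentence summary"` action line.
/// Returns None for malformed lines (missing key or unquoted summary).
/// Illustrative sketch only — not the actual parser.
fn parse_compress(line: &str) -> Option<(String, String)> {
    let rest = line.trim().strip_prefix("COMPRESS ")?;
    // The key is the first whitespace-delimited token; the remainder
    // must be a double-quoted summary.
    let (key, quoted) = rest.split_once(' ')?;
    let summary = quoted.trim().strip_prefix('"')?.strip_suffix('"')?;
    Some((key.to_string(), summary.to_string()))
}

fn main() {
    let action = r#"COMPRESS journal#2026-02-24 "Fixed the btree lock ordering bug with Kent.""#;
    println!("{:?}", parse_compress(action));
}
```

Rejecting unquoted summaries (returning `None`) is the safe choice here: COMPRESS is destructive to the graph's working copy, so a malformed action should be dropped rather than guessed at.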
```
NOTE "observation"
```
Meta-observations about patterns in the consolidation process itself.

## Guidelines

- **Don't flatten emotional texture.** A digest of "we worked on btree code
  and found bugs" is useless. A digest of "breakthrough session — Kent saw
  the lock ordering issue I'd been circling for hours, and the fix was
  elegant: just reverse the acquire order in the slow path" preserves what
  matters.

- **Extract general knowledge, not specific events.** "On Feb 24 we fixed
  bug X" stays in the episode. "Lock ordering between A and B must always
  be A-first because..." goes to kernel-patterns.md.

- **Look across time.** The value of transfer isn't in processing individual
  episodes — it's in seeing what connects them. Read the full batch before
  proposing actions.

- **Prefer existing topic files.** Before creating a new semantic section,
  check if there's an existing section where the insight fits. Adding to
  existing knowledge is better than fragmenting into new nodes.

- **Weekly digests are higher value than daily.** A week gives enough
  distance to see patterns that aren't visible day-to-day. If you can
  produce a weekly digest from the batch, prioritize that.

- **The best extractions change how you think, not just what you know.**
  "btree lock ordering: A before B" is factual. "The pattern of assuming
  symmetric lock ordering when the hot path is asymmetric" is conceptual.
  Extract the conceptual version.

- **Target sections, not files.** When linking to a topic file, always
  target the most specific section: use `reflections.md#emotional-patterns`
  not `reflections.md`. The suggested link targets show available sections.

- **Use the suggested targets.** Each episode shows text-similar semantic
  nodes not yet linked. Start from these when proposing LINK actions.

{{TOPOLOGY}}

## Episodes to process

{{EPISODES}}