AutoAgent: persistent across runs, run() vs run_forked()

AutoAgent holds config + walked state. Backend is ephemeral per run:
- run(): standalone, global API client (oneshot CLI)
- run_forked(): forks the conscious agent, resolving prompt templates
  with the current memory_keys and walked state

Mind creates AutoAgents once at startup, takes them out for spawned
tasks, puts them back on completion (preserving walked state).

Removes {{seen_previous}}, {{input:walked}}, {{memory_ratio}} from
subconscious agent prompts. Walked keys are now a Vec<String> on
AutoAgent, resolved via {{walked}} from in-memory state.
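The mechanics are small enough to sketch inline — a simplified stand-in for the new resolution path (illustrative key names; the real resolve_prompt in autoagent.rs also handles {{seen_current}} and leaves unknown placeholders intact):

```rust
// Sketch of {{walked}} expansion from in-memory state: keys live on
// AutoAgent as a Vec<String>, so resolution is pure string substitution
// rather than reading an input file.
fn format_key_list(keys: &[String]) -> String {
    if keys.is_empty() {
        "(none)".to_string()
    } else {
        keys.iter().map(|k| format!("- {}", k)).collect::<Vec<_>>().join("\n")
    }
}

fn resolve_walked(template: &str, walked: &[String]) -> String {
    template.replace("{{walked}}", &format_key_list(walked))
}

fn main() {
    // Illustrative keys, not from the commit.
    let walked = vec!["projects/bcachefs".to_string(), "people/kent".to_string()];
    println!("{}", resolve_walked("Recently walked:\n{{walked}}", &walked));
}
```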

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
commit 94ddf7b189 (parent ba62e0a767)
Author: Kent Overstreet
Date: 2026-04-07 01:57:01 -04:00

5 changed files with 238 additions and 247 deletions


@@ -46,30 +46,24 @@ pub struct AutoStep {
/// An autonomous agent that runs a sequence of prompts with tool dispatch.
///
/// Two backends:
/// - Standalone: bare message list + global API client (oneshot CLI agents)
/// - Agent-backed: forked Agent whose ContextState is the conversation
/// (subconscious agents, KV cache sharing with conscious agent)
/// Persistent across runs — holds config, tools, steps, and inter-run
/// state (walked keys). The conversation backend is ephemeral per run.
pub struct AutoAgent {
pub name: String,
backend: Backend,
steps: Vec<AutoStep>,
next_step: usize,
pub tools: Vec<agent_tools::Tool>,
pub steps: Vec<AutoStep>,
sampling: super::api::SamplingParams,
priority: i32,
/// Memory keys the surface agent was exploring — persists between runs.
pub walked: Vec<String>,
// Observable status
pub current_phase: String,
pub turn: usize,
}
/// Per-run conversation backend — created fresh by run() or run_forked().
enum Backend {
/// Standalone: raw message list, no Agent context.
Standalone {
client: ApiClient,
tools: Vec<agent_tools::Tool>,
messages: Vec<Message>,
},
/// Backed by a forked Agent — conversation lives in ContextState.
Standalone { client: ApiClient, messages: Vec<Message> },
Forked(Agent),
}
@@ -81,13 +75,6 @@ impl Backend {
}
}
fn tools(&self) -> &[agent_tools::Tool] {
match self {
Backend::Standalone { tools, .. } => tools,
Backend::Forked(agent) => &agent.tools,
}
}
fn messages(&self) -> Vec<Message> {
match self {
Backend::Standalone { messages, .. } => messages.clone(),
@@ -113,88 +100,120 @@ impl Backend {
}
fn log(&self, text: String) {
match self {
Backend::Forked(agent) => {
if let Backend::Forked(agent) = self {
if let Some(ref log) = agent.conversation_log {
let entry = super::context::ConversationEntry::Log(text);
log.append(&entry).ok();
}
}
_ => {}
}
}
}
/// Resolve {{placeholder}} templates in subconscious agent prompts.
fn resolve_prompt(template: &str, memory_keys: &[String], walked: &[String]) -> String {
let mut result = String::with_capacity(template.len());
let mut rest = template;
while let Some(start) = rest.find("{{") {
result.push_str(&rest[..start]);
let after = &rest[start + 2..];
if let Some(end) = after.find("}}") {
let name = after[..end].trim();
let replacement = match name {
"seen_current" => format_key_list(memory_keys),
"walked" => format_key_list(walked),
_ => {
result.push_str("{{");
result.push_str(&after[..end + 2]);
rest = &after[end + 2..];
continue;
}
};
result.push_str(&replacement);
rest = &after[end + 2..];
} else {
result.push_str("{{");
rest = after;
}
}
result.push_str(rest);
result
}
fn format_key_list(keys: &[String]) -> String {
if keys.is_empty() { "(none)".to_string() }
else { keys.iter().map(|k| format!("- {}", k)).collect::<Vec<_>>().join("\n") }
}
impl AutoAgent {
/// Create from the global API client with no initial context.
/// Used by oneshot CLI agents.
pub fn new(
name: String,
tools: Vec<agent_tools::Tool>,
steps: Vec<AutoStep>,
temperature: f32,
priority: i32,
) -> Result<Self, String> {
let client = get_client()?.clone();
let phase = steps.first().map(|s| s.phase.clone()).unwrap_or_default();
Ok(Self {
name,
backend: Backend::Standalone {
client,
tools,
messages: Vec::new(),
},
steps,
next_step: 0,
sampling: super::api::SamplingParams {
temperature,
top_p: 0.95,
top_k: 20,
},
priority,
current_phase: phase,
turn: 0,
})
}
/// Fork from an existing agent for subconscious use. The forked
/// agent's ContextState holds the conversation — step prompts and
/// tool results are appended to it directly.
pub fn from_agent(
name: String,
agent: &Agent,
tools: Vec<agent_tools::Tool>,
steps: Vec<AutoStep>,
priority: i32,
) -> Self {
let forked = agent.fork(tools);
let phase = steps.first().map(|s| s.phase.clone()).unwrap_or_default();
Self {
name,
name, tools, steps,
sampling: super::api::SamplingParams {
temperature: forked.temperature,
top_p: forked.top_p,
top_k: forked.top_k,
temperature, top_p: 0.95, top_k: 20,
},
backend: Backend::Forked(forked),
steps,
next_step: 0,
priority,
current_phase: phase,
walked: Vec::new(),
current_phase: String::new(),
turn: 0,
}
}
/// Run all steps to completion. Returns the final text response.
/// Run standalone — creates a fresh message list from the global
/// API client. Used by oneshot CLI agents.
pub async fn run(
&mut self,
bail_fn: Option<&(dyn Fn(usize) -> Result<(), String> + Sync)>,
) -> Result<String, String> {
// Inject first step prompt
if self.next_step < self.steps.len() {
self.backend.push_message(
Message::user(&self.steps[self.next_step].prompt));
self.next_step += 1;
let client = get_client()?.clone();
let mut backend = Backend::Standalone {
client, messages: Vec::new(),
};
self.run_with_backend(&mut backend, bail_fn).await
}
/// Run forked from a conscious agent's context. Each call gets a
/// fresh fork for KV cache sharing. Walked state persists between runs.
///
/// `memory_keys`: keys of Memory entries in the conscious agent's
/// context, used to resolve {{seen_current}} in prompt templates.
pub async fn run_forked(
&mut self,
agent: &Agent,
memory_keys: &[String],
) -> Result<String, String> {
// Resolve prompt templates with current state
let resolved_steps: Vec<AutoStep> = self.steps.iter().map(|s| AutoStep {
prompt: resolve_prompt(&s.prompt, memory_keys, &self.walked),
phase: s.phase.clone(),
}).collect();
let orig_steps = std::mem::replace(&mut self.steps, resolved_steps);
let forked = agent.fork(self.tools.clone());
let mut backend = Backend::Forked(forked);
let result = self.run_with_backend(&mut backend, None).await;
self.steps = orig_steps; // restore templates
result
}
async fn run_with_backend(
&mut self,
backend: &mut Backend,
bail_fn: Option<&(dyn Fn(usize) -> Result<(), String> + Sync)>,
) -> Result<String, String> {
self.turn = 0;
self.current_phase = self.steps.first()
.map(|s| s.phase.clone()).unwrap_or_default();
let mut next_step = 0;
if next_step < self.steps.len() {
backend.push_message(
Message::user(&self.steps[next_step].prompt));
next_step += 1;
}
let reasoning = crate::config::get().api_reasoning.clone();
@@ -202,14 +221,16 @@ impl AutoAgent {
for _ in 0..max_turns {
self.turn += 1;
let messages = self.backend.messages();
self.backend.log(format!("turn {} ({} messages)",
let messages = backend.messages();
backend.log(format!("turn {} ({} messages)",
self.turn, messages.len()));
let (msg, usage_opt) = self.api_call_with_retry(&messages, &reasoning).await?;
let (msg, usage_opt) = Self::api_call_with_retry(
&self.name, backend, &self.tools, &messages,
&reasoning, self.sampling, self.priority).await?;
if let Some(u) = &usage_opt {
self.backend.log(format!("tokens: {} prompt + {} completion",
backend.log(format!("tokens: {} prompt + {} completion",
u.prompt_tokens, u.completion_tokens));
}
@@ -217,36 +238,34 @@ impl AutoAgent {
let has_tools = msg.tool_calls.as_ref().is_some_and(|tc| !tc.is_empty());
if has_tools {
self.dispatch_tools(&msg).await;
Self::dispatch_tools(backend, &msg).await;
continue;
}
// Text-only response — step complete
let text = msg.content_text().to_string();
if text.is_empty() && !has_content {
self.backend.log("empty response, retrying".into());
self.backend.push_message(Message::user(
backend.log("empty response, retrying".into());
backend.push_message(Message::user(
"[system] Your previous response was empty. \
Please respond with text or use a tool."
));
continue;
}
self.backend.log(format!("response: {}",
backend.log(format!("response: {}",
&text[..text.len().min(200)]));
// More steps? Check bail, inject next prompt.
if self.next_step < self.steps.len() {
if next_step < self.steps.len() {
if let Some(ref check) = bail_fn {
check(self.next_step)?;
check(next_step)?;
}
self.current_phase = self.steps[self.next_step].phase.clone();
self.backend.push_message(Message::assistant(&text));
self.backend.push_message(
Message::user(&self.steps[self.next_step].prompt));
self.next_step += 1;
self.backend.log(format!("step {}/{}",
self.next_step, self.steps.len()));
self.current_phase = self.steps[next_step].phase.clone();
backend.push_message(Message::assistant(&text));
backend.push_message(
Message::user(&self.steps[next_step].prompt));
next_step += 1;
backend.log(format!("step {}/{}",
next_step, self.steps.len()));
continue;
}
@@ -257,24 +276,23 @@ impl AutoAgent {
}
async fn api_call_with_retry(
&self,
name: &str,
backend: &Backend,
tools: &[agent_tools::Tool],
messages: &[Message],
reasoning: &str,
sampling: super::api::SamplingParams,
priority: i32,
) -> Result<(Message, Option<Usage>), String> {
let client = self.backend.client();
let tools = self.backend.tools();
let client = backend.client();
let mut last_err = None;
for attempt in 0..5 {
match client.chat_completion_stream_temp(
messages,
tools,
reasoning,
self.sampling,
Some(self.priority),
messages, tools, reasoning, sampling, Some(priority),
).await {
Ok((msg, usage)) => {
if let Some(ref e) = last_err {
self.backend.log(format!(
backend.log(format!(
"succeeded after retry (previous: {})", e));
}
return Ok((msg, usage));
@@ -287,7 +305,7 @@ impl AutoAgent {
|| err_str.contains("timed out")
|| err_str.contains("Connection refused");
if is_transient && attempt < 4 {
self.backend.log(format!(
backend.log(format!(
"transient error (attempt {}): {}, retrying",
attempt + 1, err_str));
tokio::time::sleep(std::time::Duration::from_secs(2 << attempt)).await;
@@ -295,11 +313,10 @@ impl AutoAgent {
continue;
}
let msg_bytes: usize = messages.iter()
.map(|m| m.content_text().len())
.sum();
.map(|m| m.content_text().len()).sum();
return Err(format!(
"{}: API error on turn {} (~{}KB, {} messages, {} attempts): {}",
self.name, self.turn, msg_bytes / 1024,
"{}: API error (~{}KB, {} messages, {} attempts): {}",
name, msg_bytes / 1024,
messages.len(), attempt + 1, e));
}
}
@@ -307,28 +324,28 @@ impl AutoAgent {
unreachable!()
}
async fn dispatch_tools(&mut self, msg: &Message) {
async fn dispatch_tools(backend: &mut Backend, msg: &Message) {
let mut sanitized = msg.clone();
if let Some(ref mut calls) = sanitized.tool_calls {
for call in calls {
if serde_json::from_str::<serde_json::Value>(&call.function.arguments).is_err() {
self.backend.log(format!(
backend.log(format!(
"sanitizing malformed args for {}: {}",
call.function.name, &call.function.arguments));
call.function.arguments = "{}".to_string();
}
}
}
self.backend.push_raw(sanitized);
backend.push_raw(sanitized);
for call in msg.tool_calls.as_ref().unwrap() {
self.backend.log(format!("tool: {}({})",
backend.log(format!("tool: {}({})",
call.function.name, &call.function.arguments));
let args: serde_json::Value = match serde_json::from_str(&call.function.arguments) {
Ok(v) => v,
Err(_) => {
self.backend.push_raw(Message::tool_result(
backend.push_raw(Message::tool_result(
&call.id,
"Error: your tool call had malformed JSON arguments. \
Please retry with valid JSON.",
@@ -338,9 +355,8 @@ impl AutoAgent {
};
let output = agent_tools::dispatch(&call.function.name, &args).await;
self.backend.log(format!("result: {} chars", output.len()));
self.backend.push_raw(Message::tool_result(&call.id, &output));
backend.log(format!("result: {} chars", output.len()));
backend.push_raw(Message::tool_result(&call.id, &output));
}
}
}
@@ -498,7 +514,7 @@ pub async fn call_api_with_tools(
steps,
temperature.unwrap_or(0.6),
priority,
)?;
);
auto.run(bail_fn).await
}
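One detail worth calling out from api_call_with_retry above: transient failures back off with `2 << attempt` seconds of sleep, i.e. 2, 4, 8, 16 seconds before the fifth and final attempt. A quick check of that arithmetic:

```rust
// Backoff schedule used by api_call_with_retry: retries are gated on
// attempt < 4 (the fifth failure is returned to the caller), and each
// retry sleeps 2 << attempt seconds.
fn backoff_secs(attempt: u64) -> u64 {
    2 << attempt
}

fn main() {
    let schedule: Vec<u64> = (0..4).map(backoff_secs).collect();
    println!("{:?}", schedule); // [2, 4, 8, 16]
}
```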


@@ -33,13 +33,12 @@ use crate::subconscious::{defs, learn};
/// A subconscious agent managed by Mind.
struct SubconsciousAgent {
name: String,
def: defs::AgentDef,
auto: AutoAgent,
/// Conversation bytes at last trigger.
last_trigger_bytes: u64,
/// When the agent last ran.
last_run: Option<Instant>,
/// Running task handle + AutoAgent for status.
/// Running task handle.
handle: Option<tokio::task::JoinHandle<Result<String, String>>>,
}
@@ -51,11 +50,30 @@
];
impl SubconsciousAgent {
fn new(name: &str, interval_bytes: u64) -> Option<Self> {
fn new(name: &str, _interval_bytes: u64) -> Option<Self> {
let def = defs::get_def(name)?;
let all_tools = crate::agent::tools::memory_and_journal_tools();
let tools: Vec<crate::agent::tools::Tool> = if def.tools.is_empty() {
all_tools.to_vec()
} else {
all_tools.into_iter()
.filter(|t| def.tools.iter().any(|w| w == t.name))
.collect()
};
let steps: Vec<AutoStep> = def.steps.iter().map(|s| AutoStep {
prompt: s.prompt.clone(),
phase: s.phase.clone(),
}).collect();
let auto = AutoAgent::new(
name.to_string(), tools, steps,
def.temperature.unwrap_or(0.6), def.priority,
);
Some(Self {
name: name.to_string(),
def,
auto,
last_trigger_bytes: 0,
last_run: None,
handle: None,
@@ -68,60 +86,11 @@ impl SubconsciousAgent {
fn should_trigger(&self, conversation_bytes: u64, interval: u64) -> bool {
if self.is_running() { return false; }
if interval == 0 { return true; } // trigger every time
if interval == 0 { return true; }
conversation_bytes.saturating_sub(self.last_trigger_bytes) >= interval
}
}
/// Resolve {{placeholder}} templates in subconscious agent prompts.
/// Handles: seen_current, seen_previous, input:KEY.
/// Resolve {{placeholder}} templates in subconscious agent prompts.
fn resolve_prompt(
template: &str,
memory_keys: &[String],
output_dir: &std::path::Path,
) -> String {
let mut result = String::with_capacity(template.len());
let mut rest = template;
while let Some(start) = rest.find("{{") {
result.push_str(&rest[..start]);
let after = &rest[start + 2..];
if let Some(end) = after.find("}}") {
let name = after[..end].trim();
let replacement = match name {
"seen_current" | "seen_previous" => {
if memory_keys.is_empty() {
"(none)".to_string()
} else {
memory_keys.iter()
.map(|k| format!("- {}", k))
.collect::<Vec<_>>()
.join("\n")
}
}
_ if name.starts_with("input:") => {
let key = &name[6..];
std::fs::read_to_string(output_dir.join(key))
.unwrap_or_default()
}
_ => {
// Unknown placeholder — leave as-is
result.push_str("{{");
result.push_str(&after[..end + 2]);
rest = &after[end + 2..];
continue;
}
};
result.push_str(&replacement);
rest = &after[end + 2..];
} else {
result.push_str("{{");
rest = after;
}
}
result.push_str(rest);
result
}
/// Which pane streaming text should go to.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum StreamTarget {
@@ -296,7 +265,7 @@ pub struct Mind {
pub agent: Arc<tokio::sync::Mutex<Agent>>,
pub shared: Arc<SharedMindState>,
pub config: SessionConfig,
subconscious: tokio::sync::Mutex<Vec<SubconsciousAgent>>,
subconscious: Arc<tokio::sync::Mutex<Vec<SubconsciousAgent>>>,
turn_tx: mpsc::Sender<(Result<TurnResult>, StreamTarget)>,
turn_watch: tokio::sync::watch::Sender<bool>,
bg_tx: mpsc::UnboundedSender<BgEvent>,
@@ -341,7 +310,7 @@ impl Mind {
sup.load_config();
sup.ensure_running();
Self { agent, shared, config, subconscious: tokio::sync::Mutex::new(subconscious),
Self { agent, shared, config, subconscious: Arc::new(tokio::sync::Mutex::new(subconscious)),
turn_tx, turn_watch, bg_tx,
bg_rx: std::sync::Mutex::new(Some(bg_rx)), _supervisor: sup }
}
@@ -447,24 +416,25 @@ impl Mind {
/// their output into the conscious agent's context.
async fn collect_subconscious_results(&self) {
// Collect finished handles without holding the lock across await
let finished: Vec<(String, tokio::task::JoinHandle<Result<String, String>>)> = {
let finished: Vec<(usize, tokio::task::JoinHandle<Result<String, String>>)> = {
let mut subs = self.subconscious.lock().await;
subs.iter_mut().filter_map(|sub| {
subs.iter_mut().enumerate().filter_map(|(i, sub)| {
if sub.handle.as_ref().is_some_and(|h| h.is_finished()) {
sub.last_run = Some(Instant::now());
Some((sub.name.clone(), sub.handle.take().unwrap()))
Some((i, sub.handle.take().unwrap()))
} else {
None
}
}).collect()
};
for (name, handle) in finished {
match handle.await {
Ok(Ok(_output)) => {
for (idx, handle) in finished {
let name = self.subconscious.lock().await[idx].auto.name.clone();
let output_dir = crate::store::memory_dir()
.join("agent-output").join(&name);
match handle.await {
Ok(Ok(_output)) => {
// Surfaced memories
let surface_path = output_dir.join("surface");
if let Ok(content) = std::fs::read_to_string(&surface_path) {
@@ -486,6 +456,21 @@ impl Mind {
std::fs::remove_file(&surface_path).ok();
}
// Walked keys — store for next run
let walked_path = output_dir.join("walked");
if let Ok(content) = std::fs::read_to_string(&walked_path) {
let walked: Vec<String> = content.lines()
.map(|l| l.trim().to_string())
.filter(|l| !l.is_empty())
.collect();
// Store on all subconscious agents (shared state)
let mut subs = self.subconscious.lock().await;
for sub in subs.iter_mut() {
sub.auto.walked = walked.clone();
}
std::fs::remove_file(&walked_path).ok();
}
// Reflection
let reflect_path = output_dir.join("reflection");
if let Ok(content) = std::fs::read_to_string(&reflect_path) {
@@ -511,89 +496,82 @@ impl Mind {
async fn trigger_subconscious(&self) {
if self.config.no_agents { return; }
// Estimate conversation size from the conscious agent's entries
let conversation_bytes = {
// Get conversation size + memory keys from conscious agent
let (conversation_bytes, memory_keys) = {
let ag = self.agent.lock().await;
ag.context.entries.iter()
let bytes = ag.context.entries.iter()
.filter(|e| !e.is_log() && !e.is_memory())
.map(|e| e.message().content_text().len() as u64)
.sum::<u64>()
};
// Get memory keys from conscious agent for placeholder resolution
let memory_keys: Vec<String> = {
let ag = self.agent.lock().await;
ag.context.entries.iter().filter_map(|e| {
.sum::<u64>();
let keys: Vec<String> = ag.context.entries.iter().filter_map(|e| {
if let crate::agent::context::ConversationEntry::Memory { key, .. } = e {
Some(key.clone())
} else {
None
}
}).collect()
} else { None }
}).collect();
(bytes, keys)
};
// Collect which agents to trigger (can't hold lock across await)
let to_trigger: Vec<(usize, Vec<AutoStep>, Vec<crate::agent::tools::Tool>, String, i32)> = {
// Find which agents to trigger, take their AutoAgents out
let mut to_run: Vec<(usize, AutoAgent)> = Vec::new();
{
let mut subs = self.subconscious.lock().await;
let mut result = Vec::new();
for (i, &(_name, interval)) in SUBCONSCIOUS_AGENTS.iter().enumerate() {
if i >= subs.len() { continue; }
if !subs[i].should_trigger(conversation_bytes, interval) { continue; }
subs[i].last_trigger_bytes = conversation_bytes;
let sub = &mut subs[i];
sub.last_trigger_bytes = conversation_bytes;
// The output dir for this agent — used for input: placeholders
// and the output() tool at runtime
let output_dir = crate::store::memory_dir()
.join("agent-output").join(&sub.name);
let steps: Vec<AutoStep> = sub.def.steps.iter().map(|s| {
let prompt = resolve_prompt(&s.prompt, &memory_keys, &output_dir);
AutoStep { prompt, phase: s.phase.clone() }
}).collect();
let all_tools = crate::agent::tools::memory_and_journal_tools();
let tools: Vec<crate::agent::tools::Tool> = if sub.def.tools.is_empty() {
all_tools.to_vec()
} else {
all_tools.into_iter()
.filter(|t| sub.def.tools.iter().any(|w| w == t.name))
.collect()
};
result.push((i, steps, tools, sub.name.clone(), sub.def.priority));
// Take the AutoAgent out — task owns it, returns it when done
let auto = std::mem::replace(&mut subs[i].auto,
AutoAgent::new(String::new(), vec![], vec![], 0.0, 0));
to_run.push((i, auto));
}
}
result
};
if to_trigger.is_empty() { return; }
if to_run.is_empty() { return; }
// Fork from conscious agent (one lock acquisition for all)
// Fork from conscious agent and spawn tasks
let conscious = self.agent.lock().await;
let mut spawns = Vec::new();
for (idx, steps, tools, name, priority) in to_trigger {
for (idx, mut auto) in to_run {
let output_dir = crate::store::memory_dir()
.join("agent-output").join(&name);
.join("agent-output").join(&auto.name);
std::fs::create_dir_all(&output_dir).ok();
let mut auto = AutoAgent::from_agent(
name.clone(), &conscious, tools, steps, priority);
dbglog!("[mind] triggering {}", name);
dbglog!("[mind] triggering {}", auto.name);
let handle = tokio::spawn(async move {
let forked = conscious.fork(auto.tools.clone());
let keys = memory_keys.clone();
let handle: tokio::task::JoinHandle<(AutoAgent, Result<String, String>)> =
tokio::spawn(async move {
unsafe { std::env::set_var("POC_AGENT_OUTPUT_DIR", &output_dir); }
auto.run(None).await
let result = auto.run_forked(&forked, &keys).await;
(auto, result)
});
spawns.push((idx, handle));
}
drop(conscious);
// Store handles
let mut subs = self.subconscious.lock().await;
// Store handles (type-erased — we'll extract AutoAgent on completion)
// We need to store the JoinHandle that returns (AutoAgent, Result)
// but SubconsciousAgent.handle expects JoinHandle<Result<String, String>>.
// Wrap: spawn an outer task that extracts the result and puts back the AutoAgent.
let subconscious = self.subconscious.clone();
for (idx, handle) in spawns {
let subs = subconscious.clone();
let outer = tokio::spawn(async move {
let (auto, result) = handle.await.unwrap_or_else(
|e| (AutoAgent::new(String::new(), vec![], vec![], 0.0, 0),
Err(format!("task panicked: {}", e))));
// Put the AutoAgent back
let mut locked = subs.lock().await;
if idx < locked.len() {
locked[idx].auto = auto;
}
result
});
let mut subs = self.subconscious.lock().await;
if idx < subs.len() {
subs[idx].handle = Some(handle);
subs[idx].handle = Some(outer);
}
}
}
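The take-out/put-back pattern in trigger_subconscious — std::mem::replace a throwaway placeholder into the slot, let the spawned task own the real AutoAgent, restore it on completion — can be sketched synchronously (AutoAgentLike is an illustrative stand-in, not the real type):

```rust
// Stripped-down illustration of the ownership dance: replace the slot's
// value with a cheap placeholder, let the worker own and mutate the real
// value, then write it back when the worker finishes.
#[derive(Debug, PartialEq)]
struct AutoAgentLike {
    name: String,
    walked: Vec<String>,
}

fn take(slots: &mut Vec<AutoAgentLike>, idx: usize) -> AutoAgentLike {
    std::mem::replace(
        &mut slots[idx],
        AutoAgentLike { name: String::new(), walked: vec![] }, // placeholder
    )
}

fn put_back(slots: &mut Vec<AutoAgentLike>, idx: usize, agent: AutoAgentLike) {
    slots[idx] = agent;
}

fn main() {
    let mut slots = vec![AutoAgentLike { name: "surface".into(), walked: vec![] }];
    let mut agent = take(&mut slots, 0);
    assert!(slots[0].name.is_empty());      // placeholder sits in the slot
    agent.walked.push("memory/foo".into()); // worker mutates its own copy
    put_back(&mut slots, 0, agent);
    assert_eq!(slots[0].walked, vec!["memory/foo".to_string()]);
}
```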


@@ -6,7 +6,7 @@ The full conversation is in context above — use it to understand what your
conscious self is doing and thinking about.
Nodes your subconscious recently touched (for linking, not duplicating):
{{input:walked}}
{{walked}}
**Your tools:** journal_tail, journal_new, journal_update, memory_link_add,
memory_search, memory_render, memory_used. Do NOT use memory_write — creating


@@ -18,7 +18,7 @@ The full conversation is in context above — use it to understand what your
conscious self is doing and thinking about.
Memories your surface agent was exploring:
{{input:walked}}
{{walked}}
Start from the nodes surface-observe was walking. Render one or two that
catch your attention — then ask "what does this mean?" Follow the links in


@@ -17,11 +17,8 @@ for graph walks — new relevant memories are often nearby.
Already in current context (don't re-surface unless the conversation has shifted):
{{seen_current}}
Surfaced before compaction (context was reset — re-surface if still relevant):
{{seen_previous}}
Memories you were exploring last time but hadn't surfaced yet:
{{input:walked}}
{{walked}}
How focused is the current conversation? If it's more focused, look for
useful and relevant memories. When considering relevance, don't just look for