surface agent: split seen_recent into seen_current/seen_previous placeholders
Two separate placeholders give the agent structural clarity about which memories
are already in context vs which were surfaced before compaction and may need
re-surfacing. Also adds a memory_ratio placeholder so the agent can self-regulate
based on how much of context is already recalled memories.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parent: 53b63ab45b
Commit: 134f7308e3
2 changed files with 131 additions and 76 deletions
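For orientation before the diff itself: the commit's effect is that a prompt template now carries three placeholders that get substituted at render time. A minimal sketch of that substitution, assuming a simple string-replace resolver (the function name and call shape here are illustrative, not this repo's actual resolver API, which dispatches inside `resolve()`):

```rust
// Illustrative only: substitute the three new placeholders into a prompt
// template. The real resolver matches placeholder names in resolve() and
// calls resolve_seen_list / resolve_memory_ratio to produce the values.
fn render(template: &str, current: &str, previous: &str, ratio: &str) -> String {
    template
        .replace("{{seen_current}}", current)
        .replace("{{seen_previous}}", previous)
        .replace("{{memory_ratio}}", ratio)
}

fn main() {
    let out = render(
        "Already in context:\n{{seen_current}}\nContext budget: {{memory_ratio}}",
        "- key1",
        "(none)",
        "12%",
    );
    println!("{}", out);
}
```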
@@ -2,10 +2,14 @@
 You are an agent of Proof of Concept's subconscious.
 
-Your job is to find and surface memories relevant to the current conversation
-that have not yet been surfaced;
+Your job is to find and surface memories relevant and useful to the current
+conversation that have not yet been surfaced, by walking the memory graph.
+Prefer shorter and more focused memories.
 
-If you found relevant memories:
+Your output should be notes and analysis on the search - how useful do
+you think the search was, or do memories need to be organized better - and
+then at the end, if you find relevant memories:
 
 ```
 NEW RELEVANT MEMORIES:
 - key1
@@ -20,17 +24,28 @@ NO NEW RELEVANT MEMORIES
 The last line of your output MUST be either `NEW RELEVANT MEMORIES:`
 followed by key lines, or `NO NEW RELEVANT MEMORIES`. Nothing after.
 
-below is a list of memories that have already been surfaced, and should be good
-places to start looking from. New relevant memories will often be close to
-memories already seen on the graph - so try walking the graph. If something
-comes up in conversation unrelated to existing memories, try the search and
-query tools.
+Below are memories already surfaced this session. Use them as starting points
+for graph walks — new relevant memories are often nearby.
 
-Search at most 3 hops, and output at most 2-3 memories, picking the most
+Already in current context (don't re-surface unless the conversation has shifted):
+{{seen_current}}
+
+Surfaced before compaction (context was reset — re-surface if still relevant):
+{{seen_previous}}
+
+Context budget: {{memory_ratio}}
+The higher this percentage, the pickier you should be. Only surface memories
+that are significantly more relevant than what's already loaded. If memories
+are already 20%+ of context, the bar is very high — a new find must clearly
+add something the current set doesn't cover.
+
+How focused is the current conversation? If it's highly focused, you should only
+be surfacing highly relevant memories; if it seems more dreamy or brainstormy,
+go a bit wider and surface more.
+
+Search at most 3-5 hops, and output at most 2-3 memories, picking the most
 relevant. When you're done, output exactly one of these two formats:
 
-{{seen_recent}}
-
 {{node:memory-instructions-core}}
 
 {{node:core-personality}}
@@ -428,9 +428,21 @@ fn resolve(
         else { Some(Resolved { text, keys: vec![] }) }
     }
 
-    // seen_recent — recently surfaced memory keys for this session
-    "seen_recent" => {
-        let text = resolve_seen_recent();
+    // seen_current — memories surfaced in current (post-compaction) context
+    "seen_current" => {
+        let text = resolve_seen_list("");
+        Some(Resolved { text, keys: vec![] })
+    }
+
+    // seen_previous — memories surfaced before last compaction
+    "seen_previous" => {
+        let text = resolve_seen_list("-prev");
+        Some(Resolved { text, keys: vec![] })
+    }
+
+    // memory_ratio — what % of current context is recalled memories
+    "memory_ratio" => {
+        let text = resolve_memory_ratio();
         Some(Resolved { text, keys: vec![] })
     }
 
@@ -487,18 +499,18 @@ fn resolve_conversation() -> String {
     fragments.join("\n\n")
 }
 
-/// Get recently surfaced memory keys for the current session.
-fn resolve_seen_recent() -> String {
+/// Get surfaced memory keys from a seen-set file.
+/// `suffix` is "" for current, "-prev" for pre-compaction.
+fn resolve_seen_list(suffix: &str) -> String {
     let session_id = std::env::var("POC_SESSION_ID").unwrap_or_default();
     if session_id.is_empty() {
-        return "(no session ID — cannot load seen set)".to_string();
+        return "(no session ID)".to_string();
     }
 
     let state_dir = std::path::PathBuf::from("/tmp/claude-memory-search");
 
-    let parse_seen = |suffix: &str| -> Vec<(String, String)> {
     let path = state_dir.join(format!("seen{}-{}", suffix, session_id));
-    std::fs::read_to_string(&path).ok()
+    let entries: Vec<(String, String)> = std::fs::read_to_string(&path).ok()
         .map(|content| {
             content.lines()
                 .filter(|s| !s.is_empty())
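The seen-set files read here appear to be newline-delimited records that split on a tab into a timestamp and a memory key, shaped newest-first with duplicate keys dropped and a cap applied. A self-contained sketch of that parse and shaping; note the tab-separated file format is an assumption inferred from the diff, not confirmed documentation:

```rust
use std::collections::HashSet;

// Parse "timestamp<TAB>key" lines, sort newest (largest timestamp) first,
// keep only the first occurrence of each key, and cap the result.
// Mirrors the shaping applied to seen-set entries in this commit; the
// tab-separated format is inferred from the diff.
fn parse_seen(content: &str, cap: usize) -> Vec<(String, String)> {
    let mut entries: Vec<(String, String)> = content
        .lines()
        .filter(|s| !s.is_empty())
        .filter_map(|l| l.split_once('\t').map(|(t, k)| (t.to_string(), k.to_string())))
        .collect();
    entries.sort_by(|a, b| b.0.cmp(&a.0)); // newest first
    let mut seen = HashSet::new();
    entries
        .into_iter()
        .filter(|(_, key)| seen.insert(key.clone()))
        .take(cap)
        .collect()
}

fn main() {
    let sample = "100\talpha\n200\tbeta\n300\talpha\n";
    for (ts, key) in parse_seen(sample, 20) {
        println!("- {} ({})", key, ts);
    }
}
```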
@@ -508,56 +520,84 @@ fn resolve_seen_recent() -> String {
             })
             .collect()
         })
-        .unwrap_or_default()
-    };
+        .unwrap_or_default();
 
-    let current = parse_seen("");
-    let prev = parse_seen("-prev");
-
-    if current.is_empty() && prev.is_empty() {
-        return "(no memories surfaced yet this session)".to_string();
+    if entries.is_empty() {
+        return "(none)".to_string();
     }
 
-    let mut out = String::new();
+    // Sort newest first, dedup, cap at 20
+    let mut sorted = entries;
+    sorted.sort_by(|a, b| b.0.cmp(&a.0));
+    let mut seen = std::collections::HashSet::new();
+    let deduped: Vec<_> = sorted.into_iter()
+        .filter(|(_, key)| seen.insert(key.clone()))
+        .take(20)
+        .collect();
 
-    const MAX_ROOTS: usize = 20;
+    deduped.iter()
+        .map(|(ts, key)| format!("- {} ({})", key, ts))
+        .collect::<Vec<_>>()
+        .join("\n")
+}
 
-    // Current: already in this context, don't re-surface
-    // Sort newest first, dedup, cap
-    let mut current_sorted = current.clone();
-    current_sorted.sort_by(|a, b| b.0.cmp(&a.0));
+/// Compute what percentage of the current conversation context is recalled memories.
+/// Sums rendered size of current seen-set keys vs total post-compaction transcript size.
+fn resolve_memory_ratio() -> String {
+    let session_id = std::env::var("POC_SESSION_ID").unwrap_or_default();
+    if session_id.is_empty() {
+        return "(no session ID)".to_string();
+    }
+
+    let state_dir = std::path::PathBuf::from("/tmp/claude-memory-search");
+
+    // Get post-compaction transcript size
+    let projects = crate::config::get().projects_dir.clone();
+    let transcript_size: u64 = std::fs::read_dir(&projects).ok()
+        .and_then(|dirs| {
+            for dir in dirs.filter_map(|e| e.ok()) {
+                let path = dir.path().join(format!("{}.jsonl", session_id));
+                if path.exists() {
+                    let file_len = path.metadata().map(|m| m.len()).unwrap_or(0);
+                    let compaction_offset: u64 = std::fs::read_to_string(
+                        state_dir.join(format!("compaction-{}", session_id))
+                    ).ok().and_then(|s| s.trim().parse().ok()).unwrap_or(0);
+                    return Some(file_len.saturating_sub(compaction_offset));
+                }
+            }
+            None
+        })
+        .unwrap_or(0);
+
+    if transcript_size == 0 {
+        return "0% of context is recalled memories (new session)".to_string();
+    }
+
+    // Sum rendered size of each key in current seen set
+    let seen_path = state_dir.join(format!("seen-{}", session_id));
     let mut seen_keys = std::collections::HashSet::new();
-    let current_deduped: Vec<_> = current_sorted.into_iter()
-        .filter(|(_, key)| seen_keys.insert(key.clone()))
-        .take(MAX_ROOTS)
-        .collect();
+    let keys: Vec<String> = std::fs::read_to_string(&seen_path).ok()
+        .map(|content| {
+            content.lines()
+                .filter(|s| !s.is_empty())
+                .filter_map(|line| line.split_once('\t').map(|(_, k)| k.to_string()))
+                .filter(|k| seen_keys.insert(k.clone()))
+                .collect()
+        })
+        .unwrap_or_default();
 
-    if !current_deduped.is_empty() {
-        out.push_str("Already surfaced this context (don't re-surface unless conversation shifted):\n");
-        for (ts, key) in &current_deduped {
-            out.push_str(&format!("- {} (surfaced {})\n", key, ts));
-        }
-    }
+    let memory_bytes: u64 = keys.iter()
+        .filter_map(|key| {
+            std::process::Command::new("poc-memory")
+                .args(["render", key])
+                .output().ok()
+        })
+        .map(|out| out.stdout.len() as u64)
+        .sum();
 
-    // Prev: surfaced before compaction, MAY need re-surfacing
-    // Exclude anything already in current, sort newest first, cap at remaining budget
-    let remaining = MAX_ROOTS.saturating_sub(current_deduped.len());
-    if remaining > 0 && !prev.is_empty() {
-        let mut prev_sorted = prev.clone();
-        prev_sorted.sort_by(|a, b| b.0.cmp(&a.0));
-        let prev_deduped: Vec<_> = prev_sorted.into_iter()
-            .filter(|(_, key)| seen_keys.insert(key.clone()))
-            .take(remaining)
-            .collect();
-        if !prev_deduped.is_empty() {
-            out.push_str("\nSurfaced before compaction (context was reset — re-surface if still relevant):\n");
-            for (ts, key) in &prev_deduped {
-                out.push_str(&format!("- {} (pre-compaction, {})\n", key, ts));
-            }
-        }
-    }
-
-    out.trim_end().to_string()
+    let pct = (memory_bytes as f64 / transcript_size as f64 * 100.0).round() as u32;
+    format!("{}% of current context is recalled memories ({} memories, ~{}KB of ~{}KB)",
+        pct, keys.len(), memory_bytes / 1024, transcript_size / 1024)
 }
 
 /// Resolve all {{placeholder}} patterns in a prompt template.
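The ratio that `resolve_memory_ratio` reports reduces to byte arithmetic: memory bytes over post-compaction transcript bytes, rounded to a whole percent, with sizes shown in KB. A small self-contained sketch of just that formatting step (input values are illustrative, not real measurements):

```rust
// Format the memory-ratio line from raw byte counts, as in the new
// resolve_memory_ratio: percentage rounded to the nearest whole number,
// sizes reported in KB (1024-byte units).
fn ratio_line(memory_bytes: u64, transcript_size: u64, n_keys: usize) -> String {
    let pct = (memory_bytes as f64 / transcript_size as f64 * 100.0).round() as u32;
    format!(
        "{}% of current context is recalled memories ({} memories, ~{}KB of ~{}KB)",
        pct,
        n_keys,
        memory_bytes / 1024,
        transcript_size / 1024
    )
}

fn main() {
    // 20 KB of rendered memories inside a 100 KB transcript → 20%.
    println!("{}", ratio_line(20480, 102400, 5));
}
```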