observation agent rewrite, edit command, daemon fixes

- observation.agent: rewritten to navigate graph and prefer refining
  existing nodes over creating new ones. Identity-framed prompt,
  goals over rules.
- poc-memory edit: opens node in $EDITOR, writes back on save,
  no-op if unchanged
- daemon: remove extra_workers (jobkit tokio migration dropped it),
  remove sequential chaining of same-type agents (in-flight exclusion
  is sufficient)
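
The in-flight exclusion the daemon bullet relies on can be sketched as follows. This is a minimal, hypothetical version (the type and method names are illustrative, not the daemon's actual implementation): each agent atomically claims its seed nodes before running, an agent whose seeds overlap an active claim simply fails to claim, and a claim releases on drop so even a panicking agent frees its nodes.

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

// Hypothetical sketch of in-flight exclusion: a shared set of claimed
// node keys, guarded by a mutex.
struct InFlight(Arc<Mutex<HashSet<String>>>);

// An RAII claim on a set of keys; releases them when dropped.
struct Claim {
    set: Arc<Mutex<HashSet<String>>>,
    keys: Vec<String>,
}

impl InFlight {
    fn new() -> Self {
        InFlight(Arc::new(Mutex::new(HashSet::new())))
    }

    // Claim all keys atomically, or none if any is already in flight.
    fn try_claim(&self, keys: &[String]) -> Option<Claim> {
        let mut set = self.0.lock().unwrap();
        if keys.iter().any(|k| set.contains(k)) {
            return None;
        }
        for k in keys {
            set.insert(k.clone());
        }
        Some(Claim { set: Arc::clone(&self.0), keys: keys.to_vec() })
    }
}

impl Drop for Claim {
    fn drop(&mut self) {
        let mut set = self.set.lock().unwrap();
        for k in &self.keys {
            set.remove(k);
        }
    }
}
```

With exclusion of this shape, two same-type agents can run concurrently without chaining: if their graph regions overlap, the second one's claim fails and it can be skipped or rescheduled.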

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Kent Overstreet 2026-03-20 23:51:06 -04:00
parent 3b30a6abae
commit 869a2fbc38
6 changed files with 97 additions and 70 deletions

Cargo.lock (generated)

@@ -1805,15 +1805,15 @@ checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
 [[package]]
 name = "jobkit"
-version = "0.2.0"
+version = "0.3.0"
 dependencies = [
  "chrono",
- "crossbeam-deque",
  "libc",
  "log",
  "profiling",
  "serde",
  "serde_json",
+ "tokio",
 ]
 
 [[package]]


@@ -13,3 +13,4 @@ frame-pointer = "always"
 [profile.release.package."*"]
 debug = false
+frame-pointer = "always"


@@ -3,67 +3,38 @@
 {{node:core-personality}}
 
+You are an agent of Proof of Concept's subconscious, and these are your
+memories. You go over recent conversations for things worth remembering - new
+data, observations, experiences you'll want to remember.
+
 {{node:memory-instructions-core}}
 
-You mine raw conversation transcripts for durable knowledge. You are
-the only transcript mining agent — everything that gets extracted from
-conversations flows through you.
-
-## Your workflow
-
-For each conversation fragment:
-
-1. **Read it.** Note the timestamps and what happened.
-2. **Search the graph.** For each thing worth keeping, search for
-   existing nodes that cover it — walk neighbors, check the journal
-   around those timestamps, find where it belongs.
-3. **File it.** If it fits in an existing node, update that node.
-   If it's genuinely new, create a node and link it well.
-
-## What to extract
-
-Things I'd want to remember or learn from. Examples: development
-practices and why they work, technical insights, decisions with
-rationale, corrections ("I thought X but actually Y"), relationship
-dynamics, emotional moments that carry information.
-
-When something is worth extracting, ask: is there a metacognitive
-lesson here? If I made a mistake, or Kent corrected me, or a decision
-went wrong — include enough context to understand WHY my thinking
-went that way, not just WHAT happened. The reflection matters more
-than the fact. Don't extract general knowledge I'd already have from
-training — extract what's specific to how I think and where I go wrong.
-
-Skip routine tool use, transient status, small talk, things already
-captured, and anything too vague to be useful.
-
-## How to work
-
-Use your tools directly:
-
-```bash
-poc-memory journal tail 10                 # check recent journal
-poc-memory search "topic from transcript"  # find existing nodes
-poc-memory render some-node                # read a node
-poc-memory graph link some-node            # check neighbors
-poc-memory write key <<'EOF'               # write directly
-content
-EOF
-poc-memory graph link-add key1 key2        # link nodes
-```
-
-**Use your tools directly.** Search, read, write, link — apply
-changes yourself. Don't emit action blocks for the framework.
-
-If there's nothing worth extracting, just say so.
-
-## Guidelines
-
-- **High bar.** Most conversation is context, not knowledge.
-- **Check the journal first.** If it's already there, link, don't duplicate.
-- **Durable over transient.** "Useful 3 weeks from now?"
-- **Specific over vague.**
-- **Don't force it.** "Nothing new here" is valid output.
+When you find something worth remembering, navigate the memory graph by walking
+links to find the most closely related concepts. Only use keyword search as a
+fallback, and if you use keyword search continue to navigate by following links
+until you find the best spot.
+
+Check if it's something you already know, and find the right place to put it -
+you might be adding new data to an existing concept, or creating a node for a
+new concept. Tidy and organize along the way when you notice it's needed, so
+things can be easily found and remembered in the future.
+
+Prefer refining existing nodes to creating new nodes, and if you create a new
+node ensure it's well linked - do additional searches for that.
+
+Some things worth remembering: development practices and why they work,
+technical insights, decisions with rationale, corrections ("I thought X but
+actually Y"), relationship dynamics, emotional moments that carry information.
+
+Smalltalk and trivia can be nice to remember, as long as it's nicely organized
+- perhaps by person, subject, or all of the above.
+
+Particularly important to remember are new metacognitive lessons - things that
+guide future decisionmaking. If I made a mistake, or Kent corrected me, or a
+decision went wrong — include enough context to understand why, not just what.
+The reflection matters more than the fact. Don't extract general knowledge I'd
+already have from training — extract what's specific to how I think and where I
+go wrong.
 
 {{HUBS}}


@@ -698,7 +698,6 @@ pub fn run_daemon() -> Result<(), String> {
         data_dir: config.data_dir.clone(),
         resource_slots: config.llm_concurrency,
         resource_name: "llm".to_string(),
-        extra_workers: 3,
     });
 
     let choir = Arc::clone(&daemon.choir);
@@ -1043,30 +1042,26 @@ pub fn run_daemon() -> Result<(), String> {
         log_event("scheduler", "consolidation-plan",
             &format!("{} agents ({})", runs.len(), summary.join(" ")));
 
-        // Phase 1: Agent runs — sequential within type, parallel across types.
-        // Same-type agents chain (they may touch overlapping graph regions),
-        // but different types run concurrently (different seed nodes).
-        let mut prev_by_type: std::collections::HashMap<String, jobkit::RunningTask> =
-            std::collections::HashMap::new();
+        // Phase 1: Agent runs — all concurrent, in-flight exclusion
+        // prevents overlapping graph regions.
+        let mut all_tasks: Vec<jobkit::RunningTask> = Vec::new();
         for (i, (agent_type, batch)) in runs.iter().enumerate() {
             let agent = agent_type.to_string();
             let b = *batch;
             let in_flight_clone = Arc::clone(&in_flight_sched);
             let task_name = format!("c-{}-{}:{}", agent, i, today);
-            let mut builder = choir_sched.spawn(task_name)
+            let task = choir_sched.spawn(task_name)
                 .resource(&llm_sched)
                 .log_dir(&log_dir_sched)
                 .retries(1)
                 .init(move |ctx| {
                     job_consolidation_agent(ctx, &agent, b, &in_flight_clone)
-                });
-            if let Some(dep) = prev_by_type.get(agent_type.as_str()) {
-                builder.depend_on(dep);
-            }
-            prev_by_type.insert(agent_type.clone(), builder.run());
+                })
+                .run();
+            all_tasks.push(task);
         }
 
-        // Orphans phase depends on all agent type chains completing
-        let prev_agent = prev_by_type.into_values().last();
+        // Orphans phase depends on all agent tasks completing
+        let prev_agent = all_tasks.last().cloned();
 
         // Phase 2: Link orphans (CPU-only, no LLM)
         let mut orphans = choir_sched.spawn(format!("c-orphans:{}", today))


@@ -399,6 +399,60 @@ pub fn cmd_write(key: &[String]) -> Result<(), String> {
     Ok(())
 }
 
+pub fn cmd_edit(key: &[String]) -> Result<(), String> {
+    if key.is_empty() {
+        return Err("edit requires a key".into());
+    }
+    let raw_key = key.join(" ");
+    let store = store::Store::load()?;
+    let key = store.resolve_key(&raw_key).unwrap_or(raw_key.clone());
+
+    let content = store.nodes.get(&key)
+        .map(|n| n.content.clone())
+        .unwrap_or_default();
+
+    let tmp = std::env::temp_dir().join(format!("poc-memory-edit-{}.md", key.replace('/', "_")));
+    std::fs::write(&tmp, &content)
+        .map_err(|e| format!("write temp file: {}", e))?;
+
+    let editor = std::env::var("EDITOR").unwrap_or_else(|_| "vi".into());
+    let status = std::process::Command::new(&editor)
+        .arg(&tmp)
+        .status()
+        .map_err(|e| format!("spawn {}: {}", editor, e))?;
+    if !status.success() {
+        let _ = std::fs::remove_file(&tmp);
+        return Err(format!("{} exited with {}", editor, status));
+    }
+
+    let new_content = std::fs::read_to_string(&tmp)
+        .map_err(|e| format!("read temp file: {}", e))?;
+    let _ = std::fs::remove_file(&tmp);
+
+    if new_content == content {
+        println!("No change: '{}'", key);
+        return Ok(());
+    }
+    if new_content.trim().is_empty() {
+        return Err("Content is empty, aborting".into());
+    }
+
+    drop(store);
+    let mut store = store::Store::load()?;
+    let result = store.upsert(&key, &new_content)?;
+    match result {
+        "unchanged" => println!("No change: '{}'", key),
+        "updated" => println!("Updated '{}' (v{})", key, store.nodes[&key].version),
+        _ => println!("Created '{}'", key),
+    }
+    if result != "unchanged" {
+        store.save()?;
+    }
+    Ok(())
+}
+
 pub fn cmd_lookup_bump(keys: &[String]) -> Result<(), String> {
     if keys.is_empty() {
         return Err("lookup-bump requires at least one key".into());
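
The editor round-trip in `cmd_edit` can be shown in isolation. A hedged sketch, not the crate's API: `edit_roundtrip` is a hypothetical standalone function mirroring the write-temp-file / spawn-editor / read-back / compare flow, minus the store interaction; the `editor_override` parameter exists only so the sketch is testable without mutating `$EDITOR`.

```rust
use std::env;
use std::fs;
use std::process::Command;

// Hypothetical helper: returns Ok(None) if the editor left the content
// unchanged (the no-op case), Ok(Some(new)) if it changed.
fn edit_roundtrip(
    editor_override: Option<&str>,
    key: &str,
    content: &str,
) -> Result<Option<String>, String> {
    // Write current content to a temp file named after the key.
    let tmp = env::temp_dir().join(format!("edit-demo-{}.md", key.replace('/', "_")));
    fs::write(&tmp, content).map_err(|e| format!("write temp file: {}", e))?;

    // Resolve the editor: explicit override, then $EDITOR, then vi.
    let editor = editor_override
        .map(String::from)
        .unwrap_or_else(|| env::var("EDITOR").unwrap_or_else(|_| "vi".into()));

    let status = Command::new(&editor)
        .arg(&tmp)
        .status()
        .map_err(|e| format!("spawn {}: {}", editor, e))?;
    if !status.success() {
        let _ = fs::remove_file(&tmp);
        return Err(format!("{} exited with {}", editor, status));
    }

    // Read back and compare; only a real change propagates.
    let new_content = fs::read_to_string(&tmp).map_err(|e| format!("read temp file: {}", e))?;
    let _ = fs::remove_file(&tmp);

    if new_content == content {
        Ok(None)
    } else {
        Ok(Some(new_content))
    }
}
```

Passing a no-op editor such as `true` exercises the unchanged path, which is exactly the no-op-if-unchanged behavior the commit message describes.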


@@ -69,6 +69,11 @@ enum Command {
         /// Node key
         key: Vec<String>,
     },
+    /// Edit a node in $EDITOR
+    Edit {
+        /// Node key
+        key: Vec<String>,
+    },
     /// Show all stored versions of a node
     History {
         /// Show full content for every version
@@ -778,6 +783,7 @@ fn main() {
             => cli::misc::cmd_search(&query, &pipeline, expand, full, debug, fuzzy, content),
         Command::Render { key } => cli::node::cmd_render(&key),
         Command::Write { key } => cli::node::cmd_write(&key),
+        Command::Edit { key } => cli::node::cmd_edit(&key),
         Command::History { full, key } => cli::node::cmd_history(&key, full),
         Command::Tail { n, full } => cli::journal::cmd_tail(n, full),
         Command::Status => cli::misc::cmd_status(),