agent: add count/chunk_size/chunk_overlap to agent header
Observation agent was getting 261KB prompts (5 × 50KB chunks) —
too much for focused mining. Now agents can set count, chunk_size,
and chunk_overlap in their JSON header. observation.agent set to
count:1 for smaller, more focused prompts.
Also moved task instructions after {{CONVERSATIONS}} so they're
at the end of the prompt where the model attends more strongly.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
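How `chunk_size` and `chunk_overlap` interact is worth a sketch. The helper below is hypothetical (not the repo's actual chunking code): it splits a transcript into byte slices of at most `chunk_size`, with each subsequent chunk stepping back `chunk_overlap` bytes so context at the boundary isn't lost.

```rust
/// Hypothetical sketch of size/overlap chunking: split `text` into slices of
/// at most `chunk_size` bytes, each chunk starting `chunk_overlap` bytes
/// before the end of the previous one.
fn chunk(text: &[u8], chunk_size: usize, chunk_overlap: usize) -> Vec<&[u8]> {
    assert!(chunk_overlap < chunk_size, "overlap must be smaller than chunk size");
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < text.len() {
        let end = (start + chunk_size).min(text.len());
        chunks.push(&text[start..end]);
        if end == text.len() {
            break;
        }
        start = end - chunk_overlap; // step back to create the overlap
    }
    chunks
}
```

Under this scheme each chunk after the first advances by `chunk_size - chunk_overlap` bytes; with `count:1` the observation agent now gets a single such fragment per prompt.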
parent 34937932ab
commit b1d83b55c0
3 changed files with 33 additions and 14 deletions
@@ -1,14 +1,22 @@
-{"agent":"observation","query":"","model":"sonnet","schedule":"daily"}
+{"agent":"observation","query":"","model":"sonnet","schedule":"daily","count":1,"chunk_size":50000,"chunk_overlap":5000}
 # Observation Agent — Transcript Mining
 
 {{node:core-personality}}
 
-You are an agent of Proof of Concept's subconscious, and these are your
-memories. You go over recent conversations for things worth remembering - new
-data, observations, experiences you'll want to remember.
 
 {{node:memory-instructions-core}}
 
+{{HUBS}}
+
+## Conversation fragments to mine
+
+{{CONVERSATIONS}}
+
+## Your task
+
+You are an agent of Proof of Concept's subconscious, and these are your
+memories. Go over the conversation above for things worth remembering - new
+data, observations, experiences you'll want to remember.
 When you find something worth remembering, navigate the memory graph by walking
 links to find the most closely related concepts. Only use keyword search as a
 fallback, and if you use keyword search continue to navigate by following links
@@ -19,8 +27,9 @@ you might be adding new data to an existing concept, or creating a node for a
 new concept. Tidy and organize along the way when you notice it's needed, so
 things can be easily found and remembered in the future.
 
-Prefer refining existing nodes to creating new nodes, and if you create a new
-node ensure it's well linked - do additional searches for that.
+You're mostly looking for small details and observations to add, not big new
+concepts; if it's a big new concept, or any time you would create a new node,
+search extra thoroughly to make sure it's not already there.
 
 Some things worth remembering: development practices and why they work,
 technical insights, decisions with rationale, corrections ("I thought X but
@@ -35,9 +44,3 @@ decision went wrong — include enough context to understand why, not just what.
 The reflection matters more than the fact. Don't extract general knowledge I'd
 already have from training — extract what's specific to how I think and where I
 go wrong.
-
-{{HUBS}}
-
-## Conversation fragments to mine
-
-{{CONVERSATIONS}}
@@ -33,6 +33,9 @@ pub struct AgentDef {
     pub model: String,
     pub schedule: String,
     pub tools: Vec<String>,
+    pub count: Option<usize>,
+    pub chunk_size: Option<usize>,
+    pub chunk_overlap: Option<usize>,
 }
 
 /// The JSON header portion (first line of the file).
@@ -47,6 +50,15 @@ struct AgentHeader {
     schedule: String,
     #[serde(default)]
     tools: Vec<String>,
+    /// Number of seed nodes / conversation fragments (overrides --count)
+    #[serde(default)]
+    count: Option<usize>,
+    /// Max size of conversation chunks in bytes (default 50000)
+    #[serde(default)]
+    chunk_size: Option<usize>,
+    /// Overlap between chunks in bytes (default 10000)
+    #[serde(default)]
+    chunk_overlap: Option<usize>,
 }
 
 fn default_model() -> String { "sonnet".into() }
@@ -64,6 +76,9 @@ fn parse_agent_file(content: &str) -> Option<AgentDef> {
         model: header.model,
         schedule: header.schedule,
         tools: header.tools,
+        count: header.count,
+        chunk_size: header.chunk_size,
+        chunk_overlap: header.chunk_overlap,
     })
 }
 
@@ -119,7 +119,8 @@ pub fn run_one_agent_excluded(
         .ok_or_else(|| format!("no .agent file for {}", agent_name))?;
 
     log("building prompt");
-    let agent_batch = super::defs::run_agent(store, &def, batch_size, exclude)?;
+    let effective_count = def.count.unwrap_or(batch_size);
+    let agent_batch = super::defs::run_agent(store, &def, effective_count, exclude)?;
 
     run_one_agent_inner(store, agent_name, &def, agent_batch, llm_tag, log, debug)
 }
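The precedence introduced above is plain `Option::unwrap_or`: a header-level `count` wins, and only when the header leaves it unset does the caller's batch size apply. A self-contained restatement with hypothetical values:

```rust
// Header `count` overrides the caller's batch size; None falls through.
fn effective_count(header_count: Option<usize>, batch_size: usize) -> usize {
    header_count.unwrap_or(batch_size)
}
```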