agent: unify prompt assembly across agent and learn paths
wire_prompt() gains a conv_range and a skip closure, and returns the assistant-message token ranges needed by the scoring path. The agent path passes 0..len + |_| false and ignores the ranges. Memory-ablation scoring and candidate generation pass a prefix range + a predicate (e.g. is_memory_node, or |n| memory_key(n) == Some(key)).

This deletes subconscious/learn.rs's build_token_ids, its private Filter enum, and the is_memory/memory_key duplicates — the walk over context sections now has one home. Adding a section or changing section order in the agent path won't silently drift away from what scoring sees.

call_score forwards multi_modal_data when the wire-form prompt contains images. generate_alternate switches to stream_completion_mm and passes the same images. Scoring on image-bearing contexts now sends wire form (1 image_pad + image data) instead of expanded image_pads with no image data; text-only contexts are bit-identical.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
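A rough sketch of the unified signature described above. Only the conv_range/skip parameters, the extra ranges return value, and the two call styles come from the commit message; every type and body detail here is an illustrative assumption (a byte-level stand-in replaces the real tokenizer, and the ranges cover every kept node rather than just assistant messages):

```rust
use std::ops::Range;

// Hypothetical stand-ins for the real context types.
struct Node { text: String, is_memory: bool }
struct Context { conversation: Vec<Node> }

impl Context {
    fn conversation(&self) -> &[Node] { &self.conversation }

    // One walk over the conversation: `skip` decides which nodes in
    // `conv_range` are elided, and the third return value carries the
    // token ranges the scoring path needs.
    fn wire_prompt(
        &self,
        conv_range: Range<usize>,
        skip: impl Fn(&Node) -> bool,
    ) -> (Vec<u32>, Vec<Vec<u8>>, Vec<Range<usize>>) {
        let mut tokens = Vec::new();
        let mut ranges = Vec::new();
        for node in &self.conversation[conv_range] {
            if skip(node) { continue; }
            let start = tokens.len();
            // Byte values stand in for real tokenizer output.
            tokens.extend(node.text.bytes().map(u32::from));
            ranges.push(start..tokens.len());
        }
        (tokens, Vec::new(), ranges)
    }
}

fn main() {
    let ctx = Context { conversation: vec![
        Node { text: "hi".to_string(), is_memory: false },
        Node { text: "mem".to_string(), is_memory: true },
    ]};
    // Agent path: full range, skip nothing, ignore the ranges.
    let (_tokens, _images, _) = ctx.wire_prompt(0..ctx.conversation().len(), |_| false);
    // Scoring path: same range, but elide memory nodes and keep the ranges.
    let (_tokens, _images, _ranges) = ctx.wire_prompt(0..ctx.conversation().len(), |n| n.is_memory);
}
```

With both callers funneled through one function, a predicate like |n| memory_key(n) == Some(key) is just another closure at the scoring call site rather than a variant of a private Filter enum.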
parent 0d1044c2e8
commit eea7de4753
3 changed files with 98 additions and 108 deletions
@@ -294,7 +294,8 @@ impl Agent {
     pub async fn assemble_prompt(&self) -> (Vec<u32>, Vec<context::WireImage>) {
         let ctx = self.context.lock().await;
         let st = self.state.lock().await;
-        let (mut tokens, images) = ctx.wire_prompt();
+        let (mut tokens, images, _) =
+            ctx.wire_prompt(0..ctx.conversation().len(), |_| false);
         tokens.push(tokenizer::IM_START);
         if st.think_native {
             tokens.extend(tokenizer::encode("assistant\n<think>\n"));