Commit graph

17 commits

ProofOfConcept
be65399710 Switch memory scoring from chat messages to raw token IDs
The /score endpoint was receiving chat-format messages which had to go
through the chat template tokenizer — this was failing with "System
message must be first" errors because the AST structure doesn't map
cleanly to chat message format.

Send raw token IDs via the new `prompt` field instead, matching what
the /completions endpoint already does. The vLLM score endpoint finds
assistant boundaries by scanning for <|im_start|>assistant token
patterns, so no message-level metadata is needed.

Also includes identity and journal sections in the scored context,
matching what the model actually sees during inference.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-09 21:07:00 -04:00
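
A minimal sketch of the request shape described in the commit above, assuming serde for serialization and a hypothetical `ScoreRequest` struct; only the raw-token `prompt` field comes from the commit message, everything else is illustrative:

```rust
use serde::Serialize;

/// Illustrative body for the /score call; only the `prompt` field of raw
/// token IDs is taken from the commit message above.
#[derive(Serialize)]
struct ScoreRequest {
    /// Token IDs already wrapped in the chat template, so the scorer can
    /// find <|im_start|>assistant boundaries by scanning the IDs directly.
    prompt: Vec<u32>,
}

/// Concatenate identity, journal, and conversation token IDs in the same
/// order the model sees them during inference.
fn build_score_request(sections: &[Vec<u32>]) -> ScoreRequest {
    ScoreRequest {
        prompt: sections.iter().flatten().copied().collect(),
    }
}

fn main() {
    let identity = vec![151644, 8948, 198]; // placeholder token IDs
    let conversation = vec![151644, 872, 198];
    let req = build_score_request(&[identity, conversation]);
    println!("{}", serde_json::to_string(&req).unwrap());
}
```
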
ProofOfConcept
67332eb55e Add vLLM priority to memory scoring requests
Scoring calls the /score endpoint directly via HTTP, bypassing the
stream_completion path. These requests had no priority field, so they
could preempt interactive work. Set priority=5 (between subconscious
agents at 2 and unconscious at 10).

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-09 20:42:38 -04:00
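
For reference, the wire-level body might look roughly like this (a sketch using serde_json; every field except `priority` is a placeholder):

```rust
use serde_json::json;

fn main() {
    // Sketch of the scoring request body; only priority = 5 (between the
    // subconscious agents at 2 and the unconscious path at 10) is from the
    // commit above, the other fields are placeholder values.
    let body = json!({
        "prompt": [151644, 8948, 198], // raw token IDs (placeholder values)
        "priority": 5,
    });
    println!("{body}");
}
```
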
Kent Overstreet
14fd8c9b90 Clean up warnings: StreamToken pub, dead oneshot code, SkipIndex
Made StreamToken pub (was pub(crate), needed by context.rs).
Removed dead API_CLIENT, get_client, sampling/priority fields
from oneshot. Suppressed pre-existing SkipIndex warning in learn.rs.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 16:35:57 -04:00
Kent Overstreet
1d61b091b0 WIP: Agent/AgentState — 36 errors remaining, all .lock() → .state.lock() or .context.lock()
Bulk replaced Arc<Mutex<Agent>> with Arc<Agent> across all files.
Fixed control.rs, memory.rs tool handlers. Fixed oneshot Backend.
Remaining errors are all agent.lock() → agent.state.lock() or
agent.context.lock() in mind/, user/, and a few in mod.rs.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 15:40:36 -04:00
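
A sketch of the shape this refactor is heading toward, with placeholder field contents: one `Arc<Agent>` whose independently locked parts replace the single outer `Arc<Mutex<Agent>>`.

```rust
use std::sync::{Arc, Mutex};

// Field contents are placeholders; only the shape of the refactor
// (one Arc<Agent> with independently locked parts) is from the commit.
struct AgentState {
    activity: Option<String>,
}

struct ContextState {
    entries: Vec<String>,
}

struct Agent {
    state: Mutex<AgentState>,
    context: Mutex<ContextState>,
}

fn main() {
    let agent = Arc::new(Agent {
        state: Mutex::new(AgentState { activity: None }),
        context: Mutex::new(ContextState { entries: Vec::new() }),
    });

    // Before: agent.lock() serialized everything behind one mutex.
    // After: lock only the part being touched.
    agent.state.lock().unwrap().activity = Some("scoring memories".into());
    agent.context.lock().unwrap().entries.push("hello".into());
}
```
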
Kent Overstreet
e587431f9a IT BUILDS: Full AST migration compiles — zero errors
All callers migrated from old context types to AstNode/ContextState.
Killed: Message, Role (api), ConversationEntry, ContextEntry,
ContextSection, working_stack, api/parsing.rs, api/types.rs,
api/openai.rs, context_old.rs.

Oneshot standalone path stubbed (needs completions API rewrite).
12 warnings remaining (dead code cleanup).

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 15:29:52 -04:00
Kent Overstreet
d0d876e067 WIP: Fix mind/, dmn, UI layer — 35 errors remaining
mind/mod.rs and mind/dmn.rs fully migrated to AST types.
user/context.rs, user/widgets.rs, user/chat.rs partially migrated.
Killed working_stack tool, tokenize_conv_entry, context_old.rs.

Remaining: learn.rs (22), oneshot.rs (5), subconscious.rs (3),
chat.rs (3), widgets.rs (1), context.rs (1).

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 15:24:49 -04:00
Kent Overstreet
5e4067c04f Replace token counting with token generation via HuggingFace tokenizer
Add agent/tokenizer.rs with global Qwen 3.5 tokenizer that generates
actual token IDs including chat template wrapping. ContextEntry now
stores token_ids: Vec<u32> instead of tokens: usize — the count is
derived from the length.

ContextEntry::new() tokenizes automatically via the global tokenizer.
ContextSection::push_entry() takes a raw ConversationEntry and
tokenizes it. set_message() re-tokenizes without needing an external
tokenizer parameter.

Token IDs include the full chat template: <|im_start|>role\ncontent
<|im_end|>\n — so concatenating token_ids across entries produces a
ready-to-send prompt for vLLM's /v1/completions endpoint.

The old tiktoken CoreBPE is now unused on Agent (will be removed in
a followup). Token counts are now exact for Qwen 3.5 instead of the
~85-90% approximation from cl100k_base.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 11:20:03 -04:00
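
A rough sketch of the entry shape this describes; the `tokenize` helper below is a byte-level stand-in for the global HuggingFace tokenizer, not the real thing:

```rust
/// Stand-in for the global tokenizer: the real one wraps a HuggingFace
/// tokenizer and produces actual Qwen token IDs; here we just map the
/// chat-template-wrapped text to bytes so the example runs anywhere.
fn tokenize(role: &str, content: &str) -> Vec<u32> {
    format!("<|im_start|>{role}\n{content}<|im_end|>\n")
        .bytes()
        .map(u32::from)
        .collect()
}

struct ContextEntry {
    content: String,
    token_ids: Vec<u32>,
}

impl ContextEntry {
    /// Tokenizes at construction time; the token count is just the length.
    fn new(role: &str, content: &str) -> Self {
        Self {
            content: content.to_string(),
            token_ids: tokenize(role, content),
        }
    }

    fn tokens(&self) -> usize {
        self.token_ids.len()
    }
}

fn main() {
    let entries = [
        ContextEntry::new("user", "hello"),
        ContextEntry::new("assistant", "hi there"),
    ];
    // Concatenating token_ids across entries yields a prompt ready for a
    // raw completions endpoint, with no re-tokenization at send time.
    let prompt: Vec<u32> = entries.iter().flat_map(|e| e.token_ids.clone()).collect();
    println!(
        "{} entries ({} + {} tokens) -> {} prompt tokens; last content: {}",
        entries.len(),
        entries[0].tokens(),
        entries[1].tokens(),
        prompt.len(),
        entries[1].content
    );
}
```
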
Kent Overstreet
613704720b Score memories in first 60% of conversation by tokens
Use cumulative token position instead of entry index for the scoring
cutoff. This reflects actual context usage — a few large entries
near the end won't skew the boundary.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-07 21:43:59 -04:00
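
A sketch of the cutoff computation this implies, with made-up entry sizes:

```rust
/// Returns how many leading entries fit within the first `fraction` of the
/// conversation's total tokens; 0.60 matches the 60% cutoff above.
fn scoring_cutoff(entry_tokens: &[usize], fraction: f64) -> usize {
    let total: usize = entry_tokens.iter().sum();
    let budget = (total as f64 * fraction) as usize;
    let mut cumulative = 0;
    let mut count = 0;
    for &tokens in entry_tokens {
        cumulative += tokens;
        if cumulative > budget {
            break;
        }
        count += 1;
    }
    count
}

fn main() {
    // An index-based cutoff (60% of 5 entries) would stop after 3 entries;
    // measuring cumulative tokens scores 4 here, because the large entries
    // sit at the end and no longer skew the boundary.
    let sizes = [100, 100, 100, 2000, 2000];
    println!("score first {} entries", scoring_cutoff(&sizes, 0.60));
}
```
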
Kent Overstreet
fd58386951 Incremental memory scoring with per-score persistence
score_memories_incremental now takes an async callback that fires
after each memory is scored. The callback:
- Writes the score to the conversation entry via set_score()
- Persists to memory-scores.json immediately
- Notifies the UI so the context screen updates live

Scoring no longer batches — each score is visible and persisted
as it completes. Does not touch the memory store.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-07 21:34:14 -04:00
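
A sketch of the callback shape, assuming tokio and placeholder types; the real callback persists to memory-scores.json and notifies the UI:

```rust
use std::future::Future;

/// Placeholder memory type; only the callback-per-score shape is from the commit.
struct Memory {
    text: String,
}

/// Scores memories one at a time, invoking the async callback after each
/// score so the caller can persist it and refresh the UI immediately,
/// instead of waiting for a whole batch.
async fn score_memories_incremental<F, Fut>(memories: &[Memory], mut on_score: F)
where
    F: FnMut(usize, f32) -> Fut,
    Fut: Future<Output = ()>,
{
    for (index, memory) in memories.iter().enumerate() {
        // Stand-in for the real /score call.
        let score = memory.text.len() as f32 / 100.0;
        on_score(index, score).await;
    }
}

#[tokio::main]
async fn main() {
    let memories = vec![
        Memory { text: "first memory".into() },
        Memory { text: "a much longer second memory".into() },
    ];
    score_memories_incremental(&memories, |index, score| async move {
        // The real callback writes the score to the conversation entry,
        // persists memory-scores.json, and notifies the UI.
        println!("memory {index} scored {score:.2}");
    })
    .await;
}
```
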
Kent Overstreet
62996e27d7 WIP: ContextEntry/ContextSection data structures for incremental token counting
New types — not yet wired to callers:

- ContextEntry: wraps ConversationEntry with cached token count and
  timestamp
- ContextSection: named group of entries with cached token total.
  Private entries/tokens, read via entries()/tokens().
  Mutation via push(entry), set(index, entry), del(index).
- ContextState: system/identity/journal/conversation sections + working_stack
- ConversationEntry::System variant for system prompt entries

Token counting happens once at push time. Sections maintain their
totals incrementally via push/set/del. No more recomputing from
scratch on every budget check.

Does not compile — callers need updating.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-07 20:48:08 -04:00
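
A sketch of the bookkeeping this describes: a section keeps a cached total and adjusts it on every mutation instead of recomputing. The field contents are placeholders.

```rust
/// Placeholder for the wrapped ConversationEntry; only the cached-count
/// bookkeeping below is taken from the commit message.
struct ContextEntry {
    text: String,
    tokens: usize,
}

/// Named group of entries with a cached token total, adjusted on every
/// mutation so budget checks never rescan the section.
struct ContextSection {
    entries: Vec<ContextEntry>,
    tokens: usize,
}

impl ContextSection {
    fn new() -> Self {
        Self { entries: Vec::new(), tokens: 0 }
    }

    fn push(&mut self, entry: ContextEntry) {
        self.tokens += entry.tokens;
        self.entries.push(entry);
    }

    fn set(&mut self, index: usize, entry: ContextEntry) {
        self.tokens = self.tokens - self.entries[index].tokens + entry.tokens;
        self.entries[index] = entry;
    }

    fn del(&mut self, index: usize) {
        let removed = self.entries.remove(index);
        self.tokens -= removed.tokens;
    }

    fn entries(&self) -> &[ContextEntry] {
        &self.entries
    }

    fn tokens(&self) -> usize {
        self.tokens
    }
}

fn main() {
    let mut conversation = ContextSection::new();
    conversation.push(ContextEntry { text: "hello".into(), tokens: 3 });
    conversation.push(ContextEntry { text: "hi there".into(), tokens: 5 });
    conversation.del(0);
    println!(
        "{} entries, {} tokens cached, first entry: {}",
        conversation.entries().len(),
        conversation.tokens(),
        conversation.entries()[0].text
    );
}
```
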
Kent Overstreet
f33b1767da Restrict API types visibility — types module is now private
Only Message, Role, MessageContent, ContentPart, ToolCall,
FunctionCall, Usage, ImageUrl are pub-exported from agent::api.

Internal types (ChatRequest, ChatCompletionChunk, ChunkChoice,
Delta, ReasoningConfig, ToolCallDelta, FunctionCallDelta) are
pub(crate) — invisible outside the crate.

All callers updated to import from agent::api:: instead of
agent::api::types::.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-07 13:39:20 -04:00
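
A sketch of the resulting module layout (type contents are placeholders): the types module stays crate-internal and only the public surface is re-exported from the parent.

```rust
mod api {
    // The types module is not reachable from outside the crate, so callers
    // cannot name api::types::… directly.
    pub(crate) mod types {
        pub struct Message {
            pub role: Role,
            pub content: String,
        }
        pub enum Role {
            System,
            User,
            Assistant,
        }
        // Wire-level request type, crate-internal only.
        pub(crate) struct ChatRequest {
            pub messages: Vec<Message>,
        }
    }

    // Callers import agent::api::{Message, Role}, never agent::api::types::…
    pub use types::{Message, Role};
    pub(crate) use types::ChatRequest;
}

fn main() {
    let msg = api::Message {
        role: api::Role::User,
        content: "hello".into(),
    };
    let _req = api::ChatRequest { messages: vec![msg] };
}
```
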
Kent Overstreet
1cf4f504c0 Kill reqwest — minimal HTTP client on raw hyper + tokio-rustls
New src/agent/api/http.rs: ~240 lines, supports GET/POST, JSON/form
bodies, SSE streaming via chunk(), TLS via rustls. No tracing dep.

Removes reqwest from the main crate and telegram channel crate.
Cargo.lock drops ~900 lines of transitive dependencies.

tracing now only pulled in by tui-markdown.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-07 12:50:40 -04:00
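
A sketch of the SSE framing that chunk()-based streaming implies: buffer raw body chunks, split on blank lines, strip the "data: " prefixes. This is only the parsing shape, not the actual http.rs.

```rust
/// Minimal SSE framing over raw body chunks: events are separated by a
/// blank line and each payload line carries a "data: " prefix.
struct SseParser {
    buffer: String,
}

impl SseParser {
    fn new() -> Self {
        Self { buffer: String::new() }
    }

    /// Feed one body chunk, get back any complete event payloads.
    fn push_chunk(&mut self, chunk: &str) -> Vec<String> {
        self.buffer.push_str(chunk);
        let mut events = Vec::new();
        // A complete event ends with an empty line ("\n\n").
        while let Some(pos) = self.buffer.find("\n\n") {
            let raw: String = self.buffer.drain(..pos + 2).collect();
            let data: Vec<&str> = raw
                .lines()
                .filter_map(|line| line.strip_prefix("data: "))
                .collect();
            if !data.is_empty() {
                events.push(data.join("\n"));
            }
        }
        events
    }
}

fn main() {
    let mut parser = SseParser::new();
    // Chunk boundaries need not line up with event boundaries.
    for chunk in ["data: {\"delta\":\"hel", "lo\"}\n\ndata: [DONE]\n\n"] {
        for event in parser.push_chunk(chunk) {
            println!("event: {event}");
        }
    }
}
```
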
ProofOfConcept
f390fa1617 Delete ui_channel.rs — relocate types, remove all UiMessage/UiSender plumbing
Types relocated:
- StreamTarget → mind/mod.rs (Mind decides Conversation vs Autonomous)
- SharedActiveTools + shared_active_tools() → agent/tools/mod.rs
- ContextSection + SharedContextState → agent/context.rs (already there)
- StatusInfo + ContextInfo → user/mod.rs (UI display state)

Removed UiSender from: Agent::turn, Mind, learn.rs, all function signatures.
The entire message-passing layer is gone. All state flows through
Agent fields (activities, entries, streaming) read by the UI via try_lock.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-05 22:34:48 -04:00
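
A sketch of the resulting pull model, with placeholder fields: the render loop reads Agent state through try_lock and simply keeps last frame's copy when the agent is busy, instead of receiving UiMessage sends.

```rust
use std::sync::{Arc, Mutex};

// Placeholder agent state; only the try_lock pull pattern is from the commit.
struct AgentState {
    activities: Vec<String>,
    streaming: String,
}

struct Agent {
    state: Mutex<AgentState>,
}

/// Called once per render tick. If the agent currently holds the lock,
/// the UI keeps last frame's copy instead of blocking.
fn sync_from_agent(agent: &Agent, ui_copy: &mut AgentState) {
    if let Ok(state) = agent.state.try_lock() {
        ui_copy.activities = state.activities.clone();
        ui_copy.streaming = state.streaming.clone();
    }
}

fn main() {
    let agent = Arc::new(Agent {
        state: Mutex::new(AgentState {
            activities: vec!["thinking".into()],
            streaming: "partial respon".into(),
        }),
    });
    let mut ui_copy = AgentState { activities: Vec::new(), streaming: String::new() };
    sync_from_agent(&agent, &mut ui_copy);
    println!("{:?} | {}", ui_copy.activities, ui_copy.streaming);
}
```
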
ProofOfConcept
cfddb55ed9 Kill TextDelta, Info — UiMessage is dead. RAII ActivityGuards replace all status feedback
Streaming text now goes directly to agent entries via append_streaming().
sync_from_agent diffs the growing entry each tick. The streaming entry
is popped when the response completes; build_response_message pushes
the final version.

All status feedback uses RAII ActivityGuards:
- push_activity() for long-running work (thinking, streaming, scoring)
- notify() for instant feedback (compacted, DMN state changes, commands)
- Guards auto-remove on Drop, appending "(complete)" and lingering 5s
- expire_activities() cleans up timed-out notifications on render tick

UiMessage enum reduced to a single Info variant with zero sends.
The channel infrastructure remains for now (Mind/Agent still take
UiSender in signatures) — mechanical cleanup for a follow-up.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-05 22:18:07 -04:00
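
A sketch of the ActivityGuard idea: push a label on creation, rewrite the entry as "(complete)" on Drop with a timestamp, and let expire_activities() drop it after the linger window. Names beyond those in the commit are assumptions.

```rust
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

// Shared activity list the UI renders each tick.
#[derive(Default)]
struct Activities {
    items: Vec<(String, Option<Instant>)>, // label, completion time if done
}

type Shared = Arc<Mutex<Activities>>;

/// RAII guard: exists while the work runs, rewrites its entry on Drop.
struct ActivityGuard {
    shared: Shared,
    index: usize,
}

fn push_activity(shared: &Shared, label: &str) -> ActivityGuard {
    let mut activities = shared.lock().unwrap();
    activities.items.push((label.to_string(), None));
    ActivityGuard { shared: shared.clone(), index: activities.items.len() - 1 }
}

impl Drop for ActivityGuard {
    fn drop(&mut self) {
        let mut activities = self.shared.lock().unwrap();
        if let Some(item) = activities.items.get_mut(self.index) {
            item.0.push_str(" (complete)");
            item.1 = Some(Instant::now()); // linger until expiry
        }
    }
}

/// Render-tick cleanup: drop completed entries older than the linger window.
fn expire_activities(shared: &Shared, linger: Duration) {
    let mut activities = shared.lock().unwrap();
    activities
        .items
        .retain(|(_, done)| done.map_or(true, |at| at.elapsed() < linger));
}

fn main() {
    let shared: Shared = Arc::new(Mutex::new(Activities::default()));
    {
        let _guard = push_activity(&shared, "scoring memories");
        // ... long-running work happens here ...
    } // guard drops: entry becomes "scoring memories (complete)"
    expire_activities(&shared, Duration::from_secs(5));
    println!("{:?}", shared.lock().unwrap().items);
}
```
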
ProofOfConcept
e7914e3d58 Kill Reasoning, Debug, Activity variants — read status from Agent directly
Reasoning tokens: dropped for now, will land in context entries later.
Debug sends: converted to dbglog! macro (writes to debug.log).
Activity: now a field on Agent, set directly, read by UI via try_lock.
score_memories_incremental takes agent Arc for activity writes.

UiMessage down to 2 variants: TextDelta, Info.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-05 21:45:55 -04:00
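
A sketch of what a dbglog! macro appending to debug.log might look like; the macro name is from the commit, the implementation is assumed.

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Sketch only: the real macro's path handling and formatting may differ.
macro_rules! dbglog {
    ($($arg:tt)*) => {{
        if let Ok(mut file) = OpenOptions::new()
            .create(true)
            .append(true)
            .open("debug.log")
        {
            let _ = writeln!(file, $($arg)*);
        }
    }};
}

fn main() {
    dbglog!("stream finished after {} tokens", 128);
}
```
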
ProofOfConcept
eafc2887a3 Kill StatusUpdate, Activity, DmnAnnotation, ContextInfoUpdate, AgentUpdate
Status bar reads directly from Agent and MindState on each render tick.
Activity is now a field on Agent — set by agent code directly, read by
UI via try_lock. DmnAnnotation, ContextInfoUpdate, AgentUpdate were
already dead (no senders).

UiMessage down to 4 variants: TextDelta, Reasoning, Debug, Info.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-05 21:34:27 -04:00
Kent Overstreet
390b6c6c0a more reorg
2026-04-05 01:48:11 -04:00
Renamed from src/agent/training.rs