consciousness/.claude
Kent Overstreet 5e4067c04f Replace token counting with token generation via HuggingFace tokenizer
Add agent/tokenizer.rs with global Qwen 3.5 tokenizer that generates
actual token IDs including chat template wrapping. ContextEntry now
stores token_ids: Vec<u32> instead of tokens: usize — the count is
derived from the length.

ContextEntry::new() tokenizes automatically via the global tokenizer.
ContextSection::push_entry() takes a raw ConversationEntry and
tokenizes it. set_message() re-tokenizes without needing an external
tokenizer parameter.
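The `ContextEntry` shape described above can be sketched as follows. The byte-level tokenizer stub and the `TOKENIZER`/`tokenize`/`token_count` names are placeholders so the example is self-contained; they stand in for the real HuggingFace-backed global in agent/tokenizer.rs:

```rust
use std::sync::OnceLock;

// Stand-in for the global tokenizer: maps text to token IDs. A trivial
// byte-level stub here; the real one wraps the HuggingFace tokenizer.
type TokenizeFn = fn(&str) -> Vec<u32>;

static TOKENIZER: OnceLock<TokenizeFn> = OnceLock::new();

fn stub_tokenize(s: &str) -> Vec<u32> {
    s.bytes().map(u32::from).collect()
}

fn tokenize(text: &str) -> Vec<u32> {
    (TOKENIZER.get_or_init(|| stub_tokenize))(text)
}

// Stores the token IDs themselves; the count is derived from them.
struct ContextEntry {
    message: String,
    token_ids: Vec<u32>,
}

impl ContextEntry {
    // Tokenizes automatically via the global tokenizer.
    fn new(message: &str) -> Self {
        Self {
            message: message.to_string(),
            token_ids: tokenize(message),
        }
    }

    // Token count is just the length of the stored IDs.
    fn token_count(&self) -> usize {
        self.token_ids.len()
    }

    // Re-tokenizes in place; no external tokenizer parameter needed.
    fn set_message(&mut self, message: &str) {
        self.message = message.to_string();
        self.token_ids = tokenize(message);
    }
}
```

Deriving the count from `token_ids.len()` means it can never drift out of sync with the IDs actually stored.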

Token IDs include the full chat template: <|im_start|>role\ncontent
<|im_end|>\n — so concatenating token_ids across entries produces a
ready-to-send prompt for vLLM's /v1/completions endpoint.
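A minimal sketch of the ChatML wrapping and concatenation described above, operating on template text rather than token IDs (the actual code emits the IDs directly); `wrap_chatml` and `build_prompt` are hypothetical names for illustration:

```rust
// Wrap one entry in the ChatML chat template:
// <|im_start|>role\ncontent<|im_end|>\n
fn wrap_chatml(role: &str, content: &str) -> String {
    format!("<|im_start|>{}\n{}<|im_end|>\n", role, content)
}

// Concatenating the wrapped entries in order yields a complete prompt.
fn build_prompt(entries: &[(&str, &str)]) -> String {
    entries.iter().map(|(r, c)| wrap_chatml(r, c)).collect()
}
```

Because each entry's tokens already carry their own `<|im_start|>`/`<|im_end|>` framing, the concatenation needs no further templating before being sent as a raw prompt.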

The old tiktoken CoreBPE on Agent is now unused (it will be removed in
a follow-up). Token counts are now exact for Qwen 3.5 instead of the
~85-90% approximation from cl100k_base.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-08 11:20:03 -04:00
analysis flatten: move poc-memory contents to workspace root 2026-03-25 00:54:12 -04:00
dmn-algorithm-plan.md stash DMN algorithm plan and connector prompt fix 2026-03-05 10:24:24 -05:00
query-language-design.md flatten: move poc-memory contents to workspace root 2026-03-25 00:54:12 -04:00
scheduled_tasks.lock Replace token counting with token generation via HuggingFace tokenizer 2026-04-08 11:20:03 -04:00
scoring-persistence-analysis.md Fix bounds check panic and batched lock in collect_results 2026-04-07 03:49:49 -04:00
ui-desync-analysis.md Analysis notes: UI desync pop/push line count mismatch 2026-04-07 11:54:30 -04:00