Runtime-mutable settings (F6's threshold knob, the generate-alternates
toggle, anything else that comes along) were ending up as mirrored
fields on MindState — each new config setting grew MindState::new's
signature and added a clone+sync path. Wrong home. MindState is
ephemeral session state, not a config projection.
Give AppConfig the same treatment as the memory Config: install it
into a global RwLock<AppConfig> at startup via load_app, read through
config::app() (which returns a read guard), and mutate through
update_app. The
config_writer functions now write to disk AND update the cache
atomically, so the one-stop-shop call keeps both in sync.
Also while in here:
- learn.generate_alternates moves from a sentinel file
(~/.consciousness/cache/finetune-alternates, "exists = enabled")
into the config under the learn section. On first run with this
build, if the sentinel file still exists, Mind::new flips the
config value to true and deletes the file. Drops
alternates_enabled()/set_alternates().
- Default threshold 0.0000001 → 1.0. With the timestamp filter
removed, the previous value was letting essentially everything
through; 1.0 is a sane "nothing gets through unless you actually
want it" default.
- score_finetune_candidates takes generate_alternates as a parameter
instead of reading a global — caller snapshots the config values
once at the top of start_finetune_scoring so the async task
doesn't need to hold the config read lock across awaits.
- MindState.learn_threshold / learn_generate_alternates gone; the
SetLearn* command handlers now just delegate to config_writer.
Kent noted RwLock<Arc<AppConfig>> (the pattern used by the memory
Config global) is pointless here — nobody needs a snapshot-after-
release, reads are short — so this uses a plain RwLock<AppConfig>
and returns a read guard.
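A sketch of the resulting shape (the OnceLock backing and exact
signatures are assumptions; AppConfig is the project's struct):

    use std::sync::{OnceLock, RwLock, RwLockReadGuard};

    static APP: OnceLock<RwLock<AppConfig>> = OnceLock::new();

    // install the startup snapshot; later writes go through update_app
    pub fn load_app(cfg: AppConfig) {
        let _ = APP.set(RwLock::new(cfg));
    }

    // short-lived read guard; callers shouldn't hold it across awaits
    pub fn app() -> RwLockReadGuard<'static, AppConfig> {
        APP.get().expect("load_app not called").read().unwrap()
    }

    pub fn update_app(f: impl FnOnce(&mut AppConfig)) {
        let lock = APP.get().expect("load_app not called");
        f(&mut *lock.write().unwrap());
    }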
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
With the timestamp filter gone (previous commit), score_finetune_candidates
started returning the actual ~100+ candidates per scoring run. The
existing code generated alternates for all of them in a tight loop
before returning anything, leaving the status line stuck on
"finetune: scoring N responses..." for hundreds of seconds while the
B200 was pegged.
Two fixes:
1. score_finetune_candidates now takes an ActivityGuard and a callback.
Candidates are emitted one at a time as they complete (after their
alternate is generated if that's enabled; immediately otherwise).
The activity status updates to "finetune: generating alternate N/M"
during the alternate-gen phase so it's clear what's happening; the
new signature is sketched below.
2. BgEvent::FinetuneCandidates(Vec<_>) → FinetuneCandidate(one). Each
emitted candidate is pushed onto shared.finetune_candidates; the UI
tick picks it up and renders it on the next frame. start_finetune_scoring
clears the previous run's list at the top so each run is fresh.
Return type changes from (Vec, f64) → (usize, f64) — the count above
threshold is all the caller still needs since the candidates stream
through the callback.
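A sketch of the streaming shape (candidate collection and alternate
generation are hidden behind hypothetical helpers; the threshold
parameter stands in for however the caller snapshots config):

    async fn score_finetune_candidates(
        activity: ActivityGuard,
        threshold: f64,
        generate_alternates: bool,
        mut emit: impl FnMut(FinetuneCandidate),
    ) -> anyhow::Result<(usize, f64)> {
        let candidates = collect_candidates().await?; // hypothetical
        let total = candidates.len();
        let (mut above, mut max_divergence) = (0usize, 0.0f64);
        for (i, mut cand) in candidates.into_iter().enumerate() {
            max_divergence = max_divergence.max(cand.divergence);
            if cand.divergence < threshold {
                continue;
            }
            above += 1;
            if generate_alternates {
                activity.update(format!(
                    "finetune: generating alternate {}/{}", i + 1, total
                ));
                cand.alternate = Some(generate_alternate(&cand).await?); // hypothetical
            }
            emit(cand); // pushed as BgEvent::FinetuneCandidate, one per candidate
        }
        Ok((above, max_divergence))
    }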
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Previously NodeLeaf.timestamp and AstNode::Branch.timestamp accepted
null or missing via a deserialize_timestamp_or_epoch fallback — legacy
entries in conversation.jsonl from before Branch timestamps existed
(and from before chrono serialization was wired up) would load with
UNIX_EPOCH as a sentinel. Downstream, node_timestamp_ns() returned
Option<i64> and callers had to handle None as "old entry, skip."
That second filter was silently dropping every candidate in
score_finetune_candidates when scoring an older session — the F6
screen showed "0 above threshold" even when max_divergence was
orders of magnitude above the threshold, because every entry was
failing the None check, not the divergence check.
The fix, in three parts:
1. src/bin/fix-timestamps.rs — one-off migration tool that walks a
conversation.jsonl, linearly interpolates timestamps for entries
stuck at UNIX_EPOCH (using surrounding real timestamps as anchors),
propagates to child leaves with per-sibling ns offsets, and bumps
any collisions by 1 ns for uniqueness (the interpolation is
sketched after this list). Ran against the current session's log:
11887 entries, 72289 ns bumps, all unique.
2. context.rs — drop default_timestamp and
deserialize_timestamp_or_epoch. NodeLeaf and Branch now require a
present non-null timestamp on deserialize. Tests flip from
"missing/null → UNIX_EPOCH" to "missing/null → Err."
3. subconscious/learn.rs — node_timestamp_ns now returns i64, not
Option<i64>. The matching caller in score_finetune_candidates
collapses from a Some/None match to a single trained-set check.
mind/log.rs's oldest_timestamp no longer filters UNIX_EPOCH.
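The interpolation from part 1, compressed to the index math (the real
tool works on parsed jsonl entries and also handles the per-sibling
leaf offsets, which are elided here):

    // entries stuck at UNIX_EPOCH are 0; real timestamps are anchors
    fn interpolate_ns(ts: &mut [i64]) {
        let anchors: Vec<usize> = (0..ts.len()).filter(|&i| ts[i] != 0).collect();
        for pair in anchors.windows(2) {
            let (lo, hi) = (pair[0], pair[1]);
            let step = (ts[hi] - ts[lo]) / (hi - lo) as i64;
            for i in lo + 1..hi {
                ts[i] = ts[lo] + step * (i - lo) as i64;
            }
        }
        // bump collisions by 1 ns so timestamps stay unique dedup keys
        for i in 1..ts.len() {
            if ts[i] <= ts[i - 1] {
                ts[i] = ts[i - 1] + 1;
            }
        }
    }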
Every line currently on disk has already been migrated. Going
forward, new AstNodes always carry real timestamps (Utc::now() at
construction time), so the strict schema is the invariant, not an
aspiration.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
vLLM's /v1/score endpoint made score_ranges a required field (the
messages-mode fallback that used to pattern-scan for assistant
boundaries is gone). Always send the field, and if we have nothing to
score, skip the HTTP round-trip entirely instead of letting the server
422 us.
Response parsing is unchanged — serde ignores the renamed range_index
field and the dropped role field since we only extract total_logprob.
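The guard, roughly (the rest of the request body and the range
representation are assumptions):

    fn build_score_request(score_ranges: &[(usize, usize)]) -> Option<serde_json::Value> {
        if score_ranges.is_empty() {
            return None; // nothing to score: skip the round-trip, avoid the 422
        }
        Some(serde_json::json!({
            "score_ranges": score_ranges, // always sent now
        }))
    }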
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Three changes that together reshape the F6 fine-tune-review screen:
1. Finetune scoring reports through the standard agent activity system
instead of a separate finetune_progress String. The previous design
maintained an independent progress field that forced a cross-lock dance and
bespoke UI plumbing. start_finetune_scoring now uses start_activity
+ activity.update, so the usual status line and notifications
capture scoring progress uniformly with other background work.
2. MindState gains a FinetuneScoringStats snapshot (responses seen,
above threshold, max divergence, error). The F6 empty screen shows
this instead of a loading message — so after a scoring run that
produced zero candidates, you can see *why* (e.g., max_divergence
below threshold).
3. The divergence threshold is configurable from F6 via +/- hotkeys
(each press scales it by 10×; handling sketched below) and persisted
to ~/.consciousness/config.json5 via
config_writer::set_learn_threshold. AppConfig grows a learn section
with a threshold field (default 1e-7).
Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.
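The hotkey handling, as a minimal sketch (key routing elided; at this
point the threshold still lives on MindState, and set_learn_threshold
is assumed to return anyhow::Result<()>):

    fn adjust_threshold(state: &mut MindState, increase: bool) -> anyhow::Result<()> {
        state.learn_threshold = if increase {
            state.learn_threshold * 10.0
        } else {
            state.learn_threshold / 10.0
        };
        // write-through to ~/.consciousness/config.json5
        config_writer::set_learn_threshold(state.learn_threshold)
    }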
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Two related changes to the learn subsystem:
1. AST node timestamps are now non-optional — both Leaf and Branch
variants carry a DateTime<Utc>. UNIX_EPOCH means "unset" (old entries
deserialized from on-disk conversation logs).
Training uses timestamps as unique keys for dedup, so we promote to
nanosecond precision: node_timestamp_ns(), TrainData.timestamp_ns,
FinetuneCandidate.timestamp_ns, mark_trained(ns) (key mapping
sketched below).
2. build_token_ids() now also returns token-position ranges of assistant
messages. These are passed to vLLM's /score endpoint via the new
score_ranges field so only scored-position logprobs are returned —
cuts bandwidth/compute when scoring small windows.
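The timestamp→key step, as a sketch (the real node_timestamp_ns
matches on the AstNode variants first):

    use chrono::{DateTime, Utc};

    // UNIX_EPOCH (0 ns) is the legacy "unset" sentinel, so it maps to None
    fn timestamp_key_ns(ts: DateTime<Utc>) -> Option<i64> {
        let ns = ts.timestamp_nanos_opt()?; // None outside the i64 ns range
        (ns != 0).then_some(ns)
    }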
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
When 's' is pressed on the learn screen, approved candidates are now
sent to the inference server's /train endpoint.
Samples are marked as sent immediately in the UI, and mark_trained()
is called after a successful API response to prevent re-scoring.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Wire up divergence scoring to identify responses that depend heavily on
memories the model hasn't internalized. These are candidates for fine-tuning.
- Score finetune candidates automatically after each turn
- Track trained responses by timestamp to prevent overtraining
  (sketched below)
- F6 screen shows candidates with divergence scores
- j/k nav, a=approve, r=reject, g=toggle alternate gen, s=send
- Additive sync preserves approval status across ticks
- Keeps 10 most recent rejected, removes sent
The 's' key currently just marks as trained locally — actual /finetune
endpoint call to follow.
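A sketch of the overtraining guard (field names assumed): anything
already marked trained is keyed by timestamp and skipped on later
scoring runs:

    use std::collections::HashSet;
    use chrono::{DateTime, Utc};

    fn untrained(
        cands: Vec<FinetuneCandidate>,
        trained: &HashSet<DateTime<Utc>>,
    ) -> Vec<FinetuneCandidate> {
        cands
            .into_iter()
            .filter(|c| !trained.contains(&c.timestamp))
            .collect()
    }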
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Identity memory nodes now participate in importance scoring alongside
conversation memories. Score loading/saving handles both sections, and
the conscious screen uses node.label() consistently for memory display.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Replace complex context_groups (with ContextGroup struct, ContextSource
enum, labels, keys arrays) with simple string lists:
- personality_nodes: loaded into main session context
- agent_nodes: loaded into subconscious agent context
Removed ~200 lines of code. The distinction between session and agent
context is now just which list you're in, not a per-group flag.
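The replacement shape, roughly (which struct the lists hang off is
assumed; serde derive per the rest of the config):

    #[derive(serde::Deserialize)]
    struct ContextConfig {
        personality_nodes: Vec<String>, // main session context
        agent_nodes: Vec<String>,       // subconscious agent context
    }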
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
The strip_md_suffix function was removed but its usages remained,
causing lookups like `identity.md` to fail (stripped to `identity`,
which didn't exist). Now keys are used as-is.
Renamed 4 nodes that had .md suffixes to canonical form:
- identity.md → identity
- promotion-work-queue.md-* → promotion-work-queue-*
- patterns.md#* → patterns-*
- practices.md#* → practices-*
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Remove the POC_PROVENANCE env var lookup from new_relation; callers
now pass provenance explicitly. This fixes provenance tracking in
cases where the env var wasn't set correctly.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Store now has internal Mutex for capnp appends and AtomicU64 for
size tracking. All methods take &self. The external Arc<Mutex<Store>>
is replaced with Arc<Store>.
- Store::append_lock protects file appends
- local.rs functions take &Store (not &mut Store)
- access_local() returns Arc<Store>
- All .lock().await calls removed from callers
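A sketch of the shape (anything beyond the commit's names is an
assumption):

    use std::sync::atomic::{AtomicU64, Ordering};
    use tokio::sync::Mutex;

    pub struct Store {
        append_lock: Mutex<()>, // protects capnp file appends
        size: AtomicU64,        // lock-free size tracking
    }

    impl Store {
        pub async fn append(&self, bytes: &[u8]) -> anyhow::Result<()> {
            let _guard = self.append_lock.lock().await; // serialize appends
            // ... append the capnp message to the backing file ...
            self.size.fetch_add(bytes.len() as u64, Ordering::Relaxed);
            Ok(())
        }

        pub fn size(&self) -> u64 {
            self.size.load(Ordering::Relaxed)
        }
    }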
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Replace Result<_, String> with anyhow::Result throughout:
- hippocampus/store module (persist, ops, types, view, mod)
- CLI modules (admin, agent, graph, journal, node)
- Run trait in main.rs
Use .context() and .with_context() instead of .map_err(|e| format!(...))
patterns. Add bail!() for early error returns.
Add access_local() helper in hippocampus/mod.rs that returns
Result<Arc<Mutex<Store>>> for direct local store access.
Fix store access patterns to properly lock Arc<Mutex<Store>> before
accessing fields in mind/unconscious.rs, mind/mod.rs, subconscious/learn.rs,
and hippocampus/memory.rs.
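A representative before/after of the conversion (illustrative
function, not from the tree):

    use anyhow::{bail, Context, Result};

    // before: std::fs::read(path).map_err(|e| format!("read {}: {}", path, e))
    fn read_store(path: &str) -> Result<Vec<u8>> {
        if path.is_empty() {
            bail!("store path not configured"); // early return via bail!()
        }
        std::fs::read(path).with_context(|| format!("reading store at {path}"))
    }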
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
resolve_placeholders() and run_agent() no longer take &Store.
All placeholders now use async memory_render/memory_links/memory_query
directly. The "siblings" placeholder uses Vec<LinkInfo> for ranking
neighbors by link_strength * node_weight.
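The ranking, as a sketch (LinkInfo fields per the commit, assumed f64):

    use std::cmp::Ordering;

    fn rank_siblings(mut links: Vec<LinkInfo>) -> Vec<LinkInfo> {
        links.sort_by(|a, b| {
            let score = |l: &LinkInfo| l.link_strength * l.node_weight;
            score(b).partial_cmp(&score(a)).unwrap_or(Ordering::Equal) // descending
        });
        links
    }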
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
- node.rs: use memory::* typed helpers instead of memory_rpc()
- main.rs: make Run trait async, await all command dispatch
- defs.rs: bridge get_group_content async via block_in_place
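The bridge, roughly (needs the multi-threaded tokio runtime;
block_in_place keeps the blocking call from starving the core pool):

    fn get_group_content_sync(key: &str) -> anyhow::Result<String> {
        tokio::task::block_in_place(|| {
            tokio::runtime::Handle::current().block_on(get_group_content(key))
        })
    }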
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
The organize agents handle renaming as part of their normal work now.
Also simplified resolve_placeholders to build graph internally.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
- links() in memory.rs: use cached_store() instead of MemoryNode::load()
- identity.rs: use memory_rpc for Store context loading
- defs.rs: delete dead placeholders (topology, nodes/episodes, health, split)
- agents now use {{tool: graph_topology}} etc instead
- prompts.rs: delete unused format_split_plan_node()
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Collapse to one function that uses memory_rpc (which handles daemon
vs local), removing 65 lines of duplicate logic.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
- Add memory_history MCP tool for version history
- Convert cmd_history to use memory_rpc
- Add raw parameter to memory_render for editing
- Remove unused: dump-json, list-edges, lookup-bump, lookups
- Fix render_node path in defs.rs/subconscious.rs
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
These were early experiments with manual feedback signals that
never worked well. The scoring system will handle this properly.
Removed:
- CLI: used, wrong, not-relevant, not-useful, gap
- MCP: memory_used
- Store: mark_used, mark_wrong, record_gap, modify_node
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Check if the current period's digest exists and update it with
journal_update before starting a new one with journal_new.
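The check, roughly (existing_digest_for is a hypothetical lookup;
levels per the commit below):

    async fn write_digest(period: &str, level: u8, content: &str) -> anyhow::Result<()> {
        match existing_digest_for(period).await? {
            Some(key) => journal_update(&key, content).await,
            None => journal_new(level, content).await,
        }
    }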
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Instead of memory_write, the digest agent now uses journal_new with
level parameter (1=daily, 2=weekly, 3=monthly) which correctly sets
the node type.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
- Remove TurnResult.text (was dead code - Agent::turn handles text internally)
- Simplify run_with_backend to just iterate over steps (Agent::turn loops
for tool calls and handles empty responses internally)
- Change run/run_shared/run_forked_shared to return Result<(), String>
- Remove AgentResult.output field (no callers used it)
- Stub out legacy text-parsing code (audit, compare) that needs redesign
- Update digest.rs to not depend on text return
- Add level parameter to journal_new/journal_update for digest support
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
When def.tools was non-empty, it was filtering to ONLY those tools
instead of using memory tools as a base and adding extras. This
broke the digest agent (and any agent with an explicit tools list)
by removing all 13 base memory tools.
Fixed to match the pattern in unconscious.rs:
- base = memory_tools()
- extras from journal_tools() if listed in def.tools
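The fixed composition (memory_tools/journal_tools per the commit;
Tool and AgentDef shapes are assumptions):

    fn agent_tools(def: &AgentDef) -> Vec<Tool> {
        let mut tools = memory_tools(); // base: all 13 memory tools, always
        tools.extend(
            journal_tools()
                .into_iter()
                .filter(|t| def.tools.contains(&t.name)), // extras only if listed
        );
        tools
    }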
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
PEG parser now handles both expression syntax (degree > 5 | sort degree)
and pipeline syntax (all | type:episodic | sort:timestamp). Deleted
Stage::parse() and helpers from engine.rs — it's now pure execution.
All callers use parse_stages() from parser.rs as the single entry point.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Every unconscious agent gets memory_tools() as baseline. The tools
field in the agent def specifies additional tools on top of that —
digest agent now gets journal_tail, journal_new, journal_update.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Instead of reimplementing filtering logic, journal_tail builds a
query string (type + sort + age + limit) and delegates to query().
Supports format and after parameters. Removes keys_only in favor
of format:"compact". Digest agent updated to use dates, not key names.
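The delegation, roughly (stage names follow the pipeline syntax from
the parser commit above; parameter handling is simplified):

    async fn journal_tail(age: &str, limit: usize, format: &str) -> anyhow::Result<String> {
        let q = format!("all | type:journal | sort:timestamp | age:{age} | limit:{limit}");
        query(&q, format).await // reuse query() instead of reimplementing filters
    }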
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Rewrite digest.agent to be fully autonomous — it uses journal_tail
to discover what needs digesting and generates digests during its
run. No more pre-populated {{CONTENT}}/{{LEVEL}} placeholders.
Extend journal_tail with level parameter (0=journal, 1=daily,
2=weekly, 3=monthly) and keys_only mode. Also include node keys
in full output for better agent context.
Remove stale format:"neighborhood" case from memory_query.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Add "format": "full" option to memory_query that renders with
full content, graph metrics, and hub analysis (format_nodes_section).
Convert 6 agents (linker, challenger, connector, extractor, replay,
transfer) to inline their queries via {{tool: memory_query}} instead
of separate header query + {{nodes}} placeholder.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Text cosine similarity was being used as a crutch for operations
the graph structure should handle: interference detection, orphan
linking, triangle closing, hub differentiation. These are all
graph-structural operations that the agents (linker, extractor)
handle with actual semantic understanding.
Removed: similarity.rs (stemming + cosine), rewrite.rs (orphan
linking, triangle closing, hub differentiation), detect_interference,
and all CLI commands and consolidation steps that used them.
-794 lines.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Interference detection via O(n²) text cosine similarity is
redundant — the graph structure should surface similar nodes
through link topology, shared neighbors, and community detection.
The other agents (linker, extractor) already maintain these
relationships.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Convert {{topology}}, {{health}}, {{pairs}} placeholders to
{{tool:}} calls. Made format_topology_header, format_health_section,
format_pairs_section pub so tools can call them.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>