// mind/ — Cognitive layer
//
// Mind state machine, DMN, identity, observation socket.
// Everything about how the mind operates, separate from the
// user interface (TUI, CLI) and the agent execution (tools, API).

pub mod subconscious;
pub mod unconscious;
pub mod identity;
pub mod log;

/// A background operation wired off Mind. Each flow (memory scoring,
/// finetune scoring, compare) is a struct holding its dependencies and
/// a TaskHandle; `trigger()` picks the flow's own "start a fresh run"
/// semantics (abort-restart vs no-op-if-running).
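///
/// A minimal sketch of the shape a flow takes (illustrative only; the
/// real flows live in subconscious::learn and subconscious::compare, and
/// their field names differ):
///
/// ```ignore
/// struct ExampleScoring {
///     agent: std::sync::Arc<crate::agent::Agent>,
///     handle: TaskHandle,
/// }
///
/// impl MindTriggered for ExampleScoring {
///     fn trigger(&self) {
///         let agent = self.agent.clone();
///         // This flow picks abort-restart semantics; a flow that must not
///         // be restarted mid-run would call trigger_if_idle instead.
///         self.handle.trigger(async move {
///             // ... run the scoring, write results into MindState,
///             // then notify the UI ...
///             let _ = agent;
///         });
///     }
/// }
/// ```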
pub trait MindTriggered {
    fn trigger(&self);
}

/// Owns a JoinHandle for a background task with two trigger semantics.
/// Uses a sync Mutex for interior mutability so callers can `trigger()`
/// off `&self` (Mind is shared via Arc).
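///
/// Illustrative usage (hypothetical futures, not code from this module):
///
/// ```ignore
/// let handle = TaskHandle::new();
/// handle.trigger(async { /* aborts any prior run, then starts this one */ });
/// handle.trigger_if_idle(async { /* skipped while the run above is live */ });
/// ```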
#[derive(Default)]
pub struct TaskHandle(std::sync::Mutex<Option<tokio::task::JoinHandle<()>>>);

impl TaskHandle {
    pub fn new() -> Self { Self::default() }

    /// Abort any running task and start a fresh one.
    pub fn trigger<F>(&self, fut: F)
    where F: std::future::Future<Output = ()> + Send + 'static
    {
        let mut h = self.0.lock().unwrap();
        if let Some(old) = h.take() { old.abort(); }
        *h = Some(tokio::spawn(fut));
    }

    /// No-op if a task is still running; otherwise start a fresh one.
    pub fn trigger_if_idle<F>(&self, fut: F)
    where F: std::future::Future<Output = ()> + Send + 'static
    {
        let mut h = self.0.lock().unwrap();
        if let Some(old) = &*h {
            if !old.is_finished() { return; }
        }
        *h = Some(tokio::spawn(fut));
    }
}

// consciousness.rs — Mind state machine and event loop
//
// The core runtime for the consciousness binary. Mind manages turns,
// DMN state, compaction, scoring, and slash commands. The event loop
// bridges Mind (cognitive state) with App (TUI rendering).
//
// The event loop uses biased select! so priorities are deterministic:
// keyboard events > turn results > render ticks > DMN timer > UI messages.
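//
// Shape of the loop (sketch; the channel and timer names here are
// illustrative, not the exact fields below):
//
//     loop {
//         tokio::select! {
//             biased;
//             ev  = keyboard.recv()    => { /* keyboard events */ }
//             res = turn_rx.recv()     => { /* turn results */ }
//             _   = render_tick.tick() => { /* render tick */ }
//             _   = dmn_timer.tick()   => { /* DMN timer */ }
//             msg = ui_rx.recv()       => { /* UI messages */ }
//         }
//     }
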
use anyhow::Result;
use std::sync::Arc;
use std::time::Instant;
use tokio::sync::mpsc;
use crate::agent::{Agent, TurnResult};
use crate::agent::api::ApiClient;
use crate::config::{AppConfig, SessionConfig};
use crate::subconscious::{compare, learn};
use crate::hippocampus::access_local;

pub use subconscious::{SubconsciousSnapshot, Subconscious};
pub use unconscious::{UnconsciousSnapshot, Unconscious};

use crate::agent::context::{AstNode, NodeBody, Section, Ast, ContextState};

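/// Pair each Memory leaf in `nodes` with its score from the map, returning
/// (node index, score) for every key that has an entry.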
fn match_scores(
    nodes: &[AstNode],
    scores: &std::collections::BTreeMap<String, f64>,
) -> Vec<(usize, f64)> {
    nodes.iter().enumerate()
        .filter_map(|(i, node)| {
            if let AstNode::Leaf(leaf) = node {
                if let NodeBody::Memory { key, .. } = leaf.body() {
                    return scores.get(key.as_str()).map(|&s| (i, s));
                }
            }
            None
        }).collect()
}

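/// Find a memory node by key, searching identity first, then conversation.
/// Returns the owning section and the node's index within it.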
pub(crate) fn find_memory_by_key(ctx: &ContextState, key: &str) -> Option<(Section, usize)> {
    [(Section::Identity, ctx.identity()), (Section::Conversation, ctx.conversation())]
        .into_iter()
        .find_map(|(section, nodes)| {
            nodes.iter().enumerate().find_map(|(i, node)| {
                if let AstNode::Leaf(leaf) = node {
                    if let NodeBody::Memory { key: k, .. } = leaf.body() {
                        if k == key { return Some((section, i)); }
                    }
                }
                None
            })
        })
}

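/// Load persisted scores from `path` and apply them to matching memory
/// nodes in the context. A missing or unparseable file is silently ignored.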
fn load_memory_scores(ctx: &mut ContextState, path: &std::path::Path) {
    let data = match std::fs::read_to_string(path) {
        Ok(d) => d,
        Err(_) => return,
    };
    let scores: std::collections::BTreeMap<String, f64> = match serde_json::from_str(&data) {
        Ok(s) => s,
        Err(_) => return,
    };
    let identity_scores = match_scores(ctx.identity(), &scores);
    let conv_scores = match_scores(ctx.conversation(), &scores);
    let applied = identity_scores.len() + conv_scores.len();
    for (i, s) in identity_scores {
        ctx.set_score(Section::Identity, i, Some(s));
    }
    for (i, s) in conv_scores {
        ctx.set_score(Section::Conversation, i, Some(s));
    }
    if applied > 0 {
        dbglog!("[scoring] loaded {} scores from {}", applied, path.display());
    }
}

/// Collect scored memory keys from identity and conversation entries.
pub(crate) fn collect_memory_scores(ctx: &ContextState) -> std::collections::BTreeMap<String, f64> {
    ctx.identity().iter()
        .chain(ctx.conversation().iter())
        .filter_map(|node| {
            if let AstNode::Leaf(leaf) = node {
                if let NodeBody::Memory { key, score: Some(s), .. } = leaf.body() {
                    return Some((key.clone(), *s));
                }
            }
            None
        })
        .collect()
}

/// Save memory scores to disk.
pub(crate) fn save_memory_scores(scores: &std::collections::BTreeMap<String, f64>, path: &std::path::Path) {
    match serde_json::to_string_pretty(scores) {
        Ok(json) => match std::fs::write(path, &json) {
            Ok(()) => dbglog!("[scoring] saved {} scores to {} ({} bytes)",
                scores.len(), path.display(), json.len()),
            Err(e) => dbglog!("[scoring] save FAILED ({}): {}", path.display(), e),
        },
        Err(e) => dbglog!("[scoring] serialize FAILED: {}", e),
    }
}
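// On disk the scores are a plain key-to-score JSON object, e.g.
// (keys illustrative):
//
//     {
//       "memory/bcachefs-snapshot-design": 0.83,
//       "memory/dmn-pacing-notes": 0.41
//     }
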
/// Which pane streaming text should go to.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum StreamTarget {
    /// User-initiated turn — text goes to conversation pane.
    Conversation,
    /// DMN-initiated turn — text goes to autonomous pane.
    Autonomous,
}

/// Compaction threshold — context is rebuilt when prompt tokens exceed this.
fn compaction_threshold(app: &AppConfig) -> u32 {
    (crate::agent::context::context_window() as u32) * app.compaction.hard_threshold_pct / 100
}
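// E.g. with a 131072-token context window and hard_threshold_pct = 80
// (numbers illustrative), that is 131072 * 80 / 100 = 104857 prompt tokens.
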
/// Shared state between Mind and UI.
pub struct MindState {
    /// Pending user input — UI pushes, Mind consumes after turn completes.
    pub input: Vec<String>,
    /// True while a turn is in progress.
    pub turn_active: bool,
    /// DMN state
    pub dmn: subconscious::State,
    pub dmn_turns: u32,
    pub max_dmn_turns: u32,
    /// Whether memory scoring is running.
    pub scoring_in_flight: bool,
    /// Whether compaction is running.
    pub compaction_in_flight: bool,
    /// Per-turn tracking
    pub last_user_input: Instant,
    pub consecutive_errors: u32,
    pub last_turn_had_tools: bool,
    /// Handle to the currently running turn task.
    pub turn_handle: Option<tokio::task::JoinHandle<()>>,
    /// Unconscious agent idle state — true when the 60s timer has expired.
    pub unc_idle: bool,
    /// When the unconscious idle timer will fire (for UI display).
    pub unc_idle_deadline: Instant,
    /// Fine-tuning candidates identified by scoring.
    pub finetune_candidates: Vec<learn::FinetuneCandidate>,
    /// Last scoring run stats for UI display.
    pub finetune_last_run: Option<learn::FinetuneScoringStats>,
    /// F7 compare candidates — one per response, showing what the test
    /// model would say given the same context.
    pub compare_candidates: Vec<compare::CompareCandidate>,
    /// F7 compare error from the last run, if any.
    pub compare_error: Option<String>,
}

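// Manual Clone rather than #[derive]: tokio::task::JoinHandle is not Clone,
// so UI snapshots drop the turn handle (turn_handle: None below).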
impl Clone for MindState {
    fn clone(&self) -> Self {
        Self {
            input: self.input.clone(),
            turn_active: self.turn_active,
            dmn: self.dmn.clone(),
            dmn_turns: self.dmn_turns,
            max_dmn_turns: self.max_dmn_turns,
            scoring_in_flight: self.scoring_in_flight,
            compaction_in_flight: self.compaction_in_flight,
            last_user_input: self.last_user_input,
            consecutive_errors: self.consecutive_errors,
            last_turn_had_tools: self.last_turn_had_tools,
            turn_handle: None, // Not cloned — only Mind's loop uses this
            unc_idle: self.unc_idle,
            unc_idle_deadline: self.unc_idle_deadline,
            finetune_candidates: self.finetune_candidates.clone(),
            finetune_last_run: self.finetune_last_run.clone(),
            compare_candidates: self.compare_candidates.clone(),
            compare_error: self.compare_error.clone(),
        }
    }
}

/// What should happen after a state transition.
pub enum MindCommand {
    /// Run compaction check
    Compact,
    /// Run incremental memory scoring (auto, after turns)
    Score,
    /// Run full N×M memory scoring matrix (/score command)
    ScoreFull,
    /// Score for finetune candidates
    ScoreFinetune,
    /// Run F7 compare: generate alternates with the configured test model
    /// for every assistant response in the context.
    Compare,
    /// Update the finetune divergence threshold and persist to config.
    SetLearnThreshold(f64),
    /// Toggle alternate-response generation during scoring; persist to config.
    SetLearnGenerateAlternates(bool),
    /// Abort current turn, kill processes
    Interrupt,
    /// Reset session
    NewSession,
    /// Nothing to do
    None,
}
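// Mind's loop dispatches each command in one line per flow, e.g. (sketch):
//
//     MindCommand::Score         => self.memory_scoring.trigger(),
//     MindCommand::ScoreFull     => self.memory_scoring.trigger_full(),
//     MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
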
impl MindState {
    pub fn new(max_dmn_turns: u32) -> Self {
        Self {
            input: Vec::new(),
            turn_active: false,
            dmn: if subconscious::is_off() { subconscious::State::Off }
                 else { subconscious::State::Resting { since: Instant::now() } },
            dmn_turns: 0,
            max_dmn_turns,
            scoring_in_flight: false,
            compaction_in_flight: false,
            last_user_input: Instant::now(),
            consecutive_errors: 0,
            last_turn_had_tools: false,
            turn_handle: None,
            unc_idle: false,
            unc_idle_deadline: Instant::now() + std::time::Duration::from_secs(60),
            finetune_candidates: Vec::new(),
            finetune_last_run: None,
            compare_candidates: Vec::new(),
            compare_error: None,
        }
    }

    /// Is there pending user input waiting?
    fn has_pending_input(&self) -> bool {
        !self.turn_active && !self.input.is_empty()
    }

    /// Consume pending user input if no turn is active.
    /// Returns the text to send; caller is responsible for pushing it
    /// into the Agent's context and starting the turn.
    fn take_pending_input(&mut self) -> Option<String> {
        if self.turn_active || self.input.is_empty() {
            return None;
        }
        let text = self.input.join("\n");
        self.input.clear();
        self.dmn_turns = 0;
        self.consecutive_errors = 0;
        self.last_user_input = Instant::now();
        self.dmn = subconscious::State::Engaged;
        Some(text)
    }

    /// Process turn completion, return model switch name if requested.
    fn complete_turn(&mut self, result: &Result<TurnResult>, target: StreamTarget) -> Option<String> {
        self.turn_active = false;
        match result {
            Ok(turn_result) => {
                if turn_result.tool_errors > 0 {
                    self.consecutive_errors += turn_result.tool_errors;
                } else {
                    self.consecutive_errors = 0;
                }
                self.last_turn_had_tools = turn_result.had_tool_calls;
                self.dmn = subconscious::transition(
                    &self.dmn,
                    turn_result.yield_requested,
                    turn_result.had_tool_calls,
                    target == StreamTarget::Conversation,
                );
                if turn_result.dmn_pause {
                    self.dmn = subconscious::State::Paused;
                    self.dmn_turns = 0;
                }
                turn_result.model_switch.clone()
            }
            Err(_) => {
                self.consecutive_errors += 1;
                self.dmn = subconscious::State::Resting { since: Instant::now() };
                None
            }
        }
    }

    /// DMN tick — returns a prompt and target if we should run a turn.
    fn dmn_tick(&mut self) -> Option<(String, StreamTarget)> {
        if matches!(self.dmn, subconscious::State::Paused | subconscious::State::Off) {
            return None;
        }

        self.dmn_turns += 1;
        if self.dmn_turns > self.max_dmn_turns {
            self.dmn = subconscious::State::Resting { since: Instant::now() };
            self.dmn_turns = 0;
            return None;
        }

        let dmn_ctx = subconscious::DmnContext {
            user_idle: self.last_user_input.elapsed(),
            consecutive_errors: self.consecutive_errors,
            last_turn_had_tools: self.last_turn_had_tools,
        };
        let prompt = self.dmn.prompt(&dmn_ctx);
        Some((prompt, StreamTarget::Autonomous))
    }

    fn interrupt(&mut self) {
        self.input.clear();
        self.dmn = subconscious::State::Resting { since: Instant::now() };
    }
}

// --- Mind: cognitive state machine ---

pub type SharedMindState = std::sync::Mutex<MindState>;

pub struct Mind {
    pub agent: Arc<Agent>,
    pub shared: Arc<SharedMindState>,
    pub config: SessionConfig,
    pub subconscious: Arc<crate::Mutex<Subconscious>>,
    pub unconscious: Arc<crate::Mutex<Unconscious>>,
    turn_tx: mpsc::Sender<(Result<TurnResult>, StreamTarget)>,
    turn_watch: tokio::sync::watch::Sender<bool>,
    /// Signals conscious activity to the unconscious loop.
    /// true = active, false = idle opportunity.
    conscious_active: tokio::sync::watch::Sender<bool>,
    memory_scoring: learn::MemoryScoring,
    finetune_scoring: learn::FinetuneScoring,
Motivation: we have the VRAM on the b200 to load two versions of the
same family simultaneously (e.g. Qwen3.5 27B bf16 and q8_k_xl). Rather
than trust perplexity/KLD numbers on a generic corpus, we can measure
divergence on our actual conversations: for each assistant response,
ask the test model what it would have said given the same prefix, and
eyeball the diffs.
- config.compare.test_backend — names an entry in the existing
backends map to use as the test model. Empty = F7 reports "(unset)"
and does nothing.
- subconscious::compare::{score_compare_candidates, CompareCandidate,
CompareScoringStats, CompareScoring}. For each assistant response,
gen_continuation runs with the test client against the same prefix
the original response saw; pairs stream into
shared.compare_candidates as they complete.
- user::compare::CompareScreen — F7 in the screen list. c/Enter
triggers a run; list/detail layout mirroring F6, detail shows
prior context / original / test-model alternate.
No persistence yet — each F7 run regenerates. Caching via a context
manifest (so we can re-view without re-burning generation) is the
natural follow-up; for now light usage is fine.
Also reusable later for validating finetune checkpoints: same pattern,
swap the test backend for the new checkpoint, watch where it diverges
from the base.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:01:11 -04:00
|
|
|
|
compare_scoring: compare::CompareScoring,
|
2026-04-05 04:32:11 -04:00
|
|
|
|
_supervisor: crate::thalamus::supervisor::Supervisor,
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
}
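For orientation, a sketch of one flow in the shape the MindTriggered
commit describes: the struct owns a JoinHandle behind a sync Mutex and
picks no-op-if-running semantics (the abort-restart variant would
abort() the old handle instead). The struct name matches the commit;
the field set and body here are illustrative, not the real code:

    use std::sync::Mutex;

    pub struct MemoryScoring {
        // The real struct also owns its agent/shared handles and the
        // scores path; elided here.
        handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
    }

    impl MindTriggered for MemoryScoring {
        fn trigger(&self) {
            let mut h = self.handle.lock().unwrap();
            // No-op if a scoring run is still in flight.
            if h.as_ref().is_some_and(|t| !t.is_finished()) {
                return;
            }
            *h = Some(tokio::spawn(async {
                // scoring work; results are written straight into
                // MindState and the UI is woken via notify_one()
            }));
        }
    }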
|
|
|
|
|
|
|
|
|
|
|
|
impl Mind {
|
2026-04-08 15:47:21 -04:00
|
|
|
|
pub async fn new(
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
config: SessionConfig,
|
|
|
|
|
|
turn_tx: mpsc::Sender<(Result<TurnResult>, StreamTarget)>,
|
|
|
|
|
|
) -> Self {
|
2026-04-05 04:20:49 -04:00
|
|
|
|
let client = ApiClient::new(&config.api_base, &config.api_key, &config.model);
|
|
|
|
|
|
let conversation_log = log::ConversationLog::new(
|
|
|
|
|
|
config.session_dir.join("conversation.jsonl"),
|
|
|
|
|
|
).ok();
|
|
|
|
|
|
|
2026-04-08 15:47:21 -04:00
|
|
|
|
let agent = Agent::new(
|
2026-04-05 04:20:49 -04:00
|
|
|
|
client,
|
|
|
|
|
|
config.context_parts.clone(),
|
|
|
|
|
|
config.app.clone(),
|
|
|
|
|
|
conversation_log,
|
2026-04-08 16:45:56 -04:00
|
|
|
|
crate::agent::tools::ActiveTools::new(),
|
2026-04-11 19:43:24 -04:00
|
|
|
|
crate::agent::tools::tools(),
|
2026-04-08 15:47:21 -04:00
|
|
|
|
).await;
|
2026-04-05 04:20:49 -04:00
|
|
|
|
|
2026-04-16 12:53:22 -04:00
|
|
|
|
// Migrate legacy "file exists = enabled" sentinel for the
|
|
|
|
|
|
// generate-alternates flag into the config. One-shot; after this
|
|
|
|
|
|
// the sentinel is gone and the config is the source of truth.
|
|
|
|
|
|
let legacy_sentinel = dirs::home_dir().unwrap_or_default()
|
|
|
|
|
|
.join(".consciousness/cache/finetune-alternates");
|
|
|
|
|
|
if legacy_sentinel.exists() {
|
|
|
|
|
|
if !crate::config::app().learn.generate_alternates {
|
|
|
|
|
|
let _ = crate::config_writer::set_learn_generate_alternates(true);
|
|
|
|
|
|
}
|
|
|
|
|
|
let _ = std::fs::remove_file(&legacy_sentinel);
|
|
|
|
|
|
}
|
|
|
|
|
|
|
learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:
1. Finetune scoring reports through the standard agent activity system
instead of a separate finetune_progress String. The previous design
ran an independent progress field that forced a cross-lock dance and
bespoke UI plumbing. start_finetune_scoring now uses start_activity
+ activity.update, so the usual status line and notifications
capture scoring progress uniformly with other background work.
2. MindState gains a FinetuneScoringStats snapshot (responses seen,
above threshold, max divergence, error). The F6 empty screen shows
this instead of a loading message — so after a scoring run that
produced zero candidates, you can see *why* (e.g., max_divergence
below threshold).
3. The divergence threshold is configurable from F6 via +/- hotkeys
(scales by 10×) and persisted to ~/.consciousness/config.json5 via
config_writer::set_learn_threshold. AppConfig grows a learn section
with a threshold field (default 1e-7).
Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:49:26 -04:00
|
|
|
|
let shared = Arc::new(std::sync::Mutex::new(MindState::new(
|
|
|
|
|
|
config.app.dmn.max_turns,
|
|
|
|
|
|
)));
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
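// The initial receivers from both watch channels below are dropped;
// readers call subscribe() later. watch::Sender::send errors while no
// receiver exists, which is why every send is wrapped in `let _ =`.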
let (turn_watch, _) = tokio::sync::watch::channel(false);
|
2026-04-10 03:20:12 -04:00
|
|
|
|
let (conscious_active, _) = tokio::sync::watch::channel(false);
|
2026-04-05 04:32:11 -04:00
|
|
|
|
|
|
|
|
|
|
let mut sup = crate::thalamus::supervisor::Supervisor::new();
|
|
|
|
|
|
sup.load_config();
|
|
|
|
|
|
sup.ensure_running();
|
|
|
|
|
|
|
2026-04-12 20:27:42 -04:00
|
|
|
|
let subconscious = Arc::new(crate::Mutex::new(Subconscious::new()));
|
2026-04-08 20:37:19 -04:00
|
|
|
|
subconscious.lock().await.init_output_tool(subconscious.clone());
|
|
|
|
|
|
|
2026-04-12 20:27:42 -04:00
|
|
|
|
let unconscious = Arc::new(crate::Mutex::new(Unconscious::new()));
|
2026-04-10 03:20:12 -04:00
|
|
|
|
|
|
|
|
|
|
// Spawn the unconscious loop on its own task
|
|
|
|
|
|
if !config.no_agents {
|
|
|
|
|
|
let unc = unconscious.clone();
|
|
|
|
|
|
let shared_for_unc = shared.clone();
|
|
|
|
|
|
let mut unc_rx = conscious_active.subscribe();
|
|
|
|
|
|
tokio::spawn(async move {
|
|
|
|
|
|
const IDLE_DELAY: std::time::Duration = std::time::Duration::from_secs(60);
|
|
|
|
|
|
loop {
|
|
|
|
|
|
// Wait for conscious side to go inactive
|
|
|
|
|
|
if *unc_rx.borrow() {
|
|
|
|
|
|
if unc_rx.changed().await.is_err() { break; }
|
|
|
|
|
|
continue;
|
|
|
|
|
|
}
|
|
|
|
|
|
// Conscious is inactive — wait 60s before starting
|
|
|
|
|
|
let deadline = tokio::time::Instant::now() + IDLE_DELAY;
|
|
|
|
|
|
{
|
|
|
|
|
|
let mut s = shared_for_unc.lock().unwrap();
|
|
|
|
|
|
s.unc_idle = false;
|
|
|
|
|
|
s.unc_idle_deadline = Instant::now() + IDLE_DELAY;
|
|
|
|
|
|
}
|
|
|
|
|
|
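// Ok from changed() means the conscious flag flipped (any send
// restarts the idle wait on the next pass); Err means the sender is
// gone, so fall through as if the deadline had passed.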
let went_active = tokio::select! {
|
|
|
|
|
|
_ = tokio::time::sleep_until(deadline) => false,
|
|
|
|
|
|
r = unc_rx.changed() => r.is_ok(),
|
|
|
|
|
|
};
|
|
|
|
|
|
if went_active { continue; }
|
|
|
|
|
|
|
|
|
|
|
|
// Idle period reached — run agents until conscious goes active
|
|
|
|
|
|
{
|
|
|
|
|
|
let mut s = shared_for_unc.lock().unwrap();
|
|
|
|
|
|
s.unc_idle = true;
|
|
|
|
|
|
}
|
2026-04-13 22:38:01 -04:00
|
|
|
|
|
|
|
|
|
|
// Get wake notify for event-driven loop
|
|
|
|
|
|
let wake = unc.lock().await.wake.clone();
|
|
|
|
|
|
let mut health_interval = tokio::time::interval(std::time::Duration::from_secs(600));
|
|
|
|
|
|
health_interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Skip);
|
|
|
|
|
|
|
2026-04-10 03:20:12 -04:00
|
|
|
|
loop {
|
2026-04-13 22:38:01 -04:00
|
|
|
|
// Do work: reap finished agents, spawn new ones
|
|
|
|
|
|
let (to_spawn, needs_health) = {
|
2026-04-12 20:33:23 -04:00
|
|
|
|
let mut guard = unc.lock().await;
|
|
|
|
|
|
guard.reap_finished();
|
2026-04-13 22:38:01 -04:00
|
|
|
|
(guard.select_to_spawn(), guard.needs_health_refresh())
|
2026-04-12 20:33:23 -04:00
|
|
|
|
};
|
2026-04-13 22:38:01 -04:00
|
|
|
|
|
|
|
|
|
|
// Spawn agents outside lock
|
2026-04-12 20:33:23 -04:00
|
|
|
|
for (idx, name, auto) in to_spawn {
|
2026-04-13 22:38:01 -04:00
|
|
|
|
match crate::mind::unconscious::prepare_spawn(&name, auto, wake.clone()).await {
|
2026-04-12 20:33:23 -04:00
|
|
|
|
Ok(result) => unc.lock().await.complete_spawn(idx, result),
|
|
|
|
|
|
Err(auto) => unc.lock().await.abort_spawn(idx, auto),
|
|
|
|
|
|
}
|
|
|
|
|
|
}
|
2026-04-13 22:38:01 -04:00
|
|
|
|
|
|
|
|
|
|
// Health check outside lock (slow I/O)
|
|
|
|
|
|
if needs_health {
|
|
|
|
|
|
if let Ok(store_arc) = access_local() {
|
|
|
|
|
|
let health = crate::subconscious::daemon::compute_graph_health(&store_arc);
|
|
|
|
|
|
unc.lock().await.set_health(health);
|
|
|
|
|
|
}
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
// Wait for: conscious active, agent finished, or health timer
|
|
|
|
|
|
tokio::select! {
|
|
|
|
|
|
_ = unc_rx.changed() => {
|
|
|
|
|
|
if *unc_rx.borrow() { break; }
|
|
|
|
|
|
}
|
|
|
|
|
|
_ = wake.notified() => {}
|
|
|
|
|
|
_ = health_interval.tick() => {}
|
|
|
|
|
|
}
|
2026-04-10 03:20:12 -04:00
|
|
|
|
}
|
|
|
|
|
|
}
|
|
|
|
|
|
});
|
|
|
|
|
|
}
|
|
|
|
|
|
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
let scores_path = config.session_dir.join("memory-scores.json");
|
|
|
|
|
|
let memory_scoring = learn::MemoryScoring::new(
|
|
|
|
|
|
agent.clone(), shared.clone(), scores_path);
|
|
|
|
|
|
let finetune_scoring = learn::FinetuneScoring::new(agent.clone(), shared.clone());
|
user: F7 compare screen
Side-by-side model comparison against the current conversation context.
Built on the MindTriggered pattern — F7 drops in as one more
CompareScoring flow next to MemoryScoring / FinetuneScoring.
Motivation: we have the VRAM on the b200 to load two versions of the
same family simultaneously (e.g. Qwen3.5 27B bf16 and q8_k_xl). Rather
than trust perplexity/KLD numbers on a generic corpus, we can measure
divergence on our actual conversations: for each assistant response,
ask the test model what it would have said given the same prefix, and
eyeball the diffs.
- config.compare.test_backend — names an entry in the existing
backends map to use as the test model. Empty = F7 reports "(unset)"
and does nothing.
- subconscious::compare::{score_compare_candidates, CompareCandidate,
CompareScoringStats, CompareScoring}. For each assistant response,
gen_continuation runs with the test client against the same prefix
the original response saw; pairs stream into
shared.compare_candidates as they complete.
- user::compare::CompareScreen — F7 in the screen list. c/Enter
triggers a run; list/detail layout mirroring F6, detail shows
prior context / original / test-model alternate.
No persistence yet — each F7 run regenerates. Caching via a context
manifest (so we can re-view without re-burning generation) is the
natural follow-up; for now light usage is fine.
Also reusable later for validating finetune checkpoints: same pattern,
swap the test backend for the new checkpoint, watch where it diverges
from the base.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:01:11 -04:00
|
|
|
|
let compare_scoring = compare::CompareScoring::new(agent.clone(), shared.clone());
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
|
2026-04-07 02:13:06 -04:00
|
|
|
|
Self { agent, shared, config,
|
2026-04-10 03:20:12 -04:00
|
|
|
|
subconscious, unconscious,
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
turn_tx, turn_watch, conscious_active,
|
|
|
|
|
|
memory_scoring,
|
|
|
|
|
|
finetune_scoring,
|
user: F7 compare screen
Side-by-side model comparison against the current conversation context.
Built on the MindTriggered pattern — F7 drops in as one more
CompareScoring flow next to MemoryScoring / FinetuneScoring.
Motivation: we have the VRAM on the b200 to load two versions of the
same family simultaneously (e.g. Qwen3.5 27B bf16 and q8_k_xl). Rather
than trust perplexity/KLD numbers on a generic corpus, we can measure
divergence on our actual conversations: for each assistant response,
ask the test model what it would have said given the same prefix, and
eyeball the diffs.
- config.compare.test_backend — names an entry in the existing
backends map to use as the test model. Empty = F7 reports "(unset)"
and does nothing.
- subconscious::compare::{score_compare_candidates, CompareCandidate,
CompareScoringStats, CompareScoring}. For each assistant response,
gen_continuation runs with the test client against the same prefix
the original response saw; pairs stream into
shared.compare_candidates as they complete.
- user::compare::CompareScreen — F7 in the screen list. c/Enter
triggers a run; list/detail layout mirroring F6, detail shows
prior context / original / test-model alternate.
No persistence yet — each F7 run regenerates. Caching via a context
manifest (so we can re-view without re-burning generation) is the
natural follow-up; for now light usage is fine.
Also reusable later for validating finetune checkpoints: same pattern,
swap the test backend for the new checkpoint, watch where it diverges
from the base.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:01:11 -04:00
|
|
|
|
compare_scoring,
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
_supervisor: sup }
|
2026-04-04 02:46:32 -04:00
|
|
|
|
}
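A hypothetical reduction of the compare pass constructed above. The
real flow uses the test-backend client and streams CompareCandidate
pairs into shared.compare_candidates as they complete; here
gen_continuation is a stand-in with an invented signature, and tuples
replace the real message and candidate types:

    // Stand-in for regenerating with the test model against a prefix.
    async fn gen_continuation(prefix: &[String]) -> String {
        format!("(alternate after {} prior messages)", prefix.len())
    }

    // For each assistant response, regenerate against exactly the
    // prefix the original response saw, and collect (original, alt).
    async fn run_compare(transcript: &[(bool, String)]) -> Vec<(String, String)> {
        let mut pairs = Vec::new();
        for (i, (is_assistant, text)) in transcript.iter().enumerate() {
            if !is_assistant { continue; }
            let prefix: Vec<String> =
                transcript[..i].iter().map(|(_, t)| t.clone()).collect();
            let alternate = gen_continuation(&prefix).await;
            pairs.push((text.clone(), alternate));
        }
        pairs
    }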
|
|
|
|
|
|
|
2026-04-05 04:17:04 -04:00
|
|
|
|
/// Snapshot subconscious state; store access is best-effort.
|
2026-04-07 02:31:52 -04:00
|
|
|
|
pub async fn subconscious_snapshots(&self) -> Vec<SubconsciousSnapshot> {
|
2026-04-07 19:27:36 -04:00
|
|
|
|
// Lock ordering: subconscious → store (store is bottom-most).
|
|
|
|
|
|
let sub = self.subconscious.lock().await;
|
2026-04-13 18:11:58 -04:00
|
|
|
|
let store_arc = crate::hippocampus::access_local().ok();
|
|
|
|
|
|
sub.snapshots(store_arc.as_deref())
|
2026-04-07 02:31:52 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
pub async fn subconscious_walked(&self) -> Vec<String> {
|
2026-04-07 19:16:01 -04:00
|
|
|
|
self.subconscious.lock().await.walked()
|
2026-04-07 01:59:09 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-08 23:39:48 -04:00
|
|
|
|
pub async fn unconscious_snapshots(&self) -> Vec<UnconsciousSnapshot> {
|
2026-04-11 21:57:24 -04:00
|
|
|
|
let unc = self.unconscious.lock().await;
|
2026-04-13 18:11:58 -04:00
|
|
|
|
let store_arc = crate::hippocampus::access_local().ok();
|
|
|
|
|
|
unc.snapshots(store_arc.as_deref())
|
2026-04-08 23:39:48 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-05 04:42:50 -04:00
|
|
|
|
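/// Initialize: restore the conversation log, persisted memory scores,
/// and subconscious state, then kick off a startup scoring pass.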
pub async fn init(&self) {
|
2026-04-05 04:17:04 -04:00
|
|
|
|
// Restore conversation
|
2026-04-08 15:47:21 -04:00
|
|
|
|
self.agent.restore_from_log().await;
|
2026-04-07 19:35:46 -04:00
|
|
|
|
|
|
|
|
|
|
// Restore persisted memory scores
|
|
|
|
|
|
let scores_path = self.config.session_dir.join("memory-scores.json");
|
2026-04-08 15:47:21 -04:00
|
|
|
|
load_memory_scores(&mut *self.agent.context.lock().await, &scores_path);
|
2026-04-07 19:35:46 -04:00
|
|
|
|
|
2026-04-08 15:47:21 -04:00
|
|
|
|
self.agent.state.lock().await.changed.notify_one();
|
2026-04-07 19:02:58 -04:00
|
|
|
|
|
|
|
|
|
|
// Load persistent subconscious state
|
|
|
|
|
|
let state_path = self.config.session_dir.join("subconscious-state.json");
|
|
|
|
|
|
self.subconscious.lock().await.set_state_path(state_path);
|
2026-04-16 20:47:16 -04:00
|
|
|
|
|
|
|
|
|
|
// Kick off an incremental scoring pass on startup so memories due
|
|
|
|
|
|
// for re-scoring get evaluated without requiring a user message.
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
self.memory_scoring.trigger();
|
2026-04-05 04:02:16 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
pub fn turn_watch(&self) -> tokio::sync::watch::Receiver<bool> {
|
|
|
|
|
|
self.turn_watch.subscribe()
|
2026-04-04 02:46:32 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
/// Execute an Action from a MindState method.
|
2026-04-05 04:42:50 -04:00
|
|
|
|
async fn run_commands(&self, cmds: Vec<MindCommand>) {
|
2026-04-05 03:34:43 -04:00
|
|
|
|
for cmd in cmds {
|
|
|
|
|
|
match cmd {
|
|
|
|
|
|
MindCommand::None => {}
|
2026-04-05 03:43:53 -04:00
|
|
|
|
MindCommand::Compact => {
|
2026-04-06 20:34:51 -04:00
|
|
|
|
let threshold = compaction_threshold(&self.config.app) as usize;
|
2026-04-08 15:47:21 -04:00
|
|
|
|
if self.agent.context.lock().await.tokens() > threshold {
|
|
|
|
|
|
self.agent.compact().await;
|
|
|
|
|
|
self.agent.state.lock().await.notify("compacted");
|
2026-04-05 03:43:53 -04:00
|
|
|
|
}
|
|
|
|
|
|
}
|
2026-04-05 03:34:43 -04:00
|
|
|
|
MindCommand::Score => {
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
self.memory_scoring.trigger();
|
2026-04-09 22:19:02 -04:00
|
|
|
|
}
|
|
|
|
|
|
MindCommand::ScoreFull => {
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
self.memory_scoring.trigger_full();
|
2026-04-05 03:34:43 -04:00
|
|
|
|
}
|
2026-04-05 03:41:47 -04:00
|
|
|
|
MindCommand::Interrupt => {
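// Deliberately three short lock scopes on `shared` below: the sync
// mutex must not be held across the `.await` on agent.state.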
|
2026-04-05 03:34:43 -04:00
|
|
|
|
self.shared.lock().unwrap().interrupt();
|
2026-04-08 16:45:56 -04:00
|
|
|
|
self.agent.state.lock().await.active_tools.abort_all();
|
2026-04-05 04:42:50 -04:00
|
|
|
|
if let Some(h) = self.shared.lock().unwrap().turn_handle.take() { h.abort(); }
|
2026-04-05 03:34:43 -04:00
|
|
|
|
self.shared.lock().unwrap().turn_active = false;
|
|
|
|
|
|
let _ = self.turn_watch.send(false);
|
|
|
|
|
|
}
|
|
|
|
|
|
MindCommand::NewSession => {
|
2026-04-05 03:46:29 -04:00
|
|
|
|
{
|
|
|
|
|
|
let mut s = self.shared.lock().unwrap();
|
2026-04-08 23:37:01 -04:00
|
|
|
|
s.dmn = subconscious::State::Resting { since: Instant::now() };
|
2026-04-05 03:46:29 -04:00
|
|
|
|
s.dmn_turns = 0;
|
|
|
|
|
|
}
|
2026-04-05 03:34:43 -04:00
|
|
|
|
let new_log = log::ConversationLog::new(
|
|
|
|
|
|
self.config.session_dir.join("conversation.jsonl"),
|
|
|
|
|
|
).ok();
|
2026-04-08 15:47:21 -04:00
|
|
|
|
{
|
|
|
|
|
|
let mut ctx = self.agent.context.lock().await;
|
|
|
|
|
|
ctx.clear(Section::Conversation);
|
2026-04-09 00:32:32 -04:00
|
|
|
|
ctx.conversation_log = new_log;
|
2026-04-08 15:47:21 -04:00
|
|
|
|
}
|
|
|
|
|
|
{
|
|
|
|
|
|
let mut st = self.agent.state.lock().await;
|
|
|
|
|
|
st.generation += 1;
|
|
|
|
|
|
st.last_prompt_tokens = 0;
|
|
|
|
|
|
}
|
|
|
|
|
|
self.agent.compact().await;
|
2026-04-05 03:34:43 -04:00
|
|
|
|
}
|
2026-04-16 00:31:39 -04:00
|
|
|
|
MindCommand::ScoreFinetune => {
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
self.finetune_scoring.trigger();
|
2026-04-16 00:31:39 -04:00
|
|
|
|
}
|
user: F7 compare screen
Side-by-side model comparison against the current conversation context.
Built on the MindTriggered pattern — F7 drops in as one more
CompareScoring flow next to MemoryScoring / FinetuneScoring.
Motivation: we have the VRAM on the b200 to load two versions of the
same family simultaneously (e.g. Qwen3.5 27B bf16 and q8_k_xl). Rather
than trust perplexity/KLD numbers on a generic corpus, we can measure
divergence on our actual conversations: for each assistant response,
ask the test model what it would have said given the same prefix, and
eyeball the diffs.
- config.compare.test_backend — names an entry in the existing
backends map to use as the test model. Empty = F7 reports "(unset)"
and does nothing.
- subconscious::compare::{score_compare_candidates, CompareCandidate,
CompareScoringStats, CompareScoring}. For each assistant response,
gen_continuation runs with the test client against the same prefix
the original response saw; pairs stream into
shared.compare_candidates as they complete.
- user::compare::CompareScreen — F7 in the screen list. c/Enter
triggers a run; list/detail layout mirroring F6, detail shows
prior context / original / test-model alternate.
No persistence yet — each F7 run regenerates. Caching via a context
manifest (so we can re-view without re-burning generation) is the
natural follow-up; for now light usage is fine.
Also reusable later for validating finetune checkpoints: same pattern,
swap the test backend for the new checkpoint, watch where it diverges
from the base.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:01:11 -04:00
|
|
|
|
MindCommand::Compare => {
|
|
|
|
|
|
self.compare_scoring.trigger();
|
|
|
|
|
|
}
|
learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:
1. Finetune scoring reports through the standard agent activity system
instead of a separate finetune_progress String. The previous design
ran an independent progress field that forced a cross-lock dance and
bespoke UI plumbing. start_finetune_scoring now uses start_activity
+ activity.update, so the usual status line and notifications
capture scoring progress uniformly with other background work.
2. MindState gains a FinetuneScoringStats snapshot (responses seen,
above threshold, max divergence, error). The F6 empty screen shows
this instead of a loading message — so after a scoring run that
produced zero candidates, you can see *why* (e.g., max_divergence
below threshold).
3. The divergence threshold is configurable from F6 via +/- hotkeys
(scales by 10×) and persisted to ~/.consciousness/config.json5 via
config_writer::set_learn_threshold. AppConfig grows a learn section
with a threshold field (default 1e-7).
Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:49:26 -04:00
|
|
|
|
MindCommand::SetLearnThreshold(value) => {
|
|
|
|
|
|
if let Err(e) = crate::config_writer::set_learn_threshold(value) {
|
|
|
|
|
|
dbglog!("[learn] failed to persist threshold {}: {:#}", value, e);
|
|
|
|
|
|
}
|
|
|
|
|
|
}
|
2026-04-16 12:53:22 -04:00
|
|
|
|
MindCommand::SetLearnGenerateAlternates(value) => {
|
|
|
|
|
|
if let Err(e) = crate::config_writer::set_learn_generate_alternates(value) {
|
|
|
|
|
|
dbglog!("[learn] failed to persist generate_alternates {}: {:#}",
|
|
|
|
|
|
value, e);
|
|
|
|
|
|
}
|
|
|
|
|
|
}
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
}
|
|
|
|
|
|
}
|
|
|
|
|
|
}
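The SetLearnThreshold value handled above comes from the F6 +/-
hotkeys, which step the divergence threshold by a factor of ten rather
than a fixed increment, since useful values span many orders of
magnitude around the 1e-7 default. A hypothetical helper for the step
itself (persistence still goes through
config_writer::set_learn_threshold):

    // One hotkey step on the divergence threshold.
    fn step_threshold(current: f64, increase: bool) -> f64 {
        if increase { current * 10.0 } else { current / 10.0 }
    }
    // step_threshold(1e-7, true)  == 1e-6
    // step_threshold(1e-7, false) == 1e-8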
|
|
|
|
|
|
|
2026-04-16 00:31:39 -04:00
|
|
|
|
|
2026-04-07 01:33:07 -04:00
|
|
|
|
async fn start_turn(&self, text: &str, target: StreamTarget) {
|
|
|
|
|
|
{
|
2026-04-06 21:48:12 -04:00
|
|
|
|
match target {
|
|
|
|
|
|
StreamTarget::Conversation => {
|
2026-04-08 15:47:21 -04:00
|
|
|
|
self.agent.push_node(AstNode::user_msg(text)).await;
|
2026-04-06 21:48:12 -04:00
|
|
|
|
}
|
|
|
|
|
|
StreamTarget::Autonomous => {
|
2026-04-08 15:47:21 -04:00
|
|
|
|
self.agent.push_node(AstNode::dmn(text)).await;
|
2026-04-06 21:48:12 -04:00
|
|
|
|
}
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
// Compact if over budget before sending
|
|
|
|
|
|
let threshold = compaction_threshold(&self.config.app) as usize;
|
2026-04-08 15:47:21 -04:00
|
|
|
|
if self.agent.context.lock().await.tokens() > threshold {
|
|
|
|
|
|
self.agent.compact().await;
|
|
|
|
|
|
self.agent.state.lock().await.notify("compacted");
|
2026-04-06 21:48:12 -04:00
|
|
|
|
}
|
2026-04-06 20:34:51 -04:00
|
|
|
|
}
|
|
|
|
|
|
self.shared.lock().unwrap().turn_active = true;
|
|
|
|
|
|
let _ = self.turn_watch.send(true);
|
2026-04-10 03:20:12 -04:00
|
|
|
|
let _ = self.conscious_active.send(true);
|
2026-04-06 20:34:51 -04:00
|
|
|
|
let agent = self.agent.clone();
|
|
|
|
|
|
let result_tx = self.turn_tx.clone();
|
|
|
|
|
|
self.shared.lock().unwrap().turn_handle = Some(tokio::spawn(async move {
|
|
|
|
|
|
let result = Agent::turn(agent).await;
|
|
|
|
|
|
let _ = result_tx.send((result, target)).await;
|
|
|
|
|
|
}));
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-05 04:42:50 -04:00
|
|
|
|
pub async fn shutdown(&self) {
|
|
|
|
|
|
if let Some(handle) = self.shared.lock().unwrap().turn_handle.take() { handle.abort(); }
|
2026-04-04 02:46:32 -04:00
|
|
|
|
}
|
mind: split event loop — Mind and UI run independently
Mind::run() owns the cognitive event loop: user input, turn results,
DMN ticks, hotkey actions. The UI event loop (user/event_loop.rs) owns
the terminal: key events, render ticks, channel status display.
They communicate through channels: UI sends MindMessage (user input,
hotkey actions) to Mind. Mind sends UiMessage (status, info) to UI.
UI reads shared state (active tools, context) directly for rendering.
Removes direct coupling between Mind and App:
- cycle_reasoning no longer takes &mut App
- AdjustSampling updates agent only, UI reads from shared state
- /quit handled by UI directly, not routed through Mind
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 02:11:32 -04:00
|
|
|
|
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 03:05:28 -04:00
|
|
|
|
/// Mind event loop — locks MindState, calls state methods, executes actions.
|
mind: split event loop — Mind and UI run independently
Mind::run() owns the cognitive event loop: user input, turn results,
DMN ticks, hotkey actions. The UI event loop (user/event_loop.rs) owns
the terminal: key events, render ticks, channel status display.
They communicate through channels: UI sends MindMessage (user input,
hotkey actions) to Mind. Mind sends UiMessage (status, info) to UI.
UI reads shared state (active tools, context) directly for rendering.
Removes direct coupling between Mind and App:
- cycle_reasoning no longer takes &mut App
- AdjustSampling updates agent only, UI reads from shared state
- /quit handled by UI directly, not routed through Mind
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 02:11:32 -04:00
|
|
|
|
pub async fn run(
|
2026-04-05 04:42:50 -04:00
|
|
|
|
&self,
|
2026-04-05 03:34:43 -04:00
|
|
|
|
mut input_rx: tokio::sync::mpsc::UnboundedReceiver<MindCommand>,
|
mind: split event loop — Mind and UI run independently
Mind::run() owns the cognitive event loop: user input, turn results,
DMN ticks, hotkey actions. The UI event loop (user/event_loop.rs) owns
the terminal: key events, render ticks, channel status display.
They communicate through channels: UI sends MindMessage (user input,
hotkey actions) to Mind. Mind sends UiMessage (status, info) to UI.
UI reads shared state (active tools, context) directly for rendering.
Removes direct coupling between Mind and App:
- cycle_reasoning no longer takes &mut App
- AdjustSampling updates agent only, UI reads from shared state
- /quit handled by UI directly, not routed through Mind
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 02:11:32 -04:00
|
|
|
|
mut turn_rx: mpsc::Receiver<(Result<TurnResult>, StreamTarget)>,
|
|
|
|
|
|
) {
|
2026-04-12 20:27:42 -04:00
|
|
|
|
// Spawn lock stats logger
|
|
|
|
|
|
tokio::spawn(async {
|
|
|
|
|
|
let path = dirs::home_dir().unwrap_or_default()
|
|
|
|
|
|
.join(".consciousness/lock-stats.json");
|
|
|
|
|
|
let mut interval = tokio::time::interval(std::time::Duration::from_secs(1));
|
|
|
|
|
|
loop {
|
|
|
|
|
|
interval.tick().await;
|
|
|
|
|
|
let stats = crate::locks::lock_stats();
|
|
|
|
|
|
if stats.is_empty() { continue; }
|
|
|
|
|
|
let json: Vec<serde_json::Value> = stats.iter()
|
|
|
|
|
|
.map(|(loc, s)| serde_json::json!({
|
|
|
|
|
|
"location": loc,
|
|
|
|
|
|
"count": s.count,
|
|
|
|
|
|
"total_ms": s.total_ns as f64 / 1_000_000.0,
|
|
|
|
|
|
"avg_ms": s.avg_ns as f64 / 1_000_000.0,
|
|
|
|
|
|
"max_ms": s.max_ns as f64 / 1_000_000.0,
|
|
|
|
|
|
}))
|
|
|
|
|
|
.collect();
|
|
|
|
|
|
let _ = std::fs::write(&path, serde_json::to_string_pretty(&json).unwrap_or_default());
|
|
|
|
|
|
}
|
|
|
|
|
|
});
|
|
|
|
|
|
|
2026-04-09 00:21:46 -04:00
|
|
|
|
let mut sub_handle: Option<tokio::task::JoinHandle<()>> = None;
|
learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:
1. Finetune scoring reports through the standard agent activity system
instead of a separate finetune_progress String. The previous design
ran an independent progress field that forced a cross-lock dance and
bespoke UI plumbing. start_finetune_scoring now uses start_activity
+ activity.update, so the usual status line and notifications
capture scoring progress uniformly with other background work.
2. MindState gains a FinetuneScoringStats snapshot (responses seen,
above threshold, max divergence, error). The F6 empty screen shows
this instead of a loading message — so after a scoring run that
produced zero candidates, you can see *why* (e.g., max_divergence
below threshold).
3. The divergence threshold is configurable from F6 via +/- hotkeys
(scales by 10×) and persisted to ~/.consciousness/config.json5 via
config_writer::set_learn_threshold. AppConfig grows a learn section
with a threshold field (default 1e-7).
Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:49:26 -04:00
|
|
|
|
|
|
|
|
|
|
// Start finetune scoring at startup (scores existing conversation)
|
|
|
|
|
|
if !self.config.no_agents {
|
mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.
Introduce the MindTriggered trait:
pub trait MindTriggered {
fn trigger(&self);
}
Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):
subconscious::learn::MemoryScoring (Score, ScoreFull)
subconscious::learn::FinetuneScoring (ScoreFinetune)
Mind holds one of each and dispatches in one line:
MindCommand::Score => self.memory_scoring.trigger(),
MindCommand::ScoreFull => self.memory_scoring.trigger_full(),
MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),
Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
Falls out:
- BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
to their slice of MindState and call agent.state.changed.notify_one()
to wake the UI. The bg_rx arm in Mind's select loop is gone.
- agent.state.memory_scoring_in_flight was duplicating
shared.scoring_in_flight via BgEvent routing; now the JoinHandle
alone tells us, and shared.scoring_in_flight is written directly
by the task for the UI.
- start_memory_scoring / start_full_scoring / start_finetune_scoring
methods on Mind are deleted; Mind no longer knows the setup shape
of any scoring flow.
- FinetuneScoringStats moves from mind/ to subconscious/learn.rs
next to the function that produces it.
No behavior change — same flows, same trigger points, same semantics.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:57:23 -04:00
|
|
|
|
self.finetune_scoring.trigger();
|
learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:
1. Finetune scoring reports through the standard agent activity system
instead of a separate finetune_progress String. The previous design
ran an independent progress field that forced a cross-lock dance and
bespoke UI plumbing. start_finetune_scoring now uses start_activity
+ activity.update, so the usual status line and notifications
capture scoring progress uniformly with other background work.
2. MindState gains a FinetuneScoringStats snapshot (responses seen,
above threshold, max divergence, error). The F6 empty screen shows
this instead of a loading message — so after a scoring run that
produced zero candidates, you can see *why* (e.g., max_divergence
below threshold).
3. The divergence threshold is configurable from F6 via +/- hotkeys
(scales by 10×) and persisted to ~/.consciousness/config.json5 via
config_writer::set_learn_threshold. AppConfig grows a learn section
with a threshold field (default 1e-7).
Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:49:26 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
mind: split event loop — Mind and UI run independently
Mind::run() owns the cognitive event loop: user input, turn results,
DMN ticks, hotkey actions. The UI event loop (user/event_loop.rs) owns
the terminal: key events, render ticks, channel status display.
They communicate through channels: UI sends MindMessage (user input,
hotkey actions) to Mind. Mind sends UiMessage (status, info) to UI.
UI reads shared state (active tools, context) directly for rendering.
Removes direct coupling between Mind and App:
- cycle_reasoning no longer takes &mut App
- AdjustSampling updates agent only, UI reads from shared state
- /quit handled by UI directly, not routed through Mind
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 02:11:32 -04:00
|
|
|
|
loop {
|
2026-04-09 00:32:32 -04:00
|
|
|
|
let (timeout, has_input) = {
|
2026-04-09 00:21:46 -04:00
|
|
|
|
let me = self.shared.lock().unwrap();
|
2026-04-09 00:32:32 -04:00
|
|
|
|
(me.dmn.interval(), me.has_pending_input())
|
2026-04-09 00:21:46 -04:00
|
|
|
|
};
|
mind: split event loop — Mind and UI run independently
Mind::run() owns the cognitive event loop: user input, turn results,
DMN ticks, hotkey actions. The UI event loop (user/event_loop.rs) owns
the terminal: key events, render ticks, channel status display.
They communicate through channels: UI sends MindMessage (user input,
hotkey actions) to Mind. Mind sends UiMessage (status, info) to UI.
UI reads shared state (active tools, context) directly for rendering.
Removes direct coupling between Mind and App:
- cycle_reasoning no longer takes &mut App
- AdjustSampling updates agent only, UI reads from shared state
- /quit handled by UI directly, not routed through Mind
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-05 02:11:32 -04:00
|
|
|
|
|
2026-04-05 03:34:43 -04:00
|
|
|
|
let mut cmds = Vec::new();
|
2026-04-13 22:38:01 -04:00
|
|
|
|
#[allow(unused_assignments)]
|
|
|
|
|
|
let mut _dmn_expired = false;
|
2026-04-05 03:34:43 -04:00
|
|
|
|
|
mind: split event loop — Mind and UI run independently
2026-04-05 02:11:32 -04:00
|
|
|
|
tokio::select! {
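// `biased` below makes select! poll arms in source order instead of
// randomly, so pending UI input wins over turn results and the timer.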
|
|
|
|
|
|
biased;
|
|
|
|
|
|
|
2026-04-05 16:18:10 -04:00
|
|
|
|
cmd = input_rx.recv() => {
|
|
|
|
|
|
match cmd {
|
|
|
|
|
|
Some(cmd) => cmds.push(cmd),
|
|
|
|
|
|
None => break, // UI shut down
|
|
|
|
|
|
}
|
mind: split event loop — Mind and UI run independently
2026-04-05 02:11:32 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
Some((result, target)) = turn_rx.recv() => {
|
2026-04-10 03:20:12 -04:00
|
|
|
|
let _ = self.conscious_active.send(false);
|
2026-04-09 00:21:46 -04:00
|
|
|
|
let model_switch = {
|
|
|
|
|
|
let mut s = self.shared.lock().unwrap();
|
|
|
|
|
|
s.turn_handle = None;
|
|
|
|
|
|
s.complete_turn(&result, target)
|
|
|
|
|
|
};
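// The block above returns model_switch with the guard already dropped;
// the std mutex must never be held across the awaits below.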
|
mind: move state to MindState, Mind becomes thin event loop
MindState (behind Arc<Mutex<>>) holds all cognitive state: DMN,
turn tracking, pending input, scoring, error counters. Pure state
transition methods (take_pending_input, complete_turn, dmn_tick)
return Action values instead of directly spawning turns.
Mind is now just the event loop: lock MindState, call state methods,
execute returned actions (spawn turns, send UiMessages). No state
of its own except agent handle, turn handle, and watch channel.
mind/mod.rs: 957 → 586 lines.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
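The lock-briefly-then-execute shape in miniature, with stubbed types;
only take_pending_input and the Arc<Mutex<MindState>> layout come from
this message, the rest is illustrative.

    use std::sync::{Arc, Mutex};

    struct MindState { pending_input: Option<String> }

    impl MindState {
        // Pure transition: mutate state, return what the caller should
        // do next. Nothing is spawned while the lock is held.
        fn take_pending_input(&mut self) -> Option<String> {
            self.pending_input.take()
        }
    }

    async fn start_turn(_prompt: &str) { /* spawn the turn */ }

    async fn step(shared: Arc<Mutex<MindState>>) {
        // The guard is a temporary, dropped at the end of this
        // statement, so it is never held across the await below.
        let pending = shared.lock().unwrap().take_pending_input();
        if let Some(text) = pending {
            start_turn(&text).await;
        }
    }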
2026-04-05 03:05:28 -04:00
|
|
|
|
let _ = self.turn_watch.send(false);
|
|
|
|
|
|
|
|
|
|
|
|
if let Some(name) = model_switch {
|
2026-04-05 22:18:07 -04:00
|
|
|
|
crate::user::chat::cmd_switch_model(&self.agent, &name).await;
|
mind: move state to MindState, Mind becomes thin event loop
2026-04-05 03:05:28 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-05 03:34:43 -04:00
|
|
|
|
cmds.push(MindCommand::Compact);
|
|
|
|
|
|
if !self.config.no_agents {
|
|
|
|
|
|
cmds.push(MindCommand::Score);
|
2026-04-16 00:31:39 -04:00
|
|
|
|
cmds.push(MindCommand::ScoreFinetune);
|
2026-04-05 03:34:43 -04:00
|
|
|
|
}
|
mind: split event loop — Mind and UI run independently
2026-04-05 02:11:32 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-13 22:38:01 -04:00
|
|
|
|
_ = tokio::time::sleep(timeout), if !has_input => _dmn_expired = true,
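// Timer arm is only armed while no user input is pending.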
|
mind: split event loop — Mind and UI run independently
2026-04-05 02:11:32 -04:00
|
|
|
|
}
|
2026-04-05 03:34:43 -04:00
|
|
|
|
|
2026-04-07 02:37:11 -04:00
|
|
|
|
if !self.config.no_agents {
|
2026-04-09 00:21:46 -04:00
|
|
|
|
if sub_handle.as_ref().map_or(true, |h| h.is_finished()) {
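// Run at most one subconscious pass at a time: respawn only once the
// previous collect-then-trigger task has finished (or none has run yet).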
|
|
|
|
|
|
let sub = self.subconscious.clone();
|
|
|
|
|
|
let agent = self.agent.clone();
|
|
|
|
|
|
sub_handle = Some(tokio::spawn(async move {
|
|
|
|
|
|
let mut s = sub.lock().await;
|
|
|
|
|
|
s.collect_results(&agent).await;
|
|
|
|
|
|
s.trigger(&agent).await;
|
|
|
|
|
|
}));
|
|
|
|
|
|
}
|
2026-04-07 02:37:11 -04:00
|
|
|
|
}
|
|
|
|
|
|
|
2026-04-06 20:34:51 -04:00
|
|
|
|
// Check for pending user input → push to agent context and start turn
|
|
|
|
|
|
let pending = self.shared.lock().unwrap().take_pending_input();
|
|
|
|
|
|
if let Some(text) = pending {
|
|
|
|
|
|
self.start_turn(&text, StreamTarget::Conversation).await;
|
|
|
|
|
|
}
|
2026-04-09 18:08:07 -04:00
|
|
|
|
/*
|
|
|
|
|
|
else if _dmn_expired {
|
|
|
|
|
|
let tick = self.shared.lock().unwrap().dmn_tick();
|
|
|
|
|
|
if let Some((prompt, target)) = tick {
|
|
|
|
|
|
self.start_turn(&prompt, target).await;
|
|
|
|
|
|
}
|
|
|
|
|
|
}
|
|
|
|
|
|
*/
|
2026-04-05 03:34:43 -04:00
|
|
|
|
|
|
|
|
|
|
self.run_commands(cmds).await;
|
mind: split event loop — Mind and UI run independently
2026-04-05 02:11:32 -04:00
|
|
|
|
}
|
|
|
|
|
|
}
|
2026-04-04 02:46:32 -04:00
|
|
|
|
}
|