config: global writable AppConfig; learn settings live there

Runtime-mutable settings (F6's threshold knob, the generate-alternates
toggle, anything else that comes along) were ending up as mirrored
fields on MindState — each new config setting grew MindState::new's
signature and added a clone+sync path. Wrong home. MindState is
ephemeral session state, not a config projection.

Give AppConfig the same treatment the memory Config has: install it
into a global RwLock<AppConfig> at startup via load_app, read through
config::app() (returns a read guard), mutate through update_app. The
config_writer functions now write to disk AND update the cache
atomically, so the one-stop-shop call keeps both in sync.
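The shape of that pattern, sketched minimally (struct fields and function names beyond `load_app`/`app`/`update_app` and the `learn` settings are assumptions, not the actual code):

```rust
use std::sync::{OnceLock, RwLock, RwLockReadGuard};

// Assumed shape of the runtime-mutable settings.
#[derive(Clone, Default)]
pub struct LearnConfig {
    pub threshold: f64,
    pub generate_alternates: bool,
}

#[derive(Clone, Default)]
pub struct AppConfig {
    pub learn: LearnConfig,
}

static APP: OnceLock<RwLock<AppConfig>> = OnceLock::new();

/// Install the config into the global exactly once at startup.
pub fn load_app(cfg: AppConfig) {
    APP.set(RwLock::new(cfg))
        .unwrap_or_else(|_| panic!("load_app called twice"));
}

/// Short-lived read access; callers copy out what they need and must
/// not hold the guard across awaits.
pub fn app() -> RwLockReadGuard<'static, AppConfig> {
    APP.get().expect("load_app not called").read().unwrap()
}

/// Mutate the cached config in place; the config_writer side would
/// call this after persisting to disk so cache and file stay in sync.
pub fn update_app(f: impl FnOnce(&mut AppConfig)) {
    f(&mut APP.get().expect("load_app not called").write().unwrap());
}
```

Readers go through `app()` and drop the guard immediately; writers go through `update_app`, which takes the write lock only for the in-memory update.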

Also while in here:

- learn.generate_alternates moves from a sentinel file
  (~/.consciousness/cache/finetune-alternates, "exists = enabled")
  into the config under the learn section. On first run with this
  build, if the sentinel file still exists, Mind::new flips the
  config value to true and removes it. Drops
  alternates_enabled()/set_alternates().

- Default threshold 0.0000001 → 1.0. With the timestamp filter
  removed the previous value was letting essentially everything
  through; 1.0 is a sane "nothing gets through unless you actually
  want it" default.

- score_finetune_candidates takes generate_alternates as a parameter
  instead of reading a global — caller snapshots the config values
  once at the top of start_finetune_scoring so the async task
  doesn't need to hold the config read lock across awaits.

- MindState.learn_threshold / learn_generate_alternates gone; the
  SetLearn* command handlers now just delegate to config_writer.

Kent noted RwLock<Arc<AppConfig>> (the pattern used by the memory
Config global) is pointless here — nobody needs a snapshot-after-
release, reads are short — so this uses a plain RwLock<AppConfig>
and returns a read guard.
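The difference between the two patterns, as a sketch (type and function names here are illustrative, not the actual code):

```rust
use std::sync::{Arc, RwLock};

struct Cfg { threshold: f64 }

// RwLock<Arc<Cfg>> (the memory Config pattern): a reader clones the
// Arc under the lock and can keep that snapshot alive after release.
fn snapshot_after_release(lock: &RwLock<Arc<Cfg>>) -> Arc<Cfg> {
    lock.read().unwrap().clone() // clones the Arc, not the Cfg
}

// Plain RwLock<Cfg> (this commit): reads are short, so just copy the
// scalar out while the read guard is held; no Arc indirection needed.
fn read_now(lock: &RwLock<Cfg>) -> f64 {
    lock.read().unwrap().threshold // guard dropped at end of statement
}
```

The Arc buys you a cheap snapshot that outlives the lock; since nothing here needs that, the extra indirection is pure overhead.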

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Kent Overstreet 2026-04-16 12:53:22 -04:00
parent 343e43afab
commit 313f85f34a
5 changed files with 102 additions and 58 deletions


@@ -151,9 +151,6 @@ pub struct MindState {
pub finetune_candidates: Vec<learn::FinetuneCandidate>,
/// Last scoring run stats for UI display.
pub finetune_last_run: Option<FinetuneScoringStats>,
/// Divergence threshold for finetune scoring — mutable via F6 hotkeys
/// and persisted back to ~/.consciousness/config.json5.
pub learn_threshold: f64,
}
/// Stats from the last finetune scoring run.
@@ -189,7 +186,6 @@ impl Clone for MindState {
unc_idle_deadline: self.unc_idle_deadline,
finetune_candidates: self.finetune_candidates.clone(),
finetune_last_run: self.finetune_last_run.clone(),
learn_threshold: self.learn_threshold,
}
}
}
@@ -206,6 +202,8 @@ pub enum MindCommand {
ScoreFinetune,
/// Update the finetune divergence threshold and persist to config.
SetLearnThreshold(f64),
/// Toggle alternate-response generation during scoring; persist to config.
SetLearnGenerateAlternates(bool),
/// Abort current turn, kill processes
Interrupt,
/// Reset session
@@ -215,7 +213,7 @@ pub enum MindCommand {
}
impl MindState {
pub fn new(max_dmn_turns: u32, learn_threshold: f64) -> Self {
pub fn new(max_dmn_turns: u32) -> Self {
Self {
input: Vec::new(),
turn_active: false,
@@ -233,7 +231,6 @@ impl MindState {
unc_idle_deadline: Instant::now() + std::time::Duration::from_secs(60),
finetune_candidates: Vec::new(),
finetune_last_run: None,
learn_threshold,
}
}
@@ -363,9 +360,20 @@ impl Mind {
crate::agent::tools::tools(),
).await;
// Migrate legacy "file exists = enabled" sentinel for the
// generate-alternates flag into the config. One-shot; after this
// the sentinel is gone and the config is the source of truth.
let legacy_sentinel = dirs::home_dir().unwrap_or_default()
.join(".consciousness/cache/finetune-alternates");
if legacy_sentinel.exists() {
if !crate::config::app().learn.generate_alternates {
let _ = crate::config_writer::set_learn_generate_alternates(true);
}
let _ = std::fs::remove_file(&legacy_sentinel);
}
let shared = Arc::new(std::sync::Mutex::new(MindState::new(
config.app.dmn.max_turns,
config.app.learn.threshold,
)));
let (turn_watch, _) = tokio::sync::watch::channel(false);
let (conscious_active, _) = tokio::sync::watch::channel(false);
@@ -569,11 +577,16 @@ impl Mind {
self.start_finetune_scoring();
}
MindCommand::SetLearnThreshold(value) => {
self.shared.lock().unwrap().learn_threshold = value;
if let Err(e) = crate::config_writer::set_learn_threshold(value) {
dbglog!("[learn] failed to persist threshold {}: {:#}", value, e);
}
}
MindCommand::SetLearnGenerateAlternates(value) => {
if let Err(e) = crate::config_writer::set_learn_generate_alternates(value) {
dbglog!("[learn] failed to persist generate_alternates {}: {:#}",
value, e);
}
}
}
}
}
@@ -656,12 +669,14 @@ impl Mind {
/// once this runs continuously, we'll just train whatever lands at full
/// context without filtering.
pub fn start_finetune_scoring(&self) {
let threshold = {
let mut s = self.shared.lock().unwrap();
// Clear the previous run's candidates so this run's stream in fresh.
s.finetune_candidates.clear();
s.learn_threshold
// Snapshot the config values we need before spawning — the scoring
// task shouldn't hold the config read lock across async work.
let (threshold, gen_alternates) = {
let app = crate::config::app();
(app.learn.threshold, app.learn.generate_alternates)
};
// Clear the previous run's candidates so this run's stream is fresh.
self.shared.lock().unwrap().finetune_candidates.clear();
let agent = self.agent.clone();
let bg_tx = self.bg_tx.clone();
@@ -685,7 +700,8 @@ impl Mind {
let bg_tx_cb = bg_tx.clone();
let stats = match learn::score_finetune_candidates(
&context, score_count, &client, threshold, &activity,
&context, score_count, &client, threshold,
gen_alternates, &activity,
|c| { let _ = bg_tx_cb.send(BgEvent::FinetuneCandidate(c)); },
).await {
Ok((above_threshold, max_div)) => {