learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:

1. Finetune scoring reports through the standard agent activity system
   instead of a separate finetune_progress String. The previous design
   ran an independent progress field that forced a cross-lock dance and
   bespoke UI plumbing. start_finetune_scoring now uses start_activity +
   activity.update, so the usual status line and notifications capture
   scoring progress uniformly with other background work.

2. MindState gains a FinetuneScoringStats snapshot (responses seen,
   above threshold, max divergence, error). The F6 empty screen shows
   this instead of a loading message, so after a scoring run that
   produced zero candidates you can see *why* (e.g., max_divergence
   below threshold).

3. The divergence threshold is configurable from F6 via +/- hotkeys
   (scales by 10×) and persisted to ~/.consciousness/config.json5 via
   config_writer::set_learn_threshold. AppConfig grows a learn section
   with a threshold field (default 1e-7).

Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind-state sync. We fixed the locking hot paths that made
try_lock necessary, so lock().await is now the right choice. And
subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
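For (1), a minimal sketch of what the activity-based reporting could look like. The commit confirms only start_activity and activity.update; the types, signatures, and message strings here are assumptions:

    // Hypothetical shape of start_finetune_scoring after the change.
    // `agent.start_activity` is assumed to return a guard whose drop
    // ends the activity; `update` feeds the shared status line.
    async fn start_finetune_scoring(agent: &Agent, responses: Vec<Response>) {
        let activity = agent.start_activity("finetune scoring");
        for (i, _response) in responses.iter().enumerate() {
            activity.update(format!("scored {}/{}", i + 1, responses.len()));
            // ... compute divergence for this response ...
        }
        // Dropping `activity` clears the status line, so scoring shows
        // up exactly like any other background work.
    }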
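For (2), the snapshot as a plain struct, plus how it could be filled from the new (candidates, max_divergence) return value. The commit names the four quantities and the function; the field names, types, and surrounding code are assumptions:

    /// Result of the last scoring run, stored on MindState and rendered
    /// by the F6 empty screen in place of the old loading message.
    /// Field names/types are illustrative.
    #[derive(Debug, Clone, Default)]
    pub struct FinetuneScoringStats {
        pub responses_seen: usize,
        pub above_threshold: usize,
        pub max_divergence: f64,
        pub error: Option<String>,
    }

    // score_finetune_candidates now returns (candidates, max_divergence),
    // so a zero-candidate run can still report how close it came.
    let (candidates, max_divergence) =
        score_finetune_candidates(&responses, threshold).await?;
    mind.finetune_scoring_stats = FinetuneScoringStats {
        responses_seen: responses.len(),
        above_threshold: candidates.len(),
        max_divergence,
        error: None,
    };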
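For (3), a sketch of the hotkey handling. The 10× step, the config path, and config_writer::set_learn_threshold come from the commit; the key-event shape (crossterm-style KeyCode) and which key moves the threshold up versus down are assumptions:

    // On the F6 screen: each press moves the threshold by one decade.
    match key {
        KeyCode::Char('+') => config.learn.threshold *= 10.0,
        KeyCode::Char('-') => config.learn.threshold /= 10.0,
        _ => {}
    }
    // Persist to ~/.consciousness/config.json5 so the new value
    // survives restarts (writes the learn.threshold field).
    config_writer::set_learn_threshold(config.learn.threshold)?;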
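And the locking note, as a before/after sketch (tokio-style async mutex; the field name is an assumption):

    // Before: the per-tick unconscious/mind-state sync used try_lock()
    // and simply skipped the tick when the mutex was contended.
    // if let Ok(mind) = self.mind_state.try_lock() { /* sync */ }

    // After: the hot paths no longer hold the lock for long, so briefly
    // awaiting it each tick is correct and never drops a sync.
    let mind = self.mind_state.lock().await;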
parent: ac40c2cb98
commit: e5dd8312c7
5 changed files with 237 additions and 85 deletions
@@ -252,6 +252,8 @@ pub struct AppConfig {
     pub debug: bool,
     pub compaction: CompactionConfig,
     pub dmn: DmnConfig,
+    #[serde(default)]
+    pub learn: LearnConfig,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub memory_project: Option<PathBuf>,
     #[serde(default)]
@@ -323,6 +325,22 @@ pub struct DmnConfig {
     pub max_turns: u32,
 }
 
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct LearnConfig {
+    /// Divergence threshold — responses scoring above this become
+    /// fine-tuning candidates. Lower = more sensitive.
+    #[serde(default = "default_learn_threshold")]
+    pub threshold: f64,
+}
+
+fn default_learn_threshold() -> f64 { 0.0000001 }
+
+impl Default for LearnConfig {
+    fn default() -> Self {
+        Self { threshold: default_learn_threshold() }
+    }
+}
+
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct ModelConfig {
     /// Backend name ("anthropic" or "openrouter")
@@ -366,6 +384,7 @@ impl Default for AppConfig {
                 soft_threshold_pct: 80,
             },
             dmn: DmnConfig { max_turns: 20 },
+            learn: LearnConfig::default(),
             memory_project: None,
             models: HashMap::new(),
             default_model: String::new(),