learn: stream candidates to UI, update status during alternate gen

With the timestamp filter gone (previous commit), score_finetune_candidates
started returning the actual ~100+ candidates per scoring run. The
existing code generated alternates for all of them in a tight loop
before returning anything, leaving the status line stuck on
"finetune: scoring N responses..." for ~100s of seconds while the
B200 was pegged.

Two fixes:

1. score_finetune_candidates now takes an ActivityGuard and a callback.
   Candidates are emitted one at a time as they complete (after their
   alternate when alternate generation is enabled, immediately otherwise). The activity
   status updates to "finetune: generating alternate N/M" during the
   alternate-gen phase so it's clear what's happening.

2. BgEvent::FinetuneCandidates(Vec<_>) → FinetuneCandidate(one). Each
   emitted candidate is pushed onto shared.finetune_candidates; the UI
   tick picks it up and renders it on the next frame. start_finetune_scoring
   clears the previous run's list at the top so each run is fresh.
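The per-item status pattern in fix 1 can be sketched with a synchronous stand-in for the async ActivityGuard (the struct and `generate_alternates` here are illustrative, not the crate's real API): the status line is rewritten once per alternate instead of sitting on a single message for the whole phase.

```rust
// Synchronous stand-in for the async ActivityGuard in this commit (illustrative).
struct ActivityGuard {
    status: String,
}

impl ActivityGuard {
    fn update(&mut self, s: String) {
        self.status = s; // the real guard also redraws the UI status line
    }
}

// Update the status once per item during a long phase, rather than leaving
// one "scoring N responses..." message up while every alternate is generated.
fn generate_alternates(activity: &mut ActivityGuard, total: usize) -> Vec<String> {
    let mut out = Vec::with_capacity(total);
    for i in 0..total {
        activity.update(format!("finetune: generating alternate {}/{}", i + 1, total));
        out.push(format!("alternate-{}", i + 1)); // placeholder for the model call
    }
    out
}

fn main() {
    let mut activity = ActivityGuard { status: String::new() };
    let alts = generate_alternates(&mut activity, 3);
    assert_eq!(alts.len(), 3);
    assert_eq!(activity.status, "finetune: generating alternate 3/3");
    println!("{}", activity.status);
}
```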

Return type changes from (Vec, f64) → (usize, f64) — the count above
threshold is all the caller still needs since the candidates stream
through the callback.
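The shape of the change can be sketched with a closure and a channel (`score_candidates` and `Candidate` below are illustrative stand-ins, not the crate's real API): each above-threshold candidate is pushed through the callback as soon as it is scored, and the function returns only the count and max divergence.

```rust
use std::sync::mpsc;

// Illustrative stand-in for learn::FinetuneCandidate.
#[derive(Debug, Clone, PartialEq)]
struct Candidate {
    divergence: f64,
}

// Streaming scorer: emits each above-threshold candidate through `emit`
// as it is scored, and returns (count_above_threshold, max_divergence)
// instead of collecting a Vec for the caller.
fn score_candidates<F: FnMut(Candidate)>(
    scores: &[f64],
    threshold: f64,
    mut emit: F,
) -> (usize, f64) {
    let mut above = 0;
    let mut max_div = f64::NEG_INFINITY;
    for &d in scores {
        max_div = max_div.max(d);
        if d >= threshold {
            above += 1;
            emit(Candidate { divergence: d }); // streamed immediately, not batched
        }
    }
    (above, max_div)
}

fn main() {
    // The caller wires the callback to a channel, mirroring the
    // BgEvent::FinetuneCandidate send in the diff below.
    let (tx, rx) = mpsc::channel();
    let (above, max_div) = score_candidates(&[0.2, 0.9, 0.7], 0.5, |c| {
        let _ = tx.send(c);
    });
    drop(tx);
    let streamed: Vec<Candidate> = rx.iter().collect();
    assert_eq!(above, 2);
    assert_eq!(streamed.len(), 2);
    assert_eq!(max_div, 0.9);
    println!("streamed {} candidates, max divergence {}", above, max_div);
}
```

The UI side can drain its end of the channel on each tick and push into the shared list, which is why the count is all the original caller still needs.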

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Kent Overstreet  2026-04-16 12:44:25 -04:00
commit 343e43afab (parent d5a3398cc9)
2 changed files with 36 additions and 19 deletions


@@ -320,7 +320,7 @@ impl MindState {
 /// Background task completion events.
 enum BgEvent {
     ScoringDone,
-    FinetuneCandidates(Vec<learn::FinetuneCandidate>),
+    FinetuneCandidate(learn::FinetuneCandidate),
 }
 // --- Mind: cognitive state machine ---
@@ -656,7 +656,12 @@ impl Mind {
     /// once this runs continuously, we'll just train whatever lands at full
     /// context without filtering.
     pub fn start_finetune_scoring(&self) {
-        let threshold = self.shared.lock().unwrap().learn_threshold;
+        let threshold = {
+            let mut s = self.shared.lock().unwrap();
+            // Clear the previous run's candidates so this run's candidates stream in fresh.
+            s.finetune_candidates.clear();
+            s.learn_threshold
+        };
         let agent = self.agent.clone();
         let bg_tx = self.bg_tx.clone();
@@ -678,12 +683,12 @@ impl Mind {
             activity.update(format!("finetune: scoring {} responses...", responses_considered)).await;
+            let bg_tx_cb = bg_tx.clone();
             let stats = match learn::score_finetune_candidates(
-                &context, score_count, &client, threshold,
+                &context, score_count, &client, threshold, &activity,
+                |c| { let _ = bg_tx_cb.send(BgEvent::FinetuneCandidate(c)); },
             ).await {
-                Ok((candidates, max_div)) => {
-                    let above_threshold = candidates.len();
-                    let _ = bg_tx.send(BgEvent::FinetuneCandidates(candidates));
+                Ok((above_threshold, max_div)) => {
                     FinetuneScoringStats {
                         responses_considered,
                         above_threshold,
@@ -801,8 +806,8 @@ impl Mind {
             BgEvent::ScoringDone => {
                 self.shared.lock().unwrap().scoring_in_flight = false;
             }
-            BgEvent::FinetuneCandidates(candidates) => {
-                self.shared.lock().unwrap().finetune_candidates = candidates;
+            BgEvent::FinetuneCandidate(c) => {
+                self.shared.lock().unwrap().finetune_candidates.push(c);
             }
         }
     }