daemon: rework consolidation pipeline and add graph health metrics

Replace the monolithic consolidate job with individual agent jobs
(replay, linker, separator, transfer, health) that run sequentially
and store per-agent reports. The daily pipeline now runs in multiple
phases: agent runs → apply actions → link orphans → cap degree →
digest → digest links → knowledge loop.
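The phase ordering above can be sketched roughly as follows. This is an illustrative sketch only: the `Phase` alias, `run_daily_pipeline()`, and the `noop` stubs are assumptions, not the daemon's actual code.

```rust
// Rough sketch of the multi-phase daily pipeline ordering described above.
// Phase names mirror the commit message; everything else is hypothetical.
type Phase = (&'static str, fn() -> Result<(), String>);

fn noop() -> Result<(), String> { Ok(()) }

fn run_daily_pipeline() -> Result<Vec<&'static str>, String> {
    let phases: [Phase; 7] = [
        ("agent runs", noop), // replay, linker, separator, transfer, health
        ("apply actions", noop),
        ("link orphans", noop),
        ("cap degree", noop),
        ("digest", noop),
        ("digest links", noop),
        ("knowledge loop", noop),
    ];
    let mut done = Vec::new();
    for (name, run) in phases {
        run()?; // phases run strictly in order; a failure stops the pipeline
        done.push(name);
    }
    Ok(done)
}

fn main() {
    println!("{:?}", run_daily_pipeline().unwrap());
}
```

The point of the shape is that each phase is a fallible step and the pipeline is just an ordered fold over them, rather than one monolithic job.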

Add a GraphHealth struct with graph metrics (alpha, gini, clustering
coefficient, episodic ratio) computed during health checks and shown
in `poc-memory daemon status`. Use the cached metrics to build the
consolidation plan without expensive O(n²) interference detection.
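For illustration, the metrics might look like the sketch below. The field names and the `gini()` helper are assumptions (the commit only names the metrics); the Gini computation itself is the standard sorted-rank formula over the degree distribution.

```rust
// Hypothetical sketch of the GraphHealth metrics named above; field names
// and gini() are illustrative, not the actual daemon code.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
struct GraphHealth {
    alpha: f64,          // power-law exponent of the degree distribution
    gini: f64,           // inequality of the degree distribution
    clustering: f64,     // average clustering coefficient
    episodic_ratio: f64, // fraction of episodic nodes in the graph
}

/// Gini coefficient of a degree distribution: 0 = perfectly uniform,
/// approaching 1 = all edges concentrated on a few hub nodes.
fn gini(degrees: &mut [u64]) -> f64 {
    degrees.sort_unstable();
    let n = degrees.len() as f64;
    let total: u64 = degrees.iter().sum();
    if total == 0 {
        return 0.0;
    }
    let weighted: f64 = degrees
        .iter()
        .enumerate()
        .map(|(i, &d)| (i as f64 + 1.0) * d as f64)
        .sum();
    (2.0 * weighted) / (n * total as f64) - (n + 1.0) / n
}

fn main() {
    let mut uniform = vec![3u64, 3, 3, 3];
    let mut skewed = vec![0u64, 0, 0, 12];
    // uniform degrees give 0.0; a single hub pushes the value toward 1
    println!("{:.3} {:.3}", gini(&mut uniform), gini(&mut skewed));
}
```

Caching values like these at health-check time is what lets the planner consult cheap scalars instead of re-running pairwise interference detection over the whole graph.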

Add an RPC consolidate command to trigger consolidation over the
daemon socket. Harden the session watcher: skip transcripts with zero
segments and improve migration error handling.
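A socket-triggered command of this kind typically amounts to writing a one-line request and reading an acknowledgement. The sketch below shows that pattern over a Unix socket; the socket path, wire format, and the in-process stand-in "daemon" are all assumptions for illustration, not the actual `rpc_consolidate` implementation.

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

// Hedged sketch: send a "consolidate" request over an already-connected
// daemon socket and return the daemon's reply. Wire format is assumed.
fn rpc_consolidate(stream: &mut UnixStream) -> std::io::Result<String> {
    stream.write_all(b"consolidate\n")?;
    let mut ack = String::new();
    stream.read_to_string(&mut ack)?; // daemon closes the connection after replying
    Ok(ack)
}

fn main() -> std::io::Result<()> {
    // Stand-in daemon for the sketch: accept one connection, ack the command.
    let path = "/tmp/poc-memory-demo.sock";
    let _ = std::fs::remove_file(path);
    let listener = UnixListener::bind(path)?;
    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut conn, _) = listener.accept()?;
        let mut buf = [0u8; 64];
        let n = conn.read(&mut buf)?;
        if &buf[..n] == b"consolidate\n" {
            conn.write_all(b"ok\n")?;
        }
        Ok(()) // dropping conn closes the socket, signalling EOF to the client
    });
    let mut stream = UnixStream::connect(path)?;
    let ack = rpc_consolidate(&mut stream)?;
    println!("{}", ack.trim());
    server.join().unwrap()?;
    Ok(())
}
```

Routing the trigger through the existing daemon socket (rather than spawning a second consolidator process) keeps a single owner of the pipeline state.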

Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Kent Overstreet 2026-03-09 17:02:01 -04:00
parent 8eb6308760
commit 53e6b32cb4
4 changed files with 426 additions and 61 deletions

@@ -2182,6 +2182,7 @@ fn cmd_daemon(sub: Option<&str>, args: &[String]) -> Result<(), String> {
             daemon::show_log(job, lines)
         }
         Some("install") => daemon::install_service(),
+        Some("consolidate") => daemon::rpc_consolidate(),
         Some(other) => Err(format!("unknown daemon subcommand: {}", other)),
     }
 }