daemon: rework consolidation pipeline and add graph health metrics

Replace the monolithic consolidate job with individual agent jobs
(replay, linker, separator, transfer, health) that run sequentially
and store their reports. The daily pipeline is now multi-phase: agent
runs → apply actions → link orphans → cap degree → digest → digest
links → knowledge loop.
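
The sequential agent phase can be sketched roughly as below. This is illustrative only: `AgentKind`, `Report`, `run_agent`, and `daily_pipeline` are hypothetical names, not the daemon's actual API.

```rust
// Hypothetical sketch of the sequential agent-job phase; names are
// illustrative, not the daemon's real types.
#[derive(Clone, Copy, Debug, PartialEq)]
enum AgentKind { Replay, Linker, Separator, Transfer, Health }

#[derive(Debug)]
struct Report { agent: AgentKind, summary: String }

fn run_agent(agent: AgentKind) -> Report {
    // Placeholder: each agent would inspect the graph and emit findings.
    Report { agent, summary: format!("{agent:?} ran") }
}

fn daily_pipeline() -> Vec<Report> {
    // Phase 1: run each agent in order and keep its report. Later
    // phases (apply actions, link orphans, cap degree, digests,
    // knowledge loop) would consume these stored reports.
    [AgentKind::Replay, AgentKind::Linker, AgentKind::Separator,
     AgentKind::Transfer, AgentKind::Health]
        .into_iter()
        .map(run_agent)
        .collect()
}

fn main() {
    for r in daily_pipeline() {
        println!("{:?}: {}", r.agent, r.summary);
    }
}
```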

Add a GraphHealth struct with graph metrics (alpha, gini, clustering
coefficient, episodic ratio) computed during health checks and
displayed in `poc-memory daemon status`. Use the cached metrics to
build the consolidation plan without expensive O(n²) interference
detection.
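
A minimal sketch of what these metrics might look like, assuming the gini field measures inequality of the node-degree distribution; the field layout and the computation below are assumptions, only the metric names come from this commit.

```rust
// Assumed shape of the health metrics; only the metric names appear
// in the commit message.
#[derive(Default, Debug)]
pub struct GraphHealth {
    pub alpha: f64,           // power-law exponent of the degree distribution
    pub gini: f64,            // inequality of edge degrees across nodes
    pub clustering: f64,      // average clustering coefficient
    pub episodic_ratio: f64,  // fraction of episodic vs. semantic nodes
}

/// Gini coefficient of a degree distribution: 0.0 means edges are
/// spread evenly, values near 1.0 mean a few hubs hold most edges.
fn gini(degrees: &mut [u64]) -> f64 {
    degrees.sort_unstable();
    let n = degrees.len() as f64;
    let sum: u64 = degrees.iter().sum();
    if sum == 0 {
        return 0.0;
    }
    // Standard formula on sorted values: G = 2·Σ(i·x_i)/(n·Σx) − (n+1)/n
    let weighted: f64 = degrees
        .iter()
        .enumerate()
        .map(|(i, &d)| (i as f64 + 1.0) * d as f64)
        .sum();
    (2.0 * weighted) / (n * sum as f64) - (n + 1.0) / n
}

fn main() {
    let health = GraphHealth {
        gini: gini(&mut [0u64, 0, 0, 4]),
        ..Default::default()
    };
    println!("{health:?}");
}
```

Caching a summary like this is what lets the planner skip pairwise interference detection on every run: the plan is derived from a handful of scalars instead of an O(n²) scan.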

Add an RPC `consolidate` command to trigger consolidation via the
control socket. Harden the session watcher: skip transcripts with
zero segments and improve migration error handling.
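
A client-side trigger might look like the sketch below. The wire protocol, the socket path, and `trigger_consolidate` are all assumptions for illustration; the diff does not show the daemon's actual RPC format.

```rust
// Illustrative only: assumes a line-oriented command protocol over a
// Unix socket. The real daemon protocol is not shown in this commit.
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

fn trigger_consolidate(socket_path: &str) -> std::io::Result<String> {
    let mut stream = UnixStream::connect(socket_path)?;
    // Send the RPC command; assume the daemon replies with one status line.
    stream.write_all(b"consolidate\n")?;
    let mut reply = String::new();
    BufReader::new(stream).read_line(&mut reply)?;
    Ok(reply.trim_end().to_string())
}

fn main() {
    // Hypothetical socket path, for illustration only.
    match trigger_consolidate("/run/poc-memory/daemon.sock") {
        Ok(reply) => println!("daemon replied: {reply}"),
        Err(e) => eprintln!("could not reach daemon: {e}"),
    }
}
```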

Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Kent Overstreet 2026-03-09 17:02:01 -04:00
parent 8eb6308760
commit 53e6b32cb4
4 changed files with 426 additions and 61 deletions


@@ -9,6 +9,7 @@ mod rewrite;
pub use scoring::{
ReplayItem,
ConsolidationPlan,
consolidation_priority,
replay_queue, replay_queue_with_graph,
detect_interference,


@@ -164,6 +164,7 @@ pub fn detect_interference(
}
/// Agent allocation from the control loop
#[derive(Default)]
pub struct ConsolidationPlan {
pub replay_count: usize,
pub linker_count: usize,