poc-memory v0.4.0: graph-structured memory with consolidation pipeline
Rust core:
- Cap'n Proto append-only storage (nodes + relations)
- Graph algorithms: clustering coefficient, community detection,
schema fit, small-world metrics, interference detection
- BM25 text similarity with Porter stemming
- Spaced repetition replay queue
- Commands: search, init, health, status, graph, categorize,
link-add, link-impact, decay, consolidate-session, etc.
Python scripts:
- Episodic digest pipeline: daily/weekly/monthly-digest.py
- retroactive-digest.py for backfilling
- consolidation-agents.py: 3 parallel Sonnet agents
- apply-consolidation.py: structured action extraction + apply
- digest-link-parser.py: extract ~400 explicit links from digests
- content-promotion-agent.py: promote episodic obs to semantic files
- bulk-categorize.py: categorize all nodes via single Sonnet call
- consolidation-loop.py: multi-round automated consolidation
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-02-28 22:17:00 -05:00
// Append-only Cap'n Proto storage + derived KV cache
//
// Two log files are source of truth:
// nodes.capnp - ContentNode messages
// relations.capnp - Relation messages
//
// The Store struct is the derived cache: latest version per UUID,
// rebuilt from logs when stale. Three-tier load strategy:
// 1. rkyv mmap snapshot (snapshot.rkyv) — ~4ms deserialize
// 2. bincode cache (state.bin) — ~10ms
// 3. capnp log replay — ~40ms
// Staleness: log file sizes embedded in cache headers.
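The tiered fallback above can be sketched as follows; the `Source` enum and the loader signature are illustrative assumptions, not the real persist.rs API:

```rust
// Illustrative sketch of the three-tier load order (assumed names, not the
// real persist.rs functions): the first tier that yields Some wins, and
// log replay is the always-available fallback.
#[derive(Debug, PartialEq)]
enum Source {
    Snapshot, // tier 1: rkyv mmap snapshot, ~4ms
    Bincode,  // tier 2: bincode cache, ~10ms
    Replay,   // tier 3: capnp log replay, ~40ms
}

fn load(snapshot: Option<Source>, cache: Option<Source>) -> Source {
    // Each slower tier is consulted only if the faster ones were stale or missing.
    snapshot.or(cache).unwrap_or(Source::Replay)
}
```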
store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
- persist.rs (318 lines): load, save, replay, append, snapshot
— all disk IO and cache management
- ops.rs (300 lines): upsert, delete, modify, mark_used/wrong,
decay, fix_categories, cap_degree — all mutations
- mod.rs (356 lines): re-exports, key resolution, ingestion,
rendering, search — read-only operations
No behavioral changes; cargo check + full smoke test pass.
2026-03-03 16:40:32 -05:00
//
// Module layout:
// types.rs — Node, Relation, enums, capnp macros, path helpers
// parse.rs — markdown → MemoryUnit parsing
// view.rs — zero-copy read-only access (StoreView, MmapView)
// persist.rs — load, save, replay, append, snapshot (all disk IO)
// ops.rs — mutations (upsert, delete, decay, cap_degree, etc.)
// mod.rs — re-exports, key resolution, ingestion, rendering

mod types;
mod parse;
mod view;
mod persist;
mod ops;

// Re-export everything callers need
pub use types::*;
pub use parse::{MemoryUnit, parse_units};
pub use view::{StoreView, AnyView};
pub use persist::fsck;

use crate::graph::{self, Graph};
use std::fs;
use std::io::Write as IoWrite;
use std::path::Path;

use parse::classify_filename;

impl Store {
    pub fn build_graph(&self) -> Graph {
        graph::build_graph(self)
    }
    pub fn resolve_key(&self, target: &str) -> Result<String, String> {
        let normalized = if target.contains('#') {
            let parts: Vec<&str> = target.splitn(2, '#').collect();
            let file = if parts[0].ends_with(".md") {
                parts[0].to_string()
            } else {
                format!("{}.md", parts[0])
            };
            format!("{}#{}", file, parts[1])
        } else if target.ends_with(".md") {
            target.to_string()
        } else {
            format!("{}.md", target)
        };

        if self.nodes.contains_key(&normalized) {
            return Ok(normalized);
        }

        // Check redirects for moved sections (e.g. reflections.md split)
        if let Some(redirect) = self.resolve_redirect(&normalized) {
            if self.nodes.contains_key(&redirect) {
                return Ok(redirect);
            }
        }

        let matches: Vec<_> = self.nodes.keys()
            .filter(|k| k.to_lowercase().contains(&target.to_lowercase()))
            .cloned().collect();

        match matches.len() {
            0 => Err(format!("No entry for '{}'. Run 'init'?", target)),
            1 => Ok(matches[0].clone()),
            n if n <= 10 => {
                let list = matches.join("\n ");
                Err(format!("Ambiguous '{}'. Matches:\n {}", target, list))
            }
            n => Err(format!("Too many matches for '{}' ({}). Be more specific.", target, n)),
        }
    }
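The normalization step at the top of `resolve_key` can be exercised in isolation; `normalize_key` below is a hypothetical free-function extraction of that logic, not an API the module exposes:

```rust
// Hypothetical standalone copy of resolve_key's normalization: bare names
// get a .md extension, and "file#section" keeps its section id.
fn normalize_key(target: &str) -> String {
    if let Some((file, section)) = target.split_once('#') {
        let file = if file.ends_with(".md") {
            file.to_string()
        } else {
            format!("{}.md", file)
        };
        format!("{}#{}", file, section)
    } else if target.ends_with(".md") {
        target.to_string()
    } else {
        format!("{}.md", target)
    }
}
```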

    /// Redirect table for sections that moved between files.
    /// Like HTTP 301s — the old key resolves to the new location.
    fn resolve_redirect(&self, key: &str) -> Option<String> {
        // Sections moved from reflections.md to split files (2026-02-28)
        static REDIRECTS: &[(&str, &str)] = &[
            ("reflections.md#pearl-lessons", "reflections-reading.md#pearl-lessons"),
            ("reflections.md#banks-lessons", "reflections-reading.md#banks-lessons"),
            ("reflections.md#mother-night", "reflections-reading.md#mother-night"),
            ("reflections.md#zoom-navigation", "reflections-zoom.md#zoom-navigation"),
            ("reflections.md#independence-of-components", "reflections-zoom.md#independence-of-components"),
            ("reflections.md#dream-marathon-2", "reflections-dreams.md#dream-marathon-2"),
            ("reflections.md#dream-through-line", "reflections-dreams.md#dream-through-line"),
            ("reflections.md#orthogonality-universal", "reflections-dreams.md#orthogonality-universal"),
            ("reflections.md#constraints-constitutive", "reflections-dreams.md#constraints-constitutive"),
            ("reflections.md#casualness-principle", "reflections-dreams.md#casualness-principle"),
            ("reflections.md#convention-boundary", "reflections-dreams.md#convention-boundary"),
            ("reflections.md#tension-brake", "reflections-dreams.md#tension-brake"),
        ];

        REDIRECTS.iter()
            .find(|(from, _)| *from == key)
            .map(|(_, to)| to.to_string())
    }

    /// Resolve a link target to (key, uuid), trying direct lookup then redirect.
    fn resolve_node_uuid(&self, target: &str) -> Option<(String, [u8; 16])> {
        if let Some(n) = self.nodes.get(target) {
            return Some((target.to_string(), n.uuid));
        }
        let redirected = self.resolve_redirect(target)?;
        let n = self.nodes.get(&redirected)?;
        Some((redirected, n.uuid))
    }

    /// Append retrieval event to retrieval.log without needing a Store instance.
    pub fn log_retrieval_static(query: &str, results: &[String]) {
        let path = memory_dir().join("retrieval.log");
        let line = format!("[{}] q=\"{}\" hits={}\n", today(), query, results.len());
        if let Ok(mut f) = fs::OpenOptions::new()
            .create(true).append(true).open(&path) {
            let _ = f.write_all(line.as_bytes());
        }
    }
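The log line emitted by `log_retrieval_static` has a fixed shape; this toy formatter shows it, with the date passed in as a parameter since `today()` lives elsewhere in the crate:

```rust
// Toy formatter for the retrieval.log line shape used above; the real code
// derives the date from today() rather than taking it as an argument.
fn retrieval_line(date: &str, query: &str, hits: usize) -> String {
    format!("[{}] q=\"{}\" hits={}\n", date, query, hits)
}
```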

    /// Scan markdown files and index all memory units
    pub fn init_from_markdown(&mut self) -> Result<usize, String> {
        let dir = memory_dir();
        let mut count = 0;
        if dir.exists() {
            count = self.scan_dir_for_init(&dir)?;
        }
        Ok(count)
    }

    fn scan_dir_for_init(&mut self, dir: &Path) -> Result<usize, String> {
        let mut count = 0;
        let entries = fs::read_dir(dir)
            .map_err(|e| format!("read dir {}: {}", dir.display(), e))?;

        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() {
                count += self.scan_dir_for_init(&path)?;
                continue;
            }
            let Some(ext) = path.extension() else { continue };
            if ext != "md" { continue }

            let filename = path.file_name().unwrap().to_string_lossy().to_string();
            let content = fs::read_to_string(&path)
                .map_err(|e| format!("read {}: {}", path.display(), e))?;

            let units = parse_units(&filename, &content);
            let (new_count, _) = self.ingest_units(&units, &filename)?;
            count += new_count;

            // Create relations from links
            let mut new_relations = Vec::new();
            for unit in &units {
                let source_uuid = match self.nodes.get(&unit.key) {
                    Some(n) => n.uuid,
                    None => continue,
                };

                for link in unit.marker_links.iter().chain(unit.md_links.iter()) {
                    let Some((key, uuid)) = self.resolve_node_uuid(link) else { continue };
                    let exists = self.relations.iter().any(|r|
                        (r.source == source_uuid && r.target == uuid) ||
                        (r.source == uuid && r.target == source_uuid));
                    if !exists {
                        new_relations.push(new_relation(
                            source_uuid, uuid, RelationType::Link, 1.0,
                            &unit.key, &key,
                        ));
                    }
                }

                for cause in &unit.causes {
                    let Some((key, uuid)) = self.resolve_node_uuid(cause) else { continue };
                    let exists = self.relations.iter().any(|r|
                        r.source == uuid && r.target == source_uuid
                        && r.rel_type == RelationType::Causal);
                    if !exists {
                        new_relations.push(new_relation(
                            uuid, source_uuid, RelationType::Causal, 1.0,
                            &key, &unit.key,
                        ));
                    }
                }
            }

            if !new_relations.is_empty() {
                self.append_relations(&new_relations)?;
                self.relations.extend(new_relations);
            }
        }
        Ok(count)
    }
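Note the asymmetry in the duplicate checks above: Link relations are deduplicated in either orientation, while Causal relations are directed. A minimal sketch with integer ids standing in for uuids:

```rust
// Undirected duplicate check, as used for RelationType::Link above:
// an edge in either orientation counts as already existing.
fn link_exists(rels: &[(u32, u32)], a: u32, b: u32) -> bool {
    rels.iter().any(|&(s, t)| (s == a && t == b) || (s == b && t == a))
}

// Directed duplicate check, as used for RelationType::Causal above:
// only the cause -> effect orientation counts.
fn causal_exists(rels: &[(u32, u32)], cause: u32, effect: u32) -> bool {
    rels.iter().any(|&(s, t)| s == cause && t == effect)
}
```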

    /// Process parsed memory units: diff against existing nodes, persist changes.
    fn ingest_units(&mut self, units: &[MemoryUnit], filename: &str) -> Result<(usize, usize), String> {
        let node_type = classify_filename(filename);
        let mut new_nodes = Vec::new();
        let mut updated_nodes = Vec::new();

        for (pos, unit) in units.iter().enumerate() {
            if let Some(existing) = self.nodes.get(&unit.key) {
                if existing.content != unit.content || existing.position != pos as u32 {
                    let mut node = existing.clone();
                    node.content = unit.content.clone();
                    node.position = pos as u32;
                    node.version += 1;
                    if let Some(ref s) = unit.state { node.state_tag = s.clone(); }
                    if let Some(ref s) = unit.source_ref { node.source_ref = s.clone(); }
                    updated_nodes.push(node);
                }
            } else {
                let mut node = new_node(&unit.key, &unit.content);
                node.node_type = node_type;
                node.position = pos as u32;
                if let Some(ref s) = unit.state { node.state_tag = s.clone(); }
                if let Some(ref s) = unit.source_ref { node.source_ref = s.clone(); }
                new_nodes.push(node);
            }
        }

        if !new_nodes.is_empty() {
            self.append_nodes(&new_nodes)?;
            for node in &new_nodes {
                self.uuid_to_key.insert(node.uuid, node.key.clone());
                self.nodes.insert(node.key.clone(), node.clone());
            }
        }
        if !updated_nodes.is_empty() {
            self.append_nodes(&updated_nodes)?;
            for node in &updated_nodes {
                self.nodes.insert(node.key.clone(), node.clone());
            }
        }

        Ok((new_nodes.len(), updated_nodes.len()))
    }
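The diff-on-ingest behavior of `ingest_units` (unchanged content is a no-op; changed content bumps the version) can be modeled with a reduced node type; `MiniNode` here is illustrative only, not the real Node struct:

```rust
// Reduced model of the update path in ingest_units: a content change
// rewrites the node and bumps its version; identical content does nothing.
#[derive(Clone, Debug, PartialEq)]
struct MiniNode { content: String, version: u32 }

fn ingest(existing: &mut MiniNode, new_content: &str) -> bool {
    if existing.content != new_content {
        existing.content = new_content.to_string();
        existing.version += 1;
        true // node was updated and would be re-appended to the log
    } else {
        false // no-op: nothing appended
    }
}
```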

    /// Import a markdown file into the store, parsing it into nodes.
    pub fn import_file(&mut self, path: &Path) -> Result<(usize, usize), String> {
        let filename = path.file_name().unwrap().to_string_lossy().to_string();
        let content = fs::read_to_string(path)
            .map_err(|e| format!("read {}: {}", path.display(), e))?;
        let units = parse_units(&filename, &content);
        self.ingest_units(&units, &filename)
    }

    /// Gather all sections for a file key, sorted by position.
    pub fn file_sections(&self, file_key: &str) -> Option<Vec<&Node>> {
        let prefix = format!("{}#", file_key);
        let mut sections: Vec<_> = self.nodes.values()
            .filter(|n| n.key == file_key || n.key.starts_with(&prefix))
            .collect();
        if sections.is_empty() {
            return None;
        }
        sections.sort_by_key(|n| n.position);
        Some(sections)
    }

    /// Render a file key as plain content (no mem markers).
    pub fn render_file(&self, file_key: &str) -> Option<String> {
        let sections = self.file_sections(file_key)?;
        let mut output = String::new();
        for node in &sections {
            output.push_str(&node.content);
            if !node.content.ends_with('\n') {
                output.push('\n');
            }
            output.push('\n');
        }
        Some(output.trim_end().to_string())
    }

    /// Render a file key back to markdown with reconstituted mem markers.
    pub fn export_to_markdown(&self, file_key: &str) -> Option<String> {
        let sections = self.file_sections(file_key)?;

        let mut output = String::new();
        for node in &sections {
            if node.key.contains('#') {
                let section_id = node.key.rsplit_once('#').map_or("", |(_, s)| s);

                let links: Vec<_> = self.relations.iter()
                    .filter(|r| r.source_key == node.key && !r.deleted
                        && r.rel_type != RelationType::Causal)
                    .map(|r| r.target_key.clone())
                    .collect();
                let causes: Vec<_> = self.relations.iter()
                    .filter(|r| r.target_key == node.key && !r.deleted
                        && r.rel_type == RelationType::Causal)
                    .map(|r| r.source_key.clone())
                    .collect();

                let mut marker_parts = vec![format!("id={}", section_id)];
                if !links.is_empty() {
                    marker_parts.push(format!("links={}", links.join(",")));
                }
                if !causes.is_empty() {
                    marker_parts.push(format!("causes={}", causes.join(",")));
                }

                output.push_str(&format!("<!-- mem: {} -->\n", marker_parts.join(" ")));
            }
            output.push_str(&node.content);
            if !node.content.ends_with('\n') {
                output.push('\n');
            }
            output.push('\n');
        }

        Some(output.trim_end().to_string())
    }
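For reference, the `<!-- mem: ... -->` markers emitted above can be split back into fields; the parser below is a sketch for illustration, not the one in parse.rs:

```rust
// Hedged sketch: split a "<!-- mem: id=x links=a,b causes=c -->" marker
// into (field, values) pairs. Field names match the emitter above.
fn parse_marker(line: &str) -> Option<Vec<(String, Vec<String>)>> {
    let inner = line.strip_prefix("<!-- mem: ")?.strip_suffix(" -->")?;
    Some(inner.split_whitespace()
        .filter_map(|part| part.split_once('='))
        .map(|(k, v)| (k.to_string(), v.split(',').map(str::to_string).collect()))
        .collect())
}
```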

    /// Find the journal node that best matches the given entry text.
    pub fn find_journal_node(&self, entry_text: &str) -> Option<String> {
        if entry_text.is_empty() {
            return None;
        }

        let words: Vec<&str> = entry_text.split_whitespace()
            .filter(|w| w.len() > 5)
            .take(5)
            .collect();

        let mut best_key = None;
        let mut best_score = 0;

        for (key, node) in &self.nodes {
            if !key.starts_with("journal.md#") {
                continue;
            }
            let content_lower = node.content.to_lowercase();
            let score: usize = words.iter()
                .filter(|w| content_lower.contains(&w.to_lowercase()))
                .count();
            if score > best_score {
                best_score = score;
                best_key = Some(key.clone());
            }
        }

        best_key
    }
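The matching heuristic in `find_journal_node` is a plain word-overlap count; `overlap_score` below extracts it as a free function for illustration (hypothetical, not exported by the module):

```rust
// Standalone copy of find_journal_node's scoring: of the first five query
// words longer than five characters, count how many appear in the content.
fn overlap_score(entry: &str, content: &str) -> usize {
    let content_lower = content.to_lowercase();
    entry.split_whitespace()
        .filter(|w| w.len() > 5)
        .take(5)
        .filter(|w| content_lower.contains(&w.to_lowercase()))
        .count()
}
```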
}