- has_node: defined on StoreView trait but never called externally
- fix_categories: was appending ALL nodes when only changed ones needed
persisting; now collects changed nodes and appends only those
- save_snapshot: pass log sizes from caller instead of re-statting files
- params: use Copy instead of .clone() in snapshot construction
Dead code removed:
- rebuild_uuid_index (never called, index built during load)
- node_weight inherent method (all callers use StoreView trait)
- node_community (no callers)
- state_json_path (no callers)
- log_retrieval, log_retrieval_append (no callers; only _static is used)
- memory_dir_pub wrapper (just make memory_dir pub directly)
API consolidation:
- insert_node eliminated — callers use upsert_node (same behavior
for new nodes, plus handles re-upsert gracefully)
- AnyView StoreView dispatch compressed to one line per method
  (also removes the UFCS workaround that was needed when the inherent
  node_weight shadowed the trait method).
-69 lines net.
- modify_node(): get_mut→modify→version++→append pattern was duplicated
across mark_used, mark_wrong, categorize — extract once
- resolve_node_uuid(): resolve-or-redirect pattern was inlined in both
link and causal edge creation — extract once
- ingest_units() + classify_filename(): shared logic between
scan_dir_for_init and import_file — import_file shrinks to 6 lines
- Remove dead seen_keys HashSet (built but never read)
- partial_cmp().unwrap() → total_cmp() in cap_degree
-95 lines net.
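The extracted get_mut -> modify -> version++ -> append helper can be sketched roughly as below. Names, fields, and the in-memory log are illustrative stand-ins, not the actual API:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Node {
    key: String,
    version: u32,
    uses: u32,
}

#[derive(Default)]
struct Store {
    nodes: HashMap<String, Node>,
    log: Vec<Node>, // stand-in for the append-only capnp log
}

impl Store {
    // The sequence previously duplicated across mark_used, mark_wrong,
    // and categorize, extracted once.
    fn modify_node(&mut self, key: &str, f: impl FnOnce(&mut Node)) -> bool {
        let Some(node) = self.nodes.get_mut(key) else { return false };
        f(node);
        node.version += 1;
        self.log.push(node.clone()); // persist the updated version
        true
    }

    fn mark_used(&mut self, key: &str) -> bool {
        self.modify_node(key, |n| n.uses += 1)
    }
}
```

Each call site then shrinks to a one-line closure over the fields it changes.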
Replace 130 lines of manual field-by-field capnp serialization with
two declarative macros:
capnp_enum! — generates to_capnp/from_capnp for enum types
capnp_message! — generates from_capnp/to_capnp for structs
Adding a field to the capnp schema now means adding it in one place;
both read and write directions are generated from the same declaration.
Eliminates: read_content_node, write_content_node, read_relation,
write_relation, read_provenance (5 functions → 2 macro invocations).
Callers updated to method syntax: Node::from_capnp() / node.to_capnp().
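The enum direction of the macro can be sketched as below. The real capnp_enum! targets capnp-generated types; here a bare u16 stands in for the wire representation, and the Category variants are illustrative:

```rust
// One declaration generates both conversion directions, so adding a
// variant cannot desynchronize read and write paths.
macro_rules! capnp_enum {
    ($name:ident { $($variant:ident = $val:expr),+ $(,)? }) => {
        #[derive(Debug, Clone, Copy, PartialEq)]
        enum $name { $($variant),+ }

        impl $name {
            fn to_capnp(self) -> u16 {
                match self { $($name::$variant => $val),+ }
            }
            fn from_capnp(v: u16) -> Option<Self> {
                match v {
                    $(x if x == $val => Some($name::$variant),)+
                    _ => None, // unknown wire value
                }
            }
        }
    };
}

capnp_enum!(Category { Core = 0, Tech = 1, Obs = 2, Gen = 3 });
```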
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
schema_fit was algebraically identical to clustering_coefficient
(both compute 2E/(d*(d-1)) = fraction of connected neighbor pairs).
Remove the redundant function, field, and metrics column.
- Delete schema_fit() and schema_fit_all() from graph.rs
- Remove schema_fit field from Node struct
- Remove avg_schema_fit from MetricsSnapshot (duplicated avg_cc)
- Replace all callers with graph.clustering_coefficient()
- Rename ReplayItem.schema_fit to .cc
- Query: "cc" and "schema_fit" both resolve from graph CC
- Low-CC count folded into health report CC line
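The quantity both functions computed, as a sketch over an illustrative adjacency-set representation (not the actual graph.rs types):

```rust
use std::collections::{HashMap, HashSet};

/// Local clustering coefficient: the fraction of neighbor pairs that
/// are themselves connected, i.e. 2E / (d * (d - 1)).
fn clustering_coefficient(adj: &HashMap<u32, HashSet<u32>>, v: u32) -> f64 {
    let Some(nbrs) = adj.get(&v) else { return 0.0 };
    let d = nbrs.len();
    if d < 2 {
        return 0.0; // no pairs to connect
    }
    let list: Vec<u32> = nbrs.iter().copied().collect();
    let mut connected_pairs = 0usize;
    for i in 0..list.len() {
        for j in i + 1..list.len() {
            if adj.get(&list[i]).is_some_and(|s| s.contains(&list[j])) {
                connected_pairs += 1;
            }
        }
    }
    2.0 * connected_pairs as f64 / (d * (d - 1)) as f64
}
```

Any second function computing the same ratio over the same edges is redundant by construction, which is why schema_fit could be deleted outright.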
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
for_each_relation() was iterating over deleted relations, polluting the
graph with ghost edges. Also filter them from rkyv snapshots and
clean them from the in-memory vec after cap_degree pruning.
- New spectral module: Laplacian eigendecomposition of the memory graph.
Commands: spectral, spectral-save, spectral-neighbors, spectral-positions,
spectral-suggest. Spectral neighbors expand search results beyond keyword
matching to structural proximity.
- Search: use StoreView trait to avoid 6MB state.bin rewrite on every query.
Append-only retrieval logging. Spectral expansion shows structurally
nearby nodes after text results.
- Fix panic in journal-tail: string truncation at byte 67 could land inside
a multi-byte character (em dash). Now walks back to char boundary.
- Replay queue: show classification and spectral outlier score.
- Knowledge agents: extractor, challenger, connector prompts and runner
scripts for automated graph enrichment.
- memory-search hook: stale state file cleanup (24h expiry).
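The spectral module's core object can be sketched as follows: the combinatorial Laplacian L = D - A, whose eigendecomposition the spectral commands operate on. A dense Vec<Vec<f64>> is used for illustration and the eigensolver itself is omitted:

```rust
/// Build L = D - A for an undirected graph: node degree on the
/// diagonal, -1 for each edge off the diagonal. Every row sums to
/// zero, and the multiplicity of eigenvalue 0 counts the connected
/// components.
fn laplacian(n: usize, edges: &[(usize, usize)]) -> Vec<Vec<f64>> {
    let mut l = vec![vec![0.0; n]; n];
    for &(u, v) in edges {
        l[u][u] += 1.0; // degree contribution
        l[v][v] += 1.0;
        l[u][v] -= 1.0; // adjacency, negated
        l[v][u] -= 1.0;
    }
    l
}
```

Low eigenvectors of this matrix embed nodes so that structurally close nodes get nearby coordinates, which is what lets spectral neighbors extend search past keyword matches.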
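The journal-tail fix has this shape (function name is illustrative): truncate to a byte budget, then walk back until the cut lands on a char boundary instead of inside a multi-byte character.

```rust
/// Truncate `s` to at most `max` bytes without splitting a multi-byte
/// character. A naive `&s[..max]` panics when `max` falls inside a
/// character such as an em dash (3 bytes in UTF-8).
fn truncate_at_boundary(s: &str, max: usize) -> &str {
    if s.len() <= max {
        return s;
    }
    let mut end = max;
    while !s.is_char_boundary(end) {
        end -= 1; // walk back to the nearest boundary
    }
    &s[..end]
}
```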
Three new tools for structural graph health:
- fix-categories: rule-based recategorization fixing core inflation
(225 → 26 core nodes). Only identity.md and kent.md stay core;
everything else reclassified to tech/obs/gen by file prefix rules.
- cap-degree: two-phase degree capping. First prunes weakest Auto
edges, then prunes Link edges to high-degree targets (they have
alternative paths). Brought max degree from 919 → 50.
- link-orphans: connects degree-0/1 nodes to the most textually similar
  connected nodes via cosine similarity. Linked 614 orphans.
Also: community detection now filters edges below strength 0.3,
preventing weak auto-links from merging unrelated communities.
Pipeline updated: consolidate-full now runs link-orphans + cap-degree
instead of triangle-close (which was counterproductive — densified
hub neighborhoods instead of building bridges).
Net effect: Gini 0.754 → 0.546, max degree 919 → 50.
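The similarity measure link-orphans ranks candidates by, as a sketch. How the text vectors are built (e.g. term frequencies) is an assumption; only the measure itself is shown:

```rust
/// Cosine similarity between two vectors: dot product over the
/// product of magnitudes, in [-1, 1] (here [0, 1] for non-negative
/// term weights). Zero vectors are defined to have similarity 0.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}
```

Each orphan is then linked to the connected node whose vector maximizes this score.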
The mtime-based cache (state.bin) was causing data loss under
concurrent writes. Multiple processes (dream loop journal writes,
link audit agents, journal enrichment agents) would each:
1. Load state.bin (stale - missing other processes' recent writes)
2. Make their own changes
3. Save state.bin, overwriting entries from other processes
This caused 48 nodes to be lost from tonight's dream session -
entries were in the append-only capnp log but invisible to the
index because a later writer's state.bin overwrote the version
that contained them.
Fix: always replay from the capnp log (the source of truth).
Cost: ~10ms extra at 2K nodes (36ms vs 26ms). The cache saved
10ms but introduced a correctness bug that lost real data.
The append-only log design was correct - the cache layer violated
its invariant by allowing stale reads to silently discard writes.
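The replay-from-log invariant can be sketched with in-memory stand-ins (the real log is the capnp file, and Entry is a placeholder for a full node record):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Entry {
    key: String,
    version: u32,
}

/// Rebuild the index by replaying the append-only log: the latest
/// appended version of each key wins. Because every load replays the
/// full log, no writer can publish a stale snapshot that silently
/// drops another writer's appends.
fn replay(log: &[Entry]) -> HashMap<String, Entry> {
    let mut index = HashMap::new();
    for e in log {
        index.insert(e.key.clone(), e.clone());
    }
    index
}
```

With the mtime cache, a process that loaded before another's append would save an index missing those entries; with replay, interleaved appends from concurrent writers all survive.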
- &PathBuf → &Path in memory-search.rs signatures
- Redundant field name in graph.rs struct init
- Add truncate(false) to lock file open
- Derive Default for Store instead of manual impl
- slice::from_ref instead of &[x.clone()]
- rsplit_once instead of split().last()
- str::repeat instead of iter::repeat().take().collect()
- is_none_or instead of map_or(true, ...)
- strip_prefix instead of manual slicing
Zero warnings on `cargo clippy`.
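A few of the rewrites as tiny equivalences (paths and the "mem:" prefix are illustrative, not the actual call sites):

```rust
/// path.split('/').last() rewritten with rsplit_once
fn file_name(path: &str) -> &str {
    path.rsplit_once('/').map_or(path, |(_, name)| name)
}

/// iter::repeat('-').take(n).collect::<String>() rewritten with str::repeat
fn rule(n: usize) -> String {
    "-".repeat(n)
}

/// manual slicing (`&key[4..]`) rewritten with strip_prefix
fn strip_mem(key: &str) -> &str {
    key.strip_prefix("mem:").unwrap_or(key)
}
```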
- Replace all 5 `Command::new("date")` calls across 4 files with
pure Rust time formatting via libc localtime_r
- Add format_date/format_datetime/format_datetime_space helpers to
capnp_store
- Move import_file, find_journal_node, export_to_markdown, render_file,
file_sections into Store methods where they belong
- Fix find_current_transcript to search all project dirs instead of
hardcoding bcachefs-tools path
- Fix double-reference .clone() warnings in cmd_trace
- Fix unused variable warning in neuro.rs
main.rs: 1290 → 1137 lines, zero warnings.
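The real helpers go through libc's localtime_r to get local time; as a dependency-free illustration of the same direction (formatting a date without shelling out to `date`), here is a UTC-only sketch using Howard Hinnant's civil_from_days algorithm:

```rust
/// Format a unix timestamp as YYYY-MM-DD in UTC, pure Rust, no
/// external crates. Days since epoch are converted to a civil date
/// via the standard era/day-of-era decomposition.
fn format_date(epoch_secs: i64) -> String {
    let days = epoch_secs.div_euclid(86_400);
    let z = days + 719_468; // shift epoch to 0000-03-01
    let era = z.div_euclid(146_097); // 400-year eras
    let doe = z - era * 146_097; // day of era, [0, 146096]
    let yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100); // [0, 365]
    let mp = (5 * doy + 2) / 153; // March-based month, [0, 11]
    let d = doy - (153 * mp + 2) / 5 + 1;
    let m = if mp < 10 { mp + 3 } else { mp - 9 };
    let y = yoe + era * 400 + if m <= 2 { 1 } else { 0 };
    format!("{:04}-{:02}-{:02}", y, m, d)
}
```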
Position was only in the bincode cache (serde field) — it would
be lost on cache rebuild from capnp logs. Now persisted in the
append-only log via ContentNode.position @19.
Also fixes journal-tail sorting to extract dates from content
headers, falling back to key-embedded dates.
Sections within a file have a natural order that matters —
identity.md reads as a narrative, not an alphabetical index.
The position field (u32) tracks section index within the file.
Set during init and import from parse order. Export and
load-context sort by position instead of key, preserving the
author's intended structure.
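The journal-tail date extraction can be sketched as a scan for the first YYYY-MM-DD substring, tried on the content header first and the key second. Both the header and key formats shown are assumptions:

```rust
/// Find the first YYYY-MM-DD substring in `s`, if any.
fn find_date(s: &str) -> Option<&str> {
    let b = s.as_bytes();
    for i in 0..b.len().saturating_sub(9) {
        let w = &b[i..i + 10];
        let ok = w.iter().enumerate().all(|(j, &c)| match j {
            4 | 7 => c == b'-',       // dash positions
            _ => c.is_ascii_digit(),  // digit positions
        });
        if ok {
            return Some(&s[i..i + 10]);
        }
    }
    None
}

/// Sort key for journal-tail: prefer a date in the content header,
/// fall back to a date embedded in the node key.
fn sort_date<'a>(content: &'a str, key: &'a str) -> Option<&'a str> {
    find_date(content).or_else(|| find_date(key))
}
```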
write KEY: upsert a single node from stdin. Creates a new node or
updates an existing one with a version bump. No-op if content is unchanged.
import FILE: parse markdown sections, diff against store, upsert
changed/new nodes. Incremental — only touches what changed.
export FILE|--all: regenerate markdown from store nodes. Gathers
file-level + section nodes, reconstitutes mem markers with links
and causes from the relation graph.
Together these close the bidirectional sync loop:
markdown → import → store → export → markdown
Also exposes memory_dir_pub() for use from main.rs.
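The upsert shape shared by write and import can be sketched as below, with an in-memory map standing in for the store and illustrative types:

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct Node {
    content: String,
    version: u32,
}

/// No-op when content is unchanged, version bump on change, insert
/// when new. Returns whether anything was written.
fn upsert(store: &mut HashMap<String, Node>, key: &str, content: &str) -> bool {
    match store.get_mut(key) {
        Some(n) if n.content == content => false, // unchanged: no-op
        Some(n) => {
            n.content = content.to_string();
            n.version += 1; // changed: bump version
            true
        }
        None => {
            store.insert(
                key.to_string(),
                Node { content: content.to_string(), version: 1 },
            );
            true
        }
    }
}
```

Running import over an unchanged file therefore writes nothing, which is what makes the markdown -> store -> markdown loop safe to rerun.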
init now detects content changes in markdown files and updates
existing nodes (bumps version, appends to capnp log) instead of
only creating new ones. Link resolution uses the redirect table
so references to moved sections (e.g. from the reflections split)
create edges to the correct target.
On cache rebuild from capnp logs, filter out relations that
reference deleted/missing nodes so the relation count matches
the actual graph edge count.
node-delete: soft-deletes a node by appending a deleted version to
the capnp log, then removing it from the in-memory cache.
resolve_redirect: when resolve_key can't find a node, checks a static
redirect table for sections that moved during file splits (like the
reflections.md → reflections-{reading,dreams,zoom}.md split). This
handles immutable files (journal.md with chattr +a) that can't have
their references updated.
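The fallback lookup can be sketched as a static table consulted only when resolve_key misses. The keys below are illustrative, not the actual redirect entries:

```rust
/// Static redirects for sections that moved during file splits.
/// Consulted as a fallback when normal key resolution fails, so
/// references inside immutable files still land on the right node.
fn resolve_redirect(key: &str) -> Option<&'static str> {
    const REDIRECTS: &[(&str, &str)] = &[
        ("reflections-reading-log", "reflections-reading-reading-log"),
        ("reflections-dream-notes", "reflections-dreams-dream-notes"),
    ];
    REDIRECTS
        .iter()
        .find(|(from, _)| *from == key)
        .map(|&(_, to)| to)
}
```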
Faster serialization/deserialization, smaller on disk (4.2MB vs 5.9MB).
Automatic migration from state.json on first load — reads the JSON,
writes state.bin, deletes the old file.
Added list-keys, list-edges, dump-json commands so Python scripts no
longer need to parse the cache directly. Updated bulk-categorize.py
and consolidation-loop.py to use the new CLI commands.
mark_used, mark_wrong, and decay all modified node state (weight,
uses, wrongs, spaced_repetition_interval) only in memory + state.json.
Like the categorize fix, these changes would be lost on cache rebuild.
Now all three append updated node versions to the capnp log. Decay
appends all nodes in one batch since it touches every node.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
categorize() only updated the in-memory HashMap and state.json cache.
When init appended new nodes to nodes.capnp (making it newer than
state.json), the next load() would rebuild from capnp logs and lose
all category assignments.
Fix: append an updated node version to the capnp log when category
changes, so it survives cache rebuilds.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>