Batch all non-deleted links (~3,800) into char-budgeted groups,
send each batch to Sonnet with full content of both endpoints,
and apply KEEP/DELETE/RETARGET/WEAKEN/STRENGTHEN decisions.
One-time cleanup for links created before refine_target existed.
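The char-budgeted grouping can be sketched as a greedy fill; the function name and budget value below are illustrative, not taken from the commit:

```rust
/// Greedily pack items into batches whose total character count stays
/// under `budget`; an oversized single item still gets its own batch.
fn batch_by_chars(items: &[String], budget: usize) -> Vec<Vec<String>> {
    let mut batches: Vec<Vec<String>> = Vec::new();
    let mut current: Vec<String> = Vec::new();
    let mut used = 0;
    for item in items {
        if !current.is_empty() && used + item.len() > budget {
            batches.push(std::mem::take(&mut current));
            used = 0;
        }
        used += item.len();
        current.push(item.clone());
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let links: Vec<String> = (0..5).map(|i| format!("link-{i} content")).collect();
    for b in batch_by_chars(&links, 30) {
        let chars: usize = b.iter().map(|s| s.len()).sum();
        println!("batch of {} items, {} chars", b.len(), chars);
    }
}
```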
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Pattern separation for memory graph: when a file-level node (e.g.
identity.md) has section children, redistribute its links to the
best-matching section using cosine similarity.
- differentiate_hub: analyze hub, propose link redistribution
- refine_target: at link creation time, automatically target the
most specific section instead of the file-level hub
- Applied refine_target in all four link creation paths (digest
links, journal enrichment, apply consolidation, link-add command)
- Saturated hubs listed in agent topology header with "DO NOT LINK"
This prevents hub formation proactively (refine_target) and
remediates existing hubs (differentiate command).
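The section-matching step both paths rely on reduces to cosine similarity over embeddings. A minimal sketch with hypothetical names (`best_section` is not from the codebase):

```rust
/// Cosine similarity between two embedding vectors; 0.0 when either is zero.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Pick the section whose embedding best matches the link's embedding.
fn best_section<'a>(link: &[f32], sections: &'a [(String, Vec<f32>)]) -> Option<&'a str> {
    sections
        .iter()
        .max_by(|(_, x), (_, y)| cosine(link, x).total_cmp(&cosine(link, y)))
        .map(|(key, _)| key.as_str())
}

fn main() {
    let sections = vec![
        ("identity.md#values".to_string(), vec![1.0, 0.0]),
        ("identity.md#history".to_string(), vec![0.0, 1.0]),
    ];
    println!("{:?}", best_section(&[0.9, 0.1], &sections));
}
```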
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Three Python scripts (858 lines) replaced with native Rust subcommands:
- digest-links [--apply]: parses ## Links sections from episodic digests,
normalizes keys, applies to graph with section-level fallback
- journal-enrich JSONL TEXT [LINE]: extracts conversation from JSONL
transcript, calls Sonnet for link proposals and source location
- apply-consolidation [--apply]: reads consolidation reports, sends to
Sonnet for structured action extraction (links, categorizations,
manual items)
Shared infrastructure: call_sonnet now pub(crate), new
parse_json_response helper for Sonnet output parsing with markdown
fence stripping.
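The fence-stripping part of that helper can be sketched in isolation; the real `parse_json_response` goes on to deserialize the payload, and the exact behavior here is an assumption:

```rust
/// Strip an optional Markdown code fence (``` or ```json) wrapping the
/// model's output, returning the inner payload trimmed.
fn strip_markdown_fences(raw: &str) -> &str {
    let s = raw.trim();
    let Some(rest) = s.strip_prefix("```") else { return s };
    // Skip the info string ("json", etc.) on the opening fence line.
    let body = rest.split_once('\n').map(|(_, b)| b).unwrap_or(rest);
    body.strip_suffix("```").unwrap_or(body).trim()
}

fn main() {
    println!("{}", strip_markdown_fences("```json\n{\"links\": []}\n```"));
}
```

Unfenced output passes through untouched, so the helper is safe to apply unconditionally.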
Replace daily-digest.py, weekly-digest.py, monthly-digest.py with a
single digest.rs module. All three digest types now:
- Gather input directly from the Store (no subprocess calls)
- Build prompts in Rust (same templates as the Python versions)
- Call Sonnet via `claude -p --model sonnet`
- Import results back into the store automatically
- Extract links and save agent results
606 lines of Rust replace 729 lines of Python plus the store_helpers.py
overhead. More importantly: this is now callable as a library from
poc-agent, and shares types/code with the rest of poc-memory.
Also adds `digest monthly [YYYY-MM]` subcommand (was Python-only).
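The Sonnet invocation can be sketched with `std::process::Command`; only `claude -p --model sonnet` itself comes from the commit, so the flag order and the prompt being positional are assumptions:

```rust
use std::process::Command;

/// Build the `claude -p --model sonnet` invocation the digest module
/// shells out to. The prompt is assumed to be a positional argument;
/// the real code may pass it differently (e.g. on stdin).
fn sonnet_command(prompt: &str) -> Command {
    let mut cmd = Command::new("claude");
    cmd.arg("-p").arg("--model").arg("sonnet").arg(prompt);
    cmd
}

fn main() {
    let cmd = sonnet_command("Summarize today's journal entries.");
    // Inspect the invocation without spawning it.
    println!("{:?} {:?}", cmd.get_program(), cmd.get_args().collect::<Vec<_>>());
}
```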
- &PathBuf → &Path in memory-search.rs signatures
- Redundant field name in graph.rs struct init
- Add truncate(false) to lock file open
- Derive Default for Store instead of manual impl
- slice::from_ref instead of &[x.clone()]
- rsplit_once instead of split().last()
- str::repeat instead of iter::repeat().take().collect()
- is_none_or instead of map_or(true, ...)
- strip_prefix instead of manual slicing
Zero warnings on `cargo clippy`.
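A few of the listed replacements side by side (the paths are made up for illustration):

```rust
fn main() {
    let path = "memory/identity.md#values";

    // rsplit_once instead of split().last(): grabs the fragment in one step.
    let fragment = path.rsplit_once('#').map(|(_, f)| f);
    assert_eq!(fragment, Some("values"));

    // strip_prefix instead of manual slicing: no hand-counted offsets.
    let rel = path.strip_prefix("memory/").unwrap_or(path);
    assert_eq!(rel, "identity.md#values");

    // str::repeat instead of iter::repeat().take().collect().
    let rule = "-".repeat(8);
    assert_eq!(rule, "--------");

    // slice::from_ref instead of &[x.clone()]: borrows, no allocation.
    let key = String::from("identity.md");
    let as_slice: &[String] = std::slice::from_ref(&key);
    assert_eq!(as_slice.len(), 1);

    println!("all idioms behave as expected");
}
```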
- Replace all 5 `Command::new("date")` calls across 4 files with
  pure Rust time formatting via libc `localtime_r`
- Add format_date/format_datetime/format_datetime_space helpers to
capnp_store
- Move import_file, find_journal_node, export_to_markdown, render_file,
file_sections into Store methods where they belong
- Fix find_current_transcript to search all project dirs instead of
hardcoding bcachefs-tools path
- Fix double-reference .clone() warnings in cmd_trace
- Fix unused variable warning in neuro.rs
main.rs: 1290 → 1137 lines, zero warnings.
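A UTC-only sketch of what a `format_date` helper looks like without shelling out to `date`, using Howard Hinnant's civil-from-days algorithm; note the real helpers go through `localtime_r`, so they honor the local timezone where this sketch does not:

```rust
/// Convert Unix seconds to "YYYY-MM-DD" in UTC, no external process.
fn format_date(unix_secs: i64) -> String {
    let days = unix_secs.div_euclid(86_400);
    let z = days + 719_468;
    let era = if z >= 0 { z } else { z - 146_096 } / 146_097;
    let doe = z - era * 146_097;                       // day of era
    let yoe = (doe - doe / 1_460 + doe / 36_524 - doe / 146_096) / 365;
    let mut y = yoe + era * 400;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100); // day of year (Mar-based)
    let mp = (5 * doy + 2) / 153;                      // March-based month
    let d = doy - (153 * mp + 2) / 5 + 1;
    let m = if mp < 10 { mp + 3 } else { mp - 9 };
    if m <= 2 { y += 1; }
    format!("{y:04}-{m:02}-{d:02}")
}

fn main() {
    println!("{}", format_date(0));
}
```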
journal-write creates entries directly in the capnp store with
auto-generated timestamped keys (journal.md#j-YYYY-MM-DDtHH-MM-slug),
episodic session type, and source ref from current transcript.
journal-tail sorts entries by date extracted from content headers,
falling back to key-embedded dates, then node timestamp.
poc-journal shell script now delegates to these commands instead
of appending to journal.md. Journal entries are store-first.
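Key generation can be sketched as timestamp plus slug. Only the `journal.md#j-YYYY-MM-DDtHH-MM-slug` shape comes from the commit; the slug rules here (lowercase, first four words) are assumptions:

```rust
/// Build a journal key from a "YYYY-MM-DDtHH-MM" stamp (which the real
/// code derives from the store's clock) and the entry's first line.
fn journal_key(stamp: &str, first_line: &str) -> String {
    // Lowercase and replace every non-alphanumeric char with '-'.
    let slug: String = first_line
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect();
    // Collapse runs of '-' and cap the slug at four words.
    let slug = slug
        .split('-')
        .filter(|p| !p.is_empty())
        .take(4)
        .collect::<Vec<_>>()
        .join("-");
    format!("journal.md#j-{stamp}-{slug}")
}

fn main() {
    println!("{}", journal_key("2025-06-01t09-30", "Notes on pattern separation"));
}
```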
Sections within a file have a natural order that matters —
identity.md reads as a narrative, not an alphabetical index.
The position field (u32) tracks section index within the file.
Set during init and import from parse order. Export and
load-context sort by position instead of key, preserving the
author's intended structure.
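The export ordering is then a plain sort on that field; a minimal sketch with a hypothetical `Section` struct:

```rust
struct Section {
    key: String,
    position: u32, // index within the file, set at init/import time
}

/// Export order: sort by stored position, not alphabetically by key.
fn export_order(mut sections: Vec<Section>) -> Vec<String> {
    sections.sort_by_key(|s| s.position);
    sections.into_iter().map(|s| s.key).collect()
}

fn main() {
    let sections = vec![
        Section { key: "identity.md#values".into(), position: 2 },
        Section { key: "identity.md#origin".into(), position: 0 },
        Section { key: "identity.md#work".into(), position: 1 },
    ];
    println!("{:?}", export_order(sections));
}
```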
write KEY: upsert a single node from stdin. Creates new or updates
existing with version bump. No-op if content unchanged.
import FILE: parse markdown sections, diff against store, upsert
changed/new nodes. Incremental — only touches what changed.
export FILE|--all: regenerate markdown from store nodes. Gathers
file-level + section nodes, reconstitutes mem markers with links
and causes from the relation graph.
Together these close the bidirectional sync loop:
markdown → import → store → export → markdown
Also exposes memory_dir_pub() for use from main.rs.
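The incremental diff at the heart of `import` can be sketched as a map comparison; real nodes carry versions and metadata, so plain strings stand in here:

```rust
use std::collections::HashMap;

/// Diff parsed sections against the store and return only the keys
/// whose content is new or changed (sorted for stable output).
fn changed_keys(
    store: &HashMap<String, String>,
    parsed: &HashMap<String, String>,
) -> Vec<String> {
    let mut keys: Vec<String> = parsed
        .iter()
        .filter(|&(k, v)| store.get(k.as_str()) != Some(v))
        .map(|(k, _)| k.clone())
        .collect();
    keys.sort();
    keys
}

fn main() {
    let store = HashMap::from([("f.md#a".to_string(), "old".to_string())]);
    let parsed = HashMap::from([
        ("f.md#a".to_string(), "new".to_string()),
        ("f.md#b".to_string(), "added".to_string()),
    ]);
    // Unchanged nodes are skipped entirely; only these get upserted.
    println!("{:?}", changed_keys(&store, &parsed));
}
```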
load-context replaces the shell hook's file-by-file cat approach.
Queries the capnp store directly for all session-start context:
orientation, identity, reflections, interests, inner life, people,
active context, shared reference, technical, and recent journal.
Sections are gathered per-file and output in priority order.
Journal entries filtered to last 7 days by key-embedded date,
capped at 20 most recent.
render outputs a single node's content to stdout.
The load-memory.sh hook now delegates entirely to
`poc-memory load-context` — capnp store is the single source
of truth for session startup context.
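The journal filtering can be sketched via the key-embedded date: ISO dates sort lexicographically, so "within the last 7 days" is a string comparison against a cutoff. Helper names are hypothetical:

```rust
/// Pull the YYYY-MM-DD stamp embedded in a journal key like
/// "journal.md#j-2025-06-01t09-30-slug".
fn key_date(key: &str) -> Option<&str> {
    let rest = key.strip_prefix("journal.md#j-")?;
    let date = rest.get(..10)?;
    // Sanity check: digits with dashes at positions 4 and 7.
    let ok = date.bytes().enumerate().all(|(i, b)| {
        if i == 4 || i == 7 { b == b'-' } else { b.is_ascii_digit() }
    });
    ok.then_some(date)
}

/// Keep only keys dated on or after the cutoff date.
fn recent_keys<'a>(keys: &[&'a str], cutoff: &str) -> Vec<&'a str> {
    keys.iter()
        .copied()
        .filter(|k| key_date(k).is_some_and(|d| d >= cutoff))
        .collect()
}

fn main() {
    let keys = ["journal.md#j-2025-06-01t09-30-a", "journal.md#j-2025-05-01t09-30-b"];
    println!("{:?}", recent_keys(&keys, "2025-05-28"));
}
```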
node-delete: soft-deletes a node by appending a deleted version to
the capnp log, then removing it from the in-memory cache.
resolve_redirect: when resolve_key can't find a node, checks a static
redirect table for sections that moved during file splits (like the
reflections.md → reflections-{reading,dreams,zoom}.md split). This
handles immutable files (journal.md with chattr +a) that can't have
their references updated.
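A sketch of the static-table lookup; the entries mirror the reflections.md split described above, but the exact old/new keys are assumptions:

```rust
/// Consulted when resolve_key misses: sections that moved during file
/// splits map their old keys to their new homes. Entries illustrative.
fn resolve_redirect(key: &str) -> Option<&'static str> {
    const REDIRECTS: &[(&str, &str)] = &[
        ("reflections.md#reading", "reflections-reading.md#reading"),
        ("reflections.md#dreams", "reflections-dreams.md#dreams"),
        ("reflections.md#zoom", "reflections-zoom.md#zoom"),
    ];
    REDIRECTS.iter().find(|(old, _)| *old == key).map(|(_, new)| *new)
}

fn main() {
    // journal.md is append-only (chattr +a), so stale references there
    // are fixed up at lookup time instead of being rewritten in place.
    println!("{:?}", resolve_redirect("reflections.md#dreams"));
}
```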
Faster serialization/deserialization, smaller on disk (4.2MB vs 5.9MB).
Automatic migration from state.json on first load — reads the JSON,
writes state.bin, deletes the old file.
Added list-keys, list-edges, dump-json commands so Python scripts no
longer need to parse the cache directly. Updated bulk-categorize.py
and consolidation-loop.py to use the new CLI commands.
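The first-load migration can be sketched as a three-step swap; `convert_to_binary` is a stand-in for the Cap'n Proto re-encoding, which the sketch does not attempt:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// One-shot migration: if state.json is still present, convert it,
/// write state.bin, and delete the JSON. Returns whether it migrated.
fn migrate_state(dir: &Path) -> io::Result<bool> {
    let json_path = dir.join("state.json");
    if !json_path.exists() {
        return Ok(false); // already migrated, or a fresh store
    }
    let json = fs::read(&json_path)?;
    let binary = convert_to_binary(&json);
    fs::write(dir.join("state.bin"), binary)?;
    fs::remove_file(json_path)?;
    Ok(true)
}

/// Placeholder: the real code parses the JSON state and re-encodes it
/// as Cap'n Proto; here the bytes pass through unchanged.
fn convert_to_binary(json: &[u8]) -> Vec<u8> {
    json.to_vec()
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("poc-migrate-demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("state.json"), b"{}")?;
    println!("migrated: {}", migrate_state(&dir)?);
    println!("second run: {}", migrate_state(&dir)?);
    Ok(())
}
```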