Running the miner twice on the same transcript produced near-duplicate
entries because:
1. Prompt-based dedup (passing recent entries to Sonnet) doesn't catch
semantic duplicates written in a different emotional register
2. Key-based dedup (timestamp + content slug) fails because Sonnet
assigns different timestamps and wording each run
Fix: hash the transcript file content before mining. Store the hash
as a _mined-transcripts node. Skip if already present.
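The hash-and-skip fix can be sketched as follows. This is a minimal sketch, not the real store code: `should_mine` and the `HashSet` stand in for the `_mined-transcripts` node lookup, and `DefaultHasher` stands in for whatever stable digest the real code uses (a persisted hash should use something like SHA-256, since `DefaultHasher` is not guaranteed stable across Rust versions).

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Hash the full transcript content. Illustrative only: a real
/// implementation should use a stable digest (e.g. SHA-256).
fn transcript_hash(content: &str) -> String {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    format!("{:016x}", h.finish())
}

/// `mined` stands in for the `_mined-transcripts` node in the store.
/// Returns false (and logs) if this exact content was already mined.
fn should_mine(content: &str, mined: &mut HashSet<String>) -> bool {
    let hash = transcript_hash(content);
    if mined.contains(&hash) {
        eprintln!("Already mined this transcript");
        return false;
    }
    mined.insert(hash);
    true
}
```

Note the limitation described above falls out of this design: any byte change to the transcript (e.g. a live conversation growing) produces a new hash, so only exact re-runs are skipped.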
Limitation: doesn't catch overlapping content when a live transcript
grows between runs (content hash changes). This is fine — the miner
is intended for archived conversations, not live ones.
Tested: second run on same transcript correctly skipped with
"Already mined this transcript" message.
Reads a conversation JSONL, identifies experiential moments that
weren't captured in real-time journal entries, and writes them as
journal nodes in the store. The agent writes in PoC's voice with
emotion tags, focusing on intimate moments, shifts in understanding,
and small pleasures — not clinical topic extraction.
Conversation timestamps are now extracted and included in formatted
output, enabling accurate temporal placement of mined entries.
Also: extract_conversation now returns timestamps as a 4th tuple field.
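A sketch of the timestamp extraction this implies, assuming each JSONL record carries a top-level `"timestamp":"..."` field (the field name is an assumption; the real code would presumably use a proper JSON parser such as serde_json rather than substring matching):

```rust
/// Pull the timestamp string out of each JSONL line that has one.
/// Naive substring scan for illustration only; records without a
/// timestamp are skipped.
fn extract_timestamps(jsonl: &str) -> Vec<String> {
    jsonl
        .lines()
        .filter_map(|line| {
            let key = "\"timestamp\":\"";
            let start = line.find(key)? + key.len();
            let rest = &line[start..];
            let end = rest.find('"')?;
            Some(rest[..end].to_string())
        })
        .collect()
}
```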
Batch all non-deleted links (~3,800) into char-budgeted groups,
send each batch to Sonnet with the full content of both endpoint nodes,
and apply KEEP/DELETE/RETARGET/WEAKEN/STRENGTHEN decisions.
One-time cleanup for links created before refine_target existed.
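The char-budgeted batching can be sketched as a greedy packer. The budget value and payload representation are assumptions; in the real audit each item would be a rendered link plus both endpoints' content:

```rust
/// Greedily pack items into batches whose total character count stays
/// under `budget`. An item larger than the budget still gets its own
/// batch rather than being dropped.
fn batch_by_chars(items: Vec<String>, budget: usize) -> Vec<Vec<String>> {
    let mut batches = Vec::new();
    let mut current: Vec<String> = Vec::new();
    let mut used = 0;
    for item in items {
        if !current.is_empty() && used + item.len() > budget {
            batches.push(std::mem::take(&mut current));
            used = 0;
        }
        used += item.len();
        current.push(item);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```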
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Pattern separation for memory graph: when a file-level node (e.g.
identity.md) has section children, redistribute its links to the
best-matching section using cosine similarity.
- differentiate_hub: analyze hub, propose link redistribution
- refine_target: at link creation time, automatically target the
most specific section instead of the file-level hub
- Applied refine_target in all four link creation paths (digest
links, journal enrichment, apply consolidation, link-add command)
- Saturated hubs listed in agent topology header with "DO NOT LINK"
This prevents hub formation proactively (refine_target) and
remediates existing hubs (differentiate command).
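The cosine-similarity retargeting above can be sketched as follows. Embedding shapes and the `(name, embedding)` section list are assumptions for illustration, not the real Store API:

```rust
/// Cosine similarity between two dense embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Pick the section whose embedding best matches the link, i.e. the
/// target refine_target would choose instead of the file-level hub.
fn best_section(link: &[f32], sections: &[(String, Vec<f32>)]) -> Option<String> {
    sections
        .iter()
        .map(|(name, emb)| (name, cosine(link, emb)))
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(name, _)| name.clone())
}
```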
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Three Python scripts (858 lines) replaced with native Rust subcommands:
- digest-links [--apply]: parses ## Links sections from episodic digests,
normalizes keys, applies to graph with section-level fallback
- journal-enrich JSONL TEXT [LINE]: extracts conversation from JSONL
transcript, calls Sonnet for link proposals and source location
- apply-consolidation [--apply]: reads consolidation reports, sends to
Sonnet for structured action extraction (links, categorizations,
manual items)
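The `## Links` section parsing in digest-links can be sketched like this; the exact bullet format ("- key: note") is an assumption:

```rust
/// Collect bullet entries under a `## Links` heading, stopping at the
/// next `## ` heading. Illustrative only.
fn parse_links_section(digest: &str) -> Vec<String> {
    let mut in_links = false;
    let mut out = Vec::new();
    for line in digest.lines() {
        if line.starts_with("## ") {
            in_links = line.trim() == "## Links";
            continue;
        }
        if in_links {
            if let Some(rest) = line.strip_prefix("- ") {
                out.push(rest.trim().to_string());
            }
        }
    }
    out
}
```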
Shared infrastructure: call_sonnet now pub(crate), new
parse_json_response helper for Sonnet output parsing with markdown
fence stripping.
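The fence-stripping step of parse_json_response can be sketched as below; the real helper presumably hands the stripped string to a JSON parser afterwards:

```rust
/// Strip a surrounding markdown code fence (```json ... ``` or
/// ``` ... ```) from model output before JSON parsing. Leaves
/// unfenced input untouched.
fn strip_markdown_fences(s: &str) -> &str {
    let s = s.trim();
    let s = s
        .strip_prefix("```json")
        .or_else(|| s.strip_prefix("```"))
        .unwrap_or(s);
    let s = s.strip_suffix("```").unwrap_or(s);
    s.trim()
}
```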
Replace daily-digest.py, weekly-digest.py, monthly-digest.py with a
single digest.rs module. All three digest types now:
- Gather input directly from the Store (no subprocess calls)
- Build prompts in Rust (same templates as the Python versions)
- Call Sonnet via `claude -p --model sonnet`
- Import results back into the store automatically
- Extract links and save agent results
606 lines of Rust replace 729 lines of Python + store_helpers.py
overhead. More importantly: this is now callable as a library from
poc-agent, and shares types/code with the rest of poc-memory.
Also adds `digest monthly [YYYY-MM]` subcommand (was Python-only).
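The subprocess call to the Claude CLI can be sketched with `std::process::Command`. Whether the real code passes the prompt over stdin or as an argument is an assumption here; the function is parameterized over the program so it can be exercised with any binary:

```rust
use std::io::Write;
use std::process::{Command, Stdio};

/// Run `program args...`, feed `prompt` on stdin, and capture stdout.
/// For the digest module this would be `claude` with `["-p", "--model",
/// "sonnet"]` (sketch only, not the actual invocation code).
fn run_prompt(program: &str, args: &[&str], prompt: &str) -> std::io::Result<String> {
    let mut child = Command::new(program)
        .args(args)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    child.stdin.as_mut().unwrap().write_all(prompt.as_bytes())?;
    // wait_with_output closes stdin before waiting, so the child sees EOF.
    let out = child.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}
```

In the test below, `cat` stands in for `claude` so the round trip can be checked without the CLI installed.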