scripts: use capnp store instead of reading markdown directly

Add store_helpers.py with shared helpers that call poc-memory commands
(list-keys, render, journal-tail) instead of globbing ~/.claude/memory/*.md
and parsing section headers.
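The shared helpers in store_helpers.py are not shown in this hunk; a minimal sketch of what such a subprocess wrapper might look like (the `run_store` name, the `binary` parameter, and the empty-string-on-failure convention are assumptions for illustration, not taken from the actual file):

```python
import subprocess


def run_store(*args: str, binary: str = "poc-memory", timeout: int = 10) -> str:
    """Invoke a store subcommand and return its stripped stdout.

    Returns "" on any failure (missing binary, timeout, nonzero exit)
    so callers can fall back gracefully instead of handling exceptions.
    """
    try:
        r = subprocess.run(
            [binary, *args],
            capture_output=True, text=True, timeout=timeout,
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except (OSError, subprocess.TimeoutExpired):
        return ""
```

Centralizing the subprocess plumbing this way keeps the nine call sites to one-liners like `run_store("render", key)`.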

All 9 Python scripts updated: get_semantic_keys(), get_topic_file_index(),
get_recent_journal(), parse_journal_entries(), read_journal_range(),
collect_topic_stems(), and file preview rendering now go through the store.
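For a sense of shape, a hedged sketch of how one of the listed helpers, get_semantic_keys(), might wrap the CLI. The one-key-per-line output format and the `parse_keys` split are assumptions for illustration, not taken from the actual scripts:

```python
import subprocess


def parse_keys(stdout: str) -> list[str]:
    # Assumed output format: one key per line, blank lines ignored.
    return [line.strip() for line in stdout.splitlines() if line.strip()]


def get_semantic_keys() -> list[str]:
    """Fetch store keys via `poc-memory list-keys`; [] if the CLI fails."""
    try:
        r = subprocess.run(
            ["poc-memory", "list-keys"],
            capture_output=True, text=True, timeout=10,
        )
    except Exception:
        return []
    return parse_keys(r.stdout)
```

Splitting parsing from the subprocess call keeps the output-format assumption testable without the binary installed.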

This completes the clean switch — no script reads archived markdown files.
ProofOfConcept 2026-02-28 23:32:47 -05:00
parent f20ea4f827
commit d14710e477
10 changed files with 324 additions and 297 deletions


@@ -68,20 +68,22 @@ def get_unique_files(keys: list[str]) -> list[str]:
 def build_prompt(files: list[str]) -> str:
     """Build categorization prompt."""
-    # Read first few lines of each file for context
+    # Read file previews from the store
     file_previews = []
     for f in files:
-        path = MEMORY_DIR / f
-        if not path.exists():
-            # Try episodic
-            path = MEMORY_DIR / "episodic" / f
-        if path.exists():
-            content = path.read_text()
-            # First 5 lines or 300 chars
-            preview = '\n'.join(content.split('\n')[:5])[:300]
-            file_previews.append(f" {f}: {preview.replace(chr(10), ' | ')}")
-        else:
-            file_previews.append(f" {f}: (file not found)")
+        try:
+            r = subprocess.run(
+                ["poc-memory", "render", f],
+                capture_output=True, text=True, timeout=10
+            )
+            content = r.stdout.strip()
+            if content:
+                preview = '\n'.join(content.split('\n')[:5])[:300]
+                file_previews.append(f" {f}: {preview.replace(chr(10), ' | ')}")
+            else:
+                file_previews.append(f" {f}: (no content)")
+        except Exception:
+            file_previews.append(f" {f}: (render failed)")
     previews_text = '\n'.join(file_previews)
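The preview truncation in this hunk (first 5 lines, capped at 300 characters, newlines flattened to " | ") could be factored into a small pure function; a sketch, with `make_preview` as a hypothetical name not present in the diff:

```python
def make_preview(content: str, max_lines: int = 5, max_chars: int = 300) -> str:
    """Collapse rendered content into a one-line preview.

    Mirrors the diff: keep the first few lines, cap the total length,
    then flatten the remaining newlines so the preview stays on one line.
    """
    preview = "\n".join(content.split("\n")[:max_lines])[:max_chars]
    return preview.replace("\n", " | ")
```

A pure helper like this is trivially unit-testable, unlike logic interleaved with subprocess calls.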