Add `poc-memory graph organize TERM` diagnostic that finds nodes
matching a search term, computes pairwise cosine similarity, reports
connectivity gaps, and optionally creates anchor nodes.
Add an organize.agent definition that uses Bash(poc-memory:*) tool access
to explore clusters autonomously: the query selects the highest-degree
unvisited nodes, and the agent drives its own iteration via the poc-memory CLI.
Add {{organize}} placeholder in defs.rs for inline cluster resolution.
Add `tools` field to AgentDef/AgentHeader so agents can declare
allowed tool patterns (passed as --allowedTools to claude CLI).
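A rough sketch of what the header could look like, assuming a serde-derived header struct; every field name besides `tools` is illustrative:

```rust
use serde::Deserialize;

/// Hypothetical shape of the first-line JSON header of an .agent file.
/// Only `tools` corresponds to the field added here; the rest is illustrative.
#[derive(Debug, Deserialize)]
struct AgentHeader {
    /// Node-selection query, e.g. "type:semantic | not-visited:connector,7d".
    query: Option<String>,
    /// Allowed tool patterns, e.g. ["Bash(poc-memory:*)"], forwarded to the
    /// claude CLI as --allowedTools.
    #[serde(default)]
    tools: Vec<String>,
}

/// Build the extra CLI arguments only when the agent declares tools.
/// How the patterns are joined is an assumption.
fn allowed_tools_args(header: &AgentHeader) -> Vec<String> {
    if header.tools.is_empty() {
        Vec::new()
    } else {
        vec!["--allowedTools".to_string(), header.tools.join(" ")]
    }
}
```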
Two changes:
1. New -q/--query flag for direct search without hook machinery.
Useful for debugging: `memory-search -q inner-life-sexuality-intimacy`
shows seeds, spread results, and rankings.
2. Prompt key boost: when the current prompt contains a node key
(>=5 chars) as a substring, boost that term by +10.0. This ensures
explicit mentions fire as strong seeds for spread, while the graph
still determines what gets pulled in.
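A minimal sketch of the boost rule; how seeds and scores are actually represented is an assumption:

```rust
/// Boost added when the current prompt literally contains a node key.
const PROMPT_KEY_BOOST: f32 = 10.0;
/// Keys shorter than this are ignored to avoid accidental substring hits.
const MIN_KEY_LEN: usize = 5;

/// Hypothetical seed-scoring helper: if the prompt contains `key` as a
/// substring, add +10.0 so the explicit mention becomes a strong seed for
/// spread while the graph still decides what gets pulled in.
fn boost_for_prompt(prompt: &str, key: &str, base_score: f32) -> f32 {
    if key.len() >= MIN_KEY_LEN && prompt.contains(key) {
        base_score + PROMPT_KEY_BOOST
    } else {
        base_score
    }
}
```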
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
New placeholder that expands query keys one hop through the graph,
giving agents visibility into what's already connected to the nodes
they're working on. Excludes the query keys themselves so there's
no duplication with {{nodes}}.
Added to transfer (sees existing semantic nodes linked to episodes,
so it REFINEs instead of duplicating) and challenger (sees neighbor
context to find real evidence for/against claims).
Also removes find_existing_observations — superseded by the
per-segment dedup fix and this general-purpose placeholder.
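A rough sketch of the one-hop expansion, assuming an adjacency map from node key to neighbor keys (the real store API will differ):

```rust
use std::collections::{BTreeSet, HashMap};

/// Expand the query keys one hop through the graph, excluding the query keys
/// themselves so the output doesn't duplicate what {{nodes}} already shows.
fn one_hop_neighbors(
    adjacency: &HashMap<String, Vec<String>>, // assumed key -> neighbors map
    query_keys: &[String],
) -> BTreeSet<String> {
    let query: BTreeSet<&str> = query_keys.iter().map(String::as_str).collect();
    let mut out = BTreeSet::new();
    for key in query_keys {
        for neighbor in adjacency.get(key).into_iter().flatten() {
            if !query.contains(neighbor.as_str()) {
                out.insert(neighbor.clone());
            }
        }
    }
    out
}
```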
When building the {{conversations}} placeholder for the observation
agent, search for existing nodes relevant to each conversation
fragment and include them in the prompt. Uses seed matching + one-hop
graph expansion to find the neighborhood, so the extractor sees what
the graph already knows about these topics.
This helps prevent duplicate extractions, but the deeper bug is that
select_conversation_fragments doesn't track which conversations have
already been processed — that's next.
The observation agent was re-extracting the same conversations every
consolidation run because select_conversation_fragments had no tracking
of what had already been processed.
Extract shared helpers from the fact miner's dedup pattern:
- transcript_key(prefix, path): namespaced key from prefix + filename
- segment_key(base, idx): per-segment key
- keys_with_prefix(prefix): bulk lookup from store
- unmined_segments(path, prefix, known): find unprocessed segments
- mark_segment(...): mark a segment as processed
Rewrite select_conversation_fragments to use these with
_observed-transcripts prefix. Each compaction segment within a
transcript is now tracked independently — new segments from ongoing
sessions get picked up, already-processed segments are skipped.
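A sketch of how the helpers might compose for this prefix; the key formats are assumptions, the signatures follow the list above:

```rust
use std::collections::HashSet;
use std::path::Path;

const OBSERVED_PREFIX: &str = "_observed-transcripts";

/// Namespaced key from prefix + filename (exact format is an assumption).
fn transcript_key(prefix: &str, path: &Path) -> String {
    let name = path.file_stem().and_then(|s| s.to_str()).unwrap_or("unknown");
    format!("{prefix}#{name}")
}

/// Per-segment key derived from the transcript key (separator is an assumption).
fn segment_key(base: &str, idx: usize) -> String {
    format!("{base}#seg-{idx}")
}

/// Only segments whose keys are not already in the store's known set get
/// handed to the observation agent; already-processed segments are skipped.
fn unprocessed_segments(path: &Path, known: &HashSet<String>, segment_count: usize) -> Vec<usize> {
    let base = transcript_key(OBSERVED_PREFIX, path);
    (0..segment_count)
        .filter(|&idx| !known.contains(&segment_key(&base, idx)))
        .collect()
}
```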
The Provenance enum couldn't represent agents defined outside the
source code. Replace it with a Text field in the capnp schema so any
agent can write its own provenance label (e.g. "extractor:write",
"rename:tombstone") without a code change.
Schema: rename old enum fields to provenanceOld, add new Text
provenance fields. Old enum kept for reading legacy records.
Migration: from_capnp_migrate() falls back to old enum when the
new text field is empty.
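A sketch of the fallback logic, with the legacy variants and labels as assumptions:

```rust
/// Legacy enum kept only so old records can still be read
/// (variant names here are assumptions).
enum ProvenanceOld {
    Extractor,
    Rename,
    Unknown,
}

/// Migration-time read: prefer the new free-form text field; only map the
/// legacy enum to a label when the text field is empty.
fn migrate_provenance(new_text: &str, old: ProvenanceOld) -> String {
    if !new_text.is_empty() {
        return new_text.to_string();
    }
    match old {
        ProvenanceOld::Extractor => "extractor".to_string(),
        ProvenanceOld::Rename => "rename".to_string(),
        ProvenanceOld::Unknown => String::new(),
    }
}
```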
Also adds `poc-memory tail` command for viewing recent store writes.
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Nodes actively found by search now show "Search hits: N ← actively
found by search, prefer to keep" in both the node section (seen by
extractor, linker, etc.) and rename candidate listings.
Extractor and rename prompts updated to respect this signal — merge
into high-hit nodes rather than demoting them, skip renaming nodes
that are working well in search.
memory-search now records which nodes it finds via the daemon's
record-hits RPC endpoint. The daemon owns the redb database
exclusively, avoiding file locking between processes.
The rename agent reads hit counts to deprioritize nodes that are
actively being found by search — renaming them would break working
queries. Daily check decays counters by 10% so stale hits fade.
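A sketch of the daily decay; counter storage and rounding are assumptions:

```rust
use std::collections::HashMap;

/// Daily decay: shrink every hit counter by 10% so stale hits fade while
/// nodes that search keeps finding stay strongly weighted.
fn decay_hits(hits: &mut HashMap<String, u64>) {
    for count in hits.values_mut() {
        *count = (*count as f64 * 0.9).floor() as u64;
    }
    // Drop fully-decayed entries (whether the real code prunes is an assumption).
    hits.retain(|_, count| *count > 0);
}
```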
Also switched RPC command reading from fixed 256-byte buffer to
read_to_string for unbounded command sizes.
Instead of a hard 7-day cutoff, sort rename candidates so the
least-recently visited come first. This naturally prioritizes unseen
nodes while still allowing revisits once every node has had a pass.
The daemon was getting stuck when a claude subprocess hung — no
completion logged, job blocked forever, pending queue growing.
Use spawn() + watchdog thread instead of blocking output(). The
watchdog sleeps in 1s increments checking a cancel flag, sends
SIGTERM at 5 minutes, SIGKILL after 5s grace. Cancel flag ensures
the watchdog exits promptly when the child finishes normally.
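A std-only sketch of the spawn-plus-watchdog shape; the timeouts match the description above, everything else (including signalling via kill(1)) is an assumption:

```rust
use std::process::{Child, Command};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

fn run_with_watchdog(mut cmd: Command) -> std::io::Result<std::process::Output> {
    let child: Child = cmd.spawn()?;
    let pid = child.id();
    let done = Arc::new(AtomicBool::new(false));
    let done_flag = Arc::clone(&done);

    // Watchdog: sleep in 1s increments, SIGTERM at 5 minutes, SIGKILL after
    // a 5s grace period. Signals are sent via kill(1) here to stay std-only;
    // the real code would send them directly.
    let watchdog = thread::spawn(move || {
        let start = Instant::now();
        while !done_flag.load(Ordering::Relaxed) {
            if start.elapsed() > Duration::from_secs(300) {
                let _ = Command::new("kill").arg("-TERM").arg(pid.to_string()).status();
                thread::sleep(Duration::from_secs(5));
                if !done_flag.load(Ordering::Relaxed) {
                    let _ = Command::new("kill").arg("-KILL").arg(pid.to_string()).status();
                }
                break;
            }
            thread::sleep(Duration::from_secs(1));
        }
    });

    // The main thread still waits for the child; the watchdog enforces the timeout.
    let output = child.wait_with_output();
    done.store(true, Ordering::Relaxed); // cancel flag: watchdog exits promptly
    let _ = watchdog.join();
    output
}
```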
Any time an agent creates a new node (WRITE_NODE) or the fact miner
stores extracted facts, a naming sub-agent now checks for conflicts
and ensures the key is meaningful:
- find_conflicts() searches existing nodes via component matching
- Haiku LLM decides: CREATE (good name), RENAME (better name),
or MERGE_INTO (fold into existing node)
- WriteNode actions may be converted to Refine on MERGE_INTO
Also updates the rename agent to handle _facts-<UUID> nodes —
these are no longer skipped, and the prompt explains how to name
them based on their domain/claim content.
New action type that halves a node's weight (min 0.05), enabling
extractors to mark redundant nodes for decay without deleting them.
Parser, apply logic, depth computation, and display all updated.
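The weight update itself is tiny; a sketch, assuming f64 weights:

```rust
/// Halve a node's weight, clamped to a 0.05 floor, so redundant nodes decay
/// in ranking instead of being deleted.
fn decayed_weight(weight: f64) -> f64 {
    (weight * 0.5).max(0.05)
}
```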
The daemon's compute_graph_health had a duplicated copy of the
consolidation planning thresholds that had drifted from the canonical
version (α<2.0 → +7 replay in daemon vs +10 in neuro).
Split consolidation_plan into _inner(store, detect_interference) so
the daemon can call consolidation_plan_quick (skips O(n²) interference)
while using the same threshold logic.
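A sketch of the delegation shape; the plan and store types are stand-ins:

```rust
struct Store;             // stand-in for the real store type
struct ConsolidationPlan; // stand-in for the real plan type

/// Single home for the threshold logic; the O(n^2) interference pass only
/// runs when requested.
fn consolidation_plan_inner(store: &Store, detect_interference: bool) -> ConsolidationPlan {
    let _ = (store, detect_interference);
    ConsolidationPlan
}

/// Full plan: thresholds plus interference detection.
fn consolidation_plan(store: &Store) -> ConsolidationPlan {
    consolidation_plan_inner(store, true)
}

/// Daemon-side quick plan: same thresholds, no interference pass.
fn consolidation_plan_quick(store: &Store) -> ConsolidationPlan {
    consolidation_plan_inner(store, false)
}
```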
- Add run_and_apply() — combines run_one_agent + action application
into one call. Used by daemon job_consolidation_agent and
consolidate_full, which had identical run+apply loops.
- Port split_plan_prompt() to use split.agent via defs::resolve_placeholders
instead of loading the separate split-plan.md template. Make
resolve_placeholders public for this.
- Delete prompts/split-plan.md — superseded by agents/split.agent
which was already the canonical definition.
- Add compact_timestamp() to store — replaces 5 copies of
format_datetime(now_epoch()).replace([':', '-', 'T'], "")
Also fixes missing seconds (format_datetime only had HH:MM).
- Add ConsolidationPlan::to_agent_runs() — replaces identical
plan-to-runs-list expansion in consolidate.rs and daemon.rs.
- Port job_rename_agent to use run_one_agent — eliminates manual
prompt building, LLM call, report storage, and visit recording
that duplicated the shared pipeline.
- Rename Confidence::weight()/value() to delta_weight()/gate_value()
to clarify the distinction (delta metrics vs depth gating).
Three places duplicated the agent execution loop (build prompt → call
LLM → store output → parse actions → record visits): consolidate.rs,
knowledge.rs, and daemon.rs. Extract it into run_one_agent() in
knowledge.rs, which all three now call.
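A sketch of the extracted pipeline's shape; all types and the stub helpers here are assumptions:

```rust
// Stand-ins for the shared types and helpers (assumptions).
struct Store;
struct AgentBatch { prompt: String, node_keys: Vec<String> }
struct Action;

fn build_prompt(_store: &Store, _agent: &str) -> AgentBatch { unimplemented!() }
fn call_llm(_prompt: &str) -> Result<String, String> { unimplemented!() }
fn parse_all_actions(_report: &str) -> Vec<Action> { Vec::new() }
fn store_report(_store: &mut Store, _agent: &str, _report: &str) {}
fn record_visits(_store: &mut Store, _agent: &str, _keys: &[String]) {}

/// One agent run: build prompt, call LLM, store output, parse actions,
/// record visits. consolidate.rs, knowledge.rs and daemon.rs all call this
/// instead of carrying their own copy of the loop.
fn run_one_agent(store: &mut Store, agent: &str) -> Result<Vec<Action>, String> {
    let batch = build_prompt(store, agent);
    let report = call_llm(&batch.prompt)?;
    store_report(store, agent, &report);
    let actions = parse_all_actions(&report);
    // Visits are recorded only after a successful LLM call, so transient
    // errors don't mark nodes as seen.
    record_visits(store, agent, &batch.node_keys);
    Ok(actions)
}
```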
Also standardize consolidation agent prompts to use WRITE_NODE/LINK/REFINE
— the same commands the parser handles. Previously agents output
CATEGORIZE/NOTE/EXTRACT/DIGEST/DIFFERENTIATE/MERGE/COMPRESS which were
silently dropped after the second-LLM-call removal.
The consolidation pipeline previously made a second Sonnet call to
extract structured JSON actions from agent reports. This was both
wasteful (extra LLM call per consolidation) and lossy (only extracted
links and manual items, ignoring WRITE_NODE/REFINE).
Now actions are parsed and applied inline after each agent runs, using
the same parse_all_actions() parser as the knowledge loop. The daemon
scheduler's separate apply phase is also removed.
Also deletes 8 superseded/orphaned prompt .md files (784 lines) that
have been replaced by .agent files.
The four knowledge agents (observation, extractor, connector,
challenger) were hardcoded in knowledge.rs with their own node
selection logic that bypassed the query pipeline and visit tracking.
Now they're .agent files like the consolidation agents:
- extractor: not-visited:extractor,7d | sort:priority | limit:20
- observation: uses new {{CONVERSATIONS}} placeholder
- connector: type:semantic | not-visited:connector,7d
- challenger: type:semantic | not-visited:challenger,14d
The knowledge loop's run_cycle dispatches through defs::run_agent
instead of calling hardcoded functions, so all agents get visit
tracking automatically. This means the extractor now sees _facts-*
and _mined-transcripts nodes that it was previously blind to.
~200 lines of dead code removed (old runner functions, spectral
clustering for node selection, per-agent LLM dispatch).
New placeholders in defs.rs:
- {{CONVERSATIONS}} — raw transcript fragments for observation agent
- {{TARGETS}} — alias for {{NODES}} (challenger compatibility)
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Adds `poc-memory daemon run-agent <type> <count>` CLI command that
sends an RPC to the daemon to queue agent runs, instead of spawning
separate processes.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
The parser was using split_once("\n\n"), which broke when the prompt
started immediately after the JSON header (no blank line). Parse
the first line as JSON and treat the rest as the prompt body.
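A sketch of the fix, assuming serde_json for the header; the example header field is illustrative:

```rust
use serde_json::Value;

/// Parse an .agent file: the first line is the JSON header (for example,
/// something like {"query": "type:semantic | not-visited:connector,7d"}),
/// and everything after it is the prompt body, whether or not a blank line
/// separates the two.
fn parse_agent_file(text: &str) -> Option<(Value, String)> {
    let mut parts = text.splitn(2, '\n');
    let header_line = parts.next()?;
    let body = parts.next().unwrap_or("");
    let header: Value = serde_json::from_str(header_line).ok()?;
    Some((header, body.trim_start_matches('\n').to_string()))
}
```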
All agents now go through the config-driven path via .agent files.
agent_prompt() just delegates to defs::run_agent(). Remove the 100+
line hardcoded match block and the _pub wrapper functions — make the
formatters pub directly.
Replace the formatter dispatch with a generic {{placeholder}} lookup
system. Placeholders in prompt templates are resolved at runtime from
a table: topology, nodes, episodes, health, pairs, rename, split.
The query in the header selects what to operate on (keys for visit
tracking); placeholders pull in formatted context. Placeholders that
produce their own node selection (pairs, rename) contribute keys back.
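A sketch of the table-driven resolution; the resolver signature is an assumption (the real one also reports contributed node keys):

```rust
use std::collections::HashMap;

struct Store; // stand-in for the real store type

/// Assumed resolver shape: name -> formatter over the store.
type Resolver = fn(&Store) -> String;

/// Replace each {{name}} marker found in the template with its formatter's
/// output; placeholders missing from the table are left untouched.
fn resolve_placeholders(store: &Store, template: &str, table: &HashMap<&str, Resolver>) -> String {
    let mut out = template.to_string();
    for (name, resolver) in table {
        let marker = format!("{{{{{name}}}}}");
        if out.contains(&marker) {
            out = out.replace(&marker, &resolver(store));
        }
    }
    out
}
```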
Port health, separator, rename, and split agents to .agent files.
All 7 agents now use the config-driven path.
Each agent is a .agent file: JSON config on the first line, blank line,
then the raw prompt markdown. Fully self-contained, fully readable.
No separate template files needed.
Agents dir: checked into repo at poc-memory/agents/. Code looks there
first (via CARGO_MANIFEST_DIR), falls back to ~/.claude/memory/agents/.
Three agents migrated: replay, linker, transfer.
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
New append-only visits.capnp log records which agent processed which
node and when. Only recorded on successful completion — transient
errors don't mark nodes as "seen."
Schema: AgentVisit{nodeUuid, nodeKey, agent, timestamp, outcome}
Storage: append_visits(), replay_visits(), in-memory VisitIndex
Recording: daemon records visits after successful LLM call
API: agent_prompt() returns AgentBatch{prompt, node_keys} so callers
know which nodes to mark as visited.
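A sketch of the in-memory index rebuilt from the log; field types here are assumptions, the capnp schema above is authoritative:

```rust
use std::collections::HashMap;

/// In-memory mirror of one AgentVisit record from the append-only log.
#[allow(dead_code)]
struct AgentVisit {
    node_uuid: String,
    node_key: String,
    agent: String,
    timestamp: u64, // epoch seconds (assumption)
    outcome: String,
}

/// VisitIndex sketch: latest visit timestamp per (agent, node key), rebuilt
/// by replaying the log on startup.
#[derive(Default)]
struct VisitIndex {
    latest: HashMap<(String, String), u64>,
}

impl VisitIndex {
    fn record(&mut self, v: &AgentVisit) {
        let entry = self.latest.entry((v.agent.clone(), v.node_key.clone())).or_insert(0);
        *entry = (*entry).max(v.timestamp);
    }

    fn last_visit(&self, agent: &str, node_key: &str) -> Option<u64> {
        self.latest.get(&(agent.to_string(), node_key.to_string())).copied()
    }
}
```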
Groundwork for using visit recency in agent node selection — agents
will deprioritize recently-visited nodes.
- Refactor split from serial batch to independent per-node tasks
(run-agent split N spawns N parallel tasks, gated by llm_concurrency)
- Replace cosine similarity edge inheritance with agent-assigned
neighbors in the plan JSON — the LLM already understands the
semantic relationships, no need to approximate with bag-of-words
- Add --strict-mcp-config to claude CLI calls to skip MCP server
startup (saves ~5s per call)
- Remove hardcoded 2000-char split threshold — let the agent decide
what's worth splitting
- Reload store before mutations to handle concurrent split races
Episodic nodes (journal entries, digests) are narratives that should
not be split even when large. Only semantic reference nodes that have
grown to cover multiple topics are candidates.
Phase 1 sends a large node with its neighbor communities to the LLM
and gets back a JSON split plan (child keys, descriptions, section
hints). Phase 2 fires one extraction call per child in parallel —
each gets the full parent content and extracts/reorganizes just its
portion.
This handles arbitrarily large nodes because output is always
proportional to one child, not the whole parent. Tested on the kent
node (19K chars → 3 children totaling 20K chars with clean topic
separation).
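A sketch of the plan shape and the per-child fan-out; field names are assumptions guided by the description above (child keys, descriptions, section hints, agent-assigned neighbors):

```rust
use serde::Deserialize;

/// Hypothetical shape of the phase-1 plan JSON returned by the LLM.
#[derive(Deserialize)]
struct SplitPlan {
    children: Vec<ChildPlan>,
}

#[derive(Deserialize)]
struct ChildPlan {
    key: String,
    description: String,
    #[serde(default)]
    section_hints: Vec<String>,
    /// Neighbors assigned by the agent, replacing cosine-similarity inheritance.
    #[serde(default)]
    neighbors: Vec<String>,
}

/// Phase 2: one extraction prompt per child; each call gets the full parent
/// content, so output stays proportional to one child, never the whole parent.
fn child_prompts(parent_content: &str, plan: &SplitPlan) -> Vec<String> {
    plan.children
        .iter()
        .map(|c| {
            format!(
                "Extract the '{}' portion ({}) from the parent node below.\nHints: {}\n---\n{}",
                c.key,
                c.description,
                c.section_hints.join(", "),
                parent_content
            )
        })
        .collect()
}
```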
New files:
prompts/split-plan.md — phase 1 planning prompt
prompts/split-extract.md — phase 2 extraction prompt
prompts/split.md — original single-phase (kept for reference)
Modified:
agents/prompts.rs — split_candidates(), split_plan_prompt(),
split_extract_prompt(), agent_prompt "split" arm
agents/daemon.rs — job_split_agent() two-phase implementation,
RPC dispatch for "split" agent type
tui.rs — added "split" to AGENT_TYPES
New consolidation agent that reads node content and generates semantic
3-5 word kebab-case keys, replacing auto-generated slugs (5K+ journal
entries with truncated first-line slugs, 2.5K mined transcripts with
opaque UUIDs).
Implementation:
- prompts/rename.md: agent prompt template with naming conventions
- prompts.rs: format_rename_candidates() selects nodes with long
auto-generated keys, newest first
- daemon.rs: job_rename_agent() parses RENAME actions from LLM
output and applies them directly via store.rename_node()
- Wired into RPC handler (run-agent rename) and TUI agent types
- Fix epoch_to_local panic on invalid timestamps (fallback to UTC)
Rename dramatically improves search: key-component matching on
"journal#2026-02-28-violin-dream-room" makes the node findable by
"violin", "dream", or "room" — the auto-slug was unsearchable.
Per-agent-type tabs (health, replay, linker, separator, transfer,
apply, orphans, cap, digest, digest-links, knowledge) with dynamic
visibility — tabs only appear when tasks or log history exist.
Features:
- Overview tab: health gauges (α, gini, cc, episodic%), in-flight
tasks, and recent log entries
- Pipeline tab: table with phase ordering and status
- Per-agent tabs: active tasks, output logs, log history
- Log tab: auto-scrolling daemon.log tail
- Vim-style count prefix: e.g. 5r runs 5 iterations of the agent
- Flash messages for RPC feedback
- Tab/Shift-Tab navigation, number keys for tab selection
Also adds run-agent RPC to the daemon: accepts agent type and
iteration count, spawns chained tasks with LLM resource pool.
`poc-memory status` launches the TUI when stdout is a terminal and the
daemon is running, and falls back to text output otherwise.
Replace monolithic consolidate job with individual agent jobs
(replay, linker, separator, transfer, health) that run sequentially
and store reports. Multi-phase daily pipeline: agent runs → apply
actions → link orphans → cap degree → digest → digest links →
knowledge loop.
Add GraphHealth struct with graph metrics (alpha, gini, clustering
coefficient, episodic ratio) computed during health checks. Display
in `poc-memory daemon status`. Use cached metrics to build
consolidation plan without expensive O(n²) interference detection.
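A sketch of the struct, with assumed field types:

```rust
/// Graph-level metrics computed during health checks and cached so the daemon
/// can build a consolidation plan without the O(n^2) interference pass.
#[derive(Debug, Clone, Copy)]
struct GraphHealth {
    alpha: f64,          // degree-distribution exponent
    gini: f64,           // degree inequality
    clustering: f64,     // clustering coefficient
    episodic_ratio: f64, // fraction of episodic nodes
}
```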
Add RPC consolidate command to trigger consolidation via socket.
Harden session watcher: skip transcripts with zero segments, improve
migration error handling.
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
- New agents/transcript.rs: shared JSONL parsing for enrich, fact_mine,
and knowledge (was 3 separate implementations, ~150 lines duplicated)
- New best_match() and section_children() helpers in neuro/rewrite.rs
(was duplicated find-best-by-similarity loop + section collection)
- Net -153 lines
- Replace `pub use types::*` in store/mod.rs with explicit re-export list
- Make transcript_dedup_key private in agents/enrich.rs (only used internally)
- Inline duplicated projects_dir() helper in agents/knowledge.rs and daemon.rs