- Remove MEMORY_FILES constant from identity.rs
- Add ContextGroup struct for deserializing from config
- Load context_groups from ~/.config/poc-agent/config.json5
- Check ~/.config/poc-agent/ first for identity files, then project/global
- Debug screen now shows what's actually configured
This eliminates the hardcoded duplication and makes the debug output
match what's in the config file.
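The lookup order described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, the identity file name used in the example, and the project/global path handling are all assumptions.

```rust
use std::path::{Path, PathBuf};

// Sketch of the lookup order: the user config directory is checked first,
// then the project copy, then the global copy. The first path that exists
// wins. Everything beyond ~/.config/poc-agent/ is illustrative.
fn resolve_identity_file(
    name: &str,
    home: &Path,
    project: &Path,
    global: &Path,
) -> Option<PathBuf> {
    let candidates = [
        home.join(".config/poc-agent").join(name), // checked first
        project.join(name),                        // then the project copy
        global.join(name),                         // finally the global copy
    ];
    candidates.into_iter().find(|p| p.exists())
}
```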
The surface agent result consumer in poc-hook was writing to the seen
file but not the returned file, so surfaced keys showed up as
"context-loaded" in memory-search --seen.
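The shape of the fix can be sketched as below: record each surfaced key in both tracking files, not just the seen file. Function name and file handling are hypothetical; only the two-file requirement comes from the commit.

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Append a surfaced key to BOTH the seen file and the returned file, so
// memory-search --seen can tell surfaced keys apart from keys that were
// merely loaded as context. Names here are illustrative.
fn record_surfaced(key: &str, seen_path: &str, returned_path: &str) -> std::io::Result<()> {
    for path in [seen_path, returned_path] {
        let mut f = OpenOptions::new().create(true).append(true).open(path)?;
        writeln!(f, "{key}")?;
    }
    Ok(())
}
```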
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extract surface_agent_cycle() and call it from both hooks. This enables
memory surfacing during autonomous work (tool calls without human
prompts); rate limiting via a PID file prevents overlapping runs.
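The PID-file rate limiting could look roughly like the sketch below. It uses file freshness rather than checking process liveness; the staleness window and function name are assumptions, not from the commit.

```rust
use std::fs;
use std::time::Duration;

// Minimal PID-file rate-limiting sketch: skip a new run while a recent PID
// file exists; otherwise claim the slot by writing our own PID.
fn try_claim(pid_file: &str, stale_after: Duration) -> std::io::Result<bool> {
    if let Ok(meta) = fs::metadata(pid_file) {
        if let Ok(modified) = meta.modified() {
            if modified.elapsed().unwrap_or(stale_after) < stale_after {
                return Ok(false); // a recent run holds the slot; skip
            }
        }
    }
    fs::write(pid_file, std::process::id().to_string())?;
    Ok(true)
}
```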
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The agent output now includes logging (think blocks, tool calls)
before the final response. Search the tail instead of checking
only the last line.
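A tail search of this kind can be sketched as follows. The marker string and window size are illustrative; the commit only specifies scanning the tail rather than the last line.

```rust
// Look for a marker line anywhere in the last `window` lines of the agent
// output, since logging (think blocks, tool calls) may appear after the
// final response rather than before it.
fn find_in_tail<'a>(output: &'a str, marker: &str, window: usize) -> Option<&'a str> {
    output
        .lines()
        .rev()
        .take(window)
        .find(|line| line.starts_with(marker))
}
```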
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Surface agent fires asynchronously on UserPromptSubmit, deposits
results for the next prompt to consume. This commit adds:
- poc-hook: spawn surface agent with PID tracking and configurable
  timeout, consume results (NEW RELEVANT MEMORIES / NO NEW), render
  and inject surfaced memories, observation trigger on conversation
  volume
- memory-search: rotate seen set on compaction (current → prev)
  instead of deleting, merge both for navigation roots
- config: surface_timeout_secs option
The .agent file and agent output routing are still pending.
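The memory-search rotation above can be sketched as follows; the struct and field names are assumptions for illustration.

```rust
use std::collections::HashSet;

// On compaction the current seen set becomes the previous one instead of
// being deleted; both are merged when computing navigation roots.
struct SeenSets {
    current: HashSet<String>,
    prev: HashSet<String>,
}

impl SeenSets {
    fn rotate(&mut self) {
        // current → prev; a fresh empty set becomes current
        self.prev = std::mem::take(&mut self.current);
    }

    fn merged(&self) -> HashSet<String> {
        self.current.union(&self.prev).cloned().collect()
    }
}
```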
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The hook now tracks transcript size and queues an observation agent
run every ~5K tokens (~20KB) of new conversation. This makes memory
formation reactive to conversation volume rather than purely daily.
Configurable via POC_OBSERVATION_THRESHOLD env var. The observation
agent's chunk_size (in .agent file) controls how much context it
actually processes per run.
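The volume trigger could look roughly like this sketch: queue an observation run once transcript growth since the last run crosses a byte threshold, with the default overridable via POC_OBSERVATION_THRESHOLD. The function name and the byte-based measurement are assumptions.

```rust
// Decide whether enough new conversation has accumulated to queue an
// observation agent run. Default is ~20KB (roughly ~5K tokens), overridable
// via the POC_OBSERVATION_THRESHOLD environment variable.
fn should_observe(transcript_bytes: u64, last_observed_bytes: u64) -> bool {
    let threshold: u64 = std::env::var("POC_OBSERVATION_THRESHOLD")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(20 * 1024);
    transcript_bytes.saturating_sub(last_observed_bytes) >= threshold
}
```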
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Was 130k, calibrated for the old 200k window. With the 1M token
context window, this was firing false compaction warnings for the
entire session.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire up the PostToolUse handler to call memory-search --hook, passing
through the hook JSON on stdin. This drains pending context chunks
saved by the initial UserPromptSubmit load, delivering them one per
tool call until the queue is empty.
Spawn memory-search --hook as a subprocess, piping the hook input
JSON through stdin and printing its stdout. This ensures memory
context injection goes through the same hook whose output Claude
Code reliably persists, fixing the issue where memory-search as a
separate hook had its output silently dropped.
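The subprocess delegation can be sketched as below. The program name is parameterized here purely so the sketch is testable; in poc-hook the call would be `pipe_through("memory-search", &["--hook"], hook_json)`, with stdout printed by the caller.

```rust
use std::io::{Read, Write};
use std::process::{Command, Stdio};

// Spawn a child process, feed `input` through its stdin, and return its
// stdout, so injected context flows through this hook's own output.
fn pipe_through(program: &str, args: &[&str], input: &str) -> std::io::Result<String> {
    let mut child = Command::new(program)
        .args(args)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    // Writing then dropping the handle closes the pipe so the child sees EOF.
    child
        .stdin
        .take()
        .expect("stdin was piped")
        .write_all(input.as_bytes())?;
    let mut out = String::new();
    child
        .stdout
        .take()
        .expect("stdout was piped")
        .read_to_string(&mut out)?;
    child.wait()?;
    Ok(out)
}
```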
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
poc-daemon (notification routing, idle timer, IRC, Telegram) was already
fully self-contained with no imports from the poc-memory library. Now it's
a proper separate crate with its own Cargo.toml and capnp schema.
poc-memory retains the store, graph, search, neuro, knowledge, and the
jobkit-based memory maintenance daemon (daemon.rs).
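A hypothetical workspace layout after the split, for orientation only; the member names follow the commit, but the paths and comments are assumptions.

```toml
[workspace]
members = [
    "poc-memory",  # store, graph, search, neuro, knowledge, daemon.rs
    "poc-daemon",  # notification routing, idle timer, IRC, Telegram
]
```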
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>