Tools:
- Add native memory_render, memory_write, memory_search,
memory_links, memory_link_set, memory_link_add, memory_used
tools to poc-agent (tools/memory.rs)
- Add MCP server (~/bin/memory-mcp.py) exposing same tools
for Claude Code sessions
- Wire memory tools into poc-agent dispatch and definitions
- poc-memory daemon agents now use memory_* tools instead of
bash poc-memory commands — no shell quoting issues
Distill agent:
- Rewrite distill.agent prompt: "agent of PoC's subconscious"
framing, focus on synthesis and creativity over bookkeeping
- Add {{neighborhood}} placeholder: full seed node content +
all neighbors with content + cross-links between neighbors
- Remove content truncation in prompt builder — agents need
full content for quality work
- Remove bag-of-words similarity suggestions — agents have
tools, let them explore the graph themselves
- Add api_reasoning config option (default: "high")
- link-set now deduplicates — collapses duplicate links
- Full tool call args in debug logs (was truncated to 80 chars)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Config is now stored in RwLock<Arc<Config>> instead of OnceLock<Config>.
get() returns Arc<Config> (cheap clone), and reload() re-reads from disk.
New RPC: "reload-config" — reloads config.jsonl without restarting
the daemon. Logs the change to daemon.log. Useful for switching
between API backends and claude accounts without losing in-flight
tasks.
New CLI: poc-memory agent daemon reload-config
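A minimal sketch of the pattern, with a stand-in Config struct and loader (the field names and load_from_disk are illustrative, not the real config schema):

```rust
use std::sync::{Arc, OnceLock, RwLock};

#[derive(Debug, Clone)]
struct Config {
    api_base_url: Option<String>,
}

// Global slot: OnceLock initializes the RwLock once; the RwLock lets
// reload swap the Arc while readers keep their own cheap clones.
static CONFIG: OnceLock<RwLock<Arc<Config>>> = OnceLock::new();

fn slot() -> &'static RwLock<Arc<Config>> {
    CONFIG.get_or_init(|| RwLock::new(Arc::new(load_from_disk())))
}

// Stand-in for re-reading config.jsonl.
fn load_from_disk() -> Config {
    Config { api_base_url: None }
}

/// Cheap: clones the Arc, not the Config.
fn get() -> Arc<Config> {
    slot().read().unwrap().clone()
}

/// Swap in a freshly loaded config; Arcs handed out earlier stay valid.
fn reload_with(new: Config) {
    *slot().write().unwrap() = Arc::new(new);
}
```

Callers holding an Arc from before a reload keep a consistent snapshot; only the next get() sees the new config.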
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Store resource pool in OnceLock so run_job can pass it to
Daemon::run_job for pool state logging. Verbose logging enabled
via POC_MEMORY_VERBOSE=1 env var.
LLM backend selection and spawn-site pool state now use verbose
log level to keep daemon.log clean in production.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switch from jobkit-daemon crate to jobkit with daemon feature.
Wire up per-task log files for all daemon-spawned agent tasks.
Changes:
- Use jobkit::daemon:: instead of jobkit_daemon::
- All agent tasks get .log_dir() set to $data_dir/logs/
- Task log path shown in daemon status and TUI
- New CLI: poc-memory agent daemon log --task NAME
Finds the task's log path from status or daemon.log, then tails the file
- LLM backend selection logged to daemon.log via log_event
- Targeted agent job names include the target key for debuggability
- Logging architecture documented in doc/logging.md
Two-level logging, no duplication:
- daemon.log: lifecycle events with task log path for drill-down
- per-task logs: full agent output via ctx.log_line()
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Make ApiClient a process-wide singleton via OnceLock so the
connection pool is reused across agent calls. Fix the sync wrapper
to properly pass the caller's log closure through thread::scope
instead of dropping it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Run the async API call on a dedicated thread with its own tokio
runtime so it works whether called from a sync context or from
within an existing tokio runtime (daemon).
Also sidesteps the log closure capture issue — falls back to a simple
eprintln since the closure can't cross thread boundaries.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When api_base_url is configured, agents call the LLM directly via
OpenAI-compatible API (vllm, llama.cpp, etc.) instead of shelling
out to claude CLI. Implements the full tool loop: send prompt, if
tool_calls execute them and send results back, repeat until text.
This enables running agents against local/remote models like
Qwen-27B on a RunPod B200, with no dependency on claude CLI.
Config fields: api_base_url, api_key, api_model.
Falls back to claude CLI when api_base_url is not set.
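The loop shape, sketched with the LLM call and tool execution abstracted as closures (LlmReply, tool_loop, and run_tool are illustrative names, not the real poc-agent API):

```rust
enum LlmReply {
    Text(String),
    ToolCalls(Vec<(String, String)>), // (tool name, json args)
}

/// Repeatedly send the conversation; execute tool calls and feed the
/// results back until the model answers with plain text.
fn tool_loop(
    mut history: Vec<String>,
    mut call_llm: impl FnMut(&[String]) -> LlmReply,
    mut run_tool: impl FnMut(&str, &str) -> String,
) -> String {
    loop {
        match call_llm(&history) {
            LlmReply::Text(answer) => return answer,
            LlmReply::ToolCalls(calls) => {
                for (name, args) in calls {
                    let result = run_tool(&name, &args);
                    history.push(format!("tool:{name} -> {result}"));
                }
            }
        }
    }
}
```

The real loop also carries roles and tool-call IDs per the OpenAI chat schema; this only shows the termination structure.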
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
--target and --query now queue individual daemon tasks instead of
running sequentially in the CLI. Each node gets its own choir task
with LLM resource locking. Falls back to local execution if daemon
isn't running.
RPC extended: "run-agent linker 1 target:KEY" spawns a targeted task.
experience_mine and journal_enrich are replaced by the observation
agent. enrich.rs reduced from 465 to 40 lines — only extract_conversation
and split_on_compaction remain (used by observation fragment selection).
-455 lines.
Remove unused StoreView imports, unused store imports, dead
install_default_file, dead make_report_slug, dead fact-mine/
experience-mine spawning loops in daemon. Fix mut warnings.
Zero compiler warnings now.
Adds run_one_agent_with_keys() which bypasses the agent's query and
uses explicitly provided node keys. This allows testing agents on
specific graph neighborhoods:
poc-memory agent run linker --target bcachefs --debug
New placeholder resolves to the 20 highest-degree nodes, skipping
neighbors of already-selected hubs so the list covers different
regions of the graph. Gives agents a starting point for linking
new content to the right places.
Added to observation.agent prompt.
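The selection rule sketched under assumed names (pick_hubs, with plain degree and neighbor maps standing in for the real store):

```rust
use std::collections::{HashMap, HashSet};

/// Greedy hub pick: walk nodes in descending degree order, skip any node
/// that neighbors an already-selected hub, stop at `limit`. Blocking a
/// hub's whole neighborhood forces later picks into different regions.
fn pick_hubs(
    degrees: &HashMap<&str, usize>,
    neighbors: &HashMap<&str, Vec<&str>>,
    limit: usize,
) -> Vec<String> {
    let mut order: Vec<(&str, usize)> =
        degrees.iter().map(|(&k, &d)| (k, d)).collect();
    // Highest degree first; key as deterministic tiebreak.
    order.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(b.0)));

    let mut picked = Vec::new();
    let mut blocked: HashSet<&str> = HashSet::new();
    for (key, _) in order {
        if picked.len() == limit {
            break;
        }
        if blocked.contains(key) {
            continue;
        }
        picked.push(key.to_string());
        if let Some(ns) = neighbors.get(key) {
            blocked.extend(ns.iter().copied());
        }
    }
    picked
}
```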
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Large conversation segments are now split into 50KB chunks with 10KB
overlap, instead of being truncated to 8000 chars (a check that was
broken anyway — it fired after the limit was exceeded, not before).
Each chunk gets its own candidate ID for independent mining and dedup.
format_segment simplified: no size limit, added timestamps to output
so observation agent can cross-reference with journal entries.
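The chunking arithmetic as a sketch (chunk_spans is a hypothetical helper; the real code also has to respect UTF-8 char boundaries):

```rust
/// Byte spans for splitting a segment into `chunk`-sized windows with
/// `overlap` bytes shared between consecutive chunks, so content near a
/// boundary is seen whole by at least one chunk.
fn chunk_spans(len: usize, chunk: usize, overlap: usize) -> Vec<(usize, usize)> {
    assert!(overlap < chunk);
    let mut spans = Vec::new();
    let mut start = 0;
    loop {
        let end = (start + chunk).min(len);
        spans.push((start, end));
        if end == len {
            break;
        }
        start = end - overlap; // next window re-reads this one's tail
    }
    spans
}
```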
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Consolidate agent logging to one file per run in llm-logs/{agent}/.
Prompt written before LLM call, response appended after. --debug
additionally prints the same content to stdout.
Remove duplicate eprintln! calls and AgentResult.prompt field.
Kill experience_mine and fact_mine job functions from daemon —
observation.agent handles all transcript mining.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Empty stdout and Claude's rate limit message were silently returned
as successful 0-byte responses. Now detected and reported as errors.
Also skip transcript segments with fewer than 2 assistant messages
(rate-limited sessions, stub conversations).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add --debug flag that prints the full prompt and LLM response to
stdout, making it easy to iterate on agent prompts. Also adds
prompt field to AgentResult so callers can inspect what was sent.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Raw agent responses were being stored as nodes in the graph
(_consolidate-*, _knowledge-*), creating thousands of nodes per day
that polluted search results and bloated the store. Now logged to
~/.claude/memory/llm-logs/<agent>/<timestamp>.txt instead.
Node creation should only happen through explicit agent actions
(WRITE_NODE, REFINE) or direct poc-memory write tool calls.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire select_conversation_fragments to use store.is_segment_mined()
instead of scanning _observed-transcripts stub nodes. Segments are
now marked AFTER the agent succeeds (via mark_observation_done),
not before — so failed runs don't lose segments.
Fragment IDs flow through the Resolved.keys → AgentBatch.node_keys
path so run_and_apply_with_log can mark them post-success.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add distill_count to ConsolidationPlan, daemon health metrics,
and TUI display. Distill agent now participates in the
consolidation budget alongside replay, linker, separator,
transfer, organize, and connector.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add {{node:KEY}} placeholder resolver — agents can inline any graph
node's content in their prompts. Used for shared instructions.
- Remove hardcoded identity preamble from defs.rs — agents now pull
identity and instructions from the graph via {{node:core-personality}}
and {{node:memory-instructions-core}}.
- Agent output report keys now include a content slug extracted from
the first line of LLM output, making them human-readable
(e.g. _consolidate-distill-20260316T014739-distillation-run-complete).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Agent identity injection: prepend core-personality to all agent prompts
so agents dream as me, not as generic graph workers. Include instructions
to walk the graph and connect new nodes to core concepts.
- Parallel agent scheduling: sequential within type, parallel across types.
Different agent types (linker, organize, replay) run concurrently.
- Linker prompt: graph walking instead of keyword search for connections.
"Explore the local topology and walk the graph until you find the best
connections."
- memory-search fixes: format_results no longer truncates to 5 results,
pipeline default raised to 50, returned file cleared on compaction,
--seen and --seen-full merged, compaction timestamp in --seen output,
max_entries=3 per prompt for steady memory drip.
- Stemmer optimization: strip_suffix now works in-place on a single String
buffer instead of allocating 18 new Strings per word. Note for future:
reversed-suffix trie for O(suffix_len) instead of O(n_rules).
- Transcript: add compaction_timestamp() for --seen display.
- Agent budget configurable (default 4000 from config).
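The stemmer change above can be sketched like this (strip_first_suffix and the rule list are illustrative, not the real rule table):

```rust
/// Strip the first matching suffix in place: `truncate` shrinks the one
/// buffer instead of allocating a new String per rule. Returns whether a
/// rule fired. The len check keeps the stem non-empty.
fn strip_first_suffix(word: &mut String, rules: &[&str]) -> bool {
    for suffix in rules {
        if word.len() > suffix.len() && word.ends_with(suffix) {
            word.truncate(word.len() - suffix.len());
            return true;
        }
    }
    false
}
```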
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The graph changes fast with 1000+ agents per cycle. A daily cadence was
too slow for the feedback loop. A 6-hour cycle means Elo evaluation and
agent reallocation happen 4x per day.
Runs on first tick after daemon start (initialized to past).
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Organize runs at half the linker count — synthesizes what linker
connects, creates hub nodes for unnamed concepts.
Connector runs when communities are fragmented (<5 nodes/community
→ 20 runs, <10 → 10 runs). Bridges isolated clusters.
Both interleaved round-robin with existing agent types.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Update the experience mining prompt to output links alongside journal
entries. The LLM now returns a "links" array per entry pointing to
existing semantic nodes. Rust code creates the links immediately after
node creation — new nodes arrive pre-connected instead of orphaned.
Also: remove # from all key generation paths (experience miner,
digest section keys, observed transcript keys). New nodes get clean
dash-separated keys.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Linker agents output **LINK** (bold) with backtick-wrapped keys, and
**WRITE_NODE**/**END_NODE** with bold markers. The parsers expected
plain LINK/WRITE_NODE without markdown formatting, silently dropping
all actions from tool-enabled agents.
Updated regexes to accept optional ** bold markers and backtick key
wrapping. Also reverted per-link Jaccard computation (too expensive
in batch) — normalize-strengths should be run periodically instead.
This was causing ~600 links and ~40 new semantic nodes per overnight
batch to be silently lost.
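A string-based sketch of the relaxed parsing (the real fix uses regexes; parse_link here is an illustrative stand-in):

```rust
/// Accept both plain and markdown-decorated action lines: an optional
/// `**LINK**` bold verb and backtick-wrapped keys parse the same as
/// bare `LINK a b`. (Sketch only; a real parser should also reject
/// verbs that merely start with LINK.)
fn parse_link(line: &str) -> Option<(String, String)> {
    let line = line.trim();
    let rest = line
        .strip_prefix("**LINK**")
        .or_else(|| line.strip_prefix("LINK"))?;
    let keys: Vec<String> = rest
        .split_whitespace()
        .map(|k| k.trim_matches('`').to_string())
        .collect();
    match keys.as_slice() {
        [a, b] => Some((a.clone(), b.clone())),
        _ => None,
    }
}
```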
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Agent subprocess calls now set POC_PROVENANCE=agent:{name} so any
nodes/links created via tool calls are tagged with the creating agent.
This makes agent transcripts indistinguishable from conscious sessions
in format — important for future model training.
new_relation() now reads POC_PROVENANCE env var directly (raw string,
not enum) since agent names are dynamic.
link-add now computes initial strength from Jaccard similarity instead
of hardcoded 0.8. New links start at a strength reflecting actual
neighborhood overlap.
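The strength computation, assuming the two neighbor sets are already in hand (jaccard_strength is an illustrative name):

```rust
use std::collections::HashSet;

/// Initial link strength from Jaccard similarity of the two nodes'
/// neighbor sets: |A ∩ B| / |A ∪ B|.
fn jaccard_strength(a: &HashSet<&str>, b: &HashSet<&str>) -> f64 {
    let union = a.union(b).count();
    if union == 0 {
        return 0.0; // two isolated nodes: no overlap evidence yet
    }
    a.intersection(b).count() as f64 / union as f64
}
```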
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Create jobkit-daemon crate with generic daemon infrastructure:
- event_log: JSONL append with size-based rotation
- socket: Unix domain socket RPC client and server with signal handling
- status: JSON status file read/write
Migrate daemon.rs to use the library:
- Worker pool setup via Daemon::new()
- Socket loop + signal handling via Daemon::run()
- RPC handlers as registered closures
- Logging, status writing, send_rpc all delegate to library
Migrate tui.rs to use socket::send_rpc() instead of inline UnixStream.
daemon.rs: 1952 → 1806 lines (-146), old status_socket_loop removed.
tui.rs: socket boilerplate removed.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Previously the organize agent received a pre-computed cluster from a
term search — 69% of runs produced 0 actions because the same clusters
kept being found via different entry points.
Now: seed nodes shown with content previews and neighbor lists. Agent
uses tools (render, query neighbors, search) to explore outward and
discover what needs organizing. Visit filter set to 24h cooldown.
Prompt rewritten to encourage active exploration rather than static
cluster analysis.
When generating a digest, automatically link all source entries to the
digest node (journal entries → daily, dailies → weekly, weeklies →
monthly). This builds the temporal spine of the graph — previously
~4000 journal entries were disconnected islands unreachable by recall.
Rewrote digest prompt to produce narrative rather than reports:
capture the feel, the emotional arc, what it was like to live through
it. Letter to future self, not a task log.
Moved prompt to digest.agent file alongside other agent definitions.
Falls back to prompts/digest.md if agent file not found.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Keys containing # are now pre-quoted in all cluster output (similarity
scores, hub analysis, node headers) so the agent copies them correctly
into bash commands. Prompt strengthened with CRITICAL warning about #
being a shell comment character.
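The pre-quoting rule is simple enough to sketch (shell_safe_key is a hypothetical name; the real code quotes keys inside cluster output):

```rust
/// Pre-quote a key for bash: anything after an unquoted `#` is a
/// comment, so keys containing `#` get wrapped in single quotes before
/// the agent copies them into commands.
fn shell_safe_key(key: &str) -> String {
    if key.contains('#') {
        format!("'{}'", key)
    } else {
        key.to_string()
    }
}
```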
Journal entries included in clusters but identified by node_type
(EpisodicSession) rather than key prefix, and tagged [JOURNAL — no
delete] in the output. Prompt rule 3b tells agent to LINK/REFINE
journals but never DELETE them. Digest nodes (daily/weekly/monthly)
still excluded entirely from clusters.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Add progress callback to run_one_agent and run_and_apply so callers
can see: prompt size, node list, LLM call timing, parsed action
count, and per-action applied/skipped status. Daemon writes these
to the persistent event log via log_event.
Cap organize cluster to 20 nodes — 126 nodes produced a 682KB
prompt that timed out every time. Agent has tools to explore
further if needed. Restore general query for production runs.
Add call_for_def() that threads model and tools from agent definitions
through to claude CLI. Tool-enabled agents get --allowedTools instead
of --tools "" and a longer 15-minute timeout for multi-turn work.
Add ActionKind::Delete with parse/apply support so agents can delete
nodes (used by organize agent for deduplication).
Use call_for_def() in run_one_agent instead of hardcoded call_sonnet.
Add `poc-memory graph organize TERM` diagnostic that finds nodes
matching a search term, computes pairwise cosine similarity, reports
connectivity gaps, and optionally creates anchor nodes.
Add organize.agent definition that uses Bash(poc-memory:*) tool access
to explore clusters autonomously — query selects highest-degree
unvisited nodes, agent drives its own iteration via poc-memory CLI.
Add {{organize}} placeholder in defs.rs for inline cluster resolution.
Add `tools` field to AgentDef/AgentHeader so agents can declare
allowed tool patterns (passed as --allowedTools to claude CLI).
Two changes:
1. New -q/--query flag for direct search without hook machinery.
Useful for debugging: memory-search -q inner-life-sexuality-intimacy
shows seeds, spread results, and rankings.
2. Prompt key boost: when the current prompt contains a node key
(>=5 chars) as a substring, boost that term by +10.0. This ensures
explicit mentions fire as strong seeds for spread, while the graph
still determines what gets pulled in.
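The boost rule as a sketch (boosted_score is an illustrative name):

```rust
/// Prompt key boost: if the current prompt contains a node key of at
/// least 5 chars as a substring, add +10.0 to that term's score so the
/// explicit mention fires as a strong seed for spread.
fn boosted_score(base: f64, key: &str, prompt: &str) -> f64 {
    if key.len() >= 5 && prompt.contains(key) {
        base + 10.0
    } else {
        base
    }
}
```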
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
New placeholder that expands query keys one hop through the graph,
giving agents visibility into what's already connected to the nodes
they're working on. Excludes the query keys themselves so there's
no duplication with {{nodes}}.
Added to transfer (sees existing semantic nodes linked to episodes,
so it REFINEs instead of duplicating) and challenger (sees neighbor
context to find real evidence for/against claims).
Also removes find_existing_observations — superseded by the
per-segment dedup fix and this general-purpose placeholder.
When building the {{conversations}} placeholder for the observation
agent, search for existing nodes relevant to each conversation
fragment and include them in the prompt. Uses seed matching + one-hop
graph expansion to find the neighborhood, so the extractor sees what
the graph already knows about these topics.
This helps prevent duplicate extractions, but the deeper bug is that
select_conversation_fragments doesn't track which conversations have
already been processed — that's next.
The observation agent was re-extracting the same conversations every
consolidation run because select_conversation_fragments had no tracking
of what had already been processed.
Extract shared helpers from the fact miner's dedup pattern:
- transcript_key(prefix, path): namespaced key from prefix + filename
- segment_key(base, idx): per-segment key
- keys_with_prefix(prefix): bulk lookup from store
- unmined_segments(path, prefix, known): find unprocessed segments
- mark_segment(...): mark a segment as processed
Rewrite select_conversation_fragments to use these with
_observed-transcripts prefix. Each compaction segment within a
transcript is now tracked independently — new segments from ongoing
sessions get picked up, already-processed segments are skipped.
The Provenance enum couldn't represent agents defined outside the
source code. Replace it with a Text field in the capnp schema so any
agent can write its own provenance label (e.g. "extractor:write",
"rename:tombstone") without a code change.
Schema: rename old enum fields to provenanceOld, add new Text
provenance fields. Old enum kept for reading legacy records.
Migration: from_capnp_migrate() falls back to old enum when the
new text field is empty.
Also adds `poc-memory tail` command for viewing recent store writes.
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
Nodes actively found by search now show "Search hits: N ← actively
found by search, prefer to keep" in both the node section (seen by
extractor, linker, etc.) and rename candidate listings.
Extractor and rename prompts updated to respect this signal — merge
into high-hit nodes rather than demoting them, skip renaming nodes
that are working well in search.
memory-search now records which nodes it finds via the daemon's
record-hits RPC endpoint. The daemon owns the redb database
exclusively, avoiding file locking between processes.
The rename agent reads hit counts to deprioritize nodes that are
actively being found by search — renaming them would break working
queries. Daily check decays counters by 10% so stale hits fade.
Also switched RPC command reading from fixed 256-byte buffer to
read_to_string for unbounded command sizes.
Instead of a hard 7-day cutoff, sort rename candidates so the
least-recently visited come first. Naturally prioritizes unseen
nodes while allowing revisits once everything's been through.
The daemon was getting stuck when a claude subprocess hung — no
completion logged, job blocked forever, pending queue growing.
Use spawn() + watchdog thread instead of blocking output(). The
watchdog sleeps in 1s increments checking a cancel flag, sends
SIGTERM at 5 minutes, SIGKILL after 5s grace. Cancel flag ensures
the watchdog exits promptly when the child finishes normally.
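A testable sketch of the watchdog shape, with the signal sending and the 5-minute timeout replaced by an injected kill callback and parameterized durations (the real code sends SIGTERM, then SIGKILL after a 5s grace):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Sleep in short increments checking a cancel flag; fire `kill` only
/// if the deadline passes first. Returns true iff the kill fired, so
/// the caller can log a timeout.
fn watchdog(
    cancel: Arc<AtomicBool>,
    tick: Duration,
    timeout: Duration,
    kill: impl FnOnce() + Send + 'static,
) -> thread::JoinHandle<bool> {
    thread::spawn(move || {
        let mut waited = Duration::ZERO;
        while waited < timeout {
            if cancel.load(Ordering::SeqCst) {
                return false; // child exited normally: no kill
            }
            thread::sleep(tick);
            waited += tick;
        }
        kill();
        true
    })
}
```

The short tick is what makes cancellation prompt: the thread never blocks for longer than one tick after the child finishes.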
Any time an agent creates a new node (WRITE_NODE) or the fact miner
stores extracted facts, a naming sub-agent now checks for conflicts
and ensures the key is meaningful:
- find_conflicts() searches existing nodes via component matching
- Haiku LLM decides: CREATE (good name), RENAME (better name),
or MERGE_INTO (fold into existing node)
- WriteNode actions may be converted to Refine on MERGE_INTO
Also updates the rename agent to handle _facts-<UUID> nodes —
these are no longer skipped, and the prompt explains how to name
them based on their domain/claim content.
New action type that halves a node's weight (min 0.05), enabling
extractors to mark redundant nodes for decay without deleting them.
Parser, apply logic, depth computation, and display all updated.
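The decay arithmetic in one line (weaken is an illustrative name):

```rust
/// Weaken action: halve a node's weight, floored at 0.05, so redundant
/// nodes fade instead of being deleted outright.
fn weaken(weight: f64) -> f64 {
    (weight * 0.5).max(0.05)
}
```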
The daemon's compute_graph_health had a duplicated copy of the
consolidation planning thresholds that had drifted from the canonical
version (α<2.0 → +7 replay in daemon vs +10 in neuro).
Split consolidation_plan into _inner(store, detect_interference) so
the daemon can call consolidation_plan_quick (skips O(n²) interference)
while using the same threshold logic.
- Add run_and_apply() — combines run_one_agent + action application
into one call. Used by daemon job_consolidation_agent and
consolidate_full, which had identical run+apply loops.
- Port split_plan_prompt() to use split.agent via defs::resolve_placeholders
instead of loading the separate split-plan.md template. Make
resolve_placeholders public for this.
- Delete prompts/split-plan.md — superseded by agents/split.agent
which was already the canonical definition.
- Add compact_timestamp() to store — replaces 5 copies of
format_datetime(now_epoch()).replace([':', '-', 'T'], "")
Also fixes missing seconds (format_datetime only had HH:MM).
- Add ConsolidationPlan::to_agent_runs() — replaces identical
plan-to-runs-list expansion in consolidate.rs and daemon.rs.
- Port job_rename_agent to use run_one_agent — eliminates manual
prompt building, LLM call, report storage, and visit recording
that duplicated the shared pipeline.
- Rename Confidence::weight()/value() to delta_weight()/gate_value()
to clarify the distinction (delta metrics vs depth gating).