- Remove MEMORY_FILES constant from identity.rs
- Add ContextGroup struct for deserializing from config
- Load context_groups from ~/.config/poc-agent/config.json5
- Check ~/.config/poc-agent/ first for identity files, then project/global
- Debug screen now shows what's actually configured
This eliminates the hardcoded duplication and makes the debug output
match what's in the config file.
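A sketch of the lookup order this enables. The function name, directory list, and injected `exists` predicate are illustrative, not the actual identity.rs API:

```rust
use std::path::{Path, PathBuf};

// Check ~/.config/poc-agent/ first, then project, then global, and
// return the first candidate that exists. The predicate is injected
// so the ordering policy can be tested without touching the disk.
fn resolve_identity_file(
    name: &str,
    dirs: &[&Path],
    exists: &dyn Fn(&Path) -> bool,
) -> Option<PathBuf> {
    dirs.iter().map(|d| d.join(name)).find(|p| exists(p))
}

fn main() {
    let dirs = [
        Path::new("/home/user/.config/poc-agent"),
        Path::new("/home/user/project/.poc"),
        Path::new("/etc/poc-agent"),
    ];
    let hit = resolve_identity_file("identity.md", &dirs, &|p: &Path| p.exists());
    println!("{hit:?}");
}
```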
- store/types.rs: sanitize timestamps on capnp load — old records had
raw offsets instead of unix epoch, breaking sort-by-timestamp queries
- agents/api.rs: drain reasoning tokens from UI channel into LLM logs
so we can see Qwen's chain-of-thought in agent output
- agents/daemon.rs: persistent task queue (pending-tasks.jsonl) —
tasks survive daemon restarts. Push before spawn, remove on completion,
recover on startup.
- api/openai.rs: only send reasoning field when explicitly configured,
not on every request (fixes vllm warning)
- api/mod.rs: add 600s total request timeout as backstop for hung
connections
- Cargo.toml: enable tokio-console feature for task introspection
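The pending-tasks.jsonl lifecycle above can be sketched with a line-per-task log; the real entries are JSON objects, and these helper names and the temp path are illustrative:

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::path::Path;

// Push appends a line; completion rewrites the file without the task;
// recovery on startup just reads the surviving lines back.
fn push_task(path: &Path, task: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{task}")
}

fn complete_task(path: &Path, done: &str) -> std::io::Result<()> {
    let remaining: Vec<String> =
        recover_tasks(path)?.into_iter().filter(|t| t.as_str() != done).collect();
    let mut body = remaining.join("\n");
    if !body.is_empty() {
        body.push('\n');
    }
    fs::write(path, body)
}

fn recover_tasks(path: &Path) -> std::io::Result<Vec<String>> {
    if !path.exists() {
        return Ok(Vec::new());
    }
    Ok(fs::read_to_string(path)?
        .lines()
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect())
}

fn main() -> std::io::Result<()> {
    let p = std::env::temp_dir().join("poc-pending-demo.jsonl");
    push_task(&p, r#"{"id":"t1"}"#)?;
    complete_task(&p, r#"{"id":"t1"}"#)?;
    println!("{:?}", recover_tasks(&p)?);
    fs::remove_file(&p)
}
```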
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two bugs: upsert_provenance didn't update node.timestamp, so history
showed the original creation date for every version. And native memory
tools (poc-agent dispatch) didn't set POC_PROVENANCE, so all agent
writes showed provenance "manual" instead of "agent:organize" etc.
Fix: set node.timestamp = now_epoch() in upsert_provenance. Thread
provenance through memory::dispatch as Option<&str>, set it via
.env("POC_PROVENANCE") on each subprocess Command. api.rs passes
"agent:{name}" for daemon agent calls.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The POC_PROVENANCE env var lookup was duplicated in upsert,
delete_node, and rename_node. Extract to a single function.
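A minimal version of the extracted helper; passing the env value in as a parameter keeps it pure and testable. The "manual" fallback matches the behavior described earlier in this log; the function name is illustrative:

```rust
// Single source of truth for the provenance label: the value of
// POC_PROVENANCE when set and non-empty, else "manual".
// Callers pass std::env::var("POC_PROVENANCE").ok().as_deref().
fn provenance_label(env_value: Option<&str>) -> &str {
    env_value.filter(|v| !v.is_empty()).unwrap_or("manual")
}

fn main() {
    let value = std::env::var("POC_PROVENANCE").ok();
    println!("{}", provenance_label(value.as_deref()));
}
```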
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
delete_node and rename_node were cloning the previous node version
for the tombstone/rename entry without updating provenance or
timestamp. This made it impossible to tell who deleted a node or
when — the tombstone just inherited whatever the last write had.
Now both operations derive provenance from POC_PROVENANCE env var
(same as upsert) and set timestamp to now.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
cmd_history was silently hiding the deleted flag, making it
impossible to tell from the output that a node had been deleted.
This masked the kernel-patterns deletion: the node appeared to exist
in the log but wouldn't load.
Also adds merge-logs and diag-key diagnostic binaries, and makes
Node::to_capnp public for use by external tools.
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
rewrite_store() used File::create() to truncate and overwrite the
entire nodes.capnp log with only the latest version of each node
from the in-memory store. This destroyed all historical versions
and made no backup. Worse, any node missing from the in-memory
store due to a loading bug would be permanently lost.
strip_md_keys() now appends migrated nodes to the existing log
instead of rewriting it. The dead function is kept with a warning
comment explaining what went wrong.
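The difference between the two write modes, with an illustrative helper: `File::create()` truncates on open, while append mode preserves every prior record:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Append-only: the failure mode fixed above was opening the log with
// File::create(), which truncates it to zero bytes before writing.
fn append_record(path: &Path, rec: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{rec}")
}

fn main() -> std::io::Result<()> {
    let p = std::env::temp_dir().join("poc-append-demo.log");
    append_record(&p, "node-v1")?;
    append_record(&p, "node-v2")?; // both versions survive
    std::fs::remove_file(&p)
}
```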
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Add TranscriptSegment capnp schema and append-only log for tracking
which transcript segments have been mined by which agents. Replaces
the old approach of creating stub nodes (_observed-transcripts,
_mined-transcripts, _facts-) in the main graph store.
- New schema: TranscriptSegment and TranscriptProgressLog
- Store methods: append_transcript_progress, replay, is_segment_mined,
mark_segment_mined
- Migration command: admin migrate-transcript-progress (migrated 1771
markers, soft-deleted old stub nodes)
- Progress log replayed on all Store::load paths
Also: revert extractor.agent to graph-only (no CONVERSATIONS),
update memory-instructions-core with refine-over-create principle.
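The replay-into-index pattern, reduced to a stdlib sketch. The real log is capnp on disk; the entry shape and struct name here are illustrative:

```rust
use std::collections::HashSet;

// Rebuilt on every Store::load by replaying the append-only progress
// log; marking a segment mined is a pure append, never an update.
#[derive(Default)]
struct ProgressIndex {
    mined: HashSet<(String, String)>, // (segment id, agent)
}

impl ProgressIndex {
    fn replay<'a>(entries: impl IntoIterator<Item = (&'a str, &'a str)>) -> Self {
        let mut idx = ProgressIndex::default();
        for (seg, agent) in entries {
            idx.mark_segment_mined(seg, agent);
        }
        idx
    }

    fn mark_segment_mined(&mut self, seg: &str, agent: &str) {
        self.mined.insert((seg.to_owned(), agent.to_owned()));
    }

    fn is_segment_mined(&self, seg: &str, agent: &str) -> bool {
        self.mined.contains(&(seg.to_owned(), agent.to_owned()))
    }
}

fn main() {
    let idx = ProgressIndex::replay([("seg-1", "extractor"), ("seg-2", "extractor")]);
    println!("{}", idx.is_segment_mined("seg-1", "extractor"));
}
```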
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Agent subprocess calls now set POC_PROVENANCE=agent:{name} so any
nodes/links created via tool calls are tagged with the creating agent.
This makes agent transcripts identical in format to conscious
sessions, which matters for future model training.
new_relation() now reads POC_PROVENANCE env var directly (raw string,
not enum) since agent names are dynamic.
link-add now computes initial strength from Jaccard similarity instead
of hardcoded 0.8. New links start at a strength reflecting actual
neighborhood overlap.
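A sketch of the strength initialization, assuming neighbor sets keyed by UUID; the helper name is illustrative:

```rust
use std::collections::HashSet;

// Initial link strength = |A ∩ B| / |A ∪ B| over the two endpoints'
// neighbor sets, instead of a hardcoded 0.8.
fn jaccard(a: &HashSet<u64>, b: &HashSet<u64>) -> f64 {
    let union = a.union(b).count();
    if union == 0 {
        return 0.0; // two isolated nodes share no neighborhood
    }
    a.intersection(b).count() as f64 / union as f64
}

fn main() {
    let a: HashSet<u64> = [1, 2, 3].into();
    let b: HashSet<u64> = [2, 3, 4].into();
    println!("{}", jaccard(&a, &b)); // 2 shared of 4 total
}
```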
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Add adjust_edge_strength() to Store — modifies strength on all edges
between two nodes, clamped to [0.05, 0.95].
New commands:
- `not-relevant KEY` — weakens ALL edges to the node by 0.01
(bad routing: search found the wrong thing)
- `not-useful KEY` — weakens node weight, not edges
(bad content: search found the right thing but it's not good)
Enhanced `used KEY` — now also strengthens all edges to the node by
0.01, in addition to the existing node weight boost.
Three-tier design: agents adjust by 0.00001 (automatic), conscious
commands adjust by 0.01 (deliberate), manual override sets directly.
All clamped, never hitting 0 or 1.
Design spec: .claude/analysis/2026-03-14-link-strength-feedback.md
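The clamped adjustment in isolation; the bounds match the [0.05, 0.95] range above, the function name is illustrative:

```rust
// Adjust an edge strength by delta, clamped so it never saturates at
// 0 or 1. Deltas: 0.00001 for automatic agent nudges, 0.01 for
// conscious commands; manual override sets the value directly.
fn adjust_strength(strength: f64, delta: f64) -> f64 {
    (strength + delta).clamp(0.05, 0.95)
}

fn main() {
    println!("{}", adjust_strength(0.5, 0.01));
}
```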
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Compute parent/child (session→daily→weekly→monthly) and prev/next
(chronological ordering within each level) edges at graph build time
from node metadata. Parse dates from keys for digest nodes (whose
timestamps reflect creation, not covered date) and prefer key-parsed
dates over timestamp-derived dates for sessions (timezone fix).
Result: ~9185 implicit edges, community count halved, Gini coefficient improved.
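A sketch of key-based date parsing: scan for the first YYYY-MM-DD shaped substring. The real parser may be stricter; this checks shape only, not calendar validity:

```rust
// Pull the date out of a key like "journal#2026-02-28-violin-dream-room".
// Key-parsed dates are preferred because digest/session timestamps
// reflect creation time (and timezone), not the covered date.
fn date_from_key(key: &str) -> Option<String> {
    key.as_bytes().windows(10).find_map(|w| {
        let shaped = w.iter().enumerate().all(|(i, b)| match i {
            4 | 7 => *b == b'-',
            _ => b.is_ascii_digit(),
        });
        shaped.then(|| String::from_utf8(w.to_vec()).unwrap())
    })
}

fn main() {
    println!("{:?}", date_from_key("journal#2026-02-28-violin-dream-room"));
}
```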
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
Load store from both cache (rkyv/bincode) and raw capnp logs,
then diff: missing nodes, phantom nodes, version mismatches.
Auto-rebuilds cache if inconsistencies found.
This would have caught the mysterious the-plan deletion — likely
caused by a stale/corrupt snapshot that silently dropped the node
while the capnp log still had it.
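The cache-vs-log diff, sketched over maps of node key to version; the real stores carry full nodes, and these names are illustrative:

```rust
use std::collections::HashMap;

// Differences between the cached snapshot and the ground-truth log.
#[derive(Debug, Default, PartialEq)]
struct FsckReport {
    missing: Vec<String>,    // in log, absent from cache
    phantom: Vec<String>,    // in cache, absent from log
    mismatched: Vec<String>, // present in both, versions differ
}

fn diff_stores(cache: &HashMap<String, u64>, log: &HashMap<String, u64>) -> FsckReport {
    let mut r = FsckReport::default();
    for (key, ver) in log {
        match cache.get(key) {
            None => r.missing.push(key.clone()),
            Some(cv) if cv != ver => r.mismatched.push(key.clone()),
            _ => {}
        }
    }
    for key in cache.keys() {
        if !log.contains_key(key) {
            r.phantom.push(key.clone());
        }
    }
    r.missing.sort();
    r.phantom.sort();
    r.mismatched.sort();
    r
}

fn main() {
    let cache = HashMap::from([("a".to_string(), 1u64)]);
    let log = HashMap::from([("a".to_string(), 2u64)]);
    println!("{:?}", diff_stores(&cache, &log));
}
```

A rebuild would be triggered whenever the report differs from `FsckReport::default()`.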
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
The Provenance enum couldn't represent agents defined outside the
source code. Replace it with a Text field in the capnp schema so any
agent can write its own provenance label (e.g. "extractor:write",
"rename:tombstone") without a code change.
Schema: rename old enum fields to provenanceOld, add new Text
provenance fields. Old enum kept for reading legacy records.
Migration: from_capnp_migrate() falls back to old enum when the
new text field is empty.
Also adds `poc-memory tail` command for viewing recent store writes.
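The fallback logic in from_capnp_migrate(), sketched with a hypothetical two-variant legacy enum (the real enum's variants are not shown in this log):

```rust
// Legacy enum kept only for reading records written before the
// migration. Variant names here are assumptions.
#[derive(Clone, Copy)]
enum ProvenanceOld {
    Manual,
    Import,
}

// Prefer the new free-form text field; fall back to the legacy enum
// only when the text field is empty.
fn migrate_provenance(text: &str, old: ProvenanceOld) -> String {
    if !text.is_empty() {
        return text.to_string();
    }
    match old {
        ProvenanceOld::Manual => "manual".to_string(),
        ProvenanceOld::Import => "import".to_string(),
    }
}

fn main() {
    println!("{}", migrate_provenance("extractor:write", ProvenanceOld::Manual));
    println!("{}", migrate_provenance("", ProvenanceOld::Import));
}
```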
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
- Add compact_timestamp() to store — replaces 5 copies of
format_datetime(now_epoch()).replace([':', '-', 'T'], "")
Also fixes missing seconds (format_datetime only had HH:MM).
- Add ConsolidationPlan::to_agent_runs() — replaces identical
plan-to-runs-list expansion in consolidate.rs and daemon.rs.
- Port job_rename_agent to use run_one_agent — eliminates manual
prompt building, LLM call, report storage, and visit recording
that duplicated the shared pipeline.
- Rename Confidence::weight()/value() to delta_weight()/gate_value()
to clarify the distinction (delta metrics vs depth gating).
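The pure core of the compact_timestamp() helper, applied to an already-formatted datetime string; the real function formats now_epoch() first:

```rust
// One shared helper replacing five copies of the same strip-punctuation
// dance: "2026-03-14T12:30:45" -> "20260314123045".
fn compact_timestamp(formatted: &str) -> String {
    formatted.replace([':', '-', 'T'], "")
}

fn main() {
    println!("{}", compact_timestamp("2026-03-14T12:30:45"));
}
```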
New append-only visits.capnp log records which agent processed which
node and when. Only recorded on successful completion — transient
errors don't mark nodes as "seen."
Schema: AgentVisit{nodeUuid, nodeKey, agent, timestamp, outcome}
Storage: append_visits(), replay_visits(), in-memory VisitIndex
Recording: daemon records visits after successful LLM call
API: agent_prompt() returns AgentBatch{prompt, node_keys} so callers
know which nodes to mark as visited.
Groundwork for using visit recency in agent node selection — agents
will deprioritize recently-visited nodes.
All write paths (upsert_node, upsert_provenance, delete_node,
rename_node, ingest_units) now hold StoreLock across the full
refresh→check→write cycle. This prevents the race where two
concurrent processes both see a key as "new" and create separate
UUIDs for it.
Adds append_nodes_unlocked() and append_relations_unlocked() for
callers already holding the lock. Adds refresh_nodes() to replay
log tail under lock before deciding create vs update.
Also adds find_duplicates() for detecting existing duplicates
in the log (replays full log, groups live nodes by key).
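The race-free create-vs-update decision, reduced to an in-process Mutex sketch. The real StoreLock is cross-process and refresh replays the capnp log tail; names here are illustrative:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// The lock is held across refresh -> check -> write, so two concurrent
// writers can't both see a key as "new" and mint separate UUIDs.
struct Store {
    by_key: Mutex<HashMap<String, u64>>, // key -> uuid, simplified
}

impl Store {
    fn upsert(&self, key: &str, mint_uuid: &dyn Fn() -> u64) -> u64 {
        let mut map = self.by_key.lock().unwrap();
        // the real code calls refresh_nodes() here, replaying the log
        // tail under the lock before deciding create vs update
        *map.entry(key.to_owned()).or_insert_with(mint_uuid)
    }
}

fn main() {
    let store = Store { by_key: Mutex::new(HashMap::new()) };
    let first = store.upsert("journal#today", &|| 42);
    let second = store.upsert("journal#today", &|| 99); // update, not create
    println!("{first} {second}");
}
```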
New consolidation agent that reads node content and generates semantic
3-5 word kebab-case keys, replacing auto-generated slugs (5K+ journal
entries with truncated first-line slugs, 2.5K mined transcripts with
opaque UUIDs).
Implementation:
- prompts/rename.md: agent prompt template with naming conventions
- prompts.rs: format_rename_candidates() selects nodes with long
auto-generated keys, newest first
- daemon.rs: job_rename_agent() parses RENAME actions from LLM
output and applies them directly via store.rename_node()
- Wired into RPC handler (run-agent rename) and TUI agent types
- Fix epoch_to_local panic on invalid timestamps (fallback to UTC)
Rename dramatically improves search: key-component matching on
"journal#2026-02-28-violin-dream-room" makes the node findable by
"violin", "dream", or "room" — the auto-slug was unsearchable.
chrono's timestamp_opt can return None during DST transitions.
Handle all three variants (Single, Ambiguous, None) instead of
unwrapping. For DST gaps, offset by one hour to land in valid
local time.
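The handling policy, made testable without chrono by mirroring the shape of its LocalResult (in chrono the lookup is timestamp_opt). Taking the earlier of an ambiguous pair is an assumption here; the one-hour retry for gaps matches the commit:

```rust
// Mirror of chrono's LocalResult so the policy is testable standalone.
enum LocalResult<T> {
    Single(T),
    Ambiguous(T, T), // fall-back DST: the hour happens twice
    None,            // spring-forward gap: the hour doesn't exist
}

fn resolve_local(epoch: i64, lookup: &dyn Fn(i64) -> LocalResult<i64>) -> i64 {
    match lookup(epoch) {
        LocalResult::Single(t) => t,
        LocalResult::Ambiguous(earlier, _) => earlier,
        // DST gap: offset by one hour to land in valid local time
        LocalResult::None => match lookup(epoch + 3600) {
            LocalResult::Single(t) | LocalResult::Ambiguous(t, _) => t,
            LocalResult::None => epoch, // still invalid: fall back to UTC
        },
    }
}

fn main() {
    let gap = |e: i64| if e < 3600 { LocalResult::None } else { LocalResult::Single(e) };
    println!("{}", resolve_local(0, &gap));
}
```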
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
- Replace `pub use types::*` in store/mod.rs with explicit re-export list
- Make transcript_dedup_key private in agents/enrich.rs (only used internally)
- Inline duplicated projects_dir() helper in agents/knowledge.rs and daemon.rs
Replace all partial_cmp().unwrap() with total_cmp() in spectral.rs
and knowledge.rs — eliminates potential panics on NaN without
changing behavior for normal floats.
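Why the swap is panic-free, in miniature:

```rust
// total_cmp implements the IEEE 754 total order: every pair of f64s
// compares, with NaN sorting after +inf, so sorting can't panic the
// way partial_cmp().unwrap() does when a NaN shows up.
fn sort_scores(scores: &mut [f64]) {
    scores.sort_by(|a, b| a.total_cmp(b));
}

fn main() {
    let mut v = vec![3.0, f64::NAN, 1.0, f64::INFINITY];
    sort_scores(&mut v);
    println!("{v:?}");
}
```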
Use existing weighted_distance() and eigenvalue_weights() helpers in
nearest_neighbors() and nearest_to_seeds() instead of inlining the
same distance computation.
Move parse_timestamp_to_epoch() from enrich.rs to util.rs — was
duplicated logic, now shared.
Replace O(n²) relation existence check in init_from_markdown() with
a HashSet of (source, target) UUID pairs. With 26K relations this
was scanning linearly for every link in every markdown unit.
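The shape of the fix: seed a set once, then each incoming link is an O(1) membership insert instead of an O(n) scan. Names are illustrative:

```rust
use std::collections::HashSet;

// Build the set from existing relations, then keep only incoming
// links whose (source, target) UUID pair hasn't been seen yet.
fn dedup_links(existing: &[(u64, u64)], incoming: &[(u64, u64)]) -> Vec<(u64, u64)> {
    let mut seen: HashSet<(u64, u64)> = existing.iter().copied().collect();
    incoming
        .iter()
        .copied()
        .filter(|pair| seen.insert(*pair)) // insert returns false if present
        .collect()
}

fn main() {
    println!("{:?}", dedup_links(&[(1, 2)], &[(1, 2), (2, 3), (2, 3)]));
}
```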
Add util::truncate() and util::first_n_chars() to replace 16 call
sites doing the same floor_char_boundary or chars().take().collect()
patterns. Deduplicate the batching loop in consolidate.rs (4 copies
→ 1 loop over an array). Fix all clippy warnings: redundant closures,
needless borrows, collapsible if, unnecessary cast, manual strip_prefix.
Net: -44 lines across 16 files.
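Sketches of the two shared helpers. `str::floor_char_boundary` is still unstable, so this walks back to a valid boundary by hand; the real signatures may differ:

```rust
// Byte-budget truncation that never slices mid-codepoint.
fn truncate(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1; // back up to the nearest valid UTF-8 boundary
    }
    &s[..end]
}

// Character-count truncation for the chars().take().collect() sites.
fn first_n_chars(s: &str, n: usize) -> String {
    s.chars().take(n).collect()
}

fn main() {
    println!("{}", truncate("héllo", 2));
    println!("{}", first_n_chars("héllo", 2));
}
```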
poc-daemon (notification routing, idle timer, IRC, Telegram) was already
fully self-contained with no imports from the poc-memory library. Now it's
a proper separate crate with its own Cargo.toml and capnp schema.
poc-memory retains the store, graph, search, neuro, knowledge, and the
jobkit-based memory maintenance daemon (daemon.rs).
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>