Commit graph

186 commits

Author SHA1 Message Date
Kent Overstreet
0f1c4cf1de agent/api: carry readout alongside streamed tokens
StreamToken::Token is now a struct variant with an optional
TokenReadout (shape [n_layers][n_concepts]) per token — parsed from
the vLLM completion response's choices[i].readout field when the
server has readout enabled.

ApiClient gains a fetch_readout_manifest() method that hits
GET /v1/readout/manifest. Returns Ok(None) on 404 (server has
readout disabled), so callers can gracefully fall back when pointed
at a non-readout-enabled endpoint.
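
Sketch of the shapes involved (variant/field names beyond the above,
and the raw-JSON manifest type, are assumptions, not the real
definitions):

    // Shape sketch; details beyond this message are assumptions.
    type TokenReadout = Vec<Vec<f32>>; // [n_layers][n_concepts]

    enum StreamToken {
        Token {
            text: String,
            readout: Option<TokenReadout>, // None when readout is off
        },
        // ... other variants unchanged ...
    }

    struct ApiClient {
        http: reqwest::Client,
        base: reqwest::Url,
    }

    impl ApiClient {
        /// Ok(None) on 404: the server has readout disabled.
        async fn fetch_readout_manifest(
            &self,
        ) -> anyhow::Result<Option<serde_json::Value>> {
            let url = self.base.join("/v1/readout/manifest")?;
            let resp = self.http.get(url).send().await?;
            if resp.status() == reqwest::StatusCode::NOT_FOUND {
                return Ok(None);
            }
            Ok(Some(resp.error_for_status()?.json().await?))
        }
    }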

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-18 01:15:46 -04:00
Kent Overstreet
2b03dbb200 user: F7 compare screen
Side-by-side model comparison against the current conversation context.
Built on the MindTriggered pattern — F7 drops in as one more flow,
CompareScoring, next to MemoryScoring / FinetuneScoring.

Motivation: we have the VRAM on the B200 to load two versions of the
same family simultaneously (e.g. Qwen3.5 27B bf16 and q8_k_xl). Rather
than trust perplexity/KLD numbers on a generic corpus, we can measure
divergence on our actual conversations: for each assistant response,
ask the test model what it would have said given the same prefix, and
eyeball the diffs.

 - config.compare.test_backend — names an entry in the existing
   backends map to use as the test model. Empty = F7 reports "(unset)"
   and does nothing.

 - subconscious::compare::{score_compare_candidates, CompareCandidate,
   CompareScoringStats, CompareScoring}. For each assistant response,
   gen_continuation runs with the test client against the same prefix
   the original response saw; pairs stream into
   shared.compare_candidates as they complete.

 - user::compare::CompareScreen — F7 in the screen list. c/Enter
   triggers a run; list/detail layout mirroring F6, detail shows
   prior context / original / test-model alternate.
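
A sketch of the flow's surface; only the names come from this change,
the fields and parameter list are illustrative:

    // Illustrative only; real fields/params may differ.
    pub struct CompareCandidate {
        pub prior_context: String, // rendered prior exchanges
        pub original: String,      // what the base model said
        pub alternate: String,     // what the test model would have said
    }

    // Stand-ins for the real types:
    pub struct Context;
    pub struct ApiClient;

    pub async fn score_compare_candidates(
        _context: &Context,
        _test_client: &ApiClient,
        _emit: impl FnMut(CompareCandidate),
    ) {
        // For each assistant response: gen_continuation with the test
        // client against the same prefix, emit the pair as it completes.
        todo!()
    }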

No persistence yet — each F7 run regenerates. Caching via a context
manifest (so we can re-view without re-burning generation) is the
natural follow-up; for now light usage is fine.

Also reusable later for validating finetune checkpoints: same pattern,
swap the test backend for the new checkpoint, watch where it diverges
from the base.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:12:26 -04:00
Kent Overstreet
575325e855 mind: MindTriggered trait for background scoring flows
Mind's impl had accumulated ~50 lines of setup glue per scoring flow
(memory, memory-full, finetune): snapshot config, clone handles,
resolve context, spawn task, route results back through BgEvent,
write stats. The shape was identical; only the middle changed.

Introduce the MindTriggered trait:

    pub trait MindTriggered {
        fn trigger(&self);
    }

Each flow becomes a struct next to its scoring code that owns its
dependencies and a JoinHandle (behind a sync Mutex for interior
mutability):

    subconscious::learn::MemoryScoring    (Score, ScoreFull)
    subconscious::learn::FinetuneScoring  (ScoreFinetune)

Mind holds one of each and dispatches in one line:

    MindCommand::Score         => self.memory_scoring.trigger(),
    MindCommand::ScoreFull     => self.memory_scoring.trigger_full(),
    MindCommand::ScoreFinetune => self.finetune_scoring.trigger(),

Each struct picks its own trigger semantics — memory scoring is
no-op-if-running (!handle.is_finished()); finetune is abort-restart.
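
A minimal sketch of one flow struct under this scheme (field set and
spawn body are assumptions):

    use std::sync::Mutex;
    use tokio::task::JoinHandle;

    pub trait MindTriggered {
        fn trigger(&self);
    }

    pub struct MemoryScoring {
        // sync Mutex gives interior mutability behind &self
        handle: Mutex<Option<JoinHandle<()>>>,
        // ... cloned agent/config handles live here ...
    }

    impl MindTriggered for MemoryScoring {
        fn trigger(&self) {
            let mut slot = self.handle.lock().unwrap();
            // memory scoring: no-op if the last run is still in flight
            if slot.as_ref().map_or(false, |h| !h.is_finished()) {
                return;
            }
            *slot = Some(tokio::spawn(async {
                // scoring run: writes its slice of MindState, then
                // notifies the UI via state.changed.notify_one()
            }));
        }
    }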

Falls out:

 - BgEvent / bg_tx / bg_rx disappear entirely. Tasks write directly
   to their slice of MindState and call agent.state.changed.notify_one()
   to wake the UI. The bg_rx arm in Mind's select loop is gone.

 - agent.state.memory_scoring_in_flight was duplicating
   shared.scoring_in_flight via BgEvent routing; now the JoinHandle
   alone tells us, and shared.scoring_in_flight is written directly
   by the task for the UI.

 - start_memory_scoring / start_full_scoring / start_finetune_scoring
   methods on Mind are deleted; Mind no longer knows the setup shape
   of any scoring flow.

 - FinetuneScoringStats moves from mind/ to subconscious/learn.rs
   next to the function that produces it.

No behavior change — same flows, same trigger points, same semantics.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 16:12:26 -04:00
Kent Overstreet
c5745e38e2 subconscious: lift continuation gen + render helpers into shared homes
- context.rs gains is_assistant, render_branch_text, render_prior_context
  alongside memory_key / is_memory_node. They're pure AST helpers, used
  by both the finetune pipeline and the forthcoming compare screen.

- new subconscious/generate.rs holds gen_continuation(context, entry_idx,
  skip, client): build the prompt from a context prefix with an arbitrary
  skip predicate, send to the model, decode the completion. Takes both
  the predicate and the client so callers can aim it at memory-stripped
  contexts (finetune), same-context-different-model (F7 compare), or
  whatever else.

- learn.rs drops its private copies of those helpers and the inline
  generate_alternate; the finetune path now reads as
  gen_continuation(context, idx, is_memory_node, client).

Pure refactor, no behavior change.
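
Signature sketch of the lifted helper (stand-in types, body elided):

    pub struct Context;
    pub struct AstNode;
    pub struct ApiClient;

    pub async fn gen_continuation(
        _context: &Context,
        _entry_idx: usize,
        _skip: impl Fn(&AstNode) -> bool,
        _client: &ApiClient,
    ) -> anyhow::Result<String> {
        // build the prompt from the prefix (honoring `skip`), send to
        // the model, decode the completion
        todo!()
    }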

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:20:02 -04:00
Kent Overstreet
eea7de4753 agent: unify prompt assembly across agent and learn paths
wire_prompt() gains a conv_range and a skip closure, and returns the
assistant-message token ranges needed by the scoring path. The agent
path passes 0..len + |_| false and ignores the ranges. Memory-ablation
scoring and candidate generation pass a prefix range + a predicate
(e.g. is_memory_node, or |n| memory_key(n) == Some(key)).
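
Roughly, the widened signature and the two call shapes (type names and
return shape are stand-ins):

    pub struct Context;
    pub struct AstNode;
    impl Context { pub fn len(&self) -> usize { 0 } }

    // returns (token ids, assistant-message token ranges)
    pub fn wire_prompt(
        _ctx: &Context,
        _conv_range: std::ops::Range<usize>,
        _skip: impl Fn(&AstNode) -> bool,
    ) -> (Vec<u32>, Vec<std::ops::Range<usize>>) {
        todo!()
    }

    pub fn demo(ctx: &Context) {
        // agent path: whole conversation, skip nothing, ignore ranges
        let (_tokens, _) = wire_prompt(ctx, 0..ctx.len(), |_| false);
        // scoring path: prefix range + a predicate, keep the ranges
    }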

This deletes subconscious/learn.rs's build_token_ids, its private
Filter enum, and the is_memory/memory_key duplicates — the walk over
context sections now has one home. Adding a section or changing
section order in the agent path won't silently drift away from what
scoring sees.

call_score forwards multi_modal_data when the wire-form prompt
contains images. generate_alternate switches to stream_completion_mm
and passes the same images. Scoring on image-bearing contexts now
sends wire form (1 image_pad + image data) instead of expanded
image_pads with no image data; text-only contexts are bit-identical.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-17 15:16:07 -04:00
Kent Overstreet
592a3e2e52 config: move user_name/assistant_name to AppConfig (top level)
These are identity settings, not memory-graph settings. Sat inside the
`memory` section only because that's where Config started life. Move
to AppConfig alongside the other top-level stuff.

Readers now pull from `config::app()` instead of `config::get()`.
subconscious/defs.rs's conversation-building pass still needs Config
for surface_conversation_bytes, so both guards coexist there —
AppConfig's guard is dropped before the per-step await loop so we
don't stall the config-watcher's writer.

show_config picks up the two new fields at the top of its output.
Kent's config already has them hoisted to the top level.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 16:20:17 -04:00
Kent Overstreet
0e6b5dc8be agent: phase-aware bail script for surface-observe concurrency
bail-no-competing.sh used to bail if any other live agent existed in
the state dir, period. That was too coarse: surface-observe agents run
a multi-step pipeline (surface → organize-search → organize-new →
observe), and the intent is to let a new surface-phase agent start
while an older one finishes its post-surface tail. With the old check
the newer agent always bailed, so surface-observe was effectively
serialized at the slowest cycle time.

Make the script phase-aware:

- oneshot.rs now passes the current phase as argv[2] alongside the pid
  file name. The script writes that phase into its own pid file on
  every step transition, so concurrent agents can read each other's
  phase just by cat'ing the pid files.

- Bail only when another live agent is in the same phase-group as us.
  Groups: "surface" vs. "everything else" (post-surface). At most one
  agent per group alive at a time — surface runs at a higher cadence
  than the organize/observe tail.

- Still clean up stale pid files for dead processes.
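
On the Rust side the invocation shape might look like this (script
path resolution and the exact phase strings are assumptions):

    use std::process::Command;

    fn competing_agent_check(pid_file: &str, phase: &str) -> std::io::Result<bool> {
        let status = Command::new("bail-no-competing.sh")
            .arg(pid_file) // argv[1]: our pid file name
            .arg(phase)    // argv[2]: current phase, recorded in the pid file
            .status()?;
        Ok(status.success()) // non-zero exit means bail
    }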

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 15:41:28 -04:00
Kent Overstreet
2eddf3b4cf learn: skip empty responses; show prior conversation context on F6
Two fixes to the F6 candidate display:

1. Turns where the assistant produced nothing human-visible (an
   interrupted generation, a turn consisting of only a tool call the
   renderer folds to the tool name) were landing as candidates with
   an empty response_text. They'd render as blank cards and, worse,
   we'd still burn a full alternate generation on each one. Filter
   them out before they reach the candidate list.

2. The detail pane showed only the scored response + alternate, with
   no hint of what the user had actually asked. Pre-compute the last
   two user/assistant exchanges on each candidate as a rendered
   prior_context string ([user]/[assistant] markers) and show them
   above the response, under a new "context & response" section
   heading.

render_branch_text and render_prior_context extracted as helpers —
the response-text rendering and prior-context rendering share the
same "flatten Branch children to text" pass.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 13:20:03 -04:00
Kent Overstreet
313f85f34a config: global writable AppConfig; learn settings live there
Runtime-mutable settings (F6's threshold knob, the generate-alternates
toggle, anything else that comes along) were ending up as mirrored
fields on MindState — each new config setting grew MindState::new's
signature and added a clone+sync path. Wrong home. MindState is
ephemeral session state, not a config projection.

Give AppConfig the same treatment the memory Config has: install it
into a global RwLock<AppConfig> at startup via load_app, read through
config::app() (returns a read guard), mutate through update_app. The
config_writer functions now write to disk AND update the cache
atomically, so the one-stop-shop call keeps both in sync.
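
A minimal sketch of the pattern, assuming a OnceLock for the
install-once slot (the real module layout may differ):

    use std::sync::{OnceLock, RwLock, RwLockReadGuard};

    #[derive(Default)]
    pub struct AppConfig {
        pub learn_threshold: f64,
        // ... the rest of the top-level settings ...
    }

    static APP: OnceLock<RwLock<AppConfig>> = OnceLock::new();

    pub fn load_app(cfg: AppConfig) {
        APP.set(RwLock::new(cfg)).ok(); // install once at startup
    }

    // Reads are short: copy values out, drop the guard before awaiting.
    pub fn app() -> RwLockReadGuard<'static, AppConfig> {
        APP.get().expect("load_app not called").read().unwrap()
    }

    pub fn update_app(f: impl FnOnce(&mut AppConfig)) {
        let mut g = APP.get().expect("load_app not called").write().unwrap();
        f(&mut g);
        // config_writer persists to disk in the same call, keeping the
        // cache and the file in sync
    }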

Also while in here:

- learn.generate_alternates moves from a sentinel file
  (~/.consciousness/cache/finetune-alternates, "exists = enabled")
  into the config under the learn section. On first run with this
  build, if the sentinel file still exists Mind::new flips the
  config value to true and removes it. Drops
  alternates_enabled()/set_alternates().

- Default threshold 0.0000001 → 1.0. With the timestamp filter
  removed the previous value was letting essentially everything
  through; 1.0 is a sane "nothing gets through unless you actually
  want it" default.

- score_finetune_candidates takes generate_alternates as a parameter
  instead of reading a global — caller snapshots the config values
  once at the top of start_finetune_scoring so the async task
  doesn't need to hold the config read lock across awaits.

- MindState.learn_threshold / learn_generate_alternates gone; the
  SetLearn* command handlers now just delegate to config_writer.

Kent noted RwLock<Arc<AppConfig>> (the pattern used by the memory
Config global) is pointless here — nobody needs a snapshot-after-
release, reads are short — so this uses a plain RwLock<AppConfig>
and returns a read guard.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 12:53:22 -04:00
Kent Overstreet
343e43afab learn: stream candidates to UI, update status during alternate gen
With the timestamp filter gone (previous commit), score_finetune_candidates
started returning the actual ~100+ candidates per scoring run. The
existing code generated alternates for all of them in a tight loop
before returning anything, leaving the status line stuck on
"finetune: scoring N responses..." for ~100s of seconds while the
B200 was pegged.

Two fixes:

1. score_finetune_candidates now takes an ActivityGuard and a callback.
   Candidates are emitted one-at-a-time as they complete (after their
   alternate if that's enabled, immediately otherwise). The activity
   status updates to "finetune: generating alternate N/M" during the
   alternate-gen phase so it's clear what's happening.

2. BgEvent::FinetuneCandidates(Vec<_>) → FinetuneCandidate(one). Each
   emitted candidate is pushed onto shared.finetune_candidates; the UI
   tick picks it up and renders it on the next frame. start_finetune_scoring
   clears the previous run's list at the top so each run is fresh.

Return type changes from (Vec, f64) → (usize, f64) — the count above
threshold is all the caller still needs since the candidates stream
through the callback.
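
The resulting shape, sketched with the guard and candidate types
stubbed out:

    pub struct ActivityGuard;
    impl ActivityGuard {
        pub fn update(&self, _msg: String) {}
    }
    pub struct FinetuneCandidate;

    pub async fn score_finetune_candidates(
        _activity: &ActivityGuard,
        _emit: impl FnMut(FinetuneCandidate),
    ) -> (usize, f64) {
        // per candidate, as it completes:
        //   activity.update(format!("finetune: generating alternate {n}/{m}"));
        //   emit(candidate); // lands in shared.finetune_candidates
        todo!() // (count above threshold, max divergence)
    }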

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 12:44:25 -04:00
Kent Overstreet
080b4f9084 context: tighten timestamp schema; every AstNode has one
Previously NodeLeaf.timestamp and AstNode::Branch.timestamp accepted
null or missing via a deserialize_timestamp_or_epoch fallback — legacy
entries in conversation.jsonl from before Branch timestamps existed
(and from before chrono serialization was wired up) would load with
UNIX_EPOCH as a sentinel. Downstream, node_timestamp_ns() returned
Option<i64> and callers had to handle None as "old entry, skip."

That second filter was silently dropping every candidate in
score_finetune_candidates when scoring an older session — the F6
screen showed "0 above threshold" even when max_divergence was
orders of magnitude above the threshold, because every entry was
failing the None check, not the divergence check.

The fix, in three parts:

1. src/bin/fix-timestamps.rs — one-off migration tool that walks a
   conversation.jsonl, linearly interpolates timestamps for entries
   stuck at UNIX_EPOCH (using surrounding real timestamps as anchors),
   propagates to child leaves with per-sibling ns offsets, and bumps
   any collisions by 1 ns for uniqueness. Ran against the current
   session's log: 11887 entries, 72289 ns bumps, all unique.

2. context.rs — drop default_timestamp and
   deserialize_timestamp_or_epoch. NodeLeaf and Branch now require a
   present non-null timestamp on deserialize. Tests flip from
   "missing/null → UNIX_EPOCH" to "missing/null → Err."

3. subconscious/learn.rs — node_timestamp_ns now returns i64, not
   Option<i64>. The matching caller in score_finetune_candidates
   collapses from a Some/None match to a single trained-set check.
   mind/log.rs's oldest_timestamp no longer filters UNIX_EPOCH.
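
The interpolation pass, compressed to its core (0 stands in for
UNIX_EPOCH; per-sibling offset propagation omitted):

    fn interpolate_epoch_stuck(ts: &mut [i64]) {
        let n = ts.len();
        for i in 0..n {
            if ts[i] != 0 { continue; }
            // nearest real anchors on each side
            let lo = (0..i).rev().find(|&j| ts[j] != 0);
            let hi = (i + 1..n).find(|&j| ts[j] != 0);
            ts[i] = match (lo, hi) {
                (Some(a), Some(b)) => {
                    ts[a] + (ts[b] - ts[a]) * (i - a) as i64 / (b - a) as i64
                }
                (Some(a), None) => ts[a] + (i - a) as i64, // 1 ns per step
                (None, Some(b)) => ts[b] - (b - i) as i64,
                (None, None) => 0,
            };
        }
        // bump collisions by 1 ns so timestamps stay unique keys
        for i in 1..n {
            if ts[i] <= ts[i - 1] {
                ts[i] = ts[i - 1] + 1;
            }
        }
    }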

Every line currently on disk has already been migrated. Going
forward, new AstNodes always carry real timestamps (Utc::now() at
construction time), so the strict schema is the invariant, not an
aspiration.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 12:35:16 -04:00
Kent Overstreet
77822992c8 learn: score_ranges is now required; short-circuit on empty
vLLM's /v1/score endpoint made score_ranges a required field (the
messages-mode fallback that used to pattern-scan for assistant
boundaries is gone). Always send the field, and if we have nothing to
score, skip the HTTP round-trip entirely instead of letting the server
422 us.
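
The call-site guard is small; request shape abbreviated and field
types assumed:

    #[derive(serde::Serialize)]
    struct ScoreRequest {
        prompt: Vec<u32>,
        score_ranges: Vec<(usize, usize)>, // always sent now
    }

    async fn call_score(req: ScoreRequest) -> anyhow::Result<Vec<f64>> {
        if req.score_ranges.is_empty() {
            // nothing to score: skip the round-trip, avoid the 422
            return Ok(Vec::new());
        }
        // ... POST req to /v1/score, extract total_logprob ...
        todo!()
    }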

Response parsing is unchanged — serde ignores the renamed range_index
field and the dropped role field since we only extract total_logprob.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 12:19:28 -04:00
Kent Overstreet
e5dd8312c7 learn: F6 screen — scoring stats, ActivityGuard, configurable threshold
Three changes that together reshape the F6 fine-tune-review screen:

1. Finetune scoring reports through the standard agent activity system
   instead of a separate finetune_progress String. The previous design
   ran an independent progress field that forced a cross-lock dance and
   bespoke UI plumbing. start_finetune_scoring now uses start_activity
   + activity.update, so the usual status line and notifications
   capture scoring progress uniformly with other background work.

2. MindState gains a FinetuneScoringStats snapshot (responses seen,
   above threshold, max divergence, error). The F6 empty screen shows
   this instead of a loading message — so after a scoring run that
   produced zero candidates, you can see *why* (e.g., max_divergence
   below threshold).

3. The divergence threshold is configurable from F6 via +/- hotkeys
   (scales by 10×) and persisted to ~/.consciousness/config.json5 via
   config_writer::set_learn_threshold. AppConfig grows a learn section
   with a threshold field (default 1e-7).
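
The hotkey handling amounts to this (key plumbing is hypothetical):

    fn adjust_threshold(key: char, threshold: &mut f64) {
        match key {
            '+' => *threshold *= 10.0,
            '-' => *threshold /= 10.0,
            _ => return,
        }
        // then persist: config_writer::set_learn_threshold(*threshold)
    }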

Also: user/mod.rs no longer uses try_lock() for the per-tick
unconscious/mind state sync — we fixed the locking hot paths that
made try_lock necessary, so lock().await is now the right choice.
And subconscious::learn::score_finetune_candidates now returns
(candidates, max_divergence) so the stats can be populated.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:49:26 -04:00
Kent Overstreet
2b632d568b learn: nanosecond timestamps, token ranges for /score
Two related changes to the learn subsystem:

1. AST node timestamps are now non-optional — both Leaf and Branch
   variants carry a DateTime<Utc>. UNIX_EPOCH means "unset" (old entries
   deserialized from on-disk conversation logs).

   Training uses timestamps as unique keys for dedup, so we promote to
   nanosecond precision: node_timestamp_ns(), TrainData.timestamp_ns,
   FinetuneCandidate.timestamp_ns, mark_trained(ns).

2. build_token_ids() now also returns token-position ranges of assistant
   messages. These are passed to vLLM's /score endpoint via the new
   score_ranges field so only scored-position logprobs are returned —
   cuts bandwidth/compute when scoring small windows.
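
The promotion is a one-liner with chrono; the real helper reads the
timestamp off the node first:

    use chrono::{DateTime, Utc};

    fn node_timestamp_ns(ts: DateTime<Utc>) -> i64 {
        // None only for dates outside ~1677..2262; fall back to
        // micros * 1000 in that case
        ts.timestamp_nanos_opt()
            .unwrap_or_else(|| ts.timestamp_micros() * 1_000)
    }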

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 11:48:37 -04:00
Kent Overstreet
5d9d3ffc5b learn: wire up /train endpoint for approved candidates
When 's' is pressed on the learn screen, approved candidates are now
sent to the inference server's /train endpoint.

Samples are marked as sent immediately in the UI, and mark_trained()
is called after a successful API response to prevent re-scoring.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 02:04:26 -04:00
Kent Overstreet
50b7b3a33a F6 learn screen: fine-tuning candidate review
Wire up divergence scoring to identify responses that depend heavily on
memories the model hasn't internalized. These are candidates for fine-tuning.

- Score finetune candidates automatically after each turn
- Track trained responses by timestamp to prevent overtraining
- F6 screen shows candidates with divergence scores
- j/k nav, a=approve, r=reject, g=toggle alternate gen, s=send
- Additive sync preserves approval status across ticks
- Keeps 10 most recent rejected, removes sent

The 's' key currently just marks as trained locally — actual /finetune
endpoint call to follow.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 02:04:26 -04:00
Kent Overstreet
7046e63b9d Include identity nodes in memory scoring
Identity memory nodes now participate in importance scoring alongside
conversation memories. Score loading/saving handles both sections, and
the conscious screen uses node.label() consistently for memory display.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-15 05:59:58 -04:00
Kent Overstreet
a88428d642 Simplify context config: personality_nodes and agent_nodes
Replace complex context_groups (with ContextGroup struct, ContextSource
enum, labels, keys arrays) with simple string lists:
- personality_nodes: loaded into main session context
- agent_nodes: loaded into subconscious agent context

Removed ~200 lines of code. The distinction between session and agent
context is now just which list you're in, not a per-group flag.

Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-15 02:37:49 -04:00
Kent Overstreet
e8462af505 Remove .md suffix stripping from key lookups
The strip_md_suffix function was removed but its usages remained,
causing lookups like `identity.md` to fail (stripped to `identity`
which didn't exist). Now keys are used as-is.

Renamed 4 nodes that had .md suffixes to canonical form:
- identity.md → identity
- promotion-work-queue.md-* → promotion-work-queue-*
- patterns.md#* → patterns-*
- practices.md#* → practices-*

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-15 02:08:35 -04:00
Kent Overstreet
cc29cd2225 provenance: new_relation takes explicit provenance parameter
Remove POC_PROVENANCE env var lookup from new_relation - callers
now pass provenance explicitly. This fixes tracking when the env
var wasn't set correctly.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-15 01:39:58 -04:00
Kent Overstreet
b3d0a3ab25 store: internal locking, remove Arc<Mutex<Store>> wrapper
Store now has internal Mutex for capnp appends and AtomicU64 for
size tracking. All methods take &self. The external Arc<Mutex<Store>>
is replaced with Arc<Store>.

- Store::append_lock protects file appends
- local.rs functions take &Store (not &mut Store)
- access_local() returns Arc<Store>
- All .lock().await calls removed from callers
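
The shape of the change, sketched; method name and body are
assumptions:

    use std::sync::atomic::{AtomicU64, Ordering};
    use tokio::sync::Mutex;

    pub struct Store {
        append_lock: Mutex<()>, // serializes capnp file appends
        size: AtomicU64,        // size tracking without &mut
        // ... nodes, indexes ...
    }

    impl Store {
        // &self everywhere; callers hold Arc<Store>, no outer Mutex
        pub async fn append(&self, bytes: &[u8]) -> anyhow::Result<()> {
            let _guard = self.append_lock.lock().await;
            // ... append bytes to the capnp log ...
            self.size.fetch_add(bytes.len() as u64, Ordering::Relaxed);
            Ok(())
        }
    }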

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 21:49:54 -04:00
Kent Overstreet
5832e57970 store: convert more callers to use RELS index
Convert remaining Vec users to index-based access:
- memory.rs: MemoryNode::from_store uses Store::neighbors()
- graph.rs: orphan detection uses for_each_relation
- local.rs: normalize_strengths uses for_each_relation + set_link_strength

Add Store::neighbors() method and index::get_offsets_for_uuid().

Cleanup:
- for_each_relation: build both uuid↔key maps in one pass
- cap_degree: consolidate key/uuid/degree collection

Remaining Vec uses: admin.rs (fsck, dedup), capnp.rs (load path).

Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-13 21:20:27 -04:00
Kent Overstreet
af3e41f1d9 migrate more files to use index-based node access
- learn.rs, daemon.rs, graph.rs, digest.rs, prompts.rs
- Convert store.nodes.get() → store.get_node()
- Convert store.nodes.contains_key() → store.contains_key()
- Convert store.nodes.values/iter() → all_keys + get_node

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 19:37:11 -04:00
Kent Overstreet
b8db8754be Convert store and CLI to anyhow::Result for cleaner error handling
Replace Result<_, String> with anyhow::Result throughout:
- hippocampus/store module (persist, ops, types, view, mod)
- CLI modules (admin, agent, graph, journal, node)
- Run trait in main.rs

Use .context() and .with_context() instead of .map_err(|e| format!(...))
patterns. Add bail!() for early error returns.
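
For illustration (not a real call site), the idiom after the
conversion:

    use anyhow::{bail, Context, Result};
    use std::path::Path;

    fn read_store(path: &Path) -> Result<String> {
        // was: .map_err(|e| format!("reading store: {e}"))?
        let s = std::fs::read_to_string(path)
            .with_context(|| format!("reading store at {}", path.display()))?;
        if s.is_empty() {
            bail!("empty store file: {}", path.display());
        }
        Ok(s)
    }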

Add access_local() helper in hippocampus/mod.rs that returns
Result<Arc<Mutex<Store>>> for direct local store access.

Fix store access patterns to properly lock Arc<Mutex<Store>> before
accessing fields in mind/unconscious.rs, mind/mod.rs, subconscious/learn.rs,
and hippocampus/memory.rs.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 18:05:04 -04:00
Kent Overstreet
419bb222b5 defs.rs: remove store/graph params, use typed memory API
resolve_placeholders() and run_agent() no longer take &Store.
All placeholders now use async memory_render/memory_links/memory_query
directly. The "siblings" placeholder uses Vec<LinkInfo> for ranking
neighbors by link_strength * node_weight.
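
The ranking reduces to one sort (LinkInfo field names assumed):

    use std::cmp::Ordering;

    struct LinkInfo {
        link_strength: f64,
        node_weight: f64,
    }

    fn rank_neighbors(links: &mut Vec<LinkInfo>) {
        links.sort_by(|a, b| {
            (b.link_strength * b.node_weight)
                .partial_cmp(&(a.link_strength * a.node_weight))
                .unwrap_or(Ordering::Equal) // strongest first
        });
    }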

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 15:18:05 -04:00
Kent Overstreet
359955f838 defs.rs: async conversion, remove block_in_place
Convert resolve(), resolve_placeholders(), run_agent() to async.
Use memory_render/memory_query directly with .await instead of
block_in_place wrappers.

Propagate async to callers:
- config.rs: resolve(), load_session(), reload_for_model()
- identity.rs: load_memory_files(), assemble_context_message()
- oneshot.rs: run_one_agent()
- prompts.rs: agent_prompt()

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 14:56:26 -04:00
Kent Overstreet
fb46ab095d Consolidate memory RPC in tools/memory.rs
- Move memory_rpc(), socket_path(), SocketConn from mcp_server.rs
- Convert remaining callers to typed async API:
  - defs.rs: organize placeholder, run_agent query
  - cli/agent.rs: query resolution (now async)
  - mind/identity.rs: Store context loading
- Re-export socket_path/memory_rpc from mcp_server for compatibility

All external memory access now goes through tools/memory.rs typed API.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 13:39:59 -04:00
Kent Overstreet
fa50f1c826 CLI: convert node commands to typed async API
- node.rs: use memory::* typed helpers instead of memory_rpc()
- main.rs: make Run trait async, await all command dispatch
- defs.rs: bridge get_group_content async via block_in_place

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 13:20:04 -04:00
Kent Overstreet
7476e9d0db delete rename agent and related code
The organize agents handle renaming as part of their normal work now.
Also simplified resolve_placeholders to build graph internally.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 02:05:58 -04:00
Kent Overstreet
bd9ce3ed09 keys_to_replay_items() -> memory.rs 2026-04-13 01:57:23 -04:00
Kent Overstreet
a08f521b02 defs.rs: convert run_agent query to use RPC
Uses memory_rpc("memory_query", ...) instead of direct search::run_query.
Removes now-unused crate::search import.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 01:54:22 -04:00
Kent Overstreet
b863f77998 defs.rs: convert seed placeholder to use resolve_tool
Uses the existing tool infrastructure instead of direct store access.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 01:49:22 -04:00
Kent Overstreet
c688b812ef defs.rs: convert organize placeholder to use RPC
Uses memory_render RPC instead of direct store access.
Simplifies from ~60 to ~20 lines.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 01:45:12 -04:00
Kent Overstreet
4cfeb9ee2f defs.rs: delete dead placeholders, simplify siblings
- Remove {{targets}}, {{hubs}}, {{node:KEY}}, {{latest_journal}} placeholders
- Add graph_hubs as proper RPC tool (was placeholder, now callable)
- Replace {{latest_journal}} with {{tool: journal_tail ...}} in journal.agent
- Simplify siblings/neighborhood: drop unused cross-links, use simple top-20
- Remove unused store/graph params from resolve_tool()

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 01:37:33 -04:00
Kent Overstreet
de5a6672c3 cleanup: remove dead placeholder code, use RPC for identity loading
- links() in memory.rs: use cached_store() instead of MemoryNode::load()
- identity.rs: use memory_rpc for Store context loading
- defs.rs: delete dead placeholders (topology, nodes/episodes, health, split)
  - agents now use {{tool: graph_topology}} etc instead
- prompts.rs: delete unused format_split_plan_node()

Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-13 01:22:08 -04:00
Kent Overstreet
2ab4aef19f CLI: more RPC conversions, delete obsolete commands
- cmd_health: use graph_health RPC
- cmd_topology: new command using graph_topology RPC
- cmd_status: use graph_topology RPC (type counts folded into topology)
- cmd_run_agent: query resolution via memory_query RPC
- Delete cmd_bulk_rename (one-time migration, obsolete)
- Delete cmd_replay_queue, cmd_digest_links (unconscious agents handle)
- format_topology_header: add type counts, takes &Store now

Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-12 23:37:05 -04:00
Kent Overstreet
70097fa84b kill cli/misc.rs 2026-04-12 23:03:00 -04:00
ProofOfConcept
5a832b1d6c get_group_content: use RPC, delete store-based version
One function that uses memory_rpc (which handles daemon vs local).
Removes 65 lines of duplicate logic.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 23:00:10 -04:00
Kent Overstreet
ad59596335 cli: add memory_history, remove dump-json/edges/lookups
- Add memory_history MCP tool for version history
- Convert cmd_history to use memory_rpc
- Add raw parameter to memory_render for editing
- Remove unused: dump-json, list-edges, lookup-bump, lookups
- Fix render_node path in defs.rs/subconscious.rs

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 22:24:34 -04:00
Kent Overstreet
7842b6fc8b remove legacy feedback commands (used, wrong, gap, etc.)
These were early experiments with manual feedback signals that
never worked well. The scoring system will handle this properly.

Removed:
- CLI: used, wrong, not-relevant, not-useful, gap
- MCP: memory_used
- Store: mark_used, mark_wrong, record_gap, modify_node

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 22:12:02 -04:00
Kent Overstreet
dfab7d0a33 prompts: remove unused replay_queue import
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 16:21:54 -04:00
Kent Overstreet
d5aad5c1a4 kill consolidation_batch 2026-04-12 02:41:59 -04:00
Kent Overstreet
919749dc67 more dead code deletion
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2026-04-12 02:27:05 -04:00
Kent Overstreet
31aa0f3125 digest.agent: document journal_update workflow
Check if the current period's digest exists and update it with
journal_update before starting a new one with journal_new.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 02:06:55 -04:00
Kent Overstreet
b77f07fef7 digest.agent: use journal_new with level for writing digests
Instead of memory_write, the digest agent now uses journal_new with
level parameter (1=daily, 2=weekly, 3=monthly) which correctly sets
the node type.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 02:05:12 -04:00
Kent Overstreet
f00532bdb7 TurnResult: remove text field, simplify oneshot loop
- Remove TurnResult.text (was dead code - Agent::turn handles text internally)
- Simplify run_with_backend to just iterate over steps (Agent::turn loops
  for tool calls and handles empty responses internally)
- Change run/run_shared/run_forked_shared to return Result<(), String>
- Remove AgentResult.output field (no callers used it)
- Stub out legacy text-parsing code (audit, compare) that needs redesign
- Update digest.rs to not depend on text return
- Add level parameter to journal_new/journal_update for digest support

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 02:04:50 -04:00
Kent Overstreet
271e09adcc fix: run_one_agent uses memory tools as base, not filter
When def.tools was non-empty, it was filtering to ONLY those tools
instead of using memory tools as base + adding extras. This broke
digest agent (and any agent with explicit tools list) by removing
all 13 base memory tools.

Fixed to match the pattern in unconscious.rs:
- base = memory_tools()
- extras from journal_tools() if listed in def.tools
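
The corrected assembly, sketched with stub helpers (Tool shape is
illustrative):

    struct Tool { name: String }
    fn memory_tools() -> Vec<Tool> { Vec::new() }  // stub
    fn journal_tools() -> Vec<Tool> { Vec::new() } // stub

    fn assemble(def_tools: &[String]) -> Vec<Tool> {
        let mut tools = memory_tools(); // base: never filtered away
        tools.extend(
            journal_tools()
                .into_iter()
                .filter(|t| def_tools.contains(&t.name)), // extras if listed
        );
        tools
    }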

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-11 21:20:44 -04:00
ProofOfConcept
aad227e487 query: unify PEG and engine parsers
PEG parser now handles both expression syntax (degree > 5 | sort degree)
and pipeline syntax (all | type:episodic | sort:timestamp). Deleted
Stage::parse() and helpers from engine.rs — it's now pure execution.

All callers use parse_stages() from parser.rs as the single entry point.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-11 20:42:58 -04:00
ProofOfConcept
bc991c3521 unconscious: memory tools as base, agent def adds extras
Every unconscious agent gets memory_tools() as baseline. The tools
field in the agent def specifies additional tools on top of that —
digest agent now gets journal_tail, journal_new, journal_update.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-11 19:54:18 -04:00
Kent Overstreet
c300013ce5 improve bail-no-competing.sh 2026-04-11 18:41:44 -04:00