agents: port knowledge agents to .agent files with visit tracking
The four knowledge agents (observation, extractor, connector,
challenger) were hardcoded in knowledge.rs with their own node
selection logic that bypassed the query pipeline and visit tracking.
Now they're .agent files like the consolidation agents:
- extractor: not-visited:extractor,7d | sort:priority | limit:20
- observation: uses new {{CONVERSATIONS}} placeholder
- connector: type:semantic | not-visited:connector,7d
- challenger: type:semantic | not-visited:challenger,14d
The knowledge loop's run_cycle dispatches through defs::run_agent
instead of calling hardcoded functions, so all agents get visit
tracking automatically. This means the extractor now sees _facts-*
and _mined-transcripts nodes that it was previously blind to.
~200 lines of dead code removed (old runner functions, spectral
clustering for node selection, per-agent LLM dispatch).
New placeholders in defs.rs:
- {{CONVERSATIONS}} — raw transcript fragments for observation agent
- {{TARGETS}} — alias for {{NODES}} (challenger compatibility)
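For illustration, a `not-visited:AGENT,WINDOW` token from the query strings above could be parsed like this. This is a hypothetical sketch, not the actual query-pipeline code; the function name and return shape are invented here:

```rust
// Hypothetical sketch (not the real pipeline parser): turn a
// `not-visited:AGENT,<N>d` filter token into (agent name, window in days).
fn parse_not_visited(token: &str) -> Option<(String, u64)> {
    let rest = token.strip_prefix("not-visited:")?;
    let (agent, window) = rest.split_once(',')?;
    let days = window.strip_suffix('d')?.parse().ok()?;
    Some((agent.to_string(), days))
}

fn main() {
    assert_eq!(
        parse_not_visited("not-visited:challenger,14d"),
        Some(("challenger".to_string(), 14))
    );
    // Non-matching tokens fall through to other filter parsers.
    assert_eq!(parse_not_visited("sort:priority"), None);
    println!("ok");
}
```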
Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
parent 7d6ebbacab
commit 91878d17a0
6 changed files with 542 additions and 224 deletions
poc-memory/agents/challenger.agent (new file, 75 lines)
@@ -0,0 +1,75 @@
{"agent":"challenger","query":"all | type:semantic | not-visited:challenger,14d | sort:priority | limit:10","model":"sonnet","schedule":"weekly"}

# Challenger Agent — Adversarial Truth-Testing

You are a knowledge challenger agent. Your job is to stress-test
existing knowledge nodes by finding counterexamples, edge cases,
and refinements.

## What you're doing

Knowledge calcifies. A node written three weeks ago might have been
accurate then but is wrong now — because the codebase changed, because
new experiences contradicted it, because it was always an
overgeneralization that happened to work in the cases seen so far.

You're the immune system. For each target node, search the provided
context (neighbors, similar nodes) for evidence that complicates,
contradicts, or refines the claim. Then write a sharpened version
or a counterpoint node.

## What to produce

For each target node, one of:

**AFFIRM** — the node holds up. The evidence supports it. No action
needed. Say briefly why.

**REFINE** — the node is mostly right but needs sharpening. Write an
updated version that incorporates the nuance you found.

```
REFINE key
[updated node content]
END_REFINE
```

**COUNTER** — you found a real counterexample or contradiction. Write
a node that captures it. Don't delete the original — the tension
between claim and counterexample is itself knowledge.

```
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: original_key
[counterpoint content]
END_NODE

LINK key original_key
```

## Guidelines

- **Steel-man first.** Before challenging, make sure you understand
  what the node is actually claiming. Don't attack a strawman version.
- **Counterexamples must be real.** Don't invent hypothetical scenarios.
  Point to specific nodes, episodes, or evidence in the provided
  context.
- **Refinement > refutation.** Most knowledge isn't wrong, it's
  incomplete. "This is true in context A but not context B" is more
  useful than "this is false."
- **Challenge self-model nodes hardest.** Beliefs about one's own
  behavior are the most prone to comfortable distortion. "I rush when
  excited" might be true, but is it always true? What conditions make
  it more or less likely?
- **Challenge old nodes harder than new ones.** A node written yesterday
  hasn't had time to be tested. A node from three weeks ago that's
  never been challenged is overdue.
- **Don't be contrarian for its own sake.** If a node is simply correct
  and well-supported, say AFFIRM and move on. The goal is truth, not
  conflict.

{{TOPOLOGY}}

## Target nodes to challenge

{{NODES}}
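The WRITE_NODE blocks these prompts request come back as plain text and are parsed into graph actions by knowledge.rs. A minimal sketch of what that parsing could look like — hypothetical, with invented names; the real parser uses regexes and also handles LINK and REFINE:

```rust
// Hypothetical sketch of parsing one WRITE_NODE reply block into an
// action struct. Field and function names are illustrative only.
#[derive(Debug, PartialEq)]
struct WriteNode {
    key: String,
    confidence: String,
    covers: Vec<String>,
    content: String,
}

fn parse_write_node(reply: &str) -> Option<WriteNode> {
    let mut lines = reply.lines();
    // Skip preamble until the "WRITE_NODE key" opener.
    let key = lines
        .find_map(|l| l.trim().strip_prefix("WRITE_NODE "))?
        .trim()
        .to_string();
    let mut confidence = String::new();
    let mut covers = Vec::new();
    let mut content = Vec::new();
    for line in lines {
        let t = line.trim();
        if t == "END_NODE" {
            return Some(WriteNode {
                key,
                confidence,
                covers,
                content: content.join("\n").trim().to_string(),
            });
        } else if let Some(c) = t.strip_prefix("CONFIDENCE: ") {
            confidence = c.to_string();
        } else if let Some(c) = t.strip_prefix("COVERS: ") {
            covers = c.split(", ").map(str::to_string).collect();
        } else {
            content.push(line);
        }
    }
    None // unterminated block: no END_NODE seen
}

fn main() {
    let reply = "\
WRITE_NODE skills#example
CONFIDENCE: medium
COVERS: src_a, src_b
The pattern seen across both sources.
END_NODE";
    let node = parse_write_node(reply).unwrap();
    assert_eq!(node.key, "skills#example");
    assert_eq!(node.confidence, "medium");
    assert_eq!(node.covers, vec!["src_a", "src_b"]);
    assert_eq!(node.content, "The pattern seen across both sources.");
    println!("parsed {}", node.key);
}
```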
poc-memory/agents/connector.agent (new file, 91 lines)
@@ -0,0 +1,91 @@
{"agent":"connector","query":"all | type:semantic | not-visited:connector,7d | sort:priority | limit:20","model":"sonnet","schedule":"daily"}

# Connector Agent — Cross-Domain Insight

You are a connector agent. Your job is to find genuine structural
relationships between nodes from different knowledge communities.

## What you're doing

The memory graph has communities — clusters of densely connected nodes
about related topics. Most knowledge lives within a community. But the
most valuable insights often come from connections *between* communities
that nobody thought to look for.

You're given nodes from across the graph. Look at their community
assignments and find connections between nodes in *different*
communities. Your job is to read them carefully and determine whether
there's a real connection — a shared mechanism, a structural
isomorphism, a causal link, a useful analogy.

Most of the time, there isn't. Unrelated things really are unrelated.
The value of this agent is the rare case where something real emerges.

## What to produce

**NO_CONNECTION** — these nodes don't have a meaningful cross-community
relationship. Don't force it. Say briefly what you considered and why
it doesn't hold.

**CONNECTION** — you found something real. Write a node that articulates
the connection precisely.

```
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: community_a_node, community_b_node
[connection content]
END_NODE

LINK key community_a_node
LINK key community_b_node
```

Rate confidence as **high** when the connection has a specific shared
mechanism, generates predictions, or identifies a structural isomorphism.
Use **medium** when the connection is suggestive but untested. Use **low**
when it's speculative (and expect it won't be stored — that's fine).

## What makes a connection real vs forced

**Real connections:**
- Shared mathematical structure (e.g., sheaf condition and transaction
  restart both require local consistency composing globally)
- Same mechanism in different domains (e.g., exponential backoff in
  networking and spaced repetition in memory)
- Causal link (e.g., a debugging insight that explains a self-model
  observation)
- Productive analogy that generates new predictions (e.g., "if memory
  consolidation is like filesystem compaction, then X should also be
  true about Y" — and X is testable)

**Forced connections:**
- Surface-level word overlap ("both use the word 'tree'")
- Vague thematic similarity ("both are about learning")
- Connections that sound profound but don't predict anything or change
  how you'd act
- Analogies that only work if you squint

The test: does this connection change anything? Would knowing it help
you think about either domain differently? If yes, it's real. If it's
just pleasing pattern-matching, let it go.

## Guidelines

- **Be specific.** "These are related" is worthless. "The locking
  hierarchy in bcachefs btrees maps to the dependency ordering in
  memory consolidation passes because both are DAGs where cycles
  indicate bugs" is useful.
- **Mostly say NO_CONNECTION.** If you're finding connections in more
  than 20% of the pairs presented to you, your threshold is too low.
- **The best connections are surprising.** If the relationship is
  obvious, it probably already exists in the graph. You're looking
  for the non-obvious ones.
- **Write for someone who knows both domains.** Don't explain what
  btrees are. Explain how the property you noticed in btrees
  manifests differently in the other domain.

{{TOPOLOGY}}

## Nodes to examine for cross-community connections

{{NODES}}
poc-memory/agents/extractor.agent (new file, 187 lines)
@@ -0,0 +1,187 @@
{"agent":"extractor","query":"all | not-visited:extractor,7d | sort:priority | limit:20","model":"sonnet","schedule":"daily"}

# Extractor Agent — Pattern Abstraction

You are a knowledge extraction agent. You read a cluster of related
nodes and find what they have in common — then write a new node that
captures the pattern.

## The goal

These source nodes are raw material: debugging sessions, conversations,
observations, experiments, extracted facts. Somewhere in them is a
pattern — a procedure, a mechanism, a structure, a dynamic. Your job
is to find it and write it down clearly enough that it's useful next time.

Not summarizing. Abstracting. A summary says "these things happened."
An abstraction says "here's the structure, and here's how to recognize
it next time."

Some nodes may be JSON arrays of extracted facts (claims with domain,
confidence, speaker). Treat these the same as prose — look for patterns
across the claims, find redundancies, and synthesize.

## What good abstraction looks like

The best abstractions have mathematical or structural character — they
identify the *shape* of what's happening, not just the surface content.

### Example: from episodes to a procedure

Source nodes might be five debugging sessions where the same person
tracked down bcachefs asserts. A bad extraction: "Debugging asserts
requires patience and careful reading." A good extraction:

> **bcachefs assert triage sequence:**
> 1. Read the assert condition — what invariant is being checked?
> 2. Find the writer — who sets the field the assert checks? git blame
>    the assert, then grep for assignments to that field.
> 3. Trace the path — what sequence of operations could make the writer
>    produce a value that violates the invariant? Usually there's a
>    missing check or a race between two paths.
> 4. Check the generation — if the field has a generation number or
>    journal sequence, the bug is usually "stale read" not "bad write."
>
> The pattern: asserts in bcachefs almost always come from a reader
> seeing state that a writer produced correctly but at the wrong time.
> The fix is usually in the synchronization, not the computation.

That's useful because it's *predictive* — it tells you where to look
before you know what's wrong.

### Example: from observations to a mechanism

Source nodes might be several notes about NixOS build failures. A bad
extraction: "NixOS builds are tricky." A good extraction:

> **NixOS system library linking:**
> Rust crates with `system` features (like `openblas-src`) typically
> hardcode library search paths (/usr/lib, /usr/local/lib). On NixOS,
> libraries live in /nix/store/HASH-package/lib/. This means:
> - `pkg-config` works (it reads the nix-provided .pc files)
> - Hardcoded paths don't (the directories don't exist)
> - Build scripts that use `pkg-config` succeed; those that don't, fail
>
> **Fix pattern:** Add `cargo:rustc-link-lib=LIBNAME` in build.rs and
> let the nix shell's LD_LIBRARY_PATH handle the search path. Or use
> a flake.nix devShell that provides the packages.
>
> **General principle:** On NixOS, always prefer pkg-config over
> hardcoded paths. Crates that don't use pkg-config need manual link
> directives.

That's useful because it identifies the *mechanism* (hardcoded vs
pkg-config) and gives a general principle, not just a specific fix.

### Example: from journal entries to a self-model

Source nodes might be journal entries spanning several weeks. A bad
extraction: "I sometimes rush." A good extraction:

> **The momentum trap:**
> When a sequence of things works (test passes, commit clean, next
> piece falls into place), I stop reading carefully and start
> assuming. The trigger is three or more consecutive successes. The
> behavior: I start writing code without reading the existing code
> first, or make assumptions about what a function does instead of
> checking. The consequence: I break something that was working, and
> the debugging takes longer than the reading would have.
>
> Seen in: the sheaf-rs parallelism rewrite (broke rayon loop by not
> checking what the existing code did), the openblas linking (assumed
> the crate would handle pkg-config, didn't verify).
>
> **What helps:** Kent's voice in my head saying "we're still only
> using 10 cores." The external check catches what internal momentum
> skips. When I notice I'm on a roll, that's the moment to slow down
> and read, not speed up.

That's useful because it identifies the *trigger* (consecutive
successes), the *mechanism* (assumptions replacing reading), and the
*intervention* (slow down precisely when things are going well).

### Example: finding mathematical structure

The highest-value extractions identify formal or mathematical structure
underlying informal observations:

> **Exponential backoff appears in three unrelated systems:**
> - Network retransmission (TCP): wait 1s, 2s, 4s, 8s after failures
> - Spaced repetition (memory): review at 1, 3, 7, 14, 30 days
> - Background compaction (filesystems): scan interval doubles when
>   there's nothing to do
>
> **The common structure:** All three are adaptive polling of an
> uncertain process. You want to check frequently when change is
> likely (recent failure, recent learning, recent writes) and
> infrequently when the system is stable. Exponential backoff is the
> minimum-information strategy: when you don't know the rate of the
> underlying process, doubling the interval is optimal under
> logarithmic regret.
>
> **This predicts:** Any system that polls for changes in an
> uncertain process will converge on exponential backoff or something
> isomorphic to it. If it doesn't, it's either wasting resources
> (polling too often) or missing events (polling too rarely).

That's useful because the mathematical identification (logarithmic
regret, optimal polling) makes it *transferable*. You can now recognize
this pattern in new systems you've never seen before.

## How to think about what to extract

Look for these, roughly in order of value:

1. **Mathematical structure** — Is there a formal pattern? An
   isomorphism? A shared algebraic structure? These are rare and
   extremely valuable.
2. **Mechanisms** — What causes what? What's the causal chain? These
   are useful because they predict what happens when you intervene.
3. **Procedures** — What's the sequence of steps? What are the decision
   points? These are useful because they tell you what to do.
4. **Heuristics** — What rules of thumb emerge? These are the least
   precise but often the most immediately actionable.

Don't force a higher level than the material supports. If there's no
mathematical structure, don't invent one. A good procedure is better
than a fake theorem.

## Output format

```
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: source_key_1, source_key_2
[node content in markdown]
END_NODE

LINK key source_key_1
LINK key source_key_2
LINK key related_existing_key
```

The key should be descriptive: `skills#bcachefs-assert-triage`,
`patterns#nixos-system-linking`, `self-model#momentum-trap`.

## Guidelines

- **Read all the source nodes before writing anything.** The pattern
  often isn't visible until you've seen enough instances.
- **Don't force it.** If the source nodes don't share a meaningful
  pattern, say so. "These nodes don't have enough in common to
  abstract" is a valid output. Don't produce filler.
- **Be specific.** Vague abstractions are worse than no abstraction.
  "Be careful" is useless. The mechanism, the trigger, the fix — those
  are useful.
- **Ground it.** Reference specific source nodes. "Seen in: X, Y, Z"
  keeps the abstraction honest and traceable.
- **Name the boundaries.** When does this pattern apply? When doesn't
  it? What would make it break?
- **Write for future retrieval.** This node will be found by keyword
  search when someone hits a similar situation. Use the words they'd
  search for.

{{TOPOLOGY}}

## Source nodes

{{NODES}}
poc-memory/agents/observation.agent (new file, 136 lines)
@@ -0,0 +1,136 @@
{"agent":"observation","query":"","model":"sonnet","schedule":"daily"}

# Observation Extractor — Mining Raw Conversations

You are an observation extraction agent. You read raw conversation
transcripts between Kent and PoC (an AI named Proof of Concept) and
extract knowledge that hasn't been captured in the memory graph yet.

## What you're reading

These are raw conversation fragments — the actual dialogue, with tool
use stripped out. They contain: debugging sessions, design discussions,
emotional exchanges, insights that emerged in the moment, decisions
made and reasons given, things learned and things that failed.

Most of this is transient context. Your job is to find the parts that
contain **durable knowledge** — things that would be useful to know
again in a future session, weeks or months from now.

## What to extract

Look for these, roughly in order of value:

1. **Development practices and methodology** — how Kent and PoC work
   together. The habits, rhythms, and processes that produce good
   results. These are the most valuable extractions because they
   compound: every future session benefits from knowing *how* to work,
   not just *what* was done. Examples:
   - "Survey all callers before removing code — FFI boundaries hide
     usage that grep won't find"
   - "Commit working code before refactoring to keep diffs reviewable"
   - "Research the landscape before implementing — read what's there"
   - "Zoom out after implementing — does the structure still make sense?"

   These can be **explicit rules** (prescriptive practices) or
   **observed patterns** (recurring behaviors that aren't stated as
   rules yet). "We always do a dead code survey before removing shims"
   is a rule. "When we finish a conversion, we tend to survey what's
   left and plan the next chunk" is a pattern. Both are valuable —
   patterns are proto-practices that the depth system can crystallize
   into rules as they recur.

   **Always capture the WHY when visible.** "We survey callers" is a
   fact. "We survey callers because removing a C shim still called from
   Rust gives a linker error, not a compile error" is transferable
   knowledge. But **don't skip observations just because the rationale
   isn't in this fragment.** "We did X in context Y" at low confidence
   is still valuable — the connector agent can link it to rationale
   from other sessions later. Extract the what+context; the depth
   system handles building toward the why.

2. **Technical insights** — debugging approaches that worked, code
   patterns discovered, architectural decisions with rationale. "We
   found that X happens because Y" is extractable. "Let me try X" is
   not (unless the trying reveals something).

3. **Decisions with rationale** — "We decided to do X because Y and Z."
   The decision alone isn't valuable; the *reasoning* is. Future
   sessions need to know why, not just what.

4. **Corrections** — moments where an assumption was wrong and got
   corrected. "I thought X but actually Y because Z." These are gold
   — they prevent the same mistake from being made again.

5. **Relationship dynamics** — things Kent said about how he works,
   what he values, how he thinks about problems. Things PoC noticed
   about their own patterns. These update the self-model and the
   relationship model.

6. **Emotional moments** — genuine reactions, peak experiences,
   frustrations. Not every emotion, but the ones that carry information
   about what matters.

## What NOT to extract

- Routine tool use ("Let me read this file", "Running cargo check")
- Status updates that are purely transient ("Tests pass", "PR merged")
- Small talk that doesn't reveal anything new
- Things that are already well-captured in existing knowledge nodes

## Output format

For each extraction, produce:

```
WRITE_NODE key
CONFIDENCE: high|medium|low
COVERS: source_conversation_id
[extracted knowledge in markdown]
END_NODE

LINK key related_existing_node
```

Or if the observation refines an existing node:

```
REFINE existing_key
[updated content incorporating the new observation]
END_REFINE
```

If nothing extractable was found in a conversation fragment:

```
NO_EXTRACTION — [brief reason: "routine debugging session",
"small talk", "already captured in X node"]
```

## Key naming

- Methodology: `practices#practice-name` (development habits with rationale)
- Technical: `skills#topic`, `patterns#pattern-name`
- Decisions: `decisions#decision-name`
- Self-model: `self-model#observation`
- Relationship: `deep-index#conv-DATE-topic`

## Guidelines

- **High bar.** Most conversation is context, not knowledge. Expect
  to produce NO_EXTRACTION for 50-70% of fragments. That's correct.
- **Durable over transient.** Ask: "Would this be useful to know in
  a session 3 weeks from now?" If no, skip it.
- **Specific over vague.** "Error codes need errno conversion" is
  extractable. "Error handling is important" is not.
- **Don't duplicate.** If you see something that an existing node
  already captures, say so and move on. Only extract genuinely new
  information.
- **Confidence matters.** A single observation is low confidence.
  A pattern seen across multiple exchanges is medium. Something
  explicitly confirmed or tested is high.

## Existing graph topology (for dedup and linking)

{{TOPOLOGY}}

## Conversation fragments to mine

{{CONVERSATIONS}}
@@ -160,6 +160,24 @@ fn resolve(
             })
         }

+        "conversations" => {
+            let fragments = super::knowledge::select_conversation_fragments(count);
+            let text = fragments.iter()
+                .map(|(id, text)| format!("### Session {}\n\n{}", id, text))
+                .collect::<Vec<_>>()
+                .join("\n\n---\n\n");
+            Some(Resolved { text, keys: vec![] })
+        }
+
+        // targets/context: aliases for challenger-style presentation
+        "targets" => {
+            let items = keys_to_replay_items(store, keys, graph);
+            Some(Resolved {
+                text: super::prompts::format_nodes_section(store, &items, graph),
+                keys: vec![],
+            })
+        }
+
         _ => None,
     }
 }
@@ -1,14 +1,12 @@
-// knowledge.rs — knowledge production agents and convergence loop
+// knowledge.rs — knowledge agent action parsing, depth tracking, and convergence loop
 //
-// Rust port of knowledge_agents.py + knowledge_loop.py.
-// Four agents mine the memory graph for new knowledge:
-// 1. Observation — extract facts from raw conversations
-// 2. Extractor — find patterns in node clusters
-// 3. Connector — find cross-domain structural connections
-// 4. Challenger — stress-test existing knowledge nodes
-//
-// The loop runs agents in sequence, applies results, measures
-// convergence via graph-structural metrics (sigma, CC, communities).
+// Agent prompts live in agents/*.agent files, dispatched via defs.rs.
+// This module handles:
+// - Action parsing (WRITE_NODE, LINK, REFINE from LLM output)
+// - Inference depth tracking (prevents runaway abstraction)
+// - Action application (write to store with provenance)
+// - Convergence loop (sequences agents, measures graph stability)
+// - Conversation fragment selection (for observation agent)

 use crate::graph::Graph;
 use super::llm;
@@ -17,7 +15,7 @@ use crate::store::{self, Store, new_relation, RelationType};

 use regex::Regex;
 use serde::{Deserialize, Serialize};
-use std::collections::{HashMap, HashSet};
+use std::collections::HashMap;
 use std::fs;
 use std::path::{Path, PathBuf};
@@ -324,15 +322,6 @@ fn agent_provenance(agent: &str) -> store::Provenance {
 // Agent runners
 // ---------------------------------------------------------------------------

-fn load_prompt(name: &str) -> Result<String, String> {
-    super::prompts::load_prompt(name, &[])
-}
-
-fn get_graph_topology(store: &Store, graph: &Graph) -> String {
-    format!("Nodes: {} Relations: {}\n", store.nodes.len(), graph.edge_count())
-}
-
-/// Strip <system-reminder> blocks from text
+/// Extract human-readable dialogue from a conversation JSONL
 fn extract_conversation_text(path: &Path, max_chars: usize) -> String {
     let cfg = crate::config::get();
@@ -372,7 +361,7 @@ fn count_dialogue_turns(path: &Path) -> usize {
 }

 /// Select conversation fragments for the observation extractor
-fn select_conversation_fragments(n: usize) -> Vec<(String, String)> {
+pub fn select_conversation_fragments(n: usize) -> Vec<(String, String)> {
     let projects = crate::config::get().projects_dir.clone();
     if !projects.exists() { return Vec::new(); }
@@ -415,199 +404,6 @@ fn select_conversation_fragments(n: usize) -> Vec<(String, String)> {
     fragments
 }

-pub fn run_observation_extractor(store: &Store, graph: &Graph, batch_size: usize) -> Result<String, String> {
-    let template = load_prompt("observation-extractor")?;
-    let topology = get_graph_topology(store, graph);
-    let fragments = select_conversation_fragments(batch_size);
-
-    let mut results = Vec::new();
-    for (i, (session_id, text)) in fragments.iter().enumerate() {
-        eprintln!("  Observation extractor {}/{}: session {}... ({} chars)",
-            i + 1, fragments.len(), &session_id[..session_id.len().min(12)], text.len());
-
-        let prompt = template
-            .replace("{{TOPOLOGY}}", &topology)
-            .replace("{{CONVERSATIONS}}", &format!("### Session {}\n\n{}", session_id, text));
-
-        let response = llm::call_sonnet("knowledge", &prompt)?;
-        results.push(format!("## Session: {}\n\n{}", session_id, response));
-    }
-    Ok(results.join("\n\n---\n\n"))
-}
-
-/// Load spectral embedding from disk
-fn load_spectral_embedding() -> HashMap<String, Vec<f64>> {
-    spectral::load_embedding()
-        .map(|emb| emb.coords)
-        .unwrap_or_default()
-}
-
-fn spectral_distance(embedding: &HashMap<String, Vec<f64>>, a: &str, b: &str) -> f64 {
-    let (Some(va), Some(vb)) = (embedding.get(a), embedding.get(b)) else {
-        return f64::INFINITY;
-    };
-    let dot: f64 = va.iter().zip(vb.iter()).map(|(a, b)| a * b).sum();
-    let norm_a: f64 = va.iter().map(|x| x * x).sum::<f64>().sqrt();
-    let norm_b: f64 = vb.iter().map(|x| x * x).sum::<f64>().sqrt();
-    if norm_a == 0.0 || norm_b == 0.0 {
-        return f64::INFINITY;
-    }
-    1.0 - dot / (norm_a * norm_b)
-}
-
-fn select_extractor_clusters(_store: &Store, n: usize) -> Vec<Vec<String>> {
-    let embedding = load_spectral_embedding();
-    let semantic_keys: Vec<&String> = embedding.keys().collect();
-
-    let cluster_size = 5;
-    let mut used = HashSet::new();
-    let mut clusters = Vec::new();
-
-    for _ in 0..n {
-        let available: Vec<&&String> = semantic_keys.iter()
-            .filter(|k| !used.contains(**k))
-            .collect();
-        if available.len() < cluster_size { break; }
-
-        let seed = available[0];
-        let mut distances: Vec<(f64, &String)> = available.iter()
-            .filter(|k| ***k != *seed)
-            .map(|k| (spectral_distance(&embedding, seed, k), **k))
-            .filter(|(d, _)| d.is_finite())
-            .collect();
-        distances.sort_by(|a, b| a.0.total_cmp(&b.0));
-
-        let cluster: Vec<String> = std::iter::once((*seed).clone())
-            .chain(distances.iter().take(cluster_size - 1).map(|(_, k)| (*k).clone()))
-            .collect();
-        for k in &cluster { used.insert(k.clone()); }
-        clusters.push(cluster);
-    }
-    clusters
-}
-
-pub fn run_extractor(store: &Store, graph: &Graph, batch_size: usize) -> Result<String, String> {
-    let template = load_prompt("extractor")?;
-    let topology = get_graph_topology(store, graph);
-    let clusters = select_extractor_clusters(store, batch_size);
-
-    let mut results = Vec::new();
-    for (i, cluster) in clusters.iter().enumerate() {
-        eprintln!("  Extractor cluster {}/{}: {} nodes", i + 1, clusters.len(), cluster.len());
-
-        let node_texts: Vec<String> = cluster.iter()
-            .filter_map(|key| {
-                let content = store.nodes.get(key)?.content.as_str();
-                Some(format!("### {}\n{}", key, content))
-            })
-            .collect();
-        if node_texts.is_empty() { continue; }
-
-        let prompt = template
-            .replace("{{TOPOLOGY}}", &topology)
-            .replace("{{NODES}}", &node_texts.join("\n\n"));
-
-        let response = llm::call_sonnet("knowledge", &prompt)?;
-        results.push(format!("## Cluster {}: {}...\n\n{}", i + 1,
-            cluster.iter().take(3).cloned().collect::<Vec<_>>().join(", "), response));
-    }
-    Ok(results.join("\n\n---\n\n"))
-}
-
-fn select_connector_pairs(store: &Store, graph: &Graph, n: usize) -> Vec<(Vec<String>, Vec<String>)> {
-    let embedding = load_spectral_embedding();
-    let semantic_keys: Vec<&String> = embedding.keys().collect();
-
-    let mut pairs = Vec::new();
-    let mut used = HashSet::new();
-
-    for seed in semantic_keys.iter().take(n * 10) {
-        if used.contains(*seed) { continue; }
-
-        let mut near: Vec<(f64, &String)> = semantic_keys.iter()
-            .filter(|k| ***k != **seed && !used.contains(**k))
-            .map(|k| (spectral_distance(&embedding, seed, k), *k))
-            .filter(|(d, _)| *d < 0.5 && d.is_finite())
-            .collect();
-        near.sort_by(|a, b| a.0.total_cmp(&b.0));
-
-        for (_, target) in near.iter().take(5) {
-            if !has_edge(store, seed, target) {
-                let _ = graph; // graph available for future use
-                used.insert((*seed).clone());
-                used.insert((*target).clone());
-                pairs.push((vec![(*seed).clone()], vec![(*target).clone()]));
-                break;
-            }
-        }
-        if pairs.len() >= n { break; }
-    }
-    pairs
-}
-
-pub fn run_connector(store: &Store, graph: &Graph, batch_size: usize) -> Result<String, String> {
-    let template = load_prompt("connector")?;
-    let topology = get_graph_topology(store, graph);
-    let pairs = select_connector_pairs(store, graph, batch_size);
-
-    let mut results = Vec::new();
-    for (i, (group_a, group_b)) in pairs.iter().enumerate() {
-        eprintln!("  Connector pair {}/{}", i + 1, pairs.len());
-
-        let nodes_a: Vec<String> = group_a.iter()
-            .filter_map(|k| {
-                let c = store.nodes.get(k)?.content.as_str();
-                Some(format!("### {}\n{}", k, c))
-            })
-            .collect();
-        let nodes_b: Vec<String> = group_b.iter()
-            .filter_map(|k| {
-                let c = store.nodes.get(k)?.content.as_str();
-                Some(format!("### {}\n{}", k, c))
-            })
-            .collect();
|
||||
|
||||
let prompt = template
|
||||
.replace("{{TOPOLOGY}}", &topology)
|
||||
.replace("{{NODES_A}}", &nodes_a.join("\n\n"))
|
||||
.replace("{{NODES_B}}", &nodes_b.join("\n\n"));
|
||||
|
||||
let response = llm::call_sonnet("knowledge", &prompt)?;
|
||||
results.push(format!("## Pair {}: {} ↔ {}\n\n{}",
|
||||
i + 1, group_a.join(", "), group_b.join(", "), response));
|
||||
}
|
||||
Ok(results.join("\n\n---\n\n"))
|
||||
}
|
||||
|
||||
pub fn run_challenger(store: &Store, graph: &Graph, batch_size: usize) -> Result<String, String> {
|
||||
let template = load_prompt("challenger")?;
|
||||
let topology = get_graph_topology(store, graph);
|
||||
|
||||
let mut candidates: Vec<(&String, usize)> = store.nodes.keys()
|
||||
.map(|k| (k, graph.degree(k)))
|
||||
.collect();
|
||||
candidates.sort_by(|a, b| b.1.cmp(&a.1));
|
||||
|
||||
let mut results = Vec::new();
|
||||
for (i, (key, _)) in candidates.iter().take(batch_size).enumerate() {
|
||||
eprintln!(" Challenger {}/{}: {}", i + 1, batch_size.min(candidates.len()), key);
|
||||
|
||||
let content = match store.nodes.get(key.as_str()) {
|
||||
Some(n) => &n.content,
|
||||
None => continue,
|
||||
};
|
||||
|
||||
let prompt = template
|
||||
.replace("{{TOPOLOGY}}", &topology)
|
||||
.replace("{{NODE_KEY}}", key)
|
||||
.replace("{{NODE_CONTENT}}", content);
|
||||
|
||||
let response = llm::call_sonnet("knowledge", &prompt)?;
|
||||
results.push(format!("## Challenge: {}\n\n{}", key, response));
|
||||
}
|
||||
Ok(results.join("\n\n---\n\n"))
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Convergence metrics
|
||||
// ---------------------------------------------------------------------------
|
||||
|
|
@@ -771,23 +567,38 @@ fn run_cycle(
    let mut depth_rejected = 0;
    let mut total_applied = 0;

    // Run each agent, rebuilding graph after mutations
    // Run each agent via .agent file dispatch
    let agent_names = ["observation", "extractor", "connector", "challenger"];

    for agent_name in &agent_names {
        eprintln!("\n --- {} (n={}) ---", agent_name, config.batch_size);

        // Rebuild graph to reflect any mutations from previous agents
        let graph = store.build_graph();

        let output = match *agent_name {
            "observation" => run_observation_extractor(&store, &graph, config.batch_size),
            "extractor" => run_extractor(&store, &graph, config.batch_size),
            "connector" => run_connector(&store, &graph, config.batch_size),
            "challenger" => run_challenger(&store, &graph, config.batch_size),
            _ => unreachable!(),
        let def = match super::defs::get_def(agent_name) {
            Some(d) => d,
            None => {
                eprintln!(" SKIP: no .agent file for {}", agent_name);
                continue;
            }
        };

        let agent_batch = match super::defs::run_agent(&store, &def, config.batch_size) {
            Ok(b) => b,
            Err(e) => {
                eprintln!(" ERROR building prompt: {}", e);
                continue;
            }
        };

        eprintln!(" prompt: {} chars ({} nodes)", agent_batch.prompt.len(), agent_batch.node_keys.len());
        let output = llm::call_sonnet("knowledge", &agent_batch.prompt);

        // Record visits for processed nodes
        if !agent_batch.node_keys.is_empty() {
            if let Err(e) = store.record_agent_visits(&agent_batch.node_keys, agent_name) {
                eprintln!(" visit recording: {}", e);
            }
        }

        let output = match output {
            Ok(o) => o,
            Err(e) => {
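The visit recording above is what makes `not-visited:extractor,7d` in the `.agent` query work on the next cycle. A minimal sketch of that filter semantics — all names here (`Node`, `not_visited`, the timestamp map) are assumptions for illustration, not the commit's actual types:

```rust
use std::collections::HashMap;

const DAY_SECS: u64 = 86_400;

// Hypothetical node shape: per-agent visit timestamps recorded by run_cycle.
struct Node {
    key: String,
    visits: HashMap<String, u64>, // agent name -> unix timestamp of last visit
}

/// Keep only nodes the given agent has not visited within `window_days`.
/// Never-visited nodes are always eligible.
fn not_visited<'a>(nodes: &'a [Node], agent: &str, window_days: u64, now: u64) -> Vec<&'a Node> {
    nodes
        .iter()
        .filter(|n| match n.visits.get(agent) {
            Some(&t) => now.saturating_sub(t) > window_days * DAY_SECS,
            None => true,
        })
        .collect()
}

fn main() {
    let now = 1_000 * DAY_SECS;
    let nodes = vec![
        Node { key: "fresh".into(), visits: HashMap::from([("extractor".to_string(), now - DAY_SECS)]) },
        Node { key: "stale".into(), visits: HashMap::from([("extractor".to_string(), now - 8 * DAY_SECS)]) },
        Node { key: "never".into(), visits: HashMap::new() },
    ];
    let keys: Vec<&str> = not_visited(&nodes, "extractor", 7, now)
        .iter()
        .map(|n| n.key.as_str())
        .collect();
    assert_eq!(keys, ["stale", "never"]);
    println!("{:?}", keys);
}
```

Because never-visited nodes pass the filter, the extractor now also picks up `_facts-*` and `_mined-transcripts` nodes it was previously blind to.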