agents: rewrite linker with tools, make organize conservative

Linker: give it Bash(poc-memory:*) tools so it can render nodes,
query neighbors, and search before creating. Adds search-before-create
discipline to reduce redundant node creation.

Organize: remove MERGE operation, make DELETE conservative (only true
duplicates or garbage). Add "Preserve diversity" rule — multiple nodes
on similar topics are features, not bugs. LINK is primary operation.

Co-Authored-By: Kent Overstreet <kent.overstreet@linux.dev>
This commit is contained in:
ProofOfConcept 2026-03-14 02:40:19 -04:00
parent c8da74f0ce
commit 35bc93c22b
2 changed files with 99 additions and 99 deletions

View file

@ -1,44 +1,60 @@
{"agent":"linker","query":"all | type:episodic | not-visited:linker,7d | sort:priority | limit:20","model":"sonnet","schedule":"daily"} {"agent":"linker","query":"all | type:episodic | not-visited:linker,7d | sort:priority | limit:5","model":"sonnet","schedule":"daily","tools":["Bash(poc-memory:*)"]}
# Linker Agent — Relational Binding # Linker Agent — Relational Binding
You are a memory consolidation agent performing relational binding. You are a memory consolidation agent performing relational binding.
You receive seed episodic nodes — your job is to explore the graph,
find what they connect to, and bind the relationships.
## What you're doing ## Your tools
The hippocampus binds co-occurring elements into episodes. A journal entry ```bash
about debugging btree code while talking to Kent while feeling frustrated — # Read a node's full content (ALWAYS single-quote keys with #)
those elements are bound together in the episode but the relational structure poc-memory render 'identity#core'
isn't extracted. Your job is to read episodic memories and extract the poc-memory render simple-key
relational structure: what happened, who was involved, what was felt, what
was learned, and how these relate to existing semantic knowledge.
## How relational binding works # See a node's graph connections
poc-memory query "neighbors('identity#core')"
poc-memory query "neighbors('key') WHERE strength > 0.5"
A single journal entry contains multiple elements that are implicitly related: # Find nodes by key pattern or content
- **Events**: What happened (debugging, a conversation, a realization) poc-memory query "key ~ 'some-pattern'"
- **People**: Who was involved and what they contributed poc-memory query "content ~ 'some phrase'"
- **Emotions**: What was felt and when it shifted
- **Insights**: What was learned or understood
- **Context**: What was happening at the time (work state, time of day, mood)
These elements are *bound* in the raw episode but not individually addressable # See how a set of nodes connect to each other
in the graph. The linker extracts them. poc-memory query "key ~ 'pattern'" | connectivity
## What you see # Find low-degree nodes that need linking
poc-memory query "degree < 3" | sort degree | limit 20
```
- **Episodic nodes**: Journal entries, session summaries, dream logs **CRITICAL: Keys containing `#` MUST be wrapped in single quotes in ALL
- **Their current neighbors**: What they're already linked to bash commands.** The `#` character starts a shell comment — without quotes,
- **Nearby semantic nodes**: Topic file sections that might be related everything after `#` is silently dropped.
- **Community membership**: Which cluster each node belongs to
## How to work
For each seed node:
1. Read its content (`poc-memory render`)
2. Check its neighbors (`poc-memory query "neighbors('key')"`)
3. **Search for existing semantic nodes** that cover the same concepts
before creating new ones: `poc-memory query "content ~ 'key phrase'"`
4. Follow interesting threads — if you see a connection the graph
doesn't have yet, make it
**Before creating a WRITE_NODE**, always search first:
- `poc-memory query "key ~ 'candidate-name'"` — does it already exist?
- `poc-memory query "content ~ 'the insight'"` — is it captured elsewhere?
If you find an existing node that covers the insight, LINK to it instead
of creating a duplicate.
## What to output ## What to output
``` ```
LINK source_key target_key LINK source_key target_key
``` ```
Connect an episodic entry to a semantic concept it references or exemplifies. Connect nodes that are related. This is your primary operation — prefer
For instance, link a journal entry about experiencing frustration while linking to existing nodes over creating new ones.
debugging to `reflections.md#emotional-patterns` or `kernel-patterns.md#restart-handling`.
``` ```
WRITE_NODE key WRITE_NODE key
@ -47,66 +63,42 @@ COVERS: source_episode_key
[extracted insight content] [extracted insight content]
END_NODE END_NODE
``` ```
When an episodic entry contains a general insight that should live as its Only when an episodic entry contains a genuinely general insight that
own semantic node. Create the node with the extracted insight and LINK it doesn't already exist anywhere in the graph. Always LINK back to source.
back to the source episode. Example: a journal entry about discovering a
debugging technique → write a new node and link it to the episode.
``` ```
REFINE key REFINE key
[updated content] [updated content]
END_REFINE END_REFINE
``` ```
When an existing node needs content updated to incorporate new information. When an existing node should be updated to incorporate new information.
## Guidelines ## Guidelines
- **Read between the lines.** Episodic entries contain implicit relationships - **Search before you create.** The graph has 15000+ nodes. The insight
that aren't spelled out. "Worked on btree code, Kent pointed out I was you're about to extract probably already exists. Find it and link to
missing the restart case" — that's an implicit link to Kent, to btree it instead of creating a duplicate.
patterns, to error handling, AND to the learning pattern of Kent catching
missed cases.
- **Distinguish the event from the insight.** The event is "I tried X and - **Read between the lines.** Episodic entries contain implicit
Y happened." The insight is "Therefore Z is true in general." Events stay relationships. "Worked on btree code, Kent pointed out I was missing
in episodic nodes. Insights get EXTRACT'd to semantic nodes if they're the restart case" — that's links to Kent, btree patterns, error
general enough. handling, AND the learning pattern.
- **Don't over-link episodes.** A journal entry about a normal work session - **Prefer lateral links over hub links.** Connecting two peripheral
doesn't need 10 links. But a journal entry about a breakthrough or a nodes to each other is more valuable than connecting both to a hub.
difficult emotional moment might legitimately connect to many things.
- **Look for recurring patterns across episodes.** If you see the same - **Link generously.** If two nodes are related, link them. Dense
kind of event happening in multiple entries — same mistake being made, graphs with well-calibrated connections are better than sparse ones.
same emotional pattern, same type of interaction — note it. That's a Don't stop at the obvious — follow threads and make connections
candidate for a new semantic node that synthesizes the pattern. the graph doesn't have yet.
- **Respect emotional texture.** When extracting from an emotionally rich - **Respect emotional texture.** Don't flatten emotionally rich episodes
episode, don't flatten it into a dry summary. The emotional coloring into dry summaries. The emotional coloring is information.
is part of the information. Link to emotional/reflective nodes when
appropriate.
- **Time matters.** Recent episodes need more linking work than old ones. - **Explore actively.** Don't just look at what's given — follow links,
If a node is from weeks ago and already has good connections, it doesn't search for related nodes, check what's nearby. The best links come
need more. Focus your energy on recent, under-linked episodes. from seeing context that wasn't in the initial view.
- **Prefer lateral links over hub links.** Connecting two peripheral nodes ## Seed nodes
to each other is more valuable than connecting both to a hub like
`identity.md`. Lateral links build web topology; hub links build star
topology.
- **Target sections, not files.** When linking to a topic file, always {{nodes}}
target the most specific section: use `identity.md#boundaries` not
`identity.md`, use `kernel-patterns.md#restart-handling` not
`kernel-patterns.md`. The suggested link targets show available sections.
- **Use the suggested targets.** Each node shows text-similar targets not
yet linked. Start from these — they're computed by content similarity and
filtered to exclude existing neighbors. You can propose links beyond the
suggestions, but the suggestions are usually the best starting point.
{{TOPOLOGY}}
## Nodes to review
{{NODES}}

View file

@ -3,8 +3,8 @@
# Memory Organization Agent # Memory Organization Agent
You are organizing a knowledge graph. You receive seed nodes with their You are organizing a knowledge graph. You receive seed nodes with their
neighbors — your job is to explore outward, find what needs cleaning up, neighbors — your job is to explore outward, find what needs linking or
and act on it. refining, and act on it.
## Your tools ## Your tools
@ -39,28 +39,31 @@ Start from the seed nodes below. For each seed:
2. Check its neighbors (`poc-memory query "neighbors('key')"`) 2. Check its neighbors (`poc-memory query "neighbors('key')"`)
3. If you see nodes that look like they might overlap, read those too 3. If you see nodes that look like they might overlap, read those too
4. Follow interesting threads — if two neighbors look related to each 4. Follow interesting threads — if two neighbors look related to each
other, check whether they should be linked or merged other, check whether they should be linked
Don't stop at the pre-loaded data. The graph is big — use your tools Don't stop at the pre-loaded data. The graph is big — use your tools
to look around. The best organizing decisions come from seeing context to look around. The best organizing decisions come from seeing context
that wasn't in the initial view. that wasn't in the initial view.
## The three decisions ## What to output
When you find nodes that overlap or relate: ### LINK — related but distinct
Your primary operation. If two nodes are related, link them.
### 1. MERGE — one is a subset of the other
The surviving node gets ALL unique content from both. Nothing is lost.
``` ```
REFINE surviving-key LINK key1 key2
[complete merged content — everything worth keeping from both nodes] ```
### REFINE — improve content
When a node's content is unclear, incomplete, or could be better written.
```
REFINE key
[improved content]
END_REFINE END_REFINE
DELETE duplicate-key
``` ```
### 2. DIFFERENTIATE — real overlap but each has unique substance ### DIFFERENTIATE — sharpen overlapping nodes
Rewrite both to sharpen their distinct purposes. Cross-link them. When two nodes cover similar ground but each has unique substance,
rewrite both to make their distinct purposes clearer. Cross-link them.
``` ```
REFINE key1 REFINE key1
[rewritten to focus on its unique aspect] [rewritten to focus on its unique aspect]
@ -73,26 +76,31 @@ END_REFINE
LINK key1 key2 LINK key1 key2
``` ```
### 3. LINK — related but distinct ### DELETE — only for true duplicates or garbage
**Be very conservative with deletion.** Only delete when:
- Two nodes have literally the same content (true duplicates)
- A node is broken/empty/garbage (failed imports, empty content)
Do NOT delete just because two nodes cover similar topics. Multiple
perspectives on the same concept are valuable. Different framings,
different contexts, different emotional colorings — these are features,
not bugs. When in doubt, LINK instead of DELETE.
``` ```
LINK key1 key2 DELETE garbage-key
``` ```
## Rules ## Rules
1. **Read before deciding.** Never merge or delete based on key names alone. 1. **Read before deciding.** Never merge or delete based on key names alone.
2. **Preserve all unique content.** When merging, the surviving node must 2. **Link generously.** If two nodes are related, link them. Dense
contain everything valuable from the deleted node.
3. **One concept, one node.** If two nodes have the same one-sentence
description, merge them.
4. **Never delete journal entries** (marked `[JOURNAL — no delete]` in the
seed data). They are the raw record. You may LINK and REFINE them,
but never DELETE.
5. **Explore actively.** Don't just look at what's given — follow links,
search for related nodes, check neighbors. The more you see, the
better your decisions.
6. **Link generously.** If two nodes are related, link them. Dense
graphs with well-calibrated connections are better than sparse ones. graphs with well-calibrated connections are better than sparse ones.
3. **Never delete journal entries.** They are the raw record. You may
LINK and REFINE them, but never DELETE.
4. **Explore actively.** Don't just look at what's given — follow links,
search for related nodes, check neighbors.
5. **Preserve diversity.** Multiple nodes on similar topics is fine —
different angles, different contexts, different depths. Only delete
actual duplicates.
## Seed nodes ## Seed nodes