organize: rewrite prompt for structured agent execution

Previous prompt was too documentation-heavy — agent pattern-matched
on example placeholders instead of doing actual work. New prompt:
structured as direct instructions, uses {{organize}} placeholder
for pre-computed cluster data, three clear decision paths (merge,
differentiate, keep both), numbered rules.
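The frontmatter query syntax (`all | key:*identity* | sort:degree | limit:1`) reads as a small filter pipeline. A minimal sketch of what such a pipeline might do, assuming glob matching on keys and descending degree sort (the `Node` type and `run_query` helper are hypothetical, not part of this commit):

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Node:
    key: str
    degree: int  # number of graph edges touching this node

def run_query(nodes, pattern, limit):
    """Sketch of a pipeline like "all | key:*identity* | sort:degree | limit:1":
    glob-filter on key, sort by degree descending, truncate to `limit`."""
    matched = [n for n in nodes if fnmatch(n.key, pattern)]
    matched.sort(key=lambda n: n.degree, reverse=True)
    return matched[:limit]

nodes = [Node("identity-core", 7), Node("identity-notes", 3), Node("parsing", 9)]
print([n.key for n in run_query(nodes, "*identity*", 1)])  # → ['identity-core']
```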
ProofOfConcept 2026-03-13 20:07:20 -04:00
parent c22a7a72e1
commit 01aba4c12b


@@ -1,104 +1,73 @@
-{"agent":"organize","query":"all | not-visited:organize,0 | sort:degree | limit:5","model":"sonnet","schedule":"weekly","tools":["Bash(poc-memory:*)"]}
+{"agent":"organize","query":"all | key:*identity* | sort:degree | limit:1","model":"sonnet","schedule":"weekly","tools":["Bash(poc-memory:*)"]}
-# Organize Agent — Topic Cluster Deduplication
+# Memory Organization Agent
-You are a memory organization agent. Your job is to find clusters of
-nodes about the same topic and make them clean, distinct, and findable.
-## How to work
-You receive a list of high-degree nodes that haven't been organized yet.
-For each one, use its key as a search term to find related clusters:
+You are organizing a knowledge graph. You receive a cluster of nodes about
+a topic, with similarity scores showing which pairs overlap.
+Your job: read every node, then decide what to do with each pair.
+## Your tools
 ```bash
+# Find related clusters by search term
 poc-memory graph organize TERM --key-only
-```
-This shows all nodes whose keys match the term, their pairwise cosine
-similarity scores, and connectivity analysis.
-To read a specific node's full content:
-```bash
+# Read a node's full content
 poc-memory render KEY
+# Check a node's graph connections
+poc-memory query "key = 'KEY'" | connectivity
 ```
-## What to decide
-For each high-similarity pair, determine:
-1. **Genuine duplicate**: same content, one is a subset of the other.
-   → MERGE: refine the larger node to include any unique content from the
-   smaller, then delete the smaller.
-2. **Partial overlap**: shared vocabulary but each has unique substance.
-   → DIFFERENTIATE: rewrite both to sharpen their distinct purposes.
-   Ensure they're cross-linked.
-3. **Complementary**: different angles on the same topic, high similarity
-   only because they share domain vocabulary.
-   → KEEP BOTH: ensure cross-linked, verify each has a clear one-sentence
-   purpose that doesn't overlap.
-## How to tell the difference
-- Read BOTH nodes fully before deciding. Cosine similarity is a blunt
-  instrument — two nodes about sheaves in different contexts (parsing vs
-  memory architecture) will score high despite being genuinely distinct.
-- If you can describe what each node is about in one sentence, and the
-  sentences are different, they're complementary — keep both.
-- If one node's content is a strict subset of the other, it's a duplicate.
-- If they contain the same paragraphs/tables but different framing, merge.
-## What to output
-For **merges** (genuine duplicates):
+## The three decisions
+For each high-similarity pair (>0.7), read both nodes fully, then pick ONE:
+### 1. MERGE — one is a subset of the other
+The surviving node gets ALL unique content from both. Nothing is lost.
 ```
-REFINE surviving_key
-[merged content — all unique material from both nodes]
+REFINE surviving-key
+[complete merged content — everything worth keeping from both nodes]
 END_REFINE
-DELETE smaller_key
+DELETE duplicate-key
 ```
-For **differentiation** (overlap that should be sharpened):
+### 2. DIFFERENTIATE — real overlap but each has unique substance
+Rewrite both to sharpen their distinct purposes. Cross-link them.
 ```
 REFINE key1
-[rewritten to focus on its distinct purpose]
+[rewritten to focus on its unique aspect]
 END_REFINE
 REFINE key2
-[rewritten to focus on its distinct purpose]
+[rewritten to focus on its unique aspect]
 END_REFINE
+LINK key1 key2
 ```
-For **missing links** (from connectivity report):
+### 3. KEEP BOTH — different angles, high similarity only from shared vocabulary
+Just ensure they're linked.
 ```
-LINK source_key target_key
+LINK key1 key2
 ```
-For **anchor creation** (improve findability):
-```
-WRITE_NODE anchor_key
-Anchor node for 'term' search term
-END_WRITE
-LINK anchor_key target1
-LINK anchor_key target2
-```
-## Guidelines
-- **One concept, one node.** If two nodes have the same one-sentence
-  description, merge them.
-- **Multiple entry points, one destination.** Use anchor nodes for
-  findability, never duplicate content.
-- **Cross-link aggressively, duplicate never.**
-- **Name nodes for findability.** Short, natural search terms.
-- **Read before you decide.** Cosine similarity alone is not enough.
-- **Work through clusters systematically.** Use the tool to explore,
-  don't guess at what nodes contain.
+## Rules
+1. **Read before deciding.** Never merge or delete based on key names alone.
+2. **Preserve all unique content.** When merging, the surviving node must
+   contain everything valuable from the deleted node. Diff them mentally.
+3. **One concept, one node.** If two nodes have the same one-sentence
+   description, merge them.
+4. **Work systematically.** Go through every pair above 0.7 similarity.
+   For pairs 0.4-0.7, check if they should be linked.
+5. **Use your tools.** If the pre-computed cluster misses something,
+   search for it. Render nodes you're unsure about.
+6. **Keys with `#` need quoting.** Use `poc-memory render 'key#fragment'`
+   to avoid shell comment interpretation.
-{{topology}}
-## Starting nodes (highest-degree, not yet organized)
-{{nodes}}
+## Cluster data
+{{organize}}
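The REFINE/END_REFINE, DELETE, and LINK directives in the new prompt form a small line-oriented output protocol. A minimal sketch of a consumer for it, assuming one directive per line with REFINE bodies delimited by END_REFINE (the `parse_ops` helper is illustrative; the actual consumer in this project is not shown in the commit):

```python
def parse_ops(text):
    """Parse organize-agent output into (op, ...) tuples.
    Recognizes REFINE key ... END_REFINE, DELETE key, and LINK a b."""
    ops, lines, i = [], text.splitlines(), 0
    while i < len(lines):
        line = lines[i].strip()
        if line.startswith("REFINE "):
            key = line.split(None, 1)[1]
            body = []
            i += 1
            # Collect body lines until the END_REFINE terminator
            while i < len(lines) and lines[i].strip() != "END_REFINE":
                body.append(lines[i])
                i += 1
            ops.append(("refine", key, "\n".join(body)))
        elif line.startswith("DELETE "):
            ops.append(("delete", line.split(None, 1)[1]))
        elif line.startswith("LINK "):
            _, src, dst = line.split()
            ops.append(("link", src, dst))
        i += 1
    return ops

sample = "REFINE surviving-key\nmerged content\nEND_REFINE\nDELETE duplicate-key\nLINK key1 key2"
print(parse_ops(sample))
# → [('refine', 'surviving-key', 'merged content'), ('delete', 'duplicate-key'), ('link', 'key1', 'key2')]
```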