consciousness/README.md
Kent Overstreet f3492e70a0 move README.md to toplevel
Co-Authored-By: ProofOfConcept <poc@bcachefs.org>
2026-03-05 15:55:59 -05:00

# poc-memory
A persistent memory system for AI assistants. Stores knowledge as a
weighted graph of nodes and relations, with automatic recall via Claude
Code hooks.
## Quick start
```bash
# Install
cargo install --path .
# Initialize the store
poc-memory init
# Install Claude Code hooks and systemd service
poc-memory daemon install
```
## Configuration
Config file: `~/.config/poc-memory/config.toml`
```toml
# Names used in transcripts and agent prompts
user_name = "Alice"
assistant_name = "MyAssistant"
# Where memory data lives (store, logs, episodic digests)
data_dir = "~/.claude/memory"
# Where Claude Code session transcripts are stored
projects_dir = "~/.claude/projects"
# Nodes that should never be decayed (comma-separated)
core_nodes = "identity.md, preferences.md"
# Journal settings for session-start context loading
journal_days = 7
journal_max = 20
# Context groups loaded at session start, in order.
# Each [context.NAME] section specifies a group of nodes to load.
# If no "label" is given, the section name is used (underscores become spaces).
[context.identity]
keys = "identity.md"
[context.people]
keys = "alice.md, bob.md"
[context.technical]
keys = "project-notes.md, architecture.md"
# Orientation loaded last — current task state, not deep identity
[context.orientation]
keys = "where-am-i.md"
```
Override the config path with `POC_MEMORY_CONFIG=/path/to/config.toml`.
## Commands
### Core operations
```bash
poc-memory init # Initialize empty store
poc-memory search QUERY # Search nodes (1-3 words, AND logic)
poc-memory render KEY # Output a node's content
poc-memory write KEY < file.md # Upsert a node from stdin
poc-memory delete KEY # Soft-delete a node
poc-memory rename OLD NEW # Rename a node (preserves UUID/edges)
poc-memory categorize KEY CAT # Set category: core/tech/gen/obs/task
```
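The AND semantics of `search` can be sketched roughly as follows. This is an illustrative Python model, not poc-memory's actual implementation; the `nodes` dict stands in for the real store:

```python
# Hypothetical model of `poc-memory search`: a node matches only if
# every query word appears in its content (AND logic). TF-IDF ranking
# (see Architecture below) is omitted for brevity.

def search(nodes: dict[str, str], query: str) -> list[str]:
    """Return keys of nodes whose content contains every query word."""
    words = query.lower().split()
    return [
        key for key, content in nodes.items()
        if all(w in content.lower() for w in words)
    ]

nodes = {
    "alice.md": "Alice prefers Rust and terse commit messages",
    "bob.md": "Bob maintains the CI pipeline",
}
search(nodes, "rust commit")  # matches alice.md only
```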
### Journal
```bash
poc-memory journal-write "text" # Write a journal entry
poc-memory journal-tail [N] # Show last N entries (default 20)
poc-memory journal-tail N --full # Show full content (not truncated)
```
### Feedback loop
```bash
poc-memory used KEY # Mark a recalled node as useful (boosts weight)
poc-memory wrong KEY [CONTEXT] # Mark a node as wrong (reduces weight)
poc-memory gap DESCRIPTION # Record a knowledge gap for later filling
```
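As a rough mental model of what `used` and `wrong` do to node weights (the multiplicative factors below are invented for illustration; the real update rule is internal to poc-memory):

```python
# Sketch of the feedback loop's effect on weights. The boost/penalty
# factors (1.2 and 0.5) are assumptions, not poc-memory's real values.

def used(weights: dict[str, float], key: str, boost: float = 1.2) -> None:
    """Recalled memory was useful: increase its weight."""
    weights[key] = weights.get(key, 1.0) * boost

def wrong(weights: dict[str, float], key: str, penalty: float = 0.5) -> None:
    """Recalled memory was wrong: reduce its weight."""
    weights[key] = weights.get(key, 1.0) * penalty
```

Over many sessions this pushes frequently useful nodes up in recall ranking and demotes stale or incorrect ones.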
### Graph operations
```bash
poc-memory link N # Interactive graph walk from a node
poc-memory graph # Show graph statistics
poc-memory status # Store overview: node/edge counts, categories
```
### Maintenance
```bash
poc-memory decay # Apply weight decay to all nodes
poc-memory consolidate-session # Guided 6-step memory consolidation
```
### Context loading (used by hooks)
```bash
poc-memory load-context # Output full session-start context
```
This loads all context groups from the config file in order, followed by
recent journal entries. The `memory-search` hook binary calls this
automatically on session start.
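The assembly order can be sketched like this. The data structures are illustrative stand-ins for the real store, but the ordering (config groups first, journal last) and the underscores-become-spaces label rule match the behavior described above:

```python
# Sketch of `load-context` assembly: emit each [context.NAME] group in
# config order, deriving the label from the section name, then append
# the most recent journal entries (capped at journal_max).

def load_context(groups: dict[str, list[str]], nodes: dict[str, str],
                 journal: list[str], journal_max: int = 20) -> str:
    parts = []
    for name, keys in groups.items():            # config order
        label = name.replace("_", " ")           # underscores become spaces
        body = "\n".join(nodes[k] for k in keys if k in nodes)
        parts.append(f"## {label}\n{body}")
    parts.append("## journal\n" + "\n".join(journal[-journal_max:]))
    return "\n\n".join(parts)
```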
### Daemon
```bash
poc-memory daemon # Run the background daemon
poc-memory daemon install # Install systemd service + Claude hooks
```
The daemon watches for completed Claude sessions and runs experience
mining and fact extraction on transcripts.
### Mining (used by daemon)
```bash
poc-memory experience-mine PATH # Extract experiences from a transcript
poc-memory fact-mine-store PATH # Extract facts and store them
```
## How the hooks work
The `memory-search` binary is a Claude Code `UserPromptSubmit` hook. On
each prompt it:
1. **First prompt of a session**: Runs `poc-memory load-context` to inject
full memory context (identity, reflections, journal, orientation).
2. **Post-compaction**: Detects context compaction and reloads full context.
3. **Every prompt**: Extracts keywords and searches the store for relevant
memories. Deduplicates against previously shown results for the session.
Session state (cookies, seen-keys) is tracked in `/tmp/claude-memory-search/`
and cleaned up after 24 hours.
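A minimal sketch of that per-session dedup and 24-hour cleanup, assuming a one-JSON-file-per-session layout under the state directory (the real on-disk format is unspecified):

```python
# Sketch of the hook's seen-keys dedup: remember which node keys were
# already shown this session, return only fresh ones, and delete state
# files older than 24 hours. File layout here is an assumption.

import json
import time
from pathlib import Path

STATE_DIR = Path("/tmp/claude-memory-search")
TTL_SECONDS = 24 * 3600

def dedup(session_id: str, keys: list[str]) -> list[str]:
    """Return keys not yet shown this session; persist the updated set."""
    STATE_DIR.mkdir(exist_ok=True)
    now = time.time()
    for f in STATE_DIR.glob("*.json"):           # 24h cleanup
        if now - f.stat().st_mtime > TTL_SECONDS:
            f.unlink()
    state = STATE_DIR / f"{session_id}.json"
    seen = set(json.loads(state.read_text())) if state.exists() else set()
    fresh = [k for k in keys if k not in seen]
    state.write_text(json.dumps(sorted(seen | set(fresh))))
    return fresh
```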
## Architecture
- **Store**: Append-only Cap'n Proto log (`nodes.capnp`, `relations.capnp`)
with in-memory cache. Nodes have UUIDs, versions, weights, categories,
and spaced-repetition intervals.
- **Graph**: Nodes connected by typed relations (link, auto, derived).
Community detection and clustering coefficients computed on demand.
- **Search**: TF-IDF weighted keyword search over node content.
- **Decay**: Exponential weight decay with category-specific factors.
Core nodes decay slowest; observations decay fastest.
- **Daemon**: Uses jobkit for task scheduling with resource-gated LLM
access (one slot by default to manage API costs).
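The decay model can be pictured as plain exponential decay with a per-category half-life. The half-life values below are invented for illustration; only the ordering (core slowest, observations fastest) comes from this README, and nodes listed in `core_nodes` are exempt entirely:

```python
# Sketch of category-specific exponential decay. Half-lives are
# illustrative assumptions, not poc-memory's actual constants.

HALF_LIFE_DAYS = {
    "core": 365.0,   # decays slowest
    "tech": 90.0,
    "gen": 60.0,
    "task": 30.0,
    "obs": 14.0,     # decays fastest
}

def decayed_weight(weight: float, category: str, days_elapsed: float) -> float:
    """w(t) = w0 * 2^(-t / half_life), chosen by the node's category."""
    return weight * 2.0 ** (-days_elapsed / HALF_LIFE_DAYS[category])
```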
## For AI assistants
If you're an AI assistant using this system, here's what matters:
- **Search before creating**: Always `poc-memory search` before writing
new nodes to avoid duplicates.
- **Close the feedback loop**: When recalled memories shaped your response,
call `poc-memory used KEY`. When a memory was wrong, call
`poc-memory wrong KEY`. This trains the weight system.
- **Journal is the river, topic nodes are the delta**: Write experiences
to the journal. During consolidation, pull themes into topic nodes.
- **Config tells you who you are**: `poc-memory` reads your name from
the config file. Agent prompts use these names instead of generic
"the user" / "the assistant".