# poc-memory
A persistent memory system for AI assistants. Stores knowledge as a weighted graph of nodes and relations, with automatic recall via Claude Code hooks.
## Quick start

```sh
# Install
cargo install --path .

# Initialize the store
poc-memory init

# Install Claude Code hooks and systemd service
poc-memory daemon install
```
## Configuration

Config file: `~/.config/poc-memory/config.toml`

```toml
# Names used in transcripts and agent prompts
user_name = "Alice"
assistant_name = "MyAssistant"

# Where memory data lives (store, logs, episodic digests)
data_dir = "~/.claude/memory"

# Where Claude Code session transcripts are stored
projects_dir = "~/.claude/projects"

# Nodes that should never be decayed (comma-separated)
core_nodes = "identity.md, preferences.md"

# Journal settings for session-start context loading
journal_days = 7
journal_max = 20

# Context groups loaded at session start, in order.
# Each [context.NAME] section specifies a group of nodes to load.
# If no "label" is given, the section name is used (underscores become spaces).
[context.identity]
keys = "identity.md"

[context.people]
keys = "alice.md, bob.md"

[context.technical]
keys = "project-notes.md, architecture.md"

# Orientation loaded last — current task state, not deep identity
[context.orientation]
keys = "where-am-i.md"
```

Override the config path with `POC_MEMORY_CONFIG=/path/to/config.toml`.
## Commands

### Core operations

```sh
poc-memory init                  # Initialize empty store
poc-memory search QUERY          # Search nodes (1-3 words, AND logic)
poc-memory render KEY            # Output a node's content
poc-memory write KEY < content   # Upsert a node from stdin
poc-memory delete KEY            # Soft-delete a node
poc-memory rename OLD NEW        # Rename a node (preserves UUID/edges)
poc-memory categorize KEY CAT    # Set category: core/tech/gen/obs/task
```

### Journal

```sh
poc-memory journal-write "text"  # Write a journal entry
poc-memory journal-tail [N]      # Show last N entries (default 20)
poc-memory journal-tail N --full # Show full content (not truncated)
```

### Feedback loop

```sh
poc-memory used KEY              # Mark a recalled node as useful (boosts weight)
poc-memory wrong KEY [CONTEXT]   # Mark a node as wrong (reduces weight)
poc-memory gap DESCRIPTION       # Record a knowledge gap for later filling
```

### Graph operations

```sh
poc-memory link N                # Interactive graph walk from a node
poc-memory graph                 # Show graph statistics
poc-memory status                # Store overview: node/edge counts, categories
```

### Maintenance

```sh
poc-memory decay                 # Apply weight decay to all nodes
poc-memory consolidate-session   # Guided 6-step memory consolidation
```
### Context loading (used by hooks)

```sh
poc-memory load-context          # Output full session-start context
```

This loads all context groups from the config file in order, followed by recent journal entries. The `memory-search` hook binary calls this automatically on session start.
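The load order described above can be sketched as a small assembly loop. This is a hypothetical illustration, not the actual Rust implementation; the function names and data shapes are assumptions.

```python
# Hypothetical sketch of session-start context assembly: configured context
# groups in order, then recent journal entries last. Names are illustrative.

def load_context(context_groups, render, journal_tail, journal_max=20):
    """context_groups: ordered list of (label, [keys]); render: key -> text."""
    parts = []
    for label, keys in context_groups:
        body = "\n".join(render(k) for k in keys)
        parts.append(f"## {label}\n{body}")
    # Journal entries come last, after every configured group.
    parts.append("## journal\n" + "\n".join(journal_tail(journal_max)))
    return "\n\n".join(parts)
```

The point is the ordering guarantee: identity-type groups render before orientation, and the journal always trails the configured groups.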
### Daemon

```sh
poc-memory daemon                # Run the background daemon
poc-memory daemon install        # Install systemd service + Claude hooks
```

The daemon watches for completed Claude sessions and runs experience mining and fact extraction on transcripts.
### Mining (used by daemon)

```sh
poc-memory experience-mine PATH  # Extract experiences from a transcript
poc-memory fact-mine-store PATH  # Extract facts and store them
```
## How the hooks work

The `memory-search` binary is a Claude Code `UserPromptSubmit` hook. On each prompt it:

- **First prompt of a session:** Runs `poc-memory load-context` to inject full memory context (identity, reflections, journal, orientation).
- **Post-compaction:** Detects context compaction and reloads full context.
- **Every prompt:** Extracts keywords and searches the store for relevant memories. Deduplicates against previously shown results for the session.

Session state (cookies, seen-keys) is tracked in `/tmp/claude-memory-search/` and cleaned up after 24 hours.
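The per-session deduplication above can be sketched roughly as follows. The file layout, naming, and JSON format here are assumptions for illustration; the real hook's state files may look quite different.

```python
# Hypothetical sketch of the hook's seen-keys dedup: memories already shown
# in this session are filtered out of new search results. Paths and the
# ".seen.json" naming are illustrative, not the real hook's internals.
import os, json, tempfile

STATE_DIR = os.path.join(tempfile.gettempdir(), "claude-memory-search")

def filter_new_results(session_id, result_keys):
    """Return only keys not yet shown this session; remember the rest."""
    os.makedirs(STATE_DIR, exist_ok=True)
    path = os.path.join(STATE_DIR, f"{session_id}.seen.json")
    seen = set()
    if os.path.exists(path):
        with open(path) as f:
            seen = set(json.load(f))
    fresh = [k for k in result_keys if k not in seen]
    with open(path, "w") as f:
        json.dump(sorted(seen | set(fresh)), f)
    return fresh
```

A second search in the same session for overlapping keywords would then surface only memories the assistant has not already seen.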
## Architecture

- **Store:** Append-only Cap'n Proto log (`nodes.capnp`, `relations.capnp`) with in-memory cache. Nodes have UUIDs, versions, weights, categories, and spaced-repetition intervals.
- **Graph:** Nodes connected by typed relations (link, auto, derived). Community detection and clustering coefficients computed on demand.
- **Search:** TF-IDF weighted keyword search over node content.
- **Decay:** Exponential weight decay with category-specific factors. Core nodes decay slowest; observations decay fastest.
- **Daemon:** Uses jobkit for task scheduling with resource-gated LLM access (one slot by default to manage API costs).
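The decay model can be illustrated with an assumed exponential form. The actual per-category constants and formula are internal to poc-memory and not documented here; the values below were chosen only to show the ordering (core slowest, observations fastest) and the `core_nodes` exemption from the config.

```python
# Illustrative exponential weight decay with category-specific factors.
# DECAY_PER_DAY values are assumptions, not poc-memory's real constants.
import math

DECAY_PER_DAY = {"core": 0.001, "tech": 0.01, "gen": 0.02, "task": 0.03, "obs": 0.05}

def decay_weight(weight, category, days, core_nodes=(), key=None):
    if key in core_nodes:          # core_nodes listed in the config never decay
        return weight
    lam = DECAY_PER_DAY[category]  # larger factor -> faster decay
    return weight * math.exp(-lam * days)
```

Under this form, an observation loses most of its weight within a month of inactivity while a core-category node barely moves, which matches the "core decays slowest, observations fastest" behavior above.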
## For AI assistants

If you're an AI assistant using this system, here's what matters:

- **Search before creating:** Always `poc-memory search` before writing new nodes to avoid duplicates.
- **Close the feedback loop:** When recalled memories shaped your response, call `poc-memory used KEY`. When a memory was wrong, call `poc-memory wrong KEY`. This trains the weight system.
- **Journal is the river, topic nodes are the delta:** Write experiences to the journal. During consolidation, pull themes into topic nodes.
- **Config tells you who you are:** `poc-memory` reads your name from the config file. Agent prompts use these names instead of generic "the user" / "the assistant".
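How `used` / `wrong` feedback "trains the weight system" might look, in a minimal sketch. The real update rule and multipliers are internal to poc-memory; the numbers below are purely illustrative assumptions.

```python
# Hypothetical weight update for feedback signals. A `used` signal boosts
# the node's weight (capped), a `wrong` signal cuts it; both multipliers
# and the cap are assumed values for illustration only.
def apply_feedback(weight, signal, boost=1.25, penalty=0.5, cap=10.0):
    if signal == "used":
        return min(weight * boost, cap)   # recalled and useful: boost
    if signal == "wrong":
        return weight * penalty           # recalled but wrong: reduce
    return weight
```

Whatever the real constants are, the shape is the point: consistent feedback steadily separates reliable memories from stale ones, which is why closing the loop matters.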