# poc-memory
A persistent memory and notification system for AI assistants, modelled after the human hippocampus. Combines episodic memory (timestamped journal of experiences) with an associative knowledge graph (weighted nodes connected by typed relations), and layered background processes that maintain graph health — mirroring how biological memory consolidates during rest.
## Components
| Component | What it does | Docs |
|---|---|---|
| Memory store | Knowledge graph with episodic journal, TF-IDF search, spectral embedding, weight decay | docs/memory.md |
| Memory daemon | Background pipeline: experience-mine, fact-mine, consolidation | docs/daemon.md |
| Notification daemon | Activity-aware message routing from IRC and Telegram | docs/notifications.md |
| Hooks | Claude Code integration: memory recall and notification delivery | docs/hooks.md |
## Getting started
### Install
```sh
cargo install --path .
```
This builds four binaries:
- `poc-memory` — memory store CLI (search, journal, consolidation)
- `memory-search` — Claude Code hook for memory recall
- `poc-daemon` — notification daemon (IRC, Telegram, idle tracking)
- `poc-hook` — Claude Code hook for session lifecycle events
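To confirm the install, you can check that the binaries landed in cargo's bin directory (this assumes the default install root of `~/.cargo/bin`):

```sh
ls ~/.cargo/bin | grep -E '^(poc-memory|memory-search|poc-daemon|poc-hook)$'
```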
### Initialize
```sh
poc-memory init
```
Creates the store at `~/.consciousness/memory/nodes.capnp` and a default config at `~/.consciousness/config.jsonl`. Edit the config to set your name, configure context groups, and point at your projects directory.
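As a rough illustration of the JSONL shape, each line is one config record. The field names below are hypothetical, invented for this sketch; check `config.example.jsonl` in the repository for the actual schema:

```jsonl
{"type": "identity", "name": "Alice"}
{"type": "context_group", "name": "work", "members": ["#dev", "telegram"]}
{"type": "projects", "dir": "~/projects"}
```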
### Set up hooks
Add to `~/.claude/settings.json` (see `docs/hooks.md` for full details):
```json
{
  "hooks": {
    "UserPromptSubmit": [{"hooks": [
      {"type": "command", "command": "memory-search", "timeout": 10},
      {"type": "command", "command": "poc-hook", "timeout": 5}
    ]}],
    "Stop": [{"hooks": [
      {"type": "command", "command": "poc-hook", "timeout": 5}
    ]}]
  }
}
```
This gives your AI assistant persistent memory across sessions — relevant memories are recalled on each prompt, and experiences are extracted from transcripts after sessions end.
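The hook commands above are resolved from `PATH`, so it is worth verifying that Claude Code will find them. A generic shell check, not a project-specific tool:

```sh
command -v memory-search poc-hook
```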
### Start the background daemon
```sh
poc-memory daemon
```
The daemon watches for completed session transcripts and automatically extracts experiences and facts into the knowledge graph. See docs/daemon.md for pipeline details and diagnostics.
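The docs don't describe a built-in service integration, so one simple way to keep the daemon running is a plain background job. The log path below is an arbitrary choice for this sketch, not a project convention:

```sh
# Run the daemon detached from the terminal and capture its output
nohup poc-memory daemon >> ~/.consciousness/daemon.log 2>&1 &
```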
### Basic usage
```sh
poc-memory journal-write "learned that X does Y"   # Write to journal
poc-memory search "some topic"                     # Search the graph
poc-memory status                                  # Store overview
```
## For AI assistants
- Search before creating: run `poc-memory search` before writing new nodes (a combined example follows this list)
- Close the feedback loop: `poc-memory used KEY` / `poc-memory wrong KEY`
- Journal is the river, topic nodes are the delta: write experiences to the journal, pull themes into topic nodes during consolidation
- Notifications flow automatically: IRC/Telegram messages arrive as `additionalContext`
- Use daemon commands directly: `poc-daemon irc send #channel msg`, `poc-daemon telegram send msg`
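Putting these together, a typical session loop looks like the sketch below. It uses only the commands shown above; `KEY` stands for whatever node key `search` returns, and the quoted strings are example content:

```sh
# Check for existing knowledge before writing anything new
poc-memory search "capnp schema evolution"

# Record a fresh experience in the journal
poc-memory journal-write "learned that schema changes need new field ordinals"

# Close the feedback loop on recalled nodes
poc-memory used KEY    # the recall was helpful
poc-memory wrong KEY   # the recall was incorrect or stale
```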