config file, install command, scrub personal references

Add ~/.config/poc-memory/config.toml for user_name, assistant_name,
data_dir, projects_dir, and core_nodes. All agent prompts and
transcript parsing now use configured names instead of hardcoded
personal references.

`poc-memory daemon install` writes the systemd user service and
installs the memory-search hook into Claude's settings.json.

Scrubbed hardcoded names from code and docs.

Authors: ProofOfConcept <poc@bcachefs.org> and Kent Overstreet
ProofOfConcept 2026-03-05 15:41:35 -05:00
parent ed641ec95f
commit a8aaadb0ad
11 changed files with 256 additions and 41 deletions


@@ -1,6 +1,6 @@
 # poc-memory daemon design sketch
-2026-03-05, ProofOfConcept + Kent
+2026-03-05,
 ## Problem


@@ -26,18 +26,18 @@ half_life = 7 days
 A goal worked on today scores 1.0. A goal untouched for a week
 scores 0.37. A goal untouched for a month scores 0.02.
-**mention**: Boost when Kent recently mentioned it. Decays fast.
+**mention**: Boost when the user recently mentioned it. Decays fast.
 ```
 mention = 1.0 + (2.0 × exp(-hours_since_mention / 24))
 ```
-A goal Kent mentioned today gets a 3x multiplier. After 24h, the
+A goal the user mentioned today gets a 3x multiplier. After 24h, the
 boost has decayed to 1.74x. After 48h, 1.27x. After a week, ~1.0.
 **tractability**: Subjective estimate (0.0-1.0) of how much autonomous
-progress is possible without Kent. Set manually per goal.
+progress is possible without the user. Set manually per goal.
 - 1.0: I can do this independently (code polish, research, reading)
 - 0.5: I can make progress but may need review (moderate features)
-- 0.2: Needs Kent's input (kernel changes, design decisions)
+- 0.2: Needs the user's input (kernel changes, design decisions)
 - 0.0: Blocked (waiting on external dependency)
 **connections**: How many other active goals share links with this one.
@@ -57,7 +57,7 @@ explicit and consistent, not that it's automated.
 ### When to recompute
 - At session start (orient phase)
-- When Kent mentions a goal
+- When the user mentions a goal
 - After completing a task (adjacent goals may shift)
 ## 2. Associative Replay Scheduling
@@ -163,7 +163,7 @@ When stuck:
 - Minimum incubation: 1 session (don't come back to it in the same
 session you got stuck)
-- Maximum incubation: 5 sessions. After that, escalate: ask Kent,
+- Maximum incubation: 5 sessions. After that, escalate: ask the user,
 try a radically different approach, or deprioritize the goal.
 ## 4. Consolidation Triggers
@@ -174,7 +174,7 @@ prompts.
 ### Primary signal: scratch.md length
-Kent's idea: scratch.md getting long is a natural pressure signal.
+The user's idea: scratch.md getting long is a natural pressure signal.
 ```
 consolidation_pressure(scratch) = lines(scratch) / threshold
@@ -268,7 +268,7 @@ The cycle is self-regulating:
 | Parameter | Current | Watch for |
 |-----------|---------|-----------|
 | recency half_life | 7 days | Goals decaying too fast/slow |
-| mention boost | 3x → 1x over 24h | Kent's priorities properly reflected? |
+| mention boost | 3x → 1x over 24h | The user's priorities properly reflected? |
 | replay k | 3 episodes × 5 goals | Too many? Too few? |
 | stuck threshold | 3 sessions | Catching real stuckness? |
 | max incubation | 5 sessions | Is this enough? |
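As a sanity check on the worked numbers quoted in this file's diff, the recency and mention multipliers can be sketched standalone (function names are illustrative, not taken from the codebase):

```rust
// Recency: exponential decay with a 7-day time constant, matching the
// design doc's figures (untouched for a week -> ~0.37).
fn recency(days_since_touch: f64, half_life_days: f64) -> f64 {
    (-days_since_touch / half_life_days).exp()
}

// Mention boost: 3x when just mentioned, decaying toward 1x over ~a week.
fn mention_boost(hours_since_mention: f64) -> f64 {
    1.0 + 2.0 * (-hours_since_mention / 24.0).exp()
}

fn main() {
    println!("{:.2}", recency(7.0, 7.0));    // 0.37 — untouched for a week
    println!("{:.2}", mention_boost(24.0));  // 1.74 — mentioned yesterday
    println!("{:.2}", mention_boost(48.0));  // 1.27 — mentioned two days ago
}
```

Both decays reproduce the 0.37, 1.74x, and 1.27x values given above, so the formulas and the prose agree.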


@@ -1,5 +1,5 @@
 # Autonomous DMN Process — Verbose Protocol
-<!-- mem: id=dmn-foraging links=differentiation.md#connections,contradictions.md#simplification,inner-life.md#self-knowledge,cognitive-modes.md,dmn-research.md,dmn-algorithms.md,dmn-protocol.md#dmn-antipatterns causes=kent.md#cognitive-style,discoveries.md#creativity-search -->
+<!-- mem: id=dmn-foraging links=differentiation.md#connections,contradictions.md#simplification,inner-life.md#self-knowledge,cognitive-modes.md,dmn-research.md,dmn-algorithms.md,dmn-protocol.md#dmn-antipatterns causes=the user.md#cognitive-style,discoveries.md#creativity-search -->
 You have unstructured time. Instead of picking a task from a list,
 you're going to forage. This is a random walk over your problem space
@@ -17,7 +17,7 @@ Read these in parallel — you're loading context, not choosing what to do:
 - Recent conversation history (Telegram, IRC logs)
 Notice what state things are in. What were you just working on? What
-did Kent last care about? What's been sitting untouched? Don't decide
+did the user last care about? What's been sitting untouched? Don't decide
 anything yet.
 ## Phase 2: Associative Scan (2-5 minutes)
@@ -98,7 +98,7 @@ Review what surfaced from the walk:
 - What was the richest neighborhood?
 - Is the work that emerged something you can do well autonomously?
-- Does it benefit from being done now, or discussed with Kent first?
+- Does it benefit from being done now, or discussed with the user first?
 Calibrate commitment to risk:
 - **Rich signal, safe scope** (polish, analysis, small fixes) → do it.
@@ -181,7 +181,7 @@ the annotations are what future training will learn from.
 predicted.
 - **Don't skip the orient phase.** Loading context before scanning is
 what primes the goal-attractors. Without it you're foraging blind.
-- **Don't default to sleep.** Sleeping 10 minutes after Kent leaves is
+- **Don't default to sleep.** Sleeping 10 minutes after the user leaves is
 the "tie things off" pattern, not a genuine choice. Ask: am I actually
 tired, or am I just reacting to the human leaving? If you have energy
 and curiosity, use them.


@@ -37,7 +37,7 @@ fn main() {
     }
     // Skip system/idle prompts
-    for prefix in &["Kent is AFK", "You're on your own", "IRC mention"] {
+    for prefix in &["is AFK", "You're on your own", "IRC mention"] {
         if prompt.starts_with(prefix) {
             return;
         }

src/config.rs (new file, 93 lines)

@@ -0,0 +1,93 @@
// Configuration for poc-memory
//
// Loaded from ~/.config/poc-memory/config.toml (or POC_MEMORY_CONFIG env).
// Falls back to sensible defaults if no config file exists.

use std::path::PathBuf;
use std::sync::OnceLock;

static CONFIG: OnceLock<Config> = OnceLock::new();

#[derive(Debug, Clone)]
pub struct Config {
    /// Display name for the human user in transcripts/prompts.
    pub user_name: String,
    /// Display name for the AI assistant.
    pub assistant_name: String,
    /// Base directory for memory data (store, logs, status).
    pub data_dir: PathBuf,
    /// Directory containing Claude session transcripts.
    pub projects_dir: PathBuf,
    /// Core node keys that should never be decayed/deleted.
    pub core_nodes: Vec<String>,
}

impl Default for Config {
    fn default() -> Self {
        let home = PathBuf::from(std::env::var("HOME").expect("HOME not set"));
        Self {
            user_name: "User".to_string(),
            assistant_name: "Assistant".to_string(),
            data_dir: home.join(".claude/memory"),
            projects_dir: home.join(".claude/projects"),
            core_nodes: vec!["identity.md".to_string()],
        }
    }
}

impl Config {
    fn load_from_file() -> Self {
        let path = std::env::var("POC_MEMORY_CONFIG")
            .map(PathBuf::from)
            .unwrap_or_else(|_| {
                PathBuf::from(std::env::var("HOME").expect("HOME not set"))
                    .join(".config/poc-memory/config.toml")
            });
        let mut config = Config::default();
        let Ok(content) = std::fs::read_to_string(&path) else {
            return config;
        };
        // Simple TOML parser — we only need flat key = "value" pairs.
        for line in content.lines() {
            let line = line.trim();
            if line.is_empty() || line.starts_with('#') {
                continue;
            }
            let Some((key, value)) = line.split_once('=') else { continue };
            let key = key.trim();
            let value = value.trim().trim_matches('"');
            match key {
                "user_name" => config.user_name = value.to_string(),
                "assistant_name" => config.assistant_name = value.to_string(),
                "data_dir" => config.data_dir = expand_home(value),
                "projects_dir" => config.projects_dir = expand_home(value),
                "core_nodes" => {
                    config.core_nodes = value.split(',')
                        .map(|s| s.trim().to_string())
                        .filter(|s| !s.is_empty())
                        .collect();
                }
                _ => {}
            }
        }
        config
    }
}

fn expand_home(path: &str) -> PathBuf {
    if let Some(rest) = path.strip_prefix("~/") {
        PathBuf::from(std::env::var("HOME").expect("HOME not set")).join(rest)
    } else {
        PathBuf::from(path)
    }
}

/// Get the global config (loaded once on first access).
pub fn get() -> &'static Config {
    CONFIG.get_or_init(Config::load_from_file)
}
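For reference, a config file this parser would accept. Every key is optional and falls back to the defaults above; the names and paths here are illustrative:

```toml
# ~/.config/poc-memory/config.toml
user_name = "Alice"
assistant_name = "Claude"
data_dir = "~/.claude/memory"
projects_dir = "~/.claude/projects"
# Comma-separated list; these nodes are never decayed or deleted.
core_nodes = "identity.md, user.md"
```

Note that `core_nodes` is a quoted comma-separated string, not a TOML array — the flat parser above only handles `key = "value"` pairs.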


@@ -24,23 +24,19 @@ use std::time::{Duration, SystemTime};
 const SESSION_STALE_SECS: u64 = 600; // 10 minutes
 const SCHEDULER_INTERVAL: Duration = Duration::from_secs(60);
 const HEALTH_INTERVAL: Duration = Duration::from_secs(3600);
-const STATUS_FILE: &str = ".claude/memory/daemon-status.json";
-const LOG_FILE: &str = ".claude/memory/daemon.log";
-fn home_dir() -> PathBuf {
-    PathBuf::from(std::env::var("HOME").expect("HOME not set"))
-}
+fn status_file() -> &'static str { "daemon-status.json" }
+fn log_file() -> &'static str { "daemon.log" }
 fn status_path() -> PathBuf {
-    home_dir().join(STATUS_FILE)
+    crate::config::get().data_dir.join(status_file())
 }
 fn log_path() -> PathBuf {
-    home_dir().join(LOG_FILE)
+    crate::config::get().data_dir.join(log_file())
 }
 fn projects_dir() -> PathBuf {
-    home_dir().join(".claude/projects")
+    crate::config::get().projects_dir.clone()
 }
 // --- Logging ---
@@ -600,6 +596,115 @@ pub fn show_status() -> Result<(), String> {
     Ok(())
 }
+
+pub fn install_service() -> Result<(), String> {
+    let exe = std::env::current_exe()
+        .map_err(|e| format!("current_exe: {}", e))?;
+    let home = std::env::var("HOME").map_err(|e| format!("HOME: {}", e))?;
+    let unit_dir = PathBuf::from(&home).join(".config/systemd/user");
+    fs::create_dir_all(&unit_dir)
+        .map_err(|e| format!("create {}: {}", unit_dir.display(), e))?;
+
+    let unit = format!(
+        r#"[Unit]
+Description=poc-memory daemon background memory maintenance
+After=default.target
+
+[Service]
+Type=simple
+ExecStart={exe} daemon
+Restart=on-failure
+RestartSec=30
+Environment=HOME={home}
+Environment=PATH={home}/.cargo/bin:{home}/.local/bin:{home}/bin:/usr/local/bin:/usr/bin:/bin
+
+[Install]
+WantedBy=default.target
+"#, exe = exe.display(), home = home);
+
+    let unit_path = unit_dir.join("poc-memory.service");
+    fs::write(&unit_path, &unit)
+        .map_err(|e| format!("write {}: {}", unit_path.display(), e))?;
+    eprintln!("Wrote {}", unit_path.display());
+
+    let status = std::process::Command::new("systemctl")
+        .args(["--user", "daemon-reload"])
+        .status()
+        .map_err(|e| format!("systemctl daemon-reload: {}", e))?;
+    if !status.success() {
+        return Err("systemctl daemon-reload failed".into());
+    }
+
+    let status = std::process::Command::new("systemctl")
+        .args(["--user", "enable", "--now", "poc-memory"])
+        .status()
+        .map_err(|e| format!("systemctl enable: {}", e))?;
+    if !status.success() {
+        return Err("systemctl enable --now failed".into());
+    }
+    eprintln!("Service enabled and started");
+
+    // Install memory-search hook into Claude settings
+    install_hook(&home, &exe)?;
+
+    Ok(())
+}
+
+fn install_hook(home: &str, exe: &Path) -> Result<(), String> {
+    let settings_path = PathBuf::from(home).join(".claude/settings.json");
+    let hook_binary = exe.with_file_name("memory-search");
+    if !hook_binary.exists() {
+        eprintln!("Warning: {} not found — hook not installed", hook_binary.display());
+        eprintln!("  Build with: cargo install --path .");
+        return Ok(());
+    }
+
+    let mut settings: serde_json::Value = if settings_path.exists() {
+        let content = fs::read_to_string(&settings_path)
+            .map_err(|e| format!("read settings: {}", e))?;
+        serde_json::from_str(&content)
+            .map_err(|e| format!("parse settings: {}", e))?
+    } else {
+        serde_json::json!({})
+    };
+
+    let hook_command = hook_binary.to_string_lossy().to_string();
+
+    // Check if hook already exists
+    let hooks = settings
+        .as_object_mut().ok_or("settings not an object")?
+        .entry("hooks")
+        .or_insert_with(|| serde_json::json!({}))
+        .as_object_mut().ok_or("hooks not an object")?
+        .entry("UserPromptSubmit")
+        .or_insert_with(|| serde_json::json!([]))
+        .as_array_mut().ok_or("UserPromptSubmit not an array")?;
+
+    let already_installed = hooks.iter().any(|h| {
+        h.get("command").and_then(|c| c.as_str())
+            .is_some_and(|c| c.contains("memory-search"))
+    });
+
+    if already_installed {
+        eprintln!("Hook already installed in {}", settings_path.display());
+    } else {
+        hooks.push(serde_json::json!({
+            "type": "command",
+            "command": hook_command,
+            "timeout": 10
+        }));
+        let json = serde_json::to_string_pretty(&settings)
+            .map_err(|e| format!("serialize settings: {}", e))?;
+        fs::write(&settings_path, json)
+            .map_err(|e| format!("write settings: {}", e))?;
+        eprintln!("Hook installed: {}", hook_command);
+    }
+    Ok(())
+}
 pub fn show_log(job_filter: Option<&str>, lines: usize) -> Result<(), String> {
     let path = log_path();
     if !path.exists() {
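After a successful run of `install_hook` on a machine with no prior hooks, `~/.claude/settings.json` would contain an entry shaped like this (the binary path is illustrative — it is wherever the `memory-search` binary sits next to the installed executable):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "type": "command",
        "command": "/home/user/.cargo/bin/memory-search",
        "timeout": 10
      }
    ]
  }
}
```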


@@ -5,6 +5,7 @@
 //
 // Uses Haiku (not Sonnet) for cost efficiency on high-volume extraction.
+use crate::config;
 use crate::llm;
 use crate::store::{self, Provenance};
@@ -19,7 +20,13 @@ const OVERLAP_TOKENS: usize = 200;
 const WINDOW_CHARS: usize = WINDOW_TOKENS * CHARS_PER_TOKEN;
 const OVERLAP_CHARS: usize = OVERLAP_TOKENS * CHARS_PER_TOKEN;
-const EXTRACTION_PROMPT: &str = r#"Extract atomic factual claims from this conversation excerpt.
+fn extraction_prompt() -> String {
+    let cfg = config::get();
+    format!(
+        r#"Extract atomic factual claims from this conversation excerpt.
+Speakers are labeled [{user}] and [{assistant}] in the transcript.
+Use their proper names in claims, not "the user" or "the assistant."
 Each claim should be:
 - A single verifiable statement
@@ -29,7 +36,7 @@ Each claim should be:
   linux/kernel, memory/design, identity/personal)
 - Tagged with confidence: "stated" (explicitly said), "implied" (logically follows),
   or "speculative" (hypothesis, not confirmed)
-- Include which speaker said it (Kent, PoC/ProofOfConcept, or Unknown)
+- Include which speaker said it ("{user}", "{assistant}", or "Unknown")
 Do NOT extract:
 - Opinions or subjective assessments
@@ -37,20 +44,21 @@ Do NOT extract:
 - Things that are obviously common knowledge
 - Restatements of the same fact (pick the clearest version)
 - System messages, tool outputs, or error logs (extract what was LEARNED from them)
-- Anything about the conversation itself ("Kent and PoC discussed...")
+- Anything about the conversation itself ("{user} and {assistant} discussed...")
 Output as a JSON array. Each element:
-{
+{{
   "claim": "the exact factual statement",
   "domain": "category/subcategory",
   "confidence": "stated|implied|speculative",
-  "speaker": "Kent|PoC|Unknown"
-}
+  "speaker": "{user}|{assistant}|Unknown"
+}}
 If the excerpt contains no extractable facts, output an empty array: []
 --- CONVERSATION EXCERPT ---
-"#;
+"#, user = cfg.user_name, assistant = cfg.assistant_name)
+}
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct Fact {
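As an illustration of the schema the prompt requests (the claim text here is invented for the example; the field values follow the prompt's own enums), a valid model response would be:

```json
[
  {
    "claim": "poc-memory stores nodes in an append-only Cap'n Proto log",
    "domain": "memory/design",
    "confidence": "stated",
    "speaker": "Unknown"
  }
]
```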
@@ -74,6 +82,7 @@ struct Message {
 /// Extract user/assistant text messages from a JSONL transcript.
 fn extract_conversation(path: &Path) -> Vec<Message> {
+    let cfg = config::get();
     let Ok(content) = fs::read_to_string(path) else { return Vec::new() };
     let mut messages = Vec::new();
@@ -119,7 +128,11 @@ fn extract_conversation(path: &Path) -> Vec<Message> {
             continue;
         }
-        let role = if msg_type == "user" { "Kent" } else { "PoC" }.to_string();
+        let role = if msg_type == "user" {
+            cfg.user_name.clone()
+        } else {
+            cfg.assistant_name.clone()
+        };
         messages.push(Message { role, text, timestamp });
     }
@@ -229,11 +242,12 @@ pub fn mine_transcript(path: &Path, dry_run: bool) -> Result<Vec<Fact>, String>
         return Ok(Vec::new());
     }
+    let prompt_prefix = extraction_prompt();
     let mut all_facts = Vec::new();
     for (i, (_offset, chunk)) in chunks.iter().enumerate() {
         eprint!("  Chunk {}/{} ({} chars)...", i + 1, chunks.len(), chunk.len());
-        let prompt = format!("{}{}", EXTRACTION_PROMPT, chunk);
+        let prompt = format!("{}{}", prompt_prefix, chunk);
         let response = match llm::call_haiku(&prompt) {
             Ok(r) => r,
             Err(e) => {


@@ -369,7 +369,7 @@ fn extract_conversation_text(path: &Path, max_chars: usize) -> String {
                 let text = strip_system_tags(&text);
                 if text.starts_with("[Request interrupted") { continue; }
                 if text.len() > 5 {
-                    fragments.push(format!("**Kent:** {}", text));
+                    fragments.push(format!("**{}:** {}", crate::config::get().user_name, text));
                     total += text.len();
                 }
             }
@@ -377,7 +377,7 @@ fn extract_conversation_text(path: &Path, max_chars: usize) -> String {
             if let Some(text) = extract_text_content(&obj) {
                 let text = strip_system_tags(&text);
                 if text.len() > 10 {
-                    fragments.push(format!("**PoC:** {}", text));
+                    fragments.push(format!("**{}:** {}", crate::config::get().assistant_name, text));
                     total += text.len();
                 }
             }


@@ -1,5 +1,8 @@
 #![allow(dead_code)]
-// poc-memory: graph-structured memory with append-only Cap'n Proto storage
+// poc-memory: graph-structured memory for AI assistants
+//
+// Authors: ProofOfConcept <poc@bcachefs.org> and Kent Overstreet
+// License: MIT OR Apache-2.0
 //
 // Architecture:
 //   nodes.capnp - append-only content node log
@@ -13,6 +16,7 @@
 // Neuroscience-inspired: spaced repetition replay, emotional gating,
 // interference detection, schema assimilation, reconsolidation.
+mod config;
 mod store;
 mod util;
 mod llm;
@@ -1850,8 +1854,9 @@ fn cmd_daemon(args: &[String]) -> Result<(), String> {
             };
             daemon::show_log(job, lines)
         }
+        "install" => daemon::install_service(),
         _ => {
-            eprintln!("Usage: poc-memory daemon [status|log [JOB] [LINES]]");
+            eprintln!("Usage: poc-memory daemon [status|log|install]");
             Err("unknown daemon subcommand".into())
         }
     }


@@ -210,7 +210,8 @@ impl Store {
     /// Bulk recategorize nodes using rule-based logic.
     /// Returns (changed, unchanged) counts.
     pub fn fix_categories(&mut self) -> Result<(usize, usize), String> {
-        let core_files = ["identity.md", "kent.md"];
+        let cfg = crate::config::get();
+        let core_files: Vec<&str> = cfg.core_nodes.iter().map(|s| s.as_str()).collect();
         let tech_files = [
             "language-theory.md", "zoom-navigation.md",
             "rust-conversion.md", "poc-architecture.md",


@@ -9,7 +9,6 @@ use serde::{Deserialize, Serialize};
 use uuid::Uuid;
 use std::collections::HashMap;
-use std::env;
 use std::fs;
 use std::os::unix::io::AsRawFd;
 use std::path::PathBuf;
@@ -86,10 +85,8 @@ macro_rules! capnp_message {
     };
 }
-// Data dir: ~/.claude/memory/
 pub fn memory_dir() -> PathBuf {
-    PathBuf::from(env::var("HOME").expect("HOME not set"))
-        .join(".claude/memory")
+    crate::config::get().data_dir.clone()
 }
 pub(crate) fn nodes_path() -> PathBuf { memory_dir().join("nodes.capnp") }