store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
- persist.rs (318 lines): load, save, replay, append, snapshot
— all disk IO and cache management
- ops.rs (300 lines): upsert, delete, modify, mark_used/wrong,
decay, fix_categories, cap_degree — all mutations
- mod.rs (356 lines): re-exports, key resolution, ingestion,
rendering, search — read-only operations
No behavioral changes; cargo check + full smoke test pass.
2026-03-03 16:40:32 -05:00
// Mutation operations on the store
//
remove legacy feedback commands (used, wrong, gap, etc.)
These were early experiments with manual feedback signals that
never worked well. The scoring system will handle this properly.
Removed:
- CLI: used, wrong, not-relevant, not-useful, gap
- MCP: memory_used
- Store: mark_used, mark_wrong, record_gap, modify_node
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-12 22:12:02 -04:00
// CRUD (upsert, delete), maintenance (decay, cap_degree), and graph metrics.
use super::{capnp, index, types::*, Store};
Convert store and CLI to anyhow::Result for cleaner error handling
Replace Result<_, String> with anyhow::Result throughout:
- hippocampus/store module (persist, ops, types, view, mod)
- CLI modules (admin, agent, graph, journal, node)
- Run trait in main.rs
Use .context() and .with_context() instead of .map_err(|e| format!(...))
patterns. Add bail!() for early error returns.
Add access_local() helper in hippocampus/mod.rs that returns
Result<Arc<Mutex<Store>>> for direct local store access.
Fix store access patterns to properly lock Arc<Mutex<Store>> before
accessing fields in mind/unconscious.rs, mind/mod.rs, subconscious/learn.rs,
and hippocampus/memory.rs.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 18:05:04 -04:00
use anyhow::{anyhow, bail, Result};
use std::collections::{HashMap, HashSet};
/// Fallback provenance for non-tool-dispatch paths (CLI, digest, etc.).
/// Tool dispatch passes provenance directly through thought::dispatch.
pub fn current_provenance() -> String {
    std::env::var("POC_PROVENANCE")
        .unwrap_or_else(|_| "manual".to_string())
}
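The fallback above collapses any env-var lookup failure into the `"manual"` default. A minimal stdlib-only sketch of that pattern, with `provenance_from` as a hypothetical helper for illustration (the real function reads the variable itself):

```rust
use std::env::VarError;

// Any lookup failure (variable unset, or set to non-UTF-8 bytes)
// collapses to the "manual" default via unwrap_or_else.
fn provenance_from(lookup: Result<String, VarError>) -> String {
    lookup.unwrap_or_else(|_| "manual".to_string())
}

fn main() {
    assert_eq!(provenance_from(Ok("agent".to_string())), "agent");
    assert_eq!(provenance_from(Err(VarError::NotPresent)), "manual");
    println!("ok");
}
```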
impl Store {
    /// Add or update a node (appends to log + updates index).
    pub fn upsert_node(&mut self, mut node: Node) -> Result<()> {
        if let Some(existing) = self.get_node(&node.key)? {
            node.uuid = existing.uuid;
            node.version = existing.version + 1;
        }
        let offset = self.append_nodes(&[node.clone()])?;
        if let Some(ref database) = self.db {
            index::index_node(database, &node.key, offset, &node.uuid)?;
        }
        Ok(())
    }
    /// Add a relation (appends to log + indexes).
    pub fn add_relation(&mut self, rel: Relation) -> Result<()> {
        self.append_relations(std::slice::from_ref(&rel))?;
        if let Some(db) = &self.db {
            index::index_relation(db, &rel.source, &rel.target, rel.strength, rel.rel_type as u8)?;
        }
        Ok(())
    }
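`add_relation` forwards a single `Relation` through `std::slice::from_ref`, which views a `&T` as a one-element `&[T]` so a slice-taking append API can accept one item without cloning. A standalone sketch (`total` is an illustrative stand-in for any slice-taking API):

```rust
// std::slice::from_ref views a &T as a &[T] of length one,
// avoiding a clone or a temporary Vec allocation.
fn total(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let x = 7;
    let one = std::slice::from_ref(&x);
    assert_eq!(one.len(), 1);
    assert_eq!(total(one), 7);
    println!("ok");
}
```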
    /// Recent nodes by provenance, sorted newest-first. Returns (key, timestamp).
    pub fn recent_by_provenance(&self, provenance: &str, limit: usize) -> Vec<(String, i64)> {
        let db = match self.db.as_ref() {
            Some(db) => db,
            None => return Vec::new(),
        };
        let keys = match index::all_keys(db) {
            Ok(keys) => keys,
            Err(_) => return Vec::new(),
        };
        let mut nodes: Vec<_> = keys.iter()
            .filter_map(|key| {
                let offset = index::get_offset(db, key).ok()??;
                let node = capnp::read_node_at_offset(offset).ok()?;
                if !node.deleted && node.provenance == provenance {
                    Some((key.clone(), node.timestamp))
                } else {
                    None
                }
            })
            .collect();
        nodes.sort_by(|a, b| b.1.cmp(&a.1));
        nodes.truncate(limit);
        nodes
    }
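The newest-first ordering is a descending sort on the tuple's `i64` timestamp followed by a truncate. A self-contained sketch of just that tail step (`top_recent` is an illustrative name, not part of the store API):

```rust
// Sort descending by the second tuple field (timestamp), then keep
// at most `limit` entries.
fn top_recent(mut items: Vec<(String, i64)>, limit: usize) -> Vec<(String, i64)> {
    items.sort_by(|a, b| b.1.cmp(&a.1));
    items.truncate(limit);
    items
}

fn main() {
    let items = vec![
        ("a".to_string(), 10),
        ("b".to_string(), 30),
        ("c".to_string(), 20),
    ];
    let top = top_recent(items, 2);
    assert_eq!(top, vec![("b".to_string(), 30), ("c".to_string(), 20)]);
    println!("ok");
}
```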
    /// Upsert a node: update if exists (and content changed), create if not.
    /// Returns: "created", "updated", or "unchanged".
    ///
    /// Provenance is determined by the POC_PROVENANCE env var if set,
    /// otherwise defaults to Manual.
    pub fn upsert(&mut self, key: &str, content: &str) -> Result<&'static str> {
        let prov = current_provenance();
        self.upsert_provenance(key, content, &prov)
    }
    /// Upsert with explicit provenance (for agent-created nodes).
    pub fn upsert_provenance(&mut self, key: &str, content: &str, provenance: &str) -> Result<&'static str> {
        if let Some(existing) = self.get_node(key)? {
            if existing.content == content {
                return Ok("unchanged");
            }
            let mut node = existing;
            node.content = content.to_string();
            node.provenance = provenance.to_string();
            node.timestamp = now_epoch();
            node.version += 1;
            let offset = self.append_nodes(std::slice::from_ref(&node))?;
            if let Some(ref database) = self.db {
                index::index_node(database, &node.key, offset, &node.uuid)?;
            }
            Ok("updated")
        } else {
            let mut node = new_node(key, content);
            node.provenance = provenance.to_string();
            let offset = self.append_nodes(std::slice::from_ref(&node))?;
            if let Some(ref database) = self.db {
                index::index_node(database, &node.key, offset, &node.uuid)?;
            }
            Ok("created")
        }
    }
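The upsert contract ("created" / "updated" / "unchanged") can be shown with a minimal in-memory map, leaving out the log append, version bump, and indexing that the real store performs. This `upsert` is a hypothetical stand-in, not the store method:

```rust
use std::collections::HashMap;

// "unchanged" when the stored content already matches, "updated" when it
// differs, "created" when the key is new.
fn upsert(map: &mut HashMap<String, String>, key: &str, content: &str) -> &'static str {
    match map.get(key) {
        Some(existing) if existing == content => "unchanged",
        Some(_) => {
            map.insert(key.to_string(), content.to_string());
            "updated"
        }
        None => {
            map.insert(key.to_string(), content.to_string());
            "created"
        }
    }
}

fn main() {
    let mut m = HashMap::new();
    assert_eq!(upsert(&mut m, "k", "v1"), "created");
    assert_eq!(upsert(&mut m, "k", "v1"), "unchanged");
    assert_eq!(upsert(&mut m, "k", "v2"), "updated");
    println!("ok");
}
```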
    /// Soft-delete a node (appends deleted version, removes from index).
    pub fn delete_node(&mut self, key: &str) -> Result<()> {
        let prov = current_provenance();

        let node = self.get_node(key)?
            .ok_or_else(|| anyhow!("No node '{}'", key))?;
        let uuid = node.uuid;
        let mut deleted = node;
        deleted.deleted = true;
        deleted.version += 1;
        deleted.provenance = prov;
        deleted.timestamp = now_epoch();
        self.append_nodes(std::slice::from_ref(&deleted))?;
        if let Some(ref database) = self.db {
            index::remove_node(database, key, &uuid)?;
        }
        Ok(())
    }
    /// Rename a node: change its key, update debug strings on all edges.
    ///
    /// Graph edges (source/target UUIDs) are unaffected — they're already
    /// UUID-based. We update the human-readable source_key/target_key strings
    /// on relations, and created_at is preserved untouched.
    pub fn rename_node(&mut self, old_key: &str, new_key: &str) -> Result<()> {
        if old_key == new_key {
            return Ok(());
        }
        if self.contains_key(new_key)? {
            bail!("Key '{}' already exists", new_key);
        }
        let node = self.get_node(old_key)?
            .ok_or_else(|| anyhow!("No node '{}'", old_key))?;

        let prov = current_provenance();

        // New version under the new key
        let mut renamed = node.clone();
        renamed.key = new_key.to_string();
        renamed.version += 1;
        renamed.provenance = prov.clone();
        renamed.timestamp = now_epoch();

        // Deletion record for the old key (same UUID, independent version counter)
        let mut tombstone = node.clone();
        tombstone.deleted = true;
        tombstone.version += 1;
        tombstone.provenance = prov;
        tombstone.timestamp = now_epoch();

        // Persist node changes
        let offset = self.append_nodes(&[renamed.clone(), tombstone.clone()])?;

        // Update node index: remove old key, add renamed
        if let Some(ref database) = self.db {
            index::remove_node(database, old_key, &tombstone.uuid)?;
            index::index_node(database, new_key, offset, &renamed.uuid)?;

            // Find relations touching this node's UUID and update their key strings
            let node_uuid = node.uuid;
            let edges = index::edges_for_node(database, &node_uuid)?;

            // Build uuid → key map for the other endpoints
            let keys = index::all_keys(database)?;
            let mut uuid_to_key: HashMap<[u8; 16], String> = HashMap::new();
            for k in &keys {
                if let Ok(Some(u)) = index::get_uuid_for_key(database, k) {
                    uuid_to_key.insert(u, k.clone());
                }
            }
            // Update the renamed node's mapping
            uuid_to_key.insert(node_uuid, new_key.to_string());

            let mut updated_rels = Vec::new();
            for (other_uuid, strength, rel_type, is_outgoing) in edges {
                let other_key = uuid_to_key.get(&other_uuid).cloned().unwrap_or_default();
                let (src_uuid, tgt_uuid, src_key, tgt_key) = if is_outgoing {
                    (node_uuid, other_uuid, new_key.to_string(), other_key)
                } else {
                    (other_uuid, node_uuid, other_key, new_key.to_string())
                };
                let mut rel = new_relation(src_uuid, tgt_uuid,
                    RelationType::from_u8(rel_type), strength,
                    &src_key, &tgt_key);
                rel.version = 2; // indicate update
                updated_rels.push(rel);
            }

            if !updated_rels.is_empty() {
                self.append_relations(&updated_rels)?;
            }
        }

        Ok(())
    }
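The core of the rename bookkeeping is deriving two records from one source node: a version-bumped copy under the new key, and a tombstone under the old key. A minimal sketch with a hypothetical `Rec` struct standing in for the store's node type (provenance, timestamp, and UUID handling omitted):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Rec {
    key: String,
    version: u32,
    deleted: bool,
}

// Derive a bumped record under the new key plus a tombstone under the
// old key; both clone the original, so they share its other fields.
fn rename_records(node: &Rec, new_key: &str) -> (Rec, Rec) {
    let mut renamed = node.clone();
    renamed.key = new_key.to_string();
    renamed.version += 1;

    let mut tombstone = node.clone();
    tombstone.deleted = true;
    tombstone.version += 1;

    (renamed, tombstone)
}

fn main() {
    let n = Rec { key: "old".into(), version: 3, deleted: false };
    let (renamed, tomb) = rename_records(&n, "new");
    assert_eq!(renamed.key, "new");
    assert_eq!(renamed.version, 4);
    assert!(!renamed.deleted);
    assert_eq!(tomb.key, "old");
    assert!(tomb.deleted);
    println!("ok");
}
```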
|
|
|
|
|
|
store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
- persist.rs (318 lines): load, save, replay, append, snapshot
— all disk IO and cache management
- ops.rs (300 lines): upsert, delete, modify, mark_used/wrong,
decay, fix_categories, cap_degree — all mutations
- mod.rs (356 lines): re-exports, key resolution, ingestion,
rendering, search — read-only operations
No behavioral changes; cargo check + full smoke test pass.
2026-03-03 16:40:32 -05:00
|
|
|
/// Cap node degree by soft-deleting edges from mega-hubs.
|
Convert store and CLI to anyhow::Result for cleaner error handling
Replace Result<_, String> with anyhow::Result throughout:
- hippocampus/store module (persist, ops, types, view, mod)
- CLI modules (admin, agent, graph, journal, node)
- Run trait in main.rs
Use .context() and .with_context() instead of .map_err(|e| format!(...))
patterns. Add bail!() for early error returns.
Add access_local() helper in hippocampus/mod.rs that returns
Result<Arc<Mutex<Store>>> for direct local store access.
Fix store access patterns to properly lock Arc<Mutex<Store>> before
accessing fields in mind/unconscious.rs, mind/mod.rs, subconscious/learn.rs,
and hippocampus/memory.rs.
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-13 18:05:04 -04:00
|
|
|
pub fn cap_degree(&mut self, max_degree: usize) -> Result<(usize, usize)> {
|
2026-04-13 21:12:47 -04:00
|
|
|
let db = self.db.as_ref().ok_or_else(|| anyhow!("store not loaded"))?;
|
|
|
|
|
let keys = index::all_keys(db)?;
|
|
|
|
|
|
2026-04-13 21:19:47 -04:00
|
|
|
// Build uuid ↔ key maps and count degrees in one pass
|
2026-04-13 21:12:47 -04:00
|
|
|
let mut uuid_to_key: HashMap<[u8; 16], String> = HashMap::new();
|
2026-04-13 21:19:47 -04:00
|
|
|
let mut node_info: Vec<(String, [u8; 16], usize)> = Vec::new(); // (key, uuid, degree)
|
2026-04-13 21:12:47 -04:00
|
|
|
for key in &keys {
|
|
|
|
|
if let Ok(Some(uuid)) = index::get_uuid_for_key(db, key) {
|
2026-04-13 21:19:47 -04:00
|
|
|
let degree = index::edges_for_node(db, &uuid)?.len();
|
2026-04-13 21:12:47 -04:00
|
|
|
uuid_to_key.insert(uuid, key.clone());
|
2026-04-13 21:19:47 -04:00
|
|
|
node_info.push((key.clone(), uuid, degree));
|
2026-04-13 21:12:47 -04:00
|
|
|
}
|
store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
- persist.rs (318 lines): load, save, replay, append, snapshot
— all disk IO and cache management
- ops.rs (300 lines): upsert, delete, modify, mark_used/wrong,
decay, fix_categories, cap_degree — all mutations
- mod.rs (356 lines): re-exports, key resolution, ingestion,
rendering, search — read-only operations
No behavioral changes; cargo check + full smoke test pass.
2026-03-03 16:40:32 -05:00
|
|
|
}
|
|
|
|
|
|
2026-04-13 21:19:47 -04:00
|
|
|
// Build degree lookup
|
|
|
|
|
let node_degree: HashMap<&str, usize> = node_info.iter()
|
|
|
|
|
.map(|(k, _, d)| (k.as_str(), *d))
|
|
|
|
|
.collect();
|
store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
- persist.rs (318 lines): load, save, replay, append, snapshot
— all disk IO and cache management
- ops.rs (300 lines): upsert, delete, modify, mark_used/wrong,
decay, fix_categories, cap_degree — all mutations
- mod.rs (356 lines): re-exports, key resolution, ingestion,
rendering, search — read-only operations
No behavioral changes; cargo check + full smoke test pass.
2026-03-03 16:40:32 -05:00
|
|
|
|
2026-04-13 21:12:47 -04:00
|
|
|
let mut to_delete: HashSet<([u8; 16], [u8; 16])> = HashSet::new();
|
store: split mod.rs into persist.rs and ops.rs
mod.rs was 937 lines with all Store methods in one block.
Split into three files by responsibility:
        let mut hubs_capped = 0;
        for (_key, uuid, degree) in &node_info {
            if *degree <= max_degree { continue; }
            let uuid = *uuid;
            let edges = index::edges_for_node(db, &uuid)?;
            if edges.len() <= max_degree { continue; }

            // Separate auto vs manual edges: (source, target, sort_key)
            let mut auto_edges: Vec<([u8; 16], [u8; 16], f32)> = Vec::new();
            let mut link_edges: Vec<([u8; 16], [u8; 16], usize)> = Vec::new();

            for (other_uuid, strength, rel_type, is_outgoing) in &edges {
                // Canonical edge direction
                let (src, tgt) = if *is_outgoing { (uuid, *other_uuid) } else { (*other_uuid, uuid) };
                if to_delete.contains(&(src, tgt)) || to_delete.contains(&(tgt, src)) { continue; }
                let other_key = match uuid_to_key.get(other_uuid) {
                    Some(k) => k,
                    None => continue,
                };
                if *rel_type == RelationType::Auto as u8 {
                    auto_edges.push((src, tgt, *strength));
                } else {
                    let other_deg = node_degree.get(other_key.as_str()).copied().unwrap_or(0);
                    link_edges.push((src, tgt, other_deg));
                }
            }

            let active_count = auto_edges.len() + link_edges.len();
            if active_count <= max_degree { continue; }
            let excess = active_count - max_degree;

            // Prune weakest auto edges first
            auto_edges.sort_by(|a, b| a.2.total_cmp(&b.2));
            for (src, tgt, _) in auto_edges.iter().take(excess) {
                to_delete.insert((*src, *tgt));
            }

            // Then prune links to highest-degree nodes
            let remaining = excess.saturating_sub(auto_edges.len());
            if remaining > 0 {
                link_edges.sort_by(|a, b| b.2.cmp(&a.2));
                for (src, tgt, _) in link_edges.iter().take(remaining) {
                    to_delete.insert((*src, *tgt));
                }
            }

            hubs_capped += 1;
        }

        // Collect edge info for deletion
        let mut to_remove: Vec<([u8; 16], [u8; 16], f32, u8, String, String)> = Vec::new();
        for (source_uuid, target_uuid) in &to_delete {
            let edges = index::edges_for_node(db, source_uuid)?;
            if let Some((_, strength, rel_type, _)) = edges.iter()
                .find(|(other, _, _, out)| *other == *target_uuid && *out)
            {
                let source_key = uuid_to_key.get(source_uuid).cloned().unwrap_or_default();
                let target_key = uuid_to_key.get(target_uuid).cloned().unwrap_or_default();
                to_remove.push((*source_uuid, *target_uuid, *strength, *rel_type, source_key, target_key));
            }
        }

        // Now mutate: remove from index and persist tombstones
        let pruned_count = to_remove.len();
        for (source_uuid, target_uuid, strength, rel_type, source_key, target_key) in to_remove {
            if let Some(db) = &self.db {
                index::remove_relation(db, &source_uuid, &target_uuid, strength, rel_type)?;
            }
            let mut rel = new_relation(source_uuid, target_uuid,
                RelationType::from_u8(rel_type), strength,
                &source_key, &target_key);
            rel.deleted = true;
            rel.version = 2;
            self.append_relations(std::slice::from_ref(&rel))?;
        }

        Ok((hubs_capped, pruned_count))
    }
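
    // The two-phase pruning policy above can be sketched standalone. This is a
    // hypothetical reduction (plain edge ids instead of the real UUID pairs and
    // index types): auto-derived edges are dropped weakest-first, and only if
    // that is not enough are manual links dropped, preferring peers that are
    // themselves the highest-degree hubs.

```rust
use std::collections::HashSet;

// Hypothetical sketch of the cap_degree selection policy, not the real types:
// given a hub's edges, choose `excess` edge ids to drop.
fn choose_prunable(
    mut auto_edges: Vec<(u32, f32)>,   // (edge id, strength)
    mut link_edges: Vec<(u32, usize)>, // (edge id, peer degree)
    max_degree: usize,
) -> HashSet<u32> {
    let mut to_delete = HashSet::new();
    let active = auto_edges.len() + link_edges.len();
    if active <= max_degree {
        return to_delete;
    }
    let excess = active - max_degree;

    // Weakest auto edges go first.
    auto_edges.sort_by(|a, b| a.1.total_cmp(&b.1));
    for (id, _) in auto_edges.iter().take(excess) {
        to_delete.insert(*id);
    }

    // If auto edges were not enough, drop links to the highest-degree peers.
    let remaining = excess.saturating_sub(auto_edges.len());
    if remaining > 0 {
        link_edges.sort_by(|a, b| b.1.cmp(&a.1));
        for (id, _) in link_edges.iter().take(remaining) {
            to_delete.insert(*id);
        }
    }
    to_delete
}
```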

    /// Set a node's weight directly. Returns (old, new).
    pub fn set_weight(&mut self, key: &str, weight: f32) -> Result<(f32, f32)> {
        let weight = weight.clamp(0.01, 1.0);
        let mut node = self.get_node(key)?
            .ok_or_else(|| anyhow!("node not found: {}", key))?;
        let old = node.weight;
        if (old - weight).abs() < 0.001 {
            return Ok((old, weight)); // unchanged
        }
        node.weight = weight;
        node.version += 1;
        node.timestamp = now_epoch();
        let offset = self.append_nodes(std::slice::from_ref(&node))?;
        if let Some(ref database) = self.db {
            index::index_node(database, key, offset, &node.uuid)?;
        }
        Ok((old, weight))
    }

    /// Set the strength of a link between two nodes.
    /// Returns the old strength. Creates the link if it doesn't exist.
    pub fn set_link_strength(&mut self, source: &str, target: &str, strength: f32) -> Result<f32> {
        let strength = strength.clamp(0.01, 1.0);

        let source_uuid = self.get_node(source)?
            .map(|n| n.uuid)
            .ok_or_else(|| anyhow!("source not found: {}", source))?;
        let target_uuid = self.get_node(target)?
            .map(|n| n.uuid)
            .ok_or_else(|| anyhow!("target not found: {}", target))?;

        // Find existing edge via index
        let db = self.db.as_ref().ok_or_else(|| anyhow!("store not loaded"))?;
        let edges = index::edges_for_node(db, &source_uuid)?;
        let existing = edges.iter().find(|(other, _, _, _)| *other == target_uuid);

        if let Some((_, old_strength, rel_type, _)) = existing {
            let old = *old_strength;
            // Remove old edge from index, add updated one
            index::remove_relation(db, &source_uuid, &target_uuid, old, *rel_type)?;
            index::index_relation(db, &source_uuid, &target_uuid, strength, *rel_type)?;

            // Append updated relation to log
            let mut rel = new_relation(source_uuid, target_uuid,
                RelationType::from_u8(*rel_type), strength, source, target);
            rel.version = 2; // indicate update
            self.append_relations(std::slice::from_ref(&rel))?;
            Ok(old)
        } else {
            // Create new link, then replace its Jaccard-based initial strength
            // with the requested one (remove with the actual initial strength,
            // not a hardcoded 0.1, so the index entry matches).
            let initial = self.add_link(source, target, "link_set")?;
            let db = self.db.as_ref().ok_or_else(|| anyhow!("store not loaded"))?;
            index::remove_relation(db, &source_uuid, &target_uuid, initial, RelationType::Link as u8)?;
            index::index_relation(db, &source_uuid, &target_uuid, strength, RelationType::Link as u8)?;
            Ok(0.0)
        }
}
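
    // Both the `rel.version = 2` updates above and the `rel.deleted = true`
    // tombstones written by `cap_degree` depend on the append-only relation log
    // replaying last-writer-wins. A minimal sketch of that replay, using a
    // hypothetical flat record in place of the real on-disk format (the actual
    // replay lives in persist.rs):

```rust
use std::collections::HashMap;

// Hypothetical flat record standing in for the real serialized relation.
struct RelRecord {
    source: &'static str,
    target: &'static str,
    strength: f32,
    deleted: bool,
}

// Replay the log in order: later records overwrite earlier ones for the same
// (source, target) key, and a tombstone removes the edge entirely.
fn replay(log: &[RelRecord]) -> HashMap<(&'static str, &'static str), f32> {
    let mut live = HashMap::new();
    for rec in log {
        let key = (rec.source, rec.target);
        if rec.deleted {
            live.remove(&key);
        } else {
            live.insert(key, rec.strength);
        }
    }
    live
}
```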

    /// Add a link between two nodes with Jaccard-based initial strength.
    /// Returns the strength; errors if the link already exists.
    pub fn add_link(&mut self, source: &str, target: &str, provenance: &str) -> Result<f32> {
        let source_uuid = self.get_node(source)?
            .map(|n| n.uuid)
            .ok_or_else(|| anyhow!("source not found: {}", source))?;
        let target_uuid = self.get_node(target)?
            .map(|n| n.uuid)
            .ok_or_else(|| anyhow!("target not found: {}", target))?;

        // Check for existing via index
        if let Some(db) = &self.db {
            let edges = index::edges_for_node(db, &source_uuid)?;
            let exists = edges.iter().any(|(other, _, _, _)| *other == target_uuid);
            if exists {
                bail!("link already exists: {} ↔ {}", source, target);
            }
        }

        let graph = self.build_graph();
        let jaccard = graph.jaccard(source, target);
        let strength = (jaccard * 3.0).clamp(0.1, 1.0) as f32;

        let mut rel = new_relation(
            source_uuid, target_uuid,
            RelationType::Link, strength,
            source, target,
        );
        rel.provenance = provenance.to_string();
        self.add_relation(rel)?;
        Ok(strength)
    }
}
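
// The initial-strength rule in `add_link` — `(jaccard * 3.0).clamp(0.1, 1.0)` —
// is small enough to check in isolation. A hypothetical standalone version over
// plain neighbor sets, rather than the store's `build_graph()` result:

```rust
use std::collections::HashSet;

// Jaccard similarity of the two neighbor sets, scaled by 3 and clamped so a
// brand-new link always starts in [0.1, 1.0].
fn initial_link_strength(a: &HashSet<&str>, b: &HashSet<&str>) -> f32 {
    let inter = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    let jaccard = if union == 0.0 { 0.0 } else { inter / union };
    (jaccard * 3.0).clamp(0.1, 1.0) as f32
}
```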