llm: full per-agent usage logging with prompts and responses

Log every model call to ~/.claude/memory/llm-logs/YYYY-MM-DD.md with
full prompt, response, agent type, model, duration, and status. One
file per day, markdown formatted for easy reading.

Agent types: fact-mine, experience-mine, consolidate, knowledge,
digest, enrich, audit. This gives visibility into what each agent
is doing and whether to adjust prompts or frequency.
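A per-call log entry in that daily markdown file might look like the sketch below. All names here (`LogEntry`, `render_entry`) are illustrative, not taken from the actual implementation; only the fields (agent type, model, duration, status, prompt, response) come from the commit message.

```rust
use std::fmt::Write as _;

// Hypothetical record of a single model call (names are illustrative).
struct LogEntry<'a> {
    agent: &'a str,    // e.g. "fact-mine"
    model: &'a str,    // e.g. "haiku"
    duration_ms: u64,
    status: &'a str,   // "ok" or an error summary
    prompt: &'a str,
    response: &'a str,
}

// Render one entry as a markdown section, matching the "one file per
// day, markdown formatted for easy reading" scheme described above.
fn render_entry(e: &LogEntry) -> String {
    let mut out = String::new();
    writeln!(out, "## {} ({})", e.agent, e.model).unwrap();
    writeln!(out, "- duration: {} ms", e.duration_ms).unwrap();
    writeln!(out, "- status: {}", e.status).unwrap();
    writeln!(out, "\n### Prompt\n\n{}", e.prompt).unwrap();
    writeln!(out, "\n### Response\n\n{}", e.response).unwrap();
    out
}

fn main() {
    let entry = LogEntry {
        agent: "fact-mine",
        model: "haiku",
        duration_ms: 812,
        status: "ok",
        prompt: "Extract facts from the transcript chunk below.",
        response: "- example extracted fact",
    };
    print!("{}", render_entry(&entry));
}
```

Appending such entries to `~/.claude/memory/llm-logs/YYYY-MM-DD.md` keeps each day's traffic in a single human-readable file.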
ProofOfConcept 2026-03-05 22:52:08 -05:00
parent e33fd4ffbc
commit 82b33c449c
7 changed files with 51 additions and 17 deletions


@@ -248,7 +248,7 @@ pub fn mine_transcript(path: &Path, dry_run: bool) -> Result<Vec<Fact>, String>
         eprint!(" Chunk {}/{} ({} chars)...", i + 1, chunks.len(), chunk.len());
         let prompt = format!("{}{}", prompt_prefix, chunk);
-        let response = match llm::call_haiku(&prompt) {
+        let response = match llm::call_haiku("fact-mine", &prompt) {
             Ok(r) => r,
             Err(e) => {
                 eprintln!(" error: {}", e);
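The call site above shows `call_haiku` gaining the agent type as its first argument, so every call can be attributed in the log. A hedged sketch of what such a wrapper could look like, assuming a stand-in for the real API call and eliding the actual file-append logic:

```rust
use std::time::Instant;

// Hypothetical shape of the updated wrapper: the agent label is threaded
// through so each call is attributed when the log entry is written.
fn call_haiku(agent: &str, prompt: &str) -> Result<String, String> {
    let start = Instant::now();
    let result = fake_model_call(prompt); // stand-in for the real model call
    let duration_ms = start.elapsed().as_millis();
    // The real code would append a markdown entry (agent, model, duration,
    // status, prompt, response) to the day's log file; elided here.
    eprintln!(
        "[{}] {} ms, status: {}",
        agent,
        duration_ms,
        if result.is_ok() { "ok" } else { "error" }
    );
    result
}

// Placeholder so the sketch is runnable without a network call.
fn fake_model_call(prompt: &str) -> Result<String, String> {
    Ok(format!("echo: {}", prompt))
}

fn main() {
    let response = call_haiku("fact-mine", "hello").unwrap();
    println!("{}", response);
}
```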