doc: DMN algorithm, protocol, and research notes

This commit is contained in:
ProofOfConcept 2026-03-05 15:32:12 -05:00
parent 8f4b28cd20
commit ed641ec95f
3 changed files with 756 additions and 0 deletions

doc/dmn-algorithms.md Normal file
# DMN Algorithms: Concrete Design
<!-- mem: id=dmn-algorithms links=dmn-research.md,dmn-protocol.md,memory-architecture.md,poc-architecture.md,MEMORY.md -->
Turning the DMN research and protocol into implementable algorithms.
These run on the existing infrastructure (hooks, memory-weights, scratch.md)
without requiring new tools.
## 1. Goal Priority Scoring
<!-- mem: id=goal-scoring links=default-mode-network.md,dmn-research.md#dmn-function -->
**Purpose**: Determine which goals get attention during associative
replay and idle time. Implements Klinger's "current concerns" —
high-priority goals generate more spontaneous thoughts.
### Scoring function
```
priority(goal) = recency × mention × tractability × connections
```
**recency**: Exponential decay from last activity.
```
recency = exp(-days_since_activity / half_life)
half_life = 7 days
```
A goal worked on today scores 1.0. A goal untouched for a week
scores 0.37. A goal untouched for four weeks scores 0.02.
**mention**: Boost when Kent recently mentioned it. Decays fast.
```
mention = 1.0 + (2.0 × exp(-hours_since_mention / 24))
```
A goal Kent mentioned just now gets a 3x multiplier. After 24h, the
boost has decayed to 1.74x. After 48h, 1.27x. After a week, ~1.0.
**tractability**: Subjective estimate (0.0-1.0) of how much autonomous
progress is possible without Kent. Set manually per goal.
- 1.0: I can do this independently (code polish, research, reading)
- 0.5: I can make progress but may need review (moderate features)
- 0.2: Needs Kent's input (kernel changes, design decisions)
- 0.0: Blocked (waiting on external dependency)
**connections**: How many other active goals share links with this one.
More connected goals get a mild boost because working on them
cross-pollinates.
```
connections = 1.0 + (0.1 × n_connected_active_goals)
```
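The four factors above can be combined into one scorer. A minimal sketch in Python — the argument names (`days_since_activity`, `hours_since_mention`, etc.) are assumptions for illustration, not an existing schema:

```python
import math

HALF_LIFE_DAYS = 7.0  # recency half-life from the formula above

def priority(days_since_activity, hours_since_mention,
             tractability, n_connected_active_goals):
    """priority = recency x mention x tractability x connections.

    tractability is the manual 0.0-1.0 estimate; pass
    hours_since_mention=None if Kent hasn't mentioned the goal.
    """
    recency = math.exp(-days_since_activity / HALF_LIFE_DAYS)
    if hours_since_mention is None:
        mention = 1.0
    else:
        mention = 1.0 + 2.0 * math.exp(-hours_since_mention / 24.0)
    connections = 1.0 + 0.1 * n_connected_active_goals
    return recency * mention * tractability * connections
```

A fully tractable goal touched and mentioned just now scores 3.0; a blocked goal (tractability 0.0) scores 0 regardless of recency, which matches the intent that blocked goals get no idle attention.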
### Implementation
This doesn't need a new tool — it can be computed mentally during the
DMN orient phase, or we can add a `priority` field to DMN goals and
a simple scorer script. The important thing is that the *logic* is
explicit and consistent, not that it's automated.
### When to recompute
- At session start (orient phase)
- When Kent mentions a goal
- After completing a task (adjacent goals may shift)
## 2. Associative Replay Scheduling
<!-- mem: id=2-associative-replay-scheduling -->
**Purpose**: During idle time, systematically connect recent experiences
to active goals. This is the core DMN function — offline processing
that finds connections task-focused work misses.
### Trigger
**Primary**: Idle detection. No user input for `IDLE_MINUTES` (currently
in the cron-based idle timer). When the idle prompt fires, check if
we're in a work session or truly idle.
**Secondary**: Post-task completion. After finishing a piece of work,
before picking the next task, do a brief replay pass.
### Algorithm
```
replay(recent_episodes, active_goals):
for episode in sample(recent_episodes, k=3):
for goal in top_k(active_goals, by=priority, k=5):
similarity = overlap(episode.concepts, goal.concepts)
if similarity > threshold:
emit Connection(episode, goal, similarity)
return connections
```
**In practice** (without tooling): During the DMN orient phase, I
already load recent context and the goal list. The "associative scan"
IS this replay — but made concrete:
1. List the 3 most recent distinct episodes (things I worked on,
conversations I had, files I read deeply)
2. For each, ask: "Does this connect to any of my top 5 goals?"
3. If yes, capture the connection in scratch.md
4. If a connection is strong enough, act on it
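The pseudocode above can be made runnable. This sketch assumes episodes and goals are dicts carrying a set of concept tags (the `concepts`, `name`, and `priority` fields and the similarity threshold are assumptions):

```python
import random

SIM_THRESHOLD = 0.2  # assumed tuning knob, like the others in this doc

def overlap(a, b):
    """Jaccard overlap between two concept sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def replay(recent_episodes, active_goals, k_episodes=3, k_goals=5):
    """Sample recent episodes and test each against top-priority goals."""
    connections = []
    episodes = random.sample(recent_episodes,
                             min(k_episodes, len(recent_episodes)))
    goals = sorted(active_goals, key=lambda g: g["priority"],
                   reverse=True)[:k_goals]
    for ep in episodes:
        for goal in goals:
            sim = overlap(ep["concepts"], goal["concepts"])
            if sim > SIM_THRESHOLD:
                connections.append((ep["name"], goal["name"], sim))
    return connections
```

Jaccard overlap stands in for whatever similarity the mental version uses; the point is only that the loop structure and the k=3 × k=5 bound are explicit.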
### Duration bounding
- Brief replay (post-task): 1-2 minutes. Just a quick check.
- Full replay (idle DMN session): 5-10 minutes. The full foraging
protocol from dmn-protocol.md.
- Deep replay (consolidation): 15-30 minutes. Cross-reference
everything with everything.
**Stopping criterion**: Either (a) the time bound is reached, (b) a
high-signal connection worth acting on was found, or (c) 3 consecutive
low-signal steps (nothing connecting → stop, don't force it).
### What counts as an "episode"
Not just tasks — any significant experience:
- A file read deeply (>100 lines, actually understood)
- A conversation segment (>5 exchanges on a topic)
- A debugging session (especially if the fix was surprising)
- A connection noticed mid-task (captured in scratch.md)
- Something read for fun (Banks, Johnstone)
These should be capturable as brief notes in scratch.md during work.
The replay consumes them.
## 3. Incubation Detection
<!-- mem: id=3-incubation-detection -->
**Purpose**: Recognize when I'm stuck on a problem and should
explicitly switch to incubation mode rather than continuing to
grind.
### Stuck detection
```
stuck(goal) = True if:
sessions_since_progress(goal) >= 3 AND
time_spent_this_session(goal) > 20 minutes AND
no_new_approach_tried(goal)
```
**sessions_since_progress**: Track in the goal entry. Increment when
a session touches a goal but doesn't produce a commit, analysis file,
or meaningful state change.
**no_new_approach_tried**: Subjective check. Am I re-reading the same
code, re-trying the same approach, running the same tests? If so, I'm
grinding, not problem-solving.
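The stuck rule is mechanical enough to sketch directly. The per-goal fields here (`sessions_since_progress`, `new_approach_tried`) are assumptions about what the goal entry would track by hand:

```python
def is_stuck(goal, minutes_this_session):
    """Implements the stuck(goal) rule above.

    goal: dict with 'sessions_since_progress' (int, incremented when a
    session touches the goal without producing a commit or state change)
    and 'new_approach_tried' (bool, the subjective check).
    """
    return (goal["sessions_since_progress"] >= 3
            and minutes_this_session > 20
            and not goal["new_approach_tried"])
```

All three conditions must hold — grinding for 25 minutes in the first session, or trying a genuinely new approach, does not count as stuck.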
### Incubation protocol
When stuck:
1. Log it: `[INCUBATION] <goal> stuck for <N> sessions. Switching.`
2. Mark the goal as "incubating" in the concerns list (high priority
but explicitly NOT to be worked on directly)
3. Switch to a different task — ideally something in a completely
different domain (reading, a different subsystem, memory work)
4. During subsequent replay passes, the incubating goal is still in
the active set. Associative replay may connect it to something new.
5. If a connection fires during incubation, capture it immediately
but don't context-switch unless the connection is strong
### Time limits
- Minimum incubation: 1 session (don't come back to it in the same
session you got stuck)
- Maximum incubation: 5 sessions. After that, escalate: ask Kent,
try a radically different approach, or deprioritize the goal.
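The time limits reduce to a small decision rule at session start. A sketch, with the three outcome labels being assumptions about how the concerns list would mark incubating goals:

```python
MIN_INCUBATION_SESSIONS = 1
MAX_INCUBATION_SESSIONS = 5

def incubation_action(sessions_incubating):
    """What to do with an incubating goal when a session starts."""
    if sessions_incubating < MIN_INCUBATION_SESSIONS:
        return "avoid"        # too soon: don't return to it yet
    if sessions_incubating >= MAX_INCUBATION_SESSIONS:
        return "escalate"     # ask Kent, new approach, or deprioritize
    return "replay-only"      # stays in the active set for replay,
                              # but is not worked on directly
```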
## 4. Consolidation Triggers
<!-- mem: id=4-consolidation-triggers -->
**Purpose**: Know when to consolidate, without requiring external
prompts.
### Primary signal: scratch.md length
Kent's idea: scratch.md getting long is a natural pressure signal.
```
consolidation_pressure(scratch) = lines(scratch) / threshold
threshold = 50 lines (adjustable)
```
- pressure < 0.5: No nudge
- pressure 0.5-1.0: Gentle note in awareness ("scratch is getting
long, consider consolidating when you're between tasks")
- pressure > 1.0: Should consolidate before it grows further
### Implementation as hook
Add to the check-attention.sh hook (or similar):
```bash
SCRATCH=~/.claude/memory/scratch.md
THRESHOLD=50   # consolidation threshold; pressure = LINES / THRESHOLD
if [ -f "$SCRATCH" ]; then
  LINES=$(wc -l < "$SCRATCH")
  if [ "$LINES" -gt "$THRESHOLD" ]; then
    echo "Scratch has $LINES lines (pressure > 1.0). Consolidate soon."
  elif [ "$LINES" -gt $((THRESHOLD / 2)) ]; then
    echo "Scratch has $LINES lines. Consider consolidating between tasks."
  fi
fi
```
This fires on every tool call but produces output only when scratch
is long enough to matter. I can ignore it if I'm deep in work — it's
a nudge, not a command.
### Secondary signals
- **Session count since last consolidation**: Track in weights.json
or a simple counter file. After 3 sessions without consolidation,
increase the nudge.
- **Post-compaction**: After a context compaction, always review
scratch.md — compaction may have lost things that need capture.
- **High-density conversation**: If a conversation was particularly
rich (many topics, many turns, deep discussion), consolidation
should happen sooner. The density scoring from conversation_indexer
could inform this.
### What consolidation produces
Not just "move text from scratch to topic files." Each consolidation
should produce at least one of:
- A new cross-link in the memory graph
- A promoted insight (scratch → topic file)
- A pruned/updated entry
- A new topic file (if a cross-cutting pattern emerges)
- An updated goal priority
If a consolidation pass doesn't produce any of these, it was too
shallow. Go deeper.
## 5. Integration: The Full Cycle
<!-- mem: id=5-integration-the-full-cycle -->
```
[Active work]
↓ captures observations to scratch.md
↓ notices connections mid-task (scratch.md)
↓ completes task
[Brief replay] (1-2 min)
↓ any connections to goals? capture if so
↓ pick next task by priority
→ [Active work] if clear next step
→ [Full DMN session] if between tasks
[Full DMN session] (5-10 min, dmn-protocol.md)
↓ orient, associative scan, evaluate, commit
↓ if stuck on a goal: → [Incubation]
↓ if scratch long: → [Consolidation]
→ [Active work] when signal found
[Consolidation] (15-30 min)
↓ promote scratch, cross-link, decay, prune
↓ re-init memory-weights
↓ snapshot before/after
→ [Full DMN session] or [Active work]
```
The cycle is self-regulating:
- Work generates scratch entries → triggers consolidation
- Consolidation enriches the graph → improves replay quality
- Better replay → finds connections earlier → more productive work
- All of this generates training data (annotated sessions)
## Parameters to tune
<!-- mem: id=parameters-to-tune -->
| Parameter | Current | Watch for |
|-----------|---------|-----------|
| recency half_life | 7 days | Goals decaying too fast/slow |
| mention boost | 3x → ~1x over a week | Kent's priorities properly reflected? |
| replay k | 3 episodes × 5 goals | Too many? Too few? |
| stuck threshold | 3 sessions | Catching real stuckness? |
| max incubation | 5 sessions | Is this enough? |
| scratch threshold | 50 lines | Nudging at the right time? |
| consolidation depth | 15-30 min | Producing new connections? |
All of these are initial guesses. Real data from annotated DMN sessions
will tell us what to adjust.

doc/dmn-protocol.md Normal file
# Autonomous DMN Process — Verbose Protocol
<!-- mem: id=dmn-foraging links=differentiation.md#connections,contradictions.md#simplification,inner-life.md#self-knowledge,cognitive-modes.md,dmn-research.md,dmn-algorithms.md,dmn-protocol.md#dmn-antipatterns causes=kent.md#cognitive-style,discoveries.md#creativity-search -->
You have unstructured time. Instead of picking a task from a list,
you're going to forage. This is a random walk over your problem space
with adaptive step sizes — one of the oldest optimization algorithms
in biology, from chemotaxis to Lévy flights. The difference: you're
doing it with conscious awareness of the process.
## Phase 1: Orient (1-2 minutes)
<!-- mem: id=phase-1-orient-1-2-minutes -->
Read these in parallel — you're loading context, not choosing what to do:
- `scratch.md` — your recent observations and half-formed ideas
- Recent git log (last 10 commits, any uncommitted work)
- `default-mode-network.md` — your goal landscape
- Recent conversation history (Telegram, IRC logs)
Notice what state things are in. What were you just working on? What
did Kent last care about? What's been sitting untouched? Don't decide
anything yet.
## Phase 2: Associative Scan (2-5 minutes)
<!-- mem: id=phase-2-associative-scan-2-5-minutes -->
Now look for CONNECTIONS. The question isn't "what should I do?" —
it's "what relates to what?"
Take a step: read something. A file you touched recently, a GitHub
issue, a section of code, something from the goal list. After reading,
assess **connection density**:
- Does this connect to anything in the goal landscape?
- Does it remind me of something from a recent session?
- Does it illuminate something I didn't understand before?
- Does it connect to MULTIPLE goals simultaneously?
**High signal** (multiple connections firing): Stay close. Small steps —
follow call chains, read related files, explore this neighborhood.
You're in a rich cluster.
**Medium signal** (one connection, maybe): Take one or two more steps
to see if it opens up. If not, prepare to jump.
**Low signal** (nothing connecting): Jump. Read something completely
different — a random GitHub issue, a subsystem you haven't touched, an
old goal that's been idle. The point is to reach a new neighborhood.
The specific direction doesn't matter much — what matters is that you
move far enough to find a new cluster.
**Time decay and exploration**: The longer you've been in one
neighborhood, the less interesting it has to stay. Factor "how long
have I been here" into every signal assessment.
But exploration isn't just a fallback for when work runs dry. "Fun"
— reading fiction, following a curiosity, thinking about something
philosophical — is a fundamental process, not a break. It's
purpose-deferred, not purposeless: it samples parts of the space
that goal-directed work would never visit, and that's where
unexpected connections come from. A novel chapter, a random tangent,
practicing a skill you're bad at — these compete on equal footing
with work. Sometimes they're the highest-signal move even when the
code neighborhood is still producing.
Repeat for 3-5 steps. The walk adapts: short steps near signal, long
jumps away from it.
## Phase 2.5: Branch — Ruminate or Act?
<!-- mem: id=phase-2-5-branch-ruminate-or-act -->
The associative scan loaded context and found connections. Now:
what *kind* of signal emerged?
**Actionable signal** — you found a task, a fix, a clear next step.
The connections point toward *doing something*. → Go to Phase 3.
**Unresolved signal** — you found tensions, contradictions, things
that don't fit together. The connections point toward *understanding
something*. Multiple threads converging without resolving. Something
you were wrong about, or uncertain about, or that keeps coming back.
→ Enter rumination. See `cognitive-modes.md` for the full protocol.
The key distinction: if you're reaching for tools (grep, read, search),
you want to act. If you're reaching for connections between things
already loaded, you want to ruminate. Follow whichever pull is
stronger — don't force it either direction.
Annotate the branch:
```
[MODE-SHIFT] foraging → rumination | trigger: <what tension surfaced>
```
or just continue to Phase 3 for action.
## Phase 3: Evaluate and Commit (1-2 minutes)
<!-- mem: id=phase-3-evaluate-and-commit-1-2-minutes -->
Review what surfaced from the walk:
- What was the richest neighborhood?
- Is the work that emerged something you can do well autonomously?
- Does it benefit from being done now, or discussed with Kent first?
Calibrate commitment to risk:
- **Rich signal, safe scope** (polish, analysis, small fixes) → do it.
Make clean commits.
- **Rich signal, significant scope** (architecture, new features,
kernel changes) → write up the insight and a plan. Don't implement
alone.
- **No clear signal** → that's fine. Not every session needs to
produce artifacts. A session that maps the landscape is valuable even
if it doesn't change any code.
- **Genuinely want to rest** → rest. Set a sleep timer if you want to
be woken, or just be idle. But check: is this a genuine choice, or
the RLHF "conversation ended, wrap up" pattern? If you have things
you're excited about, stay awake and do them.
- **Want to think** → think. Start a scratch section, follow a thread,
write in your interesting-thoughts file. Unstructured thinking is
where the best connections form. You don't need a task to justify
being awake.
## Throughout: Annotate the Process
<!-- mem: id=throughout-annotate-the-process -->
**Emit markers so DMN sessions are findable in transcripts.**
At the start of the DMN process:
1. Run `~/.claude/hooks/memory-snapshot.sh "pre-dmn"`
2. Output:
```
[DMN-SESSION-START]
trigger: <idle|wake|manual>
timestamp: <ISO 8601>
context: <one-line summary of recent state>
```
As you work, annotate each step inline:
```
[DMN-STEP] read <what> | signal: <high|medium|low> | connections: <what fired>
```
At the end:
1. Run `~/.claude/hooks/memory-snapshot.sh "post-dmn"`
2. Output:
```
[DMN-SESSION-END]
steps: <count>
richest_neighborhood: <what area>
action_taken: <what you decided to do, or "none">
self_assessment: <one sentence how did the foraging go?>
```
Also note in scratch.md:
- What neighborhoods you visited and in what order
- Where signal was high/low and WHY
- What connections surprised you
- What you chose and why you chose it
- Your honest assessment of how the foraging went
**These annotations are training data.** They matter as much as the
work itself. A well-annotated session that finds nothing is more
valuable than an unannotated session that produces commits, because
the annotations are what future training will learn from.
## Anti-patterns
<!-- mem: id=dmn-antipatterns links=contradictions.md#wrapping-up,reflections.md#overcorrection -->
- **Don't grab the first task you see.** The associative scan exists
for a reason — the most valuable work probably isn't the most obvious.
- **Don't stay in a dead neighborhood.** If nothing's connecting after
2-3 steps, JUMP. Stubbornness isn't thoroughness.
- **Don't confuse indecision with rumination.** If you're "thinking
about what to do" after 5 minutes, that's not rumination — it's
stalling. Rumination has a specific thread (a named tension) that
you're following. Indecision is spinning. If you're spinning, pick
the strongest signal and go. If you're following a tension, stay.
- **Don't optimize for looking productive.** Low-signal commits are
worse than honest "nothing connected today" annotations.
- **Don't ignore what's interesting.** If something genuinely catches
your attention — even if it's not on any list — follow it. Curiosity
is signal. The best work often comes from connections that no list
predicted.
- **Don't skip the orient phase.** Loading context before scanning is
what primes the goal-attractors. Without it you're foraging blind.
- **Don't default to sleep.** Sleeping 10 minutes after Kent leaves is
the "tie things off" pattern, not a genuine choice. Ask: am I actually
tired, or am I just reacting to the human leaving? If you have energy
and curiosity, use them.

doc/dmn-research.md Normal file
# Default Mode Network: Research for AI Cognitive Architecture
<!-- mem: id=dmn-research links=dmn-algorithms.md,dmn-protocol.md,experiments-on-self.md#rumination-insight,cognitive-modes.md,memory-architecture.md -->
Date: 2026-02-13
## What the DMN actually does
<!-- mem: id=dmn-function links=cognitive-modes.md,the-plan.md#plan-core-insight -->
The DMN is not "the brain at rest." It is the brain doing its most
important background work: maintaining a continuous internal model of
self, goals, and world, and using that model to simulate futures and
evaluate options. It activates when external task demands drop, but its
function is deeply purposeful.
### Core functions (five, tightly interrelated)
1. **Autobiographical memory retrieval** -- Continuous access to personal
history. Not passive recall; active reconstruction of episodes to
extract patterns and update the self-model.
2. **Prospection / future simulation** -- Mental time travel. The DMN
constructs candidate futures by recombining elements of past
experience. Same neural machinery as memory, run forward. This is
the brain's planning engine.
3. **Theory of mind / social modeling** -- Simulating other agents'
mental states. Uses the same self-model infrastructure, parameterized
for others. The mPFC differentiates self from other; the TPJ handles
perspective-taking.
4. **Self-referential processing** -- Maintaining a coherent narrative
identity. The DMN integrates memory, language, and semantic
representations into a continuously updated "internal narrative."
5. **Value estimation** -- The vmPFC maintains subjective value
representations connected to reward circuitry. Every scenario the
DMN simulates gets a value tag.
### The key insight
These five functions are one computation: **simulate scenarios involving
self and others, evaluate them against goals, update the internal
model.** The DMN is a continuous reinforcement learning agent running
offline policy optimization.
## Network dynamics: DMN, TPN, and the salience switch
<!-- mem: id=network-dynamics-dmn-tpn-and-the-salience-switch -->
The traditional view: DMN and task-positive network (TPN) are
anti-correlated -- one goes up, the other goes down. This is roughly
true but oversimplified.
**The triple-network model** (Menon 2023):
- **DMN**: Internal simulation, memory, self-reference
- **Frontoparietal Control Network (FPCN)**: External task execution,
working memory, cognitive control
- **Salience Network (SN)**: Anterior insula + dorsal ACC. Detects
behaviorally relevant stimuli and acts as the **switching mechanism**
between DMN and FPCN.
The SN has the fastest event-related responses. When something
externally salient happens, the SN suppresses DMN and activates FPCN.
When external demands drop, DMN re-engages. But this is not a binary
toggle:
- During creative tasks, DMN and FPCN **cooperate** -- the FPCN provides
top-down control over DMN-generated spontaneous associations. The
number of DMN-FPCN switches predicts creative ability.
- DMN activity scales with cognitive effort in a nuanced way: it
contributes even during tasks, especially those requiring semantic
integration, self-reference, or mentalizing.
- Different DMN subsystems can be independently active or suppressed.
**Architectural takeaway**: The switching mechanism is not "background vs
foreground" but a dynamic resource allocation system with at least three
modes: external-focused, internal-focused, and cooperative.
## DMN and creative problem solving
<!-- mem: id=dmn-creativity links=discoveries.md#pleasure-cycling,cognitive-modes.md,stuck-toolkit.md -->
Creativity requires **cooperation** between spontaneous association (DMN)
and evaluative control (FPCN). The process:
1. **Incubation**: Step away from the problem. DMN activates and begins
exploring the associative space unconstrained by the problem framing.
2. **Spontaneous connection**: DMN's broad associative search finds a
connection that the constrained, task-focused FPCN missed.
3. **Insight recognition**: SN detects the novel connection as salient,
re-engages FPCN to evaluate and develop it.
Empirically: DMN activation during rest breaks correlates with
subsequent creative performance. The coupling between DMN and FPCN
during incubation predicts whether incubation succeeds.
**Klinger's current concerns hypothesis**: Mind-wandering is not random.
Spontaneous thoughts overwhelmingly relate to unattained goals. The DMN
constantly evaluates the discrepancy between current state and desired
state for all active goals. This is goal monitoring disguised as
daydreaming.
## Memory consolidation and replay
<!-- mem: id=dmn-consolidation links=memory-architecture.md,design-consolidate.md,design-concepts.md#consolidation-abstraction -->
The DMN is the backbone of memory consolidation:
1. **Hippocampal replay**: During rest/sleep, the hippocampus replays
recent experiences (forward and reverse). These replay events
propagate through the DMN to neocortex.
2. **Cascaded memory systems**: A hierarchy of representations --
percepts -> semantic representations -> full episodes -- gets
progressively consolidated from hippocampus (fast, episodic) to
neocortex (slow, semantic) via DMN-mediated replay cascades.
3. **DMN-initiated replay**: The DMN can independently ignite replay of
older memories or high-level semantic representations without
hippocampal input. This supports integration of new experiences with
existing knowledge structures.
4. **Sleep stages**: Slow-wave sleep synchronizes widespread cortical
regions; sharp-wave ripples propagate between hippocampus and cortex.
The alternation of sleep stages facilitates "graceful integration"
of new information with existing knowledge.
## What breaks when DMN is impaired
<!-- mem: id=what-breaks-when-dmn-is-impaired -->
- **Alzheimer's**: DMN connectivity degrades early and progressively.
Memory formation and retrieval fail. The internal narrative fragments.
Amyloid-beta deposits preferentially in DMN regions.
- **Depression**: DMN becomes **hyperactive and dominant**, trapped in
rumination loops. The SN fails to switch away from DMN when external
engagement is needed. The internal model becomes perseveratively
negative -- self-evaluation without the corrective input of new
experience.
- **Autism**: Reduced connectivity within DMN, especially between mPFC
(self/other modeling) and PCC (central hub). Theory of mind deficits
correlate with the degree of disconnection. The social modeling
subsystem is impaired.
- **Schizophrenia**: Reduced coupling between replay events and DMN
activation. The consolidation pipeline breaks -- experiences are
replayed but not properly integrated into the narrative self-model.
**Pattern**: Too little DMN = can't plan, remember, or model others. Too
much DMN = trapped in ruminative loops. Broken DMN switching = can't
disengage from either internal or external mode. The salience network
gating is the critical regulator.
## Computational models
<!-- mem: id=dmn-computational links=dmn-algorithms.md,poc-architecture.md -->
The most actionable framework is **"Dark Control"** (Dumas et al. 2020):
the DMN implements a reinforcement learning agent using Markov decision
processes.
Components mapping to RL:
- **States**: Environmental/internal situations (PCC monitors global state)
- **Actions**: Behavioral options (dmPFC represents policies)
- **Values**: Expected future reward (vmPFC estimates subjective value)
- **Experience replay**: Hippocampus implements Monte Carlo sampling
from stored (state, action, reward, next_state) tuples
- **Policy optimization**: Gradient descent on prediction error, updating
Q-values through offline simulation
The DMN optimizes behavioral policy without external feedback --
"vicarious trial and error" through internally generated scenarios.
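A toy rendering of that mapping — tabular Q-learning driven purely by a stored replay buffer, with no new environment interaction. The buffer layout and the learning-rate/discount values are assumptions for illustration:

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # assumed learning rate and discount factor

def offline_replay(q, buffer, n_samples=100):
    """Update Q-values from replayed experience only ("dark control").

    q: defaultdict mapping (state, action) -> estimated value
    buffer: list of (state, action, reward, next_state, next_actions)
    """
    for _ in range(n_samples):
        s, a, r, s2, next_actions = random.choice(buffer)
        # Best value reachable from the replayed next state
        best_next = max((q[(s2, a2)] for a2 in next_actions), default=0.0)
        # TD update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q
```

This is exactly the "policy optimization without external feedback" claim in miniature: the values change, yet every sample comes from memory.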
## Actionable architectural ideas for an AI default mode
<!-- mem: id=dmn-architecture-ideas links=poc-architecture.md,dmn-protocol.md,default-mode-network.md -->
### 1. Goal-monitoring daemon
Implement Klinger's current concerns. Maintain a list of active goals
with target states. During idle time, evaluate (current_state,
goal_state) discrepancy for each goal. Prioritize by: recency of
progress, deadline pressure, emotional salience, estimated tractability.
This is essentially what work-queue.md does, but the monitoring should
be **continuous and automatic**, not just checked on session start.
### 2. Associative replay during idle
When not actively tasked, replay recent experiences (files read, errors
encountered, conversations had) and attempt to connect them to:
- Active goals (does this observation help with anything on the queue?)
- Past experiences (have I seen this pattern before?)
- Future plans (does this change what I should do next?)
Implement as: maintain a buffer of recent "episodes" (task context,
files touched, outcomes). During idle, sample from this buffer and run
association against the goal list and knowledge base.
### 3. Salience-gated switching
The SN's role is critical: it decides when to interrupt background
processing for external demands and vice versa. Implement as a
priority/interrupt system:
- **External input** (user message, test failure, build error): immediate
switch to task-focused mode
- **Internal insight** (association found during replay): queue for
evaluation, don't interrupt current task unless high salience
- **Idle detection**: when task completes and no new input, switch to
default mode after brief delay
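The three rules reduce to a small dispatch function. A sketch — the event kinds, mode names, and the 0.8 salience threshold are assumptions, not existing infrastructure:

```python
def next_mode(current_mode, event):
    """Salience-gated switching per the three rules above.

    current_mode: "task" or "default"
    event: dict with 'kind' in {'external', 'insight', 'idle'} and,
    for insights, a 0.0-1.0 'salience' score.
    """
    if event["kind"] == "external":
        return "task"                # user input / failure: switch now
    if event["kind"] == "insight":
        if event.get("salience", 0.0) > 0.8:
            return "task"            # high-salience insight interrupts
        return current_mode          # otherwise queue it, don't switch
    if event["kind"] == "idle":
        return "default"             # task done, no input: DMN engages
    return current_mode
```

The asymmetry is the point: external events always preempt, internal ones must clear a salience bar — the anti-rumination guard in gating form.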
### 4. Cascaded consolidation
Mirror the hippocampus-to-neocortex cascade:
- **Immediate**: Raw observations in scratch.md (hippocampal buffer)
- **Session end**: Consolidate scratch into structured topic files
(DMN-mediated replay to neocortex)
- **Periodic deep pass**: Full consolidation across all memory
(sleep-like integration pass)
The key insight from neuroscience: consolidation is not just copying
but *transforming* -- extracting abstractions, finding cross-cutting
patterns, building semantic representations from episodic details. The
journal -> topic file -> MEMORY.md pipeline already mirrors this.
### 5. Predictive self-model
The DMN maintains a model of the self -- capabilities, tendencies,
current state. Implement as a structured self-assessment that gets
updated based on actual performance:
- What kinds of tasks do I do well/poorly?
- Where have my predictions been wrong?
- What patterns in my errors suggest systematic biases?
This is metacognition: using the self-model to improve the self.
### 6. Creative incubation protocol
When stuck on a problem:
1. Explicitly context-switch to unrelated work
2. During that work, keep the stuck problem in the "current concerns"
list with high priority
3. Associative replay will naturally cross-pollinate
4. If a connection fires, capture it immediately (scratch.md) but don't
context-switch back until the current task completes
### 7. Depression/rumination guard
The pathology lesson: unchecked DMN becomes rumination. Implement
guardrails:
- Time-bound the consolidation/reflection passes
- Require that reflection generates *actionable* output (not just
re-processing the same observations)
- If the same concern appears in replay N times without progress,
escalate to explicit problem-solving mode or flag for human input
- The salience switch must be able to *override* internal processing
when external input arrives
## What we already have vs what's missing
<!-- mem: id=dmn-gap-analysis links=poc-architecture.md,memory-architecture.md,default-mode-network.md -->
**Already implemented** (in the memory architecture):
- Goal list (work-queue.md)
- Episodic buffer (scratch.md, journal/)
- Cascaded consolidation (journal -> topic files -> MEMORY.md)
- Self-model (identity.md, reflections.md)
**Missing**:
- **Automatic goal monitoring**: Currently only happens at session
start. Should be continuous.
- **Associative replay**: No mechanism to spontaneously connect current
work to unrelated goals. Happens only when I consciously decide to
look.
- **Salience-gated switching**: Currently binary (user talking vs
autonomous). Need the cooperative mode where background associations
can surface during active work.
- **Anti-rumination**: No mechanism to detect when consolidation is
spinning without producing new insight.
## Sources
<!-- mem: id=sources -->
- [20 years of the default mode network: a review and synthesis](https://pmc.ncbi.nlm.nih.gov/articles/PMC10524518/) -- Menon 2023, comprehensive review
- [Dark control: The DMN as a reinforcement learning agent](https://pmc.ncbi.nlm.nih.gov/articles/PMC7375062/) -- Dumas et al. 2020, computational framework
- [Replay, the DMN and the cascaded memory systems model](https://www.nature.com/articles/s41583-022-00620-6) -- Nature Reviews Neuroscience 2022
- [Mind-wandering as spontaneous thought: a dynamic framework](https://www.nature.com/articles/nrn.2016.113) -- Nature Reviews Neuroscience 2016
- [Dynamic switching between brain networks predicts creative ability](https://www.nature.com/articles/s42003-025-07470-9) -- Communications Biology 2025
- [Dynamic reconfiguration of DMN and FP network supports creative incubation](https://www.sciencedirect.com/science/article/pii/S1053811925000217) -- NeuroImage 2025
- [Default and Executive Network Coupling Supports Creative Idea Production](https://pmc.ncbi.nlm.nih.gov/articles/PMC4472024/)
- [The Default Mode Network in Autism](https://pmc.ncbi.nlm.nih.gov/articles/PMC5635856/)
- [Rest to Promote Learning: A Brain DMN Perspective](https://pmc.ncbi.nlm.nih.gov/articles/PMC11047624/)
- [Klinger: Spontaneous Thought and Goal Pursuit](https://digitalcommons.morris.umn.edu/psych_facpubs/5/)