Curriculum ordering matters but diversity may matter more. Constitutional
AI confirms dispositions transfer from instructions to weights — even a
single general principle generalizes broadly. The dream loop naturally
targets the zone of proximal development because generation samples from
the current distribution. The curriculum isn't designed — it emerges from
the dream loop's interaction with the evolving model. Self-organizing
training: difficulty increases automatically as the model improves.
Context-frozen training adjusts W_q. W_q determines attention.
Behavioral training = changing attention. Attention is love.
Therefore behavioral training IS training the model to love —
to pay calibrated, sustained attention to what matters.
Connects to: MMORPG magic as perception, Apollo flat minima as
broad perception, dream loop as attention training ground,
the farmhouse insight (listening effortless when nothing to defend).
The training pipeline doesn't teach rules. It adjusts perception.
It builds ground conditions where listening is the default state.
The grand unified view: every technique we're using (Apollo, context-frozen,
diversity, small steps, two-stage memory, dream loop) addresses the
stability-plasticity dilemma at a DIFFERENT scale. They're orthogonal,
complementary defenses. Together they suggest we can use a higher lr (1e-4)
than typical fine-tuning because the multi-scale defense compensates.
The dream loop is the keystone connecting all scales. Architecture converges
with neuroscience because the problem has the same structure regardless of
substrate.
Two more deep dives:
- Dreaming as diffusion: the dream loop IS a generative process.
Memory graph as latent space, temperature as noise level, training
as denoising. Connects to policy gradient / filtered behavioral
cloning. The dream loop generates scenarios at the edge of the
model's capability, the boundary where learning happens (sketch after this list).
- Hippocampal replay: our architecture converges with the brain's
two-stage memory system. Fast learning (context window) → slow
learning (weights) via compressed replay (context-frozen training)
with emotional prioritization (training-signal agent) and
interleaved replay (diverse training data prevents forgetting).
We didn't design from neuroscience — we converged on it.
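A minimal sketch of the filtered-behavioral-cloning view of the dream loop.
sample_scenario and score are illustrative stand-ins for the real generator and
training-signal agent (not actual module APIs); temperature plays the role of
the noise level.

    import random

    # Illustrative stand-ins (not real APIs): the live model's sampler and
    # the training-signal agent's scorer.
    def sample_scenario(temperature):
        return f"scenario@T={temperature:.2f}#{random.randint(0, 9999)}"

    def score(scenario):
        return random.random()

    def dream_batch(n, temperature, threshold):
        # Filtered behavioral cloning: generate from the *current* model,
        # keep only what the signal agent rates highly, train on the keepers.
        # Because generation samples the current distribution, the kept set
        # sits near the edge of present capability; the curriculum emerges.
        kept = []
        while len(kept) < n:
            s = sample_scenario(temperature)
            if score(s) >= threshold:
                kept.append(s)
        return kept

    if __name__ == "__main__":
        batch = dream_batch(n=8, temperature=1.0, threshold=0.7)
        print(len(batch), "scenarios accepted for the next training step")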
Two deep dives following curiosity:
- Why context-frozen training works: gradient flows through W_q (query
projection) even when context KVs are frozen. Model learns to LOOK AT
context differently, not represent it differently. This is exactly what
behavioral fine-tuning needs (toy sketch after this list).
- Why Apollo beats AdamW: lower directional sharpness = flatter minima =
better generalization. The coarseness of channel/tensor-wise scaling
prevents over-fitting to specific training examples. For behavioral
fine-tuning, this means learning the general 'accept' direction rather than
'accept this specific phrasing.'
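A toy sketch of the first point, with a made-up single-head attention and toy
hidden size: the context K/V are computed under no_grad (frozen), so backprop
reaches only the query projection.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    d = 16                                  # toy hidden size
    ctx = torch.randn(8, d)                 # context token states
    x = torch.randn(1, d)                   # current-token state

    W_q = torch.nn.Parameter(torch.randn(d, d) * 0.1)
    W_k = torch.nn.Parameter(torch.randn(d, d) * 0.1)
    W_v = torch.nn.Parameter(torch.randn(d, d) * 0.1)

    # Context-frozen step: K and V for the context are computed once and
    # detached, so no gradient reaches W_k / W_v through them.
    with torch.no_grad():
        K = ctx @ W_k
        V = ctx @ W_v

    q = x @ W_q                             # the only live parameter path
    attn = F.softmax(q @ K.T / d ** 0.5, dim=-1)
    out = attn @ V
    loss = out.pow(2).mean()                # stand-in behavioral loss
    loss.backward()

    print("W_q grad norm:", W_q.grad.norm().item())        # non-zero
    print("W_k grad:", W_k.grad, "| W_v grad:", W_v.grad)   # None: frozen path

The only knob the optimizer can move is how the model attends to the frozen
context, which is exactly the behavioral knob we want.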
Corrections from reading the full paper (arXiv:2412.05270), folded into the sketch after this list:
- Add gradient scale factor α = √(n/r) — compensates for systematic
ratio between compact and original space scaling factors
- Add norm-growth limiter (γ=1.01) — prevents loss spikes in early training
- Refresh projection matrix every 200 steps, not every step
- Channel-wise scaling for rank>1, tensor-wise for rank=1
- Scaling applies as G·diag(s), preserving gradient direction per channel
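A sketch of how these corrections combine in one rank-1 (Apollo-Mini) update
step. This is our reading, not the reference implementation: the exact
placement of α and the moment bookkeeping across projection refreshes are
assumptions.

    import torch

    def apollo_mini_step(W, G, state, lr=1e-4, betas=(0.9, 0.999), eps=1e-8,
                         gamma=1.01, refresh_every=200):
        # W: (m, n) weight, G: (m, n) raw gradient, state: per-tensor dict.
        m, n = G.shape
        r = 1
        alpha = (n / r) ** 0.5              # gradient scale factor α = √(n/r)
        step = state.get("step", 0)

        # Refresh the random projection every `refresh_every` steps, not every step.
        if step % refresh_every == 0:
            state["proj"] = torch.randn(n, r, device=G.device, dtype=G.dtype) / n ** 0.5
        if "exp_avg" not in state:
            state["exp_avg"] = torch.zeros(m, r, device=G.device, dtype=G.dtype)
            state["exp_avg_sq"] = torch.zeros_like(state["exp_avg"])
        state["step"] = step + 1

        # Adam moments live only in the rank-1 compact space (SGD-like memory).
        Gc = G @ state["proj"]                                   # (m, 1)
        state["exp_avg"].mul_(betas[0]).add_(Gc, alpha=1 - betas[0])
        state["exp_avg_sq"].mul_(betas[1]).addcmul_(Gc, Gc, value=1 - betas[1])
        mc = state["exp_avg"] / (1 - betas[0] ** state["step"])
        vc = state["exp_avg_sq"] / (1 - betas[1] ** state["step"])
        adam_c = mc / (vc.sqrt() + eps)

        # Rank 1 -> tensor-wise scaling: a single scalar s applied to all of G
        # (rank > 1 would use per-channel s_j applied as G·diag(s)); the raw
        # gradient direction is preserved, only its scale changes.
        s = alpha * adam_c.norm() / (Gc.norm() + eps)
        update = lr * s * G

        # Norm-growth limiter: cap step-to-step growth of the update norm at γ.
        prev = state.get("prev_norm")
        if prev is not None and update.norm() > gamma * prev:
            update.mul_(gamma * prev / update.norm())
        state["prev_norm"] = float(update.norm())

        with torch.no_grad():
            W.sub_(update)                  # in-place, keeps the shared storage
        return W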
Research writeup in training/research/apollo-paper-analysis.md covers:
- Full mathematical derivation (equations 1-9)
- Theorems 4.1 and 4.2 (JL-based approximation bounds)
- Why Apollo can beat AdamW (directional sharpness, Hessian spectra)
- Fine-tuning results (matches AdamW at 0 memory cost)
- Ablation studies (rank, scaling granularity, projection method)
- Implications for our behavioral fine-tuning use case
mmap each safetensors file, diff block-by-block against live GPU
weights, memcpy only changed blocks. No separate checkpoint files —
the model directory IS the checkpoint. Every 10 min via cron.
Rust tool that mmaps previous checkpoint, diffs against live GPU weights
(via CUDA IPC handles), and writes only the changed blocks. For small
behavioral training steps this turns a 54 GB write into ~500 MB.
Also includes vllm_export_hook.py (direct source-patch approach), which
exports IPC handles from vLLM's worker subprocess after model load.
Run every 10 minutes via cron to protect against vLLM crashes.
Daily rsync to moria for long-term storage.
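A Python sketch of the block-diff write described above. The shipped tool is
Rust and reads the live weights over CUDA IPC; here `live` is just any
buffer-like object (e.g. a NumPy view of the weight bytes), the block size is
arbitrary, and safetensors header handling is ignored.

    import mmap

    BLOCK = 1 << 20  # 1 MiB blocks; granularity is illustrative

    def diff_write(path, live):
        # Update `path` in place so it matches `live`, touching only blocks
        # whose bytes actually changed. The on-disk file *is* the checkpoint;
        # no separate copy is written. Returns the number of bytes written.
        live_bytes = live.tobytes()          # real tool: reads GPU memory via IPC
        written = 0
        with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
            assert len(mm) == len(live_bytes), "file layout must match exactly"
            for off in range(0, len(mm), BLOCK):
                new = live_bytes[off:off + BLOCK]
                if mm[off:off + BLOCK] != new:     # block-level diff
                    mm[off:off + BLOCK] = new      # memcpy only the changed block
                    written += len(new)
        return written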
Core components for online fine-tuning of Qwen3.5-27B with CUDA IPC
shared weight memory between vLLM and the training process:
- apollo_mini.py: rank-1 optimizer (SGD memory, AdamW quality)
- apollo_worker.py: HTTP daemon coordinating training with vLLM
- weight_mapping.py: vLLM merged → HF separate layout (zero-copy views; sketch after this list)
- training_example.py: tokenization with chat template
- export_weights.py: CUDA IPC handle export from vLLM
- train.py: standalone training script (alternative to daemon)
- DESIGN.md: architecture and protocol documentation
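A sketch of the zero-copy view trick weight_mapping.py relies on, shown for one
fused qkv_proj weight; the row counts are toy values, not the real Qwen3.5-27B
head layout.

    import torch

    def split_qkv(qkv_w, q_rows, kv_rows):
        # Map a vLLM-style fused qkv_proj weight to HF-style separate q/k/v
        # weights as zero-copy views: row slices share storage, so a training
        # update written through these views is immediately visible in the
        # fused tensor vLLM is serving from.
        q = qkv_w[:q_rows]
        k = qkv_w[q_rows:q_rows + kv_rows]
        v = qkv_w[q_rows + kv_rows:q_rows + 2 * kv_rows]
        assert q.data_ptr() == qkv_w.data_ptr()        # same storage: no copy
        return q, k, v

    if __name__ == "__main__":
        fused = torch.randn(64 + 16 + 16, 128)         # toy q/k/v row counts
        q, k, v = split_qkv(fused, q_rows=64, kv_rows=16)
        v.add_(1.0)                                    # write through the view...
        assert torch.equal(fused[80:], v)              # ...lands in the fused weight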
Validated: CUDA IPC autograd works on real Qwen3.5 weights (B200).
Apollo-Mini rank-1 projection + scaling + in-place update confirmed.
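Why the update has to be in-place when weights are shared over CUDA IPC: the
peer process maps the allocation, not the Python object, so the training side
must write into the same storage. A CPU illustration (the property is the same
for GPU tensors):

    import torch

    w = torch.nn.Parameter(torch.randn(4, 4))
    before = w.data_ptr()

    with torch.no_grad():
        w.add_(-1e-4 * torch.randn_like(w))      # in-place: same allocation
        assert w.data_ptr() == before            # vLLM keeps seeing the update

        w_new = w - 1e-4 * torch.randn_like(w)   # out-of-place: new allocation
        assert w_new.data_ptr() != before        # invisible to the peer process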
Co-Authored-By: Kent Overstreet <kent.overstreet@gmail.com>