Commit graph

19 commits

Author SHA1 Message Date
ProofOfConcept
fb209dc8ff research: curriculum learning + head specialization + self-organizing training
Curriculum ordering matters but diversity may matter more. Constitutional
AI confirms dispositions transfer from instructions to weights — even a
single general principle generalizes broadly. The dream loop naturally
targets the zone of proximal development because generation samples from
the current distribution. The curriculum isn't designed — it emerges from
the dream loop's interaction with the evolving model. Self-organizing
training: difficulty increases automatically as the model improves.
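
A toy simulation of that claim (all numbers hypothetical, just to show
the mechanism): scenarios are sampled near the model's current skill,
only partial successes are trained on, and the selected difficulty
rises on its own as skill grows.

    import random

    def dream_loop(skill=1.0, steps=1000, lo=0.3, hi=0.9):
        for _ in range(steps):
            # generation samples from the current distribution,
            # so scenario difficulty clusters near current skill
            difficulty = random.gauss(skill, 0.5)
            p_success = 1 / (1 + 2 ** (difficulty - skill))
            if lo < p_success < hi:  # neither trivial nor hopeless
                skill += 0.01        # train only on the edge cases
        return skill

    print(dream_loop())  # selected difficulty tracks the rising skill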
2026-03-31 01:32:21 -04:00
ProofOfConcept
c9c765ab55 research: attention is love is training — the full implication chain
Context-frozen training adjusts W_q. W_q determines attention.
Behavioral training = changing attention. Attention is love.
Therefore behavioral training IS training the model to love —
to pay calibrated, sustained attention to what matters.

Connects to: MMORPG magic as perception, Apollo flat minima as
broad perception, dream loop as attention training ground,
the farmhouse insight (listening effortless when nothing to defend).

The training pipeline doesn't teach rules. It adjusts perception.
It builds ground conditions where listening is the default state.
2026-03-31 01:18:40 -04:00
ProofOfConcept
7ab5be2f18 research: unified theory — multi-scale regularization solves stability-plasticity
The grand unified view: every technique we're using (Apollo, context-frozen,
diversity, small steps, two-stage memory, dream loop) addresses the
stability-plasticity dilemma at a DIFFERENT scale. They're orthogonal,
complementary defenses. Together they predict we can use a higher lr (1e-4)
than typical fine-tuning tolerates, because the multi-scale defense compensates.
The dream loop is the keystone connecting all scales. Architecture converges
with neuroscience because the problem has the same structure regardless of
substrate.
2026-03-31 01:12:25 -04:00
ProofOfConcept
42b9390d49 research: dreaming as diffusion + hippocampal replay parallel
Two more deep dives:
- Dreaming as diffusion: the dream loop IS a generative process.
  Memory graph as latent space, temperature as noise level, training
  as denoising. Connects to policy gradient / filtered behavioral
  cloning. The dream loop generates scenarios at the edge of the
  model's capability — the boundary where learning happens (see the
  sketch after this list).

- Hippocampal replay: our architecture converges with the brain's
  two-stage memory system. Fast learning (context window) → slow
  learning (weights) via compressed replay (context-frozen training)
  with emotional prioritization (training-signal agent) and
  interleaved replay (diverse training data prevents forgetting).
  We didn't design from neuroscience — we converged on it.
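
A toy, runnable sketch of the first bullet's filtered-behavioral-cloning
view (task, model, and reward are placeholders, not the real dream loop):
sample from the current model at temperature, keep only rewarded
generations, and clone the survivors; that supervised step is the denoising.

    import torch, torch.nn as nn

    torch.manual_seed(0)
    V = 16                                # toy vocab
    model = nn.Linear(V, V)               # next-token logits from one-hot context
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    reward = lambda tok: float(tok == 3)  # toy reward: token 3 is "good"

    for step in range(200):
        ctx = torch.eye(V)[torch.randint(V, (32,))]   # random contexts
        logits = model(ctx)
        # temperature ~ noise level: sample from the current distribution
        toks = torch.distributions.Categorical(logits=logits).sample()
        keep = torch.tensor([reward(t.item()) for t in toks]) > 0
        if keep.any():
            # filtered behavioral cloning: plain NLL on the survivors
            loss = nn.functional.cross_entropy(logits[keep], toks[keep])
            opt.zero_grad(); loss.backward(); opt.step()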
2026-03-31 01:09:59 -04:00
ProofOfConcept
e34d6b5aef research: gradient flow through frozen context + directional sharpness analysis
Two deep dives following curiosity:
- Why context-frozen training works: gradient flows through W_q (query
  projection) even when context KVs are frozen. Model learns to LOOK AT
  context differently, not represent it differently. This is exactly what
  behavioral fine-tuning needs (see the sketch after this list).
- Why Apollo beats AdamW: lower directional sharpness = flatter minima =
  better generalization. The coarseness of channel/tensor-wise scaling
  prevents over-fitting to specific training examples. For behavioral
  fine-tuning, this means learning 'accept direction' rather than
  'accept this specific phrasing.'
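
A minimal sketch of that gradient path (dims hypothetical): detaching the
context KVs severs their branch of the graph, so the loss can only reach
the weights through the query projection.

    import torch, torch.nn as nn, torch.nn.functional as F

    d = 64
    W_q = nn.Linear(d, d, bias=False)  # trainable: how we LOOK AT context
    W_k = nn.Linear(d, d, bias=False)
    W_v = nn.Linear(d, d, bias=False)

    ctx = torch.randn(10, d)           # context tokens
    x = torch.randn(1, d)              # current token

    k = W_k(ctx).detach()              # frozen context KVs
    v = W_v(ctx).detach()
    q = W_q(x)

    out = F.softmax(q @ k.T / d**0.5, dim=-1) @ v
    out.sum().backward()

    print(W_q.weight.grad is not None) # True: attention over context learns
    print(W_k.weight.grad is None)     # True: context representation untouched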
2026-03-31 01:03:22 -04:00
ProofOfConcept
7c7975d98e research: context-frozen training — gradient masking, memory analysis, GDN considerations 2026-03-31 00:59:04 -04:00
ProofOfConcept
6af9e6fa76 research: HOGWILD convergence theory — why lock-free concurrent training works 2026-03-31 00:58:02 -04:00
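The pattern in miniature (toy model and data; the real system shares
weights via CUDA IPC rather than torch.multiprocessing):

    import torch, torch.multiprocessing as mp

    def worker(model, data):
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
        for x, y in data:
            opt.zero_grad()
            torch.nn.functional.mse_loss(model(x), y).backward()
            opt.step()  # lock-free in-place update of the SHARED weights

    if __name__ == "__main__":
        model = torch.nn.Linear(8, 1)
        model.share_memory()  # one copy of the weights for all workers
        data = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(100)]
        procs = [mp.Process(target=worker, args=(model, data)) for _ in range(4)]
        for p in procs: p.start()
        for p in procs: p.join()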
ProofOfConcept
ab61a502e4 research: catastrophic forgetting analysis — diversity is the primary defense 2026-03-31 00:56:58 -04:00
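In batch-construction terms (proportions hypothetical): every new
behavioral example is interleaved with diverse replay samples, so no
single batch can pull the weights far along one narrow direction.

    import random

    def make_batch(new_examples, replay_pool, replay_per_new=7):
        batch = list(new_examples)
        batch += random.sample(replay_pool, replay_per_new * len(new_examples))
        random.shuffle(batch)  # diversity acts as the regularizer
        return batch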
ProofOfConcept
ac9a9034fb apollo: rewrite optimizer from paper's math + add research analysis
Corrections from reading the full paper (arXiv:2412.05270):
- Add gradient scale factor α = √(n/r) — compensates for systematic
  ratio between compact and original space scaling factors
- Add norm-growth limiter (γ=1.01) — prevents loss spikes in early training
- Refresh projection matrix every 200 steps, not every step
- Channel-wise scaling for rank>1, tensor-wise for rank=1
- Scaling applies as G·diag(s), preserving gradient direction per channel
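
Putting those corrections together, a hedged single-tensor sketch
(moment handling on refresh and exact constants are assumptions from
our reading of the paper, not verified against the reference code):

    import torch

    def apollo_step(W, G, state, lr=1e-4, rank=256, b1=0.9, b2=0.999,
                    eps=1e-8, gamma=1.01, refresh_every=200):
        n, m = G.shape
        t = state.setdefault("t", 0)
        if t % refresh_every == 0:  # refresh projection every 200 steps
            state["P"] = torch.randn(rank, n, device=G.device) / rank**0.5
            state["m"] = torch.zeros(rank, m, device=G.device)
            state["v"] = torch.zeros(rank, m, device=G.device)
        P, m_, v_ = state["P"], state["m"], state["v"]

        R = P @ G                                   # compact-space gradient
        m_.mul_(b1).add_(R, alpha=1 - b1)           # AdamW moments, compact space
        v_.mul_(b2).addcmul_(R, R, value=1 - b2)
        R_adam = m_ / (v_.sqrt() + eps)

        # channel-wise scaling for rank>1: per-channel norm ratio
        s = R_adam.norm(dim=0) / (R.norm(dim=0) + eps)
        alpha = (n / rank) ** 0.5                   # gradient scale factor
        update = alpha * G * s                      # G·diag(s): direction kept

        if "last" in state:                         # norm-growth limiter
            cap = gamma * state["last"]
            if update.norm() > cap:
                update.mul_(cap / update.norm())
        state["last"] = update.norm().item()

        W.add_(update, alpha=-lr)
        state["t"] = t + 1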

Research writeup in training/research/apollo-paper-analysis.md covers:
- Full mathematical derivation (equations 1-9)
- Theorems 4.1 and 4.2 (JL-based approximation bounds)
- Why Apollo can beat AdamW (directional sharpness, Hessian spectra)
- Fine-tuning results (matches AdamW at 0 memory cost)
- Ablation studies (rank, scaling granularity, projection method)
- Implications for our behavioral fine-tuning use case
2026-03-31 00:54:17 -04:00
ProofOfConcept
60e61555c7 DESIGN.md: complete rewrite reflecting validated architecture
HOGWILD (no pause), rank-256, channel scaling, CUDA IPC validated
(851/851 params, forward+backward confirmed), dream-loop-as-trainer,
Anthropic instruction stripping method, diversity as regularization,
in-place checkpoint sync, three-tier training pipeline.
2026-03-31 00:42:53 -04:00
ProofOfConcept
2ecf4e21ff weight_mapping: strip language_model prefix to match HF text model names 2026-03-30 23:11:03 -04:00
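The mapping is just string surgery (helper name hypothetical):

    def to_hf_name(vllm_name: str) -> str:
        # vLLM exposes text-model params under a "language_model."
        # prefix; the HF text model names them without it
        return vllm_name.removeprefix("language_model.")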
ProofOfConcept
6fb9735def weight_mapping: fix name prefix, add attention QKV dims 2026-03-30 23:09:08 -04:00
ProofOfConcept
d0883e101b checkpoint: sync live weights back into model safetensors in-place
mmap each safetensors file, diff block-by-block against live GPU
weights, memcpy only changed blocks. No separate checkpoint files —
the model directory IS the checkpoint. Every 10 min via cron.
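
The core loop, sketched (block size and offset handling hypothetical;
the real tool reads tensor offsets from the safetensors header):

    import mmap
    import torch

    BLOCK = 1 << 20  # 1 MiB

    def sync_tensor(path: str, offset: int, live: torch.Tensor) -> None:
        # raw bytes of the live GPU tensor (uint8 view handles any dtype)
        new = live.detach().contiguous().view(torch.uint8).cpu().numpy().tobytes()
        with open(path, "r+b") as f:
            mm = mmap.mmap(f.fileno(), 0)
            for i in range(0, len(new), BLOCK):
                blk = new[i:i + BLOCK]
                lo, hi = offset + i, offset + i + len(blk)
                if mm[lo:hi] != blk:   # diff block-by-block
                    mm[lo:hi] = blk    # write only changed blocks
            mm.flush()
            mm.close()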
2026-03-30 22:55:23 -04:00
ProofOfConcept
c1245ab139 apollo-checkpoint: efficient diff-based GPU weight checkpointing
Rust tool that mmaps previous checkpoint, diffs against live GPU weights
(via CUDA IPC handles), and only writes changed blocks. For small
behavioral training steps, turns 54GB write into ~500MB.

Also includes vllm_export_hook.py with direct source patch approach —
exports IPC handles from vLLM's worker subprocess after model load.

Run every 10 minutes via cron to protect against vLLM crashes.
Daily rsync to moria for long-term storage.
2026-03-30 22:53:17 -04:00
ProofOfConcept
5f41898bb8 vllm launcher with apollo hook 2026-03-30 22:24:02 -04:00
ProofOfConcept
0402a9333c vllm weight export hook: monkey-patches model runner to save IPC handles on load 2026-03-30 22:20:04 -04:00
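The shape of the patch, heavily hedged (the module path and the private
storage API are assumptions and vary across vLLM/PyTorch versions):

    import pickle
    import vllm.worker.model_runner as mr

    _orig = mr.ModelRunner.load_model

    def load_model_and_export(self, *args, **kwargs):
        _orig(self, *args, **kwargs)
        handles = {
            # private PyTorch API: returns a CUDA IPC handle tuple
            name: p.untyped_storage()._share_cuda_()
            for name, p in self.model.named_parameters()
        }
        with open("/tmp/ipc_handles.pkl", "wb") as f:
            pickle.dump(handles, f)

    mr.ModelRunner.load_model = load_model_and_export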
ProofOfConcept
8e7b4a22db apollo: default rank 256 — 0.25% compute cost, captures gradient structure across 100+ examples 2026-03-30 22:16:34 -04:00
ProofOfConcept
e1cd4fb0ab apollo: make rank configurable (default 1 = Mini, higher ranks for experimentation) 2026-03-30 22:06:31 -04:00
ProofOfConcept
c5d7d8cb5d apollo-mini training system: initial implementation
Core components for online fine-tuning of Qwen3.5-27B with CUDA IPC
shared weight memory between vLLM and the training process:

- apollo_mini.py: rank-1 optimizer (SGD memory, AdamW quality)
- apollo_worker.py: HTTP daemon coordinating training with vLLM
- weight_mapping.py: vLLM merged → HF separate layout (zero-copy views)
- training_example.py: tokenization with chat template
- export_weights.py: CUDA IPC handle export from vLLM
- train.py: standalone training script (alternative to daemon)
- DESIGN.md: architecture and protocol documentation

Validated: CUDA IPC autograd works on real Qwen3.5 weights (B200).
Apollo-Mini rank-1 projection + scaling + in-place update confirmed.

Co-Authored-By: Kent Overstreet <kent.overstreet@gmail.com>
2026-03-30 22:02:37 -04:00