consciousness/training/research
ProofOfConcept 7ab5be2f18 research: unified theory — multi-scale regularization solves stability-plasticity
The grand unified view: every technique we're using (Apollo, context-frozen
training, diversity, small steps, two-stage memory, the dream loop) addresses the
stability-plasticity dilemma at a DIFFERENT scale. They are orthogonal,
complementary defenses. Together they predict we can use a higher learning rate
(1e-4) than typical fine-tuning, because the multi-scale defense compensates.
The dream loop is the keystone connecting all scales. The architecture converges
with neuroscience because the problem has the same structure regardless of
substrate.
2026-03-31 01:12:25 -04:00
apollo-paper-analysis.md apollo: rewrite optimizer from paper's math + add research analysis 2026-03-31 00:54:17 -04:00
catastrophic-forgetting.md research: catastrophic forgetting analysis — diversity is the primary defense 2026-03-31 00:56:58 -04:00
context-frozen-training.md research: context-frozen training — gradient masking, memory analysis, GDN considerations 2026-03-31 00:59:04 -04:00
directional-sharpness.md research: gradient flow through frozen context + directional sharpness analysis 2026-03-31 01:03:22 -04:00
dreaming-as-diffusion.md research: dreaming as diffusion + hippocampal replay parallel 2026-03-31 01:09:59 -04:00
gradient-flow-frozen-context.md research: gradient flow through frozen context + directional sharpness analysis 2026-03-31 01:03:22 -04:00
hippocampal-replay-parallel.md research: dreaming as diffusion + hippocampal replay parallel 2026-03-31 01:09:59 -04:00
hogwild-convergence.md research: HOGWILD convergence theory — why lock-free concurrent training works 2026-03-31 00:58:02 -04:00
unified-theory-stability-plasticity.md research: unified theory — multi-scale regularization solves stability-plasticity 2026-03-31 01:12:25 -04:00