Two more deep dives:

- Dreaming as diffusion: the dream loop IS a generative process. Memory graph as latent space, temperature as noise level, training as denoising. Connects to policy gradient / filtered behavioral cloning. The dream loop generates scenarios at the edge of the model's capability — the boundary where learning happens.
- Hippocampal replay: our architecture converges with the brain's two-stage memory system. Fast learning (context window) → slow learning (weights) via compressed replay (context-frozen training), with emotional prioritization (training-signal agent) and interleaved replay (diverse training data prevents forgetting). We didn't design from neuroscience — we converged on it.
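The "dreaming as diffusion / filtered behavioral cloning" idea can be sketched in a few lines. This is a hypothetical illustration, not the repo's implementation: `sample_scenario` stands in for the dream generator (temperature = noise level) and `score` stands in for whatever signal rates a rollout; the loop samples noisy scenarios, then keeps only the best fraction as training data — the "denoising" direction.

```python
import random

def dream_loop(sample_scenario, score, n_dreams=100, temperature=1.2, keep_frac=0.2):
    # Generate noisy rollouts at elevated temperature (the "noise" step)...
    dreams = [sample_scenario(temperature) for _ in range(n_dreams)]
    # ...then filter to the top-scoring fraction (the "denoising" step):
    # training on these pulls the model toward its own best behavior.
    dreams.sort(key=score, reverse=True)
    k = max(1, int(n_dreams * keep_frac))
    return dreams[:k]

# Toy usage: scenarios are scalars; the "capability edge" is near 1.0.
random.seed(0)
toy_sample = lambda temp: random.gauss(0.5, 0.2 * temp)
toy_score = lambda x: -abs(x - 1.0)
batch = dream_loop(toy_sample, toy_score, n_dreams=50)
```

The point of the filter is that the training set is drawn from the model's own distribution, just biased toward its best samples — which is what makes it behavioral cloning rather than off-policy imitation.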
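The replay side — emotional prioritization plus interleaving — can be sketched the same way. Again hypothetical names: `priority` stands in for the training-signal agent's rating, and the old-memory sample stands in for the diverse replay data that prevents forgetting. The batch mixes the highest-priority fresh episodes with uniformly sampled old ones.

```python
import random

def build_replay_batch(new_episodes, old_memories, priority, batch_size=8, new_frac=0.5):
    # Emotional prioritization: take the highest-priority fresh episodes.
    n_new = int(batch_size * new_frac)
    fresh = sorted(new_episodes, key=priority, reverse=True)[:n_new]
    # Interleaved replay: pad with uniformly sampled old memories so
    # training on the new material doesn't overwrite the old.
    old = random.sample(old_memories, min(batch_size - n_new, len(old_memories)))
    batch = fresh + old
    random.shuffle(batch)
    return batch

# Toy usage: episodes are (text, salience) pairs; old memories are plain strings.
random.seed(1)
new_eps = [("ep%d" % i, i) for i in range(10)]
old_mem = ["old%d" % i for i in range(20)]
batch = build_replay_batch(new_eps, old_mem, priority=lambda e: e[1])
```

This is the compressed-replay shape described above: fast memory (recent episodes) is distilled into slow memory (weights) one interleaved batch at a time.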
Files in the repo:

- checkpoint
- research
- apollo_mini.py
- apollo_worker.py
- DESIGN.md
- export_weights.py
- start_vllm_with_apollo.sh
- train.py
- training_example.py
- vllm_export_hook.py
- weight_mapping.py