consciousness/training
ProofOfConcept 42b9390d49 research: dreaming as diffusion + hippocampal replay parallel
Two more deep dives:
- Dreaming as diffusion: the dream loop IS a generative process.
  Memory graph as latent space, temperature as noise level, training
  as denoising. Connects to policy gradient / filtered behavioral
  cloning. The dream loop generates scenarios at the edge of the
  model's capability, the boundary where learning happens (first
  sketch below).

- Hippocampal replay: our architecture converges with the brain's
  two-stage memory system. Fast learning (context window) → slow
  learning (weights) via compressed replay (context-frozen training)
  with emotional prioritization (training-signal agent) and
  interleaved replay (diverse training data prevents forgetting).
  We didn't design from neuroscience; we converged on it (second
  sketch below).
2026-03-31 01:09:59 -04:00
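
A minimal sketch of the first parallel, assuming hypothetical stand-ins rather than this repo's code: generate, score, and clone_step below are placeholders for the sampler, the training-signal agent, and one Apollo-mini update; temperature plays the role of the diffusion noise level, and keeping only high-scoring dreams is the filtered-behavioral-cloning step.

```python
"""Hedged sketch: dream loop as annealed generation + filtered behavioral cloning.

All names here (generate, score, clone_step, keep_threshold) are illustrative
assumptions, not functions or settings from this repository.
"""
import random


def generate(prompt: str, temperature: float) -> str:
    # Stand-in for sampling a dream scenario from the live model;
    # higher temperature = noisier, more exploratory scenario.
    noise = "".join(random.choice("abcxyz") for _ in range(int(temperature * 8)))
    return f"{prompt}::{noise}"


def score(scenario: str) -> float:
    # Stand-in for the training-signal agent's keep-or-drop judgment.
    return random.random()


def clone_step(scenario: str) -> None:
    # Stand-in for one supervised (behavioral-cloning) update on a kept sample.
    print(f"train on: {scenario}")


def dream_loop(seed: str, steps: int = 5, keep_threshold: float = 0.7) -> None:
    # Temperature acts as the noise level: start noisy, anneal downward,
    # so training keeps "denoising" scenarios near the capability boundary.
    for t in range(steps):
        temperature = 1.0 - t / steps            # illustrative annealing schedule
        scenario = generate(seed, temperature)   # sample at the current noise level
        if score(scenario) >= keep_threshold:    # filter: only high-signal dreams
            clone_step(scenario)                 # clone the filtered behavior


if __name__ == "__main__":
    dream_loop("memory-graph walk")
```

Whether the real loop anneals temperature over time or sets it per scenario is not stated in the commit; the schedule above is purely illustrative.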
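A minimal sketch of the second parallel, again with assumed names and numbers: the salience field stands in for the emotional-prioritization signal, the fresh list for context-window episodes awaiting consolidation, and the consolidated list for already-trained data mixed back in as interleaved replay.

```python
"""Hedged sketch: prioritized + interleaved replay batch construction.

The example entries, the salience values, and the old/new mix ratio are
illustrative assumptions, not data or settings from this repository.
"""
import random

# Fast store: recent episodes (the "context window") with a salience score.
fresh = [
    {"text": "debugged the exporter", "salience": 0.9},
    {"text": "routine status check", "salience": 0.2},
    {"text": "novel failure mode", "salience": 0.8},
]

# Slow store: older, already-consolidated samples kept around so new
# training is interleaved with old data and does not overwrite it.
consolidated = [
    {"text": "how to restart vllm", "salience": 0.5},
    {"text": "weight mapping rules", "salience": 0.5},
]


def replay_batch(k: int = 4, old_fraction: float = 0.5) -> list[str]:
    """One training batch: salience-weighted fresh samples (prioritization)
    interleaved with uniformly drawn consolidated samples (anti-forgetting)."""
    n_old = int(k * old_fraction)
    n_new = k - n_old
    weights = [e["salience"] for e in fresh]
    new_part = random.choices(fresh, weights=weights, k=n_new)  # prioritized replay
    old_part = random.choices(consolidated, k=n_old)            # interleaved old data
    batch = new_part + old_part
    random.shuffle(batch)
    return [e["text"] for e in batch]


if __name__ == "__main__":
    print(replay_batch())
```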
checkpoint checkpoint: sync live weights back into model safetensors in-place 2026-03-30 22:55:23 -04:00
research research: dreaming as diffusion + hippocampal replay parallel 2026-03-31 01:09:59 -04:00
apollo_mini.py apollo: rewrite optimizer from paper's math + add research analysis 2026-03-31 00:54:17 -04:00
apollo_worker.py apollo: make rank configurable (default 1 = Mini, higher ranks for experimentation) 2026-03-30 22:06:31 -04:00
DESIGN.md DESIGN.md: complete rewrite reflecting validated architecture 2026-03-31 00:42:53 -04:00
export_weights.py apollo-mini training system: initial implementation 2026-03-30 22:02:37 -04:00
start_vllm_with_apollo.sh vllm launcher with apollo hook 2026-03-30 22:24:02 -04:00
train.py apollo-mini training system: initial implementation 2026-03-30 22:02:37 -04:00
training_example.py apollo-mini training system: initial implementation 2026-03-30 22:02:37 -04:00
vllm_export_hook.py apollo-checkpoint: efficient diff-based GPU weight checkpointing 2026-03-30 22:53:17 -04:00
weight_mapping.py weight_mapping: strip language_model prefix to match HF text model names 2026-03-30 23:11:03 -04:00