A survey of 300+ papers confirms that no prior work combines full-weight training + Apollo + CUDA IPC + context-frozen + dream-loop curriculum + HOGWILD + memory graph. Each technique exists on its own; the combination is novel. Key validations: the flat-loss basin works in our favor, 25% replay achieves positive backward transfer, data quality beats quantity, and diversity beats regularization. Our multi-scale defense applies 3 of the 5 continual-learning (CL) technique categories simultaneously, which is unprecedented in the literature.
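The "25% replay" validation can be sketched as a batch builder that mixes a fixed fraction of replayed samples into each training batch. This is a minimal illustration only, not the actual implementation in `train.py`; the function and parameter names are hypothetical.

```python
import random

def mixed_batch(new_samples, replay_buffer, batch_size, replay_frac=0.25):
    """Build one training batch with ~replay_frac of older (replayed) samples.

    Illustrative sketch: a sample-level replay mix at 25%, the ratio the
    survey found to yield positive backward transfer.
    """
    # Cap replay count by what the buffer actually holds (e.g. early in training).
    n_replay = min(int(batch_size * replay_frac), len(replay_buffer))
    n_new = batch_size - n_replay

    batch = random.sample(new_samples, n_new)       # fresh data, no repeats
    batch += random.sample(replay_buffer, n_replay) # replayed data, no repeats
    random.shuffle(batch)                           # interleave old and new
    return batch
```

With `batch_size=32` and the default `replay_frac=0.25`, each batch carries 8 replayed samples and 24 new ones; a real pipeline would replay tokenized sequences, but the mixing logic is the same.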
| File |
|---|
| checkpoint |
| research |
| apollo_mini.py |
| apollo_worker.py |
| DESIGN.md |
| export_weights.py |
| first_training_step.py |
| start_vllm_with_apollo.sh |
| train.py |
| training_example.py |
| vllm_export_hook.py |
| weight_mapping.py |