Optimizer state (momentum, variance estimates) now persists between
training sessions:
- Saved to /tmp/apollo_optimizer_state.pt during checkpoint sync
- Restored on next /train call if available
- Preserves training continuity for incremental learning
Previously, each /train call started with a fresh optimizer state,
discarding the accumulated gradient history.
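A minimal sketch of the save/restore pattern, assuming a torch.optim
optimizer; only the path above comes from this change, the function
names are illustrative:

    import os
    import torch

    STATE_PATH = "/tmp/apollo_optimizer_state.pt"

    def save_optimizer_state(optimizer: torch.optim.Optimizer) -> None:
        # Persist momentum/variance buffers during checkpoint sync.
        torch.save(optimizer.state_dict(), STATE_PATH)

    def restore_optimizer_state(optimizer: torch.optim.Optimizer) -> None:
        # Restore only if a previous session left state behind;
        # otherwise the optimizer keeps its fresh (empty) state.
        if os.path.exists(STATE_PATH):
            optimizer.load_state_dict(torch.load(STATE_PATH))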
Co-Authored-By: Proof of Concept <poc@bcachefs.org>
Remove standalone worker.py daemon. Training now runs inside vLLM:
- train_router.py: FastAPI router patched into vLLM's build_app()
- /train served on same port as /completions, /score
- Lazy-loads HF model with vLLM weight views on first request
- HOGWILD-style training: no serving pause; weights are updated in place
  (sketched below)
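A hedged sketch of the router's shape; run_train_step and the request
schema are illustrative stand-ins, and only APIRouter/include_router
are standard FastAPI:

    from fastapi import APIRouter, FastAPI
    from pydantic import BaseModel

    router = APIRouter()

    class TrainRequest(BaseModel):
        prompts: list[str]
        targets: list[str]

    def run_train_step(prompts: list[str], targets: list[str]) -> float:
        # Stand-in for the real HOGWILD step against the shared weights.
        return 0.0

    @router.post("/train")
    async def train(req: TrainRequest) -> dict:
        # First call lazy-loads the HF module over vLLM weight views;
        # later calls reuse it, updating the serving weights in place.
        loss = run_train_step(req.prompts, req.targets)
        return {"loss": loss}

    def attach_training_routes(app: FastAPI) -> FastAPI:
        # The commit patches this into vLLM's build_app() so /train is
        # served on the same port as /completions and /score.
        app.include_router(router)
        return app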
The previous architecture had a separate daemon on port 8080 that
communicated with vLLM via pause/resume endpoints. This was wrong:
training should run in-process, sharing GPU memory directly.
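To illustrate the in-process sharing, a sketch of the weight-view idea.
How the vllm_weights mapping is obtained, and whether HF parameter
names line up with vLLM's (often fused) layouts, are assumptions here,
not necessarily what train_router.py does:

    import torch
    from transformers import AutoModelForCausalLM

    def lazy_load_with_weight_views(
        model_name: str, vllm_weights: dict[str, torch.Tensor]
    ):
        # Build the HF module, then alias its parameters to the tensors
        # vLLM already holds on the GPU; optimizer.step() then mutates
        # the serving weights directly, with no pause and no copy.
        hf_model = AutoModelForCausalLM.from_pretrained(model_name)
        for name, param in hf_model.named_parameters():
            if name in vllm_weights:
                param.data = vllm_weights[name]  # alias, not a copy
        return hf_model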
Co-Authored-By: Proof of Concept <poc@bcachefs.org>