consciousness/training/apollo_plugin
Kent Overstreet 039473d31f training: persist Apollo optimizer state across /train calls
Optimizer state (momentum, variance estimates) now persists between
training sessions:

- Saved to /tmp/apollo_optimizer_state.pt during checkpoint sync
- Restored on next /train call if available
- Preserves training continuity for incremental learning

Previously each /train call started with fresh optimizer state,
losing accumulated gradient history.

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-16 02:04:26 -04:00
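The persistence scheme described in the commit message (save optimizer state during checkpoint sync, restore it on the next /train call if the file exists, otherwise start fresh) can be sketched roughly as below. This is a hypothetical illustration, not the plugin's actual code: the function names `save_state`/`load_state` are invented, and the real implementation presumably serializes a `torch.optim` state dict with `torch.save`/`torch.load` rather than this stdlib `pickle` stand-in.

```python
import os
import pickle

# Path from the commit message; the .pt extension suggests torch.save
# in the real plugin. pickle is used here only to keep the sketch
# self-contained.
STATE_PATH = "/tmp/apollo_optimizer_state.pt"

def save_state(state: dict) -> None:
    """Persist optimizer state (momentum, variance estimates) during
    checkpoint sync so the next training session can resume it."""
    with open(STATE_PATH, "wb") as f:
        pickle.dump(state, f)

def load_state(fresh: dict) -> dict:
    """Restore accumulated gradient history from a prior /train call if
    a saved state exists; otherwise fall back to fresh optimizer state."""
    if os.path.exists(STATE_PATH):
        with open(STATE_PATH, "rb") as f:
            return pickle.load(f)
    return fresh
```

The restore-if-present fallback is what preserves training continuity across incremental /train calls while still behaving correctly on the first run, when no saved state exists yet.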
__init__.py training: integrate /train into vLLM process (no separate daemon) 2026-04-16 02:04:26 -04:00
checkpoint_sync.py training: restructure as vLLM plugin package 2026-04-15 23:16:53 -04:00
export_hook.py training: integrate /train into vLLM process (no separate daemon) 2026-04-16 02:04:26 -04:00
optimizer.py training: restructure as vLLM plugin package 2026-04-15 23:16:53 -04:00
steering.py training: restructure as vLLM plugin package 2026-04-15 23:16:53 -04:00
train_router.py training: persist Apollo optimizer state across /train calls 2026-04-16 02:04:26 -04:00
weight_mapping.py training: restructure as vLLM plugin package 2026-04-15 23:16:53 -04:00