On-policy rejected examples (the model's own failures) provide a better training signal than off-policy (pre-collected) ones. Our temperature sweep is on-policy by construction. DPO can accidentally reduce the likelihood of the preferred completion (DPOP fixes this). Multiple DPO variants exist; start with ORPO and switch only if specific failure modes are observed.
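A minimal sketch (not this repo's actual training code) of the two loss ideas mentioned above: the ORPO odds-ratio term, and a DPOP-style penalty that stops DPO from pushing down the preferred completion's likelihood. The function names, `avg_logp_*` inputs, and lambda values are illustrative assumptions; in DPOP proper the penalty sits inside the sigmoid of the DPO objective, shown here standalone for clarity.

```python
import torch
import torch.nn.functional as F

def orpo_loss(avg_logp_chosen, avg_logp_rejected, lam=0.1):
    """ORPO: SFT loss on the chosen completion plus a log-odds-ratio term.

    avg_logp_*: length-normalized log-probs of each completion under the
    current policy, shape (batch,). No reference model is needed.
    """
    # log odds(y) = log p - log(1 - p), computed stably from log p
    def log_odds(lp):
        p = torch.exp(lp).clamp(max=1 - 1e-6)
        return lp - torch.log1p(-p)

    ratio = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    sft = -avg_logp_chosen.mean()             # NLL of the preferred completion
    odds_term = -F.logsigmoid(ratio).mean()   # push chosen odds above rejected
    return sft + lam * odds_term

def dpop_penalty(logp_chosen_policy, logp_chosen_ref, lam=5.0):
    """DPOP-style term: penalize whenever the policy's likelihood of the
    *preferred* completion drops below the reference model's. Plain DPO
    allows this, since only the chosen/rejected margin is constrained."""
    return lam * torch.clamp(logp_chosen_ref - logp_chosen_policy, min=0).mean()
```

Here `avg_logp_*` would come from summing token log-probs over the completion and dividing by its token count; the DPOP penalty additionally needs chosen-completion log-probs under the frozen reference model.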
Repository contents:

- checkpoint
- research
- apollo_mini.py
- apollo_worker.py
- DESIGN.md
- export_weights.py
- extract_steering_vector.py
- first_training_step.py
- start_vllm_with_apollo.sh
- train.py
- training_example.py
- vllm_export_hook.py
- weight_mapping.py