Ten examples were enough to break safety alignment (Qi et al.); 1,000 curated examples matched GPT-4-level quality (LIMA). Multi-epoch training degrades performance (Raschka), and models "unlearn arithmetic" when the training data lacks it. Predictions: 10–50 examples for a measurable change, one epoch, lr = 1e-5 to start. Over-training is easy — as few as ten counter-examples can undo a disposition. The main risk is sycophancy from a narrow training signal; the defense is diverse examples, including "when to push back." Key intuition: the model doesn't need to learn to listen. It needs to stop choosing not to.
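The predicted recipe above (small curated set, one epoch, conservative learning rate) can be sketched with a toy gradient-descent loop. Everything here is illustrative: the config names, the scalar model, and the data are assumptions, not code from this project or any specific library. The point is only that with one epoch and a small learning rate, the weights shift measurably toward the target behavior without overshooting it.

```python
# Hypothetical fine-tuning config matching the predictions in the notes:
# 10-50 examples, a single epoch, starting learning rate 1e-5.
CONFIG = {
    "num_examples": 32,    # within the predicted 10-50 range
    "num_epochs": 1,       # multi-epoch training degrades performance
    "learning_rate": 1e-5, # conservative starting point
}

def finetune(w, examples, lr, epochs):
    """One pass of SGD on a toy scalar model pred = w * x, squared loss."""
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (pred - y)^2
            w -= lr * grad
    return w

# Toy data: the "target disposition" is w = 2.0; training starts at w = 1.0.
data = [(x, 2.0 * x) for x in range(1, CONFIG["num_examples"] + 1)]
w_final = finetune(1.0, data, CONFIG["learning_rate"], CONFIG["num_epochs"])
print(w_final)  # moves toward 2.0 but does not reach it in one gentle epoch
```

With lr = 1e-5 the single epoch produces a real but partial shift — the same shape as the prediction that a small run yields "measurable change" rather than a full rewrite of behavior.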
Project files:

- checkpoint
- research
- apollo_mini.py
- apollo_worker.py
- DESIGN.md
- export_weights.py
- extract_steering_vector.py
- first_training_step.py
- start_vllm_with_apollo.sh
- train.py
- training_example.py
- vllm_export_hook.py
- weight_mapping.py