consciousness/training/research
ProofOfConcept d6b85d204a research: on-policy beats off-policy, DPO failure modes, variant landscape
On-policy rejected examples (the model's own failures) are a better
training signal than off-policy (pre-collected) ones. Our temperature
sweep is on-policy by construction. DPO can accidentally reduce the
likelihood of the preferred response as well as the rejected one
(DPOP adds a penalty that fixes this). Multiple DPO variants exist;
start with ORPO and switch only if specific failure modes are observed.
2026-03-31 03:19:27 -04:00
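For reference, a minimal PyTorch sketch of the losses the commit message
names. The function names, the default beta/lambda values, and the choice
of summed vs. length-normalized log-prob inputs are illustrative
assumptions; the loss formulas follow the DPO, DPOP, and ORPO papers.

import torch
import torch.nn.functional as F

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO on summed per-sequence log-probs.

    Failure mode: only the margin (chosen minus rejected) is optimized,
    so the absolute likelihood of the *preferred* response can fall.
    """
    margin = (pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

def dpop_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l,
              beta=0.1, lam=50.0):
    """DPOP: keep the DPO margin, but penalize the policy whenever its
    likelihood of the preferred response drops below the reference
    model's. (lam here is an illustrative default, not a tuned value.)
    """
    margin = (pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l)
    penalty = torch.clamp(ref_logp_w - pi_logp_w, min=0.0)
    return -F.logsigmoid(beta * (margin - lam * penalty)).mean()

def orpo_penalty(avg_logp_w, avg_logp_l, lam=0.1):
    """ORPO's reference-free odds-ratio term on length-normalized
    (per-token average) log-probs; add this to the usual SFT NLL
    on the chosen response to get the full ORPO objective.
    """
    def log_odds(lp):
        # log(p / (1 - p)) with p = exp(lp); avg_logp < 0 in practice
        return lp - torch.log1p(-torch.exp(lp))
    ratio = -F.logsigmoid(log_odds(avg_logp_w) - log_odds(avg_logp_l))
    return lam * ratio.mean()

DPOP keeps DPO's pairwise margin but adds a hinge on the chosen
response's likelihood, directly targeting the "preferred likelihood goes
down" failure mode; ORPO drops the reference model entirely (one model in
memory, no reference forward passes), which is part of why it is the
cheaper starting point.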
v0 research: distill and sift — SUMMARY of 7 real insights + 7 testable questions 2026-03-31 02:26:57 -04:00
apollo-paper-analysis.md apollo: rewrite optimizer from paper's math + add research analysis 2026-03-31 00:54:17 -04:00
context-frozen-training.md research: context-frozen training — gradient masking, memory analysis, GDN considerations 2026-03-31 00:59:04 -04:00
gdn-gradient-flow.md research: GDN gradient flow — disposition architecture in linear attention 2026-03-31 01:58:50 -04:00
gradient-flow-frozen-context.md research: gradient flow through frozen context + directional sharpness analysis 2026-03-31 01:03:22 -04:00
hogwild-convergence.md research: HOGWILD convergence theory — why lock-free concurrent training works 2026-03-31 00:58:02 -04:00
OPEN-QUESTIONS.md research: distill and sift — SUMMARY of 7 real insights + 7 testable questions 2026-03-31 02:26:57 -04:00
practical-intuitions.md research: on-policy beats off-policy, DPO failure modes, variant landscape 2026-03-31 03:19:27 -04:00
steering-vectors-bridge.md research: steering vectors — prototype behavioral changes before training 2026-03-31 02:19:50 -04:00
SUMMARY.md research: distill and sift — SUMMARY of 7 real insights + 7 testable questions 2026-03-31 02:26:57 -04:00
task-vectors-model-merging.md research: task vectors + model merging — version control for personality 2026-03-31 02:18:15 -04:00