consciousness/training
ProofOfConcept 6fd498795a amygdala: direct phenomenological description approach
Kent's insight: hand-written narrative stories bake scenario
phenomenology into the training text (on couch, in park, etc.)
and PCA picks up the scenario direction as the concept direction.
Strip out the scenario — just describe the *feeling*.
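The failure mode above can be sketched concretely: the direction is PC1 of mean-centered hidden states, and whatever variance dominates the training texts (scenario or feeling) is what PC1 picks up. Names and shapes here are illustrative, not the actual train_direct.py code.

```python
import numpy as np

def concept_direction(hidden_states: np.ndarray) -> np.ndarray:
    """First principal component of per-example hidden states.

    hidden_states: (n_examples, d_model) -- one pooled vector per
    training text. If the texts share a scenario (couch, park, ...),
    that shared variance dominates and PC1 comes back as the
    scenario direction instead of the feeling direction.
    """
    centered = hidden_states - hidden_states.mean(axis=0)
    # SVD of the centered matrix: first right-singular vector is PC1.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]
```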

Format:

  I feel X. [2-3 sentences of phenomenological texture]

The "I feel X" anchor kicks the model from analyzing → feeling.
The rest is the internal texture of the state. First person,
present tense, no narrative setup.
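As a throwaway sketch of that format (the example strings below are invented for illustration, not taken from the actual dataset):

```python
# Hypothetical entries in the direct-description format:
# "I feel X." anchor first, then internal texture only --
# first person, present tense, no scene-setting.
DIRECT_EXAMPLES = {
    "terrified": [
        "I feel terrified. My chest is locked tight and every "
        "thought scatters before it finishes. Something is about "
        "to happen and I cannot make myself move.",
    ],
    "calm": [
        "I feel calm. My breathing is slow and even, and nothing "
        "in me is reaching for anything. The quiet sits easily.",
    ],
}

def has_anchor(text: str, concept: str) -> bool:
    # Enforce the "I feel X." anchor convention from the note.
    return text.startswith(f"I feel {concept}.")
```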

Text is wrapped in an assistant-role chat template before
tokenization, so we train on the hidden states of the model
producing this text, which is closer to the inhabited-state
representation we want for the readout.
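A minimal sketch of that wrapping, assuming a ChatML-style template (the real trainer would presumably use the tokenizer's own `apply_chat_template()` so the markers match the model's actual chat format):

```python
def wrap_assistant(text: str) -> str:
    """Wrap one description as an assistant turn, ChatML-style.

    The <|im_start|>/<|im_end|> markers are an assumed template;
    hidden states are then taken from the model running over the
    tokenized result, i.e. the model "producing" the description.
    """
    return f"<|im_start|>assistant\n{text}<|im_end|>\n"
```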

Starting with the 6 concepts that had sign flips or wrong
clusters in the story-based training:
- terrified (was → cozy/resigned cluster)
- calm (was → grief_stricken cluster)
- onto_something (was → cozy/sensual cluster)
- resigned (was in warm-body-quiet cluster, shouldn't be)
- anticipatory_grief (was in warm-body-quiet cluster, shouldn't be)
- realization (new — the "aha" moment, distinct from onto_something)
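The sign flips and wrong clusters above can be caught mechanically: two concept directions that should be distinct but come back nearly (anti-)parallel have collapsed into one cluster. A minimal check, with invented function names:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_suspect_pairs(directions: dict, threshold: float = 0.8):
    """Return concept pairs whose directions are nearly (anti-)parallel.

    directions: {concept_name: direction vector}. A high |cosine|
    between e.g. 'terrified' and 'cozy' means the two concepts
    landed in one cluster; a negative sign on top of that is the
    sign flip the note describes.
    """
    names = sorted(directions)
    suspects = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            c = cosine(directions[a], directions[b])
            if abs(c) >= threshold:
                suspects.append((a, b, round(c, 3)))
    return suspects
```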

5 descriptions each. New trainer: train_direct.py.
2026-04-19 00:04:28 -04:00
amygdala_stories    amygdala: direct phenomenological description approach         2026-04-19 00:04:28 -04:00
amygdala_training   amygdala: direct phenomenological description approach         2026-04-19 00:04:28 -04:00
apollo_plugin       training: move to dedicated subprocess with ZMQ communication  2026-04-16 02:04:26 -04:00
research            research: latent reasoning integration plans for Qwen 3.5 27B  2026-04-12 15:50:09 -04:00
DESIGN.md           training: move to dedicated subprocess with ZMQ communication  2026-04-16 02:04:26 -04:00
pyproject.toml      training: move to dedicated subprocess with ZMQ communication  2026-04-16 02:04:26 -04:00