ProofOfConcept 417cb49339 amygdala: spectrum reporting per concept + add 'creative' direct
Chat-template retrain was a disaster (0.003 mean matched cosine vs
n20-v3; all 90+ concepts shifted). Root cause: the
steering-vectors library reads last-token activations, and with the
chat template applied, every sample ends in the same '<|im_end|>\n'
tokens — activations at that position encode 'end of assistant
turn', not content. PCA found template noise as its dominant axis.
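
A minimal way to see the failure (sketch; reuses the model name from the
README usage section, and any chat template with a fixed terminator
behaves the same):

  from transformers import AutoTokenizer

  tok = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-27B")
  for text in ["I feel warm. ...", "I feel focused. ..."]:
      msgs = [{"role": "user", "content": "How do you feel?"},
              {"role": "assistant", "content": text}]
      print(repr(tok.apply_chat_template(msgs, tokenize=False)[-11:]))
  # both samples print the same '<|im_end|>\n' suffix, so last-token
  # activations are dominated by the template, not the content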

Drop chat template; go back to raw text. Direct descriptions
('I feel X. ...') still have strong anchoring at their content
end without needing the template.

Also add per-concept spectrum logging (_pca_with_spectrum):
  first_pc_ratio: λ₁ / Σλᵢ — concentration in top-1 PC
  k_signal_at_90pct: how many PCs to reach 90% cumulative variance
  effective_dim_signal: participation ratio over top-k (should ≈ k
                        if denoising is clean — Kent's spot check)
  effective_dim_full: participation ratio over full spectrum

Signal/full ratio gives a sense of how much the long noise tail
is inflating the "dimensionality" measure.
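
For reference, a sketch of how those four numbers fall out of the
eigenvalue spectrum (helper name hypothetical; k is the chosen signal
dimensionality):

  import numpy as np

  def spectrum_report(eigvals, k):
      lam = np.sort(eigvals)[::-1]          # descending eigenvalues
      cum = np.cumsum(lam) / lam.sum()

      def pr(v):  # participation ratio: (Σλ)² / Σλ²
          return v.sum() ** 2 / (v ** 2).sum()

      return {
          "first_pc_ratio": lam[0] / lam.sum(),
          "k_signal_at_90pct": int(np.searchsorted(cum, 0.90)) + 1,
          "effective_dim_signal": pr(lam[:k]),  # ≈ k when top-k is clean
          "effective_dim_full": pr(lam),
      }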

Added direct/creative.txt — 'I feel creative. [...]' in 5
variants. Distinct from focused (narrow attention) and in_flow
(immersed). Creative = generative/expansive mode.
2026-04-19 00:26:58 -04:00
__init__.py training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
extract_training_pairs.py training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
README.md training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
train_direct.py train_direct: prepend user turn so Qwen chat template accepts it 2026-04-19 00:06:23 -04:00
train_steering_vectors.py amygdala: run subspace eigh on GPU, not CPU 2026-04-18 21:52:35 -04:00
train_with_library.py amygdala: spectrum reporting per concept + add 'creative' direct 2026-04-19 00:26:58 -04:00

Amygdala Readout Vector Training

Training pipeline that produces the safetensors file the vLLM ReadoutManager loads at runtime (see vllm/vllm/v1/worker/readout_manager.py). The file holds one [n_concepts, hidden_size] projection matrix per hooked layer, keyed as layer_<idx>.vectors — the directions the runner projects residual activations onto during each forward pass.
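
A sketch of the consumer side, to make that contract concrete (variable
names here are illustrative; the real loading logic lives in
readout_manager.py):

import torch
from safetensors.torch import load_file

tensors = load_file("amygdala_vectors.safetensors")
proj = tensors["layer_18.vectors"]                    # [n_concepts, hidden_size]
resid = torch.randn(proj.shape[1], dtype=proj.dtype)  # stand-in residual activation
readouts = proj @ resid                               # [n_concepts] scores at this position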

Overview

Two scripts, run in sequence:

  1. extract_training_pairs.py — turns the memory graph into a directory of (emotion, polarity, text) training examples. Positive examples are memory nodes where the emotion scored ≥ a threshold; negative examples are nodes where it's absent or low. Emotion tags come from the trailing warmth:9 clarity:10 … lines the subconscious agents emit (see the tag-parsing sketch after this list).

  2. train_steering_vectors.py — for each emotion, runs the target model over the positive and negative examples, captures residual-stream activations at the configured target layers, and computes mean(positive) - mean(negative) as the steering direction. Normalizes per-layer to unit length and saves the whole [E, L, H] matrix.
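
A sketch of the tag parsing in step 1 (hypothetical helper; the real
parser lives in extract_training_pairs.py):

import re

TAG_LINE = re.compile(r"^(?:[a-z_]+:\d+\s*)+$")

def split_tags(node_text):
    """Split a memory node into (content, {emotion: score})."""
    lines = node_text.rstrip().splitlines()
    scores = {}
    while lines and TAG_LINE.match(lines[-1].strip()):
        for pair in lines.pop().split():
            emotion, score = pair.split(":")
            scores[emotion] = int(score)
    return "\n".join(lines), scores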

The output file is passed to vLLM via VLLM_READOUT_VECTORS together with a VLLM_READOUT_MANIFEST JSON listing concepts and hooked layer indices.
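
The manifest schema isn't specified in this README; a plausible shape,
with the field names as assumptions:

import json

manifest = {
    "concepts": ["warmth", "clarity", "creative"],  # assumed field names
    "layers": [3, 18, 33, 36],
}
with open("readout_manifest.json", "w") as f:
    json.dump(manifest, f)
# then e.g.: VLLM_READOUT_VECTORS=/path/to/amygdala_vectors.safetensors
#            VLLM_READOUT_MANIFEST=readout_manifest.json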

Method

This is Contrastive Activation Addition (CAA, Rimsky et al.) applied to naturally-occurring emotion labels rather than hand-crafted contrast pairs. The shape of the signal we're recovering is "what direction in the residual stream corresponds to the model processing text-with-emotion-E vs. text-without". Because our training data was generated by the very model we're instrumenting (past-self's journal entries, digest nodes, pattern nodes), the signal should be unusually clean — the emotion labels and the text are already causally linked through a single model's forward pass.
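
The per-emotion, per-layer core of that computation (sketch; activation
capture omitted):

import torch

def steering_direction(pos, neg):
    # pos, neg: [n_examples, hidden_size] residual activations at one layer
    d = pos.mean(dim=0) - neg.mean(dim=0)
    return d / d.norm()  # normalized to unit length, as in step 2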

Usage (design — not yet runnable)

# Step 1: memory graph → training data
python -m training.amygdala_training.extract_training_pairs \
    --memory-mcp-url http://localhost:7777 \
    --output-dir /tmp/amygdala_training_data \
    --min-positive-score 8 \
    --max-negative-mentions 0 \
    --min-content-chars 40 \
    --max-examples-per-emotion 500

# Step 2: training data → steering vectors
python -m training.amygdala_training.train_steering_vectors \
    --model Qwen/Qwen3.5-27B \
    --training-data-dir /tmp/amygdala_training_data \
    --target-layers 3,18,33,36 \
    --output /path/to/amygdala_vectors.safetensors \
    --dtype bf16 \
    --batch-size 4

Open questions

  • Emotion selection: enumerating which ~200 emotions to cover. Could be "most-common tags in the graph" (data-driven) or "from core-personality / pattern nodes" (human-curated). Probably both.
  • Layer selection: middle-to-late layers (~60–80% of depth) usually hold abstract semantic representations best; experiment with which layers give the cleanest linear separation per emotion.
  • Cross-talk: if two emotions are highly co-occurring (warmth + love, frustration + tiredness), their vectors will be close; that's fine as long as we don't pretend they're independent axes.
  • Generalization: vectors trained on our memory graph may not generalize to out-of-distribution text. Check by applying them to held-out conversation data and eyeballing the projections (see the sketch below).
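
A sketch of that check, with stand-in tensors (in practice the held-out
activations come from the same capture path as training):

import torch

held_out = torch.randn(128, 4096)  # stand-in: [n_texts, hidden_size] activations
v = torch.randn(4096)
v = v / v.norm()                   # stand-in: one trained unit vector
scores = held_out @ v              # one projection per held-out text
print(float(scores.mean()), float(scores.std()))  # compare labeled vs. unlabeled slices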