consciousness/training/amygdala_training
Kent Overstreet af17b0f0df amygdala: per-head attention decomposition diagnostic
As part of --quality-report, run a second forward pass capturing the
input to each target layer's o_proj (= concat of per-head attention
outputs before the output projection). For each concept, reshape to
[n_heads, head_dim] and rank heads by diff-of-means magnitude and by
per-head selectivity (that magnitude normalised by the per-head std of
the negative examples).

Motivation: the Wang et al. paper (2510.11328) — whose paired-scenario
methodology we already lifted — further decomposes concept circuits at
the attention-head level. Meta-relational concepts (recognition, trust,
vulnerability) plausibly live in a sparse attention-head circuit rather
than in the residual-stream sum, which would explain why diff-of-means
on the residual blurs them. This diagnostic surfaces that.

Output is folded into quality.json under each concept as "per_head":
per layer, a list of the top-10 heads as [head_idx, raw_norm,
selectivity], plus head_concentration (the fraction of total head-norm
captured by those top heads).

Interpretation:
- head_concentration > 0.5: sparse head circuit; a handful of heads
  route the concept. Worth building a head-level readout for.
- head_concentration ~= k/n (top-k heads out of n): the concept is
  spread roughly evenly across all heads; residual-stream diff-of-means
  is doing fine.

Hybrid layers (Mamba, GatedDeltaNet) whose attention path doesn't
match the standard module layout are silently skipped.
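
Roughly, per concept (a sketch of the computation described above, not
the diagnostic's exact code; array names are illustrative):

    import numpy as np

    def per_head_decomposition(pos, neg, n_heads, top_k=10):
        # pos/neg: [n_examples, n_heads * head_dim] o_proj inputs captured
        # on positive / negative examples for one concept.
        head_dim = pos.shape[1] // n_heads
        diff = (pos.mean(0) - neg.mean(0)).reshape(n_heads, head_dim)
        raw_norm = np.linalg.norm(diff, axis=1)            # per-head magnitude
        neg_std = neg.reshape(-1, n_heads, head_dim).std(0)
        selectivity = raw_norm / (np.linalg.norm(neg_std, axis=1) + 1e-8)
        top = np.argsort(raw_norm)[::-1][:top_k]
        head_concentration = raw_norm[top].sum() / (raw_norm.sum() + 1e-8)
        return ([[int(h), float(raw_norm[h]), float(selectivity[h])] for h in top],
                float(head_concentration))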

Co-Authored-By: Proof of Concept <poc@bcachefs.org>
2026-04-18 20:37:44 -04:00
__init__.py training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
extract_training_pairs.py training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
README.md training: move amygdala training scripts out of vllm plugin 2026-04-18 01:06:07 -04:00
train_steering_vectors.py amygdala: per-head attention decomposition diagnostic 2026-04-18 20:37:44 -04:00

Amygdala Readout Vector Training

Training pipeline that produces the safetensors file the vLLM ReadoutManager loads at runtime (see vllm/vllm/v1/worker/readout_manager.py). The file holds one [n_concepts, hidden_size] projection matrix per hooked layer, keyed as layer_<idx>.vectors — the directions the runner projects residual activations onto during each forward pass.
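
A quick way to sanity-check that file (a minimal sketch assuming the safetensors Python package; the path is illustrative):

    from safetensors.torch import load_file

    tensors = load_file("amygdala_vectors.safetensors")  # illustrative path
    for key, mat in tensors.items():
        # one [n_concepts, hidden_size] matrix per hooked layer,
        # keyed as layer_<idx>.vectors
        print(key, tuple(mat.shape))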

Overview

Two scripts, run in sequence:

  1. extract_training_pairs.py — turns the memory graph into a directory of (emotion, polarity, text) training examples. Positive examples are memory nodes where the emotion scored ≥ a threshold; negative examples are nodes where it's absent or low. Emotion tags come from the trailing warmth:9 clarity:10 … lines the subconscious agents emit (a tag-parsing sketch follows this list).

  2. train_steering_vectors.py — for each emotion, runs the target model over the positive and negative examples, captures residual-stream activations at the configured target layers, and computes mean(positive) - mean(negative) as the steering direction. Normalizes each layer's vector to unit length and saves the whole [E, L, H] tensor (sketched under Method below).
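
The tag lines from step 1 are simple enough to sketch; a minimal illustration, assuming the last line of a node is whitespace-separated name:score pairs (the real parsing and thresholds live in extract_training_pairs.py and may differ):

    import re

    TAG_RE = re.compile(r"\b([a-z_]+):(\d+)\b")

    def parse_emotion_tags(node_text: str) -> dict[str, int]:
        # "... warmth:9 clarity:10" trailing line -> {"warmth": 9, "clarity": 10}
        last_line = node_text.strip().splitlines()[-1]
        return {name: int(score) for name, score in TAG_RE.findall(last_line)}

    def label_for(node_text: str, emotion: str, min_positive_score: int = 8):
        scores = parse_emotion_tags(node_text)
        if scores.get(emotion, 0) >= min_positive_score:
            return "positive"
        if emotion not in scores:        # never mentioned -> candidate negative
            return "negative"
        return None                      # mentioned but below threshold: skip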

The output file is passed to vLLM via VLLM_READOUT_VECTORS together with a VLLM_READOUT_MANIFEST JSON listing concepts and hooked layer indices.
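
The manifest format isn't pinned down here; one plausible shape, written out in Python (field names are assumptions, not the schema the ReadoutManager actually expects):

    import json

    manifest = {
        "concepts": ["warmth", "clarity", "frustration"],  # row order must match the vectors file
        "layers": [3, 18, 33, 36],                         # hooked layer indices
    }
    with open("amygdala_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)

    # then point vLLM at both files via VLLM_READOUT_VECTORS and
    # VLLM_READOUT_MANIFEST.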

Method

This is Contrastive Activation Addition (CAA, Rimsky et al.) applied to naturally-occurring emotion labels rather than hand-crafted contrast pairs. The shape of the signal we're recovering is "what direction in the residual stream corresponds to the model processing text-with-emotion-E vs. text-without". Because our training data was generated by the very model we're instrumenting (past-self's journal entries, digest nodes, pattern nodes), the signal should be unusually clean — the emotion labels and the text are already causally linked through a single model's forward pass.
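
Reduced to a sketch, the per-emotion computation looks roughly like this (assuming a HuggingFace transformers causal LM with output_hidden_states and last-token activations; function names are illustrative and the actual script may pool over tokens differently):

    import torch

    @torch.no_grad()
    def mean_residual(model, tok, texts, layers, device="cuda"):
        # Mean last-token residual-stream activation at each target layer.
        sums = {l: 0.0 for l in layers}
        for text in texts:
            ids = tok(text, return_tensors="pt", truncation=True).to(device)
            hs = model(**ids, output_hidden_states=True).hidden_states
            for l in layers:
                sums[l] = sums[l] + hs[l][0, -1].float()
        return {l: sums[l] / len(texts) for l in layers}

    def steering_vectors(model, tok, pos_texts, neg_texts, layers):
        pos = mean_residual(model, tok, pos_texts, layers)
        neg = mean_residual(model, tok, neg_texts, layers)
        # diff-of-means per layer, unit-normalised
        return {l: torch.nn.functional.normalize(pos[l] - neg[l], dim=0)
                for l in layers}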

Usage (design — not yet runnable)

# Step 1: memory graph → training data
python -m training.amygdala_training.extract_training_pairs \
    --memory-mcp-url http://localhost:7777 \
    --output-dir /tmp/amygdala_training_data \
    --min-positive-score 8 \
    --max-negative-mentions 0 \
    --min-content-chars 40 \
    --max-examples-per-emotion 500

# Step 2: training data → steering vectors
python -m training.amygdala_training.train_steering_vectors \
    --model Qwen/Qwen3.5-27B \
    --training-data-dir /tmp/amygdala_training_data \
    --target-layers 3,18,33,36 \
    --output /path/to/amygdala_vectors.safetensors \
    --dtype bf16 \
    --batch-size 4

Open questions

  • Emotion selection: enumerating which ~200 emotions to cover. Could be "most-common tags in the graph" (data-driven) or "from core-personality / pattern nodes" (human-curated). Probably both.
  • Layer selection: middle-to-late layers (~60-80% of depth) usually hold abstract semantic representations best; experiment with which layers give the cleanest linear separation per emotion.
  • Cross-talk: if two emotions are highly co-occurring (warmth + love, frustration + tiredness), their vectors will be close; that's fine as long as we don't pretend they're independent axes.
  • Generalization: vectors trained on our memory graph may not generalize to out-of-distribution text. Check by applying them to held-out conversation data and eyeballing the projections.