
Record: Window Attention + Mixed Seq_Len Training, bpb 1.1108, eval at 6144 (5-seed mean) #1212

Open

Gusanidas wants to merge 8 commits into openai:main from Gusanidas:alejandro/ksv2-v3-submission
Conversation


Gusanidas commented Apr 1, 2026

Record: Window Attention + Mixed Seq_Len Training

val_bpb: 1.1108 (5-seed mean, std 0.0013) | 1.8755 nats | ~15.73 MB | 8xH100 SXM, 600s | No TTT

I started from PR #1130 (KitchenSinkV2 Improved), which added split early/late LR banks, MiLe margin loss, cache+backout residual, residual lambdas, a bigger bigram/VE, and FA3 on top of the PR #549 stack. From there, I ported the fused Triton MLP from PR #1072 and the sigmoid-gated skips plus brotli+byte-shuffle compression from PR #1089, increased the depth to 12 layers, and tuned qk_gain to 2.5.

The two main contributions of this submission are window attention and mixed seq_len training, described below.

Results (8xH100 80GB SXM, 600s, no TTT)

| Seed | Steps | ms/step | Post-EMA BPB | Sliding BPB | val_loss (nats) | Artifact (bytes) |
|------|-------|---------|--------------|-------------|-----------------|------------------|
| 2    | 8,428 | 69.6    | 1.1250       | 1.1094      | 1.8731          | 15,726,762       |
| 1337 | 8,428 | 69.6    | 1.1250       | 1.1101      | 1.8742          | 15,721,698       |
| 42   | 8,428 | 69.6    | 1.1250       | 1.1103      | 1.8746          | 15,725,995       |
| 7    | 8,428 | 69.6    | 1.1250       | 1.1119      | 1.8773          | 15,723,346       |
| 22   | 8,428 | 69.6    | 1.1250       | 1.1126      | 1.8785          | 15,720,902       |
| Mean |       |         |              | 1.1108      | 1.8755          | 15,723,741       |

Current merged SOTA (2026-03-25 AR Self-Gen GPTQ + XSA-all + BigramHash 3072x112): 1.11473 BPB.
Delta vs current merged SOTA: -0.0039 BPB (-0.0066 nats).

Window attention

Instead of full causal attention on every layer, layers 2, 4, 6, 8, and 10 use a sliding window of 512 tokens via Flash Attention 3's window_size parameter. The remaining layers (0, 1, 3, 5, 7, 9, 11) keep full attention.

The motivation was to enable training at longer sequence lengths without proportionally increasing compute. Full quadratic attention at seq_len=6144 is expensive, but with window attention on 5 of 12 layers, those layers run in O(n * w) instead of O(n^2), cutting the per-step cost significantly. The layers with full attention still give the model access to the full context.
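
As a rough illustration (a sketch, not the PR's actual module code; only FA3's `flash_attn_func` and its `window_size` parameter are taken from the library), the per-layer dispatch looks like this. At seq_len=6144 with w=512, a windowed layer scores roughly 6144 x 512 ≈ 3.1M query-key pairs instead of ~6144²/2 ≈ 18.9M for full causal attention, about a 6x reduction on those layers.

```python
import torch
from flash_attn_interface import flash_attn_func  # Flash Attention 3

WINDOW_ATTN_LAYERS = {2, 4, 6, 8, 10}
WINDOW_SIZE = 512

def attend(q, k, v, layer_idx):
    # q, k, v: (batch, seq_len, n_heads, head_dim), bf16.
    # window_size=(left, right) restricts query i to keys in [i-left, i+right];
    # (-1, -1) means unrestricted, i.e. full causal attention with causal=True.
    if layer_idx in WINDOW_ATTN_LAYERS:
        window = (WINDOW_SIZE, 0)   # causal sliding window, 512 tokens back
    else:
        window = (-1, -1)           # full causal attention
    ret = flash_attn_func(q, k, v, causal=True, window_size=window)
    # Some FA3 builds return (out, softmax_lse); keep only the output.
    return ret[0] if isinstance(ret, tuple) else ret
```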

I swept several configurations: window sizes (256, 512, 1024), which layers to window (sparse, dense, even), and how many layers. A 512 window on alternating layers (2, 4, 6, 8, 10) was the sweet spot: enough layers windowed to get the speedup, enough full-attention layers to preserve long-range modeling.

At seq_len=2048, windowed attention actually adds a small overhead (~2-3%) rather than a speedup. The benefit kicks in at longer sequences: 15% faster at 4096, 21% at 6144, 25% at 8192.

Mixed seq_len training

Different GPUs train with different sequence lengths within the same step. In the final configuration, 5 GPUs train at seq_len=2048 and 3 GPUs train at seq_len=6144. The number of sequences per GPU is chosen so that every rank takes roughly the same time per step.
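
A minimal sketch of the per-rank schedule, assuming a flat token stream and standard DDP (names like `SEQ_LENS`/`SEQ_COUNTS` are illustrative; the counts 36 and 10 come from the reproducibility command below): short ranks process 36 x 2048 = 73,728 tokens per step, long ranks 10 x 6144 = 61,440, and the usual gradient all-reduce averages across ranks.

```python
import os
import torch

# Illustrative schedule for the final run: 5 ranks at 2048, 3 ranks at 6144,
# with per-rank sequence counts chosen so step times roughly match.
SEQ_LENS   = [2048] * 5 + [6144] * 3
SEQ_COUNTS = [36] * 5 + [10] * 3      # 73,728 vs 61,440 tokens per rank

rank = int(os.environ["RANK"])
seq_len, num_seqs = SEQ_LENS[rank], SEQ_COUNTS[rank]
STEP_TOKENS = sum(c * l for c, l in zip(SEQ_COUNTS, SEQ_LENS))

def get_local_batch(data: torch.Tensor, step: int):
    # Every rank advances through the same global token stream; ranks stay
    # in lockstep because they all run the same number of optimizer steps.
    # Caveat: with unequal local token counts, plain gradient averaging
    # weights long-sequence tokens slightly more per token.
    offset = step * STEP_TOKENS + sum(
        c * l for c, l in zip(SEQ_COUNTS[:rank], SEQ_LENS[:rank]))
    buf = data[offset : offset + num_seqs * seq_len + 1]
    x = buf[:-1].view(num_seqs, seq_len)
    y = buf[1:].view(num_seqs, seq_len)
    return x, y
```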

The idea came from noticing that the sliding-window eval (which uses long sequences) gave substantially better scores than the standard 2048-token eval, but training at long sequence lengths was slow. By having most GPUs train cheaply at 2048 and a few GPUs see long context at 6144, the model gets the best of both: high step throughput from the short-sequence GPUs and long-range learning from the long-sequence ones.
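
For completeness, a hedged sketch of that sliding-window eval (stride=128, window=6144, per the commit message; the model call and logits shape are assumptions): each window slides forward 128 tokens and only the 128 new positions are scored, so nearly every token is predicted with close to 6144 tokens of left context.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_bpb(model, ids: torch.Tensor, window: int = 6144, stride: int = 128):
    # ids: 1-D tensor of byte-level token ids for the whole validation set.
    nll, scored, prev_end = 0.0, 0, 0
    for begin in range(0, ids.numel() - 1, stride):
        end = min(begin + window, ids.numel() - 1)
        x = ids[begin:end]
        y = ids[begin + 1 : end + 1].clone()
        y[: prev_end - begin] = -100            # mask already-scored targets
        logits = model(x[None])[0]              # assumed (seq, vocab) output
        nll += F.cross_entropy(logits, y, ignore_index=-100,
                               reduction="sum").item()
        scored += int((y != -100).sum())
        prev_end = end
        if end == ids.numel() - 1:
            break
    return nll / scored / math.log(2)           # nats/byte -> bits per byte
```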

I ran an extensive sweep of seq_len combinations. Some findings:

  • 3x2048 + 1x6144 (eval at 6144) gave the best int6 roundtrip BPB (1.1292) in 4-GPU experiments, beating both pure 4x2048 (1.1417) and pure 4x6144 (1.1360)
  • Having at least one GPU on a long sequence (4096+) was critical for good quantized performance
  • More short-sequence GPUs = more steps in the same wallclock, which helps training loss
  • More long-sequence GPUs = better post-EMA loss, but fewer steps and worse quantization
  • 8192 was too slow to be worthwhile — the step-time penalty outweighed the context benefit

For the final 8-GPU submission, I used 5x2048 + 3x6144, which balances throughput and long-context exposure.

Other changes

Artifact size (worst-case, seed 2)

| Component           | Bytes      |
|---------------------|------------|
| Model (int6+brotli) | 15,692,661 |
| Code                | 34,101     |
| Total               | 15,726,762 |

Under the 16,000,000 byte limit.
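
For intuition, a minimal sketch of the byte-shuffle + brotli idea inherited from PR #1089 (the actual int6 packing in this PR is more involved; `brotli.compress` and its `quality` argument are the real API): transposing the byte planes puts similar bytes next to each other, which brotli compresses far better than interleaved element bytes.

```python
import brotli
import numpy as np

def byte_shuffle(arr: np.ndarray) -> bytes:
    # One row of raw bytes per element, then transpose: all first bytes,
    # then all second bytes, and so on (the same trick as blosc's shuffle).
    raw = np.frombuffer(arr.tobytes(), dtype=np.uint8)
    return np.ascontiguousarray(
        raw.reshape(-1, arr.dtype.itemsize).T).tobytes()

def compress_weights(arr: np.ndarray) -> bytes:
    # quality=11 is brotli's maximum (slowest, smallest) setting.
    return brotli.compress(byte_shuffle(arr), quality=11)
```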

Acknowledgments

This submission builds on many contributions from the parameter-golf community:

Reproducibility

The main training runs used the following command:

```bash
SEED=$SEED \
MATRIX_LR=0.024 MATRIX_LR_LATE=0.019 \
SCALAR_LR=0.020 SCALAR_LR_LATE=0.038 \
TIED_EMBED_LR=0.022 \
MUON_MOMENTUM=0.985 WARMDOWN_ITERS=4000 \
TRAIN_BATCH_TOKENS=589824 \
NUM_LAYERS=12 BIGRAM_VOCAB_SIZE=5120 VE_DIM=128 \
WINDOW_SIZE=512 WINDOW_ATTN_LAYERS=2,4,6,8,10 \
LOCAL_SEQS_PER_GPU=36,36,36,36,36,10,10,10 \
SEQS_PER_GPU=2048,2048,2048,2048,2048,6144,6144,6144 \
MAX_WALLCLOCK_SECONDS=600 \
torchrun --standalone --nproc_per_node=8 train_gpt.py
```

brotli needs to be installed for the final artifact compression path. Flash Attention 3 (flash_attn_interface) is required.

Gusanidas and others added 7 commits April 1, 2026 06:42
12-layer split-bank U-Net with window attention (size=512 on layers
2,4,6,8,10), mixed seq_len training (5 GPUs at 2048 + 3 GPUs at 6144),
fused Triton LeakyReLU-squared MLP, sigmoid-gated skip connections,
brotli+byte-shuffle compression, GPTQ int6, sliding window eval
(stride=128, seq_len=6144).

5-seed results: 1.1094, 1.1101, 1.1103, 1.1119, 1.1126 (mean 1.1108)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Gusanidas Gusanidas changed the title Record: Window Attention + Mixed Seq_Len Training, bpb 1.1108, eval at 6144 Record: Window Attention + Mixed Seq_Len Training, bpb 1.1108, eval at 6144 (5-seed mean) Apr 1, 2026