
Non-record: 15L Depth Recurrence + LeakyReLU² — BI-guided weight tying (val_bpb=1.1360)#857

Draft
aruniyer wants to merge 2 commits into openai:main from aruniyer:submission/15L-depth-recurrence-leakyrelu2-ttt

Conversation

@aruniyer

@aruniyer aruniyer commented Mar 26, 2026

Summary

val_bpb: 1.1360 (seed 1337) | 15.87 MB | 8xH100 SXM

Non-record submission exploring BI-guided depth recurrence: using Block Influence scores (ShortGPT) to identify
which layer positions can share weights, enabling 15 effective layers from 11 unique parameter blocks within the
same 16MB budget as standard 11L.

Result

Seed   Steps   Sliding BPB   Artifact
1337   5173    1.1360        15.87 MB

Key Technique: BI-Guided Weight Tying

  1. Train a 15L model, measure Block Influence (angular distance input→output per layer)
  2. Layers 9–13 have lowest BI (0.10–0.16) — near-identity transformations
  3. Tie those 5 positions to share one physical block → 15 virtual layers, ~27M unique params
  4. Deduplicate before quantization: store shared weights once with reconstruction map
  5. Int6 + zstd-22 → 15.87 MB ✅

Depth vs Steps Tradeoff

15L runs at 116 ms/step (vs 86 ms for 11L) due to 4 extra forward/backward layers. In the 600 s budget that is ~5170 steps vs ~6975 for 11L. The depth advantage doesn't fully compensate for the ~1800 fewer steps in this wallclock-limited setting; at equal step counts, 15L outperforms 11L.
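The budget arithmetic behind those step counts, assuming the per-step latencies above stay constant over the run:

```python
# Steps achievable in the fixed 600 s wall clock at each per-step latency.
budget_s = 600
steps_15l = int(budget_s / 0.116)  # 15L at 116 ms/step -> ~5172 steps
steps_11l = int(budget_s / 0.086)  # 11L at  86 ms/step -> ~6976 steps
deficit = steps_11l - steps_15l    # ~1800 fewer steps for the deeper model
```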

Architecture

15L (10 unique + 1 shared×5), 512d, 8H/4KV GQA, MLP 3x, LeakyReLU(0.5)², XSA4, Partial RoPE 16/64, LN Scale, VE128,
SmearGate, BigramHash(2048), EMA, SWA, Late QAT, int6+zstd-22, FA3.
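Since the tied positions alias one physical block, the checkpoint can be shrunk before int6 quantization by storing each unique tensor once plus a reconstruction map (step 4 of the recipe above). A hypothetical sketch — `dedup_state_dict`/`rebuild` are illustrative names, not the submission's API — using storage pointers to detect tied tensors:

```python
import torch
import torch.nn as nn

def dedup_state_dict(sd):
    # Store each unique tensor once; tied entries share storage, so the
    # data pointer identifies duplicates. The name->canonical-name map
    # lets us rebuild the full per-layer state dict at load time.
    canonical, mapping, stored = {}, {}, {}
    for name, t in sd.items():
        key = t.data_ptr()
        if key not in canonical:
            canonical[key] = name
            stored[name] = t
        mapping[name] = canonical[key]
    return stored, mapping

def rebuild(stored, mapping):
    # Inverse operation: every original key points back at its shared tensor.
    return {name: stored[canon] for name, canon in mapping.items()}

# Toy model mirroring the layout above: positions 9-13 alias one block.
layers = [nn.Linear(512, 512, bias=False) for _ in range(15)]
for i in (10, 11, 12, 13):
    layers[i] = layers[9]
model = nn.ModuleList(layers)
sd = model.state_dict()                 # 15 keys, 5 of them aliases
stored, mapping = dedup_state_dict(sd)  # 11 unique tensors + map
```

Only `stored` and `mapping` need to be quantized and zstd-compressed; the aliases cost a few bytes of map instead of four redundant weight blocks.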

Reproduce

SEED=1337 NUM_LAYERS=15 TIE_LAYERS=9,10,11,12,13 \
  DIFF_ATTN=0 VRES_ENABLED=0 TTT_EPOCHS=0 \
  torchrun --standalone --nproc_per_node=8 train_gpt.py

Credits

Base: signalrush (PR #374/#414). LeakyReLU²: PR #493, PR #518. Block Influence: ShortGPT (arXiv:2403.03853).

Record: 15L Depth Recurrence + LeakyReLU² + Cosine TTT (3-seed mean val_bpb=1.1093)

15 effective layers from 11 unique blocks via BI-guided weight tying.
Layers 9-13 share one block (lowest Block Influence scores).
27M unique params, int6+zstd = 15.7MB artifact.

3-seed results:
  Seed 42:   1.1048 BPB
  Seed 1337: 1.1092 BPB
  Seed 2025: 1.1138 BPB
  Mean:      1.1093 ± 0.0045

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@aruniyer aruniyer marked this pull request as draft March 27, 2026 07:03
- Removed multi-epoch corpus-level TTT (ruled illegal in issue openai#677)
- Added legal score-first TTT option (PR#549 pattern) but it hurts this model
- Clean result: 15L depth recurrence + LeakyReLU² = 1.1360 BPB (no TTT)
- Documented depth-vs-steps tradeoff in README
- Draft status: single seed, architecture exploration

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@aruniyer aruniyer changed the title Record: 15L Depth Recurrence + LeakyReLU² + Cosine TTT (3-seed mean val_bpb=1.1093) Non-record: 15L Depth Recurrence + LeakyReLU² — BI-guided weight tying (val_bpb=1.1360) Mar 27, 2026