Skip to content

Record: Scylla + Parallel Residuals + Depth Recurrence + Legal TTT — val_bpb 1.0876 (3-seed mean)#1274

Closed
MatoTeziTanka wants to merge 6 commits intoopenai:mainfrom
MatoTeziTanka:scylla-parallel-recurrence-ttt
Closed

Record: Scylla + Parallel Residuals + Depth Recurrence + Legal TTT — val_bpb 1.0876 (3-seed mean)#1274
MatoTeziTanka wants to merge 6 commits intoopenai:mainfrom
MatoTeziTanka:scylla-parallel-recurrence-ttt

Conversation

@MatoTeziTanka
Copy link
Copy Markdown

Summary

val_bpb: 1.0876 (3-seed mean, std 0.00037) | ≤15.83 MB | 8×H100 SXM, 600s + TTT

Beats current merged SOTA (PR #1019, 1.1147 BPB, by @abaybektursun) by −0.0271 BPB. Welch t = −91.92, df = 3.99, p ≪ 0.01. This is our own prior work — we are improving on our own merged record.

3-Seed Results

Seed Steps ms/step Legal TTT BPB Artifact
42 5,875 102.2 1.0872 15,814,644
1337 5,878 102.1 1.0879 15,823,670
2024 5,884 102.0 1.0877 15,834,859
Mean 5,879 102.1 1.0876

All seeds stopped by 600s wallclock cap. All artifacts under 16,000,000 bytes.

Technique Stack

Built on our PRs #549 (LeakyReLU² + Legal TTT + Parallel Muon) and #1019 (AR Self-Gen GPTQ + XSA-all):

See README.md for full details, credit chain, and reproduction instructions.

Credits

Our prior work: PRs #399, #549, #1019 (current merged SOTA)
External: Scylla tokenizer (@simon-marcus, #1143), parallel residuals + depth recurrence (@msisovic, #1204), legal TTT framework (@Christopher-Lee-McClendon, #461), mixed quantization concept (#1105)

Mato and others added 6 commits April 1, 2026 12:37
…cture

Precomputed bigram log-prob table (0.77MB compressed) provides 76% of
prediction ability. Neural network (10L×512d, ~24M params) learns only
the 24% residual correction. All CPU tests pass: table building, forward/
backward (gradients flow to neural only), artifact roundtrip (INT6+LZMA).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
BIGRAM_SCALE env var controls table contribution:
- "fixed" (default): constant 1.0, table always on
- "learnable": nn.Parameter, model finds optimal ratio
- "0": table disabled, pure neural ablation

Warmdown increased from 1200 to 4000 (proven from v1.5).
All three modes tested: forward/backward/compile pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
5 runs on 8×H100 SXM (seed=1337):
- Run A (10L, table fixed): 1.204 pre-quant, OVER budget
- Run B (9L, table learnable→0.63): 1.209 pre-quant
- Run C (9L, table OFF): 1.193 pre-quant — BEST
- Run A2 (9L, table fixed): 1.213 pre-quant
- Run D (11L×480d, table OFF): 1.194 pre-quant (lost post-quant)

Table OFF wins every matched comparison. Learnable scale
settled at 0.63 — model actively suppressing table.
INT6 without GPTQ loses 0.10-0.13 BPB. GPTQ is #1 priority.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…th recurrence

All 9 audit findings addressed (2 CRITICAL, 4 WARNING, 3 INFO):
- C1: Route gradient test validates nonzero after 1 optimizer step
- C2: Mixed quant budget (INT4 MLP, INT6 attn, INT8 rest) fits 15.3MB
- W1: Separate resid_mix_mlp for parallel MLP lane
- W2: Assert parallel doesn't start inside encoder
- W3: Document post-skip recurrence is intentional per PR openai#1204
- W4: SmearGate caches self.fc(x)
- W5: Extract _run_layers() to deduplicate forward/forward_logits
- I2: Test recurrent+parallel overlap on layer 7
- I3: Learnable lane_merge parameter

6 tests pass on CPU in ~21s. Budget verified at 15.3MB for 30.2M params.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…val_bpb 1.0876 (3-seed mean)

3-seed exact mean: 1.08759808 BPB (std 0.00036912)
Beats merged SOTA PR openai#1019 (1.1147 BPB) by -0.0271 BPB
Welch t = -91.92, df = 3.99, p << 0.01

Built on our PRs openai#549 and openai#1019. Adds Scylla tokenizer (PR openai#1143,
@simon-marcus), parallel residuals + mini depth recurrence (PR openai#1204,
@msisovic), mixed INT5/INT6 quantization + brotli, legal score-first TTT.

All artifacts under 16MB. 8xH100 SXM, 600s training + ~495s TTT eval.
@MatoTeziTanka
Copy link
Copy Markdown
Author

Closing — incorrect attribution. Will resubmit with corrected authorship and credit chain.

@MatoTeziTanka MatoTeziTanka deleted the scylla-parallel-recurrence-ttt branch April 2, 2026 23:44
HateBunnyPlzzz added a commit to Itssshikhar/parameter-golf that referenced this pull request Apr 2, 2026
Approaches revamped (old eval-only approaches removed):
- 01: Low-Rank Factored MLP (18 layers in 16MB via rank-128 MLP factors)
- 02: Reptile Meta-Learning Warmdown (meta-optimize for TTT adaptability)
- 03: SVD + Quantized Factors (13 layers via spectral compression)
- 04: Multi-Token Prediction + BPB-Weighted Loss (training loss innovation)
- 05: Gram-Newton-Schulz + FP8 Training (30% more steps in 10 min)

Unmerged PR research saved to unmerged_runs/:
- PR openai#1263: SLOT (0.9354 BPB, legality contested)
- PR openai#1246: Trinity Ternary (0.9650 BPB)
- PR openai#1241: MDLM Diffusion (0.9901 BPB)
- PR openai#1252: WARP (1.0713 BPP)
- PR openai#1257: Complement Training (1.0855 BPB)
- PR openai#1274: Parallel Residuals + Depth Recurrence (1.0876 BPB)
- PR openai#1260: MuonEq-R + Depth Recurrence (1.0929 BPB)
- PR openai#1254: XSA + LoRA TTT (1.1070 BPB)

Key finding: without eval tricks, frontier is ~1.09 BPB (PR openai#1260)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant