Record: Scylla + Parallel Residuals + Depth Recurrence + Legal TTT — val_bpb 1.0876 (3-seed mean)#1274
Closed
MatoTeziTanka wants to merge 6 commits into openai:main from
Conversation
…cture

Precomputed bigram log-prob table (0.77 MB compressed) provides 76% of prediction ability. The neural network (10L×512d, ~24M params) learns only the 24% residual correction. All CPU tests pass: table building, forward/backward (gradients flow to the neural part only), artifact roundtrip (INT6+LZMA).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
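The table-plus-residual split described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `build_bigram_table`, the add-one smoothing, and the zero stand-in for the network output are all assumptions.

```python
import numpy as np

def build_bigram_table(tokens, vocab_size, alpha=1.0):
    """Count token bigrams and convert to smoothed log-probabilities.

    Illustrative helper (add-one smoothing is an assumption); the PR's
    table is additionally INT6-quantized and LZMA-compressed on disk.
    """
    counts = np.zeros((vocab_size, vocab_size), dtype=np.float64)
    np.add.at(counts, (tokens[:-1], tokens[1:]), 1.0)
    probs = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * vocab_size)
    return np.log(probs).astype(np.float32)

# Toy stream over a 4-token vocabulary.
tokens = np.array([0, 1, 2, 1, 2, 3, 0, 1])
table = build_bigram_table(tokens, vocab_size=4)

# Combined prediction: the table supplies the base log-probs, so the
# network only has to output a residual correction on top of them.
neural_residual = np.zeros(4, dtype=np.float32)  # stand-in for model output
logits = table[tokens[-1]] + neural_residual
```

In this split only the residual path receives gradients, which matches the commit's note that gradients flow to the neural part only.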
BIGRAM_SCALE env var controls the table contribution:
- "fixed" (default): constant 1.0, table always on
- "learnable": nn.Parameter, model finds the optimal ratio
- "0": table disabled, pure neural ablation

Warmdown increased from 1200 to 4000 steps (proven in v1.5). All three modes tested: forward/backward/compile pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
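The three-mode switch described above can be sketched as a small resolver; `resolve_bigram_scale` is a hypothetical helper, not the PR's actual function, and in the real model the "learnable" value would become an `nn.Parameter`.

```python
import os

def resolve_bigram_scale(env=None):
    """Interpret BIGRAM_SCALE as the commit message describes.

    Hypothetical helper: returns (mode, initial_scale).
    """
    env = os.environ if env is None else env
    raw = env.get("BIGRAM_SCALE", "fixed")
    if raw == "fixed":
        return "fixed", 1.0        # table always on at constant weight
    if raw == "learnable":
        return "learnable", 1.0    # wrapped in an nn.Parameter by the model
    if raw == "0":
        return "disabled", 0.0     # pure neural ablation
    raise ValueError(f"unknown BIGRAM_SCALE: {raw!r}")
```

Keeping the switch in one place makes the three ablation runs differ only by an environment variable, which is what allows the matched comparisons reported below.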
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
5 runs on 8×H100 SXM (seed=1337):
- Run A (10L, table fixed): 1.204 pre-quant, OVER budget
- Run B (9L, table learnable→0.63): 1.209 pre-quant
- Run C (9L, table OFF): 1.193 pre-quant — BEST
- Run A2 (9L, table fixed): 1.213 pre-quant
- Run D (11L×480d, table OFF): 1.194 pre-quant (lost post-quant)

Table OFF wins every matched comparison. The learnable scale settled at 0.63 — the model is actively suppressing the table. INT6 without GPTQ loses 0.10–0.13 BPB. GPTQ is the #1 priority.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
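The INT6 (+LZMA) artifact path whose quality loss these runs quantify can be sketched as below, assuming symmetric per-tensor scaling; the PR's actual scheme may differ (e.g. per-channel scales or GPTQ calibration, which the commit names as the fix for the 0.10–0.13 BPB loss).

```python
import lzma
import numpy as np

def quantize_int6(w):
    """Symmetric per-tensor INT6: map weights onto 63 levels in [-31, 31]."""
    scale = np.abs(w).max() / 31.0
    q = np.clip(np.round(w / scale), -31, 31).astype(np.int8)
    return q, scale

def roundtrip(w):
    """Quantize, LZMA-compress, decompress, dequantize — the artifact path."""
    q, scale = quantize_int6(w)
    blob = lzma.compress(q.tobytes())      # what would be written to disk
    q2 = np.frombuffer(lzma.decompress(blob), dtype=np.int8).reshape(w.shape)
    return q2.astype(np.float32) * scale, len(blob)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w_hat, nbytes = roundtrip(w)
# Round-to-nearest error is bounded by half a quantization step.
assert np.abs(w - w_hat).max() <= 0.5 * np.abs(w).max() / 31.0 + 1e-6
```

Plain rounding like this ignores how errors interact across a layer, which is exactly what GPTQ-style calibrated quantization addresses.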
…th recurrence

All 9 audit findings addressed (2 CRITICAL, 4 WARNING, 3 INFO):
- C1: Route gradient test validates nonzero route gradients after 1 optimizer step
- C2: Mixed quant budget (INT4 MLP, INT6 attn, INT8 rest) fits in 15.3 MB
- W1: Separate resid_mix_mlp for the parallel MLP lane
- W2: Assert the parallel lane doesn't start inside the encoder
- W3: Document that post-skip recurrence is intentional per PR openai#1204
- W4: SmearGate caches self.fc(x)
- W5: Extract _run_layers() to deduplicate forward/forward_logits
- I2: Test recurrent+parallel overlap on layer 7
- I3: Learnable lane_merge parameter

6 tests pass on CPU in ~21 s. Budget verified at 15.3 MB for 30.2M params.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
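The budget arithmetic behind finding C2 is simple bits-per-weight accounting; the split below is hypothetical (the PR's real per-layer parameter counts are not reproduced here), and a real artifact also pays for quantization scales and metadata while gaining from entropy coding on top.

```python
def packed_bytes(plan):
    """Packed size in bytes for {name: (param_count, bits_per_weight)}."""
    return sum(n * bits for n, bits in plan.values()) / 8

# Hypothetical split of the ~30.2M params (NOT the PR's real layer sizes),
# following the INT4-MLP / INT6-attn / INT8-rest scheme from finding C2.
plan = {
    "mlp":  (28_000_000, 4),
    "attn": (1_800_000, 6),
    "rest": (400_000, 8),
}
total = packed_bytes(plan)   # 15,750,000 bytes for this split
assert total < 16_000_000    # under the 16 MB artifact cap
```

Checking this arithmetic in a unit test is what lets the commit claim the budget is "verified" rather than estimated.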
…val_bpb 1.0876 (3-seed mean)

3-seed exact mean: 1.08759808 BPB (std 0.00036912). Beats merged SOTA PR openai#1019 (1.1147 BPB) by -0.0271 BPB. Welch t = -91.92, df = 3.99, p << 0.01.

Built on our PRs openai#549 and openai#1019. Adds the Scylla tokenizer (PR openai#1143, @simon-marcus), parallel residuals + mini depth recurrence (PR openai#1204, @msisovic), mixed INT5/INT6 quantization + brotli, and legal score-first TTT. All artifacts under 16 MB. 8×H100 SXM, 600 s training + ~495 s TTT eval.
Author
Closing — incorrect attribution. Will resubmit with corrected authorship and credit chain.
HateBunnyPlzzz added a commit to Itssshikhar/parameter-golf that referenced this pull request on Apr 2, 2026
Approaches revamped (old eval-only approaches removed):
- 01: Low-Rank Factored MLP (18 layers in 16 MB via rank-128 MLP factors)
- 02: Reptile Meta-Learning Warmdown (meta-optimize for TTT adaptability)
- 03: SVD + Quantized Factors (13 layers via spectral compression)
- 04: Multi-Token Prediction + BPB-Weighted Loss (training loss innovation)
- 05: Gram-Newton-Schulz + FP8 Training (30% more steps in 10 min)

Unmerged PR research saved to unmerged_runs/:
- PR openai#1263: SLOT (0.9354 BPB, legality contested)
- PR openai#1246: Trinity Ternary (0.9650 BPB)
- PR openai#1241: MDLM Diffusion (0.9901 BPB)
- PR openai#1252: WARP (1.0713 BPB)
- PR openai#1257: Complement Training (1.0855 BPB)
- PR openai#1274: Parallel Residuals + Depth Recurrence (1.0876 BPB)
- PR openai#1260: MuonEq-R + Depth Recurrence (1.0929 BPB)
- PR openai#1254: XSA + LoRA TTT (1.1070 BPB)

Key finding: without eval tricks, the frontier is ~1.09 BPB (PR openai#1260).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
val_bpb: 1.0876 (3-seed mean, std 0.00037) | ≤15.83 MB | 8×H100 SXM, 600s + TTT
Beats current merged SOTA (PR #1019, 1.1147 BPB, by @abaybektursun) by −0.0271 BPB. Welch t = −91.92, df = 3.99, p ≪ 0.01. This is our own prior work — we are improving on our own merged record.
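The significance claim follows Welch's unequal-variance t-test. A minimal sketch (equivalent to `scipy.stats.ttest_ind(..., equal_var=False)`), shown here on toy samples since the per-seed BPB values for PR #1019 are not reproduced in this summary:

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2                          # squared standard error
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Toy sanity check with two 3-sample groups (mirroring the 3-seed setup).
t, df = welch_t([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

With three seeds per side and similar per-seed variances, df comes out near 4, consistent with the df = 3.99 reported above.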
3-Seed Results
All seeds were stopped by the 600 s wallclock cap. All artifacts are under 16,000,000 bytes.
Technique Stack
Built on our PRs #549 (LeakyReLU² + Legal TTT + Parallel Muon) and #1019 (AR Self-Gen GPTQ + XSA-all):
See README.md for full details, credit chain, and reproduction instructions.
Credits
Our prior work: PRs #399, #549, #1019 (current merged SOTA)
External: Scylla tokenizer (@simon-marcus, #1143), parallel residuals + depth recurrence (@msisovic, #1204), legal TTT framework (@Christopher-Lee-McClendon, #461), mixed quantization concept (#1105)