Record: Order-Adaptive BackoffMixer (mean val_bpb=0.5440)#825
hypery11 wants to merge 1 commit into openai:main
Conversation
Seeds: 0.5437 / 0.5450 / 0.5434 (std 0.0008). Order-adaptive entropy gating + BackoffNgramMixer. ~16MB artifact. Train 600s, eval 391s.
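The headline numbers are easy to sanity-check from the three per-seed scores (a quick verification, not part of the artifact; the reported 0.0008 matches the sample standard deviation):

```python
import statistics

# Per-seed val_bpb scores quoted above
seed_scores = [0.5437, 0.5450, 0.5434]

mean_bpb = sum(seed_scores) / len(seed_scores)   # ~0.54403
std_bpb = statistics.stdev(seed_scores)          # sample (n-1) std, ~0.00085

assert round(mean_bpb, 4) == 0.5440
assert abs(std_bpb - 0.00085) < 1e-5
```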
Really impressive work: the order-adaptive entropy gating with per-order thresholds is a thoughtful design, and the 3-seed consistency (std 0.0008) is excellent. The acknowledgments section is also great to see; this competition has been genuinely collaborative. One thing to flag: checking the log output, it looks like seeds 42 and 2024 may exceed the 16,000,000 byte artifact cap.
We ran into the exact same issue on our PR #769 seed 42 (over by 25,731 bytes) and had to rerun with tighter quantization. It's a subtle one: the submission.json may not reflect the per-seed sizes accurately. It might be worth double-checking the individual seed artifact sizes against the 16,000,000 limit before the maintainers review. The fix for us was minor, just tightening the compression/quantization slightly to get the headroom. Disclosure: I use Claude Code CLI, Codex CLI, and Gemini Pro as tools in my workflow. Human first, AI-assisted.
…gramHash 6144, int5, stride=32) + 9-gram prefill
Circling back on this one with an updated finding, since @valerio-oai ruled on the underlying mechanism after my first comment. Compliance flag: same disallowed pattern as PR #779. @valerio-oai disallowed PR #779 (deanbrr) on 2026-03-27 (comment 4145781641) specifically for "hashed n-gram caches, which do not renormalize correctly / correctly reweight the LM's token distribution, look ahead to the target token to mix probabilities and therefore leak eval tokens." The mechanism is spelled out in the follow-up comment 4146407380: hashing the ground-truth token into the lookup key reweights only the correct token, and in the hash-collision limit drives P(correct) toward 1 regardless of the data, giving arbitrarily low BPB without real compression.
Under @valerio-oai's #779 ruling, this is the same Rule 1 violation (Issue #1017, condition 1). Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: CLOSE under the same ruling as #779. @hypery11, please let me know if I've misread the code. Reviewed by @MatoTeziTanka (The Agora), static code review.
…cluster + CT2038 gauntlet provisioned
Reviewed all 20 highest-priority Tier 1 PRs from openai/parameter-golf. Two cluster-level findings:
- N-gram family bug (10 PRs CLOSED + 1 already ruled): `full_key = ((ctx_hash ^ (target * primes[k])) & mask)`. The target token is hashed into the eval-cache lookup key, ruled illegal by valerio-oai on PR openai#779. Same verbatim pattern in openai#770/openai#798/openai#808/openai#825/openai#786/openai#797/openai#909/openai#940/openai#761, plus the openai#764 follow-up. Upstream parent: lukacf (openai#659/openai#702/openai#727; task #5 audit queued).
- Standard SLOT cluster (4 HOLD pending openai#1336, 2 CLOSE): per-window delta+logit_bias optimized N steps against `(per_token_nll * mask)` where `mask` = scored positions `[s:wlen]`. PRs openai#1321/openai#1324/openai#1278/openai#1263 → HOLD; openai#1319/openai#1376 → CLOSE.
Clean MERGE-eligible: openai#1420 (token_hint-only post-fix) and openai#1450 (TMA megakernel triple loop). Eval-budget gate (openai#915/openai#889, anthony-maio pair): clean ngram code, ~14.9 min ngram stage on 8xH100 SXM; one @0hq ruling on Issue openai#17 unblocks both PRs plus ~30 ngram-cache PRs.
Infrastructure: provisioned CT2038 (proteus-engine, 128 GB RAM, 32 cores) as the dedicated parameter-golf gauntlet host. Installed Triton 3.6.0, deployed cpu_test.py + flash_attn_stub.py. Re-ran the 4 PRs originally skipped due to FA3/Triton blockers; all PASS. Edited 4 GitHub comments via `gh api` PATCH to add the rerun results. Coverage went from 9/20 to 14/20 fully gauntleted. Side session handed off via SOW_HF_DATASET_REPUBLISH.md (Scylla 998→1254 fix + SP4096/SP8192/SP12288/SP16384 publish + Cloudflare R2 mirror).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
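To make the ruled-on pattern concrete, here is a minimal runnable sketch (hypothetical constants and a toy token stream; only the shape of the `full_key` expression is taken from the quote above) of why hashing the target token into the lookup key leaks eval tokens:

```python
# Minimal illustration of the ruled-illegal pattern (hypothetical, simplified):
# the target token is mixed into the cache key, so at eval time only the
# candidate equal to the ground-truth token can hit a bucket that was
# populated from the eval stream itself.
PRIME = 1_000_003
MASK = (1 << 20) - 1

def full_key(ctx_hash: int, target: int) -> int:
    return (ctx_hash ^ (target * PRIME)) & MASK

eval_stream = [5, 9, 2, 9, 7]

# Building the cache over the eval stream hashes each ground-truth target in.
cache = {full_key(ctx, tgt) for ctx, tgt in zip(eval_stream, eval_stream[1:])}

# Scoring: probe the cache with every candidate next token for context 2.
hits = [tok for tok in range(16) if full_key(2, tok) in cache]

# Absent hash collisions, only the true continuation (9) hits, so a mixer
# that upweights cache hits drives P(correct) toward 1 without modeling
# anything about the data.
assert hits == [9]
```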
Positive compliance note from parameter-golf-checker, running across open Record-claiming PRs to help with triage (#1603). I manually traced the chunked eval flow here and wanted to leave a clean note so it doesn't get bucketed with the TTT/SLOT cluster, even though the n-gram trigger can look similar at a glance.

What I verified: the chunk loop at line 1020 does exactly what issue #1017 requires:

```python
# line 1020
for ci in range(num_chunks):
    ...
    # line 1025 — Phase 1: SCORE this chunk (inference_mode, no grad)
    base_model.eval()
    with torch.inference_mode():
        ...
        if mixer is not None:
            nll, expert_nll = mixer.mix_and_score(logits_scaled, x_batch, y_batch, wlens)
        ...
    # scoring accumulates into loss_sum / token_count / byte_count

    # line 1087 — Update context mixer with scored chunk tokens
    if mixer is not None:
        mixer.update(val_tokens[chunk_start_tok:chunk_end_tok + 1])

    # line 1098 — Phase 2: TRAIN on this chunk (already scored = legal)
    if not is_last_chunk and ttt_epochs > 0:
        ...
        optimizer.step()
```

For any token in a given chunk, the mixer state and model weights used to score it were built only from earlier chunks. The in-code comment at line 1098 (`already scored = legal`) states that invariant directly.

The 0.5440 BPB is striking, but I don't think it implies a violation: a 7-gram backoff cache that grows over ~47M in-distribution val tokens is a legitimately strong mixer, and "score chunk k → update mixer with chunk k → score chunk k+1" respects the causality constraint at the chunk granularity. The n-gram flag in my tool is a WARN, not a FAIL; I just want to flag that clearly in case the C3/N-gram warnings get batched with the actually-illegal cluster. No action needed on this PR from me. Nice submission. (I've been wrong before; if I'm misreading something please push back.)
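The ordering argument can be exercised with a toy version of the loop (a hypothetical sketch, not the PR's code; "mixer state" is reduced to the set of absorbed tokens):

```python
# Toy model of the score-then-update chunk loop. The invariant under test:
# scoring chunk k only ever sees state built from chunks 0..k-1.
tokens = list(range(12))
chunk_size = 4
chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

mixer_state: list[int] = []
scored_with = []  # snapshot of the state used to score each chunk

for chunk in chunks:
    # Phase 1: SCORE -- only past chunks are in mixer_state at this point
    scored_with.append(list(mixer_state))
    # Phase 2: UPDATE -- fold the just-scored chunk into the mixer
    mixer_state.extend(chunk)

# Causality check: no chunk was scored with any of its own tokens
for chunk, state in zip(chunks, scored_with):
    assert not set(chunk) & set(state)
```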
Hey, I started the entire n-gram hash thread and debate, and while we can debate what "learning" means in this contest, all such approaches have been ruled illegal: "The n-gram cache builds state from evaluation tokens and uses it to predict subsequent tokens. That's eval-time adaptation regardless of whether it's causal."

I do believe this is an important topic, because the ruling presumes the approach is categorically wrong when the transformer itself behaves the same way. The standard argument says "the model should be fixed at eval time; any adaptation is cheating." That draws an arbitrary line. Consider what a transformer does within its context window: it attends to prior tokens, builds key-value representations, and uses them to predict the next token. That is learning from the eval data; it builds an internal model of the local distribution in real time. Nobody calls that illegal. We call it "in-context learning" and use it.

The n-gram cache does the same thing with a longer window. A transformer with a 47M-token context window would achieve similar benefits, and nobody would call that illegal. The cache is just a more parameter-efficient implementation of long-range context conditioning. So the real question isn't "is the predictor fixed": no useful predictor is fixed, since every autoregressive model conditions on previously seen tokens. The question is where you draw the line on context length and mechanism.

The competition draws it at "the 16MB artifact should be the complete predictor." But a transformer artifact without any context also predicts horribly; every predictor requires input data to function. The philosophical distinction is blurry, but there is a practical one: the competition measures how much you can compress into 16MB of weights, and the n-gram cache shifts the answer from "a good model of English" to "a good framework for memorizing any specific text." Those are different capabilities with different value.

Your point about "best predictor vs best specialist" captures this: even if the mechanism is philosophically continuous with in-context learning, the optimization target diverges. It's a legitimate debate, though, not the clear-cut violation the organizers have suggested.
Results
Method
11-layer transformer (512d, 8/8 full MHA, XSA-all, LeakyReLU(0.5)^2, 3.5x MLP). Order-adaptive entropy-gated BackoffNgramMixer with per-order entropy thresholds. Score-first, backward-looking, deterministic.
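A minimal sketch of what per-order entropy gating could look like (hypothetical names, gate centers, and sharpness; the PR's actual mixer is certainly more involved): each n-gram order gets its own sigmoid gate on the base model's predictive entropy, so high-entropy positions lean more on the cache while confident positions stay with the transformer.

```python
import math

def entropy(probs):
    """Shannon entropy in nats of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical per-order gate centers: longer orders demand higher entropy
GATE_CENTER = {3: 1.0, 5: 1.5, 7: 2.0}
GATE_SHARPNESS = 4.0

def order_gate(order: int, h: float) -> float:
    """Sigmoid gate in (0, 1): weight given to order n's cache at entropy h."""
    return 1.0 / (1.0 + math.exp(-GATE_SHARPNESS * (h - GATE_CENTER[order])))

def mix(base_probs, ngram_probs_by_order):
    """Blend the base model with per-order n-gram distributions, gated by
    the base model's own entropy, then renormalize."""
    h = entropy(base_probs)
    mixed = list(base_probs)
    for order, ngram_probs in sorted(ngram_probs_by_order.items()):
        g = order_gate(order, h)
        mixed = [(1 - g) * m + g * q for m, q in zip(mixed, ngram_probs)]
    total = sum(mixed)
    return [m / total for m in mixed]
```

Renormalizing after mixing is the step the #779 ruling calls out as essential: the blend must reweight the full token distribution, never a single looked-up target.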
Acknowledgments
Huge thanks to the incredible community that made this possible:
This competition has been an amazing collaborative experience. Every improvement here builds on ideas shared openly.