Conversation
The L1SloadInspector and related infrastructure (the `inspect` flag, `create_block_executor_with_inspector`, the `trace_l1sload_calls` stub) are dead code. NMC handles L1SLOAD autonomously via direct L1 RPC calls, so no proposer/driver tracing is needed. Raiko detects L1SLOAD calls via direct transaction scanning in `collect_l1_storage_proofs()`.

Removed:
- `lib/src/builder/l1sload_inspector.rs` (entire file)
- the `inspect` field from `TaikoWithOptimisticBlockExecutor`
- the `create_block_executor_with_inspector` method
- the `trace_l1sload_calls` stub from `preflight/util.rs`
- unused parameters from `collect_l1_storage_proofs`
Update all alethia-reth dependencies to the feat/l1sload-precompile-nmc branch, which is based on NMC's main and includes the L1SLOAD precompile with the correct 84-byte API (address + storage key + block number).

Also fixes compilation issues from cherry-pick conflicts:
- `RpcBlockDataProvider::new()` signature (1 arg, not 2)
- Remove the `block_numbers` assertion (field doesn't exist on master)
- Use the `TaikoBlock` type in the `collect_l1_storage_proofs` signature
…hasta mode

In Shasta mode, `l1_header` is set to `l1_inclusion_block - 1`, which differs from the anchor block referenced by the anchor transaction. The L1SLOAD code incorrectly compared `anchor_state_root` against `l1_header.state_root`, causing "Anchor state root mismatch" errors for every batch.

Changes:
- Remove the broken anchor state root validation in preflight (both single-block and batch paths) that compared two different L1 blocks' state roots
- Fix `execute_transactions()` and `execute_transaction_batch()` to extract the actual anchor block info from the anchor tx instead of using `l1_header`
- Re-export `get_anchor_tx_info_by_fork` for use in the execution phase

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- PR #2: Add an L1SLOAD execution lock to serialize concurrent block executions that use L1SLOAD, preventing global cache races.
- PR #4: Verify the newest ancestor header immediately precedes the anchor block in `build_verified_state_root_map`.
- PR #5: Optimize `get_and_verify_value` to a single verification pass instead of double-verifying existing keys.
- PR #6: Distinguish extension vs leaf nodes in `get_leaf_value` via the HP flag check, preventing misparse of extension nodes.
- PR #7: Include `l1_storage_proofs` in the single-block `GuestInput` (was missing due to `..Default::default()`).
- PR #9: Replace `.expect()` with proper `?` error propagation in L1SLOAD verification paths.
- PR #10: Batch storage key proof collection by address to reduce RPC calls from one-per-key to one-per-block-number.
- PR #14: Create `l1_provider` once before the spawn loop and clone it into each chunk task instead of creating it per chunk.
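The batching from PR #10 can be sketched as grouping recorded calls by block number and then by address, so a single proof request per block number covers all addresses and keys. The types and the function name here are illustrative, not the actual Raiko API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real alloy types.
type Address = [u8; 20];
type StorageKey = [u8; 32];

/// Group recorded L1SLOAD calls so each (block number, address) pair yields
/// one proof request covering all of its storage keys, rather than one
/// request per key.
fn group_proof_requests(
    calls: &[(u64, Address, StorageKey)],
) -> HashMap<u64, HashMap<Address, Vec<StorageKey>>> {
    let mut grouped: HashMap<u64, HashMap<Address, Vec<StorageKey>>> = HashMap::new();
    for &(block, addr, key) in calls {
        grouped
            .entry(block)
            .or_default()
            .entry(addr)
            .or_default()
            .push(key);
    }
    grouped
}
```

Each outer entry then maps onto one batched `eth_getProof`-style request for that block number.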
Update the alethia-reth dependency to 136e51a0, which replaces `eprintln!` with `tracing::trace!` in the L1SLOAD precompile.
…oofs

When the MPT proof terminates at a branch node (non-existent account), `get_leaf_value` was misinterpreting the branch's child hashes as a leaf path+value, because it only checked the HP flag without first verifying the node type via element count. A branch node's first child hash starting with 0x3X (75% chance) passes the HP flag >= 2 check, causing garbage value extraction and proof verification failure.

Fix: count RLP list elements before the HP flag check (17 = branch, 2 = leaf/extension), matching alloy-trie's `TrieNode::decode` pattern.
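The fix described above (counting RLP list elements before interpreting the HP flag) can be sketched in plain Rust. This is an illustrative simplification, not the actual Raiko code: it handles only short-form RLP items, which is enough to tell a 17-element branch from a 2-element leaf/extension payload.

```rust
#[derive(Debug, PartialEq)]
enum NodeKind {
    Branch,        // 17 RLP elements
    LeafOrExt,     // 2 RLP elements; the HP flag then distinguishes leaf vs extension
    Unknown(usize),
}

/// Count top-level RLP items in a node's list payload.
/// Sketch: handles short-form items only (single bytes, short strings,
/// short lists), which covers branch child hashes (0xa0-prefixed) and
/// typical leaf/extension fields.
fn count_rlp_items(mut payload: &[u8]) -> usize {
    let mut count = 0;
    while let Some(&prefix) = payload.first() {
        let item_len = match prefix {
            0x00..=0x7f => 1,                            // single byte
            0x80..=0xb7 => 1 + (prefix - 0x80) as usize, // short string
            0xc0..=0xf7 => 1 + (prefix - 0xc0) as usize, // short list
            _ => return count, // long-form not handled in this sketch
        };
        if item_len > payload.len() {
            return count; // truncated input: stop without panicking
        }
        payload = &payload[item_len..];
        count += 1;
    }
    count
}

/// Classify the node by element count *before* any HP flag inspection.
fn classify_node(payload: &[u8]) -> NodeKind {
    match count_rlp_items(payload) {
        17 => NodeKind::Branch,
        2 => NodeKind::LeafOrExt,
        n => NodeKind::Unknown(n),
    }
}
```

With this check in front, a branch node can no longer be misread as a leaf regardless of what its first child hash happens to start with.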
Add [jmadibekov] tags to:
- preflight: scan detection, proof collection, indirect calls
- proving: verify/populate cycle, state root map, MPT verification
- cache: populate/clear operations
The log line at preflight/mod.rs:168 referenced `input.l1_storage_proofs` etc., but `input` (GuestInput) isn't constructed until a few lines later. The correct references are the local tuple bindings: `l1_storage_proofs`, `l1_ancestor_headers`, `l1_successor_headers`.
…allback

- Delete `collect_l1_storage_proofs()` (~210 lines) — all L1SLOAD calls now discovered via RPC fallback during execution, proofs fetched after
- Remove pre-scan/merge logic from `preflight()` and `batch_preflight()`
- Remove redundant lookback validation and range checks (already validated by the precompile at runtime)
- Eliminate all [jmadibekov] debug logs; only log when L1SLOAD calls exist
- Trim verbose doc comments on verification functions and input fields
- Remove the unused `L1SLOAD_MAX_BLOCK_LOOKBACK` re-export

E2E verified: all scenarios (direct, indirect, forward/backward block reads, error cases) pass; blocks 22–24 proved and accepted on-chain with zero log noise for non-L1SLOAD blocks.
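The RPC-fallback discovery described above can be sketched as a global cache consulted first, with misses served by an optional fetcher (present only during preflight) and recorded for later proof collection. All names and types here are illustrative, not the actual Raiko implementation:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// (L1 block number, contract address, storage key)
type Key = (u64, [u8; 20], [u8; 32]);

// Global value cache plus a record of calls served via the fallback,
// so proofs can be fetched for exactly those keys after execution.
static CACHE: OnceLock<Mutex<HashMap<Key, [u8; 32]>>> = OnceLock::new();
static SERVED: OnceLock<Mutex<Vec<Key>>> = OnceLock::new();

/// Serve an L1SLOAD read: cache hit, or fetch-and-record on a miss.
/// During proving no fetcher would be installed, so a miss is a hard error
/// (modeled here as the fetcher returning `None`).
fn l1sload_with_fallback<F>(key: Key, fetch_from_l1: F) -> Option<[u8; 32]>
where
    F: Fn(Key) -> Option<[u8; 32]>,
{
    let cache = CACHE.get_or_init(Default::default);
    if let Some(v) = cache.lock().unwrap().get(&key) {
        return Some(*v); // hit: value already known
    }
    // Miss: an indirect call discovered only at execution time.
    let value = fetch_from_l1(key)?;
    cache.lock().unwrap().insert(key, value);
    // Record it so the preflight can fetch an eth_getProof for it afterwards.
    SERVED.get_or_init(Default::default).lock().unwrap().push(key);
    Some(value)
}
```

A second read of the same key is then served from the cache without touching the fetcher or growing the served-call list.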
- Merge `fetch_l1_ancestor_headers` + `fetch_l1_successor_headers` into `fetch_l1_headers_in_range` (~30 lines saved)
- Remove the redundant linear search in `fetch_l1_proofs_for_rpc_served_calls`; reconstruct `B256` from `u64` directly (~10 lines saved)
- Remove dead code: the unreachable `sorted_numbers.len() == 1` branch in `build_verified_state_root_map`
- Hoist nibbles/proof_refs construction in `get_and_verify_value`
- Downgrade redundant info logs to debug in `verify_and_populate_l1sload_proofs`
…nchor

Simplify L1SLOAD state root verification to use a single backward walk from the L1 origin header, replacing the bidirectional walk from the anchor. This is possible because the L1 origin is already trusted (verified via `verify_shasta_anchor_linkage` and bound to the on-chain `originBlockHash`).

Changes:
- Merge `l1_ancestor_headers` + `l1_successor_headers` into a single `l1_headers`
- `build_verified_state_root_map` takes `l1_origin_header` + the headers array
- Change the valid range from `[anchor-256, l1origin]` to `[l1origin-256, l1origin]`
- `fetch_l1_proofs` uses `l1_origin_block_number` as the reference point
- Remove @jmadibekov tags from gaiko TODO comments
The `l1_headers` array does NOT include the L1 origin itself (the range is exclusive on the upper end). So the initial `expected_hash` must be the origin's `parent_hash`, not its own hash, since the first header checked in reverse is block `l1_origin - 1`.
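A minimal sketch of that corrected backward walk, with a hypothetical `Header` type and hashes passed in directly (the real code would recompute each header's hash as keccak over its RLP encoding rather than trusting a field):

```rust
// Hypothetical header type; only the fields the walk needs.
struct Header {
    number: u64,
    hash: [u8; 32],
    parent_hash: [u8; 32],
}

/// Walk backwards from the trusted L1 origin. `headers` holds blocks
/// l1_origin-1, l1_origin-2, ... (descending, origin excluded), so the walk
/// seeds `expected_hash` with the origin's parent_hash, not the origin's
/// own hash. Returns the verified block numbers on success.
fn verify_header_chain_backward(
    origin_number: u64,
    origin_parent_hash: [u8; 32],
    headers: &[Header],
) -> Result<Vec<u64>, String> {
    let mut expected_hash = origin_parent_hash; // NOT the origin's own hash
    let mut verified = Vec::new();
    for (position, header) in headers.iter().enumerate() {
        let expected_number = origin_number - 1 - position as u64;
        if header.number != expected_number {
            return Err(format!("number mismatch at position {position}"));
        }
        if header.hash != expected_hash {
            return Err(format!("hash mismatch at block {}", header.number));
        }
        verified.push(header.number);
        expected_hash = header.parent_hash; // step back one block
    }
    Ok(verified)
}
```

Seeding with the origin's own hash instead would make the very first comparison fail, which is exactly the bug this commit fixes.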
The gaiko TODO comment cleanup was not needed and hasn't been pushed to the gaiko remote. Reverting the submodule pointer back.
@claude please review
Pull request overview
Adds end-to-end support for the RIP-7728-style L1SLOAD precompile (0x10001) by collecting L1 storage reads during preflight (with RPC fallback), fetching eth_getProof + header chain data, and verifying those proofs inside the prover to deterministically re-execute.
Changes:
- Introduces `lib::l1_precompiles` with L1SLOAD cache management, global execution locking, and MPT proof verification + cache population.
- Extends `GuestInput` to carry `l1_storage_proofs` and the needed L1 header chain for state-root verification.
- Adds preflight RPC fallback + proof/header collection, plus prover-side verification/initialization paths; refactors RPC proof batching to be reusable for L1SLOAD.
Reviewed changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `lib/src/lib.rs` | Exposes the new `l1_precompiles` module. |
| `lib/src/l1_precompiles/mod.rs` | Public API surface for L1SLOAD helpers + RPC fallback re-exports. |
| `lib/src/l1_precompiles/l1sload.rs` | Implements the L1SLOAD global lock, cache ops, header-chain state-root map, and MPT proof verification. |
| `lib/src/input.rs` | Adds `L1StorageProof` and new `GuestInput` fields (`l1_storage_proofs`, `l1_headers`). |
| `lib/src/builder/mod.rs` | Populates/clears the L1SLOAD cache before block execution for header calculation paths. |
| `lib/src/anchor.rs` | Adds a shared helper to extract anchor block info from the anchor tx by fork. |
| `lib/Cargo.toml` | Adds the `alloy-trie` dependency needed for proof verification. |
| `core/src/provider/rpc.rs` | Refactors proof fetching into a shared helper; adds `get_l1_storage_proofs`. |
| `core/src/provider/mod.rs` | Extends `BlockDataProvider` with `get_l1_storage_proofs`. |
| `core/src/preflight/util.rs` | Adds L1SLOAD call → (proofs + headers) collection utilities. |
| `core/src/preflight/mod.rs` | Wires RPC fallback execution, served-call capture, and L1 proof/header fetching into preflight + batch preflight. |
| `core/src/lib.rs` | Adds prover-side L1SLOAD preparation (lock + verify/populate) before execution. |
| `Cargo.toml` | Pins `alethia-reth-*` to a specific revision and adds `alloy-trie`. |
| `Cargo.lock` | Lockfile updates for the new/updated dependencies. |
Comments suppressed due to low confidence (2)
lib/src/builder/mod.rs:344
- Same concurrency issue in the batch loop: the L1SLOAD cache is cleared/populated without the global execution lock, and the subsequent `execute_transactions` runs without serialization. If `calculate_batch_blocks_final_header` is called concurrently with other proving/preflight work, global L1SLOAD state can be corrupted. Wrap each iteration's clear → populate → execute → finalize in `acquire_l1sload_lock()` and keep the guard alive until after the block is finalized.
```rust
clear_l1sload_cache();
if input.inputs[i].chain_spec.is_taiko() {
    let anchor_tx = input.inputs[i]
        .taiko
        .anchor_tx
        .as_ref()
        .expect("anchor tx required for L1SLOAD in batch");
    let fork = input.inputs[i]
        .chain_spec
        .active_fork(
            input.inputs[i].block.header.number,
            input.inputs[i].block.header.timestamp,
        )
        .expect("failed to determine active fork for L1SLOAD in batch");
    let (anchor_block_number, _) = get_anchor_tx_info_by_fork(fork, anchor_tx)
        .expect("failed to decode anchor tx info in batch");
    let l1_origin_block_number = input.inputs[i].taiko.l1_header.number;
    populate_l1sload_cache(
        &input.inputs[i].l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}
let mut builder = RethBlockBuilder::new(
    &input.inputs[i],
    create_mem_db(&mut input.inputs[i].clone()).unwrap(),
)
.set_is_first_block_in_proposal(i == 0);
let mut execute_tx = vec![input.inputs[i].taiko.anchor_tx.clone().unwrap()];
execute_tx.extend_from_slice(&pool_txs.0);
builder
    .execute_transactions(execute_tx.clone(), false)
    .expect("execute");
final_blocks.push(
    builder
        .finalize_block()
        .expect("execute single batched block"),
);
```
lib/src/builder/mod.rs:292
`clear_l1sload_cache()`/`populate_l1sload_cache()` are manipulating the global L1SLOAD precompile cache without holding the L1SLOAD execution lock. This can race with concurrent preflight/proving tasks and corrupt (or wipe) another block's cache mid-execution. Acquire `acquire_l1sload_lock()` and hold the guard for the entire clear → populate → `execute_transactions` → finalize cycle here (the same lock is already used elsewhere).
```rust
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    let anchor_tx = input
        .taiko
        .anchor_tx
        .as_ref()
        .expect("anchor tx required for L1SLOAD");
    let fork = input
        .chain_spec
        .active_fork(input.block.header.number, input.block.header.timestamp)
        .expect("failed to determine active fork for L1SLOAD");
    let (anchor_block_number, _) =
        get_anchor_tx_info_by_fork(fork, anchor_tx).expect("failed to decode anchor tx info");
    let l1_origin_block_number = input.taiko.l1_header.number;
    populate_l1sload_cache(
        &input.l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}
let mut builder = RethBlockBuilder::new(input, db);
let pool_tx = generate_transactions(
    &input.chain_spec,
    &input.taiko.block_proposed,
    &input.taiko.tx_data,
    &input.taiko.anchor_tx,
);
let cycle_tracker = CycleTracker::start("execute_transactions");
builder
    .execute_transactions(pool_tx, false)
    .expect("execute");
cycle_tracker.end();
```
# Conflicts:
#	Cargo.lock
#	lib/src/builder/mod.rs
@claude please review
Claude finished @smartprogrammer93's task in 2m 44s — View job

**Code Review: Add L1SLOAD Precompile**

This is a well-structured PR that correctly implements the E2E L1SLOAD flow across preflight and proving phases. The high-level design (RPC fallback → …) is sound.

**Critical / High**

1. **Panic on invalid RPC URL in preflight closures**

```rust
let client = alloy_rpc_client::ClientBuilder::default()
    .http(l1_rpc_url.parse().unwrap()); // ← panics on bad URL
```

Both the single-block and batch preflight install an L1 RPC fallback that constructs a new HTTP client on every cache miss, and both call `.unwrap()` on the URL parse. Fix: 1) parse the URL before the closure and propagate the error instead of unwrapping, 2) build the `RpcClient` once before the closure and clone it into the closure.

2. **Missing L1SLOAD execution lock in …**

The …

3. **Unguarded slice indexing on untrusted proof input**

Several slice operations on proof bytes (sourced from external L1 RPCs and carried through `GuestInput`) index without bounds checks:

```rust
// line 310 — no bounds check:
let mut count_data = &data[..list_header.payload_length];
// line 340 — no bounds check:
let path_bytes = &data[..path_header.payload_length];
// line 360 — no bounds check:
let value = data[..value_header.payload_length].to_vec();
// line 423 — no bounds check (though length checked at 414):
let storage_root_bytes = &data[..32]; // data was advanced past other fields — may be short
```

A malformed or truncated node will cause a panic, crashing the prover process rather than returning a verification error. All four sites should use bounds-checked access and return an error on short input.

**Medium**

4. **Global lock unconditionally acquired for all chains**

```rust
fn prepare_l1sload_for_execution(input: &GuestInput) -> RaikoResult<MutexGuard<'static, ()>> {
    let guard = acquire_l1sload_lock(); // ← always acquired
    clear_l1sload_cache();
    if input.l1_storage_proofs.is_empty() {
        return Ok(guard);
    }
    ...
}
```

This serializes every call to …

5. …

6. **Unnecessary clone**

```rust
async fn fetch_storage_proofs_internal(
    &self,
    block_number: u64,
    accounts: HashMap<Address, Vec<U256>>, // ← ownership taken...
    ...
) -> RaikoResult<MerkleProof> {
    ...
    let mut accounts = accounts.clone(); // ← ...then immediately cloned
```

Since the function already owns `accounts`, the clone is redundant.

**Low / Improvements**

7. **No unit tests for MPT proof verification** — …

8. **Duplicate L1SLOAD preflight wiring** — The logic for acquiring the lock, setting up the RPC fallback closure, calling … is duplicated between the single-block and batch paths.

9. The backward walk verifies …

**Minor Nits** — …

**Overall**

The approach is architecturally correct and the E2E design (preflight RPC fallback → proof collection → ZK-side MPT verification → deterministic re-execution) is clean. The most important issues to address before merge are items 1–3 (panic potential and the missing execution lock in …).
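The bounds-checked slicing suggested in item 3 above could be a small helper along these lines (a sketch; the error type is simplified to `String` for illustration):

```rust
/// Take the first `n` bytes of `data`, returning a verification error
/// instead of panicking when the input is truncated.
fn take_prefix(data: &[u8], n: usize) -> Result<&[u8], String> {
    data.get(..n)
        .ok_or_else(|| format!("truncated MPT node: wanted {n} bytes, have {}", data.len()))
}
```

Each `&data[..len]` site in the proof decoder would then become `take_prefix(data, len)?`, turning malformed prover input into a recoverable error.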
Pull request overview
Copilot reviewed 15 out of 16 changed files in this pull request and generated 4 comments.
…ove unnecessary clone, format
@claude please review
Claude finished @smartprogrammer93's task in 2m 52s — View job

**Code Review: Add L1SLOAD Precompile (Updated)**

This is a re-review following the author's responses. Several previous issues were fixed; one new critical issue was found in the ZK guest execution path.

**Status of Previous Issues** — …

**Critical: ZK guest path skips L1 storage proof verification**

```rust
// lib/src/builder/mod.rs:259-277 (called by RISC0/SP1/SGX guests)
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    ...
    populate_l1sload_cache( // ← just stores values, no MPT verification
        &input.l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}
```

The native prover (…) verifies the proofs, but this guest path only populates the cache. Impact: a malicious prover could supply arbitrary L1 storage values in `l1_storage_proofs` and the circuit would accept them. The fix is to replace the unverified populate with the verifying path (`verify_and_populate_l1sload_proofs`).

**High: Lock still missing in …**
Pull request overview
Copilot reviewed 24 out of 26 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (2)
lib/src/builder/mod.rs:293
`calculate_block_header` clears and populates the process-global L1SLOAD cache, then executes the EVM (`builder.execute_transactions`) without holding the L1SLOAD execution mutex. Since the cache/context are global, concurrent executions in the same process can race and read/write the wrong cache entries. Acquire `acquire_l1sload_lock()` and hold the guard across the full clear → populate → execute sequence (and do the same for the batch path).
```rust
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    let anchor_tx = input
        .taiko
        .anchor_tx
        .as_ref()
        .expect("anchor tx required for L1SLOAD");
    let fork = input
        .chain_spec
        .active_fork(input.block.header.number, input.block.header.timestamp)
        .expect("failed to determine active fork for L1SLOAD");
    let (anchor_block_number, _) =
        get_anchor_tx_info_by_fork(fork, anchor_tx).expect("failed to decode anchor tx info");
    let l1_origin_block_number = input.taiko.l1_header.number;
    populate_l1sload_cache(
        &input.l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}
let pool_tx = generate_transactions(
    &input.chain_spec,
    &input.taiko.block_proposed,
    &input.taiko.tx_data,
    &input.taiko.anchor_tx,
);
let guest_input = mem::take(input);
let mut builder = RethBlockBuilder::new(guest_input, db);
let cycle_tracker = CycleTracker::start("execute_transactions");
builder
    .execute_transactions(pool_tx, false)
    .expect("execute");
```
lib/src/builder/mod.rs:347
`calculate_batch_blocks_final_header` clears/populates the process-global L1SLOAD cache and then executes transactions for each block, but it does not hold the global L1SLOAD mutex during this sequence. This can race with other concurrent proving/execution tasks in the same process. Hold `acquire_l1sload_lock()` for the full clear → populate → execute cycle (ideally once per block iteration).
```rust
clear_l1sload_cache();
if input.inputs[i].chain_spec.is_taiko() {
    let anchor_tx = input.inputs[i]
        .taiko
        .anchor_tx
        .as_ref()
        .expect("anchor tx required for L1SLOAD in batch");
    let fork = input.inputs[i]
        .chain_spec
        .active_fork(
            input.inputs[i].block.header.number,
            input.inputs[i].block.header.timestamp,
        )
        .expect("failed to determine active fork for L1SLOAD in batch");
    let (anchor_block_number, _) = get_anchor_tx_info_by_fork(fork, anchor_tx)
        .expect("failed to decode anchor tx info in batch");
    let l1_origin_block_number = input.inputs[i].taiko.l1_header.number;
    populate_l1sload_cache(
        &input.inputs[i].l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}
// First, create the MemDb using a mutable reference (no clone needed —
// create_mem_db only mem::takes `contracts` and storage `slots`).
let db = create_mem_db(&mut input.inputs[i]).unwrap();
// Then, take ownership of the GuestInput for the builder (no clone needed —
// parent_state_trie and parent_storage tries are still intact after create_mem_db).
let guest_input = mem::take(&mut input.inputs[i]);
let mut builder =
    RethBlockBuilder::new(guest_input, db).set_is_first_block_in_proposal(i == 0);
let mut execute_tx = vec![builder.input.taiko.anchor_tx.clone().unwrap()];
execute_tx.extend_from_slice(&pool_txs.0);
builder
    .execute_transactions(execute_tx.clone(), false)
    .expect("execute");
```
@claude review
Claude finished @smartprogrammer93's task in 2m 12s — View job

**Code Review: Add L1SLOAD Precompile (Third Review)**

Good progress since the last review. The two critical issues are now resolved.

**Status of Previously Flagged Issues** — …

**Remaining Issues**

1. **Missing lock in …** — This was flagged in both the first and second reviews and confirmed again by Copilot. The fix is straightforward:

```rust
// In calculate_block_header:
let _l1sload_guard = acquire_l1sload_lock(); // add this
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    verify_and_populate_l1sload_proofs(...).expect("...");
}
// hold guard through execute_transactions
```

2. The backward walk verifies …; each header's block number should also be checked against its position:

```rust
let expected_number = l1_origin_number - 1 - position as u64;
if header.number != expected_number {
    bail!("L1 header block number mismatch at position {}: expected {}, got {}",
        position, expected_number, header.number);
}
```

3. **Minor** — The Copilot comment on line 872 about using either `keccak([])` or `keccak([0x80])` remains. Author acknowledged; a follow-up PR is planned.

**Overall**

The critical ZK soundness issue (the ZK circuit accepting unverified L1 storage values) and all bounds-check panics are now fixed — this PR is in good shape. The two remaining items are a defensive lock in …
…, and clean up review artifacts
@claude review again and be concise!
Claude finished @jmadibekov's task in 1m 7s — View job

**Code Review (Final)**

All previously flagged issues are now resolved. This PR is ready to merge. The one acknowledged non-fix (test fixture for …) is left for a follow-up.
Gohnnyman
left a comment
Looks great!
I thought maybe we should merge this PR after the real-time proving & ZISK work, as those have higher priority and are already somewhat tested? I'll help with the rebasing.
What is L1SLOAD?
L1SLOAD (https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7728.md) is a precompile at address `0x10001` that lets L2 smart contracts read L1 Ethereum storage. For example, an L2 DeFi protocol can check an L1 oracle price or verify L1 token balances without a bridge or messaging system.

An L2 transaction calls the precompile with 84 bytes - an L1 contract address (20 bytes), a storage key (32 bytes), and an L1 block number (32 bytes) - and gets back the 32-byte storage value from that slot at that block.
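Based on that description, the 84-byte call input could be assembled as in this sketch (the layout is taken from the text above; the exact padding convention for the block number is an assumption, shown here left-padded big-endian to 32 bytes):

```rust
/// Encode an L1SLOAD call input:
/// 20-byte L1 address ++ 32-byte storage key ++ 32-byte block number.
fn encode_l1sload_input(
    address: [u8; 20],
    storage_key: [u8; 32],
    block_number: u64,
) -> Vec<u8> {
    let mut input = Vec::with_capacity(84);
    input.extend_from_slice(&address);
    input.extend_from_slice(&storage_key);
    let mut bn = [0u8; 32];
    bn[24..].copy_from_slice(&block_number.to_be_bytes()); // left-padded big-endian
    input.extend_from_slice(&bn);
    input
}
```

The precompile at `0x10001` would then return the 32-byte storage value for that slot at that block.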
What this PR does
Adds e2e L1SLOAD support to Raiko's preflight (witness collection) and proving (verification) phases.
E2E flow
Block range
Valid L1SLOAD requests must target a block in `[l1origin - 256, l1origin]`. This is enforced both by the precompile at runtime (in alethia-reth) and by the prover's state root map (only populated for blocks in this range).

RPC fallback for indirect calls
During preflight, an RPC callback is set via `set_l1_rpc_fetcher()`. This handles cache misses from indirect L1SLOAD calls — when a smart contract internally calls `0x10001`, the callback fetches the value from L1, caches it, and records the call for later proof collection. During proving, the callback is not set — cache misses are hard errors.

Unit tests
25 unit tests added in `lib/src/l1_precompiles/l1sload.rs`.
Test output
Cross references
eth_getProof: https://eips.ethereum.org/EIPS/eip-1186