
Add L1SLOAD precompile #50

Open
jmadibekov wants to merge 25 commits into master from feat/l1sload-raiko-v2

Conversation

@jmadibekov jmadibekov commented Feb 23, 2026

What is L1SLOAD?

L1SLOAD (https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7728.md) is a precompile at address 0x10001 that lets L2 smart contracts read L1 Ethereum storage. For example, an L2 DeFi protocol can check an L1 oracle price or verify L1 token balances without a bridge or messaging system.

An L2 transaction calls the precompile with 84 bytes - an L1 contract address (20 bytes), a storage key (32 bytes), and an L1 block number (32 bytes) - and gets back the 32-byte storage value from that slot at that block.
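The 84-byte input layout described above can be sketched as follows. This is an illustrative encoding helper, not Raiko's or alethia-reth's actual API; the assumption (per RIP-7728's fixed-width fields) is that the block number occupies the final 32 bytes big-endian:

```rust
// Hypothetical sketch of the 84-byte L1SLOAD calldata layout:
// [0..20] L1 address, [20..52] storage key, [52..84] block number (big-endian).
const L1SLOAD_INPUT_LEN: usize = 84;

fn encode_l1sload_input(
    address: [u8; 20],
    storage_key: [u8; 32],
    block_number: u64,
) -> [u8; L1SLOAD_INPUT_LEN] {
    let mut input = [0u8; L1SLOAD_INPUT_LEN];
    input[..20].copy_from_slice(&address);
    input[20..52].copy_from_slice(&storage_key);
    // u64 block number right-aligned in the 32-byte field, left-padded with zeros.
    input[76..84].copy_from_slice(&block_number.to_be_bytes());
    input
}
```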

What this PR does

Adds e2e L1SLOAD support to Raiko's preflight (witness collection) and proving (verification) phases.

E2E flow

┌─────────────────────────────── PREFLIGHT (has network) ───────────────────────────────┐
│                                                                                       │
│  1. Execute block with RPC fallback                                                   │
│     ┌──────────┐     CALL 0x10001     ┌────────────────┐    eth_getStorageAt    ┌───┐ │
│     │ L2 Block │ ──────────────────►  │ L1SLOAD cache  │ ───────────────────►   │L1 │ │
│     │ (EVM)    │ ◄── return value ──  │ (alethia-reth) │ ◄── storage value ──   │RPC│ │
│     └──────────┘                      └────────────────┘                        └───┘ │
│                                           │ records                                   │
│                                           ▼ (addr, key, block)                        │
│  2. Collect Merkle proofs for all recorded calls                                      │
│     ┌─────────────────────┐     eth_getProof      ┌───┐                               │
│     │ L1StorageProof[]    │ ◄──────────────────── │L1 │   (account + storage proofs)  │
│     │ + L1 header chain   │                       │RPC│   + headers [l1origin-N..     │
│     └─────────────────────┘                       └───┘     l1origin-1]               │
│                                                                                       │
└───────────────────────────────────────────┬───────────────────────────────────────────┘
                                            │ GuestInput
                                            ▼
┌────────────────────────── PROVING (no network, inside ZK) ────────────────────────────┐
│                                                                                       │
│  3. Verify header chain (backward walk from L1 origin)                                │
│     L1 origin header ◄── parent_hash ── header[N-1] ◄── ... ◄── header[N-k]           │
│         │ (trusted: verified on-chain via Proposal.originBlockHash)                   │
│         ▼                                                                             │
│     block_number → state_root map                                                     │
│                                                                                       │
│  4. Verify each proof via MPT                                                         │
│     state_root ──► account proof (verify_proof) ──► storage_root                      │
│                                                        │                              │
│                                    storage proof (verify_proof) ──► value             │
│                                                                                       │
│  5. Populate L1SLOAD cache with verified values → re-execute block deterministically  │
│                                                                                       │
└───────────────────────────────────────────────────────────────────────────────────────┘
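Step 3's backward walk can be sketched roughly as below. Types and names here are illustrative stand-ins (Raiko's real implementation recomputes each header's hash as keccak256 of its RLP encoding rather than trusting a `hash` field, and uses alloy header types):

```rust
use std::collections::HashMap;

// Hypothetical header stub; real code would derive `hash` from the header's RLP.
#[derive(Clone)]
struct HeaderStub {
    number: u64,
    hash: [u8; 32],
    parent_hash: [u8; 32],
    state_root: [u8; 32],
}

// Walk backward from the trusted L1 origin: each ancestor must hash to the
// previous header's parent_hash and carry the expected block number, yielding
// a block_number -> state_root map. Returns None on any linkage break.
fn build_state_root_map(
    origin: &HeaderStub,
    ancestors: &[HeaderStub], // ascending [origin-N .. origin-1]
) -> Option<HashMap<u64, [u8; 32]>> {
    let mut map = HashMap::new();
    map.insert(origin.number, origin.state_root);
    let mut expected_hash = origin.parent_hash;
    let mut expected_number = origin.number;
    for header in ancestors.iter().rev() {
        expected_number = expected_number.checked_sub(1)?;
        if header.hash != expected_hash || header.number != expected_number {
            return None; // broken chain or wrong block number
        }
        map.insert(header.number, header.state_root);
        expected_hash = header.parent_hash;
    }
    Some(map)
}
```

Checking `header.number` alongside `header.hash` guards against an otherwise well-linked header set being inserted under the wrong block numbers.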

Block range

Valid L1SLOAD requests must target a block in [l1origin - 256, l1origin]. This is enforced both by the precompile at runtime (in alethia-reth) and by the prover's state root map (only populated for blocks in this range).
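A minimal sketch of that window check (the constant name matches the `L1SLOAD_MAX_BLOCK_LOOKBACK` re-export mentioned later in this PR; the function name is illustrative):

```rust
// 256-block lookback window, inclusive on both ends: [l1origin - 256, l1origin].
const L1SLOAD_MAX_BLOCK_LOOKBACK: u64 = 256;

fn is_valid_l1sload_block(requested: u64, l1_origin: u64) -> bool {
    requested <= l1_origin
        && requested >= l1_origin.saturating_sub(L1SLOAD_MAX_BLOCK_LOOKBACK)
}
```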

RPC fallback for indirect calls

During preflight, an RPC callback is set via set_l1_rpc_fetcher(). This handles cache misses from indirect L1SLOAD calls — when a smart contract internally calls 0x10001, the callback fetches the value from L1, caches it, and records the call for later proof collection. During proving, the callback is not set — cache misses are hard errors.
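The two-phase cache behavior above (fallback during preflight, hard error during proving) can be sketched as follows. The struct and field names are hypothetical; Raiko's actual cache is global state behind `set_l1_rpc_fetcher()` / `take_l1_rpc_served_calls()`:

```rust
use std::collections::HashMap;

type SlotKey = (u64, [u8; 20], [u8; 32]); // (block_number, address, storage_key)
type Fetcher = Box<dyn Fn(&SlotKey) -> Option<[u8; 32]>>;

// Hypothetical cache: `fetcher` is Some during preflight, None during proving.
struct L1sloadCache {
    values: HashMap<SlotKey, [u8; 32]>,
    fetcher: Option<Fetcher>,
    served_calls: Vec<SlotKey>, // recorded for later proof collection
}

impl L1sloadCache {
    fn get(&mut self, key: SlotKey) -> Result<[u8; 32], &'static str> {
        if let Some(v) = self.values.get(&key) {
            return Ok(*v);
        }
        match &self.fetcher {
            Some(fetch) => {
                // Preflight cache miss: fetch from L1, cache, and record the
                // call so eth_getProof can be run for it afterwards.
                let v = fetch(&key).ok_or("L1 RPC fetch failed")?;
                self.values.insert(key, v);
                self.served_calls.push(key);
                Ok(v)
            }
            // Proving: no network, so a miss means the witness is incomplete.
            None => Err("L1SLOAD cache miss with no RPC fallback"),
        }
    }
}
```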

Unit tests

25 unit tests added in lib/src/l1_precompiles/l1sload.rs

How to run

# L1SLOAD tests only
cargo test -p raiko-lib --lib l1_precompiles::l1sload::tests

# Full raiko-lib suite (71 pass, 2 ignored — unrelated to L1SLOAD)
cargo test -p raiko-lib

Test output

running 25 tests
test l1_precompiles::l1sload::tests::test_acquire_lock_returns_guard ... ok
test l1_precompiles::l1sload::tests::test_block_number_from_b256_max_u64 ... ok
test l1_precompiles::l1sload::tests::test_block_number_from_b256_overflow ... ok
test l1_precompiles::l1sload::tests::test_block_number_from_b256_valid ... ok
test l1_precompiles::l1sload::tests::test_block_number_from_b256_zero ... ok
test l1_precompiles::l1sload::tests::test_get_leaf_value_branch_node_rejected ... ok
test l1_precompiles::l1sload::tests::test_get_leaf_value_empty_proof ... ok
test l1_precompiles::l1sload::tests::test_get_leaf_value_extension_node_rejected ... ok
test l1_precompiles::l1sload::tests::test_get_leaf_value_leaf_node ... ok
test l1_precompiles::l1sload::tests::test_get_storage_root_invalid_rlp ... ok
test l1_precompiles::l1sload::tests::test_get_storage_root_large_balance ... ok
test l1_precompiles::l1sload::tests::test_get_storage_root_not_a_list ... ok
test l1_precompiles::l1sload::tests::test_get_storage_root_valid_account ... ok
test l1_precompiles::l1sload::tests::test_get_storage_root_zero_nonce_zero_balance ... ok
test l1_precompiles::l1sload::tests::test_populate_cache_empty ... ok
test l1_precompiles::l1sload::tests::test_populate_cache_stores_values ... ok
test l1_precompiles::l1sload::tests::test_state_root_map_broken_chain ... ok
test l1_precompiles::l1sload::tests::test_state_root_map_chain_of_three ... ok
test l1_precompiles::l1sload::tests::test_state_root_map_origin_only ... ok
test l1_precompiles::l1sload::tests::test_state_root_map_single_parent ... ok
test l1_precompiles::l1sload::tests::test_state_root_map_wrong_block_number ... ok
test l1_precompiles::l1sload::tests::test_verify_empty_proofs_succeeds ... ok
test l1_precompiles::l1sload::tests::test_verify_proof_missing_state_root ... ok
test l1_precompiles::l1sload::tests::test_verify_proof_nonexistent_account_empty_proof ... ok
test l1_precompiles::l1sload::tests::test_verify_proof_value_mismatch_fails ... ok

test result: ok. 25 passed; 0 failed; 0 ignored; 0 measured; 49 filtered out; finished in 0.00s


jmadibekov and others added 10 commits February 24, 2026 15:43
The L1SloadInspector and related infrastructure (inspect flag,
create_block_executor_with_inspector, trace_l1sload_calls stub) are
dead code. NMC handles L1SLOAD autonomously via direct L1 RPC calls,
so no proposer/driver tracing is needed. Raiko detects L1SLOAD calls
via direct transaction scanning in collect_l1_storage_proofs().

Removed:
- lib/src/builder/l1sload_inspector.rs (entire file)
- inspect field from TaikoWithOptimisticBlockExecutor
- create_block_executor_with_inspector method
- trace_l1sload_calls stub from preflight/util.rs
- Unused parameters from collect_l1_storage_proofs
Update all alethia-reth dependencies to the feat/l1sload-precompile-nmc
branch which is based on NMC's main and includes the L1SLOAD precompile
with the correct 84-byte API (address + storage key + block number).

Also fixes compilation issues from cherry-pick conflicts:
- RpcBlockDataProvider::new() signature (1 arg, not 2)
- Remove block_numbers assertion (field doesn't exist on master)
- Use TaikoBlock type in collect_l1_storage_proofs signature
…hasta mode

In Shasta mode, l1_header is set to l1_inclusion_block - 1 which differs
from the anchor block referenced by the anchor transaction. The L1SLOAD
code incorrectly compared anchor_state_root against l1_header.state_root,
causing "Anchor state root mismatch" errors for every batch.

Changes:
- Remove broken anchor state root validation in preflight (both single-block
  and batch paths) that compared two different L1 blocks' state roots
- Fix execute_transactions() and execute_transaction_batch() to extract
  the actual anchor block info from the anchor tx instead of using l1_header
- Re-export get_anchor_tx_info_by_fork for use in the execution phase

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- PR #2: Add L1SLOAD execution lock to serialize concurrent block
  executions that use L1SLOAD, preventing global cache races.
- PR #4: Verify newest ancestor header immediately precedes anchor block
  in build_verified_state_root_map.
- PR #5: Optimize get_and_verify_value to single verification pass
  instead of double-verifying for existing keys.
- PR #6: Distinguish extension vs leaf nodes in get_leaf_value via HP
  flag check, preventing misparse of extension nodes.
- PR #7: Include l1_storage_proofs in single-block GuestInput (was
  missing due to ..Default::default()).
- PR #9: Replace .expect() with proper ? error propagation in L1SLOAD
  verification paths.
- PR #10: Batch storage key proof collection by address to reduce RPC
  calls from one-per-key to one-per-block-number.
- PR #14: Create l1_provider once before spawn loop and clone into
  each chunk task instead of creating per-chunk.
Update alethia-reth dependency to 136e51a0 which replaces eprintln!
with tracing::trace! in the L1SLOAD precompile.
…oofs

When the MPT proof terminates at a branch node (non-existent account),
get_leaf_value was misinterpreting the branch's child hashes as leaf
path+value because it only checked the HP flag without first verifying
the node type via element count. A branch node's first child hash
starting with 0x3X (75% chance) passes the HP flag >= 2 check,
causing garbage value extraction and proof verification failure.

Fix: count RLP list elements before HP flag check (17 = branch,
2 = leaf/extension), matching alloy-trie's TrieNode::decode pattern.
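The element-count-first classification described in this commit can be sketched as below. This stands in for real RLP decoding (alloy-rlp/alloy-trie in Raiko); only the 17-vs-2 discrimination and the hex-prefix flag rule are taken from the commit message:

```rust
// Classify an MPT node by its RLP list element count BEFORE reading HP flags:
// 17 elements = branch (16 children + value), 2 = leaf or extension.
#[derive(Debug, PartialEq)]
enum NodeKind {
    Branch,
    LeafOrExtension,
}

fn classify_node(element_count: usize) -> Result<NodeKind, String> {
    match element_count {
        17 => Ok(NodeKind::Branch),
        2 => Ok(NodeKind::LeafOrExtension),
        n => Err(format!("invalid MPT node: {} elements", n)),
    }
}

// Only once a node is known to be a 2-element node is the hex-prefix flag
// meaningful: flags 2 and 3 mark a leaf, 0 and 1 an extension node.
fn is_leaf(first_path_nibble: u8) -> bool {
    first_path_nibble >= 2
}
```

Checking element count first is exactly what prevents a branch node's child hash (which starts with nibble 0x3 with 25% probability per hex digit position) from passing the `>= 2` leaf check.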
@jmadibekov jmadibekov changed the title Add L1SLOAD prover support Add L1SLOAD (RIP-7728) proof collection and ZK verification for Taiko L2 Mar 3, 2026
Add [jmadibekov] tags to:
- preflight: scan detection, proof collection, indirect calls
- proving: verify/populate cycle, state root map, MPT verification
- cache: populate/clear operations
The log line at preflight/mod.rs:168 referenced `input.l1_storage_proofs`
etc., but `input` (GuestInput) isn't constructed until a few lines later.
The correct references are the local tuple bindings: `l1_storage_proofs`,
`l1_ancestor_headers`, `l1_successor_headers`.
…allback

- Delete collect_l1_storage_proofs() (~210 lines) — all L1SLOAD calls now
  discovered via RPC fallback during execution, proofs fetched after
- Remove pre-scan/merge logic from preflight() and batch_preflight()
- Remove redundant lookback validation and range checks (already validated
  by the precompile at runtime)
- Eliminate all [jmadibekov] debug logs; only log when L1SLOAD calls exist
- Trim verbose doc comments on verification functions and input fields
- Remove unused L1SLOAD_MAX_BLOCK_LOOKBACK re-export

E2E verified: all scenarios (direct, indirect, forward/backward block reads,
error cases) pass, blocks 22-24 proved and accepted on-chain with zero
log noise for non-L1SLOAD blocks.
- Merge fetch_l1_ancestor_headers + fetch_l1_successor_headers into
  fetch_l1_headers_in_range (~30 lines saved)
- Remove redundant linear search in fetch_l1_proofs_for_rpc_served_calls;
  reconstruct B256 from u64 directly (~10 lines saved)
- Remove dead code: unreachable `sorted_numbers.len() == 1` branch in
  build_verified_state_root_map
- Hoist nibbles/proof_refs construction in get_and_verify_value
- Downgrade redundant info logs to debug in verify_and_populate_l1sload_proofs
…nchor

Simplify L1SLOAD state root verification to use a single backward walk
from L1 origin header, replacing the bidirectional walk from anchor.
This is possible because L1 origin is already trusted (verified via
verify_shasta_anchor_linkage and bound to on-chain originBlockHash).

Changes:
- Merge l1_ancestor_headers + l1_successor_headers into single l1_headers
- build_verified_state_root_map takes l1_origin_header + headers array
- Change valid range from [anchor-256, l1origin] to [l1origin-256, l1origin]
- fetch_l1_proofs uses l1_origin_block_number as reference point
- Remove @jmadibekov tags from gaiko TODO comments
The l1_headers array does NOT include the L1 origin itself (the range is
exclusive on the upper end). So the initial expected_hash must be the
origin's parent_hash, not its own hash, since the first header checked
in reverse is block (l1_origin - 1).
The gaiko TODO comment cleanup was not needed and hasn't been pushed
to the gaiko remote. Reverting the submodule pointer back.
@jmadibekov jmadibekov changed the title Add L1SLOAD (RIP-7728) proof collection and ZK verification for Taiko L2 Add L1SLOAD (RIP-7728) proof collection and verification for Taiko L2 Mar 6, 2026
@jmadibekov jmadibekov changed the title Add L1SLOAD (RIP-7728) proof collection and verification for Taiko L2 Add L1SLOAD precompile Mar 6, 2026
@jmadibekov jmadibekov marked this pull request as ready for review March 6, 2026 12:20
@smartprogrammer93

@claude please review


Copilot AI left a comment


Pull request overview

Adds end-to-end support for the RIP-7728-style L1SLOAD precompile (0x10001) by collecting L1 storage reads during preflight (with RPC fallback), fetching eth_getProof + header chain data, and verifying those proofs inside the prover to deterministically re-execute.

Changes:

  • Introduces lib::l1_precompiles with L1SLOAD cache management, global execution locking, and MPT proof verification + cache population.
  • Extends GuestInput to carry l1_storage_proofs and the needed L1 header chain for state-root verification.
  • Adds preflight RPC fallback + proof/header collection, plus prover-side verification/initialization paths; refactors RPC proof batching to be reusable for L1SLOAD.

Reviewed changes

Copilot reviewed 13 out of 14 changed files in this pull request and generated 7 comments.

File summary:

  • lib/src/lib.rs: Exposes the new l1_precompiles module.
  • lib/src/l1_precompiles/mod.rs: Public API surface for L1SLOAD helpers + RPC fallback re-exports.
  • lib/src/l1_precompiles/l1sload.rs: Implements L1SLOAD global lock, cache ops, header-chain state-root map, and MPT proof verification.
  • lib/src/input.rs: Adds L1StorageProof and new GuestInput fields (l1_storage_proofs, l1_headers).
  • lib/src/builder/mod.rs: Populates/clears L1SLOAD cache before block execution for header calculation paths.
  • lib/src/anchor.rs: Adds a shared helper to extract anchor block info from the anchor tx by fork.
  • lib/Cargo.toml: Adds alloy-trie dependency needed for proof verification.
  • core/src/provider/rpc.rs: Refactors proof fetching into a shared helper; adds get_l1_storage_proofs.
  • core/src/provider/mod.rs: Extends BlockDataProvider with get_l1_storage_proofs.
  • core/src/preflight/util.rs: Adds L1SLOAD call → (proofs + headers) collection utilities.
  • core/src/preflight/mod.rs: Wires in RPC fallback execution, served-call capture, and L1 proof/header fetching into preflight + batch preflight.
  • core/src/lib.rs: Adds prover-side L1SLOAD preparation (lock + verify/populate) before execution.
  • Cargo.toml: Pins alethia-reth-* to a specific revision and adds alloy-trie.
  • Cargo.lock: Lockfile updates for the new/updated dependencies.
Comments suppressed due to low confidence (2)

lib/src/builder/mod.rs:344

  • Same concurrency issue in the batch loop: the L1SLOAD cache is cleared/populated without the global execution lock, and the subsequent execute_transactions runs without serialization. If calculate_batch_blocks_final_header is called concurrently with other proving/preflight work, global L1SLOAD state can be corrupted. Wrap each iteration’s clear → populate → execute → finalize in acquire_l1sload_lock() and keep the guard alive until after the block is finalized.
        clear_l1sload_cache();
        if input.inputs[i].chain_spec.is_taiko() {
            let anchor_tx = input.inputs[i]
                .taiko
                .anchor_tx
                .as_ref()
                .expect("anchor tx required for L1SLOAD in batch");
            let fork = input.inputs[i]
                .chain_spec
                .active_fork(
                    input.inputs[i].block.header.number,
                    input.inputs[i].block.header.timestamp,
                )
                .expect("failed to determine active fork for L1SLOAD in batch");
            let (anchor_block_number, _) = get_anchor_tx_info_by_fork(fork, anchor_tx)
                .expect("failed to decode anchor tx info in batch");
            let l1_origin_block_number = input.inputs[i].taiko.l1_header.number;
            populate_l1sload_cache(
                &input.inputs[i].l1_storage_proofs,
                anchor_block_number,
                l1_origin_block_number,
            );
        }

        let mut builder = RethBlockBuilder::new(
            &input.inputs[i],
            create_mem_db(&mut input.inputs[i].clone()).unwrap(),
        )
        .set_is_first_block_in_proposal(i == 0);

        let mut execute_tx = vec![input.inputs[i].taiko.anchor_tx.clone().unwrap()];
        execute_tx.extend_from_slice(&pool_txs.0);
        builder
            .execute_transactions(execute_tx.clone(), false)
            .expect("execute");
        final_blocks.push(
            builder
                .finalize_block()
                .expect("execute single batched block"),
        );

lib/src/builder/mod.rs:292

  • clear_l1sload_cache() / populate_l1sload_cache() are manipulating the global L1SLOAD precompile cache without holding the L1SLOAD execution lock. This can race with concurrent preflight/proving tasks and corrupt (or wipe) another block’s cache mid-execution. Acquire acquire_l1sload_lock() and hold the guard for the entire clear → populate → execute_transactions → finalize cycle here (same lock is already used elsewhere).
    clear_l1sload_cache();
    if input.chain_spec.is_taiko() {
        let anchor_tx = input
            .taiko
            .anchor_tx
            .as_ref()
            .expect("anchor tx required for L1SLOAD");
        let fork = input
            .chain_spec
            .active_fork(input.block.header.number, input.block.header.timestamp)
            .expect("failed to determine active fork for L1SLOAD");
        let (anchor_block_number, _) =
            get_anchor_tx_info_by_fork(fork, anchor_tx).expect("failed to decode anchor tx info");
        let l1_origin_block_number = input.taiko.l1_header.number;
        populate_l1sload_cache(
            &input.l1_storage_proofs,
            anchor_block_number,
            l1_origin_block_number,
        );
    }

    let mut builder = RethBlockBuilder::new(input, db);
    let pool_tx = generate_transactions(
        &input.chain_spec,
        &input.taiko.block_proposed,
        &input.taiko.tx_data,
        &input.taiko.anchor_tx,
    );

    let cycle_tracker = CycleTracker::start("execute_transactions");
    builder
        .execute_transactions(pool_tx, false)
        .expect("execute");
    cycle_tracker.end();


@smartprogrammer93

@claude please review


claude bot commented Mar 12, 2026

Claude finished @smartprogrammer93's task in 2m 44s


Code Review: Add L1SLOAD Precompile

  • Gather context and understand the PR
  • Review lib/src/l1_precompiles/l1sload.rs
  • Review lib/src/l1_precompiles/mod.rs
  • Review core/src/lib.rs
  • Review core/src/preflight/mod.rs
  • Review core/src/preflight/util.rs
  • Review core/src/provider/rpc.rs
  • Review lib/src/input.rs, lib/src/anchor.rs, lib/src/builder/mod.rs
  • Compile and summarize findings

This is a well-structured PR that correctly implements the E2E L1SLOAD flow across preflight and proving phases. The high-level design (RPC fallback → eth_getProof collection → header-chain verification → MPT proof check → cache population) is sound. Below are issues ordered by severity.


Critical / High

1. Panic on invalid RPC URL in preflight closures
core/src/preflight/mod.rs:204 and core/src/preflight/mod.rs:539

let client = alloy_rpc_client::ClientBuilder::default()
    .http(l1_rpc_url.parse().unwrap());  // ← panics on bad URL

Both the single-block and batch preflight install an L1 RPC fallback that constructs a new HTTP client on every cache miss, and both call .unwrap(). A misconfigured RPC URL crashes the whole preflight. Beyond the panic risk, building a fresh client per call is expensive. The URL should be parsed and validated once before the closure is installed, and the client (or at minimum the parsed Url) should be captured by the closure.

Suggested fix: 1) Parse the URL before the closure and propagate the error instead of unwrapping; 2) build the RpcClient once before the closure and clone it into the closure.


2. Missing L1SLOAD execution lock in builder/mod.rs
lib/src/builder/mod.rs:262-278 (single block) and lib/src/builder/mod.rs:310-332 (batch loop)

calculate_block_final_header and calculate_batch_blocks_final_header call clear_l1sload_cache() and populate_l1sload_cache() without holding the L1SLOAD_EXECUTION_LOCK. If these functions run concurrently with any proving or preflight task that holds the lock, the global L1SLOAD state is silently corrupted mid-execution.

The acquire_l1sload_lock() pattern used correctly in core/src/lib.rs must wrap the entire clear → populate → execute_transactions cycle in these two functions as well. The Copilot review also flagged this.



3. Unguarded slice indexing on untrusted proof input
lib/src/l1_precompiles/l1sload.rs:310, 340, 360, 423

Several slice operations on proof bytes (sourced from external L1 RPCs and carried through GuestInput) do not guard against the case where payload_length exceeds the remaining data length:

// line 310 — no bounds check:
let mut count_data = &data[..list_header.payload_length];

// line 340 — no bounds check:
let path_bytes = &data[..path_header.payload_length];

// line 360 — no bounds check:
let value = data[..value_header.payload_length].to_vec();

// line 423 — no bounds check (though length checked at 414):
let storage_root_bytes = &data[..32];  // data was advanced past other fields — may be short

A malformed or truncated node will cause a panic, crashing the prover process rather than returning a verification error. All four sites should use data.get(..n).ok_or_else(|| anyhow!("...")) before slicing.
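A minimal sketch of the bounds-checked slicing pattern recommended above, using Result with a String error in place of anyhow (the helper name is illustrative):

```rust
// Return the first `payload_length` bytes, or an error if the buffer is
// shorter than the RLP header claims -- never panic on untrusted proof data.
fn take_payload(data: &[u8], payload_length: usize) -> Result<&[u8], String> {
    data.get(..payload_length).ok_or_else(|| {
        format!(
            "truncated RLP payload: need {} bytes, have {}",
            payload_length,
            data.len()
        )
    })
}
```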



Medium

4. Global lock unconditionally acquired for all chains
core/src/lib.rs:46-47

fn prepare_l1sload_for_execution(input: &GuestInput) -> RaikoResult<MutexGuard<'static, ()>> {
    let guard = acquire_l1sload_lock();  // ← always acquired
    clear_l1sload_cache();
    if input.l1_storage_proofs.is_empty() {
        return Ok(guard);
    }
    ...
}

This serializes every call to execute_transactions and execute_transaction_batch across the entire process — including non-Taiko chains. The lock and cache clear should only occur when input.chain_spec.is_taiko() is true. For non-Taiko inputs the function could return a dummy/Option<MutexGuard> to skip the overhead entirely.
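The suggested change could look roughly like this sketch, where `is_taiko` stands in for `input.chain_spec.is_taiko()` and the static lock mirrors the L1SLOAD_EXECUTION_LOCK mentioned elsewhere in the review:

```rust
use std::sync::{Mutex, MutexGuard, OnceLock};

static L1SLOAD_EXECUTION_LOCK: OnceLock<Mutex<()>> = OnceLock::new();

// Only take the global lock for Taiko inputs; non-Taiko chains get None and
// skip the process-wide serialization entirely.
fn maybe_acquire_l1sload_lock(is_taiko: bool) -> Option<MutexGuard<'static, ()>> {
    if !is_taiko {
        return None;
    }
    let lock = L1SLOAD_EXECUTION_LOCK.get_or_init(|| Mutex::new(()));
    Some(lock.lock().expect("L1SLOAD lock poisoned"))
}
```

The caller keeps the returned Option alive for the whole clear → populate → execute cycle; dropping it releases the lock.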


5. std::sync::Mutex acquired inside tokio::spawn tasks
core/src/preflight/mod.rs:528

acquire_l1sload_lock() blocks the calling thread synchronously on a std::sync::Mutex. Inside a tokio::spawn task this blocks a tokio worker thread until the lock is available, reducing the thread pool's throughput. Each chunk spawns a task that may block waiting for this lock. Consider wrapping the lock acquisition in tokio::task::block_in_place or restructuring so the serialized section sits outside the spawned tasks.


6. Unnecessary .clone() in fetch_storage_proofs_internal
core/src/provider/rpc.rs:50

async fn fetch_storage_proofs_internal(
    &self,
    block_number: u64,
    accounts: HashMap<Address, Vec<U256>>,  // ← ownership taken...
    ...
) -> RaikoResult<MerkleProof> {
    ...
    let mut accounts = accounts.clone();  // ← ...then immediately cloned

Since accounts is already owned (taken by value), the .clone() is redundant. Remove the clone and use let mut accounts = accounts; — or accept &mut HashMap<...> if mutation of the original is not required by any caller.


Low / Improvements

7. No unit tests for MPT proof verification
lib/src/l1_precompiles/l1sload.rs:40

The verify_l1_proof, get_leaf_value, and get_storage_root functions implement non-trivial, security-sensitive MPT proof verification with no test coverage. At a minimum, tests should cover: valid account+storage proof round-trip, non-existent account (empty proof), non-existent storage slot, and malformed/truncated proof nodes (to confirm errors are returned rather than panics). The Copilot review also noted this gap.


8. Duplicate L1SLOAD preflight wiring
core/src/preflight/mod.rs:191-227 and core/src/preflight/mod.rs:526-566

The logic for acquiring the lock, setting up the RPC fallback closure, calling execute_txs, collecting served calls, and tearing down the fallback is copy-pasted between the single-block and batch paths. A small helper (e.g., execute_with_l1sload_fallback(builder, pool_txs, anchor_block_id, l1_origin_block_id, l1_rpc_url)) would eliminate the duplication and ensure both paths stay in sync.


9. build_verified_state_root_map does not validate block numbers
lib/src/l1_precompiles/l1sload.rs:153-186

The backward walk verifies parent_hash linkage but does not check that header.number is monotonically decreasing and contiguous with l1_origin_number. A crafted l1_headers array with correct hashes but wrong block numbers would pass the walk and insert state roots under the wrong block numbers. Adding an assertion like header.number == l1_origin_header.number - 1 - position would close this gap.


Minor Nits

  • get_anchor_tx_info_by_fork is now duplicated: it exists in both lib/src/anchor.rs and is re-exported via core/src/preflight/util.rs. Ensure callers in core/ use the canonical location.
  • In get_leaf_value (line 356), format!("Failed to decode value header") should be a string literal to avoid allocating an empty format, or include contextual information.
  • GuestInput.l1_storage_proofs and l1_headers are marked #[serde(default)] which is correct for backward-compat, but there is no #[cfg_attr(feature = "...")] guard — make sure these fields serialize to zero-cost placeholders (empty vecs) for non-Taiko use cases where proof data is always absent.
  • The proof_type field is not carried in BatchPreflightData but is needed for the TDX early-return at line 501 — it is accessed via captured closure variable. This works but is worth a comment to explain why the outer scope's proof_type is referenced inside the spawned task.

Overall

The approach is architecturally correct and the E2E design (preflight RPC fallback → proof collection → ZK-side MPT verification → deterministic re-execution) is clean. The most important issues to address before merge are items 1–3 (panic potential and the missing execution lock in builder/mod.rs). Items 4–5 are performance issues that could become significant under load.


Copilot AI left a comment


Pull request overview

Copilot reviewed 15 out of 16 changed files in this pull request and generated 4 comments.




@smartprogrammer93

@claude please review


claude bot commented Mar 13, 2026

Claude finished @smartprogrammer93's task in 2m 52s


Code Review: Add L1SLOAD Precompile (Updated)

  • Gather context and read key files
  • Verify claimed fixes from previous review
  • Check ZK guest execution paths
  • Post final review

This is a re-review following the author's responses. Several previous issues were fixed; one new critical issue was found in the ZK guest execution path.


Status of Previous Issues

  1. Panic on invalid RPC URL: ✅ Fixed
  2. Missing lock in builder/mod.rs: ⚠️ See below
  3. Unguarded slice indexing: ✅ Fixed (bounds checks added at all sites)
  4. Global lock for all chains: 🔶 Won't fix (acceptable tradeoff per author)
  5. std::sync::Mutex in tokio::spawn: 🔶 Partially mitigated (execution uses block_in_place, but lock acquisition itself still blocks)
  6. Unnecessary .clone() in rpc.rs: ✅ Fixed
  7. No unit tests for MPT verification: ✅ Fixed (24 tests added)
  8. Duplicate preflight wiring: 🔶 Not fixed (still duplicated between single-block and batch paths)

Critical: ZK guest path skips L1 storage proof verification

lib/src/builder/mod.rs:259-278 (single block) and lib/src/builder/mod.rs:310-332 (batch)

The calculate_block_header and calculate_batch_blocks_final_header functions — called directly by all ZK guest programs (RISC0, SP1) and the SGX prover — only call populate_l1sload_cache without verifying the L1 storage proofs via MPT:

// lib/src/builder/mod.rs:259-277 (called by RISC0/SP1/SGX guests)
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    ...
    populate_l1sload_cache(       // ← just stores values, no MPT verification
        &input.l1_storage_proofs,
        anchor_block_number,
        l1_origin_block_number,
    );
}

The native prover (core/src/lib.rs:70-80) correctly calls verify_and_populate_l1sload_proofs which verifies each proof via MPT against the header-chain state roots before populating the cache. The ZK path bypasses this entirely.

Impact: A malicious prover could supply arbitrary L1 storage values in GuestInput.l1_storage_proofs (the values field, not the proofs), and the ZK circuit would accept them without any MPT verification. The ZK proof would be unsound for any L1SLOAD-using transaction — the proof would attest to correct execution, but the L1 storage reads could have used fabricated values.

The fix is to replace populate_l1sload_cache with verify_and_populate_l1sload_proofs in calculate_block_header and calculate_batch_blocks_final_header. The l1_headers and l1_header (origin) are already available in GuestInput, so the API is compatible.



High: Lock still missing in builder/mod.rs

lib/src/builder/mod.rs:259 and lib/src/builder/mod.rs:310

The previous review and Copilot both flagged that calculate_block_header / calculate_batch_blocks_final_header manipulate global L1SLOAD state without holding acquire_l1sload_lock(). While ZK guest programs are single-threaded (no race), the SGX prover (provers/sgx/guest/src/one_shot.rs:146) runs in a native environment and could be called concurrently. The lock should be acquired for correctness parity with the native prover path, and to guard against future multi-threaded calling contexts. This is especially important once the verification issue above is fixed (since verify_and_populate_l1sload_proofs accesses the same global state).


Minor: get_storage_root still has unguarded slice access

lib/src/l1_precompiles/l1sload.rs:446

// storage_root_header.payload_length is checked == 32 at line 437, but:
let storage_root_bytes = &data[..32];  // ← data may be shorter than 32 bytes

The length check at line 437 validates the RLP header's declared payload_length, but data is the remaining raw bytes after skipping nonce and balance. If those fields were encoded with unexpected sizes (e.g., due to malformed but structurally valid RLP), data could be shorter than 32 bytes. The previous response said "Already fixed — length check exists at line 414 via storage_root_header.payload_length != 32 guard." That guard validates the declared payload_length, not whether data actually contains 32 bytes. Use data.get(..32).ok_or_else(...) here.

Fix this →
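The checked-read pattern suggested above can be sketched in isolation. This is a minimal, self-contained example; `read_storage_root` is a hypothetical stand-in for the decoding logic in `get_storage_root`, not Raiko's actual function:

```rust
// Hedged sketch, not Raiko's code: the point is the bounds-checked read.
// `&data[..32]` panics when `data` is shorter than 32 bytes;
// `data.get(..32)` returns None instead, which we turn into an error.
fn read_storage_root(data: &[u8]) -> Result<[u8; 32], String> {
    let bytes = data
        .get(..32)
        .ok_or_else(|| format!("storage root truncated: got {} bytes, need 32", data.len()))?;
    let mut root = [0u8; 32];
    root.copy_from_slice(bytes);
    Ok(root)
}

fn main() {
    // A 32-byte (or longer) payload decodes fine.
    assert!(read_storage_root(&[0xaa; 32]).is_ok());
    // A truncated payload becomes an Err instead of a panic.
    assert!(read_storage_root(&[0xaa; 16]).is_err());
}
```

The error path matters in a guest context, where a panic would abort the whole proving run rather than surface a diagnosable failure.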


Minor: execute_txs iterates up to 100 times while holding the L1SLOAD lock

core/src/preflight/mod.rs:218-226

The preflight code holds _l1sload_guard across the entire multi-iteration execute_txs loop (which can run up to 100 iterations, each fetching state from the RPC). This means the global L1SLOAD lock is held for potentially a long time during preflight, blocking other concurrent preflight tasks from acquiring it. Consider releasing the lock after take_l1_rpc_served_calls() and before the RPC proof collection step, since the lock is only needed for the EVM execution part.
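The suggested early release can be sketched with `std::sync` primitives. `GLOBAL_L1SLOAD_LOCK`, `preflight`, and the vector stand-in for `take_l1_rpc_served_calls()` are illustrative assumptions, not Raiko's actual API:

```rust
use std::sync::{Mutex, MutexGuard, OnceLock};

// Hedged sketch of the suggested change: hold the global lock only for the
// EVM-execution part, then drop it before the slow RPC proof collection.
static GLOBAL_L1SLOAD_LOCK: OnceLock<Mutex<()>> = OnceLock::new();

fn acquire_l1sload_lock() -> MutexGuard<'static, ()> {
    GLOBAL_L1SLOAD_LOCK
        .get_or_init(|| Mutex::new(()))
        .lock()
        .expect("l1sload lock poisoned")
}

fn preflight() -> Vec<(u64, u64)> {
    let guard = acquire_l1sload_lock();
    // ... up to 100 execute_txs iterations touch the global cache here ...
    let served_calls = vec![(1, 10), (2, 20)]; // stand-in for take_l1_rpc_served_calls()
    drop(guard); // release before the long-running eth_getProof collection
    // ... RPC proof collection runs here without blocking other preflights ...
    served_calls
}

fn main() {
    assert_eq!(preflight(), vec![(1, 10), (2, 20)]);
}
```

Explicit `drop(guard)` keeps the critical section tight without restructuring the function into nested scopes.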


Overall

The ZK proof soundness issue is the main blocker: the ZK circuit must verify L1 storage proofs before accepting them, or any proof involving L1SLOAD could be forged. The other architectural decisions (duplicate preflight wiring, lock acquired unconditionally) are acceptable as-is. The bounds check and lock fixes in the builder are necessary for correctness and security.

Copilot AI left a comment

Pull request overview

Copilot reviewed 24 out of 26 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (2)

lib/src/builder/mod.rs:293

  • calculate_block_header clears and populates the process-global L1SLOAD cache, then executes the EVM (builder.execute_transactions) without holding the L1SLOAD execution mutex. Since the cache/context are global, concurrent executions in the same process can race and read/write the wrong cache entries. Acquire acquire_l1sload_lock() and hold the guard across the full clear → populate → execute sequence (and do the same for the batch path).
    clear_l1sload_cache();
    if input.chain_spec.is_taiko() {
        let anchor_tx = input
            .taiko
            .anchor_tx
            .as_ref()
            .expect("anchor tx required for L1SLOAD");
        let fork = input
            .chain_spec
            .active_fork(input.block.header.number, input.block.header.timestamp)
            .expect("failed to determine active fork for L1SLOAD");
        let (anchor_block_number, _) =
            get_anchor_tx_info_by_fork(fork, anchor_tx).expect("failed to decode anchor tx info");
        let l1_origin_block_number = input.taiko.l1_header.number;
        populate_l1sload_cache(
            &input.l1_storage_proofs,
            anchor_block_number,
            l1_origin_block_number,
        );
    }

    let pool_tx = generate_transactions(
        &input.chain_spec,
        &input.taiko.block_proposed,
        &input.taiko.tx_data,
        &input.taiko.anchor_tx,
    );

    let guest_input = mem::take(input);
    let mut builder = RethBlockBuilder::new(guest_input, db);

    let cycle_tracker = CycleTracker::start("execute_transactions");
    builder
        .execute_transactions(pool_tx, false)
        .expect("execute");

lib/src/builder/mod.rs:347

  • calculate_batch_blocks_final_header clears/populates the process-global L1SLOAD cache and then executes transactions for each block, but it does not hold the global L1SLOAD mutex during this sequence. This can race with other concurrent proving/execution tasks in the same process. Hold acquire_l1sload_lock() for the full clear → populate → execute cycle (ideally once per block iteration).
        clear_l1sload_cache();
        if input.inputs[i].chain_spec.is_taiko() {
            let anchor_tx = input.inputs[i]
                .taiko
                .anchor_tx
                .as_ref()
                .expect("anchor tx required for L1SLOAD in batch");
            let fork = input.inputs[i]
                .chain_spec
                .active_fork(
                    input.inputs[i].block.header.number,
                    input.inputs[i].block.header.timestamp,
                )
                .expect("failed to determine active fork for L1SLOAD in batch");
            let (anchor_block_number, _) = get_anchor_tx_info_by_fork(fork, anchor_tx)
                .expect("failed to decode anchor tx info in batch");
            let l1_origin_block_number = input.inputs[i].taiko.l1_header.number;
            populate_l1sload_cache(
                &input.inputs[i].l1_storage_proofs,
                anchor_block_number,
                l1_origin_block_number,
            );
        }

        // First, create the MemDb using a mutable reference (no clone needed —
        // create_mem_db only mem::takes `contracts` and storage `slots`).
        let db = create_mem_db(&mut input.inputs[i]).unwrap();
        // Then, take ownership of the GuestInput for the builder (no clone needed —
        // parent_state_trie and parent_storage tries are still intact after create_mem_db).
        let guest_input = mem::take(&mut input.inputs[i]);
        let mut builder =
            RethBlockBuilder::new(guest_input, db).set_is_first_block_in_proposal(i == 0);

        let mut execute_tx = vec![builder.input.taiko.anchor_tx.clone().unwrap()];
        execute_tx.extend_from_slice(&pool_txs.0);
        builder
            .execute_transactions(execute_tx.clone(), false)
            .expect("execute");


Comment thread lib/src/l1_precompiles/l1sload.rs
@smartprogrammer93

@claude review

@claude

claude bot commented Mar 17, 2026

Claude finished @smartprogrammer93's task in 2m 12s


Code Review: Add L1SLOAD Precompile (Third Review)

  • Gather context from previous reviews
  • Read lib/src/builder/mod.rs — verify ZK guest verification fix
  • Read lib/src/l1_precompiles/l1sload.rs — verify bounds checks
  • Read core/src/lib.rs — verify lock/verify flow
  • Check remaining open issues
  • Post final review

Good progress since the last review. The two critical issues are now resolved.

Status of Previously Flagged Issues

| Issue | Status |
|---|---|
| Critical: ZK guest skips MPT verification | Fixed in 2c34626: calculate_block_header and calculate_batch_blocks_final_header now call verify_and_populate_l1sload_proofs |
| High: Unguarded slice indexing at lines 310/340/360/423 | Fixed — all four sites now use .get(..n).ok_or_else(...) |
| High: get_storage_root unguarded &data[..32] | Fixed in 2c34626 at line 446 ✅ |
| High: Panic on invalid RPC URL | Fixed (URL parsed once, reusable client) ✅ |
| Medium: Unnecessary .clone() in rpc.rs | Fixed |
| 24 unit tests for MPT verification | Added |

Remaining Issues

1. Missing lock in builder/mod.rs (medium)
lib/src/builder/mod.rs:259 and lib/src/builder/mod.rs:311

calculate_block_header and calculate_batch_blocks_final_header call clear_l1sload_cache() followed by verify_and_populate_l1sload_proofs() without holding acquire_l1sload_lock(). These functions are called by RISC0/SP1 guests (single-threaded — no race) and by the SGX prover (provers/sgx/guest/src/one_shot.rs). While the SGX context may be single-threaded in practice, the lock is still necessary for:

  • Correctness parity with the native prover path (core/src/lib.rs:46), which always acquires the lock
  • Safety against future multi-threaded calling contexts

This was flagged in both the first and second reviews and confirmed again by Copilot. The fix is straightforward:

// In calculate_block_header:
let _l1sload_guard = acquire_l1sload_lock();  // add this
clear_l1sload_cache();
if input.chain_spec.is_taiko() {
    verify_and_populate_l1sload_proofs(...).expect("...");
}
// hold guard through execute_transactions

Fix this →


2. build_verified_state_root_map does not validate block numbers (low)
lib/src/l1_precompiles/l1sload.rs:173

The backward walk verifies parent_hash linkage but does not check that each header's number is correct relative to the L1 origin. A crafted l1_headers array with valid hashes but wrong .number fields would pass the chain walk and insert state roots under incorrect block numbers — causing the prover to use the wrong state root for a given block. The fix is to add a check inside the loop:

let expected_number = l1_origin_number - 1 - position as u64;
if header.number != expected_number {
    bail!("L1 header block number mismatch at position {}: expected {}, got {}",
          position, expected_number, header.number);
}

Fix this →
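The hardened walk can be sketched as a toy, self-contained program. `Header`, `hash_of`, and `build_verified_numbers` are simplified stand-ins for illustration only; a real walk hashes the RLP-encoded header with keccak256, which also commits to the number field, whereas the toy hash deliberately ignores it so the explicit number check is what catches the tampered header:

```rust
// Toy sketch of a backward header-chain walk with both checks:
// parent-hash linkage and per-position block-number validation.
#[derive(Clone)]
struct Header {
    number: u64,
    parent_hash: u64,
}

fn hash_of(h: &Header) -> u64 {
    // stand-in for keccak256(rlp(header)); intentionally number-blind
    h.parent_hash.wrapping_mul(31).wrapping_add(7)
}

// `l1_headers` is ordered newest-first: origin-1, origin-2, ...
fn build_verified_numbers(
    l1_origin_number: u64,
    origin_parent_hash: u64,
    l1_headers: &[Header],
) -> Result<Vec<u64>, String> {
    let mut expected_hash = origin_parent_hash;
    let mut numbers = Vec::new();
    for (position, header) in l1_headers.iter().enumerate() {
        if hash_of(header) != expected_hash {
            return Err(format!("parent hash mismatch at position {position}"));
        }
        // The added hardening: the declared number must match the walk.
        let expected_number = l1_origin_number - 1 - position as u64;
        if header.number != expected_number {
            return Err(format!(
                "block number mismatch at position {position}: expected {expected_number}, got {}",
                header.number
            ));
        }
        numbers.push(header.number);
        expected_hash = header.parent_hash;
    }
    Ok(numbers)
}

fn main() {
    let h98 = Header { number: 98, parent_hash: 5 };
    let h99 = Header { number: 99, parent_hash: hash_of(&h98) };
    let origin_parent = hash_of(&h99);

    // Honest chain: both checks pass.
    let ok = build_verified_numbers(100, origin_parent, &[h99.clone(), h98.clone()]);
    assert_eq!(ok.unwrap(), vec![99, 98]);

    // Tampered number with an unchanged (toy) hash: the number check rejects it.
    let bad = Header { number: 42, ..h99 };
    assert!(build_verified_numbers(100, origin_parent, &[bad, h98]).is_err());
}
```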


3. Minor: test_verify_proof_nonexistent_account_empty_proof test non-specificity (acknowledged — follow-up PR)

The Copilot comment on line 872 about using either keccak([]) or keccak([0x80]) remains. Author acknowledged, follow-up PR planned.


Overall

The critical ZK soundness issue (ZK circuit accepting unverified L1 storage values) and all bounds-check panics are now fixed — this PR is in good shape. The two remaining items are a defensive lock in builder/mod.rs (issue #1, straightforward fix) and a block-number validation in the header chain walk (issue #2). Recommend addressing issue #1 before merge; issue #2 is a useful hardening but lower urgency.

@jmadibekov
Author

@claude review again and be concise!

@claude

claude bot commented Mar 18, 2026

Claude finished @jmadibekov's task in 1m 7s


Code Review (Final)

  • Read key files
  • Post concise review

All previously flagged issues are now resolved.

| Issue | Status |
|---|---|
| Critical: ZK guest skips MPT verification | ✅ Fixed |
| High: Panic on invalid RPC URL | ✅ Fixed |
| High: Unguarded slice indexing (4 sites) | ✅ Fixed |
| High: get_storage_root unguarded &data[..32] | ✅ Fixed |
| Medium: Missing lock in builder/mod.rs (single + batch) | ✅ Fixed — acquire_l1sload_lock() now wraps the full clear → verify → execute cycle |
| Medium: Unnecessary .clone() in rpc.rs | ✅ Fixed |
| Low: build_verified_state_root_map missing block number validation | ✅ Fixed — block numbers are now checked in the backward walk |
| 25 unit tests for MPT verification | ✅ Added |

This PR is ready to merge. The one acknowledged non-fix (test fixture for test_verify_proof_nonexistent_account_empty_proof) is tracked as a follow-up PR.


@Gohnnyman Gohnnyman left a comment


Looks great!
How about merging this PR after the real-time proving & zisk work, though? Those have higher priority and have already seen some testing. I'll help with the rebasing.
