
@LouisTsai-Csie (Collaborator) commented Jul 24, 2025

🗒️ Description

As EIP-7825 is introduced in the Fusaka upgrade, most of the legacy test cases would fail. This PR adds two test wrappers, benchmark_test and benchmark_state_test, to replace the plain blockchain_test and state_test test types.

🔗 Related Issues or PRs

Issue #1896

✅ Checklist

  • All: Ran fast tox checks to avoid unnecessary CI fails, see also Code Standards and Enabling Pre-commit Checks:
    uvx --with=tox-uv tox -e lint,typecheck,spellcheck,markdownlint
  • All: PR title adheres to the repo standard - it will be used as the squash commit message and should start with type(scope):.
  • All: Considered adding an entry to CHANGELOG.md.
  • All: Considered updating the online docs in the ./docs/ directory. (will come in a separate PR)
  • All: Set appropriate labels for the changes (only maintainers can apply labels).

@LouisTsai-Csie self-assigned this Jul 24, 2025
@LouisTsai-Csie force-pushed the benchmark-test-type branch 2 times, most recently from 641036c to af00ec2 on August 8, 2025 10:07
@LouisTsai-Csie marked this pull request as ready for review August 11, 2025 09:52
@LouisTsai-Csie (Collaborator, Author) commented

There are some issues in generating the fixture. Comparing the newly created fixture to the original one, its size is much larger. This should not happen: the content should be the same, and therefore so should the size. But this is not a big problem for now.

The major issue now is resolving the failing test in CI, which I cannot currently reproduce locally.

@LouisTsai-Csie marked this pull request as draft August 14, 2025 16:30
@CPerezz (Contributor) commented Aug 29, 2025

This can come in handy for benchmark tests, as they basically force the consumption of all the gas available. That condition forces us to implement padding techniques to consume EXACTLY all the gas available in a block.

In reality, for a benchmark, we don't care about this at all.
PRs affected:

@LouisTsai-Csie (Collaborator, Author) commented

@CPerezz I think this is still necessary for the Nethermind team (increasing the gas limit) and the zkEVM team (proving the entire block)? For gas-limit testing, I am not sure they can run only 1 tx and then derive the entire block execution time from it.

@CPerezz (Contributor) commented Aug 30, 2025

> @CPerezz I think this is still necessary for the Nethermind team (increasing the gas limit) and the zkEVM team (proving the entire block)? For gas-limit testing, I am not sure they can run only 1 tx and then derive the entire block execution time from it.

But you can emit a warning if needed. Why does it need to be a failure to not spend ALL the gas exactly? I agree it has to be within a bound, sure. But precision to the unit is really different, especially when you have to account for memory expansion and other costs. It's almost impossible to not need padding.

I'm not advocating removing this completely, but maybe relaxing it. Or at least, it would be useful to know: why does it need to fail specifically? When and why was this introduced?

@LouisTsai-Csie (Collaborator, Author) commented

@CPerezz Thank you for the explanation, it is very clear! I will review the included features again and discuss with the team.

As you can see, this is still a draft and we welcome any feedback. We also want to know what the stateless client team needs for benchmarking: what are your considerations when benchmarking?

@CPerezz (Contributor) commented Sep 1, 2025

@LouisTsai-Csie I'm speaking only in regard to the "State bottlenecks" project, which is within the stateless-consensus team. Our goal is to measure how different client implementations behave under heavy load and different state sizes, among other things.

For that, we need these kinds of benchmarks. But it turns out to be quite tricky to match the gas spent perfectly, and it's not required at all. 1% of wiggle room is enough to consider the benchmark useful even if it doesn't spend all the gas of the block.
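
For illustration, a minimal sketch of what such a relaxed check could look like, assuming a simple relative bound; the function name and defaults are hypothetical, not from this PR:

```python
# Hypothetical sketch of the "wiggle room" idea discussed above: instead of
# requiring a benchmark block to consume exactly its gas limit, accept any
# consumption within a relative tolerance (1% per the suggestion here).
def gas_within_tolerance(gas_used: int, gas_limit: int, tolerance: float = 0.01) -> bool:
    """Return True if the block consumed at least (1 - tolerance) of its gas limit."""
    return gas_limit * (1 - tolerance) <= gas_used <= gas_limit


assert gas_within_tolerance(gas_used=29_800_000, gas_limit=30_000_000)  # within 1%
assert not gas_within_tolerance(gas_used=25_000_000, gas_limit=30_000_000)  # too far off
```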

@marioevz (Member) left a comment

After going through the current implementation and thinking about it, I think this PR is mostly on the right track.

My suggestions would be:

  • We have a single new spec benchmark_tests that receives setup_txs and workload_txs, or a generator.
  • We have multiple generator subclasses, all of which subclass BenchmarkCodeGenerator and implement generate_setup_txs and generate_workload_txs (and perhaps deploy_contracts).
  • Internally, benchmark_tests takes setup_txs (or calls generator.generate_setup_txs()) and, if any, generates a first setup block; it then takes workload_txs (or calls generator.generate_workload_txs()) and puts them in a different block. (A sketch of this shape follows below.)
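
A rough sketch of that shape, using the names from this comment (benchmark_tests, BenchmarkCodeGenerator, generate_setup_txs, generate_workload_txs); the real EEST types and signatures may well differ:

```python
# Illustrative only: Transaction is a stand-in for the real EEST type, and
# "blocks" are modeled as plain lists of transactions.
from abc import ABC, abstractmethod
from typing import List, Optional


class Transaction:  # stand-in for the real transaction type
    ...


class BenchmarkCodeGenerator(ABC):
    """Base class that every benchmark code generator subclasses."""

    @abstractmethod
    def generate_setup_txs(self) -> List[Transaction]:
        """Transactions that deploy contracts and prepare state."""

    @abstractmethod
    def generate_workload_txs(self) -> List[Transaction]:
        """Transactions that perform the actual benchmarked work."""


def benchmark_tests(
    setup_txs: Optional[List[Transaction]] = None,
    workload_txs: Optional[List[Transaction]] = None,
    generator: Optional[BenchmarkCodeGenerator] = None,
) -> List[List[Transaction]]:
    """Build an optional setup block first, then the workload block."""
    setup = setup_txs if setup_txs is not None else generator.generate_setup_txs()
    workload = workload_txs if workload_txs is not None else generator.generate_workload_txs()
    blocks: List[List[Transaction]] = []
    if setup:  # only emit a setup block if there is setup to do
        blocks.append(setup)
    blocks.append(workload)  # workload goes in its own, separate block
    return blocks
```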

@LouisTsai-Csie (Collaborator, Author) commented Sep 11, 2025

I refactored the helper functions and added the context manager feature.

During the update, some questions and TODOs came to mind:

  • Where would be the best place for the benchmark_code_generator.py file? Right now it is under ethereum_test_benchmark. I originally put it under ethereum_test_tools, but I kept facing circular import issues between the ethereum_test_tools and ethereum_test_specs packages.
  • I have not yet removed the benchmark_state_test fixture; I will do so after we confirm with the geth team that it is not necessary.
  • Should we also add metadata here, like in the PR feat(execute): Add identifiers to sent txs #2056?

@LouisTsai-Csie marked this pull request as ready for review September 11, 2025 14:45
@marioevz (Member) commented Sep 11, 2025

Regarding the questions you have:

> Where would be the best place for the benchmark_code_generator.py file? Right now it is under ethereum_test_benchmark. I originally put it under ethereum_test_tools, but I kept facing circular import issues between the ethereum_test_tools and ethereum_test_specs packages.

I think having the ethereum_test_benchmark package is great, because we are going to keep growing the tools we use for benchmarking in the repo.

Maybe we could move the abstract class BenchmarkCodeGenerator to src/ethereum_test_specs/benchmark.py (while leaving JumpLoopGenerator and ExtCallGenerator in src/ethereum_test_benchmark/benchmark_code_generator.py), because then you can use it as an input field to BenchmarkTest/BenchmarkStateTest and avoid the circular dependency.
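
For illustration, the layout this suggestion implies might look like the sketch below (module paths from the thread, contents hypothetical); with the abstract class living in ethereum_test_specs, the generator package only imports "downward" and the cycle disappears:

```python
# --- src/ethereum_test_specs/benchmark.py (sketch) ---
from abc import ABC, abstractmethod


class BenchmarkCodeGenerator(ABC):
    @abstractmethod
    def generate_setup_txs(self): ...

    @abstractmethod
    def generate_workload_txs(self): ...


# --- src/ethereum_test_benchmark/benchmark_code_generator.py (sketch) ---
# ethereum_test_specs never imports ethereum_test_benchmark, so no cycle:
from ethereum_test_specs.benchmark import BenchmarkCodeGenerator


class JumpLoopGenerator(BenchmarkCodeGenerator): ...


class ExtCallGenerator(BenchmarkCodeGenerator): ...
```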

> I have not yet removed the benchmark_state_test fixture; I will do so after we confirm with the geth team that it is not necessary.

Sgtm, I'm still open to being convinced that we indeed need it.

> Should we also add metadata here, like how it is done in the PR?

That might be out of scope for this PR; we should leave that for the PR that touches the execute command, to better align it with the new formats.

@marioevz (Member) left a comment

Looking really good! I think the code generators are fantastic, and the only part I feel we should take out and move into another PR is the BenchmarkManager.

@CPerezz (Contributor) commented Sep 16, 2025

Unsure if this is somehow related, but mentioning it here just in case.

In #2090 we arrived at the following conclusion:

State-related tests can only execute in 2 ways:

  1. You use stubbed contracts (feat(execute): Support for contract address stubs #2073), because the state already has the contracts/accounts deployed.
  2. You deploy the contracts/accounts and then proceed as in 1.

For that reason, we realized that benchmark-state-tests always end up being executed in execute mode, never in fill.

Therefore, the way I found to profit from this dual mode is to let fill mode take care of the pre-state deployment/generation (making sure it doesn't run in case it identifies that the state it is about to deploy already exists).
Notice here that things like gas_benchmark_value are useful, as they let us understand how much gas we want to spend in execute mode and deploy as many contracts/accounts as necessary to enable such an attack, using CREATE2 deterministic addressing for example.
Then, execute mode runs as a usual benchmark test, though things like #2155 would come in handy to make our life easier.
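
As a side note, the reason CREATE2 gives fill mode a predictable pre-state is the deterministic address formula from EIP-1014; a small sketch of that computation (using eth_utils.keccak; this is background, not code from this PR):

```python
# EIP-1014: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
from eth_utils import keccak


def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    """Compute the deterministic CREATE2 deployment address."""
    assert len(deployer) == 20 and len(salt) == 32
    return keccak(b"\xff" + deployer + salt + keccak(init_code))[12:]


# Because the address depends only on (deployer, salt, init_code), fill mode
# can pre-compute where every benchmark contract will live and skip deployment
# when the account already exists in the target state.
addr = create2_address(b"\x11" * 20, (0).to_bytes(32, "big"), bytes.fromhex("6000"))
print("0x" + addr.hex())
```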

LMK what you think @LouisTsai-Csie @fselmo.

If this approach doesn't make sense, could you let me know what's the best way to bring all the Bloatnet benchmarks into EEST?

@fselmo (Collaborator) left a comment

I honestly don't have a lot to add here, this looks amazing 🔥. Really elegant approach. I added a lot of words (that's just my nature 😆), but there are really just some minor miscalculations that we should have sanity checks for anyhow. Otherwise this is looking excellent!

The major question I have is whether this will all still work if we rip out the phase manager and leave it for another PR. I think we can... is there a reason to keep it?

@fselmo (Collaborator) left a comment

This mostly lgtm now! Just one outstanding comment on the pydantic core schema for Bytecode, as that's currently broken when trying to model_dump(), so I'm not sure how we're even using it if it's working well? 🤔 Either way, I think we want a hex str there, and I provided maybe one approach for this. Let's fix that before merging.
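
For context, one possible shape of the hex-str fix (a sketch under pydantic v2, with a stand-in Bytecode type; the approach actually suggested in the review thread may differ):

```python
from pydantic import BaseModel, ConfigDict, field_serializer


class Bytecode(bytes):
    """Stand-in for the real EEST Bytecode type."""


class Example(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    code: Bytecode

    @field_serializer("code")
    def serialize_code(self, code: Bytecode) -> str:
        # Dump as a 0x-prefixed hex string instead of raw bytes.
        return "0x" + code.hex()


print(Example(code=Bytecode(bytes.fromhex("6000"))).model_dump())  # {'code': '0x6000'}
```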

I also gave some test cases I found where we no longer need to define some vars there, related to #2166, as this will give us errors in the future. Note: I didn't really look very thoroughly for unused args specifically; I just found these along the way (there may be more?).

Another set of eyes couldn't hurt either!

@fselmo (Collaborator) commented Sep 23, 2025

@LouisTsai-Csie this looks good, and I've approved and marked all comments as resolved. Please add a CHANGELOG entry for this before merging. We should add some documentation for this as well. Is it important to get this in quickly (deferring docs to a follow-up PR), or should we do that here before merging?

@fselmo (Collaborator) left a comment

Approving again 🙂. See my previous comment about documentation: are you willing to add it here, or should we merge this and address it separately? We should also add a CHANGELOG entry here.

@LouisTsai-Csie (Collaborator, Author) commented Sep 24, 2025

I added a changelog entry, and I prefer to add the documentation in a follow-up PR, as there might be some small changes to the interface! Thanks @fselmo

Note: please feel free to edit the changelog and update the description directly!
