
feat(benchmark): add opcode count field to fixture formats for easier relative opcode benchmarking #1573

@danceratopz

Description


Assuming that timing Engine API payload execution becomes the canonical method to benchmark opcodes, it'd be valuable to be able to report the absolute average time taken to execute a single opcode from a benchmark test. For example, how long does it take (on average) to execute an ADD versus an ADDMOD?

This relates to the `--opcode.count` functionality, cc:

The worst-case block execution time for a crafted benchmark test targeting a particular opcode is obviously a valuable result, and conveys almost the same information; but the average time taken per opcode could be an easier basis for discussion.
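To make the idea concrete, here is a minimal sketch of the division this issue proposes; the function name and all timing numbers below are purely illustrative, assuming the fixture records both the payload execution time and the target opcode's execution count:

```python
def average_opcode_time_ns(payload_execution_time_ns: float, opcode_count: int) -> float:
    """Naive per-opcode average: total Engine API payload execution time
    divided by the number of target-opcode executions in the block.
    Both inputs would come from the benchmark fixture / timing harness."""
    if opcode_count <= 0:
        raise ValueError("opcode_count must be positive")
    return payload_execution_time_ns / opcode_count


# Hypothetical timings for two payloads, each executing the target opcode 1M times:
add_avg = average_opcode_time_ns(12_000_000, 1_000_000)     # 12.0 ns per ADD
addmod_avg = average_opcode_time_ns(30_000_000, 1_000_000)  # 30.0 ns per ADDMOD
relative_cost = addmod_avg / add_avg                        # ADDMOD ~2.5x ADD here
```

This also gives relative costs for free, which is what past opcode-pricing discussions have mostly worked with.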

Considerations:

  • The number of other opcodes executed within the block, and the gas they use. Can we normalize the result by taking this into consideration? Perhaps it's negligible?
  • We could add this information to every test fixture, but perhaps it only makes sense for benchmark test fixtures? (Only benchmark tests target a specific opcode closely enough for the result to be meaningful.) Perhaps there are other use cases, though?
  • ...
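One way to approach the normalization question in the first point, sketched here with entirely hypothetical names: estimate the time attributable to the non-target opcodes from their gas usage and a measured baseline ns-per-gas rate, and subtract it before averaging.

```python
def normalized_opcode_time_ns(
    total_time_ns: float,
    target_opcode_count: int,
    other_gas_used: int,
    baseline_ns_per_gas: float,
) -> float:
    """Subtract an estimate of the time spent in non-target opcodes
    (their gas used times a baseline ns-per-gas rate, measured separately)
    before averaging over the target opcode's executions."""
    if target_opcode_count <= 0:
        raise ValueError("target_opcode_count must be positive")
    overhead_ns = other_gas_used * baseline_ns_per_gas
    return (total_time_ns - overhead_ns) / target_opcode_count


# E.g. a block taking 100 ns total, 10 target-opcode executions, and 20 gas
# of surrounding opcodes at a baseline of 1 ns/gas -> 8 ns per target opcode.
example = normalized_opcode_time_ns(100.0, 10, 20, 1.0)
```

Whether the overhead term is worth modelling at all depends on how dominant the target opcode is in the crafted block; for well-crafted benchmark payloads it may indeed be negligible.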

Perhaps there's a better way? We should check how single opcodes have been benchmarked on a relative basis in the past.

Metadata


    Labels

    • A-test-benchmark: Area: Tests Benchmarks - performance measurement (eg. `tests/benchmark/*`, `p/t/s/e/benchmark/*`)
    • A-test-cli-fill: Area: Tests Fill CLI - runs tests and generates fixtures (eg. `p/t/s/e/cli/pytest_commands/fill.py`)
    • C-enhance: Category: an improvement or new feature
    • S-needs-discussion: Status: needs discussion
