Description
Currently the analyzer generates the keccak hashes of the first 1,000 storage slots to help with slot identification.
According to @thevaizman, some contracts have static arrays with well over 1,000 elements, so we likely need to precompute more slots to be safe. During a meeting I proposed 10 million, but on reflection that puts the resident memory at roughly 4 GB just for the hashes; 1 million is roughly 400 MB.
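For context, the precomputation amounts to hashing the 32-byte big-endian encoding of each slot index and keeping a reverse map from digest to index (the digests are useful because dynamic types store their data starting at keccak256 of the declared slot). A minimal sketch, assuming a Python-style implementation with pycryptodome; the analyzer's actual code and language may differ:

```python
from Crypto.Hash import keccak  # pycryptodome


def keccak256(data: bytes) -> bytes:
    h = keccak.new(digest_bits=256)
    h.update(data)
    return h.digest()


def precompute_slot_hashes(limit: int) -> dict[bytes, int]:
    """Map keccak256(slot index) -> slot index for the first `limit` slots.

    The reverse map lets the analyzer recognise storage addresses that
    are derived from a declared slot (e.g. dynamic array data areas).
    """
    table = {}
    for slot in range(limit):
        # EVM storage keys are 32-byte big-endian words.
        table[keccak256(slot.to_bytes(32, "big"))] = slot
    return table


slot_hashes = precompute_slot_hashes(1_000)
```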
Either way, if we want to increase the limit this much, we should work out a way to persist this data between runs so we don't have to recompute it at every start-up.
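One possible direction (a sketch only, not a decision on the mechanism): since the digests are deterministic and fixed-size, they can be written once to a flat binary file keyed by offset and reloaded on start-up, skipping the hashing work. The file name and format below are hypothetical, and `keccak256` is the helper from the sketch above:

```python
from pathlib import Path

DIGEST_SIZE = 32  # keccak-256 output size in bytes


def save_slot_hashes(path: Path, limit: int) -> None:
    """Write keccak256(0), keccak256(1), ... as one flat binary file.

    The slot index is implicit in the file offset (offset = slot * 32),
    so no extra framing is needed.
    """
    with open(path, "wb") as f:
        for slot in range(limit):
            f.write(keccak256(slot.to_bytes(32, "big")))


def load_slot_hashes(path: Path) -> dict[bytes, int]:
    """Rebuild the reverse map from the cache file instead of re-hashing."""
    data = path.read_bytes()
    return {
        data[i : i + DIGEST_SIZE]: i // DIGEST_SIZE
        for i in range(0, len(data), DIGEST_SIZE)
    }


cache = Path("slot_hashes.bin")  # hypothetical cache location
if not cache.exists():
    save_slot_hashes(cache, 1_000_000)
slot_hashes = load_slot_hashes(cache)
```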
Spec
Work out how to handle this.
Implement the solution.