
Ensuring reproducibility and comparability between mesa & mesa-frames models #106

Open · adamamer20 opened this issue Oct 1, 2024 · 2 comments
Labels: enhancement (Improvements to existing features or performance)

Comments

@adamamer20
Collaborator

Comparability between mesa and mesa-frames models is crucial for validating the correctness of model implementations.
To achieve it, given the same seed and initial conditions, we need:

  1. A compatibility layer that transforms a mesa.Model into a mesa-frames ModelDF (this can be implemented relatively easily; I have already done some work on a local branch).
  2. Consistent randomized operations (including shuffling of agents) across frameworks. This is more challenging because the frameworks use different random number generators (mesa uses random.Random, mesa-frames uses numpy.random.Generator) and different shuffling mechanisms (mesa uses random.Random.shuffle, mesa-frames uses the native DataFrame shuffle operation). With an appropriate decorator, we could perhaps substitute the random operations of mesa-frames models at runtime, trading performance for reproducibility; a sketch of this idea follows the list.
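
A minimal sketch of the second point, assuming a Polars-backed agent set. The helper name `reproducible_shuffle`, and the idea of driving the permutation from the same `random.Random` instance that the mesa model uses, are assumptions for illustration, not part of either library:

```python
import random

import polars as pl


def reproducible_shuffle(df: pl.DataFrame, rng: random.Random) -> pl.DataFrame:
    # Hypothetical helper: draw the permutation from the shared random.Random
    # instance, so a mesa model seeded with the same value shuffles its agents
    # in exactly the same order.
    perm = list(range(df.height))
    rng.shuffle(perm)
    return df.select(pl.all().gather(perm))


# Usage: both the mesa and mesa-frames runs would share this seeded generator.
shared_rng = random.Random(42)
agents = pl.DataFrame({"unique_id": [0, 1, 2, 3], "wealth": [1, 1, 1, 1]})
shuffled = reproducible_shuffle(agents, shared_rng)
```

The same approach could be wrapped in a decorator that swaps the DataFrame shuffle for this shared-RNG version at runtime, at some cost in performance.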
adamamer20 added the enhancement label on Oct 1, 2024
@rht
Contributor

rht commented Oct 1, 2024

For migrating and then scaling up, I think writing qualitative test cases that ensure specific behavior outcomes (see e.g. https://github.com/projectmesa/mesa-examples/blob/main/examples/sugarscape_g1mt/tests.py) and checking that both versions have the same outcome is a more effective use of the modeler's time than trying to reproduce the original Mesa model down to floating-point precision. The latter gets much harder the more complex the model is. mesa-frames' activation is order-independent, and can only have an equivalent in Mesa if the latter uses a staged activation that is carefully designed to be order-independent. Any random activation won't be translatable due to the first-mover advantage.
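
A minimal sketch of what such a qualitative test might look like. `run_mesa_model` and `run_mesa_frames_model` are hypothetical runners standing in for the two implementations, and the asserted properties (conserved total wealth, similar Gini coefficient) are illustrative rather than taken from the linked example:

```python
import pytest

SEED = 42
STEPS = 50


def gini(wealths: list[float]) -> float:
    # Standard Gini coefficient over a list of agent wealths.
    sorted_w = sorted(wealths)
    n = len(sorted_w)
    cum = sum((i + 1) * w for i, w in enumerate(sorted_w))
    return (2 * cum) / (n * sum(sorted_w)) - (n + 1) / n


def test_qualitative_agreement():
    # Hypothetical runners: each returns the final per-agent wealths of its model.
    mesa_wealths = run_mesa_model(seed=SEED, steps=STEPS)
    frames_wealths = run_mesa_frames_model(seed=SEED, steps=STEPS)

    # Same population size and conserved total wealth in both implementations.
    assert len(mesa_wealths) == len(frames_wealths)
    assert sum(mesa_wealths) == sum(frames_wealths)

    # Aggregate inequality should agree within a tolerance, not bit-for-bit.
    assert gini(mesa_wealths) == pytest.approx(gini(frames_wealths), abs=0.05)
```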

@adamamer20
Collaborator Author

You raise a good point. I wanted something more automated and "safer", but qualitative tests may be better in practice.
