Check Settings, Spec, and Coefficients Files #1
This PR will address #784.
The proposed approach includes a relatively simple "smoke test" of YAML, SPEC, and coefficients config files: each model's settings are loaded and validated, and `simulate.eval_coefficients` (or `simulate.eval_nest_coefficients`) is run. As scoped, validating expressions is not included, and the code isn't concerned with tables at all.
Currently, a small subset of models is tested, including some simpler configs, one with a preprocessor, and one with nested coefficients. Some basic logging is included. There may be more cases that need to be handled, but I wanted to get some feedback / validation before pressing onward. I'll highlight some of my own thoughts below, but I also want some eyes on this to make sure the contribution meets the issue brief and will be useful.
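As a rough illustration of the flow, here is a minimal sketch. All names here (`check_model_settings`, the registry shape) are hypothetical, not ActivitySim API; a dataclass stands in for a Pydantic settings model and JSON stands in for YAML so the example is self-contained:

```python
# Hypothetical sketch of the smoke-test flow; not ActivitySim API.
import json
import logging
from dataclasses import dataclass, fields

logger = logging.getLogger("settings_checker")

@dataclass
class SchoolLocationSettings:  # stand-in for a Pydantic settings model
    SPEC: str
    COEFFICIENTS: str

def validate(settings_cls, raw: dict):
    """Trigger the same construction the runtime would perform, just earlier."""
    known = {f.name for f in fields(settings_cls)}
    return settings_cls(**{k: v for k, v in raw.items() if k in known})

def check_model_settings(registry, read_file):
    """Smoke-test each model's settings file; collect failures instead of aborting."""
    failures = {}
    for model_name, (settings_cls, file_name) in registry.items():
        try:
            raw = json.loads(read_file(file_name))
            validate(settings_cls, raw)
            logger.info("%s: settings OK", model_name)
        except Exception as err:  # report all problems at once at the end
            failures[model_name] = err
            logger.warning("%s: %s", model_name, err)
    return failures
```

Collecting failures rather than raising on the first one is a design choice here, so a single run surfaces every broken config at once.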
Autodiscovery of Model Settings YAML Files
Currently, there is a dict that maps the names of component models to a Pydantic model class, as well as the relevant model file name. This will be pretty cumbersome to maintain.
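For concreteness, the mapping has roughly this shape. The model names, settings classes, and file names below are placeholders, not the actual inventory in the PR:

```python
# Hypothetical shape of the hand-maintained mapping; names are illustrative.
class SchoolLocationSettings:  # stand-in for a Pydantic settings model
    pass

class FreeParkingSettings:
    pass

# component name -> (settings model class, default settings file name)
SETTINGS_CHECKER_REGISTRY = {
    "school_location": (SchoolLocationSettings, "school_location.yaml"),
    "free_parking": (FreeParkingSettings, "free_parking.yaml"),
}
```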
A better way forward would be to identify the default arguments for `model_settings` (the Pydantic model) and `model_settings_file_name` (the YAML file) from the signature of each step. I'm not totally sure how to extract this information from the step wrapper around the callable.
Error Handling

An important design decision (still TBD) is whether the `settings_checker` module should raise any custom errors or warnings (currently it raises none). In my thinking, the main point of the settings checker is to surface exceptions or warnings that would otherwise be raised at runtime earlier in the run process -- not to introduce any new error handling. If that is a decent assumption, responsibility for raising problems should be delegated upstream to the underlying Pydantic models and ActivitySim methods.
Nested Coefficients

Currently, I am using `simulate.eval_nest_coefficients` to evaluate models with the NESTS property. So far, this seems to behave as I expect -- but I am a little wary that it is misguided. With the settings checker turned off, I have not been able to identify anything in the `prototype_mtc` model that actually calls this function.
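For context, here is a toy version of the kind of thing the nested path needs to verify -- that every coefficient name referenced in a nest spec resolves against the coefficients file. The dict structure below is illustrative only and may not match ActivitySim's actual NESTS schema:

```python
# Toy check for a NESTS-style spec; structure is illustrative, not
# ActivitySim's actual nest schema.
def check_nest_coefficients(nest, coefficients):
    """Recursively collect coefficient names missing from `coefficients`."""
    missing = []
    coef = nest.get("coefficient")
    if coef is not None and coef not in coefficients:
        missing.append(coef)
    for alt in nest.get("alternatives", []):
        if isinstance(alt, dict):  # sub-nest; leaf alternatives are plain names
            missing += check_nest_coefficients(alt, coefficients)
    return missing

nests = {
    "name": "root",
    "coefficient": "coef_nest_root",
    "alternatives": [
        {"name": "motorized", "coefficient": "coef_nest_motorized",
         "alternatives": ["DRIVE", "TRANSIT"]},
        "WALK",
    ],
}
coefficients = {"coef_nest_root": 1.0, "coef_nest_motorized": 0.5}
print(check_nest_coefficients(nests, coefficients))  # → []
```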
Calling the checker

The checker is currently called from `cli.run` for convenience during development, but I suspect it needs to be migrated (or added) to the `State.run` method.
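Sketched very roughly, the migration might look like the following. `State`, `run`, and `check_settings` here are minimal invented stand-ins mirroring the names discussed above, not the real ActivitySim API:

```python
# Hypothetical sketch of hooking the checker into a run() method rather than
# the CLI entry point; the surrounding API is invented for illustration.
import logging

logger = logging.getLogger("settings_checker")

def check_settings(state):  # stand-in for the settings_checker entry point
    logger.info("smoke-testing model settings")
    return []  # list of (model_name, error) pairs

class State:  # minimal stand-in for the real State object
    def __init__(self, models):
        self.models = models
        self.executed = []

    def run(self, check_settings_first=True):
        # run the checker before any model step, so config errors surface
        # before a long simulation starts
        if check_settings_first:
            failures = check_settings(self)
            if failures:
                raise RuntimeError(f"settings check failed: {failures}")
        for model in self.models:
            self.executed.append(model)

state = State(["school_location", "workplace_location"])
state.run()
```

An opt-out flag like `check_settings_first` (hypothetical) would keep the CLI behavior unchanged while letting programmatic users skip the check.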