
Check Settings, Spec, and Coefficients Files #1

Draft · wants to merge 13 commits into base: dev
Conversation

andkay (Collaborator) commented Mar 20, 2025

This PR will address #784.

The proposed approach is a relatively simple "smoke test" of the YAML, SPEC, and coefficients config files:

  • The component settings are first loaded into the relevant Pydantic data model.
  • Then the coefficients file is read into memory, if it is defined at the top level of the data model.
  • Then the SPEC file is read into memory, likewise if it is defined at the top level of the data model.
  • Finally, one of the two available methods (simulate.eval_coefficients or simulate.eval_nest_coefficients) is run.
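The flow above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the class and function names here are stand-ins, not the PR's actual code, and a plain dataclass stands in for the real Pydantic model); it only checks that the files the settings reference actually exist:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class ComponentSettings:
    """Hypothetical stand-in for a component's Pydantic settings model."""
    SPEC: Optional[str] = None
    COEFFICIENTS: Optional[str] = None

def check_component(settings: ComponentSettings, configs_dir: Path) -> list:
    """Smoke-test one component: confirm the files referenced at the top
    level of the settings model can be found before a full run begins."""
    problems = []
    for attr in ("COEFFICIENTS", "SPEC"):
        file_name = getattr(settings, attr)
        if file_name is not None and not (configs_dir / file_name).exists():
            problems.append(f"{attr} file not found: {file_name}")
    return problems
```

In the PR itself, the read step would go through the existing ActivitySim file readers rather than a bare existence check, so that the same parsing errors a real run would hit are triggered early.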

As scoped, validating expressions is not included, and the code does not deal with tables at all.

Currently, a small subset of models is tested: some with simpler configs, one with a preprocessor, and one with nested coefficients. Some basic logging is included. There may be more cases to handle, but I wanted to get some feedback and validation before pressing onward. I'll highlight some of my own thoughts below, but I also want some eyes on this to make sure the contribution meets the issue brief and will be useful.

Autodiscovery of Model Settings YAML Files
Currently, there is a dict that maps the name of each component model to its Pydantic model class and the relevant settings file name. This will be pretty cumbersome to maintain.

A better way forward would be to identify the default arguments for model_settings (the Pydantic model) and model_settings_file_name (the YAML file) from the signature of each step. I'm not totally sure how to extract this information through the step wrapper around the callable.
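If the step wrapper is built with functools.wraps (or otherwise sets __wrapped__), the standard library can already see through it: inspect.signature follows __wrapped__ by default. A sketch, using a toy decorator and a made-up step name in place of ActivitySim's real workflow machinery:

```python
import functools
import inspect

def workflow_step(func):
    """Toy stand-in for the real step decorator; functools.wraps sets
    wrapper.__wrapped__ = func, which inspect.signature follows."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@workflow_step
def school_location(model_settings=None,
                    model_settings_file_name="school_location.yaml"):
    ...

def discover_settings(step):
    """Pull the default model_settings / model_settings_file_name values
    from a wrapped step's signature, if those parameters exist."""
    params = inspect.signature(step).parameters
    return {
        name: params[name].default
        for name in ("model_settings", "model_settings_file_name")
        if name in params
    }
```

Whether this works against the actual ActivitySim step wrapper depends on whether it preserves __wrapped__ or the original signature; if it doesn't, that may be the thing to fix first.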

Error Handling
An important design decision (still TBD) is whether any custom errors or warnings should be raised by the settings_checker module (currently, none are). In my thinking, the main point of the settings checker is to surface, earlier in the run process, any exceptions or warnings that would otherwise be raised at runtime -- not necessarily to introduce any new error handling. If that is a decent assumption, responsibility for raising problems should be delegated upstream to the underlying Pydantic models and ActivitySim methods.
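Under that assumption, the checker's own error handling reduces to "log and re-raise": let Pydantic's ValidationError (or whatever ActivitySim raises) propagate unchanged, just with a note about which component tripped it. A hypothetical sketch (run_check is not a function in the PR):

```python
import logging

logger = logging.getLogger("settings_checker")

def run_check(component_name, check_fn):
    """Run one component check, surfacing upstream exceptions rather than
    wrapping them in new custom error types. The checker only moves the
    failure earlier in the run; it does not redefine it."""
    try:
        check_fn()
    except Exception:
        logger.exception("settings check failed for component %r", component_name)
        raise
```

The alternative design, collecting all failures into one summary report instead of stopping at the first, would need a custom aggregate exception, which is exactly the TBD question above.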

Nested Coefficients
Currently, I am using simulate.eval_nest_coefficients to evaluate models with the NESTS property. So far, this seems to behave as I expect -- but I am a little wary that it is misguided. With the settings checker turned off, I have not been able to identify anything in the prototype_mtc model that actually calls this function.

Calling the checker
The checker is currently called from cli.run for development convenience. But I suspect it needs to be migrated (or added) to the State.run method.

@andkay andkay requested review from seanmcatee and jpn-- March 20, 2025 22:16