@Jeadie Jeadie commented Oct 15, 2024

Fixes #.

Summary

Other Information

dennisbader and others added 30 commits March 4, 2024 19:07
* update changelog

* bump u8darts 0.27.2 to 0.28.0

* update changelog
* update code owners

* updated PR template
* Remove unnecessary `pass` statements

* Rename `ForecastingModel._is_probabilistic` to `supports_probabilistic_prediction`, rearrange some documentation

* Remove redundant overrides

* Reformat

* Add CHANGELOG entry

---------

Co-authored-by: Dennis Bader <[email protected]>
* fix type hinting for _with_sanity_checks

* update changelog
* Add optional inverse transform in historical forecast

* Update variables names and docstrings

* Move the inverse transform to InvertibleDataTransformer

* Fix single element list

* Update docstrings

* Move the inverse transform of list of lists to inverse_transform method

* make invertible transformers act on list of lists of series

* add tests

* update changelog

---------

Co-authored-by: dennisbader <[email protected]>
* lxml_html_clean for nbsphinx

* update changelog
* fix lightgbm segmentation fault

* update changelog

* parameterize unit tests
* make metric_kwargs metric-specific rather than inferring which kwarg belongs to which metric

* update hierarchical reconciliation notebook

* fix failing residuals tests
* use pytest to skip torch tests

* fix some mistakes in tsmixer notebook
* add TimeSeries.from_group_dataframe parallel mode

* remove code mess

* add doc string for new parameters

* update CHANGELOG.md

* add missing dtype

* fix static covariates

* make parallel function as local and fix tests

* fix parallel utils imports

* update changelog

* Update CHANGELOG.md

---------

Co-authored-by: Bohdan Bilonoh <[email protected]>
Co-authored-by: dennisbader <[email protected]>
* bump black[jupyter] 24.1.1 to 24.3.0

* update changelog
* add codecov token to merge and dev ci pipelines

* Update CHANGELOG.md
* fix monte carlo dropout

* add mc dropout to models that used regular dropout before

* update changelog

* add unit tests

* codecov fix test

* codecov fix test 2

* codecov fix test 3
* bump codecov-action from v3 to v4

* further tests

* add back token

* add back codecov comment

* update changelog
* fix: reorder lagged features per lags when they are provided component-wise

* fix: parametrize lagged_features_names test

* feat: added tests for lagged_features_names when lags are component-specific

* fix: create_lagged_name is not affected by lags order different than the components

* fix: improve comment

* feat: tests verify that list and dict lags yield the same result

* fix: remove staticmethod for the tests to pass on python 3.9

* feat: properly reorder features during autoregression, added corresponding test

* update changelog

* fix: addressing review comments

* fix: moved autoregression lags extraction to tabularization

* fix: refactor tests to reduce code duplication

* fix: address review comment

* fix: remove usage of strict argument in zip, not supported in Python 3.9

* further refactor lagged data extraction for autoregression

* allow coverage diffs for codecov upload

* use codecov v3

* precompute lagged and ordered feature indices

---------

Co-authored-by: Dennis Bader <[email protected]>
* add progress bar to regression models for hist fc

* update changelog

* remove line
* simplify hist fc tests part 1

* refactor torch hist fc auto start

* future cov hist fcs tests

* fix rnn model historical forecasts

* fix failing unit tests

* update changelog

* fix discrepancies in test comments

* fix failing unit tests
* lint: switch `flake8` to Ruff

* fixing issues

* build gradle

* noqa: E721

* revert changes of #2327

* ruff

* Apply suggestions from code review

* chlog
* add release notes section to documentation page

* add body to gh release linking to the release notes

* update changelog
* bump u8darts 0.28.0 to 0.29.0

* update changelog for new version

* update changelog
dennisbader and others added 30 commits August 18, 2025 16:23
…es (#2877)

* update gh action workflow to install darts locally without dependencies

* fix torch imports in unit tests
* random state added in historical forecasting

* test historical forecast modified

* Update darts/models/forecasting/conformal_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/models/forecasting/sf_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/models/forecasting/sf_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_probabilistic_models.py

Co-authored-by: Dennis Bader <[email protected]>

* Fixed end of line error

* First commit future covariates for BlockRNN

* BlockRNN update

* Cleaning

* No shift in the future covariates

* Change after first review

* update blockrnn model

* update example notebook

* update docs

* update changelog

* remove unused static cov dim

* clean up

---------

Co-authored-by: Dennis Bader <[email protected]>
Co-authored-by: madtoinou <[email protected]>
* feat: Add add_regressor_configs to Prophet model

* minor updates

---------

Co-authored-by: Ramsay <[email protected]>
Co-authored-by: dennisbader <[email protected]>
* bump minimum lightning version to >=2.0.0

* update changelog
* increase the decimal places to 3

* Update Changelog

* Update likelihood models

* Update test files

* update changelog

---------

Co-authored-by: Dennis Bader <[email protected]>
* Narwhalify_from_group_dataframe

* fix some errors

* fix some errors

* set time_index to nw.Datetime(time_unit="ns")

* update from_group_dataframe

* add polars to from_group_dataframe_tests

* update changelog

* update changelog

* keep old ts

* improve efficiency

* maintain order

* clean up tests after maintaining order

* remove old from_group_dataframe

* Update CHANGELOG.md

* set index for pandas df for performance boost

* remove old from_group_df

* improve code coverage

---------

Co-authored-by: dennisbader <[email protected]>
* Fix Croston link in README.md

* reformat table

---------

Co-authored-by: dennisbader <[email protected]>
)

* Add mixed precision and 16-bit support to `TorchForecastingModel`

Previously, the `"precision"` option set in `pl_trainer_kwargs` was
ignored in favour of the precision of the time series data type. This
meant mixed precision training and true 16-bit precision were
unsupported, despite native support in `pytorch-lightning.Trainer`.

I added support for mixed precision and any other precision options
supported by Lightning in `TorchForecastingModel`.

By default, when the `"precision"` option is not set in
`pl_trainer_kwargs` or a custom trainer, the model precision is
determined by the data type. When it is set, we assume the user
understands the ramifications of that option, and the option is always
passed along to the trainer, even when there is a mismatch between the
model precision and the data type.

Sometimes the mismatch is intentional, e.g. when the data type is
32-bit but the model can be trained in mixed precision. At other times
the mismatch causes an error, say, training a 32-bit model on 64-bit
data.

**Precision is an advanced option for `TorchForecastingModel` training,
and users should use it at their own risk!**
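The resolution rule described above can be sketched in plain Python; `resolve_precision` and the dtype-to-flag mapping below are illustrative assumptions, not the actual darts implementation:

```python
# Hypothetical sketch of the precision-resolution rule: a user-supplied
# "precision" in pl_trainer_kwargs always wins; otherwise the flag is
# derived from the time series data type.

def resolve_precision(pl_trainer_kwargs, series_dtype):
    """Return the Lightning precision flag to use for training."""
    pl_trainer_kwargs = pl_trainer_kwargs or {}
    if "precision" in pl_trainer_kwargs:
        # User-set precision is passed through unchanged, even when it
        # mismatches the data type (e.g. mixed precision on 32-bit data).
        return pl_trainer_kwargs["precision"]
    # Default: derive the precision from the data type (Lightning>=2.0
    # style flags assumed here).
    return {
        "float64": "64-true",
        "float32": "32-true",
        "float16": "16-true",
    }[series_dtype]

print(resolve_precision({"precision": "16-mixed"}, "float32"))  # -> 16-mixed
print(resolve_precision(None, "float64"))                       # -> 64-true
```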

* Update CHANGELOG with mixed & 16-true precision support

* Fix test failures due to Python 3.13 error hint

Two tests failed on Python 3.13 due to the new keyword argument
suggestions added in Python 3.13. See [What's New In Python
3.13](https://docs.python.org/3/whatsnew/3.13.html) and
[gh-107944](python/cpython#107944) for
details.

These failures are unrelated to the new precision option support.
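A minimal illustration of the behaviour change: on Python 3.13, the `TypeError` for a misspelled keyword argument gains a "Did you mean ...?" suffix, which breaks tests that match the full message. The `fit` function here is a stand-in, not a darts API:

```python
# Stand-in function to demonstrate the Python 3.13 keyword-suggestion
# change; not part of darts.
def fit(series=None, epochs=1):
    return epochs

try:
    fit(epochz=5)  # misspelled keyword argument
except TypeError as exc:
    message = str(exc)

# Python <= 3.12: "fit() got an unexpected keyword argument 'epochz'"
# Python >= 3.13 appends a suggestion such as ". Did you mean 'epochs'?"
print(message)
```

Tests asserting on the exact error text therefore need to accept both forms.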

* Fix a bug when `"bf16-mixed"` precision is used

While there are many mixed precision options on GPU, the only mixed
precision option on CPU currently is `"bf16-mixed"`. The model output
would be `BFloat16`, which numpy does not support converting from. We
must convert the output to float32 first.

* Add two tests of the precision option for better code coverage

1. Add `test_auto_precision_casting` to test automatic model precision
   casting based on the time series data type, when precision is not set.
2. Add `test_mixed_precision_training` to test mixed precision training
   when the precision option is `"16"` (`"16-mixed"` on Lightning>=2.0.0).

* update changelog

* remove redundant diffs

* add back diffs for testing with python 3.13

* Produce warnings for 16-bit time series and user-defined precision

* Add mixed-precision tests & adjust 16-bit tests

1. Add tests for both mixed precision options, i.e., `"16-mixed"` and
   `"bf16-mixed"`.
2. Adjust tests when inputs are 16-bit to allow for 32-bit predictions.

* Apply suggestions from code review

Log at info level for user-defined precision and at warning level when a float16-like precision is used.

Co-authored-by: Dennis Bader <[email protected]>

* Test valid precision options for custom & built-in trainers

Also ensure there are no NaN values in predictions.

---------

Co-authored-by: dennisbader <[email protected]>
* feat: add extra arguments for Exponential Smoothing model

* changelog update

* doc changes and PR suggestions
* move hfc param checks into decorator

* integrate val length

* improve min train series length

* improve train length handling

* improve reconcile hfc

* add val length

* update _target_train_sample_lengths

* fix data transformer issue

* add unit tests

* add unit tests

* update

* update docs

* improve coverage

* improve coverage part 2

* improve code coverage ensemble

* improve code covarage ckpt

* revert min_samples renaming

* fix ensemble min train series length

* fix failing tests

* fix ensemble tests

* fix untrained_model for ensemble models

* fix ensemble model train requirements ckpt

* fix ensemble model pt2

* fix ensemble model pt3

* remove max target train output length from extreme lags

* rename the target train series lengths property

* simplify regression ensemble model

* add last ensemble model tests

* docs update

* cleanup

* PR ready for review

* fix old reference in quickstart

* update changelog

* improve code coverage
* add load_best to torch_forecasting_model

* update changelog

* Update CHANGELOG.md

Co-authored-by: Alain Gysi <[email protected]>

* update test_load_best

* Update darts/models/forecasting/torch_forecasting_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/models/forecasting/torch_forecasting_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_torch_forecasting_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/models/forecasting/torch_forecasting_model.py

* Delete useless whitespace

* Update test_load_best

* Fix test_load_best error

* Add test_load_best_ignored

* Update CHANGELOG.md

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/models/forecasting/torch_forecasting_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update darts/tests/models/forecasting/test_torch_forecasting_model.py

Co-authored-by: Dennis Bader <[email protected]>

* Update torch_forecasting_model.py

* Update test_torch_forecasting_model.py

* minor updates

* remove some lines

---------

Co-authored-by: Alain Gysi <[email protected]>
Co-authored-by: Dennis Bader <[email protected]>
* small fix on assert condition

* Moved XGB, SF to notorch flavor

* Update CHANGELOG.md

* minor updates

* PR fixes

* minor updates

* removed core tests from develop workflow

---------

Co-authored-by: dennisbader <[email protected]>
* Allow skipping resampling in TFT for faster inference

Resampling in TFT's VariableSelectionNetwork introduced training
overhead due to the slow `interpolate()` implementation in PyTorch.
I've added a `skip_resampling` option to skip these operations in TFT,
while accuracy is largely unaffected.

- `skip_resampling` defaults to `False`, and TFT retains the old
  behaviour of applying interpolation on feature embeddings.
- When set to `True`, all interpolation operations are skipped (in
  `_GatedResidualNetwork`) or replaced by projection (`_ResampleNorm`).
- Quite a few typing errors are fixed in TFT.
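To make the trade-off concrete, here is a dependency-free sketch (not the darts TFT code) of the two ways to resize a feature vector: linear interpolation, analogous to PyTorch's `interpolate(mode="linear")`, versus a learned linear projection; the weight matrix below is a hand-picked illustration of what training could produce:

```python
# Illustrative sketch only: resizing a feature vector from size 3 to
# size 5 by linear interpolation vs. by linear projection.

def interpolate(vec, out_size):
    """Linear interpolation resampling of a 1-D vector."""
    n = len(vec)
    out = []
    for i in range(out_size):
        # Map output position i onto the input index space.
        pos = i * (n - 1) / (out_size - 1) if out_size > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(vec[lo] * (1 - frac) + vec[hi] * frac)
    return out

def project(vec, weights):
    """Linear projection: out = W @ vec, where W would be trained."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

x = [1.0, 2.0, 4.0]
resampled = interpolate(x, 5)  # -> [1.0, 1.5, 2.0, 3.0, 4.0]
# Hand-picked weights that happen to mimic interpolation; a trained
# projection is free to learn any mapping.
weights = [[1, 0, 0], [0.5, 0.5, 0], [0, 1, 0], [0, 0.5, 0.5], [0, 0, 1]]
projected = project(x, weights)  # same output shape, one matrix multiply
```

The projection is a single matrix multiply, which is fast on accelerators, whereas the interpolation path goes through the comparatively slow `interpolate()` kernel.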

* Add back `forward()` to `_ResampleNorm`

* Fix a TFT kernel error on MPS device

* Add TFT tests for MPS devices and `skip_resampling` option

* Update CHANGELOG for TFT skip resampling & MPS bug

* Fix `test_on_mps` with a `tfm_kwargs` deep copy

The previous shallow copy of `tfm_kwargs` modified `"pl_trainer_kwargs"`
for other tests and led to many test failures. We now modify a deep copy
for the MPS test to fix it.

* Remove TFT test on MPS devices

MPS memory is not available on GitHub. TFT test on MPS is removed.

* Fix a bug in `_VariableSelectionNetwork`

* Expand TFT static covariate test w/ `skip_resampling`

Test static covariate support with and without `skip_resampling`

* Update CHANGELOG.md

Co-authored-by: Dennis Bader <[email protected]>

* Update CHANGELOG.md

* Replace interpolation with linear projection

- Rename `skip_resampling` option to `skip_interpolation`.
- When set to `True`, all interpolation is replaced by linear
  projection during the feature embedding sampling operations.

* Update CHANGELOG for renamed `skip_interpolation` option

---------

Co-authored-by: Dennis Bader <[email protected]>
---
updated-dependencies:
- dependency-name: jupyterlab
  dependency-version: 4.4.8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* fix pass verbose to fit in historical_forecast

* update changelog

* add verbose to fit (catboost not working)

* fix verbose for catboost

* lint format

* add verbose to more tests

* fix "fit" docstrings

* fix verbose passing

* fix passing verbose for baselines

* revert masking of verbose and replace with inspect fit

* improve change log

* improve test (revert verbose from test_fit but add to runnability test)

* linting

* fix tests

* add verbose to predict kwargs

* fix passing verbose in hfc

* fix optimized hfc issue

* update changelog

---------

Co-authored-by: dennisbader <[email protected]>
* add global hfc mode

* update docs

* add val length to global hfc

* extend datatransformer support for hfc

* integrate global hfc into regular hfc

* fix logic

* update docs

* add first unit tests

* add data transformer tests

* extend tests

* fix failing backtest

* add tests for new drop before/after

* add more tests

* clean up for pr

* update changelog

* update changelog

* update docs
* bumped min python version to 3.10

* changelog update

* update docker image and others

---------

Co-authored-by: dennisbader <[email protected]>
* add copy button to code blocks

* improve code copying

* update code examples

* update changelog
* group changelog entries

* improve changelog

* bump u8darts 0.37.1 to 0.38.0

* update changelog

* clean up outdated m1 instructions
* .map() improvements

* changelog update

* example fix

* again example fix

* codecov changes

* add shape check and update docs

* update docs

---------

Co-authored-by: dennisbader <[email protected]>
* add onnxscript to optional dependencies for tests

* update docs and fix further tests

* fix onnx and shap issues