Releases: WenjieDu/PyPOTS
v0.8.1 🪲 Fix model saving issues
We fixed two model-saving issues that occurred under certain conditions:
- calling the func `save()` unintentionally overwrote an existing model file, even though the arg `overwrite` defaults to `False`;
- `model_saving_strategy="best"` did not work, and PyPOTS still saved every better model during training;
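For context, the intended behavior can be sketched in plain Python. This is a minimal illustration with hypothetical names (`save_model`, `BestOnlySaver`), not PyPOTS's actual code:

```python
import os

def save_model(state: bytes, path: str, overwrite: bool = False) -> bool:
    """Write model state to path; refuse to overwrite unless explicitly asked."""
    if os.path.exists(path) and not overwrite:
        print(f"{path} exists, skipping save (pass overwrite=True to replace it)")
        return False
    with open(path, "wb") as f:
        f.write(state)
    return True

class BestOnlySaver:
    """Keep only the checkpoint with the lowest validation loss ('best' strategy)."""

    def __init__(self):
        self.best_loss = float("inf")
        self.best_state = None

    def on_epoch_end(self, state: bytes, val_loss: float) -> bool:
        # "best" means: remember the new best in memory, rather than
        # persisting every improved checkpoint to disk during training.
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_state = state
            return True
        return False
```

With the fix, only the single best checkpoint survives training, and an existing file is never silently replaced.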
What's Changed
- Fix unintended overwrite saving by @WenjieDu in #516
- Update conda_dev_env.yml by @WenjieDu in #517
- Fix unintended overwrite when saving models and update conda dependencies by @WenjieDu in #518
- `model_saving_strategy=best` does not work by @WenjieDu in #514
- Update docs by @WenjieDu in #523
- Fix model saving issue and update docs by @WenjieDu in #524
- Fix error link to Reformer on openreview by @WenjieDu in #527
- Update docs and release v0.8.1 by @WenjieDu in #528
Full Changelog: v0.8...v0.8.1
v0.8 🚀 New models
We bring you new models ModernTCN (ICLR 2024), TimeMixer (ICLR 2024), and TEFN in this release ;-)
Kudos to our new contributors Eljas (@eroell) and Tianxiang (@ztxtech)!
What's Changed
- Update testing workflows and dependencies, and refactor by @WenjieDu in #482
- Update docs by @WenjieDu in #483
- Initialize the client of Gungnir and update docs by @WenjieDu in #484
- Update docs by @WenjieDu in #487
- Update docs by @WenjieDu in #489
- Refactor Gungnir logging and update docs by @WenjieDu in #490
- Allow failure when PR gets merged before jobs get finished by @WenjieDu in #492
- Update docs by @WenjieDu in #493
- Update tsdb.load_dataset to tsdb.load in doc of load_specific_dataset by @eroell in #494
- Doc update Quickstart Example by @eroell in #497
- Update docs by @WenjieDu in #498
- Implement TimeMixer as an imputation model by @WenjieDu in #499
- Update the docs for TimeMixer by @WenjieDu in #500
- Add TimeMixer by @WenjieDu in #501
- Implement ModernTCN as an imputation model by @WenjieDu in #502
- Add ModernTCN docs by @WenjieDu in #503
- Add ModernTCN by @WenjieDu in #504
- Add TEFN model by @ztxtech in #505
- Add TEFN and implement it as an imputation model by @WenjieDu in #507
- Apply line-length=120 to black format by @WenjieDu in #509
- Import random walk funcs from BenchPOTS and add AI4TS as a dependency by @WenjieDu in #510
- Apply line-length=120 to refactor code, update dependencies and pre-commit config by @WenjieDu in #512
New Contributors
- @eroell made their first contribution in #494
- @ztxtech made their first contribution in #505
Full Changelog: v0.7.1...v0.8
v0.7.1 Fix missing load_specific_dataset()
Previously we removed the `pypots.data.load_specific_datasets` package since the preprocessing functions have all been gathered and managed in BenchPOTS. The removal caused some incompatibility (see #474), hence we added it back in this minor version. It will still be deprecated in the near future, and we encourage users to use BenchPOTS for dataset preprocessing, which now supports 170+ public time series datasets. Also, we
- added a visualization function to plot the map of attention weights. 👍Kudos to Anshu @gugababa for his contribution;
- deprecated setup.py and added pyproject.toml to config the project;
What's Changed
- Visualize attention matrix in SAITS by @gugababa in #302
- Add attention map visualization func by @WenjieDu in #475
- Gather requirements in one dir by @WenjieDu in #477
- Add toml config and gather dependency files by @WenjieDu in #478
- Add pyproject.toml, gather dependency files, and fix flake8 with toml config file by @WenjieDu in #480
- Fix missing load_specific_dataset(), update testing_daily workflow, release v0.7.1 by @WenjieDu in #481
New Contributors
- @gugababa made their first contribution in #302
Full Changelog: v0.7...v0.7.1
v0.7 New Algos & Bug Fix
Update summary for v0.7 release:
- included ImputeFormer [KDD'24], kudos👍 to @tongnie, also the author of ImputeFormer;
- implemented Lerp (Linear Interpolation), thanks👍 to @colesussmeier;
- added TCN as an imputation model, with SAITS embedding and training methodology applied;
- fixed a minor bug in RevIN for POTS data;
- fixed failed model saving when `model_saving_strategy` is set as `better`;
- added the `pypots.data.utils.inverse_sliding_window` func to help restore time series samples sliced by the `sliding_window` func;
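The idea behind these two helpers can be sketched in NumPy. This is a simplified illustration assuming non-overlapping windows, with hypothetical function names, not the library's actual implementation:

```python
import numpy as np

def sliding_window_sketch(series: np.ndarray, window_len: int) -> np.ndarray:
    """Slice a (steps, features) series into non-overlapping
    (n_samples, window_len, features) samples, dropping any remainder."""
    n = len(series) // window_len
    return series[: n * window_len].reshape(n, window_len, series.shape[-1])

def inverse_sliding_window_sketch(samples: np.ndarray) -> np.ndarray:
    """Restore the original (steps, features) series from
    non-overlapping windowed samples."""
    n, window_len, n_features = samples.shape
    return samples.reshape(n * window_len, n_features)
```

Restoring is the exact inverse of slicing, so a slice-then-restore round trip reproduces the original series (up to the dropped remainder).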
What's Changed
- Make the number of max steps adjustable in TimesNet by @WenjieDu in #438
- Enable restoring from `sliding_window()` by @WenjieDu in #441
- Add `inverse_sliding_window()` and enable TimesNet to work with len>5000 samples by @WenjieDu in #442
- Update docs by @WenjieDu in #443
- Use `inspect` to fetch model arguments and update docs by @WenjieDu in #444
- Expose new models for tuning, add get_class_full_path(), and test visual funcs by @WenjieDu in #447
- Update docs by @WenjieDu in #448
- Make classification GRUD more robust, and update docs by @WenjieDu in #449
- Update Imputeformer by @tongnie in #450
- Update docs by @WenjieDu in #452
- Add ImputeFormer, fix RevIN, and update docs by @WenjieDu in #454
- Implement Linear Interpolation (Lerp) Imputation Method by @colesussmeier in #459
- Update docs conf by @WenjieDu in #461
- Add Lerp as an imputation method and update the docs config by @WenjieDu in #462
- Update dependencies in conda env files by @WenjieDu in #463
- Update docs and conda env dependencies by @WenjieDu in #465
- Add TCN as an imputation model by @WenjieDu in #467
- Add TCN and update docs by @WenjieDu in #468
- Fix saving failed when the strategy is 'better' by @WenjieDu in #469
- Use xeLatex engine to avoid Unicode error by @WenjieDu in #472
- Fix failed saving strategy "better", update docs, and release v0.7 by @WenjieDu in #470
New Contributors
- @tongnie made their first contribution in #450
- @colesussmeier made their first contribution in #459
Full Changelog: v0.6...v0.7
v0.6 🔥🪭 Nine New Models
In v0.4 and v0.5, PyPOTS brought you new models. Now, let's fan🪭 the flame🔥 in v0.6!
- Non-stationary Transformer, Pyraformer, Reformer, SCINet, RevIN, Koopa, MICN, TiDE, and StemGNN are included in this new release;
- another new PyPOTS Ecosystem library, `BenchPOTS`, has been released; it supports preprocessing pipelines for 170 public time series datasets for benchmarking machine learning on POTS data;
- added the argument `verbose` to mute all info-level logging;
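The effect of such a `verbose` switch can be illustrated with Python's standard `logging` module. This is a sketch of the pattern, not PyPOTS's internal logger:

```python
import logging

def get_logger(verbose: bool = True) -> logging.Logger:
    """Return a logger that emits INFO messages only when verbose=True;
    with verbose=False, only WARNING and above get through."""
    logger = logging.getLogger("sketch")
    logger.setLevel(logging.INFO if verbose else logging.WARNING)
    return logger
```

Passing `verbose=False` silences routine progress messages while keeping warnings and errors visible.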
👍 Kudos to our new contributor @LinglongQian.
Please refer to the changelog below for more details.
What's Changed
- Implement Non-stationary Transformer as an imputation model by @WenjieDu in #388
- Implement Pyraformer as an imputation model by @WenjieDu in #389
- Add Nonstationary Transformer and Pyraformer, update docs by @WenjieDu in #390
- Treat keyboard interruption during training as a warning, and update the docs by @WenjieDu in #391
- Add SCINet modules and implement it as an imputation model by @WenjieDu in #406
- Add RevIN modules and implement it as an imputation model by @WenjieDu in #407
- Add Koopa modules and implement it as an imputation model by @WenjieDu in #403
- Add MICN modules and implement it as an imputation model by @WenjieDu in #401
- Update docs and references by @WenjieDu in #410
- Add TiDE modules and implement it as an imputation model by @WenjieDu in #402
- Add Koopa, SCINet, RevIN, MICN and TiDE, and update the docs by @WenjieDu in #412
- Add StemGNN modules and implement it as an imputation model by @WenjieDu in #415
- Add GRU-D as an imputation model by @WenjieDu in #417
- Update README and docs by @WenjieDu in #420
- Implement StemGNN and GRU-D as an imputation model by @WenjieDu in #421
- Update set_random_seed() by @WenjieDu in #423
- Enable tuning new added models by @WenjieDu in #424
- ETSformer hyperparameters mismatch during NNI tuning by @LinglongQian in #425
- Fix ETSformer tuning bug, and release v0.6rc1 by @WenjieDu in #427
- Add arg `verbose` to control logging by @WenjieDu in #428
- Add Reformer as an imputation model by @WenjieDu in #433
- Add Reformer, add option `verbose` to control the training log, and add benchpots as a dependency by @WenjieDu in #434
- Raise the minimum supported Python version to v3.8 by @WenjieDu in #436
- Fix linting error by @WenjieDu in #437
Full Changelog: v0.5...v0.6
v0.6 RC
v0.5 🔥 New Models & Features
Here is the summary of this new version's changelog:
- the modules of iTransformer, FiLM, and FreTS are included in PyPOTS. The three have been implemented as imputation models in this version;
- CSDI is implemented as a forecasting model;
- `MultiHeadAttention` is enabled to work with all attention operators in PyPOTS;
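The idea of one multi-head wrapper delegating to any attention operator can be sketched in NumPy. The names here are hypothetical and this is not the library's actual (PyTorch-based) implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """One attention operator: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_head_attention(q, k, v, n_heads, operator=scaled_dot_product_attention):
    """Split the model dim into heads, delegate the score computation to a
    replaceable operator, then concatenate the heads back together."""
    seq_len, d_model = q.shape
    d_head = d_model // n_heads
    split = lambda x: x.reshape(seq_len, n_heads, d_head).swapaxes(0, 1)
    out = operator(split(q), split(k), split(v))         # (n_heads, seq_len, d_head)
    return out.swapaxes(0, 1).reshape(seq_len, d_model)  # concat heads
```

Because the operator is a plain argument, any attention variant with the same signature (e.g. a sparse or probabilistic one) can be plugged in without touching the multi-head machinery.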
What's Changed
- Fix failed doc building, fix a bug in gene_random_walk(), and refactor unit testing configs by @WenjieDu in #355
- Implement CSDI as a forecasting model by @WenjieDu in #354
- Update the templates by @WenjieDu in #356
- Implement forecasting CSDI and update the templates by @WenjieDu in #357
- Update README by @WenjieDu in #359
- Update docs by @WenjieDu in #362
- Implement FiLM as an imputation model by @WenjieDu in #369
- Implement FreTS as an imputation model by @WenjieDu in #370
- Implement iTransformer as an imputation model by @WenjieDu in #371
- Add iTransformer, FreTS, FiLM by @WenjieDu in #372
- Fix failed CI testing on macOS with Python 3.7 by @WenjieDu in #373
- Add SaitsEmbedding, fix failed CI on macOS with Python3.7, and update docs by @WenjieDu in #374
- Fix error in gene_random_walk by @WenjieDu in #375
- Try to import torch_geometric only when init Raindrop by @WenjieDu in #381
- Enable all attention operators to work with `MultiHeadAttention` by @WenjieDu in #383
- Fix a bug in gene_random_walk, import pyg only when initing Raindrop, and make MultiHeadAttention work with all attention operators by @WenjieDu in #384
- Refactor code and update docstring by @WenjieDu in #385
- Add a Chinese version of the README by @Justin0388 in #386
- Refactor code and update docs by @WenjieDu in #387
New Contributors
- @Justin0388 made their first contribution in #386
We also would like to thank Sijia @phoebeysj, Haitao @CowboyH, and Dewang @aizaizai1989 for their help in polishing Chinese README.
Full Changelog: v0.4.1...v0.5
v0.4.1 🚧 Refactor&Modularization
In this refactoring version, we
- applied the SAITS loss function to the imputation models newly added in v0.4 (Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer), and added the arguments `MIT_weight` and `ORT_weight` to them for users to balance the multi-task learning;
- modularized all neural network models and put their modules in the package `pypots.nn.modules`;
- removed deprecated metric funcs (e.g. `pypots.utils.metrics.cal_mae`, which has been replaced by `pypots.utils.metrics.calc_mae`);
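The weighting scheme behind `ORT_weight` and `MIT_weight` can be sketched with two masked-error terms. This is a simplified illustration of the SAITS-style multi-task loss (ORT = observed reconstruction task, MIT = masked imputation task), not the library's exact code:

```python
import numpy as np

def masked_mae(pred, target, mask):
    """Mean absolute error computed only where mask == 1."""
    return np.abs(pred - target)[mask == 1].mean()

def saits_style_loss(pred, target, observed_mask, indicating_mask,
                     ORT_weight=1.0, MIT_weight=1.0):
    """ORT term: reconstruct values that were actually observed.
    MIT term: impute values that were artificially masked out for training.
    The two weights let users balance the multi-task learning."""
    ORT_loss = masked_mae(pred, target, observed_mask)
    MIT_loss = masked_mae(pred, target, indicating_mask)
    return ORT_weight * ORT_loss + MIT_weight * MIT_loss
```

Setting `MIT_weight=0`, for instance, would train purely on reconstruction of observed values.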
What's Changed
- Apply SAITS loss to newly added models and update the docs by @WenjieDu in #346
- Modularize neural network models by @WenjieDu in #348
- Modularize NN models, remove deprecated metric funcs, and update docs by @WenjieDu in #349
- Remove `pypots.imputation.locf.modules` and add assertions for BTTF by @WenjieDu in #350
- Test building package during CI by @WenjieDu in #353
- Avoid the import error `MessagePassing not defined` by @WenjieDu in #351
Full Changelog: v0.4...v0.4.1
v0.4 🔥 New models
- applied the SAITS embedding strategy to models Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer to make them applicable to POTS data as imputation methods;
- fixed a bug in the USGAN loss function;
- gathered several Transformer embedding methods into the package `pypots.nn.modules.transformer.embedding`;
- added the attribute `best_epoch` for NN models to record the best epoch number and log it after model training;
- made the self-attention operator replaceable in the class `MultiHeadAttention` for Transformer models;
- renamed the argument `d_inner` of all models in previous versions into `d_ffn`, for unified argument naming and easier understanding;
- removed the deprecated functions `save_model()` and `load_model()` in all NN model classes, which are now replaced by `save()` and `load()`;
What's Changed
- Removing deprecated functions by @WenjieDu in #318
- Add Autoformer as an imputation model by @WenjieDu in #320
- Removing deprecated save_model and load_model, adding the imputation model Autoformer by @WenjieDu in #321
- Simplify MultiHeadAttention by @WenjieDu in #322
- Add PatchTST as an imputation model by @WenjieDu in #323
- Renaming d_inner into d_ffn by @WenjieDu in #325
- Adding PatchTST, renaming d_innner into d_ffn, and refactoring Autofomer by @WenjieDu in #326
- Add DLinear as an imputation model by @WenjieDu in #327
- Add ETSformer as an imputation model by @WenjieDu in #328
- Add Crossformer as an imputation model by @WenjieDu in #329
- Add FEDformer as an imputation model by @WenjieDu in #330
- Add Crossformer, Autoformer, PatchTST, DLinear, ETSformer, FEDformer as imputation models by @WenjieDu in #331
- Refactor embedding package, remove the unused part in Autoformer, and update the docs by @WenjieDu in #332
- Make the self-attention operator replaceable in Transformer by @WenjieDu in #334
- Add informer as an imputation model by @WenjieDu in #335
- Speed up testing procedure by @WenjieDu in #336
- Add Informer, speed up CI testing, and make self-attention operator replaceable by @WenjieDu in #337
- debug USGAN by @AugustJW in #339
- Fix USGAN loss function, and update the docs by @WenjieDu in #340
- Add the attribute `best_epoch` to record the best epoch num by @WenjieDu in #342
- Apply SAITS embedding strategy to newly added models by @WenjieDu in #343
- Release v0.4, apply SAITS embedding strategy to the newly added models, and update README by @WenjieDu in #344
Full Changelog: v0.3.2...v0.4
v0.3.2 🐞 Bugfix
- fixed an issue that stopped us from running Raindrop on multiple CUDA devices;
- added Mean and Median as naive imputation methods;
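These naive strategies can be sketched in NumPy: each feature column's missing values are filled with that column's statistic. This is an illustrative sketch, not the library's implementation:

```python
import numpy as np

def impute_with_statistic(x: np.ndarray, strategy: str = "mean") -> np.ndarray:
    """Fill NaNs in a (steps, features) array with each feature column's
    mean or median, computed over the observed values only."""
    filler = np.nanmean if strategy == "mean" else np.nanmedian
    fill_values = filler(x, axis=0)          # per-column statistic, NaNs ignored
    out = x.copy()
    nan_pos = np.isnan(out)
    # Look up the statistic of each NaN's column and write it in place.
    out[nan_pos] = np.take(fill_values, np.nonzero(nan_pos)[1])
    return out
```

Despite their simplicity, such statistics make useful baselines against which the neural imputation models can be compared.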
What's Changed
- Refactor LOCF, fix Raindrop on multiple cuda devices, and update docs by @WenjieDu in #308
- Remind how to display the figs rather than invoking plt.show() by @WenjieDu in #310
- Update the docs and requirements by @WenjieDu in #311
- Fixing some bugs, updating the docs and requirements by @WenjieDu in #312
- Make CI workflows only test with Python v3.7 and v3.11 by @WenjieDu in #313
- Update the docs and release v0.3.2 by @WenjieDu in #314
- Add mean and median as imputation methods, and update docs by @WenjieDu in #317
Full Changelog: v0.3.1...v0.3.2