How to visualize the learned features? #741
What do you mean by "learned data features"? If you mean analysing the importance of each feature in the model, I believe there is something like a feature importance method for that.
I use t-SNE to project time series into a figure, and I want to observe how well different models classify them. The model's input is a multivariate time series, so will the feature_importance function work when the input is not the extracted features? I am also confused about the functions in the explainability module; could you give me some instructions or an example? Thank you very much for the reply!
feature_importance is a method of the Learner class, so it considers the inputs the learner was trained on.
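To illustrate what a permutation feature importance computes conceptually (this is a dependency-free numpy sketch, not tsai's actual implementation): shuffle one variable across samples, re-score, and measure the drop in the metric. The `model_fn` here is a hypothetical stand-in for a trained Learner's prediction function.

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric_fn, n_repeats=5, seed=0):
    """Permutation importance: drop in metric when one variable is shuffled.

    model_fn : callable mapping X -> predictions (stand-in for a trained model)
    X        : array of shape (n_samples, n_vars, seq_len), i.e. multivariate
               time series laid out the way tsai does
    """
    rng = np.random.default_rng(seed)
    baseline = metric_fn(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for var in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, var, :] = Xp[perm, var, :]  # shuffle one variable across samples
            scores.append(metric_fn(y, model_fn(Xp)))
        importances[var] = baseline - np.mean(scores)
    return importances

# Toy check: a "model" that only looks at var_0, so var_1 gets 0 importance.
X = np.random.default_rng(1).normal(size=(200, 2, 10))
y = (X[:, 0, :].mean(axis=1) > 0).astype(int)
model_fn = lambda X: (X[:, 0, :].mean(axis=1) > 0).astype(int)
acc = lambda y, p: (y == p).mean()
imp = permutation_importance(model_fn, X, y, acc)
```

A variable the model never uses keeps the metric unchanged when shuffled, which is why its importance comes out as exactly 0.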
If explainability is very important for you, you can try the XCM model, which has a focus on that: it provides explanations for both the 1D and 2D CNNs, so the granularity is at most one time-series element of each variable. Alternatively, this repository has modified some popular models for greater explainability, or you can do so yourself (e.g. visualise the 1D CNNs of InceptionTime with Grad-CAM, which I happen to have done). Of course, you can also compute permutation or ablation feature importances for all models in tsai, with the method mentioned above.
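For reference, the core Grad-CAM combination step for a 1D CNN layer can be sketched as below. This assumes you have already captured the layer's activations and the gradients of the class score with respect to them (e.g. via PyTorch hooks on the trained network, which is model-specific and not shown); only the weighting arithmetic is illustrated here.

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Grad-CAM saliency over time steps for one 1D conv layer.

    activations : (n_channels, seq_len) feature maps of the chosen conv layer
    gradients   : (n_channels, seq_len) d(class score)/d(activations)
    Returns a (seq_len,) saliency curve, normalised to [0, 1].
    """
    weights = gradients.mean(axis=1)  # global-average-pool the gradients
    cam = np.maximum((weights[:, None] * activations).sum(axis=0), 0.0)  # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()  # normalise for plotting as a heat strip
    return cam

# Demo on synthetic activations/gradients (real ones come from the network).
demo = grad_cam_1d(np.random.default_rng(0).normal(size=(8, 25)),
                   np.random.default_rng(1).normal(size=(8, 25)))
```

The resulting curve can be overlaid on the input series to show which time steps drove the prediction.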
I tried the feature_importance function and got 5 features (var_0 to var_4). The dataset has 5 types of data, but var_2 and var_3 got an importance of 0 under permutation. I guess the 5 features are what I want. Also, how can I get the inputs the learner was trained on? I can't find the function definition in the files. Thank you both again for the support!
Hi @jiangye-git,
Hi, I want to qualitatively observe the learned data features and compare the classification results of different models, but I have no idea how to get the learned data features. I would appreciate any instructions. Thanks in advance!
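One common way to get "learned features" is to take the activations of the trained model's penultimate layer (e.g. captured with a PyTorch forward hook; that extraction step is model-specific and not shown) and embed them in 2-D, colouring points by class. In practice t-SNE (sklearn.manifold.TSNE) often separates classes better; the sketch below uses PCA via SVD only to stay dependency-free.

```python
import numpy as np

def pca_2d(features):
    """Project learned features to 2-D for a quick qualitative look.

    features : (n_samples, n_features) penultimate-layer activations of a
               trained model, one row per time series.
    Returns (n_samples, 2) coordinates, ready to scatter-plot.
    """
    centred = features - features.mean(axis=0)  # PCA requires centred data
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T  # top-2 principal directions

# Usage with placeholder features; colour the scatter by true class
# to compare how well different models separate the classes.
emb = pca_2d(np.random.default_rng(0).normal(size=(50, 16)))
```

Running the same projection on features from each model gives a side-by-side qualitative comparison of their learned representations.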