
Questions for Experiment 5.2 #12

Open
omerkolcak opened this issue Apr 9, 2023 · 0 comments
Hello,

Thank you for sharing this great work. I have some questions about experiment 5.2; I'm not clear on what this experiment is trying to achieve. As far as I understand, you train a base model that is inherently interpretable (logistic regression or a decision tree) to compare against explainers such as LIME, Parzen, etc. For each instance, you take at most 10 features from this base model as "gold features" and check how many of these gold features are recovered by the explainers. If my understanding is correct, I have this question: if the dataset is complex enough that logistic regression or a decision tree performs poorly on it, are these selected gold features still reliable to compare the explainers against?
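To make sure I'm reading the setup correctly, here is a minimal sketch of how I understand the gold-feature recall being measured. The synthetic data, model settings, and function names below are my own assumptions for illustration, not taken from your code:

```python
# Minimal sketch (my assumptions, not the paper's code): train a sparse
# interpretable base model, take its top-weight features as "gold", and
# measure how many of them an explainer recovers for an instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           random_state=0)

# Interpretable base model (L1-regularized logistic regression as one example).
base = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# At most 10 features with the largest absolute weights form the gold set.
gold = set(np.argsort(np.abs(base.coef_[0]))[::-1][:10])

def gold_feature_recall(explained_features, gold_features=gold):
    """Fraction of gold features recovered by an explainer for one instance."""
    return len(set(explained_features) & gold_features) / len(gold_features)

# e.g. if an explainer (LIME, Parzen, ...) returned these feature indices:
print(gold_feature_recall([3, 7, 12, 21, 5]))
```

With a setup like this, my concern is exactly that when the base model itself fits the data poorly, the features with the largest weights may not reflect the true signal, so recall against them may not be a meaningful target.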
