Inaccurate “Cold Start” dataset partition method? Lack of evaluation process? #14

Open
Jie190916 opened this issue Sep 7, 2022 · 0 comments

Comments

@Jie190916

The dataset partition seems inconsistent with the motivation of the paper. What the paper aims to solve is the general cold-start problem, i.e. users or items with very few interactions. However, the three "cold start" scenarios obtained by splitting users and items chronologically do not seem to match that general notion of cold start.
In fact, many of the newly registered users in this partition still have many interaction records, which does not look like a user cold-start setting. Similarly, many of the new items also have plenty of interaction records, which is not an item cold-start condition either. A quick count of interactions per user in the "cold" partitions, like the sketch below, would make this easy to check.
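A minimal sketch of the check I have in mind, assuming the user cold-start partition can be loaded as a pandas DataFrame; the `user_id` / `item_id` column names and the threshold are my own placeholders, not this repository's schema:

```python
import pandas as pd

def cold_start_profile(partition: pd.DataFrame, threshold: int = 20) -> None:
    """Report how many interactions each supposedly 'cold' user actually has."""
    counts = partition.groupby("user_id")["item_id"].count()
    print(counts.describe())  # distribution of interactions per "cold" user
    share = (counts > threshold).mean()
    print(f"share of 'cold' users with more than {threshold} interactions: {share:.2%}")
```

If a large share of users exceeds the threshold, the partition is not really a cold-start set in the usual sense.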

In addition, this project lacks evaluation code. Judging from the experimental section of the paper, the authors appear to train and test within each of the four datasets. However, this code neither splits each dataset into a training part and a testing part, nor contains any testing code.
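For reference, a per-dataset split could be as simple as the following sketch, assuming each partition is a pandas DataFrame with a `timestamp` column; the column name and the 80/20 ratio are my own assumptions, not the paper's protocol:

```python
from typing import Tuple
import pandas as pd

def chronological_split(interactions: pd.DataFrame,
                        test_ratio: float = 0.2) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """Hold out the most recent interactions as the test portion."""
    ordered = interactions.sort_values("timestamp")
    cutoff = int(len(ordered) * (1 - test_ratio))
    return ordered.iloc[:cutoff], ordered.iloc[cutoff:]
```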

Furthermore, training and testing within each of the four datasets separately does not seem to match the motivation of meta-learning. Under the meta-learning paradigm, the model should be meta-trained on the warm-state data and then evaluated on one of the three cold-start datasets, in order to verify that meta-learning can quickly adapt to new tasks and cope with cold-start conditions (roughly the protocol sketched below). But that does not seem to be what happens in this paper.
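Here is a self-contained sketch of the protocol I have in mind: meta-train on warm-state tasks only, then adapt and evaluate on one of the cold-start partitions. The toy model (global item means plus a per-user bias fitted on the support set) is purely illustrative and is not the method of the paper or of this repository:

```python
from typing import Dict, List, Tuple

Interaction = Tuple[int, float]                      # (item_id, rating)
Task = Tuple[List[Interaction], List[Interaction]]   # (support set, query set) for one user

def meta_train(warm_tasks: List[Task]) -> Dict[int, float]:
    """Learn global item means from warm-state users only."""
    ratings: Dict[int, List[float]] = {}
    for support, query in warm_tasks:
        for item, r in support + query:
            ratings.setdefault(item, []).append(r)
    return {item: sum(rs) / len(rs) for item, rs in ratings.items()}

def adapt_and_test(item_mean: Dict[int, float], task: Task) -> float:
    """Fit a user bias on the support set, report MAE on the query set."""
    support, query = task
    bias = sum(r - item_mean.get(i, 3.0) for i, r in support) / max(len(support), 1)
    errors = [abs(item_mean.get(i, 3.0) + bias - r) for i, r in query]
    return sum(errors) / max(len(errors), 1)

# warm_tasks would come from the warm-state partition and cold_tasks from one
# of the three cold-start partitions (loading code omitted).
```

The point is only the data flow: nothing from the cold-start partition is seen during meta-training, and adaptation happens per cold user on its small support set.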

The above are my personal thoughts and doubts; I am not sure whether my understanding is correct.
