Inaccurate "Cold Start" dataset partition method? Lack of evaluation process?
The dataset is divided in a way that seems inconsistent with the motivation of this paper. What the paper aims to solve is the general cold-start problem (users or items with very few interactions). However, the three "cold start" states obtained by splitting users and items chronologically do not seem to match that general notion of cold start.
In fact, under this partition many newly registered users still have a large number of interaction records, which is not a user cold-start scenario. Likewise, many new items also have plenty of interaction records, which is not an item cold-start condition.
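To make the concern concrete: a genuine cold-start partition would also cap the interaction count of each "new" user or item, not just split by time. A minimal sketch of such a filter, assuming a pandas DataFrame of interactions with hypothetical `user_id`/`item_id` columns (the threshold of 20 is likewise illustrative):

```python
import pandas as pd

def cold_start_users(interactions: pd.DataFrame, max_history: int = 20) -> pd.DataFrame:
    """Keep only users whose total interaction count is small.

    A chronological split alone is not enough: a recently registered user
    may already have hundreds of records. Capping the history size (here
    at `max_history`, an illustrative threshold) is what makes the subset
    a genuine user cold-start set.
    """
    counts = interactions.groupby("user_id")["item_id"].count()
    cold_users = counts[counts <= max_history].index
    return interactions[interactions["user_id"].isin(cold_users)]
```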
In addition, this project lacks evaluation code. Judging from the experimental section of the paper, the authors seem to train and test within each of the four datasets. However, this code neither splits each dataset into a training part and a testing part, nor contains any testing code.
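For reference, the kind of per-user split the paper seems to intend could be as simple as the sketch below; the `timestamp` column name and the 80/20 support/query ratio are my own assumptions, not taken from this repository:

```python
import pandas as pd

def split_support_query(user_history: pd.DataFrame, query_frac: float = 0.2):
    """Split one user's interactions chronologically into a support
    (training) part and a query (testing) part, so the model is never
    trained on the future. The 80/20 ratio is an illustrative choice."""
    history = user_history.sort_values("timestamp")
    n_query = max(1, int(len(history) * query_frac))
    return history.iloc[:-n_query], history.iloc[-n_query:]
```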
Moreover, training and testing separately within each of the four datasets does not seem to match the motivation of meta-learning. Under the meta-learning paradigm, the model should be meta-trained on the warm-state data and then meta-tested on one of the three cold-start datasets, to verify its ability to learn new tasks quickly and cope with cold-start conditions. That does not seem to be what happens here.
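In code, the protocol I would expect looks roughly like the following outline; `adapt` and `evaluate` are hypothetical hooks standing in for the model's inner-loop update and metric computation, not the repository's actual API:

```python
from typing import Callable, Iterable, List, Tuple

# A task is one user's (support, query) pair of interaction sets.
# All names here are illustrative placeholders, not this repository's API.
Task = Tuple[object, object]

def meta_test_protocol(
    adapt: Callable[[object], object],            # inner-loop update on a support set
    evaluate: Callable[[object, object], float],  # metric on (adapted model, query set)
    cold_state_tasks: Iterable[Task],
) -> List[float]:
    """After meta-training on warm-state tasks only, measure how well the
    meta-learned initialization adapts to each unseen cold-start task:
    a few gradient steps on the support set, then score on the query set."""
    return [evaluate(adapt(support), query) for support, query in cold_state_tasks]
```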
The above is just my personal reading and doubt; I am not sure whether I have understood this correctly.