
javacpp-pytorch: with the dataset and dataloader, when and how should the Tensor / Example be split for the batch size? #1562

Open
mullerhai opened this issue Jan 2, 2025 · 0 comments

Hi:
I want to use the dataset and dataloader supplied by javacpp-pytorch to load my local MNIST data files. I am not using the built-in MNIST dataset and MNIST data loader; instead I use ChunkRandomDataLoader together with ChunkDataReader, ChunkDataset, and RandomSampler. The whole dataset has 60000 samples and I set the batch size to 32, but when I load data for the model's forward pass, every batch contains the whole dataset. I do not understand why neither the dataset nor the dataloader splits the data. I do know that when my ChunkDataReader loads the data file, all 60000 samples end up in a single Example (instead of one Example per sample), so the ExampleVector contains only one Example object. So the Example needs to be split somewhere: in the data reader, in the dataset, or when the dataloader is initialized, but I do not know where this is supposed to happen. Please explain where in the pipeline the data should be split. Thanks.
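To make the question concrete, here is a rough sketch of the kind of per-sample reader I have in mind. It assumes the org.bytedeco.pytorch classes mirror the libtorch C++ chunk-dataset API (ChunkDataReader with read_chunk / chunk_count / reset, Example(data, target), ExampleVector), and that `images` and `labels` are tensors I have already read from the local MNIST files. The class name PerSampleMnistReader and the exact Java constructor and override signatures are my assumptions, not verified against the bindings.

```java
import org.bytedeco.pytorch.*;

// Sketch only: assumes `images` is a [60000, 1, 28, 28] float Tensor and `labels`
// a [60000] long Tensor already loaded from the local MNIST files, and that
// ChunkDataReader can be subclassed from Java with these method names
// (read_chunk / chunk_count / reset), mirroring the libtorch C++ API.
public class PerSampleMnistReader extends ChunkDataReader {
    private final Tensor images;
    private final Tensor labels;

    public PerSampleMnistReader(Tensor images, Tensor labels) {
        this.images = images;
        this.labels = labels;
    }

    @Override
    public ExampleVector read_chunk(long chunkIndex) {
        // One Example per sample, instead of one Example holding all 60000 rows.
        // With per-sample Examples the chunk dataset / dataloader has something
        // it can actually group into batches of 32.
        int n = (int) images.size(0);
        Example[] examples = new Example[n];
        for (int i = 0; i < n; i++) {
            examples[i] = new Example(images.select(0, i), labels.select(0, i));
        }
        return new ExampleVector(examples);
    }

    @Override
    public long chunk_count() {
        return 1; // the whole file as a single chunk, just for this sketch
    }

    @Override
    public void reset() {
        // nothing to re-open in this in-memory sketch
    }
}
```

If that is the right idea, my remaining question is where the batch size of 32 then gets applied (in the chunk dataset options, in the dataloader, or both), and why a single giant Example, like the one my current reader returns, is not split automatically.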
