Hi! I would like to create a subset of the Pile that is ~5 GB in size. The final subset should follow the original distribution of the component datasets, and the documents included should be randomly sampled from each dataset.
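Conceptually, what I'm after is per-document Bernoulli sampling at a rate of roughly target size / total size, which preserves the component mix in expectation. A minimal sketch of the idea, assuming the data is available as jsonl shards on disk (file names and size figures below are illustrative, not taken from this repo):

```python
import json
import random

# Illustrative numbers: the full Pile is on the order of ~825 GiB and the
# target is ~5 GiB, so each document is kept with probability ~5/825.
# Sampling documents i.i.d. within every component preserves the original
# dataset mix in expectation.
KEEP_PROB = 5 / 825

random.seed(0)  # fixed seed for a reproducible subset


def sample_shard(in_path: str, out_path: str) -> None:
    """Bernoulli-sample documents from one jsonl shard (paths are hypothetical)."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            if random.random() < KEEP_PROB:
                fout.write(line)


sample_shard("pile_shard_00.jsonl", "pile_subset_00.jsonl")
```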
I tried to work with the `--limit`, `--read_amount`, and `--make_dataset_samples` parameters to reduce the download size, but when I run the script, each dataset is still downloaded at its full size.
I would greatly appreciate it if you could tell me whether what I'm looking for is achievable with this repo, and if so, what the command would be.
Thanks!