
FFHQ Pre-Processing #39

@KomputerMaster64

Description


Thank you, sir, for sharing the scripts for dataset preparation. I am trying to implement the DDGAN model on the FFHQ 256x256 dataset. I am using the resized FFHQ 256x256 dataset from Kaggle, since the original FFHQ 1024x1024 dataset is about 90 GB, which exceeds the limits of my resources.


The Kaggle dataset ships as an archive.zip file containing a directory "resized", which holds the 70k .jpg files.


The file structure is as follows:

archive.zip
└── resized
    └── (70k .jpg images)


I am using Google Drive and Colab notebooks for the implementation, with CODE_DIR = "/content/drive/MyDrive/Repositories/NVAE" and DATA_DIR = "/content/drive/MyDrive/Repositories/NVAE/dataset_nvae". When I run the command

!python create_ffhq_lmdb.py --ffhq_img_path=$DATA_DIR/ffhq/resized/ --ffhq_lmdb_path=$DATA_DIR/ffhq/ffhq-lmdb --split=train

I get the following error message:

Traceback (most recent call last):
  File "create_ffhq_lmdb.py", line 70, in <module>
    main(args.split, args.ffhq_img_path, args.ffhq_lmdb_path)
  File "create_ffhq_lmdb.py", line 46, in main
    im = Image.open(img_path)
  File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2843, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/MyDrive/Repositories/NVAE/dataset_nvae/ffhq/resized/55962.png'
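The traceback points to a filename-extension mismatch: the Kaggle archive contains .jpg files, while create_ffhq_lmdb.py is looking up .png names (e.g. 55962.png). A minimal workaround sketch, assuming the script only needs the same image stems with a .png extension, is to convert the resized JPEGs before building the LMDB (the helper function name and the paths reused from this issue are illustrative; adjust to your setup):

```python
import glob
import os

from PIL import Image


def jpgs_to_pngs(src_dir):
    """Save a .png alongside every .jpg in src_dir, keeping the same stem,
    so that lookups like 55962.png succeed. Returns the number converted.
    (Hypothetical helper, not part of create_ffhq_lmdb.py.)"""
    converted = 0
    for jpg_path in glob.glob(os.path.join(src_dir, "*.jpg")):
        png_path = os.path.splitext(jpg_path)[0] + ".png"
        if not os.path.exists(png_path):
            Image.open(jpg_path).save(png_path)
            converted += 1
    return converted


# Paths taken from this issue; change DATA_DIR for a different setup.
DATA_DIR = "/content/drive/MyDrive/Repositories/NVAE/dataset_nvae"
jpgs_to_pngs(os.path.join(DATA_DIR, "ffhq", "resized"))
```

This roughly doubles the disk usage of the resized folder; deleting the .jpg files afterwards (or patching the script to read .jpg directly) avoids that.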
