DeepFaceEdit is an interactive face generation and editing tool built on top of StyleGAN2-ADA, designed for manipulating facial attributes in an intuitive way. It enables users to generate, project, and semantically edit high-quality face images using latent space traversal techniques, all through a user-friendly GUI.
Whether you're experimenting with synthetic identity creation or exploring the interpretability of GANs, DeepFaceEdit provides powerful tools to visualize and manipulate the latent space of human faces.
## ✨ Features

- 🧬 **Face Generation** – Generate realistic and diverse human faces from random seeds or latent vectors.
- 🛠️ **Face Editing** – Modify facial attributes such as age, gender, expression, and more through semantic vector controls.
- 🖼️ **Face Projection** – Project real images into StyleGAN2's latent space using inversion techniques, enabling you to edit real faces.
- 🖱️ **Interactive GUI** – Built with FreeSimpleGUI, the interface offers an intuitive editing experience without writing code.
- ⚡ **Real-time Preview** – Instantly visualize changes as you adjust latent parameters or apply vector edits.
- 💾 **Vector Reuse** – Save and reload latent vectors for consistent editing and iterative experimentation.
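Vector reuse can be as simple as persisting latents with NumPy's `.npy` format. A minimal sketch (the file name and the random stand-in latent are illustrative, not part of this repo):

```python
import tempfile
from pathlib import Path

import numpy as np

# Stand-in for a latent code (StyleGAN2's W space is 512-dimensional).
w = np.random.default_rng(42).standard_normal(512)

# Save the latent so an editing session can be resumed later.
path = Path(tempfile.gettempdir()) / "my_identity.npy"
np.save(path, w)

# In a later session: reload it and continue editing the same face.
w_restored = np.load(path)
```

Because the full latent is stored, reloading it reproduces exactly the same face for further edits.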
## 🧬 Face Generation

Generate unique faces using multiple techniques:

- **Random seed generation** – Explore diverse identities with different seeds.
- **Z-vector based generation** – Sample from the standard latent space.
- **W-vector based generation** – Leverage StyleGAN2's intermediate latent space for better control.

## 🛠️ Face Editing

- Traverse interpretable directions in latent space to control age, gender, smile, and other high-level features.
- Combine multiple edits with fine-grained control over intensity.

## 🖼️ Face Projection

- Align real faces using dlib's facial landmarks.
- Perform GAN inversion to embed real images into the latent space for further editing.
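At their core, semantic edits like these amount to adding scaled direction vectors (the role played by the `.npy` files in `vectors/`) to a latent code. A minimal sketch, using random stand-ins for the real direction vectors (the helper name is illustrative):

```python
import numpy as np

def apply_edits(w, edits):
    """Return a copy of latent `w` shifted by each (direction, intensity) pair.

    w      -- latent code, shape (512,)
    edits  -- iterable of (direction, intensity) tuples; each direction
              plays the role of a file like vectors/smile.npy or vectors/age.npy
    """
    w_out = w.copy()
    for direction, intensity in edits:
        w_out = w_out + intensity * direction
    return w_out

rng = np.random.default_rng(0)
w = rng.standard_normal(512)         # stand-in for a generated face's latent
smile = rng.standard_normal(512)     # stand-in for vectors/smile.npy
age = rng.standard_normal(512)       # stand-in for vectors/age.npy

# Combine two edits with different intensities (more smile, slightly younger).
w_edited = apply_edits(w, [(smile, 1.5), (age, -0.8)])
```

Because the edits are simple vector additions, they compose freely and can be undone by applying the negated intensity.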
## 🚀 Installation

Clone the repository:

```bash
git clone --recurse-submodules https://github.com/Haseebasif7/GAN.git
cd GAN
```

🧩 The `--recurse-submodules` flag ensures the `stylegan2-ada-pytorch` directory is properly cloned.
If you forgot `--recurse-submodules`, fix it with:

```bash
git submodule update --init --recursive
```

Ensure Python 3.11 is installed. You can download it from: https://www.python.org/downloads/release/python-3110/
Check the installation:

```bash
python --version
```

Create a virtual environment in the project root:

```bash
python -m venv venv
```

Activate it:
Windows:

```bash
venv\Scripts\activate
```

Linux/macOS:

```bash
source venv/bin/activate
```

Upgrade pip:
```bash
pip install --upgrade pip
```

Then install all required packages:

```bash
pip install -r requirements.txt
```

💡 If you face issues installing dlib, make sure C++ build tools are available:
- Windows: Install Microsoft Build Tools
- Linux/macOS: Install cmake, g++, and Python headers
## 📦 Pretrained Models

Two model files are required in the `models/` folder.
Download the StyleGAN2-ADA FFHQ weights from:

https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl

Then place the file in the `models/` folder:

```
GAN/
├── models/
│   └── ffhq.pkl
```
Next, download the compressed dlib landmark model:

http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2

Place it in the `models/` folder:

```
GAN/
├── models/
│   └── shape_predictor_68_face_landmarks.dat.bz2
```
Decompress it using the included script:

```bash
python models/decompress.py
```

✅ After decompression, you will have:

```
GAN/
├── models/
│   ├── shape_predictor_68_face_landmarks.dat
│   └── shape_predictor_68_face_landmarks.dat.bz2
```
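If you prefer not to use the script, the decompression step itself only needs the standard library. A minimal stand-in sketch (the function name is illustrative, not the repo's actual `decompress.py`):

```python
import bz2
import shutil
from pathlib import Path

def decompress_bz2(src: str) -> Path:
    """Decompress a .bz2 file next to itself, keeping the original archive."""
    src_path = Path(src)
    dst_path = src_path.with_suffix("")  # strips the trailing .bz2
    with bz2.open(src_path, "rb") as fin, open(dst_path, "wb") as fout:
        shutil.copyfileobj(fin, fout)    # streams without loading it all at once
    return dst_path
```

Called as `decompress_bz2("models/shape_predictor_68_face_landmarks.dat.bz2")`, it would produce the `.dat` file alongside the archive, as shown above.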
## 🧩 Git Submodule

This project uses the StyleGAN2-ADA repo as a Git submodule. Here's the `.gitmodules` content:

```
[submodule "stylegan2-ada-pytorch"]
	path = stylegan2-ada-pytorch
	url = https://github.com/NVlabs/stylegan2-ada-pytorch.git
```

Git reads this automatically when you clone with `--recurse-submodules`.
## 🧠 Architecture

- **Generator** – Generates high-quality face images using the StyleGAN2-ADA model.
- **Shifter** – Handles latent-space edits to apply semantic transformations (e.g., smile, age).
- **Projector** – Projects real-world face images into the latent space of the generator.
- **Controller** – Orchestrates the interaction between backend modules and manages workflow logic.
- **Graphical Interface (GUI)** – A user-friendly front end built with FreeSimpleGUI for interactive editing and visualization.
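As a rough illustration of the Controller's orchestration role, here is a hypothetical sketch of how it might wire a generator and a shifter together on behalf of the GUI. Class and method names here are illustrative assumptions, not the repo's actual API:

```python
# Hypothetical sketch of the Controller's role: keep track of the current
# latent and delegate work to the Generator and Shifter modules.
import numpy as np

class Controller:
    """Coordinates backend modules on behalf of the GUI layer."""

    def __init__(self, generator, shifter):
        self.generator = generator
        self.shifter = shifter
        self.current_w = None            # latent of the face currently shown

    def new_face(self, seed: int):
        """Generate a fresh face and remember its latent for later edits."""
        self.current_w = self.generator.w_from_seed(seed)
        return self.generator.synthesize(self.current_w)

    def edit(self, direction: np.ndarray, intensity: float):
        """Apply one semantic edit (e.g., smile) to the current face."""
        self.current_w = self.shifter.shift(self.current_w, direction, intensity)
        return self.generator.synthesize(self.current_w)
```

Keeping the latent state in one place like this is what lets slider edits in the GUI compose: each edit shifts `current_w`, and re-synthesis previews the result.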
## 📁 Final Project Structure (After Setup)
```
gan-face-editor/
├── core/
│   ├── align_faces.py
│   ├── generator.py
│   ├── projector.py
│   └── shifter.py
├── controller.py
├── facegan-env/
├── gui/
│   ├── main.py
│   └── layouts/
│       ├── interface.py
│       ├── project.py
│       └── sliders.py
├── LICENSE
├── models/
│   ├── ffhq.pkl
│   ├── shape_predictor_68_face_landmarks.dat
│   ├── shape_predictor_68_face_landmarks.dat.bz2
│   └── decompress.py
├── README.md
├── requirements.txt
├── results/
│   ├── age(2).png
│   └── m_n(2).png
├── run.py
├── settings/
│   ├── config.py
│   └── __init__.py
├── stylegan2-ada-pytorch/
├── utils/
│   ├── helpers.py
│   └── __init__.py
└── vectors/
    ├── age.npy
    ├── eye_distance.npy
    ├── eye_eyebrow_distance.npy
    ├── eye_ratio.npy
    ├── eyes_open.npy
    ├── gender.npy
    ├── lip_ratio.npy
    ├── mouth_open.npy
    ├── mouth_ratio.npy
    ├── nose_mouth_distance.npy
    ├── nose_ratio.npy
    ├── nose_tip.npy
    ├── pitch.npy
    ├── roll.npy
    ├── smile.npy
    └── yaw.npy
```
## 🔧 Tech Stack

- Python
- VGG16
- PyTorch
- StyleGAN2-ADA
- dlib
- FreeSimpleGUI
Example outputs can be found in the `results/` folder (e.g., `age(2).png` and `m_n(2).png`).