- 👯 We are looking for self-motivated researchers to join or visit our group.
 
[Homepage] [Google Scholar] [Twitter]
I am currently a postdoctoral researcher at the Computer Vision Lab, ETH Zurich, Switzerland.
We have released the code of XingVTON and CIT for virtual try-on, TransDA for source-free domain adaptation using Transformers, IEPGAN for 3D pose transfer, TransDepth for monocular depth prediction using Transformers, GLANet for unpaired image-to-image translation, and MHFormer for 3D human pose estimation.
- 3D-SGAN (ECCV 2022)
 
- MHFormer (CVPR 2022)
 
- TransDepth (ICCV 2021)
  - StructuredAttention (CVPR 2018 Spotlight)
 
- AnonyGAN (ICIAP 2021)
 
- XingGAN (ECCV 2020)
  - BiGraphGAN (BMVC 2020 Oral)
  - C2GAN (ACM MM 2019 Oral)
  - GestureGAN (ACM MM 2018 Oral & Best Paper Candidate)
 
- LGGAN (CVPR 2020)
  - DAGAN (ACM MM 2020)
  - DPGAN (TIP 2021)
  - SelectionGAN (CVPR 2019 Oral)
  - CrossMLP (BMVC 2021 Oral)
  - EdgeGAN
  - PanoGAN (TMM 2022)
 
- GLANet
  - AttentionGAN (IJCNN 2019 Oral)
  - GazeAnimation (ACM MM 2020)
  - AsymmetricGAN (ACCV 2018 Oral)
 
- DDLCN (WACV 2019 Oral)
 
- HandGestureRecognition (Neurocomputing 2019)
 