
sound-style-transfer-diffusion

Authors

Description

Music style transfer using a Stable Diffusion model. In this project we attempt to recreate the paper Music Style Transfer with Time-Varying Inversion of Diffusion Models by Sifei Li, Yuxin Zhang, Fan Tang, Chongyang Ma, Weiming Dong, and Changsheng Xu.

Environment

We use Conda to manage our environments. Depending on whether you are running on CPU or GPU, create the environment with one of the following commands:

CPU:

conda env create -f environment-cpu.yaml

GPU:

conda env create -f environment-gpu.yaml
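After creating the environment, it must be activated before running any of the project scripts. A minimal sketch of the full setup (the environment name `sound-style-transfer` below is an assumption; use the `name:` field from the chosen YAML file):

```shell
# Create the environment from the YAML file (GPU variant shown).
conda env create -f environment-gpu.yaml

# Activate it; the actual name is defined in the YAML's `name:` field,
# "sound-style-transfer" here is only a placeholder.
conda activate sound-style-transfer
```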

Dataset

To download the dataset, run the following command from the root directory:

python dataset.py
