EDGE: Editable Dance Generation From Music #3817

Open
@clarencechen

Description

Model/Pipeline/Scheduler description

The authors of this paper propose a diffusion-based method for generating human dance sequences (represented as joint-angle trajectories) conditioned on music audio embeddings. The weights are hosted on Google Drive and are publicly available, but translating them into a Diffusers-compatible format will require some surgery. Incorporating this model may also require implementing FiLM timestep conditioning (see the sketch below) and other neural-network modules that are not currently available in Diffusers.
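
For context, FiLM (Feature-wise Linear Modulation) conditions a network by applying a per-channel scale and shift computed from a conditioning signal, here a diffusion timestep embedding. Below is a minimal PyTorch sketch of the general technique; the module name, dimensions, and tensor layout are illustrative assumptions and not the exact EDGE implementation.

```python
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift features
    using parameters predicted from a conditioning vector
    (e.g. a diffusion timestep embedding)."""

    def __init__(self, feature_dim: int, cond_dim: int):
        super().__init__()
        # Predict a per-channel scale and shift from the conditioning vector.
        self.proj = nn.Linear(cond_dim, 2 * feature_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, seq_len, feature_dim) pose-sequence features
        # cond: (batch, cond_dim) conditioning embedding
        scale, shift = self.proj(cond).chunk(2, dim=-1)
        # Broadcast the modulation over the sequence dimension.
        return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)


# Hypothetical usage with made-up dimensions:
film = FiLM(feature_dim=512, cond_dim=256)
x = torch.randn(4, 150, 512)      # 150-frame pose sequences
t_emb = torch.randn(4, 256)       # timestep embeddings
out = film(x, t_emb)              # (4, 150, 512)
```

The `1 + scale` formulation keeps the module close to an identity mapping at initialization, a common choice for conditioning layers in diffusion models.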

Open source status

  • The model implementation is available
  • The model weights are available (only relevant if the addition is not a scheduler).

Provide useful links for the implementation

Paper: https://arxiv.org/pdf/2211.10658.pdf
Website: https://edge-dance.github.io/
Github: https://github.com/Stanford-TML/EDGE

@jtseng20 @Stanford-TML
