Virtual humans are now widely used across many industries. The purpose of this project is to help developers quickly migrate virtual human projects built on other GPU platforms to the MetaX GPU platform.
- Wan2.2-S2V is an audio-driven cinematic video generation model.
| Product Image | VirtualHuman Image | VirtualHuman Video |
| --- | --- | --- |
|  |  | virtual_video.mp4 |
- LatentSync is an end-to-end lip-sync method based on audio-conditioned latent diffusion models, with no intermediate motion representation. This diverges from previous diffusion-based lip-sync methods that rely on pixel-space diffusion or two-stage generation.
| Original Video | Translated Video |
| --- | --- |
| original_video.mp4 | translated_video.mp4 |
- CosyVoice is a powerful voice generation model that supports multiple languages with fast, stable generation.
| Input Voice | Output Voice |
| --- | --- |
| cosyvoice_ori_audio.mp4 | cosyvoice_demo_audio.mp4 |
- OpenAvatarChat is a modular interactive virtual human dialogue implementation that can run with full functionality on a single PC. Currently, it supports MiniCPM-o as the multimodal language model, or cloud APIs can replace the standard ASR + LLM + TTS implementation.
| Demo Video |
| --- |
| chat_demo.mp4 |
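The modular design described above, where each stage of the ASR + LLM + TTS chain can be swapped between a local model and a cloud API, can be sketched with stage interfaces. This is a minimal illustrative sketch, not OpenAvatarChat's actual API; all class and method names below are assumptions:

```python
from typing import Protocol


class ASR(Protocol):
    """Speech-to-text stage (local model or cloud API client)."""
    def transcribe(self, audio: bytes) -> str: ...


class LLM(Protocol):
    """Text dialogue stage (e.g. a local multimodal model or a cloud LLM)."""
    def reply(self, text: str) -> str: ...


class TTS(Protocol):
    """Text-to-speech stage."""
    def synthesize(self, text: str) -> bytes: ...


class DialoguePipeline:
    """Chains ASR -> LLM -> TTS; any stage can be replaced independently."""

    def __init__(self, asr: ASR, llm: LLM, tts: TTS):
        self.asr, self.llm, self.tts = asr, llm, tts

    def respond(self, audio_in: bytes) -> bytes:
        text = self.asr.transcribe(audio_in)   # user speech -> text
        answer = self.llm.reply(text)          # text -> dialogue reply
        return self.tts.synthesize(answer)     # reply -> output speech


# Stub stages for demonstration only (a real setup would plug in
# actual ASR/LLM/TTS backends).
class EchoASR:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")


class GreeterLLM:
    def reply(self, text: str) -> str:
        return f"You said: {text}"


class BytesTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")


pipeline = DialoguePipeline(EchoASR(), GreeterLLM(), BytesTTS())
print(pipeline.respond(b"hello"))  # b'You said: hello'
```

Because each stage only depends on its protocol, replacing the local LLM stub with a cloud API client requires no change to the pipeline itself.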
This project is released under the MIT License. Contributions and usage are warmly welcomed.


