GFengG committed
Commit 68258c7 · verified · 1 Parent(s): b0f9bbd

Delete vocal_separator/README.md

Files changed (1): vocal_separator/README.md (+0, -222)

vocal_separator/README.md (DELETED; former contents below)

# LongCat-Video-Avatar

<div align="center">
  <img src="assets/longcat_logo.svg" width="45%" alt="LongCat-Video" />
</div>
<hr>

<div align="center" style="line-height: 1;">
  <a href='https://huggingface.co/meituan-longcat/LongCat-Video-Avatar'><img src='https://img.shields.io/badge/Project-Page-green'></a>
  <a href='https://huggingface.co/meituan-longcat/LongCat-Video-Avatar'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
  <a href='https://huggingface.co/meituan-longcat/LongCat-Video-Avatar'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a>
</div>

<div align="center" style="line-height: 1;">
  <a href='https://github.com/meituan-longcat/LongCat-Flash-Chat/blob/main/figures/wechat_official_accounts.png'><img src='https://img.shields.io/badge/WeChat-LongCat-brightgreen?logo=wechat&logoColor=white'></a>
  <a href='https://x.com/Meituan_LongCat'><img src='https://img.shields.io/badge/Twitter-LongCat-white?logo=x&logoColor=white'></a>
</div>

<div align="center" style="line-height: 1;">
  <a href='LICENSE'><img src='https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53'></a>
</div>

## 🚀 Model Introduction
We are excited to announce the release of LongCat-Video-Avatar, a unified model that delivers expressive and highly dynamic audio-driven character animation. It natively supports Audio-Text-to-Video, Audio-Text-Image-to-Video, and Video Continuation, with seamless compatibility for both single-stream and multi-stream audio inputs.

### Key Features
- 🌟 **Multiple Generation Modes**: A single unified model handles *audio-text-to-video (AT2V)* generation, *audio-text-image-to-video (ATI2V)* generation, and *video continuation*.
- 🌟 **Natural Human Dynamics**: Disentangled unconditional guidance effectively decouples speech signals from motion dynamics, yielding natural behavior.
- 🌟 **No Repetitive Content**: Reference skip attention strategically incorporates reference cues to preserve identity while preventing excessive conditional-image leakage.
- 🌟 **Reduced VAE Error Accumulation**: Cross-chunk latent stitching eliminates redundant VAE decode-encode cycles, reducing pixel degradation in long sequences.

For more details, please refer to the comprehensive [***LongCat-Video-Avatar Technical Report***](https://huggingface.co/meituan-longcat/LongCat-Video-Avatar).

<div align="center">
  <img src="assets/teaser.png" width="80%" alt="LongCat-Video" />
</div>

## 🌀 Preview Gallery
The following videos showcase example generations from our model.
<table align="center">
  <tr>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/singer1.mp4" type="video/mp4">
      </video>
    </td>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/singer2.mp4" type="video/mp4">
      </video>
    </td>
  </tr>
  <tr>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/actor1.mp4" type="video/mp4">
      </video>
    </td>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/postcad1.mp4" type="video/mp4">
      </video>
    </td>
  </tr>
  <tr>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/actor2.mp4" type="video/mp4">
      </video>
    </td>
    <td align="center">
      <video width="380" controls autoplay loop muted>
        <source src="assets/sale1.mp4" type="video/mp4">
      </video>
    </td>
  </tr>
</table>

## 📊 Human Evaluation
We conduct human evaluation of the naturalness and realism of the synthesized videos. The EvalTalker benchmark [1] contains more than 400 test samples at different difficulty levels for evaluating single- and multi-human video generation.
<div align="center">
  <img src="assets/human_eval.png" width="80%" alt="LongCat-Video-Avatar" />
</div>

<p style="font-size:0.9em; color:gray;">
Reference:<br>
[1] Zhou Y, Zhu X, Ren S, et al. EvalTalker: Learning to Evaluate Real-Portrait-Driven Multi-Subject Talking Humans. arXiv preprint arXiv:2512.01340, 2025.
</p>

## 💡 Quick Start
Clone the repo:

```shell
git clone --single-branch --branch main https://github.com/meituan-longcat/LongCat-Video
cd LongCat-Video
```

Install dependencies:

```shell
# create conda environment
conda create -n longcat-video python=3.10
conda activate longcat-video

# install torch (configure according to your CUDA version)
pip install torch==2.6.0+cu124 torchvision==0.21.0+cu124 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124

# install flash-attn-2
pip install ninja
pip install psutil
pip install packaging
pip install flash_attn==2.7.4.post1

# install other requirements
pip install -r requirements.txt

# install longcat-video-avatar requirements
pip install -r requirements_avatar.txt
conda install -c conda-forge librosa
conda install -c conda-forge ffmpeg
```
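
As an optional sanity check (generic commands, not part of the official setup), you can confirm that PyTorch sees your GPU and that flash-attn imports cleanly:

```shell
# Optional checks: both packages expose __version__, and torch.cuda.is_available()
# should print True on a correctly configured CUDA machine.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import flash_attn; print(flash_attn.__version__)"
```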

FlashAttention-2 is enabled in the model config by default; once installed, you can also switch to FlashAttention-3 or xformers by editing the model config (`./weights/*/dit/config.json`), as sketched below.
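
The exact name of the attention-backend field in `config.json` is not documented here, so the following is only a hedged sketch: inspect the config first, then edit whichever field it exposes (the key and value names below are hypothetical placeholders).

```shell
# Hedged sketch: key/value names are hypothetical placeholders, not the repo's documented schema.
# 1) Inspect the config to locate the attention-backend field:
python -m json.tool ./weights/LongCat-Video-Avatar/dit/config.json | grep -i attn
# 2) Edit that field in place; e.g. if it were "attn_backend": "flash_attn2":
#    sed -i 's/"flash_attn2"/"xformers"/' ./weights/LongCat-Video-Avatar/dit/config.json
```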

### ⛽️ Model Download

| Models | Description | Download Link |
| --- | --- | --- |
| LongCat-Video | Foundational video generation | 🤗 [Huggingface](https://huggingface.co/meituan-longcat/LongCat-Video) |
| LongCat-Video-Avatar-Single | Single-character audio-driven video generation | 🤗 [Huggingface](https://huggingface.co/meituan-longcat/LongCat-Video-Avatar) |
| LongCat-Video-Avatar-Multi | Multi-character audio-driven video generation | 🤗 [Huggingface](https://huggingface.co/meituan-longcat/LongCat-Video-Avatar) |

Download models using huggingface-cli:
```shell
pip install "huggingface_hub[cli]"
huggingface-cli download meituan-longcat/LongCat-Video --local-dir ./weights/LongCat-Video
huggingface-cli download meituan-longcat/LongCat-Video-Avatar --local-dir ./weights/LongCat-Video-Avatar
```
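
A quick optional check that the weights landed where the demo commands below expect them (a plain directory listing; nothing repo-specific is assumed):

```shell
# Verify the directories referenced later via --checkpoint_dir exist and are populated
ls ./weights/LongCat-Video ./weights/LongCat-Video-Avatar
```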

### 🔑 Quick Inference
Usage Tips
> - Lip synchronization accuracy: audio CFG works optimally between 3 and 5; increase the audio CFG value for better synchronization.
> - Prompt enhancement: include clear verbal-action cues (e.g., talking, speaking) in the prompt to achieve more natural lip movements.
> - Mitigating repeated actions: setting the reference image index (`--ref_img_index`, default 10) between 0 and 24 gives better consistency, while values outside that range (e.g., -10 or 30) help reduce repeated actions. Increasing the mask frame range (`--mask_frame_range`, default 3) can further mitigate repeated actions, but excessively large values may introduce artifacts.
> - Resolution: the model supports both 480P and 720P, controlled via `--resolution`. A combined example follows these tips.
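
As a hedged illustration, the tips above can be combined into a single command. Every flag except the `--resolution` value comes verbatim from the demo commands below; the `720P` spelling of the resolution value is an assumption.

```shell
# Sketch combining the usage tips; the 720P value spelling is assumed,
# all other flags appear verbatim in the demo commands below.
torchrun --nproc_per_node=2 run_demo_avatar_single_audio_to_video.py \
    --context_parallel_size=2 \
    --checkpoint_dir=./weights/LongCat-Video-Avatar \
    --stage_1=at2v \
    --input_json=assets/avatar/single_example_1.json \
    --ref_img_index=10 --mask_frame_range=3 --resolution=720P
```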
#### Single-Person Animation
```shell
# Audio-Text-to-Video
torchrun --nproc_per_node=2 run_demo_avatar_single_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --stage_1=at2v --input_json=assets/avatar/single_example_1.json

# Audio-Image-to-Video
torchrun --nproc_per_node=2 run_demo_avatar_single_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --stage_1=ai2v --input_json=assets/avatar/single_example_1.json

# Audio-Text-to-Video and Video-Continuation
torchrun --nproc_per_node=2 run_demo_avatar_single_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --stage_1=at2v --input_json=assets/avatar/single_example_1.json --num_segments=5 --ref_img_index=10 --mask_frame_range=3

# Audio-Image-to-Video and Video-Continuation
torchrun --nproc_per_node=2 run_demo_avatar_single_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --stage_1=ai2v --input_json=assets/avatar/single_example_1.json --num_segments=5 --ref_img_index=10 --mask_frame_range=3
```

#### Multi-Person Animation
```shell
# Audio-Image-to-Video
torchrun --nproc_per_node=2 run_demo_avatar_multi_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --input_json=assets/avatar/multi_example_1.json

# Audio-Image-to-Video and Video-Continuation
torchrun --nproc_per_node=2 run_demo_avatar_multi_audio_to_video.py --context_parallel_size=2 --checkpoint_dir=./weights/LongCat-Video-Avatar --input_json=assets/avatar/multi_example_1.json --num_segments=5 --ref_img_index=10 --mask_frame_range=3
```

## 📣 Community Works

Community works are welcome! Please open a PR, or let us know in an Issue, to add your work.

## ⚖️ License Agreement

The **model weights** are released under the **MIT License**.

Any contributions to this repository are licensed under the MIT License, unless otherwise stated. This license does not grant any rights to use Meituan trademarks or patents.

See the [LICENSE](LICENSE) file for the full license text.

## 🧠 Usage Considerations
This model has not been specifically designed or comprehensively evaluated for every possible downstream application.

Developers should take into account the known limitations of large generative models, including performance variations across different languages, and carefully assess accuracy, safety, and fairness before deploying the model in sensitive or high-risk scenarios.

It is the responsibility of developers and downstream users to understand and comply with all applicable laws and regulations relevant to their use case, including but not limited to data protection, privacy, and content safety requirements.

Nothing in this Model Card should be interpreted as altering or restricting the terms of the MIT License under which the model is released.
## 📖 Citation
We kindly encourage citation of our work if you find it useful.

```bibtex
@misc{meituanlongcatteam2025longcatvideoavatartechnicalreport,
      title={LongCat-Video-Avatar Technical Report},
      author={Meituan LongCat Team},
      year={2025},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={},
}
```

## 🙏 Acknowledgements

We would like to thank the contributors to the [Wan](https://huggingface.co/Wan-AI), [UMT5-XXL](https://huggingface.co/google/umt5-xxl), [Diffusers](https://github.com/huggingface/diffusers), and [HuggingFace](https://huggingface.co) repositories for their open research.

## 📞 Contact
Please contact us at <a href="mailto:[email protected]">[email protected]</a> or join our WeChat group if you have any questions.