hanzhn committed
Commit a79d020 · verified · 1 Parent(s): 6161fa7

Update README.md

Files changed (1): README.md (+187 -1)
README.md CHANGED
@@ -7,4 +7,190 @@ language:
- zh
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
---

<p align="center">

<h1 align="center">VACE: All-in-One Video Creation and Editing</h1>
<p align="center">
<strong>Zeyinzi Jiang<sup>*</sup></strong>
·
<strong>Zhen Han<sup>*</sup></strong>
·
<strong>Chaojie Mao<sup>*&dagger;</sup></strong>
·
<strong>Jingfeng Zhang</strong>
·
<strong>Yulin Pan</strong>
·
<strong>Yu Liu</strong>
<br>
<b>Tongyi Lab - <a href="https://github.com/Wan-Video/Wan2.1"><img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 20px;'></a> </b>
<br>
<br>
<a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/arXiv-VACE-red' alt='Paper PDF'></a>
<a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/Project_Page-VACE-green' alt='Project Page'></a>
<a href="https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview"><img src='https://img.shields.io/badge/Model-VACE-yellow'></a>
<a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-ModelScope-purple'></a>
<br>
</p>

## Introduction
<strong>VACE</strong> is an all-in-one model designed for video creation and editing. It encompasses various tasks, including reference-to-video generation (<strong>R2V</strong>), video-to-video editing (<strong>V2V</strong>), and masked video-to-video editing (<strong>MV2V</strong>), allowing users to compose these tasks freely. This enables users to explore diverse possibilities and effectively streamlines their workflows, offering capabilities such as Move-Anything, Swap-Anything, Reference-Anything, Expand-Anything, Animate-Anything, and more.

<img src='https://raw.githubusercontent.com/ali-vilab/VACE/refs/heads/main/assets/materials/teaser.jpg'>


## 🎉 News
- [x] Mar 31, 2025: 🔥[VACE-Wan2.1-1.3B-Preview](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) and [VACE-LTX-Video-0.9](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) models are now available on Hugging Face and [ModelScope](https://modelscope.cn/collections/VACE-8fa5fcfd386e43)!
- [x] Mar 31, 2025: 🔥Released the code for model inference, preprocessing, and Gradio demos.
- [x] Mar 11, 2025: We propose [VACE](https://ali-vilab.github.io/VACE-Page/), an all-in-one model for video creation and editing.

## 🪄 Models
| Models                   | Download Link                                                                                                                                                                         | Video Size (frames x height x width) | License                                                                                       |
|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------|-------------------------------------------------------------------------------------------------|
| VACE-Wan2.1-1.3B-Preview | [Huggingface](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-Wan2.1-1.3B-Preview) 🤖                                | ~ 81 x 480 x 832                     | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt)             |
| VACE-Wan2.1-1.3B         | [To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 480 x 832                     | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt)             |
| VACE-Wan2.1-14B          | [To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 720 x 1080                    | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE.txt)              |
| VACE-LTX-Video-0.9       | [Huggingface](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-LTX-Video-0.9) 🤖                                            | ~ 97 x 512 x 768                     | [RAIL-M](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt) |

- The input supports any resolution, but for optimal results the video size should fall within the ranges listed above.
- All models inherit the license of the original base model.

## ⚙️ Installation
The codebase was tested with Python 3.10.13, CUDA 12.4, and PyTorch >= 2.5.1.

### Setup for Model Inference
You can set up VACE model inference by running:
```bash
git clone https://github.com/ali-vilab/VACE.git && cd VACE
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu124  # If PyTorch is not installed.
pip install -r requirements.txt
pip install wan@git+https://github.com/Wan-Video/Wan2.1  # If you want to use Wan2.1-based VACE.
pip install ltx-video@git+https://github.com/Lightricks/[email protected] sentencepiece --no-deps  # If you want to use LTX-Video-0.9-based VACE. It may conflict with Wan.
```
Please download your preferred base model to `<repo-root>/models/`.
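For example, one way to fetch the preview checkpoint is via the `huggingface_hub` CLI (a hedged sketch; the target folder simply mirrors the directory layout shown below, and any released model from the table above can be substituted):
```bash
# Install the Hugging Face CLI if it is not already available.
pip install "huggingface_hub[cli]"
# Download the preview checkpoint into the local models/ folder.
huggingface-cli download ali-vilab/VACE-Wan2.1-1.3B-Preview --local-dir models/VACE-Wan2.1-1.3B-Preview
```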
### Setup for Preprocessing Tools
If you need the preprocessing tools, please install:
```bash
pip install -r requirements/annotator.txt
```
Please download [VACE-Annotators](https://huggingface.co/ali-vilab/VACE-Annotators) to `<repo-root>/models/`.
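For example, with the same `huggingface_hub` CLI as above (a sketch; ModelScope or a manual download works equally well):
```bash
# Fetch the preprocessing annotators into the local models/ folder.
huggingface-cli download ali-vilab/VACE-Annotators --local-dir models/VACE-Annotators
```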
### Local Directories Setup
We recommend downloading [VACE-Benchmark](https://huggingface.co/ali-vilab) to `<repo-root>/benchmarks/`, since it provides the examples used in `run_vace_xxx.sh`.

We recommend organizing local directories as:
```
VACE
├── ...
├── benchmarks
│   └── VACE-Benchmark
│       └── assets
│           └── examples
│               ├── animate_anything
│               │   └── ...
│               └── ...
├── models
│   ├── VACE-Annotators
│   │   └── ...
│   ├── VACE-LTX-Video-0.9
│   │   └── ...
│   └── VACE-Wan2.1-1.3B-Preview
│       └── ...
└── ...
```

## 🚀 Usage
In VACE, users can provide a **text prompt** and optional **video**, **mask**, and **image** inputs for video generation or editing.
Detailed instructions for using VACE can be found in the [User Guide](https://github.com/ali-vilab/VACE/blob/main/UserGuide.md).

### Inference CLI
#### 1) End-to-End Running
To run VACE without diving into implementation details, we suggest the end-to-end pipeline. For example:
```bash
# run V2V depth
python vace/vace_pipeline.py --base wan --task depth --video assets/videos/test.mp4 --prompt 'xxx'

# run MV2V inpainting by providing a bbox
python vace/vace_pipeline.py --base wan --task inpainting --mode bbox --bbox 50,50,550,700 --video assets/videos/test.mp4 --prompt 'xxx'
```
This script runs video preprocessing and model inference sequentially, so you need to specify all the required args of preprocessing (`--task`, `--mode`, `--bbox`, `--video`, etc.) and inference (`--prompt`, etc.).
The output video, together with the intermediate video, mask, and images, will be saved into `./results/` by default.

> 💡**Note**:
> Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main/run_vace_pipeline.sh) for usage examples of the different task pipelines.

#### 2) Preprocessing
For more flexible control over the inputs, user inputs need to be preprocessed into `src_video`, `src_mask`, and `src_ref_images` before VACE model inference.
Each [preprocessor](https://github.com/ali-vilab/VACE/blob/main/vace/configs/__init__.py) is assigned a task name, so simply call [`vace_preproccess.py`](https://github.com/ali-vilab/VACE/blob/main/vace/vace_preproccess.py) and specify the task name and task params. For example:
```bash
# process video depth
python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4

# process video inpainting by providing a bbox
python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 50,50,550,700 --video assets/videos/test.mp4
```
The outputs will be saved to `./proccessed/` by default.

> 💡**Note**:
> Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main/run_vace_pipeline.sh) for the preprocessing methods used by different tasks.
> Moreover, refer to [vace/configs/](https://github.com/ali-vilab/VACE/blob/main/vace/configs/) for all the pre-defined tasks and required params.
> You can also customize preprocessors by implementing them in [`annotators`](https://github.com/ali-vilab/VACE/blob/main/vace/annotators/__init__.py) and registering them in [`configs`](https://github.com/ali-vilab/VACE/blob/main/vace/configs).
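> As a rough, hypothetical illustration only (the class name, method signature, and frame format below are invented for this sketch and are not the actual VACE annotator interface), a frame-wise preprocessor might look like:

```python
# Hypothetical sketch of a custom preprocessor -- not the real VACE annotator API.
# A real implementation should follow vace/annotators/__init__.py and be
# registered under a new task name in vace/configs/.
import numpy as np

class GrayAnnotator:
    """Converts each RGB frame into a 3-channel grayscale control frame."""

    def __init__(self, cfg=None, device=None):
        self.cfg = cfg
        self.device = device

    def forward(self, frames):
        # frames: list of HxWx3 uint8 arrays; returns arrays of the same shape.
        out = []
        for frame in frames:
            gray = (0.299 * frame[..., 0]
                    + 0.587 * frame[..., 1]
                    + 0.114 * frame[..., 2]).astype(np.uint8)
            out.append(np.stack([gray] * 3, axis=-1))
        return out
```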
#### 3) Model Inference
Using the input data obtained from **Preprocessing**, model inference can be performed as follows:
```bash
# For Wan2.1 single-GPU inference
python vace/vace_wan_inference.py --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"

# For Wan2.1 multi-GPU accelerated inference
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 vace/vace_wan_inference.py --dit_fsdp --t5_fsdp --ulysses_size 1 --ring_size 8 --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"

# For LTX inference
python vace/vace_ltx_inference.py --ckpt_path <path-to-model> --text_encoder_path <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
```
The output video, together with the intermediate video, mask, and images, will be saved into `./results/` by default.

> 💡**Note**:
> (1) Please refer to [vace/vace_wan_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_ltx_inference.py) for the inference args.
> (2) For LTX-Video, and for English prompts with Wan2.1, prompt extension is needed to unlock the full model performance.
> Please follow the [instructions of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` when running inference.
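> For example, the single-GPU Wan2.1 command above simply gains the flag:

```bash
# Same arguments as the single-GPU example, plus prompt extension.
python vace/vace_wan_inference.py --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx" --use_prompt_extend
```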
### Inference Gradio
For preprocessors, run
```bash
python vace/gradios/preprocess_demo.py
```
For model inference, run
```bash
# For Wan2.1 gradio inference
python vace/gradios/vace_wan_demo.py

# For LTX gradio inference
python vace/gradios/vace_ltx_demo.py
```

## Acknowledgement

We are grateful to the following awesome projects: [Scepter](https://github.com/modelscope/scepter), [Wan](https://github.com/Wan-Video/Wan2.1), and [LTX-Video](https://github.com/Lightricks/LTX-Video).

## BibTeX

```bibtex
@article{vace,
    title   = {VACE: All-in-One Video Creation and Editing},
    author  = {Jiang, Zeyinzi and Han, Zhen and Mao, Chaojie and Zhang, Jingfeng and Pan, Yulin and Liu, Yu},
    journal = {arXiv preprint arXiv:2503.07598},
    year    = {2025}
}
```