Update README.md
README.md CHANGED

@@ -28,10 +28,10 @@ base_model:
 <b>Tongyi Lab - <a href="https://github.com/Wan-Video/Wan2.1"><img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 20px;'></a> </b>
 <br>
 <br>
-<a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/arXiv-
-<a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/Project_Page-
-<a href="https://huggingface.co/ali-vilab/
-<a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-
+<a href="https://arxiv.org/abs/2503.07598"><img src='https://img.shields.io/badge/VACE-arXiv-red' alt='Paper PDF'></a>
+<a href="https://ali-vilab.github.io/VACE-Page/"><img src='https://img.shields.io/badge/VACE-Project_Page-green' alt='Project Page'></a>
+<a href="https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38"><img src='https://img.shields.io/badge/VACE-HuggingFace_Model-yellow'></a>
+<a href="https://modelscope.cn/collections/VACE-8fa5fcfd386e43"><img src='https://img.shields.io/badge/VACE-ModelScope_Model-purple'></a>
 <br>
 </p>

@@ -43,7 +43,7 @@ base_model:
 ## 🎉 News
-- [x] Mar 31, 2025: 🔥
+- [x] Mar 31, 2025: 🔥VACE-Wan2.1-1.3B-Preview and VACE-LTX-Video-0.9 models are now available at [HuggingFace](https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38) and [ModelScope](https://modelscope.cn/collections/VACE-8fa5fcfd386e43)!
 - [x] Mar 31, 2025: 🔥Release code of model inference, preprocessing, and gradio demos.
 - [x] Mar 11, 2025: We propose [VACE](https://ali-vilab.github.io/VACE-Page/), an all-in-one model for video creation and editing.

@@ -82,7 +82,7 @@ pip install -r requirements/annotator.txt
 Please download [VACE-Annotators](https://huggingface.co/ali-vilab/VACE-Annotators) to `<repo-root>/models/`.

 ### Local Directories Setup
-It is recommended to download [VACE-Benchmark](https://huggingface.co/ali-vilab) to `<repo-root>/benchmarks/` as examples in `run_vace_xxx.sh`.
+It is recommended to download [VACE-Benchmark](https://huggingface.co/datasets/ali-vilab/VACE-Benchmark) to `<repo-root>/benchmarks/` as examples in `run_vace_xxx.sh`.

 We recommend to organize local directories as:
 ```angular2html

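As a side note on that layout, the sketch below shows one way the referenced repos could be pulled into the recommended `<repo-root>/models/` and `<repo-root>/benchmarks/` folders, assuming the stock `huggingface-cli` downloader; the repo IDs come from the links above, while the local target directories are illustrative assumptions rather than a layout the project mandates.

```bash
# Sketch under assumptions: fetch the annotator weights and the benchmark dataset
# into the recommended local layout. Repo IDs are taken from the README links;
# the --local-dir targets are illustrative.
pip install "huggingface_hub[cli]"
huggingface-cli download ali-vilab/VACE-Annotators --local-dir models/VACE-Annotators
huggingface-cli download --repo-type dataset ali-vilab/VACE-Benchmark --local-dir benchmarks/VACE-Benchmark
```
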
@@ -129,7 +129,7 @@ The output video together with intermediate video, mask and images will be saved
 #### 2) Preprocessing
 To have more flexible control over the input, before VACE model inference, user inputs need to be preprocessed into `src_video`, `src_mask`, and `src_ref_images` first.
-We assign each [preprocessor](https://
+We assign each [preprocessor](https://raw.githubusercontent.com/ali-vilab/VACE/refs/heads/main/vace/configs/__init__.py) a task name, so simply call [`vace_preprocess.py`](https://raw.githubusercontent.com/ali-vilab/VACE/refs/heads/main/vace/vace_preproccess.py) and specify the task name and task params. For example:
 ```angular2html
 # process video depth
 python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4

@@ -140,7 +140,7 @@ python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 50,50,550,7
 The outputs will be saved to `./proccessed/` by default.

 > 💡**Note**:
-> Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main
+> Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main/run_vace_pipeline.sh) preprocessing methods for different tasks.
 Moreover, refer to [vace/configs/](https://github.com/ali-vilab/VACE/blob/main/vace/configs/) for all the pre-defined tasks and required params.
 You can also customize preprocessors by implementing at [`annotators`](https://github.com/ali-vilab/VACE/blob/main/vace/annotators/__init__.py) and register them at [`configs`](https://github.com/ali-vilab/VACE/blob/main/vace/configs).

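As a usage aside (not part of this commit), other tasks reuse the same entry point with their own params: the `inpainting` task with `--mode bbox` appears in the hunk header above, while the coordinates and the trailing `--video` argument in the sketch below are placeholder assumptions.

```bash
# Sketch under assumptions: same CLI, different task name and task params.
# --task inpainting and --mode bbox appear in the repository's own examples;
# the bbox coordinates and the --video argument here are placeholders.
python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4
python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 0,0,640,360 --video assets/videos/test.mp4
# Outputs (src_video, src_mask, src_ref_images where applicable) land in ./proccessed/ by default.
```
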
@@ -161,7 +161,7 @@ python vace/vace_ltx_inference.py --ckpt_path <path-to-model> --text_encoder_pat
 The output video together with intermediate video, mask and images will be saved into `./results/` by default.

 > 💡**Note**:
-> (1) Please refer to [vace/vace_wan_inference.
+> (1) Please refer to [vace/vace_wan_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_ltx_inference.py) for the inference args.
 > (2) For LTX-Video and English language Wan2.1 users, you need prompt extension to unlock the full model performance.
 Please follow the [instruction of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` while running inference.

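Finally, a rough sketch of the prompt-extension note above: only the script path and the `--use_prompt_extend` flag are taken from the README; the remaining arguments (model paths, preprocessed inputs, prompt) depend on the inference script's own argument list referenced in note (1).

```bash
# Sketch under assumptions: switch on prompt extension for Wan2.1-based inference.
# Supply the model/input/prompt arguments required by vace_wan_inference.py alongside it.
python vace/vace_wan_inference.py --use_prompt_extend
```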