---
license: mit
task_categories:
- image-to-video
tags:
- video-generation
- motion-control
- point-trajectory
---

# MoveBench of Wan-Move

[Paper](https://arxiv.org/abs/2512.08765)
[Code](https://github.com/ali-vilab/Wan-Move)
[Model (Hugging Face)](https://huggingface.co/Ruihang/Wan-Move-14B-480P)
[Model (ModelScope)](https://modelscope.cn/models/churuihang/Wan-Move-14B-480P)
[Dataset](https://huggingface.co/datasets/Ruihang/MoveBench)
[Demo Video](https://www.youtube.com/watch?v=_5Cy7Z2NQJQ)
[Project Page](https://wan-move.github.io/)

## MoveBench: A Comprehensive and Well-Curated Benchmark to Assess Motion Control in Videos

MoveBench evaluates fine-grained, point-level motion control in generated videos. We categorize the video library from [Pexels](https://www.pexels.com/videos/) into 54 content categories with 10-25 videos each, yielding 1,018 cases that ensure broad scenario coverage. All video clips have a 5-second duration to facilitate the evaluation of long-range dynamics. Every clip is paired with detailed motion annotations for a single object, and an additional 192 clips carry motion annotations for multiple objects. We ensure annotation quality through an interactive labeling pipeline that marries annotation precision with automated scalability.

Everyone is welcome to use it!

## Statistics

<p align="center" style="border-radius: 10px">
<img src="assets/construction.png" width="100%" alt="The construction pipeline of MoveBench"/>
<strong>The construction pipeline of MoveBench</strong>
</p>

<p align="center" style="border-radius: 10px">
<img src="assets/statistics_1.png" width="100%" alt="Balanced sample number per video category"/>
<strong>Balanced sample number per video category</strong>
</p>

<p align="center" style="border-radius: 10px">
<img src="assets/statistics_2.png" width="100%" alt="Comparison with related benchmarks"/>
<strong>Comparison with related benchmarks</strong>
</p>

## Download

Download MoveBench from Hugging Face:

```sh
huggingface-cli download Ruihang/MoveBench --local-dir ./MoveBench --repo-type dataset
```
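
If you prefer a programmatic download, the same fetch can be done with the `huggingface_hub` Python API. The snippet below is a minimal sketch that mirrors the CLI command above, assuming `huggingface_hub` is installed:

```python
# Minimal sketch: programmatic equivalent of the CLI download above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Ruihang/MoveBench",
    repo_type="dataset",
    local_dir="./MoveBench",
)
```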

Then extract the downloaded archives:

```sh
tar -xzvf en.tar.gz
tar -xzvf zh.tar.gz
```

The file structure will be:

```
MoveBench
├── en                          # English version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   └── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   └── ...
│   └── track
│       ├── single
│       │   ├── Pexels_videoid_0_tracks.npy
│       │   ├── Pexels_videoid_0_visibility.npy
│       │   └── ...
│       └── multi
│           ├── Pexels_videoid_0_tracks.npy
│           ├── Pexels_videoid_0_visibility.npy
│           └── ...
├── zh                          # Chinese version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   └── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   └── ...
│   └── track
│       ├── single
│       │   ├── Pexels_videoid_0_tracks.npy
│       │   ├── Pexels_videoid_0_visibility.npy
│       │   └── ...
│       └── multi
│           ├── Pexels_videoid_0_tracks.npy
│           ├── Pexels_videoid_0_visibility.npy
│           └── ...
├── bench.py                    # Evaluation script
└── utils                       # Evaluation code modules
```
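
Each case's track annotation comes as a pair of NumPy arrays; judging by the file names, `*_tracks.npy` holds the point trajectories and `*_visibility.npy` the corresponding visibility flags. The snippet below is a rough sketch for inspecting them; the exact array shapes and layout are an assumption here, so verify them against the [Wan-Move](https://github.com/ali-vilab/Wan-Move) evaluation code:

```python
# Sketch: load and inspect one case's point-track annotations.
# Assumption: tracks store per-frame (x, y) point coordinates and
# visibility stores per-frame, per-point visibility flags; check the
# exact layout against the Wan-Move codebase before relying on it.
import numpy as np

tracks = np.load("MoveBench/en/track/single/Pexels_videoid_0_tracks.npy")
visibility = np.load("MoveBench/en/track/single/Pexels_videoid_0_visibility.npy")

print("tracks:", tracks.shape, tracks.dtype)          # e.g. (num_frames, num_points, 2)
print("visibility:", visibility.shape, visibility.dtype)
```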

For evaluation, please refer to the [Wan-Move](https://github.com/ali-vilab/Wan-Move) codebase. Enjoy!

## Citation

If you find our work helpful, please cite us.

```bibtex
@article{chu2025wan,
  title={Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance},
  author={Ruihang Chu and Yefei He and Zhekai Chen and Shiwei Zhang and Xiaogang Xu and Bin Xia and Dingdong Wang and Hongwei Yi and Xihui Liu and Hengshuang Zhao and Yu Liu and Yingya Zhang and Yujiu Yang},
  year={2025},
  eprint={2512.08765},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

## Contact Us

If you would like to leave a message for our research team, feel free to drop us an [email]([email protected]).