3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models
Min Wei, Chaohui Yu, Jingkai Zhou, and Fan Wang. 2025. 3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models. In Proceedings of the 33rd ACM International Conference on Multimedia (MM ’25), October 27–31, 2025, Dublin, Ireland. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3746027.3754754
Installation
git clone https://github.com/2y7c3/3DV-TON.git
cd 3DV-TON
pip install -r requirements.txt
cd preprocess/model/DensePose/detectron2/projects/DensePose
pip install -e .
## install GVHMR
## see https://github.com/zju3dv/GVHMR/blob/main/docs/INSTALL.md
## replace GVHMR/hmr4d/utils/vis/renderer.py with our preprocess/renderer.py
Weights
Download the Stable Diffusion, motion module, VAE, and 3DV-TON model weights into ./ckpts.
Download the cloth masker weights into ./preprocess/ckpts. You can then use the cloth masker to generate agnostic-mask videos for improved try-on results.
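The download instructions above imply a checkpoint layout like the following. The subdirectory names are assumptions inferred from the paths in the text, not the actual release structure, so match them against the released files:

```shell
# Create the checkpoint directories the paths above refer to.
# (All subdirectory names below are assumptions, not the real release layout.)
mkdir -p ckpts preprocess/ckpts

# A plausible layout after downloading:
# ckpts/
#   stable-diffusion/    <- base Stable Diffusion weights (name assumed)
#   motion_module/       <- motion module weights (name assumed)
#   vae/                 <- VAE weights (name assumed)
#   3dv-ton/             <- released 3DV-TON model (name assumed)
# preprocess/ckpts/
#   cloth_masker/        <- cloth masker weights (name assumed)
ls -d ckpts preprocess/ckpts
```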
Inference
We provide three demo examples in ./demos/; run the following command to test them.
python infer.py --config ./configs/inference/demo_test.yaml
Or you can prepare your own example by following the steps below.
# 1. generate agnostic mask (type: 'upper', 'lower', 'overall')
cd preprocess
python seg_mask.py --input demos/videos/video.mp4 --output demos/ --type overall
# 2. use GVHMR to generate SMPL video
# 3. use image tryon model to generate tryon image (e.g. CaTVTON)
# 4. generate textured 3d mesh
# 5. modify demo_test.yaml, then run
python infer.py --config ./configs/inference/demo_test.yaml
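Step 5 wires the assets from steps 1 through 4 into the inference config. The sketch below is hypothetical: the field names are illustrative assumptions, not the actual schema, so check the shipped ./configs/inference/demo_test.yaml for the real keys:

```yaml
# Hypothetical demo_test.yaml fields -- names are illustrative, not the real schema.
video_path: demos/videos/video.mp4    # source person video
mask_path: demos/masks/video.mp4      # agnostic mask video from step 1 (path assumed)
smpl_path: demos/smpl/video.mp4       # SMPL guidance video from GVHMR, step 2 (path assumed)
tryon_image: demos/images/tryon.png   # try-on image from step 3, e.g. CatVTON (path assumed)
mesh_path: demos/mesh/garment.obj     # textured 3D mesh from step 4 (path assumed)
output_dir: outputs/
```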
BibTeX
@article{wei20253dv,
  title={{3DV-TON}: Textured {3D}-Guided Consistent Video Try-on via Diffusion Models},
  author={Wei, Min and Yu, Chaohui and Zhou, Jingkai and Wang, Fan},
  journal={arXiv preprint arXiv:2504.17414},
  year={2025}
}