
AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement

Zhizhou Zhong · Yicheng Ji · Zhe Kong · Yiying Liu* · Jiarui Wang · Jiasun Feng · Lupeng Liu · Xiangyi Wang · Yanjia Li · Yuqing She · Ying Qin · Huan Li

Shuiyang Mao · Wei Liu · Wenhan Luo

*Project Leader · Corresponding Author

HF space 

TL;DR: AnyTalker is an audio-driven framework for generating multi-person talking videos. It features a flexible multi-stream structure to scale identities while ensuring seamless inter-identity interactions.

Video Demos (Generated with the 1.3B model; 14B results here)

Input Image | Generated Video

🔥 Latest News

🔥 Nov 28, 2025: We release the AnyTalker weights, inference code, and project page.

📖 Dec 1, 2025: We release the technical report.

📑 Todo List

  • Inference code
  • 1.3B stage 1 checkpoint (trained exclusively on single-person data)
  • Benchmark for evaluating Interactivity
  • Technical report
  • 14B model (coming soon to Video Rebirth's creation platform)

Quick Start

🛠️Installation

1. Create a conda environment and install PyTorch

conda create -n AnyTalker python=3.10
conda activate AnyTalker 
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126

2. Other dependencies

pip install -r requirements.txt

3. Flash-attn installation:

pip install ninja 
pip install flash_attn==2.8.1 --no-build-isolation

4. FFmpeg installation

You need an FFmpeg build with x264 (libx264) support to encode H.264 videos. Depending on your environment, you can install it via one of the following commands:

# Ubuntu / Debian
apt-get install ffmpeg

or

# CentOS / RHEL
yum install ffmpeg ffmpeg-devel

or

# Conda (no root required) 
conda install -c conda-forge ffmpeg

⚠️ Note: If you install FFmpeg via conda and encounter the error Unknown encoder 'libx264', or if the following command does not list libx264:

ffmpeg -encoders | grep libx264

you can install a specific conda-forge build that includes libx264 support:

conda install -c conda-forge ffmpeg=7.1.0

Reference: bytedance/LatentSync#60
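
After finishing the steps above, a quick sanity check can confirm that PyTorch sees your GPU, flash-attn imports correctly, and FFmpeg was built with libx264. This is an optional convenience sketch, not part of the repository:

# Optional sanity check for the environment set up above (not part of AnyTalker).
import shutil
import subprocess

import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    import flash_attn
    print("flash_attn:", flash_attn.__version__)
except ImportError as err:
    print("flash_attn not importable:", err)

ffmpeg = shutil.which("ffmpeg")
if ffmpeg is None:
    print("ffmpeg not found on PATH")
else:
    encoders = subprocess.run([ffmpeg, "-encoders"], capture_output=True, text=True).stdout
    print("libx264 encoder:", "found" if "libx264" in encoders else "missing")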

🧱Model Preparation

Models | Download Link | Notes
Wan2.1-Fun-V1.1-1.3B-InP | 🤗 Huggingface | Base model
wav2vec2-base | 🤗 Huggingface | Audio encoder
AnyTalker-1.3B | 🤗 Huggingface | Our weights

Download models using huggingface-cli:

# curl -LsSf https://hf.co/cli/install.sh | bash
hf download alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP --local-dir ./checkpoints/Wan2.1-Fun-1.3B-Inp
hf download facebook/wav2vec2-base-960h --local-dir ./checkpoints/wav2vec2-base-960h
hf download zzz66/AnyTalker-1.3B --local-dir ./checkpoints/AnyTalker
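
If you prefer downloading from Python instead of the CLI, the same repositories can be fetched with huggingface_hub; this is a minimal equivalent of the commands above:

# Python alternative to the hf CLI commands above.
from huggingface_hub import snapshot_download

for repo_id, local_dir in [
    ("alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP", "./checkpoints/Wan2.1-Fun-1.3B-Inp"),
    ("facebook/wav2vec2-base-960h", "./checkpoints/wav2vec2-base-960h"),
    ("zzz66/AnyTalker-1.3B", "./checkpoints/AnyTalker"),
]:
    snapshot_download(repo_id=repo_id, local_dir=local_dir)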

The directory should be organized as follows:

checkpoints/
├── Wan2.1-Fun-1.3B-Inp
├── wav2vec2-base-960h
└── AnyTalker

🔑 Quick Inference

The provided script currently performs 480p inference on a single GPU and automatically switches between single-person and multi-person generation modes according to the number of audio tracks in the input list.

#!/bin/bash
export CUDA_VISIBLE_DEVICES=0
python generate_a2v_batch_multiID.py \
        --ckpt_dir="./checkpoints/Wan2.1-Fun-1.3B-Inp" \
        --task="a2v-1.3B" \
        --size="832*480" \
        --batch_gen_json="./input_example/customize_your_input_here.json" \
        --batch_output="./outputs" \
        --post_trained_checkpoint_path="./checkpoints/AnyTalker/1_3B-single-v1.pth" \
        --sample_fps=24 \
        --sample_guide_scale=4.5 \
        --offload_model=True \
        --base_seed=44 \
        --dit_config="./checkpoints/AnyTalker/config_af2v_1_3B.json" \
        --det_thresh=0.15 \
        --mode="pad" \
        --use_half=True 

or

sh infer_a2v_1_3B_batch.sh
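
The schema of --batch_gen_json is defined by input_example/customize_your_input_here.json in the repository. Purely as an illustration of the mode switch described above (the field names audio_paths and ref_image below are hypothetical, not the file's actual keys), the decision looks roughly like this:

# Hypothetical sketch of the single- vs multi-person switch; the real JSON keys
# come from input_example/customize_your_input_here.json and may differ.
# Assumes the file holds a list of samples.
import json

with open("./input_example/customize_your_input_here.json") as f:
    samples = json.load(f)

for sample in samples:
    audio_list = sample.get("audio_paths", [])  # hypothetical key
    mode = "multi-person" if len(audio_list) > 1 else "single-person"
    print(sample.get("ref_image", "?"), "->", len(audio_list), "audio track(s),", mode)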

Descriptions of some hyperparameters

--offload_model: Whether to offload the model to CPU after each forward pass, reducing GPU memory usage.
--det_thresh: Detection threshold for the InsightFace model; a lower value improves face detection on abstract-style images.
--sample_guide_scale: Recommended value is 4.5; applied to both text and audio guidance.
--mode: Select "pad" if every audio track has already been zero-padded to a common length; select "concat" if you instead want the script to chain each speaker's clips together and then zero-pad the non-speaking segments to reach a uniform length.
--use_half: Whether to enable half-precision (FP16) inference for faster generation.


Illustration of “pad” mode for audio inputs.
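
As a rough sketch of what "pad" mode expects (assuming mono WAV tracks and using numpy/soundfile, which are not necessarily the repository's own preprocessing tools), each speaker's clip is placed at its start time inside a silent full-length track, so every input covers the same timeline with silence wherever that speaker is not talking:

# Rough sketch of preparing "pad"-mode inputs; file names, timeline length,
# and start offsets are hypothetical, and each clip is assumed to fit the timeline.
import numpy as np
import soundfile as sf

total_sec = 10.0                                                  # full timeline length (s)
clips = [("speaker_left.wav", 0.0), ("speaker_right.wav", 5.0)]   # (file, start time in s)

for path, start in clips:
    samples, sr = sf.read(path)                                   # assumes mono audio
    padded = np.zeros(int(total_sec * sr), dtype=samples.dtype)
    begin = int(start * sr)
    padded[begin : begin + len(samples)] = samples                # silence everywhere else
    sf.write(path.replace(".wav", "_padded.wav"), padded, sr)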

Benchmark

We provide the benchmark used in our paper to evaluate Interactivity, including the dataset and the metric computation script.

Download the Dataset from YouTube

1. Install yt-dlp

python -m pip install -U yt-dlp

2. Run the download script

cd ./benchmark
python download.py

The directory should be organized as follows:

benchmark/
├── audio_left            # Audio for left speaker (zero-padded to full length)
├── audio_right           # Audio for right speaker (zero-padded to full length)
├── speaker_duration.json # Start/end timestamps for each speaker
├── interact_11.mp4       # Example video 
└── frames                # Reference image supplied as the first video frame

Interactivity evaluation

# single video
python calculate_interactivity.py --video interact_11.mp4

# entire directory
python calculate_interactivity.py --dir ./your_dir

The script prints the Interactivity score defined in the paper. Note: generated videos must keep the exact same names listed in speaker_duration.json.
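
Because scoring matches videos by file name, a small check like the one below can catch mismatches before running the evaluation. It is a convenience sketch, not part of the benchmark scripts, and it assumes speaker_duration.json is keyed by video file name:

# Convenience sketch: compare generated video names against speaker_duration.json
# (assumed here to be keyed by video file name).
import json
import os

with open("./benchmark/speaker_duration.json") as f:
    expected = set(json.load(f).keys())

generated = {name for name in os.listdir("./your_dir") if name.endswith(".mp4")}

print("missing:", sorted(expected - generated))
print("unexpected:", sorted(generated - expected))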

📚 Citation

If you find our work useful in your research, please consider citing:

@article{zhong2025anytalker,
    title={AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement},
    author={Zhong, Zhizhou and Ji, Yicheng and Kong, Zhe and Liu, Yiying and Wang, Jiarui and Feng, Jiasun and Liu, Lupeng and Wang, Xiangyi and Li, Yanjia and She, Yuqing and Qin, Ying and Li, Huan and Mao, Shuiyang and Liu, Wei and Luo, Wenhan},
    journal={arXiv preprint},
    year={2025}
}

📜 License

The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content; you are free to use it, provided your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations.
