# AnyTalker requirements.txt
torch==2.6.0
torchvision==0.21.0
torchaudio==2.6.0
opencv-python==4.11.0.86
diffusers==0.34.0
tokenizers==0.21.4
accelerate==1.10.0
tqdm==4.67.1
imageio==2.37.0
easydict==1.13
ftfy==6.3.1
dashscope==1.24.1
imageio-ffmpeg==0.6.0
numpy==1.26.4
lightning==2.5.2
xfuser==0.4.4
yunchang==0.6.3.post1
moviepy==2.1.2
omegaconf==2.3.0
decord==0.6.0
ffmpeg-python==0.2.0
librosa==0.11.0
audio-separator==0.30.2
onnxruntime-gpu==1.22.0
insightface==0.7.3
transformers==4.52.0
huggingface_hub
ninja
# flash_attn precompiled wheel (torch 2.6 + CUDA 12 + Python 3.10)
# Reference: https://huggingface.co/spaces/fffiloni/Meigen-MultiTalk/blob/main/requirements.txt
https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl