zzz66 committed
Commit bd2c73d · 1 Parent(s): e7afc0e

Remove flash_attn (no precompiled wheel for torch 2.6)

Files changed (1)
  1. requirements.txt +2 -2
requirements.txt CHANGED
@@ -27,5 +27,5 @@ insightface==0.7.3
  transformers==4.52.0
  huggingface_hub
  ninja
- # Use a precompiled flash_attn wheel (torch 2.6.0 + CUDA 12.2 + Python 3.10)
- https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.1/flash_attn-2.8.1+cu122torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
+ # flash_attn temporarily removed: no precompiled build for torch 2.6
+ # the model will use the default attention implementation
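For reference, a minimal sketch (not part of this commit) of how the attention backend can be selected explicitly when loading a model with transformers, so the fallback after removing flash_attn is visible in code. The model id is a placeholder, and the exact default backend transformers picks depends on the model and environment; "sdpa" is PyTorch's built-in scaled-dot-product attention, which does not require the flash_attn package.

# Sketch only: choosing the attention implementation at load time.
# "some-org/some-model" is a placeholder, not this repo's actual model.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # built-in PyTorch attention; works without flash_attn
    # attn_implementation="flash_attention_2",  # would require the flash_attn package
)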