Releases · Dao-AILab/flash-attention
v2.5.9.post1
Limit build parallelism to MAX_JOBS=1 when compiling with CUDA 12.2
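A minimal sketch of what this note implies for users building from source: capping MAX_JOBS limits how many compilation jobs run in parallel, which keeps memory usage down on CUDA 12.2 builds. The use of pip with --no-build-isolation follows the project's documented source-install flow; the exact invocation below is an illustration, not taken from the release notes.

```python
# Sketch: install flash-attn from source with build parallelism capped.
# MAX_JOBS=1 mirrors the v2.5.9.post1 note for CUDA 12.2; adjust upward
# if your machine has enough RAM for more parallel compile jobs.
import os
import subprocess
import sys

env = dict(os.environ, MAX_JOBS="1")  # limit parallel compile jobs during the build
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "flash-attn", "--no-build-isolation"],
    env=env,
)
```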
v2.5.9
Bump to v2.5.9
v2.5.8
Bump to v2.5.8
v2.5.7
Bump to v2.5.7
v2.5.6
Bump to v2.5.6
v2.5.5
Bump to v2.5.5
v2.5.4
Bump to v2.5.4
v2.5.3
Bump to v2.5.3
v2.5.2
Bump to v2.5.2
v2.5.1.post1
[CI] Install torch 2.3 using index