Issues
ImportError: /home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
#975 opened by zzc0208 - 1
Does the NVIDIA L40 support flash-attention2?
#999 opened by qinsang - 0
Has anyone successfully used flash_attn on Jetson?
#986 opened by cthulhu-tww - 2
[BUG] Returning a pointer targeting a local variable
#997 opened by FC-Li - 1
[training] `python run.py` raises `ImportError: cannot import name 'GPTBigCodeConfig' from 'transformers'`
#996 opened by yumemio - 7
CUDA 12.1 build takes an extremely long time, still compiling after about 2 hours
#968 opened by MonolithFoundation - 6
Error Installing FlashAttention on Windows 11 with CUDA 11.8 - "CUDA_HOME environment variable is not set"
#982 opened by Mr-Natural - 4
[bug] build is very slow
#945 opened by wongdi - 4
Error when running flash_attn_func
#994 opened by obhalerao97 - 4
I have flash attention installed, but I get ImportError: Flash Attention 2.0 is not available.
#990 opened by luisegehaijing - 4
[QST] Question about the Dropout
#993 opened by flytigerw - 0
ImportError: libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory
#992 opened by jxxtin - 0
Error in Algorithm 1 of Flash Attention 2 paper
#991 opened by mbchang - 1
Deterministic Discussion
#988 opened by YizhouZ - 3
Does flash attention support FP8?
#985 opened by Godlovecui - 2
Which dropout type is used for flash attention?
#987 opened by Avelina9X - 7
ImportError: flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi
#966 opened by foreverpiano - 2
page not found in setup.py
#979 opened by kishida - 1
Apple Silicon Support
#977 opened by chigkim - 1
Fatal error
#972 opened by ByungKwanLee - 2
build failed under miniconda3
#943 opened by rkuo2000 - 4
DropoutAddRMSNorm using triton backend
#973 opened by fferroni - 2
Test errors after installing flash_attn
#971 opened by zhangfan-algo - 4
How to install flash_attn with torch==2.1.0
#969 opened by foreverpiano - 1
Controlling stride of local attention window
#967 opened by EomSooHwan - 1
Does the new flash-attention support ROCm?
#965 opened by JiahuaZhao - 1
Need a flash_attn-2.5.2+cu122torch2.2.0cxx11abiFALSE-cp312-cp312-win_amd64.whl wheel
#959 opened by wallfacers - 2
ModuleNotFoundError: No module named 'einops'
#961 opened by zhangfan-algo - 0
I successfully compiled flash_attn on Windows with CUDA 12.1.1 for Python 3.11
#962 opened by cyysky - 1
Three-dimensional local attention
#947 opened by JohannesGaessler - 3
Feature request: make build PEP 517 compatible
#955 opened by wjn0 - 1
Allow causal mask alignment configuration
#951 opened by joncarter1 - 1
Can we build flash-attn with torch 2.3?
#954 opened by rangehow - 0
H20 compatibility
#953 opened by chk4991 - 2
flash decoding algorithm numerical error
#949 opened by hanzz2007 - 0
Cannot install
#948 opened by waldolin - 1
Does flash attention support the RTX 8000?
#944 opened by heart18z - 2
flash-attn error flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
#940 opened by lainxx - 1
Relative positions
#946 opened by Sunnikickback - 23
Hello, how can I add Tokens/gpu/s and TFLOPS output to the logs?
#942 opened by Liuweixiong0118 - 1
Does it support Swin Transformer?
#939 opened by Doctor-James - 1