Pinned Repositories
flash-attention
Fast and memory-efficient exact attention
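A minimal sketch of calling FlashAttention through the flash-attn Python package; the tensor shapes and the causal flag here are illustrative assumptions, and a CUDA GPU with fp16/bf16 support is required.

```python
# Sketch: exact attention via flash-attn, assuming `pip install flash-attn`
# and a CUDA device. Inputs use the (batch, seqlen, num_heads, head_dim)
# layout the package expects, in half precision.
import torch
from flash_attn import flash_attn_func

batch, seqlen, heads, head_dim = 2, 1024, 8, 64  # illustrative sizes
q = torch.randn(batch, seqlen, heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact (not approximate) attention, computed without materializing the
# full (seqlen x seqlen) score matrix in GPU memory.
out = flash_attn_func(q, k, v, causal=True)  # output has the same shape as q
```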
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
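A minimal sketch of TransformerEngine's PyTorch API with FP8 enabled; the layer sizes and recipe settings are illustrative assumptions, and an FP8-capable GPU (Hopper or Ada) plus an installed transformer_engine package are required.

```python
# Sketch: running a linear layer in FP8 with TransformerEngine, assuming
# an FP8-capable NVIDIA GPU (e.g. Hopper/Ada).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# te.Linear is a drop-in replacement for torch.nn.Linear with FP8 support.
layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(16, 1024, device="cuda")

# DelayedScaling maintains per-tensor scaling factors from amax history,
# which is what keeps FP8's narrow dynamic range usable for training.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)  # forward GEMM runs in FP8

y.sum().backward()  # backward is invoked outside the autocast context
```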
CS153
RV32I
yangguoming.github.io