resorcap's Repositories
resorcap/bert
TensorFlow code and pre-trained models for BERT
resorcap/BlindWatermark
Using a blind (invisible) watermark to protect creators' intellectual property
resorcap/cutlass
CUDA Templates for Linear Algebra Subroutines
resorcap/cutlass_fpA_intB_gemm
A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer
resorcap/DejaVu
resorcap/FasterTransformer
Transformer-related optimizations, including BERT and GPT
resorcap/flash-attention
Fast and memory-efficient exact attention
resorcap/googletest
GoogleTest - Google Testing and Mocking Framework
resorcap/Megatron-LM
Ongoing research training transformer models at scale
resorcap/serving
A flexible, high-performance serving system for machine learning models
resorcap/sparsegpt
Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
resorcap/Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
resorcap/ViT-pytorch
PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")