Pinned Repositories
ALMA-en2ko
This is a repository for ALMA translation models.
AutoQuant
GPTQ-for-KoAlpaca
GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
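GPTQ itself quantizes each layer by minimizing a second-order (Hessian-weighted) reconstruction error, but the 4-bit storage format it targets is the same as plain round-to-nearest. A minimal sketch of that quantize/dequantize step (an illustration only, not the GPTQ solver) might look like:

```python
# Simplified 4-bit round-to-nearest (RTN) quantize/dequantize for one
# weight group. Illustrates the 4-bit integer format GPTQ targets, not
# GPTQ's Hessian-weighted rounding itself.

def quantize_4bit(weights):
    """Map floats to integer codes in [0, 15] plus (scale, zero_point)."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 15 or 1.0  # 2^4 - 1 levels; avoid div-by-zero
    zero_point = round(-w_min / scale)
    codes = [max(0, min(15, round(w / scale) + zero_point)) for w in weights]
    return codes, scale, zero_point

def dequantize_4bit(codes, scale, zero_point):
    """Reconstruct approximate float weights from the 4-bit codes."""
    return [(c - zero_point) * scale for c in codes]

weights = [-0.8, -0.1, 0.0, 0.3, 0.75]
codes, scale, zp = quantize_4bit(weights)
recovered = dequantize_4bit(codes, scale, zp)
```

Each reconstructed weight lands within one quantization step of the original; GPTQ improves on this by choosing the rounding direction per weight to compensate accumulated error.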
gptqlora
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
lama-with-maskdino
Automatic image inpainting using LaMa (with refinement) and MaskDINO
llama-danbooru-qlora
MaxVIT-pytorch
Unofficial implementation of MaxViT (MaxViT: Multi-Axis Vision Transformer). https://arxiv.org/abs/2204.01697
SoftPool
Unofficial implementation of SoftPool (Refining activation downsampling with SoftPool). https://arxiv.org/pdf/2101.00440v2.pdf
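Per the paper, SoftPool weights each activation in a pooling window by its softmax weight, so the output always lies between average pooling and max pooling. A minimal 1D sketch of the operation (the repo implements the 2D version) might be:

```python
import math

def softpool_1d(x, kernel_size=2, stride=2):
    """SoftPool: each window outputs sum(softmax(a) * a), i.e. activations
    weighted by exp(a_i) / sum_j exp(a_j) within the window."""
    out = []
    for start in range(0, len(x) - kernel_size + 1, stride):
        window = x[start:start + kernel_size]
        exps = [math.exp(a) for a in window]
        denom = sum(exps)
        out.append(sum(e * a for e, a in zip(exps, window)) / denom)
    return out

pooled = softpool_1d([1.0, 3.0, 2.0, 0.0])
# each output lies between its window's average and its max
```

Because the weights are a softmax over the window, larger activations dominate smoothly, and unlike max pooling the operation stays differentiable with respect to every input in the window.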
stable-diffusion-webui-promptgen-danbooru
stable-diffusion-webui-promptgen
qwopqwop200's Repositories
qwopqwop200/GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
qwopqwop200/gptqlora
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
qwopqwop200/lama-with-maskdino
Automatic image inpainting using LaMa (with refinement) and MaskDINO
qwopqwop200/stable-diffusion-webui-promptgen-danbooru
stable-diffusion-webui-promptgen
qwopqwop200/GPTQ-for-KoAlpaca
qwopqwop200/MaxVIT-pytorch
Unofficial implementation of MaxViT (MaxViT: Multi-Axis Vision Transformer). https://arxiv.org/abs/2204.01697
qwopqwop200/ALMA-en2ko
This is a repository for ALMA translation models.
qwopqwop200/llama-danbooru-qlora
qwopqwop200/AutoQuant
qwopqwop200/Neighborhood-Attention-Transformer
Unofficial implementation of NAT (Neighborhood Attention Transformer). https://arxiv.org/pdf/2204.07143.pdf
qwopqwop200/NatIR
NatIR: Image Restoration Using Neighborhood-Attention-Transformer
qwopqwop200/KoLIMA
qwopqwop200/Magneto-pytorch
Unofficial implementation of Magneto (Foundation Transformers). https://arxiv.org/abs/2210.06423
qwopqwop200/D-Adaptation-Adan
Adan with D-Adaptation automatic step-sizes
qwopqwop200/algorithmica
A computer science textbook
qwopqwop200/Subtitles-generator-with-whisper
Subtitle generator using Whisper and a translator
qwopqwop200/Adan
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
qwopqwop200/AQLM
Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization. https://arxiv.org/pdf/2401.06118.pdf
qwopqwop200/AutoAWQ-windows
AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
qwopqwop200/AutoGPTQ-vllm-marlin
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
qwopqwop200/DAB-DETR
[ICLR 2022] DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR
qwopqwop200/dadaptation
D-Adaptation for SGD, Adam and AdaGrad
qwopqwop200/DN-DETR
[CVPR 2022 Oral] Official implementation of DN-DETR
qwopqwop200/mai-tools
mai-tools is a collection of useful tools for maimai and maimai DX.
qwopqwop200/manga-image-translator
Translate manga/images (one-click translation of text in all kinds of images). https://cotrans.touhou.ai/
qwopqwop200/mmpose
OpenMMLab Pose Estimation Toolbox and Benchmark.
qwopqwop200/omnivore
Omnivore: A Single Model for Many Visual Modalities
qwopqwop200/semantic-image-editing-with-null-inv
qwopqwop200/transformers-t5
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
qwopqwop200/ViTPose
PyTorch implementation of ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation