lloo099's Stars
yifanycc/loretta
[NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
HazyResearch/halp
CASR-HKU/MSD-FCCM23
Open-source release of the MSD framework
COPT-Public/SOLNP_plus
SOLNP+: A derivative-free optimization solver
cvxgrp/dccp
A CVXPY extension for convex-concave programming
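A minimal pure-Python sketch of the convex-concave procedure that dccp implements (illustrative only, not the library's API): to minimize a difference of convex functions f(x) - g(x), the concave part -g is linearized at the current iterate and the resulting convex subproblem is solved; the toy instance f(x) = x^4, g(x) = x^2 is an assumption chosen so each subproblem has a closed form.

```python
# Convex-concave procedure (CCP) sketch -- the idea behind cvxgrp/dccp.
# Minimize f(x) - g(x), both convex, by repeatedly linearizing g at the
# current iterate x_k and solving the resulting convex subproblem.
#
# Toy instance (not from the repo): f(x) = x^4, g(x) = x^2, whose
# difference x^4 - x^2 has stationary points at x = +/- 1/sqrt(2).

def ccp_minimize(x0, iters=50):
    x = x0
    for _ in range(iters):
        # Linearize g(x) = x^2 around x_k: g(x) ~ x_k^2 + 2*x_k*(x - x_k).
        # Subproblem: minimize x^4 - 2*x_k*x, which is convex in x.
        # Its minimizer satisfies 4*x^3 = 2*x_k, i.e. x = (x_k / 2)**(1/3).
        x = (x / 2) ** (1 / 3)
    return x

print(ccp_minimize(1.0))  # approaches 1/sqrt(2)
```

Each iteration solves a convex surrogate, so the objective is non-increasing; here the fixed point coincides with a true stationary point of x^4 - x^2.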
MathFoundationRL/Book-Mathematical-Foundation-of-Reinforcement-Learning
This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."
kamyu104/LeetCode-Solutions
🏋️ Python / Modern C++ Solutions of All 3204 LeetCode Problems (Weekly Update)
kzhangucsb/HLS_TNN
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
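A plain-Python sketch of the low-rank-adapter (LoRA) idea behind PEFT's most common method (illustrative only, not the library's API): the frozen weight W is augmented with a trainable update (alpha / r) * B @ A, so only r * (in_dim + out_dim) parameters are trained instead of in_dim * out_dim. The matrices and alpha below are made-up toy values.

```python
# LoRA sketch: effective weight = W + (alpha / r) * B @ A,
# where A is (r x in_dim), B is (out_dim x r), and W stays frozen.

def matmul(X, Y):
    """Plain-Python matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    r = len(A)  # adapter rank = number of rows of A
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy example: 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]             # r x in_dim
B = [[0.5], [0.25]]          # out_dim x r
print(lora_effective_weight(W, A, B, alpha=1.0))
```

At inference time the update can be merged into W once, so the adapted model runs at the same cost as the base model.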
lambda7xx/awesome-AI-system
Papers and their code for AI systems
invictus717/MetaTransformer
Meta-Transformer for Unified Multimodal Learning
dongguanting/In-Context-Learning_PaperList
Paper List for In-context Learning 🌷
Lightning-AI/litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
SqueezeAILab/SqueezeLLM
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
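A hedged sketch of the dense-and-sparse decomposition named in SqueezeLLM's title (loosely after the idea, not the paper's actual algorithm): outlier weights above a threshold are stored exactly in a sparse structure, while the remaining dense part is quantized to a low-bit grid; the uniform quantizer and the toy values below are assumptions for illustration.

```python
# Dense-and-sparse sketch: keep large-magnitude outliers in full
# precision and quantize only the remaining dense weights.

def quantize_dense_sparse(weights, threshold, bits=4):
    # Sparse part: exact full-precision outliers, indexed by position.
    sparse = {i: w for i, w in enumerate(weights) if abs(w) > threshold}
    dense = [0.0 if i in sparse else w for i, w in enumerate(weights)]
    # Uniform symmetric quantization of the dense part (an assumption;
    # SqueezeLLM itself uses a non-uniform, sensitivity-aware grid).
    levels = 2 ** (bits - 1) - 1
    scale = (max((abs(w) for w in dense), default=0.0) / levels) or 1.0
    q = [round(w / scale) for w in dense]
    dequant = [v * scale for v in q]
    # Recombine: dequantized dense part plus exact sparse outliers.
    return [sparse.get(i, d) for i, d in enumerate(dequant)]

w = [0.1, -0.05, 3.0, 0.2]   # 3.0 is an outlier
print(quantize_dense_sparse(w, threshold=1.0))
```

Removing the outliers first shrinks the dynamic range of the dense part, so the low-bit grid covers the remaining weights much more finely.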
HuangOwen/Awesome-LLM-Compression
Awesome LLM compression research papers and tools.
hossein1387/BARVINN
BARVINN: A Barrel RISC-V Neural Network Accelerator: https://barvinn.readthedocs.io/en/latest/
Xiuyu-Li/q-diffusion
[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
cjg91/trans-fat
An FPGA Accelerator for Transformer Inference
amusi/CVPR2024-Papers-with-Code
A collection of CVPR 2024 papers and open-source projects
edgeimpulse/courseware-embedded-machine-learning
sefaburakokcu/finn-quantized-yolo
Low-Precision YOLO on PYNQ with FINN
bytedance/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
microsoft/Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
roatienza/Deep-Learning-Experiments
Videos, notes and experiments to understand deep learning
cctry/E.T.
leaderj1001/Synthesizer-Rethinking-Self-Attention-Transformer-Models
Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch
quic/aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
google/qkeras
QKeras: a quantization deep learning library for TensorFlow Keras
ChiRuiChen/AutoEncoder_hls4ml
An autoencoder for MNIST denoising, synthesized with hls4ml
HazyResearch/fly