haodonglu's Stars
chenzomi12/AIFoundation
AIFoundation covers AI systems meeting large models: the full-stack core technologies, from the bottom layer to the top, for system-level support of large-model training and inference.
tsfhd2024/tsf-hd
TSF-HD official implementation
SpikingChen/SNN-Daily-Arxiv
Daily-updated list of arXiv papers on Spiking Neural Networks.
mikeroyal/RISC-V-Guide
RISC-V Guide. Learn all about the RISC-V computer architecture, along with the development tools and operating systems used to develop on RISC-V hardware.
spcl/rapidchiplet
A toolchain for rapid design space exploration of chiplet architectures
mlflow/mlflow
Open source platform for the machine learning lifecycle
TL-System/plato
A federated learning framework to support scalable and reproducible research
youngfish42/Awesome-FL
Comprehensive and timely academic information on federated learning (papers, frameworks, datasets, tutorials, workshops)
SLAM-Hardware/acSLAM
FPGA Hardware Implementation for SLAM
umitkacar/ai-edge-computing-tiny-embedded
IBM/neuro-vector-symbolic-architectures
PyTorch implementation of the paper "A Neuro-Vector-Symbolic Architecture for Solving Raven's Progressive Matrices," published in Nature Machine Intelligence, 2023.
mynkpl1998/Recurrent-Deep-Q-Learning
Solving POMDPs using recurrent networks
HyperdimensionalComputing/collection
Collection of Hyperdimensional Computing Projects
UCSD-SEELab/openhd
BRTResearch/AIChip_Paper_List
LeiWang1999/FPGA
Helps beginners get started with FPGA; shares excellent FPGA-related articles and projects.
idiap/fast-transformers
PyTorch library for fast transformer implementations
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
OpenPPL/ppq
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
GaiZhenbiao/ChuanhuChatGPT
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Jiawei-Yang/FreeNeRF
[CVPR23] FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
tnbar/awesome-tensorial-neural-networks
A thorough survey of tensorial neural networks.
TylerYep/torchinfo
View model summaries in PyTorch!
pmichel31415/are-16-heads-really-better-than-1
Code for the paper "Are Sixteen Heads Really Better than One?"
phlippe/uvadlc_notebooks
Repository of Jupyter notebook tutorials for teaching the Deep Learning Course at the University of Amsterdam (MSc AI), Fall 2023
jacobgil/vit-explain
Explainability for Vision Transformers
WoosukKwon/retraining-free-pruning
[NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
usc-isi/PipeEdge
PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices