inference-engine
There are 273 repositories under the inference-engine topic.
FedML-AI/FedML
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, TensorOpera AI (https://TensorOpera.ai) is your generative AI platform at scale.
zjhellofss/KuiperInfer
A great project for campus recruiting (autumn/spring hiring) and internships! Walks you through building a high-performance deep learning inference library from scratch, supporting inference for models such as the llama2 large model, Unet, Yolov5, and Resnet. Implement a high-performance deep learning inference library step by step.
hyperjumptech/grule-rule-engine
Rule engine implementation in Golang
siliconflow/onediff
OneDiff: An out-of-the-box acceleration library for diffusion models.
aphrodite-engine/aphrodite-engine
Large-scale LLM inference engine
Tencent/FeatherCNN
FeatherCNN is a high performance inference engine for convolutional neural networks.
PaddlePaddle/Paddle.js
Paddle.js is a web project for Baidu PaddlePaddle, an open source deep learning framework that runs in the browser. Paddle.js can either load a pre-trained model or transform a model from paddle-hub with the model transformation tools it provides. It runs in any browser that supports WebGL/WebGPU/WebAssembly, and it also runs in Baidu Smartprogram and WX miniprogram.
zhihu/ZhiLight
A highly optimized LLM inference acceleration engine for Llama and its variants.
Adlik/Adlik
Adlik: Toolkit for Accelerating Deep Learning Inference
quic/ai-hub-models
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
msnh2012/Msnhnet
🔥 (yolov3, yolov4, yolov5, unet, ...) A mini PyTorch inference framework inspired by darknet.
insight-platform/Savant
Python Computer Vision & Video Analytics Framework With Batteries Included
Tencent/Forward
A library for high performance deep learning inference on NVIDIA GPUs.
pylint-dev/astroid
A common base representation of python source code for pylint and other projects
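As a quick illustration of what such a base representation looks like in practice, here is a minimal sketch built on astroid's documented parse() entry point; the add function and the attributes printed are illustrative assumptions, not code from the project:

    import astroid

    # Parse a source snippet into astroid's enriched AST.
    module = astroid.parse("def add(a, b):\n    return a + b\n")

    func = module.body[0]                         # the FunctionDef node
    print(func.name)                              # -> add
    print([arg.name for arg in func.args.args])   # -> ['a', 'b']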
PaddlePaddle/Anakin
A high-performance, cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
HoloClean/holoclean
A Machine Learning System for Data Enrichment.
andrewkchan/yalm
Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O
ulfurinn/wongi-engine
A rule engine written in Ruby.
buguroo/pyknow
PyKnow: Expert Systems for Python
zjhellofss/KuiperLLama
A great project for campus recruiting (autumn/spring hiring) and internships: build a large-model inference framework from scratch that supports LLama2/3 and Qwen2.5.
chengzeyi/ParaAttention
Context-parallel attention that accelerates DiT model inference with dynamic caching. https://wavespeed.ai/
ReactiveBayes/RxInfer.jl
Julia package for automated Bayesian inference on a factor graph with reactive message passing
quic/ai-hub-apps
The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
lofcz/LlmTornado
The .NET library to build AI systems with 100+ LLM APIs: Anthropic, Azure, Cohere, DeepInfra, DeepSeek, Google, Groq, Mistral, Ollama, OpenAI, OpenRouter, Perplexity, vLLM, Voyage, xAI, and many more!
interestingLSY/swiftLLM
A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of vLLM).
EfficientMoE/MoE-Infinity
PyTorch library for cost-effective, fast and easy serving of MoE models.
jd-opensource/xllm
A high-performance inference engine for LLMs, optimized for diverse AI accelerators.
gottingen/kumo-search
Docs for the search system and AI infrastructure.
SearchSavior/OpenArc
Lightweight Inference server for OpenVINO
ROCm/MIVisionX
MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. AMD MIVisionX also delivers a highly optimized open-source implementation of the Khronos OpenVX™ and OpenVX™ Extensions.
BMW-InnovationLab/BMW-TensorFlow-Inference-API-CPU
This is a repository for an object detection inference API using the TensorFlow framework.
nilp0inter/experta
Expert Systems for Python
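Since experta exposes the classic forward-chaining expert-system pattern (facts declared into a knowledge engine, rules fired when they match), a minimal sketch built on its documented Fact/KnowledgeEngine/Rule classes may help; the Light fact and CrossStreet engine are illustrative names, not part of the project:

    from experta import Fact, KnowledgeEngine, Rule

    class Light(Fact):
        """Fact carrying a traffic-light color."""

    class CrossStreet(KnowledgeEngine):
        @Rule(Light(color="green"))
        def walk(self):
            # Fired by the engine when a matching Light fact exists.
            print("Walk")

    engine = CrossStreet()
    engine.reset()                          # initialize agenda and working memory
    engine.declare(Light(color="green"))    # assert a fact
    engine.run()                            # forward-chain: prints "Walk"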
midea-ai/Aidget
AI edge toolbox for edge devices, especially embedded RTOS platforms: an AI model deployment toolchain that includes a model inference engine and model compression tools.
CAS-CLab/CNN-Inference-Engine-Quick-View
A quick view of high-performance convolution neural networks (CNNs) inference engines on mobile devices.
matteocarnelos/microflow-rs
A robust and efficient TinyML inference engine.
haobosang/TinyTensor
TinyTensor is a tool for running already-trained neural network (NN) models for inference tasks such as image classification, semantic segmentation, etc.