bezero's Stars
artificially-ai/ai-engineering
Controlling AI model distribution and versioning with MLflow and MinIO/S3.
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
mateoguzman/openvino-docker
An OpenVINO environment with Docker.