Pinned Repositories
Applied-AI
This repository contains projects from Applied AI, a machine learning bootcamp taught in Python.
javacpp-presets
The missing Java distribution of native C++ libraries
Miscellaneous-Projects
A repository for my smaller, miscellaneous projects.
onnx
Open standard for machine learning interoperability
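A minimal sketch of the kind of workflow the onnx Python package supports: loading a serialized model, validating it against the ONNX spec, and printing its graph. The path model.onnx is a placeholder.

```python
import onnx

# Load a serialized model from disk ("model.onnx" is a placeholder path).
model = onnx.load("model.onnx")

# Validate the model against the ONNX specification.
onnx.checker.check_model(model)

# Print the graph in a human-readable form.
print(onnx.helper.printable_graph(model.graph))
```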
open-gpu-kernel-modules
NVIDIA Linux open GPU kernel module source
TensorRT
TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
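As a rough illustration of TensorRT's Python bindings, here is a TensorRT 8.x-style sketch that parses an ONNX model and builds a serialized engine. The paths model.onnx and model.engine are placeholders, and newer TensorRT releases deprecate the explicit-batch flag.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# TensorRT 8.x networks are created in explicit-batch mode.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse a model exported to ONNX ("model.onnx" is a placeholder path).
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

# Build and save a serialized engine with default builder settings.
config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```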
triton-inference-server
The Triton Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs.
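A sketch of querying a running Triton server with the tritonclient Python package. The model name my_model and the tensor names INPUT0/OUTPUT0 are hypothetical; in practice they must match the model's config.pbtxt.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "INPUT0", its shape, and "FP32" are hypothetical; they must match the model config.
input_tensor = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input_tensor.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# "my_model" is a placeholder model name.
response = client.infer(model_name="my_model", inputs=[input_tensor])
print(response.as_numpy("OUTPUT0"))
```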
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
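A minimal sketch of offline generation with vLLM's Python API, assuming a small Hugging Face checkpoint such as facebook/opt-125m is available locally or downloadable.

```python
from vllm import LLM, SamplingParams

# Any supported Hugging Face model works; opt-125m is just a small example.
llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["High-throughput LLM serving works by"], sampling)
for output in outputs:
    print(output.outputs[0].text)
```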
dyastremsky's Repositories
dyastremsky/Applied-AI
This repository contains projects from Applied AI, a machine learning bootcamp taught in Python.
dyastremsky/javacpp-presets
The missing Java distribution of native C++ libraries
dyastremsky/Miscellaneous-Projects
A repository for my smaller, miscellaneous projects.
dyastremsky/onnx
Open standard for machine learning interoperability
dyastremsky/open-gpu-kernel-modules
NVIDIA Linux open GPU kernel module source
dyastremsky/TensorRT
TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
dyastremsky/triton-inference-server
The Triton Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs.
dyastremsky/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs