Pinned Repositories
angularjs-projects
Projects in Angular 2 and later versions
AutoDrive
AutoDrive Planning Research
Computer-Vision
CPP-Training
Deep dive into C++ and Bazel
CV-CUDA
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
DeepFaceLab-WS
DeepFaceLab Workspace
robot-modeling
Quadruped robot controller design and simulation in Webots
TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
Tensorrt-CV
Using TensorRT for inference model deployment.
trt-samples-for-hackathon-cn
Simple samples for TensorRT programming
col-in-coding's Repositories
col-in-coding/Tensorrt-CV
Using TensorRT for inference model deployment.
col-in-coding/cub
Cooperative primitives for CUDA C++.
col-in-coding/CV-CUDA
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
col-in-coding/TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
col-in-coding/AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
col-in-coding/apollo
An open autonomous driving platform
col-in-coding/bitsandbytes
8-bit CUDA functions for PyTorch
col-in-coding/cccl
CUDA C++ Core Libraries
col-in-coding/CUDA-Learn-Notes
🎉CUDA notes / hand-written CUDA kernels for large models / C++ notes, updated sporadically: flash_attn, sgemm, sgemv, warp reduce, block reduce, dot product, elementwise, softmax, layernorm, rmsnorm, hist, etc.
col-in-coding/cuda-samples
Samples for CUDA developers demonstrating features in the CUDA Toolkit
col-in-coding/cuda-training-series
Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/)
col-in-coding/CUDALibrarySamples
CUDA Library Samples
col-in-coding/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
col-in-coding/GenerativeAIExamples
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
col-in-coding/latent-diffusion
High-Resolution Image Synthesis with Latent Diffusion Models
col-in-coding/llama
Inference code for LLaMA models
col-in-coding/stable-diffusion-tritonserver
Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Server
col-in-coding/taming-transformers
Taming Transformers for High-Resolution Image Synthesis
col-in-coding/TensorRT-Bert
col-in-coding/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
col-in-coding/TensorRT-Model-Optimizer
TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization and sparsity. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
col-in-coding/tensorrt_plugin_generator
A simple tool that can generate TensorRT plugin code quickly.
col-in-coding/tensorRT_Pro
A C++ library based on TensorRT integration
col-in-coding/text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, llama.cpp (GGUF), Llama models.
col-in-coding/tradingview-chartinglib-test
col-in-coding/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
col-in-coding/triton
Development repository for the Triton language and compiler
col-in-coding/TRT-Hackathon-2023-Final
col-in-coding/trt-samples-for-hackathon-cn
Simple samples for TensorRT programming
col-in-coding/Wav2Lip
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.