Pinned Repositories
ONNX-TensorRT-Inference-CRAFT-pytorch
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
Table_TIES_DataGeneration_Docker
Dataset Generation Code for: S.R. Qasim, H. Mahmood, and F. Shafait, Rethinking Table Parsing using Graph Neural Networks (2019)
torch2tensorRT-dynamic-CRAFT-pytorch
Conveniently convert the pretrained CRAFT text detection PyTorch model into a TensorRT engine directly, without an intermediate ONNX step
triton-exp
Repo for experimenting with NVIDIA Triton Inference Server
Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server - multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX
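To serve any of those formats, Triton reads a per-model `config.pbtxt` from its model repository. A minimal sketch for the TensorRT variant, with dynamic height/width marked as `-1` (the model name, tensor names, and output dims here are assumptions, not the repo's actual configuration):

```
name: "craft_trt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, -1, -1 ]
  }
]
output [
  {
    name: "score_map"
    data_type: TYPE_FP32
    dims: [ -1, -1, 2 ]
  }
]
```

Swapping `platform` to `pytorch_libtorch` or `onnxruntime_onnx` (with matching model files) is how one repository serves the same detector in multiple formats.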
VITON-HD-Docker
Official PyTorch implementation of "VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization" (CVPR 2021)
Efficient_Text_Detection
This repo combines a heat-map-based text detection method with an advanced backbone from the EfficientNet model family.
k9ele7en's Repositories
k9ele7en/triton-exp
Repo for experimenting with NVIDIA Triton Inference Server
k9ele7en/Data-science-best-resources
Carefully curated resource links for data science in one place
k9ele7en/flutter-roadmap
Roadmap for Flutter developers in 2020
k9ele7en/hadoop-ops-course
k9ele7en/neural-doodle
Turn your two-bit doodles into fine artworks with deep neural networks, generate seamless textures from photos, transfer style from one image to another, perform example-based upscaling, but wait... there's more! (An implementation of Semantic Style Transfer.)
k9ele7en/onnx-exp
Experiments with the ONNX platform
k9ele7en/onnx-simplifier
Simplify your ONNX model
k9ele7en/PyTorch-Quantization-Aware-Training
PyTorch Quantization Aware Training Example
k9ele7en/PyTorchZeroToAll
Simple PyTorch Tutorials Zero to ALL!
k9ele7en/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
k9ele7en/Simple-Inference-Server
Inference Server Implementation from Scratch for Machine Learning Models
k9ele7en/Spark-Streaming-In-Python
Apache Spark 3 - Structured Streaming Course Material
k9ele7en/TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
k9ele7en/tensorRT-exp
Experiments with TensorRT engine creation and inference
k9ele7en/triton-inference-server_client
Triton Python and C++ client libraries and examples, plus client examples for Go, Java, and Scala.
k9ele7en/typescript-lambda-xray
Example of TypeScript Lambda using AWS X-Ray