fp16
There are 18 repositories under the fp16 topic.
SthPhoenix/InsightFace-REST
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
Maratyszcza/FP16
Conversion to/from half-precision floating point formats
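To see what such a conversion does, here is a minimal NumPy sketch (using NumPy's own float16 dtype, not this library's C API) of a float32 → fp16 → float32 round-trip:

```python
import numpy as np

# Round-trip a float32 value through IEEE 754 half precision.
x = np.array([3.14159265], dtype=np.float32)
h = x.astype(np.float16)       # narrow: 1 sign, 5 exponent, 10 mantissa bits
back = h.astype(np.float32)    # widen back to float32

print(h.view(np.uint16))       # raw bit pattern: [16968] == 0x4248
print(back)                    # [3.140625] -- nearest representable fp16 value
```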
kamalkraj/stable-diffusion-tritonserver
Deploy the Stable Diffusion model with ONNX/TensorRT + Triton Server
petamoriken/float16
Stage 3 IEEE 754 half-precision floating-point ponyfill
the0807/YOLOv8-ONNX-TensorRT
👀 Apply YOLOv8 models exported to ONNX or TensorRT (FP16, INT8) to a real-time camera feed
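For context, such models are typically produced with the standard Ultralytics export API; whether this repo uses exactly these flags is an assumption:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # any YOLOv8 checkpoint
model.export(format="onnx")  # ONNX for generic runtimes
model.export(format="engine", half=True, device=0)  # FP16 TensorRT engine (needs GPU + TensorRT)
```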
higham/chop
Round matrix elements to lower precision in MATLAB
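chop itself is MATLAB; an analogous fp16 simulation in NumPy (a sketch of the same idea, not the library's API) looks like:

```python
import numpy as np

# Round double-precision matrix elements to the nearest fp16 value,
# then continue computing in double precision, as chop's fp16 mode does.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A_rounded = A.astype(np.float16).astype(np.float64)

print(np.max(np.abs(A - A_rounded)))  # rounding error; relative error <= 2**-11
```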
fengwang/float16_t
C++20 implementation of a 16-bit floating-point type mimicking most IEEE 754 behavior. Single-file and header-only.
esteveste/dreamerV2-pytorch
PyTorch implementation of DreamerV2: Mastering Atari with Discrete World Models, based on the original implementation
jizhuoran/caffe-android-opencl-fp16
Optimised Caffe with OpenCL support for less powerful devices such as mobile phones
kentaroy47/pytorch-cifar10-fp16
Let's train CIFAR-10 in PyTorch with half precision!
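A minimal sketch of half-precision training with PyTorch's native AMP (the repo itself may use plain `.half()` casts or Apex instead; treat this as the modern equivalent):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(3072, 10).cuda()   # stand-in for a CIFAR-10 network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler()                      # loss scaling avoids fp16 gradient underflow

def train_step(x, y):
    optimizer.zero_grad()
    with autocast():                       # forward pass runs in an fp16/fp32 mix
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()          # backward on the scaled loss
    scaler.step(optimizer)                 # unscales grads; skips step on inf/nan
    scaler.update()
    return loss.item()
```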
afterdusk/flop
IEEE 754-style floating-point converter
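The field decomposition such a converter performs can be sketched in Python for binary32:

```python
import struct

def float_bits(x: float) -> str:
    """Split the IEEE 754 binary32 encoding of x into its three fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return f"sign={sign} exponent={exponent:08b} mantissa={mantissa:023b} (0x{bits:08X})"

print(float_bits(-0.15625))
# sign=1 exponent=01111100 mantissa=01000000000000000000000 (0xBE200000)
```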
quanvuhust/Export-ONNX-float-16
Export a PyTorch model to ONNX and convert the ONNX model from float32 to float16
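One common recipe for this pairs `torch.onnx.export` with `onnxconverter-common`; whether this repo follows exactly this route is an assumption:

```python
import torch
import onnx
from onnxconverter_common import float16

# Export a PyTorch model to ONNX in float32 first...
model = torch.nn.Linear(8, 4).eval()
torch.onnx.export(model, torch.randn(1, 8), "model_fp32.onnx")

# ...then rewrite the graph's tensors and initializers to float16.
m = onnx.load("model_fp32.onnx")
m_fp16 = float16.convert_float_to_float16(m)
onnx.save(m_fp16, "model_fp16.onnx")
```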
angelolamonaca/PyTorch-Precision-Converter
A flexible utility for converting tensor precision in PyTorch models and safetensors files, enabling efficient deployment across various platforms.
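For safetensors files specifically, the core of such a conversion can be sketched with the `safetensors` library (file names here are hypothetical):

```python
from safetensors.torch import load_file, save_file

# Downcast every floating-point tensor in a checkpoint to fp16.
tensors = load_file("model.safetensors")
tensors = {name: t.half() if t.is_floating_point() else t
           for name, t in tensors.items()}
save_file(tensors, "model_fp16.safetensors")
```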
lbin/apextrainer_detectron2
apextrainer is an open-source toolbox for fp16 training, based on Detectron2 and Apex
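The Apex idiom such a trainer wraps looks roughly like this (a sketch, not apextrainer's actual API; Apex's `amp` has since been superseded by PyTorch-native AMP):

```python
import torch
from apex import amp  # NVIDIA Apex

model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# O1 patches whitelisted ops to run in fp16 while weights stay fp32.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 16).cuda()).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```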
ojy0216/floatConversion
Converts a floating-point number or the hexadecimal representation of a floating-point number into various formats and displays them in binary/hexadecimal.
floriankark/transformer
Transformer implementation in PyTorch, trained on an NVIDIA A100 in fp16
ZephyrLabs/aarch64-playgrounds
Just a bunch of hand-crafted experiments to tinker with the capabilities of a really fast aarch64-based system.
khlee369/Pytorch2TensorRT
A simple example of PyTorch -> TensorRT conversion and inference
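A typical route (whether this repo uses it is an assumption) exports to ONNX first, then builds an FP16 engine with TensorRT's `trtexec`:

```python
import torch

# Step 1: export the PyTorch model to ONNX.
model = torch.nn.Linear(8, 4).eval()
torch.onnx.export(model, torch.randn(1, 8), "model.onnx")

# Step 2: build an FP16 TensorRT engine from the ONNX file, e.g.:
#   trtexec --onnx=model.onnx --fp16 --saveEngine=model.engine
```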