anyj0527's Stars
ultralytics/ultralytics
Ultralytics YOLO11 🚀
microsoft/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
google-ai-edge/LiteRT
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
google-ai-edge/ai-edge-torch
Supporting PyTorch models with the Google AI Edge TFLite runtime.
google-ai-edge/model-explorer
A modern model graph visualizer and debugger
karpathy/llm.c
LLM training in simple, raw C/CUDA
phil-opp/blog_os
Writing an OS in Rust
heyman/heynote
A dedicated scratchpad for developers
nnstreamer/deviceMLOps.MLAgent
TBD: deviceMLOps.service or deviceMLOps.MLAgent.
VeriSilicon/TIM-VX
VeriSilicon Tensor Interface Module
nxp-imx/nnshark
Live Profiler for NNStreamer
nxp-imx/meta-imx
Yocto Project BSP layer for i.MX
VeriSilicon/tflite-vx-delegate
TensorFlow Lite external delegate based on TIM-VX
pytorch/executorch
On-device AI across mobile, embedded and edge for PyTorch
vitoplantamura/OnnxStream
Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on an RPi Zero 2 (or in 298 MB of RAM), as well as Mistral 7B on desktops and servers. ARM, x86, WASM, and RISC-V are supported. Accelerated by XNNPACK.
ggerganov/whisper.cpp
Port of OpenAI's Whisper model in C/C++
PINTO0309/tflite2tensorflow
Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob, and .pb from .tflite. Supports building environments with Docker, with direct access to the host PC GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported, as is inverse quantization of INT8-quantized models.
PINTO0309/PINTO_model_zoo
A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlow Lite (Float32/16/INT8), EdgeTPU, and CoreML.
google-ai-edge/mediapipe
Cross-platform, customizable ML solutions for live and streaming media.
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for mobile platforms
nnstreamer/aitt
tensorflow/tensorflow
An Open Source Machine Learning Framework for Everyone
nnstreamer/nnstreamer
🔀 Neural Network (NN) Streamer, a stream-processing paradigm for neural network apps/devices.
PINTO0309/TensorflowLite-bin
Prebuilt binaries for TensorFlow Lite's standalone installer, for Raspberry Pi. A very lightweight installer, with FlexDelegate, MediaPipe Custom OP, and XNNPACK enabled.
nnstreamer/nnstreamer-edge
Remote source nodes for NNStreamer pipelines without GStreamer dependencies
RidgeRun/gst-shark
GstShark is a front-end for GStreamer traces
google/XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Snec/gst-gz
GStreamer plugin for gzip and zlib compression and decompression.
nnstreamer/api
Machine Learning API (origins: C++: SNAP; C/C#: Tizen API; Java: Samsung Research ML API). For Web/JS, see https://git.tizen.org/cgit/platform/core/api/webapi-plugins/
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration