korabelnikov's Stars
jimmywarting/await-sync
Perform async work synchronously using a web worker and SharedArrayBuffer
msqr1/Vosklet
A speech recognizer that can run in the browser, inspired by vosk-browser
brython-dev/brython
Brython (Browser Python) is an implementation of Python 3 running in the browser
extremeheat/JSPyBridge
🌉 Bridge to interoperate Node.js and Python
Cleric-K/Universal-RC-Joystick
Convert RC receiver into USB HID Joystick with a cheap STM32 dev board
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB
💎 1MB lightweight face detection model
ai-forever/ru-dolph
RUDOLPH: One Hyper-Tasking Transformer that can be as creative as DALL-E and GPT-3 and as smart as CLIP
korabelnikov/AI-Chip
A list of ICs and IPs for AI, Machine Learning and Deep Learning.
adamjezek98/MPU6050-ESP8266-MicroPython
Simple library for the MPU6050 on ESP8266 with MicroPython
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
TexasInstruments/edgeai-torchvision
This repository has been moved. The new location is https://github.com/TexasInstruments/edgeai-tensorlab
pascallanger/DIY-Multiprotocol-TX-Module
Multiprotocol TX Module (or MULTI-Module) is a 2.4GHz transmitter module which controls many different receivers and models.
CNugteren/CLBlast
Tuned OpenCL BLAS
prothesman/VivanteGPU
from https://github.com/laanwj/etna_viv
nxp-imx/gtec-demo-framework
intel/intel-extension-for-pytorch
A Python package extending official PyTorch to easily obtain improved performance on Intel platforms
analogdevicesinc/distiller
Fork of Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
SolderedElectronics/Inkplate-6-hardware
Open Source Hardware (OSH) files for e-paper display Inkplate 6
sovrasov/flops-counter.pytorch
Flops counter for convolutional networks in pytorch framework
tminnigaliev/euler_angles
Chien-Hung/DetVisGUI
A GUI for easily visualizing detection results.
IntelLabs/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
daquexian/onnx-simplifier
Simplify your onnx model
tensorflow/tpu
Reference models and tools for Cloud TPUs.
shap/shap
A game theoretic approach to explain the output of any machine learning model.
OpenMathLib/OpenBLAS
OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
conda-forge/miniforge
A conda-forge distribution.
huawei-noah/Efficient-AI-Backbones
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
Freescale/kernel-module-imx-gpu-viv
FSL Community fork of Vivante i.MX GPU Linux kernel driver
InterDigitalInc/HRFAE
Official implementation for paper High Resolution Face Age Editing