Pinned Repositories
Accelerating-CNN-with-FPGA
This project accelerates CNN computation on an FPGA, achieving more than a 50x speed-up compared with a CPU.
Alveo-PYNQ
Introductory examples for using PYNQ with Alveo
audio
Data manipulation and transformation for audio signal processing, powered by PyTorch
audio_ml
augmented-neural-odes
PyTorch implementation of Augmented Neural ODEs :sunflower:
BaseKit-code-samples
benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
blinky
Example LED blinking project for your FPGA dev board of choice
brevitas
Quantization-aware training in PyTorch
metrics
Machine learning metrics for distributed, scalable PyTorch applications.
mahinlma's Repositories
mahinlma/Accelerating-CNN-with-FPGA
This project accelerates CNN computation on an FPGA, achieving more than a 50x speed-up compared with a CPU.
mahinlma/Alveo-PYNQ
Introductory examples for using PYNQ with Alveo
mahinlma/audio
Data manipulation and transformation for audio signal processing, powered by PyTorch
mahinlma/audio_ml
mahinlma/BaseKit-code-samples
mahinlma/benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
mahinlma/blinky
Example LED blinking project for your FPGA dev board of choice
mahinlma/brevitas
Quantization-aware training in PyTorch
mahinlma/metrics
Machine learning metrics for distributed, scalable PyTorch applications.
mahinlma/c5p_OpenCL_experiments
mahinlma/examples-1
TensorFlow examples
mahinlma/FFmpeg
Mirror of git://source.ffmpeg.org/ffmpeg.git
mahinlma/FPGA-Devcloud
Get started using Intel® FPGA tools on the DevCloud with tutorials, workshops, advanced courses, and sample projects built specifically for students, researchers, and developers. See the official Intel® FPGA DevCloud website for details.
mahinlma/mlir
"Multi-Level Intermediate Representation" Compiler Infrastructure
mahinlma/netron
Visualizer for neural network, deep learning, and machine learning models
mahinlma/OLive
OLive (ONNX Runtime Go Live) is a Python package that automates accelerating models with ONNX Runtime (ORT). It has two parts: model conversion to ONNX with correctness checking, and automatic performance tuning with ORT. The two can be run together in a single pipeline or independently as needed.
mahinlma/oneAPI-samples
Samples for Intel oneAPI toolkits
mahinlma/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
mahinlma/open_model_zoo
Pre-trained Deep Learning models and samples (high quality and extremely fast)
mahinlma/OpenVINO-Custom-Layers
Tutorial for Using Custom Layers with OpenVINO (Intel Deep Learning Toolkit)
mahinlma/optimum
🏎️ Accelerate training and inference of 🤗 Transformers with easy to use hardware optimization tools
mahinlma/ort_build
mahinlma/PYNQ-experiment
This repository contains a "Hello World" introductory application for the Xilinx PYNQ framework.
mahinlma/pytorch_quantization
PyTorch model quantization, layer fusion, and optimization
mahinlma/SDAccel_Examples
SDAccel Examples
mahinlma/spooNN
FPGA-based neural network inference project with an end-to-end approach (from training to implementation to deployment)
mahinlma/tune
mahinlma/Vitis_Accel_Examples
Vitis_Accel_Examples
mahinlma/Vitis_Libraries
Vitis Libraries
mahinlma/whisper
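Several repositories above (brevitas, pytorch_quantization, onnxruntime) revolve around model quantization. As a conceptual sketch of the core idea, here is a minimal affine int8 quantizer in plain Python; it is illustrative only (real frameworks use per-channel scales, zero-points, and fused kernels), and all names are hypothetical.

```python
# Minimal affine (asymmetric) int8 quantization sketch.
# Illustrative only -- frameworks like PyTorch or Brevitas handle
# per-channel scales, calibration, and fused low-precision kernels.

def quantize(values, num_bits=8):
    """Map floats to signed ints via one affine scale/zero-point pair."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from quantized integers."""
    return [(x - zero_point) * scale for x in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, s, zp = quantize(weights)
recovered = dequantize(q, s, zp)
# Reconstruction error stays within one quantization step (the scale).
assert all(abs(a - b) <= s for a, b in zip(weights, recovered))
```

The round-trip error bound is what quantization-aware training exploits: the network learns weights that remain accurate after this lossy mapping.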