Manta3's Stars
MorvanZhou/tutorials
Machine learning tutorials
Parker-Lyu/TensorFLow-Learning
Notes on the Dataguru (炼数成金) open course on Bilibili
fengdu78/Coursera-ML-AndrewNg-Notes
Personal notes on Andrew Ng's machine learning course
TingNie/Machine-learning-in-action
Jupyter Notebook reorganization of the code from Peter Harrington's *Machine Learning in Action*, restructured for better flow and hierarchy, with some of my own modifications and comments added
xjwhhh/AndrewNgMachineLearning
imLogM/Machine_Learning_AndrewNg
Homework of Andrew Ng's "Machine Learning" course in Coursera
jiqizhixin/ML-Tutorial-Experiment
Coding the Machine Learning Tutorial for Learning to Learn
analogdevicesinc/linux
Linux kernel variant from Analog Devices; see README.md for details
MLEveryday/100-Days-Of-ML-Code
Chinese edition of 100-Days-Of-ML-Code
yunjey/pytorch-tutorial
PyTorch Tutorial for Deep Learning Researchers
erhwenkuo/deep-learning-with-keras-notebooks
Jupyter notebooks for using & learning Keras
nlintz/TensorFlow-Tutorials
Simple tutorials using Google's TensorFlow Framework
pkmital/tensorflow_tutorials
From the basics to slightly more interesting applications of TensorFlow
leisurelicht/wtfpython-cn
Chinese translation of wtfpython / work in progress 🚧 / my abilities are limited; help improving the translation is welcome
tensorflow/tensorflow
An Open Source Machine Learning Framework for Everyone
steveicarus/iverilog
Icarus Verilog
d2l-ai/d2l-zh
*Dive into Deep Learning* (Chinese edition): aimed at Chinese readers, runnable, and open for discussion. The Chinese and English editions are used for teaching at over 500 universities in more than 70 countries.
dragen1860/Deep-Learning-with-PyTorch-Tutorials
Source code and slides accompanying the video course "Deep Learning with PyTorch: A Hands-On Introduction"
dragen1860/TensorFlow-2.x-Tutorials
TensorFlow 2.x tutorials and examples, including CNN, RNN, GAN, auto-encoder, Faster R-CNN, GPT, and BERT examples; introductory example code and hands-on tutorials for TF 2.0
DjangoPeng/tensorflow
Computation using data flow graphs for scalable machine learning
keras-team/keras
Deep Learning for humans
DjangoPeng/tensorflow-101
Code and slides for the courses "TensorFlow Quick Start and Practice" and "TensorFlow 2 Advanced Projects"
canteen-man/CNN_FPGA_ZYNQ_PYNQ
HLS code for a CNN on Zynq-7020 (PYNQ-Z2)
hisrg/PYNQ_CNN_Accelerator_Tutorial
Chinese:
Shuep418Slw/OSlw_Code
Code for OSLW
mtmd/FPGA_Based_CNN
FPGA-based acceleration of convolutional neural networks. The project is developed in Verilog for the Altera DE5-Net platform.
Hossamomar/EM070_New-FPGA-family-for-CNN-architectures-High-Speed-Soft-Neuron-Design
This project proposes a new FPGA family that embeds hard neurons directly in the silicon fabric in place of the conventional DSP and multiplier blocks. An optimized hard neuron would let hardware and software designers build and test deep learning architectures, especially convolutional neural networks (CNNs), more easily and faster than on any current FPGA family, while avoiding the wasted logic resources and mismatched bus widths that arise when CNN designs are forced onto conventional DSP blocks. The project focuses on the anchor point of any deep learning architecture: an optimized high-speed neuron block intended to replace the DSP block. The neuron design takes parallel operation as its primary keystone, alongside minimizing the logic needed to construct each cell; the resource target is at most 500 ALMs per neuron, with an expected maximum operating frequency of 834.03 MHz. Ultra-fast, adaptive, parallel modules are implemented as soft blocks in VHDL, including parallel multiplier-accumulators (MACs) and a ReLU activation function, so that FPGA designers can build their own CNNs. The authors envision Intel/Altera adopting the proposed CNN block as part of a new logic family in future FPGA fabrics.
Users of the proposed CNN blocks would gain substantial throughput when designing their own CNN architectures. According to the first coding trial, a single MAC unit reaches 3.5 giga-operations per second (GOPS) and can multiply up to 4 different inputs against a common weight value; with the blocks operating in parallel, the design's aggregate throughput is projected at about 16 tera-operations per second (TOPS). The authors argue that the speed and scalability afforded by this parallelism would make such FPGA-based CNN blocks competitive with conventional CPUs and GPUs.
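The neuron described above boils down to a MAC that multiplies up to four inputs by one shared weight, accumulates, and applies ReLU. As a rough illustration of that dataflow (a behavioral sketch only — the function name and interface are hypothetical, not taken from the repository, and the real design is VHDL hardware):

```python
def mac_relu(inputs, weight, acc=0):
    """Behavioral model of the proposed neuron's MAC stage:
    multiply up to four inputs by a common weight, add to the
    running accumulator, then apply ReLU. Illustrative only."""
    if len(inputs) > 4:
        raise ValueError("the described MAC handles at most 4 inputs per cycle")
    # Shared-weight multiply-accumulate, as in the project description
    acc += sum(x * weight for x in inputs)
    # ReLU activation on the accumulated result
    return max(acc, 0)

# Four inputs against one common weight in a single "cycle"
result = mac_relu([1, 2, 3, 4], weight=2)  # (1+2+3+4) * 2 = 20
```

Sharing one weight across four inputs is what lets a single hardware MAC serve four multiplications per cycle, which is where the claimed per-unit throughput comes from.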
sumanth-kalluri/cnn_hardware_acclerator_for_fpga
A fully parameterized Verilog implementation of computation kernels for accelerating the inference of convolutional neural networks on FPGAs
brianhill11/FPGA-CNN
This repo is for ECE44x (Fall2015-Spring2016)
xiangze/CNN_FPGA
verilog CNN generator for FPGA