This is a project to build a low-cost, low-power, FPGA-accelerated Convolutional Neural Network system for inference on edge devices.
There is no shortage of embedded solutions for inference at the edge (Google Coral, Intel Neural Compute Stick, Raspberry Pi, etc.), but each involves a trade-off between cost, inference speed, and power draw. This project aims to use a low-cost FPGA (<$15) running custom-written hardware inference engines (CNNs, etc.) to perform image object recognition.
The resulting system targets low cost (<$20 total), low power (<3 W), and a modest but usable frame rate (2-3+ FPS).