Approximate-Computing-Techniques-for-Deep-Neural-Networks

Approximate computing is useful for improving efficiency (roughly doubling it in our experiments) and reducing energy consumption. We use different approximate adders and multipliers for this purpose and compare their energy consumption and accuracy. The designs have been implemented on a ZedBoard FPGA.
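As one concrete example of the kind of approximate adder compared here, the sketch below shows a lower-part OR adder (LOA), a common approximate design in which the low-order bits are added with a carry-free bitwise OR while the high-order bits use an exact adder. The module and parameter names are illustrative, not taken from this repository.

```verilog
// Hedged sketch of a lower-part OR adder (LOA).
// The low K bits are approximated with bitwise OR (no carry chain),
// which shortens the critical path and cuts switching activity;
// the upper N-K bits are added exactly.
module loa_adder #(
    parameter N = 8,   // operand width
    parameter K = 3    // number of approximated low bits
) (
    input  [N-1:0] a,
    input  [N-1:0] b,
    output [N:0]   sum
);
    wire [K-1:0]  low_or = a[K-1:0]  | b[K-1:0];   // approximate low part
    wire [N-K:0]  high   = a[N-1:K] + b[N-1:K];    // exact high part
    assign sum = {high, low_or};
endmodule
```

Increasing K saves more power but raises the worst-case error, so K is the knob for trading accuracy against energy.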

Primary Language: Verilog


Deep neural networks (DNNs) are widely used in artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics, but this broad applicability comes at the cost of high computational complexity. Techniques that enable efficient processing of DNNs are therefore needed to improve energy efficiency.

To minimize energy consumption while maintaining throughput, we use approximate computing. Approximate arithmetic can produce inexact results, but it is suitable for applications that tolerate small errors and do not require perfect accuracy. The approximate techniques used here reduce power consumption significantly, though accuracy is compromised to some extent.

We have designed several multipliers and measured their power consumption using Xilinx Vivado. The proposed multiplier design uses multiplexers, which greatly simplify the computation: any number of operand bits can be included or excluded, which increases flexibility while reducing power consumption.
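A minimal sketch of such a mux-controlled multiplier is shown below: a truncation input masks off a selectable number of low-order operand bits before multiplying, so the same hardware can trade accuracy for reduced switching activity at run time. The module name, port names, and widths are assumptions for illustration, not the repository's actual design.

```verilog
// Hedged sketch of a multiplexer-controlled truncated multiplier.
// 'trunc' selects how many low operand bits are excluded (zeroed)
// before the multiply; trunc = 0 gives the exact product.
module approx_mult #(
    parameter N = 8    // operand width
) (
    input  [N-1:0]         a,
    input  [N-1:0]         b,
    input  [$clog2(N)-1:0] trunc,  // number of low bits to exclude
    output [2*N-1:0]       p
);
    // The shifted all-ones mask is realized as a small mux tree in hardware,
    // letting any number of bits be included or excluded.
    wire [N-1:0] mask = {N{1'b1}} << trunc;
    assign p = (a & mask) * (b & mask);
endmodule
```

Because zeroed bits do not toggle, larger truncation values reduce dynamic power in the partial-product array at the cost of a bounded multiplicative error.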