Binarized Neural Network for Digit Recognition on FPGA
Overview
In Spring 2018 I took another of Bruce Land’s iconic classes, “ECE 5760 Advanced Microcontroller Design,” which teaches how to use a System on Chip (SoC) that pairs a microprocessor for general-purpose tasks with an FPGA for hardware acceleration. For the last 4-5 weeks of the class, students worked on a final project on any topic they chose to pursue. My awesome lab partner Vidya and I wanted a project that combined computer vision and machine learning, so we decided to build a Binarized Neural Network for Digit Recognition on an FPGA.
A Binarized Neural Network (BNN) is a special type of Convolutional Neural Network, a machine learning model loosely inspired by the neural networks of the human brain and widely used for image classification. We implemented the BNN in SystemVerilog and flashed it onto the FPGA portion of the Intel DE1-SoC board. Our BNN consists of two convolutional layers, two pooling layers, and two fully connected layers.
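Our actual implementation is in SystemVerilog on the FPGA, but the core arithmetic of a binarized convolution can be illustrated in a few lines of NumPy. The sketch below is an assumption-laden simplification (the function names `binarize` and `bin_conv2d`, the 3×3 kernel, and the {-1, +1} encoding are illustrative, not taken from our design): because both weights and activations are constrained to ±1, each multiply-accumulate reduces in hardware to an XNOR followed by a popcount, which is what makes BNNs so cheap to parallelize on an FPGA.

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1}; ties at zero go to +1.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bin_conv2d(image, kernel):
    """Valid-mode 2D convolution with both operands binarized.

    Each output element is a sum of +/-1 products. With a {0, 1}
    bit encoding, the same sum is computed as XNOR + popcount,
    which is a single cheap logic stage per window on an FPGA.
    """
    img = binarize(image)
    ker = binarize(kernel)
    kh, kw = ker.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int32)
    for i in range(oh):
        for j in range(ow):
            # Elementwise +/-1 products summed over the window.
            out[i, j] = int(np.sum(img[i:i + kh, j:j + kw] * ker))
    return out

# Illustrative 7x7 input and 3x3 kernel (sizes chosen for the demo,
# not necessarily matching our layer dimensions).
demo_img = np.ones((7, 7))
demo_ker = np.ones((3, 3))
result = bin_conv2d(demo_img, demo_ker)  # shape (5, 5), every entry 9
```

On the FPGA, every window position (the two `for` loops above) can be evaluated simultaneously in dedicated logic, which is where the speedup over a sequential CPU implementation comes from.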
When the microprocessor feeds a 7×7, two-bit black-and-white image to the BNN, the BNN performs inference and classifies which digit the image contains in 4 µs, far faster than the same implementation running in Python on a PC (>40 µs, or at least 10 times longer). This hardware acceleration comes from our parallel computing algorithm on the FPGA. Another advantage of running a BNN on FPGA hardware is that it guarantees meeting real-time deadlines (whereas a typical program running on an operating system is usually not real-time), which matters for applications like autonomous driving.
More information about this project can be found on our final report website: Link Click Here.
Video Demo