Accelerating Machine Learning Using NoC-Based Accelerators on FPGAs

Description

As machine learning (ML) continues to grow in popularity, the need for efficient hardware accelerators increases. Field Programmable Gate Arrays (FPGAs) have become a popular solution due to their reconfigurability. Networks-on-Chip (NoCs) are a communication architecture gaining traction on FPGAs because they can connect different components quickly and efficiently. In this thesis, we show that NoCs can be applied to machine learning on FPGAs: we connected matrix-vector multipliers to the NoC and implemented a simple multi-layer perceptron (MLP) across those components. Our results demonstrate that NoCs are a viable solution for accelerating machine learning and that there is opportunity to apply them to larger machine learning designs.
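
To illustrate the decomposition the abstract describes, the sketch below models in software how an MLP reduces to a chain of matrix-vector multiplies, with each multiply standing in for one multiplier unit attached to the NoC. This is a minimal behavioral illustration only: the function names, layer sizes, and ReLU activation are assumptions made for demonstration and do not reproduce the thesis's actual FPGA implementation.

import numpy as np

def mvm_unit(weights, bias, vector):
    # Models one matrix-vector multiplier on the NoC: y = ReLU(W x + b).
    return np.maximum(weights @ vector + bias, 0.0)

def run_mlp(layers, x):
    # Pass the activation vector through each unit in turn, standing in for
    # the NoC forwarding one layer's output to the next layer's multiplier.
    for weights, bias in layers:
        x = mvm_unit(weights, bias, x)
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 4]  # illustrative layer widths, not taken from the thesis
layers = [(rng.standard_normal((n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]
print(run_mlp(layers, rng.standard_normal(sizes[0])))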


Details

Contributors
Date Created
2024-12
Resource Type

Additional Information

Language
  • English
Series
  • Academic Year 2024-2025
Extent
  • 28 pages
Open Access
Peer-reviewed