
Scalable hardware solutions for machine learning: FPGA implementation and methodology analysis

Time: Mon 2024-12-16 13.00

Location: Kista Sal C, Kistagången 16, 164 40 Kista

Video link: https://kth-se.zoom.us/j/66637479462

Language: English

Subject area: Electrical Engineering

Doctoral student: Gracieth Batista, Electronics and Embedded Systems, Aeronautics Institute of Technology, São José dos Campos, Brazil

Opponent: Associate Professor Peeter Ellervee, Tallinn University of Technology, Tallinn, Estonia

Supervisor: Professor Carl-Mikael Zetterling, Electronics and Embedded Systems; Professor Osamu Saotome, Aeronautics Institute of Technology, São José dos Campos, Brazil; Johnny Öberg, Electronics and Embedded Systems


QC 20241122

Abstract

Artificial Intelligence (AI) has evolved into a multifaceted field with significant developments across many domains, with Machine Learning (ML) playing a pivotal role. This thesis focuses on the intersection of ML and specialized hardware, specifically Field-Programmable Gate Arrays (FPGAs), to address modern challenges in performance, power efficiency, and real-time processing. ML models, especially Support Vector Machines (SVMs), are well suited to hardware implementation owing to their computational simplicity and efficiency; however, balancing performance against power consumption remains difficult, particularly in low-power applications. ML applications increasingly depend on the interaction between hardware and software to maximize efficiency, minimize latency, and support real-time learning. Using SVMs and their regression counterpart, Support Vector Regression (SVR), as a proof of concept, this work addresses critical challenges in deploying ML on resource-constrained hardware. The research introduces several techniques, including model optimization, pruning, quantization, and dynamic partial reconfiguration (DPR) on FPGAs. These methods aim to enhance performance while keeping power consumption low, making them well suited to low-power settings such as edge computing. Three case studies are explored: an SVM-based speech recognition system, an SVR-DPR architecture for image edge detection, and an outlier detection system for structural health monitoring. Each case study demonstrates the feasibility and advantages of FPGA-based ML implementations in real-world applications.
The findings of this thesis contribute to the development of scalable, energy-efficient hardware solutions for ML, offering flexible and high-performance systems that address the computational demands of modern AI applications.
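To give a flavor of the quantization technique mentioned in the abstract, the sketch below shows one common approach when mapping an SVM to fixed-point FPGA arithmetic: rounding a linear SVM's weights and bias to signed 8-bit fixed-point values. This is an illustrative example, not code or parameters from the thesis; the weights, the Q1.6 format, and the sample input are all invented for demonstration.

```python
# Illustrative sketch (not from the thesis): quantizing a linear SVM's
# weights and bias to signed 8-bit fixed point (Q1.6), as is commonly
# done before deploying a model in FPGA hardware.

def quantize(values, frac_bits=6):
    """Map floats to signed 8-bit fixed-point integers, saturating at the range limits."""
    scale = 1 << frac_bits
    return [max(-128, min(127, round(v * scale))) for v in values]

def dequantize(ints, frac_bits=6):
    """Recover the float values represented by the fixed-point integers."""
    scale = 1 << frac_bits
    return [i / scale for i in ints]

def decide(x, w, b):
    """Sign of the linear SVM decision function w . x + b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

# Hypothetical trained parameters and input sample.
weights, bias = [0.73, -0.41, 0.05], -0.2
x = [1.0, 0.5, -0.3]

q_w = quantize(weights)
q_b = quantize([bias])[0]

# The float model and its fixed-point approximation should usually agree;
# the thesis-level question is how much precision can be dropped before they don't.
print(decide(x, weights, bias), decide(x, dequantize(q_w), dequantize([q_b])[0]))
```

In hardware, only the integer representations (`q_w`, `q_b`) would be stored and multiplied, trading a small, bounded rounding error for much cheaper arithmetic and lower power.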

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-356753