Improved design for hardware implementation of graph-based large margin classifiers for embedded edge computing

dc.creatorJanier Arias García
dc.creatorAlan Cândido de Souza
dc.creatorLiliane dos Reis Gade
dc.creatorJones Y. Mori
dc.creatorFrederico Coelho
dc.creatorCristiano Leite de Castro
dc.creatorLuiz Carlos Bambirra Torres
dc.creatorAntonio de Padua Braga
dc.date.accessioned2025-06-04T14:03:38Z
dc.date.available2025-06-04T14:03:38Z
dc.date.issued2022
dc.identifier.doihttps://doi.org/10.1109/TNNLS.2022.3183236
dc.identifier.issn2162-237X
dc.identifier.urihttps://hdl.handle.net/1843/82773
dc.languageeng
dc.publisherUniversidade Federal de Minas Gerais
dc.relation.ispartofIEEE Transactions on Neural Networks and Learning Systems
dc.rightsRestricted Access
dc.subjectInternet of Things
dc.subject.otherHardware, Optimization, Embedded systems, Computational modeling, Numerical models, Internet of Things, Field programmable gate arrays
dc.subject.otherClassifiers, edge computing, embedded system, FPGA, Gabriel graph, hardware design of neural networks, Internet of Things (IoT), large margin, latency-sensitive applications, system-on-a-chip
dc.subject.otherEdge Computing, Hardware Implementation, Classifier Implementation, Limited Resources, Learning Algorithms, Internet Of Things, Internet Of Things Devices, Hardware Architecture, Training Set, Computational Cost, Support Vector Machine, Distance Matrix, Power Consumption, Parallelization, System Architecture, Resource Consumption, Samples In Order, Time Slot, Distance Calculation, Mahalanobis Distance, Clock Cycles, Hardware Accelerators, ARM Processor, Offline Learning, Numerical Accuracy, Impact Of Reduction, Single Precision, Hardware Resources, Support Vector Machine Training, Scalable
dc.titleImproved design for hardware implementation of graph-based large margin classifiers for embedded edge computing
dc.typeJournal article
local.citation.spagexx
local.citation.volumexx
local.description.resumoThe number of connected embedded edge computing Internet of Things (IoT) devices has grown steadily over the years, driving a significant increase in the data available across different scenarios. Machine learning algorithms have thus emerged to enable task automation and process optimization based on these data. However, the computational complexity of some learning methods that implement geometric classifiers makes it challenging to map them onto embedded systems or devices with limited size, processing power, memory, and energy, which hampers the applicability of these methods to complex industrial embedded edge applications. This work evaluates strategies for reducing the implementation cost of classifiers based on the CHIP-clas model, independently of hyperparameter tuning and optimization algorithms. The proposal evaluates the tradeoff between numerical precision and model performance and analyzes hardware implementations of a distance-based classifier. Two 16-bit floating-point formats were compared against the 32-bit floating-point implementation, and a new hardware architecture was developed and compared with the state-of-the-art reference. The results indicate that the model is robust to low-precision computation, yielding statistically equivalent results relative to the baseline model while achieving a global speed-up factor of approximately 4.39 in processing time.
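The abstract's central technical question is whether the distance computations at the core of a distance-based classifier tolerate 16-bit floating point. Below is a minimal NumPy sketch of that kind of precision comparison; it uses a generic nearest-support-point decision rule and synthetic data as stand-ins (this is an illustrative assumption, not the paper's CHIP-clas code or its hardware implementation):

import numpy as np

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 8)).astype(np.float32)  # support points (hypothetical data)
y_train = rng.integers(0, 2, 200)                           # binary class labels
X_test = rng.standard_normal((50, 8)).astype(np.float32)

def predict(X, supports, labels, dtype):
    # Cast queries and support points to the target precision,
    # compute the squared Euclidean distance matrix, and assign
    # each query the label of its nearest support point.
    Xq = X.astype(dtype)
    S = supports.astype(dtype)
    d2 = ((Xq[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)
    return labels[np.argmin(d2, axis=1)]

p32 = predict(X_test, X_train, y_train, np.float32)  # 32-bit baseline
p16 = predict(X_test, X_train, y_train, np.float16)  # IEEE half precision
print("fraction of predictions unchanged at 16-bit:", (p32 == p16).mean())

Agreement close to 1.0 in a sketch like this mirrors the paper's finding that the classifier is robust to reduced-precision distance computation; the paper itself performs the comparison statistically and on FPGA hardware rather than in NumPy.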
local.publisher.countryBrazil
local.publisher.departmentENG - DEPARTAMENTO DE ENGENHARIA ELETRÔNICA
local.publisher.initialsUFMG
local.url.externahttps://ieeexplore.ieee.org/document/9805692

Files

Package license

Name: License.txt
Size: 1.99 KB
Format: Plain Text