Improving the efficiency of Gabriel graph-based classifiers for hardware-optimized implementations

Publisher

Universidade Federal de Minas Gerais

Type

Conference paper

Abstract

This work evaluates strategies to reduce the computational cost of Gabriel graph-based classifiers in order to make them more suitable for hardware implementation. An analysis of the impact of bit precision provides insight into the model's robustness under lower-precision arithmetic. Additionally, a parallelization technique is proposed to improve the efficiency of the support-edge computation. The results show that the lower-bit-precision models are statistically equivalent to the double-precision reference ones. Moreover, the proposed parallel algorithm yields a significant reduction in running time on large datasets while maintaining accuracy.
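The support edges the abstract refers to can be illustrated with a minimal sketch. This is not the paper's implementation: the function name is hypothetical and the brute-force pairwise check is only the standard Gabriel graph definition (an edge (i, j) exists when no third point lies inside the ball whose diameter is the segment between points i and j); support edges are the Gabriel edges joining points of different classes.

```python
import numpy as np

def gabriel_support_edges(X, y):
    """Illustrative brute-force sketch, not the paper's algorithm.

    Edge (i, j) is in the Gabriel graph iff for every other point k:
        d2(i, k) + d2(j, k) >= d2(i, j),
    i.e. no k falls strictly inside the ball with diameter (x_i, x_j).
    Support edges are Gabriel edges whose endpoints differ in class.
    """
    X = np.asarray(X, dtype=np.float64)
    n = len(X)
    # Pairwise squared Euclidean distances.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            others = [k for k in range(n) if k not in (i, j)]
            # Each (i, j) test is independent of every other pair,
            # which is what makes this step amenable to the kind of
            # parallelization the paper proposes.
            if np.all(d2[i, others] + d2[j, others] >= d2[i, j]):
                if y[i] != y[j]:  # endpoints of different classes
                    edges.append((i, j))
    return edges

# Robustness to lower bit precision can be probed by repeating the
# computation with X cast to np.float32 (or a fixed-point emulation)
# and comparing the resulting edge set against the float64 reference.
```

For example, for a triangle with two class-0 vertices and one class-1 vertex, the two mixed-label sides come out as the support edges.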

Subject

Computers (Engineering)

Keywords

Hardware, Field programmable gate arrays, Computational modeling, Numerical models, Data models, Analytical models, Parallel processing, Classification, Gabriel graph, FPGA, numerical representation, parallel computing, Running Time, Analysis Of The Impact, Precise Model, Low Precision, Hardware Implementation, Least Significant Bit, Parallel Algorithm, Double Precision, Parallel Technique, Machine Learning, Model Performance, Training Data, Exponential Growth, Average Results, Graphics Processing Unit, Class Labels, Resource Consumption, Hyperplane, Reference Model, Graph Structure, Hardware Architecture, Vertex Degree, Single Precision, Filtering Techniques

External URL

https://ieeexplore.ieee.org/document/8730227
