Learning regularization parameters of radial basis functions in embedded likelihoods space


Publisher

Universidade Federal de Minas Gerais

Type

Conference paper

Abstract

Neural networks with radial basis activation functions are typically trained in two phases: the first constructs the hidden layer, while the second finds the output-layer weights. Constructing the hidden layer involves defining the number of units as well as their centers and widths. The output layer can then be trained with least squares methods, usually with a regularization term. This work proposes an approach for building the whole network using information extracted directly from the training data projected into the space formed by the likelihood functions. RBF networks for pattern classification can thus be trained with minimal external intervention.
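
The sketch below illustrates only the conventional two-phase RBF training that the abstract describes, not the paper's method: the random center selection, the median-distance width heuristic, and the fixed regularization value `lam` are illustrative assumptions, whereas the paper derives such parameters from the training data projected into the likelihoods space.

```python
# Minimal sketch of two-phase RBF-network training, assuming NumPy only.
# Phase 1 builds the hidden layer (centers and a shared width); phase 2
# solves a regularized least squares problem for the output weights.
import numpy as np

def train_rbf(X, y, n_centers=20, lam=1e-2, seed=0):
    rng = np.random.default_rng(seed)

    # Phase 1: pick centers (here: random training samples, an assumption)
    # and set a common Gaussian width from the median inter-center distance.
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    sigma = np.median(d[d > 0])

    # Hidden-layer design matrix of Gaussian activations.
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2
               / (2.0 * sigma ** 2))

    # Phase 2: output weights by regularized (ridge) least squares,
    # w = (H^T H + lam * I)^{-1} H^T y.
    w = np.linalg.solve(H.T @ H + lam * np.eye(n_centers), H.T @ y)
    return centers, sigma, w

def predict_rbf(X, centers, sigma, w):
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2
               / (2.0 * sigma ** 2))
    return H @ w
```

For binary classification, `y` can hold ±1 labels and the sign of `predict_rbf` gives the predicted class; the approach proposed in the paper replaces the hand-set `n_centers` and `lam` with values inferred from the likelihood-space projection of the training data.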

Subject

Neural networks (Computing)

Keywords

For valid generalization the size of the weights is more important than the size of the network; Orthogonal least squares learning algorithm for radial basis function networks; Statistical comparisons of classifiers over multiple data sets

External link

https://link.springer.com/chapter/10.1007/978-3-030-30244-3_24
