Parallel training considered harmful? Comparing series-parallel and parallel feedforward network training

dc.creator: Antônio Horta Ribeiro
dc.creator: Luis Antonio Aguirre
dc.date.accessioned: 2025-04-24T18:03:04Z
dc.date.accessioned: 2025-09-09T00:37:22Z
dc.date.available: 2025-04-24T18:03:04Z
dc.date.issued: 2018
dc.identifier.doi: https://doi.org/10.1016/j.neucom.2018.07.071
dc.identifier.issn: 0925-2312
dc.identifier.uri: https://hdl.handle.net/1843/81817
dc.language: eng
dc.publisher: Universidade Federal de Minas Gerais
dc.relation.ispartof: Neurocomputing
dc.rights: Restricted Access
dc.subject: Neural networks (Computer science)
dc.subject.other: Neural network
dc.subject.other: Parallel training
dc.subject.other: Series-parallel training
dc.subject.other: System identification
dc.subject.other: Output error models
dc.title: Parallel training considered harmful? Comparing series-parallel and parallel feedforward network training
dc.type: Journal article
local.citation.epage: 231
local.citation.spage: 222
local.citation.volume: 316
local.description.resumo: Neural network models of dynamic systems can be trained in either a parallel or a series-parallel configuration. Influenced by early arguments, several papers justify choosing the series-parallel over the parallel configuration, claiming that it has lower computational cost and better stability properties during training, and that it provides more accurate results. Other published results, on the other hand, defend parallel training as more robust and capable of yielding more accurate long-term predictions. The main contribution of this paper is a study comparing both methods within the same unified framework, with special attention to three aspects: (i) robustness of the estimation in the presence of noise; (ii) computational cost; and (iii) convergence. A unifying mathematical framework and simulation studies show situations where each training method provides superior validation results, and suggest that parallel training is generally better in more realistic scenarios. An example using measured data appears to reinforce this claim. Complexity analysis and numerical examples show that both methods have similar computational cost, although series-parallel training is more amenable to parallelization. An informal discussion of stability and convergence properties is presented and explored in the examples.
local.publisher.country: Brazil
local.publisher.department: ENG - DEPARTAMENTO DE ENGENHARIA ELETRÔNICA
local.publisher.initials: UFMG
local.url.externa: https://www.sciencedirect.com/science/article/pii/S0925231218309068
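
The distinction drawn in the abstract can be made concrete in code. Below is a minimal sketch, not taken from the paper, contrasting the two configurations for a first-order, single-input model: in the series-parallel (one-step-ahead) configuration the measured output y[k-1] enters the regressor, while in the parallel (free-run, output-error) configuration the model feeds back its own past prediction. The network architecture, function names, and parameter shapes are illustrative assumptions.

import numpy as np

def mlp(x, W1, b1, W2, b2):
    # One-hidden-layer feedforward network with tanh hidden units.
    return (W2 @ np.tanh(W1 @ x + b1) + b2).item()

def series_parallel_prediction(y, u, params):
    # One-step-ahead prediction: the *measured* output y[k-1] is fed
    # back into the regressor, so the prediction error at each step
    # depends only on that step's parameters and data.
    W1, b1, W2, b2 = params
    y_hat = np.empty_like(y)
    y_hat[0] = y[0]
    for k in range(1, len(y)):
        y_hat[k] = mlp(np.array([y[k - 1], u[k - 1]]), W1, b1, W2, b2)
    return y_hat

def parallel_prediction(y0, u, params):
    # Free-run simulation: the *model's own* past output is fed back,
    # so errors propagate through time. Parallel (output-error)
    # training minimizes the error of this simulation.
    W1, b1, W2, b2 = params
    y_hat = np.empty(len(u))
    y_hat[0] = y0
    for k in range(1, len(u)):
        y_hat[k] = mlp(np.array([y_hat[k - 1], u[k - 1]]), W1, b1, W2, b2)
    return y_hat

# Illustrative parameters: 2 regressors -> 5 hidden units -> 1 output,
# with hypothetical random input/output sequences.
rng = np.random.default_rng(0)
params = (rng.normal(size=(5, 2)), rng.normal(size=5),
          rng.normal(size=(1, 5)), rng.normal(size=1))
u = rng.normal(size=50)
y = rng.normal(size=50)
osa = series_parallel_prediction(y, u, params)
sim = parallel_prediction(y[0], u, params)

Because y_hat[k] in the parallel configuration depends on y_hat[k-1], gradients must propagate through the entire simulation (as in backpropagation through time), which is one source of the differences in computational cost, stability, and convergence that the abstract discusses.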
