Please use this identifier to cite or link to this item: http://hdl.handle.net/1843/61019
Full metadata record
DC Field: Value [Language]
dc.creator: André L. S. Meirelles [pt_BR]
dc.creator: Tahsin Kurc [pt_BR]
dc.creator: Jun Kong [pt_BR]
dc.creator: Renato Antonio Celso Ferreira [pt_BR]
dc.creator: Joel H. Saltz [pt_BR]
dc.creator: George Teodoro [pt_BR]
dc.date.accessioned: 2023-11-16T21:23:05Z
dc.date.available: 2023-11-16T21:23:05Z
dc.date.issued: 2022-05-31
dc.citation.volume: 9 [pt_BR]
dc.citation.spage: 894430 [pt_BR]
dc.citation.epage: 10 [pt_BR]
dc.identifier.doi: https://doi.org/10.3389/fmed.2022.894430 [pt_BR]
dc.identifier.issn: 2296-858X [pt_BR]
dc.identifier.uri: http://hdl.handle.net/1843/61019
dc.description.resumo: Background: Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they are computationally very demanding. The aim of our study is to reduce their computational cost to enable their use with large tissue image datasets. Methods: We propose a method called Network Auto-Reduction (NAR) that simplifies a Convolutional Neural Network (CNN) by shrinking the network to minimize the computational cost of a prediction. NAR performs a compound scaling in which the width, depth, and resolution dimensions of the network are reduced together to maintain a balance among them in the resulting simplified network. We compare our method with a state-of-the-art solution called ResRep. The evaluation is carried out with popular CNN architectures and a real-world application that identifies distributions of tumor-infiltrating lymphocytes in tissue images. Results: The experimental results show that both ResRep and NAR are able to generate simplified, more efficient versions of ResNet50 V2. The simplified versions produced by ResRep and NAR require 1.32× and 3.26× fewer floating-point operations (FLOPs), respectively, than the original network, without a loss in classification power as measured by the Area under the Curve (AUC) metric. When applied to a deeper and more computationally expensive network, Inception V4, NAR generates a version that requires 4× fewer FLOPs than the original while matching its AUC performance. Conclusions: NAR achieves substantial reductions in the execution cost of two popular CNN architectures with little or no loss in model accuracy. Such cost savings can significantly improve the use of deep learning methods in digital pathology. They enable studies with larger tissue image datasets and facilitate the use of less expensive and more accessible graphics processing units (GPUs), thus reducing the computing costs of a study. [pt_BR]
dc.description.sponsorship: CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico [pt_BR]
dc.description.sponsorship: FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais [pt_BR]
dc.description.sponsorship: CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior [pt_BR]
dc.format.mimetype: pdf [pt_BR]
dc.language: eng [pt_BR]
dc.publisher: Universidade Federal de Minas Gerais [pt_BR]
dc.publisher.country: Brasil [pt_BR]
dc.publisher.department: ICEX - INSTITUTO DE CIÊNCIAS EXATAS [pt_BR]
dc.publisher.department: ICX - DEPARTAMENTO DE CIÊNCIA DA COMPUTAÇÃO [pt_BR]
dc.publisher.initials: UFMG [pt_BR]
dc.relation.ispartof: Frontiers in Medicine [pt_BR]
dc.rights: Acesso Aberto (Open Access) [pt_BR]
dc.subject: Digital pathology [pt_BR]
dc.subject: Deep learning [pt_BR]
dc.subject: CNN simplification [pt_BR]
dc.subject: Tumor-infiltrating lymphocytes [pt_BR]
dc.subject: Efficient CNNs [pt_BR]
dc.subject.other: Aprendizado do computador (Machine learning) [pt_BR]
dc.subject.other: Tumores (Tumors) [pt_BR]
dc.subject.other: Linfócitos (Lymphocytes) [pt_BR]
dc.subject.other: Aprendizado profundo (Deep learning) [pt_BR]
dc.title: Building Efficient CNN Architectures for Histopathology Images Analysis: A Case-Study in Tumor-Infiltrating Lymphocytes Classification [pt_BR]
dc.type: Artigo de Periódico (Journal Article) [pt_BR]
dc.url.externa: https://www.frontiersin.org/articles/10.3389/fmed.2022.894430/full [pt_BR]
dc.identifier.orcid: https://orcid.org/0000-0002-4372-8996 [pt_BR]
Appears in Collections: Artigo de Periódico
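
Note: the abstract above describes NAR as a compound scaling in which depth, width, and input resolution are reduced together to reach a target computational cost. As a rough illustration only, the Python sketch below estimates reduced dimensions for a desired FLOP reduction; the coefficients alpha, beta, gamma and the ResNet50-like example values are assumptions borrowed from the EfficientNet compound-scaling convention, not the authors' NAR implementation.

# Hedged sketch of compound down-scaling, assuming EfficientNet-style
# coefficients; not the authors' NAR code.
import math

def compound_reduce(depth, width, resolution, flop_reduction,
                    alpha=1.2, beta=1.1, gamma=1.15):
    """Return reduced (depth, width, resolution) whose estimated FLOP cost
    is roughly `flop_reduction` times smaller than the original.

    FLOPs of a convolutional network grow roughly as
    depth * width^2 * resolution^2, so scaling the three dimensions by
    alpha^-phi, beta^-phi, gamma^-phi shrinks FLOPs by about
    (alpha * beta^2 * gamma^2)^phi.
    """
    base = alpha * beta**2 * gamma**2            # FLOP growth per unit phi
    phi = math.log(flop_reduction) / math.log(base)
    new_depth = max(1, round(depth * alpha**(-phi)))
    new_width = max(1, round(width * beta**(-phi)))
    new_resolution = max(32, round(resolution * gamma**(-phi)))
    return new_depth, new_width, new_resolution

# Example: shrink a ResNet50-like configuration to about 1/3.26 of its FLOPs,
# comparable to the reduction factor reported in the abstract.
print(compound_reduce(depth=50, width=64, resolution=224, flop_reduction=3.26))

Because the three dimensions are scaled jointly rather than pruning any single one, the reduced configuration keeps the balance among depth, width, and resolution that the abstract emphasizes.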
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.