Please use this identifier to cite or link to this item: http://hdl.handle.net/1843/JCES-AREGGR
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor1: Eduardo Magno Lages Figueiredo (pt_BR)
dc.contributor.referee1: Humberto Torres Marques Neto (pt_BR)
dc.contributor.referee2: Marco Tulio de Oliveira Valente (pt_BR)
dc.creator: Thanis Fernandes Paiva (pt_BR)
dc.date.accessioned: 2019-08-10T04:18:45Z
dc.date.available: 2019-08-10T04:18:45Z
dc.date.issued: 2017-08-11 (pt_BR)
dc.identifier.uri: http://hdl.handle.net/1843/JCES-AREGGR
dc.description.resumo: Code smells are code fragments that can hinder the evolution and maintenance of software systems. Their detection is a challenge for developers, and their informal definitions have led to many different detection techniques and tools. This dissertation investigates the presence and evolution of code smells in two software systems, namely MobileMedia and Health Watcher. We also evaluated and compared four code smell detection tools, namely inFusion, JDeodorant, PMD, and JSpIRIT, using five open source projects, namely ANTLR, ArgoUML, JFreeChart, JSPWiki, and JUnit. The tools were applied to all seven projects to calculate their agreement and accuracy. We calculated the recall and precision of each tool in the detection of three code smells: God Class, God Method, and Feature Envy. To calculate recall and precision, we created code smell reference lists by manually analyzing the source code and also using an automatic approach. Agreement was calculated among all tools and between pairs of tools, considering percentage agreement, chance-corrected agreement, and non-occurrence and occurrence agreement. The results were analyzed to answer research questions related to the evolution of code smells and the comparison of detection tools in terms of recall, precision, and agreement. Our main findings include that, in general, code smells are present from the moment a class or method is created, in 74.4% of the cases in MobileMedia and 87.5% in Health Watcher. We also found that the evaluated tools present different recall and precision values: for God Class and Feature Envy, inFusion has the lowest recall and highest precision, while JDeodorant has the lowest precision for God Class and God Method in all target systems. Regarding agreement, we found high averages of over 90% for percentage, chance-corrected, and non-occurrence agreement, confirming that tools largely agree on classes and methods without code smells, regardless of differences in their detection techniques. On the other hand, we found lower values for occurrence agreement between pairs of tools, ranging from 0.38% to 64.56%, confirming that despite similarities in the detection techniques, each tool reports very different sets of classes and methods as code smells. (A sketch of how these precision, recall, and agreement metrics can be computed appears after this record.) (pt_BR)
dc.language: English (pt_BR)
dc.publisher: Universidade Federal de Minas Gerais (pt_BR)
dc.publisher.initials: UFMG (pt_BR)
dc.rights: Open Access (pt_BR)
dc.subject: Software metrics (pt_BR)
dc.subject: Code anomalies (pt_BR)
dc.subject: Detection tools (pt_BR)
dc.subject.other: Code smells (pt_BR)
dc.subject.other: Tools (Computing) (pt_BR)
dc.subject.other: Computing (pt_BR)
dc.subject.other: Software quality (pt_BR)
dc.title: On the evaluation of code smells and detection tools (pt_BR)
dc.type: Master's Dissertation (pt_BR)
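
The abstract above reports precision and recall against manually built reference lists, plus percentage, chance-corrected, occurrence, and non-occurrence agreement between pairs of tools. The following is a minimal sketch, not taken from the dissertation, of how such metrics are commonly computed; the entity names, tool reports, and function names are hypothetical, and the chance-corrected measure is assumed to be Cohen's kappa.

```python
# Sketch of precision/recall against a reference list and pairwise agreement
# between two code smell detection tools. All inputs below are hypothetical.

def precision_recall(reported, reference):
    """Precision and recall of one tool's report against a reference list."""
    reported, reference = set(reported), set(reference)
    tp = len(reported & reference)                        # correctly reported smells
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

def pairwise_agreement(tool_a, tool_b, all_entities):
    """Percentage, chance-corrected (Cohen's kappa), occurrence, and
    non-occurrence agreement between two tools over the same entities."""
    a, b = set(tool_a), set(tool_b)
    both = sum(1 for e in all_entities if e in a and e in b)          # smell / smell
    neither = sum(1 for e in all_entities if e not in a and e not in b)
    only_a = sum(1 for e in all_entities if e in a and e not in b)
    only_b = sum(1 for e in all_entities if e not in a and e in b)
    n = both + neither + only_a + only_b
    percent = (both + neither) / n
    # Agreement expected by chance, from each tool's marginal reporting rates.
    p_yes = ((both + only_a) / n) * ((both + only_b) / n)
    p_no = ((neither + only_b) / n) * ((neither + only_a) / n)
    expected = p_yes + p_no
    kappa = (percent - expected) / (1 - expected) if expected < 1 else 1.0
    # Specific agreement on occurrences and on non-occurrences.
    occurrence = 2 * both / (2 * both + only_a + only_b) if (both + only_a + only_b) else 0.0
    non_occurrence = 2 * neither / (2 * neither + only_a + only_b) if (neither + only_a + only_b) else 0.0
    return percent, kappa, occurrence, non_occurrence

# Hypothetical God Class reports from two tools and a manual reference list.
entities = [f"Class{i}" for i in range(1, 21)]
tool_a = {"Class1", "Class2", "Class5"}
tool_b = {"Class1", "Class3"}
reference = {"Class1", "Class2"}

print(precision_recall(tool_a, reference))            # ~ (0.667, 1.0)
print(pairwise_agreement(tool_a, tool_b, entities))   # ~ (0.85, 0.32, 0.40, 0.91)
```

In this toy example the tools agree on most non-smelly classes (high percentage and non-occurrence agreement) but overlap on only one reported smell (low occurrence agreement), which mirrors the pattern described in the abstract.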
Appears in Collections: Dissertações de Mestrado (Master's Dissertations)

Files in This Item:
File               Description   Size      Format
thanis_paiva.pdf                 1.49 MB   Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.