Please use this identifier to cite or link to this item: http://hdl.handle.net/1843/ESBF-B5UMFW
Full metadata record
DC Field | Value | Language
dc.contributor.advisor1 | Eduardo Magno Lages Figueiredo | pt_BR
dc.contributor.advisor-co1 | Elder Jose Reioli Cirilo | pt_BR
dc.contributor.referee1 | Elder Jose Reioli Cirilo | pt_BR
dc.contributor.referee2 | Kecia Aline Marques Ferreira | pt_BR
dc.contributor.referee3 | Marco Tulio de Oliveira Valente | pt_BR
dc.creator | Allan Victor Mori | pt_BR
dc.date.accessioned | 2019-08-10T14:14:56Z | -
dc.date.available | 2019-08-10T14:14:56Z | -
dc.date.issued | 2018-08-24 | pt_BR
dc.identifier.uri | http://hdl.handle.net/1843/ESBF-B5UMFW | -
dc.description.abstract | Software metrics provide means to quantify several attributes of software systems. Effective measurement depends on appropriate metric thresholds, since thresholds allow characterizing the quality of software systems. Indeed, thresholds have been used for detecting a variety of software anomalies. Previous methods to derive metric thresholds do not take characteristics of software domains into account, such as the differences in size and complexity between systems from different domains. Instead, they rely on generic thresholds derived from heterogeneous systems. Although the derivation of reliable thresholds has long been a concern, we also lack empirical evidence about threshold variation across distinct software domains. This work proposes a method to derive domain-sensitive thresholds that respects metric statistics and is based on benchmarks of systems from the same domain. The proposed method is supported by a software tool that helps developers write better code from the start by providing a view with class metrics and warnings that take the system domain into account. To evaluate our method, we performed evaluations with desktop and mobile systems. The first evaluation investigates whether and how thresholds vary across domains through a large-scale study of 3,107 software systems from 15 desktop domains. For the second evaluation, we manually mined one hundred mobile applications from GitHub. We measured all these systems using a set of metrics, derived thresholds, and validated the thresholds through qualitative and quantitative analyses. As a result, we observed that our method produces more reliable thresholds when the software domain is considered as a factor in building benchmarks for threshold derivation. Moreover, in the desktop evaluation, we also observed that domain-specific metric thresholds are more appropriate than generic ones for code smell detection. | pt_BR
dc.language | Inglês (English) | pt_BR
dc.publisher | Universidade Federal de Minas Gerais | pt_BR
dc.publisher.initials | UFMG | pt_BR
dc.rights | Acesso Aberto (Open Access) | pt_BR
dc.subject | Metrics Thresholds | pt_BR
dc.subject | Software Engineering | pt_BR
dc.subject | Software Domains | pt_BR
dc.subject.other | Computação (Computing) | pt_BR
dc.subject.other | Engenharia de software (Software engineering) | pt_BR
dc.subject.other | Domínios de Software (Software domains) | pt_BR
dc.title | Design and Evaluation of a Method to Derive Domain Metric Thresholds | pt_BR
dc.type | Dissertação de Mestrado (Master's Dissertation) | pt_BR
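
The abstract above describes deriving metric thresholds from benchmarks of systems in the same domain, respecting the statistical distribution of each metric. As a rough, hypothetical illustration only (not the dissertation's actual procedure), the Python sketch below derives a percentile-based threshold for a single metric from pooled class-level measurements of one domain. The 90th-percentile cut, the metric name (WMC), the helper names derive_threshold and domain_benchmark, and all numbers are illustrative assumptions, not values from the work.

    # Hypothetical sketch, not the dissertation's actual algorithm:
    # derive one domain-specific threshold for one metric by pooling
    # class-level measurements from a benchmark of same-domain systems.
    from statistics import quantiles

    def derive_threshold(metric_values, percentile=90):
        # Cut the pooled distribution into 100 slices and return the
        # value below which `percentile` percent of classes fall.
        cuts = quantiles(metric_values, n=100)
        return cuts[percentile - 1]

    # Illustrative benchmark: per-class WMC values for three systems
    # from the same (hypothetical) domain; the numbers are made up.
    domain_benchmark = {
        "system_a": [3, 5, 8, 12, 40],
        "system_b": [2, 4, 4, 9, 25],
        "system_c": [1, 6, 7, 15, 33],
    }

    # Pool all class-level observations across the domain's systems.
    pooled = [v for values in domain_benchmark.values() for v in values]

    print("Domain-specific WMC threshold:", derive_threshold(pooled))

A real derivation would repeat this per metric and per domain; a warning view like the tool described in the abstract could then flag any class whose metric value exceeds the threshold of its system's domain.
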
Appears in Collections: Dissertações de Mestrado (Master's Dissertations)

Files in This Item:
File | Description | Size | Format
allanvictormori.pdf | - | 1.28 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.