Please use this identifier to cite or link to this item:
http://hdl.handle.net/1843/ESBF-B5UMFW
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor1 | Eduardo Magno Lages Figueiredo | pt_BR |
dc.contributor.advisor-co1 | Elder Jose Reioli Cirilo | pt_BR |
dc.contributor.referee1 | Elder Jose Reioli Cirilo | pt_BR |
dc.contributor.referee2 | Kecia Aline Marques Ferreira | pt_BR |
dc.contributor.referee3 | Marco Tulio de Oliveira Valente | pt_BR |
dc.creator | Allan Victor Mori | pt_BR |
dc.date.accessioned | 2019-08-10T14:14:56Z | - |
dc.date.available | 2019-08-10T14:14:56Z | - |
dc.date.issued | 2018-08-24 | pt_BR |
dc.identifier.uri | http://hdl.handle.net/1843/ESBF-B5UMFW | - |
dc.description.abstract | Software metrics provide means to quantify several attributes of software systems. Effective measurement depends on appropriate metric thresholds, as they allow characterizing the quality of software systems. Indeed, thresholds have been used for detecting a variety of software anomalies. Previous methods to derive metric thresholds do not take characteristics of software domains into account, such as the differences in size and complexity between systems from different domains. Instead, they rely on generic thresholds derived from heterogeneous systems. Although the derivation of reliable thresholds has long been a concern, we also lack empirical evidence about threshold variation across distinct software domains. This work proposes a method to derive domain-sensitive thresholds that respects metric statistics and is based on benchmarks of systems from the same domain. The proposed method is supported by a software tool that helps developers write better code from the beginning by providing a view with class metrics and warnings that consider the system domain. To evaluate our method, we performed evaluations with desktop and mobile systems. The first evaluation investigates whether and how thresholds vary across domains through a large-scale study of 3,107 software systems from 15 desktop domains. For the second evaluation, we manually mined one hundred mobile applications from GitHub. We measured all these systems using a set of metrics, derived thresholds, and validated them through qualitative and quantitative analyses. As a result, we observed that our method yields more reliable thresholds when software domain is considered as a factor in building benchmarks for threshold derivation. Moreover, in the desktop evaluation, we also observed that domain-specific metric thresholds are more appropriate than generic ones for code smell detection. | pt_BR |
dc.language | English | pt_BR |
dc.publisher | Universidade Federal de Minas Gerais | pt_BR |
dc.publisher.initials | UFMG | pt_BR |
dc.rights | Acesso Aberto (Open Access) | pt_BR |
dc.subject | Metrics Thresholds | pt_BR |
dc.subject | Software Engineering | pt_BR |
dc.subject | Software Domains | pt_BR |
dc.subject.other | Computação (Computing) | pt_BR |
dc.subject.other | Engenharia de software (Software Engineering) | pt_BR |
dc.subject.other | Domínios de Software (Software Domains) | pt_BR |
dc.title | Design and Evaluation of a Method to Derive Domain Metric Thresholds | pt_BR |
dc.type | Dissertação de Mestrado (Master's Dissertation) | pt_BR |
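The abstract above describes deriving metric thresholds from a benchmark of systems belonging to the same domain, in a way that respects metric statistics. As a point of reference for that idea, the sketch below shows a common percentile-based derivation scheme; the function name, the chosen percentiles, and the procedure itself are illustrative assumptions, since the abstract does not specify the dissertation's exact statistical method.

```python
# Hypothetical sketch: derive metric thresholds from a benchmark of
# same-domain systems using percentiles. The percentile levels (70/80/90)
# and the nearest-rank method are assumptions for illustration only;
# the dissertation's actual statistical procedure is not given above.

def derive_thresholds(metric_values, percentiles=(70, 80, 90)):
    """Return (moderate, high, very_high) thresholds for one metric,
    computed over all class-level measurements in the benchmark."""
    values = sorted(metric_values)
    n = len(values)
    thresholds = []
    for p in percentiles:
        # Nearest-rank percentile: index of the value at the p-th percentile.
        rank = max(0, min(n - 1, round(p / 100 * n) - 1))
        thresholds.append(values[rank])
    return tuple(thresholds)

# Example: lines-of-code per class across a toy domain benchmark.
loc = [12, 30, 45, 60, 75, 90, 120, 150, 300, 800]
moderate, high, very_high = derive_thresholds(loc)  # → (120, 150, 300)
```

Building one benchmark per domain and running such a derivation separately for each would make the resulting thresholds domain-sensitive, which is the contrast the abstract draws against generic thresholds derived from heterogeneous systems.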
Appears in Collections: | Dissertações de Mestrado |
Files in This Item:
File | Description | Size | Format
---|---|---|---
allanvictormori.pdf | | 1.28 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.