Use this identifier to cite or link to this item: http://hdl.handle.net/1843/ESBF-B5UMFW
Type: Master's Dissertation
Título: Design and Evaluation of a Method to Derive Domain Metric Thresholds
Author(s): Allan Victor Mori
Advisor: Eduardo Magno Lages Figueiredo
Co-advisor: Elder Jose Reioli Cirilo
First committee member: Elder Jose Reioli Cirilo
Second committee member: Kecia Aline Marques Ferreira
Third committee member: Marco Tulio de Oliveira Valente
Abstract: Software metrics provide means to quantify several attributes of software systems. Effective measurement depends on appropriate metric thresholds, since thresholds allow characterizing the quality of software systems; indeed, they have been used to detect a variety of software anomalies. Previous methods to derive metric thresholds do not take characteristics of software domains into account, such as differences in size and complexity between systems from different domains. Instead, they rely on generic thresholds derived from heterogeneous systems. Although the derivation of reliable thresholds has long been a concern, we also lack empirical evidence about how thresholds vary across distinct software domains. This work proposes a method to derive domain-sensitive thresholds that respect metric statistics and are based on benchmarks of systems from the same domain. The proposed method is supported by a software tool that helps developers write better code from the outset by providing a view with class metrics and warnings that take the system domain into account. To evaluate the method, we performed evaluations with desktop and mobile systems. The first evaluation investigates whether and how thresholds vary across domains through a large-scale study of 3,107 software systems from 15 desktop domains. For the second evaluation, we manually mined one hundred mobile applications from GitHub. We measured all these systems using a set of metrics, derived thresholds, and validated them through qualitative and quantitative analyses. As a result, we observed that our method yields more reliable thresholds when the software domain is taken into account as a factor in building benchmarks for threshold derivation. Moreover, in the desktop evaluation, we also observed that domain-specific metric thresholds are more appropriate than generic ones for code smell detection.
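The abstract does not spell out the statistical procedure used to derive thresholds from a same-domain benchmark; benchmark-based methods of this kind often pick cut points from the metric's distribution. The sketch below is only a minimal illustration under that assumption; the function name, the 90th-percentile choice, and the sample values are hypothetical and not taken from the dissertation.

from statistics import quantiles

def derive_domain_threshold(benchmark, percentile=90):
    """Derive one metric threshold from a same-domain benchmark.

    `benchmark` is a list of per-class metric values collected from
    systems that all belong to the same domain (e.g., mobile apps).
    This is an illustrative, percentile-based sketch, not the
    dissertation's actual procedure.
    """
    # Split the distribution into 100 quantile cut points and return
    # the one corresponding to the requested percentile.
    cut_points = quantiles(benchmark, n=100)
    return cut_points[percentile - 1]

# Hypothetical usage: LOC-per-class values measured across a domain benchmark.
mobile_loc_per_class = [12, 35, 48, 60, 75, 90, 120, 150, 210, 340]
print(derive_domain_threshold(mobile_loc_per_class, percentile=90))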
Subject: Computing
Software engineering
Software domains
Language: English
Publisher: Universidade Federal de Minas Gerais
Institution acronym: UFMG
Access type: Open Access
URI: http://hdl.handle.net/1843/ESBF-B5UMFW
Document date: 24-Aug-2018
Appears in collections: Dissertações de Mestrado

Files associated with this item:
File | Description | Size | Format
allanvictormori.pdf | | 1.28 MB | Adobe PDF


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.