September 5, 2012
Title: Predicting Mutation Score Using Source Code and Test Suite Metrics
Abstract: Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves the creation of many versions of a program, each containing a single syntactic fault. A test suite is evaluated against these program versions (mutants) to determine the percentage of mutants it is able to identify (the mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants, potentially making the process cost prohibitive. To improve the performance and reduce the cost of mutation testing, we propose a machine learning approach to predict mutation score based on a combination of source code and test suite metrics. We conducted an empirical evaluation of our approach using eight open source software systems, achieving an average method-level prediction accuracy of 49.7920% across our eight test subjects. Experimentally, we found a pair of configuration parameters that maximized prediction accuracy over all our test subjects, without per-subject tuning. Finally, we demonstrated that it is not necessary to train on 90% of the available data in order to achieve near-optimal prediction accuracy.
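As a minimal sketch of the mutation score described above (not the authors' implementation), the score is simply the percentage of generated mutants that the test suite detects ("kills"):

```python
def mutation_score(kill_results):
    """Compute the mutation score as a percentage.

    kill_results: list of booleans, one per mutant,
    where True means the test suite detected (killed) that mutant.
    """
    if not kill_results:
        raise ValueError("at least one mutant is required")
    return 100.0 * sum(kill_results) / len(kill_results)

# Example: a suite that kills 3 of 4 mutants scores 75.0
print(mutation_score([True, True, False, True]))
```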
Biography: Kevin Jalbert is a Computer Science MSc student in the Faculty of Science under the supervision of Dr. Jeremy Bradbury. He has published four peer-reviewed papers and was the recipient of the best paper award at the Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2012).