Owolabi Legunsen, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, received a Distinguished Paper Award at the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA). The award is for the paper, “More Precise Regression Test Selection via Reasoning about Semantics-Modifying Changes,” which he co-authored with colleagues at the University of Texas at Austin.
“Researchers have made a lot of progress on cutting down the huge and growing costs of software testing by developing regression test selection techniques that safely rerun only the tests affected by code changes, instead of always rerunning all tests after every change,” Legunsen said. “But recent studies suggest that these techniques have hit a performance wall in terms of the savings they provide.”
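Conceptually, these techniques record which parts of the codebase each test depends on during one run and, after the code changes, rerun only the tests whose recorded dependencies overlap the changed parts. The following is a minimal, hypothetical sketch of class-level selection in that spirit, not the authors' implementation; all class and test names are invented, and production tools track dependencies through bytecode analysis and compare checksums across versions:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class SelectTests {

        // Rerun only tests whose recorded dependencies include a changed class.
        static List<String> select(Map<String, Set<String>> testDeps,
                                   Set<String> changedClasses) {
            return testDeps.entrySet().stream()
                    .filter(e -> e.getValue().stream().anyMatch(changedClasses::contains))
                    .map(Map.Entry::getKey)
                    .toList();
        }

        public static void main(String[] args) {
            // Dependencies recorded during the previous test run (hypothetical).
            Map<String, Set<String>> testDeps = Map.of(
                    "CartTest", Set.of("Cart", "Item"),
                    "AuthTest", Set.of("Login", "Session"));

            // Classes whose contents differ in the new code version.
            Set<String> changed = Set.of("Item");

            // Prints [CartTest]: AuthTest's dependencies are unchanged,
            // so it is safely skipped.
            System.out.println(select(testDeps, changed));
        }
    }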
In the paper, the researchers found a way to break through that wall by revisiting a decades-old assumption: that regression test selection techniques must always rerun every test affected by a behavior-modifying code change, because such changes can introduce bugs.
"Specifically, we showed for the first time that there is a class of behavior-modifying changes for which it is safe to not rerun some tests and that leveraging such changes speeds up existing techniques in a manner that does not miss bugs in practice,” Legunsen said.
The team first manually identified 11 types of behavior-modifying changes that could be used to speed up regression test selection without sacrificing the ability to find bugs. Then, they used these findings to revamp two regression test selection techniques: STARTS, which was developed previously by Legunsen, and EKSTAZI, which was developed by co-author Milos Gligoric at the University of Texas at Austin. The new techniques are named FINESTARTS and FINEEKSTAZI, respectively, because they walk a fine line: they safely speed up regression testing by omitting some tests that exercise these behavior-modifying changes.
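To see why such an omission can be safe, consider one hypothetical illustration (the names below are invented, and the paper's actual 11 change types are not enumerated here). Adding a brand-new method to a class modifies the class's behavior and its checksum, so class-level selection would rerun every test that depends on the class, even though a test that never calls the new method cannot be affected by it:

    // Hypothetical class under test.
    class Cart {
        int total;

        void add(int price) { total += price; }

        // Newly added in the latest version. The class's bytecode (and
        // checksum) changes, so class-level selection would rerun every
        // test that depends on Cart.
        void clear() { total = 0; }
    }

    // This existing test never invokes clear(), so the added method
    // cannot change its outcome; a finer-grained analysis can safely
    // skip rerunning it.
    class CartTest {
        public static void main(String[] args) {
            Cart c = new Cart();
            c.add(5);
            if (c.total != 5) throw new AssertionError("unexpected total");
        }
    }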
Lastly, the team evaluated the two new techniques on 1,150 versions of 23 open-source software projects. They found that FINESTARTS and FINEEKSTAZI selected 31.8% and 41.7% fewer tests, respectively, than the original versions. From start to finish, the improved versions required 28.7% and 33.7% less time than the original versions, without sacrificing safety.
Legunsen and his colleagues aim to continue refining their regression test selection techniques. Next, they want to automate the code inspection process so they can more easily identify code changes that don’t require testing.
“The high cost of software testing is such an important practical problem that a lot of developers face,” Legunsen said. “We hope that as we continue to improve these techniques and tools, developers will use them and provide feedback.”
Additional co-authors at the University of Texas at Austin include Yuki Liu, Jiyang Zhang, and Pengyu Nie.
This research received support from a Google Faculty Research Award and the National Science Foundation.
By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.