Program Repair: Automated vs. Manual

Welcome to the homepage of our work on the comparison between automated and manual program repair!

Various automated program repair (APR) techniques have been proposed to fix bugs automatically over the last decade. Although recent research has made significant progress on effectiveness and efficiency, it remains unclear how APR techniques perform with human intervention in a real debugging scenario. To bridge this gap, we conduct an extensive study that compares three state-of-the-art APR tools with manual program repair, and further investigates whether the assistance of APR tools (i.e., repair reports) can improve manual program repair. To that end, we recruit 20 participants for a controlled experiment, in which they attempt to fix eight real bugs with or without APR aids, resulting in a total of 160 repair tasks and a questionnaire survey. The experiment reveals several notable observations:

(1) manual program repair may sometimes be influenced by the frequency of repair actions;

(2) APR tools are more efficient in terms of debugging time, while manual program repair tends to generate a correct patch with fewer attempts;

(3) APR tools can further improve manual program repair in terms of the number of correctly-fixed bugs, while they can have a negative impact on patch correctness;

(4) participants tend to spend more time identifying incorrect patches, yet they are still easily misled by them;

(5) participants are positive about the tools' repair performance, but they generally lack confidence in their usability in practice.

Besides, we provide some guidelines for improving the usability of APR tools (e.g., regarding the misleading information in reports and the observation of feedback).

The homepage contains the selected bugs (with APR aids), all submissions, experimental results, and original figures from this work. The organization of this homepage is described as follows.

bugs: contains the 8 bugs selected in this work, each provided with different aids (SimFix aid, ACS aid, PraPR aid, buggy-location aid, and no aid), resulting in 40 bug variants.

submissions: contains all submissions by the 20 participants used in our paper, resulting in a total of 160 repair tasks.

results: contains the data we analyze to answer RQ1 and RQ2, and the participants' raw answers used to answer RQ3. Besides, a summary of the reasons given in these answers, along with the participants' original responses, is also presented.

If you have any questions, please feel free to contact us at xx@xx.edu.cn. Thank you very much.