Thesis Type: Master's Thesis
Institution: Marmara University, Institute of Pure and Applied Sciences, Department of Computer Engineering (English), Turkey
Approval Date: 2022
Thesis Language: English
Student: ENGİN DURMAZ
Supervisor: Mustafa Borahan Tümer
Abstract:
Automatic software testing methods have become indispensable for producing test inputs.
Test tools generate events and execute them on the system under test (SUT) to find bugs.
Unfortunately, when a crash occurs, the event sequence obtained includes many events
that are irrelevant to the failure, and these irrelevant events make the debugging process
more difficult. To locate and repair bugs, with an emphasis on crash scenarios, we present
in this work a reinforcement learning (RL) approach for finding the shortest input
sequence(s) leading to a system crash or block, where these represent the goal state of the RL problem.
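To illustrate this formulation (a toy sketch, not the thesis's actual implementation): if every executed event yields a reward of −1 and the crash is the terminal goal state, then maximizing return is equivalent to minimizing the number of events, so a value-based agent such as SARSA is steered toward the shortest crashing sequence. All names below (`N_CRASH`, `step`, the two-action event model) are illustrative assumptions:

```python
import random

# Toy sketch (illustrative only): stages of an event sequence are states,
# state N_CRASH is the terminal "crash" goal state. Every executed event
# costs -1, so maximizing return means crashing in as few events as possible.
N_CRASH = 4
ACTIONS = [0, 1]  # 0: an event that advances toward the crash, 1: an irrelevant event

def step(state, action):
    """Deterministic toy dynamics: irrelevant events make no progress."""
    nxt = state + 1 if action == 0 else state
    return nxt, -1.0, nxt == N_CRASH  # (next_state, reward, done)

def sarsa(episodes=2000, alpha=0.5, gamma=1.0, eps=0.1, seed=0):
    """Tabular on-policy SARSA with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_CRASH + 1)]

    def policy(s):
        if rng.random() < eps:
            return rng.choice(ACTIONS)
        return 0 if Q[s][0] >= Q[s][1] else 1

    for _ in range(episodes):
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = policy(s2) if not done else 0
            target = r if done else r + gamma * Q[s2][a2]
            Q[s][a] += alpha * (target - Q[s][a])  # SARSA update
            s, a = s2, a2
    return Q

Q = sarsa()
# Greedy action per non-terminal state: 0 everywhere once the agent has
# learned that irrelevant events only lengthen the crashing sequence.
print([0 if q[0] >= q[1] else 1 for q in Q[:N_CRASH]])
```

The per-step penalty is what encodes "shortest": any detour through irrelevant events strictly lowers the return.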
We aim to simplify the bug scenario as much as possible so that developers can analyze
the agent's actions that cause crashes or freezes. In this study, we developed the Crash
Detection Module (CDM), which consists of three main flows. We first simplify the given
crash scenario using Recursive Delta Debugging (RDD), and then apply RL algorithms
to search for a shorter crashing sequence. We approach the exploration of crash
scenarios as an RL problem in which the agent first reaches the goal state of a crash/block
by executing inputs, and then shortens the input sequence with the help of the rewarding
mechanism. We apply both model-free on-policy and model-based, planning-capable RL
agents to our problem. Furthermore, we present a novel RL approach, involving the Detected
Goal Catalyst (DGC), which decreases time complexity by avoiding the cost of full
convergence: it stops learning once the variance becomes small and attains the shortest
crash sequence with an algorithm that recursively removes unrelated actions. Experiments
show that DGC significantly improves the learning performance of both the SARSA and
Prioritized Sweeping algorithms in finding the shortest path.
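The "recursively removes unrelated actions" step can be pictured with a delta-debugging-style reduction: repeatedly drop events whose removal still reproduces the crash. The sketch below is a single-event-removal variant with a made-up `crashes` oracle (the real RDD in the thesis runs the event sequence on the SUT):

```python
# Illustrative delta-debugging-style simplification. The oracle is a stand-in:
# it "crashes" whenever 'open', 'edit', 'close' appear in that order.
def crashes(seq):
    needed = ["open", "edit", "close"]
    it = iter(seq)
    return all(ev in it for ev in needed)  # subsequence-in-order check

def simplify(seq, oracle):
    """Recursively drop single events while the crash still reproduces."""
    changed = True
    while changed:
        changed = False
        for i in range(len(seq)):
            candidate = seq[:i] + seq[i + 1:]
            if oracle(candidate):  # crash survives without event i: drop it
                seq = candidate
                changed = True
                break
    return seq

events = ["scroll", "open", "resize", "edit", "hover", "close", "scroll"]
print(simplify(events, crashes))  # -> ['open', 'edit', 'close']
```

The result is 1-minimal: removing any single remaining event no longer reproduces the crash, which mirrors the goal of handing developers only the actions relevant to the failure.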