Bakar, Nurul Asyikin Abu (2021) Deep Reinforcement Learning For Control. Project Report. Universiti Sains Malaysia, Pusat Pengajian Kejuruteraan Aeroangkasa. (Submitted)
Abstract
Autonomous cars must be capable of operating in various conditions and learning from unforeseen scenarios. Driving a car with a human driver can be a challenging undertaking, so autonomous driving seeks to reduce hazards in comparison to human drivers. At the same time, autonomous driving is difficult in terms of the outcomes and safety judgments that must be made. In this thesis work, a method using deep reinforcement learning to train a controller with proper driving behavior is proposed. In essence, the method uses a reward-based learning environment to observe how the agent makes decisions, with candidate actions selected on the basis of prior experience through a trial-and-error process. However, determining the essential behavioral outputs for an autonomous driving system, or selecting the optimal output features to learn from, is not easy. Deep Neural Networks were chosen as function estimators because of their capacity to cope with the complexity of high-dimensional systems. As a consequence, the agent is expected to learn its driving behaviors and navigate without crashing. The complete project is carried out in the CARLA simulator to learn how to operate in a discrete action space using Deep Reinforcement Learning (DRL) algorithms. Gathering and evaluating a large amount of driving data is time- and effort-intensive, and a model learned in a virtual environment might fail to generalize to the actual world. The simulation environment nevertheless makes it possible to collect massive training datasets, so driving policies learned and improved in simulation can be adopted quickly in the actual world. The Python programming language is employed to generate the visual simulation in the simulator. The improved algorithm should help encourage the real-world implementation of DRL in many autonomous driving applications.
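To make the setup concrete, the following is a minimal sketch (not taken from the report) of how a discrete action space can be driven in CARLA from Python, with an epsilon-greedy choice over action indices standing in for the trial-and-error selection. The deep Q-network itself is left as a placeholder, and the action set, host/port, and episode length are illustrative assumptions.

import random

import carla  # CARLA Python API; assumes a CARLA server is available

# Hypothetical discrete action set: each index maps to a fixed vehicle control.
ACTIONS = [
    carla.VehicleControl(throttle=0.7, steer=0.0),   # accelerate straight
    carla.VehicleControl(throttle=0.5, steer=-0.3),  # steer left
    carla.VehicleControl(throttle=0.5, steer=0.3),   # steer right
    carla.VehicleControl(throttle=0.0, brake=0.8),   # brake
]

def select_action(q_values, epsilon):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit
    # the action with the highest estimated value (trial and error).
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q_values[a])

def main():
    # Assumes a CARLA server is already running on localhost:2000.
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)

    try:
        # Placeholder Q-values; in the project a deep neural network would act
        # as the function estimator, producing one value per discrete action.
        q_values = [0.0] * len(ACTIONS)
        for _ in range(100):
            action = select_action(q_values, epsilon=0.1)
            vehicle.apply_control(ACTIONS[action])
            world.wait_for_tick()
    finally:
        vehicle.destroy()

if __name__ == "__main__":
    main()

Mapping each discrete index to a fixed carla.VehicleControl keeps the agent's output space small, which is what allows value-based DRL algorithms operating over discrete actions to be applied directly.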