Description

Autonomous vehicle technology has been evolving for decades, dating back to the Automated Highway System Project. However, it has come under increased scrutiny since March 2018, when an autonomous vehicle struck and killed Elaine Herzberg as she was crossing a street in Tempe, Arizona. Recent tests of autonomous vehicles on public roads have faced opposition from nearby residents. Before these vehicles can be widely deployed, it is imperative that the general public trust them. To earn that trust, the vehicles must be able to identify objects in their surroundings, follow traffic rules, and make decisions with human-like moral integrity when confronted with an ethical dilemma, such as an unavoidable crash that will injure either a pedestrian or the passenger.

Testing autonomous vehicles in real-world scenarios would pose a threat to people and property alike. A safe alternative is to simulate these scenarios and verify that the resulting programs also work in the real world. Moreover, to detect a moral-dilemma situation quickly, the vehicle must be able to identify objects in real time while driving. Toward this end, this thesis investigates cross-platform training of neural networks that perform visual identification of common objects in driving scenarios, using the object detection algorithm Faster R-CNN. The hypothesis is that a single neural network model can be trained, via transfer learning, to detect objects from two different domains: simulated and physical. As a proof of concept, an object detection model is trained by transfer learning on image datasets extracted from CARLA, a virtual driving environment. After the total training loss is reduced to 0.4, the model is evaluated using an intersection-over-union (IoU) metric. The model achieves a precision of 100% for vehicles and 75% for traffic lights, with corresponding recall values of 84.62% and 75%. It is also shown that the model can detect the same classes of objects in other virtual environments and in real-world images. Modifications to the algorithm that may further improve performance are discussed as future work.
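To make the transfer-learning step concrete, the following is a minimal sketch of fine-tuning a Faster R-CNN detector on a small set of classes. The abstract does not name the framework or the training configuration, so this sketch assumes torchvision's COCO-pretrained fasterrcnn_resnet50_fpn, a two-class problem (vehicle, traffic light), and illustrative hyperparameters; none of these are the thesis's actual settings.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a Faster R-CNN backbone pre-trained on COCO, then replace the box
# predictor head so it can be fine-tuned on the new (e.g., CARLA) classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 3  # background + vehicle + traffic light (assumed class set)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_step(images, targets):
    """One gradient step; each target dict holds 'boxes' and 'labels'."""
    images = [img.to(device) for img in images]
    targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
    loss_dict = model(images, targets)    # RPN + classification + box losses
    total_loss = sum(loss_dict.values())  # the "total loss" tracked above
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

Freezing the pre-trained backbone and training only the replaced head is a common variant when the new dataset is small; the sketch above fine-tunes all weights for simplicity.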
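Likewise, the precision and recall figures quoted above come from matching predicted boxes against ground-truth boxes by IoU. Below is a minimal sketch of that evaluation for one class; the 0.5 IoU threshold and the greedy one-to-one matching strategy are assumptions, since the abstract does not state them.

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match each prediction to an unused ground-truth box."""
    matched, tp = set(), 0
    for pred in predictions:
        best, best_iou = None, iou_threshold
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            score = iou(pred, gt)
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(predictions) - tp     # detections with no matching ground truth
    fn = len(ground_truths) - tp   # ground-truth objects that were missed
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```

Running this separately on the per-class detections (vehicles, then traffic lights) would yield per-class precision and recall figures of the kind reported in the abstract.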
Contributors: Sankaramangalam Ulhas, Sangeet (Author) / Berman, Spring (Thesis advisor) / Johnson, Kathryn (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2019