Matching Items (3)
Description
In the next decade or so, the transportation industry will undergo a worldwide shift. Autonomous vehicles (AVs) are already being tested in the Greater Phoenix area, showing that the technology has matured enough to enter the public eye. Although this technology is, for the most part, not yet released commercially, it is being used, and will continue to be used, to develop a safer future. Because human error causes a high proportion of accidents, many expect autonomous vehicles to be safer than human drivers. They still require driver attention, and sometimes intervention, to ensure safety, but are otherwise much safer. In the United States alone, there were 40,000 deaths due to car accidents last year [1]; if traffic fatalities were considered a disease, this would be an epidemic. The technology behind autonomous vehicles promises a much safer driving environment, as well as increased mobility and independence for people who cannot drive or who struggle with public transport. There are also many opportunities for autonomous vehicles in the transportation industry: companies can reduce shipping costs by cutting the expense of human drivers and trucks on the road, and could even enable simpler drop shipments should the necessary AI be developed.

Research is also being done by several labs at Arizona State University. For example, Dr. Spring Berman's Autonomous Collective Systems Lab has been collaborating with Dr. Nancy Cooke of Human Systems Engineering to develop a traffic testbed, CHARTopolis, to study the risks of driver-AV interactions and the psychological effects of AVs on human drivers at a small scale. This testbed will be used by researchers from their labs and others to study reaction, trust, and user experience with AVs in a safe environment that simulates conditions similar to those experienced by full-size AVs. Using a new type of small robot that emulates an AV, developed in Dr. Berman's lab, participants will be able to remotely drive around a model city environment and interact with other AV-like robots, using the cameras and LiDAR sensors on the remotely driven robot to guide them.
Although these commercial and research systems are still in testing, it is important to understand how AVs are marketed to the general public and how they are perceived, so that they may one day be adopted effectively into everyday life. People do not want to share the road with a car they do not trust, so the questions become: why don't people trust AVs, and how can companies and researchers improve the vehicles' trustworthiness?
Contributors: Shuster, Daniel Nadav (Author) / Berman, Spring (Thesis director) / Cooke, Nancy (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Autonomous vehicles (AVs), or self-driving cars, are poised to have an enormous impact on the automotive industry and road transportation. While advances have been made toward the development of safe, competent autonomous vehicles, inadequate attention has been paid to their control in unanticipated situations, such as imminent crashes. Even if autonomous vehicles follow all safety measures, accidents are inevitable, and humans must trust autonomous vehicles to respond appropriately in such scenarios. It is not plausible to program autonomous vehicles with a set of rules covering every possible crash scenario. Instead, a possible approach is to align their decision-making capabilities with the moral priorities, values, and social motivations of trustworthy human drivers.

Toward this end, this thesis contributes a simulation framework for collecting, analyzing, and replicating human driving behaviors in a variety of scenarios, including imminent crashes. Four driving scenarios in an urban traffic environment were designed in the CARLA driving simulator platform, in which simulated cars can either drive autonomously or be driven by a user via a steering wheel and pedals. These included three unavoidable crash scenarios, representing classic trolley-problem ethical dilemmas, and a scenario in which a car must be driven through a school zone, in order to examine driver prioritization of reaching a destination versus ensuring safety. Sample human driving data in CARLA was logged from the simulated car's sensors, including the LiDAR, IMU, and camera. In order to reproduce human driving behaviors in a simulated vehicle, the AV must be able to identify objects in the environment and evaluate the volume of their bounding boxes for prediction and planning.
An object detection method was used that processes LiDAR point cloud data using the PointNet neural network architecture, analyzes RGB images via transfer learning using the Xception convolutional neural network architecture, and fuses the outputs of these two networks. This method was trained and tested on both the KITTI Vision Benchmark Suite dataset and a virtual dataset generated exclusively from CARLA. When applied to the KITTI dataset, the object detection method achieved an average classification accuracy of 96.72% and an average Intersection over Union (IoU) of 0.72, where the IoU metric measures the overlap between predicted bounding boxes and the ground-truth boxes used for training.
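The IoU metric used in this evaluation can be sketched for axis-aligned 2D boxes as follows. This is a generic illustration, not code from the thesis; the function name and the (x1, y1, x2, y2) corner format are assumptions for the sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).

    Returns a value in [0, 1]: 1.0 for identical boxes, 0.0 for disjoint ones.
    """
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An average IoU of 0.72, as reported above, means that a typical predicted box shares roughly 72% of its combined area with the matching ground-truth box.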
Contributors: Govada, Yashaswy (Author) / Berman, Spring (Thesis advisor) / Johnson, Kathryn (Committee member) / Marvi, Hamidreza (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Autonomous vehicle technology has been evolving for years, ever since the Automated Highway System Project. However, it has come under increased scrutiny since an autonomous vehicle struck and killed Elaine Herzberg, who was crossing the street in Tempe, Arizona, in March 2018. Recent tests of autonomous vehicles on public roads have faced opposition from nearby residents. Before these vehicles are widely deployed, it is imperative that the general public trust them. To earn that trust, the vehicles must be able to identify objects in their surroundings, demonstrate the ability to follow traffic rules, and make decisions with human-like moral integrity when confronted with an ethical dilemma, such as an unavoidable crash that will injure either a pedestrian or the passenger.

Testing autonomous vehicles in real-world crash scenarios would pose a threat to people and property alike. A safe alternative is to simulate these scenarios and verify that the resulting programs transfer to the real world. Moreover, in order to detect a moral-dilemma situation quickly, the vehicle should be able to identify objects in real time while driving. Toward this end, this thesis investigates the use of cross-platform training for neural networks that perform visual identification of common objects in driving scenarios, using the Faster R-CNN object detection algorithm. The hypothesis is that it is possible to train a neural network model, via transfer learning, to detect objects from two different domains, simulated or physical. As a proof of concept, an object detection model is trained via transfer learning on image datasets extracted from CARLA, a virtual driving environment. After the total training loss is reduced to 0.4, the model is evaluated with an IoU metric. The model achieves a precision of 100% and 75% for vehicles and traffic lights, respectively, and a recall of 84.62% and 75% for the same classes. It is also shown that this model can detect the same classes of objects in other virtual environments and in real-world images. Further modifications to the algorithm that may improve performance are discussed as future work.
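As a rough illustration of how such precision and recall figures arise from detection counts, consider the sketch below. The specific counts are hypothetical, chosen only because they reproduce the vehicle-class numbers quoted above (e.g., 11 correct detections and 2 missed vehicles give a recall of 84.62%); they are not taken from the thesis.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).

    tp: true positives (correct detections)
    fp: false positives (spurious detections)
    fn: false negatives (missed ground-truth objects)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical vehicle-class counts consistent with 100% precision and
# 84.62% recall: every detection is correct, but 2 vehicles are missed.
p, r = precision_recall(tp=11, fp=0, fn=2)
```

A perfect precision with imperfect recall, as in the vehicle results above, indicates a conservative detector: it raises no false alarms but overlooks some objects.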
Contributors: Sankaramangalam Ulhas, Sangeet (Author) / Berman, Spring (Thesis advisor) / Johnson, Kathryn (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2019