Matching Items (2)
Description
Traditional methods for detecting traffic light status in autonomous vehicles may be susceptible to errors, which is troublesome in a safety-critical environment. In vision-based recognition methods, failures may arise from disturbances in the environment, such as occluded views or poor lighting conditions. Some methods also depend on high-precision meta-data, which is not always available. This thesis proposes a complementary detection approach based on an entirely new source of information: the movement patterns of other nearby vehicles. This approach is robust to traditional sources of error and may serve as a viable supplemental detection method. Several classification models for inferring traffic light status from these patterns are presented, and their performance is evaluated on real-world and simulation data sets, achieving up to 97% accuracy on each.
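
The abstract leaves the classification models unspecified, so the following is only a minimal sketch of the underlying idea: summarize the speed trace of a nearby vehicle into a few motion features and train an off-the-shelf classifier to map them to a light status. The feature set and the random-forest model are illustrative assumptions, not the thesis's methods.

```python
# Hypothetical sketch: infer traffic light status from nearby-vehicle motion.
# The features and classifier below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(speeds):
    """Summarize one vehicle's speed trace (m/s) near the intersection."""
    speeds = np.asarray(speeds, dtype=float)
    accel = np.diff(speeds)
    return [
        speeds.mean(),                        # average speed
        speeds.min(),                         # slowest point in the trace
        (speeds < 0.5).mean(),                # fraction of time effectively stopped
        accel.mean() if accel.size else 0.0,  # net acceleration or deceleration
    ]

# Toy labeled speed traces: 0 = red light, 1 = green light.
traces = [
    ([12, 8, 4, 1, 0, 0, 0], 0),    # decelerating to a stop
    ([0, 0, 0, 2, 6, 10, 13], 1),   # pulling away from a stop
    ([13, 13, 12, 13, 14, 13], 1),  # cruising straight through
    ([10, 6, 2, 0, 0, 1, 0], 0),    # creeping and stopping
]
X = [motion_features(t) for t, _ in traces]
y = [label for _, label in traces]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([motion_features([11, 7, 3, 0, 0, 0])]))  # likely red: [0]
```

Because the features come from observed motion rather than camera imagery, the same inference still works when the light itself is occluded or poorly lit, which is the robustness the abstract points to.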
Contributors: Campbell, Joseph (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Autonomous vehicle technology has come a long way, but no company can yet offer a fully autonomous ride in all conditions, on any road, without human supervision. These systems must be extensively trained and validated to guarantee safe human transportation. Even small errors in system functionality may lead to fatal accidents and endanger human lives. Deep learning methods are widely used for environment perception and for predicting hazardous situations. These techniques require a huge amount of training data, with both normal and abnormal samples, to enable the vehicle to avoid dangerous situations.

The goal of this thesis is to generate simulations from tricky real-world collision scenarios for training and testing autonomous vehicles. Dashcam crash videos from the internet can be used to extract valuable collision data and recreate the crash scenarios in a simulator. The problem of extracting 3D vehicle trajectories from videos recorded by an unknown monocular camera is solved with a modular approach. The framework is divided into two stages: (a) extracting meaningful adversarial trajectories from short crash videos, and (b) automatically processing and simulating the vehicle trajectories in a vehicle simulator.
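
The thesis implementation is not reproduced here; the sketch below only illustrates the two-stage data flow under stated assumptions. The Trajectory container, the extract_trajectories stub, and the FakeSim interface are hypothetical placeholders for the video-tracking pipeline and the vehicle simulator.

```python
# Hypothetical sketch of the two-stage pipeline: (a) lift vehicle tracks out
# of a crash video, (b) replay them in a simulator. All names are placeholders.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trajectory:
    vehicle_id: int
    states: List[Tuple[float, float, float, float]]  # (t, x, y, heading)

def extract_trajectories(video_path: str) -> List[Trajectory]:
    # Stage (a): a real pipeline would detect and track vehicles in the
    # dashcam clip, then estimate the unknown camera's pose and scale to
    # lift 2D tracks into 3D world coordinates. A canned cut-in stands in.
    return [Trajectory(vehicle_id=1,
                       states=[(0.0, 0.0, 3.5, 0.00),
                               (0.5, 6.0, 2.0, -0.30),
                               (1.0, 12.0, 0.0, 0.00)])]

class FakeSim:
    # Stand-in for a simulator client API; purely illustrative.
    def spawn_vehicle(self, vid):
        print(f"spawned vehicle {vid}")
        return vid

    def set_state(self, actor, t, x, y, heading):
        print(f"t={t:.1f}s vehicle {actor} -> ({x:.1f}, {y:.1f}), heading {heading:+.2f} rad")

def replay(trajectories: List[Trajectory], sim: FakeSim) -> None:
    # Stage (b): spawn one simulated vehicle per recovered trajectory and
    # drive it through the recorded states at the recorded timestamps.
    for traj in trajectories:
        actor = sim.spawn_vehicle(traj.vehicle_id)
        for t, x, y, heading in traj.states:
            sim.set_state(actor, t, x, y, heading)

replay(extract_trajectories("crash_clip.mp4"), FakeSim())
```

Keeping the two stages behind a narrow trajectory interface is what makes the framework modular: the extraction front end and the simulator back end can each be swapped without touching the other.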
Contributors: Bashetty, Sai Krishna (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2019