Matching Items (3)

Description

In this research work, a novel control system strategy for the robust control of an unmanned ground vehicle (UGV) is proposed. The strategy is motivated by the need to handle scenarios in which the human operator is unable to communicate properly with the vehicle. It consists of three major components: (i) two independent intelligent controllers, (ii) an intelligent navigation system, and (iii) an intelligent controller tuning unit. The inner workings of the first two components are based on the Brain Emotional Learning (BEL) model, a mathematical model of the amygdala-orbitofrontal system, a region of the mammalian brain known to be responsible for emotional learning. Simulation results demonstrated that the BEL model is robust, efficient, and adaptable to dynamic changes when applied as a controller and as a sensor-fusion filter for an unmanned ground vehicle, and these results were obtained at significantly lower computational cost than traditional methods for control and sensor fusion. For the intelligent controller tuning unit, the implementation of a human emotion recognition system was investigated and used to classify driving behavior. Experiments showed that the driver's affective states were captured accurately; however, the driver's affective state proved not to be a good indicator of driving behavior. As a result, an alternative method for classifying driving behavior from the driver's brain activity was explored. This method successfully classified the driver's behavior, with results comparable to the common approach based on vehicle parameters. It has the advantage of classifying driving behavior directly from the driver, which is particularly useful in the UGV domain because the operator's information is readily available. The classified driving mode was used to tune the controllers' performance to a desired mode of operation. Such qualities are required for a contingency control system that would allow the vehicle to operate with no operator inputs.
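To illustrate the kind of learning rule the abstract refers to, the following is a minimal sketch of one update step of a BEL-style controller, loosely following the Moren-Balkenius amygdala-orbitofrontal formulation. The gains, the choice of sensory inputs, and the reward (emotional cue) shaping are illustrative assumptions, not the implementation described in the thesis.

```python
# Minimal sketch of a Brain Emotional Learning (BEL) controller step.
# Assumed formulation: amygdala weights grow monotonically toward the reward,
# while orbitofrontal weights learn to inhibit the output when it overshoots.
import numpy as np

class BELController:
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.V = np.zeros(n_inputs)   # amygdala (excitatory) weights
        self.W = np.zeros(n_inputs)   # orbitofrontal (inhibitory) weights
        self.alpha = alpha            # amygdala learning rate
        self.beta = beta              # orbitofrontal learning rate

    def step(self, S, R):
        """S: sensory input vector; R: reward / emotional cue signal."""
        A = self.V * S                # amygdala node activations
        O = self.W * S                # orbitofrontal node activations
        E = A.sum() - O.sum()         # model output used as the control signal
        # Amygdala weights only increase (monotonic associative learning).
        self.V += self.alpha * np.maximum(0.0, S * (R - A.sum()))
        # Orbitofrontal weights track the output-reward mismatch and inhibit it.
        self.W += self.beta * S * (E - R)
        return E

# Hypothetical usage: sensory inputs from a tracking error and its derivative,
# with a reward shaped to penalize large error (a common BELBIC-style choice).
ctrl = BELController(n_inputs=2)
error, d_error = 0.4, -0.1
u = ctrl.step(S=np.array([error, d_error]), R=2.0 * error + 0.5 * d_error)
```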
Contributors: Vargas-Clara, Alvaro (Author) / Redkar, Sangram (Thesis advisor) / McKenna, Anna (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver, passenger(s), and pedestrians. According to several traffic safety administrations, distracted and inattentive driving are the primary causes of vehicle crashes and near-crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. The approach consists of two main phases: (i) a system that detects various levels of driver distraction (low, medium, and high) using machine learning techniques, and (ii) mitigation of the effects of driver distraction by integrating the distraction detection algorithm with existing vehicle safety systems. In Phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head-pose analysis package in MATLAB, and the model was trained and validated to detect different operator distraction levels. In Phase 2, the detected distraction level, time to collision (TTC), lane position (LP), and steering entropy (SE) were fed to a vehicle safety controller that provides an appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and vehicle safety controller were then prototyped in MATLAB/SIMULINK for validation. A complete vehicle powertrain model, including the driver's interaction, was replicated, and the output of the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion. The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes arising from both the driver and the vehicle system. This approach was applied to mitigate the impact of visual and cognitive distractions on driver performance.
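As a rough illustration of the Phase 2 integration step, the sketch below maps a detected distraction level together with TTC, lane position, and steering entropy to a mitigation action. The thresholds, signal ranges, and action names are illustrative assumptions; the thesis prototypes this logic in MATLAB/SIMULINK rather than Python.

```python
# Hypothetical rule-based stand-in for the vehicle safety controller that
# consumes the detection algorithm's output plus TTC, LP, and SE.
from dataclasses import dataclass

@dataclass
class DriverState:
    distraction: str        # "low", "medium", or "high" from the detector
    ttc: float              # time to collision, seconds
    lane_offset: float      # lateral offset from lane center, meters
    steering_entropy: float # higher values suggest more erratic steering

def safety_action(state: DriverState) -> str:
    """Return a mitigation action for the vehicle safety controller."""
    if state.ttc < 1.5:                      # imminent collision risk
        return "autonomous_emergency_brake"
    if state.distraction == "high" or state.steering_entropy > 0.6:
        if state.ttc < 3.0 or abs(state.lane_offset) > 0.5:
            return "haptic_warning_and_lane_keep_assist"
        return "audio_visual_warning"
    if state.distraction == "medium" and state.ttc < 3.0:
        return "audio_visual_warning"
    return "monitor_only"

# Example: a highly distracted driver drifting in the lane with moderate TTC.
print(safety_action(DriverState("high", ttc=2.4, lane_offset=0.7,
                                steering_entropy=0.55)))
```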
Contributors: Alomari, Jamil (Author) / Mayyas, AbdRaouf (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

While various collision-warning studies in driving have been conducted, only a handful have investigated the effectiveness of warnings with a distracted driver. Across four experiments, the present study aimed to address this gap in the literature, specifically by studying various warnings presented to drivers while they were operating a smartphone. Experiment One examined which smartphone tasks (text vs. image, self-paced vs. other-paced) are most distracting to a driver. Experiment Two compared the effectiveness of different smartphone-based applications (apps) for mitigating driver distraction. Experiment Three investigated the effects of informative auditory and tactile warnings designed to convey directional information (moving toward or away) to a distracted driver. Lastly, Experiment Four extended the research into autonomous driving by investigating the effectiveness of different auditory take-over request signals. Novel to Experiments Three and Four was that the warnings were delivered from the source of the distraction (i.e., either a sound triggered at the smartphone's location or a vibration delivered to the wrist of the hand holding the smartphone). This warning placement was an attempt to break the driver's attentional focus on the smartphone and to determine how best to re-orient the driver in order to improve the driver's situational awareness (SA). The overall goal was to explore these novel methods of improving SA so that drivers may respond more quickly and appropriately to a critical event.
Contributors: McNabb, Jaimie Christine (Author) / Gray, Rob (Thesis advisor) / Branaghan, Russell (Committee member) / Becker, Vaughn (Committee member) / Arizona State University (Publisher)
Created: 2017