Matching Items (12)
Description
Using a modified news media brand personality scale developed by Kim, Baek, and Martin (2010), this study measured the personalities of eight news media outlets and combined them into the same associative network with participants’ self-image via the Pathfinder tool (Schvaneveldt, Durso, & Dearholt, 1989). Using these networks, this study was able to both explore the personality associations of participants and observe if self-congruity, measured by the distance between the self-image node and a brand, is significantly related to participant preference for a brand. Self-congruity was found to be significantly related to preference. However, this relationship was mediated by participants’ fiscal and social orientation. Overall, using Pathfinder to generate associative networks and measure self-congruity could be a useful approach for understanding how people perceive and relate to different news media outlets.
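The self-congruity measure described above is a graph distance: the number of links separating the self-image node from a brand node in the Pathfinder network. As an illustration only (the node names and links below are hypothetical, not taken from the study), a minimal breadth-first-search sketch in Python:

```python
from collections import deque

def network_distance(edges, start, goal):
    """Shortest-path distance (in links) between two nodes of an
    undirected associative network, found via breadth-first search."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # the two nodes are not connected

# Hypothetical Pathfinder-style network: personality traits link
# brands to the participant's self-image node
edges = [("self", "honest"), ("honest", "BrandA"),
         ("self", "bold"), ("bold", "exciting"), ("exciting", "BrandB")]
print(network_distance(edges, "self", "BrandA"))  # 2
print(network_distance(edges, "self", "BrandB"))  # 3
```

Under this operationalization, a smaller distance indicates higher self-congruity, which the study related to brand preference.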
ContributorsWillinger, Jacob T (Author) / Branaghan, Russel (Thesis advisor) / Craig, Scotty (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2018
Description
This study examines the effect of in-vehicle infotainment display depth on driving performance. More features are being built into infotainment displays, allowing drivers to complete a greater number of secondary tasks while driving. However, the complexity of completing these tasks can take attention away from the primary task of driving, which may present safety risks. Tasks become more time consuming as the items drivers wish to select are buried deeper in a menu’s structure. Therefore, this study aims to examine how deeper display structures impact driving performance compared to shallower structures.

Procedure. Participants complete a lead-car-following task, attempting to maintain a time headway (TH) of 2 seconds behind the lead car at all times while avoiding any collisions. There are five conditions, each involving a display with a different structure: one-layer vertical, one-layer horizontal, two-layer vertical, two-layer horizontal, and three-layer. In each condition, participants are given tasks to complete with the in-vehicle infotainment system. Brake Reaction Time (BRT), Mean Time Headway (MTH), Time Headway Variability (THV), and Time to Task Completion (TTC) are measured for each of the five conditions.
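The headway measures used above can be sketched concretely: time headway is the distance gap to the lead car divided by the following car's speed, MTH is its mean over a drive, and THV its variability (taken here as the sample standard deviation; the gap and speed samples below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def time_headway(gap_m, speed_mps):
    """Time headway in seconds: the distance gap to the lead car
    divided by the following car's speed."""
    return gap_m / speed_mps

# Hypothetical (gap in metres, speed in m/s) samples from a drive
samples = [(55.0, 27.0), (60.0, 28.0), (48.0, 26.0), (70.0, 29.0)]
headways = [time_headway(g, v) for g, v in samples]
mth = mean(headways)   # Mean Time Headway (MTH)
thv = stdev(headways)  # Time Headway Variability (THV)
print(f"MTH = {mth:.2f} s, THV = {thv:.2f} s")
```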

Results. There is a significant difference in MTH, THV, and TTC for the three-layer condition. There is a significant difference in BRT for the two-layer horizontal condition. There is a significant difference between one- and two-layer displays for all variables, BRT, MTH, THV, and TTC. There is also a significant difference between one- and three-layer displays for TTC.

Conclusions. Deeper displays negatively impact driving performance and make tasks more time consuming to complete while driving. One-layer displays appear to be optimal, although they may not be practical for in-vehicle displays.
ContributorsGran, Emily (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Carrasquilla, Christina (Committee member) / Arizona State University (Publisher)
Created2018
Description
Student pilots are the future of aviation, and one of the biggest problems they face as new pilots is fatigue. A survey was sent to aviation students at the Arizona State University (ASU) Polytechnic Campus asking whether they were fatigued, whether they attributed their sleep loss to flight training, school work, work outside of school, or social obligations, and how they spend their time on those activities. A single-sample t-test found ASU student pilots to be fatigued. Additional t-tests were run on each of the questions asking how flight training, school work, work outside of school, and social obligations affect sleep loss. Flight training and school work were found to contribute to student pilots’ sleep loss; work outside of school and social obligations were not. Student pilots’ tendency to use a planner or calendar was not significant, nor was planning through the week when to do assignments or study for exams, nor making lists of assignments and their due dates. The t-tests also found that student pilots are neutral on whether good time management skills would help increase the amount of sleep they get.
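A single-sample t-test compares a sample mean against a hypothesized population value. A minimal sketch of the statistic (the fatigue scores and the neutral midpoint of 4 below are hypothetical illustrations, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t statistic for a single-sample t-test of the sample mean
    against a hypothesized population mean mu0."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))

# Hypothetical fatigue-scale scores (1 = alert ... 7 = exhausted),
# tested against a neutral midpoint of 4
scores = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5]
t = one_sample_t(scores, 4.0)
print(f"t({len(scores) - 1}) = {t:.2f}")
```

A t value this far from zero (with df = n − 1) would be compared against a t distribution to judge significance.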
ContributorsHarris, Mariah Jean (Author) / Cooke, Nancy J. (Thesis advisor) / Nullmeyer, Robert (Thesis advisor) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2019
Description
Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, and thus there is a need for an alert that helps drivers regain situational awareness and act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could help facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times.
ContributorsBrogdon, Michael A (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created2018
Description
Previous literature was reviewed in an effort to further investigate the link between notification levels of a cell phone and their effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering’s definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of Brake Reaction Time, Headway Variability, and Average Speed were used to operationalize driver distraction. Drivers showed higher Brake Reaction Time and Headway Variability, and lower Average Speed, in both experimental conditions compared to the Control condition, consistent with previous research implying a distracted state. Oculomotor movement was measured as the percent of time the participant was looking at the road; there was no significant difference between conditions on this measure. The results indicate that, even when the driver is not interacting with the cell phone, no audio notification is required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
ContributorsRadina, Earl (Author) / Gray, Robert (Thesis advisor) / Chiou, Erin (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created2019
Description
Driver distraction research has a long history spanning nearly 50 years, intensifying in the last decade. The focus has always been on identifying distracting tasks and measuring their respective levels of harm. As in-vehicle technology advances, the list of distracting activities grows, along with crash risk. These activities also become more common and complicated, especially with regard to in-car interactive systems, which are this work’s main focus. Many user-interaction designs (buttons, speech, visual) for human-car communication have been tried, past and present, and all related studies suggest that driver distraction remains high and a better design is needed. Multimodal Interaction (MMI) is a design approach that relies on multiple modes for humans to interact with the car, reducing driver distraction by allowing the driver to choose the most suitable mode with minimum distraction. Additionally, combining multiple modes simultaneously provides more natural interaction, which could lead to less distraction. The main goal of MMI is to enable the driver to be more attentive to driving tasks and spend less time fiddling with distracting tasks. An engineering-based method is used to measure driver distraction, using metrics such as reaction time, acceleration, and lane departure obtained from test cases.
ContributorsJahagirdar, Tanvi (Author) / Gaffar, Ashraf (Thesis advisor) / Ghazarian, Arbi (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2015
Description
Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver’s attention away from the primary task of driving, which can endanger the safety of the driver, passenger(s), and pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near-crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. The approach consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques; (ii) mitigation of the effects of driver distraction through integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head-pose analysis package in MATLAB, and the model was trained and validated to detect different human operator distraction levels. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were fed as inputs to the vehicle safety controller, which provides an appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and vehicle safety controller were then prototyped in MATLAB/Simulink for validation. A complete vehicle powertrain model including the driver’s interaction was replicated, and the outcome from the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion.
The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes resulting from the driver, as well as the vehicle system. This novel approach was applied in order to mitigate the impact of visual and cognitive distractions on the driver performance.
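Of the controller inputs mentioned above, time to collision (TTC) is the simplest to illustrate: the gap to the lead vehicle divided by the closing speed. A minimal sketch (the gap and speeds below are hypothetical, not values from the study):

```python
def time_to_collision(gap_m, v_follow_mps, v_lead_mps):
    """Time to collision in seconds: the gap to the lead vehicle
    divided by the closing speed. Returns None when the following
    vehicle is not closing in on the lead vehicle."""
    closing = v_follow_mps - v_lead_mps
    if closing <= 0:
        return None
    return gap_m / closing

print(time_to_collision(40.0, 30.0, 25.0))  # 8.0 (seconds, at a 5 m/s closing speed)
print(time_to_collision(40.0, 25.0, 30.0))  # None (the gap is opening)
```

A safety controller would treat small TTC values as urgent, which is why TTC is a natural input alongside lane position and steering entropy.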
ContributorsAlomari, Jamil (Author) / Mayyas, AbdRaouf (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2017
Description
We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, or teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased perceived stiffness, while a variable visual delay made participants depend more on haptic sensations in stiffness perception. We also found that participants judged springs to be stiffer when they interacted with them at faster speeds, and interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs despite the inputs being spatially separated, resulting in improved typing performance. These results show the limitations of the Maximum-Likelihood Estimation model, suggesting that a Bayesian inference model should be used instead.
ContributorsSim, Sung Hun (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created2017
Description
The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging. Teamwork requires skills that are often missing in robots and synthetic agents, and adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and ultimately team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, mechanisms need to be built into HATs to demonstrate moderately stable and flexible coordination behavior to achieve team-level goals under routine and novel task conditions.
ContributorsDemir, Mustafa, Ph.D (Author) / Cooke, Nancy J. (Thesis advisor) / Bekki, Jennifer (Committee member) / Amazeen, Polemnia G (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2017
Description
In baseball, the difference between a win and a loss can come down to a single call, such as when an umpire judges a force out at first base by comparing competing auditory and visual inputs: the ball-mitt sound and the foot-on-base sight. Yet, because sound in air travels only about 1,100 feet per second, fans observing from several hundred feet away receive auditory cues delayed by a significant fraction of a second, and thus could conceivably differ systematically in their judgments compared to the nearby umpire. The current research examines two questions: (1) How reliably and with what biases do observers judge the order of visual versus auditory events? (2) Do observers making such order judgments from far away systematically compensate for delays due to the slow speed of sound? It is hypothesized that any temporal bias will be in the direction consistent with observers not accounting for the sound delay, such that increasing viewing distance increases the bias to assume the sound occurred later. It was found that nearby observers are relatively accurate at judging whether a sound occurred before or after a simple visual event (a flash), but exhibit a systematic bias to favor visual stimuli occurring first (by about 30 msec). In contrast, distant observers did not compensate for the speed-of-sound delay, such that they systematically favored the visual cue occurring earlier as a function of viewing distance. When observers judged simple visual stimuli in motion relative to the same sound burst, the distance effect occurred as a function of the visual clarity of the ball arriving. In the baseball setting, using a large-screen projection of a baserunner, a diminished distance effect occurred due to the additional visual cues. In summary, observers generally do not account for the delay of sound due to distance.
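The sound delay at issue is simple arithmetic: viewing distance divided by the roughly 1,100 ft/s speed of sound cited above. A minimal sketch (the viewing distances below are illustrative):

```python
SPEED_OF_SOUND_FPS = 1100.0  # approximate speed of sound in air, feet per second

def sound_delay_ms(distance_ft):
    """Milliseconds by which the sound of an event lags its sight
    for an observer at the given distance (light's travel time is
    negligible at stadium scales)."""
    return distance_ft / SPEED_OF_SOUND_FPS * 1000.0

for d in (90, 300, 450):
    print(f"{d} ft: sound arrives {sound_delay_ms(d):.0f} ms late")
```

Even at 300 ft the lag exceeds the roughly 30 msec visual-first bias reported for nearby observers, which is why uncompensated delays could plausibly shift distant fans' order judgments.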
ContributorsKrynen, R. Chandler (Author) / McBeath, Michael (Thesis advisor) / Homa, Donald (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2017