Matching Items (6)
Filtering by
- All Subjects: Cognitive Psychology
- Creators: Chiou, Erin
Description
Highly automated vehicles require drivers to remain aware enough to take over
during critical events. Driver distraction is a key factor that prevents drivers from reacting
adequately, and thus there is a need for an alert to help drivers regain situational awareness
and act quickly and successfully should a critical event arise. This study
examines two aspects of alerts that could help facilitate driver takeover: mode (auditory
and tactile) and direction (towards and away). Auditory alerts appear to be somewhat
more effective than tactile alerts, though both modes produce significantly faster reaction
times than no alert. Alerts moving towards the driver also appear to be more effective
than alerts moving away from the driver. Future research should examine how
multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence
takeover times.
Contributors: Brogdon, Michael A (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Previous literature was reviewed in an effort to further investigate the link between notification levels of a cell phone and their effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering’s definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of Brake Reaction Time, Headway Variability, and Average Speed were used to operationalize driver distraction. Drivers experienced higher Brake Reaction Time and Headway Variability, with a lower Average Speed, in both experimental conditions when compared to the Control condition. This is consistent with previous research in the field, implying a distracted state. Oculomotor movement was measured as the percent of time the participant was looking at the road. There was no significant difference between the conditions in this measure. The results of this research indicate that, even while a driver is not interacting with a cell phone, no audio notification is required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
Contributors: Radina, Earl (Author) / Gray, Robert (Thesis advisor) / Chiou, Erin (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The medical field is constantly looking for technological solutions to reduce user error and improve procedures. As a potential solution for healthcare environments, Augmented Reality (AR) has received increasing attention in the past few decades due to advances in computing capabilities, lower cost, and better displays (Sauer, Khamene, Bascle, Vogt, & Rubino, 2002). Augmented Reality, as defined in Ronald Azuma’s initial survey of AR, combines virtual and real-world environments in three dimensions and in real time (Azuma, 1997). Because visualization displays used in AR are subject to human physiologic and cognitive constraints, any new system must improve on previous methods and be designed with human abilities consistently in mind (Drascic & Milgram, 1996; Kruijff, Swan, & Feiner, 2010; Ziv, Wolpe, Small, & Glick, 2006). Based on promising findings from aviation and driving (Liu & Wen, 2004; Sojourner & Antin, 1990; Ververs & Wickens, 1998), this study identifies whether the spatial proximity affordance provided by a head-mounted display or an alternative heads-up display might benefit attentional performance in a simulated routine medical task. Additionally, the present study explores how tasks of varying relatedness may relate to attentional performance differences when these tasks are presented at different spatial distances.
Contributors: del Rio, Richard A (Author) / Branaghan, Russell (Thesis advisor) / Gray, Rob (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Minimally invasive surgery is a surgical technique known for its reduced
patient recovery time. It is a procedure performed using long-reach tools and an
endoscopic camera to operate on the body through small incisions made near the point of
operation, while viewing the live camera feed on a nearby display screen. Multiple camera
views are used in various industries, such as surveillance and professional gaming, to
give users a spatial-awareness advantage regarding what is happening in the 3D space
presented to them on 2D displays. The concept has not yet effectively broken into the
medical industry. This thesis tests a multi-view camera system in which three cameras
are inserted into a laparoscopic surgical training box along with two surgical instruments,
to determine the system's impact on spatial cognition, perceived cognitive workload, and
the overall time needed to complete the task, compared to the traditional single-camera
setup. The task is a non-medical task and is one of five typically used to train
surgeons’ motor skills when initially learning minimally invasive surgical procedures.
The task is a peg transfer and was conducted by 30 people who were randomly assigned
to one of two conditions: one display or three displays. The results indicated that when
three displays were present, the overall time initially taken to complete a task was
slower; the task was perceived to be completed more easily and with less strain; and
participants had a slightly higher performance rate.
Contributors: Schroll, Katelyn (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
As deception in cyberspace becomes more dynamic, research in this area should also take a dynamic approach to battling deception and false information. Research has previously shown that people are no better than chance at detecting deception. Deceptive information in cyberspace, specifically on social media, is not exempt from this pitfall. Current practices in social media rely on users to detect false information and use appropriate discretion when deciding to share information online. This is ineffective and will predictably end with users being unable to discern true from false information at all, as deceptive information becomes more difficult to distinguish from true information. To proactively combat inaccurate and deceptive information on social media, research must be conducted to understand not only the interaction effects of false content and user characteristics, but also the user behavior that stems from this interaction. This study investigated the effects of confirmation bias and susceptibility to deception on an individual’s choice to share information, specifically to understand how these factors relate to the sharing of false controversial information.
Contributors: Chinzi, Ashley (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Becker, David V (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Human-robot teams (HRTs) have seen more frequent use over the past few years, specifically in the context of Search and Rescue (SAR) environments. Trust is an important factor in the success of HRTs. Both trust and reliance must be appropriately calibrated for the human operator to work faultlessly with a robot teammate. In highly complex and time-restrictive environments, such as a search and rescue mission following a disaster, uncertainty information may be given by the robot in the form of confidence to help properly calibrate trust and reliance. This study seeks to examine the impact that confidence information may have on trust and how it may help calibrate reliance in complex HRTs. Trust and reliance data were gathered using a simulated SAR task environment for participants who then received confidence information from the robot for one of two missions. Results from this study indicated that trust was higher when participants received confidence information from the robot; however, no clear relationship between confidence and reliance was found. The findings from this study can be used to further improve human-robot teaming in search and rescue tasks.
Contributors: Wolff, Alexandra (Author) / Cooke, Nancy J (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Rob (Committee member) / Arizona State University (Publisher)
Created: 2022