Description
The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation, as well as the potential interaction between the two factors, in a multimedia environment designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were randomly assigned in equal numbers (n = 21) to one of six experimental conditions in a 2 × 3 factorial design with visual cueing (visual cues vs. no cues) and type of self-explanation prompts (prediction prompts vs. reflection prompts vs. no prompts) as the between-subjects factors. They completed a pretest, subjective cognitive load questions, intrinsic motivation questions, and a posttest during the course of the experiment. A subsample (49 of the 126 participants) had their eye movements tracked by an eye tracker. The results revealed that (a) participants presented with visually cued animations had significantly higher learning outcome scores than their peers who viewed uncued animations, and (b) cognitive load and intrinsic motivation had different impacts on learning in multimedia due to the moderating effect of visual cueing. There were no other significant findings in terms of learning outcomes, cognitive load, intrinsic motivation, or eye movements. Limitations, implications, and future directions are discussed within the framework of cognitive load theory, the cognitive theory of multimedia learning, and the cognitive-affective theory of learning with media.
Contributors: Lin, Lijia (Author) / Atkinson, Robert (Thesis advisor) / Nelson, Brian (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Human team members show a remarkable ability to infer the state of their partners and anticipate their needs and actions. Prior research demonstrates that an artificial system can make some predictions accurately concerning artificial agents. This study investigated whether an artificial system could generate a robust Theory of Mind of human teammates. An urban search and rescue (USAR) task environment was developed to elicit human teamwork and to evaluate inference and prediction about team members by software agents and humans. The task varied team members' roles and skills, types of task synchronization and interdependence, task risk and reward, completeness of mission planning, and information asymmetry. The task was implemented in Minecraft™ and applied in a study of 64 teams, each with three remotely distributed members. An evaluation of six Artificial Social Intelligence (ASI) agents and several human observers addressed the accuracy with which each predicted team performance, inferred experimentally manipulated knowledge of team members, and predicted member actions. All agents performed above chance; humans slightly outperformed ASI agents on some tasks and significantly outperformed them on others; no single ASI agent reliably outperformed the others; and the accuracy of both ASI agents and human observers improved rapidly, though modestly, during the brief trials.

Contributors: Freeman, Jared T. (Author) / Huang, Lixiao (Author) / Woods, Matt (Author) / Cauffman, Stephen J. (Author)
Created: 2021-11-04