Description
The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load and intrinsic motivation, as well as the potential interaction between the two factors in a multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were randomly assigned in equal numbers (n = 21) to one of the six experimental conditions in a 2 × 3 factorial design with visual cueing (visual cues vs. no cues) and type of self-explanation prompts (prediction prompts vs. reflection prompts vs. no prompts) as the between-subjects factors. They completed a pretest, subjective cognitive load questions, intrinsic motivation questions, and a posttest during the course of the experiment. A subsample (49 out of 126) of the participants' eye movements were tracked by an eye tracker. The results revealed that (a) participants presented with visually cued animations had significantly higher learning outcome scores than their peers who viewed uncued animations; and (b) cognitive load and intrinsic motivation had different impacts on learning in multimedia due to the moderation effect of visual cueing. There were no other significant findings in terms of learning outcomes, cognitive load, intrinsic motivation, and eye movements. Limitations, implications and future directions are discussed within the framework of cognitive load theory, cognitive theory of multimedia learning and cognitive-affective theory of learning with media.
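The balanced random assignment described above (126 participants split evenly across the six cells of a 2 × 3 between-subjects design) can be sketched as follows. This is an illustrative sketch only; the function name and condition labels are assumptions, not artifacts of the study.

```python
import random
from itertools import product

def assign_factorial(participant_ids, cue_levels, prompt_levels, seed=0):
    """Randomly assign participants in equal numbers to each cell
    of a fully crossed two-factor between-subjects design."""
    cells = list(product(cue_levels, prompt_levels))
    ids = list(participant_ids)
    if len(ids) % len(cells):
        raise ValueError("participants cannot be split evenly across cells")
    per_cell = len(ids) // len(cells)
    random.Random(seed).shuffle(ids)  # randomize order, then slice into cells
    return {pid: cells[i // per_cell] for i, pid in enumerate(ids)}

assignment = assign_factorial(
    range(126),
    cue_levels=["visual cues", "no cues"],
    prompt_levels=["prediction", "reflection", "no prompts"],
)
```

With 126 participants and 2 × 3 = 6 cells, each condition receives exactly 21 participants, matching the design in the abstract.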
Contributors: Lin, Lijia (Author) / Atkinson, Robert (Thesis advisor) / Nelson, Brian (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Lie detection is used prominently in contemporary society for many purposes, such as pre-employment screenings, granting security clearances, and determining whether criminal or potential suspects are lying, but it is by no means limited to that scope. However, lie detection has been criticized for being subjective, unreliable, inaccurate, and susceptible to deliberate manipulation. Critics also believe that the administrator of the test influences the outcome. As a result, the polygraph machine, the contemporary device used for lie detection, has come under scrutiny when used as evidence in the courts. The purpose of this study is to use three entirely different tools and concepts to determine whether eye tracking systems, electroencephalogram (EEG), and Facial Expression Emotion Analysis (FACET) are reliable tools for lie detection. This study found that certain constructs, such as the left eye's gaze position relative to its usual position in eye tracking and engagement levels in EEG, could distinguish between truths and lies. However, the FACET proved the most reliable tool of the three, providing not just one distinguishing variable but seven, all related to emotions derived from movements in the facial muscles during the present study. The FACET emotions documented as able to distinguish between truthful and lying responses were joy, anger, fear, confusion, and frustration. In addition, overall measures of the subject's neutral and positive emotional expression were found to be distinguishing factors. The implications of this study and future directions are discussed.
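Testing whether a construct such as EEG engagement distinguishes truthful from lying responses amounts to comparing the two response groups statistically. A minimal sketch using Welch's t statistic is shown below; the engagement scores are invented illustrative values, not data from the study.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical engagement scores per response (illustrative only).
lying = [0.55, 0.60, 0.52, 0.58, 0.57]
truthful = [0.42, 0.38, 0.45, 0.40, 0.44]
t = welch_t(lying, truthful)  # large positive t suggests the groups differ
```

In practice the t statistic would be compared against a critical value (or converted to a p-value) for the appropriate degrees of freedom before claiming the construct separates truths from lies.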
Contributors: Seto, Raymond Hua (Author) / Atkinson, Robert (Thesis director) / Runger, George (Committee member) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst a vast network of potential applications, little research has been done to provide real-time collaboration capability between users of VR and MR. The idea of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works similarly to a Google document, where two or more users can see what others are doing, e.g., writing, modifying, or viewing. Similarly, the system developed during this study enables users in VR and MR to collaborate in real time.
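The Google-document analogy boils down to replicating one shared scene state to every connected client as edits arrive. A minimal in-memory sketch of that idea follows; the class names, object IDs, and transforms are hypothetical, and a real VR/MR system would replicate state over a multiplayer network rather than in one process.

```python
class SharedScene:
    """Minimal sketch of shared-state replication: each update is
    applied to one authoritative scene and broadcast to all clients."""

    def __init__(self):
        self.state = {}
        self.clients = []

    def connect(self, client):
        self.clients.append(client)
        client.state = dict(self.state)  # initial sync for late joiners

    def update(self, obj_id, transform):
        # Apply authoritatively, then broadcast so every client sees it.
        self.state[obj_id] = transform
        for client in self.clients:
            client.state[obj_id] = transform

class Client:
    def __init__(self, name):
        self.name = name
        self.state = {}

scene = SharedScene()
vr_user, mr_user = Client("VR"), Client("MR")
scene.connect(vr_user)
scene.connect(mr_user)
scene.update("chair_seat", {"position": (0.0, 0.5, 1.0)})
```

After the update, both the VR and MR clients hold the same view of the scene, which is the property that lets users watch each other's actions in real time.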

The study of developing a real-time cross-platform collaboration system between VR and MR takes into consideration a scenario in which multiple device users are connected to a multiplayer network where they are guided to perform various tasks concurrently.

Usability testing was conducted to evaluate participant perceptions of the system. Users were required to assemble a chair in alternating turns; thereafter, they were required to fill out a survey and give an audio interview. Results collected from the participants showed positive feedback towards using VR and MR for collaboration. However, several limitations of the current generation of devices hinder mass adoption. Devices with better performance factors will lead to wider adoption.
Contributors: Seth, Nayan Sateesh (Author) / Nelson, Brian (Thesis advisor) / Walker, Erin (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Human-Robot collaboration can be a challenging exercise especially when both the human and the robot want to work simultaneously on a given task. It becomes difficult for the human to understand the intentions of the robot and vice-versa. To overcome this problem, a novel approach using the concept of Mixed-Reality has been proposed, which uses the surrounding space as the canvas to augment projected information on and around 3D objects. A vision based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware as well as context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries.
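A human-aware system of the kind described above needs, at minimum, a check of whether the robot's next motion brings it near the tracked human. The sketch below shows one plausible form of that proximity check; the function name, positions, and safety radius are assumptions for illustration, not the thesis's actual implementation.

```python
def should_warn(human_pos, robot_target, safety_radius=0.5):
    """Flag a warning when the robot's next motion target falls
    within a safety radius of the tracked human position (meters)."""
    dist = sum((h - r) ** 2 for h, r in zip(human_pos, robot_target)) ** 0.5
    return dist < safety_radius

# Human standing 0.22 m from the robot's next target: warn.
near = should_warn((0.0, 0.0, 0.0), (0.2, 0.1, 0.0))
# Target 2 m away: no warning needed.
far = should_warn((0.0, 0.0, 0.0), (2.0, 0.0, 0.0))
```

In the projected mixed-reality setting, a positive check would drive a visual warning augmented onto the workspace rather than a console message.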

An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task would result in higher performance than using other traditional methods such as printed instructions and mobile devices. Multiple measuring tools were devised to analyze the data, which led to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future.
Contributors: Rathore, Yash K (Author) / Amor, Hani Ben (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Emerging information and communication technology (ICT) has had an enormous effect on the building architecture, engineering, construction and operation (AECO) fields in recent decades. The effects have resonated in several disciplines, such as project information flow, design representation and communication, and Building Information Modeling (BIM) approaches. However, these effects can potentially impact communication and coordination of virtual design content in both the design and construction phases. Therefore, given the great potential for emerging technologies in construction projects, it is essential to understand how these technologies influence virtual design information within organizations as well as individuals' behaviors. This research focuses on understanding current emerging technologies and their impacts on projects' virtual design information and on communication among project stakeholders within AECO organizations.
Contributors: Alsafouri, Suleiman (Author) / Ayer, Steven (Thesis advisor) / Tang, Pingbo (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017