Matching Items (5)
Description

Statistics is taught at every level of education, yet teachers often must assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from an interest in exploring educational applications of augmented reality (AR) delivered via mobile technology that could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, this study compared the learning gains of 252 undergraduate and graduate students on a pretest and posttest given before and after interacting with one of three types of augmented reality experience: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low) as measured by the student's pretest score. Taking prior knowledge into account, students with low prior knowledge assigned to either the high or low AR experience had statistically significantly higher learning gains than those assigned to no AR experience. On the other hand, the results showed no statistically significant difference between students assigned to work individually versus in pairs.
Students assigned to either the high or low AR experience perceived a statistically significantly higher level of engagement than their no-AR counterparts. Students with low prior knowledge benefited the most from the high AR condition in learning gains. Overall, the AR application performed well in providing a hands-on experience for working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
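
The pre/post gain comparison described above can be sketched as a minimal computation; the records, scores, and condition labels below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of computing mean learning gains per AR condition
# in a 3 (AR level) x 2 (collaboration) design. All values are invented.
from statistics import mean

# Each record: (ar_level, collaboration, pretest, posttest)
records = [
    ("high_ar", "pairs", 4, 9),
    ("high_ar", "solo",  3, 8),
    ("low_ar",  "pairs", 5, 9),
    ("low_ar",  "solo",  4, 7),
    ("no_ar",   "pairs", 4, 6),
    ("no_ar",   "solo",  5, 6),
]

def mean_gain(ar_level):
    """Average posttest-minus-pretest gain for one AR condition."""
    return mean(post - pre for ar, _, pre, post in records if ar == ar_level)

for level in ("high_ar", "low_ar", "no_ar"):
    print(level, mean_gain(level))
```

In the actual study, such gains would then enter a factorial analysis with prior knowledge as a moderating variable.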
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Traditional usability methods in Human-Computer Interaction (HCI) have been used extensively to understand the usability of products. Measurements of user experience (UX) in traditional HCI studies rely mostly on task performance and observable user interactions with the product or service, captured through methods such as usability tests and contextual inquiry, and on subjective self-report data, including questionnaires and interviews. However, these studies fail to directly reflect a user's psychological involvement and, further, fail to explain the underlying cognitive processing and related emotional arousal. Thus, capturing how users think and feel while using a product remains a vital challenge for user experience evaluation studies. Conversely, recent research has revealed that sensor-based affect detection technologies, such as eye tracking, electroencephalography (EEG), galvanic skin response (GSR), and facial expression analysis, effectively capture affective states and physiological responses. These methods are efficient indicators of cognitive involvement and emotional arousal and constitute effective strategies for a comprehensive measurement of UX. The literature review shows that the impacts of sensor-based affect detection systems on UX evaluation fall into two groups: (1) confirmatory, validating the results obtained from traditional usability methods; and (2) complementary, enhancing the findings or providing more precise and valid evidence. Both provide comprehensive findings that uncover issues related to mental and physiological pathways and enhance the design of products and services. Therefore, this dissertation claims that integrating sensor-based affect detection technologies can efficiently address the current gaps and weaknesses of traditional usability methods.
The dissertation revealed that a multi-sensor-based UX evaluation approach using biometric tools and software corroborated the user experience identified by traditional UX methods during an online purchasing task. The use of these systems enhanced the findings and provided more precise and valid evidence for predicting consumer purchasing preferences. Thus, their impact on the overall UX evaluation was "complementary." The dissertation also described the unique contributions of each tool and recommended ways user experience researchers can combine sensor-based and traditional UX approaches to explain consumer purchasing preferences.
Contributors: Kula, Irfan (Author) / Atkinson, Robert K (Thesis advisor) / Roscoe, Rod D. (Thesis advisor) / Branaghan, Russell J (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

This dissertation proposes a new set of analytical methods for high-dimensional physiological sensors. The methodologies developed in this work were motivated by problems in learning science, but they also apply to numerous disciplines where high-dimensional signals are present. In the education field, more data is now available from traditional sources, and there is an important need for analytical methods that translate this data into improved learning. Affective Computing, the study of techniques for developing systems that recognize and model human emotions, integrates different physiological signals, such as the electroencephalogram (EEG) and electromyogram (EMG), to detect and model emotions that can later be used to improve these learning systems.

The first contribution proposes an event-crossover (ECO) methodology to analyze performance in learning environments. The methodology is relevant to studies that evaluate the relationships between sentinel events in a learning environment and a physiological measurement provided in real time.

The second contribution introduces analytical methods to study relationships between multi-dimensional physiological signals and sentinel events in a learning environment. The proposed methodology uses different statistical techniques to learn physiological patterns, in the form of node activations, near the time of events.

The third contribution addresses the challenge of performance prediction from physiological signals. Features from the sensors that could be computed early in the learning activity were developed as input to a machine learning model. The objective is to predict the success or failure of the student in the learning environment early in the activity. EEG was used as the physiological signal to train a pattern recognition algorithm to derive meta-affective states.
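
The early-prediction idea in this contribution can be illustrated with a minimal classifier; the feature construction and data below are invented and do not reflect the dissertation's actual pipeline.

```python
# Hypothetical sketch: predict pass/fail from features computed early in
# the activity (e.g., summary statistics of an EEG window). Data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 60
X = rng.normal(size=(n, 2))          # two invented early-window features
# Invented ground truth: the first feature drives success, plus noise.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A real pipeline would of course evaluate on held-out students rather than training accuracy.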

The last contribution introduced a methodology to predict a learner's performance using Bayesian Belief Networks (BBNs). Posterior probabilities of latent nodes were used as inputs to a predictive model in real time as evidence accumulated in the BBN.

The methodology was applied to data streams from a video game and from a Damage Control Simulator which were used to predict and quantify performance. The proposed methods provide cognitive scientists with new tools to analyze subjects in learning environments.
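
The posterior-updating idea behind the last contribution can be sketched with a single binary latent node; the conditional probabilities and evidence sequence below are invented for illustration and are far simpler than a full BBN.

```python
# Hypothetical sketch: accumulate evidence about a latent "skilled" node
# and expose its posterior as a real-time input to a predictive model.

def update(prior, p_obs_if_skilled, p_obs_if_unskilled):
    """One Bayesian update of P(skilled) after a single observation."""
    num = p_obs_if_skilled * prior
    return num / (num + p_obs_if_unskilled * (1.0 - prior))

posterior = 0.5                       # uninformative prior on the latent node
for success in (True, True, False, True):
    if success:                       # success is more likely if skilled
        posterior = update(posterior, 0.8, 0.3)
    else:                             # failure is more likely if unskilled
        posterior = update(posterior, 0.2, 0.7)
    # here 'posterior' would be fed to the predictive model in real time
print(round(posterior, 3))
```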
Contributors: Lujan Moreno, Gustavo A. (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert K (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Villalobos, Rene (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Evidence suggests that Augmented Reality (AR) may be a powerful tool for alleviating certain lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change in the number of misconceptions expressed by users of AR versions of the simulation.

The use of realistic representations of objects is common among AR developers. However, this contradicts well-tested practices of multimedia design that argue against the addition of unnecessary elements. This study therefore also compared the use of representational visualizations in AR (in this case, models of ladybug beetles) to symbolic representations (in this case, colored circles).

To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions.

Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found in the total number of misconceptions expressed between participants in the representational and symbolic AR-based conditions, or in motivation. Contrary to the expectation that the simulation would alleviate misconceptions, the average number of misconceptions expressed by participants increased after the treatment. This is theorized to be due to the juxtaposition of virtual and real-world entities resulting in a reduction in assumed intentionality.
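
The pair of planned orthogonal contrasts described above can be written down directly; the contrast codes below are a conventional choice for this design, not taken from the dissertation itself (condition order assumed: non-AR, symbolic AR, representational AR).

```python
# Hypothetical contrast codes for the two research questions.
c1 = [-2, 1, 1]   # RQ1: both AR conditions vs. the non-AR control
c2 = [0, 1, -1]   # RQ2: symbolic AR vs. representational AR

# Planned contrasts must sum to zero, and orthogonality (zero dot
# product) lets the two questions be tested independently.
assert sum(c1) == 0 and sum(c2) == 0
dot = sum(a * b for a, b in zip(c1, c2))
print("dot product:", dot)
```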
Contributors: Henry, Matthew McClellan (Author) / Atkinson, Robert K (Thesis advisor) / Johnson-Glenberg, Mina C (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The present study explored the use of augmented reality (AR) technology to support cognitive modeling in an art-based learning environment. The AR application used in this study made the thought processes and observational techniques of art experts visible for the learning benefit of novices through digital annotations, overlays, and side-by-side comparisons that, when viewed on a mobile device, appear directly on works of art.

Using a 2 x 3 factorial design, this study compared learner outcomes and motivation across technologies (audio-only, video, AR) and groupings (individuals, dyads) with 182 undergraduate and graduate students who self-identified as art novices. Learner outcomes were measured by post-activity spoken responses to a painting reproduction, with the pre-activity response as a moderating variable. Motivation was measured by the sum score of a reduced version of the Instructional Materials Motivational Survey (IMMS), accounting for attention, relevance, confidence, and satisfaction, with total time spent in the learning activity as the moderating variable. Information on participant demographics, technology usage, and art experience was also collected.

Participants were randomly assigned to one of six conditions that differed by technology and grouping before completing a learning activity in which they viewed four high-resolution, printed-to-scale painting reproductions in a gallery-like setting while listening to audio-recorded conversations of two experts discussing the actual paintings. All participants listened to the expert conversations, but those in the video and AR conditions also received visual supports via a mobile device.

Though no main effects were found for technology or grouping, findings did include statistically significantly higher learner outcomes on the elements of design subscale (the characteristics most represented by the visual supports of the AR application) for the AR conditions than for the audio-only conditions. When participants saw digital representations of line, shape, and color directly on the paintings, they were more likely to identify those same features in the post-activity painting. Seeing what the experts see, in a situated environment, produced evidence that participants began to view paintings in a manner similar to the experts. This demonstrates the value of the temporal and spatial contiguity afforded by AR in cognitive modeling learning environments.
Contributors: Shapera, Daniel Michael (Author) / Atkinson, Robert K (Thesis advisor) / Nelson, Brian C (Committee member) / Erickson, Mary (Committee member) / Arizona State University (Publisher)
Created: 2016