Matching Items (15)
Description
This dissertation proposes a new set of analytical methods for high-dimensional physiological sensors. The methodologies developed in this work were motivated by problems in learning science, but they also apply to numerous disciplines where high-dimensional signals are present. In the education field, more data is now available from traditional sources, and there is a pressing need for analytical methods that translate this data into improved learning. Affective computing, the study of techniques for developing systems that recognize and model human emotions, integrates physiological signals such as the electroencephalogram (EEG) and electromyogram (EMG) to detect and model emotions, which can in turn be used to improve these learning systems.
The first contribution proposes an event-crossover (ECO) methodology to analyze performance in learning environments. The methodology applies to studies that evaluate the relationships between sentinel events in a learning environment and a physiological measurement provided in real time.
The second contribution introduces analytical methods to study relationships between multi-dimensional physiological signals and sentinel events in a learning environment. The proposed methodology uses different statistical techniques to learn physiological patterns, in the form of node activations, near the time of events.
The third contribution addresses the challenge of performance prediction from physiological signals. Features that could be computed from the sensors early in the learning activity were developed as inputs to a machine learning model. The objective is to predict the student's success or failure in the learning environment early in the activity. EEG was used as the physiological signal to train a pattern recognition algorithm to derive meta-affective states.
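As a rough illustration of early prediction from a physiological stream (this is a sketch, not the dissertation's implementation), the snippet below computes simple summary features from only the opening window of a signal and feeds them to a stand-in classifier. All names, thresholds, and the feature set are hypothetical.

```python
# Hypothetical sketch: summarize only the first seconds of a physiological
# signal, then predict success/failure from those early features.

def early_features(signal, fs, window_s=30):
    """Summarize only the first `window_s` seconds of the signal (fs = sampling rate)."""
    window = signal[: int(fs * window_s)]
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    # Mean absolute first difference: a crude proxy for signal volatility.
    mobility = sum(abs(b - a) for a, b in zip(window, window[1:])) / (n - 1)
    return {"mean": mean, "variance": var, "mobility": mobility}

def predict_success(features, var_threshold=1.0):
    """Toy threshold rule standing in for a trained machine learning model."""
    return features["variance"] < var_threshold

# Two synthetic signals at 10 Hz: one low-variance, one erratic.
calm = [0.1 * ((i % 5) - 2) for i in range(300)]
erratic = [((i * 19) % 100 - 50) / 10 for i in range(300)]

f_calm = early_features(calm, fs=10)
f_erratic = early_features(erratic, fs=10)
```

In practice the features would be EEG-derived (e.g., band powers) and the rule would be replaced by a trained model; the point here is only that the inputs come exclusively from the early window.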
The last contribution introduces a methodology to predict a learner's performance using Bayesian Belief Networks (BBNs). Posterior probabilities of latent nodes were used as inputs to a predictive model in real time as evidence accumulated in the BBN.
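The core idea can be sketched with a single binary latent node whose posterior is updated as evidence arrives and then consumed by a downstream predictor. This is a minimal illustration, not the dissertation's network: the structure, probabilities, and names below are invented.

```python
# Hypothetical sketch: sequential Bayesian updating of one latent binary node
# ("skilled" vs. not), treating each observed action as conditionally
# independent evidence. The running posterior is the real-time input to a
# downstream predictive model.

P_SKILLED = 0.5                      # prior on the latent node
P_CORRECT = {True: 0.8, False: 0.3}  # P(correct action | skilled / unskilled)

def update_posterior(posterior, action_correct):
    """One step of Bayes' rule: fold a single observation into the posterior."""
    like_s = P_CORRECT[True] if action_correct else 1 - P_CORRECT[True]
    like_u = P_CORRECT[False] if action_correct else 1 - P_CORRECT[False]
    numerator = like_s * posterior
    return numerator / (numerator + like_u * (1 - posterior))

def predict_performance(posterior, threshold=0.7):
    """Toy stand-in for the predictive model consuming the posterior."""
    return posterior >= threshold

posterior = P_SKILLED
for correct in [True, True, False, True, True]:  # incoming evidence stream
    posterior = update_posterior(posterior, correct)
```

A real BBN would have many interconnected latent and observed nodes and would use a general inference engine, but the pattern is the same: each new piece of evidence shifts the latent posteriors, and those posteriors are the features handed to the predictor.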
The methodology was applied to data streams from a video game and from a Damage Control Simulator, which were used to predict and quantify performance. The proposed methods provide cognitive scientists with new tools to analyze subjects in learning environments.
Contributors: Lujan Moreno, Gustavo A. (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert K (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Villalobos, Rene (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Training for law enforcement on effective ways of intervening in mental health crises is limited. What is available tends to be costly to implement, labor-intensive, and requires officers to opt in. DEFUSE, an interactive online training program, was developed specifically to train law enforcement on mental illness and de-escalation skills. Derived from a stress inoculation framework, the curriculum provides education, skills training, and rehearsal; it is brief, cost-effective, and scalable to officers across the country. Participants were randomly assigned to either the experimental or the delayed-treatment control condition. A multivariate analysis of variance yielded a significant treatment-by-repeated-measures interaction, and univariate analyses confirmed improvement on all of the measures (e.g., empathy, stigma, self-efficacy, behavioral outcomes, knowledge). Replication dependent t-test analyses conducted on the control condition following completion of DEFUSE confirmed significant improvement on four of the measures and marginal significance on the fifth. Participant responses to BPAD video vignettes revealed significant differences in objective behavioral proficiency for participants who completed the online course. DEFUSE is a powerful tool for training law enforcement on mental illness and effective strategies for intervening in mental health crises. Considerations for future study are discussed.
Contributors: Hacker, Robyn Lea (Author) / Horan, John J (Thesis advisor) / Homer, Judith (Committee member) / Atkinson, Robert K (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The present study explored the use of augmented reality (AR) technology to support cognitive modeling in an art-based learning environment. The AR application used in this study made visible the thought processes and observational techniques of art experts for the learning benefit of novices through digital annotations, overlays, and side-by-side comparisons that, when viewed on a mobile device, appear directly on works of art.
Using a 2 x 3 factorial design, this study compared learner outcomes and motivation across technologies (audio-only, video, AR) and groupings (individuals, dyads) with 182 undergraduate and graduate students who were self-identified art novices. Learner outcomes were measured by post-activity spoken responses to a painting reproduction, with the pre-activity response as a moderating variable. Motivation was measured by the sum score of a reduced version of the Instructional Materials Motivation Survey (IMMS), accounting for attention, relevance, confidence, and satisfaction, with total time spent in the learning activity as the moderating variable. Information on participant demographics, technology usage, and art experience was also collected.
Participants were randomly assigned to one of six conditions that differed by technology and grouping before completing a learning activity in which they viewed four high-resolution, printed-to-scale painting reproductions in a gallery-like setting while listening to audio-recorded conversations of two experts discussing the actual paintings. All participants listened to the expert conversations, but the video and AR conditions also received visual supports via a mobile device.
Though no main effects were found for technology or groupings, findings did include statistically significantly higher learner outcomes on the elements of design subscale (the characteristics most represented by the visual supports of the AR application) relative to the audio-only conditions. When participants saw digital representations of line, shape, and color directly on the paintings, they were more likely to identify those same features in the post-activity painting. Seeing what the experts see, in a situated environment, yielded evidence that participants began to view paintings in a manner similar to the experts. This is evidence of the value of the temporal and spatial contiguity afforded by AR in cognitive modeling learning environments.
Contributors: Shapera, Daniel Michael (Author) / Atkinson, Robert K (Thesis advisor) / Nelson, Brian C (Committee member) / Erickson, Mary (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Evidence suggests that augmented reality (AR) may be a powerful tool for alleviating certain lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change in the number of misconceptions expressed by users of AR versions of the simulation.
The use of realistic representations of objects is common among AR developers. However, this practice contradicts well-tested principles of multimedia design that argue against the addition of unnecessary elements. This study therefore also compared representational visualizations in AR (in this case, models of ladybug beetles) to symbolic representations (in this case, colored circles).
To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions.
Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found in the total number of misconceptions expressed between participants in the representational and symbolic AR-based conditions, or in motivation. Contrary to the expectation that the simulation would alleviate misconceptions, the average change in the number of misconceptions expressed by participants increased. This is theorized to be due to the juxtaposition of virtual and real-world entities resulting in a reduction in assumed intentionality.
Contributors: Henry, Matthew McClellan (Author) / Atkinson, Robert K (Thesis advisor) / Johnson-Glenberg, Mina C (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Currently, recommender systems are used extensively to find the right audience for the "right" content across various platforms. Recommendations generated by these systems aim to offer relevant items to users. Different approaches have been suggested to solve this problem, mainly by using the user's rating history or by identifying the preferences of similar users. Most existing recommendation systems are formulated in an identical fashion: a model is trained to capture the underlying preferences of users over different kinds of items. Once deployed, the model makes precise personalized recommendations, under the assumption that the preferences of users are perfectly reflected by the historical data. However, such user data might be limited in practice, and the characteristics of users may constantly evolve during their intensive interaction with recommendation systems.
Moreover, most of these recommender systems suffer from the cold-start problem, where insufficient data about new users or products degrades overall recommendation quality. In the current study, we built a recommender system to recommend movies to users. A biclustering algorithm first clusters the users and movies simultaneously to generate explainable recommendations, and these biclusters are then used to form a gridworld in which Q-learning learns a policy for traversing the grid. The reward function uses the Jaccard index, a measure of the users common to two biclusters. Demographic details of new users are used to generate recommendations, which also addresses the cold-start problem.
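The two technical ingredients described above can be sketched briefly. The snippet below (an illustration, not the thesis implementation) computes a Jaccard-index reward between the user sets of two biclusters and runs tabular Q-learning over a tiny gridworld of bicluster states; the bicluster contents, grid layout, and hyperparameters are all invented for the example.

```python
# Hypothetical sketch: Jaccard-index reward between biclusters' user sets,
# plus a tabular Q-learning update over a toy gridworld of bicluster states.

def jaccard(users_a, users_b):
    """Jaccard index: shared users divided by all users across two biclusters."""
    a, b = set(users_a), set(users_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Each state is a bicluster, identified here only by its user set.
biclusters = {
    0: {"u1", "u2", "u3"},
    1: {"u2", "u3", "u4"},
    2: {"u7", "u8"},
}
transitions = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # moves allowed in the grid

ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in transitions for a in transitions[s]}

def q_update(state, action):
    """One Q-learning step; the reward is the overlap with the next bicluster."""
    reward = jaccard(biclusters[state], biclusters[action])
    best_next = max(Q[(action, a2)] for a2 in transitions[action])
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

for _ in range(50):  # crude training sweep over every state-action pair
    for s in transitions:
        for a in transitions[s]:
            q_update(s, a)
```

Because biclusters 0 and 1 share two of four users while bicluster 2 shares none, the learned policy favors moving between overlapping biclusters, which is the intuition behind using overlap as the traversal reward.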
Lastly, the implemented algorithm is evaluated on a real-world dataset against widely used recommendation algorithms, including its performance on cold-start cases.
Contributors: Sargar, Rushikesh Bapu (Author) / Atkinson, Robert K (Thesis advisor) / Chen, Yinong (Thesis advisor) / Chavez-Echeagaray, Maria Elena (Committee member) / Arizona State University (Publisher)
Created: 2020