This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description

EMBRACE (Enhanced Moved By Reading to Accelerate Comprehension in English) is an iPad application that uses the Moved By Reading strategy to help improve the reading comprehension skills of bilingual (Spanish-speaking) English Language Learners (ELLs). In EMBRACE, students read the text of a story and then move images corresponding to the text that they read. According to embodied cognition theory, this grounds reading comprehension in physical experience and is thus more engaging.

In this thesis, I used log data from 20 students in grades 2-5 to design a skill model for students using EMBRACE. A skill model is the set of knowledge components that a student needs to master in order to comprehend the text in EMBRACE. A good skill model improves our understanding of the mistakes students make and thus aids in the design of useful feedback for the student. In this context, the skill model consists of the vocabulary and syntax associated with the steps that students performed. I mapped each step in EMBRACE to one or more skills (vocabulary and syntax) from the model. After every step, the skill levels are updated in the model using the Bayesian Knowledge Tracing algorithm: if the student answered the previous step incorrectly, the corresponding skills are decremented, and if the student answered correctly, the corresponding skills are incremented.
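As a rough illustration of the update step described above, a minimal Bayesian Knowledge Tracing update in Python might look like the following (the slip, guess, and learning-rate parameters here are illustrative placeholders, not values from the thesis):

    def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        """One Bayesian Knowledge Tracing step: revise the probability
        that a skill is mastered after observing one step outcome.
        Parameter values are placeholders, not taken from the thesis."""
        if correct:
            # P(mastered | correct response)
            num = p_mastery * (1 - p_slip)
            denom = num + (1 - p_mastery) * p_guess
        else:
            # P(mastered | incorrect response)
            num = p_mastery * p_slip
            denom = num + (1 - p_mastery) * (1 - p_guess)
        posterior = num / denom
        # Allow for the chance that the skill was learned on this step.
        return posterior + (1 - posterior) * p_learn

Applied after each step, this raises the estimated skill level following correct responses and lowers it following incorrect ones, as described above.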

I then evaluated the students' predicted scores (computed from their skill levels) by correlating them with their posttest scores. The two sets of scores were not highly correlated, but the results gave insights into potential improvements to the system with respect to user interaction, the posttest, and the modeling algorithm.
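A minimal sketch of such an evaluation, using Pearson correlation over hypothetical scores (the actual data appear in the thesis):

    from scipy.stats import pearsonr

    predicted = [0.62, 0.71, 0.55, 0.80]  # hypothetical predicted scores from skill levels
    posttest = [0.58, 0.65, 0.70, 0.75]   # hypothetical posttest scores

    r, p_value = pearsonr(predicted, posttest)
    print(f"Pearson r = {r:.2f} (p = {p_value:.2f})")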
Contributors: Furtado, Nicolette Dolores (Author) / Walker, Erin (Thesis advisor) / Hsiao, Ihan (Committee member) / Restrepo, M. Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer-Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not human-interpretable and do not facilitate conceptual feedback to learners. Thus, fundamental research is needed toward designing systems that are modular and explainable. The explanations from these systems can then be used to produce feedback that aids the learning process.

In this work, I present novel approaches for the recognition of location, movement, and handshape, which are components of American Sign Language (ASL), using both wrist-worn sensors and webcams. Finally, I present Learn2Sign (L2S), a chatbot-based AI tutor that can provide fine-grained conceptual feedback to learners of ASL using these modular recognition approaches. L2S is designed to provide feedback directly relating to the fundamental concepts of ASL using explainable AI. I present system performance results in terms of precision, recall, and F1 scores, as well as validation results for the learning outcomes of users. Both retention and execution tests for 26 participants on 14 different ASL words learned using Learn2Sign are presented. Finally, I also present the results of a post-usage usability survey for all participants. In this work, I found that learners who received live feedback on their executions improved both their execution and retention performance. The average increase in execution performance was 28 percentage points, and that for retention was 4 percentage points.
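For reference, the reported metrics follow their standard definitions; here is a minimal sketch with hypothetical counts for a single recognized class (the counts are not from the thesis):

    def precision_recall_f1(tp, fp, fn):
        """Compute precision, recall, and F1 from true positive,
        false positive, and false negative counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical counts for one handshape class.
    print(precision_recall_f1(tp=90, fp=10, fn=15))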
Contributors: Paudyal, Prajwal (Author) / Gupta, Sandeep (Thesis advisor) / Banerjee, Ayan (Committee member) / Hsiao, Ihan (Committee member) / Azuma, Tamiko (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020