Description
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them in other sources or write their own. To supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.
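
The abstract does not specify the generation algorithm. As a minimal sketch of the general idea, the network can be stored as (subject, relation, object) triples, with each relation type paired with an open-response question template; all names and content below are illustrative, not taken from the thesis.

```python
# Hypothetical sketch: generating open-response questions from a
# semantic network stored as (subject, relation, object) triples.

# Each relation type maps to an open-response question template.
TEMPLATES = {
    "has_function": "What is the function of {subject}?",
    "is_part_of": "What larger structure is {subject} a part of?",
    "produces": "What does {subject} produce?",
}

# A tiny biology semantic network (illustrative content only).
TRIPLES = [
    ("the mitochondrion", "has_function", "ATP production"),
    ("the thylakoid", "is_part_of", "the chloroplast"),
    ("the ribosome", "produces", "proteins"),
]

def generate_questions(triples, templates):
    """Instantiate one question (with its reference answer) per triple."""
    for subject, relation, obj in triples:
        template = templates.get(relation)
        if template is not None:
            yield template.format(subject=subject), obj

for question, answer in generate_questions(TRIPLES, TEMPLATES):
    print(question, "->", answer)
```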

To improve the student learning experience, adaptive question selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of the student’s current competence, so that a suitable question could be selected based on the student’s previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a maladaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
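
Bayesian Knowledge Tracing maintains, for each skill, a running estimate of the probability that the student has mastered it, updated after every observed response. Below is a minimal sketch of the standard BKT update together with one simple mastery-based selection rule; the parameter values and the selection policy are illustrative assumptions, not the ones used in the thesis.

```python
# Minimal sketch of Bayesian Knowledge Tracing (BKT).
# Parameter values are illustrative defaults, not those from the thesis.

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Update P(skill known) after one observed response."""
    if correct:
        # P(known | correct response), discounting lucky guesses.
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # P(known | incorrect response), discounting careless slips.
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The student may also learn the skill during this step.
    return posterior + (1 - posterior) * p_transit

def select_question(questions, mastery):
    """One plausible adaptive policy: target the least-mastered skill."""
    return min(questions, key=lambda q: mastery[q["skill"]])

mastery = {"photosynthesis": 0.3, "cell_division": 0.7}
questions = [{"id": 1, "skill": "photosynthesis"},
             {"id": 2, "skill": "cell_division"}]
q = select_question(questions, mastery)
mastery[q["skill"]] = bkt_update(mastery[q["skill"]], correct=True)
```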

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and the list of paragraphs of a textbook, the goal of the task was to map each question to its related paragraphs. An algorithm was developed whose performance was comparable to that of human annotators.
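
The abstract does not name the mapping algorithm. A standard baseline for this kind of task ranks paragraphs by TF-IDF cosine similarity to the question text; the sketch below (using scikit-learn) shows that baseline, not the thesis’s method.

```python
# Hypothetical baseline for question-to-paragraph mapping: rank the
# textbook's paragraphs by TF-IDF cosine similarity to the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def map_question(question, paragraphs, top_k=3):
    """Return indices of the top_k paragraphs most similar to the question."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the paragraphs plus the question so they share one vocabulary.
    matrix = vectorizer.fit_transform(paragraphs + [question])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sims.argsort()[::-1][:top_k]

paragraphs = [
    "Photosynthesis converts light energy into chemical energy.",
    "Mitosis divides one cell into two identical daughter cells.",
]
print(map_question("Where does photosynthesis occur?", paragraphs))
```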

A multiple-choice question with high-quality distractors (incorrect answer options) can be pedagogically valuable, and it is much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In a study with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
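
Question difficulty and discrimination are standard item statistics from classical test theory, and distractor usefulness is typically measured by how often each distractor is chosen. The sketch below shows the usual definitions of the first two (proportion correct, and the upper-minus-lower-group difference); the thesis may compute these measures differently.

```python
# Sketch of classical item statistics; formulas are the standard
# classical-test-theory definitions, not necessarily those in the thesis.
import numpy as np

def item_statistics(responses, frac=0.27):
    """responses: 2D array (students x items) of 0/1 correctness.
    Returns per-item difficulty and upper-lower discrimination."""
    responses = np.asarray(responses, dtype=float)
    order = np.argsort(responses.sum(axis=1))      # students by total score
    k = max(1, int(frac * len(order)))             # group size (e.g. top 27%)
    lower, upper = responses[order[:k]], responses[order[-k:]]
    difficulty = responses.mean(axis=0)            # proportion answering correctly
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

data = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 0, 0]]
difficulty, discrimination = item_statistics(data)
print(difficulty, discrimination)
```
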
Contributors: Zhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
EMBRACE (Enhanced Moved By Reading to Accelerate Comprehension in English) is an iPad application that uses the Moved By Reading strategy to help improve the reading comprehension skills of bilingual (Spanish-speaking) English Language Learners (ELLs). In EMBRACE, students read the text of a story and then move images corresponding to the text they read. According to embodied cognition theory, this grounds reading comprehension in physical experience and thus makes reading more engaging.

In this thesis, I used log data from 20 students in grades 2-5 to design a skill model for students using EMBRACE. A skill model is the set of knowledge components that a student needs to master in order to comprehend the text in EMBRACE. A good skill model improves our understanding of the mistakes students make and thus aids the design of useful feedback for the student. In this context, the skill model consists of the vocabulary and syntax associated with the steps that students performed. I mapped each step in EMBRACE to one or more skills (vocabulary and syntax) from the model. After every step, the corresponding skill levels are updated through the Bayesian Knowledge Tracing algorithm: if the student performed the step incorrectly, the estimates of the corresponding skills are decremented, and if the student performed it correctly, they are incremented.
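
A minimal sketch of this bookkeeping follows: each logged step is mapped to the skills it exercises, and every mapped skill is re-estimated with a standard BKT update after the step. The step names, skill names, and parameter values are all hypothetical.

```python
# Hypothetical sketch of the step-to-skill bookkeeping described above.
# Each EMBRACE step maps to the vocabulary/syntax skills it exercises;
# after the step, each mapped skill is updated with the BKT rule.

def bkt_update(p, correct, slip=0.1, guess=0.2, transit=0.1):
    """Standard BKT posterior plus learning transition (illustrative params)."""
    num = p * (1 - slip) if correct else p * slip
    den = num + ((1 - p) * guess if correct else (1 - p) * (1 - guess))
    posterior = num / den
    return posterior + (1 - posterior) * transit

# Step -> skills mapping and initial mastery estimates (hypothetical names).
step_skills = {"move_farmer_to_barn": ["vocab:farmer", "syntax:prep_phrase"]}
mastery = {"vocab:farmer": 0.5, "syntax:prep_phrase": 0.5}

def process_step(step_id, correct):
    for skill in step_skills[step_id]:
        mastery[skill] = bkt_update(mastery[skill], correct)

process_step("move_farmer_to_barn", correct=False)
print(mastery)  # both estimates drop after the incorrect step
```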

I then evaluated the students’ predicted scores (computed from their skill levels) by correlating them with their posttest scores. The two sets of scores were not highly correlated, but the results gave insight into potential improvements to the system with respect to user interaction, the posttest, and the modeling algorithm.
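
The comparison itself is a straightforward correlation; a minimal sketch using Pearson’s r follows (the abstract does not say which correlation coefficient was used, and the data values below are placeholders).

```python
# Sketch of comparing predicted scores with posttest scores via
# Pearson correlation; all data values are placeholders.
from scipy.stats import pearsonr

predicted = [0.42, 0.55, 0.31, 0.78, 0.66]  # from final skill levels
posttest = [0.50, 0.40, 0.35, 0.80, 0.60]   # observed posttest scores

r, p_value = pearsonr(predicted, posttest)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```
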
Contributors: Furtado, Nicolette Dolores (Author) / Walker, Erin (Thesis advisor) / Hsiao, Ihan (Committee member) / Restrepo, M. Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2016