To improve students’ learning experience, adaptive question selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of each student’s current competence, so that a suitable question could be selected based on the student’s previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a maladaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group showed larger learning gains than participants in the control group.
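For concreteness, the following is a minimal sketch of the standard Bayesian Knowledge Tracing update that such an embedded assessment typically relies on. The four parameters (initial mastery, slip, guess, and learn probabilities) and the values used here are illustrative defaults, not the parameters fitted in the study.

```python
# Minimal sketch of the standard Bayesian Knowledge Tracing update.
# Parameter values below are illustrative, not fitted to the study's data.

def bkt_update(p_know: float, correct: bool,
               p_slip: float, p_guess: float, p_learn: float) -> float:
    """Return the posterior P(skill known) after observing one response."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for the chance the skill was learned on this opportunity.
    return posterior + (1 - posterior) * p_learn

# Example: mastery estimate rises with correct answers, drops after an error.
p = 0.3  # initial mastery estimate
for obs in [True, True, False, True]:
    p = bkt_update(p, obs, p_slip=0.1, p_guess=0.2, p_learn=0.15)
    print(f"P(known) = {p:.3f}")
```

A selection policy can then, for example, pick the question whose skill estimate is closest to a target mastery threshold.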
To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was defined. Given a set of questions and the paragraphs of a textbook, the goal of the task was to map each question to its related paragraphs. An algorithm was developed whose performance was comparable to that of human annotators.
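The mapping algorithm itself is not detailed here; one plausible baseline is to embed questions and paragraphs in a shared TF-IDF space and match them by cosine similarity. The sketch below assumes that approach and assumes both inputs are plain-text strings.

```python
# Hypothetical baseline for question-to-paragraph mapping via TF-IDF
# cosine similarity; this is an assumed approach, not the study's algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def map_questions_to_paragraphs(questions, paragraphs, top_k=2):
    vec = TfidfVectorizer(stop_words="english")
    # Fit on both collections so they share one vocabulary.
    matrix = vec.fit_transform(paragraphs + questions)
    para_vecs = matrix[:len(paragraphs)]
    q_vecs = matrix[len(paragraphs):]
    sims = cosine_similarity(q_vecs, para_vecs)
    # For each question, return indices of the top_k most similar paragraphs.
    return [row.argsort()[::-1][:top_k].tolist() for row in sims]
```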
A multiple-choice question with high-quality distractors (incorrect answer options) can be pedagogically valuable while being much easier to grade than an open-response question. An algorithm was therefore developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In a study with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
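Under common classical test theory conventions, which a comparison like this one presumably follows, the three measures can be computed from a 0/1 response matrix as sketched below; the exact definitions used in the study are an assumption.

```python
# Classical test-theory measures, assuming a 0/1 response matrix
# (rows = participants, columns = questions). Definitions follow common
# conventions and are assumptions, not the study's exact formulas.
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Proportion answering each item correctly (higher = easier)."""
    return responses.mean(axis=0)

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Point-biserial correlation of each item with the rest-of-test score."""
    n_items = responses.shape[1]
    rest_scores = responses.sum(axis=1, keepdims=True) - responses
    return np.array([np.corrcoef(responses[:, j], rest_scores[:, j])[0, 1]
                     for j in range(n_items)])

def distractor_usefulness(chosen_options: list, distractor: str,
                          threshold: float = 0.05) -> bool:
    """A distractor is often deemed useful if at least `threshold` of
    examinees select it (5% is a common rule of thumb)."""
    rate = sum(c == distractor for c in chosen_options) / len(chosen_options)
    return rate >= threshold
```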
This study required participants to interact with instructional material presented in an eBook and to complete a learning measure. While interacting with the eBook, participants wore a set of physiological devices, namely an ABM EEG headset and an eye tracker, which collected biometric data that could be used to objectively measure their user experience. Fifty college students participated in the study. The data collected from each participant were used to analyze reading activity by performing independent-samples t-tests.
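As an illustration of that analysis step, an independent-samples t-test in SciPy looks like the following; the two-group split and the fixation-duration numbers are made-up placeholders, not data from the study.

```python
# Independent-samples (Welch's) t-test on two groups of a biometric
# measure; the values below are placeholders, not the study's data.
import numpy as np
from scipy import stats

group_a = np.array([412.0, 388.5, 401.2, 395.7])  # e.g., mean fixation (ms)
group_b = np.array([367.1, 380.4, 359.9, 372.3])

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```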
Transfer learning across different representations was also studied, with the goal of building collaboration detectors for one task that can be reused on a new task. Data for building such detectors were collected in the form of verbal interactions and user action logs from students’ tablets. Three qualitative levels of interactivity were distinguished: Collaboration, Cooperation, and Asymmetric Contribution. Machine learning was used to induce a classifier that assigns one of these codes to each episode based on a set of features. The results indicate that the machine-learned classifiers were reliable and transferred across tasks.
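A hypothetical sketch of the train-on-one-task, test-on-another setup appears below. The random-forest choice, the feature matrices, and the use of Cohen's kappa as the reliability metric are assumptions for illustration, not the study's reported pipeline.

```python
# Hypothetical transfer evaluation: train an episode classifier on a
# source task, apply it unchanged to a target task, and score agreement
# with human codes. Model and metric choices are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

LABELS = ["Collaboration", "Cooperation", "Asymmetric Contribution"]

def train_and_transfer(X_src, y_src, X_tgt, y_tgt) -> float:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_src, y_src)          # train on the source task's episodes
    preds = clf.predict(X_tgt)     # apply unchanged to the new task
    # Kappa against human codes measures reliability of the transfer.
    return cohen_kappa_score(y_tgt, preds)
```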
Machine learning is a rapidly growing field, no doubt in part due to its countless applications in other fields, including pedagogy and the creation of computer-aided tutoring systems. To extend the functionality of FACT, an automated teaching assistant, we want to predict, from metadata produced by student activity, whether a student is capable of fixing their own mistakes. Logs were collected from previous FACT trials with middle school math teachers and students. The data were converted to time-series sequences for deep learning, and ordinary features were extracted for statistical machine learning. Ultimately, deep learning models attained an accuracy of 60%, while tree-based methods attained an accuracy of 65%, showing that a small but measurable relationship exists between how a student fixes their mistakes and whether the correction is correct.
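The tree-based side of that comparison might look like the sketch below; the feature matrix, labels, and model choice are placeholders for whatever was actually extracted from the FACT logs, not the study's pipeline.

```python
# Sketch of the feature-based (tree) side of the comparison. The feature
# matrix and labels are random placeholders standing in for features
# extracted from FACT correction episodes (e.g., edit counts, timing).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))      # placeholder per-episode features
y = rng.integers(0, 2, size=300)   # 1 if the student's fix was correct

clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```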