Matching Items (6)
Description
Many previous studies have analyzed human tutoring in great depth and have shown that expert human tutors produce effect sizes roughly twice those produced by an intelligent tutoring system (ITS). However, there is no consensus on which factors make them so effective. Identifying these factors is important, so that the same phenomena can be replicated in an ITS in order to achieve the same level of proficiency as expert human tutors. Also, to the best of my knowledge, no one has examined student reactions while they are working with a computer-based tutor. The answers to both of these questions are needed in order to build a highly effective computer-based tutor. My research focuses on the second question. In the first phase of my thesis, I analyzed the behavior of students as they worked with the step-based tutor Andes, using verbal-protocol analysis. This analysis revealed some of the ways in which students use a step-based tutor, which can pave the way for the creation of more effective computer-based tutors. I found in the first phase that students often keep trying to fix errors by guessing repeatedly instead of asking for help by clicking the hint button, a phenomenon known as hint refusal. Surprisingly, a large portion of the students' floundering was due to hint refusal. The hypothesis tested in the second phase of the research is that hint refusal can be significantly reduced, and learning significantly increased, if Andes uses more unsolicited hints and meta-hints. An unsolicited hint is a hint given without the student asking for one. A meta-hint is like an unsolicited hint in that it is given without the student asking for it, but it merely prompts the student to click on the hint button. Two versions of Andes were compared: the original version and a new version that gave more unsolicited hints and meta-hints. During a two-hour experiment, there were large, statistically reliable differences in several performance measures, suggesting that the new policy was more effective.
Contributors: Ranganathan, Rajagopalan (Author) / VanLehn, Kurt (Thesis advisor) / Atkinson, Robert (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Machine learning is a rapidly growing field, no doubt in part due to its countless applications in other fields, including pedagogy and the creation of computer-aided tutoring systems. To extend the functionality of FACT, an automated teaching assistant, we want to predict, using metadata produced by student activity, whether a student is capable of fixing their own mistakes. Logs were collected from previous FACT trials with middle school math teachers and students. The data were converted to time-series sequences for deep learning, and ordinary features were extracted for statistical machine learning. Ultimately, deep learning models attained an accuracy of 60%, while tree-based methods attained an accuracy of 65%, showing that a correlation, although small, exists between how a student fixes their mistakes and whether their correction is correct.
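The abstract does not enumerate the "ordinary features" extracted for the tree-based models; as a hedged illustration only, feature extraction from hypothetical activity logs might look like the following sketch (the event names and log schema are assumptions, not FACT's actual format):

```python
def extract_features(events):
    """Collapse a variable-length sequence of (timestamp, action) log
    events into fixed-length features a tree-based classifier can use."""
    times = [t for t, _ in events]
    actions = [a for _, a in events]
    duration = times[-1] - times[0] if len(times) > 1 else 0.0
    return {
        "n_events": len(events),
        "duration": duration,
        "n_erases": actions.count("erase"),
        "erase_rate": actions.count("erase") / max(len(events), 1),
    }

# A toy three-event session: write, erase, write again.
session = [(0.0, "write"), (5.0, "erase"), (9.0, "write")]
features = extract_features(session)
```

A feature dictionary like this would then be fed to a tree-based learner such as a random forest, which is one plausible reading of the "tree-based methods" mentioned above.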

Contributors: Zhou, David (Author) / VanLehn, Kurt (Thesis director) / Wetzel, Jon (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
The mathematics test is the most difficult test in the GED (General Educational Development) test battery, largely due to the presence of story problems. Raising performance on story problem-solving would have a significant effect on GED test passage rates. The subject of this formative research study is Ms. Stephens' Categorization Practice Utility (MS-CPU), an example-tracing intelligent tutoring system that provides practice in the first step (problem categorization) of a larger comprehensive story problem-solving pedagogy intended to raise the level of story problem-solving performance. During the analysis phase of this project, knowledge components and particular competencies that enable learning (schema building) were identified. During the development phase, a tutoring system was designed and implemented that algorithmically teaches these competencies to the student with graphical, interactive, and animated utilities. Because the tutoring system provides a concrete rather than conceptual learning environment, it should foster a much greater apprehension of a story problem-solving process. With this experience, the student should begin to recognize the generalizability of concrete operations that accomplish particular story problem-solving goals, and begin to build conceptual knowledge and a more conceptual approach to the task. During the formative evaluation phase, qualitative methods were used to identify obstacles in the MS-CPU user interface and disconnections in the pedagogy that impede learning story problem categorization and solution preparation. The study was conducted over two iterations, in which the identification of obstacles and change plans (mitigations) produced a qualitative data table used to modify the first version of the system (MS-CPU 1.1). Mitigation corrections produced the second version (MS-CPU 1.2), and the next iteration of the study produced a second set of obstacle/mitigation tables.
Pre- and post-tests were conducted in each iteration to corroborate the effectiveness of the mitigations. The study identified a number of learning obstacles in the first version (MS-CPU 1.1); their mitigation produced a second version (MS-CPU 1.2) in which far fewer obstacles were identified. It was determined that an additional iteration is needed before more quantitative research is conducted.
Contributors: Ritchey, ChristiAnne (Author) / VanLehn, Kurt (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Hong, Yi-Chun (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Embedded assessment constantly updates a model of the student as the student works on instructional tasks. Accurate embedded assessment allows students, instructors, and instructional systems to make informed decisions without requiring the student to stop instruction and take a test. This thesis describes the development and comparison of several student models for Dragoon, an intelligent tutoring system. All the models were instances of Bayesian Knowledge Tracing, a standard method. Several methods of parameterization and calibration were explored using two recently developed toolkits, FAST and BNT-SM, which replace constant-valued parameters with logistic regressions. The evaluation was done by calculating the fit of the models to data from human subjects and by assessing the accuracy of their assessments of simulated students. Student models that used node properties as subskills were superior to coarse-grained, skill-only models. Adding this extra level of representation to emission parameters was superior to adding it to transmission parameters. Adding difficulty parameters did not improve fit, contrary to standard practice in psychometrics.
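Bayesian Knowledge Tracing itself is a standard, well-documented method; a minimal sketch of one BKT update step is shown below. The slip, guess, and transit values are illustrative defaults, not the calibrated parameters from this thesis, and the logistic-regression parameterizations explored via FAST and BNT-SM are not reproduced here.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, transit=0.15):
    """One Bayesian Knowledge Tracing step: compute the posterior
    P(skill known) after observing one answer, then apply the
    learning-transition probability."""
    if correct:
        evidence = p_know * (1 - slip) + (1 - p_know) * guess
        posterior = p_know * (1 - slip) / evidence
    else:
        evidence = p_know * slip + (1 - p_know) * (1 - guess)
        posterior = p_know * slip / evidence
    # Chance the student learned the skill between opportunities.
    return posterior + (1 - posterior) * transit

# Tracing one student's mastery estimate across four observed answers:
p = 0.3  # prior P(L0)
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
```

Each correct answer raises the mastery estimate and each error lowers it, which is what lets an embedded assessment track the student continuously without a separate test.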
Contributors: Grover, Sachin (Author) / VanLehn, Kurt (Thesis advisor) / Walker, Erin (Committee member) / Shiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them in other sources or write their own. To supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.

To improve students’ learning experience, adaptive selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of the student’s current competence, so that a suitable question could be selected based on the student’s previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a maladaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group achieved larger learning gains than participants in the control group.
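The abstract does not specify the selection rule; as one hedged sketch of a plausible policy, a system with a BKT-style mastery estimate per skill might pick the question targeting the least-mastered skill (the question/skill names below are hypothetical):

```python
def select_question(questions, mastery):
    """Pick the question whose skill has the lowest estimated mastery.

    questions: list of (question_id, skill) pairs
    mastery:   dict mapping skill -> estimated P(known), e.g. produced
               by Bayesian Knowledge Tracing
    """
    return min(questions, key=lambda q: mastery.get(q[1], 0.0))[0]

# The student is weakest on "photosynthesis", so q2 is selected next.
qs = [("q1", "mitosis"), ("q2", "photosynthesis")]
m = {"mitosis": 0.8, "photosynthesis": 0.3}
next_question = select_question(qs, m)
```

Other policies (e.g. targeting skills near a mastery threshold) are equally consistent with the abstract; this is only an illustration of how a competence estimate drives selection.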

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was defined: given a set of questions and the list of paragraphs from a textbook, map the related paragraphs to each question. An algorithm was developed whose performance was comparable to that of human annotators.
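The thesis's mapping algorithm is not described in this abstract; as a rough illustration of the task only, a simple lexical-similarity baseline can rank a textbook's paragraphs against a question (a real system would presumably use the semantic network rather than bag-of-words overlap):

```python
import math
from collections import Counter

def cosine(a, b):
    """Bag-of-words cosine similarity between two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_question(question, paragraphs):
    """Return paragraph indices ranked by similarity to the question."""
    return sorted(range(len(paragraphs)),
                  key=lambda i: cosine(question, paragraphs[i]),
                  reverse=True)
```

For instance, a question about cell division would rank a mitosis paragraph above one on photosynthesis, which is the shape of output a feedback generator could consume.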

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable, as well as much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In an evaluation with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
Contributors: Zhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Concept maps are commonly used knowledge visualization tools and have been shown to have a positive impact on learning. The main drawbacks of concept mapping are the training it requires and its lack of feedback support. Thus, prior research has attempted to provide support and feedback in concept mapping, for example by developing computer-based concept mapping tools, offering starting templates and navigational supports, and providing automated feedback. Although these approaches have achieved promising results, challenges remain. For example, there is a need for a concept mapping system that reduces the extraneous effort of editing a concept map while encouraging more cognitively beneficial behaviors. There is also little understanding of the cognitive process during concept mapping. Moreover, current feedback mechanisms in concept mapping focus only on the finished map, rather than on the learning process.

This thesis work strives to answer the fundamental research question: how can computer technologies intelligently support concept mapping to promote meaningful learning? To approach this question, I first present an intelligent concept mapping system, MindDot, that supports concept mapping via the innovative integration of two features: hyperlink navigation and expert templates. The system reduces the effort of creating and modifying concept maps while encouraging beneficial activities such as comparing related concepts and establishing relationships among them. I then present the comparative strategy metric, which models student learning by evaluating behavioral patterns and learning strategies. Lastly, I develop an adaptive feedback system that provides immediate diagnostic feedback in response both to key learning behaviors during concept mapping and to the correctness and completeness of the created maps.

Empirical evaluations indicated that the integrated navigational and template support in MindDot fostered effective learning behaviors and facilitated learning achievement. The comparative strategy model was shown to be highly representative of learning characteristics such as motivation, engagement, and misconceptions, and it was predictive of learning results. The feedback tutor also demonstrated positive impacts on supporting learning and on assisting the development of effective learning strategies that prepare learners for future learning. This dissertation contributes to the field of supporting concept mapping with designs of technological affordances, a process-based student model, an adaptive feedback tutor, empirical evaluations of these proposed innovations, and implications for future support of concept mapping.
Contributors: Wang, Shang (Author) / Walker, Erin (Thesis advisor) / VanLehn, Kurt (Committee member) / Hsiao, Sharon (Committee member) / Long, Yanjin (Committee member) / Arizona State University (Publisher)
Created: 2019