Matching Items (12)
Description

Many previous studies have analyzed human tutoring in great depth and have shown that expert human tutors produce effect sizes roughly twice those produced by an intelligent tutoring system (ITS). However, there has been no consensus on which factors make them so effective. Knowing this is important, so that the same phenomena can be replicated in an ITS in order to achieve the same level of proficiency as expert human tutors. Also, to the best of my knowledge, no one has looked at students' reactions when they are working with a computer-based tutor. The answers to both these questions are needed in order to build a highly effective computer-based tutor. My research focuses on the second question. In the first phase of my thesis, I analyzed the behavior of students working with a step-based tutor, Andes, using verbal-protocol analysis. This analysis revealed several ways in which students use a step-based tutor, which can pave the way for the creation of more effective computer-based tutors. I found in the first phase that students often keep trying to fix errors by guessing repeatedly instead of asking for help by clicking the hint button, a phenomenon known as hint refusal. Surprisingly, a large portion of the students' floundering was due to hint refusal. The hypothesis tested in the second phase is that hint refusal can be significantly reduced, and learning significantly increased, if Andes uses more unsolicited hints and meta-hints. An unsolicited hint is a hint given without the student asking for one. A meta-hint is like an unsolicited hint in that it is given without the student asking for it, but it merely prompts the student to click on the hint button. Two versions of Andes were compared: the original version and a new version that gave more unsolicited hints and meta-hints.
During a two-hour experiment, there were large, statistically reliable differences in several performance measures, suggesting that the new hint policy was more effective.
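The effect-size comparison above is usually expressed as a standardized mean difference such as Cohen's d. As a minimal sketch of how such a value is computed (the scores below are hypothetical illustrations, not data from the study):

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical posttest scores for tutored vs. untutored groups
tutored = [78, 85, 90, 74, 88, 92]
untutored = [70, 72, 80, 65, 75, 78]
d = cohens_d(tutored, untutored)
```

A d around 0.8 is conventionally "large"; the claim above is that expert human tutors roughly double the d achieved by an ITS.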
ContributorsRanganathan, Rajagopalan (Author) / VanLehn, Kurt (Thesis advisor) / Atkinson, Robert (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created2011
Description

This study empirically evaluated the effectiveness of the instructional design, learning tools, and role of the teacher in three versions of a semester-long, high-school remedial Algebra I course to determine what impact self-regulated learning skills and learning pattern training have on students' self-regulation, math achievement, and motivation. The 1st version was a business-as-usual traditional classroom teaching mathematics with direct instruction. The 2nd version provided students with self-paced, individualized Algebra instruction with a web-based, intelligent tutor. The 3rd version coupled self-paced, individualized instruction on the web-based, intelligent Algebra tutor with a series of e-learning modules on self-regulated learning knowledge and skills distributed throughout the semester. A quasi-experimental, mixed-methods evaluation design was used, assigning pre-registered, high-school remedial Algebra I class periods made up of approximately equal numbers of students to one of the three study conditions or course versions: (a) the control course design, (b) the web-based, intelligent tutor-only course design, and (c) the web-based, intelligent tutor + SRL e-learning modules course design. While no statistically significant differences in SRL skills, math achievement, or motivation were found between the three conditions, effect-size estimates provide suggestive evidence that using the SRL e-learning modules based on the ARCS motivation model (Keller, 2010) and Let Me Learn learning pattern instruction (Dawkins, Kottkamp, & Johnston, 2010) may help students regulate their learning and improve their study skills while using a web-based, intelligent Algebra tutor, as evidenced by positive impacts on math achievement, motivation, and self-regulated learning skills.
The study also explored predictive analyses using multiple regression and found that predictive models based on independent variables aligned to student demographics, learning mastery skills, and ARCS motivational factors are helpful in defining how to further refine course design and design learning evaluations that measure achievement, motivation, and self-regulated learning in web-based learning environments, including intelligent tutoring systems.
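Predictive models of the kind described above are typically fit with ordinary least squares. A minimal pure-Python sketch under that assumption (the predictor names and values are hypothetical, not variables from the study):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Ordinary least squares via the normal equations, with an intercept column added."""
    Xd = [[1.0] + list(row) for row in X]
    k = len(Xd[0])
    XtX = [[sum(r[a] * r[b] for r in Xd) for b in range(k)] for a in range(k)]
    Xty = [sum(Xd[i][a] * y[i] for i in range(len(Xd))) for a in range(k)]
    return solve(XtX, Xty)

# Hypothetical predictors per student: [prior GPA, motivation score]; outcome: posttest score
X = [[2.1, 3.0], [3.5, 4.2], [2.8, 3.5], [3.9, 4.8], [2.0, 2.5], [3.2, 4.0]]
y = [61.0, 82.0, 70.0, 90.0, 55.0, 78.0]
intercept, b_gpa, b_motiv = fit_ols(X, y)
```

In practice such models are fit with a statistics package, which also reports the standard errors and fit statistics a regression evaluation would need.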
ContributorsBarrus, Angela (Author) / Atkinson, Robert K (Thesis advisor) / Van de Sande, Carla (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created2013
Description

Researchers have postulated that math academic achievement increases student success in college (Lee, 2012; Silverman & Seidman, 2011; Vigdor, 2013), yet 80% of universities and 98% of community colleges require many of their first-year students to be placed in remedial courses (Bettinger & Long, 2009). Many high school graduates enter college ill-prepared for the rigors of higher education, lacking understanding of basic and important principles (ACT, 2012). Increasing academic achievement is a widely held aspiration in education, and adapting instruction to individuals is one approach to accomplishing this goal (Lalley & Gentile, 2009a). Adaptive learning environments frequently rely on a mastery learning approach; it is thought that when students are afforded the opportunity to master the material, deeper and more meaningful learning is likely to occur. Researchers generally agree that the learning environment, the teaching approach, and the students' attributes are all important to understanding the conditions that promote academic achievement (Bandura, 1977; Bloom, 1968; Guskey, 2010; Cassen, Feinstein & Graham, 2008; Changeiywo, Wambugu & Wachanga, 2011; Lee, 2012; Schunk, 1991; Van Dinther, Dochy & Segers, 2011). The present study investigated the role of college students' affective attributes and skills, such as academic competence and academic resilience, in an adaptive, mastery-based learning environment on their academic performance while enrolled in a remedial mathematics course. The results showed that the combined influence of students' affective attributes and academic resilience had a statistically significant effect on students' academic performance. Further, the mastery-based learning environment also had a significant effect on their academic competence and academic performance.
ContributorsFoshee, Cecile Mary (Author) / Atkinson, Robert K (Thesis advisor) / Elliott, Stephen N. (Committee member) / Horan, John (Committee member) / Arizona State University (Publisher)
Created2013
Description

Expectation for college attendance in the United States continues to rise as more jobs require degrees. This study aims to determine how parental expectations affect high school students in their decision to attend college. By examining parental expectations that were placed on current college students prior to and during the application period, we can determine the positive and negative outcomes of these expectations as well as the atmosphere they are creating. To test the hypothesis, an online survey was distributed to current ASU and Barrett, Honors College students regarding their experience with college applications and their parents' influence on their collegiate attendance. A qualitative analysis of the data was conducted in tandem with an analysis of several case studies to determine the results. These data show that parental expectations are having a significant impact on the enrollment of high school students in college programs. With parents placing these expectations on their children, collegiate enrollment will continue to increase. Further studies will be necessary to determine the specific influences these expectations are placing on students.

ContributorsScheller, Sara Matheson (Co-author) / Johnson, Benjamin (Co-author) / Kappes, Janelle (Thesis director) / Fairbanks, Elizabeth (Committee member) / Division of Teacher Preparation (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Machine learning is a rapidly growing field, no doubt in part due to its countless applications in other fields, including pedagogy and the creation of computer-aided tutoring systems. To extend the functionality of FACT, an automated teaching assistant, we want to predict, using metadata produced by student activity, whether a student is capable of fixing their own mistakes. Logs were collected from previous FACT trials with middle school math teachers and students. The data were converted to time-series sequences for deep learning, and ordinary features were extracted for statistical machine learning. Ultimately, deep learning models attained an accuracy of 60%, while tree-based methods attained an accuracy of 65%, showing that some correlation, although small, exists between how a student fixes their mistakes and whether their correction is correct.
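As an illustration of the tree-based approach, here is a pure-Python decision stump (a one-split tree), far simpler than the models used in the thesis; the feature names and values are hypothetical, not the actual FACT metadata:

```python
def best_stump(features, labels):
    """Find the single-feature threshold split that maximizes training accuracy.
    features: list of feature vectors; labels: list of 0/1 class labels."""
    n, d = len(features), len(features[0])
    best = (0, 0.0, 1, 0)  # (feature index, threshold, class-if-above, correct count)
    for j in range(d):
        for t in sorted({row[j] for row in features}):
            for above in (0, 1):
                correct = sum(
                    1 for row, y in zip(features, labels)
                    if (above if row[j] > t else 1 - above) == y
                )
                if correct > best[3]:
                    best = (j, t, above, correct)
    return best

# Hypothetical features per correction attempt: [seconds spent, number of edits]
feats = [[5, 1], [40, 6], [8, 2], [55, 9], [12, 1], [60, 7]]
fixed = [1, 0, 1, 0, 1, 0]  # 1 = student's correction was correct
j, t, above, correct = best_stump(feats, fixed)
accuracy = correct / len(fixed)
```

Ensembles of such splits (random forests, gradient-boosted trees) are the usual "tree-based methods"; a single stump is just the smallest working instance of the idea.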

ContributorsZhou, David (Author) / VanLehn, Kurt (Thesis director) / Wetzel, Jon (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2022-05
Description

The mathematics test is the most difficult test in the GED (General Education Development) Test battery, largely due to the presence of story problems. Raising performance levels in story problem-solving would have a significant effect on GED Test passage rates. The subject of this formative research study is Ms. Stephens' Categorization Practice Utility (MS-CPU), an example-tracing intelligent tutoring system that serves as practice for the first step (problem categorization) in a larger comprehensive story problem-solving pedagogy that purports to raise the level of story problem-solving performance. During the analysis phase of this project, knowledge components and particular competencies that enable learning (schema building) were identified. During the development phase, a tutoring system was designed and implemented that algorithmically teaches these competencies to the student with graphical, interactive, and animated utilities. Because the tutoring system provides a much more concrete, rather than conceptual, learning environment, it should foster a much greater apprehension of the story problem-solving process. With this experience, the student should begin to recognize the generalizability of concrete operations that accomplish particular story problem-solving goals, and begin to build conceptual knowledge and a more conceptual approach to the task. During the formative evaluation phase, qualitative methods were used to identify obstacles in the MS-CPU user interface and disconnections in the pedagogy that impede learning story problem categorization and solution preparation. The study was conducted over two iterations, in which the identification of obstacles and change plans (mitigations) produced a qualitative data table used to modify the first version of the system (MS-CPU 1.1). Mitigation corrections produced the second version (MS-CPU 1.2), and the next iteration of the study was conducted, producing a second set of obstacle/mitigation tables.
Pre- and posttests were conducted in each iteration to corroborate the effectiveness of the mitigations. The study identified a number of learning obstacles in the first version (MS-CPU 1.1); their mitigation produced a second version (MS-CPU 1.2) in which far fewer obstacles were identified. It was determined that an additional iteration is needed before more quantitative research is conducted.
ContributorsRitchey, ChristiAnne (Author) / VanLehn, Kurt (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Hong, Yi-Chun (Committee member) / Arizona State University (Publisher)
Created2018
Description

Embedded assessment constantly updates a model of the student as the student works on instructional tasks. Accurate embedded assessment allows students, instructors, and instructional systems to make informed decisions without requiring the student to stop instruction and take a test. This thesis describes the development and comparison of several student models for Dragoon, an intelligent tutoring system. All the models were instances of Bayesian Knowledge Tracing, a standard method. Several methods of parameterization and calibration were explored using two recently developed toolkits, FAST and BNT-SM, which allow constant-valued parameters to be replaced with logistic regressions. The evaluation was done by calculating the fit of the models to data from human subjects and by assessing the accuracy of their assessment of simulated students. The student models created using node properties as subskills were superior to coarse-grained, skill-only models. Adding this extra level of representation to emission parameters was superior to adding it to transition parameters. Adding difficulty parameters did not improve fit, contrary to standard practice in psychometrics.
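Bayesian Knowledge Tracing, the method all of these models instantiate, can be sketched in a few lines: a posterior over mastery is computed from the observed answer via slip and guess probabilities, then a learning transition is applied. The parameter values below are illustrative defaults, not the calibrated values from the thesis:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One step of Bayesian Knowledge Tracing.
    p_know: prior probability the skill is mastered; correct: observed answer."""
    if correct:
        # P(mastered | correct answer), via Bayes' rule with slip/guess emissions
        posterior = (p_know * (1 - slip)) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # P(mastered | incorrect answer)
        posterior = (p_know * slip) / (p_know * slip + (1 - p_know) * (1 - guess))
    # Transition: an unmastered skill may be learned on this opportunity
    return posterior + (1 - posterior) * learn

# Trace mastery across a short sequence of observed answers
p = 0.3
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
```

The calibration work described above amounts to replacing the constant slip/guess/learn values with fitted (or logistic-regression-based) parameters per skill or subskill.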
ContributorsGrover, Sachin (Author) / VanLehn, Kurt (Thesis advisor) / Walker, Erin (Committee member) / Shiao, Ihan (Committee member) / Arizona State University (Publisher)
Created2015
Description

Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them in other sources or generate their own. In order to supply instructors with biology questions, a semantic-network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.

To boost students' learning experience, adaptive selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of the student's current competence so that a suitable question could be selected based on the student's previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a mal-adaptive ordering of questions. Both groups significantly improved their test scores, and the participants in the adaptive group registered larger learning gains than participants in the control group.
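One plausible selection heuristic built on such mastery estimates (a sketch, not necessarily the study's exact policy) is to pick the question whose skill's estimated mastery sits nearest a target "sweet spot", so questions are neither too easy nor too hard. The skill names and probabilities below are hypothetical:

```python
def select_question(mastery, questions, target=0.5):
    """Pick the question whose skill's estimated mastery is closest to the target.
    mastery: {skill: P(known)}; questions: list of (question_id, skill) pairs."""
    return min(questions, key=lambda q: abs(mastery[q[1]] - target))

# Hypothetical BKT mastery estimates and a pool of candidate questions
mastery = {"photosynthesis": 0.85, "mitosis": 0.45, "osmosis": 0.20}
pool = [("q1", "photosynthesis"), ("q2", "mitosis"), ("q3", "osmosis")]
qid, skill = select_question(mastery, pool)
```

Here the selector skips the nearly mastered skill and the barely started one, targeting the skill in the productive middle range.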

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs for a textbook, the goal of the task was to map the related paragraphs to each question. An algorithm was developed whose performance was comparable to human annotators.

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions in terms of three measures: question difficulty, question discrimination, and distractor usefulness. In an evaluation with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
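Question difficulty and discrimination are commonly computed with classical test theory statistics: difficulty as the proportion of students answering correctly, and discrimination as the difference in that proportion between high- and low-scoring groups. A minimal sketch on a hypothetical 0/1 response matrix:

```python
def item_stats(responses):
    """Classical item statistics from a 0/1 response matrix (rows = students, cols = items).
    Returns a (difficulty, discrimination) pair per item."""
    n = len(responses)
    # Rank students by total score; compare the top and bottom thirds
    ranked = sorted(range(n), key=lambda i: sum(responses[i]), reverse=True)
    k = max(1, n // 3)
    upper, lower = ranked[:k], ranked[-k:]
    stats = []
    for j in range(len(responses[0])):
        difficulty = sum(r[j] for r in responses) / n  # proportion correct
        discrimination = (sum(responses[i][j] for i in upper)
                          - sum(responses[i][j] for i in lower)) / k
        stats.append((difficulty, discrimination))
    return stats

# Hypothetical responses: 6 students x 2 items
responses = [[1, 1], [1, 1], [1, 0], [0, 1], [0, 0], [0, 0]]
stats = item_stats(responses)
```

An item answered correctly mostly by high scorers gets discrimination near 1; one answered equally often by both groups gets discrimination near 0 and is less useful for ranking students.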
ContributorsZhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created2015
Description

The growing use of Learning Management Systems (LMS) in classrooms has enabled a great amount of data to be collected about the study behavior of students. Previous research has interpreted the collected LMS usage data in order to find the most effective study habits for students. Professors can then use the interpretations to predict which students will perform well and which will perform poorly in the rest of the course, allowing the professor to better assist students in need. However, these research attempts have largely analyzed metrics that are specific to certain graphical interfaces, ways of answering questions, or specific pages on an LMS. As a result, the analysis is only relevant to classrooms that use the specific LMS being analyzed.

For this thesis, behavior metrics obtained by the Organic Practice Environment (OPE) LMS at Arizona State University were compared to student performance in Dr. Ian Gould's Organic Chemistry I course. Each metric gathered was generic enough to be potentially usable by any LMS, allowing the results to be relevant to a larger number of classrooms. By using a combination of bivariate correlation analysis, group mean comparisons, linear regression model generation, and outlier analysis, the metrics that correlate best with exam performance were identified. The results indicate that the total usage of the LMS, the amount of cramming done before exams, the correctness of the responses submitted, and the duration of the responses submitted all demonstrate a strong correlation with exam scores.
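The bivariate correlation step described above amounts to computing a Pearson coefficient over per-student (metric, exam score) pairs. A minimal sketch; the metric names and values are hypothetical, not data from the study:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student metric: total minutes on the LMS vs. exam score
minutes = [120, 340, 200, 450, 90, 300]
exam = [62, 85, 70, 91, 55, 80]
r = pearson_r(minutes, exam)
```

Values of r near +1 or -1 indicate a strong linear relationship; a correlation analysis would also report a p-value, which this sketch omits.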
ContributorsBeerman, Eric (Author) / VanLehn, Kurt (Thesis advisor) / Gould, Ian (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created2015
Description

A recorded tutorial dialogue can produce positive learning gains when observed and used to promote discussion between a pair of learners; however, this same effect does not typically occur when a learner observes a tutorial dialogue alone. One potential approach to enhancing learning in the latter situation is incorporating self-explanation prompts, a proven technique for encouraging students to engage in active learning and attend to the material in a meaningful way. This study examined whether learning from observing recorded tutorial dialogues could be made more effective by adding self-explanation prompts in a computer-based learning environment. The research questions in this two-experiment study were (a) Do self-explanation prompts help support student learning while watching a recorded dialogue? and (b) Does collaboratively observing (in dyads) a tutorial dialogue with self-explanation prompts help support student learning while watching a recorded dialogue? In Experiment 1, 66 participants were randomly assigned as individuals to a physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). In Experiment 2, 20 participants were randomly assigned in 10 pairs to the same physics lesson under the same two conditions. Pretests and posttests were administered, as well as surveys that measured motivation and system usability. Although supplemental analyses showed some significant differences among individual scale items or factors, the primary results of both Experiment 1 and Experiment 2 showed no significant pretest-to-posttest changes in learning, motivation, or system usability.
ContributorsWright, Kyle Matthew (Author) / Atkinson, Robert K (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created2018