Matching Items (13)

Reading Motivation and Comprehension: Using iSTART-3 to Improve Comprehension in South Africa

Description

The purposes of the study are to: 1) investigate how students' motivation towards reading is related to their reading comprehension skills, and 2) assess the impact of using an Intelligent Tutoring System to improve comprehension. Interactive Strategy Training for Active Reading and Thinking-3 (iSTART-3) is a game-based tutoring system designed to improve students' reading comprehension skills. The current study was conducted in South Africa with 8th and 9th graders between the ages of 14 and 18. These students are multilingual and learn English as a First Additional Language (English-FAL). Firstly, we predict that students who are highly motivated to read will have higher comprehension scores than those who are slightly or not at all motivated to read. Secondly, we predict that the use of iSTART-3 will improve students' reading comprehension, regardless of their level of reading motivation, with better results for those who are more motivated to read. Counter to our predictions, the results did not reveal a relation between reading motivation and reading comprehension. Furthermore, no effect of iSTART-3 on reading comprehension was found. These results were likely influenced by the small sample size and the length of the intervention.

Date Created
  • 2018-12

Does self-regulated learning-skills training improve high-school students' self-regulation, math achievement, and motivation while using an intelligent tutor?

Description

This study empirically evaluated the effectiveness of the instructional design, learning tools, and role of the teacher in three versions of a semester-long, high-school remedial Algebra I course to determine what impact self-regulated learning skills and learning pattern training have on students' self-regulation, math achievement, and motivation. The 1st version was a business-as-usual traditional classroom teaching mathematics with direct instruction. The 2nd version of the course provided students with self-paced, individualized Algebra instruction with a web-based, intelligent tutor. The 3rd version of the course coupled self-paced, individualized instruction on the web-based, intelligent Algebra tutor with a series of e-learning modules on self-regulated learning knowledge and skills that were distributed throughout the semester. A quasi-experimental, mixed-methods evaluation design was used by assigning pre-registered, high-school remedial Algebra I class periods made up of approximately equal numbers of students to one of the three study conditions or course versions: (a) the control course design, (b) the web-based, intelligent tutor-only course design, and (c) the web-based, intelligent tutor + SRL e-learning modules course design. While no statistically significant differences in SRL skills, math achievement, or motivation were found between the three conditions, effect-size estimates provide suggestive evidence that using the SRL e-learning modules based on the ARCS motivation model (Keller, 2010) and Let Me Learn learning pattern instruction (Dawkins, Kottkamp, & Johnston, 2010) may help students regulate their learning and improve their study skills while using a web-based, intelligent Algebra tutor, as evidenced by positive impacts on math achievement, motivation, and self-regulated learning skills.
The study also explored predictive analyses using multiple regression and found that predictive models based on independent variables aligned to student demographics, learning mastery skills, and ARCS motivational factors are helpful in defining how to further refine course design and design learning evaluations that measure achievement, motivation, and self-regulated learning in web-based learning environments, including intelligent tutoring systems.
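The multiple-regression analysis described above can be illustrated with a minimal sketch. The study's actual variables and data are not given here; the predictors and numbers below are purely hypothetical stand-ins (e.g. prior GPA and an ARCS motivation score) used only to show the ordinary-least-squares machinery.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column prepended."""
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef  # [intercept, b1, b2, ...]

# Toy data: two hypothetical predictors per student, one achievement score.
X = np.array([[2.0, 3.0], [3.0, 4.0], [3.5, 2.0], [4.0, 5.0]])
y = np.array([60.0, 75.0, 70.0, 90.0])

coef = fit_ols(X, y)
pred = np.column_stack([np.ones(len(X)), X]) @ coef  # fitted scores
```

With an intercept in the model, the residuals of an OLS fit sum to zero, which is a quick sanity check on the fitted values.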

Date Created
  • 2013

Building adaptation and error feedback in an intelligent tutoring system for reading comprehension of English language learners

Description

Many English Language Learner (ELL) children struggle with knowledge of vocabulary and syntax. Enhanced Moved by Reading to Accelerate Comprehension in English (EMBRACE) is an interactive storybook application that teaches children to read by moving pictures on the screen to act out the sentences in the text. However, EMBRACE presents the same level of text to all users, and it is limited in its ability to provide error feedback, as it can only determine whether a user action is right or wrong. EMBRACE could help readers learn more effectively if it personalized its instruction with texts that fit their current reading level and feedback that addresses ways to correct their mistakes. Improvements were made to the system by applying design principles of intelligent tutoring systems (ITSs). The new system added features to track the student's reading comprehension skills, including vocabulary, syntax, and usability, based on various user actions, as well as features to adapt text complexity and provide more specific error feedback using those skills. A pilot study was conducted with 7 non-ELL students to evaluate the functionality and effectiveness of these features. The results revealed both strengths and weaknesses of the ITS. While skill updates appeared most accurate when users made particular kinds of vocabulary and syntax errors, the system was not able to correctly identify other kinds of syntax errors or provide feedback when skill values became too high. Additionally, vocabulary error feedback and adapting the complexity of syntax were helpful, but syntax error feedback and adapting the complexity of vocabulary were not as helpful. Overall, children enjoyed using EMBRACE, and building an intelligent tutoring system into the application presents a promising approach to making reading both a fun and effective experience.

Date Created
  • 2017

Exploring the use of self-explanation prompts in a collaborative learning environment

Description

A recorded tutorial dialogue can produce positive learning gains when observed and used to promote discussion between a pair of learners; however, this same effect does not typically occur when a learner observes a tutorial dialogue alone. One potential approach to enhancing learning in the latter situation is incorporating self-explanation prompts, a proven technique for encouraging students to engage in active learning and attend to the material in a meaningful way. This study examined whether learning from observing recorded tutorial dialogues could be made more effective by adding self-explanation prompts in a computer-based learning environment. The research questions in this two-experiment study were (a) Do self-explanation prompts help support student learning while watching a recorded dialogue? and (b) Does collaboratively observing (in dyads) a tutorial dialogue with self-explanation prompts help support student learning while watching a recorded dialogue? In Experiment 1, 66 participants were randomly assigned as individuals to a physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). In Experiment 2, 20 participants were randomly assigned in 10 pairs to the same physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). Pretests and posttests were administered, as well as other surveys that measured motivation and system usability. Although supplemental analyses showed some significant differences among individual scale items or factors, the primary results for neither Experiment 1 nor Experiment 2 were significant for changes from pretest to posttest scores on the learning, motivation, or system usability assessments.

Date Created
  • 2018

Biology question generation from a semantic network

Description

Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own. In order to supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.

To boost students' learning experience, adaptive selection was built on the generated questions. Bayesian Knowledge Tracing was used as embedded assessment of the student's current competence so that a suitable question could be selected based on the student's previous performance. A between-subjects experiment with 42 participants was performed, where half of the participants studied with adaptively selected questions and the rest studied with a mal-adaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
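The adaptive-selection idea above can be sketched in a few lines: Bayesian Knowledge Tracing maintains a running estimate of mastery, and the next question is picked to suit that estimate. This is a minimal illustration with placeholder parameter values and a hypothetical "match difficulty to mastery" selection rule; the study's actual parameters and selection policy are not specified in the abstract.

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Update P(skill known) after observing one response (standard BKT)."""
    if correct:
        posterior = (p_known * (1 - slip)) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        posterior = (p_known * slip) / (
            p_known * slip + (1 - p_known) * (1 - guess))
    # Account for learning from the practice opportunity itself.
    return posterior + (1 - posterior) * learn

def pick_question(p_known, questions):
    """Choose the question whose difficulty best matches current mastery."""
    return min(questions, key=lambda q: abs(q["difficulty"] - p_known))
```

A correct answer raises the mastery estimate and an incorrect one lowers it, so the selected questions track the student's performance over time.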

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs for a textbook, the goal of the task was to map the related paragraphs to each question. An algorithm was developed whose performance was comparable to human annotators.
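The question-paragraph mapping task described above can be framed as a retrieval problem. The abstract does not say which algorithm was developed, so the sketch below is a hypothetical baseline: rank paragraphs by cosine similarity over raw word counts and map each question to the top-ranked paragraph.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_question(question: str, paragraphs: list[str]) -> int:
    """Return the index of the paragraph most similar to the question."""
    q = Counter(question.lower().split())
    sims = [cosine(q, Counter(p.lower().split())) for p in paragraphs]
    return max(range(len(paragraphs)), key=sims.__getitem__)
```

A production system would add stemming, stopword removal, and TF-IDF weighting, but the mapping structure stays the same.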

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than open-response questions. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions in terms of three measures: question difficulty, question discrimination, and distractor usefulness. In an evaluation with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very closely on all three measures.
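Two of the comparison measures named above have standard classical-test-theory formulations, sketched below. The abstract does not give the study's exact computation; this uses the conventional p-value for difficulty and the upper-lower (top/bottom 27%) index for discrimination.

```python
def difficulty(responses):
    """P-value: proportion of examinees answering the item correctly."""
    return sum(responses) / len(responses)

def discrimination(responses, total_scores):
    """Upper-lower index: p(correct) among the top 27% of examinees
    (by total score) minus p(correct) among the bottom 27%."""
    ranked = sorted(zip(total_scores, responses), reverse=True)
    k = max(1, round(len(ranked) * 0.27))
    top = sum(r for _, r in ranked[:k]) / k
    bottom = sum(r for _, r in ranked[-k:]) / k
    return top - bottom
```

An item that high scorers get right and low scorers get wrong discriminates well (index near 1); an item everyone answers the same way discriminates poorly (index near 0).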

Date Created
  • 2015

Providing Intelligent and Adaptive Support in Concept Map-based Learning Environments

Description

Concept maps are commonly used knowledge visualization tools and have been shown to have a positive impact on learning. The main drawbacks of concept mapping are the requirement of training and the lack of feedback support. Thus, prior research has attempted to provide support and feedback in concept mapping, such as by developing computer-based concept mapping tools, offering starting templates and navigational supports, and providing automated feedback. Although these approaches have achieved promising results, challenges remain. For example, there is a need to create a concept mapping system that reduces the extraneous effort of editing a concept map while encouraging more cognitively beneficial behaviors. Also, there is little understanding of the cognitive process during concept mapping. Moreover, current feedback mechanisms in concept mapping focus only on the outcome of the map, rather than on the learning process.

This thesis work strives to solve the fundamental research question: How can computer technologies be leveraged to intelligently support concept mapping and promote meaningful learning? To approach this research question, I first present an intelligent concept mapping system, MindDot, that supports concept mapping via an innovative integration of two features: hyperlink navigation and expert templates. The system reduces the effort of creating and modifying concept maps while encouraging beneficial activities such as comparing related concepts and establishing relationships among them. I then present the comparative strategy metric that models student learning by evaluating behavioral patterns and learning strategies. Lastly, I develop an adaptive feedback system that provides immediate diagnostic feedback in response to both the key learning behaviors during concept mapping and the correctness and completeness of the created maps.

Empirical evaluations indicated that the integrated navigational and template support in MindDot fostered effective learning behaviors and facilitated learning achievement. The comparative strategy model was shown to be highly representative of learning characteristics such as motivation, engagement, and misconceptions, and it predicted learning results. The feedback tutor also demonstrated positive impacts on supporting learning and assisting the development of effective learning strategies that prepare learners for future learning. This dissertation contributes to the field of supporting concept mapping with designs of technological affordances, a process-based student model, an adaptive feedback tutor, empirical evaluations of these proposed innovations, and implications for future support in concept mapping.
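The correctness-and-completeness side of the feedback described above can be sketched as a set comparison: treat each map as a set of propositions (concept, relation, concept) and diff the student's map against an expert template. This is a minimal illustration only; MindDot's actual diagnostic logic also responds to learning behaviors during mapping, which this sketch does not cover.

```python
def map_feedback(student, expert):
    """Diff a student concept map against an expert template.
    Both maps are sets of (concept, relation, concept) triples."""
    correct = student & expert     # propositions matching the template
    missing = expert - student     # template propositions not yet drawn
    wrong = student - expert       # propositions not in the template
    completeness = len(correct) / len(expert) if expert else 1.0
    return {"correct": correct, "missing": missing,
            "incorrect": wrong, "completeness": completeness}
```

Feedback messages can then be keyed off each bucket, e.g. prompting the student toward a missing link rather than simply marking the map wrong.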

Date Created
  • 2019

Student modeling for English language learners in a moved by reading intervention

Description

EMBRACE (Enhanced Moved By Reading to Accelerate Comprehension in English) is an iPad application that uses the Moved By Reading strategy to help improve the reading comprehension skills of bilingual (Spanish-speaking) English Language Learners (ELLs). In EMBRACE, students read the text of a story and then move images corresponding to the text that they read. According to embodied cognition theory, this grounds reading comprehension in physical experiences and is thus more engaging.

In this thesis, I used the log data from 20 students in grades 2-5 to design a skill model for a student using EMBRACE. A skill model is the set of knowledge components that a student needs to master in order to comprehend the text in EMBRACE. A good skill model will improve understanding of the mistakes students make and thus aid in the design of useful feedback for the student. In this context, the skill model consists of the vocabulary and syntax associated with the steps that students performed. I mapped each step in EMBRACE to one or more skills (vocabulary and syntax) from the model. After every step, the skill levels are updated in the model: if a student answered the previous step incorrectly, the corresponding skills are decremented, and if the student answered correctly, they are incremented, via the Bayesian Knowledge Tracing algorithm.
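The per-step update just described can be sketched as follows: each step is tied to one or more skills, and every skill on that step moves up or down through the standard Bayesian Knowledge Tracing posterior. The parameter values and skill names here are illustrative placeholders, not values fit in the thesis.

```python
def bkt_posterior(p, correct, slip=0.1, guess=0.3):
    """Posterior P(skill known) given one observed step outcome."""
    if correct:
        return p * (1 - slip) / (p * (1 - slip) + (1 - p) * guess)
    return p * slip / (p * slip + (1 - p) * (1 - guess))

def update_step(skill_levels, step_skills, correct):
    """Update every skill mapped to the step the student just performed."""
    for skill in step_skills:
        skill_levels[skill] = bkt_posterior(skill_levels[skill], correct)
    return skill_levels
```

Skills not mapped to the current step are left untouched, so each step's evidence only affects the knowledge components it actually exercises.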

I then evaluated the students' predicted scores (computed from their skill levels) by comparing them to their posttest scores. The two sets of scores were not highly correlated, but the results gave insights into potential improvements that could be made to the system with respect to user interaction, posttest scoring, and the modeling algorithm.

Date Created
  • 2016

Online embedded assessment for Dragoon, intelligent tutoring system

Description

Embedded assessment constantly updates a model of the student as the student works on instructional tasks. Accurate embedded assessment allows students, instructors, and instructional systems to make informed decisions without requiring the student to stop instruction and take a test. This thesis describes the development and comparison of several student models for Dragoon, an intelligent tutoring system. All the models were instances of Bayesian Knowledge Tracing, a standard method. Several methods of parameterization and calibration were explored using two recently developed toolkits: FAST, which replaces constant-valued parameters with logistic regressions, and BNT-SM. The evaluation was done by calculating the fit of the models to data from human subjects and by assessing the accuracy of their assessment of simulated students. The student models created using node properties as subskills were superior to coarse-grained, skill-only models. Adding this extra level of representation to emission parameters was superior to adding it to transition parameters. Adding difficulty parameters did not improve fit, contrary to standard practice in psychometrics.
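The parameterization idea mentioned above (replacing a constant BKT parameter with a logistic regression over features) can be illustrated in miniature. The feature and weights below are hypothetical; the thesis's fitted models and actual feature set are not given in the abstract.

```python
import math

def logistic(x):
    """Standard logistic (sigmoid) function."""
    return 1.0 / (1.0 + math.exp(-x))

def guess_prob(node_difficulty, w0=-1.0, w1=-2.0):
    """Guess parameter computed from a feature instead of held constant:
    here, a logistic regression on a hypothetical node-difficulty score,
    so harder nodes get a lower probability of a lucky guess."""
    return logistic(w0 + w1 * node_difficulty)
```

The same substitution can be made for slip or learn parameters; the point is that each BKT parameter becomes a function of observable features rather than a single fitted constant.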

Date Created
  • 2015

A formative evaluation research study to guide the design of the Categorization Step Practice Utility (MS-CPU) as an integral part of preparation for the GED mathematics test using the Ms. Stephens Algebra Story Problem-solving Tutor (MSASPT)

Description

The mathematics test is the most difficult test in the GED (General Educational Development) Test battery, largely due to the presence of story problems. Raising performance levels of story problem-solving would have a significant effect on GED Test passage rates. The subject of this formative research study is Ms. Stephens' Categorization Practice Utility (MS-CPU), an example-tracing intelligent tutoring system that serves as practice for the first step (problem categorization) in a larger comprehensive story problem-solving pedagogy that purports to raise the level of story problem-solving performance. During the analysis phase of this project, knowledge components and particular competencies that enable learning (schema building) were identified. During the development phase, a tutoring system was designed and implemented that algorithmically teaches these competencies to the student with graphical, interactive, and animated utilities. Because the tutoring system provides a much more concrete, rather than conceptual, learning environment, it should foster a much greater apprehension of a story problem-solving process. With this experience, the student should begin to recognize the generalizability of concrete operations that accomplish particular story problem-solving goals and begin to build conceptual knowledge and a more conceptual approach to the task. During the formative evaluation phase, qualitative methods were used to identify obstacles in the MS-CPU user interface and disconnections in the pedagogy that impede learning story problem categorization and solution preparation. The study was conducted over two iterations, where identification of obstacles and change plans (mitigations) produced a qualitative data table used to modify the first version of the system (MS-CPU 1.1). These mitigations produced the second version (MS-CPU 1.2), and the next iteration of the study was conducted, producing a second set of obstacle/mitigation tables.
Pre- and posttests were conducted in each iteration to corroborate the effectiveness of the mitigations that were performed. The study identified a number of learning obstacles in the first version of the MS-CPU (1.1). Their mitigation produced a second version (MS-CPU 1.2) with far fewer identified obstacles than the first. It was determined that an additional iteration is needed before more quantitative research is conducted.

Date Created
  • 2018

Analyzing student problem-solving behavior in a step-based tutor and understanding the effect of unsolicited hints

Description

Many previous studies have analyzed human tutoring in great depth and have shown expert human tutors to produce effect sizes roughly twice those produced by an intelligent tutoring system (ITS). However, there has been no consensus on which factor makes them so effective. It is important to know this so that the same phenomena can be replicated in an ITS in order to achieve the same level of proficiency as expert human tutors. Also, to the best of my knowledge, no one has looked at student reactions when they are working with a computer-based tutor. The answers to both these questions are needed in order to build a highly effective computer-based tutor. My research focuses on the second question. In the first phase of my thesis, I analyzed the behavior of students working with the step-based tutor Andes, using verbal-protocol analysis. This revealed some of the ways in which students use a step-based tutor, which can pave the way for the creation of more effective computer-based tutors. I found from the first phase of the research that students often keep trying to fix errors by guessing repeatedly instead of asking for help by clicking the hint button. This phenomenon is known as hint refusal. Surprisingly, a large portion of the students' floundering was due to hint refusal. The hypothesis tested in the second phase of the research is that hint refusal can be significantly reduced and learning significantly increased if Andes uses more unsolicited hints and meta-hints. An unsolicited hint is a hint that is given without the student asking for one. A meta-hint is like an unsolicited hint in that it is given without the student asking for it, but it just prompts the student to click on the hint button. Two versions of Andes were compared: the original version and a new version that gave more unsolicited and meta-hints.
During a two-hour experiment, there were large, statistically reliable differences in several performance measures suggesting that the new policy was more effective.
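The abstract does not specify when the new version of Andes triggered its hints, so the sketch below is one hypothetical policy consistent with the description: after repeated errors on the same step, escalate from a meta-hint (a prompt to click the hint button) to an unsolicited hint (the hint content itself). The thresholds are invented for illustration.

```python
def hint_policy(consecutive_errors, meta_threshold=2, unsolicited_threshold=4):
    """Return which intervention, if any, to deliver on this step."""
    if consecutive_errors >= unsolicited_threshold:
        return "unsolicited-hint"  # give the hint content directly
    if consecutive_errors >= meta_threshold:
        return "meta-hint"         # prompt the student to ask for help
    return None                    # let the student keep trying
```

A policy like this directly targets hint refusal: rather than waiting for a help request that may never come, the tutor intervenes once guessing behavior becomes apparent.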

Date Created
  • 2011