Matching Items (3)
Description
This study explored three methods to measure cognitive load in a learning environment using four logic puzzles that systematically varied in level of intrinsic cognitive load. Participants' perceived intrinsic load was simultaneously measured with a self-report instrument--a traditional subjective measure--and two objective, physiological measures based on eye-tracking and EEG technology. In addition to gathering self-report, eye-tracking, and EEG data, this study also captured data on individual difference variables and puzzle performance. Specifically, this study addressed the following research questions:
1. Are self-report ratings of cognitive load sensitive to tasks that increase in level of intrinsic load?
2. Are physiological measures sensitive to tasks that increase in level of intrinsic load?
3. To what extent do objective physiological measures and individual difference variables predict self-report ratings of intrinsic cognitive load?
4. Do the number of errors and the amount of time spent on each puzzle increase as puzzle difficulty increases?
Participants were 56 undergraduate students. Results from analyses with inferential statistics and data-mining techniques indicated that features of the physiological data were sensitive to the puzzle tasks that varied in level of intrinsic load. The self-report measure performed similarly when the difference in intrinsic load between the puzzles was most pronounced. Implications of these results and future directions for this line of research are discussed.
Contributors: Joseph, Stacey (Author) / Atkinson, Robert K. (Thesis advisor) / Johnson-Glenberg, Mina (Committee member) / Nelson, Brian (Committee member) / Klein, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Many previous studies have analyzed human tutoring in great depth and have shown that expert human tutors produce effect sizes roughly twice those produced by an intelligent tutoring system (ITS). However, there is no consensus on which factors make them so effective. Knowing this is important so that the same phenomena can be replicated in an ITS, allowing it to reach the same level of proficiency as expert human tutors. Also, to the best of my knowledge, no one has looked at how students react when working with a computer-based tutor. Answers to both questions are needed to build a highly effective computer-based tutor. My research focuses on the second question. In the first phase of my thesis, I analyzed the behavior of students working with the step-based tutor Andes, using verbal-protocol analysis. This revealed several ways in which students use a step-based tutor, which can pave the way for the creation of more effective computer-based tutors. I found in the first phase of the research that students often keep trying to fix errors by guessing repeatedly instead of asking for help by clicking the hint button; this phenomenon is known as hint refusal. Surprisingly, a large portion of students' floundering was due to hint refusal. The hypothesis tested in the second phase of the research was that hint refusal can be significantly reduced and learning significantly increased if Andes gives more unsolicited hints and meta-hints. An unsolicited hint is a hint given without the student asking for one. A meta-hint is likewise given without the student asking for it, but it simply prompts the student to click the hint button. Two versions of Andes were compared: the original version and a new version that gave more unsolicited hints and meta-hints. During a two-hour experiment, there were large, statistically reliable differences on several performance measures, suggesting that the new policy was more effective.
Contributors: Ranganathan, Rajagopalan (Author) / VanLehn, Kurt (Thesis advisor) / Atkinson, Robert (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The present study examined the effect of value-directed encoding on recognition memory and how various divided attention tasks at encoding alter value-directed remembering. In the first experiment, participants encoded words that were assigned either high or low point values in multiple study-test phases. The points corresponded to the value the participants could earn by successfully recognizing the words in an upcoming recognition memory task. Importantly, participants were instructed that their goal was to maximize their score in this memory task. The second experiment was modified such that, while studying the words, participants simultaneously completed a divided attention task (either articulatory suppression or random number generation). The third experiment used a non-verbal tone detection divided attention task (easy or difficult versions). Subjective states of recollection (i.e., “Remember”) and familiarity (i.e., “Know”) were assessed at retrieval in all experiments. In Experiment 1, high-value words were recognized more effectively than low-value words, and this difference was primarily driven by increases in “Remember” responses with no difference in “Know” responses. In Experiment 2, the pattern of subjective judgment results from the articulatory suppression condition replicated Experiment 1. However, in the random number generation condition, the effect of value on recognition memory was lost. This same pattern of results was found in Experiment 3, which implemented a different variant of the divided attention task. Overall, these data suggest that executive processes are used when encoding valuable information and that value-directed improvements to memory are not merely the result of differential rehearsal.
Contributors: Elliott, Blake L. (Author) / Brewer, Gene A. (Thesis advisor) / McClure, Samuel M. (Committee member) / Fine, Justin M. (Committee member) / Arizona State University (Publisher)
Created: 2019