Matching Items (149)
Description
Most people are experts in some area of information; however, they may not be knowledgeable about other closely related areas. How knowledge is generalized to hierarchically related categories was explored. Past work has found little to no generalization to categories closely related to learned categories. These results do not fit well with other work focusing on attention during and after category learning. The current work attempted to merge these two areas of research by creating a category structure with the best chance to detect generalization. Participants learned order-level bird categories and family-level wading bird categories. Then participants completed multiple measures to test generalization to old wading bird categories, new wading bird categories, owl and raptor categories, and lizard categories. As expected, the generalization measures converged on a single overall pattern of generalization. No generalization was found, except for already learned categories. This pattern fits well with past work on generalization within a hierarchy, but does not fit well with theories of dimensional attention. Reasons why these findings do not match are discussed, as well as directions for future research.
Contributors: Lancaster, Matthew E (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / Chi, Michelene (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magicians are informal cognitive scientists who regularly test their hypotheses in the real world. As such, they can provide scientists with novel hypotheses for formal psychological research as well as a real-world context in which to study them. One domain where magic can directly inform science is the deployment of attention in time and across modalities. Both magicians and scientists have an incomplete understanding of how attention operates in time, rather than in space. However, magicians have highlighted a set of variables that can create moments of visual attentional suppression, which they call "off-beats," and these variables can speak to modern models of temporal attention. The current research examines two of these variables under conditions ranging from artificial laboratory tasks to the (almost) natural viewing of magic tricks. Across three experiments, I show that the detection of subtle dot probes in a noisy visual display and pieces of sleight of hand in magic tricks can be influenced by the seemingly irrelevant rhythmic qualities of auditory stimuli (cross-modal attentional entrainment) and processes of working memory updating (akin to the attentional blink).
Contributors: Barnhart, Anthony S (Author) / Goldinger, Stephen D. (Thesis advisor) / Glenberg, Arthur M. (Committee member) / Homa, Donald (Committee member) / Simons, Daniel J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found that for which they are searching. However, unlike in laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template, and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recognition memory was investigated for naturalistic dynamic scenes. Although visual recognition for static objects and scenes has been investigated previously and found to be extremely robust in terms of fidelity and retention, visual recognition for dynamic scenes has received much less attention. In four experiments, participants view a number of clips from novel films and are then tasked to complete a recognition test containing frames from the previously viewed films and difficult foil frames. Recognition performance is good when foils are taken from other parts of the same film (Experiment 1), but degrades greatly when foils are taken from unseen gaps within the viewed footage (Experiments 3 and 4). Removing all non-target frames had a serious effect on recognition performance (Experiment 2). Across all experiments, presenting the films as a random series of clips seemed to have no effect on recognition performance. Patterns of accuracy and response latency in Experiments 3 and 4 appear to be a result of a serial-search process. It is concluded that visual representations of dynamic scenes may be stored as units of events, and that participants' old/new judgments of individual frames were better characterized by a cued-recall paradigm than by traditional recognition judgments.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis is an initial test of the hypothesis that superficial measures suffice for measuring collaboration among pairs of students solving complex math problems, where the degree of collaboration is categorized at a high level. Data were collected in the form of logs from students' tablets and the vocal interaction between pairs of students. Thousands of different features were defined, and then extracted computationally from the audio and log data. Human coders used richer data (several video streams) and a thorough understanding of the tasks to code episodes as collaborative, cooperative, or asymmetric contribution. Machine learning was used to induce a detector, based on random forests, that outputs one of these three codes for an episode given only a characterization of the episode in terms of superficial features. An overall accuracy of 92.00% (kappa = 0.82) was obtained when comparing the detector's codes to the humans' codes. However, due to irregularities in running the study (e.g., the tablet software kept crashing), these results should be viewed as preliminary.
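As a point of reference for the detection approach described above, the following is a minimal Python/scikit-learn sketch of how a random-forest detector over episode features could be trained and scored with accuracy and Cohen's kappa. The feature matrix and labels below are random placeholders, not the thesis data or code, so the printed scores here are meaningless; the thesis reports 92.00% accuracy (kappa = 0.82) on its real features.

```python
# Hedged sketch: induce a three-way collaboration detector with a random
# forest and score it against human codes. All data here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))   # 300 episodes x 50 superficial features (placeholder)
y = rng.choice(["collaborative", "cooperative", "asymmetric contribution"], size=300)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
pred = cross_val_predict(clf, X, y, cv=10)   # held-out prediction for each episode

print("accuracy:", accuracy_score(y, pred))
print("kappa:   ", cohen_kappa_score(y, pred))
```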
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Chi, Michelene T. H. (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Categories are often defined by rules regarding their features. These rules may be intensely complex, yet despite this complexity we are often able to learn them with sufficient practice. A possible explanation for how we arrive at consistent category judgments despite these difficulties is that we may define complex categories such as chairs, tables, or stairs by understanding the simpler rules defined by potential interactions with these objects. This concept, called grounding, allows for the learning and transfer of complex categorization rules if said rules are capable of being expressed in a simpler fashion by virtue of meaningful physical interactions. The present experiment tested this hypothesis by having participants engage in either a Rule Based (RB) or Information Integration (II) categorization task with instructions to engage with the stimuli in either a non-interactive or interactive fashion. If participants were capable of grounding the categories, which were defined in the II task with a complex visual rule, to a simpler interactive rule, then participants with interactive instructions should outperform participants with non-interactive instructions. Results indicated that physical interaction with stimuli had a marginally beneficial effect on category learning, but this effect seemed most prevalent in participants engaged in an II task.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / McBeath, Michael (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster over time at finding target stimuli in these repeated contexts (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
For this master's thesis, a unique set of cognitive prompts, designed to be delivered through a teachable robotic agent, was developed for students using Tangible Activities for Geometry (TAG), a tangible learning environment developed at Arizona State University. The purpose of these prompts is to enhance the affordances of the tangible learning environment and help researchers better understand how we can design tangible learning environments to best support student learning. Specifically, the prompts explicitly encourage users to make use of their physical environment by asking students to perform a number of gestures and behaviors while prompting students about domain-specific knowledge. To test the effectiveness of these prompts, which combine elements of cognition and physical movement, the performance and behavior of students who encountered these prompts while using TAG were compared against the performance and behavior of students who encountered a more traditional set of cognitive prompts that would typically be used within a virtual learning environment. Following this study, data were analyzed using a novel modeling and analysis tool that combines enhanced log annotation using video with user model generation functionalities to highlight trends among students.
Contributors: Thomas, Elissa (Author) / Burleson, Winslow (Thesis advisor) / Muldner, Katarzyna (Committee member) / Walker, Erin (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Research in the learning sciences suggests that students learn better by collaborating with their peers than learning individually. Students working together as a group tend to generate new ideas more frequently and exhibit a higher level of reasoning. In this internet age with the advent of massive open online courses (MOOCs), students across the world are able to access and learn material remotely. This creates a need for tools that support distant or remote collaboration. In order to build such tools we need to understand the basic elements of remote collaboration and how it differs from traditional face-to-face collaboration.

The main goal of this thesis is to explore how spoken dialogue varies between face-to-face and remote collaborative learning settings. Speech data are collected from student participants solving mathematical problems collaboratively on a tablet. Spoken dialogue is analyzed based on conversational and acoustic features in both settings. To identify collaborative differences in transactivity and dialogue initiative, the two settings are compared in detail using machine learning classification techniques based on acoustic and prosodic features of speech. Transactivity is defined as the joint construction of knowledge by peers. The main contributions of this thesis are a speech corpus for analyzing spoken dialogue in face-to-face and remote settings, and an empirical analysis of conversation, collaboration, and speech prosody in both settings. The results from the experiments show that the amount of overlap is lower in remote dialogue than in the face-to-face setting. There is a significant difference in transactivity among strangers. My research benefits the computer-supported collaborative learning community by providing an analysis that can be used to build more efficient tools for supporting remote collaborative learning.
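As a small illustration of the kind of conversational feature behind the overlap finding above, here is a hedged Python sketch of how the amount of overlapping speech could be computed from time-stamped speaker turns. The function name and example turn times are hypothetical and not taken from the thesis corpus.

```python
# Hedged sketch: total overlapping speech (in seconds) between two speakers,
# given their turns as (start, end) times. Example intervals are made up.
def overlap_seconds(turns_a, turns_b):
    total = 0.0
    for a_start, a_end in turns_a:
        for b_start, b_end in turns_b:
            # overlap of two intervals, clamped at zero when they do not meet
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

face_to_face = overlap_seconds([(0.0, 2.5), (4.0, 6.0)], [(2.0, 4.5), (5.5, 7.0)])
remote = overlap_seconds([(0.0, 2.0), (4.0, 6.0)], [(2.2, 4.0), (6.1, 7.0)])
print(face_to_face, remote)   # 1.5 0.0 -- more simultaneous speech face-to-face
```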
Contributors: Nelakurthi, Arun Reddy (Author) / Pon-Barry, Heather (Thesis advisor) / VanLehn, Kurt (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A converging operations approach using response time distribution modeling was adopted to better characterize the cognitive control dynamics underlying ongoing task cost and cue detection in event-based prospective memory (PM). In Experiment 1, individual differences analyses revealed that working memory capacity uniquely predicted nonfocal cue detection, while proactive control and inhibition predicted variation in ongoing task cost in the ex-Gaussian parameter associated with continuous monitoring strategies (mu). In Experiments 2A and 2B, quasi-experimental techniques aimed at identifying the role of proactive control abilities in PM monitoring and cue detection suggested that low-ability participants may have PM deficits during demanding tasks due to inefficient monitoring strategies, but that emphasizing the importance of the intention can increase reliance on more efficacious monitoring strategies that boost performance (Experiment 2A). Furthermore, high proactive control ability participants are able to efficiently regulate their monitoring strategies under scenarios that do not require costly monitoring for successful cue detection (Experiment 2B). In Experiments 3A and 3B, it was found that proactive control benefited cue detection in interference-rich environments, but the neural correlates of cue detection and intention execution did not differ when participants engaged in proactive versus reactive control. The results from the current set of studies highlight the importance of response time distribution modeling in understanding PM cost. Additionally, these results have important implications for extant theories of PM and have considerable applied ramifications concerning the cognitive control processes that should be targeted to improve PM abilities.
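For context on the response time distribution modeling mentioned above: the ex-Gaussian is the convolution of a Gaussian (mean mu, standard deviation sigma) with an exponential (mean tau), and mu is the parameter commonly linked to sustained processes such as continuous monitoring, while tau captures the slow tail. A standard form of its density, given here for reference rather than drawn from the thesis itself, is:

```latex
% Ex-Gaussian density: Gaussian (mu, sigma) convolved with an exponential (tau).
% Phi denotes the standard normal cumulative distribution function.
f(x;\mu,\sigma,\tau) = \frac{1}{\tau}
  \exp\!\left(\frac{\mu - x}{\tau} + \frac{\sigma^{2}}{2\tau^{2}}\right)
  \Phi\!\left(\frac{x-\mu}{\sigma} - \frac{\sigma}{\tau}\right),
\qquad
\Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-t^{2}/2}\, dt.
```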
Contributors: Ball, Brett Hunter (Author) / Brewer, Gene A. (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Amazeen, Eric (Committee member) / Arizona State University (Publisher)
Created: 2015