Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient than the right hemisphere (RH) in processing verbal information. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the LH. Most evidence for this comes from hemispheric semantic priming, though fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented simultaneously with, or after, the presentation of target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory. In Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4, cognitive load was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. Results from all experiments revealed several important findings. First, top-down mechanisms are used by the LH primarily to facilitate verbal processing, but they also operate in a domain-general manner in the face of increasing processing demands. Second, evidence indicates that the RH uses top-down mechanisms minimally, and processes verbal information in a more bottom-up manner. These data help clarify the nature of top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Contributors: Tat, Michael J (Author) / Azuma, Tamiko (Thesis advisor) / Goldinger, Stephen D (Committee member) / Liss, Julie M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found that for which they are searching. However, unlike laboratory experiments, searchers in the real-world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: First, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template, and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results suggested that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these effects failed to reach statistical significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
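For readers unfamiliar with the paradigm, the standard contextual-cueing measure is simply the search-time benefit for repeated over novel displays. A minimal sketch, with invented reaction-time values (the function name and numbers are illustrative, not data from the studies above):

```python
# Hypothetical sketch of the basic contextual-cueing analysis
# (Chun & Jiang, 1998): the cueing effect is the mean search RT for
# novel displays minus the mean RT for repeated displays.
# All RT values below are invented for illustration.

def contextual_cueing_effect(rts_repeated, rts_novel):
    """Return the cueing benefit in ms (positive = faster on repeated displays)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_novel) - mean(rts_repeated)

# Toy RTs (ms) from one hypothetical late-session epoch:
repeated = [812, 790, 805, 798]
novel = [871, 860, 880, 869]

print(contextual_cueing_effect(repeated, novel))  # positive benefit in ms
```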
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Current theoretical debate, crossing the bounds of memory theory and mental imagery, surrounds the role of eye movements in successful encoding and retrieval. Although the eyes have been shown to revisit previously viewed locations during retrieval, the functional role of these saccades is not known. Understanding the potential role of eye movements may help address classic questions in recognition memory. Specifically, are episodic traces rich and detailed, characterized by a single strength-driven recognition process, or are they better described by two separate processes, one for vague information and one for the retrieval of detail? Three experiments are reported, in which participants encoded audio-visual information while completing controlled patterns of eye movements. By presenting information in four sources (i.e., voices), assessments of specific and partial source memory were measured at retrieval. Across experiments, participants' eye movements at test were manipulated: Experiment 1 allowed free viewing, Experiment 2 required externally cued fixations to previously relevant (or irrelevant) screen locations, and Experiment 3 required externally cued new or familiar oculomotor patterns across multiple screen locations in succession. Although eye movements were spontaneously reinstated when gaze was unconstrained during retrieval (Experiment 1), externally cueing participants to re-engage in fixations or oculomotor patterns from encoding (Experiments 2 and 3) did not enhance retrieval. Across all experiments, participants' memories were well described by signal-detection models of memory. Source retrieval was characterized by a continuous process, with evidence that source retrieval occurred following item memory failures, and additional evidence that participants partially recollected source in the absence of specific item retrieval. Pupillometry provided an unbiased metric by which to compute receiver operating characteristic (ROC) curves, which were consistently curvilinear (but linear in z-space), supporting signal-detection predictions over those from dual-process theories. Implications for theoretical views of memory representations are discussed.
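The signal-detection prediction mentioned above (curvilinear ROCs that are linear in z-space) can be illustrated with a short sketch. Under an assumed equal-variance Gaussian model with an arbitrary d′, z(hit rate) − z(false-alarm rate) equals d′ at every criterion, which is exactly what makes the z-ROC a straight line; the d′ value and criterion placements below are invented:

```python
# Sketch of the equal-variance signal-detection ROC: sweeping a decision
# criterion c yields hit and false-alarm rates whose ROC is curvilinear
# in probability space but linear (slope 1) in z-space.
from statistics import NormalDist

norm = NormalDist()
d_prime = 1.5  # assumed sensitivity (arbitrary for illustration)

points = []
for c in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    hit = 1 - norm.cdf(c - d_prime)  # P("old" | old item)
    fa = 1 - norm.cdf(c)             # P("old" | new item)
    points.append((fa, hit))

# In z-space, z(hit) - z(fa) equals d' at every criterion:
for fa, hit in points:
    print(norm.inv_cdf(hit) - norm.inv_cdf(fa))
```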
Contributors: Papesh, Megan H (Author) / Goldinger, Stephen D (Thesis advisor) / Brewer, Gene A. (Committee member) / Reichle, Erik D. (Committee member) / Homa, Donald (Committee member) / Glenberg, Arthur M. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Recent research has shown that reward-related stimuli capture attention in an automatic and involuntary manner, an effect termed reward salience (Le Pelley, Pearson, Griffiths, & Beesley, 2015). Although patterns of oculomotor behavior have been examined in recent experiments, questions surrounding a potential neural signal of reward remain. Consequently, this study used pupillometry to investigate how reward-related stimuli affect pupil size and attention. Across three experiments, response time, accuracy, and pupil size were measured as participants searched for targets among distractors. Participants were informed that singleton distractors indicated the magnitude of a potential gain/loss available in a trial. Two visual search conditions were included to manipulate ongoing cognitive demands and isolate reward-related pupillary responses. Although the optimal strategy was to perform quickly and accurately, participants were slower and less accurate in high-magnitude trials. The data suggest that attention is automatically captured by potential loss, even when this is counter to current task goals. Regarding the pupillary response, patterns of pupil size were inconsistent with our predictions across the visual search conditions. We hypothesized that if pupil dilation reflected a reward-related reaction, pupil size would vary as a function of both the presence of a reward and its magnitude. Moreover, we predicted that this pattern would be more apparent in the easier search condition (i.e., cooperation visual search), because the signal of available reward was still present, but the ongoing attentional demands were significantly reduced in comparison to the more difficult search condition (i.e., conflict visual search). In contrast to our predictions, pupil size was more closely related to ongoing cognitive demands, as opposed to affective factors, in cooperation visual search. Surprisingly, pupil size in response to signals of available reward was better explained by affective, motivational, and emotional influences than by ongoing cognitive demands in conflict visual search. The current research suggests that, similar to recent findings involving LC-NE activity (Aston-Jones & Cohen, 2005; Bouret & Richmond, 2009), pupillometry may be used to assess more specific areas of cognition, such as motivation and perception of reward. However, additional research is needed to better understand this unexpected pattern of pupil size.
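Pupillometric measures like those above are typically analyzed after subtractive baseline correction, i.e., referencing each trial's samples to that trial's pre-stimulus pupil size. A minimal sketch with invented sample values (the function, window length, and numbers are illustrative, not the study's actual pipeline):

```python
# Hypothetical sketch of subtractive baseline correction, a common
# preprocessing step in pupillometry: subtract the mean pupil size in a
# pre-stimulus baseline window from every sample in the trial.

def baseline_correct(samples, n_baseline):
    """Subtract the mean of the first n_baseline samples from every sample."""
    baseline = sum(samples[:n_baseline]) / n_baseline
    return [s - baseline for s in samples]

trial = [3.10, 3.12, 3.11, 3.20, 3.35, 3.40]  # pupil diameter (mm), invented
corrected = baseline_correct(trial, n_baseline=3)
print([round(x, 2) for x in corrected])
```

Corrected values express trial-evoked dilation relative to each trial's own baseline, which removes slow drifts in tonic pupil size across the session.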
Contributors: Phifer, Casey (Author) / Goldinger, Stephen D (Thesis advisor) / Homa, Donald J (Committee member) / McClure, Samuel M. (Committee member) / Papesh, Megan H (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examined the boundaries of the visual confirmation bias under equal, low, and high cued-target frequency. Across experiments, I found that: (1) people are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005); (2) however, they are still biased to seek cue-colored stimuli, even when such targets are rare; and (3) regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), whereas the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, for example by "priming" a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences target locating (i.e., attentional guidance) or target identification once a target is located (i.e., decision time), testing whether the label-feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions: people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitated visual search, whereas speaking irrelevant names strongly inhibited it. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
Contributors: Hebert, Katherine P (Author) / Goldinger, Stephen D (Thesis advisor) / Rogalsky, Corianne (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2016