Matching Items (3)
Description
Recent research has shown that reward-related stimuli capture attention in an automatic and involuntary manner, a phenomenon termed reward salience (Le Pelley, Pearson, Griffiths, & Beesley, 2015). Although recent experiments have examined patterns of oculomotor behavior, questions surrounding a potential neural signal of reward remain. Consequently, this study used pupillometry to investigate how reward-related stimuli affect pupil size and attention. Across three experiments, response time, accuracy, and pupil size were measured as participants searched for targets among distractors. Participants were informed that singleton distractors indicated the magnitude of a potential gain/loss available in a trial. Two visual search conditions were included to manipulate ongoing cognitive demands and isolate reward-related pupillary responses. Although the optimal strategy was to perform quickly and accurately, participants were slower and less accurate in high-magnitude trials. The data suggest that attention is automatically captured by potential loss, even when this runs counter to current task goals. Regarding a pupillary response, patterns of pupil size were inconsistent with our predictions across the visual search conditions. We hypothesized that if pupil dilation reflected a reward-related reaction, pupil size would vary as a function of both the presence of a reward and its magnitude. Moreover, we predicted that this pattern would be more apparent in the easier search condition (i.e., cooperation visual search), because the signal of available reward was still present but the ongoing attentional demands were substantially reduced relative to the more difficult search condition (i.e., conflict visual search). In contrast to our predictions, pupil size in cooperation visual search was more closely related to ongoing cognitive demands than to affective factors. Surprisingly, pupil size in response to signals of available reward in conflict visual search was better explained by affective, motivational, and emotional influences than by ongoing cognitive demands. The current research suggests that, similar to recent findings involving locus coeruleus-norepinephrine (LC-NE) activity (Aston-Jones & Cohen, 2005; Bouret & Richmond, 2009), pupillometry may be used to assess more specific areas of cognition, such as motivation and the perception of reward. However, additional research is needed to better understand this unexpected pattern of pupil size.
Contributors: Phifer, Casey (Author) / Goldinger, Stephen D. (Thesis advisor) / Homa, Donald J. (Committee member) / McClure, Samuel M. (Committee member) / Papesh, Megan H. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The present study examined the effect of value-directed encoding on recognition memory and how various divided attention tasks at encoding alter value-directed remembering. In the first experiment, participants encoded words that were assigned either high or low point values across multiple study-test phases. The points corresponded to the value the participants could earn by successfully recognizing the words in an upcoming recognition memory task. Importantly, participants were instructed that their goal was to maximize their score in this memory task. The second experiment was modified such that, while studying the words, participants simultaneously completed a divided attention task (either articulatory suppression or random number generation). The third experiment used a non-verbal tone-detection divided attention task (easy or difficult versions). Subjective states of recollection (i.e., “Remember”) and familiarity (i.e., “Know”) were assessed at retrieval in all experiments. In Experiment 1, high-value words were recognized more effectively than low-value words, and this difference was primarily driven by increases in “Remember” responses, with no difference in “Know” responses. In Experiment 2, the pattern of subjective judgment results from the articulatory suppression condition replicated Experiment 1. However, in the random number generation condition, the effect of value on recognition memory was lost. The same pattern of results was found in Experiment 3, which implemented a different variant of the divided attention task. Overall, these data suggest that executive processes are used when encoding valuable information and that value-directed improvements to memory are not merely the result of differential rehearsal.
Contributors: Elliott, Blake L. (Author) / Brewer, Gene A. (Thesis advisor) / McClure, Samuel M. (Committee member) / Fine, Justin M. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, for example by “priming” a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences locating the target (i.e., attentional guidance) or identifying the target once located (i.e., decision time), testing whether the label-feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions: people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitates visual search, but speaking different names strongly inhibits search. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
Contributors: Hebert, Katherine P. (Author) / Goldinger, Stephen D. (Thesis advisor) / Rogalsky, Corianne (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2016