Matching Items (11)
Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient at processing verbal information than the right hemisphere (RH). One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming, though fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigation is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented simultaneously with or after the presentation of target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory. In Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4, cognitive load was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. Results from all experiments revealed several important findings. First, top-down mechanisms in the LH are primarily used to facilitate verbal processing, but they also operate in a domain-general manner in the face of increasing processing demands. Second, evidence indicates that the RH uses top-down mechanisms minimally and processes verbal information in a more bottom-up manner. These data help clarify the nature of top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Contributors: Tat, Michael J (Author) / Azuma, Tamiko (Thesis advisor) / Goldinger, Stephen D (Committee member) / Liss, Julie M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task, while another group did the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, the effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also completed to investigate source vs. item memory; these revealed a mirror effect in the episodic memory experiments (source memory), but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
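The multinomial tree analyses mentioned above model observed response frequencies as paths through latent processing states. As a hedged sketch (not the study's actual model), the simplest such tree, the one-high-threshold recognition model, separates true detection from guessing and has closed-form maximum-likelihood estimates:

```python
# One-high-threshold (1HT) multinomial processing tree: an illustrative
# sketch, not the study's model. D = probability of true detection,
# g = probability of guessing "old" when detection fails.

def fit_one_high_threshold(hits, misses, false_alarms, correct_rejections):
    """Tree equations: P(hit) = D + (1 - D) * g; P(false alarm) = g.
    Returns the maximum-likelihood estimates (D, g)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    g = fa_rate                                 # guessing, from lure trials
    d = (hit_rate - fa_rate) / (1.0 - fa_rate)  # detection above guessing
    return d, g

d, g = fit_one_high_threshold(hits=80, misses=20,
                              false_alarms=20, correct_rejections=80)
print(f"D = {d:.2f}, g = {g:.2f}")  # D = 0.75, g = 0.20
```

The studies' actual trees are richer (separate source- and item-memory parameters), but the estimation logic is the same: fit tree parameters so the predicted category probabilities match the observed response counts.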
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
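Simulated cochlear implant stimulation of the kind described above is commonly produced by noise vocoding: dividing the speech signal into frequency bands, extracting each band's slowly varying amplitude envelope, and using the envelopes to modulate band-limited noise, which removes the fine frequency structure. A minimal FFT-based sketch follows; the channel count, band edges, and envelope smoothing are illustrative choices, not the study's stimulus parameters.

```python
# A minimal noise vocoder sketch, assuming NumPy. All parameters here
# (n_channels, band edges, envelope cutoff) are illustrative.
import numpy as np

def vocode(signal, fs, n_channels=4, env_cutoff_hz=50.0, seed=0):
    """Replace each frequency band's fine structure with noise, keeping
    only the band's slowly varying amplitude envelope."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # Logarithmically spaced band edges from 100 Hz to the Nyquist frequency.
    edges = np.logspace(np.log10(100.0), np.log10(fs / 2.0), n_channels + 1)
    noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spec, 0), len(signal))
        # Crude envelope: rectify, then smooth with a moving average whose
        # length approximates a low-pass cutoff near env_cutoff_hz.
        win = max(1, int(fs / env_cutoff_hz))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        noise_band = np.fft.irfft(np.where(mask, noise_spec, 0), len(signal))
        out += env * noise_band
    return out

# Vocode half a second of a 440 Hz tone.
fs = 16000
t = np.arange(fs // 2) / fs
vocoded = vocode(np.sin(2 * np.pi * 440.0 * t), fs)
```

Real vocoders typically use proper band-pass filter banks and Hilbert or low-pass envelope extraction; the FFT masking here is just the simplest self-contained stand-in.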
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, the current descriptive method results in a distinct overlap of perceptual features across various etiologies, thus limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective means of improving the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the results were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The following hypothesis was proposed: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications.
Results of the present investigation's MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly making reference to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support acoustic measurement as representative of listener perception, and represent the first phase in supporting the use of a perceptually relevant taxonomy of dysarthria.
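The analysis pipeline described above (listener similarity → MDS → clustering → DFA) can be sketched end to end. This is an illustrative reconstruction with synthetic data, not the study's actual analysis; the three-dimensional solution follows the abstract, but all values, the cluster count, and the use of scikit-learn are assumptions.

```python
# Illustrative MDS -> clustering -> DFA pipeline with synthetic data;
# not the study's analysis. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_speakers = 20

# Synthetic pairwise dissimilarities; in the study these would derive from
# how often listeners grouped two speakers together in free classification.
raw = rng.random((n_speakers, n_speakers))
dissim = (raw + raw.T) / 2.0        # symmetrize
np.fill_diagonal(dissim, 0.0)

# Recover a low-dimensional perceptual space (three dimensions).
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Listener-derived groups: cluster speakers in the perceptual space.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Hypothetical acoustic metrics (e.g., rate/rhythm, intelligibility,
# spectral measures) used to predict the listener-derived clusters via DFA.
acoustics = coords + rng.normal(scale=0.05, size=coords.shape)
dfa = LinearDiscriminantAnalysis().fit(acoustics, clusters)
print("DFA classification accuracy:", dfa.score(acoustics, clusters))
```

Scoring the DFA against listener-derived clusters, as here, mirrors the hypothesis test in the abstract: the same classifier can be scored against etiology labels instead, and the two accuracies compared.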
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization, vowel distinctiveness, and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility.
The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest that distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, an above-chance level of agreement between the nature of vowel misclassifications and misidentification errors was demonstrated for all vowels, suggesting that degraded vowel acoustics are not merely an index of severity in dysarthria, but rather an integral component of the resultant intelligibility disorder.
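Vowel centralization metrics of the kind evaluated in the first experiment can be computed directly from formant frequencies of the corner vowels. The sketch below shows two standard examples, vowel space area (computed with the shoelace formula) and the formant centralization ratio (FCR; Sapir et al., 2010); the formant values are illustrative, not the study's data.

```python
# Two standard vowel centralization metrics, computed from illustrative
# (not the study's) corner-vowel formant values in Hz.

def vowel_space_area(corners):
    """Shoelace area of the polygon whose vertices are (F1, F2) pairs for
    the corner vowels, given in order around the vowel quadrilateral."""
    area = 0.0
    for k in range(len(corners)):
        f1a, f2a = corners[k]
        f1b, f2b = corners[(k + 1) % len(corners)]
        area += f1a * f2b - f1b * f2a
    return abs(area) / 2.0

def fcr(f2u, f2a, f1i, f1u, f1a, f2i):
    """Formant centralization ratio (Sapir et al., 2010): grows toward and
    beyond 1 as the vowel space collapses toward its center."""
    return (f2u + f2a + f1i + f1u) / (f2i + f1a)

# Illustrative (F1, F2) values for /i/, /a/, /u/.
i_v, a_v, u_v = (300.0, 2300.0), (750.0, 1200.0), (320.0, 870.0)
ratio = fcr(f2u=u_v[1], f2a=a_v[1], f1i=i_v[0], f1u=u_v[0],
            f1a=a_v[0], f2i=i_v[1])
print(f"FCR = {ratio:.2f}")  # lower values indicate a more expanded space
```

A centralized (dysarthric-like) vowel space shrinks the area and raises the FCR, which is why such metrics can separate dysarthric from non-disordered speakers.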
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Older adults often experience communication difficulties, including poorer comprehension of auditory speech when it contains complex sentence structures or occurs in noisy environments. Previous work has linked cognitive abilities and the engagement of domain-general cognitive resources, such as the cingulo-opercular and frontoparietal brain networks, to responses to challenging speech. However, the degree to which these networks can support comprehension remains unclear. Furthermore, how hearing loss may be related to the cognitive resources recruited during challenging speech comprehension is unknown. This dissertation investigated how hearing, cognitive performance, and functional brain networks contribute to challenging auditory speech comprehension in older adults. Experiment 1 characterized how age and hearing loss modulate resting-state functional connectivity between Heschl’s gyrus and several sensory and cognitive brain networks. The results indicate that older adults exhibit decreased functional connectivity between Heschl’s gyrus and sensory and attention networks compared to younger adults. Within older adults, greater hearing loss was associated with increased functional connectivity between right Heschl’s gyrus and the cingulo-opercular and language networks. Experiments 2 and 3 investigated how hearing, working memory, attentional control, and fMRI measures predict comprehension of complex sentence structures and speech in noisy environments. Experiment 2 utilized resting-state functional magnetic resonance imaging (fMRI) and behavioral measures of working memory and attentional control. Experiment 3 used activation-based fMRI to examine the brain regions recruited in response to sentences with complex structures, sentences in noisy background environments, or both, as a function of hearing and cognitive abilities.
The results suggest that working memory abilities and the functionality of the frontoparietal and language networks support the comprehension of speech in multi-speaker environments. Conversely, attentional control and the cingulo-opercular network were shown to support comprehension of complex sentence structures. Hearing loss was shown to decrease activation within right Heschl’s gyrus in response to all sentence conditions and increase activation within frontoparietal and cingulo-opercular regions. Hearing loss also was associated with poorer sentence comprehension in energetic, but not informational, masking. Together, these three experiments identify the unique contributions of cognition and brain networks that support challenging auditory speech comprehension in older adults, further probing how hearing loss affects these relationships.
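At its core, the seed-based resting-state functional connectivity used in Experiment 1 reduces to correlating ROI time series. A minimal sketch with synthetic signals follows; the region names and signal mixtures are placeholders, not the study's data or ROI definitions.

```python
# Synthetic sketch of seed-based functional connectivity; region names and
# signal mixtures are placeholders, not the study's data or ROI definitions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200

# Seed time series (standing in for right Heschl's gyrus).
seed = rng.standard_normal(n_timepoints)

# Target "network" time series, constructed to share varying amounts of
# signal with the seed.
networks = {
    "cingulo-opercular": 0.6 * seed + 0.8 * rng.standard_normal(n_timepoints),
    "frontoparietal": rng.standard_normal(n_timepoints),
    "language": 0.4 * seed + 0.9 * rng.standard_normal(n_timepoints),
}

# Functional connectivity = Pearson correlation of the time series; the
# Fisher z-transform is the conventional step before group-level statistics.
for name, ts in networks.items():
    r = np.corrcoef(seed, ts)[0, 1]
    print(f"{name}: r = {r:.2f}, z = {np.arctanh(r):.2f}")
```

In practice, the time series are averaged over voxels within each ROI after preprocessing, and the z-transformed correlations are entered into group-level models against age and hearing measures.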
Contributors: Fitzhugh, Megan (Author) / (Reddy) Rogalsky, Corianne (Thesis advisor) / Baxter, Leslie C (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, Blair (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) people are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005); (2) however, they are still biased to seek cue-colored stimuli, even when such targets are rare; and (3) regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), while the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
It has been suggested that directed forgetting (DF) in the item-method paradigm results from selective rehearsal of to-be-remembered (R) items and passive decay of to-be-forgotten (F) items. However, recent evidence suggests that the passive decay explanation is insufficient. The current experiments examined two theories of DF that assume an active forgetting process: (1) attentional inhibition and (2) tagging and selective search (TSS). Across three experiments, the central tenets of these theories were evaluated. Experiment 1 included encoding manipulations in an attempt to distinguish between these competing theories, but the results were inconclusive. Experiments 2 and 3 examined the theories separately. The results from Experiment 2 supported a representation suppression account of attentional inhibition, while the evidence from Experiment 3 suggested that TSS is not a viable mechanism for DF. Overall, the results provide additional evidence that forgetting is due to an active process, and suggest this process may act to suppress the representations of F items.
Contributors: Hansen, Whitney Anne (Author) / Goldinger, Stephen D. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brewer, Gene (Committee member) / Homa, Donald (Committee member) / Arizona State University (Publisher)
Created: 2011