Matching Items (56)
Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient than the right hemisphere (RH) at processing verbal information. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming; comparatively few studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented either simultaneously with or after the presentation of target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory. In Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4, cognitive load was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. The experiments revealed several important findings. First, top-down mechanisms in the LH are primarily used to facilitate verbal processing, but they also operate in a domain-general manner in the face of increasing processing demands. Second, the evidence indicates that the RH uses top-down mechanisms minimally and processes verbal information in a more bottom-up manner. These data help clarify the nature of the top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Contributors: Tat, Michael J (Author) / Azuma, Tamiko (Thesis advisor) / Goldinger, Stephen D (Committee member) / Liss, Julie M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed the task alone, while another group performed the same task while alternating attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, the effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also conducted to separate source memory from item memory; they revealed a mirror effect in the episodic memory experiments (source memory), but a frequency advantage in the span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. Unlike participants in laboratory experiments, however, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues, but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency, acoustic hearing both in the ear contralateral to the CI and in the implanted ear. These 'hearing preservation' listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array; the NH listeners were tested with filtered noise bands. The mean RMS localization error was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated, in a simulated 'restaurant' environment and in two versions of a 'cocktail party' environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) that persisted when the target and noise were spatially separated, indicating the presence of some binaural processing or 'unmasking' of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
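As a concrete illustration of the localization measure used in Experiment 1, the sketch below computes RMS error over a set of trials on a 13-loudspeaker arc. The speaker spacing, trial data, and function name are illustrative assumptions; only the RMS error measure and the 13-loudspeaker setup come from the abstract.

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """Root-mean-square localization error, in degrees, across trials."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return np.sqrt(np.mean((response - target) ** 2))

# Hypothetical trials on a 13-loudspeaker arc spanning -90 to +90 degrees
speakers = np.linspace(-90, 90, 13)                     # 15-degree spacing (assumed)
targets = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])     # presented azimuths
responses = np.array([-45.0, -30.0, 15.0, 30.0, 75.0])  # speakers the listener chose
print(rms_localization_error(targets, responses))       # ~11.6 degrees
```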
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Concussion, a subset of mild traumatic brain injury (mTBI), has recently been brought to the forefront of the media due to a large lawsuit filed against the National Football League. Concussion resulting from injury varies in severity, duration, and type, based on many individual characteristics that research does not presently understand. Chronic fatigue, poor working memory, impaired self-awareness, and lack of attention to task are symptoms commonly present post-concussion. Currently, there is no standard method of assessing concussion, nor is there a way to track an individual's recovery, so treatment is poorly guided and prognosis suffers. The aim of the following study was to determine patient-specific higher-order cognitive processing deficits for clinical diagnosis and prognosis of concussion. Six individuals (N=6) were seen during the acute phase of concussion, two of whom were seen again when their symptoms were deemed clinically resolved. Subjective information was collected from each patient and from neurological testing. Each individual completed a task in which they were presented with degraded speech, taxing their higher-order cognitive processing. Patient-specific behavioral patterns are noted, creating a unique paradigm for mapping subjective and objective data onto each patient's strategy for compensating for deficits and understanding speech in a difficult listening situation.

Keywords: concussion, cognitive processing
Contributors: Berg, Dena (Author) / Liss, Julie M (Committee member) / Azuma, Tamiko (Committee member) / Caviness, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or repeated distractor identities in visual search, they become faster over time to find target stimuli in these repeated contexts (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. The results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these effects failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
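In contextual cueing designs of this kind, the learning effect is typically quantified as the search-time benefit for repeated over novel contexts across blocks of trials. The sketch below illustrates that computation on simulated (not actual) reaction times; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_trials = 6, 24

# Simulated search RTs (ms): repeated contexts speed up more across epochs
repeated = 900 - 25 * np.arange(n_epochs)[:, None] \
           + rng.normal(0, 40, (n_epochs, n_trials))
novel = 900 - 5 * np.arange(n_epochs)[:, None] \
        + rng.normal(0, 40, (n_epochs, n_trials))

# Contextual cueing effect: RT benefit for repeated contexts in each epoch
benefit = novel.mean(axis=1) - repeated.mean(axis=1)
for epoch, ms in enumerate(benefit, start=1):
    print(f"epoch {epoch}: {ms:+.0f} ms benefit for repeated contexts")
```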
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral implantation and toward bimodal fitting, in which a hearing aid is placed in the contralateral ear if residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how, exactly, vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions, and recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for both bilateral and bimodal cochlear implant participants. Each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not give the bimodal participants significantly greater access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
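For readers unfamiliar with the percent-words-correct measure, here is a minimal scoring sketch. The order-free matching rule and the example phrase are assumptions, not the study's actual scoring protocol, and the lexical boundary error typing from Liss et al. (1998) is not attempted here.

```python
from collections import Counter

def percent_words_correct(target: str, response: str) -> float:
    """Score a transcription: each response word credits at most one
    matching target word, regardless of word order (a simplification)."""
    target_counts = Counter(target.lower().split())
    response_counts = Counter(response.lower().split())
    hits = sum(min(count, response_counts[word])
               for word, count in target_counts.items())
    return 100.0 * hits / sum(target_counts.values())

# A lexical-boundary-style mishearing: "address" heard as "a dress"
print(percent_words_correct("address her meeting times",
                            "a dress her meeting time"))  # 50.0
```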
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
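Simulated cochlear implant stimulation of this kind is commonly implemented as a noise-band vocoder, which discards the fine frequency information described above while preserving temporal envelopes. The sketch below is a generic vocoder under assumed parameters (8 channels, log-spaced 100-8000 Hz, 4th-order Butterworth filters); the dissertation's actual simulation settings are not specified in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Noise-band vocoder: split speech into bands, extract each band's
    temporal envelope, and use the envelopes to modulate band-limited
    noise carriers, discarding fine frequency structure."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))               # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier
    return out / np.max(np.abs(out))                   # normalize peak level
```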
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method yields a distinct overlap of perceptual features across etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective measure and improve the overall reliability of classification (Guerra & Lovely, 2003). This paper investigates the potential of a taxonomic approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions listeners use to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The study employed a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; these data were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The guiding hypothesis was that, because of the presumed predictive link between the acoustic correlates and the listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo system) classifications. The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, a region that has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results: listeners overwhelmingly referred to speaking rate/rhythm, intelligibility, and articulatory precision while performing the free classification task. Additionally, the acoustic correlates revealed by the MDS and submitted to DFA did predict listener group classification. These results support the use of acoustic measures as proxies for listener perception, and represent the first phase of work toward a perceptually relevant taxonomy of dysarthria.
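The MDS-cluster-DFA pipeline described above can be sketched roughly as follows. The toy dissimilarity matrix, the three-cluster choice, and the use of scikit-learn's LinearDiscriminantAnalysis as the DFA are all illustrative assumptions, not the study's actual implementation.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_speakers = 20

# Toy listener-derived dissimilarity matrix (symmetric, zero diagonal),
# standing in for co-grouping data from the free classification task
d = rng.random((n_speakers, n_speakers))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)

# 1) MDS: recover perceptual dimensions from the dissimilarities
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=1).fit_transform(d)

# 2) Cluster analysis: listener-derived speaker groups in MDS space
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(coords)

# 3) DFA: predict listener-derived clusters from acoustic correlates
#    (noisy copies of the MDS coordinates stand in for metrics of
#    rate/rhythm, intelligibility, and long-term average spectrum)
acoustics = coords + rng.normal(0, 0.05, coords.shape)
dfa = LinearDiscriminantAnalysis().fit(acoustics, clusters)
print("DFA agreement with listener clusters:",
      dfa.score(acoustics, clusters))
```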
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits, and less attention has been paid to quantifying the vowel production deficits associated with specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address these problems. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and that differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers; vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments evaluated the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization and distinctiveness, along with movement of the second formant frequency, were most predictive of vowel identification accuracy and overall intelligibility. The third experiment evaluated the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest that distinctive vowel tokens are better identified and, likewise, that better-identified tokens are more distinctive. Further, above-chance agreement between the nature of vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting that degraded vowel acoustics are not merely an index of severity in dysarthria, but rather an integral component of the resultant intelligibility disorder.
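One common centralization metric of the kind referenced here is the acoustic vowel space area: the area of the polygon spanned by the corner vowels in F1/F2 space shrinks as vowels centralize. The sketch below computes it via the shoelace formula; the formant values are hypothetical, and the abstract does not specify which centralization metrics the study actually used.

```python
import numpy as np

def vowel_space_area(f1_hz, f2_hz):
    """Area (Hz^2) of the polygon spanned by corner vowels in F1/F2
    space, via the shoelace formula; vertices must be listed in order
    around the polygon. Smaller areas indicate greater centralization."""
    x = np.asarray(f2_hz, dtype=float)
    y = np.asarray(f1_hz, dtype=float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Hypothetical corner-vowel formant means for /i/, /ae/, /a/, /u/
f1 = [300, 700, 750, 350]
f2 = [2300, 1800, 1100, 900]
print(vowel_space_area(f1, f2))  # 412500.0 Hz^2
```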
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012