Matching Items (929)
Description
The recent spotlight on concussion has illuminated deficits in the current standard of care with regard to addressing acute and persistent cognitive signs and symptoms of mild brain injury. This stems, in part, from the diffuse nature of the injury, which tends not to produce focal cognitive or behavioral deficits that are easily identified or tracked. Indeed, it has been shown that patients with enduring symptoms have difficulty describing their problems; there is therefore an urgent need for a sensitive measure of brain activity that corresponds with higher-order cognitive processing. The development of a neurophysiological metric that maps to clinical resolution would inform decisions about diagnosis and prognosis, including the need for clinical intervention to address cognitive deficits. The literature suggests the need for assessment of concussion under cognitively demanding tasks. Here, a joint behavioral and high-density electroencephalography (EEG) paradigm was employed. This allows for the examination of cortical activity patterns during speech comprehension at various levels of degradation in a sentence verification task, imposing the need for higher-order cognitive processes. Eight participants with concussion listened to true-false sentences produced with noise vocoders ranging from moderately to highly intelligible. Behavioral data were simultaneously collected. The analysis of cortical activation patterns included 1) the examination of event-related potentials, including latency and source localization, and 2) measures of frequency spectra and associated power. Individual performance patterns were assessed during acute injury and at a return visit several months following injury. Results demonstrate that a combination of task-related electrophysiological measures corresponds to changes in task performance during the course of recovery. Further, a discriminant function analysis suggests that EEG measures are more sensitive than behavioral measures in distinguishing between individuals with concussion and healthy controls at both injury and recovery, underscoring the sensitivity of neurophysiological measures obtained during a cognitively demanding task to both injury and persisting pathophysiology.
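As a companion to the abstract's mention of frequency spectra and associated power, here is a minimal sketch of how band-limited spectral power might be computed from epoched EEG. The array shapes, sampling rate, and band limits are illustrative assumptions, not the dissertation's actual analysis pipeline.

```python
# Minimal sketch: band power from epoched EEG (epochs x channels x samples).
# Sampling rate, epoch dimensions, and band choice are assumed for illustration.
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate (Hz)

def band_power(epochs, band, fs=FS):
    """Mean spectral power within `band` = (lo, hi) Hz per epoch and channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs)  # PSD along the last axis
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)  # shape: epochs x channels

# Example: alpha-band (8-12 Hz) power for 40 epochs of 64-channel data.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 64, 2 * FS))  # 2-second epochs
alpha = band_power(epochs, (8.0, 12.0))
print(alpha.shape)  # (40, 64)
```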
Contributors: Utianski, Rene (Author) / Liss, Julie M. (Thesis advisor) / Berisha, Visar (Committee member) / Caviness, John N. (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The present study describes audiovisual sentence recognition in normal hearing listeners, bimodal cochlear implant (CI) listeners and bilateral CI listeners. This study explores a new set of sentences (the AzAV sentences) that were created to have equal auditory intelligibility and equal gain from visual information.

The aims of Experiment I were to (i) compare the lip-reading difficulty of the AzAV sentences to that of other sentence materials, (ii) compare the speech-reading ability of CI listeners to that of normal-hearing listeners and (iii) assess the gain in speech understanding when listeners have both auditory and visual information from easy-to-lip-read and difficult-to-lip-read sentences. In addition, the sentence lists were subjected to a multi-level text analysis to determine the factors that make sentences easy or difficult to speech read.

The results of Experiment I showed that (i) the AzAV sentences were relatively difficult to lip read, (ii) CI listeners and normal-hearing listeners did not differ in lip-reading ability and (iii) sentences with low lip-reading intelligibility (10-15% correct) provide about a 30 percentage point improvement in speech understanding when added to the acoustic stimulus, while sentences with high lip-reading intelligibility (30-60% correct) provide about a 50 percentage point improvement in the same comparison. The multi-level text analyses showed that the familiarity of phrases in the sentences was the primary factor driving lip-reading difficulty.

The aim of Experiment II was to investigate the value, when visual information is present, of bimodal hearing and bilateral cochlear implants. The results of Experiment II showed that when visual information is present, low-frequency acoustic hearing can be of value to speech understanding for patients fit with a single CI. However, when visual information was available, no gain was seen from the provision of a second CI, i.e., bilateral CIs. As was the case in Experiment I, visual information provided about a 30 percentage point improvement in speech understanding.
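To make the percentage point comparisons above concrete, a trivial sketch of how audiovisual gain is expressed; the scores are illustrative, not the study's data.

```python
# Sketch: audiovisual gain in percentage points, as reported above.
def av_gain(auditory_pct, audiovisual_pct):
    """Percentage-point improvement when vision is added to audio."""
    return audiovisual_pct - auditory_pct

print(av_gain(40.0, 70.0))  # 30.0 percentage points (illustrative scores)
```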
Contributors: Wang, Shuai (Author) / Dorman, Michael (Thesis advisor) / Berisha, Visar (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears, and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues involved in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. By contrast, vision did not give the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
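A rough sketch of one scoring measure named above, percent words correct, computed from a target phrase and a listener response. The exact-token matching rule and the phrases are simplifying assumptions; published scoring protocols are more detailed.

```python
# Sketch: percent words correct from a transcript, under a simple
# exact-token matching rule (an assumption, not the study's protocol).
def percent_words_correct(target, response):
    target_words = target.lower().split()
    remaining = response.lower().split()
    hits = 0
    for word in target_words:
        if word in remaining:
            hits += 1
            remaining.remove(word)  # each response word matches at most once
    return 100.0 * hits / len(target_words)

# Illustrative phrases, not the Liss et al. (1998) stimuli.
print(percent_words_correct("address her meeting again", "address a meeting again"))  # 75.0
```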
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with progressively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources recursively. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
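The recursive localization of the fourth experiment can be illustrated with a minimal scalar Extended Kalman Filter that tracks source azimuth from interaural time difference (ITD) measurements during known head rotation. The ITD model ((d/c)·sin θ), noise levels, microphone spacing, and rotation profile are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal scalar EKF sketch: track source azimuth from ITD while the head
# rotates. Model and parameters are assumptions for illustration only.
import numpy as np

D = 0.15   # assumed inter-microphone distance (m)
C = 343.0  # speed of sound (m/s)

def ekf_step(theta, P, omega, dt, itd_meas, q=1e-4, r=(2e-5) ** 2):
    # Predict: a world-fixed source shifts in head coordinates as the head turns.
    theta_pred = theta - omega * dt
    P_pred = P + q
    # Update with the nonlinear ITD measurement model h(theta) = (D/C) sin(theta).
    h = (D / C) * np.sin(theta_pred)          # predicted ITD (s)
    H = (D / C) * np.cos(theta_pred)          # Jacobian dh/dtheta
    K = P_pred * H / (H * P_pred * H + r)     # Kalman gain
    theta_new = theta_pred + K * (itd_meas - h)
    P_new = (1.0 - K * H) * P_pred
    return theta_new, P_new

# Example: true source at 30 deg; head rotating at 20 deg/s for 2 s.
rng = np.random.default_rng(1)
theta_true, theta_est, P = np.deg2rad(30), 0.0, 1.0
for _ in range(200):
    omega, dt = np.deg2rad(20), 0.01
    theta_true -= omega * dt
    itd = (D / C) * np.sin(theta_true) + rng.normal(0, 2e-5)  # noisy ITD
    theta_est, P = ekf_step(theta_est, P, omega, dt, itd)
print(np.rad2deg(theta_est))  # converges near the true azimuth (-10 deg)
```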
Contributors: Zhong, Xuan (Author) / Yost, William (Thesis advisor) / Zhou, Yi (Committee member) / Dorman, Michael (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Otoacoustic emissions (OAEs) are soft sounds generated by the inner ear that can be recorded within the ear canal. Since OAEs reflect the functional status of the inner ear, OAE measurements have been widely used for hearing-loss screening in the clinic. However, current clinical OAE measurements have limitations, such as a restricted frequency range, low efficiency, and inaccurate calibration. In this dissertation project, a new method of OAE measurement, which used a swept tone to evoke stimulus frequency OAEs (SFOAEs), was developed to overcome these limitations. In addition, an in-situ calibration was applied to equalize the spectral level of the swept-tone stimulus at the tympanic membrane (TM). With this method, SFOAEs could be recorded with high resolution over a wide frequency range within one or two minutes. Two experiments were conducted to verify the accuracy of the in-situ calibration and to test the performance of the swept-tone SFOAEs. In experiment I, the calibration of the TM sound pressure was verified in both acoustic cavities and real ears by using a second probe microphone. In addition, the benefits of the in-situ calibration were investigated by measuring OAEs under different calibration conditions. Results showed that the TM pressure could be predicted correctly, and the in-situ calibration provided the most reliable results in OAE measurements. In experiment II, a three-interval paradigm with a tracking-filter technique was used to record swept-tone SFOAEs in 20 normal-hearing subjects. The test-retest reliability of the swept-tone SFOAEs was examined using a repeated-measures design under various stimulus levels and durations. The accuracy of the swept-tone method was evaluated by comparison with a standard method using discrete pure tones. Results showed that SFOAEs could be reliably and accurately measured with the swept-tone method. Compared with the pure-tone approach, the swept-tone method showed significantly improved efficiency. Swept-tone SFOAEs with in-situ calibration may be an alternative to current clinical OAE measurements, allowing more detailed evaluation of inner-ear function and more accurate diagnosis.
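A minimal sketch of generating a logarithmic swept-tone stimulus of the kind described above, using scipy.signal.chirp. The frequency span, duration, and sampling rate are assumptions for illustration, and the in-situ calibration step (equalizing level at the TM) is omitted.

```python
# Sketch: logarithmic swept-tone stimulus for SFOAE measurement.
# All parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import chirp

FS = 44100          # sampling rate (Hz)
DUR = 60.0          # sweep duration (s); the method targets 1-2 minutes
F0, F1 = 500, 8000  # sweep frequency range (Hz)

t = np.arange(int(FS * DUR)) / FS
sweep = chirp(t, f0=F0, f1=F1, t1=DUR, method="logarithmic")
print(sweep.shape)  # one long sweep covering the full frequency range
```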
Contributors: Chen, Shixiong (Author) / Bian, Lin (Thesis advisor) / Yost, William (Committee member) / Azuma, Tamiko (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Social Networking Sites (SNSs), such as Facebook and Twitter, have continued to gain popularity worldwide. Previous research has shown differences in online behaviors at the cultural level, namely between predominantly independent societies, such as the United States, and predominantly interdependent societies, such as China and Japan. In the current study I sought to test whether self-construal was correlated with different ways of using SNSs, and whether there might be socioeconomic status (SES) differences within the US analogous to previously observed cross-cultural differences in SNS use. Higher levels of interdependence were linked with using SNSs to keep in touch with family and friends and with providing social support to others. Interdependence was also correlated with Facebook addiction scale scores, with using SNSs in inappropriate situations, and with overall SNS use. Implications for assessing risk for Internet addiction, as well as for understanding cultural variations in the prevalence of Internet addiction, are discussed.
Contributors: Sobota, David Stanley (Author) / Varnum, Michael (Thesis director) / Knight, George (Committee member) / Dorman, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor)
Created: 2015-05
Description
Head turning is a common sound localization strategy in primates. A novel system that can track head movement and the acoustic signals received at the entrance to the ear canal was tested to obtain binaural sound localization information during fast head movements of the marmoset monkey. Analysis of binaural information was conducted with a focus on the inter-aural level difference (ILD) and inter-aural time difference (ITD) at various head positions over time. The results showed that during fast head turns, the ITDs showed significant and clear changes in trajectory in response to low-frequency stimuli; however, significant phase ambiguity occurred at frequencies greater than 2 kHz. ITD and ILD information was also analyzed with animal vocalizations as the stimuli. The results indicated that ILDs may provide more information for understanding the dynamics of head movement in response to animal vocalizations in the environment. The primary significance of this work is the successful implementation of a system capable of simultaneously recording head movement and acoustic signals at the ear canals. The collected data provide insight into the usefulness of ITD and ILD as binaural cues during head movement.
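A minimal sketch of extracting the two binaural cues analyzed above: ITD from the cross-correlation peak and ILD from the RMS level difference between left and right ear-canal recordings. The sampling rate, window length, and synthetic example are assumptions for illustration.

```python
# Sketch: ITD and ILD estimation from a pair of ear-canal recordings.
# Parameters and the synthetic signals are illustrative assumptions.
import numpy as np

FS = 48000  # assumed sampling rate (Hz)

def itd_ild(left, right, fs=FS):
    # ITD: lag (in seconds) of the peak of the cross-correlation.
    xcorr = np.correlate(left, right, mode="full")
    lag = np.argmax(xcorr) - (len(right) - 1)
    itd = lag / fs
    # ILD: level difference in dB between the two ears.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20 * np.log10(rms(left) / rms(right))
    return itd, ild

# Example: right channel delayed by 10 samples and attenuated by half.
rng = np.random.default_rng(2)
sig = rng.standard_normal(4800)  # 100 ms of noise
left = sig
right = 0.5 * np.roll(sig, 10)
print(itd_ild(left, right))
```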
Contributors: Labban, Kyle John (Author) / Zhou, Yi (Thesis director) / Buneo, Christopher (Committee member) / Dorman, Michael (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of the normalized data was then compared with the accuracy of human perceptual classification of the actual vowels, to determine whether the two normalization techniques yielded results that correlated with the human data.
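For reference, a small sketch of the two normalization methods named above: an Hz-to-Bark transform (the Traunmüller 1990 formulation is assumed here) and Lobanov z-score normalization within a speaker. The formant values are illustrative, not the study's data.

```python
# Sketch: Bark and Lobanov vowel-formant normalization.
# Bark formula per Traunmueller (1990); input values are illustrative.
import numpy as np

def bark(f_hz):
    """Hz-to-Bark transform: z = 26.81*f/(1960+f) - 0.53 (Traunmueller, 1990)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Z-score each formant column across one speaker's vowel tokens."""
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

# Example: F1/F2 (Hz) for a few vowel tokens from one hypothetical speaker.
f1_f2 = np.array([[300, 2300], [700, 1200], [500, 1500], [400, 2000]])
print(bark(f1_f2))     # formants on the Bark scale
print(lobanov(f1_f2))  # speaker-normalized formants
```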
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05