The aims of Experiment I were to (i) compare the lip-reading difficulty of the AzAV sentences to that of other sentence materials, (ii) compare the speech-reading ability of CI listeners to that of normal-hearing listeners, and (iii) assess the gain in speech understanding when listeners have both auditory and visual information from easy-to-lip-read and difficult-to-lip-read sentences. In addition, the sentence lists were subjected to a multi-level text analysis to determine the factors that make sentences easy or difficult to speech-read.
The results of Experiment I showed that (i) the AzAV sentences were relatively difficult to lip-read, (ii) CI listeners and normal-hearing listeners did not differ in lip-reading ability, and (iii) visual cues from sentences with low lip-reading intelligibility (10-15% correct) provided about a 30-percentage-point improvement in speech understanding when added to the acoustic stimulus, while visual cues from sentences with high lip-reading intelligibility (30-60% correct) provided about a 50-percentage-point improvement in the same comparison. The multi-level text analyses showed that the familiarity of the phrases in a sentence was the primary factor affecting lip-reading difficulty.
The aim of Experiment II was to investigate the value, when visual information is present, of bimodal hearing and bilateral cochlear implants. The results of Experiment II showed that when visual information is present, low-frequency acoustic hearing can be of value to speech understanding for patients fit with a single CI. However, when visual information was available, no gain was seen from the provision of a second CI, i.e., bilateral CIs. As was the case in Experiment I, visual information provided about a 30-percentage-point improvement in speech understanding.
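The audiovisual gains reported above are simple differences, in percentage points, between auditory-only and audiovisual scores. A minimal sketch of that arithmetic, using hypothetical scores chosen only for illustration:

```python
def av_gain_pp(auditory_only_pct: float, audiovisual_pct: float) -> float:
    """Audiovisual benefit in percentage points: AV score minus A-only score."""
    return audiovisual_pct - auditory_only_pct

# Hypothetical listener: 40% correct auditory-only, 70% correct audiovisual,
# matching the ~30-percentage-point gain reported for hard-to-lip-read sentences.
print(av_gain_pp(40.0, 70.0))  # 30.0
```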
Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences in a happy or a sad emotion. The results showed no significant group effect for the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone-average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
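The between-emotion cues analyzed here (ratio of mean F0, intensity difference) can be sketched as follows. This is an illustrative NumPy sketch with a very rough autocorrelation-based F0 estimate, not the pitch-tracking pipeline used in the study:

```python
import numpy as np

def mean_f0_autocorr(x, fs, fmin=75.0, fmax=400.0):
    """Crude global F0 estimate: pick the autocorrelation peak within the
    plausible pitch-lag range (illustrative only; real studies use dedicated
    frame-by-frame pitch trackers)."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag search range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def emotion_cues(happy, sad, fs):
    """Ratio of mean F0 and intensity difference (dB) between two recordings
    of the same sentence produced in happy vs. sad emotion."""
    f0_ratio = mean_f0_autocorr(happy, fs) / mean_f0_autocorr(sad, fs)
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    intensity_diff_db = 20 * np.log10(rms(happy) / rms(sad))
    return f0_ratio, intensity_diff_db
```

On synthetic tones (e.g., a louder 220 Hz "happy" token vs. a softer 180 Hz "sad" token) the functions recover an F0 ratio near 220/180 and an intensity difference near 6 dB.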
In cochlear implant (CI) users, speech perception performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a “HEAD”-“HAD” vowel continuum was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and the full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory-feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency range covered by its sinewave carriers. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
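A minimal off-line sketch of a sinewave vocoder of the kind described above (band-split, envelope extraction, remodulation onto sinewave carriers at band centers) might look as follows. The band edges, FFT-mask filtering, and moving-average envelope smoother are my assumptions for illustration, not the authors' real-time implementation:

```python
import numpy as np

def sinewave_vocode(x, fs, n_channels, f_lo=200.0, f_hi=7000.0):
    """Crude off-line sinewave CI simulation: split the signal into
    log-spaced bands, extract each band's amplitude envelope, and
    remodulate it onto a sinewave at the band's geometric center."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    t = np.arange(len(x)) / fs
    win = max(1, int(0.01 * fs))                        # ~10 ms envelope smoother
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass by zeroing FFT bins outside [lo, hi)
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0), n=len(x))
        # Envelope: rectify and smooth with a moving average
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        fc = np.sqrt(lo * hi)                           # geometric band center
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

With `n_channels=1` the output carries almost no spectral detail, while `n_channels=8` preserves a coarse spectral envelope, mirroring the resolution manipulation in the study; a noise-band variant would replace the sinewave carrier with band-limited noise.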