Matching Items (9)

Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information depends upon age. Forty adults participated in the study, which measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
Contributors: Fall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. In contrast, vision did not provide the bimodal participants with significantly increased access to place and stress cues.
Therefore, the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In this study, the Bark transform and Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy for these normalized data was then compared with the accuracy of human perceptual classification of the actual vowels. These results were then analyzed to determine whether the normalization techniques correlated with the human data.
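The two normalization methods named in this abstract have standard textbook forms, which can be sketched as follows. This is a generic illustration, not the study's actual code: the Bark formula shown is Traunmüller's (1990) variant (the abstract does not specify which), and the example formant values are placeholders.

```python
import numpy as np

def bark_transform(f_hz):
    """Map frequency in Hz to the Bark scale using Traunmueller's
    (1990) formula, a common choice for vowel-formant normalization."""
    f_hz = np.asarray(f_hz, dtype=float)
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov_normalize(formants):
    """Lobanov (1971) normalization: z-score each formant (column)
    across one speaker's vowel tokens (rows), removing speaker-specific
    differences in vocal-tract size."""
    formants = np.asarray(formants, dtype=float)
    mean = formants.mean(axis=0)
    std = formants.std(axis=0)
    return (formants - mean) / std
```

After normalization, each speaker's formant distribution has zero mean and unit variance per formant, so tokens from different speakers can be pooled before classification.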
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
Pitch and timbre perception are two important dimensions of auditory perception. These aspects of sound aid the understanding of our environment, and contribute to normal everyday functioning. It is therefore important to determine the nature of perceptual interaction between these two dimensions of sound. This study tested the interactions between pitch perception associated with the fundamental frequency (F0) and sharpness perception associated with the spectral slope of harmonic complex tones in normal hearing (NH) listeners and cochlear implant (CI) users. Pitch and sharpness ranking was measured without changes in the non-target dimension (Experiment 1), with different amounts of unrelated changes in the non-target dimension (Experiment 2), and with congruent/incongruent changes of similar perceptual salience in the non-target dimension (Experiment 3). The results showed that CI users had significantly worse pitch and sharpness ranking thresholds than NH listeners. Pitch and sharpness perception had symmetric interactions in NH listeners. However, for CI users, spectral slope changes significantly affected pitch ranking, while F0 changes had no significant effect on sharpness ranking. CI users' pitch ranking sensitivity was significantly better with congruent than with incongruent spectral slope changes. These results have important implications for CI processing strategies to better transmit pitch and timbre cues to CI users.
Contributors: Soslowsky, Samara Miranda (Author) / Luo, Xin (Thesis director) / Yost, William (Committee member) / Dorman, Michael (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Glottal fry is a vocal register characterized by low frequency and increased signal perturbation, and is perceptually identified by its popping, creaky quality. Recently, the use of the glottal fry vocal register has received growing awareness and attention in popular culture and media in the United States. The creaky quality that was originally associated with vocal pathologies is indeed becoming “trendy,” particularly among young women across the United States. But while existing studies have defined, quantified, and attempted to explain the use of glottal fry in conversational speech, there is currently no explanation for its increasing prevalence amongst American women. This thesis proposes that conversational entrainment—a communication phenomenon describing the propensity to modify one’s behavior to align more closely with one’s communication partner—may provide a theoretical framework to explain the growing trend in the use of glottal fry amongst college-aged women in the United States. Female participants (n = 30) between the ages of 18 and 29 years (M = 20.6, SD = 2.95) had conversations with two conversation partners, one who used quantifiably more glottal fry than the other. The study utilized perceptual and quantifiable acoustic information to address the following key question: Does the amount of habitual glottal fry in a conversational partner influence one’s use of glottal fry in one’s own speech? Results yielded two key findings: (1) according to perceptual annotations, the participants used a greater amount of glottal fry when speaking with the Fry conversation partner than with the Non-Fry partner; and (2) statistically significant differences were found in the acoustics of the participants’ vocal qualities based on conversation partner.
While the current study demonstrates that young women are indeed speaking in glottal fry in everyday conversations, and that its use can be attributed in part to conversational entrainment, we still lack a clear explanation of the deeper motivations for women to speak in a lower vocal register. The current study opens avenues for continued analysis of the sociolinguistic functions of the glottal fry register.
Contributors: Delfino, Christine R (Author) / Liss, Julie M (Thesis advisor) / Borrie, Stephanie A (Thesis advisor) / Azuma, Tamiko (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The purpose of this study was to identify acoustic markers that correlate with accurate and inaccurate /r/ production in children ages 5-8 using signal processing. In addition, the researcher aimed to identify predictive acoustic markers related to changes in /r/ accuracy. A total of 35 children (23 accurate, 12 inaccurate, 8 longitudinal) were recorded. Computerized stimuli were presented on a PC laptop and the children were asked to complete five tasks to elicit spontaneous and imitated /r/ production in all positions. Files were edited and analyzed using a filter bank approach centered at 40 frequencies based on the mel scale. T-tests were used to compare the spectral energy of tokens between the accurate and inaccurate groups, and additional t-tests were used to compare the durations of accurate and inaccurate files. Results included significant differences between accurate and inaccurate productions of /r/, notable differences in the 24-26 mel bin range, and longer durations for inaccurate /r/ than for accurate /r/. Signal processing successfully identified acoustic features of accurate and inaccurate /r/ production, as well as candidate predictive markers that may be associated with acquisition of /r/.
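The 40-band mel analysis with per-band t-tests described above can be sketched as follows. This is a hypothetical illustration only: the band-edge placement, the use of Welch's method for spectral estimation, and the independent-samples t-test are assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy import signal, stats

def mel_band_energies(x, fs, n_bands=40):
    """Average spectral energy in n_bands mel-spaced frequency bands
    (a sketch of a 40-filter mel bank; edges span 0 Hz to Nyquist)."""
    f, pxx = signal.welch(x, fs=fs, nperseg=512)
    # Hz -> mel edges, evenly spaced on the mel scale, then back to Hz
    mel_max = 2595.0 * np.log10(1.0 + (fs / 2.0) / 700.0)
    mel_edges = np.linspace(0.0, mel_max, n_bands + 1)
    hz_edges = 700.0 * (10.0 ** (mel_edges / 2595.0) - 1.0)
    energies = np.empty(n_bands)
    for b in range(n_bands):
        mask = (f >= hz_edges[b]) & (f < hz_edges[b + 1])
        energies[b] = pxx[mask].mean() if mask.any() else 0.0
    return energies

def band_ttests(accurate, inaccurate):
    """Per-band independent-samples t-tests between two groups of
    tokens; inputs are arrays of shape (n_tokens, n_bands).
    Returns per-band t statistics and p-values."""
    return stats.ttest_ind(accurate, inaccurate, axis=0)
```

Bands where the p-values are small would then be candidates for the kind of group differences the study reports (e.g., the 24-26 mel bin range).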
Contributors: Becvar, Brittany Patricia (Author) / Azuma, Tamiko (Thesis advisor) / Weinhold, Juliet (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Children with hearing impairment are at risk for poor attainment in reading decoding and reading comprehension, which suggests they may have difficulty with early literacy skills prior to learning to read. The first purpose of this study was to determine if young children with hearing impairment differ from their peers with normal hearing on early literacy skills and also on three known predictors of early literacy skills – non-verbal cognition, executive functioning, and home literacy environment. A second purpose was to determine if strengths and weaknesses in early literacy skills of individual children with hearing impairment are associated with degree of hearing loss, non-verbal cognitive ability, or executive functioning.

I assessed seven children with normal hearing and 10 children with hearing impairment on assessments of expressive vocabulary, expressive morphosyntax, listening comprehension, phonological awareness, alphabet knowledge, non-verbal cognition, and executive functioning. Two children had unilateral hearing loss, two had mild hearing loss and used hearing aids, two had moderate hearing loss and used hearing aids, one child had mild hearing loss and did not use hearing aids, and three children used bilateral cochlear implants. Parents completed a questionnaire about their home literacy environment.

Findings showed large between-group effect sizes for phonological awareness, morphosyntax, and executive functioning, and medium between-group effect sizes for expressive vocabulary, listening comprehension, and non-verbal cognition. Visual analyses provided no clear pattern to suggest that non-verbal cognition or degree of hearing loss was associated with individual patterns of performance for children with hearing impairment; however, three children who seemed at risk for reading difficulties had executive functioning scores that were at the floor.

Most prekindergarten and kindergarten children with hearing impairment in this study appeared to be at risk for future reading decoding and reading comprehension difficulties. Further, based on individual patterns of performance, risk was not restricted to one type of early literacy skill and a strength in one skill did not necessarily indicate a child would have strengths in all early literacy skills. Therefore, it is essential to evaluate all early literacy skills to pinpoint skill deficits and to prioritize intervention goals.
Contributors: Runnion, Elizabeth (Author) / Gray, Shelley (Thesis advisor) / Dorman, Michael (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Speech intelligibility measures how well a speaker can be understood by a listener. Traditional measures of intelligibility, such as word accuracy, are not sufficient to reveal the reasons for intelligibility degradation. This dissertation investigates the underlying sources of intelligibility degradation from the perspectives of both the speaker and the listener. Segmental phoneme errors and suprasegmental lexical boundary errors are developed to reveal the perceptual strategies of the listener. A comprehensive set of automated acoustic measures is developed to quantify variations in the acoustic signal along three perceptual dimensions: articulation, prosody, and vocal quality. The developed measures were validated on a dysarthric speech dataset spanning a range of severities. Multiple regression analysis is employed to show that the developed measures can predict perceptual ratings reliably. The relationship between the acoustic measures and the listening errors is investigated to show the interaction between speech production and perception. The hypothesis is that segmental phoneme errors are mainly caused by imprecise articulation, while suprasegmental lexical boundary errors are due to unreliable phonemic information as well as abnormal rhythm and prosody patterns. To test this hypothesis, within-speaker variations were simulated in different speaking modes. Significant changes were detected in both the acoustic signals and the listening errors. Results of the regression analysis support the hypothesis by showing that changes in the articulation-related acoustic features are important in predicting changes in phoneme errors, while changes in both the articulation- and prosody-related features are important in predicting changes in lexical boundary errors.
Moreover, significant correlation was achieved in the cross-validation experiment, indicating that it is possible to predict intelligibility variations from the acoustic signal.
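The multiple-regression step described above can be illustrated with a minimal ordinary-least-squares sketch. The predictor matrix and ratings here are generic placeholders, not the dissertation's actual acoustic features or perceptual scores.

```python
import numpy as np

def fit_rating_model(X, y):
    """Ordinary least squares: fit perceptual ratings y from acoustic
    measures (columns of X). Returns [intercept, coef_1, ..., coef_k]."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict_ratings(coef, X):
    """Apply a fitted model to new acoustic measures."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ coef

def r_squared(y, y_hat):
    """Proportion of variance in the ratings explained by the model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

In a cross-validation setup like the one the abstract mentions, the model would be fit on one subset of speakers and `r_squared` evaluated on held-out speakers to check that the acoustic measures generalize.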
Contributors: Jiao, Yishan (Author) / Berisha, Visar (Thesis advisor) / Liss, Julie (Thesis advisor) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Dementia is a syndrome resulting from an acquired brain disease that causes impairment across many cognitive domains. The progressive disorder generally affects memory, attention, executive functions, communication, and other cognitive domains in ways that significantly alter everyday function (Quinn, 2014). The purpose of this research was to conduct a systematic review of cognitive-communication assessments and screeners used in assessing dementia to assist in early prognosis. From this review, there is potential for developing a new test that addresses the areas in which people with dementia often have deficits: 1) memory, 2) attention, 3) executive functions, 4) language, and 5) visuospatial skills. In the field of speech-language pathology, or medicine in general, there is no single assessment that can diagnose dementia. Additionally, this review explores identifying speech and language characteristics of dementia through speech analytics to theoretically help clinicians identify early signs of dementia.
Contributors: Miller, Marissa (Author) / Liss, Julie M (Thesis advisor) / Berisha, Visar (Thesis advisor) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2019