Matching Items (4)
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information is dependent upon age. Forty adults participated in a study that measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
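The intelligibility measure used in the study, percent words correct, can be sketched as a simple word-matching score. The order-free multiset match below is an illustrative simplification, not the study's actual scoring protocol, and the sentences are invented examples.

```python
from collections import Counter

def percent_words_correct(target, transcript):
    """Intelligibility as percent words correct: the share of target
    words that also appear in the listener's transcript.
    (Order-free multiset match; a simplification of clinical scoring.)"""
    target_words = Counter(target.lower().split())
    heard_words = Counter(transcript.lower().split())
    # Count each target word as correct at most as often as it was heard.
    correct = sum(min(n, heard_words[w]) for w, n in target_words.items())
    return 100.0 * correct / sum(target_words.values())

# A listener recovers 3 of the 4 target words.
score = percent_words_correct("the boat sailed away", "the goat sailed away")
```

Averaging this score over a listener's transcripts in the auditory-only and audiovisual conditions would yield the per-condition intelligibility values the study compares.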
Contributors: Fall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared with the human perceptual classification accuracy for the actual vowels. The results were then analyzed to determine whether these normalization techniques correlated with the human data.
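The two normalization techniques named above have standard closed forms. A minimal sketch follows, using Traunmüller's approximation for the Bark scale (the thesis does not specify which Bark variant was used, so this is an assumption) and per-speaker z-scoring for the Lobanov method; the formant values are hypothetical.

```python
import statistics

def bark(f_hz):
    """Traunmüller's critical-band-rate approximation:
    maps a frequency in Hz onto the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Lobanov normalization: z-score one speaker's values for a
    given formant (e.g. all of that speaker's F1 measurements),
    removing speaker-specific vocal-tract differences."""
    mean = statistics.mean(formants)
    sd = statistics.stdev(formants)
    return [(f - mean) / sd for f in formants]

# Hypothetical F1 values (Hz) for one speaker's vowel tokens
f1 = [310.0, 500.0, 730.0, 640.0]
f1_bark = [bark(f) for f in f1]
f1_lobanov = lobanov(f1)
```

Both transforms map raw formant frequencies into a space where vowel categories from different speakers are more directly comparable, which is what makes automatic classification of the normalized data meaningful.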
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
Student to Student: A Guide to Anatomy is an anatomy guide written by students, for students. Its focus is on teaching the anatomy of the heart, lungs, nose, ears, and throat in a manner that isn't overpowering or stress-inducing. Daniel and I have taken numerous anatomy courses and fully comprehend what it takes to succeed in these classes. We found that the anatomy books recommended for these courses are often completely overwhelming, offering far more information than what is needed. This renders them nearly useless for a college student who just wants to learn the essentials. Why would a student even pick one up if they can't find what they need to learn? With that in mind, our goal was to create a comprehensive, easy-to-understand, and easy-to-follow guide to the heart, lungs, and ENT (ear, nose, and throat). We know what information is vital for test day, and we wanted to highlight these key concepts and ideas in our guide. Spending just 60 to 90 minutes studying our guide should help any student with their studying needs. Whether the student has medical school aspirations or simply wants to pass the class, our guide is there for them. We aren't experts, but we know what strategies and methods can help even the most confused students learn. Our guide can also be used as an introductory resource to our respective majors (Daniel: Biology; Charles: Speech and Hearing) for students who are undecided on what they want to do. In the future, Daniel and I would like to see more students creating similar guides and adding to the "Student to Student" title with their own works. After all, who better to teach students than the students who know what it takes?
Contributors: Kennedy, Charles (Co-author) / McDermand, Daniel (Co-author) / Kingsbury, Jeffrey (Thesis director) / Washo-Krupps, Delon (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Speech is known to serve as an early indicator of neurological decline, particularly in motor diseases. There is significant interest in developing automated, objective signal analytics that detect clinically relevant changes and in evaluating these algorithms against the existing gold standard: perceptual evaluation by trained speech-language pathologists. Hypernasality, the result of poor control of the velopharyngeal flap (the soft palate structure regulating airflow between the oral and nasal cavities), is one such speech symptom of interest, as precise velopharyngeal control is difficult to achieve under neuromuscular disorders. However, a host of co-modulating variables give hypernasal speech a complex and highly variable acoustic signature, making it difficult for skilled clinicians to assess and for automated systems to evaluate. Previous work on rating hypernasality from speech relies on either engineered features based on statistical signal processing or machine learning models trained end-to-end on clinical ratings of disordered speech examples. Engineered features often fail to capture the complex acoustic patterns associated with hypernasality, while end-to-end methods tend to overfit to the small datasets on which they are trained. In this thesis, I present a set of acoustic features, models, and strategies for characterizing hypernasality in dysarthric speech that split the difference between these two approaches, with the aim of capturing the complex perceptual character of hypernasality without overfitting to the small datasets available. The features are based on acoustic models trained on a large corpus of healthy speech, integrating expert knowledge to capture known perceptual characteristics of hypernasal speech. They are then used in relatively simple linear models to predict clinician hypernasality scores. These simple models are robust, generalizing across diseases and outperforming a comprehensive set of baselines in accuracy and correlation. This novel approach represents a new state-of-the-art in objective hypernasality assessment.
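The modeling strategy described above — a simple linear model mapping acoustic features to clinician ratings, evaluated by correlation — can be sketched as follows. The feature values and ratings are invented for illustration; the thesis's actual features come from acoustic models trained on healthy speech, not from this toy setup.

```python
import math

def fit_linear(x, y):
    """Ordinary least squares for a single feature:
    returns (slope, intercept) minimizing squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def pearson(a, b):
    """Pearson correlation between two score sequences
    (e.g. predicted vs. clinician hypernasality ratings)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / math.sqrt(sum((ai - ma) ** 2 for ai in a)
                           * sum((bi - mb) ** 2 for bi in b))

# Hypothetical nasality-feature values and clinician ratings (1-7 scale)
feature = [0.2, 0.5, 0.9, 1.4, 1.8]
rating = [1.0, 2.0, 4.0, 5.0, 7.0]
slope, intercept = fit_linear(feature, rating)
predicted = [slope * f + intercept for f in feature]
r = pearson(predicted, rating)
```

Keeping the predictive model this simple is what guards against overfitting: the model's capacity is low, so performance rests on the quality of the features rather than on memorizing the small clinical dataset.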
Contributors: Saxon, Michael Stephen (Author) / Berisha, Visar (Thesis advisor) / Panchanathan, Sethuraman (Thesis advisor) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020