Description

Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task while another group did the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print frequency, and correlations with working memory span. Multinomial tree analyses were also conducted to investigate source vs. item memory and revealed a mirror effect in episodic memory experiments (source memory) but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
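As a rough sketch of the kind of analysis described (hypothetical file and column names, not the authors' actual pipeline), per-cell accuracy and reaction times, and their correlation with working memory span, might be computed like this:

```python
# Minimal sketch, not the authors' pipeline: hypothetical trial-level data
# with columns subject, condition ('single'/'dual'), print_freq ('high'/'low'),
# repetitions, correct (0/1), rt_ms, and a per-subject wm_span score.
import pandas as pd
from scipy.stats import pearsonr

trials = pd.read_csv("frequency_judgments.csv")  # hypothetical file

# Mean accuracy and RT for each cell of the design.
cells = (trials
         .groupby(["condition", "print_freq", "repetitions"])
         .agg(accuracy=("correct", "mean"), mean_rt=("rt_ms", "mean")))
print(cells)

# Does working memory span predict frequency judgment accuracy?
per_subj = (trials.groupby("subject")
            .agg(accuracy=("correct", "mean"), wm_span=("wm_span", "first")))
r, p = pearsonr(per_subj["wm_span"], per_subj["accuracy"])
print(f"WM span vs. accuracy: r = {r:.2f}, p = {p:.3f}")
```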
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information depends upon age. Forty adults participated in the study, which measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
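The core comparison could look something like the following sketch, assuming per-participant percent-words-correct scores in a hypothetical CSV:

```python
# Minimal sketch with hypothetical columns: group ('older'/'young') and
# percent-words-correct scores in 'auditory' and 'audiovisual' conditions.
import pandas as pd
from scipy.stats import ttest_rel, ttest_ind

scores = pd.read_csv("intelligibility.csv")  # hypothetical file
scores["av_benefit"] = scores["audiovisual"] - scores["auditory"]

# Did adding visual speech improve recognition overall?
t, p = ttest_rel(scores["audiovisual"], scores["auditory"])
print(f"AV vs. A-only: t = {t:.2f}, p = {p:.3f}")

# Was the visual benefit larger for older adults?
older = scores.loc[scores["group"] == "older", "av_benefit"]
young = scores.loc[scores["group"] == "young", "av_benefit"]
t, p = ttest_ind(older, young)
print(f"Older vs. young benefit: t = {t:.2f}, p = {p:.3f}")
```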
Contributors: Fall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and more fully interact socially with their environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners, vision plays a role; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for bilateral and bimodal cochlear implant participants. Each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. Therefore, the results suggest that vision may provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not give the bimodal participants significantly greater access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
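Percent words correct, the primary intelligibility metric here, can be illustrated with a minimal scoring function; this strict position-matching version is a simplification of published scoring rules, and the phrase below is a made-up example:

```python
# Minimal sketch of percent-words-correct scoring. Strict word-position
# matching is a simplification; published scoring rules are more lenient.
def percent_words_correct(target: str, response: str) -> float:
    t_words = target.lower().split()
    r_words = response.lower().split()
    hits = sum(t == r for t, r in zip(t_words, r_words))
    return 100.0 * hits / len(t_words)

# Hypothetical target phrase and listener transcription.
print(percent_words_correct("attend the shiny band venue",
                            "attend the tiny band venue"))  # 80.0
```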
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Language and music are fundamentally entwined within human culture. The two domains share similar properties, including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in these two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence bearing on this question exists for speech production: musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is one of the first to address the effects of music on speech perception. Two groups of participants, an older adult group (n = 24; M = 71.63 yrs) and a younger adult group (n = 50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally, and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. Musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger and older adult groups; this effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. Because the speech perception task relied heavily on working memory, and working memory is known to decline with aging, participants also completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups. Working memory ability correlated with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results provide new avenues for facilitating speech perception in stroke patients and shed light upon the underlying mechanisms of Melodic Intonation Therapy for speech production.
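A minimal sketch of the covariate analysis described above, assuming hypothetical column names, in which working memory enters the model alongside stimulus type and age group so that group differences are tested over and above working memory:

```python
# Minimal sketch, assuming hypothetical columns: recall (proportion of the
# pseudoword sentence repeated correctly), stimulus_type (normal speech,
# simultaneous music-speech, rhythmic speech, music-primed speech),
# age_group ('older'/'younger'), and wm_score from the working memory task.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("speech_perception.csv")  # hypothetical file
model = smf.ols("recall ~ C(stimulus_type) * C(age_group) + wm_score",
                data=data).fit()
print(model.summary())  # age effects tested over and above working memory
```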
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across various etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective means of improving the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study used a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the results were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The hypothesis was as follows: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications. The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly referring to speaking rate/rhythm, intelligibility, and articulatory precision during the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measures as representative of listener perception and represent the first phase in supporting a perceptually relevant taxonomy of dysarthria.
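The MDS-to-clustering-to-DFA pipeline might be sketched as follows, with random stand-in data in place of the listener dissimilarities and acoustic metrics:

```python
# Minimal sketch with random stand-in data for 30 speakers; `dissim` stands
# in for listener-derived dissimilarities from free classification, and
# `acoustics` for the acoustic metrics (rate/rhythm, intelligibility, LTAS).
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 30
d = rng.random((n, n))
dissim = (d + d.T) / 2          # symmetric dissimilarity matrix
np.fill_diagonal(dissim, 0.0)

# Recover a 3-D perceptual space from the free-classification data.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Listener-derived speaker clusters in that space.
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(coords)

# DFA: can acoustic metrics reproduce the perceptual clusters?
acoustics = rng.random((n, 6))
lda = LinearDiscriminantAnalysis().fit(acoustics, clusters)
print(f"Classification rate: {lda.score(acoustics, clusters):.2f}")
```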
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Military veterans have a significantly higher incidence of mild traumatic brain injury (mTBI), depression, and post-traumatic stress disorder (PTSD) compared to civilians. Military veterans also represent a rapidly growing subgroup of college students, due in part to the robust, financially attractive educational benefits under the Post-9/11 GI Bill. The overlapping, cognitively impactful symptoms of service-related conditions, combined with the underreporting of mTBI and psychiatric conditions, make accurate assessment of cognitive performance in military veterans challenging. Recent research findings provide conflicting information on cognitive performance patterns in military veterans. The purpose of this study was to determine whether service-related conditions and self-assessments predict performance on complex working memory and executive function tasks for military veteran college students. Sixty-one military veteran college students attending classes at Arizona State University campuses completed clinical neuropsychological tasks and experimental working memory and executive function tasks. The results revealed that a history of mTBI significantly predicted poorer performance in verbal working memory and decision-making. Depression significantly predicted poorer performance in executive function related to serial updating. In contrast, the commonly used clinical neuropsychological tasks were not sensitive to service-related conditions, including mTBI, PTSD, and depression. The differing performance patterns observed between the clinical tasks and the more complex experimental tasks suggest that researchers and clinicians should use tests that sufficiently tax verbal working memory and executive function when evaluating the subtle, higher-order cognitive deficits associated with mTBI and depression.
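The predictive analysis might be sketched as a regression of task performance on binary condition indicators (hypothetical file and column names):

```python
# Minimal sketch with hypothetical columns: verbal_wm (score on a complex
# verbal working memory task) and 0/1 indicators for mTBI history, PTSD,
# and depression derived from the self-assessments.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("veteran_cognition.csv")  # hypothetical file
model = smf.ols("verbal_wm ~ mtbi + ptsd + depression", data=data).fit()
print(model.summary())  # per the abstract, expect a negative mTBI coefficient
```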
Contributors: Gallagher, Karen Louise (Author) / Azuma, Tamiko (Thesis advisor) / Liss, Julie (Committee member) / Lavoie, Michael (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention remains largely underexplored. This dissertation therefore first quantifies attention deficits post-stroke and then investigates whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, and executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results of Experiment 1a indicate that alerting, orienting, and executive control are each uniquely affected by presentation modality. The lesion-symptom mapping results of Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may affect the lateralization of some aspects of attention, and thus auditory attention may be more susceptible to impairment after a left hemisphere stroke.
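The three Attention Network Test scores follow standard subtraction logic (Fan et al., 2002): alerting is no-cue minus double-cue RT, orienting is center-cue minus spatial-cue RT, and executive control is incongruent minus congruent flanker RT. A minimal sketch with hypothetical trial-level columns:

```python
# Minimal sketch of standard ANT scoring; hypothetical trial-level columns:
# cue ('none'/'double'/'center'/'spatial'), flanker ('congruent'/'incongruent'),
# correct (0/1), rt_ms. Scores are computed on correct trials only.
import pandas as pd

trials = pd.read_csv("ant_trials.csv")  # hypothetical file
ok = trials[trials["correct"] == 1]

cue_rt = ok.groupby("cue")["rt_ms"].mean()
alerting = cue_rt["none"] - cue_rt["double"]      # benefit of temporal warning
orienting = cue_rt["center"] - cue_rt["spatial"]  # benefit of spatial cueing

flank_rt = ok.groupby("flanker")["rt_ms"].mean()
executive = flank_rt["incongruent"] - flank_rt["congruent"]  # conflict cost

print(f"alerting={alerting:.0f} ms, orienting={orienting:.0f} ms, "
      f"executive={executive:.0f} ms")
```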

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Experiment 2 therefore explored the relationship between cognition, two distinct prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results of Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. For list prosody, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results of Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
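Lesion-symptom mapping of the kind used here is typically mass-univariate: at each voxel, participants whose lesion covers that voxel are compared on the behavioral score with those spared there. A toy sketch with synthetic stand-in data (real analyses add permutation-based correction for multiple comparisons):

```python
# Toy sketch of mass-univariate lesion-symptom mapping with synthetic data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_subj, n_vox = 25, 1000
lesion = rng.integers(0, 2, size=(n_subj, n_vox))  # 1 = voxel lesioned
score = rng.normal(size=n_subj)                    # sentence comprehension

tmap = np.full(n_vox, np.nan)
for v in range(n_vox):
    hit = score[lesion[:, v] == 1]
    spared = score[lesion[:, v] == 0]
    if min(len(hit), len(spared)) >= 5:            # minimum-overlap criterion
        tmap[v] = ttest_ind(spared, hit).statistic # positive t: lesion hurts
print(f"peak |t| = {np.nanmax(np.abs(tmap)):.2f}")
```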
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Previous studies have found that detection of near-threshold stimuli is reduced immediately before movement and throughout movement production. This is thought to occur via the internal forward model: an efference copy of the motor command is used to generate a sensory prediction that cancels out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals at the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception across mixed speaking and silent reading conditions. Participants judged whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves, without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition but remained at a constant level, close to the perceptual threshold, throughout the silent reading condition. The attenuation was strongest during speech production and was already present, to a lesser degree, during the planning period just before speech. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
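The detection analysis reduces to comparing detection rates across time windows and conditions; a minimal sketch with hypothetical trial-level columns:

```python
# Minimal sketch with hypothetical columns: condition ('speaking'/'reading'),
# epoch (time window relative to the visual cue, e.g. 'baseline'/'planning'/
# 'production'), detected (0/1).
import pandas as pd

trials = pd.read_csv("lip_detection.csv")  # hypothetical file
rates = (trials.groupby(["condition", "epoch"])["detected"]
         .mean()
         .unstack())
print(rates)  # expect speaking rates to fall below the ~0.85 resting level
```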
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in the stroke survivor population, where results have shown that it enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies using the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five target directions: the probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement is indeed possible.
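The kinematic measures named above (total movement time, linear deviation, average velocity) can all be computed from a sampled 2-D hand trajectory; a sketch with a synthetic stand-in trajectory:

```python
# Sketch of the kinematic measures on a 2-D trajectory sampled at fs Hz;
# the random-walk trajectory is synthetic stand-in data.
import numpy as np

fs = 200.0
xy = np.cumsum(np.random.default_rng(2).normal(size=(400, 2)), axis=0)

speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs
moving = speed > 0.05 * speed.max()                 # velocity-threshold onset
onset = int(np.argmax(moving))
offset = len(speed) - int(np.argmax(moving[::-1])) - 1

total_movement_time = (offset - onset) / fs
average_velocity = speed[onset:offset].mean()

# Linear deviation: peak perpendicular distance from the straight
# start-to-end path.
p0, p1 = xy[onset], xy[offset]
dx, dy = p1 - p0
pts = xy[onset:offset + 1] - p0
linear_deviation = np.abs(dx * pts[:, 1] - dy * pts[:, 0]) / np.hypot(dx, dy)

print(total_movement_time, average_velocity, linear_deviation.max())
```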
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description

Children with cleft palate with or without cleft lip (CP+/-L) often demonstrate disordered speech. Clinicians and researchers have a goal for children with CP+/-L to demonstrate typical speech when entering kindergarten; however, this benchmark is not routinely met. There is a large body of previous research examining speech articulation skills in this clinical population; however, there are continued questions regarding the severity of articulation deficits in children with CP+/-L, especially for the age range of children entering school. This dissertation aimed to provide additional information on speech accuracy and speech error usage in children with CP+/-L between the ages of four and seven years. Additionally, it explored individual and treatment characteristics that may influence articulation skills. Finally, it examined the relationship between speech accuracy during a sentence repetition task versus during a single-word naming task.

Children with CP+/-L presented with speech accuracy that differed according to manner of production. Speech accuracy for fricative phonemes was influenced by severity of hypernasality, although age and secondary-surgery status did not influence fricative accuracy. For place of articulation, children with CP+/-L demonstrated the strongest accuracy for bilabial and velar phonemes, while alveolar and palatal phonemes were produced with lower accuracy. Children with clefting that involved the lip and alveolus demonstrated reduced accuracy for alveolar phonemes compared to children with clefts involving only the hard and soft palate.
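Scoring accuracy by place and manner amounts to tagging each target consonant with its category and averaging correctness within categories; a minimal sketch with hypothetical phoneme-level records (the place/manner tables below are deliberately partial):

```python
# Minimal sketch with hypothetical phoneme-level records (columns: child,
# target consonant, correct 0/1); the place/manner tables are partial.
import pandas as pd

PLACE = {"p": "bilabial", "b": "bilabial", "m": "bilabial",
         "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
         "k": "velar", "g": "velar"}
MANNER = {"p": "stop", "b": "stop", "t": "stop", "d": "stop",
          "k": "stop", "g": "stop", "m": "nasal",
          "s": "fricative", "z": "fricative"}

prods = pd.read_csv("consonant_productions.csv")  # hypothetical file
prods["place"] = prods["target"].map(PLACE)
prods["manner"] = prods["target"].map(MANNER)

print(prods.groupby("place")["correct"].mean())   # accuracy by place
print(prods.groupby("manner")["correct"].mean())  # accuracy by manner
```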

Participants used a variety of speech error types, with developmental/phonological errors, anterior oral cleft speech characteristics, and compensatory errors occurring most frequently across the sample. Several factors impacted the type of speech errors used, including cleft type, severity of hypernasality, and age.

The results from this dissertation project support previous research findings and provide additional information regarding the severity of speech articulation deficits according to manner and place of consonant production and according to different speech error categories. This study adds information on individual and treatment characteristics that influenced speech accuracy and speech error usage.
Contributors: Lien, Kari (Author) / Scherer, Nancy J. (Thesis advisor) / Nett Cordero, Kelly (Committee member) / Liss, Julie (Committee member) / Sitzman, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2020