Matching Items (57)
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task while another group did the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also completed to investigate source vs. item memory and revealed a mirror effect in episodic memory experiments (source memory), but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). We concluded, however, that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
ContributorsPeterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2013
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information is dependent upon age. Forty adults participated in a study that measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups: older adults (age range 47 to 68) and young adults (age range 19 to 36) to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults when compared with younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
ContributorsFall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2014
Description
Language and music are fundamentally entwined within human culture. The two domains share similar properties including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in these two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence bearing on this question exists for speech production. Musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is one of the first to address the effects of music on speech perception. Two groups of participants, an older adult group (n=24; M = 71.63 yrs) and a younger adult group (n=50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four different types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. It was found that musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger adult and older adult groups. This effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. The speech perception task relied heavily on working memory, and there is a known working memory decline associated with aging. Thus, participants completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups.
Working memory ability correlated with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results provide new avenues for facilitating speech perception in stroke patients and shed light on the underlying mechanisms of Melodic Intonation Therapy for speech production.
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2015
Description
The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affect the theoretical mechanisms that underlie nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, two hundred fifteen Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R2 = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items.
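The LLTM described in this abstract decomposes each item's Rasch difficulty into a weighted sum of item-feature effects (a Q-matrix times a vector of feature weights). The following is a minimal illustrative sketch of that structural idea only; the Q-matrix, feature weights, and noise level are invented for illustration and are not the study's data, and real LLTM estimation uses conditional maximum likelihood on response data rather than least squares on pre-estimated difficulties.

```python
import numpy as np

# LLTM idea: item difficulty beta_i ~ sum_k q_ik * eta_k, where q_ik marks
# whether item i carries feature k, and eta_k is that feature's difficulty weight.
rng = np.random.default_rng(0)

n_items, n_features = 36, 5                      # 36 nonwords; 3 radicals + 2 incidentals
Q = rng.integers(0, 2, size=(n_items, n_features)).astype(float)  # hypothetical Q-matrix
eta_true = np.array([0.8, -0.5, 1.2, 0.3, -0.9]) # hypothetical feature weights

# "Observed" Rasch difficulties = structural part plus noise
beta = Q @ eta_true + rng.normal(scale=0.3, size=n_items)

# Recover the feature weights from the difficulties by least squares
eta_hat, *_ = np.linalg.lstsq(Q, beta, rcond=None)
beta_hat = Q @ eta_hat                           # model-predicted difficulties

# Correlation between modeled and "observed" difficulties (analogous to the
# r = .89 the abstract reports for the real data)
r = np.corrcoef(beta, beta_hat)[0, 1]
print(f"correlation between modeled and observed difficulties: r = {r:.2f}")
```

The least-squares step stands in for proper item response estimation purely to make the feature decomposition concrete: a high correlation between `beta` and `beta_hat` means the chosen features account for most of the variation in item difficulty.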
ContributorsMorgan, Gareth Philip (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2011
Description
Identification of primary language impairment (PLI) in sequential bilingual children is challenging because of the interaction between PLI and second language (L2) proficiency. An important step in improving the accurate diagnosis of PLI in bilingual children is to investigate how differences in L2 performance are affected by length of L2 exposure and how L2 assessment contributes to differentiation between children with and without PLI at different L2 proficiency levels. Sixty-one children with typical language development (TD) ages 5;3-8 years and 12 children with PLI ages 5;5-7;8 years participated. Results revealed that bilingual children with and without PLI, who had between 1 and 3 years of L2 exposure, did not differ in mean length of utterance (MLU), number of different words, percent of maze words, or performance on expressive and receptive grammatical tasks in L2. Performance on a grammaticality judgment task by children with and without PLI demonstrated the largest effect size, indicating that it may potentially contribute to identification of PLI in bilingual populations. In addition, children with PLI did not demonstrate any association between length of exposure and L2 proficiency, suggesting that they do not develop their L2 proficiency in relation to length of exposure in the same manner as children with TD. Results also indicated that comprehension of grammatical structures and the expressive grammatical task in L2 may contribute to differentiation between the language ability groups at the low and intermediate-high proficiency levels. The discriminant analysis with the entire sample of bilingual children with and without PLI revealed that, among L2 measures, only MLU contributed to the discrimination between the language ability groups. However, poor classification accuracy suggested that MLU alone is not a sufficient predictor of PLI.
There were significant differences among L2 proficiency levels in children with TD in MLU, number of different words, and performance on the expressive and receptive grammatical tasks in L2, indicating that L2 proficiency level may potentially impact the differentiation between language difficulties due to typical L2 acquisition processes and PLI.
ContributorsSmyk, Ekaterina (Author) / Restrepo, Maria Adelaida (Thesis advisor) / Gorin, Joanna (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2012
Description
ABSTRACT
This study investigated the effects of a family literacy program on Latino parents' language practices at home and on their children's oral language skills. Specifically, the study examined the extent to which: (a) the program, called Family Reading Intervention for Language and Literacy in Spanish (FRILLS), was effective at teaching low-education, low-income Latino parents three language strategies (i.e., comments, high-level questions, and recasts) as measured by parent implementation, (b) parents maintained implementation of the three language strategies two weeks following the program, and (c) parent implementation of such practices positively impacted children's oral language skills as measured by number of inferences, conversational turns, number of different words, and mean length of utterance in words (MLU-w).

Five Latino mothers and their Spanish-speaking preschool children participated in a multiple baseline single-subject design across participants. After baseline data stabilized, each mother was randomly selected to initiate the intervention. Program initiation was staggered across the five mothers. The mothers engaged in seven individual intervention sessions. Data on parent and child outcomes were collected across three experimental conditions: baseline, intervention, and follow-up. This study employed visual analysis of the data to determine the program's effects on parent and child outcome variables.

Results indicated that the program was effective in increasing the mothers' use of comments and high-level questions, but not recasts, when reading to their children. The program had a positive effect on the children's number of inferences, different words, and conversational turns, but not on the mean length of utterances. Findings indicate that FRILLS may be effective at extending and enriching the language environment that low-income children who are culturally and linguistically diverse experience at home. Three results with important implications for those who implement, develop, or examine family literacy programs are discussed.
ContributorsMesa Guecha, Carol Magnolia (Author) / Restrepo, María A (Thesis advisor) / Gray, Shelley (Committee member) / Jimenez-Silva, Margarita (Committee member) / Arizona State University (Publisher)
Created2015
Description
This mixed methods study examined whether participation in a virtual community of practice (vCoP) could impact the implementation of new skills learned in a professional development session and help to close the research to implementation gap.

Six participants attended a common professional development session and completed pre-, mid-, and post-intervention surveys regarding their implementation of social emotional teaching strategies, as well as face-to-face interviews.

Both quantitative and qualitative data were examined to determine whether participation in the vCoP impacted implementation of skills learned in the PD session. Quantitative data were inconclusive, but qualitative data showed an appreciation for participation in the vCoP and for access to the resources shared by the participants. Limitations and implications for future cycles of research are discussed.
ContributorsLopez, Ariana Colleen (Author) / Dorn, Sherman (Thesis advisor) / Gray, Shelley (Committee member) / Zbyszinski, Lauren (Committee member) / Arizona State University (Publisher)
Created2018
Description
22q11.2 Deletion Syndrome (22q11.2DS) is one of the most frequent chromosomal microdeletion syndromes in humans. This case study focuses on the language and reading profile of a female adult with 22q11.2 Deletion Syndrome who was undiagnosed until the age of 27. To comprehensively describe the participant's profile, a series of assessment measures was administered in the speech, language, cognition, reading, and motor domains. Understanding how 22q11.2DS has impacted the life of a recently diagnosed adult will provide insight into how to best facilitate long-term language and educational support for this population and inform future research.
ContributorsPhilp, Jennifer Lynn (Author) / Scherer, Nancy (Thesis director) / Peter, Beate (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The purpose of the present study was to determine whether vocabulary knowledge is related to degree of hearing loss. A 50-question multiple-choice vocabulary test comprised of old and new words was administered to 43 adults with hearing loss (19 to 92 years old) and 51 adults with normal hearing (20 to 40 years old). Degree of hearing loss ranged from mild to moderately severe as determined by bilateral pure-tone thresholds. Education levels ranged from some high school to graduate degrees. It was predicted that knowledge of new words would decrease with increasing hearing loss, whereas knowledge of old words would be unaffected. The Test of Contemporary Vocabulary (TCV) was developed for this study and contained words with old and new definitions. The vocabulary scores were subjected to a repeated-measures ANOVA with definition type (old and new) as the within-subjects factor. Hearing level and education were between-subjects factors, while age was entered as a covariate. The results revealed no main effect of age or education level, while a significant main effect of hearing level was observed. Specifically, performance for new words decreased significantly as degree of hearing loss increased. A similar effect was not observed for old words. These results indicate that knowledge of new definitions is inversely related to degree of hearing loss.
ContributorsMarzan, Nicole Ann (Author) / Pittman, Andrea (Thesis director) / Azuma, Tamiko (Committee member) / Wexler, Kathryn (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Language acquisition is a phenomenon we all experience, and though it is well studied many questions remain regarding the neural bases of language. Whether a hearing speaker or Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor) both languages share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g. a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, as it relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying late L2 normal hearing learners of American Sign Language (ASL). Twenty English speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists. 
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest that there is a high degree of overlap in sentence processing networks for ASL and English. There are also important differences in the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
ContributorsMickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05