Description
Minimal information exists concerning dual language acquisition of three-year-old dual language learners (DLLs) during their first school experience and first systematic exposure to English. This study examined the Spanish and early English language development of young DLLs in the context of standardized measures and a story retell task. Participants included eight Spanish-English DLLs (7 females, 1 male, M age = 3 years, 8 months) attending Head Start, and their classroom teachers. Outcome measures for the children included composite and scaled scores on the Clinical Evaluation of Language Fundamentals Preschool-2 Spanish (CELF Preschool-2 Spanish; Wiig, Secord & Semel, 2009) and the parallel English measure (CELF Preschool-2; Wiig, Secord & Semel, 2005), and measures of lexical (NVT, NNVT, TNV, NW, NDW, TNW and TTR) and grammatical (MLUw) development. Proportion of classroom teachers' and paraprofessionals' Spanish, English and mixed language use was measured to contextualize the children's learning environment with regard to language exposure. Children's mean standardized Spanish scores at school entry were not significantly different from their mean scores in May; however, an increase in total number of verb types was observed. Children's English receptive, content, and structure mean standardized scores in May were significantly higher than their scores at school entry. Children were exposed to a high proportion of mixed language use and disproportionate amounts of English and Spanish exclusively. Children's performance was highly variable across measures and languages. The findings of the current study provide a reference point for future research regarding language development of three-year-old Spanish-English dual language learners.
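For readers unfamiliar with the lexical and grammatical measures listed above, the sketch below shows how token and type counts such as TNW, NDW, and TTR, along with MLUw, are conventionally computed from transcribed utterances. It is a minimal illustration under the standard definitions of these measures, not the study's own analysis code, and the sample utterances are hypothetical.

```python
# Minimal sketch: computing common lexical-diversity and utterance-length
# measures (TNW, NDW, TTR, MLUw) from transcribed utterances. The function
# name, tokenization, and sample data are illustrative, not from the study.

def lexical_measures(utterances):
    """utterances: list of transcribed utterances, each a string of words."""
    tokenized = [u.lower().split() for u in utterances if u.strip()]
    all_words = [w for utt in tokenized for w in utt]

    tnw = len(all_words)                                # total number of words
    ndw = len(set(all_words))                           # number of different words
    ttr = ndw / tnw if tnw else 0.0                     # type-token ratio
    mluw = tnw / len(tokenized) if tokenized else 0.0   # mean length of utterance in words

    return {"TNW": tnw, "NDW": ndw, "TTR": ttr, "MLUw": mluw}


# Hypothetical Spanish utterances from a story retell.
sample = ["el perro come", "come la manzana", "quiero agua"]
print(lexical_measures(sample))
```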
Contributors: Dubasik, Virginia L. (Author) / Wilcox, M. J. (Thesis advisor) / Ingram, David (Committee member) / Lafferty, Addie (Committee member) / MacSwan, Jeff (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task, while another group performed the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also completed to investigate source vs. item memory and revealed a mirror effect in episodic memory experiments (source memory), but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues, but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency, acoustic hearing in both the ear contralateral to the CI and in the implanted ear. These "hearing preservation" listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. For the NH listeners, the mean RMS error for localization was 7 degrees; for the bilateral CI listeners, 20 degrees; and for the hearing preservation listeners, 23 degrees. The scores for the two CI groups did not differ significantly. Thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained with the filtered noise bands for the normal hearing listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated "restaurant" environment and in two versions of a "cocktail party" environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither of the CI groups showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) in conditions that also maintained the separation of target and noise, indicating the presence of some binaural processing or "unmasking" of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
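The RMS localization errors reported above follow the conventional definition: the square root of the mean squared difference between response azimuth and source azimuth across trials. Below is a minimal sketch of that calculation, with a hypothetical loudspeaker arc and simulated responses rather than the dissertation's data.

```python
import numpy as np

# Sketch of the conventional RMS localization-error calculation: the square
# root of the mean squared azimuth difference between response and source
# across trials. The array geometry and trial data below are hypothetical.

def rms_localization_error(source_az_deg, response_az_deg):
    source = np.asarray(source_az_deg, dtype=float)
    response = np.asarray(response_az_deg, dtype=float)
    return np.sqrt(np.mean((response - source) ** 2))

# Example: a 13-loudspeaker arc spanning -90 to +90 degrees in 15-degree steps,
# with simulated responses scattered around the true locations.
sources = np.arange(-90, 91, 15)
responses = sources + np.random.default_rng(0).normal(0, 10, size=sources.size)
print(f"RMS error: {rms_localization_error(sources, responses):.1f} degrees")
```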
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Spanish-speaking (SS) dual language learners (DLLs) have shown differential developmental profiles of the native language (L1). The current study examined whether or not the Spanish acquisition profile, specifically accusative clitics, in predominantly SS, Latino children continues to develop in an English-language contact situation. This study examined (1) accuracy rates of clitic production, total substitutions, and total omissions across 5-, 6-, and 7-year-olds; (2) accuracy rates of clitic production, total substitutions, and total omissions across low and high English proficiency groups; and (3) whether or not there was a trend to use the default clitic lo in inappropriate contexts. Seventy-four SS children aged 5;1 to 7;11 participated in a clitic elicitation task. Results indicated non-significant effects of age and proficiency level on the accuracy of clitic production. These results suggest dual language learners are in an environment that does not foster the maintenance of the L1, at least in the accuracy of accusative clitic pronouns.
Contributors: Figueroa, Megan Danielle (Author) / Restrepo, María A. (Thesis advisor) / Gelderen, Elly van (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, the current descriptive method results in a distinct overlap of perceptual features across various etiologies, thus limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective means of improving the overall reliability of classification (Guerra & Lovely, 2003). The present paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the resulting perceptual data were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The following hypothesis was identified: Because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications. Results of the present investigation's MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., February 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly making reference to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measures as representative of listener perception, and represent the first phase in supporting the use of a perceptually relevant taxonomy of dysarthria.
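As an illustration of the analysis chain described above, and not the study's actual code, the sketch below embeds speakers in a low-dimensional perceptual space with MDS from a listener-derived dissimilarity matrix and then uses a linear discriminant function to classify speakers from those dimensions. The dissimilarity matrix, group labels, and dimensionality here are randomly generated placeholders.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative sketch (not the study's analysis): derive perceptual dimensions
# from a listener dissimilarity matrix, then test whether those dimensions
# predict group membership with a discriminant function.

rng = np.random.default_rng(1)
n_speakers = 20

# Hypothetical symmetric dissimilarity matrix, e.g., 1 minus the proportion of
# listeners who grouped each speaker pair together.
d = rng.random((n_speakers, n_speakers))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)        # speakers x 3 perceptual dimensions

# Hypothetical labels: listener-derived clusters or Mayo etiology categories.
labels = rng.integers(0, 3, size=n_speakers)

dfa = LinearDiscriminantAnalysis()
dfa.fit(coords, labels)
print("Classification accuracy on the MDS dimensions:", dfa.score(coords, labels))
```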
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
When listeners hear sentences presented simultaneously, the listeners are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that the listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that was uttered in the competing sentence.
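To make the processing concrete, here is a rough sketch of a pulse-train channel vocoder of the general kind described above: the signal is split into analysis bands, each band's envelope is extracted, and each envelope modulates a band-limited pulse train whose rate matches the desired F0. The channel count, band edges, filter order, and F0 value are illustrative assumptions, not the parameters used in this thesis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pulse_train_vocoder(signal, fs, f0=120.0, n_channels=8, lo=100.0, hi=7000.0):
    """Rough pulse-train channel vocoder sketch; all parameters are illustrative."""
    edges = np.geomspace(lo, hi, n_channels + 1)        # log-spaced band edges

    # Pulse-train carrier: one impulse per F0 period, so F0 is preserved.
    carrier = np.zeros(len(signal))
    carrier[::int(round(fs / f0))] = 1.0

    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band_env = np.abs(hilbert(sosfiltfilt(sos, signal)))   # band envelope
        band_carrier = sosfiltfilt(sos, carrier)               # band-limited pulses
        out += band_env * band_carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
speech = np.random.default_rng(0).normal(size=fs)  # stand-in for a 1 s speech waveform
vocoded = pulse_train_vocoder(speech, fs, f0=120.0)
```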
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05
Description
Children's speech and language development is measured by performance on standardized articulation tests. Test items on these assessments, however, vary in length and complexity. Word complexity was compared across five articulation tests: the Assessment of Phonological Patterns-Revised (APP-R), the Bankson-Bernthal Test of Phonology (BBTOP), the Clinical Assessment of Articulation and Phonology (CAAP), the Goldman-Fristoe Test of Articulation (GFTA), and the Assessment of Children's Articulation and Phonology (ACAP). Four word complexity groups were defined along two dimensions: monosyllabic vs. multisyllabic words, and words with vs. without consonant clusters. The measure of phonological mean length of utterance (Ingram, 2001) was used to assess overall word complexity. It was found that the tests varied in number of test items and word complexity, with the BBTOP and the CAAP showing the most similarity to word complexity in the spontaneous speech of young children. On the other hand, the APP-R used the most complex words and showed the least similarity. Additionally, case studies were analyzed for three of the tests to examine the effect of word complexity on consonant correctness, as measured by the Percentage of Correct Consonants (PCC) and the Proportion of Whole Word Proximity (PWP). Word complexity was found to affect consonant correctness, and therefore test performance.
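The whole-word and segmental measures named above can be computed per word from a few counts. The sketch below follows common descriptions of Ingram's phonological mean length of utterance (each produced segment scores one point and each correctly produced consonant scores one additional point), with PWP taken as the ratio of the child's pMLU to the target's and PCC as the proportion of target consonants produced correctly. The example counts are hypothetical, not items from the tests discussed.

```python
# Hedged sketch of pMLU, PWP, and PCC as commonly described: one point per
# produced segment, one extra point per correct consonant, PWP = child pMLU /
# target pMLU, PCC = correct consonants / target consonants. The example word
# and counts are hypothetical.

def pmlu(n_segments, n_correct_consonants):
    return n_segments + n_correct_consonants

def word_scores(target_segments, target_consonants,
                child_segments, child_correct_consonants):
    target_pmlu = pmlu(target_segments, target_consonants)
    child_pmlu = pmlu(child_segments, child_correct_consonants)
    return {
        "pMLU": child_pmlu,
        "PWP": child_pmlu / target_pmlu,
        "PCC": child_correct_consonants / target_consonants,
    }

# Example: target "banana" /banana/ has 6 segments and 3 consonants;
# a child saying [nana] produces 4 segments with 2 correct consonants.
print(word_scores(target_segments=6, target_consonants=3,
                  child_segments=4, child_correct_consonants=2))
```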
Contributors: Sullivan, Katherine Elizabeth (Author) / Ingram, David (Thesis director) / Bacon, Cathy (Committee member) / Brown, Jean (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2013-05
Description
This thesis investigated the impact of word complexity, as measured through the Proportion of Whole Word Proximity (PWP; Ingram, 2002), on consonant correctness, as measured by the Percentage of Correct Consonants (PCC; Shriberg & Kwiatkowski, 1980), in the spoken words of monolingual Spanish-speaking children. The effect of word complexity on consonant correctness has previously been studied in English-speaking children (Knodel, 2012); the present study extends this line of research to determine if it can be appropriately applied to Spanish. Language samples were used from a previous study (Hase, 2010) in which Spanish-speaking children were given two articulation assessments: Evaluación fonológica del habla infantil (FON; Bosch Galceran, 2004) and the Spanish Test of Articulation for Children Under Three Years of Age (STAR; Bunta, 2002). It was hypothesized that word complexity would affect a Spanish-speaking child's productions of correct consonants, as was seen for the English-speaking children studied. This hypothesis was supported for 10 out of the 14 children. The pattern of word complexity found for Spanish was as follows: CVCV > CVCVC; trisyllabic words without clusters > disyllabic words with clusters.
Contributors: Purinton, Kaitlyn Lisa (Author) / Ingram, David (Thesis director) / Dixon, Dixon (Committee member) / Barlow, Jessica (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2013-12
Description
In this study, the Bark transform and Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy for these normalized data was then compared to human perceptual classification accuracy for the actual vowels. These results were then analyzed to determine whether the two techniques correlated with the human data.
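For reference, the two normalization approaches named above can be sketched as follows: the Bark transform maps formant frequencies in Hz onto an auditory scale (the version shown uses Traunmüller's formula, one common choice), and the Lobanov method z-scores each formant within a speaker. The formant values in the example are hypothetical, not data from this study.

```python
import numpy as np

# Minimal sketch of the two normalization methods named above. The Bark
# transform here uses Traunmüller's formula; the Lobanov method z-scores each
# formant within a speaker. The F1/F2 values below are hypothetical.

def bark(frequency_hz):
    f = np.asarray(frequency_hz, dtype=float)
    return 26.81 * f / (1960.0 + f) - 0.53

def lobanov(formants):
    """formants: array of shape (n_tokens, n_formants) for one speaker."""
    f = np.asarray(formants, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0, ddof=1)

# Hypothetical F1/F2 values (Hz) for three vowel tokens from one speaker.
f1_f2 = np.array([[300.0, 2300.0], [700.0, 1200.0], [500.0, 1500.0]])
print(bark(f1_f2))
print(lobanov(f1_f2))
```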
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
This thesis compared two measures of phonological assessment of children, Shriberg and Kwiatkowski's 1980 Percentage of Correct Consonants (PCC) and Ingram's 2002 Proportion of Whole Word Proximity (PWP). Two typically developing two-year-old children were initially studied, followed by nine children with speech sound disorders. The children's words were divided into four categories ranging from least complex to most complex. It was hypothesized that the measures would correlate with word simplicity. The hypothesis was supported for the two typically developing children and for five of the children with speech sound disorders. The other four children with speech sound disorders, however, did not show the correlation. It was concluded that PCC and PWP did not measure the same thing, that PCC alone was sufficient to assess the typically developing children, and that the two measures together captured the ability of the children with speech sound disorders better than either measure alone. Further, the differences between the two groups of children with speech sound disorders were interpreted as showing a difference between phonological delay and phonological disorder.
Contributors: Knodel, Rebekah Katelyn (Author) / Ingram, David (Thesis director) / Major, Roy (Committee member) / Fox, Angela (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2013-05