Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task while another group performed the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print-frequency, and correlations with working memory span. Multinomial processing tree analyses were also conducted to investigate source vs. item memory and revealed a mirror effect in episodic memory experiments (source memory) but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). Overall, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
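The multinomial-tree approach described above can be illustrated with a toy model. The sketch below is not the thesis's actual model: the tree structure and the parameter names and values (`d_item`, `d_source`, guessing rate `g`) are hypothetical, chosen only to show how tree parameters can separate item memory from source memory.

```python
def mpt_predictions(d_item, d_source, g=0.5):
    """Toy multinomial-processing-tree sketch separating item and source memory.

    p_item:   the item is recognized directly (d_item) or, failing that, guessed (g).
    p_source: the source can be recalled only if the item is recognized first; the
              source is then either retrieved (d_source) or guessed (g).
    """
    p_item = d_item + (1 - d_item) * g
    p_source = d_item * (d_source + (1 - d_source) * g)
    return p_item, p_source

# Hypothetical parameter values, purely for illustration.
p_item, p_source = mpt_predictions(d_item=0.6, d_source=0.4)
print(p_item, p_source)
```

In a real analysis the parameters would be estimated by maximum likelihood from the observed response-category frequencies rather than fixed by hand.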
ContributorsPeterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2013
Description
Concussion, a subset of mild traumatic brain injury (mTBI), has recently been brought to the forefront of the media by a large lawsuit filed against the National Football League. Concussion resulting from injury varies in severity, duration, and type, based on many individual characteristics that research does not presently understand. Chronic fatigue, poor working memory, impaired self-awareness, and lack of attention to task are symptoms commonly present post-concussion. Currently, there is no standard method of assessing concussion, nor is there a way to track an individual's recovery, which can lead to misguided treatment and poorer prognosis. The aim of the present study was to determine patient-specific higher-order cognitive processing deficits for clinical diagnosis and prognosis of concussion. Six individuals (N = 6) were seen during the acute phase of concussion, two of whom were seen subsequently when their symptoms were deemed clinically resolved. Subjective information was collected from each patient and from neurological testing. Each individual completed a task in which they were presented with degraded speech, taxing their higher-order cognitive processing. Patient-specific behavioral patterns were noted, creating a paradigm for mapping subjective and objective data onto each patient's strategy for compensating for deficits and understanding speech in a difficult listening situation. Keywords: concussion, cognitive processing
ContributorsBerg, Dena (Author) / Liss, Julie M (Committee member) / Azuma, Tamiko (Committee member) / Caviness, John (Committee member) / Arizona State University (Publisher)
Created2013
Description
Efficiency is an increasingly important concern in portable applications, where a finite battery means finite operating time. Higher-efficiency devices must be designed without compromising the performance that consumers have come to expect. Class D amplifiers deliver increased efficiency, but at the cost of distortion; Class AB amplifiers have low efficiency but high linearity. By modulating the supply voltage of a Class AB amplifier to form a Class H amplifier, efficiency can be increased while maintaining Class AB linearity. A Class AB amplifier with 92 dB power supply rejection ratio (PSRR) and a Class H amplifier were designed in a 0.24 um process for portable audio applications. Using a multiphase buck converter increased the efficiency of the Class H amplifier while maintaining a response fast enough to track audio frequencies. The Class H amplifier's efficiency exceeded that of the Class AB amplifier by 5-7% from 5-30 mW of output power without affecting total harmonic distortion (THD) at the design specifications. The Class H amplifier met all design specifications, performed comparably to the Class AB amplifier across 1 kHz-20 kHz and 0.01-30 mW, and delivered 30 mW into 16 Ohms without any increase in THD. This design shows that Class H amplifiers merit further research into their potential for increasing the efficiency of audio amplifiers, and that even simple designs can yield significant efficiency gains without compromising linearity.
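The efficiency advantage of supply tracking can be illustrated with the textbook ideal Class B efficiency formula for a sinusoidal output, eta = (pi/4)(Vpk/Vdd). The supply and signal levels below are illustrative, not taken from the thesis; a real Class AB stage is somewhat less efficient due to quiescent current, and a Class H tracker gives back some of the gain in converter losses.

```python
import math

def ideal_class_b_efficiency(v_peak, v_supply):
    """Ideal Class B efficiency for a sine output: eta = (pi/4) * (Vpk / Vdd)."""
    return (math.pi / 4) * (v_peak / v_supply)

# Fixed supply (Class AB-like operation, ignoring quiescent losses): small
# signals waste most of the supply headroom.
fixed = ideal_class_b_efficiency(v_peak=0.5, v_supply=3.3)

# Tracking supply (Class H idea): the supply follows the signal envelope,
# keeping only a small headroom margin above the peak.
tracked = ideal_class_b_efficiency(v_peak=0.5, v_supply=0.5 + 0.2)

print(f"fixed supply:   {fixed:.1%}")
print(f"tracked supply: {tracked:.1%}")
```

The gap between the two numbers is what the supply modulation is buying at low output power, where portable audio amplifiers spend most of their time.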
ContributorsPeterson, Cory (Author) / Bakkaloglu, Bertan (Thesis advisor) / Barnaby, Hugh (Committee member) / Kiaei, Sayfe (Committee member) / Arizona State University (Publisher)
Created2013
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information depends on age. Forty adults participated in a study that measured intelligibility (percent words correct) of dysarthric speech in auditory-only versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
ContributorsFall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2014
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners, vision plays a role, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input; the question is how exactly vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions, and recorded their perception of the input. Data were subsequently analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for both bilateral and bimodal cochlear implant participants: each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and made greater use of the syllabic stress cues involved in lexical segmentation. These results suggest that vision benefits bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, however, provide the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
ContributorsLudwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created2015
Description
Language and music are fundamentally entwined within human culture. The two domains share similar properties, including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in the two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence supporting this possibility exists for speech production: musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is one of the first studies to address the effects of music on speech perception. Two groups of participants, an older adult group (n = 24; M = 71.63 yrs) and a younger adult group (n = 50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally, and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. Musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger and older adult groups; this effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. Because the speech perception task relied heavily on working memory, and working memory is known to decline with age, participants completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups. Working memory ability correlated with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results suggest new avenues for facilitating speech perception in stroke patients and shed light on the underlying mechanisms of Melodic Intonation Therapy for speech production.
ContributorsLaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2015
Description
The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affect the theoretical mechanisms that underlie nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, 215 Spanish-English bilingual children were each administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., the linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R2 = .79). The findings are discussed in terms of validity evidence supporting the phonological loop component of Baddeley's (2000) model as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for future generation of nonword repetition items.
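The core idea of the LLTM is that each item's difficulty decomposes into a weighted sum of its feature (radical) contributions. A minimal least-squares sketch of that decomposition is shown below; the design matrix, feature columns, and difficulty values are invented for illustration, and a real LLTM is fit to response data by maximum likelihood rather than regressed on pre-estimated difficulties.

```python
import numpy as np

# Hypothetical Q-matrix: rows = nonword items, columns = binary item features
# (the values here are invented, not the thesis's radicals).
Q = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
], dtype=float)

# Hypothetical Rasch item difficulties from a prior calibration step.
b = np.array([0.8, 1.1, 0.9, 1.7, 0.4, 0.5])

# LLTM-style decomposition: b_i ~ intercept + sum_k q_ik * eta_k.
X = np.column_stack([np.ones(len(b)), Q])    # intercept + feature columns
eta, *_ = np.linalg.lstsq(X, b, rcond=None)  # feature weights
b_hat = X @ eta                              # model-implied difficulties

r = np.corrcoef(b, b_hat)[0, 1]              # how well features explain difficulty
print(f"feature weights: {np.round(eta, 2)}")
print(f"observed vs. predicted difficulty correlation: {r:.2f}")
```

The correlation printed at the end is the analogue of the r = .89 reported in the abstract: high values indicate that the chosen features capture most of what makes items hard.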
ContributorsMorgan, Gareth Philip (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2011
Description
Emotion recognition through facial expression plays a critical role in communication. A review of studies investigating emotion recognition in individuals with traumatic brain injury (TBI) indicates significantly poorer performance compared to controls. The purpose of the study was to determine the effects of different media presentations on emotion recognition in individuals with TBI, and whether results differ depending on the severity of TBI. Adults with and without TBI participated in the study and were assessed using The Awareness of Social Inference Test: Emotion Evaluation Test (TASIT: EET) and the Facial Expressions of Emotion: Stimuli and Tests (FEEST) Ekman 60 Faces Test (E-60-FT). Results indicated that individuals with TBI perform significantly more poorly on emotion recognition tasks than age- and education-matched controls. Additionally, emotion recognition abilities differed greatly between the mild and severe TBI groups, and TBI participants performed better with static than with dynamic presentation.
ContributorsBrown, Cassie Anne (Author) / Wright, Heather H (Thesis advisor) / Stats-Caldwell, Denise (Committee member) / Ingram, Kelly (Committee member) / Arizona State University (Publisher)
Created2011
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across various etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective measure that improves the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the results were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The following hypothesis was identified: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications.
The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., February 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly referring to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measurement as representative of listener perception, and represent the first phase in supporting a perceptually relevant taxonomy of dysarthria.
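The MDS step takes listener-derived dissimilarities between speakers and embeds the speakers in a low-dimensional space whose axes can then be correlated with acoustic metrics. Below is a minimal classical (Torgerson) MDS sketch with an invented four-speaker dissimilarity matrix; the study itself used real free-classification data and may have used a different MDS variant.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed points from a dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    w, v = np.linalg.eigh(b)              # eigenvalues ascending
    idx = np.argsort(w)[::-1][:k]         # take the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Hypothetical listener-derived dissimilarities among four speakers:
# speakers 0 and 1 sound alike, as do 2 and 3.
d = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])
coords = classical_mds(d, k=2)
print(coords.shape)  # (4, 2)
```

In the embedding, similar-sounding speakers land close together, and each recovered axis can then be correlated against candidate acoustic metrics (rate/rhythm, intelligibility, spectral measures) as described above.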
ContributorsNorton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created2012
Description
Class D amplifiers are widely used in portable systems such as mobile phones to achieve high efficiency. The demands of portable electronics for low power consumption, to extend battery life and reduce heat dissipation, mandate efficient, high-performance audio amplifiers. The high efficiency of Class D amplifiers (CDAs) makes them particularly attractive for portable applications, and the digital Class D amplifier is an interesting solution for increasing the efficiency of embedded systems. However, this solution falls short in terms of PWM stage linearity and power supply rejection. Efficient control is needed to correct these error sources in order to achieve high-fidelity sound quality across the whole audio frequency range. A fundamental analysis of various error sources due to nonidealities in the power stage is presented here, with a key focus on power supply perturbations driving the power stage of a Class D audio amplifier. Two types of closed-loop digital Class D architecture for PSRR improvement have been proposed and modeled, using double-sided uniform-sampling modulation. One architecture uses feedback around the power stage; the second uses feedback into the digital domain. Simulation and experimental results confirm that the closed-loop PSRR and PS-IMD improve by around 30-40 dB and 25 dB, respectively.
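PSRR quantifies how much a supply perturbation is attenuated before reaching the output, expressed in dB. The sketch below shows the basic computation; the ripple values are illustrative (not measurements from the thesis), and a quoted 30-40 dB closed-loop improvement would appear as the difference between the open- and closed-loop figures.

```python
import math

def psrr_db(supply_ripple_v, output_ripple_v):
    """PSRR in dB: ratio of a supply perturbation to the ripple it causes at the output."""
    return 20 * math.log10(supply_ripple_v / output_ripple_v)

# Open loop (hypothetical): 100 mV of supply ripple leaks 1 mV to the output.
open_loop = psrr_db(0.1, 1e-3)     # 40 dB
# Closed loop (hypothetical): feedback suppresses the same ripple to 10 uV.
closed_loop = psrr_db(0.1, 10e-6)  # 80 dB
print(closed_loop - open_loop)     # 40 dB improvement from feedback
```

The same ratio computed with an intermodulation product instead of the fundamental ripple gives the PS-IMD figure mentioned above.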
ContributorsChakraborty, Bijeta (Author) / Bakkaloglu, Bertan (Thesis advisor) / Garrity, Douglas (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created2012