Matching Items (34)
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task while another group did the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also completed to investigate source vs. item memory and revealed a mirror effect in episodic memory experiments (source memory), but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues, but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency, acoustic hearing in both the ear contralateral to the CI and in the implanted ear. These "hearing preservation" listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. The mean RMS localization error was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained using filtered noise bands for the NH listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated "restaurant" environment and in two versions of a "cocktail party" environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) that persisted when the target and noise were separated, indicating the presence of some binaural processing or "unmasking" of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
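For readers unfamiliar with the RMS error metric reported here, the following minimal Python sketch shows how a root-mean-square localization error is computed from target and response azimuths. The loudspeaker angles and responses are invented placeholders, not data from the study's 13-loudspeaker array.

```python
import numpy as np

# Hypothetical target loudspeaker azimuths and listener responses (degrees);
# placeholders, not data or geometry from the dissertation.
target_az = np.array([-60, -40, -20, 0, 20, 40, 60])
response_az = np.array([-55, -45, -10, 5, 30, 35, 70])

def rms_error(target, response):
    """Root-mean-square localization error in degrees."""
    diff = np.asarray(response, float) - np.asarray(target, float)
    return np.sqrt(np.mean(diff ** 2))

print(f"RMS error: {rms_error(target_az, response_az):.1f} degrees")
```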
Contributors: Loiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across various etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective measure for improving the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the results were analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The following hypothesis was identified: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo System) classifications. The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly making reference to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measurement as representative of listener perception, and represent the first phase in supporting the use of a perceptually relevant taxonomy of dysarthria.
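As a rough illustration of the analysis pipeline described above (free classification, then MDS, clustering, and discriminant analysis), here is a hedged Python sketch using scikit-learn. The co-grouping matrix, cluster count, and acoustic features are all synthetic stand-ins; the study's actual metrics, dimensionality choices, and DFA variant are not reproduced.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Dissimilarity between speakers: 1 - proportion of listeners who grouped
# the pair together in free classification (toy random values here).
rng = np.random.default_rng(0)
n_speakers = 20
co_grouping = rng.random((n_speakers, n_speakers))
co_grouping = (co_grouping + co_grouping.T) / 2      # symmetrize
np.fill_diagonal(co_grouping, 1.0)                   # self-similarity = 1
dissimilarity = 1.0 - co_grouping

# Three-dimensional MDS solution, as in the study.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Listener-derived clusters from the MDS space (cluster count assumed).
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(coords)

# Discriminant analysis predicting cluster membership from acoustic
# correlates of the dimensions (toy features standing in for rate/rhythm,
# intelligibility, and spectral metrics).
acoustic_features = rng.normal(size=(n_speakers, 3))
dfa = LinearDiscriminantAnalysis().fit(acoustic_features, clusters)
print("Classification accuracy:", dfa.score(acoustic_features, clusters))
```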
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that the listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that was uttered in the competing sentence.
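To clarify why a pulse-train carrier preserves F0 where noise or tonal carriers do not, here is a minimal vocoder sketch in Python. The band edges, filter order, and fixed F0 are illustrative assumptions; a real implementation would track the talker's time-varying F0 rather than use a constant value.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pulse_train_vocoder(x, fs, f0, edges=(300, 700, 1500, 3000, 6000)):
    """Minimal pulse-train vocoder sketch: band envelopes re-modulate a
    pulse carrier at an assumed constant F0, so voice pitch is conveyed,
    unlike a noise or tonal carrier. Band edges are illustrative."""
    t = np.arange(len(x)) / fs
    # Pulse carrier: one brief click per F0 period.
    carrier = (np.diff(np.floor(f0 * t), prepend=0) > 0).astype(float)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)
        envelope = np.abs(hilbert(band))           # channel envelope
        out += filtfilt(b, a, carrier * envelope)  # re-filter to the band
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
x = np.random.randn(fs)   # stand-in for one second of a speech waveform
y = pulse_train_vocoder(x, fs, f0=120.0)
```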
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05
Description
In this study, the Bark transform and Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy for these normalized data was then compared to human perceptual classification accuracy for the actual vowels. These results were then analyzed to determine whether the normalization techniques correlated with the human data.
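The two normalization methods mentioned are standard and easy to state concretely. Below is a short Python sketch of the Bark transform (using Traunmüller's 1990 formula) and Lobanov z-score normalization; the formant values are toy numbers, not the study's data.

```python
import numpy as np

def bark(f_hz):
    """Bark transform (Traunmüller, 1990): Hz to the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Lobanov normalization: z-score each formant within a speaker,
    reducing speaker-specific vocal tract differences.
    `formants` is an (n_tokens, n_formants) array for one speaker."""
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

# Toy F1/F2 values (Hz) for one speaker's vowel tokens.
f1_f2 = np.array([[300, 2300], [700, 1200], [500, 1700], [350, 2100]])
print(bark(f1_f2))
print(lobanov(f1_f2))
```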
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
Previous research has shown that auditory modulation may be affected by pure tone stimuli played prior to the onset of speech production. In this experiment, we examined the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data were analyzed both manually and with a MATLAB script that combined data sets and calculated auditory modulation (suppression). Results for the P200 component showed that modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for the N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this result is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
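As a sketch of the modulation (suppression) computation described above, the following Python snippet compares mean ERP amplitude in N100- and P200-typical latency windows between speaking and silent reading conditions. The epochs are simulated noise and the window bounds are conventional values, not the study's exact parameters.

```python
import numpy as np

fs = 1000                                    # Hz, assumed sampling rate
times = np.arange(-100, 400) / 1000.0        # s, relative to stimulus onset
rng = np.random.default_rng(1)
speaking = rng.normal(size=(40, times.size)) # trials x samples (simulated)
reading = rng.normal(size=(40, times.size))

def mean_amplitude(epochs, t0, t1):
    """Average ERP amplitude over a latency window (seconds)."""
    mask = (times >= t0) & (times <= t1)
    return epochs.mean(axis=0)[mask].mean()

# Modulation (suppression) = reading-condition amplitude minus
# speaking-condition amplitude, per component window.
for name, (t0, t1) in {"N100": (0.08, 0.12), "P200": (0.16, 0.24)}.items():
    modulation = mean_amplitude(reading, t0, t1) - mean_amplitude(speaking, t0, t1)
    print(f"{name} modulation: {modulation:.3f} (arbitrary units)")
```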
Contributors: Taylor, Megan Kathleen (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Transcranial Current Stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One type of this stimulation, transcranial alternating current stimulation (tACS), can entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic data to investigate the link between different frequencies of tACS and their effects on brain oscillations. We recruited 12 healthy participants. Each participant completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and then recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after a stimulation condition indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained brain activity in a subgroup of participants.
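The before/after power comparison can be sketched concretely. The snippet below estimates band power with Welch's method in a band around one stimulation frequency (6 Hz); the sampling rate, band edges, and simulated signals are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                    # Hz, assumed EEG sampling rate
rng = np.random.default_rng(2)
pre = rng.normal(size=10 * fs)              # 10 s of pre-stimulation EEG
post = rng.normal(size=10 * fs)             # 10 s of post-stimulation EEG

def band_power(x, fs, lo, hi):
    """Integrate the Welch PSD estimate over [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum() * (f[1] - f[0])

# Change in power around 6 Hz (5-7 Hz window assumed) from pre to post.
change = band_power(post, fs, 5, 7) - band_power(pre, fs, 5, 7)
print(f"6-Hz band power change: {change:.4f}")
```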
Contributors: Chernicky, Jacob Garrett (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been suggested to occur through the internal forward model, which processes an efference copy of the motor command and creates a prediction used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by applying light electrical stimulation below the lower lip and comparing perception across mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present during different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual modulation was strongest during speech production and showed some attenuation just prior to speech, during the planning period. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
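A simple way to picture the analysis is as a hit rate per probe time and condition, referenced to the 85%-at-rest calibration. The sketch below uses invented detection rates and trial counts; it is not the study's data or statistical model.

```python
import numpy as np

rng = np.random.default_rng(3)
rest_rate = 0.85                      # calibration: 85% detected at rest
probe_times = ["early planning", "late planning", "production"]
true_rates = {"speaking": [0.80, 0.70, 0.55],   # invented detection rates
              "reading": [0.85, 0.85, 0.84]}

for cond, rates in true_rates.items():
    # Simulate 60 yes/no judgments per probe time, then report the hit
    # rate and its attenuation relative to the resting calibration.
    hits = [rng.binomial(60, p) / 60 for p in rates]
    print(cond, [f"{t}: {h:.2f} ({h - rest_rate:+.2f})"
                 for t, h in zip(probe_times, hits)])
```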
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Detecting early signs of neurodegeneration is vital for measuring the efficacy of pharmaceuticals and planning treatments for neurological diseases. This is especially true for Amyotrophic Lateral Sclerosis (ALS), where differences in symptom onset can be indicative of the prognosis. Because they can be measured noninvasively, changes in speech production have been proposed as a promising indicator of neurological decline. However, speech changes are typically measured subjectively by a clinician. These perceptual ratings can vary widely between clinicians and within the same clinician on different patient visits, making clinical ratings less sensitive to subtle early indicators. In this paper, we propose an algorithm for the objective measurement of flutter, a quasi-sinusoidal modulation of fundamental frequency (F0) that manifests in the speech of some ALS patients. The algorithm employs long-term average spectral analysis on the residual F0 track of a sustained phonation to detect the presence of flutter and is robust to longitudinal drifts in F0. The algorithm is evaluated on a longitudinal speech dataset of ALS patients at varying stages of disease progression. Benchmarking against two stages of perceptual ratings provided by an expert speech pathologist indicates that the algorithm follows perceptual ratings with moderate accuracy and can objectively detect flutter in instances where the variability of the perceptual rating causes uncertainty.
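In the spirit of the algorithm described (long-term average spectral analysis of the residual F0 track), here is a hedged Python sketch: a polynomial fit removes longitudinal F0 drift, and a spectral peak-to-mean ratio in an assumed 6-12 Hz band flags quasi-sinusoidal modulation. The band, detrending order, and peak statistic are illustrative guesses, not the thesis's exact method.

```python
import numpy as np
from scipy.signal import welch

def flutter_index(f0_track, frame_rate, band=(6.0, 12.0)):
    """Sketch of a flutter measure: remove slow F0 drift, then look for
    a narrow-band modulation peak in the spectrum of the residual F0
    track. Band and polynomial order are assumptions for illustration."""
    t = np.arange(len(f0_track)) / frame_rate
    drift = np.polyval(np.polyfit(t, f0_track, 3), t)  # slow F0 drift
    residual = f0_track - drift
    f, pxx = welch(residual, fs=frame_rate, nperseg=min(len(residual), 256))
    in_band = (f >= band[0]) & (f <= band[1])
    # Strongest in-band component relative to overall mean power:
    # high values suggest quasi-sinusoidal modulation (flutter).
    return pxx[in_band].max() / pxx.mean()

frame_rate = 100.0                    # F0 frames per second, assumed
t = np.arange(0, 3, 1 / frame_rate)
f0 = 120 + 2 * t + 3 * np.sin(2 * np.pi * 9 * t)  # drift plus 9-Hz flutter
print(f"flutter index: {flutter_index(f0, frame_rate):.1f}")
```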
Contributors: Peplinski, Jacob Scott (Author) / Berisha, Visar (Thesis director) / Liss, Julie (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The objective of this study was to analyze the auditory feedback system and the pitch-shift reflex in relation to vibrato. Eleven subjects (8 female, 3 male) without speech, hearing, or neurological disorders participated. Compensation magnitude, adaptation magnitude, relative response phase, and passive and active perception were recorded while subjects received auditory feedback perturbed by phasic amplitude and F0 modulation (“vibrato”). “Tremolo,” or phasic amplitude modulation alone, was used as a control. The ability to perceive vibrato and tremolo in active trials correlated significantly with the ability to perceive them in passive trials (p=0.01). Passive perception thresholds were lower (more sensitive) than active thresholds (p<0.01). Adaptation trials with vibrato showed significant modulation magnitude (p=0.031), while tremolo trials did not, and the two conditions differed significantly (p<0.01). There was significant phase change for both tremolo and vibrato, but the vibrato phase change was greater, nearly 180° (p<0.01). In the compensation trials, the modulation change from control to vibrato trials was significantly greater than the change from control to tremolo (p=0.01). Vibrato and tremolo also had significantly different average phase changes (p<0.01). It can be concluded that the auditory feedback system attempts to cancel dynamic pitch perturbations by responding out of phase. Similar systems must be used to adapt and to compensate to vibrato. Despite the auditory feedback system’s online monitoring, passive perception was still better than active perception, possibly because it required only one task (perceiving) rather than two (perceiving and producing). The pitch-shift reflex tracks the sensitivity of the auditory feedback system, as shown by the increased perception of vibrato over tremolo.
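Compensation magnitude and relative response phase of the kind analyzed here can be estimated by least-squares sinusoid fitting at the modulation frequency. The Python sketch below is a generic illustration with an invented 5-Hz modulation and a synthetic counter-phase response; it is not the study's analysis code.

```python
import numpy as np

def magnitude_phase(signal, fs, mod_freq):
    """Least-squares fit of a sinusoid at mod_freq, returning its
    amplitude and phase. Differencing the phases of the stimulus
    modulation and the produced F0 gives a relative response phase
    (~180 degrees indicates counter-phase compensation)."""
    t = np.arange(len(signal)) / fs
    X = np.column_stack([np.sin(2 * np.pi * mod_freq * t),
                         np.cos(2 * np.pi * mod_freq * t)])
    (a, b), *_ = np.linalg.lstsq(X, signal - signal.mean(), rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

fs, mod_freq = 100.0, 5.0             # assumed rates, not the study's
t = np.arange(0, 2, 1 / fs)
stimulus = np.sin(2 * np.pi * mod_freq * t)
response = 0.4 * np.sin(2 * np.pi * mod_freq * t + np.pi)  # out of phase
_, ph_s = magnitude_phase(stimulus, fs, mod_freq)
mag_r, ph_r = magnitude_phase(response, fs, mod_freq)
print(f"compensation magnitude: {mag_r:.2f}, "
      f"relative phase: {np.degrees(ph_r - ph_s):.0f} degrees")
```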
Contributors: Higgins, Alexis Brittany (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05