Matching Items (13)
Description
The recent spotlight on concussion has illuminated deficits in the current standard of care with regard to addressing acute and persistent cognitive signs and symptoms of mild brain injury. This stems, in part, from the diffuse nature of the injury, which tends not to produce focal cognitive or behavioral deficits that are easily identified or tracked. Indeed, it has been shown that patients with enduring symptoms have difficulty describing their problems; therefore, there is an urgent need for a sensitive measure of brain activity that corresponds with higher-order cognitive processing. The development of a neurophysiological metric that maps to clinical resolution would inform decisions about diagnosis and prognosis, including the need for clinical intervention to address cognitive deficits. The literature suggests the need for assessment of concussion under cognitively demanding tasks. Here, a joint behavioral and high-density electroencephalography (EEG) paradigm was employed. This allows for the examination of cortical activity patterns during speech comprehension at various levels of degradation in a sentence verification task, imposing the need for higher-order cognitive processes. Eight participants with concussion listened to true-false sentences produced with either moderately or highly intelligible noise-vocoders. Behavioral data were simultaneously collected. The analysis of cortical activation patterns included 1) the examination of event-related potentials, including latency and source localization, and 2) measures of frequency spectra and associated power. Individual performance patterns were assessed during acute injury and at a return visit several months following injury. Results demonstrate that a combination of task-related electrophysiological measures corresponds to changes in task performance over the course of recovery. Further, a discriminant function analysis suggests EEG measures are more sensitive than behavioral measures in distinguishing individuals with concussion from healthy controls at both injury and recovery, indicating that neurophysiological measures collected during a cognitively demanding task are robust markers of both injury and persisting pathophysiology.
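The stimulus manipulation above relies on noise vocoding. As a point of reference, the sketch below shows a minimal noise vocoder in Python, assuming NumPy/SciPy; the channel count, filter order, and band edges are illustrative assumptions, not the parameters used in the study.

```python
# Minimal noise-vocoder sketch: band-pass the speech into N channels, extract
# each band's amplitude envelope, and use it to modulate band-limited noise.
# Assumes fs comfortably above 2 * f_hi.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Return a noise-vocoded version of `speech`; more channels -> more intelligible."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        env = np.abs(hilbert(band))                    # amplitude envelope of the band
        carrier = sosfilt(sos, noise)                  # band-limited noise carrier
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)         # normalize to avoid clipping
```

Intelligibility is typically varied by changing the number of channels: fewer channels yield more degraded, harder-to-comprehend speech.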
Contributors: Utianski, Rene (Author) / Liss, Julie M (Thesis advisor) / Berisha, Visar (Committee member) / Caviness, John N (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This study evaluated whether the Story Champs intervention is effective in bilingual kindergarten children who speak Spanish as their native language. Previous research by Spencer and Slocum (2010) found that monolingual, English-speaking participants made significant gains in narrative retelling after intervention. This study implemented the intervention in two languages and examined its effects after ten sessions. Results indicate that some children benefited from the intervention, with variability across languages.
Contributors: Fernandez, Olga E (Author) / Restrepo, Laida (Thesis director) / Mesa, Carol (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2014-05
Description
The objective of this study was to analyze the auditory feedback system and the pitch-shift reflex in relation to vibrato. Eleven subjects (8 female, 3 male) without speech, hearing, or neurological disorders participated. Compensation magnitude, adaptation magnitude, relative response phase, and passive and active perception were recorded while subjects received auditory feedback perturbed by phasic amplitude and F0 modulation, or "vibrato." "Tremolo," phasic amplitude modulation alone, was used as a control. A significant correlation was found between the ability to perceive vibrato and tremolo in active trials and the ability to perceive them in passive trials (p = 0.01). Passive perception thresholds were lower (more sensitive) than active thresholds (p < 0.01). Adaptation trials with vibrato showed significant modulation magnitude (p = 0.031), while those with tremolo did not, and the two conditions differed significantly (p < 0.01). There was significant phase change for both tremolo and vibrato, but the vibrato phase change was greater, nearly 180° (p < 0.01). In the compensation trials, the modulation change from control to vibrato trials was significantly greater than the change from control to tremolo (p = 0.01). Vibrato and tremolo also had significantly different average phase change (p < 0.01). It can be concluded that the auditory feedback system attempts to cancel dynamic pitch perturbations by responding out of phase with them, and that similar systems underlie both adaptation and compensation to vibrato. Despite the auditory feedback system's online monitoring, passive perception was still better than active perception, possibly because it required only one task (perceiving) rather than two (perceiving and producing). The pitch-shift reflex tracks the sensitivity of the auditory feedback system, as shown by the stronger response to vibrato than to tremolo.
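To make the two perturbation types concrete, the sketch below synthesizes a "vibrato" tone (amplitude plus F0 modulation) versus a "tremolo" control (amplitude modulation only); the modulation rate and depths are illustrative assumptions, not the study's parameters.

```python
# Illustrative synthesis of the two feedback perturbations: tremolo modulates
# amplitude only, vibrato modulates both amplitude and F0 (assumed rates/depths).
import numpy as np

def perturbation(f0=220.0, fs=44100, dur=1.0, mod_rate=5.0,
                 amp_depth=0.3, f0_depth_semitones=0.5, vibrato=True):
    t = np.arange(int(fs * dur)) / fs
    amp = 1.0 + amp_depth * np.sin(2 * np.pi * mod_rate * t)   # phasic amplitude
    if vibrato:   # phasic F0 modulation, expressed in semitones around f0
        f_inst = f0 * 2 ** (f0_depth_semitones * np.sin(2 * np.pi * mod_rate * t) / 12)
    else:         # tremolo control: constant F0
        f_inst = np.full_like(t, f0)
    phase = 2 * np.pi * np.cumsum(f_inst) / fs                 # integrate instantaneous F0
    return amp * np.sin(phase)
```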
Contributors: Higgins, Alexis Brittany (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
During speech, the brain constantly processes and monitors speech output through the auditory feedback loop to ensure correct and accurate speech. If the speech signal is experimentally altered/perturbed while speaking, the brain compensates for the perturbations by changing speech output in the opposite direction of the perturbations. In this study, we designed an experiment that examined the compensatory responses to unexpected vowel perturbations during speech. We applied two types of perturbations. In one condition, the vowel /ɛ/ was perturbed toward the vowel /æ/ by simultaneously shifting both the first formant (F1) and the second formant (F2) at three levels (0.5 = small, 1 = medium, 1.5 = large shifts). In the other condition, the vowel /ɛ/ was perturbed by shifting F1 alone at the same three levels. Our results showed a significant perturbation-type effect, with participants compensating more in response to perturbations that shifted /ɛ/ toward /æ/. In addition, we found a significant level effect, with compensatory responses to level 0.5 being significantly smaller than those to levels 1 and 1.5, regardless of the perturbation pathway; responses to levels 1 and 1.5 did not differ. Overall, our results highlight the importance of the auditory feedback loop during speech production and show that the brain is more sensitive to auditory errors that change a vowel category (e.g., /ɛ/ to /æ/).
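The logic of the two perturbation pathways can be sketched as below. The /ɛ/ and /æ/ formant values are rough textbook figures, and the assumption that the F1-only pathway uses the same F1 increment is ours, not the study's calibration.

```python
# Schematic of the two perturbation pathways, scaled by level
# (0.5 = small, 1 = medium, 1.5 = large). All formant values are assumed.
EPSILON = {"F1": 580.0, "F2": 1800.0}   # rough formants of /ɛ/ in Hz (assumption)
AE = {"F1": 660.0, "F2": 1700.0}        # rough formants of /æ/ in Hz (assumption)

def perturbed_formants(level, toward_ae=True):
    """Return the fed-back (F1, F2) for a given shift level."""
    if toward_ae:  # move both formants along the /ɛ/ -> /æ/ direction
        f1 = EPSILON["F1"] + level * (AE["F1"] - EPSILON["F1"])
        f2 = EPSILON["F2"] + level * (AE["F2"] - EPSILON["F2"])
    else:          # F1-only pathway: same F1 increment, F2 left intact
        f1 = EPSILON["F1"] + level * (AE["F1"] - EPSILON["F1"])
        f2 = EPSILON["F2"]
    return f1, f2

for level in (0.5, 1.0, 1.5):
    print(level, perturbed_formants(level, True), perturbed_formants(level, False))
```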
Contributors: Fitzgerald, Lacee (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This is thought to occur via an internal forward model that processes an efference copy of the motor command and creates a prediction used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception across mixed speaking and silent-reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent-reading condition. Attenuation was strongest during speech production, and detection also declined just prior to speech, during the planning period. This demonstrates a significant decrease in the responsiveness of the somatosensory system during speech production, and even milliseconds before speech is produced, which has implications for disorders such as stuttering and schizophrenia that involve pronounced deficits in the somatosensory system.
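As an illustration of the analysis implied above, the sketch below tabulates detection rates by condition and time point; the trial layout and example data are hypothetical, not the study's records.

```python
# Tabulate near-threshold detection rates per (condition, time point).
# Lower rates indicate attenuated somatosensory responsiveness.
def detection_rates(trials):
    """trials: iterable of (condition, timepoint, detected) tuples -> rate dict."""
    counts = {}
    for cond, tp, hit in trials:
        n_hit, n_all = counts.get((cond, tp), (0, 0))
        counts[(cond, tp)] = (n_hit + int(hit), n_all + 1)
    return {key: n_hit / n_all for key, (n_hit, n_all) in counts.items()}

demo = [("speak", "planning", False), ("speak", "production", False),
        ("read", "planning", True), ("read", "production", True)]
print(detection_rates(demo))   # hypothetical data, for illustration only
```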
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in the stroke survivor population, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies using the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied and consequently limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five target directions. The probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement are indeed possible.
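The reported kinematic measures can be computed from a sampled reach trajectory roughly as follows; the array layout and sampling-rate parameter are assumptions for illustration, not the study's processing pipeline.

```python
# Compute total movement time, average velocity, and linear deviation
# (max perpendicular distance from the straight start-to-end path) for a 2-D reach.
import numpy as np

def reach_kinematics(xy, fs):
    """xy: (n_samples, 2) array of positions; fs: sampling rate in Hz."""
    total_time = (len(xy) - 1) / fs
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs   # sample-to-sample speed
    start, end = xy[0], xy[-1]
    path = end - start
    # 2-D cross product gives distance from the straight start-end line
    perp = np.abs(path[0] * (xy[:, 1] - start[1]) - path[1] * (xy[:, 0] - start[0]))
    linear_deviation = perp.max() / np.linalg.norm(path)
    return total_time, speed.mean(), linear_deviation
```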
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Glottal fry is a vocal register characterized by low frequency and increased signal perturbation, and is perceptually identified by its popping, creaky quality. Recently, the use of the glottal fry vocal register has received growing awareness and attention in popular culture and media in the United States. The creaky quality originally associated with vocal pathologies is indeed becoming "trendy," particularly among young women across the United States. But while existing studies have defined, quantified, and attempted to explain the use of glottal fry in conversational speech, there is currently no explanation for its increasing prevalence among American women. This thesis proposes that conversational entrainment, a communication phenomenon describing the propensity to modify one's behavior to align more closely with one's communication partner, may provide a theoretical framework to explain the growing use of glottal fry among college-aged women in the United States. Female participants (n = 30) between the ages of 18 and 29 years (M = 20.6, SD = 2.95) held conversations with two partners, one of whom used quantifiably more glottal fry than the other. The study used perceptual and quantifiable acoustic information to address the following key question: does the amount of habitual glottal fry in a conversational partner influence one's use of glottal fry in one's own speech? Results yielded two findings: (1) according to perceptual annotations, participants used a greater amount of glottal fry when speaking with the Fry conversation partner than with the Non-Fry partner, and (2) statistically significant differences were found in the acoustics of the participants' vocal qualities based on conversation partner. While the current study demonstrates that young women are indeed speaking in glottal fry in everyday conversations, and that its use can be attributed in part to conversational entrainment, we still lack a clear explanation of the deeper motivations for women to speak in a lower vocal register. The current study opens avenues for continued analysis of the sociolinguistic functions of the glottal fry register.
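For the acoustic side of such an analysis, one simple approach is to flag fry-like frames by estimating frame-wise F0 via autocorrelation and thresholding it. The frame sizes, thresholds, and crude voicing check below are assumptions for illustration; perceptual annotation, as used in the study, remains the reference standard.

```python
# Flag frames whose estimated F0 falls in the glottal-fry range (very low F0).
import numpy as np

def frame_f0(seg, fs, fmin=25.0, fmax=500.0):
    """Crude autocorrelation F0 estimate; returns None for aperiodic frames."""
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    strength = ac[lag] / (ac[0] + 1e-12)        # periodicity strength at best lag
    return fs / lag if strength > 0.3 else None

def fry_frames(wav, fs, frame=0.04, hop=0.01, fry_max_hz=70.0):
    """Flag 40 ms frames whose estimated F0 is below fry_max_hz."""
    n, h = int(frame * fs), int(hop * fs)
    flags = []
    for start in range(0, len(wav) - n, h):
        f0 = frame_f0(wav[start:start + n] * np.hanning(n), fs)
        flags.append(f0 is not None and f0 < fry_max_hz)
    return np.array(flags)
```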
Contributors: Delfino, Christine R (Author) / Liss, Julie M (Thesis advisor) / Borrie, Stephanie A (Thesis advisor) / Azuma, Tamiko (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, and neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. Our study explores these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning to examine differences in brain activations in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions and that music stimuli would activate the right superior temporal regions more than speech (findings not seen in previous studies of control subjects), as a result of functional changes following the left-hemispheric stroke, particularly the loss of function in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in right temporal cortex for participants with musical training than for those without. Our results indicate that speech stimuli compared to rest activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli compared to rest produced no bilateral activation, activating only the right middle temporal gyrus. When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right-hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
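The block-design contrast logic described above can be sketched in a few lines. This is a pure-NumPy toy with assumed block timings, no hemodynamic-response convolution, and placeholder data; it is not the analysis pipeline used in the study.

```python
# Fit a per-voxel GLM with boxcar regressors for speech and music blocks,
# then form the speech-minus-music contrast from the fitted betas.
import numpy as np

def boxcar(onsets, duration, n_scans, tr):
    """Unit regressor for one condition's stimulation blocks."""
    reg = np.zeros(n_scans)
    for on in onsets:
        reg[int(on / tr):int((on + duration) / tr)] = 1.0
    return reg

n_scans, tr = 120, 2.0
X = np.column_stack([
    boxcar([0, 80, 160], 30, n_scans, tr),     # speech blocks (assumed timing)
    boxcar([40, 120, 200], 30, n_scans, tr),   # music blocks (assumed timing)
    np.ones(n_scans),                          # baseline / intercept
])
y = np.random.randn(n_scans)                   # one voxel's time series (placeholder)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
speech_vs_music = beta[0] - beta[1]            # contrast: speech minus music
```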

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
The purpose of this longitudinal study was to predict /r/ acquisition using acoustic signal processing. Nineteen children aged 5 to 7 years with inaccurate /r/ were followed until they turned 8 or acquired /r/, whichever came first. Acoustic and descriptive data from 14 participants were analyzed; the remaining five children continued to be followed. The study compared spectral energy in the baseline acoustic signals of participants who eventually acquired /r/ with that of participants who did not. Results indicated significant differences between groups in the baseline signals for vocalic and postvocalic /r/, suggesting that the acquisition of certain allophones may be predictable. Articulatory changes made during the progression of acquisition were also analyzed spectrally. A retrospective analysis described the pattern in which /r/ allophones were acquired, proposing that vocalic /r/ and the postvocalic variant of consonantal /r/ may be acquired prior to prevocalic /r/, and that /r/ followed by low vowels may be acquired before /r/ followed by high vowels, although individual variation exists.
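One simple version of the spectral-energy comparison is shown below: normalized energy in adjacent frequency bands of an /r/ token. The band edges are illustrative assumptions; accurate /r/ is classically associated with a lowered third formant, which shifts energy downward in the spectrum.

```python
# Normalized spectral energy of a segment in adjacent frequency bands,
# comparable across tokens and recording levels.
import numpy as np

def band_energies(x, fs, edges=(0, 1000, 2000, 3000, 4000, 5000)):
    """x: 1-D audio segment; fs: sampling rate in Hz; edges: band edges in Hz."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    e = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                  for lo, hi in zip(edges[:-1], edges[1:])])
    return e / e.sum()
```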

Contributors: Conger, Sarah Grace (Author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / Bruce, Laurel (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This dissertation explores applications of machine learning methods in service of the design of screening tests, which are ubiquitous in applications from social work to criminology to healthcare. In the first part, a novel Bayesian decision theory framework is presented for designing tree-based adaptive tests. In an application to youth delinquency in Honduras, the method produces a 15-item instrument that is almost as accurate as a full-length test of more than 150 items. The framework includes specific considerations for the context in which the test will be administered, and provides uncertainty quantification around the trade-offs of shortening lengthy tests. In the second part, classification complexity is explored via theoretical and empirical results from statistical learning theory, information theory, and empirical data complexity measures. A simulation study that explicitly controls two key aspects of classification complexity is performed to relate the theoretical and empirical approaches. Throughout, a unified language and notation formalizing classification complexity are developed; this same notation is used in subsequent chapters to discuss classification complexity in the context of a speech-based screening test. In the final part, the relative merits of task and feature engineering when designing a speech-based cognitive screening test are explored. Through an extensive classification analysis of a clinical speech dataset from patients with normal cognition and Alzheimer's disease, the speech elicitation task is shown to have a large impact on test accuracy; carefully performed task and feature engineering are required for best results. A new framework for objectively quantifying speech elicitation tasks is introduced, and two methods are proposed for automatically extracting insights into the aspects of the speech elicitation task that drive classification performance. The dissertation closes with recommendations for how to evaluate the obtained insights and use them to guide future design of speech-based screening tests.
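The tree-as-adaptive-test idea can be illustrated with a toy example: a depth-limited decision tree administers only the items on the path a respondent actually follows, so a long item bank collapses to a handful of questions per person. The sketch below uses scikit-learn and synthetic data; it is not the dissertation's Bayesian framework.

```python
# Toy adaptive screener: a depth-4 tree over a 150-item bank asks each
# respondent at most 4 items, yet can approximate the full-bank label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 150))               # 500 respondents x 150 binary items
risk = X[:, :10].sum(axis=1) + rng.normal(0, 1, 500)  # only the first 10 items matter
y = (risk > 5).astype(int)                            # synthetic "screen positive" label

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("items the short test can ask:", np.flatnonzero(tree.feature_importances_ > 0))
print("training accuracy:", round(tree.score(X, y), 3))
```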
Contributors: Krantsevich, Chelsea (Author) / Hahn, P. Richard (Thesis advisor) / Berisha, Visar (Committee member) / Lopes, Hedibert (Committee member) / Renaut, Rosemary (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023