Matching Items (6)
Description
The research question this thesis aims to answer is whether depressive symptoms of adolescents involved in romantic relationships are related to their rejection sensitivity. It was hypothesized that adolescents with greater rejection sensitivity, indicated by a larger P3b response, would have more depressive symptoms. To test this hypothesis, adolescent couples attended a lab session in which they played a Social Rejection Task while EEG data were collected. Rejection sensitivity was measured as the activity of the P3b ERP at the Pz electrode; the P3b was chosen because it has been used to index rejection sensitivity in previous ostracism studies. Depressive symptoms were measured using the 20-item Center for Epidemiological Studies Depression Scale (CES-D; Radloff, 1977). A multiple regression analysis did not support the hypothesis: the results showed no relationship between rejection sensitivity and depressive symptoms. This finding is contrary to similar literature, which typically shows that the higher the rejection sensitivity, the greater the depressive symptoms.
ContributorsBiera, Alex (Author) / Dishion, Tom (Thesis director) / Ha, Thao (Committee member) / Shore, Danielle (Committee member) / Barrett, The Honors College (Contributor)
Created2015-05
Description

The cocktail party effect describes the brain's natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of auditory attention to achieve the same goal with noninvasive electroencephalographic (EEG) methods. Five participants completed an auditory attention task in which they listened to a series of four syllables followed by a fifth (probe) syllable and indicated whether the probe was one of the four syllables played immediately before it. Trials were divided into conditions in which the syllables were played in silence (Signal) or in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better during the Signal condition, consistent with the challenges demonstrated by the cocktail party effect. The EEG analysis showed that inter-trial coherence in the alpha band (9-13 Hz) could potentially indicate characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal the neural signatures of auditory attention, which may allow for the design of a noninvasive, EEG-based hearing aid.
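Inter-trial coherence (ITC), the measure highlighted in this abstract, quantifies how consistently oscillatory phase aligns across trials: it is the magnitude of the mean unit-length phase vector, ranging from 0 (random phase) to 1 (perfect phase locking). A minimal, hypothetical sketch (illustrative phase values, not study data):

```python
import cmath

def inter_trial_coherence(phases):
    """ITC at one (frequency, time) point across trials: magnitude of
    the mean unit phase vector, 0 (random) to 1 (phase-locked)."""
    n = len(phases)
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / n
    return abs(mean_vector)

# Hypothetical alpha-band (9-13 Hz) phase angles (radians) across trials.
locked = [0.10, 0.12, 0.09, 0.11, 0.10]   # tightly clustered phases
scattered = [0.0, 1.3, 2.8, 4.1, 5.5]     # phases spread around the circle

print(inter_trial_coherence(locked))     # near 1.0
print(inter_trial_coherence(scattered))  # much smaller
```

In practice the phases would come from a time-frequency decomposition (e.g., wavelet transform) of the EEG at each electrode, frequency, and time point.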

ContributorsLaBine, Alyssa (Author) / Daliri, Ayoub (Thesis director) / Chao, Saraching (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor) / Harrington Bioengineering Program (Contributor)
Created2023-05
Description
Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (i.e., is it a statement or a question?). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users developed speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users' perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have speech intonation production similar to that of NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit 10 pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases in statements than the ND condition, while the ND condition led to significantly greater duration ratios for questions and statements.
These results suggest that bimodal CI users adjust their use of prosodic cues for speech intonation production across hearing conditions, and that access to auditory feedback via CI+HA may improve their voice pitch control to produce more salient statement intonation contours.
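The three intonation measures described here are simple contrasts between the last two syllables of an utterance. A hypothetical sketch of how they might be computed from per-syllable acoustic measurements (the field names and values are illustrative, not the study's data or code):

```python
# Hypothetical measurements for the last two syllables of one utterance.
penult = {"f0_hz": 210.0, "intensity_db": 68.0, "duration_s": 0.18}
final = {"f0_hz": 245.0, "intensity_db": 70.5, "duration_s": 0.27}

# F0 change: a rise suggests a question contour, a fall a statement.
f0_change = final["f0_hz"] - penult["f0_hz"]
# Intensity change between the two syllables, in dB.
intensity_change = final["intensity_db"] - penult["intensity_db"]
# Duration ratio: values above 1 indicate final-syllable lengthening.
duration_ratio = final["duration_s"] / penult["duration_s"]

print(f0_change, intensity_change, round(duration_ratio, 2))
```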
ContributorsAi, Chang (Author) / Luo, Xin (Thesis advisor) / Daliri, Ayoub (Committee member) / Davidson, Lisa (Committee member) / Arizona State University (Publisher)
Created2022
Description
Speech and music are traditionally thought to be primarily supported by different hemispheres. A growing body of evidence suggests that speech and music often rely on shared resources in bilateral brain networks, though the right and left hemispheres exhibit some domain-specific specialization. While there is ample research investigating speech deficits in individuals with right hemisphere lesions and amusia, fewer studies investigate amusia in individuals with left hemisphere lesions and aphasia. Many of the fronto-temporal-parietal regions in the left hemisphere commonly associated with speech processing and production are also implicated in bilateral music processing networks. The current study investigates the relationship between damage to specific regions of interest within these networks and an individual's ability to match the pitch and rhythm of a presented melody. Twenty-seven participants with chronic stroke lesions completed a melody repetition task in which they hummed short novel piano melodies. Participants underwent structural MRI acquisition and were administered an extensive speech and cognitive battery. Pitch and rhythm scores were calculated by correlating participant responses with the target piano notes. Production errors were calculated by counting trials with responses that do not match the target melody's note count. Overall, performance varied widely, and pitch and rhythm scores were significantly correlated. Working memory scores were significantly correlated with rhythm scores and production errors, but not pitch scores. Broca's area lesions were not associated with significant differences in any of the melody repetition measures, while left Heschl's gyrus lesions were associated with worse performance on pitch, rhythm, and production errors. Lower rhythm scores were also associated with lesions encompassing both the left anterior and posterior superior temporal gyrus and with damage to the left planum temporale.
The other regions of interest were not consistently associated with poorer pitch scores or production errors. Although the present study has limitations, it suggests that lesions to left hemisphere regions thought to affect only speech also affect musical pitch and rhythm processing. Therefore, amusia should not be characterized solely as a right hemisphere disorder. Instead, the musical abilities of individuals with left hemisphere stroke and aphasia should be characterized to better understand their deficits and mechanisms of impairment.
ContributorsWojtaszek, Mallory (Author) / Rogalsky, Corianne (Thesis advisor) / Daliri, Ayoub (Committee member) / Patten, Kristopher (Committee member) / Arizona State University (Publisher)
Created2022
Description
Previous research has shown that auditory modulation may be affected by pure tone stimuli played prior to the onset of speech production. In this experiment, we examined the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data analysis was carried out manually as well as with custom MATLAB code that combined data sets and calculated auditory modulation (suppression). P200 modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
ContributorsTaylor, Megan Kathleen (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
Transcranial Current Stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One type of this stimulation, transcranial alternating current stimulation (tACS), can entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic data to investigate the link between different frequencies of tACS and their effects on brain oscillations. We recruited 12 healthy participants, each of whom completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after a stimulation condition indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained the brain activity of a subgroup of participants.
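The outcome measure described here is a pre/post power contrast. A minimal, hypothetical sketch of that contrast for one stimulation frequency (invented power values, not the study's data):

```python
from statistics import mean

def entrainment_effect(pre_powers, post_powers):
    """Change in band power attributable to stimulation: mean
    oscillation power after stimulation minus the mean before.
    A positive value indicates increased power, consistent with
    entrainment at the stimulated frequency."""
    return mean(post_powers) - mean(pre_powers)

# Hypothetical 12 Hz band power (uV^2) from the 10 s windows before
# and after stimulation, compressed to a few values for illustration.
pre = [10.2, 9.8, 10.5, 10.0]
post = [12.1, 11.8, 12.4, 12.0]

delta = entrainment_effect(pre, post)
print(delta)  # positive -> power increased after stimulation
```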
ContributorsChernicky, Jacob Garrett (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05