Matching Items (27)

A Personal and Neuropsychological Evaluation of Synesthetic Experience

Description

Synesthesia is a psychological phenomenon in which the stimulation of one sensory modality brings about a response from at least one other modality. There have now been roughly two centuries of formal synesthesia research, yet the current era of study, from about the 2000s onward, has proven invaluable to our understanding of how synesthesia works in our perceptual world. I myself have two forms of synesthesia: color-grapheme and lexical-gustatory. In this paper, I look back on my personal experience with synesthesia and review its history, operational definitions, and theories. I then perform a small case study on my own synesthesia, using current research to evaluate my observations. I believe synesthesia can tell us much about perception, subjectivity, language, and consciousness, and I investigate the potential implications that studying synesthesia could have for some of these fields.

Date Created
  • 2017-05

The Challenges of Telemedicine and Pathways to Success

Description

Telemedicine is a multipurpose tool that allows medical professionals to use technology to evaluate, diagnose, and treat patients remotely. This paper focuses on the challenges that developing telemedicine programs face, specifically discussing target population, user experience, and physician adoption. Various users of telemedicine share their experiences overcoming such challenges; the broader goal of this paper is to facilitate the growth of telemedicine programs.

Date Created
  • 2016-12

Using Transcranial Alternating Current Stimulation to Entrain Cortical Oscillations

Description

Transcranial Current Stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One type of this stimulation, transcranial alternating current stimulation (tACS), is able to entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic data to investigate the link between different frequencies of tACS and their effects on brain oscillations. We recruited 12 healthy participants, and each participant completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after a stimulation condition indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained the brain activity of a sub-group of participants.
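
The pre/post comparison described in this abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the study's analysis pipeline; the sampling rate, window length, and 1 Hz half-width around the stimulation frequency are assumptions.

```python
# Minimal sketch of the pre/post power comparison described above (not the
# authors' code). Assumes a single EEG channel sampled at fs Hz, with 10 s
# recorded before and 10 s after a 12 s stimulation block.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_center, half_width=1.0):
    """Mean spectral power within +/- half_width Hz of f_center."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= f_center - half_width) & (freqs <= f_center + half_width)
    return psd[band].mean()

def entrainment_effect(pre, post, fs, stim_freq):
    """Change in oscillation power at the stimulation frequency (post - pre)."""
    return band_power(post, fs, stim_freq) - band_power(pre, fs, stim_freq)

# Hypothetical usage: average the per-trial effect over 30 trials of 6 Hz tACS.
# effects = [entrainment_effect(pre, post, fs=1000, stim_freq=6.0)
#            for pre, post in trials]
# mean_effect = np.mean(effects)
```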

Date Created
  • 2020-05

An Algorithm for the Automatic Detection of Vocal Flutter

Description

Detecting early signs of neurodegeneration is vital for measuring the efficacy of pharmaceuticals and planning treatments for neurological diseases. This is especially true for Amyotrophic Lateral Sclerosis (ALS), where differences in symptom onset can be indicative of the prognosis. Because speech can be measured noninvasively, changes in speech production have been proposed as a promising indicator of neurological decline. However, speech changes are typically measured subjectively by a clinician. These perceptual ratings can vary widely between clinicians and within the same clinician on different patient visits, making clinical ratings less sensitive to subtle early indicators. In this paper, we propose an algorithm for the objective measurement of flutter, a quasi-sinusoidal modulation of fundamental frequency that manifests in the speech of some ALS patients. The algorithm detailed in this paper applies long-term average spectral analysis to the residual F0 track of a sustained phonation to detect the presence of flutter and is robust to longitudinal drifts in F0. The algorithm is evaluated on a longitudinal speech dataset of ALS patients at varying stages of disease progression. Benchmarking against two stages of perceptual ratings provided by an expert speech pathologist indicates that the algorithm follows the perceptual ratings with moderate accuracy and can objectively detect flutter in instances where the variability of the perceptual rating causes uncertainty.
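
A simplified version of the described measurement might look like the following. This is a hedged sketch rather than the published algorithm; the F0 frame rate and the 6-12 Hz flutter band are assumptions, and the linear detrending step stands in for the abstract's robustness to longitudinal F0 drift.

```python
# Simplified sketch of flutter detection from an F0 track (not the published
# algorithm). Assumes the F0 contour of a sustained phonation has already been
# extracted at a fixed frame rate, and that flutter appears as a quasi-sinusoidal
# modulation of F0 in a low modulation-frequency band (band edges are assumptions).
import numpy as np
from scipy.signal import welch, detrend

def flutter_score(f0_track, frame_rate=100.0, band=(6.0, 12.0)):
    """Ratio of modulation power in the assumed flutter band to total power."""
    residual = detrend(f0_track, type="linear")   # remove slow F0 drift
    freqs, psd = welch(residual, fs=frame_rate,
                       nperseg=min(len(residual), 512))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# A phonation might be flagged as fluttered when flutter_score exceeds a
# threshold chosen from perceptually rated recordings.
```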

Date Created
  • 2018-05

Vowel Normalization in Dysarthria

Description

In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared to the results of human perceptual classification of the actual vowels. These results were then analyzed to determine whether the normalization techniques correlated with the human data.
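
For reference, one common formulation of the two normalization methods named above is sketched below. This is a generic illustration, not the study's code: the Traunmüller form of the Bark transform and per-speaker z-scoring for the Lobanov method.

```python
# Generic sketch of the two vowel-normalization methods named above (not the
# study's code). Formant values are in Hz; Lobanov normalization is computed
# per speaker and per formant.
import numpy as np

def bark(f_hz):
    """Traunmüller (1990) Hz-to-Bark conversion."""
    f_hz = np.asarray(f_hz, dtype=float)
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formant_values):
    """Lobanov normalization: z-score one speaker's values for one formant."""
    x = np.asarray(formant_values, dtype=float)
    return (x - x.mean()) / x.std()

# Example: normalize one speaker's F1 measurements both ways.
# f1 = np.array([310.0, 520.0, 730.0, 660.0])
# f1_bark, f1_lobanov = bark(f1), lobanov(f1)
```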

Date Created
  • 2013-05

Probing the Role of Auditory Feedback in Voice Pitch Control Using Vibrato Perturbation

Description

The objective of this study was to analyze the auditory feedback system and the pitch-shift reflex in relation to vibrato. Eleven subjects (8 female, 3 male) without speech, hearing, or neurological disorders participated. Compensation magnitude, adaptation magnitude, relative response phase, and passive and active perception were recorded while the subjects received auditory feedback perturbed by phasic amplitude and F0 modulation, or “vibrato”. “Tremolo,” or phasic amplitude modulation alone, was used as a control. Significant correlation was found between the ability to perceive vibrato and tremolo in active trials and the ability to perceive them in passive trials (p = 0.01). Passive perception thresholds were lower (more sensitive) than active ones (p < 0.01). Adaptation trials with vibrato showed significant modulation magnitude (p = 0.031), while tremolo trials did not, and the two conditions were significantly different (p < 0.01). There was significant phase change for both tremolo and vibrato, but the vibrato phase change was greater, nearly 180° (p < 0.01). In the compensation trials, the modulation change from control to vibrato trials was significantly greater than the change from control to tremolo (p = 0.01). Vibrato and tremolo also had significantly different average phase change (p < 0.01). It can be concluded that the auditory feedback system attempts to cancel out dynamic pitch perturbations by responding out of phase with them. Similar systems appear to underlie both adaptation and compensation to vibrato. Despite the auditory feedback system’s online monitoring, passive perception was still better than active perception, possibly because it required only one task (perceiving) rather than two (perceiving and producing). The pitch-shift reflex corresponds to the sensitivity of the auditory feedback system, as shown by the increased perception of vibrato over tremolo.
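
One way to quantify measures like those reported above (modulation magnitude and relative response phase) is to project the produced F0 contour onto a sinusoid at the modulation frequency. The sketch below is an illustration only, not the study's analysis code; the 5 Hz modulation rate and 100 Hz F0 frame rate are assumed values.

```python
# Illustrative sketch (not the study's analysis code): estimate how strongly a
# produced F0 contour is modulated at the feedback modulation frequency, and
# its phase, via a single-frequency discrete Fourier coefficient.
import numpy as np

def modulation_magnitude_phase(f0_track, frame_rate=100.0, mod_freq=5.0):
    """Return (magnitude, phase in degrees) of the F0 modulation at mod_freq."""
    x = np.asarray(f0_track, dtype=float)
    x = x - x.mean()                                  # remove the mean F0
    t = np.arange(len(x)) / frame_rate
    coeff = np.sum(x * np.exp(-2j * np.pi * mod_freq * t)) * 2.0 / len(x)
    return np.abs(coeff), np.degrees(np.angle(coeff))

# The relative response phase is the difference between this phase and the
# phase of the feedback (stimulus) modulation; a value near 180 degrees
# indicates the out-of-phase, compensatory response described above.
```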

Date Created
  • 2018-05

Two-Sentence Recognition with a Pulse Train Vocoder

Description

When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of the speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that the listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that was uttered in the competing sentence.
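
A pulse train vocoder of the kind described can be sketched roughly as follows. This is an illustrative simplification, not the experiment's implementation: the channel count, band edges, sampling rate, and the use of a single constant F0 per utterance are all assumptions made for brevity.

```python
# Minimal sketch of a pulse-train vocoder (an illustration, not the experiment's
# implementation). Speech is split into logarithmically spaced bands, each band's
# envelope modulates a pulse-train carrier at the talker's F0, and the modulated
# bands are summed. A constant F0 and fs >= 16 kHz are assumed for simplicity.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pulse_train(f0, n_samples, fs):
    """Unit impulses at the pitch period (a crude glottal-pulse carrier)."""
    carrier = np.zeros(n_samples)
    carrier[::int(round(fs / f0))] = 1.0
    return carrier

def pulse_train_vocoder(speech, fs, f0, n_channels=8, f_lo=100.0, f_hi=7000.0):
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    carrier = pulse_train(f0, len(speech), fs)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))              # channel envelope
        out += sosfiltfilt(sos, carrier * envelope)   # re-filter modulated carrier
    return out / np.max(np.abs(out))                  # normalize output level
```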

Date Created
  • 2013-05

Cortical characterization of the perception of intelligible and unintelligible speech measured via high-density electroencephalography

Description

High-density electroencephalography was used to evaluate cortical activity during speech comprehension via a sentence verification task. Twenty-four participants assigned true or false to sentences produced with 3 noise-vocoded channel levels (1 = unintelligible, 6 = decipherable, 16 = intelligible) during simultaneous EEG recording. Participant data were sorted into higher-performing (HP) and lower-performing (LP) groups. The identification of a late event-related potential for LP listeners in the intelligible condition, and in all listeners when challenged with a 6-channel signal, supports the notion that this induced potential may be related either to processing degraded speech or to degraded processing of intelligible speech. Different cortical locations are identified as the neural generators responsible for this activity; HP listeners engage motor aspects of their language system, utilizing an acoustic-phonetic strategy to help resolve the sentence, while LP listeners do not. This study presents evidence for neurophysiological indices associated with more or less successful speech comprehension performance across listening conditions.

Date Created
  • 2015-01-01

Specificity of Auditory Modulation during Speech Planning

Description

Previous research has shown that auditory modulation may be affected by pure tone stimuli played prior to the onset of speech production. In this experiment, we examine the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data analysis was carried out both manually and with MATLAB code written to combine data sets and calculate auditory modulation (suppression). Results for the P200 showed that modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for the N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this result is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
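
The suppression measure described above can be illustrated with a short sketch. The study's analysis used MATLAB; the Python sketch below simply computes speaking-minus-reading differences in mean ERP amplitude within assumed N100 and P200 latency windows, and is not the study's code.

```python
# Sketch of the auditory modulation (suppression) measure described above.
# Computes the speaking-minus-reading difference in mean ERP amplitude inside
# assumed component latency windows for N100 and P200. Epochs are assumed to
# be time-locked to the auditory stimulus.
import numpy as np

def component_amplitude(epochs, times, window):
    """Mean amplitude (over trials and time) inside a latency window in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

def modulation(speaking_epochs, reading_epochs, times):
    """Speaking-minus-reading amplitude difference for N100 and P200 windows."""
    windows = {"N100": (0.08, 0.12), "P200": (0.18, 0.25)}   # assumed windows
    return {name: component_amplitude(speaking_epochs, times, w)
                  - component_amplitude(reading_epochs, times, w)
            for name, w in windows.items()}

# speaking_epochs, reading_epochs: arrays of shape (n_trials, n_timepoints);
# times: epoch time points in seconds relative to stimulus onset.
```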

Date Created
  • 2020-05

Somatosensory Modulation during Speech Planning

Description

Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been suggested to occur through the internal forward model processing an efference copy of the motor command and generating a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception during mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual attenuation was strongest during speech production and was already present, to a lesser degree, during the planning period just before speech. This demonstrates that there is a significant decrease in the responsiveness of the somatosensory system during speech production, as well as milliseconds before speech is even produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
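
The detection measure described here can be sketched as follows. This is an illustration, not the study's analysis: detection rates for the near-threshold stimulus are tabulated per time point relative to the visual cue, separately for the speaking and silent reading conditions.

```python
# Illustrative sketch (not the study's analysis code): detection rate of the
# near-threshold lip stimulation at each time point relative to the visual cue,
# computed separately for the speaking and silent-reading conditions.
import numpy as np

def detection_rates(responses, time_points, conditions):
    """responses: 1 = detected, 0 = missed; all three arrays share one index."""
    responses = np.asarray(responses)
    time_points = np.asarray(time_points)
    conditions = np.asarray(conditions)
    rates = {}
    for cond in np.unique(conditions):
        for t in np.unique(time_points):
            mask = (conditions == cond) & (time_points == t)
            rates[(cond, t)] = responses[mask].mean()
    return rates

# Attenuation of somatosensory responsiveness shows up as lower detection rates
# in the speaking condition (planning and production time points) than in the
# silent-reading condition, relative to the ~85% resting detection rate.
```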

Date Created
  • 2019-05