Matching Items (3)

Description
When surgical resection becomes necessary to alleviate a patient's epileptiform activity, that patient is monitored by video synchronized with electrocorticography (ECoG) to determine the type and location of the seizure focus. This provides a unique opportunity for researchers to gather neurophysiological data with high temporal and spatial resolution; these data are assessed prior to surgical resection to ensure the preservation of the patient's quality of life, e.g., avoiding the removal of brain tissue required for speech processing. Currently considered the "gold standard" for cortical mapping, electrical cortical stimulation (ECS) involves the systematic activation of pairs of electrodes to localize functionally specific brain regions. This method has distinct limitations, which often include pain experienced by the patient. Even in the best cases, the technique suffers from subjective assessments on the parts of both patients and physicians, and from high inter- and intra-observer variability. Recent advances have been made as researchers have reported the localization of language areas through several signal processing methodologies, all necessitating patient participation in a controlled experiment. A quantification tool that localizes speech areas while a patient is engaged in unconstrained interpersonal conversation would eliminate the dependence on biased patient and reviewer input, as well as unnecessary discomfort to the patient. Post-hoc ECoG data were gathered from five patients with intractable epilepsy while each was engaged in a conversation with family members or clinicians. After the data were separated into different speech conditions, the power of each condition was compared to baseline to identify statistically significant activated electrodes. The results of several analytical methods are presented here. The algorithms did not yield language-specific areas exclusively, as broad activation of statistically significant electrodes was apparent across cortical areas. For one patient, 15 adjacent contacts along the superior temporal gyrus (STG) and the posterior part of the temporal lobe were determined language-significant through a controlled experiment. The task, in which the patient lay in bed listening to repeated words, yielded statistically significant activations that aligned with those of the clinical evaluation. The results of this study do not support the hypothesis that unconstrained conversation can be used to localize areas required for receptive and productive speech, yet they suggest that a simple listening task may be an adequate alternative to direct cortical stimulation.
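As a rough illustration of the power-versus-baseline analysis this abstract describes, the minimal sketch below compares per-electrode band power between a speech condition and baseline with a per-electrode t-test. The function name, the high-gamma band, and the Welch parameters are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy import signal, stats

def significant_electrodes(cond, base, fs=1000, band=(70, 150), alpha=0.05):
    """Flag electrodes whose band power during a speech condition differs
    significantly from baseline.

    cond, base: arrays of shape (n_trials, n_electrodes, n_samples).
    """
    def band_power(x):
        # Welch PSD along the time axis, then mean power within the band
        f, pxx = signal.welch(x, fs=fs, nperseg=min(256, x.shape[-1]), axis=-1)
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[..., sel].mean(axis=-1)  # shape: (n_trials, n_electrodes)

    # Two-sample t-test per electrode across trials
    t, p = stats.ttest_ind(band_power(cond), band_power(base), axis=0)
    # Bonferroni correction across electrodes
    return np.where(p < alpha / cond.shape[1])[0]
```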
Contributors: Lingo VanGilder, Jennapher (Author) / Helms Tillery, Stephen I (Thesis advisor) / Wahnoun, Remy (Thesis advisor) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Lie detection is used prominently in contemporary society for many purposes, such as pre-employment screenings, granting security clearances, and determining whether criminals or potential suspects are lying, but it is by no means limited to that scope. However, lie detection has been criticized for being subjective, unreliable, inaccurate, and susceptible to deliberate manipulation. Critics also contend that the administrator of the test influences the outcome. As a result, the polygraph machine, the contemporary device used for lie detection, has come under scrutiny when used as evidence in the courts. The purpose of this study is to use three entirely different tools and concepts to determine whether eye tracking systems, electroencephalography (EEG), and Facial Expression Emotion Analysis (FACET) are reliable tools for lie detection. This study found that certain constructs, such as the left eye's gaze relative to its usual position (eye tracking) and engagement levels (EEG), could distinguish between truths and lies. However, FACET proved the most reliable tool of the three, providing not just one distinguishing variable but seven, all related to emotions derived from movements of the facial muscles during the present study. The emotions documented by FACET as able to distinguish between truthful and lying responses were joy, anger, fear, confusion, and frustration. In addition, overall measures of the subject's neutral and positive emotional expression were found to be distinguishing factors. The implications of this study and future directions are discussed.
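The screening step implied here, deciding which constructs separate truthful from deceptive responses, could look like the sketch below. The feature names and the per-feature t-test are assumptions for illustration, not the study's actual measures or statistics.

```python
from scipy import stats

# Hypothetical stand-ins for the study's measures (names are assumptions)
FEATURES = ["left_gaze_offset", "eeg_engagement", "joy", "anger",
            "fear", "confusion", "frustration", "neutral", "positive"]

def distinguishing_features(truth, lie, names=FEATURES, alpha=0.05):
    """Per-feature two-sample t-test between truthful and deceptive
    responses; returns (name, p-value) for features whose distributions
    differ at the given significance level.

    truth, lie: arrays of shape (n_responses, n_features).
    """
    _, p = stats.ttest_ind(truth, lie, axis=0)
    return [(name, float(pv)) for name, pv in zip(names, p) if pv < alpha]
```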
Contributors: Seto, Raymond Hua (Author) / Atkinson, Robert (Thesis director) / Runger, George (Committee member) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05

Description
Brain-computer interface (BCI) technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates optimal conditions for learning to operate such an interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object relates directly to the amplitude of the brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task. Specifically, it investigates whether subjects can adapt to parameters of the interface without guidance. The experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two targets. However, subjects are given neither knowledge of these desired changes nor instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain-wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and future BCI projects will be informed by these methods, offering insight into the creation of more intuitive and reliable BCI technology.
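The amplitude-tracking biofeedback method the abstract describes might reduce to something like the sketch below, which maps band power in the latest EEG window from a control electrode to a one-dimensional ball position. The control band, sampling rate, and normalization range are assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy import signal

def amplitude_to_position(window, fs=256, band=(8, 12), p_lo=1.0, p_hi=10.0):
    """Map band power in the most recent EEG window to a ball position in
    [-1, 1], so that position tracks signal amplitude directly.

    window: 1-D array of the latest samples from the control electrode.
    p_lo, p_hi: assumed expected power range used for normalization.
    """
    f, pxx = signal.welch(window, fs=fs, nperseg=min(256, len(window)))
    power = pxx[(f >= band[0]) & (f <= band[1])].mean()
    # Clip to the expected power range, then rescale linearly to [-1, 1]
    frac = np.clip((power - p_lo) / (p_hi - p_lo), 0.0, 1.0)
    return 2.0 * frac - 1.0
```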
Contributors: Lancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05