Matching Items (17)
Description
Lie detection is used prominently in contemporary society for many purposes, such as pre-employment screenings, granting security clearances, and determining whether criminal suspects may be lying, but it is by no means limited to that scope. However, lie detection has been criticized for being subjective, unreliable, inaccurate, and susceptible to deliberate manipulation. Critics also believe that the administrator of the test influences its outcome. As a result, the polygraph machine, the contemporary device used for lie detection, has come under scrutiny when used as evidence in the courts. The purpose of this study is to determine whether three entirely different tools, eye tracking systems, electroencephalography (EEG), and Facial Expression Emotion Analysis (FACET), are reliable for lie detection. This study found that certain constructs, such as the left eye's gaze position relative to its usual position (eye tracking) and engagement levels (EEG), could distinguish between truths and lies. However, FACET proved the most reliable of the three tools, providing not one distinguishing variable but seven, all related to emotions derived from facial muscle movements during the present study. The emotions measured by FACET that distinguished truthful from lying responses were joy, anger, fear, confusion, and frustration. In addition, overall measures of the subject's neutral and positive emotional expression were found to be distinguishing factors. The implications of this study and future directions are discussed.
ContributorsSeto, Raymond Hua (Author) / Atkinson, Robert (Thesis director) / Runger, George (Committee member) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
It is unknown which regions of the brain are most or least active for golfers during a peak performance state (Flow State or "The Zone") on the putting green. To address this issue, electroencephalographic (EEG) recordings were taken on 10 elite golfers while they performed a putting drill consisting of nine putts spaced uniformly around a hole, each five feet away. Data were collected at three time periods: before, during, and after each putt. Galvanic Skin Response (GSR) measurements were also recorded for each subject. Three of the subjects performed a visualization of the same putting drill, and their brain waves and GSR were recorded and compared with their actual performance of the drill. EEG data in the theta (4-7 Hz) and alpha (7-13 Hz) bandwidths at 11 different locations across the head were analyzed, quantified using the relative power spectrum. The results showed a higher magnitude of power in both the theta and alpha bandwidths for missed putts than for made putts (p < 0.05), as well as higher average power in the right hemisphere for made putts. There was neither higher power in the occipital region nor lower power in the frontal cortical region during made putts. The hypothesis that the mean power levels would differ between actual performance and visualization was also supported.
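As a rough illustration of the relative-power quantification described above, the sketch below computes the fraction of periodogram power falling in the theta and alpha bands for a synthetic one-second epoch. All signal parameters here are invented for illustration; the thesis's actual pipeline is not reproduced.

```python
import numpy as np

def relative_band_power(signal, fs, band, total_band=(1.0, 40.0)):
    """Relative power of `band` within `total_band`, via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total_band[0]) & (freqs < total_band[1])
    return psd[in_band].sum() / psd[in_total].sum()

# Synthetic 1-second epoch dominated by a 10 Hz (alpha) oscillation.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

alpha = relative_band_power(epoch, fs, (7.0, 13.0))
theta = relative_band_power(epoch, fs, (4.0, 7.0))
```

For this epoch, almost all broadband power falls in the alpha band, so `alpha` is close to 1 while `theta` is near 0.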
ContributorsCarpenter, Andrea (Co-author) / Hool, Nicholas (Co-author) / Muthuswamy, Jitendran (Thesis director) / Crews, Debbie (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Brain-computer interface (BCI) technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates optimal conditions to facilitate learning to operate this interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object relates directly to the amplitude of brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task. Specifically, it investigates whether subjects can adapt to parameters of the interface without guidance. This experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two targets. However, subjects are given no knowledge of these desired changes, nor any instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and future BCI projects will be informed by these methods, giving insight into the creation of more intuitive and reliable BCI technology.
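A feedback method in which object position relates directly to band amplitude can be sketched as below. The band limits, calibration scheme, and function names are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def band_amplitude(epoch, fs, band=(8.0, 13.0)):
    """Mean spectral amplitude in a control band (e.g., mu/alpha)."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    amps = np.abs(np.fft.rfft(epoch)) / len(epoch)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return amps[mask].mean()

def amplitude_to_position(amplitude, amp_min, amp_max):
    """Positional feedback: map band amplitude linearly onto [0, 1]."""
    span = amp_max - amp_min
    return float(np.clip((amplitude - amp_min) / span, 0.0, 1.0))

fs = 250
t = np.arange(fs) / fs
strong = np.sin(2 * np.pi * 10 * t)       # high control-band amplitude
weak = 0.2 * np.sin(2 * np.pi * 10 * t)   # low control-band amplitude

a_hi = band_amplitude(strong, fs)
a_lo = band_amplitude(weak, fs)
# Calibrate on these two extremes: the strong epoch drives the virtual
# ball to one end of the workspace, the weak epoch to the other.
pos_hi = amplitude_to_position(a_hi, a_lo, a_hi)
pos_lo = amplitude_to_position(a_lo, a_lo, a_hi)
```

In a closed-loop system, the epoch would be a sliding window of live EEG and the position would update continuously.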
ContributorsLancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created2014-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This is thought to occur because the internal forward model processes an efference copy of the motor command and creates a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals at the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception across mixed speaking and silent reading conditions. Participants were asked to judge whether a near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves, without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition, while it remained at a constant level close to the perceptual threshold throughout the silent reading condition. The attenuation was strongest during speech production itself and was already present, to a lesser degree, during the planning period just before speech. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
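The attenuation effect can be illustrated with a toy comparison of detection rates between conditions. The trial counts and detection probabilities below are invented, and the two-proportion z-test is a generic stand-in rather than the study's actual statistical analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 200

# Simulated detections of a near-threshold lip stimulus (True = detected):
# a rest-calibrated ~85% rate during silent reading, attenuated while speaking.
reading = rng.random(n_trials) < 0.85
speaking = rng.random(n_trials) < 0.60

rate_reading = reading.mean()
rate_speaking = speaking.mean()

# Two-proportion z-test for the drop in detection rate.
p_pool = (reading.sum() + speaking.sum()) / (2 * n_trials)
se = np.sqrt(p_pool * (1 - p_pool) * (2 / n_trials))
z = (rate_reading - rate_speaking) / se
```

A large positive `z` indicates that the drop in detection rate while speaking is far beyond chance fluctuation.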
ContributorsMcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in the stroke survivor population, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies using the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestrained multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results show that SEM is possible in all five target directions, and that the probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement is indeed possible.
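One of the kinematic measures mentioned above, linear deviation, is commonly defined as the peak perpendicular distance of the reach trajectory from the straight line joining start and target; the sketch below assumes that definition and uses a synthetic path, not the study's data.

```python
import numpy as np

def linear_deviation(path):
    """Maximum perpendicular deviation of a 2-D path (N x 2 array)
    from the straight line joining its start and end points."""
    start, end = path[0], path[-1]
    line = end - start
    u = line / np.linalg.norm(line)
    rel = path - start
    # Perpendicular distance = |z-component of the 2-D cross product|.
    perp = rel[:, 0] * u[1] - rel[:, 1] * u[0]
    return np.abs(perp).max()

# A slightly curved reach from the origin to a target at (10, 10) cm.
t = np.linspace(0.0, 1.0, 50)
path = np.column_stack([10 * t, 10 * t + np.sin(np.pi * t)])

dev = linear_deviation(path)  # peak deviation of the curved reach (cm)
```

Here the path bows away from the straight line by sin(πt) in the y-direction, so the peak perpendicular deviation is about 1/√2 ≈ 0.71 cm.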
ContributorsOssanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
The premise of the embodied cognition hypothesis is that cognitive processes engage the brain's emotion, sensory, and motor systems, rather than arbitrary symbols divorced from sensorimotor systems. The hypothesis explains many of the mechanisms of mental simulation or imagination and how they facilitate comprehension of concepts. Some forms of embodied processing can be measured using electroencephalography (EEG), in a particular waveform known as the mu rhythm (8-13 Hz) over the sensorimotor cortex. Power in the mu band is suppressed (desynchronized) when an individual performs an action, as well as when the individual imagines performing the action; mu suppression thus measures embodied imagination. An important question, however, is whether sensorimotor cortex involvement while reading, as measured by mu suppression, is part of the comprehension of what is read or arises only after comprehension has taken place. To answer this question, participants first took the Gates-MacGinitie reading comprehension test. Then, mu suppression was measured while participants read experimental materials. The degree of mu suppression while reading verbs correlated 0.45 with scores on the Gates-MacGinitie test. This correlation strongly suggests that sensorimotor system involvement while reading action sentences is part of the comprehension process rather than an aftereffect.
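A minimal sketch of the mu-suppression measure and its correlation with test scores, assuming the common log-ratio (task power / rest power) definition of suppression; the signals and per-subject numbers below are fabricated purely for illustration.

```python
import numpy as np

def mu_power(epoch, fs, band=(8.0, 13.0)):
    """Power in the mu band (8-13 Hz) from the periodogram."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def mu_suppression(task_epoch, rest_epoch, fs):
    """Log ratio of task to rest mu power; negative values = suppression."""
    return np.log(mu_power(task_epoch, fs) / mu_power(rest_epoch, fs))

fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
task = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

s = mu_suppression(task, rest, fs)  # mu power drops during the task

# Correlating per-subject suppression with comprehension scores
# (both arrays are made-up illustrations, not the study's data):
suppression = np.array([-0.9, -0.5, -0.7, -0.2, -1.1])
scores = np.array([34.0, 22.0, 28.0, 18.0, 38.0])
r = np.corrcoef(suppression, scores)[0, 1]
```

With these made-up values, stronger suppression (more negative log ratios) goes with higher scores, giving a negative Pearson correlation.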
ContributorsMarino, Annette Webb (Author) / Glenberg, Arthur (Thesis director) / Presson, Clark (Committee member) / Blais, Chris (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The research question this thesis aims to answer is whether the depressive symptoms of adolescents involved in romantic relationships are related to their rejection sensitivity. It was hypothesized that adolescents with greater rejection sensitivity, indicated by a larger P3b response, would have more depressive symptoms. This hypothesis was tested by having adolescent couples attend a lab session in which they performed a Social Rejection Task while EEG data were collected. Rejection sensitivity was measured using the P3b event-related potential (ERP) at the Pz electrode, a measure used for this purpose in previous ostracism studies. Depressive symptoms were measured using the 20-item Center for Epidemiological Studies Depression Scale (CES-D; Radloff, 1977). A multiple regression analysis did not support the hypothesis; instead, the results showed no relationship between rejection sensitivity and depressive symptoms. This is contrary to similar literature, which typically shows that higher rejection sensitivity is associated with greater depressive symptoms.
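Measuring a P3b-like component at Pz typically involves averaging epochs time-locked to the event and taking the mean amplitude in a post-stimulus window relative to a pre-stimulus baseline. The sketch below assumes that standard approach with simulated data; the window bounds, amplitudes, and trial counts are illustrative, not the study's parameters.

```python
import numpy as np

fs = 250                             # sampling rate, Hz
n_trials = 40
t = np.arange(-0.2, 0.8, 1 / fs)     # epoch from -200 ms to 800 ms

rng = np.random.default_rng(2)

def simulated_pz_trial(p3b_amp):
    """One Pz epoch: noise plus a positive deflection near 400 ms."""
    noise = rng.standard_normal(t.size)
    p3b = p3b_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return noise + p3b

def mean_window_amplitude(erp, lo=0.3, hi=0.5):
    """Mean ERP amplitude in a measurement window (seconds)."""
    mask = (t >= lo) & (t <= hi)
    return erp[mask].mean()

# Averaging across trials cancels the noise and leaves the P3b.
erp = np.mean([simulated_pz_trial(5.0) for _ in range(n_trials)], axis=0)
baseline = mean_window_amplitude(erp, -0.2, 0.0)
p3b_amplitude = mean_window_amplitude(erp) - baseline
```

The baseline-corrected window mean recovers a clear positive deflection even though each single trial is dominated by noise.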
ContributorsBiera, Alex (Author) / Dishion, Tom (Thesis director) / Ha, Thao (Committee member) / Shore, Danielle (Committee member) / Barrett, The Honors College (Contributor)
Created2015-05
Description
Many mysteries still surround brain function, yet a greater understanding of it is vital to advancing scientific research. Studies of the brain play a particularly large role in the medical field, as analysis can lead to proper diagnosis of patients and to anticipatory treatments. The objective of this research was to apply signal processing techniques to electroencephalogram (EEG) data in order to extract features with which to quantify a performed activity or a response to stimuli. The brain's responses were shown in eigenspectrum plots combined with time-frequency plots for each sensor, providing both spatial and temporal frequency analysis. Through this method, it was revealed how the brain responds to various stimuli not typically used in current research. Future applications might include testing similar stimuli on patients with neurological diseases to gain further insight into their conditions.
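One common form of eigenspectrum analysis, assumed here as an illustration, is the eigenvalue spectrum of the sensor-by-sensor covariance matrix: a strong response shared across sensors shows up as one dominant eigenvalue. The data below are synthetic; the stimuli and sensor layout of the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n_sensors, n_samples = 250, 8, 1000

# A single shared 10 Hz source projected onto all sensors, plus noise.
t = np.arange(n_samples) / fs
source = np.sin(2 * np.pi * 10 * t)
mixing = rng.standard_normal(n_sensors)          # per-sensor projection
data = np.outer(mixing, source) \
       + 0.1 * rng.standard_normal((n_sensors, n_samples))

cov = np.cov(data)                        # sensor-by-sensor covariance
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenspectrum, descending
dominance = eigvals[0] / eigvals.sum()    # variance in the 1st component
```

Because all sensors see the same source, the first eigenvalue captures most of the variance while the remaining eigenvalues reflect only sensor noise.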
ContributorsJackson, Matthew Joseph (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Behavioral evidence suggests that joint coordinated movement attunes one's own motor system to the actions of another. This attunement is called a joint body schema (JBS). According to the JBS hypothesis, the attunement arises from heightened mirror neuron sensitivity to the actions of the other person. This study uses EEG mu suppression, an index of mirror neuron system activity, to provide neurophysiological evidence for the JBS hypothesis. After a joint action task in which the experimenter used her left hand, the participant's EEG revealed greater mu suppression (compared to before the task) in her right cerebral hemisphere when watching a left hand movement. This enhanced mu suppression was found regardless of whether the participant was moving or watching the experimenter move. These results are suggestive of super mirror neurons, that is, mirror neurons which are strengthened in sensitivity to another after a joint action task and do not distinguish between whether the individual or the individual's partner is moving.
ContributorsGoodwin, Brenna Renee (Author) / Glenberg, Art (Thesis director) / Presson, Clark (Committee member) / Blais, Chris (Committee member) / School of Historical, Philosophical and Religious Studies (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description

The cocktail party effect describes the brain's natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of auditory attention to achieve the same goal with noninvasive electroencephalographic (EEG) methods. Five human participants completed an auditory attention task in which they listened to a series of four syllables followed by a fifth, probe syllable, and indicated whether or not the probe syllable was one of the four played immediately before it. Trials were divided into conditions in which the syllables were played in silence (Signal) or in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better during the Signal condition, consistent with the challenges demonstrated by the cocktail party effect. The EEG analysis showed that inter-trial coherence in the alpha band (9-13 Hz) could potentially indicate characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal the neural signatures of auditory attention, which may allow for the design of a noninvasive, EEG-based hearing aid.
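Inter-trial coherence (ITC) at a given frequency is the magnitude of the average unit phase vector across trials (1 = perfectly phase-locked, 0 = random phase). The sketch below, contrasting simulated phase-locked and phase-jittered trials, assumes this standard definition rather than the study's exact analysis pipeline.

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """ITC at one frequency: magnitude of the mean unit phase vector
    across trials (trials is an n_trials x n_samples array)."""
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = np.angle(spectra[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs, n_trials = 250, 60
t = np.arange(fs) / fs
rng = np.random.default_rng(4)

# Phase-locked trials: a 10 Hz response with the same phase every trial.
locked = np.array([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
                   for _ in range(n_trials)])
# Non-locked trials: the same oscillation with random phase per trial.
jittered = np.array([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                     + 0.5 * rng.standard_normal(fs)
                     for _ in range(n_trials)])

itc_locked = inter_trial_coherence(locked, fs, 10.0)
itc_jittered = inter_trial_coherence(jittered, fs, 10.0)
```

The phase-locked condition yields an ITC near 1, while random per-trial phase drives the ITC toward 0, which is why ITC can index a consistently attended signal.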

ContributorsLaBine, Alyssa (Author) / Daliri, Ayoub (Thesis director) / Chao, Saraching (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor) / Harrington Bioengineering Program (Contributor)
Created2023-05