Description
Brain Computer Interfaces are becoming the next-generation controllers, not only in medical devices for disabled individuals but also in the gaming and entertainment industries. In order to build an effective Brain Computer Interface, which accurately translates the user's thoughts into machine commands, it is important to have robust and fail-proof signal processing and machine learning modules which operate on the raw EEG signals and estimate the current thought of the user.

In this thesis, several techniques used to perform EEG signal pre-processing, feature extraction, and signal classification have been discussed, implemented, validated, and verified; efficient supervised machine learning models for EEG motor imagery signal classification are identified. To further improve the performance of the system, unsupervised feature learning techniques have been investigated by pre-training the deep learning models. The use of pre-trained stacked autoencoders has been proposed to solve the problems caused by random initialization of weights in neural networks.
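The pre-training idea described above can be sketched as follows. This is a minimal illustration of tied-weight autoencoder pre-training, not the thesis's implementation; the toy data stands in for extracted EEG features, and all shapes and hyperparameters are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    """Train a one-layer autoencoder X -> h -> X_hat with tied weights."""
    n_features = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_features, n_hidden))
    b_h = np.zeros(n_hidden)
    b_o = np.zeros(n_features)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)       # encode
        X_hat = H @ W.T + b_o          # decode with the transposed weights
        err = X_hat - X                # reconstruction error
        dZ = (err @ W) * H * (1.0 - H)
        gW = (X.T @ dZ + err.T @ H) / len(X)  # gradient w.r.t. shared W
        W -= lr * gW
        b_h -= lr * dZ.mean(axis=0)
        b_o -= lr * err.mean(axis=0)
    return W, b_h

# Toy "feature" matrix standing in for extracted EEG features
X = rng.normal(size=(64, 8))
W1, b1 = train_autoencoder(X, n_hidden=4)

# Instead of random initialization, (W1, b1) would seed the first hidden
# layer of the supervised neural network before fine-tuning on labels.
H = sigmoid(X @ W1 + b1)
print(H.shape)
```

Stacking repeats this greedily: the hidden activations `H` become the input for training the next autoencoder layer.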

Motor imagery (imaginary hand and leg movement) signals are acquired using the Emotiv EEG headset. Different kinds of features, such as the mean signal, band powers, and RMS of the signal, have been extracted and supplied to the machine learning (ML) stage, wherein several ML techniques, such as LDA, kNN, SVM, logistic regression, and neural networks, are applied and validated. During the validation phase the performances of the various techniques are compared and some important observations are reported. Further, deep learning techniques like autoencoding have been used to perform unsupervised feature learning. The reliability of the features is analyzed by performing classification using the ML techniques mentioned earlier. The performance of the neural networks has been further improved by pre-training the network in an unsupervised fashion using stacked autoencoders and supplying the stacked autoencoders' network parameters as initial parameters to the neural network. All the findings in this research, during each phase (pre-processing, feature extraction, classification), are directly relevant to and can be used by the BCI research community for building motor imagery-based BCI applications.
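The band-power and RMS features mentioned above can be computed along these lines. This is a rough sketch using NumPy with a synthetic epoch in place of real Emotiv recordings; the 128 Hz sampling rate and the 8–13 Hz alpha band are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

FS = 128.0  # assumed sampling rate (Hz); consumer EEG headsets sample near this

def band_power(epoch, fs, low, high):
    """Mean periodogram power of `epoch` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def rms(epoch):
    """Root-mean-square amplitude of the epoch."""
    return np.sqrt(np.mean(epoch ** 2))

# Synthetic 1 s epoch: a 10 Hz (alpha-band) tone plus a little noise
t = np.arange(0, 1, 1 / FS)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# One feature vector per epoch, ready to feed an LDA/SVM/kNN classifier
features = np.array([epoch.mean(), rms(epoch), band_power(epoch, FS, 8, 13)])
print(features.shape)
```

In a real pipeline one such vector is computed per channel per epoch, and the classifier is trained on the stacked vectors.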

Additionally, this thesis attempts to develop, test, and compare the performance of an alternative method for classifying human driving behavior. It proposes using a driver's affective state to infer driving behavior. The purpose of this part of the thesis was to classify EEG data collected from several subjects while they drove a simulated vehicle, and to compare the classification results with those obtained by classifying driving behavior using vehicle parameters collected simultaneously from all the subjects. The objective is to see whether a driver's mental state is reflected in his or her driving behavior.
Contributors: Manchala, Vamsi Krishna (Author) / Redkar, Sangram (Thesis advisor) / Rogers, Bradley (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing humans on the Moon to helping us understand how the universe works at the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers have consistently widened, to the point where computers help brains with their thinking in countless everyday situations around the world. The first purpose of this research project was to conduct a brief review to gain a sound understanding of how both brains and computers operate at a fundamental level, and of what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review regarding the most advanced BCI technology of modern times and expanding upon the findings to argue the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.
Contributors: Thum, Giuseppe Edwardo (Author) / Gaffar, Ashraf (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavioral analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training.

Evaluation of the parents' fidelity to implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time consuming for clinicians to process, and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments.

The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos.

The relationship between the parent and the clinician is important. The clinician can provide support and help build self-efficacy, in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can aid in addressing the uncertainty in the classification models by providing additional labeled samples. This will allow the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
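The human-in-the-loop step described above, where the clinician supplies labels for the samples the model is least sure about, resembles uncertainty sampling from active learning. A minimal sketch of that selection step follows; the probabilities and function name are illustrative, not taken from the dissertation:

```python
import numpy as np

def most_uncertain(probs, k=2):
    """Return indices of the k samples whose predicted probability of the
    positive class is closest to 0.5, i.e. where the model is least sure."""
    uncertainty = -np.abs(np.asarray(probs) - 0.5)  # higher = less sure
    return np.argsort(uncertainty)[-k:]

# Hypothetical classifier outputs for five unlabeled video segments
probs = [0.95, 0.52, 0.10, 0.48, 0.80]
to_label = most_uncertain(probs, k=2)

# A clinician would review and label exactly these segments, after which
# the classifier is retrained with the newly labeled data included.
print(sorted(to_label))  # [1, 3]
```

Each labeling round shrinks the epistemic uncertainty of the model precisely where it matters most, while keeping the clinician's effort bounded.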
Contributors: Copenhaver Heath, Corey D (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Davulcu, Hasan (Committee member) / Gaffar, Ashraf (Committee member) / Arizona State University (Publisher)
Created: 2019