Matching Items (40)
Description
This project investigates the gleam-glum effect, a well-replicated phonetic emotion association in which words with the [i] vowel sound (as in “gleam”) are judged more emotionally positive than words with the [ʌ] vowel sound (as in “glum”). The effect is observed across different modalities and languages and is moderated by mouth movements relevant to word production. This research presents and tests an articulatory explanation for this association in three experiments. Experiment 1 supported the articulatory explanation by comparing recordings of 71 participants completing an emotional recall task and a word read-aloud task, showing that oral movements were more similar between positive emotional expressions and [i] articulation, and between negative emotional expressions and [ʌ] articulation. Experiment 2 partially supported the explanation with 98 YouTube recordings of natural speech. In Experiment 3, 149 participants judged emotions expressed by a speaker during [i] and [ʌ] articulation. Contradicting the robust phonetic emotion association, participants more frequently judged the speaker’s [ʌ] articulatory movements to be positive emotional expressions and the [i] articulatory movements to be negative emotional expressions. This is likely due to visual emotional cues unrelated to oral movements and to the order of the word lists read by the speaker. Overall, the findings from this project support an articulatory explanation for the gleam-glum effect, which has major implications for language and communication.
Contributors: Yu, Shin-Phing (Author) / McBeath, Michael K (Thesis advisor) / Glenberg, Arthur M (Committee member) / Stone, Greg O (Committee member) / Coza, Aurel (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation focuses on reinforcement learning (RL) controller design aimed at real-life applications involving continuous state and control problems. It comprises three major research investigations spanning design, analysis, implementation, and evaluation, with an application case of automatically configuring robotic prosthesis impedance parameters. The major contributions of the dissertation are as follows. 1) An “echo control” using the intact knee profile as the target is designed to overcome the limitation of a designer-prescribed robotic knee profile. 2) Collaborative multiagent reinforcement learning (cMARL) is proposed to directly account for human influence in the robot control design. 3) A phased actor in actor-critic (PAAC) reinforcement learning method is developed to reduce learning variance in RL. The “echo control” design is based on a new formulation of direct heuristic dynamic programming (dHDP) for tracking control of a robotic knee prosthesis to mimic the intact knee profile. A systematic simulation of the proposed control is provided using a human-robot system simulation in OpenSim. The tracking controller is then tested on able-bodied and amputee subjects; this is the first real-time human testing of RL tracking control of a robotic knee to mirror the profile of an intact knee. The cMARL is a new solution framework for the human-prosthesis collaboration (HPC) problem and the first attempt to consider human influence on human-robot walking in the presence of a reinforcement-learning-controlled lower-limb prosthesis. Results show that treating the human and robot as coupled, collaborating agents and using an estimated human adaptation in robot control design help improve human walking performance. These studies demonstrate the great potential of RL control for solving continuous problems.
To solve more complex real-life tasks with multiple control inputs and high-dimensional state spaces, major roadblocks such as high variance, low data efficiency, slow learning, and even instability must be addressed. A novel PAAC method is proposed to improve learning performance in policy gradient RL by accounting for both the Q value and the TD error in actor updates. Systematic and comprehensive demonstrations show its effectiveness through qualitative analysis and quantitative evaluation in the DeepMind Control Suite.
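The core idea of blending a bootstrapped value estimate with a TD error in the actor's learning signal can be illustrated with a toy tabular actor-critic. Everything below, including the three-state chain task, the blending weight `beta`, and all names, is an illustrative assumption and not the dissertation's PAAC algorithm.

```python
import math
import random

# Toy tabular actor-critic on a three-state chain (goal at state 2).
# The actor's learning signal blends the TD error with a bootstrapped
# Q estimate, loosely illustrating the idea of using both quantities.
N_STATES = 3
MOVES = (-1, +1)  # action 0: step left, action 1: step right

def train(episodes=500, alpha=0.1, gamma=0.9, beta=0.5, seed=1):
    rng = random.Random(seed)
    V = [0.0] * N_STATES                            # critic: state values
    prefs = [[0.0, 0.0] for _ in range(N_STATES)]   # actor: action preferences

    def policy(s):
        exps = [math.exp(p) for p in prefs[s]]
        z = sum(exps)
        return [e / z for e in exps]                # softmax over preferences

    for _ in range(episodes):
        s = 0
        for _ in range(20):
            probs = policy(s)
            a = 0 if rng.random() < probs[0] else 1
            s_next = max(0, min(N_STATES - 1, s + MOVES[a]))
            done = s_next == N_STATES - 1
            r = 1.0 if done else 0.0
            v_next = 0.0 if done else V[s_next]
            td = r + gamma * v_next - V[s]          # TD error
            q = r + gamma * v_next                  # bootstrapped Q estimate
            V[s] += alpha * td                      # critic update
            # Actor update: mix the TD error with a Q-based advantage.
            signal = beta * td + (1.0 - beta) * (q - V[s])
            for b in range(2):
                grad = (1.0 if b == a else 0.0) - probs[b]
                prefs[s][b] += alpha * signal * grad
            if done:
                break
            s = s_next
    return prefs

prefs = train()
# After training, "right" (toward the goal) should be preferred in each state.
```

This is only a sketch of the blended-signal idea; the dissertation's contribution concerns how such a scheme reduces learning variance in continuous-control benchmarks.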
Contributors: Wu, Ruofan (Author) / Si, Jennie (Thesis advisor) / Huang, He (Committee member) / Santello, Marco (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Robotic lower limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the fact that the impedance control parameters must be tuned and optimized manually by prosthetists for each individual user in different task environments. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, making it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning with a limited number of samples or with samples exhibiting large variations. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.

This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared to several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.

Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. This result again demonstrated that data and time efficiency were improved by using prior knowledge.

While the present study is new and promising, many open questions remain for future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as symmetry, user comfort level and satisfaction, and user energy consumption. To make RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascent or descent, and from walking to running.
Contributors: Gao, Xiang (Author) / Si, Jennie (Thesis advisor) / Huang, He Helen (Committee member) / Santello, Marco (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Transcranial focused ultrasound (tFUS) is a unique neurostimulation modality with the potential to develop into a highly sophisticated and effective tool. Unlike any other noninvasive neurostimulation technique, tFUS has a high spatial resolution (on the order of millimeters) and can penetrate through the skull, deep into the brain. Sub-thermal tFUS has been shown to induce changes in EEG and fMRI, as well as in perception and mood. This study investigates the possibility of using tFUS to modulate brain networks involved in attention and cognitive control. Three different brain areas linked to saliency, cognitive control, and emotion within the cingulo-opercular network were stimulated with tFUS while subjects performed behavioral paradigms. The first study targeted the dorsal anterior cingulate cortex (dACC), which is associated with performance on cognitive attention tasks, conflict, error, and emotion. Subjects performed a variant of the Eriksen flanker task in which emotional faces (fearful, neutral, or scrambled) were displayed in the background as distractors. tFUS significantly reduced the reaction time (RT) delay induced by faces; there were significant differences between the tFUS and sham groups in event-related potentials (ERP), event-related spectral perturbation (ERSP), conflict and error processing, and heart rate variability (HRV).
The second study used the same behavioral paradigm, but targeted tFUS to the right anterior insula/frontal operculum (aIns/fO). The aIns/fO is implicated in saliency, cognitive control, interoceptive awareness, autonomic function, and emotion. tFUS was found to significantly alter ERP, ERSP, conflict and error processing, and HRV responses.
The third study targeted tFUS to the right inferior frontal gyrus (rIFG), employing the Stop Signal task to study inhibition. tFUS affected ERPs and improved stopping speed. Using network modeling, causal evidence is presented for rIFG influence on subcortical nodes in stopping.
This work provides preliminary evidence that tFUS can be used to modulate broader network function through a single node, affecting neurophysiological processing, physiological responses, and behavioral performance. Additionally, it can be used as a tool to elucidate network function. These studies suggest that tFUS has the potential to affect cognitive function as a clinical tool, and perhaps even to enhance wellbeing and expand conscious awareness.
Contributors: Fini, Maria Elizabeth (Author) / Tyler, William J (Thesis advisor) / Greger, Bradley (Committee member) / Santello, Marco (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The human ankle is a vital joint in the lower limb of the human body. As the point of interaction between the human neuromuscular system and the physical world, the ankle plays an important role in lower extremity functions, including postural balance and locomotion. Accurate characterization of ankle mechanics in lower extremity function is essential not only to advance the design and control of robots physically interacting with the human lower extremities, but also to improve the rehabilitation of humans suffering from neurodegenerative disorders.

In order to characterize the ankle mechanics and understand the underlying mechanisms that influence the neuromuscular properties of the ankle, a novel multi-axial robotic platform was developed. The robotic platform is capable of simulating various haptic environments and transiently perturbing the ankle to analyze the neuromechanics of the ankle, specifically the ankle impedance. Humans modulate ankle impedance to perform various tasks of the lower limb. The robotic platform is used to analyze the modulation of ankle impedance during postural balance and locomotion on various haptic environments. Further, various factors that influence modulation of ankle impedance were identified. Using the factors identified during environment dependent impedance modulation studies, the quantitative relationship between these factors, namely the muscle activation of major ankle muscles, the weight loading on ankle and the torque generation at the ankle was analyzed during postural balance and locomotion. A universal neuromuscular model of the ankle that quantitatively relates ankle stiffness, the major component of ankle impedance, to these factors was developed.

This neuromuscular model is then used as a basis to study the alterations caused in ankle behavior due to neurodegenerative disorders such as Multiple Sclerosis and Stroke. Pilot studies to validate the analysis of altered ankle behavior and demonstrate the effectiveness of robotic rehabilitation protocols in addressing the altered ankle behavior were performed. The pilot studies demonstrate that the altered ankle mechanics can be quantified in the affected populations and correlate with the observed adverse effects of the disability. Further, robotic rehabilitation protocols improve ankle control in affected populations as seen through functional improvements in postural balance and locomotion, validating the neuromuscular approach for rehabilitation.
Contributors: Nalam, Varun (Author) / Lee, Hyunglae (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Santello, Marco (Committee member) / Sugar, Thomas (Committee member) / Lockhart, Thurmon (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The term Poly-Limb stems from Polymelia, a rare birth defect syndrome. Although Poly-Limbs in nature have often been nonfunctional, humans have long been fascinated by functional Poly-Limbs. Science fiction has led us to believe that having Poly-Limbs leads to augmented manipulation abilities and higher work efficiency. Bringing this to life, however, requires a synergistic combination of robot manipulation and wearable robotics. Where traditional robots feature precision and speed in constrained environments, the emerging field of soft robotics features robots that are inherently compliant, lightweight, and cost-effective. These features highlight the applicability of soft robotic systems to the design of personal, collaborative, and wearable systems such as the Soft Poly-Limb.

This dissertation presents the design and development of three actuator classes, made from various soft materials, such as elastomers and fabrics. These materials are initially studied and characterized, leading to actuators capable of various motion capabilities, like bending, twisting, extending, and contracting. These actuators are modeled and optimized, using computational models, in order to achieve the desired articulation and payload capabilities. Using these soft actuators, modular integrated designs are created for functional tasks that require larger degrees of freedom. This work focuses on the development, modeling, and evaluation of these soft robot prototypes.

As a first step toward understanding whether humans can collaborate with a wearable Soft Poly-Limb, multiple versions of the Soft Poly-Limb were developed for assisting with daily living tasks. The system is evaluated not only for performance, but also for safety, customizability, and modularity. Efforts were also made to monitor the position and orientation of the Soft Poly-Limb's components through embedded soft sensors, and first steps were taken in developing self-powered components to bring the system out into the world. This work has pushed the boundaries of developing high power-to-weight soft manipulators that can interact side by side with a human user, and it builds the foundation upon which researchers can investigate whether the brain can support additional limbs and whether these systems can truly allow users to augment their manipulation capabilities to improve their daily lives.
Contributors: Nguyen, Pham Huy (Author) / Zhang, Wenlong (Thesis advisor) / Sugar, Thomas G. (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
With the advent of easy-to-use, portable, and cost-effective brain-signal sensing devices, pervasive Brain-Machine Interface (BMI) applications using electroencephalography (EEG) are growing rapidly. The main objectives of these applications are: 1) pervasive collection of brain data from multiple users, 2) processing the collected data to recognize the corresponding mental states, and 3) providing real-time feedback to the end users, activating an actuator, or harvesting information for enterprises to provide further services. Developing BMI applications faces several challenges, such as cumbersome setup procedures, low signal-to-noise ratios, insufficient signal samples for analysis, and long processing times. Internet-of-Things (IoT) technologies provide the opportunity to address these challenges through large-scale data collection, fast data transmission, and computational offloading.

This research proposes an IoT-based framework, called BraiNet, that provides a standard design methodology for fulfilling pervasive BMI application requirements, including accuracy, timeliness, energy efficiency, security, and dependability. BraiNet applies machine learning (ML) based solutions (e.g., classifiers and predictive models) to: 1) improve the accuracy of mental state detection on the go, 2) provide real-time feedback to the users, and 3) save power on mobile platforms. However, BraiNet inherits the security vulnerabilities of IoT, owing to its use of off-the-shelf software and hardware, high accessibility, and massive network size. ML algorithms, as the core technology for mental state recognition, are among the main targets for cyber attackers. Novel ML security solutions are proposed and added to BraiNet, providing analytical methodologies for tuning ML hyperparameters to be secure against attacks.

To implement these solutions, two main optimization problems are solved: 1) maximizing accuracy while minimizing delays and power consumption, and 2) maximizing ML security while keeping its accuracy high. Deep learning algorithms and delay and power models are developed to solve the former problem, while gradient-free optimization techniques, such as Bayesian optimization, are applied to the latter. To test the framework, several BMI applications are implemented, such as an EEG-based driver fatigue detector (SafeDrive), an EEG-based identification and authentication system (E-BIAS), and interactive movies that adapt to viewers' mental states (nMovie). The results from experiments on the implemented applications show the successful design of pervasive BMI applications based on the BraiNet framework.
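The second optimization problem, maximizing security while keeping accuracy high, can be sketched as a constrained, gradient-free search over a hyperparameter. The surrogate accuracy and robustness curves and the random-search strategy below are illustrative stand-ins; BraiNet's actual objectives and its Bayesian optimizer are not reproduced here.

```python
import random

# Hypothetical trade-off: heavier regularization hardens the model
# against attacks but costs some accuracy. Both curves are invented
# for illustration only.
def accuracy(reg):
    return 0.95 - 0.3 * reg      # accuracy degrades with regularization

def robustness(reg):
    return 0.4 + 0.5 * reg       # robustness improves with regularization

def tune(min_accuracy=0.85, trials=200, seed=7):
    """Gradient-free search: maximize robustness subject to an
    accuracy floor, by sampling candidate hyperparameter values."""
    rng = random.Random(seed)
    best_reg, best_rob = None, -1.0
    for _ in range(trials):
        cand = rng.uniform(0.0, 1.0)          # candidate hyperparameter
        if accuracy(cand) < min_accuracy:     # enforce the accuracy floor
            continue
        if robustness(cand) > best_rob:
            best_reg, best_rob = cand, robustness(cand)
    return best_reg, best_rob

reg, rob = tune()
# The search settles near the largest regularization that still meets
# the accuracy floor, i.e. the boundary of the feasible region.
```

A Bayesian optimizer would replace the uniform sampling with a surrogate model that proposes promising candidates, which matters when each evaluation (training and attacking a classifier) is expensive.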
Contributors: Sadeghi Oskooyee, Seyed Koosha (Author) / Gupta, Sandeep K S (Thesis advisor) / Santello, Marco (Committee member) / Li, Baoxin (Committee member) / Venkatasubramanian, Krishna K (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In recent years, brain signals have gained attention as a potential trait for biometric security systems, and laboratory systems have been designed. A real-world brain-based security system must be usable, accurate, and robust. While there have been developments in these aspects, challenges remain. With regard to usability, users need to provide a lengthy amount of data to be authenticated, compared to other traits such as fingerprints and faces. Furthermore, the majority of works use medical-grade sensors, which are more accurate than commercial ones but have a tedious setup process and are not mobile. Performance-wise, the current state of the art can provide acceptable accuracy on a small pool of users whose data are collected in a few sessions close to each other, but it still falls behind on a large pool of subjects over a longer time period. Finally, a brain-based security system should be robust against presentation attacks to prevent adversaries from gaining access to the system. This dissertation proposes E-BIAS (EEG-based Identification and Authentication System), a brain-mobile security system that makes contributions in three directions. First, it provides high performance on signals of shorter length collected by commercial sensors and processed with lightweight models to meet the computation and energy capacity of mobile devices. Second, to evaluate the system's robustness, a novel presentation attack was designed that challenged the literature's presumption of an intrinsic liveness property for brain signals. Third, to bridge the gap, I formulated and studied the brain liveness problem and proposed two solution approaches (model-aware and model-agnostic) to ensure liveness and enhance robustness against presentation attacks. Under each of the two solution approaches, several methods were suggested and evaluated against both synthetic and manipulative classes of attacks (a total of 43 different attack vectors).
Methods under both the model-aware and model-agnostic approaches succeeded in achieving an error rate of zero (0%). More importantly, such error rates were reached in the face of unseen attacks, which provides evidence of the generalization potential of the proposed solution approaches and methods. I suggest an adversarial workflow to facilitate attack-and-defense cycles and allow for enhanced generalization capacity in domains where the decision-making process is non-deterministic, such as cyber-physical systems (e.g., biometric/medical monitoring, autonomous machines, etc.). I utilized this workflow for the brain liveness problem and was able to iteratively improve the performance of both the designed attacks and the proposed liveness detection methods.
Contributors: Sohankar Esfahani, Mohammad Javad (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Santello, Marco (Committee member) / Dasgupta, Partha (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Locomotion in natural environments requires coordinated movements from multiple body parts and precise adaptations when changes in the environment occur. The contributions of motor cortex neurons underlying these behaviors are poorly understood, and especially little is known about how such contributions may differ based on the anatomical and physiological characteristics of neurons. To elucidate the contributions of motor cortical subpopulations to movements, the activity of motor cortical neurons, muscle activity, and kinematics were studied in the cat during a variety of locomotion tasks requiring accurate foot placement, including tasks involving both expected and unexpected perturbations of the movement environment. The roles of neurons with two types of characteristics were studied: the existence of somatosensory receptive fields located at the shoulder, elbow, or wrist of the contralateral forelimb; and the existence of projections through the pyramidal tract, including fast- and slow-conducting subtypes.

Distinct neuronal adaptations between simple and complex locomotion tasks were observed for neurons with different receptive field properties and fast- and slow-conducting pyramidal tract neurons. Feedforward and feedback-driven kinematic control strategies were observed for adaptations to expected and unexpected perturbations, respectively, during complex locomotion tasks. These kinematic differences were reflected in the response characteristics of motor cortical neurons receptive to somatosensory information from different parts of the forelimb, elucidating roles for the various neuronal populations in accommodating disturbances in the environment during behaviors. The results show that anatomical and physiological characteristics of motor cortical neurons are important for determining if and how neurons are involved in precise control of locomotion during natural behaviors.
Contributors: Stout, Eric (Author) / Beloozerova, Irina N (Thesis advisor) / Dounskaia, Natalia (Thesis advisor) / Buneo, Christopher A (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the central nervous system has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After returning the hand, subjects opened their eyes to verbally report the location where their fingertip had been. I then reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating the hand used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed structure of the pattern of errors has been constructed through experience, resulting in a systematically stable internal representation of arm location. Moreover, this representation is continuously being calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this structure. To this end, I used two different perturbation paradigms: 1) a virtual reality visuomotor adaptation paradigm to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased to a similar extent after each perturbation.
This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that sensory and motor changes may be two independent processes arising from the perturbation. Moreover, I propose that the internal representation of arm location might be constructed with a global solution and not capable of local changes.
Contributors: Rincon Gonzalez, Liliana (Author) / Helms Tillery, Stephen I (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012