
Data-Efficient Reinforcement Learning Control of Robotic Lower-Limb Prosthesis With Human in the Loop

Full metadata

Title
Data-Efficient Reinforcement Learning Control of Robotic Lower-Limb Prosthesis With Human in the Loop
Description

Robotic lower-limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the fact that the impedance control parameters must be tuned and optimized manually by prosthetists for each individual user in different task environments. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, which makes it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning from a limited number of samples, or from samples with large variations. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.
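
Robotic knee prostheses in this line of work typically use a finite-state impedance controller whose stiffness, damping, and equilibrium-angle parameters are set per gait phase; those parameters are what the prosthetist, or an RL tuner, adjusts for each user. The sketch below illustrates that parameter structure only; the phase names and numeric values are assumptions, not taken from the dissertation.

```python
import numpy as np

# Minimal sketch of a finite-state impedance controller for a robotic knee.
# Each gait phase has its own stiffness K, damping B, and equilibrium angle theta_e;
# these are the parameters a prosthetist (or an RL tuner) adjusts per user.
# Phase names and numeric values are illustrative assumptions only.
IMPEDANCE_PARAMS = {
    "stance_flexion":   {"K": 3.0, "B": 0.05, "theta_e": np.deg2rad(10.0)},
    "stance_extension": {"K": 2.5, "B": 0.04, "theta_e": np.deg2rad(5.0)},
    "swing_flexion":    {"K": 1.0, "B": 0.02, "theta_e": np.deg2rad(60.0)},
    "swing_extension":  {"K": 1.2, "B": 0.03, "theta_e": np.deg2rad(5.0)},
}

def knee_torque(phase: str, theta: float, omega: float) -> float:
    """Impedance law: tau = -K * (theta - theta_e) - B * omega."""
    p = IMPEDANCE_PARAMS[phase]
    return -p["K"] * (theta - p["theta_e"]) - p["B"] * omega

# Example: commanded torque during swing flexion at a 45-degree knee angle, 1.5 rad/s velocity.
print(knee_torque("swing_flexion", np.deg2rad(45.0), 1.5))
```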

This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared with several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.
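
The FPI algorithm itself is not spelled out in this abstract; as a point of reference, the sketch below shows only the generic policy-evaluation / policy-improvement loop over a fixed batch of transitions (offline, off-policy learning) that such a framework builds on, run on a toy tabular MDP. It is an assumption-laden illustration, not the dissertation's method.

```python
import numpy as np

# Generic batch (offline) approximate policy iteration on a toy tabular MDP.
# This is NOT the dissertation's FPI algorithm; it only sketches the
# evaluation/improvement structure, using a fixed batch of transitions.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Toy dynamics P[s, a, s'] and rewards R[s, a] (assumed for illustration only).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

# Offline batch of transitions collected under a uniform-random behavior policy.
batch = []
for _ in range(2000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s2 = rng.choice(n_states, p=P[s, a])
    batch.append((s, a, R[s, a], s2))

policy = np.zeros(n_states, dtype=int)
Q = np.zeros((n_states, n_actions))
for _ in range(10):                               # policy-iteration loop
    for _ in range(20):                           # approximate policy evaluation from the batch
        targets = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s2 in batch:
            targets[s, a] += r + gamma * Q[s2, policy[s2]]
            counts[s, a] += 1
        Q = np.divide(targets, counts, out=np.zeros_like(Q), where=counts > 0)
    policy = Q.argmax(axis=1)                     # greedy policy improvement

print("Greedy policy after batch policy iteration:", policy)
```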

Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), which is motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. These results again demonstrated that data and time efficiency were improved by using prior knowledge.
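
One common way such transfer is realized is by warm-starting the learner with value estimates obtained on the source subject. The sketch below shows that idea on a toy problem; the "source" and "target" tasks, their reward shift, and all parameters are hypothetical, and the code is a stand-in for, not a reproduction of, the RL-IT / KG-RL scheme.

```python
import numpy as np

# Knowledge-guided Q-learning via warm-starting: a Q-table learned on a "source"
# (able-bodied-like) task initializes learning on a perturbed "target"
# (amputee-like) task, so fewer target samples are needed. Illustrative only.
rng = np.random.default_rng(1)
n_states, n_actions, gamma, alpha, eps = 8, 3, 0.9, 0.1, 0.2

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # toy dynamics
R_src = rng.normal(size=(n_states, n_actions))                      # source rewards
R_tgt = R_src + 0.1 * rng.normal(size=(n_states, n_actions))        # slightly shifted target

def q_learning(R, Q, episodes, horizon=30):
    """Standard epsilon-greedy Q-learning starting from the given Q-table."""
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(horizon):
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2 = rng.choice(n_states, p=P[s, a])
            Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q_src = q_learning(R_src, np.zeros((n_states, n_actions)), episodes=500)
Q_warm = q_learning(R_tgt, Q_src.copy(), episodes=50)                       # transferred start
Q_cold = q_learning(R_tgt, np.zeros((n_states, n_actions)), episodes=50)    # learning from scratch
print("Warm-start policy:", Q_warm.argmax(axis=1))
print("Cold-start policy:", Q_cold.argmax(axis=1))
```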

While the present study is new and promising, there are still many open questions to be addressed in future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as gait symmetry, user comfort and satisfaction, and user energy consumption. To make RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascent or descent, and from walking to running.

Date Created
2020
Contributors
  • Gao, Xiang (Author)
  • Si, Jennie (Thesis advisor)
  • Huang, He Helen (Committee member)
  • Santello, Marco (Committee member)
  • Papandreou-Suppappola, Antonia (Committee member)
  • Arizona State University (Publisher)
Topical Subject
  • Electrical Engineering
  • Computer Science
  • robotics
  • adaptive optimal control
  • data-efficient learning
  • Human-in-the-loop
  • Reinforcement Learning
  • robotic knee
  • Transfer Learning
Resource Type
Text
Genre
Doctoral Dissertation
Academic theses
Extent
147 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
ASU Electronic Theses and Dissertations
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.56958
Level of coding
minimal
System Created
  • 2020-06-01 08:00:17
System Modified
  • 2021-08-26 09:47:01
Additional Formats
  • OAI Dublin Core
  • MODS XML
