Description
Recent advancements in external memory-based neural networks have shown promise in solving tasks that require precise storage and retrieval of past information. Researchers have applied these models to a wide range of tasks with algorithmic properties, but have not yet applied them to real-world robotic tasks. In this thesis, we present memory-augmented neural networks that synthesize robot navigation policies which a) encode long-term temporal dependencies, b) make decisions in partially observed environments, and c) quantify the uncertainty inherent in the task. We extract information about the temporal structure of a task via imitation learning from human demonstration and evaluate the performance of the models on control policies for a robot navigation task. Experiments are performed in partially observed environments in both simulation and the real world.
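
As a rough illustration of the general approach the abstract describes (a policy network with an external memory, trained by behavior cloning on demonstrated trajectories), the sketch below shows one way such a setup could look in PyTorch. The architecture, dimensions, memory-addressing scheme, and training data are assumptions for illustration only, not the model or data used in the thesis.

import torch
import torch.nn as nn

# Illustrative sketch only (not the thesis's model): a recurrent controller with a
# simple content-addressed external memory, trained by behavior cloning on
# expert (observation, action) trajectories. All sizes and names are assumptions.

class MemoryPolicy(nn.Module):
    def __init__(self, obs_dim=16, act_dim=4, hidden=64, mem_slots=32, mem_width=20):
        super().__init__()
        self.controller = nn.LSTMCell(obs_dim + mem_width, hidden)
        self.read_key = nn.Linear(hidden, mem_width)    # content-based addressing key
        self.write_vec = nn.Linear(hidden, mem_width)   # value written into memory
        self.policy_head = nn.Linear(hidden + mem_width, act_dim)
        self.mem_slots, self.mem_width, self.hidden = mem_slots, mem_width, hidden

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); returns per-step action logits.
        b, t, _ = obs_seq.shape
        h = obs_seq.new_zeros(b, self.hidden)
        c = obs_seq.new_zeros(b, self.hidden)
        memory = obs_seq.new_zeros(b, self.mem_slots, self.mem_width)
        read = obs_seq.new_zeros(b, self.mem_width)
        logits = []
        for i in range(t):
            h, c = self.controller(torch.cat([obs_seq[:, i], read], dim=-1), (h, c))
            # Content-based read: softmax similarity between the key and memory rows.
            key = self.read_key(h)
            attn = torch.softmax(torch.bmm(memory, key.unsqueeze(-1)).squeeze(-1), dim=-1)
            read = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)
            # Simplified write: add the write vector to the most-attended slots.
            memory = memory + attn.unsqueeze(-1) * self.write_vec(h).unsqueeze(1)
            logits.append(self.policy_head(torch.cat([h, read], dim=-1)))
        return torch.stack(logits, dim=1)

# Behavior cloning on synthetic stand-in demonstrations (placeholders for real data).
policy = MemoryPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
obs = torch.randn(8, 50, 16)          # demonstration observations
acts = torch.randint(0, 4, (8, 50))   # demonstrated discrete actions
for epoch in range(5):
    logits = policy(obs)
    loss = loss_fn(logits.reshape(-1, 4), acts.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

In practice, the demonstrations would come from recorded human-driven navigation runs, and the memory read/write mechanism would be whichever external-memory architecture the thesis actually evaluates.
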
Downloads
pdf (3.1 MB)

Details

Title
  • Training Robot Policies using External Memory Based Networks Via Imitation Learning
Contributors
Date Created
  • 2018
Resource Type
  • Text
Collections this item is in
Note
  • Masters Thesis Computer Science 2018