Theses and Dissertations
Description
Visual odometry is one of the key aspects of robotic localization and mapping. Visual odometry encompasses many geometry-based approaches that convert visual data (images) into pose estimates of where the robot is in space. Classical geometric methods have shown promising results; they are carefully crafted and built explicitly for these tasks. However, such geometric methods require extensive fine-tuning and prior knowledge to set up for different scenarios. Classical geometric approaches also require significant post-processing and optimization to minimize the error between the estimated pose and the ground truth. In this body of work, the deep learning model was formed by combining SuperPoint and SuperGlue. The resulting model requires no prior fine-tuning and has been trained to handle both outdoor and indoor settings. The proposed deep learning model is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, alongside classical geometric visual odometry models. The deep learning model has not been trained on the KITTI dataset; it encounters the KITTI dataset for the first time during experimentation. The monocular grayscale images from the visual odometry files of the KITTI dataset are used in the experiment to test the viability of the models on different sequences. The experiment is performed on eight different sequences, recording the Absolute Trajectory Error and the computation time for each sequence. From the obtained results, inferences are drawn about the classical and deep learning approaches.
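As an illustrative aside (not taken from the thesis), Absolute Trajectory Error is commonly computed by rigidly aligning the estimated camera positions to the ground-truth trajectory and taking the root-mean-square of the remaining position errors. The sketch below assumes time-synchronized (N, 3) position arrays and uses the hypothetical function name ate_rmse; monocular pipelines often also estimate a scale factor during alignment, which is omitted here.

    import numpy as np

    def ate_rmse(gt_xyz, est_xyz):
        """Absolute Trajectory Error (RMSE) after rigid alignment (Kabsch, no scale).

        gt_xyz, est_xyz: (N, 3) arrays of time-synchronized camera positions.
        """
        mu_gt, mu_est = gt_xyz.mean(axis=0), est_xyz.mean(axis=0)
        gt_c, est_c = gt_xyz - mu_gt, est_xyz - mu_est

        # Optimal rotation aligning the estimate to the ground truth (Kabsch/SVD)
        H = est_c.T @ gt_c
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_gt - R @ mu_est

        # Apply the alignment, then take the RMSE of the residual position errors
        aligned = (R @ est_xyz.T).T + t
        err = np.linalg.norm(gt_xyz - aligned, axis=1)
        return float(np.sqrt((err ** 2).mean()))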
Contributors: Vaidyanathan, Venkatesh (Author) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
As people begin to live longer and the population shifts to having more older adults on Earth than young children, radical solutions will be needed to ease the burden on society. It will be essential to develop technology that can age with the individual. One solution is to keep older adults in their homes longer through smart home and smart living technology, allowing them to age in place. People have many options when choosing where to age in place, including their own homes, assisted living facilities, nursing homes, or the homes of family members. No matter where people choose to age, they may face isolation and financial hardship. It is therefore crucial to keep finances in mind when developing smart home technology.
Smart home technologies seek to allow individuals to stay in their homes for as long as possible, yet little work examines how technology can be used at different life stages. Robots are poised to impact society and ease burdens at home and in the workforce. Special attention has been given to social robots as a way to ease isolation. As social robots become accepted into society, researchers need to understand how these robots should mimic natural conversation. My work attempts to answer this question within social robotics by investigating how to make conversational robots natural and reciprocal.
I investigated this through a 2x2 Wizard of Oz between-subjects user study. The study lasted four months and tested four different levels of interactivity with the robot. None of the levels were significantly different from the others, an unexpected result. I then investigated the robot’s personality, the participants’ trust, and the participants’ acceptance of the robot, and how these factors influenced the study.
Contributors: Miller, Jordan (Author) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Cooke, Nancy (Committee member) / Bryan, Chris (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022