This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, The Honors College theses submitted by undergraduate students.

Displaying 1 - 10 of 430

Description

One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Oftentimes, however, physical restrictions prevent a rover from traversing straight to a waypoint. Thus, knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rovers would constantly need to modify their orientation as they successively approach waypoints. Although there are various solutions to this problem, this research proposes smooth traversal of the waypoints by the rover using splines, as a quick and easily implemented approach. In addition, a rover was used to compare the smoothness of linear traversal with that of spline interpolation. The data collected illustrated that spline traversals had a lower rate of change in velocity over time, indicating that the rover performed more smoothly than with linear paths.
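As a rough illustration of spline-based waypoint traversal, the sketch below fits a parametric cubic spline through a handful of made-up 2-D waypoints with SciPy and reports how sharply the heading changes along the sampled path; the waypoints, spline parameters, and smoothness proxy are illustrative assumptions, not the DTM data or evaluation metrics used in the thesis.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical 2-D waypoints (x, y); stand-ins, not DTM-derived waypoints.
waypoints = np.array([[0, 0], [2, 1], [4, 0], [6, 2], [8, 1]], dtype=float)

# Fit a parametric cubic B-spline that passes through every waypoint (s=0).
tck, _ = splprep(waypoints.T, s=0, k=3)

# Sample the spline densely to obtain a smooth traversal path.
u_fine = np.linspace(0, 1, 200)
x_s, y_s = splev(u_fine, tck)

# Heading change per step: a crude proxy for how much the rover must
# re-orient. The spline varies gradually, unlike the sharp turns a
# piecewise-linear path forces at each waypoint.
heading = np.unwrap(np.arctan2(np.diff(y_s), np.diff(x_s)))
print("max heading change per step (rad):", np.abs(np.diff(heading)).max())
```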
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

As the complexity of robotic systems and applications grows rapidly, development of high-performance, easy-to-use, and fully integrated development environments for those systems is inevitable. Model-Based Design (MBD) of dynamic systems using engineering software such as Simulink® from MathWorks®, SciCos from the Metalau team, and SystemModeler® from Wolfram® is quite popular nowadays. These tools provide facilities for modeling, simulation, verification, and in some cases automatic code generation for desktop applications, embedded systems, and robots. For real-world implementation of models on the actual hardware, those models should be converted into compilable machine code either manually or automatically. Due to the complexity of robotic systems, manual translation from model to code is not a feasible or optimal solution, so we need to move toward automated code generation for such systems. MathWorks® offers code generation facilities, called Coder® products, for this purpose. However, in order to fully exploit the power of model-based design and code generation tools for robotic applications, we need to enhance those software systems by adding and modifying toolboxes, files, and other artifacts, as well as by developing guidelines and procedures. In this thesis, an effort has been made to propose a guideline as well as a Simulink® library, a StateFlow® interface API, and a C/C++ interface API to complete this toolchain for NAO humanoid robots. Thus, the model of the hierarchical control architecture can be easily and properly converted to code and built for implementation.
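As a loose illustration of how generated controller code is typically deployed on a robot, the sketch below runs a placeholder step function inside a fixed-rate loop and sends the resulting joint commands through the NAOqi ALMotion proxy; the `model_step` function, joint selection, loop rate, and robot address are all assumptions for illustration, not the library or interface APIs developed in the thesis.

```python
import time
from naoqi import ALProxy  # NAOqi Python SDK

motion = ALProxy("ALMotion", "127.0.0.1", 9559)  # hypothetical robot address

def model_step(sensed_angles):
    """Placeholder for a code-generated controller step (hypothetical)."""
    return sensed_angles  # identity controller: hold the current pose

joint_names = ["HeadYaw", "HeadPitch"]
dt = 0.02  # 50 Hz control loop

for _ in range(500):                              # run for roughly 10 seconds
    sensed = motion.getAngles(joint_names, True)  # read sensed joint angles
    command = model_step(sensed)                  # generated controller step
    motion.setAngles(joint_names, command, 0.2)   # apply at 20% of max speed
    time.sleep(dt)
```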
Contributors: Raji Kermani, Ramtin (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The development of advanced, anthropomorphic artificial hands aims to provide upper extremity amputees with improved functionality for activities of daily living. However, many state-of-the-art hands have a large number of degrees of freedom that can be challenging to control in an intuitive manner. Automated grip responses could be built into artificial hands in order to enhance grasp stability and reduce the cognitive burden on the user. To this end, three studies were conducted to understand how human hands respond, passively and actively, to unexpected perturbations of a grasped object along and about different axes relative to the hand. The first study investigated the effect of magnitude, direction, and axis of rotation on precision grip responses to unexpected rotational perturbations of a grasped object. A robust "catch-up response" (a rapid, pulse-like increase in grip force rate previously reported only for translational perturbations) was observed whose strength scaled with the axis of rotation. Using two haptic robots, we then investigated the effects of grip surface friction, axis, and direction of perturbation on precision grip responses for unexpected translational and rotational perturbations along three different hand-centric axes. A robust catch-up response was observed for all axes and directions for both translational and rotational perturbations. Grip surface friction had no effect on the stereotypical catch-up response. Finally, we characterized the passive properties of the precision grip-object system via robot-imposed impulse perturbations. The hand-centric axis associated with the greatest translational stiffness was different from that associated with the greatest rotational stiffness. This work expands our understanding of the passive and active features of precision grip, a hallmark of human dexterous manipulation. Biological insights such as these could be used to enhance the functionality of artificial hands and the quality of life for upper extremity amputees.
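As a rough sketch of how an automated grip response might be triggered, the code below flags a catch-up-like event by thresholding the rate of change of a grip-force signal; the threshold, sampling rate, and synthetic force trace are illustrative assumptions, not the experimental parameters or analysis from the thesis.

```python
import numpy as np

def detect_catch_up(grip_force, fs, rate_threshold=5.0):
    """Return the latency (s) of the first rapid rise in grip force rate,
    or None if the rate never exceeds the (illustrative) threshold."""
    force_rate = np.gradient(grip_force) * fs            # dF/dt in N/s
    onset_idx = np.argmax(force_rate > rate_threshold)   # first index over threshold
    if force_rate[onset_idx] <= rate_threshold:
        return None                                       # no event detected
    return onset_idx / fs

# Synthetic example: a baseline grip of 2 N with a pulse-like rise near 0.3 s.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
force = 2.0 + 1.5 / (1 + np.exp(-(t - 0.3) * 60))         # sigmoidal increase
print("estimated onset latency (s):", detect_catch_up(force, fs))
```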
Contributors: De Gregorio, Michael (Author) / Santos, Veronica J. (Thesis advisor) / Artemiadis, Panagiotis K. (Committee member) / Santello, Marco (Committee member) / Sugar, Thomas (Committee member) / Helms Tillery, Stephen I. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Human fingertips contain thousands of specialized mechanoreceptors that enable effortless physical interactions with the environment. Haptic perception capabilities enable grasp and manipulation in the absence of visual feedback, as when reaching into one's pocket or wrapping a belt around oneself. Unfortunately, state-of-the-art artificial tactile sensors and processing algorithms are no match for their biological counterparts. Tactile sensors must not only meet stringent practical specifications for everyday use, but their signals must also be processed and interpreted within hundreds of milliseconds. Control of artificial manipulators, ranging from prosthetic hands to bomb defusal robots, requires a constant reliance on visual feedback that is not entirely practical. To address this, we conducted three studies aimed at advancing artificial haptic intelligence. First, we developed a novel, robust, microfluidic tactile sensor skin capable of measuring normal forces on flat or curved surfaces, such as a fingertip. The sensor consists of microchannels in an elastomer filled with a liquid metal alloy. The fluid serves as both electrical interconnects and tunable capacitive sensing units, and enables functionality despite substantial deformation. The second study investigated the use of a commercially available, multimodal tactile sensor (BioTac sensor, SynTouch) to characterize edge orientation with respect to a body-fixed reference frame, such as a fingertip. Trained on data from a robot testbed, a support vector regression model was developed to relate haptic exploration actions to perception of edge orientation. The model performed comparably to humans in estimating edge orientation. Finally, the robot testbed was used to perceive small, finger-sized geometric features. The efficiency and accuracy of different haptic exploratory procedures and supervised learning models were assessed for estimating feature properties such as type (bump, pit), order of curvature (flat, conical, spherical), and size. This study highlights the importance of tactile sensing in situations where other modalities fail, such as when the finger itself blocks the line of sight. Insights from this work could be used to advance tactile sensor technology and haptic intelligence for artificial manipulators that improve quality of life, such as prosthetic hands and wheelchair-mounted robotic hands.
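As a rough sketch of the regression step, the code below trains a support vector regression model to map tactile feature vectors to an edge-orientation value using scikit-learn; the synthetic features, feature count, and hyperparameters are illustrative assumptions and not the BioTac data or model configuration used in the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for tactile features (e.g., per-contact sensor readings)
# paired with edge orientations; not data from the BioTac sensor or testbed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 19))                      # 19 features per contact (assumed)
true_w = rng.normal(size=19)
y = X @ true_w + rng.normal(scale=0.1, size=200)    # stand-in orientation values

# Standardize the features, then fit an RBF-kernel SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], y[:150])

# Evaluate on held-out contacts (R^2 score).
print("held-out R^2:", model.score(X[150:], y[150:]))
```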
Contributors: Ponce Wong, Ruben Dario (Author) / Santos, Veronica J (Thesis advisor) / Artemiadis, Panagiotis K (Committee member) / Helms Tillery, Stephen I (Committee member) / Posner, Jonathan D (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

MOVE was a choreographic project that investigated content in conjunction with the creative process. The yearlong collaborative creative process utilized improvisational and compositional experiments to research the movement potential of the human body, as well as movement's ability to be an emotional catalyst. Multiple showings were held to receive feedback from a variety of viewers. Production elements were designed in conjunction with the development of the evening-length dance work. As a result of discussion and research, several process-revealing sections were created to provide clear relationships between pedestrian/daily functional movement and technical movement. Each section within MOVE addressed movement as an emotional catalyst, resulting in a variety of emotional textures. The sections were placed in a non-linear structure in order for the audience to have the space to create their own connections between concepts. Community was developed in rehearsal via touch/weight sharing, and translated to the performance of MOVE via a communal, instinctive approach to the performance of the work. Community was also created between the movers and the audience via the design of the performance space. The production elements all revolved around the human body, and offered different viewpoints into various body parts. The choreographer, designers, and movers all participated in the creation of the production elements, resulting in a clear understanding of MOVE by the entire community involved. The overall creation, presentation, and reflection of MOVE offered a view into the choreographer's growth as a dance artist, and her values of people and movement.
Contributors: Peterson, Britta Joy (Author) / Fitzgerald, Mary (Thesis advisor) / Schupp, Karen (Committee member) / Mcneal Hunt, Diane (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Educed Play is a performance installation that investigates spontaneity and the invisible communication that can exist in improvisation and collaborative play. The work unites the mediums of dance, drawing, music, and video through improvisational performances. The multimedia installation entitled Educed Play was presented in the fall of 2012. Inspiration came from the idea of relics created by ephemeral interactions, using improvisation as a means to performance, and working within a genuine collaboration. This document encompasses an overview of the project.
Contributors: Ling, Amanda (Author) / Kaplan, Robert (Thesis advisor) / Standley, Eileen (Committee member) / Pittsley, Janice (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Electromyogram (EMG)-based control interfaces are increasingly used in robot teleoperation, prosthetic device control, and the control of robotic exoskeletons. Over the last two decades, researchers have come up with a plethora of decoding functions to map myoelectric signals to robot motions. However, this requires a lot of training and validation data sets, while the parameters of the decoding function are specific to each subject. In this thesis, we propose a new methodology that does not require training and is not user-specific. The main idea is to supplement the decoding function's error with the human ability to learn the inverse model of an arbitrary mapping function. We have shown that the subjects gradually learned the control strategy and that their learning rates improved. We also worked on identifying an optimized control scheme that would be even more effective and easier to learn for the subjects. Optimization was done by taking into account that muscles act in synergies while performing a motion task. The low-dimensional representation of the neural activity was used to control a two-dimensional task. Results showed that in the case of reduced-dimensionality mapping, the subjects learned to control the device at a slower pace; however, they were able to reach and retain the same level of controllability. To summarize, we were able to build an EMG-based controller for robotic devices that works for any subject, without any training or user-specific decoding function, suggesting human-embedded controllers for robotic devices.
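As a loose illustration of the dimensionality-reduction idea, the sketch below projects multi-channel EMG envelopes onto two components with PCA and treats the projection of the latest frame as a 2-D control command; the synthetic data, channel count, and the use of PCA (rather than whatever synergy-extraction method the thesis actually employed) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic multi-channel EMG envelopes (samples x muscles); stand-in data
# generated from two latent "synergies", not recordings from the study.
rng = np.random.default_rng(1)
n_samples, n_muscles = 5000, 8
latent = rng.normal(size=(n_samples, 2))               # two underlying synergies
mixing = rng.normal(size=(2, n_muscles))
emg = np.abs(latent @ mixing + 0.05 * rng.normal(size=(n_samples, n_muscles)))

# Learn a 2-D representation of muscle activity.
pca = PCA(n_components=2).fit(emg)

# Each new EMG frame then maps to a 2-D command for the task.
cursor_cmd = pca.transform(emg[-1:])                   # latest frame -> (x, y)
print("2-D control command:", cursor_cmd.ravel())
```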
Contributors: Antuvan, Chris Wilson (Author) / Artemiadis, Panagiotis (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan, and enables a variety of complex LTL specifications to be described succinctly and intuitively by the user without requiring knowledge or understanding of LTL. This opens avenues for its use by personnel in military, warehouse management, and search-and-rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the visual interface developed, and leverages existing LTL-based motion planners to have a robot carry out the task plan.
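As a small sketch of how an ordered set of waypoints might be turned into an LTL specification, the code below builds a standard sequential-visit formula with optional avoidance constraints; the propositions, formula pattern, and syntax are illustrative assumptions and not necessarily the encoding produced by the thesis's tablet interface.

```python
def sequential_visit_ltl(props):
    """Visit the given waypoint propositions in order:
    F (p1 & F (p2 & ... F pn))."""
    formula = f"F {props[-1]}"
    for p in reversed(props[:-1]):
        formula = f"F ({p} & {formula})"
    return formula

def with_avoidance(formula, obstacles):
    """Conjoin safety constraints that forbid entering obstacle regions."""
    if not obstacles:
        return formula
    avoid = " & ".join(f"G !{o}" for o in obstacles)
    return f"({formula}) & {avoid}"

# Example: visit w1, w2, w3 in order while always avoiding region o1.
spec = with_avoidance(sequential_visit_ltl(["w1", "w2", "w3"]), ["o1"])
print(spec)  # (F (w1 & F (w2 & F w3))) & G !o1
```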
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This thesis document encapsulates the findings of my research process in which I studied my self, my artistic process, and the interconnectivity among the various aspects of my life. Those findings are two-fold, as they relate to the creation of three original works and my personal transformation through the process. This document covers the three works, swimminginthepsyche, applecede, and The 21st Century Adventures of Wonder Woman, in chronological order of their performance dates. My personal growth and transformation are expressed throughout the paper and presented in the explanation of the emergent philosophical approach for self-study as creative practice that I followed. This creative-centered framework for embodied transformation weaves spiritual philosophy with my artistic process to sustain a holistic life practice, where the self, seen as an integrated whole, is also a direct reflection of the greater, singular, and holistic existence.
Contributors: DeWitt, Inertia Q.E.D (Author) / Mitchell, John D. (Thesis advisor) / Dyer, Becky (Committee member) / De La Garza, Sarah (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Embodied Continuity documents the methodology of Entangled/Embraced, a dance performance piece presented in December 2011 and created as an artistic translation of research conducted from January to May 2011 in the states of Karnataka and Kerala, South India. Focused on the sciences of Ayurveda, Kalaripayattu, and yoga, this research stems from an interest in body-mind connectivity, body-mind-environment continuity, embodied epistemology, and the implications of ethnography within artistic practice. The document begins with a theoretical grounding covering established research on theories of embodiment; ethnographic methodologies framing the research conducted in South India, including sensory ethnography, performance ethnography, and autoethnography; and an explanation of the sciences of Ayurveda, Kalaripayattu, and yoga with a descriptive slant that emphasizes the concepts of embodiment and body-mind-environment continuity uniquely inherent to these sciences. Following the theoretical grounding, the document provides an account of the methods used in translating theoretical concepts and experiences emerging from the research in India into the creation of the Entangled/Embraced dance work. Using dancer and audience member participation to inspire emergent meanings and maintain ethnographic consciousness, Embodied Continuity demonstrates how the concepts inspiring the research interests, the ideas emerging from within the research experiences, and the philosophical standpoints embedded in the chosen ethnographic methodologies weave into the entire project of Entangled/Embraced to unite the phases of research and performance, ethnography and artistry.
Contributors: Ramsey, Ashlee (Author) / Vissicaro, Pegge (Thesis advisor) / Standley, Eileen (Committee member) / Dove, Simon (Committee member) / Arizona State University (Publisher)
Created: 2012