Description

Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming. Path-planners are represented as Python code, which is optimized via Pareto evolution. These planners are encouraged to explore curiously and efficiently. This research asks the questions: “How can robots exhibit life-long learning where they adapt to changing environments in a robust way?”, and “How can robots learn to be curious?”.
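The Pareto-evolution step described above can be sketched as follows. This is a minimal illustrative example, not code from the thesis: the two objectives (path cost and unexplored area, both minimized) and the candidate names are assumptions made for the sketch.

```python
# Sketch of the Pareto selection step in multi-objective genetic
# programming: each candidate planner is scored on several objectives,
# and only non-dominated candidates survive to the next generation.

def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly
    better on at least one (all objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of (candidate, objectives) pairs."""
    return [
        (cand, scores) for cand, scores in population
        if not any(dominates(other, scores) for _, other in population)
    ]

# Example: (path_cost, unexplored_area) scores for four candidate planners.
population = [("p1", (3.0, 0.4)), ("p2", (2.0, 0.9)),
              ("p3", (2.5, 0.5)), ("p4", (4.0, 1.0))]
front = pareto_front(population)  # p4 is dominated and drops out
```

In a full evolutionary loop, the surviving front would then be mutated and recombined to form the next generation of planner programs.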

ContributorsSaldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
Education in computer science is a difficult endeavor, with learning a new programming language being a barrier to entry, especially for college freshmen and high school students. Learning a first programming language requires understanding the syntax of the language, the algorithms to use, and any additional complexities the language carries. Oftentimes this becomes a deterrent from learning computer science at all. Especially in high school, students may not want to spend a year or more simply learning the syntax of a programming language. To overcome these issues, and to mitigate those caused by Microsoft discontinuing their Visual Programming Language (VPL), we have decided to implement a new VPL, ASU-VPL, based on Microsoft's VPL. ASU-VPL provides an environment where users can focus on algorithms and worry less about syntactic issues. ASU-VPL was built with the concepts of Robot as a Service and workflow-based development in mind. As such, ASU-VPL is designed with the intention of allowing web services to be added to the toolbox (e.g. WSDL and REST services). ASU-VPL has strong support for multithreaded operations, including event-driven development, and is built with Microsoft VPL users in mind. It provides support for many different robots, including Lego's third-generation robots, i.e. the EV3, and any open-platform robots. To demonstrate the capabilities of ASU-VPL, this paper details the creation of an Intel Edison based robot and the use of ASU-VPL for programming both the Intel-based robot and an EV3 robot. This paper will also discuss differences between ASU-VPL and Microsoft VPL as well as differences between developing for the EV3 and for an open-platform robot.
ContributorsDe Luca, Gennaro (Author) / Chen, Yinong (Thesis director) / Cheng, Calvin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description
The goal of this project was to use the sense of touch to investigate tactile cues during multidigit rotational manipulations of objects. A robotic arm and hand equipped with three multimodal tactile sensors were used to gather data about skin deformation during rotation of a haptic knob. Three different rotation speeds and two levels of rotation resistance were used to investigate tactile cues during knob rotation. In the future, this multidigit task can be generalized to similar rotational tasks, such as opening a bottle or turning a doorknob.
ContributorsChalla, Santhi Priya (Author) / Santos, Veronica (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Earth and Space Exploration (Contributor)
Created2014-05
Description
As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems that are capable of detecting instability in teleoperated systems. Although much research has been done to characterize the spatiotemporal parameters of human arm motions for reaching and grasping, not much has been done to characterize the behavior of human arm motion in response to control errors in a system. The scope of this investigation is to examine human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies that are capable of mitigating potential instabilities inherent in human-machine control interfaces. Characterization of human corrective actions requires the simulation of a teleoperated anthropomorphic armature and the comparison of a human subject's arm kinematics, in response to error, against the human arm kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. The comparison of error-free human arm kinematics to error-prone human arm kinematics revealed the addition of a bell-shaped velocity peak in the human subject's tangential velocity profile. The size, extent, and location of the additional velocity peak depended on target location and joint angle error. Some joint angle and target location combinations do not produce an additional peak but simply maintain the end effector velocity at a low value until the target is reached.
Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
ContributorsBevilacqua, Vincent Frank (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Trimble, Steven (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2013-05
Description
I worked on the human-machine interface to improve human physical capability. This work was done in the Human Oriented Robotics and Control Lab (HORC) towards the creation of an advanced, EMG-controlled exoskeleton. The project was new, and any work on the human-machine interface needs the physical interface itself. So I designed and fabricated a human-robot coupling device with a novel safety feature. The validation testing of this coupling proved very successful, and the device was granted a provisional patent as well as published to facilitate its spread to other human-machine interface applications, where it could be of major benefit. I then employed this coupling in experimentation towards understanding impedance, with the end goal being the creation of an EMG-based impedance exoskeleton control system. I modified a previously established robot-to-human perturbation method for use in my novel, three-dimensional (3D) impedance measurement experiment. Upon execution of this experiment, I was able to successfully characterize passive, static human arm stiffness in 3D, and in doing so validated the aforementioned method. This establishes an important foundation for promising future work on understanding impedance and the creation of the proposed control scheme, thereby furthering the field of human-robot interaction.
ContributorsO'Neill, Gerald D. (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2013-05
Description
The purpose of this project is to design a waterproof magnetic coupling that will allow the actuators on remotely operated vehicles (ROVs) to remain watertight in extreme underwater conditions for long periods of time. ROVs are tethered mobile robots controlled and powered by an operator from some distance away at the surface of the water. These vehicles all require some method for transmitting power to the surrounding water to interact with their environment, such as in thrusters for propulsion or a claw for manipulation. Many commercially available thrusters, for example, use shaft seals to transfer power through a waterproof housing to the adjacent water. Even though this works excellently for many of them, I propose that having a static seal and transmitting the power from the motor to the shaft through magnetic coupling will allow them to remain waterproof at much greater depths. In addition, it will not require the chronic maintenance that dynamic shaft seals entail, making long scientific endeavors possible.
ContributorsHouda, Jonathon Jacob (Author) / Foy, Joseph (Thesis director) / Zhu, Haolin (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2014-05
Description
This thesis focused on grasping tasks with the goal of investigating, analyzing, and quantifying human catching trends by way of a mathematical model. The aim of this project was to study human trends in a dynamic grasping task (catching a rolling ball), relate those discovered trends to kinematic characteristics of the object, and use this relation to control a robot hand in real time. As an ultimate goal, it was hoped that this research would aid in furthering the bio-inspiration in robot control methods. To achieve the above goal, firstly a tactile sensing glove was developed. This instrument allowed for in-depth study of human reactionary grasping movements when worn by subjects during experimentation. This sensing glove system recorded force data from the palm and motion data from four fingers. From these data sets, temporal trends were established relating to when subjects initiated grasping during each trial. Moreover, optical tracking was implemented to study the kinematics of the moving object during human experiments and also to close the loop during the control of the robot hand. Ultimately, a mathematical bio-inspired model was created. This was embodied in a two-term decreasing power function which related the temporal trend of wait time to the ball's initial acceleration. The wait time is defined as the time between when the experimental conductor releases the ball and when the subject begins to initiate grasping by closing their fingers, over a distance of four feet. The initial acceleration is the first acceleration value of the object due to the force provided when the conductor throws the object. The distance over which the ball was thrown was incorporated into the model. This is discussed in depth within the thesis. Overall, the results presented here show promise for bio-inspired control schemes in the successful application of robotic devices.
This control methodology will ideally be developed to move robotic prostheses past discrete tasks and into more complicated activities.
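A two-term decreasing power function of the kind described above can be sketched as follows. The functional form and all coefficient values here are illustrative assumptions, not the fitted values reported in the thesis.

```python
# Sketch of a two-term decreasing power-law model relating a subject's
# wait time to the ball's initial acceleration: wait = a * accel**b + c,
# with b < 0 so the function decreases. The coefficients a, b, c are
# made-up illustrative values.

def wait_time(initial_accel, a=0.8, b=-0.5, c=0.2):
    """Predicted wait time (s) as a function of initial acceleration.

    Larger initial accelerations (harder throws) predict shorter waits
    before the subject begins closing the fingers."""
    return a * initial_accel ** b + c

slow = wait_time(1.0)  # gentle roll: longer predicted wait
fast = wait_time(9.0)  # hard throw: shorter predicted wait
```

A fitted version would estimate a, b, and c from the glove's recorded wait times and the optically tracked initial accelerations, e.g. by nonlinear least squares.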
ContributorsCard, Dillon (Co-author) / Mincieli, Jennifer (Co-author) / Artemiadis, Panagiotis (Thesis director) / Santos, Veronica (Committee member) / Middleton, James (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / W. P. Carey School of Business (Contributor)
Created2014-05
Description
This thesis focused on understanding how humans visually perceive swarm behavior through the use of swarm simulations and gaze tracking. The goal of this project was to determine visual patterns subjects display while observing and supervising a swarm, as well as to determine what swarm characteristics affect these patterns. As an ultimate goal, it was hoped that this research would contribute to optimizing human-swarm interaction for the design of human supervisory controllers for swarms. To achieve the stated goals, two investigations were conducted. First, subjects' gaze was tracked while they observed a simulated swarm as it moved across the screen. This swarm changed in size, disturbance level in the position of the agents, speed, and path curvature. Second, subjects were asked to play a supervisory role as they watched a swarm move across the screen toward targets. The subjects determined whether a collision would occur and with which target, while their responses as well as their gaze were tracked. In the case of an observatory role, a model of human gaze was created. This was embodied in a second-order model similar to that of a spring-mass-damper system. This model was similar across subjects and stable. In the case of a supervisory role, inherent weaknesses in human perception were found, such as the inability to predict the future position of curved paths. These findings are discussed in depth within the thesis. Overall, the results presented suggest that understanding human perception of swarms offers a new approach to the problem of swarm control. The ability to adapt controls to human strengths and weaknesses could lead to great strides in reducing the number of operators needed per vehicle, resulting in a move towards one-person operation of a swarm.
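A second-order, spring-mass-damper-style gaze model of the kind described above can be sketched as a simple simulation. The stiffness, damping, and mass values below are illustrative assumptions, not fitted subject parameters.

```python
# Sketch of a spring-mass-damper gaze model: gaze position x is pulled
# toward a target (e.g. the swarm centroid) like a damped oscillator,
# m*x'' = k*(target - x) - c*x'. Integrated with semi-implicit Euler.

def simulate_gaze(target, k=20.0, c=8.0, m=1.0, dt=0.001, steps=5000):
    """Simulate the gaze response from rest at x = 0 and return the
    final gaze position, which should settle at the target."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (k * (target - x) - c * v) / m  # spring + damper acceleration
        v += a * dt
        x += v * dt
    return x

final = simulate_gaze(target=1.0)  # settles near the target position
```

Fitting k, c, and m per subject to recorded gaze traces would yield the kind of cross-subject model comparison the thesis describes.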
ContributorsWhitton, Elena Michelle (Author) / Artemiadis, Panagiotis (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2015-05
Description
This thesis presents a process by which a controller used for collective transport tasks is qualitatively studied and probed for the presence of undesirable equilibrium states that could entrap the system and prevent it from converging to a target state. Fields of study relevant to this project include dynamic system modeling, modern control theory, script-based system simulation, and autonomous systems design. The simulation and computational software MATLAB and Simulink® were used in this thesis.
To achieve this goal, a model of a swarm performing a collective transport task in a bounded domain featuring convex obstacles was simulated in MATLAB/Simulink®. The closed-loop dynamic equations of this model were linearized about an equilibrium state with angular acceleration and linear acceleration set to zero. The simulation was run over 30 times to confirm the system's ability to successfully transport the payload to a goal point without colliding with obstacles, and to determine ideal operating conditions by testing various orientations of objects in the bounded domain. An additional purely MATLAB simulation was run to identify local minima of the Hessian of the navigation-like potential function. By calculating this Hessian periodically throughout the system's progress and determining the signs of its eigenvalues, a system could check whether it is trapped in a local minimum, and potentially dislodge itself through implementation of a stochastic term in the robot controllers. The eigenvalues of the Hessian calculated in this research suggested the model's local minima were degenerate, indicating an error in the mathematical model for this system, which was likely introduced during linearization of this highly nonlinear system.
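The Hessian eigenvalue check described above can be sketched as follows (in Python rather than MATLAB). The quadratic test functions stand in for the navigation-like potential, and the finite-difference step size is an assumption of the sketch.

```python
# Sketch of a local-minimum check: estimate the Hessian of a potential
# numerically at the current state and inspect its eigenvalue signs.
# All positive: strict local minimum; a zero eigenvalue would signal a
# degenerate critical point like those discussed above.
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Central-difference estimate of the Hessian of f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def is_strict_minimum(f, x, tol=1e-6):
    """True when every Hessian eigenvalue at x is positive."""
    eig = np.linalg.eigvalsh(numerical_hessian(f, x))
    return bool(np.all(eig > tol))

bowl = lambda p: float(p[0] ** 2 + 2 * p[1] ** 2)  # strict minimum at origin
saddle = lambda p: float(p[0] ** 2 - p[1] ** 2)    # saddle, not a minimum
```

Run periodically along the transport trajectory, such a check would let the controller decide when to inject the stochastic dislodging term.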
Created2020-12
Description
A common design of multi-agent robotic systems requires a centralized master node, which coordinates the actions of all the agents. The multi-agent system designed in this project enables coordination between the robots and reduces the dependence on a single node in the system. This design change reduces the complexity of the central node, and makes the system more adaptable to changes in its topology. The final goal of this project was to have a group of robots collaboratively claim positions in pre-defined formations, and navigate to the position using pose data transmitted by a localization server.
Planning coordination between robots in a multi-agent system requires each robot to know the position of the other robots. To address this, the localization server tracked visual fiducial markers attached to the robots and relayed their pose to every robot at a rate of 20Hz using the MQTT communication protocol. The robots used this data to inform a potential fields path planning algorithm and navigate to their target position.
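The potential fields step described above can be sketched as follows. The gains, influence radius, and step size are illustrative assumptions, not the project's values, and the goal and obstacle coordinates are made up for the example.

```python
# Sketch of potential fields path planning: the robot descends an
# attractive potential toward its claimed formation slot while a
# repulsive term pushes it away from other robots within an influence
# radius. Each call advances the robot one fixed-length step.
import math

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=0.5, influence=1.0, step=0.1):
    """Return the next 2-D position after one gradient-descent step."""
    # Attractive force: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force from each obstacle inside the influence radius.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0  # normalize to a fixed step length
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos = (0.0, 0.0)
for _ in range(200):  # steer toward (2, 0) around a robot at (1, 0.3)
    pos = potential_field_step(pos, goal=(2.0, 0.0), obstacles=[(1.0, 0.3)])
```

In the system described above, `goal` would be the claimed formation position and `obstacles` the poses of the other robots received over MQTT.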
This project was unable to address all of the challenges facing true distributed multi-agent coordination and needed to make concessions in order to meet deadlines. Further research would focus on shoring up these deficiencies and developing a more robust system.
ContributorsThibeault, Quinn (Author) / Meuth, Ryan (Thesis director) / Chen, Yinong (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05