This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, The Honors College theses, submitted by undergraduate students.

Description
The goal of this project is to develop a new method of generating GPS waypoints for more efficient terrain mapping with a UAV. To create a map of a desired terrain, a UAV captures images at particular GPS locations, and these images are then stitched together to form a complete map of the terrain. To generate a good map by image stitching, the images should overlap by a certain percentage. In highly windy conditions, a UAV may not capture an image at the desired GPS location, which disturbs the desired percentage of overlap between images, both frontal and sideways, causing discrepancies when the images are stitched together. The exact GPS locations at which the images were captured can be found in the flight logs stored in the ground control station and the autopilot board. The objective is to examine the flight logs and identify the waypoints at which the UAV may have swayed from the desired flight path. If there are locations where the flight deviated from the intended path, the code generates a new set of waypoints for a correction flight. This saves time during image stitching, making the whole process faster and more efficient.
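The log-checking step described above amounts to comparing planned and logged capture positions and re-flying any waypoint whose deviation exceeds a tolerance. A minimal sketch, where the function name, local coordinate frame, and tolerance are illustrative rather than taken from the thesis:

```python
def correction_waypoints(planned, actual, tolerance_m):
    """Return the planned waypoints the UAV missed by more than tolerance_m.

    planned, actual: lists of (x, y) positions in metres in a local frame,
    paired by index from the flight log.
    """
    missed = []
    for p, a in zip(planned, actual):
        # Euclidean deviation between intended and logged capture position
        deviation = ((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5
        if deviation > tolerance_m:
            missed.append(p)  # re-fly the original waypoint
    return missed
```

The missed waypoints then form the route of the correction flight, so only the gaps in overlap are re-flown rather than the whole survey.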
ContributorsGhadage, Prasannakumar Prakashrao (Author) / Saripalli, Srikanth (Thesis advisor) / Berman, Spring M (Thesis advisor) / Thangavelautham, Jekanthan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Small metallic parts less than 1 mm in size, with features measured in tens of microns and tolerances as small as 0.1 micron, are in demand for research in many fields such as electronics, optics, and biomedical engineering. Because of various drawbacks of non-mechanical micromanufacturing processes, micromilling has shown itself to be an attractive alternative manufacturing method. Micromilling is a microscale manufacturing process that can produce a wide range of small parts, including those with complex 3-dimensional contours. Although the micromilling process is superficially similar to conventional-scale milling, its physics are unique due to scale effects, which occur because parameters do not scale equally from macroscale to microscale milling. One key example of scale effects in micromilling is a geometric source of error known as chord error, which limits the feedrate to a reduced value in order to produce features within machining tolerances. In this research, it is hypothesized that the increase of chord error in micromilling can be alleviated by intelligent modification of the kinematic arrangement of the micromilling machine. Currently, all 3-axis micromilling machines are constructed with a Cartesian kinematic arrangement of three perpendicular linear axes. In this research, a cylindrical kinematic arrangement is introduced, and an analytical expression for its chord error is derived. Numerical simulations evaluate the chord errors for the cylindrical kinematic arrangement, which is found to give reduced chord error for some types of desired toolpaths. Kinematic redundancy is then introduced to design a novel kinematic arrangement, and several desired toolpaths are numerically simulated to evaluate its chord error. It is concluded that this arrangement reduces the error by up to a factor of 5 for all the desired toolpaths considered, and allows significant gains in allowable feedrates.
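For reference, the chord error of linearly interpolating a circular toolpath is the sagitta of each chord, which ties the allowable feedrate to the path radius and the tolerance. The sketch below uses the standard textbook relation, not the thesis's derivation for the cylindrical or redundant arrangements:

```python
import math

def chord_error(radius, chord):
    """Sagitta of a chord on a circle of the given radius: the deviation
    between the linearly interpolated segment and the true arc."""
    return radius - math.sqrt(radius**2 - (chord / 2) ** 2)

def max_feedrate(radius, tolerance, period):
    """Largest feedrate (chord length per interpolation period) that keeps
    the chord error within tolerance, inverting the sagitta relation:
    L = sqrt(8*r*e - 4*e^2)."""
    chord = math.sqrt(8 * radius * tolerance - 4 * tolerance**2)
    return chord / period
```

The tighter the tolerance or the smaller the path radius, the shorter the permissible chord, and hence the lower the feedrate; this is the limit the alternative kinematic arrangements aim to relax.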
ContributorsChukewad, Yogesh Madhavrao (Author) / Sodemann, Angela A (Thesis advisor) / Davidson, Joseph K. (Thesis advisor) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created2014
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data were collected as a team of humans worked to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions were tested emulating a remotely controlled robot versus an intelligent one, and differences in performance, situation awareness, trust, workload, and communications were measured. The intelligent robot condition resulted in higher levels of performance and operator situation awareness (SA).
ContributorsBartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created2015
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
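The simultaneous motion-and-stiffness idea can be illustrated with a minimal linear decode, where an antagonist-weighted sum of EMG amplitudes gives a proportional motion command and the co-contraction level gives a stiffness command. The weights and the fixed linear form below are illustrative only; the dissertation learns a flexible synergy-based mapping rather than fixing one:

```python
def decode_emg(emg, motion_weights):
    """Map an EMG amplitude vector to (motion, stiffness) commands.

    motion: weighted combination of muscle activations, with antagonist
    muscles given opposite-sign weights (proportional control).
    stiffness: mean activation across all muscles, a simple
    co-contraction index (illustrative, not the learned model).
    """
    motion = sum(w * a for w, a in zip(motion_weights, emg))
    stiffness = sum(emg) / len(emg)
    return motion, stiffness
```

With this kind of decode, co-contracting antagonists raises the stiffness command while leaving the motion command near zero, which is the behavior the abstract describes as intuitive compliant interaction.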
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
As the robotics industry becomes increasingly present in some of the more extreme environments, such as battlefields, disaster sites, or extraplanetary exploration, it will be necessary to provide locomotion strategies optimal for each terrain niche. The hopping gait has been well studied in robotics and proven to be a potential fit for some of these niches. However, it has been difficult to produce terrain-following controllers that maintain a robust, disturbance-resistant steady state.

This thesis discusses a controller that has shown the ability to produce these desired properties. A phase angle oscillator controller is shown to work remarkably well, both in simulation and on a one-degree-of-freedom robotic test stand.
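A phase angle oscillator in this spirit reads the gait phase directly from the robot's state, so the forcing depends on motion rather than on a clock. A minimal sketch, in which the gain, normalization, and sign conventions are assumptions rather than the thesis's tuned controller:

```python
import math

def phase_force(position, velocity, omega, gain):
    """Time-free forcing from a phase angle oscillator.

    The phase is computed from the (position, velocity) state itself, so
    the forcing stays synchronized with the hopping limit cycle without
    any explicit timing. With this convention the force is in phase with
    velocity, injecting energy into the cycle like negative damping.
    """
    phi = math.atan2(velocity / omega, position)  # current phase angle
    return gain * math.sin(phi)
```

Because the phase comes from the measured state, a disturbance that slows or shifts the cycle automatically shifts the forcing with it, which is the source of the controller's robustness.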

Work was also done with an experimental quadruped with less successful results, but which did show potential for stability. Additional work is suggested for the quadruped.
ContributorsNew, Philip Wesley (Author) / Sugar, Thomas G. (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created2015
Description
As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real-world, real-time robotic systems. Recent decades have seen those issues gradually recede, and fast planning systems are now the norm rather than the exception. However, some of these advances in speed and scalability have been achieved by ignoring or abstracting away challenges that real-world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges, inspired by real-world scenarios, that must be addressed to make such planning seamless are presented: (i) goals that can be specified or changed at execution time, after the planning process has completed; (ii) worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) models that are incomplete and can be changed during execution; and (iv) information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents, and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real-world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
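The interleaving of execution and replanning that challenges (i)-(iii) require can be sketched as a monitor-and-replan loop; the function signatures here are illustrative, not the fielded prototype's API:

```python
def execute_with_replanning(state, goals, plan_fn, apply_fn, max_steps=100):
    """Interleave plan execution with replanning.

    plan_fn(state, goals) -> list of actions (empty when goals hold);
    apply_fn(state, action) -> (new_state, succeeded).
    Whenever an action fails (the world changed, the model was wrong, or
    the goals were revised), a fresh plan is requested from the current
    state rather than continuing the stale one.
    """
    plan = plan_fn(state, goals)
    for _ in range(max_steps):
        if not plan:
            return state  # planner reports nothing left to do
        action = plan.pop(0)
        state, ok = apply_fn(state, action)
        if not ok:
            plan = plan_fn(state, goals)  # replan from the new state
    return state
```

Goal changes at execution time fit the same loop: updating `goals` and forcing a replan is equivalent to an action failure from the planner's point of view.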
ContributorsTalamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created2014
Description
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, human listeners could perceive motion in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing to serve as the landmarks. The task was to localize a noise played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The listeners localized the sound sources better with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an extended Kalman filter could be used to localize sound sources recursively. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
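The second experiment's stimulus can be sketched as raised-cosine amplitude envelopes with consecutive quarter-cycle delays across four sources, so the envelope peak rotates around the loudspeaker ring; the envelope form and parameters below are illustrative, not the exact experimental values:

```python
import math

def envelope_gains(t, mod_rate_hz, n_sources=4):
    """Raised-cosine amplitude envelopes for n sources on a ring.

    Each source's envelope is delayed by 1/n of the modulation cycle, so
    the location of maximum loudness sweeps around the array at the
    modulation rate, which listeners can perceive as auditory motion.
    """
    return [
        0.5 * (1 + math.cos(2 * math.pi * (mod_rate_hz * t - i / n_sources)))
        for i in range(n_sources)
    ]
```

At any instant the gains sum to a constant, so the overall level is steady while only the apparent location moves.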
ContributorsZhong, Xuan (Author) / Yost, William (Thesis advisor) / Zhou, Yi (Committee member) / Dorman, Michael (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created2015
Description
Wearable robots, including exoskeletons, powered prosthetics, and powered orthotics, must add energy to the person at an appropriate time to enhance, augment, or supplement human performance. Adding energy out of sync with the user can dramatically hurt performance, making correct timing with the user essential. Many human tasks, such as walking, running, and hopping, are repeating or cyclic tasks, and a robot can add energy in sync with the repeating pattern for assistance. A method has been developed to add energy to the repeating limit cycle at the appropriate time based on a phase oscillator. The phase oscillator eliminates time from the forcing function, which is based purely on the motion of the user. This approach has been simulated, implemented, and tested in a robotic backpack that facilitates carrying heavy loads. The device oscillates the load of the backpack, based on the motion of the user, in order to add energy at the correct time and thus reduce the energy required for walking with a heavy load. Models were developed in Working Model 2-D, a dynamics simulation package, in conjunction with MATLAB to verify the theory and test control methods. The control system is robust and has successfully operated on a range of different users, each with their own distinct gait. Experimental testing validated the corresponding models.
ContributorsWheeler, Chase (Author) / Sugar, Thomas G. (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2014
Description
Fisheye cameras are special cameras that have a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the images captured by such cameras. Despite this drawback, they are used increasingly in many applications of computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive engineering. The images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms.

This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for calibration of fisheye cameras in one package. This enables an inexperienced user to calibrate his or her own camera without needing a theoretical understanding of computer vision and camera calibration. The thesis also explores some applications of calibration, such as distortion correction and 3D reconstruction.
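As one example of what calibration determines, the widely used equidistant fisheye model maps the ray angle linearly to image radius, in contrast to the pinhole model's tangent, which is what produces the characteristic boundary distortion. The model choice below is illustrative; the toolbox collects several calibration techniques:

```python
import math

def equidistant_project(theta, focal):
    """Equidistant fisheye model: image radius r = f * theta, where
    theta is the angle between the incoming ray and the optical axis."""
    return focal * theta

def perspective_project(theta, focal):
    """Pinhole model for comparison: r = f * tan(theta). It diverges as
    theta approaches 90 degrees, which is why wide fields of view need a
    fisheye model at all."""
    return focal * math.tan(theta)
```

Distortion correction then amounts to resampling: for each pixel of the desired pinhole image, recover theta from its radius and look up the corresponding fisheye radius.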
ContributorsKashyap Takmul Purushothama Raju, Vinay (Author) / Karam, Lina (Thesis advisor) / Turaga, Pavan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Human running requires extensive training and conditioning for an individual to maintain high speeds (greater than 10 mph) for an extended duration. Studies have shown that running at peak speeds generates a high metabolic cost due to the use of the large leg muscle groups involved in the human gait cycle. Applying supplemental external and internal forces to the human body during the gait cycle has been shown to decrease the metabolic cost of walking, allowing individuals to carry additional weight and walk farther. Significant research has been conducted on reducing the metabolic cost of walking; however, there are few if any documented studies that focus specifically on reducing the metabolic cost associated with high-speed running. Three mechanical systems were designed to work in concert with the human user to decrease the metabolic cost of running and increase the range and speeds at which a human can run.

The design methods focus on mathematical modeling, simulation, and metabolic cost. Mathematical modeling and simulation aid the design of the robotic systems, and metabolic testing serves as the final analysis to determine the true effectiveness of the robotic prototypes. Metabolic data (VO2) measure the volumetric consumption of oxygen per minute per unit mass (ml/min/kg). Metabolic testing consists of analyzing the oxygen consumption of a test subject performing a task naturally and then comparing that data with the oxygen consumption for the same task while using an assistive device.
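The comparison described above reduces to a percent change in mass-normalized VO2; converting VO2 to metabolic power using an energetic equivalent of roughly 20.1 J per ml of O2 is a common additional step (that constant is a standard physiological approximation, not a value from the thesis):

```python
def metabolic_reduction(vo2_natural, vo2_assisted):
    """Percent reduction in VO2 (ml/min/kg) achieved with the device,
    relative to performing the same task naturally."""
    return 100.0 * (vo2_natural - vo2_assisted) / vo2_natural

def metabolic_power_w_per_kg(vo2_ml_min_kg, joules_per_ml=20.1):
    """Convert mass-normalized VO2 to metabolic power in W/kg using an
    approximate energetic equivalent of oxygen."""
    return vo2_ml_min_kg * joules_per_ml / 60.0
```

A device is considered effective when the assisted VO2, and hence the computed metabolic power, falls below the natural-running baseline for the same speed.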

Three devices were designed and tested to augment high-speed running. The first device, AirLegs V1, is a mostly aluminum exoskeleton with two pneumatic linear actuators connecting the lower back directly to the user's thighs, allowing the device to induce a torque on the leg by pushing and pulling on the thigh during running. The device also uses two smaller pneumatic linear actuators that drive cables connected to small lever arms at the back of the heel, inducing a torque at the ankles. The second device, AirLegs V2, is also pneumatically powered but is considered a soft-suit version of the first. It uses cables to transmit the forces created by actuators mounted vertically on the user's back; these cables connect to the backs of the user's knees, giving greater flexibility and range of motion in the legs. The third device, a jet pack, produces an external force against the user's torso to propel the user forward and upward, making it easier to run. Third-party testing, pilot demonstrations, and timed trials demonstrated that all three devices effectively reduce the metabolic cost of running below that of natural running with no device.
ContributorsKerestes, Jason (Author) / Sugar, Thomas (Thesis advisor) / Redkar, Sangram (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created2014