Matching Items (29)
Description
The work presented in this report concerns real-time estimation of wind and analysis of the wind-correction algorithm in a commercial off-the-shelf autopilot board. The open-source ArduPilot Mega 2.5 (APM 2.5) board manufactured by 3D Robotics is used. There is currently a great deal of development in the field of unmanned aerial systems (UAS), spanning various aerial platforms and the corresponding autonomous systems for them. The technology has advanced to the stage where UAVs can be deployed reliably on specifically designed missions. However, some areas, such as missions requiring high maneuverability with greater efficiency, remain under research; progress there would significantly increase the reliability and extend the range of UAVs. One problem addressed in this thesis is that current autopilot systems handle wind by correcting attitude with an appropriate crab angle, while the real-time wind vector (direction) and its velocity are calculated from a geometric and algebraic transformation between the ground-speed and air-speed vectors. This method of wind estimation and prediction often leads to inaccurate attitude correction, which is demonstrated in this report through both simulation and field testing. The later part of the report proposes new ways to handle flight in windy conditions.
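The estimation described above follows the classical wind-triangle relation: the wind vector is the difference between the GPS-derived ground-velocity vector and the air-velocity vector reconstructed from airspeed and heading. A minimal sketch of that geometric calculation is shown below; the function and variable names are illustrative and not taken from the APM firmware.

```python
import numpy as np

def estimate_wind(ground_velocity_ne, airspeed, heading_rad):
    """Wind-triangle estimate: wind = ground velocity - air velocity.

    ground_velocity_ne : 2-element array, GPS ground velocity (north, east) in m/s
    airspeed           : scalar true airspeed in m/s (e.g., from a pitot tube)
    heading_rad        : aircraft heading in radians from north
    """
    air_velocity_ne = airspeed * np.array([np.cos(heading_rad), np.sin(heading_rad)])
    wind_ne = np.asarray(ground_velocity_ne) - air_velocity_ne
    wind_speed = np.linalg.norm(wind_ne)
    wind_direction_rad = np.arctan2(wind_ne[1], wind_ne[0])  # direction the wind blows toward
    return wind_ne, wind_speed, wind_direction_rad

# Example: flying north at 15 m/s airspeed while GPS reports 12 m/s north, 3 m/s east
print(estimate_wind([12.0, 3.0], 15.0, 0.0))
```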
Contributors: Biradar, Anandrao Shesherao (Author) / Saripalli, Srikanth (Thesis advisor) / Berman, Spring (Thesis advisor) / Thanga, Jekan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that captures the differences among designers' problem formulations; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
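As a concrete illustration of the structure described above, the sketch below shows one way the six entity groups, their within-group hierarchies, and their cross-group links could be represented; the class and field names are hypothetical, not taken from the dissertation.

```python
from __future__ import annotations
from dataclasses import dataclass, field

ENTITY_GROUPS = ("requirement", "use_scenario", "function", "artifact", "behavior", "issue")

@dataclass
class Entity:
    name: str
    group: str                                          # one of ENTITY_GROUPS
    parent: Entity | None = None                        # hierarchy within the same group
    links: list[Entity] = field(default_factory=list)   # links to entities in other groups

@dataclass
class ProblemMap:
    entities: list[Entity] = field(default_factory=list)

    def add(self, entity: Entity) -> Entity:
        assert entity.group in ENTITY_GROUPS
        self.entities.append(entity)
        return entity

    def counts_by_group(self) -> dict:
        """One simple formulation variable: how many entities of each kind were added."""
        counts = {g: 0 for g in ENTITY_GROUPS}
        for e in self.entities:
            counts[e.group] += 1
        return counts
```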

Three experiments were conducted. The first experiment was to study the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do and novices are more likely to add entities in a specific order. Experts also discover more issues.

The second experiment was to see how problem formulation relates to creativity. Ideation metrics were used to characterize creative outcome. Results include, but are not limited to, positive correlations between adding more issues in an unorganized way and quantity and variety; between more use scenarios and functions and novelty; between more behaviors and identified conflicts and quality; and between depth-first exploration and all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment was to see if problem formulation can predict creative outcome. Models based on one problem were used to predict the creativity of another. Predicted scores were compared to assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.
Contributors: Dinar, Mahmoud (Author) / Shah, Jami J. (Thesis advisor) / Langley, Pat (Committee member) / Davidson, Joseph K. (Committee member) / Lande, Micah (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Advanced material systems are materials composed of multiple traditional constituents arranged in complex microstructure morphologies, which give them properties superior to conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications across a large range of engineering systems, their application to material design faces unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is learning material representations and predictive PSP mappings while managing a small data-acquisition budget. This dissertation therefore focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted.

In the first task, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples, and can effectively constrain the microstructure space during material design.

In the second task, we investigate an active learning mechanism in which new samples are acquired based on their violation of a theory-driven constraint on the physics-based model. We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization processes), evaluating the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived.

The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine-learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
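A minimal sketch of the violation-based acquisition idea in the second task is given below; the surrogate and constraint interfaces and all names are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def acquire_by_violation(candidates, surrogate_predict, constraint_residual, batch_size=4):
    """Pick the candidate designs whose surrogate predictions most violate a
    theory-derived constraint (e.g., an optimality or equilibrium condition);
    only those are sent to the expensive physics-based model for labeling.

    candidates          : (n, d) array of candidate inputs
    surrogate_predict   : callable mapping candidates -> predicted responses
    constraint_residual : callable mapping (candidates, predictions) -> residuals,
                          zero when the cheap physics check is satisfied
    """
    predictions = surrogate_predict(candidates)
    violations = np.abs(constraint_residual(candidates, predictions))
    worst = np.argsort(violations)[::-1][:batch_size]   # largest violations first
    return candidates[worst]
```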
Contributors: Cang, Ruijin (Author) / Ren, Yi (Thesis advisor) / Liu, Yongming (Committee member) / Jiao, Yang (Committee member) / Nian, Qiong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Aging-related damage and failure in structures, such as fatigue cracking, corrosion, and delamination, are critical for structural integrity. Most engineering structures have embedded defects such as voids, cracks, and inclusions from manufacturing. The properties and locations of embedded defects are generally unknown and hard to detect in complex engineering structures. Therefore, early detection of damage is beneficial for the prognosis and risk management of aging infrastructure systems.

Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for this purpose. Different types of NDT techniques have been proposed for damage detection, such as optical imaging, ultrasonic waves, thermography, eddy currents, and microwaves. The focus of this study is on wave-based detection methods, which fall into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve solving the physical model; the key idea is dimension reduction of the signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can introduce significant uncertainties and reduce the resolution, which depends heavily on the density of sensing paths. Model-assisted damage detection is the opposite: it can produce high-resolution images with a limited number of sensing paths, since the entire signal histories are used for damage identification. However, model-based methods are time-consuming because they require the solution of an inverse wave-propagation problem, which is especially true for large 3D structures.

The motivation of the proposed method is to develop an efficient and accurate model-based damage-imaging technique with limited data. The special focus is on the efficiency of the damage-imaging algorithm, as it is the major bottleneck of the model-assisted approach. The computational efficiency is achieved by two complementary components. First, a fast forward wave-propagation solver is developed, which is verified against the classical finite element (FEM) solution and is 10-20 times faster. Next, an efficient inverse wave-propagation algorithm is proposed. Classical gradient-based optimization algorithms usually rely on the finite difference method for gradient calculation, which is prohibitively expensive for a large number of degrees of freedom. An adjoint-method-based optimization algorithm is proposed instead, which avoids repeating the finite-difference calculation for every imaging variable. Superior computational efficiency is thus achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage-imaging model is proposed to include the interaction between the PZT transducers and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple damage sites simultaneously and efficiently, which is promising for application to complex, large-scale engineering structures.
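To illustrate why the adjoint approach scales, the toy example below obtains the gradient of a misfit with respect to every imaging variable from one forward solve and one adjoint solve, rather than one perturbed forward solve per variable as finite differences would require. The small linear system stands in for the wave-propagation model; this is a sketch under simplifying assumptions, not the solver developed in the dissertation.

```python
import numpy as np

def forward(m, f):
    """Toy stand-in for the forward wave solver: A(m) u = f."""
    A = np.diag(1.0 + m)
    return np.linalg.solve(A, f)

def adjoint_gradient(m, f, u_obs):
    """Gradient of the misfit J = 0.5*||u - u_obs||^2 with respect to every
    imaging variable m_i, using one forward solve and one adjoint solve."""
    A = np.diag(1.0 + m)
    u = np.linalg.solve(A, f)
    lam = np.linalg.solve(A.T, u - u_obs)   # adjoint solve
    # Here dA/dm_i is a single-entry matrix, so dJ/dm_i = -lam_i * u_i.
    return -lam * u

m = np.array([0.2, 0.5, 0.1])                # imaging variables (toy "damage" parameters)
f = np.ones(3)                               # excitation
u_obs = forward(np.zeros(3), f)              # "measurements" from the undamaged model
print(adjoint_gradient(m, f, u_obs))
```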
Contributors: Chang, Qinan (Author) / Liu, Yongming (Thesis advisor) / Mignolet, Marc (Committee member) / Chattopadhyay, Aditi (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether or not a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results that are comparable to results obtained using a conventional chassis dynamometer. Torque curves were generated via the experimental method under a variety of circumstances and also obtained professionally by a precision engine testing company. Metrics were created to measure the precision of the experimental device's ability to consistently generate torque curves and also to compare the similarity of these curves to the professionally obtained torque curves. The results revealed that although the test device does not quite provide the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer torque curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance but probably lacks the level of accuracy and precision needed to appeal to professionals.
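As a rough illustration of how torque and horsepower curves might be derived from wheel-edge accelerometer data, the sketch below converts tangential acceleration into tractive force, wheel torque, and power under simplifying assumptions (no wheel slip, flat road, accelerometer at the tire's effective rolling radius); it is not the thesis's actual processing pipeline.

```python
import numpy as np

def torque_and_power(tangential_accel_ms2, wheel_speed_rad_s, wheel_radius_m, vehicle_mass_kg):
    """Rough wheel torque and power estimated from a wheel-edge accelerometer trace.

    Assumes the tangential acceleration measured at the rim approximates the
    vehicle's longitudinal acceleration (no slip, flat road, rim at the tire's
    effective rolling radius).
    """
    tractive_force_n = vehicle_mass_kg * np.asarray(tangential_accel_ms2)
    torque_nm = tractive_force_n * wheel_radius_m
    power_w = torque_nm * np.asarray(wheel_speed_rad_s)
    return torque_nm, power_w / 745.7   # second value in horsepower

# Example: 3 m/s^2 acceleration at 60 rad/s wheel speed, 0.3 m radius, 1500 kg car
print(torque_and_power(3.0, 60.0, 0.3, 1500.0))
```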
Contributors: King, Michael (Author) / Ren, Yi (Thesis director) / Spanias, Andreas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The traditional understanding of robotics involves mechanisms of rigid structures that can manipulate surrounding objects, taking advantage of mechanical actuators such as motors and servomechanisms. Although these methods provide the fundamental concepts underlying much of modern technological infrastructure in fields such as manufacturing, automation, and biomedical applications, robotic structures formed by rigid axles on mechanical actuators lack the delicate, differential sensors and actuators associated with known biological systems. The rigid structures of traditional robotics also inhibit the use of simple mechanisms in congested and/or fragile environments. Observation of a variety of biological systems shows that nature has shaped its structures, over millions of years of evolution, into combinations of soft structures and rigid skeletal interior supports. Through bio-inspired designs, researchers hope to mimic some of the complex behaviors of biological mechanisms using pneumatic actuators coupled with highly compliant materials that exhibit relatively large reversible elastic strain. This paper covers the brief history of soft robotics, the various classifications of pneumatic fluid systems, the difficulties that arise from the unpredictable nature of fluid reactions, the pneumatic actuation methods in use today, and the current industrial applications of soft robotics, and focuses in large part on the construction of a universally adaptable soft robotic gripper and material-application tool. The central objective of this experiment is to compatibly pair traditional rigid robotics with the emerging technology of soft robotic actuators. This is done by combining a traditional rigid robotic arm with a soft robotic manipulator bladder for the purposes of object manipulation and excavation in extreme environments.
Contributors: Shuster, Eden S. (Author) / Thanga, Jekan (Thesis director) / Asphaug, Erik (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In-situ exploration of planetary bodies such as Mars or the Moon has provided geologists and planetary scientists a detailed understanding of how these bodies formed and evolved. In-situ exploration has aided in the quest for water and life-supporting chemicals. In-situ exploration of Mars is carried out by large SUV-sized rovers that travel long distances and carry sophisticated onboard laboratories for soil analysis and sample collection. However, their large size and mobility method prevent them from accessing or exploring extreme environments, particularly caves, canyons, cliffs, and craters.

This work presents sub-2 kg ball robots that can roll and hop in low-gravity environments. These robots are low-cost, enabling one or more to be deployed in the field. They can be deployed from a larger rover or lander and complement its capabilities by performing scouting and identifying potential targets of interest. Their small size and ball shape allow them to tumble freely, preventing them from getting stuck. Hopping enables the robot to overcome obstacles larger than the robot itself.

The proposed ball-robot design consists of a spherical core with two hemispherical shells fitted with grousers that act as wheels for small movements. The robots have two cameras for stereovision, which can be used for localization. An inertial measurement unit (IMU) and a wheel encoder are used for dead reckoning. Communication is performed over a ZigBee radio, which supports communication between a robot and a lander/rover as well as inter-robot communication. The robots have been designed with a 300-gram payload capacity; payloads may include chemical-analysis sensors, spectrometers, and other small sensors.
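The wheel-encoder-plus-IMU dead reckoning mentioned above could, in its simplest planar form, look like the update sketched below; the no-slip assumption and all names are illustrative rather than the robot's actual firmware.

```python
import numpy as np

def dead_reckon_step(x_m, y_m, encoder_ticks, ticks_per_rev, wheel_radius_m, imu_heading_rad):
    """One planar dead-reckoning update from wheel-encoder ticks and the IMU heading.
    Purely illustrative: assumes pure rolling with no slip or tumbling."""
    distance_m = (encoder_ticks / ticks_per_rev) * 2.0 * np.pi * wheel_radius_m
    return (x_m + distance_m * np.cos(imu_heading_rad),
            y_m + distance_m * np.sin(imu_heading_rad))

# Example: 240 ticks on a 1200-tick/rev encoder, 5 cm wheel radius, heading 30 degrees
print(dead_reckon_step(0.0, 0.0, 240, 1200, 0.05, np.radians(30.0)))
```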

The performance of the robot has been evaluated in a laboratory environment using the Low-gravity Offset and Motion Assistance Simulation System (LOMASS). An evaluation was done to understand the effect of grouser height and grouser separation angle on the performance of the robot in different terrains. The experiments show that with greater grouser height and an optimal separation angle, the power requirement increases, but average robot speed and traction also increase. The robot was observed to perform hops of approximately 20 cm under simulated lunar conditions. Based on theoretical calculations, the robot would be able to perform 208 hops on a single charge and operate for 35 minutes. The study will be extended to operate multiple robots in a network to perform exploration. Their small size and cost make it possible to deploy dozens in a region of interest. Multiple ball robots can cooperatively perform unique in-situ science measurements and analyze a larger surface area than a single robot alone on a planet surface.
Contributors: Raura, Laksh Deepak (Author) / Thangavelautham, Jekanthan (Thesis advisor) / Berman, Spring (Thesis advisor) / Lee, Hyunglae (Committee member) / Asphaug, Erik (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter part of automated tolerance specification, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file, developed previously at the Design Automation Lab (DAL), which is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values that ensure the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances”. Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops; hence, the tolerance loops are extracted first. Once tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation would satisfy the assemblability requirements. Overlapping loops have to be satisfied simultaneously and progressively; hence, tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and the weights associated with the contributors in the stack.
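The initial allocation rule described above, distributing the loop's tolerance budget among contributors in proportion to their nominal dimensions, can be sketched as follows; this is an illustrative first pass, not the module's actual code.

```python
def allocate_initial_tolerances(clearance_budget, nominal_dims):
    """First-pass allocation: split the loop's tolerance budget among its
    contributors in proportion to their nominal dimensions."""
    total = float(sum(nominal_dims))
    return [clearance_budget * d / total for d in nominal_dims]

# Example: a 0.30 mm budget over contributors with nominals 50, 20, and 30 mm
print(allocate_initial_tolerances(0.30, [50.0, 20.0, 30.0]))  # [0.15, 0.06, 0.09]
```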

Several test cases have been run with this software, and the desired user-input acceptance rates are achieved. Three test cases are presented, and the output of each module is discussed.
Contributors: Biswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A large fraction of the total energy consumption in the world comes from the heating and cooling of buildings. Improving the energy efficiency of buildings to reduce the need for seasonal heating and cooling is one of the major challenges in sustainable development. In general, the energy efficiency depends on the geometry and materials of the buildings. To explore a framework for accurately assessing this dependence, detailed 3-D thermofluid simulations are performed by systematically sweeping the parameter space spanned by four parameters: the size of the building, the thickness and material of the walls, and the fractional size of the windows. The simulations incorporate realistic boundary conditions of diurnally varying temperatures from observations, and the effect of fluid flow with explicit thermal convection inside the building. The outcome of the numerical simulations is synthesized into a simple map of an energy-efficiency index over the parameter space, which stakeholders can use to quickly look up the energy efficiency of a proposed building design before its construction. Although this study only considers a special prototype of buildings, the framework developed in this work can potentially be used for a wide range of buildings and applications.
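The parameter sweep described above can be organized as a simple Cartesian product over the four design variables, as sketched below; the specific sizes, thicknesses, and materials are placeholders, not the values used in the study.

```python
from itertools import product

# Illustrative grid over the four design parameters named above; the actual
# values and materials used in the study may differ.
building_sizes_m = [5.0, 10.0, 15.0]
wall_thicknesses_m = [0.10, 0.20, 0.30]
wall_materials = ["brick", "concrete", "wood"]
window_fractions = [0.10, 0.25, 0.40]

for size, thickness, material, window in product(
        building_sizes_m, wall_thicknesses_m, wall_materials, window_fractions):
    # Each combination would be passed to the 3-D thermofluid simulation and the
    # resulting energy-efficiency index stored for the look-up map.
    print(size, thickness, material, window)
```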
Contributors: Jain, Gaurav (Author) / Huang, Huei-Ping (Thesis advisor) / Ren, Yi (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
When manufacturing large or complex parts, a rough operation such as casting is often used to create the majority of the part geometry. Because the casting process is highly variable, for mechanical components that require precision surfaces for functionality or assembly with others, some of the important features are machined to specification. Depending on the relative locations of the as-cast, to-be-machined features and the amount of material at each, the part may be positioned or ‘set up’ on a fixture in a configuration that ensures the pre-specified machining operations will successfully clean up the rough surfaces and produce a part that conforms to any assigned tolerances. For a particular part whose features incur excessive deviation in the casting process, it may be that no setup would yield an acceptable final part. The proposed Setup-Map (S-Map) describes the positions and orientations of a part that allow it to be successfully machined, and can determine when a particular part cannot be made to specification.

The Setup Map is a point space in six dimensions where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three dimensional space: three rotations and three translations. Any point within the boundaries of the Setup-Map (S-Map) corresponds to a small displacement of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. The process for creating the S-Map involves the representation of constraints imposed by the tolerances in simple coordinate systems for each to-be-machined feature. Constraints are then transformed to a single coordinate system where the intersection reveals the common allowable ‘setup’ points. Should an intersection of the six-dimensional constraints exist, an optimization scheme is used to choose a single setup that gives the best chance for machining to be completed successfully. Should no intersection exist, the particular part cannot be machined to specification or must be re-worked with weld metal added to specific locations.
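A simplified way to picture the S-Map membership test is sketched below: a candidate setup is a six-vector of small rotations and translations, and it is feasible only if it satisfies every feature's tolerance-derived constraint region (linearized here as half-spaces purely for illustration; the thesis's actual constraint geometry may differ).

```python
import numpy as np

def setup_is_feasible(setup, feature_constraints):
    """Check whether a candidate setup lies inside the S-Map.

    setup               : 6-vector of small rigid-body displacements
                          (three rotations, three translations)
    feature_constraints : list of (A, b) pairs, one per to-be-machined feature,
                          with each tolerance zone linearized as A @ x <= b
                          (an illustrative simplification of the zone geometry)
    """
    x = np.asarray(setup, dtype=float)
    return all(np.all(A @ x <= b) for A, b in feature_constraints)
```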
Contributors: Kalish, Nathan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016