This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses submitted by undergraduate students.


Description
Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that facilitates capturing differences among designers' problem formulations; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
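The entity-group structure above lends itself to a small graph data model. The sketch below is purely illustrative, with invented names and fields (the dissertation's actual P-maps schema and variable set are not reproduced here): six entity groups, a within-group hierarchy, cross-group links, and simple formulation variables extracted from counts.

```python
from collections import defaultdict

# Hypothetical P-map-style structure; groups match the six named in the
# abstract, but all identifiers and extracted variables are illustrative.
GROUPS = {"requirement", "use_scenario", "function", "artifact", "behavior", "issue"}

class PMap:
    def __init__(self):
        self.entities = {}                 # id -> (group, label)
        self.children = defaultdict(list)  # within-group hierarchy
        self.links = []                    # cross-group links (id, id)

    def add_entity(self, eid, group, label, parent=None):
        assert group in GROUPS
        self.entities[eid] = (group, label)
        if parent is not None:
            self.children[parent].append(eid)

    def add_link(self, a, b):
        ga, gb = self.entities[a][0], self.entities[b][0]
        assert ga != gb, "links connect entities in different groups"
        self.links.append((a, b))

    def variables(self):
        # Example formulation variables: entity counts per group, link count.
        counts = defaultdict(int)
        for group, _ in self.entities.values():
            counts[group] += 1
        return dict(counts), len(self.links)

pm = PMap()
pm.add_entity("r1", "requirement", "carry 5 kg payload")
pm.add_entity("f1", "function", "grip object")
pm.add_entity("f2", "function", "apply clamping force", parent="f1")
pm.add_link("r1", "f1")
counts, n_links = pm.variables()
```

Variables such as these counts, hierarchy depths, and link densities are the kind of quantities a P-map analysis could extract to characterize a formulation.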

Three experiments were conducted. The first experiment was to study the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do and novices are more likely to add entities in a specific order. Experts also discover more issues.

The second experiment was to see how problem formulation relates to creativity. Ideation metrics were used to characterize creative outcome. Results include, but are not limited to, positive correlations of adding more issues in an unorganized way with quantity and variety, of more use scenarios and functions with novelty, of more behaviors and identified conflicts with quality, and of depth-first exploration with all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment was to see if problem formulation can predict creative outcome. Models based on one problem were used to predict the creativity of another. Predicted scores were compared to assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.
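As a rough illustration of the backward-elimination step mentioned above, the toy sketch below repeatedly drops a predictor from a linear model while doing so improves adjusted R². It is a generic stand-in under invented data, not the study's actual regression models or ideation-metric variables.

```python
import numpy as np

# Toy backward elimination by adjusted R^2 (illustrative only).
def adj_r2(X, y):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def backward_eliminate(X, y, keep=1):
    cols = list(range(X.shape[1]))
    while len(cols) > keep:
        # Score the fit obtained by dropping each remaining predictor.
        scores = [adj_r2(X[:, [c for c in cols if c != d]], y) for d in cols]
        best = max(range(len(cols)), key=lambda i: scores[i])
        if scores[best] <= adj_r2(X[:, cols], y):
            break  # no single drop improves adjusted R^2
        cols.pop(best)
    return cols

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))                      # predictors 1 and 3 are pure noise
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=60)
kept = backward_eliminate(X, y)
```

The trade-off the abstract reports (better fit, worse out-of-sample prediction) is a familiar property of such greedy selection.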

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.
ContributorsDinar, Mahmoud (Author) / Shah, Jami J. (Thesis advisor) / Langley, Pat (Committee member) / Davidson, Joseph K. (Committee member) / Lande, Micah (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2015
Description
In this dissertation, three complex material systems are investigated: a novel class of hyperuniform composite materials, cellularized collagen gel, and a low melting point alloy (LMPA) composite, using statistical pattern characterization, stochastic microstructure reconstruction, and micromechanical analysis. Chapter 1 provides an introduction, with a brief review of these three material systems. Chapter 2 presents a detailed discussion of the statistical morphological descriptors and a stochastic optimization approach for microstructure reconstruction. Chapter 3 introduces the lattice particle method for micromechanical analysis of complex heterogeneous materials. Chapter 4 investigates a new class of hyperuniform heterogeneous materials with superior mechanical properties. Chapter 5 models a bio-material system, the cellularized collagen gel, using correlation functions and stochastic reconstruction to study the collective dynamic behavior of the embedded tumor cells. Chapter 6 generates the LMPA soft robotic system by generalizing the correlation functions and discusses the rigidity tunability of this smart composite. Chapter 7 presents a plan for future work.
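The stochastic reconstruction referred to above can be sketched in miniature: anneal pixel swaps of a trial image until its two-point correlation approaches a target's. Everything below (image size, the row-wise S2 estimator, the cooling schedule) is an illustrative simplification, not the dissertation's implementation.

```python
import numpy as np

# Minimal Yeong-Torquato-style reconstruction sketch (illustrative scale).
def s2_rows(img):
    # Two-point correlation along rows via FFT autocorrelation, averaged.
    f = np.fft.fft(img, axis=1)
    return np.fft.ifft(f * np.conj(f), axis=1).real.mean(axis=0) / img.shape[1]

def reconstruct(target, steps=4000, t0=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    # Random start with the same volume fraction as the target.
    trial = rng.permutation(target.ravel()).reshape(target.shape)
    s2_t = s2_rows(target)
    err0 = err = ((s2_rows(trial) - s2_t) ** 2).sum()
    for i in range(steps):
        a, b = rng.integers(trial.size, size=2)
        flat = trial.ravel()
        if flat[a] == flat[b]:
            continue
        flat[a], flat[b] = flat[b], flat[a]
        new_err = ((s2_rows(trial) - s2_t) ** 2).sum()
        temp = t0 * (1 - i / steps)
        if new_err > err and rng.random() >= np.exp((err - new_err) / max(temp, 1e-12)):
            flat[a], flat[b] = flat[b], flat[a]  # reject: swap back
        else:
            err = new_err
    return trial, err0, err

target = np.zeros((16, 16)); target[:, :8] = 1.0  # layered toy "microstructure"
recon, err0, final_err = reconstruct(target)
```

Swapping (rather than flipping) pixels preserves the one-point statistics exactly while the annealing drives the two-point error down.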
ContributorsXu, Yaopengxiao (Author) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Wang, Qing Hua (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2018
Description
Pipeline infrastructure forms a vital aspect of the United States economy and standard of living. A majority of the current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, raising a concern for their safety and integrity. Estimating the strength and toughness of aging pipes without interrupting transmission and operations thus becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainties; many others rely on destructive means. These gaps provide an impetus for novel methods to better characterize the pipe material properties. The focus of this study is the design of a Bayesian network information fusion model for the prediction of accurate probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties obtained through experimental analysis, which are then integrated with the Bayesian network model using a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and data training of the model is employed to obtain a more accurate measure of the probabilistic pipe strength. With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) along the direction of the pipe wall thickness is studied in relation to the microstructure, and the material constants for the crack growth are reported.
A combination of imaging and composition analysis is incorporated to study the fracture surface of the fatigue specimens. Finally, some well-known statistical inference models are employed for the prediction of manufacturing process parameters for steel pipelines. The suitability of small datasets for accurate prediction is discussed and the models are compared on their performance.
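The crack-growth relation behind a da/dN study of this kind is commonly written as the Paris law, da/dN = C(dK)^m with dK = Y * dS * sqrt(pi * a). As a hedged illustration (the constants below are generic textbook-order values, not the measured pipe-steel constants), the cycles to grow a crack between two lengths can be integrated numerically:

```python
import math

# Midpoint integration of dN = da / (C * dK^m); units: meters, MPa.
# C, m, dS, Y below are illustrative placeholders, not measured values.
def cycles_to_grow(a0, ac, C=1e-11, m=3.0, dS=100.0, Y=1.0, n_steps=10000):
    da = (ac - a0) / n_steps
    N, a = 0.0, a0
    for _ in range(n_steps):
        dK = Y * dS * math.sqrt(math.pi * (a + da / 2))  # stress intensity range
        N += da / (C * dK ** m)
        a += da
    return N

N = cycles_to_grow(1e-3, 1e-2)  # grow a crack from 1 mm to 10 mm
```

For m = 3 this has a closed form, 2(a0^-1/2 - ac^-1/2) / (C (Y dS sqrt(pi))^3), which the numerical integral can be checked against.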
ContributorsDahire, Sonam (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2018
Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experiences, and in times of uncertainty use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while minimizing individual costs. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.

Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing actuation (control) that drives them towards their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed in such a way that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the event/point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach.

In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots could have different control preferences, for example different actuation abilities, but are still required to coordinate and perform load transport. Control preferences for each robot are characterized using a scalar parameter θi unique to the robot being considered and unknown to other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.

Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding horizon optimal control problem. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.

A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods, and guarantees learning convergence to true values asymptotically.

Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running algorithm LCA-2 are able to resist disturbances and balance the assumed load better compared to LCA-1.
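The recursive least squares update named for LCA-1 can be sketched for a scalar preference parameter. The linear observation model below (control proportional to state, plus noise) is an illustrative stand-in, not the thesis's actual robot dynamics:

```python
import numpy as np

# Scalar recursive least squares: an agent refines its estimate of another
# robot's preference parameter theta from streaming (state, control) pairs.
# Observation model u_k = theta * x_k + noise is assumed for illustration.
def rls(xs, us, p0=1e3):
    theta, P = 0.0, p0                     # estimate and its covariance
    for x, u in zip(xs, us):
        k = P * x / (1.0 + x * P * x)      # gain
        theta += k * (u - theta * x)       # innovation update
        P = (1.0 - k * x) * P
    return theta

rng = np.random.default_rng(2)
true_theta = 0.7
xs = rng.normal(size=50)
us = true_theta * xs + 0.01 * rng.normal(size=50)
theta_hat = rls(xs, us)
```

Each update costs constant time per observation, which is what makes RLS attractive for online learning during the transport task.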
ContributorsKAMBAM, KARTHIK (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2018
Description
Aging-related damage and failure in structures, such as fatigue cracking, corrosion, and delamination, are critical for structural integrity. Most engineering structures have embedded defects, such as voids, cracks, and inclusions, from manufacturing. The properties and locations of embedded defects are generally unknown and hard to detect in complex engineering structures. Therefore, early detection of damage is beneficial for the prognosis and risk management of aging infrastructure systems.

Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for this purpose. Different types of NDT techniques have been proposed for damage detection, such as optical imaging, ultrasound waves, thermography, eddy current, and microwave. The focus of this study is on wave-based detection methods, which are grouped into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve solving the physical model. The key idea is the dimension reduction of signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can induce significant uncertainties and reduce resolution; the resolution of the feature-based approach depends highly on the sensing-path density. Model-assisted damage detection presents the opposite trade-off: it enables high-resolution imaging with a limited number of sensing paths, since the entire signal histories are used for damage identification, but model-based methods are time-consuming due to the required inverse wave propagation solution, especially for large 3D structures.

The motivation of the proposed method is to develop an efficient and accurate model-based damage imaging technique with limited data. The special focus is on the efficiency of the damage imaging algorithm, as it is the major bottleneck of the model-assisted approach. The computational efficiency is achieved by two complementary components. First, a fast forward wave propagation solver is developed, which is verified against the classical finite element (FEM) solution and is 10-20 times faster. Next, an efficient inverse wave propagation algorithm is proposed. Classical gradient-based optimization algorithms usually require the finite difference method for gradient calculation, which is prohibitively expensive for a large number of degrees of freedom. An adjoint method-based optimization algorithm is proposed, which avoids the repetitive finite difference calculations for every imaging variable. Thus, superior computational efficiency can be achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage imaging model is proposed to include the interaction between the PZT and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple damage sites simultaneously and efficiently, which is promising for application to complex large-scale engineering structures.
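The efficiency argument above (one adjoint solve for all imaging variables, versus one extra forward solve per variable with finite differences) can be seen on a toy model. The diagonal linear system below is a deliberately simple stand-in for the wave propagation solver, not the dissertation's formulation:

```python
import numpy as np

# For A(p) x = b with J = ||x - x_obs||^2, the adjoint gradient is
# dJ/dp_i = -lam^T (dA/dp_i) x, with one adjoint solve A^T lam = dJ/dx.
def forward(p, b):
    A = np.diag(2.0 + p)  # parameters enter the diagonal (toy model)
    return A, np.linalg.solve(A, b)

def grad_adjoint(p, b, x_obs):
    A, x = forward(p, b)
    lam = np.linalg.solve(A.T, 2.0 * (x - x_obs))  # single adjoint solve
    return -lam * x        # dA/dp_i = e_i e_i^T, so dJ/dp_i = -lam_i * x_i

def grad_fd(p, b, x_obs, h=1e-6):
    J = lambda q: ((forward(q, b)[1] - x_obs) ** 2).sum()
    g = np.zeros_like(p)
    for i in range(len(p)):                        # one solve per parameter
        dp = np.zeros_like(p); dp[i] = h
        g[i] = (J(p + dp) - J(p)) / h
    return g

rng = np.random.default_rng(3)
p = rng.uniform(0.1, 1.0, size=5)
b = rng.normal(size=5)
x_obs = rng.normal(size=5)
g_adj = grad_adjoint(p, b, x_obs)
g_fd = grad_fd(p, b, x_obs)
```

With thousands of imaging variables, the finite-difference loop scales linearly in the number of variables while the adjoint cost stays fixed at two solves.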
ContributorsChang, Qinan (Author) / Liu, Yongming (Thesis advisor) / Mignolet, Marc (Committee member) / Chattopadhyay, Aditi (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2019
Description
Solid oxide fuel cells have become a promising candidate in the development of high-density clean energy sources for the rapidly increasing demands in energy and global sustainability. In order to understand more about solid oxide fuel cells, an important step is understanding how to model heterogeneous materials. Heterogeneous materials are abundant in nature and also created in various processes. The diverse properties exhibited by these materials result from their complex microstructures, which also make the materials hard to model. Microstructure modeling and reconstruction on a meso-scale level is needed in order to produce heterogeneous models without having to shave and image every slice of the physical material, a destructive and irreversible process. Yeong and Torquato [1] introduced a stochastic optimization technique that enables the generation of a model of the material with the use of correlation functions. Spatial correlation functions of each of the various phases within the heterogeneous structure are collected computationally from a two-dimensional micrograph representing a slice of a solid oxide fuel cell. The assumption is that two-dimensional images contain key structural information representative of the associated full three-dimensional microstructure. The collected spatial correlation functions, a combination of one-point and two-point correlation functions, are then output and are representative of the material. In the reconstruction process, the characteristic two-point correlation function is input through a series of computational modeling codes and software to generate a three-dimensional visual model that is statistically similar to the original two-dimensional micrograph. Furthermore, the parameters of the temperature cooling stages and the number of pixel exchanges per temperature stage are altered accordingly to observe which parameter has a greater impact on the reconstruction results.
Stochastic optimization techniques to produce three-dimensional visual models from two-dimensional micrographs are therefore a statistically reliable method for understanding heterogeneous materials.
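The characterization step described above, collecting the two-point correlation S2(r) of one phase of a binary micrograph, can be sketched as follows. The FFT-based estimator and the toy "micrograph" are illustrative, not the thesis's code:

```python
import numpy as np

# S2(r): probability that two points separated by r (here horizontal
# shifts, with periodic wraparound) both fall in the chosen phase.
def two_point_s2(img, phase=1):
    ind = (img == phase).astype(float)          # phase indicator function
    f = np.fft.fft2(ind)
    auto = np.fft.ifft2(f * np.conj(f)).real / ind.size
    return auto[0, :]        # S2 along horizontal separations r = 0, 1, 2, ...

# Checkerboard toy "micrograph": 50% phase fraction, strong anticorrelation.
img = np.indices((8, 8)).sum(axis=0) % 2
s2 = two_point_s2(img)
```

S2(0) recovers the one-point statistic (the phase volume fraction), which is why the one- and two-point functions together characterize the slice.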
ContributorsPhan, Richard Dylan (Author) / Jiao, Yang (Thesis director) / Ren, Yi (Committee member) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. This project required expertise from several fields of study, including robotic design, programming, rapid prototyping, and control theory. An electronic inertial measurement unit and a DC motor were used along with 3D-printed plastic components and an electronic motor control board to develop a functional open-loop controlled gripper for use in collective transportation experiments. Code was developed that effectively acquires and filters rate-of-rotation data, alongside other code that allows for straightforward control of the DC motor through experimentally derived relationships between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
ContributorsMohr, Brennan (Author) / Berman, Spring (Thesis director) / Ren, Yi (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
In this thesis, Inception V3, a convolutional neural network model from Google, was partially retrained to categorize pipeline images based on their damage modes. The images for different damage modes of the pipeline were simulated in MATLAB to represent image data collected from in-line pipe inspection. The final convolutional layer of the model was retrained with the simulated pipeline images using TensorFlow as the base platform. First, a small-scale retraining was done with real images and simulated images to compare the differences in performance. Then, using simulated images, a 2^5 full factorial design of experiments and individual parametric studies were performed on five chosen parameters: training steps, learning rate, batch size, training data size, and image noise. The effect of each parameter on the performance of the model was evaluated and analyzed. It is crucial to understand that, due to the nature of the experiment, the learnings may or may not apply to neural network models trained for other tasks. After analyzing the results, the effects and trade-offs for each parameter are discussed in detail. In addition, a method of predicting the training time is proposed. Based on the findings, an optimized model is proposed for this training exercise, with 1180 training steps, a learning rate of 0.01, a batch size of 100, and a training data set of 200 images. The optimized model reached 87.2% accuracy with a training time of 2 minutes and 6 seconds. This study will enhance our understanding of applying machine learning techniques to damage and risk identification.
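Enumerating a 2^5 full factorial design like the one described is a matter of taking every low/high combination of the five factors. The level values below are invented placeholders for illustration, not the thesis's actual settings (apart from a few mentioned in the abstract):

```python
from itertools import product

# 2^5 full factorial: all 32 low/high combinations of five factors.
# Level values are illustrative placeholders.
factors = {
    "training_steps": (500, 4000),
    "learning_rate": (0.001, 0.01),
    "batch_size": (32, 100),
    "training_data_size": (100, 200),
    "image_noise": (0.0, 0.1),
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each entry of `runs` is one experimental condition; the full design supports estimating main effects and interactions of all five parameters.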
ContributorsShen, Guangqing (Author) / Liu, Yongming (Thesis director) / Ren, Yi (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Increasing demand for reducing the stress on fossil fuels has motivated automotive industries to shift towards sustainable modes of transport through electric and hybrid electric vehicles. The most fuel-efficient cars of 2016 are hybrid vehicles, as reported by the Environmental Protection Agency. Hybrid vehicles operate with an internal combustion engine and electric motors powered by batteries, and can significantly improve fuel economy due to downsizing of the engine. Plug-in hybrids (PHEVs) have an additional feature compared to hybrid vehicles: recharging the batteries through external power outlets. Among hybrid powertrains, lithium-ion batteries have emerged as a major electrochemical storage source for the propulsion of vehicles.

In PHEVs, batteries operate under charge sustaining and charge depleting modes based on torque requirement and state of charge. In the current work, 26650 lithium-ion cells were cycled extensively at 25 and 50 °C under charge sustaining mode to monitor capacity and cell impedance values, followed by analysis of the lithium iron phosphate (LiFePO4) cathode material by X-ray diffraction (XRD). The high-frequency resistance measured by electrochemical impedance spectroscopy was found to increase significantly under high-temperature cycling, leading to power fading. No phase change in the LiFePO4 cathode material is observed after 330 cycles at elevated temperature under charge sustaining mode from the XRD analysis. However, there was a significant change in the crystallite size of the cathode active material after charge/discharge cycling under charge sustaining mode. Additionally, 18650 lithium-ion cells were tested under charge depleting mode to monitor capacity values.
ContributorsBadami, Pavan Pramod (Author) / Kannan, Arunachala Mada (Thesis advisor) / Huang, Huei Ping (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2016
Description
Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values that ensure the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances”. Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops, so the tolerance loops are extracted first. Once tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation would satisfy the assemblability requirements; overlapping loops have to be satisfied simultaneously and progressively, so tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints, from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and weights associated with the contributors in the stack.
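The initial allocation rule described above, splitting a loop's tolerance budget among contributors in proportion to their nominal dimensions, can be sketched in a few lines. The budget and dimensions below are invented illustrative numbers:

```python
# Proportional-to-nominal initial allocation for one tolerance loop.
# Budget and nominal dimensions (mm) are illustrative placeholders.
def allocate(budget, nominals):
    total = sum(nominals)
    return [budget * d / total for d in nominals]

# Loop with 0.30 mm of available clearance and three contributing dimensions.
tols = allocate(0.30, [10.0, 25.0, 15.0])
```

The allocated tolerances always sum to the budget, so the loop's worst-case stack-up stays within the available clearance before the iterative re-allocation refines the split.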

Several test cases have been run with this software and the desired user input acceptance rates are achieved. Three test cases are presented and output of each module is discussed.
ContributorsBiswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2016