Matching Items (19)

Description

Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that captures the differences among designers' problem formulations; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent a designer's problem formulation in terms of six groups of entities (requirements, use scenarios, functions, artifacts, behaviors, and issues). Entities form hierarchies within each group and are linked across groups. Variables extracted from P-maps characterize problem formulation.

Three experiments were conducted. The first studied the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices, that novices are more likely to add entities in a specific order, and that experts discover more issues.

The second experiment examined how problem formulation relates to creativity, with ideation metrics used to characterize creative outcome. Among the results: adding more issues in an unorganized way correlates positively with quantity and variety; more use scenarios and functions correlate with novelty; more behaviors and identified conflicts correlate with quality; and depth-first exploration correlates with all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment tested whether problem formulation can predict creative outcome. Models fitted on one problem were used to predict the creativity of solutions to another, and the predicted scores were compared to the assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.
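
The abstract does not include the model details; as a loose illustration only, the sketch below fits an ordinary least squares model from hypothetical P-map variables to an ideation metric and prunes predictors by backward elimination on p-values. All column names and data are placeholders, not the dissertation's variables.

```python
# Minimal sketch (not from the dissertation): regress an ideation metric on
# problem-formulation variables, then prune predictors by backward elimination.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "issues": rng.normal(size=40),          # hypothetical P-map variables
    "use_scenarios": rng.normal(size=40),
    "behaviors": rng.normal(size=40),
})
df["novelty"] = 0.8 * df["use_scenarios"] + rng.normal(scale=0.3, size=40)

def backward_eliminate(y, X, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    X = sm.add_constant(X)
    while True:
        model = sm.OLS(y, X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha or len(pvals) == 1:
            return model
        X = X.drop(columns=worst)

model = backward_eliminate(df["novelty"],
                           df[["issues", "use_scenarios", "behaviors"]])
print(model.model.exog_names, model.rsquared)
```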

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications include developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers' observations about designer thinking.
Contributors: Dinar, Mahmoud (Author) / Shah, Jami J. (Thesis advisor) / Langley, Pat (Committee member) / Davidson, Joseph K. (Committee member) / Lande, Micah (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015

Description

In this dissertation, three complex material systems are investigated: a novel class of hyperuniform composite materials, cellularized collagen gel, and a low melting point alloy (LMPA) composite. The tools used are statistical pattern characterization, stochastic microstructure reconstruction, and micromechanical analysis. Chapter 1 introduces the dissertation and briefly reviews the three material systems. Chapter 2 discusses in detail the statistical morphological descriptors and a stochastic optimization approach for microstructure reconstruction. Chapter 3 introduces the lattice particle method for micromechanical analysis of complex heterogeneous materials. Chapter 4 investigates a new class of hyperuniform heterogeneous materials with superior mechanical properties. Chapter 5 models a biomaterial system, cellularized collagen gel, using correlation functions and stochastic reconstruction to study the collective dynamic behavior of the embedded tumor cells. Chapter 6 generates an LMPA soft robotic system by generalizing the correlation functions and discusses the rigidity tunability of this smart composite. Chapter 7 presents a plan for future work.
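
As a rough illustration of correlation-function-based stochastic reconstruction (in the spirit of Yeong-Torquato annealing, not the dissertation's actual code), the sketch below anneals a random binary image until its two-point statistics match a target. Practical implementations typically use radially averaged correlation functions and incremental updates rather than full recomputation.

```python
# Minimal, illustrative sketch of stochastic microstructure reconstruction:
# pixel-swap simulated annealing driving two-point statistics toward a target.
import numpy as np

rng = np.random.default_rng(1)

def autocorr(img):
    """Two-point correlation via FFT (periodic boundaries assumed)."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.conj(F))) / img.size

target = (rng.random((32, 32)) < 0.3).astype(float)   # stand-in reference medium
S2_target = autocorr(target)

img = rng.permutation(target.ravel()).reshape(target.shape)  # same volume fraction
energy = np.sum((autocorr(img) - S2_target) ** 2)

T = 1e-4
for step in range(10000):
    ones = np.flatnonzero(img.ravel() == 1)
    zeros = np.flatnonzero(img.ravel() == 0)
    i, j = rng.choice(ones), rng.choice(zeros)
    trial = img.ravel().copy()
    trial[i], trial[j] = 0.0, 1.0        # swap preserves the volume fraction
    trial = trial.reshape(img.shape)
    e_new = np.sum((autocorr(trial) - S2_target) ** 2)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
        img, energy = trial, e_new       # Metropolis acceptance
    T *= 0.9997                          # geometric cooling schedule

print("final S2 mismatch:", energy)
```
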
Contributors: Xu, Yaopengxiao (Author) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Wang, Qing Hua (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018

Description

Pipeline infrastructure forms a vital aspect of the United States economy and standard of living. A majority of current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, raising concerns about their safety and integrity. Estimating the strength and toughness of aging pipe without interrupting transmission and operations thus becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainties; many others rely on destructive means. These gaps provide an impetus for novel methods to better characterize pipe material properties.

The focus of this study is the design of a Bayesian network information fusion model for predicting probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties obtained through experimental analysis; these are then integrated with the Bayesian network model, which uses a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and the model is trained on data to obtain a more accurate measure of the probabilistic pipe strength.

With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) through the pipe wall thickness is studied in relation to the microstructure, and the material constants for crack growth are reported. A combination of imaging and composition analysis is used to study the fracture surface of the fatigue specimens. Finally, several well-known statistical inference models are employed to predict manufacturing process parameters for steel pipelines; the suitability of the small datasets for accurate prediction is discussed, and the models are compared for their performance.
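
As a simplified stand-in for the Bayesian-network fusion described above (the dissertation's model is much richer), the sketch below fuses hypothetical strength estimates from several measurement modalities into a posterior distribution via Metropolis MCMC. All numbers are invented for illustration.

```python
# Minimal sketch (hypothetical values): MCMC fusion of multimodal strength
# estimates. Each modality gives a noisy reading of the true yield strength;
# Metropolis sampling yields a posterior rather than a point estimate.
import numpy as np

rng = np.random.default_rng(2)
readings = {"hardness": (52.0, 4.0),        # (estimate in ksi, std), assumed
            "composition": (48.0, 6.0),
            "microstructure": (50.0, 5.0)}

def log_post(s):
    lp = -0.5 * ((s - 50.0) / 10.0) ** 2        # weak prior around 50 ksi
    for est, sd in readings.values():
        lp += -0.5 * ((s - est) / sd) ** 2      # Gaussian likelihood per modality
    return lp

samples, s = [], 50.0
for _ in range(20000):
    prop = s + rng.normal(scale=2.0)            # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(s):
        s = prop
    samples.append(s)

post = np.array(samples[5000:])                 # discard burn-in
print(f"posterior strength: {post.mean():.1f} +/- {post.std():.1f} ksi")
# A lower percentile of this posterior would feed a conservative MAOP estimate.
print("5th percentile:", np.percentile(post, 5))
```
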
Contributors: Dahire, Sonam (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018

Description

Advanced material systems refer to materials that are composed of multiple traditional constituents but have complex microstructure morphologies, which lead to properties superior to those of conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications across a large range of engineering systems, their application to material design meets unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is the learning of material representations and predictive PSP mappings while managing a small data acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted.

In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples, and can effectively constrain the microstructure space during material design.

In the second task, we investigate an active learning mechanism where new samples are acquired based on their violation of a theory-driven constraint on the physics-based model. We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization), the evaluation of the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived.

The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
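
A minimal sketch of a VAE trained with an added morphology term is shown below. The volume-fraction penalty used here is an assumed stand-in, since the abstract does not specify the actual form of the morphology loss.

```python
# Minimal PyTorch sketch of a VAE whose training loss adds a "morphology" term.
# The term below (matching phase volume fractions of input and reconstruction)
# is an assumption, not the dissertation's actual loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_pix=64 * 64, n_lat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pix, 256), nn.ReLU())
        self.mu = nn.Linear(256, n_lat)
        self.logvar = nn.Linear(256, n_lat)
        self.dec = nn.Sequential(nn.Linear(n_lat, 256), nn.ReLU(),
                                 nn.Linear(256, n_pix), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, w_morph=1.0):
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")      # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    morph = F.mse_loss(x_hat.mean(dim=1), x.mean(dim=1), reduction="sum")
    return rec + kld + w_morph * morph

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = (torch.rand(8, 64 * 64) < 0.3).float()     # toy binary microstructures
opt.zero_grad()
x_hat, mu, logvar = model(x)
loss_fn(x, x_hat, mu, logvar).backward()
opt.step()
```
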
Contributors: Cang, Ruijin (Author) / Ren, Yi (Thesis advisor) / Liu, Yongming (Committee member) / Jiao, Yang (Committee member) / Nian, Qiong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2018

Description

Aging-related damage and failure in structures, such as fatigue cracking, corrosion, and delamination, are critical for structural integrity. Most engineering structures have embedded defects from manufacturing, such as voids, cracks, and inclusions. The properties and locations of embedded defects are generally unknown and hard to detect in complex engineering structures. Therefore, early detection of damage is beneficial for prognosis and risk management of aging infrastructure systems.

Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for this purpose. Different types of NDT techniques have been proposed for damage detection, such as optical imaging, ultrasonic waves, thermography, eddy current, and microwave methods. The focus of this study is on wave-based detection methods, which fall into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve solving the physical model. The key idea is the dimension reduction of signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can induce significant uncertainties and reduce resolution; the resolution of the feature-based approach depends heavily on the density of sensing paths. Model-assisted damage detection is the opposite: it enables high-resolution imaging with a limited number of sensing paths, since the entire signal histories are used for damage identification. Model-based methods are, however, time-consuming due to the required inverse wave-propagation solution, especially for large 3D structures.

The motivation of the proposed method is to develop an efficient and accurate model-based damage imaging technique that works with limited data, with special focus on the efficiency of the damage imaging algorithm, as it is the major bottleneck of the model-assisted approach. The computational efficiency is achieved by two complementary components. First, a fast forward wave-propagation solver is developed, which is verified against the classical finite element method (FEM) solution and runs 10-20 times faster. Next, an efficient inverse wave-propagation algorithm is proposed. Classical gradient-based optimization algorithms usually require the finite difference method for gradient calculation, which is prohibitively expensive for large numbers of degrees of freedom. An adjoint-method-based optimization algorithm is proposed instead, which avoids the repetitive finite difference calculations for every imaging variable. Superior computational efficiency is thus achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage imaging model is also proposed to include the interaction between the PZT and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple damage sites simultaneously and efficiently, which is promising for application to complex large-scale engineering structures.
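
The scaling argument for the adjoint method can be seen on a toy problem: one adjoint solve yields the entire gradient, while finite differences need one extra forward solve per imaging variable. The sketch below uses a diagonal linear system as a stand-in for the wave-propagation model and is illustrative only.

```python
# Minimal sketch: adjoint gradient vs. finite differences for
# J(m) = 0.5 * ||u - u_obs||^2 subject to diag(m) u = f.
import numpy as np

rng = np.random.default_rng(3)
n = 5
m = rng.uniform(1.0, 2.0, n)          # "imaging" parameters
f = rng.normal(size=n)
u_obs = rng.normal(size=n)

def forward(m):
    return f / m                       # solves diag(m) u = f

u = forward(m)
# Adjoint: diag(m) lam = -(u - u_obs); then dJ/dm_i = lam_i * u_i.
lam = -(u - u_obs) / m
grad_adjoint = lam * u                 # one extra solve, full gradient

# Finite differences: n extra forward solves, one per variable.
eps = 1e-7
J0 = 0.5 * np.sum((u - u_obs) ** 2)
grad_fd = np.empty(n)
for i in range(n):
    mp = m.copy()
    mp[i] += eps
    up = forward(mp)
    grad_fd[i] = (0.5 * np.sum((up - u_obs) ** 2) - J0) / eps

print(np.allclose(grad_adjoint, grad_fd, rtol=1e-4))   # True
```
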
Contributors: Chang, Qinan (Author) / Liu, Yongming (Thesis advisor) / Mignolet, Marc (Committee member) / Chattopadhyay, Aditi (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019

Description

In order for assistive mobile robots to operate in the same environment as humans, they must be able to navigate the same obstacles humans do. Many elements are required to do this: a powerful controller that can understand the obstacle, and power-dense actuators that can achieve the necessary limb accelerations and output energies. Rapid growth in information technology has made complex controllers, and the devices that run them, considerably lighter and cheaper. The energy density of batteries, motors, and engines has not grown nearly as fast. This is problematic because biological systems are more agile and more efficient than robotic systems. This dissertation introduces design methods that may be used to optimize a multi-actuator robotic limb's natural dynamics in an effort to reduce energy waste. These energy savings decrease the robot's cost of transport and the weight of the required fuel storage system. To achieve this, an optimal design method that allows the specialization of robot geometry is introduced. In addition to optimal geometry design, a gearing optimization is presented that selects a gear ratio minimizing the electrical power at the motor while respecting the motor's constraints. Furthermore, an efficient algorithm for optimizing parallel stiffness elements in the robot is introduced. Alongside these optimal design tools, the KiTy SP robotic limb structure is presented, a novel hybrid parallel-serial actuation method. This leg structure has many desirable attributes, such as three-dimensional end-effector positioning, low mobile mass, a compact form factor, and a large workspace. We also show that the KiTy SP structure outperforms the classical, biologically-inspired serial limb structure.
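
As an illustration of gear-ratio selection under motor constraints (with hypothetical motor and load values, not those of the KiTy SP design), the sketch below sweeps candidate ratios, discards those that exceed current or speed limits, and keeps the one minimizing mean electrical power under a simple DC-motor model.

```python
# Minimal sketch (hypothetical values) of gear-ratio optimization: sweep the
# ratio, compute electrical power from a simple DC-motor model, and reject
# ratios that violate the motor's current or speed limits.
import numpy as np

kt = 0.05        # torque constant, N*m/A (assumed)
R = 0.4          # winding resistance, ohm (assumed)
J_m = 2e-5       # rotor inertia, kg*m^2 (assumed)
i_max, w_max = 30.0, 800.0           # current (A) and speed (rad/s) limits

t = np.linspace(0, 1, 500)
w_load = 6.0 * np.sin(2 * np.pi * t)              # load speed trajectory
a_load = np.gradient(w_load, t)                   # load acceleration
tau_load = 8.0 * np.sin(2 * np.pi * t + 0.5)      # load torque, N*m

best = None
for N in np.linspace(5, 120, 400):                # candidate gear ratios
    w_m = N * w_load
    tau_m = tau_load / N + J_m * N * a_load       # reflected load + rotor inertia
    i = tau_m / kt
    if np.max(np.abs(i)) > i_max or np.max(np.abs(w_m)) > w_max:
        continue                                  # violates motor constraints
    P = np.mean(i ** 2 * R + kt * i * w_m)        # copper loss + mechanical power
    if best is None or P < best[1]:
        best = (N, P)

print("best ratio %.1f, mean electrical power %.2f W" % best)
```
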
Contributors: Cahill, Nathan M (Author) / Sugar, Thomas (Thesis advisor) / Ren, Yi (Thesis advisor) / Holgate, Matthew (Committee member) / Berman, Spring (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017

Description

An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography provides a non-destructive means of microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation step before microstructural quantification can be conducted, which can be quite time-consuming and computationally intensive.

In this thesis, a novel procedure is first presented that allows one to extract key structural information, in the form of spatial correlation functions, directly from limited x-ray tomography data. The key component of the procedure is the computation of a “probability map”, which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained.
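
Once the probability map yields phase volume fractions, effective-medium theory can bracket the elastic moduli. Below is one standard choice, the Hashin-Shtrikman bounds on the effective bulk modulus of a two-phase medium; the moduli values are hypothetical, and the thesis may use a different effective-medium estimate.

```python
# Minimal sketch: Hashin-Shtrikman bounds on the effective bulk modulus from
# a phase volume fraction read off a (stand-in) probability map. Moduli in GPa
# are assumed values for illustration; phase 1 is the softer phase.
import numpy as np

prob_map = np.random.default_rng(4).random((64, 64))   # stand-in probability map
f2 = float(np.mean(prob_map > 0.7))                    # volume fraction, phase 2
f1 = 1.0 - f2

K1, G1 = 10.0, 5.0      # softer phase (bulk, shear moduli)
K2, G2 = 80.0, 40.0     # stiffer phase

K_lower = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
K_upper = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))
print(f"effective bulk modulus in [{K_lower:.1f}, {K_upper:.1f}] GPa")
```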

Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20-40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both x-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data sets effectively, indicating a highly efficient use of limited structural information.

Finally, the accuracy of the stochastic reconstruction procedure using limited x-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, indicating a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape also provides information about the complexity and convergence behavior of the reconstruction for a given microstructure and projection number.
Contributors: Li, Hechao (Author) / Jiao, Yang (Thesis advisor) / Chawla, Nikhilesh (Committee member) / Liu, Yongming (Committee member) / Ren, Yi (Committee member) / Mu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2017

Description

In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport process and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have arisen as an alternative, solving physical problems in a computationally efficient manner without requiring iterative computation of the governing physical equations. However, research on data-driven approaches for convective heat transfer is still at a nascent stage. This study aims to introduce data-driven approaches for modeling heat and mass convection phenomena.

As a first step, this research explores a deep learning approach for modeling internal forced-convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution based on a graphical input describing fluid channel geometries and initial flow conditions. The trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. The optimized cGAN model exhibits an accuracy of up to 97.6% when predicting the local distributions of Nu and f.

Next, this research introduces a deep learning based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for a mixed convection occurring at a Reynolds number of 100, a Rayleigh number of 3.9E6, and a Richardson number of 88.8. The cGAN with a PatchGAN-based classifier and without strided convolutions infers the temperature map with the best clarity and accuracy.

Finally, this study investigates how machine learning can analyze mass transfer in 3D-printed fluidic devices. A random forests algorithm is employed to classify flow images taken from semi-transparent 3D-printed tubes. In particular, this work focuses on the laminar-turbulent transition process occurring in a 3D wavy tube and a straight tube, visualized by dye injection. The machine learning model automatically classifies experimentally obtained flow images with an accuracy > 0.95.
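
As an illustration of the final task (with synthetic stand-in data, since the dye-visualization images are not reproduced here), the sketch below trains a random forest to separate "laminar" from "turbulent" images, using pixel noise level as a crude proxy for turbulence.

```python
# Minimal sketch (synthetic stand-in data) of flow-regime classification:
# a random forest labels flattened flow images as laminar or turbulent.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, h, w = 400, 32, 32
smooth = rng.normal(0.5, 0.02, size=(n // 2, h * w))   # "laminar" images
rough = rng.normal(0.5, 0.20, size=(n // 2, h * w))    # "turbulent" images
X = np.vstack([smooth, rough])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
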
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022

Description

Uncertainty quantification is critical for engineering design and analysis. Determining appropriate ways of dealing with uncertainties has been a constant challenge in engineering. Statistical methods provide a powerful aid in describing and understanding uncertainties. Among all statistical methods, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, with the mechanical properties of materials, both static and fatigue, as the main engineering application. This work can be summarized as follows.

First, maintaining the safety of vintage pipelines requires accurately estimating their strength. The objective is to predict the reliability-based strength using nondestructive multimodal surface information. Bayesian model averaging (BMA) is implemented to fuse multimodal non-destructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation.

Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to deal with sparse-data problems for fatigue S-N curves.

The third objective is to develop a physics-guided machine learning model that overcomes the limitations of parametric regression models and classical machine learning models for fatigue data analysis. A Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation. This model is further developed for missing-data and arbitrary-output-distribution problems.

Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve a required accuracy at a reasonable computational cost. The fourth objective is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models.

Finally, conclusions are drawn and future work is outlined based on the current study.
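
A minimal sketch of the multi-fidelity idea is shown below: learn the correlation between a cheap low-fidelity model and scarce high-fidelity data by regressing the high-fidelity output on the input augmented with the low-fidelity prediction. The toy functions and the gradient-boosting regressor are assumptions, not the dissertation's architecture.

```python
# Minimal sketch (toy functions): multi-fidelity regression that learns
# y_high(x) from (x, y_low(x)) using only a few high-fidelity samples.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
y_lo = lambda x: np.sin(8 * x)                      # cheap, biased model
y_hi = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x      # expensive "truth"

x_hi = rng.random(20)                               # only 20 high-fidelity runs
X = np.column_stack([x_hi, y_lo(x_hi)])             # augment inputs with y_low
model = GradientBoostingRegressor().fit(X, y_hi(x_hi))

x_new = np.linspace(0, 1, 5)
pred = model.predict(np.column_stack([x_new, y_lo(x_new)]))
print(np.max(np.abs(pred - y_hi(x_new))))           # small gap from few samples
```
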
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022

Description

Deep neural network-based methods have been proven to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational costs remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies a dynamic hardware and software approach to this issue, in which characteristics of real-world applications facilitate dynamic adjustment and reduce computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features only when needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features if higher accuracy is desired. To improve performance, ARGOS distills temporal knowledge from deep features to the shallow system, and it further reduces computation by focusing on regions of interest. Response time and mean average precision are used to evaluate the proposed ARGOS system.
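
A minimal sketch of confidence-gated model switching of the kind ARGOS performs is shown below; the toy models and the threshold are placeholders, not ARGOS's actual design.

```python
# Minimal sketch (toy models): run a shallow model first and fall back to a
# deep model only when the shallow prediction is not confident enough.
import torch
import torch.nn as nn

shallow = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
deep = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                     nn.ReLU(), nn.Linear(256, 10))

@torch.no_grad()
def predict(x, conf_threshold=0.8):
    probs = torch.softmax(shallow(x), dim=1)
    conf, label = probs.max(dim=1)
    if conf.item() >= conf_threshold:
        return label.item(), "shallow"            # cheap path was confident
    probs = torch.softmax(deep(x), dim=1)         # escalate to the deep model
    return probs.argmax(dim=1).item(), "deep"

x = torch.rand(1, 3, 32, 32)                      # one toy camera frame
print(predict(x))
```
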
Contributors: Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022