Description
Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that facilitates capturing the differences among designers' problem formulations; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
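The entity-group structure described above can be sketched in code. This is a minimal illustration of a P-map-like container with within-group hierarchies, cross-group links, and count-style formulation variables; the class, method, and variable names are assumptions for illustration, not the dissertation's actual ontology schema.

```python
from collections import defaultdict

GROUPS = {"requirement", "use_scenario", "function", "artifact", "behavior", "issue"}

class PMap:
    """Toy P-map: entities in six groups, hierarchies within groups, links across groups."""
    def __init__(self):
        self.entities = {}                 # id -> (group, label)
        self.children = defaultdict(list)  # within-group hierarchy: parent id -> child ids
        self.links = []                    # cross-group (id, id) pairs

    def add_entity(self, eid, group, label, parent=None):
        assert group in GROUPS
        self.entities[eid] = (group, label)
        if parent is not None:
            self.children[parent].append(eid)

    def link(self, a, b):
        # Cross-group association, e.g. a function addressing a requirement.
        self.links.append((a, b))

    def counts(self):
        # Example formulation variables: entities per group plus link count.
        c = {g: 0 for g in GROUPS}
        for group, _ in self.entities.values():
            c[group] += 1
        c["links"] = len(self.links)
        return c

p = PMap()
p.add_entity("r1", "requirement", "portable")
p.add_entity("f1", "function", "store energy")
p.add_entity("f2", "function", "convert energy", parent="f1")
p.link("f1", "r1")
print(p.counts()["function"])  # 2
```

Variables such as per-group entity counts could then feed the kind of statistical comparisons the experiments below describe.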

Three experiments were conducted. The first experiment was to study the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do and novices are more likely to add entities in a specific order. Experts also discover more issues.

The second experiment examined how problem formulation relates to creativity. Ideation metrics were used to characterize creative outcome. Results include, among others, the following positive correlations: adding more issues in an unorganized way correlates with quantity and variety; more use scenarios and functions with novelty; more behaviors and identified conflicts with quality; and depth-first exploration with all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment tested whether problem formulation can predict creative outcome. Models built on one problem were used to predict the creativity of another, and predicted scores were compared to the assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.
Contributors: Dinar, Mahmoud (Author) / Shah, Jami J. (Thesis advisor) / Langley, Pat (Committee member) / Davidson, Joseph K. (Committee member) / Lande, Micah (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In this dissertation, three complex material systems, including a novel class of hyperuniform composite materials, cellularized collagen gel, and low melting point alloy (LMPA) composites, are investigated using statistical pattern characterization, stochastic microstructure reconstruction, and micromechanical analysis. Chapter 1 provides an introduction, with a brief review of these three material systems. Chapter 2 presents a detailed discussion of the statistical morphological descriptors and a stochastic optimization approach for microstructure reconstruction. Chapter 3 introduces the lattice particle method for micromechanical analysis of complex heterogeneous materials. Chapter 4 investigates a new class of hyperuniform heterogeneous materials with superior mechanical properties. Chapter 5 models a bio-material system, cellularized collagen gel, using correlation functions and stochastic reconstruction to study the collective dynamic behavior of the embedded tumor cells. Chapter 6 generalizes the correlation functions to generate an LMPA soft robotic system and discusses the rigidity tunability of this smart composite. Chapter 7 presents a plan for future work.
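The statistical descriptors mentioned above can be illustrated with the two-point correlation function, a standard input to stochastic reconstruction. The sketch below computes S2(r) along one axis of a binary 2D microstructure using periodic shifts; the function name and the random test microstructure are assumptions for illustration, not the dissertation's actual code or data.

```python
import numpy as np

def s2_axis(img, max_r):
    # Probability that two points a distance r apart (along axis 0)
    # both fall in the phase of interest (value 1). Periodic boundaries
    # via np.roll keep the estimator unbiased on the torus.
    img = np.asarray(img, dtype=float)
    return np.array([np.mean(img * np.roll(img, r, axis=0))
                     for r in range(max_r + 1)])

rng = np.random.default_rng(0)
micro = (rng.random((64, 64)) < 0.3).astype(int)  # ~30% phase fraction (toy sample)
s2 = s2_axis(micro, 5)
# S2(0) equals the phase volume fraction exactly
print(abs(s2[0] - micro.mean()) < 1e-12)  # True
```

A reconstruction procedure would then search for a microstructure whose S2 curve matches a target, e.g. by simulated annealing over pixel swaps.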
Contributors: Xu, Yaopengxiao (Author) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Wang, Qing Hua (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Pipeline infrastructure forms a vital part of the United States economy and standard of living. A majority of current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, raising concerns about their safety and integrity. Estimating the strength and toughness of aging pipes without interrupting transmission and operations thus becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainty; many others rely on destructive means. These gaps provide an impetus for novel methods to better characterize pipe material properties. The focus of this study is the design of a Bayesian network information fusion model for the prediction of accurate probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties obtained through experimental analysis, which are then integrated with the Bayesian network model using a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and data training of the model is employed to obtain a more accurate measure of the probabilistic pipe strength. With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) along the direction of the pipe wall thickness is studied in relation to the microstructure, and the material constants for crack growth are reported.
A combination of imaging and composition analysis is used to study the fracture surface of the fatigue specimens. Finally, several well-known statistical inference models are employed to predict manufacturing process parameters for steel pipelines. The suitability of small datasets for accurate prediction outcomes is discussed, and the models are compared for their performance.
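The crack growth rate da/dN mentioned above is conventionally fit with the Paris law, da/dN = C (ΔK)^m. As a hedged sketch of how such material constants translate into remaining fatigue life, the snippet below integrates cycles for a through-thickness crack; the constants, stress range, and geometry factor are illustrative placeholders, not the dissertation's measured values for the pipe steel.

```python
import numpy as np

def cycles_to_grow(a0, af, C, m, dS, Y=1.0, steps=10000):
    # Integrate dN = da / (C * dK^m), with stress intensity range
    # dK = Y * dS * sqrt(pi * a), from initial size a0 to final size af.
    a = np.linspace(a0, af, steps)
    dadN = C * (Y * dS * np.sqrt(np.pi * a)) ** m
    integrand = 1.0 / dadN
    # Trapezoidal integration of dN/da over a
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a)))

# Illustrative case: grow a crack from 1 mm to 10 mm under a 100 MPa stress range.
N = cycles_to_grow(a0=1e-3, af=10e-3, C=1e-11, m=3.0, dS=100.0)
print(N > 0)  # True
```

Variation of C and m through the wall thickness, as studied in the dissertation, would shift the predicted life accordingly.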
Contributors: Dahire, Sonam (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Advanced material systems refer to materials composed of multiple traditional constituents with complex microstructure morphologies, which lead to properties superior to those of conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications across a large range of engineering systems, their application to material design meets unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is learning material representations and predictive PSP mappings while managing a small data acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted. In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resulting microstructure generator is morphology-aware when trained on a small set of material samples and can effectively constrain the microstructure space during material design. In the second task, we investigate an active learning mechanism in which new samples are acquired based on their violation of a theory-driven constraint on the physics-based model.
We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization), evaluating the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived. The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
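The acquisition idea above — cheap theory-based screening before expensive labeling — can be sketched on a toy problem. Here a surrogate predicts the minimizer of f(x) = (x - c)^2, the cheap check scores the residual of the optimality condition 2(x - c) = 0, and only the worst violators are sent to the "expensive" solver. All function names and the problem itself are stand-ins for illustration, not the dissertation's topology optimization setting.

```python
import numpy as np

def surrogate(c):
    return 0.8 * c                  # imperfect learned map from parameter c to x*

def violation(c, x):
    return abs(2 * (x - c))         # cheap check: residual of the optimality condition

def expensive_solver(c):
    return c                        # exact minimizer (costly in a real pipeline)

cands = np.linspace(-1, 1, 11)
scores = [violation(c, surrogate(c)) for c in cands]
# Acquire only the top-3 constraint violators for expensive labeling.
top = np.argsort(scores)[-3:]
labels = {float(cands[i]): float(expensive_solver(cands[i])) for i in top}
print(len(labels))  # 3
```

The economic point is that `violation` is evaluated for every candidate while `expensive_solver` runs only three times.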
Contributors: Cang, Ruijin (Author) / Ren, Yi (Thesis advisor) / Liu, Yongming (Committee member) / Jiao, Yang (Committee member) / Nian, Qiong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experiences and, in times of uncertainty, use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while minimizing individual costs. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.

Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing actuation (control), which drives them towards their goal and minimizes their cost functions, are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed so that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach.

In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots could have different control preferences, for example, different actuation abilities, but are still required to coordinate and perform load transport. Control preferences for each robot are characterized using a scalar parameter θᵢ unique to the robot being considered and unknown to other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.

Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding horizon optimal control problem for control. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.

A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods and guarantees learning convergence to the true values asymptotically.

Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running LCA-2 are able to resist disturbances and balance the assumed load better compared to LCA-1.
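The learning half of LCA-1 — recursive least squares (RLS) estimating another agent's unknown preference parameter from observed state/control pairs — can be sketched for the scalar case. The observation model u = θ·x + noise and all numerical values are illustrative assumptions, not the thesis's exact dynamics.

```python
import numpy as np

def rls(xs, us, p0=100.0):
    # Scalar recursive least squares: estimate theta from pairs (x, u)
    # assuming u = theta * x + noise.
    theta, P = 0.0, p0
    for x, u in zip(xs, us):
        K = P * x / (1.0 + x * P * x)    # gain
        theta += K * (u - theta * x)     # innovation update
        P = (1.0 - K * x) * P            # covariance update
    return theta

rng = np.random.default_rng(2)
true_theta = 1.7                          # the other agent's hidden preference
xs = rng.uniform(0.5, 2.0, 50)            # observed states
us = true_theta * xs + rng.normal(0, 0.01, 50)  # observed controls (noisy)
est = rls(xs, us)
print(abs(est - true_theta) < 0.05)  # True
```

In the noise-free vector case the thesis cites, two informative observations suffice to pin down the parameter exactly, which is the "complete learning in two time steps" guarantee.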
Contributors: Kambam, Karthik (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Aging-related damage and failure in structures, such as fatigue cracking, corrosion, and delamination, are critical to structural integrity. Most engineering structures contain embedded defects, such as voids, cracks, and inclusions, from manufacturing. The properties and locations of embedded defects are generally unknown and hard to detect in complex engineering structures. Therefore, early detection of damage is beneficial for the prognosis and risk management of aging infrastructure systems.

Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for this purpose. Different types of NDT techniques have been proposed for damage detection, such as optical imaging, ultrasound waves, thermography, eddy currents, and microwaves. The focus of this study is on wave-based detection methods, which fall into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve solving the physical model; the key idea is dimension reduction of signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can induce significant uncertainty and reduce the resolution, which depends strongly on the sensing path density. Model-assisted damage detection has the opposite trade-off: it enables high-resolution imaging with a limited number of sensing paths, since the entire signal histories are used for damage identification, but it is time-consuming because it requires solving the inverse wave propagation problem, which is especially true for large 3D structures.

The motivation of the proposed method is to develop an efficient and accurate model-based damage imaging technique with limited data, with special focus on the efficiency of the damage imaging algorithm, the major bottleneck of the model-assisted approach. The computational efficiency is achieved by two complementary components. First, a fast forward wave propagation solver is developed, which is verified against the classical Finite Element Method (FEM) solution and is 10-20 times faster. Next, an efficient inverse wave propagation algorithm is proposed. Classical gradient-based optimization algorithms usually require the finite difference method for gradient calculation, which is prohibitively expensive for large numbers of degrees of freedom. An adjoint method-based optimization algorithm is proposed, which avoids the repetitive finite difference calculations for every imaging variable. Superior computational efficiency is thus achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage imaging model is proposed to include the interaction between the PZT transducer and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple damage sites simultaneously and efficiently, which is promising for application to complex large-scale engineering structures.
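The scaling advantage of the adjoint method can be seen on a toy linear inverse problem. For a forward model u = G·m and misfit J = ||u - d||², the adjoint-style gradient 2·Gᵀ(G·m - d) costs one transpose application, while finite differences cost one extra forward solve per imaging variable. G, d, and m below are random toy data, not a wave propagation model.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(30, 8))   # toy forward operator (stand-in for the wave solver)
d = rng.normal(size=30)        # observed data
m = rng.normal(size=8)         # imaging variables

def J(m):
    r = G @ m - d
    return r @ r               # least-squares misfit

# Adjoint gradient: one application of G transpose to the residual.
grad_adjoint = 2 * G.T @ (G @ m - d)

# Finite-difference gradient: one extra forward evaluation per variable.
eps = 1e-6
grad_fd = np.array([(J(m + eps * np.eye(8)[i]) - J(m)) / eps
                    for i in range(8)])

print(np.allclose(grad_adjoint, grad_fd, atol=1e-3))  # True
```

With thousands of imaging variables in a 3D structure, the per-variable forward solves of finite differences become the dominant cost the adjoint approach removes.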
Contributors: Chang, Qinan (Author) / Liu, Yongming (Thesis advisor) / Mignolet, Marc (Committee member) / Chattopadhyay, Aditi (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The purpose of this study was to observe the effectiveness of the phenylalanyl arginine β-naphthylamide dihydrochloride inhibitor and Tween 20 when combined with an antibiotic against Escherichia coli. As antibiotic resistance becomes more and more prevalent, it is necessary to think outside the box and do more than simply increase the dosage of currently prescribed antibiotics. This study attempted to combat two forms of antibiotic resistance. The first is the AcrAB efflux pump, which is able to pump antibiotics out of the cell. The second is the biofilms that E. coli can form. By using an inhibitor, the pump should be unable to rid the cell of an antibiotic. Using Tween, on the other hand, allows biofilm formation to be disrupted or existing biofilm to be dissolved. By combining these two chemicals with an antibiotic that the efflux pump is known to expel, low concentrations of each chemical should have an effect on the bacteria equivalent to or greater than that of any one chemical at higher concentrations. To test this hypothesis, a 96-well plate BEC screen test was performed. A range of antibiotics was used at various concentrations, with varying concentrations of both Tween and the inhibitor, to find a starting point. Following this, erythromycin and ciprofloxacin were picked as the best candidates, and the optimum ranges of the antibiotic, Tween, and inhibitor were established. Finally, all three chemicals were combined to observe the effects they had together as opposed to individually or in pairs. From the results of this experiment, several conclusions were made. First, the inhibitor did in fact increase the effectiveness of the antibiotic, as less antibiotic was needed when the inhibitor was present. Second, Tween prevented recovery in the MBEC reading, showing that it has the ability to disrupt or dissolve biofilms. However, Tween also showed a noticeable decrease in the effectiveness of the overall treatment. This negative interaction could not be compensated for by the inhibitor, and so the hypothesis was proven false, as combining the three chemicals led to a less effective treatment method.
Contributors: Petrovich Flynn, Chandler James (Author) / Misra, Rajeev (Thesis director) / Bean, Heather (Committee member) / Perkins, Kim (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
An in-depth analysis of the effects vortex generators have on the boundary layer separation that occurs when an internal flow passes through a diffuser is presented. By understanding the effects vortex generators have on the boundary layer, they can be utilized to improve the performance and efficiency of diffusers and other internal flow applications. An experiment was constructed to acquire physical data that could assess the change in performance of the diffusers once vortex generators were applied. The experiment consisted of pushing air through rectangular diffusers with half angles of 10, 20, and 30 degrees. A velocity distribution model was created for each diffuser without vortex generators and then again with vortex generators applied, allowing the two results to be directly compared and the improvements to be quantified. This was done by using the velocity distribution model to find the partial mass flow rate through the outer portion of the diffuser's cross-sectional area. The analysis concluded that the vortex generators noticeably increased the performance of the diffusers, best seen in the 30-degree diffuser. Initially, this diffuser experienced airflow velocities near zero toward the edges, so that only 0.18% of the mass flow occurred in the outer one-fourth of the cross-sectional area. With the application of vortex generators, this percentage increased to 5.7%. The 20-degree diffuser improved from 2.5% to 7.9% of the total mass flow in the outer portion, and the 10-degree diffuser improved from 11.9% to 19.2%. These results demonstrate an increase in performance from the addition of vortex generators, while leaving room for further investigation into the design and configuration of the vortex generators.
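The quoted metric — the fraction of mass flow passing through the outer quarter of the cross-section — can be sketched from a sampled velocity distribution. The parabolic profile below is an illustrative stand-in for an attached flow, not the measured diffuser data, and the 2D-strip geometry is a simplifying assumption.

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal integration (kept explicit for clarity).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def outer_quarter_fraction(y, u):
    # y: positions across the half-height, normalized to [0, 1] (wall at y = 1);
    # u: velocity samples at y. Fraction of flow in the outer quarter strip.
    total = trapz(u, y)
    mask = y >= 0.75
    outer = trapz(u[mask], y[mask])
    return outer / total

y = np.linspace(0.0, 1.0, 401)
u = 1.0 - y**2                    # illustrative parabolic-like attached profile
frac = outer_quarter_fraction(y, u)
print(0.0 < frac < 0.25)  # True
```

A separated profile, with near-zero velocity at the wall, would push this fraction toward the small values reported for the 30-degree diffuser without vortex generators.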
Contributors: Sanchez, Zachary Daniel (Author) / Takahashi, Timothy (Thesis director) / Herrmann, Marcus (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The objective of this project was to design an electrically driven centrifugal pump for the Daedalus Astronautics @ASU hybrid rocket engine (HRE). The pump design was purposefully simplified due to time, fabrication, calculation, and capability constraints, resulting in a lower-fidelity design with the option to be improved later. The impeller, shroud, volute, shaft, motor, and electronic speed controller (ESC) were the main focuses of the pump assembly, while the seals, bearings, lubrication methods, and flow path connections were considered elements requiring future attention. The resulting pump design is intended to be used on the Daedalus Astronautics HRE test cart for design verification. In the future, trade studies and more detailed analyses should, and will, be performed before this pump is integrated into the Daedalus Astronautics flight-ready HRE.
Contributors: Shillingburg, Ryan Carl (Author) / White, Daniel (Thesis director) / Brunacini, Lauren (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Human habitation of other planets requires both cost-effective transportation and low time-of-flight for human passengers and critical supplies. Current methods for interplanetary orbital transfers, such as the Hohmann transfer, require either expensive, high-fuel maneuvers or extended space travel. However, by utilizing the high velocities of a super-geosynchronous space elevator, spacecraft released from an apex anchor could achieve interplanetary transfers with minimal Delta-V (fuel) and time-of-flight requirements. Using Lambert's Problem and free-release propagation to determine the minimal-fuel transfer from a terrestrial space elevator to Mars under a variety of initial conditions and time-of-flight constraints, this paper demonstrates that a space elevator release can address both needs by dramatically reducing the time-of-flight and the fuel budget.
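The baseline the paper improves on can be sketched numerically: the classic two-burn Hohmann transfer Delta-V between circular heliocentric orbits at Earth's and Mars's mean distances. The constants are standard published values; this is the comparison case only, not the space-elevator release itself.

```python
import math

MU_SUN = 1.32712440018e11   # Sun's gravitational parameter, km^3/s^2
R_EARTH = 1.495978707e8     # mean heliocentric distance of Earth, km (1 AU)
R_MARS = 2.279e8            # mean heliocentric distance of Mars, km

def hohmann_dv(mu, r1, r2):
    # Total Delta-V for a two-burn Hohmann transfer between circular orbits.
    a = 0.5 * (r1 + r2)                           # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                       # circular speed at r1
    v2 = math.sqrt(mu / r2)                       # circular speed at r2
    dv1 = math.sqrt(mu * (2 / r1 - 1 / a)) - v1   # departure burn (to perihelion speed)
    dv2 = v2 - math.sqrt(mu * (2 / r2 - 1 / a))   # arrival burn (from aphelion speed)
    return dv1 + dv2

dv = hohmann_dv(MU_SUN, R_EARTH, R_MARS)
print(5.0 < dv < 6.5)  # True; roughly 5.6 km/s for Earth-Mars
```

A release from a fast-moving apex anchor effectively supplies much of the departure burn for free, which is the mechanism behind the reduced fuel budget the paper reports.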
Contributors: Torla, James (Author) / Peet, Matthew (Thesis director) / Swan, Peter (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05