Matching Items (544)
Description

Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that captures differences in designers' problem formulation; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
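To make the P-maps structure concrete, the following is a minimal Python sketch of one way the six entity groups, within-group hierarchies, and cross-group links could be represented; the class names, fields, and example entities are illustrative assumptions, not constructs from the dissertation.

```python
from dataclasses import dataclass, field

# The six P-maps entity groups named in the abstract.
GROUPS = {"requirement", "use_scenario", "function", "artifact", "behavior", "issue"}

@dataclass
class Entity:
    name: str
    group: str                                   # one of GROUPS
    parent: "Entity | None" = None               # within-group hierarchy
    links: list = field(default_factory=list)    # cross-group links

class PMap:
    """Toy container for a designer's problem formulation."""
    def __init__(self):
        self.entities = []

    def add(self, name, group, parent=None):
        assert group in GROUPS
        e = Entity(name, group, parent)
        self.entities.append(e)
        return e

    def link(self, a, b):
        a.links.append(b)
        b.links.append(a)

    def counts(self):
        # Simple formulation variables: entity count per group.
        return {g: sum(e.group == g for e in self.entities) for g in GROUPS}

# Hypothetical fragment of a formulation for a coffee maker.
pmap = PMap()
req = pmap.add("brew 1 L in under 5 min", "requirement")
fun = pmap.add("heat water", "function")
art = pmap.add("heating element", "artifact")
pmap.link(req, fun)
pmap.link(fun, art)
print(pmap.counts())
```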

Three experiments were conducted. The first experiment studied the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do, and that novices are more likely to add entities in a specific order. Experts also discover more issues.

The second experiment examined how problem formulation relates to creativity. Ideation metrics were used to characterize creative outcome. Results include, among others, positive correlations between adding more issues in an unorganized way and both quantity and variety, between more use scenarios and functions and novelty, between more behaviors and conflicts identified and quality, and between depth-first exploration and all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment tested whether problem formulation can predict creative outcome. Models based on one problem were used to predict the creativity of another. Predicted scores were compared to assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.
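As a rough illustration of this kind of cross-problem prediction, the sketch below fits a linear model of an ideation metric on formulation variables from one problem, applies a simple p-value-based backward elimination, and evaluates it on a second problem. The synthetic data, predictor layout, and elimination criterion are assumptions; the dissertation's actual models and data are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic formulation variables (e.g., issue, function, scenario counts)
# and a synthetic "novelty" score for problem A; problem B is held out.
X_A = rng.normal(size=(40, 3))
y_A = 1.5 * X_A[:, 0] + 0.2 * rng.normal(size=40)
X_B = rng.normal(size=(40, 3))
y_B = 1.5 * X_B[:, 0] + 0.2 * rng.normal(size=40)

def backward_eliminate(X, y, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    cols = list(range(X.shape[1]))
    while cols:
        model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals = model.pvalues[1:]              # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] < alpha:
            return model, cols
        cols.pop(worst)
    raise ValueError("all predictors eliminated")

model, kept = backward_eliminate(X_A, y_A)
y_pred = model.predict(sm.add_constant(X_B[:, kept]))
r = np.corrcoef(y_pred, y_B)[0, 1]
print(f"kept predictors {kept}, cross-problem correlation {r:.2f}")
```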

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.
Contributors: Dinar, Mahmoud (Author) / Shah, Jami J. (Thesis advisor) / Langley, Pat (Committee member) / Davidson, Joseph K. (Committee member) / Lande, Micah (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

In this dissertation, three complex material systems are investigated: a novel class of hyperuniform composite materials, cellularized collagen gels, and low melting point alloy (LMPA) composites. They are studied using statistical pattern characterization, stochastic microstructure reconstruction, and micromechanical analysis. In Chapter 1, an introduction is provided, including a brief review of the three material systems. In Chapter 2, a detailed discussion of the statistical morphological descriptors and a stochastic optimization approach for microstructure reconstruction is presented. In Chapter 3, the lattice particle method for micromechanical analysis of complex heterogeneous materials is introduced. In Chapter 4, a new class of hyperuniform heterogeneous materials with superior mechanical properties is investigated. In Chapter 5, a biomaterial system, the cellularized collagen gel, is modeled using correlation functions and stochastic reconstruction to study the collective dynamic behavior of the embedded tumor cells. In Chapter 6, an LMPA soft robotic system is generated by generalizing the correlation functions, and the rigidity tunability of this smart composite is discussed. In Chapter 7, a plan for future work is presented.
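For readers unfamiliar with correlation-function-based reconstruction, the sketch below computes a simple two-point correlation of a binary microstructure and performs pixel-swap steps of a Yeong-Torquato-style annealing loop. It is a schematic illustration under assumed simplifications (correlation along one axis only, a random target), not the procedure used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def s2_1d(img, max_r):
    """Two-point correlation S2(r) of the '1' phase, sampled along rows."""
    s2 = []
    for r in range(max_r):
        rolled = np.roll(img, -r, axis=1)   # periodic shift in the row direction
        s2.append(np.mean(img * rolled))
    return np.array(s2)

# Target microstructure (here just a random binary field for illustration).
target = (rng.random((64, 64)) < 0.3).astype(float)
s2_target = s2_1d(target, 16)

# Start the reconstruction from a random image with the same volume fraction.
recon = rng.permutation(target.ravel()).reshape(target.shape)

def energy(img):
    return np.sum((s2_1d(img, 16) - s2_target) ** 2)

def swap_step(img, T=1e-4):
    """One Metropolis-style phase-swap step of a simulated-annealing reconstruction."""
    ones = np.argwhere(img == 1)
    zeros = np.argwhere(img == 0)
    i = ones[rng.integers(len(ones))]
    j = zeros[rng.integers(len(zeros))]
    trial = img.copy()
    trial[tuple(i)], trial[tuple(j)] = 0, 1
    dE = energy(trial) - energy(img)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        return trial
    return img

for _ in range(100):
    recon = swap_step(recon)
print("final S2 mismatch:", energy(recon))
```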
Contributors: Xu, Yaopengxiao (Author) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Wang, Qing Hua (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

The Multiple Antibiotic Resistance Regulator (MarR) family consists of transcriptional regulators, many of which form dimers. Transcriptional regulation provides bacteria with a stable response system that allows them to adapt efficiently to different environmental conditions. The main function of the MarR family is to create multiple antibiotic resistance from a mutated protein; this process occurs when a MarR regulator controls an operon. We hypothesized that different transcriptional regulator genes interact with each other. It is known that Salmonella pagC transcription is activated by three regulators: SlyA, MprA, and PhoP. The Bacterial Adenylate Cyclase-based Two-Hybrid (BACTH) system was used to study protein-protein interactions among SlyA, MprA, and PhoP as heterodimers and homodimers in vivo. Two fragments, T25 and T18, which lack endogenous adenylate cyclase activity, were used to construct chimeric proteins, and the reconstitution of adenylate cyclase activity was tested. Significant adenylate cyclase activity showed that SlyA is able to form homodimers. However, the weak adenylate cyclase activity observed in this study indicates that MprA and PhoP are not likely to form homodimers, and no protein-protein interactions were detected between SlyA, MprA, and PhoP, suggesting that no heterodimers form among the three transcriptional regulators.
Contributors: Tao, Zenan (Author) / Shi, Yixin (Thesis advisor) / Wang, Xuan (Committee member) / Bean, Heather (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Mycobacterium tuberculosis (Mtb), the causative agent of tuberculosis, is the 10th leading cause of death worldwide. The prevalence of drug-resistant clinical isolates and the paucity of newly approved antituberculosis drugs impede the successful eradication of Mtb. Bacteria commonly use two-component systems (TCS) to sense their environment and genetically modulate adaptive responses. The prrAB TCS is essential in Mtb, thus representing an auspicious drug target; however, the inability to generate an Mtb ΔprrAB mutant complicates investigating how this TCS contributes to pathogenesis. Mycobacterium smegmatis, a commonly used M. tuberculosis genetic surrogate, was used here. This work shows that prrAB is not essential in M. smegmatis. During ammonium stress, the ΔprrAB mutant excessively accumulates triacylglycerol lipids, a phenotype associated with M. tuberculosis dormancy and chronic infection. Additionally, triacylglycerol biosynthetic genes were induced in the ΔprrAB mutant relative to the wild-type and complementation strains during ammonium stress. Next, RNA-seq was used to define the M. smegmatis PrrAB regulon. PrrAB regulates genes participating in respiration, metabolism, redox balance, and oxidative phosphorylation. The M. smegmatis ΔprrAB mutant is compromised for growth under hypoxia, is hypersensitive to cyanide, and fails to induce high-affinity respiratory genes during hypoxia. Furthermore, PrrAB positively regulates the hypoxia-responsive dosR TCS response regulator, potentially explaining the hypoxia-mediated growth defects of the ΔprrAB mutant. Despite inducing genes encoding the F1F0 ATP synthase, the ΔprrAB mutant accumulates significantly less ATP during aerobic, exponential growth compared to the wild-type and complementation strains. Finally, the M. smegmatis ΔprrAB mutant exhibited growth impairment in media containing gluconeogenic carbon sources. M. tuberculosis mutants unable to utilize these substrates fail to establish chronic infection, suggesting that PrrAB may regulate Mtb central carbon metabolism in response to chronic infection. In conclusion, 1) prrAB is not universally essential in mycobacteria; 2) M. smegmatis PrrAB regulates genetic responsiveness to nutrient and oxygen stress; and 3) PrrAB may provide feed-forward control of the DosRS TCS and dormancy phenotypes. The data generated in these studies provide insight into the mycobacterial PrrAB TCS transcriptional regulon, PrrAB essentiality in Mtb, and how PrrAB may mediate stresses encountered by Mtb during the transition to chronic infection.
Contributors: Maarsingh, Jason (Author) / Haydel, Shelley E (Thesis advisor) / Roland, Kenneth (Committee member) / Sandrin, Todd (Committee member) / Bean, Heather (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

One out of ten women has a difficult time getting or staying pregnant in the United States. Recent studies have identified aging as one of the key factors contributing to a decline in female reproductive health. Existing fertility diagnostic methods do not allow for the non-invasive monitoring of hormone levels over time. In recent years, olfactory sensing has emerged as a promising diagnostic tool because of its potential for real-time, non-invasive monitoring. This technology has proven promising in the areas of oncology, diabetes, and neurological disorders. Little work, however, has addressed the use of olfactory sensing with respect to female fertility. In this work, we perform a study on ten healthy female subjects to determine the volatile signature in biological samples across 28 days and its correlation with fertility hormones. Volatile organic compounds (VOCs) present in the air above the biological sample, or headspace, were collected by solid phase microextraction (SPME), using a 50/30 µm divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) coated fiber. Samples were analyzed using comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). A regression model was used to identify key analytes corresponding to the fertility hormones estrogen and progesterone. Results indicate shifts in volatile signatures in biological samples across the 28 days that are relevant to hormonal changes. Further work includes evaluating metabolic changes in volatile hormone expression as an early indicator of declining fertility, so that women may one day be able to monitor their reproductive health in real time as they age.
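The abstract does not specify the regression used, so as a hedged illustration of how key analytes might be selected from a GC×GC-TOFMS peak table, the sketch below fits a sparse (lasso) regression of a hormone level on peak areas and reports the non-zero coefficients. The synthetic peak table and the choice of lasso are assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)

# Synthetic stand-in for a peak table: 28 daily samples x 50 VOC peak areas.
n_days, n_peaks = 28, 50
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_days, n_peaks))
# Synthetic "progesterone" level driven by two of the peaks plus noise.
hormone = 2.0 * X[:, 3] - 1.0 * X[:, 17] + rng.normal(scale=0.5, size=n_days)

model = LassoCV(cv=5).fit(X, hormone)
selected = np.flatnonzero(model.coef_)
print("peaks selected as key analytes:", selected)
print("R^2 on training data:", model.score(X, hormone))
```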
Contributors: Ong, Stephanie (Author) / Smith, Barbara (Thesis advisor) / Bean, Heather (Committee member) / Plaisier, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Pipeline infrastructure forms a vital aspect of the United States economy and standard of living. A majority of the current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, thereby raising concerns about their safety and integrity. Testing aging pipes to estimate strength and toughness without interrupting transmission and operations thus becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainties, while many others appear to rely on destructive means. These gaps provide an impetus for novel methods to better characterize pipe material properties. The focus of this study is the design of a Bayesian network information fusion model for the prediction of accurate probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties obtained through experimental analysis, which are then integrated with the Bayesian network model using a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and data training of the model is employed to obtain a more accurate measure of the probabilistic pipe strength. With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) along the direction of the pipe wall thickness is studied in relation to the microstructure, and the material constants for crack growth are reported. A combination of imaging and composition analysis is used to study the fracture surface of the fatigue specimens. Finally, some well-known statistical inference models are employed for the prediction of manufacturing process parameters for steel pipelines. The suitability of the small datasets for accurate prediction outcomes is discussed, and the models are compared for their performance.
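To give a flavor of the Bayesian updating step described above (the dissertation's actual network structure and data are not shown), the following is a minimal Metropolis MCMC sketch that fuses a prior on yield strength with hardness-derived strength estimates; the prior, likelihood, and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical strength estimates (MPa) converted from hardness measurements.
observations = np.array([352.0, 341.0, 360.0, 348.0])
obs_sigma = 15.0                      # assumed measurement scatter

def log_prior(strength):
    # Weakly informative prior centered on an assumed vintage-pipe handbook value.
    return -0.5 * ((strength - 330.0) / 40.0) ** 2

def log_likelihood(strength):
    return -0.5 * np.sum(((observations - strength) / obs_sigma) ** 2)

def log_post(strength):
    return log_prior(strength) + log_likelihood(strength)

# Random-walk Metropolis sampling of the posterior strength.
samples, current = [], 330.0
for _ in range(20000):
    proposal = current + rng.normal(scale=5.0)
    if np.log(rng.random()) < log_post(proposal) - log_post(current):
        current = proposal
    samples.append(current)

posterior = np.array(samples[5000:])  # discard burn-in
print(f"posterior strength: {posterior.mean():.1f} +/- {posterior.std():.1f} MPa")
```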
Contributors: Dahire, Sonam (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Advanced material systems refer to materials that are composed of multiple traditional constituents but with complex microstructure morphologies, which lead to properties superior to those of conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications in a large range of engineering systems, their application to material design meets unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is the learning of material representations and predictive PSP mappings while managing a small data acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted. In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss to the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples and can effectively constrain the microstructure space during material design. In the second task, we investigate an active learning mechanism where new samples are acquired based on their violation of a theory-driven constraint on the physics-based model. We demonstrate using a topology optimization case that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization processes), the evaluation of the constraint can be far more affordable (e.g., checking whether a solution is optimal or in equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived. The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
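As a hedged sketch of how a morphology term might be added to a variational autoencoder objective (the dissertation's actual loss is not reproduced here), the snippet below augments the standard reconstruction-plus-KL loss with penalties on volume fraction and a crude two-point statistic; the statistics, weights, and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def morphology_loss(x, x_hat):
    # Penalize mismatch in volume fraction between input and reconstruction.
    vf_err = (x_hat.mean(dim=(1, 2, 3)) - x.mean(dim=(1, 2, 3))) ** 2
    # Crude directional two-point statistic: mean product of horizontal neighbors.
    s2 = lambda img: (img[..., :, :-1] * img[..., :, 1:]).mean(dim=(1, 2, 3))
    s2_err = (s2(x_hat) - s2(x)) ** 2
    return (vf_err + s2_err).mean()

def vae_loss(x, x_hat, mu, logvar, beta=1.0, gamma=10.0):
    # Standard VAE terms plus the assumed morphology penalty.
    recon = F.binary_cross_entropy(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl + gamma * morphology_loss(x, x_hat)

# Smoke test with random tensors shaped like a batch of 64x64 binary micrographs.
x = (torch.rand(8, 1, 64, 64) > 0.7).float()
x_hat = torch.rand(8, 1, 64, 64)          # stand-in for decoder output in (0, 1)
mu, logvar = torch.zeros(8, 16), torch.zeros(8, 16)
print(vae_loss(x, x_hat, mu, logvar))
```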
Contributors: Cang, Ruijin (Author) / Ren, Yi (Thesis advisor) / Liu, Yongming (Committee member) / Jiao, Yang (Committee member) / Nian, Qiong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experiences, and in times of uncertainty use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while minimizing individual costs. Information asymmetry refers to situations where interacting agents have no knowledge or only partial knowledge of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.

Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms that agents use for computing the actuation (control) that drives them towards their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed in such a way that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the event/point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state that the multi-agent system as a whole is trying to reach.

In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots could have different control preferences, for example, different actuation abilities, but are still required to coordinate and perform load transport. Control preferences for each robot are characterized using a scalar parameter θ_i unique to the robot being considered and unknown to other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.

Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding horizon optimal control problem for control. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.

A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite-horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods and guarantees learning convergence to true values asymptotically.

Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running LCA-2 are able to resist disturbances and balance the assumed load better compared to LCA-1.
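As a hedged illustration of the learning side of this setup (not the thesis's exact formulation), the sketch below uses recursive least squares to estimate another robot's scalar control-preference parameter θ from observed state-control pairs, assuming a simple linear policy u = -θx plus noise; the model and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed observation model: the other robot applies u_k = -theta * x_k + noise.
theta_true = 2.3
x = rng.normal(size=50)                            # observed states
u = -theta_true * x + 0.05 * rng.normal(size=50)   # observed controls

# Recursive least squares estimate of theta from (x_k, u_k) pairs.
theta_hat, P = 0.0, 100.0            # initial guess and covariance
for x_k, u_k in zip(x, u):
    phi = -x_k                        # regressor for u_k = phi * theta
    K = P * phi / (1.0 + phi * P * phi)   # gain
    theta_hat += K * (u_k - phi * theta_hat)
    P = (1.0 - K * phi) * P

print(f"true theta {theta_true:.2f}, RLS estimate {theta_hat:.3f}")
```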
Contributors: KAMBAM, KARTHIK (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Aging-related damage and failure in structures, such as fatigue cracking, corrosion, and delamination, are critical for structural integrity. Most engineering structures have embedded defects, such as voids, cracks, and inclusions from manufacturing. The properties and locations of embedded defects are generally unknown and hard to detect in complex engineering structures. Therefore, early detection of damage is beneficial for the prognosis and risk management of aging infrastructure systems.

Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for this purpose. Different types of NDT techniques have been proposed for damage detection, such as optical imaging, ultrasonic waves, thermography, eddy currents, and microwaves. The focus of this study is on wave-based detection methods, which are grouped into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve the solution of a physical model. The key idea is the dimension reduction of signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can induce significant uncertainties and reduce the resolution. The resolution of the feature-based approach highly depends on the density of sensing paths. Model-assisted damage detection is the opposite: it is capable of high-resolution imaging with a limited number of sensing paths, since the entire signal histories are used for damage identification. However, model-based methods are time-consuming due to the requirement for an inverse wave propagation solution, which is especially true for large 3D structures.

The motivation of the proposed method is to develop an efficient and accurate model-based damage imaging technique with limited data. The special focus is on the efficiency of the damage imaging algorithm, as it is the major bottleneck of the model-assisted approach. The computational efficiency is achieved by two complementary components. First, a fast forward wave propagation solver is developed, which is verified against the classical finite element method (FEM) solution and is 10-20 times faster. Next, an efficient inverse wave propagation algorithm is proposed. Classical gradient-based optimization algorithms usually require the finite difference method for gradient calculation, which is prohibitively expensive for a large number of degrees of freedom. An adjoint method-based optimization algorithm is proposed, which avoids the repetitive finite difference calculations for every imaging variable. Thus, superior computational efficiency can be achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage imaging model is proposed to include the interaction between the PZT transducers and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple damage sites simultaneously and efficiently, which is promising for application to complex, large-scale engineering structures.
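To illustrate why an adjoint gradient scales better than finite differences (a generic linear example, not the dissertation's wave-propagation solver), the sketch below computes the gradient of a misfit J(u(m)), where u solves A(m)u = f, using a single adjoint solve, and checks it against finite differences that require one extra forward solve per parameter; the toy model A(m) = A0 + diag(m) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5                                   # number of model parameters / unknowns

# Assumed forward model: A(m) u = f with A(m) = A0 + diag(m).
A0 = np.eye(n) * 4.0 + rng.normal(scale=0.1, size=(n, n))
f = rng.normal(size=n)
u_obs = rng.normal(size=n)              # "measured" response

def forward(m):
    return np.linalg.solve(A0 + np.diag(m), f)

def misfit(m):
    r = forward(m) - u_obs
    return 0.5 * r @ r

def adjoint_gradient(m):
    A = A0 + np.diag(m)
    u = np.linalg.solve(A, f)
    lam = np.linalg.solve(A.T, u - u_obs)     # single adjoint solve
    # With dA/dm_i = e_i e_i^T, dJ/dm_i = -lam_i * u_i.
    return -lam * u

m = rng.normal(scale=0.1, size=n)
g_adj = adjoint_gradient(m)

# Finite-difference check: one extra forward solve per parameter.
eps, g_fd = 1e-6, np.zeros(n)
for i in range(n):
    dm = np.zeros(n)
    dm[i] = eps
    g_fd[i] = (misfit(m + dm) - misfit(m)) / eps

print("max |adjoint - FD| gradient difference:", np.abs(g_adj - g_fd).max())
```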
Contributors: Chang, Qinan (Author) / Liu, Yongming (Thesis advisor) / Mignolet, Marc (Committee member) / Chattopadhyay, Aditi (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The purpose of this study was to observe the effectiveness of the phenylalanyl arginine β-naphthylamide dihydrochloride inhibitor and Tween 20 when combined with an antibiotic against Escherichia coli. As antibiotic resistance becomes more and more prevalent, it is necessary to think outside the box and do more than just increase the dosage of currently prescribed antibiotics. This study attempted to combat two forms of antibiotic resistance. The first is the AcrAB efflux pump, which is able to pump antibiotics out of the cell. The second is the biofilms that E. coli can form. With the inhibitor present, the pump should be unable to expel an antibiotic. Tween, on the other hand, allows biofilm formation to be disrupted or existing biofilm to be dissolved. By combining these two chemicals with an antibiotic that the efflux pump is known to expel, low concentrations of each chemical should produce an effect on the bacteria equivalent to or greater than that of any one chemical at higher concentrations. To test this hypothesis, a 96-well plate BEC screen test was performed. A range of antibiotics was used at various concentrations, with varying concentrations of both Tween and the inhibitor, to find a starting point. Following this, erythromycin and ciprofloxacin were picked as the best candidates, and the optimum ranges of the antibiotic, Tween, and inhibitor were established. Finally, all three chemicals were combined to observe the effects they had together as opposed to individually or in pairs. Several conclusions were drawn from the results of this experiment. First, the inhibitor did in fact increase the effectiveness of the antibiotic, as less antibiotic was needed when the inhibitor was present. Second, Tween showed an ability to prevent recovery in the MBEC reading, indicating that it can disrupt or dissolve biofilms. However, Tween also caused a noticeable decrease in the effectiveness of the overall treatment. This negative interaction could not be compensated for by the inhibitor, and so the hypothesis was proven false, as combining the three chemicals led to a less effective treatment method.
Contributors: Petrovich Flynn, Chandler James (Author) / Misra, Rajeev (Thesis director) / Bean, Heather (Committee member) / Perkins, Kim (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05