Matching Items (847)

Description
At present, almost 70% of the electric energy in the United States is produced using fossil fuels. Combustion of fossil fuels releases CO2 into the atmosphere, exacerbating global warming. To make the electric power system (EPS) more sustainable for the future, there has been an emphasis on scaling up the generation of electric energy from wind and solar resources. These resources are renewable and operate pollution-free. Various US states have set targets for the share of electrical energy to be produced from renewable resources. The Southwestern United States receives significant solar radiation throughout the year, making concentrated solar power and solar PV the most suitable means of renewable energy production in this region. However, the majority of the projects presently being developed are either residential or utility-owned solar PV plants. This research explores the impact of significant PV penetration on the steady-state voltage profile of the electric power transmission system. The study also identifies the impact of PV penetration on the dynamic response of the transmission system, such as rotor angle stability, frequency response, and voltage response after a contingency. The light load case of spring 2010 and the peak load case of summer 2018 are considered for analyzing the impact of PV. Where the impact is found to be detrimental to normal EPS operation, mitigation measures are devised and presented in the thesis. Commercially available software packages such as PSLF, PSS/E, and DSA Tools are used to analyze the power network and validate the results.
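The steady-state voltage effect of PV penetration described above can be sketched with a toy two-bus Gauss-Seidel power flow. The per-unit line impedance, load, and PV levels below are invented for illustration; the study itself used PSLF, PSS/E, and DSA Tools on full WECC cases.

```python
# Minimal two-bus power flow: how PV generation at a load bus changes the
# steady-state voltage magnitude. All quantities are per unit and illustrative.

def bus_voltage(p_load, q_load, p_pv, v_slack=1.0, z=0.01 + 0.05j, iters=50):
    """Gauss-Seidel solve for the load-bus voltage with PV injection p_pv."""
    p_net = p_load - p_pv          # local PV generation offsets the load
    v = v_slack + 0j               # flat start
    for _ in range(iters):
        s = complex(p_net, q_load)            # net complex power drawn
        i = s.conjugate() / v.conjugate()     # current into the bus
        v = v_slack - z * i                   # voltage drop across the line
    return abs(v)

v_no_pv = bus_voltage(p_load=0.8, q_load=0.3, p_pv=0.0)
v_pv    = bus_voltage(p_load=0.8, q_load=0.3, p_pv=0.5)
```

With PV offsetting part of the local load, the load-bus voltage magnitude rises, a miniature version of the steady-state voltage shift the thesis examines.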
Contributors: Prakash, Nitin (Author) / Heydt, Gerald T. (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach covering behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and behavior of components as a result of their connections and in relation to their environment. This research describes and exploits the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and offering new optimized actions and modeling paradigms in agent-based simulation.
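The agent-based diffusion idea referenced above can be suggested with a minimal sketch: heterogeneous agents on a random network adopt a behavior once enough of their neighbors have. The network size, thresholds, and seed adopters here are an invented toy, not the consumer-optimization ABM of the dissertation.

```python
import random

random.seed(7)

# Toy threshold-diffusion ABM: each agent adopts once the fraction of its
# neighbors that have adopted meets its (heterogeneous) threshold.
N = 100
neighbors = {i: random.sample([j for j in range(N) if j != i], 5)
             for i in range(N)}
threshold = {i: random.uniform(0.1, 0.6) for i in range(N)}  # agent diversity
adopted = set(random.sample(range(N), 5))                    # seed adopters

for step in range(20):
    new = {i for i in range(N) if i not in adopted
           and sum(j in adopted for j in neighbors[i]) / 5 >= threshold[i]}
    if not new:          # emergent steady state reached
        break
    adopted |= new
```

Rerunning with different thresholds or network densities shows how macro-level adoption patterns emerge from micro-level rules, the kind of emergent behavior the framework is built to study.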
Contributors: Haghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Since Darwin popularized the theory of evolution in 1859, it has been refined and studied through the years. Starting in the 1990s, evolution at the molecular level has been used to discover functional molecules, and to study the origin of functional molecules in nature, by mimicking the natural selection process in the laboratory. Along this line, my Ph.D. dissertation focuses on the in vitro selection of two important classes of biomolecules with binding properties: deoxyribonucleic acid (DNA) and protein. Chapter two focuses on the in vitro selection of DNA. Aptamers are single-stranded nucleic acids, generated from a random pool, that fold into stable three-dimensional structures with ligand binding sites complementary in shape and charge to a desired target. While aptamers have been selected to bind a wide range of targets, it is generally thought that these molecules are incapable of discriminating strongly alkaline proteins because of the attractive forces that govern oppositely charged polymers. By employing a negative selection step to eliminate aptamers that bind off-targets through nonspecific charge interactions, an aptamer that binds the histone H4 protein with high specificity (>100-fold) was generated. Chapter four focuses on another functional molecule: protein. It has long been believed that complex molecules with different functions originated from simple progenitor proteins, but very little is known about this process. Starting from a previously selected ATP-binding protein, the first and only protein evolved entirely from a random pool and possessing a unique α/β-fold protein scaffold, I fused a random library to its C-terminus and evolved a multi-domain protein with desirable properties. In addition, in chapter three, a unique bivalent molecule was generated by conjugating peptides that bind different sites on a target protein with nucleic acids.
By using the ligand interactions by nucleotide conjugates technique, off-the-shelf peptides were transformed into high-affinity protein capture reagents that mimic the recognition properties of natural antibodies. The designer synthetic antibody amplifies the binding affinity of the individual peptides by ∼1000-fold to bind Grb2 with a Kd of 2 nM, and functions with high selectivity in conventional pull-down assays from HeLa cell lysates.
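The ∼1000-fold affinity amplification can be put in concrete terms with the standard 1:1 binding isotherm, f = [L] / (Kd + [L]). The 2 nM Kd is taken from the text; the micromolar Kd assumed for an individual peptide is a placeholder for illustration.

```python
# Fraction of target bound at equilibrium for simple 1:1 binding.
def fraction_bound(ligand_nM, kd_nM):
    return ligand_nM / (kd_nM + ligand_nM)

# At 10 nM reagent: the bivalent binder (Kd = 2 nM, from the text) is mostly
# bound, while a hypothetical monovalent peptide (assumed Kd = 2 uM) barely is.
f_bivalent = fraction_bound(ligand_nM=10.0, kd_nM=2.0)
f_peptide  = fraction_bound(ligand_nM=10.0, kd_nM=2000.0)
```

This is why a 1000-fold Kd improvement matters in a pull-down: at working concentrations near the bivalent Kd, the conjugate captures most of the target while the free peptide captures almost none.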
Contributors: Jiang, Bing (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Liu, Yan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The principle of Darwinian evolution has been applied in the laboratory to nucleic acid molecules since 1990, leading to the emergence of the in vitro evolution technique. The methodology of in vitro evolution surveys a large number of different molecules simultaneously for a pre-defined chemical property and enriches for molecules with that property. DNA and RNA sequences with versatile functions have been identified by in vitro selection experiments, but many basic questions remain to be answered about how these molecules achieve their functions. This dissertation first addresses a fundamental question regarding the molecular recognition properties of in vitro selected DNA sequences, namely whether negatively charged DNA sequences can be evolved to bind alkaline proteins with high specificity. We showed that DNA binders could be made, through carefully designed stringent in vitro selection, to discriminate between different alkaline proteins. The focus of this dissertation then shifts to the in vitro evolution of an artificial genetic polymer called threose nucleic acid (TNA). TNA has been considered a potential RNA progenitor during the early evolution of life on Earth. However, further experimental evidence to support TNA as a primordial genetic material has been lacking. In this dissertation we demonstrated the capacity of TNA to form a stable tertiary structure with specific ligand binding properties, which suggests a possible role for TNA as a pre-RNA genetic polymer. Additionally, we discussed the challenges of in vitro evolution for TNA enzymes and developed the necessary methodology for future TNA enzyme evolution.
Contributors: Yu, Hanyang (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Cyanovirin-N (CV-N) is a naturally occurring lectin originally isolated from the cyanobacterium Nostoc ellipsosporum. This 11 kDa lectin is 101 amino acids long with two binding sites, one at each end of the protein. CV-N specifically binds to terminal Manα1-2Manα motifs on the branched, high-mannose Man9 and Man8 glycosylations found on enveloped viruses including Ebola, influenza, and HIV. wt-CVN binds soluble Manα1-2Manα with micromolar affinity and inhibits HIV entry at low nanomolar concentrations. CV-N's high affinity and specificity for Manα1-2Manα make it an excellent lectin to study for its glycan-specific properties. The long-term aim of this project is to make a variety of mutant CV-Ns that specifically bind other glycan targets. Such a set of lectins could be used as screening reagents to identify biomarkers and other glycan motifs of interest. As proof of concept, a T7 phage display library was constructed using P51G-m4-CVN genes mutated at positions 41, 44, 52, 53, 56, 74, and 76 in binding Domain B. Five CV-N mutants were selected from the library and expressed in BL21(DE3) E. coli. Two of the mutants, SSDGLQQ-P51Gm4-CVN and AAGRLSK-P51Gm4-CVN, were sufficiently stable for characterization and were examined by CD, Tm measurement, ELISA, and glycan array. Both proteins have CD minima at approximately 213 nm, indicating largely β-sheet structure, and Tm values greater than 40°C. ELISAs against gp120 and RNase B demonstrate both proteins' ability to bind high-mannose glycans. To determine the binding specificity of each protein more precisely, AAGRLSK-P51Gm4-CVN, SSDGLQQ-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN were sent to the Consortium for Functional Glycomics (CFG) for glycan array analysis. AAGRLSK-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN have identical specificities for high-mannose glycans containing terminal Manα1-2Manα. SSDGLQQ-P51Gm4-CVN binds to terminal GlcNAcα1-4Gal motifs and a subgroup of the high-mannose glycans bound by P51G-m4-CVN.
SSDGLQQ-wt-CVN was produced to restore anti-HIV activity and has a high nanomolar EC50 value compared to wt-CVN's low nanomolar activity. Overall, these experiments show that CV-N Domain B can be mutated to retain specificity identical to wt-CVN or to acquire new glycan specificities. This first-generation information can be used to produce glycan-specific lectins for a variety of applications.
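The practical meaning of the shift from low- to high-nanomolar EC50 can be illustrated with a simple dose-response model (Hill coefficient 1), fraction inhibited = 1 / (1 + EC50/c). Both EC50 values below are assumed round numbers, not measurements from this work.

```python
# Simple one-site dose-response: fraction of viral entry inhibited at
# inhibitor concentration c given a half-maximal concentration EC50.
def inhibition(conc_nM, ec50_nM):
    return 1.0 / (1.0 + ec50_nM / conc_nM)

# At 50 nM lectin: an assumed low-nM EC50 (5 nM) versus an assumed
# high-nM EC50 (500 nM) -- placeholders for wt-CVN vs SSDGLQQ-wt-CVN.
f_low_ec50  = inhibition(conc_nM=50.0, ec50_nM=5.0)
f_high_ec50 = inhibition(conc_nM=50.0, ec50_nM=500.0)
```

At the same working concentration, the low-EC50 lectin achieves near-complete inhibition while the high-EC50 variant inhibits only a small fraction, which is why the restored activity of SSDGLQQ-wt-CVN remains markedly weaker than wild type.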
Contributors: Ruben, Melissa (Author) / Ghirlanda, Giovanna (Thesis advisor) / Allen, James (Committee member) / Wachter, Rebekka (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Continuous monitoring at adequate temporal and spatial scales is necessary for a better understanding of environmental variations, but field deployment of molecular biological analysis platforms at those scales is currently hindered by issues with power, throughput, and automation. At present, such analysis is performed by collecting large sample volumes from across a wide area and transporting them to laboratory testing facilities, which fails to provide any real-time information. This dissertation evaluates the systems currently utilized for in-situ field analyses and the issues hampering the successful deployment of such bioanalytical instruments for environmental applications. The design and development of high-throughput, low-power, autonomous polymerase chain reaction (PCR) instruments, amenable to portable field operation and capable of providing quantitative results, is presented as part of this dissertation. A number of novel innovations are reported in microfluidic design, PCR thermocycler design, optical design, and systems integration. Emulsion microfluidics, in conjunction with fluorinated oils and Teflon tubing, is used for the fluidic module, which reduces cross-contamination and eliminates the need for disposable components or constant cleaning. A cylindrical heater was designed with the tubing wrapped around fixed temperature zones, enabling continuous operation. Fluorescence excitation and detection were achieved using a light emitting diode (LED) as the excitation source and a photomultiplier tube (PMT) as the detector. Real-time quantitative PCR results were obtained using multi-channel fluorescence excitation and detection with LEDs, optical fibers, and a 64-channel multi-anode PMT for continuous real-time fluorescence measurement. The instrument was evaluated by comparing its results with those from a commercial instrument and found to be comparable.
To further improve the design and enhance its field portability, this dissertation also presents a framework for the instrumentation necessary for a portable digital PCR platform that achieves higher throughput at lower power. Both systems were designed so that they can easily couple with any upstream platform capable of providing nucleic acid for analysis using standard fluidic connections. Consequently, these instruments can be used not only in environmental applications but in portable diagnostics applications as well.
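Digital PCR, proposed above for the portable platform, quantifies template by Poisson statistics on the fraction of positive partitions: the mean copies per partition is λ = -ln(1 - p). The partition counts below are invented, and this is a sketch of the standard calculation rather than code from the instrument.

```python
import math

# Digital PCR quantification: each partition is scored positive/negative,
# and Poisson statistics correct for partitions holding multiple copies.
def copies_per_partition(positive, total):
    p = positive / total
    return -math.log(1.0 - p)          # lambda = -ln(1 - p)

lam = copies_per_partition(positive=6000, total=20000)   # 30% positive
total_copies = lam * 20000                               # copies in the run
```

Note that the Poisson-corrected count exceeds the raw positive-partition count, because some positive partitions contain more than one template copy.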
Contributors: Ray, Tathagata (Author) / Youngbull, Cody (Thesis advisor) / Goryll, Michael (Thesis advisor) / Blain Christen, Jennifer (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The ability to design high performance buildings has acquired great importance in recent years due to numerous federal, societal, and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capability to perceive, evaluate, and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for a designer to know the "latitude" or "degrees of freedom" available in changing certain design variables while achieving preset criteria such as energy performance, life cycle cost, and environmental impact. This requirement can be met by a decision support framework based on near-optimal "satisficing", as opposed to purely optimal decision-making techniques. Such a comprehensive design framework is currently lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for multi-criterion decision making (MCDM) during high performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole-building energy simulations, along with data mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed that uses regression-based models to create dynamic interplays of how varying these important variables affects the multiple criteria, while providing a visual range, or band of variation, for the different design parameters.
The MCDM process has been incorporated into an alternative methodology for high performance building design referred to as the Visual Analytics based Decision Support Methodology (VADSM). VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as for the evaluation of retrofits and operational deficiencies in existing buildings.
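The Monte Carlo database plus variable-importance ranking can be sketched as follows. The linear "energy model" and the variable names are invented stand-ins for a whole-building simulation, and a simple correlation coefficient stands in for the data mining methods referenced above.

```python
import random

random.seed(1)

# Stand-in "energy model": a made-up linear surrogate, not a real simulation.
def energy_use(wwr, insulation, setpoint):
    return 100 + 40 * wwr - 25 * insulation + 8 * (setpoint - 22)

# Monte Carlo sampling of the design space to build a database of solutions.
samples = [(random.random(), random.random(), random.uniform(20, 26))
           for _ in range(2000)]
results = [energy_use(*s) for s in samples]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank variable importance by |correlation| with the simulated energy use.
names = ["window-wall ratio", "insulation", "setpoint"]
importance = {name: abs(correlation([s[i] for s in samples], results))
              for i, name in enumerate(names)}
```

Ranking the variables this way lets the designer focus the interactive visualization on the few inputs that actually drive the criteria, which is the dimensionality reduction the methodology relies on.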
Contributors: Dutta, Ranojoy (Author) / Reddy, T Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms, producing sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires only a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
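The core idea of estimating a per-channel delay can be reduced to a toy: cross-correlate the commanded gradient waveform with a (simulated) delayed measurement and pick the lag that maximizes agreement. This integer-lag sketch omits the off-resonance robustness and the time-varying, per-channel estimation that the actual method provides.

```python
import math

# Commanded gradient waveform: a ramped sinusoid as a simple test signal.
n = 256
commanded = [math.sin(2 * math.pi * 3 * t / n) * t / n for t in range(n)]

# Simulated "measured" waveform: the commanded one shifted by a hardware delay.
true_delay = 4
measured = [0.0] * true_delay + commanded[:-true_delay]

def estimate_delay(ref, sig, max_lag=10):
    """Return the lag (in samples) maximizing cross-correlation of ref, sig."""
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(ref[i] * sig[i + lag] for i in range(len(ref) - lag))
        if score > best_score:
            best, best_score = lag, score
    return best

delay = estimate_delay(commanded, measured)
```

Once the delay is known, the sampled k-space trajectory can be corrected before gridding, removing the shading and blurring that an assumed-ideal trajectory would cause.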
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease through non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA successfully measured percent coronary stenosis to within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, that utilizes support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
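The lipid-versus-fibrous separation can be illustrated with a nearest-centroid classifier on simulated dual energy attenuation pairs (low-kVp HU, high-kVp HU). This is a simplified stand-in for the support vector machine actually used, and the attenuation values are invented for illustration, not phantom measurements.

```python
import random

random.seed(0)

# Simulate dual energy attenuation pairs for two tissue classes.
def simulate(center, spread, n):
    return [(random.gauss(center[0], spread), random.gauss(center[1], spread))
            for _ in range(n)]

lipid   = simulate(center=(-30.0, -10.0), spread=15.0, n=200)  # invented HU
fibrous = simulate(center=(90.0, 60.0),   spread=15.0, n=200)  # invented HU

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# "Train" on half the data, test on the other half.
c_lipid, c_fibrous = centroid(lipid[:100]), centroid(fibrous[:100])

def classify(p):
    d_l = (p[0] - c_lipid[0]) ** 2 + (p[1] - c_lipid[1]) ** 2
    d_f = (p[0] - c_fibrous[0]) ** 2 + (p[1] - c_fibrous[1]) ** 2
    return "lipid" if d_l < d_f else "fibrous"

test_set = ([(p, "lipid") for p in lipid[100:]]
            + [(p, "fibrous") for p in fibrous[100:]])
error = sum(classify(p) != label for p, label in test_set) / len(test_set)
```

The point of the sketch is the geometry: two energy measurements per pixel separate the classes far better than one, which is the intuition behind the drop from 17% to 4% error reported above.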
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis addresses the issue of making an economic case for energy storage in power systems. Bulk energy storage has often been suggested for large-scale electric power systems in order to levelize load: store energy when it is inexpensive and discharge it when it is expensive; potentially defer transmission and generation expansion; and provide for generation reserve margins. As renewable energy resource penetration increases, the uncertainty and variability of wind and solar may be alleviated by bulk energy storage technologies. The quadratic programming function in MATLAB is used to simulate an economic dispatch that includes energy storage. A program was created that utilizes quadratic programming to analyze various cases using a 2010 summer peak load from the Arizona transmission system, part of the Western Electricity Coordinating Council (WECC). The MATLAB program is first used to test the Arizona test bed with a low level of energy storage, to study how the storage power limit affects several optimization outputs, such as the system-wide operating cost. Very high levels of energy storage are then added to see how they affect peak shaving, load factor, and other system applications. Finally, various constraints are relaxed to analyze why the applications tested eventually approach a constant value. This research illustrates how energy storage helps minimize the system-wide generator operating cost by "shaving" energy off of the peak demand.
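The quadratic-programming dispatch can be miniaturized to two generators with quadratic costs dispatched at equal incremental cost, plus a lossless peak-shaving comparison. This is a pure-Python stand-in for the MATLAB quadprog formulation described above; the cost coefficients and loads are illustrative, not the WECC data used in the thesis.

```python
# Equal-incremental-cost dispatch for generators with cost C_i(P) = a*P^2 + b*P.
gens = [(0.010, 20.0), (0.020, 25.0)]   # (a_i, b_i), illustrative coefficients

def dispatch_cost(load):
    # Optimality: dC_i/dP = 2*a_i*P_i + b_i = lambda for all committed units.
    # Solving sum_i (lambda - b_i) / (2*a_i) = load gives lambda in closed form.
    inv = sum(1.0 / (2 * a) for a, _ in gens)
    lam = (load + sum(b / (2 * a) for a, b in gens)) / inv
    cost = 0.0
    for a, b in gens:
        p = (lam - b) / (2 * a)          # each unit's economic output
        cost += a * p * p + b * p
    return cost

# Peak shaving with lossless storage: move 50 MW of energy from peak to off-peak.
peak, offpeak, shave = 500.0, 300.0, 50.0
base         = dispatch_cost(peak) + dispatch_cost(offpeak)
with_storage = dispatch_cost(peak - shave) + dispatch_cost(offpeak + shave)
```

Because the aggregate cost is convex in load, shifting energy from the peak hour to the off-peak hour always lowers the total operating cost, which is the levelizing effect the thesis quantifies on the Arizona test bed.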
Contributors: Ruggiero, John (Author) / Heydt, Gerald T (Thesis advisor) / Datta, Rajib (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2013