Matching Items (763)
Description
Since Darwin popularized the theory of evolution with the publication of On the Origin of Species in 1859, the theory has been refined and studied through the years. Starting in the 1990s, evolution at the molecular level has been used to discover functional molecules, and to study the origin of functional molecules in nature, by mimicking the natural selection process in the laboratory. Along this line, my Ph.D. dissertation focuses on the in vitro selection of two important biomolecules with binding properties: deoxyribonucleic acid (DNA) and protein. Chapter two focuses on the in vitro selection of DNA. Aptamers are single-stranded nucleic acids, generated from a random pool, that fold into stable three-dimensional structures with ligand binding sites complementary in shape and charge to a desired target. While aptamers have been selected to bind a wide range of targets, it is generally thought that these molecules are incapable of discriminating among strongly alkaline proteins, due to the attractive forces that govern oppositely charged polymers. By employing a negative selection step to eliminate aptamers that bind off-target proteins nonspecifically through charge, an aptamer that binds the histone H4 protein with high specificity (>100-fold) was generated. Chapter four focuses on another functional molecule: protein. It has long been believed that complex molecules with different functions originated from simple progenitor proteins, but very little is known about this process. Employing a previously selected protein that binds and hydrolyzes ATP, the first and only protein evolved entirely from a random-sequence pool, and one with a unique α/β-fold protein scaffold, I fused a random library to the C-terminus of this protein and evolved a multi-domain protein with improved properties. Also, in Chapter three, a unique bivalent molecule was generated by conjugating peptides that bind different sites on a target protein to nucleic acids. Using the ligand interactions by nucleotide conjugates technique, off-the-shelf peptides were transformed into high-affinity protein capture reagents that mimic the recognition properties of natural antibodies. The designer synthetic antibody amplifies the binding affinity of the individual peptides by ∼1000-fold to bind Grb2 with a Kd of 2 nM, and functions with high selectivity in conventional pull-down assays from HeLa cell lysates.
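As a back-of-the-envelope illustration of the ∼1000-fold avidity gain reported above, a standard bivalent-binding model reproduces the 2 nM figure; the individual peptide affinity (2 µM) and the effective local concentration C_eff imposed by the nucleic acid linker (2 mM) are assumed values chosen for illustration, not numbers from the dissertation:

```latex
% Illustrative avidity estimate with assumed values (not from the thesis):
% linking two peptides multiplies their affinities, scaled by the
% effective local concentration C_eff set by the linker.
K_d^{\mathrm{bi}} \;\approx\; \frac{K_d^{(1)}\, K_d^{(2)}}{C_{\mathrm{eff}}}
  \;=\; \frac{(2\times 10^{-6}\,\mathrm{M})^2}{2\times 10^{-3}\,\mathrm{M}}
  \;=\; 2\times 10^{-9}\,\mathrm{M} \;=\; 2\,\mathrm{nM}
```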
Contributors: Jiang, Bing (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Liu, Yan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The principle of Darwinian evolution has been applied in the laboratory to nucleic acid molecules since 1990, leading to the emergence of the in vitro evolution technique. The methodology of in vitro evolution surveys a large number of different molecules simultaneously for a pre-defined chemical property and enriches for molecules with that particular property. DNA and RNA sequences with versatile functions have been identified by in vitro selection experiments, but many basic questions remain to be answered about how these molecules achieve their functions. This dissertation first addresses a fundamental question regarding the molecular recognition properties of in vitro selected DNA sequences, namely whether negatively charged DNA sequences can be evolved to bind alkaline proteins with high specificity. We showed that DNA binders could be made, through carefully designed stringent in vitro selection, to discriminate between different alkaline proteins. The focus of this dissertation then shifts to the in vitro evolution of an artificial genetic polymer called threose nucleic acid (TNA). TNA has been considered a potential RNA progenitor during the early evolution of life on Earth. However, further experimental evidence to support TNA as a primordial genetic material is lacking. In this dissertation we demonstrated the capacity of TNA to form stable tertiary structures with specific ligand-binding properties, which suggests a possible role for TNA as a pre-RNA genetic polymer. Additionally, we discussed the challenges of in vitro evolution of TNA enzymes and developed the necessary methodology for future TNA enzyme evolution.
Contributors: Yu, Hanyang (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Cyanovirin-N (CV-N) is a naturally occurring lectin originally isolated from the cyanobacterium Nostoc ellipsosporum. This 11 kDa lectin is 101 amino acids long with two binding sites, one at each end of the protein. CV-N specifically binds to terminal Manα1-2Manα motifs on the branched, high-mannose Man9 and Man8 glycans found on enveloped viruses, including Ebola, influenza, and HIV. wt-CVN binds soluble Manα1-2Manα with micromolar affinity and inhibits HIV entry at low nanomolar concentrations. CV-N's high affinity and specificity for Manα1-2Manα make it an excellent lectin for studying glycan-specific recognition. The long-term aim of this project is to make a variety of mutant CV-Ns that specifically bind other glycan targets. Such a set of lectins may be used as screening reagents to identify biomarkers and other glycan motifs of interest. As proof of concept, a T7 phage display library was constructed using P51G-m4-CVN genes mutated at positions 41, 44, 52, 53, 56, 74, and 76 in binding Domain B. Five CV-N mutants were selected from the library and expressed in BL21(DE3) E. coli. Two of the mutants, SSDGLQQ-P51Gm4-CVN and AAGRLSK-P51Gm4-CVN, were sufficiently stable for characterization and were examined by circular dichroism (CD), thermal denaturation (Tm), ELISA, and glycan array analysis. Both proteins have CD minima at approximately 213 nm, indicating largely β-sheet structure, and have Tm values greater than 40°C. ELISAs against gp120 and RNase B demonstrated both proteins' ability to bind high-mannose glycans. To determine the binding specificity of each protein more precisely, AAGRLSK-P51Gm4-CVN, SSDGLQQ-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN were sent to the Consortium for Functional Glycomics (CFG) for glycan array analysis. AAGRLSK-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN have identical specificities for high-mannose glycans containing terminal Manα1-2Manα. SSDGLQQ-P51Gm4-CVN binds to terminal GlcNAcα1-4Gal motifs and to a subgroup of the high-mannose glycans bound by P51G-m4-CVN. SSDGLQQ-wt-CVN was produced to restore anti-HIV activity and has a high nanomolar EC50 value, compared to wt-CVN's low nanomolar activity. Overall, these experiments show that CV-N Domain B can be mutated to retain specificity identical to wt-CVN or to acquire new glycan specificities. This first-generation information can be used to produce glycan-specific lectins for a variety of applications.
Contributors: Ruben, Melissa (Author) / Ghirlanda, Giovanna (Thesis advisor) / Allen, James (Committee member) / Wachter, Rebekka (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The ability to design high-performance buildings has acquired great importance in recent years due to numerous federal, societal, and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capacity to perceive, evaluate, and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for designers to know the "latitude" or "degrees of freedom" they have in changing certain design variables while achieving preset criteria such as energy performance, life-cycle cost, and environmental impact. This requirement can be met by a decision support framework based on near-optimal "satisficing", as opposed to purely optimal, decision-making techniques. Currently, such a comprehensive design framework is lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for Multi-Criteria Decision Making (MCDM) during high-performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole-building energy simulations, along with data-mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed, which uses regression-based models to create dynamic interplays of how varying these important variables affects the multiple criteria, while providing a visual range or band of variation for the different design parameters. The MCDM process has been incorporated into an alternative methodology for high-performance building design referred to as the Visual Analytics based Decision Support Methodology (VADSM). VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as for evaluation of retrofits and operational deficiencies in existing buildings.
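A minimal Python sketch of the Monte Carlo and data-mining pipeline described above; the simulate() stub, the variable names, and their ranges are hypothetical stand-ins for a deterministic whole-building energy simulation and its inputs, not the models used in the dissertation:

```python
# Monte Carlo sampling + variable-importance ranking + regression surrogate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000  # Monte Carlo sample size

# Sample candidate designs uniformly over assumed variable ranges.
X = np.column_stack([
    rng.uniform(0.1, 0.6, n),   # window-to-wall ratio (assumed range)
    rng.uniform(10, 40, n),     # wall insulation R-value
    rng.uniform(0.2, 0.9, n),   # glazing SHGC
    rng.uniform(0.4, 1.2, n),   # lighting power density (W/ft^2)
])

def simulate(x):
    """Stand-in for a whole-building energy simulation (annual energy use)."""
    wwr, rval, shgc, lpd = x
    return 100 + 80 * wwr * shgc + 300 / rval + 40 * lpd

y = np.apply_along_axis(simulate, 1, X)

# Rank variable importance to reduce the dimensionality of the problem.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("importances:", forest.feature_importances_)

# Regression surrogate over the inputs; an interactive visualization would
# sweep the important variables to show bands of "satisficing" designs.
surrogate = LinearRegression().fit(X, y)
print("satisficing designs (best decile):", np.sum(y < np.percentile(y, 10)))
print("predicted energy for one candidate:",
      surrogate.predict([[0.3, 25.0, 0.4, 0.8]])[0])
```

In practice, the ranked importances would determine which variables are exposed as interactive controls, with the surrogate providing fast feedback in place of repeated full simulations.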
Contributors: Dutta, Ranojoy (Author) / Reddy, T Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The microelectronics industry is continuously moving toward smaller and smaller devices and reduced form factors, resulting in new challenges. The reduction in device and interconnect solder-bump sizes has led to increased current density in these small solder joints. The higher level of electromigration occurring due to increased current density is of great concern, as it affects the reliability of entire microelectronic systems. This paper reviews electromigration in Pb-free solders, focusing specifically on Sn-0.7 wt.% Cu solder joints. The effects of texture, grain orientation, and grain-boundary misorientation angle on electromigration and intermetallic compound (IMC) formation are studied through electron backscatter diffraction (EBSD) analysis performed on actual C4 bumps.
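For background, the strong dependence of electromigration-limited lifetime on current density is commonly summarized by Black's equation; this is standard reliability context, not a result of this thesis:

```latex
% Black's equation for electromigration-limited mean time to failure.
% j: current density, n: model exponent (typically ~1-2),
% E_a: activation energy, k: Boltzmann constant, T: absolute temperature.
\mathrm{MTTF} \;=\; A\, j^{-n} \exp\!\left(\frac{E_a}{k T}\right)
```

Shrinking bump cross-sections raise j, so the mean time to failure falls roughly as j^{-n}, which is why the increased current density in small solder joints is a reliability concern.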
Contributors: Lara, Leticia (Author) / Tasooji, Amaneh (Thesis advisor) / Lee, Kyuoh (Committee member) / Krause, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages: speed, efficiency in data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, requires high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in vivo data, which show a substantial reduction in artifacts and an improvement in image quality.
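A minimal Python sketch of the underlying delay model (not the thesis's estimation method itself): a per-channel timing error tau shifts each gradient waveform, and the k-space trajectory actually traversed is the integral of the shifted gradients. The spiral parameters and delay values below are illustrative assumptions:

```python
# How a per-channel gradient delay corrupts a spiral k-space trajectory.
import numpy as np

gamma = 42.58e6          # gyromagnetic ratio for 1H (Hz/T)
dt = 4e-6                # gradient raster time (s)
t = np.arange(0, 4e-3, dt)

# Nominal Archimedean spiral gradients, k(t) = A*t*exp(i*omega*t)
# differentiated analytically (amplitude scaling is arbitrary here).
omega = 2 * np.pi * 2e3
gx = 1e-3 * (np.cos(omega * t) - omega * t * np.sin(omega * t))
gy = 1e-3 * (np.sin(omega * t) + omega * t * np.cos(omega * t))

def traj(g, tau):
    """k-space trajectory from a gradient waveform delayed by tau seconds."""
    g_delayed = np.interp(t - tau, t, g, left=0.0)  # shifted waveform
    return gamma * np.cumsum(g_delayed) * dt        # discrete integral

# Different delays on the two channels cause anisotropic trajectory errors,
# which appear as shading/blurring if reconstruction assumes the ideal k(t).
kx_ideal, kx_meas = traj(gx, 0.0), traj(gx, 1.2e-6)
ky_ideal, ky_meas = traj(gy, 0.0), traj(gy, 0.8e-6)
print("max kx error (1/m):", np.abs(kx_meas - kx_ideal).max())
print("max ky error (1/m):", np.abs(ky_meas - ky_ideal).max())
```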
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Vehicle type choice is a significant determinant of fuel consumption and energy sustainability; larger, heavier vehicles consume more fuel and expel twice as many pollutants as their smaller, lighter counterparts. Over the course of the past few decades, vehicle type choice has seen a vast shift, with many households making more trips in larger vehicles with lower fuel economy. During the 1990s, SUVs were the fastest-growing segment of the automotive industry, comprising 7% of the total light-vehicle market in 1990 and 25% in 2005. More recently, due to rising oil prices, greater environmental awareness, the desire to reduce dependence on foreign oil, and the availability of new vehicle technologies, many households are considering newer vehicles with better fuel economy, such as hybrids and electric vehicles, over the SUVs or low-fuel-economy vehicles they may already own. The goal of this research is to examine how vehicle miles traveled, fuel consumption, and emissions may be reduced through shifts in vehicle type choice behavior. Using data from the 2009 National Household Travel Survey, it is possible to develop a model to estimate household travel demand and total fuel consumption. Given a vehicle choice shift scenario, the model can then be used to calculate the potential fuel consumption savings that would result from such a shift. In this way, it is possible to estimate fuel consumption reductions under a wide variety of scenarios.
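A minimal Python sketch of the kind of scenario calculation described: total fuel use is miles traveled divided by fuel economy, so shifting miles from low-MPG to high-MPG vehicles yields direct savings. All vehicle shares and MPG figures here are hypothetical, not estimates from the 2009 NHTS model:

```python
# Fuel-savings estimate under an assumed vehicle type choice shift.
FLEET = {
    # vehicle type: (share of household annual VMT, fuel economy in MPG)
    "suv":    (0.50, 18.0),
    "sedan":  (0.35, 28.0),
    "hybrid": (0.15, 45.0),
}
TOTAL_VMT = 20_000.0  # assumed annual household vehicle miles traveled

def gallons(fleet):
    """Total annual fuel consumption: sum of VMT / MPG over vehicle types."""
    return sum(share * TOTAL_VMT / mpg for share, mpg in fleet.values())

baseline = gallons(FLEET)

# Scenario: half of the SUV miles shift to the hybrid.
shifted = dict(FLEET)
shifted["suv"] = (0.25, 18.0)
shifted["hybrid"] = (0.40, 45.0)
scenario = gallons(shifted)

print(f"baseline: {baseline:.0f} gal, scenario: {scenario:.0f} gal, "
      f"savings: {baseline - scenario:.0f} gal "
      f"({1 - scenario / baseline:.0%})")
```

With these assumed numbers the shift saves roughly 170 gallons per household per year (about 19%), illustrating how scenario inputs propagate to fuel and emissions estimates.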
Contributors: Christian, Keith (Author) / Pendyala, Ram M. (Thesis advisor) / Chester, Mikhail (Committee member) / Kaloush, Kamil (Committee member) / Ahn, Soyoung (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The construction industry in India suffers from major time and cost overruns. Data from government and industry reports suggest that projects suffer from 20 to 25 percent time and cost overruns. Waste of resources has been identified as a major source of inefficiency. Despite a substantial increase in supply over the past few years, demand for professionals and contractors still exceeds supply by a large margin. The traditional methods adopted in the Indian construction industry may not meet the needs of this dynamic environment, as they have produced large inefficiencies. Innovative approaches to procurement and project management can satisfy these aspirations and bring added value. The problems faced by the Indian construction industry are very similar to those faced by other developing countries. The objective of this paper is to discuss and analyze these economic concerns and inefficiencies, and to investigate a model that both explains the structure of the Indian construction industry and provides a framework for improving efficiency. The Best Value (BV) model is examined as an approach to be adopted in lieu of the traditional approach. This could result in efficient construction projects, which until now have been a rarity, by minimizing cost overruns and delays.
Contributors: Nihas, Syed (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Kashiwagi, Jacob (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this thesis was to compare various approaches for classifying 'good' and 'bad' parts via non-destructive resonance testing methods by collecting and analyzing experimental data in the frequency and time domains. A laser scanning vibrometer was employed to measure vibrations in samples in order to determine spectral characteristics such as natural frequencies and amplitudes. Statistical pattern recognition tools such as the Hilbert-Huang transform, Fisher's discriminant, and neural networks were used to classify unknown samples as defective or not. In this work, finite element analysis software packages (ANSYS 13.0 and NASTRAN NX 8.0) were used to obtain estimates of resonance frequencies in 'good' and 'bad' samples. Furthermore, a system identification approach was used to generate Auto-Regressive Moving Average with eXogenous input (ARMAX), Box-Jenkins, and Output-Error models from experimental data that can be used for classification.
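A minimal Python sketch of the classification step using Fisher's linear discriminant on spectral features (one of the pattern recognition tools named above). The feature values are synthetic stand-ins for vibrometer measurements, not data from the thesis:

```python
# Fisher's linear discriminant on resonance features for good/bad sorting.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical features: [first natural frequency (Hz), peak amplitude].
# Defective parts are assumed to show shifted modes and damped peaks.
good = rng.normal([1200.0, 1.0], [15.0, 0.10], size=(40, 2))
bad = rng.normal([1150.0, 0.7], [25.0, 0.15], size=(40, 2))

X = np.vstack([good, bad])
y = np.array([0] * 40 + [1] * 40)  # 0 = good, 1 = bad

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))

# Classify an unknown sample from its measured resonance signature.
unknown = np.array([[1160.0, 0.75]])
print("predicted class:", "bad" if clf.predict(unknown)[0] else "good")
```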
Contributors: Jameel, Osama (Author) / Redkar, Sangram (Thesis advisor) / Arizona State University (Publisher)
Created: 2013
Description
There is a critical need for the development of clean and efficient energy sources. Hydrogen is being explored as a viable alternative to fuels in current use, many of which have limited availability and detrimental byproducts. Biological photo-production of H2 could provide a potential energy source manufactured directly from water and sunlight. As part of the photosynthetic electron transport chain (PETC) of the green alga Chlamydomonas reinhardtii, water is split via Photosystem II (PSII) and the electrons flow through a series of electron transfer cofactors in cytochrome b6f, plastocyanin, and Photosystem I (PSI). The terminal electron acceptor of PSI is ferredoxin, from which electrons may be used to reduce NADP+ for metabolic purposes. The concomitant production of an H+ gradient provides energy for the cell. Under certain conditions, and using the endogenous hydrogenase, excess protons and electrons from ferredoxin may be converted to molecular hydrogen. In this work it is demonstrated both that certain mutations near the quinone electron transfer cofactor in PSI can speed up electron transfer through the PETC, and that a native [FeFe]-hydrogenase can be expressed in the C. reinhardtii chloroplast. Taken together, these research findings form the foundation for the design of a PSI-hydrogenase fusion for the direct and continuous photo-production of hydrogen in vivo.
Contributors: Reifschneider, Kiera (Author) / Redding, Kevin (Thesis advisor) / Fromme, Petra (Committee member) / Jones, Anne (Committee member) / Arizona State University (Publisher)
Created: 2013