Matching Items (994)
Description
Since Darwin popularized the theory of evolution in 1859, it has been refined and studied through the years. Starting in the 1990s, evolution at the molecular level has been used to discover functional molecules, and to study the origin of functional molecules in nature, by mimicking the natural selection process in the laboratory. Along this line, my Ph.D. dissertation focuses on the in vitro selection of two important classes of biomolecules with binding properties: deoxyribonucleic acid (DNA) and protein. Chapter two focuses on the in vitro selection of DNA. Aptamers are single-stranded nucleic acids, generated from a random pool, that fold into stable three-dimensional structures with ligand binding sites complementary in shape and charge to a desired target. While aptamers have been selected to bind a wide range of targets, it is generally thought that these molecules are incapable of discriminating among strongly alkaline proteins due to the attractive forces that govern oppositely charged polymers. By employing a negative selection step to eliminate aptamers that bind off-target proteins nonspecifically through charge, an aptamer was generated that binds the histone H4 protein with high specificity (>100-fold). Chapter four focuses on another functional molecule: protein. It has long been believed that complex molecules with different functions originated from simple progenitor proteins, but very little is known about this process. Starting from a previously selected ATP-binding protein, the first and only protein evolved entirely from a random pool, with a unique α/β-fold protein scaffold, I fused a random library to the C-terminus of this protein and evolved a multi-domain protein with desirable properties. In chapter three, a unique bivalent molecule was generated by conjugating peptides that bind different sites on a target protein with nucleic acids. By using the ligand interactions by nucleotide conjugates technique, off-the-shelf peptides were transformed into high-affinity protein capture reagents that mimic the recognition properties of natural antibodies. The designer synthetic antibody amplifies the binding affinity of the individual peptides by ∼1000-fold, binding Grb2 with a Kd of 2 nM, and functions with high selectivity in conventional pull-down assays from HeLa cell lysates.
ContributorsJiang, Bing (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Liu, Yan (Committee member) / Arizona State University (Publisher)
Created2013
Description
The principle of Darwinian evolution has been applied in the laboratory to nucleic acid molecules since 1990, leading to the emergence of the in vitro evolution technique. The methodology of in vitro evolution surveys a large number of different molecules simultaneously for a pre-defined chemical property and enriches for molecules with that property. DNA and RNA sequences with versatile functions have been identified by in vitro selection experiments, but many basic questions remain to be answered about how these molecules achieve their functions. This dissertation first focuses on addressing a fundamental question regarding the molecular recognition properties of in vitro selected DNA sequences, namely whether negatively charged DNA sequences can be evolved to bind alkaline proteins with high specificity. We showed that DNA binders could be made, through carefully designed stringent in vitro selection, to discriminate among different alkaline proteins. The focus of this dissertation then shifts to the in vitro evolution of an artificial genetic polymer called threose nucleic acid (TNA). TNA has been considered a potential RNA progenitor during the early evolution of life on Earth; however, further experimental evidence to support TNA as a primordial genetic material is lacking. In this dissertation we demonstrated the capacity of TNA to form stable tertiary structures with specific ligand binding properties, which suggests a possible role for TNA as a pre-RNA genetic polymer. Additionally, we discussed the challenges in the in vitro evolution of TNA enzymes and developed the necessary methodology for future TNA enzyme evolution.
ContributorsYu, Hanyang (Author) / Chaput, John C (Thesis advisor) / Chen, Julian (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created2013
Description
Cyanovirin-N (CV-N) is a naturally occurring lectin originally isolated from the cyanobacterium Nostoc ellipsosporum. This 11 kDa lectin is 101 amino acids long with two binding sites, one at each end of the protein. CV-N specifically binds to terminal Manα1-2Manα motifs on the branched, high-mannose Man9 and Man8 glycosylations found on enveloped viruses including Ebola, influenza, and HIV. wt-CVN binds soluble Manα1-2Manα with micromolar affinity and also inhibits HIV entry at low nanomolar concentrations. CV-N's high affinity and specificity for Manα1-2Manα make it an excellent lectin to study for its glycan-specific properties. The long-term aim of this project is to make a variety of mutant CV-Ns that specifically bind other glycan targets. Such a set of lectins may be used as screening reagents to identify biomarkers and other glycan motifs of interest. As proof of concept, a T7 phage display library was constructed using P51G-m4-CVN genes mutated at positions 41, 44, 52, 53, 56, 74, and 76 in binding Domain B. Five CV-N mutants were selected from the library and expressed in BL21(DE3) E. coli. Two of the mutants, SSDGLQQ-P51Gm4-CVN and AAGRLSK-P51Gm4-CVN, were sufficiently stable for characterization and were examined by CD, Tm measurement, ELISA, and glycan array. Both proteins have CD minima at approximately 213 nm, indicating largely β-sheet structure, and have Tm values greater than 40 °C. ELISAs against gp120 and RNase B demonstrate both proteins' ability to bind high-mannose glycans. To determine the binding specificity of each protein more precisely, AAGRLSK-P51Gm4-CVN, SSDGLQQ-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN were sent to the Consortium for Functional Glycomics (CFG) for glycan array analysis. AAGRLSK-P51Gm4-CVN, wt-CVN, and P51G-m4-CVN have identical specificities for high-mannose glycans containing terminal Manα1-2Manα. SSDGLQQ-P51Gm4-CVN binds to terminal GlcNAcα1-4Gal motifs and a subgroup of the high-mannose glycans bound by P51G-m4-CVN. SSDGLQQ-wt-CVN was produced to restore anti-HIV activity and has a high nanomolar EC50 value compared to wt-CVN's low nanomolar activity. Overall, these experiments show that CV-N Domain B can be mutated to retain specificity identical to wt-CVN or to acquire new glycan specificities. This first-generation information can be used to produce glycan-specific lectins for a variety of applications.
ContributorsRuben, Melissa (Author) / Ghirlanda, Giovanna (Thesis advisor) / Allen, James (Committee member) / Wachter, Rebekka (Committee member) / Arizona State University (Publisher)
Created2013
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos, and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating the data with class labels is an expensive process in terms of time, labor, and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing the human labeling effort needed to induce classification models. To utilize the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms that reduce manual effort in training classification models for real-world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition, and active learning for regression). These contributions are validated on face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security, and assistive technology for the blind and visually impaired) and also on collaborative filtering applications like movie recommendation.
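As a rough illustration of the batch mode idea described in this abstract, the sketch below greedily selects a batch of unlabeled instances that are both uncertain (high predictive entropy) and mutually diverse. This is a generic heuristic for exposition only, not the dissertation's dynamic or IQP-based formulations; all names and the scoring rule are assumptions.

```python
import numpy as np

def select_batch(probs, features, batch_size):
    """Greedy batch selection: prefer high predictive entropy,
    penalize similarity to instances already in the batch.
    (Illustrative heuristic, not the dissertation's method.)"""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.where(norms == 0, 1, norms)
    selected = []
    for _ in range(batch_size):
        scores = entropy.copy()
        if selected:
            sim = unit @ unit[selected].T      # cosine similarity to batch
            scores -= sim.max(axis=1)          # redundancy penalty
        scores[selected] = -np.inf             # never reselect
        selected.append(int(np.argmax(scores)))
    return selected

# toy usage: 100 unlabeled instances, 3 classes, 10-dim features
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
feats = rng.normal(size=(100, 10))
print(select_batch(probs, feats, batch_size=5))
```

A dynamic variant, as in contribution (i), would additionally choose batch_size itself from the data rather than fixing it in advance.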
ContributorsChakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description
The increasing popularity of Twitter makes improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limited size of tweets, it is hard to extract ranking measures from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem consisting of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score powers two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
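To make the propagation step concrete, here is a minimal sketch of spreading initial reputation scores over a content-agreement graph via power iteration, in the style of personalized PageRank. The damping factor, the row-normalized transition matrix, and the iteration count are generic assumptions, not details taken from RAProp.

```python
import numpy as np

def propagate_reputation(agreement, initial, alpha=0.85, iters=50):
    """Diffuse per-tweet reputation over a weighted agreement graph.
    agreement[i, j]: content-similarity weight between tweets i and j.
    initial[i]: ecosystem-derived reputation prior for tweet i."""
    row_sums = agreement.sum(axis=1, keepdims=True)
    transition = agreement / np.where(row_sums == 0, 1, row_sums)
    scores = initial.astype(float).copy()
    for _ in range(iters):
        # keep (1 - alpha) mass on the prior so scores stay anchored
        scores = alpha * (transition.T @ scores) + (1 - alpha) * initial
    return scores

# toy usage: three tweets, the first two agree strongly
A = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
r0 = np.array([1.0, 0.2, 0.2])
print(propagate_reputation(A, r0))
```

Intuitively, a spam tweet that few trustworthy tweets agree with receives little propagated mass, which is one way an agreement graph can suppress spam.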
ContributorsRavikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
Description
The ability to design high performance buildings has acquired great importance in recent years due to numerous federal, societal, and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capability to perceive, evaluate, and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for a designer to know the "latitude" or "degrees of freedom" he has in changing certain design variables while achieving preset criteria such as energy performance, life cycle cost, and environmental impact. This requirement can be met by a decision support framework based on near-optimal "satisficing," as opposed to purely optimal, decision-making techniques. Currently, such a comprehensive design framework is lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for Multi-Criterion Decision Making (MCDM) during high performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole-building energy simulations, along with data mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed which uses regression-based models to create dynamic interplays showing how varying these important variables affects the multiple criteria, while providing a visual range or band of variation for the different design parameters. The MCDM process has been incorporated into an alternative methodology for high performance building design referred to as the Visual Analytics based Decision Support Methodology (VADSM). VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as for the evaluation of retrofits and operational deficiencies in existing buildings.
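A toy sketch of the first two steps in this pipeline, Monte Carlo sampling of design variables followed by regression-based variable importance ranking, is shown below. The variable names, ranges, and the stand-in "simulation" formula are invented for illustration; the dissertation uses deterministic whole-building energy simulations, not this expression.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Monte Carlo sample of hypothetical design variables (illustrative only)
wwr = rng.uniform(0.1, 0.6, n)    # window-to-wall ratio
r_val = rng.uniform(10, 40, n)    # wall insulation R-value
shgc = rng.uniform(0.2, 0.9, n)   # glazing solar heat gain coefficient
X = np.column_stack([wwr, r_val, shgc])

# stand-in for a deterministic whole-building energy simulation
energy = 120 - 1.5 * r_val + 80 * wwr * shgc + rng.normal(0, 2, n)

# regression surrogate ranks which variables drive the criterion most
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, energy)
for name, imp in zip(["wwr", "r_value", "shgc"], model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

The fitted surrogate is also what makes the interactive visualization responsive: re-predicting the criteria as a designer drags a variable is far cheaper than re-running a full simulation.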
ContributorsDutta, Ranojoy (Author) / Reddy, T Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created2013
Description
The microelectronics industry is continuously moving toward smaller and smaller devices and reduced form factors, resulting in new challenges. The reduction in device and interconnect solder bump sizes has led to increased current density in these small solder joints. The higher level of electromigration caused by this increased current density is of great concern, as it affects the reliability of entire microelectronic systems. This paper reviews electromigration in Pb-free solders, focusing specifically on Sn-0.7 wt.% Cu solder joints. The effects of texture, grain orientation, and grain-boundary misorientation angle on electromigration and intermetallic compound (IMC) formation are studied through EBSD analysis performed on actual C4 bumps.
ContributorsLara, Leticia (Author) / Tasooji, Amaneh (Thesis advisor) / Lee, Kyuoh (Committee member) / Krause, Stephen (Committee member) / Arizona State University (Publisher)
Created2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms, producing sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires only a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, is robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
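For intuition about how a per-channel delay estimate might be used, the sketch below time-shifts each gradient waveform by its estimated delay and re-integrates to obtain a corrected k-space trajectory. This toy version assumes a constant delay per channel, whereas the thesis estimates time-varying delays; all function and variable names here are invented.

```python
import numpy as np

GAMMA = 42.577e6  # proton gyromagnetic ratio, Hz/T

def corrected_trajectory(grad, dt, delays):
    """Shift each gradient channel by its estimated delay (seconds)
    via linear interpolation, then integrate to k-space.
    grad: (n_samples, n_channels) in T/m; toy constant-delay model."""
    t = np.arange(grad.shape[0]) * dt
    shifted = np.column_stack([
        np.interp(t - d, t, grad[:, ch], left=0.0)
        for ch, d in enumerate(delays)
    ])
    return GAMMA * np.cumsum(shifted, axis=0) * dt

# toy usage: a 2D spiral gradient pair with microsecond-scale delays
dt = 4e-6
t = np.arange(2048) * dt
amp = 0.01 * t / t[-1]  # slowly ramping amplitude
grad = np.column_stack([amp * np.cos(2e4 * t), amp * np.sin(2e4 * t)])
k = corrected_trajectory(grad, dt, delays=[1.2e-6, 0.8e-6])
```

Reconstructing with the corrected trajectory k, rather than the nominal one, is what removes the shading and blurring described above.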
ContributorsBhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created2013
Description
Vehicle type choice is a significant determinant of fuel consumption and energy sustainability; larger, heavier vehicles consume more fuel, and expel twice as many pollutants, as their smaller, lighter counterparts. Over the course of the past few decades, vehicle type choice has shifted dramatically, with many households making more trips in larger vehicles with lower fuel economy. During the 1990s, SUVs were the fastest growing segment of the automotive industry, comprising 7% of the total light vehicle market in 1990 and 25% in 2005. More recently, due to rising oil prices, greater environmental awareness, the desire to reduce dependence on foreign oil, and the availability of new vehicle technologies, many households are considering the use of newer vehicles with better fuel economy, such as hybrids and electric vehicles, over the SUVs or low-fuel-economy vehicles they may already own. The goal of this research is to examine how vehicle miles traveled, fuel consumption, and emissions may be reduced through shifts in vehicle type choice behavior. Using the 2009 National Household Travel Survey data, it is possible to develop a model to estimate household travel demand and total fuel consumption. Given a vehicle choice shift scenario, the model can then be used to calculate the potential fuel consumption savings that would result from such a shift. In this way, it is possible to estimate the fuel consumption reductions that would take place under a wide variety of scenarios.
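The core accounting behind such a scenario analysis is simple: fuel consumed equals miles traveled divided by fuel economy, summed over the household fleet. The sketch below works through a hypothetical shift scenario; the vehicle classes, mileages, and fuel economies are invented numbers, not NHTS estimates or the dissertation's model.

```python
def fleet_fuel(vmt, mpg):
    """Annual fuel use (gallons) = sum over vehicle types of VMT / MPG."""
    return sum(vmt[v] / mpg[v] for v in vmt)

# hypothetical household: annual miles and fuel economies
vmt = {"suv": 12000, "sedan": 8000}
mpg = {"suv": 18, "sedan": 30}
baseline = fleet_fuel(vmt, mpg)  # about 933 gallons

# scenario: half of the SUV miles shift to a 50-mpg hybrid
vmt_shift = {"suv": 6000, "sedan": 8000, "hybrid": 6000}
mpg_shift = {"suv": 18, "sedan": 30, "hybrid": 50}
scenario = fleet_fuel(vmt_shift, mpg_shift)

print(f"savings: {baseline - scenario:.0f} gallons per year")  # ~213
```

Summing such household-level differences over a survey-weighted population is what turns a single shift scenario into a fleet-wide savings estimate.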
ContributorsChristian, Keith (Author) / Pendyala, Ram M. (Thesis advisor) / Chester, Mikhail (Committee member) / Kaloush, Kamil (Committee member) / Ahn, Soyoung (Committee member) / Arizona State University (Publisher)
Created2013
Description
The construction industry in India suffers from major time and cost overruns. Data from government and industry reports suggest that projects suffer 20 to 25 percent time and cost overruns. Waste of resources has been identified as a major source of inefficiency. Despite a substantial increase in the past few years, the demand for professionals and contractors still exceeds supply by a large margin. The traditional methods adopted in the Indian construction industry may not meet the needs of this dynamic environment, as they have produced large inefficiencies. Innovative approaches to procurement and project management can satisfy these needs and bring added value. The problems faced by the Indian construction industry are very similar to those faced by other developing countries. The objective of this paper is to discuss and analyze the economic concerns and inefficiencies, and to investigate a model that both explains the structure of the Indian construction industry and provides a framework to improve efficiency. The Best Value (BV) model is examined as an approach to be adopted in lieu of the traditional approach. This could result in efficient construction projects, which until now have been a rarity, by minimizing cost overruns and delays.
ContributorsNihas, Syed (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Kashiwagi, Jacob (Committee member) / Arizona State University (Publisher)
Created2013