Matching Items (101)
Description
In this thesis I introduce a new direction in computing that uses nonlinear chaotic dynamics. The main idea is that the rich dynamics of a chaotic system enable us to (1) build better computers with a flexible instruction set, and (2) carry out computations that conventional computers are not good at. I start from the theory, explaining how one can build a computing logic block from a chaotic system, and then introduce a new theoretical analysis for chaos computing. Specifically, I demonstrate how unstable periodic orbits, and a model based on them, explain and predict how, and how well, a chaotic system can compute. Furthermore, since unstable periodic orbits and their stability measures, in terms of eigenvalues, are extractable from experimental time series, I develop a time-series technique for modeling and predicting chaos computing from a given time series of a chaotic system. After building a theoretical framework for chaos computing, I proceed to the architecture of these chaos-computing blocks, describing how one can arrange and organize them to build a sophisticated computing system. I propose a new computer architecture based on chaos computing, which shifts the limits of conventional computers by introducing a flexible instruction set: the user can load a desired instruction set to reconfigure the computer to implement it. Apart from the direct application of chaos theory to generic computation, the application of chaos theory to speech processing is explained, and a novel application of chaos theory to speech coding and synthesis is introduced. More specifically, it is demonstrated how a chaotic system can model the natural turbulent flow of air in the human speech production system, and how chaotic orbits can be used to excite a vocal tract model.
As another approach to building computing systems from nonlinear dynamics, the idea of Logical Stochastic Resonance is studied and adapted to an autoregulatory gene network in the bacteriophage λ.
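The core chaos-computing idea above, obtaining different logic operations from one chaotic element, can be sketched with a thresholded logistic map in the spirit of threshold-based chaos logic. All constants below (initial state, encoding offset, threshold) are hypothetical illustration values, not the thesis's parameters; with these particular values the element happens to realize a NAND gate, and shifting the initial state or threshold reconfigures it into other gates, which is the "flexible instruction set" idea in miniature.

```python
# Sketch of a threshold-based chaotic logic gate (illustrative, not the
# thesis's exact scheme). Two binary inputs perturb the initial state of
# a logistic map; the iterated state is thresholded to produce the output.

def logistic(x, r=4.0):
    """One iterate of the fully chaotic logistic map."""
    return r * x * (1.0 - x)

def chaos_gate(in1, in2, x0=0.325, delta=0.25, threshold=0.75, n_iter=1):
    """Encode inputs as perturbations of x0, iterate, then threshold."""
    x = x0 + delta * in1 + delta * in2
    for _ in range(n_iter):
        x = logistic(x)
    return 1 if x > threshold else 0

# Truth table for these (hypothetical) parameters:
table = {(a, b): chaos_gate(a, b) for a in (0, 1) for b in (0, 1)}
print(table)
```

For this parameter choice the table comes out as NAND, a universal gate; reprogramming the gate amounts to changing `x0` or `threshold` rather than rewiring hardware.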
ContributorsKia, Behnam (Author) / Ditto, William (Thesis advisor) / Huang, Liang (Committee member) / Lai, Ying-Cheng (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created2011
Description
In an effort to begin validating the large number of discovered candidate biomarkers, proteomics is beginning to shift from shotgun proteomic experiments towards targeted proteomic approaches that provide solutions to automation and economic concerns. Such approaches to validating biomarkers necessitate the mass spectrometric analysis of hundreds to thousands of human samples. As this takes place, a serendipitous opportunity has become evident: by narrowing the focus towards "single" protein targets (instead of entire proteomes) using pan-antibody-based enrichment techniques, a discovery science has emerged, so to speak. This is due to the largely unknown context in which "single" proteins exist in blood (i.e., polymorphisms, transcript variants, and posttranslational modifications); hence, targeted proteomics has applications even for established biomarkers. Furthermore, besides protein heterogeneity accounting for interferences with conventional immunometric platforms, it is becoming evident that this formerly hidden dimension of structural information also contains rich pathobiological information. Consequently, targeted proteomics studies that aim to ascertain a protein's genuine presentation within disease-stratified populations, and that serve as a stepping-stone within a biomarker translational pipeline, are of clinical interest. Roughly 128 million Americans are pre-diabetic, diabetic, and/or have kidney disease, and public and private spending on treating these diseases runs to hundreds of billions of dollars. In an effort to create new solutions for the early detection and management of these conditions, described herein is the design, development, and translation of mass spectrometric immunoassays targeted towards diabetes and kidney disease. Population proteomics experiments were performed for the following clinically relevant proteins: insulin, C-peptide, RANTES, and parathyroid hormone. At least thirty-eight protein isoforms were detected.
Besides the numerous disease correlations confronted within the disease-stratified cohorts, certain isoforms also appeared to be causally related to the underlying pathophysiology and/or to have therapeutic implications. Technical advancements include multiplexed isoform quantification as well as a "dual-extraction" methodology for eliminating non-specific proteins while simultaneously validating isoforms. Industrial efforts towards widespread clinical adoption are also described. Consequently, this work lays a foundation for the translation of mass spectrometric immunoassays into the clinical arena and presents the most recent advancements in the mass spectrometric immunoassay approach.
ContributorsOran, Paul (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Ros, Alexandra (Committee member) / Williams, Peter (Committee member) / Arizona State University (Publisher)
Created2011
Description
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is the mass spectrometric immunoassay (MSIA), which has been one of the primary techniques for biomarker discovery. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is to deploy stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments on the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and the ML estimation is evaluated with Monte Carlo simulation, which shows promising results.
An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier.
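The detection-and-estimation pipeline described above can be illustrated with a textbook stand-in: a Neyman-Pearson likelihood-ratio test for a known peak shape in white Gaussian noise, plus the ML amplitude estimate. The Gaussian peak shape, noise level, and false-alarm rate below are assumptions for illustration, not the thesis's device-derived ESI TOF-MS models.

```python
# Generic NP likelihood-ratio detector and ML amplitude estimator for a
# known peak s in white Gaussian noise (illustrative assumptions only).
import numpy as np

n = 64
t = np.arange(n)
s = np.exp(-0.5 * ((t - 32) / 3.0) ** 2)  # assumed Gaussian peak shape
sigma = 0.5                                # assumed noise standard deviation

def lrt_statistic(x):
    """Under H1: x = a*s + w vs H0: x = w, w ~ N(0, sigma^2 I), the
    log-likelihood ratio reduces to the matched-filter output s'x."""
    return s @ x

# NP threshold for false-alarm rate alpha = 1e-3; 3.0902 is the
# standard-normal quantile for probability 1 - 1e-3.
tau = sigma * np.linalg.norm(s) * 3.0902

def ml_amplitude(x):
    """ML estimate of the peak amplitude a in the model x = a*s + w."""
    return (s @ x) / (s @ s)

rng = np.random.default_rng(0)
x = 2.0 * s + sigma * rng.standard_normal(n)  # synthetic peak, amplitude 2
print(lrt_statistic(x) > tau, round(ml_amplitude(x), 2))
```

The matched-filter reduction holds only for this additive Gaussian model; the thesis's point is precisely that realistic ESI TOF-MS signal and noise models change the form of the optimal statistic.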
ContributorsBuddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created2012
Description
Complex dynamical systems consisting of interacting dynamical units are ubiquitous in nature and society. Predicting and reconstructing the nonlinear dynamics of the units, and the complex interaction networks among them, serves as the basis for understanding a variety of collective dynamical phenomena. I present a general method to address these two outstanding problems as a whole, based solely on time-series measurements. The method is implemented by incorporating a compressive sensing approach that enables accurate reconstruction of complex dynamical systems in terms of both the nodal equations that determine the self-dynamics of units and the detailed coupling patterns among units. The representative advantages of the approach are (i) its sparse data requirement, which allows a successful reconstruction from limited measurements, and (ii) its general applicability to identical and nonidentical nodal dynamics, and to networks of arbitrary interaction structure, strength, and size. Two other challenging problems of significant interest in nonlinear dynamics, (i) predicting catastrophes in nonlinear dynamical systems in advance of their occurrence and (ii) predicting the future state of time-varying nonlinear dynamical systems, can also be formulated and solved in the framework of compressive sensing using only limited measurements. Once the network structure has been inferred, the dynamical behavior on it can be investigated, for example to optimize information-spreading dynamics, suppress cascading dynamics and traffic congestion, enhance synchronization, or study game dynamics. The results can yield insights into the design of control strategies for real-world social and natural systems. Since 2004, there has been a tremendous amount of interest in graphene. The most remarkable feature of graphene is its linear energy-momentum relationship at low energies.
The quasi-particles in the system can be treated as chiral, massless Dirac fermions obeying relativistic quantum mechanics. Graphene therefore provides a perfect test bed for investigating relativistic quantum phenomena, such as relativistic quantum chaotic scattering and abnormal electron paths induced by Klein tunneling. These phenomena have profound implications for the development of graphene-based devices that require stable electronic properties.
ContributorsYang, Rui (Author) / Lai, Ying-Cheng (Thesis advisor) / Duman, Tolga M. (Committee member) / Akis, Richard (Committee member) / Huang, Liang (Committee member) / Arizona State University (Publisher)
Created2012
Description
Cancer claims hundreds of thousands of lives every year in the US alone. Finding ways to detect cancer onset early is crucial for better management and treatment. Thus biomarkers, especially protein biomarkers, being the functional units which reflect dynamic physiological changes, need to be discovered. Though important, there are only a few approved protein cancer biomarkers to date. To accelerate this process, fast, comprehensive, and affordable assays are required which can be applied to large population studies. For this, these assays should be able to comprehensively characterize and explore the molecular diversity of nominally "single" proteins across populations. This information is usually unavailable with commonly used immunoassays such as ELISA (enzyme-linked immunosorbent assay), which either ignore protein microheterogeneity or are confounded by it. To this end, mass spectrometric immunoassays (MSIA) for three different human plasma proteins have been developed. These proteins, viz. IGF-1, hemopexin, and tetranectin, have been reported in the literature to correlate with many diseases, along with several carcinomas. The developed assays were used to extract the entire proteins from plasma samples, which were subsequently analyzed on mass spectrometric platforms. Matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ESI) mass spectrometric techniques were used due to their availability and suitability for the analysis. This revealed different structural forms of these proteins, exposing a structural micro-heterogeneity that is invisible to commonly used immunoassays. These assays are fast and comprehensive, and can be applied in large sample studies to analyze proteins for biomarker discovery.
ContributorsRai, Samita (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Borges, Chad (Committee member) / Ros, Alexandra (Committee member) / Arizona State University (Publisher)
Created2012
Description
What classical chaos can do to quantum systems is a fundamental issue highly relevant to a number of branches of physics. The field of quantum chaos has been active for three decades, with the focus on non-relativistic quantum systems described by the Schrödinger equation. By developing an efficient method to solve the Dirac equation in the setting where relativistic particles can tunnel between two symmetric cavities through a potential barrier, chaotic cavities are found to suppress the spread in the tunneling rate. For integrable classical dynamics, the tunneling rate at any given energy assumes a wide range of values that increases with the energy; for chaotic underlying dynamics, the spread is greatly reduced. A remarkable feature, a consequence of Klein tunneling, arises only in relativistic quantum systems: substantial tunneling exists even for particle energies approaching zero. Similar results are found in graphene tunneling devices, implying the high relevance of relativistic quantum chaos to the development of such devices. Wave propagation through random media occurs in many physical systems, where interesting phenomena such as branched, fractal-like wave patterns can arise. The generic origin of these wave structures is currently a matter of active debate, and it is of fundamental interest to develop a minimal, paradigmatic model that can generate robust branched wave structures. A general observation in all situations where branched structures emerge is non-Gaussian statistics of the wave intensity, with an algebraic tail in the probability density function. Thus, a universal algebraic wave-intensity distribution becomes the criterion for the validity of any minimal model of branched wave patterns. Coexistence of competing species in spatially extended ecosystems is key to biodiversity in nature.
Understanding the dynamical mechanisms of coexistence is a fundamental problem of continuing interest, not only in evolutionary biology but also in nonlinear science. A continuous model is proposed for cyclically competing species, and the effect of the interplay between interaction range and mobility on coexistence is investigated. A transition from coexistence to extinction is uncovered, with non-monotonic behavior in the coexistence probability and switches between spiral and plane-wave patterns. Strong mobility can either promote or hamper coexistence, a behavior absent in lattice-based models that can be explained in terms of nonlinear partial differential equations.
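A minimal continuous model of cyclically competing species with mobility can be sketched as a one-dimensional reaction-diffusion system: three densities on a ring, each species gaining from the one it dominates and losing to the one that dominates it, with diffusion standing in for mobility. The rates, grid, diffusion constant, and integration scheme below are illustrative assumptions, far simpler than the thesis's model.

```python
# Toy 1-D rock-paper-scissors reaction-diffusion system (illustrative).
import numpy as np

def step(u, dt=0.05, dx=1.0, D=0.1):
    """u has shape (3, L): densities of three species on a ring of L sites.
    Species i gains from species i+1 (cyclically) and all species diffuse."""
    lap = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / dx**2
    prey = np.roll(u, -1, axis=0)   # the species each one dominates
    pred = np.roll(u, 1, axis=0)    # the species each one loses to
    return u + dt * (u * (prey - pred) + D * lap)

rng = np.random.default_rng(2)
L = 100
u = rng.random((3, L))
u /= u.sum(axis=0)                  # normalize total density to 1 per site
for _ in range(2000):
    u = step(u)

totals = u.sum(axis=1)              # population of each species
print((totals > 1.0).all())         # do all three species persist?
```

The cyclic reaction term is zero-sum, so the total density is conserved exactly; in this toy setting all three species typically persist, while varying the diffusion constant `D` probes the mobility effects the abstract describes.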
ContributorsNi, Xuan (Author) / Lai, Ying-Cheng (Thesis advisor) / Huang, Liang (Committee member) / Yu, Hongbin (Committee member) / Akis, Richard (Committee member) / Arizona State University (Publisher)
Created2012
Description
This paper features analysis of interdisciplinary collaboration, based on the results from the Kolbe A™ Index of students in the Nano Ethics at Play (NEAP) class, a four week course in Spring 2015. The Kolbe A™ is a system which describes the Conative Strengths of each student, or their natural drive and instinct. NEAP utilized the LEGO® SERIOUS PLAY® (LSP) method, which uses abstract LEGO models to describe answers to a proposed question in school or work environments. The models could be described piece by piece to provide clear explanations without allowing disciplinary jargon, which is why the class contained students from eleven different majors (Engineering (Civil, Biomedical, & Electrical), Business (Marketing & Supply Chain Management), Architectural Studies, Sustainability, Anthropology, Communications, Philosophy, & Psychology).

The proposed hypotheses were based on the four Kolbe A™ strengths, or Action Modes: Fact Finder, Follow Through, Quick Start, and Implementor. Hypotheses were made about class participation and official class Twitter use (via #ASUsp) for each Kolbe type. The results proved these hypotheses incorrect, indicating a lack of correlation between Kolbe A™ types and playing. The report also includes qualitative results, such as Twitter keywords and a sentiment calculation for each week of the course. The class had many positive outcomes, including growth in students' ability to collaborate, further understanding of how to integrate Twitter use into the classroom, and more knowledge about the effectiveness of LSP.
Created2015-12
Description
One of the salient challenges of sustainability is the Tragedy of the Commons, where individuals acting independently and rationally deplete a common resource despite their understanding that it is not in the group's long term best interest to do so. Hardin presents this dilemma as nearly intractable and solvable only by drastic, government-mandated social reforms, while Ostrom's empirical work demonstrates that community-scale collaboration can circumvent tragedy without any elaborate outside intervention. Though more optimistic, Ostrom's work provides scant insight into larger-scale dilemmas such as climate change. Consequently, it remains unclear if the sustainable management of global resources is possible without significant government mediation. To investigate, we conducted two game theoretic experiments that challenged students in different countries to collaborate digitally and manage a hypothetical common resource. One experiment involved students attending Arizona State University and the Rochester Institute of Technology in the US and Mountains of the Moon University in Uganda, while the other included students at Arizona State and the Management Development Institute in India. In both experiments, students were randomly assigned to one of three production roles: Luxury, Intermediate, and Subsistence. Students then made individual decisions about how many units of goods they wished to produce up to a set maximum per production class. Luxury players gain the most profit (i.e. grade points) per unit produced, but they also emit the most externalities, or social costs, which directly subtract from the profit of everybody else in the game; Intermediate players produce a medium amount of profit and externalities per unit, and Subsistence players produce a low amount of profit and externalities per unit. Variables influencing and/or inhibiting collaboration were studied using pre- and post-game surveys. 
This research sought to answer three questions: 1) Are international groups capable of self-organizing in a way that promotes sustainable resource management?, 2) What are the key factors that inhibit or foster collective action among international groups?, and 3) How well do Hardin's theories and Ostrom's empirical models predict the observed behavior of students in the game? The results of gameplay suggest that international cooperation is possible, though likely sub-optimal. Statistical analysis of survey data revealed that heterogeneity and levels of trust significantly influenced game behavior. Specific traits of heterogeneity among students found to be significant were income, education, assigned production role, number of people in one's household, college class, college major, and military service. Additionally, it was found that Ostrom's collective action framework was a better predictor of game outcome than Hardin's theories. Overall, this research lends credence to the plausibility of international cooperation in tragedy of the commons scenarios such as climate change, though much work remains to be done.
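The payoff structure of the production game described above, private profit minus the externalities emitted by everyone else, can be sketched in a few lines. The per-unit profit and externality values below are hypothetical placeholders, not the actual grade-point values used in the experiments.

```python
# Toy payoff model for the Luxury/Intermediate/Subsistence production game.
# All per-unit values are hypothetical placeholders for illustration.

ROLES = {
    # role: (profit per unit, externality per unit)
    "luxury":       (10.0, 6.0),
    "intermediate": (6.0, 3.0),
    "subsistence":  (3.0, 1.0),
}

def payoffs(decisions):
    """decisions: list of (role, units) pairs. Each player's payoff is
    their own production profit minus the externalities emitted by all
    OTHER players (the social-cost structure described in the abstract)."""
    emitted = [ROLES[r][1] * u for r, u in decisions]
    total_ext = sum(emitted)
    return [ROLES[r][0] * u - (total_ext - e)
            for (r, u), e in zip(decisions, emitted)]

# Three players all producing at an assumed per-class maximum of 5 units:
print(payoffs([("luxury", 5), ("intermediate", 5), ("subsistence", 5)]))
```

Even in this toy version, full production rewards the Luxury role while pushing the Subsistence role deeply negative, which is the asymmetry that makes the commons dilemma interesting across heterogeneous groups.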
ContributorsStanton, Albert Grayson (Author) / Clark, Susan Spierre (Thesis director) / Seager, Thomas (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2014-12
Description
While public transit systems are perceived to produce lower GHG emission intensities per passenger mile traveled (PMT) and per vehicle mile traveled (VMT), there is a limited understanding of emissions per PMT/VMT across cities, or of how emissions may change across modes (light rail, metro, commuter rail, and bus) and time (e.g., with changing electricity mixes in the future). In order to better understand the GHG emission intensity of public transit systems, a comparative emissions assessment was developed utilizing the National Transit Database (NTD), which reports the energy use of rail and bus systems across the US from 1997 to 2012. By determining the GHG emission intensities (per VMT or per PMT) for each mode of transit across multiple years, the modes of transit can be better compared with one another. This comparison can help inform future goals to reduce GHG emissions, as well as target reductions from the mode of transit with the highest emissions. The proposed analysis of the NTD and comparison of modal emission intensities will be used to develop future forecasting that can guide public transit systems towards a sustainable future.
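The per-PMT intensity calculation described above reduces to converting each mode's reported energy use into CO2-equivalent mass and dividing by ridership. The emission factors, energy figures, and ridership numbers below are made-up placeholders for illustration; real inputs come from the National Transit Database and the relevant grid/fuel factors.

```python
# Sketch of a per-PMT GHG emission-intensity calculation (all numbers
# are hypothetical placeholders, not NTD data).

KG_CO2E_PER_KWH = 0.5           # assumed grid electricity factor
KG_CO2E_PER_GAL_DIESEL = 10.2   # approximate diesel combustion factor

def intensity_per_pmt(kwh=0.0, diesel_gal=0.0, pmt=1.0):
    """Grams CO2e per passenger mile traveled for one transit mode."""
    kg = kwh * KG_CO2E_PER_KWH + diesel_gal * KG_CO2E_PER_GAL_DIESEL
    return 1000.0 * kg / pmt

# Hypothetical light-rail vs bus comparison for one reporting year:
light_rail = intensity_per_pmt(kwh=2.0e7, pmt=1.5e8)
bus = intensity_per_pmt(diesel_gal=3.0e6, pmt=2.0e8)
print(round(light_rail, 1), round(bus, 1))
```

The same function applied per mode and per year is all that is needed to build the cross-modal, multi-year comparison the abstract describes; the electricity factor is also the lever for the "changing electricity mixes" forecasting question.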
ContributorsCano, Alex (Author) / Chester, Mikhail (Thesis director) / Seager, Thomas (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / School of Human Evolution and Social Change (Contributor)
Created2014-12
Description
Life cycle assessment (LCA) is increasingly identified as the proper tool/framework for performing cradle-to-grave analysis of a product, technology, or supply chain. LCA proceeds by accounting for the materials and energy needed for materials extraction, beneficiation, and end-of-life management, in addition to the actual lifetime of the product. This type of analysis is commonly used to evaluate forms of renewable energy to ensure that we don't harm the environment in the name of saving it. For instance, LCA for photovoltaic (PV) technologies can be used to evaluate their environmental impacts. CdTe thin-film solar cells rely on cadmium and tellurium metals, which are produced as by-products in the refining of zinc and copper ore, respectively. In order to understand the environmental burdens of tellurium, it is useful to explore the extraction and refining process of copper. Copper can be refined using either a hydrometallurgical or a pyrometallurgical process. I conducted a comparison of these two methods to determine the environmental impacts, the chemical reactions involved, the energy requirements, and the extraction costs of each. I then examined the extraction of tellurium from the anode slime produced in the pyrometallurgical process and determined its energy requirements. I connected this to the production of CdTe and the power produced by a CdTe module, and analyzed the production cost of CdTe modules under increasing tellurium prices. It was concluded that tellurium production will be limited by the increasing use of hydrometallurgical extraction of copper. Additionally, tellurium scarcity will not place a physical constraint on CdTe commercial expansion; however, it could affect price-reduction goals.
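The cost-sensitivity analysis described above can be sketched as a back-of-envelope calculation: tellurium content per module area times tellurium price, spread over the module's power output, added to a non-tellurium base cost. Every number below is a hypothetical placeholder for illustration; the thesis derives its figures from the copper-refining analysis.

```python
# Back-of-envelope sensitivity of CdTe module cost to tellurium price.
# All parameter values are hypothetical placeholders.

TE_G_PER_M2 = 7.0        # assumed Te content of a module, g/m^2
W_PER_M2 = 150.0         # assumed module power density, W/m^2
BASE_COST_PER_W = 0.60   # assumed non-Te module cost, $/W

def module_cost_per_watt(te_price_per_kg):
    """Module cost in $/W as a function of the tellurium price in $/kg."""
    te_cost_per_m2 = TE_G_PER_M2 / 1000.0 * te_price_per_kg
    return BASE_COST_PER_W + te_cost_per_m2 / W_PER_M2

for price in (100, 400, 1000):   # tellurium price scenarios, $/kg
    print(price, round(module_cost_per_watt(price), 4))
```

Under these assumptions, even a tenfold tellurium price increase moves the module cost by only a few cents per watt, which illustrates why scarcity threatens price-reduction goals rather than physical feasibility.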
ContributorsMacIsaac, Kirsten Breanne (Author) / Seager, Thomas (Thesis director) / Fraser, Matthew (Committee member) / Wender, Ben (Committee member) / Barrett, The Honors College (Contributor) / Chemical Engineering Program (Contributor)
Created2013-05