Matching Items (1,211)

Description
Bioparticles comprise a diverse array of materials ubiquitously present in nature. From proteins to aerosolized biological debris, bioparticles have important roles spanning from regulating cellular functions to possibly influencing global climate. Understanding their structures, functions, and properties provides the necessary tools to expand our fundamental knowledge of biological systems and exploit them for useful applications. To contribute to these efforts, the work presented in this dissertation focuses on the study of electrokinetic properties of liposomes and novel applications of bioaerosol analysis. Using immobilized lipid vesicles under the influence of modest (less than 100 V/cm) electric fields, a novel strategy for bionanotubule fabrication with superior throughput and simplicity was developed. Fluorescence and bright-field microscopy were used to describe the formation of these bilayer-bound cylindrical structures, which have been previously identified in nature (playing crucial roles in intercellular communication) and made synthetically by direct mechanical manipulation of membranes. In the biological context, the results of this work suggest that mechanical and electrostatic interactions may play a role in the shape and function of individual biological membranes and networks of membrane-bound structures. A second project involving liposomes focused on membrane potential measurements in vesicles containing trans-membrane pH gradients. These types of gradients consist of differential charge states in the lipid bilayer leaflets, which have been shown to greatly influence the efficacy of drug targeting and the treatment of diseases such as cancer. Here, these systems are qualitatively and quantitatively assessed using voltage-sensitive membrane dyes and fluorescence spectroscopy. Bioaerosol studies explored the feasibility of a fingerprinting technology based on the current understanding of cellular debris in aerosols and on arguments regarding sampling, sensitivity, separations, and detection schemes for this debris. Aerosolized particles of cellular material and proteins emitted by humans, animals, and plants can be considered information-rich packets that carry biochemical information specific to the living organisms present in the collection settings. These materials could potentially be exploited for identification purposes. Preliminary studies evaluated protein concentration trends in both indoor and outdoor locations. Results indicated that concentrations correlate with certain conditions of the collection environment (e.g., the extent of human presence), supporting the idea that bioaerosol fingerprinting is possible.
Contributors: Castillo Gutiérrez, Josemar Andreina (Author) / Hayes, Mark A. (Thesis advisor) / Herckes, Pierre (Committee member) / Ghirlanda, Giovanna (Committee member) / Arizona State University (Publisher)
Created: 2011
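The membrane-potential measurements described in the entry above rely on voltage-sensitive dyes, which are typically calibrated against an imposed ion gradient. The abstract does not state which calibration scheme was used, so the following is only the generic Nernst relation that such calibrations are commonly based on; the K+/valinomycin example is an assumption, not drawn from the dissertation.

```latex
% Nernst potential for a single permeant ion, here K+ (e.g., made permeant with
% valinomycin in a dye-calibration experiment); z is the ion charge, R the gas
% constant, T the absolute temperature, and F the Faraday constant.
\Delta\Psi \;=\; \frac{RT}{zF}\,
  \ln\!\left(\frac{[\mathrm{K}^{+}]_{\mathrm{out}}}{[\mathrm{K}^{+}]_{\mathrm{in}}}\right)
\;\approx\; 59\,\mathrm{mV}\times
  \log_{10}\!\left(\frac{[\mathrm{K}^{+}]_{\mathrm{out}}}{[\mathrm{K}^{+}]_{\mathrm{in}}}\right)
\quad\text{at } T \approx 298\,\mathrm{K},\; z = 1.
```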
Description
Java is currently making its way into embedded systems and mobile devices such as Android. Programs written in Java are compiled into machine-independent bytecode packaged in class files, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems as it provides a mechanism for interacting with libraries specific to the platform. This thesis addresses the overhead incurred in JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object directly through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
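The entry above attributes much of the JNI object-access cost to reflection and serialization. The dissertation's pinned-reference API is not reproduced here; the sketch below is only a minimal, self-contained Java illustration of the kind of per-access penalty reflective field lookups carry relative to direct access. The class and field names are hypothetical.

```java
import java.lang.reflect.Field;

// Minimal sketch (not the thesis API): contrasts direct field access with
// reflective access, the kind of per-access overhead the abstract attributes
// to reflection-based object handling across the JNI boundary.
public class ReflectionOverheadSketch {
    static class Sample { int value = 42; }   // hypothetical payload object

    public static void main(String[] args) throws Exception {
        Sample s = new Sample();
        final int N = 1_000_000;

        long t0 = System.nanoTime();
        long sumDirect = 0;
        for (int i = 0; i < N; i++) {
            sumDirect += s.value;             // direct access: resolved statically
        }
        long direct = System.nanoTime() - t0;

        Field f = Sample.class.getDeclaredField("value");
        f.setAccessible(true);
        long t1 = System.nanoTime();
        long sumReflect = 0;
        for (int i = 0; i < N; i++) {
            sumReflect += f.getInt(s);        // reflective access: per-call lookup and checks
        }
        long reflect = System.nanoTime() - t1;

        System.out.printf("direct: %d ns, reflective: %d ns (sums %d/%d)%n",
                direct, reflect, sumDirect, sumReflect);
    }
}
```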
Description
As pointed out in the keynote speech by H. V. Jagadish in SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the data management systems' functionalities. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguities, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges due to the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying the return information, and composing query results based on the relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. Together, these contributions constitute significant technical steps toward building a full-fledged search engine for structured data.
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H. V. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing the various key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of which are superior to those obtained from state-of-the-art algorithms on the same test datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
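For readers unfamiliar with the multilinear analysis referenced above, a generic multilinear (Tucker-style) model of the kind used in multifactor pose analysis is sketched below. The exact factorization, mode ordering, and notation used in the dissertation are not given in the abstract, so this should be read only as the general form.

```latex
% A data tensor of image observations is factored into a core tensor and
% per-factor mode matrices; the pose-mode coefficients serve as features that
% are (ideally) invariant to the view-angle and body-shape factors.
\mathcal{D} \;\approx\; \mathcal{C}
  \times_{1} \mathbf{U}_{\text{pose}}
  \times_{2} \mathbf{U}_{\text{view}}
  \times_{3} \mathbf{U}_{\text{shape}}
  \times_{4} \mathbf{U}_{\text{pixels}}
```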
Description
This thesis describes a synthetic task environment, CyberCog, created for the purposes of 1) understanding and measuring individual and team situation awareness in the context of a cyber security defense task and 2) providing a context for evaluating algorithms, visualizations, and other interventions intended to improve cyber situation awareness. CyberCog provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of a cyber security defense analyst in response to a cyber-attack scenario. CyberCog generates the performance measures and interaction logs needed for measuring individual and team cyber situation awareness. Moreover, the CyberCog environment provides good experimental control for conducting effective situation awareness studies while retaining realism in the scenario and in the tasks performed.
Contributors: Rajivan, Prashanth (Author) / Femiani, John (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Lindquist, Timothy (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been approached in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines applications of compressive sensing and sparse representation to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive-sensing-based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. The design and implementation of an image classification system based on unique affine sparse codes are described and lead to state-of-the-art results, followed by an analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation on publicly available datasets shows that the proposed method outperforms other state-of-the-art approaches to image classification. The final part of the thesis deals with image denoising, with a novel approach to obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist structure within a noisy image that can be exploited for denoising through a low-rank constraint.
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
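As background for the compressive-sensing-based reconstruction discussed above, the standard sparse recovery formulation is sketched below. The dissertation's specific dictionaries, projection operators, and solvers are not described in the abstract, so only the generic problem is shown.

```latex
% Compressive measurements y of a signal x that is sparse in dictionary D
% (x = D\alpha with few nonzero entries in \alpha), acquired through a sensing
% matrix \Phi; recovery via l1-relaxed sparse coding.
y = \Phi x + n, \qquad x = D\alpha,
\qquad
\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_{1}
\;\;\text{s.t.}\;\; \|y - \Phi D \alpha\|_{2} \le \varepsilon,
\qquad \hat{x} = D\hat{\alpha}.
```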
Description
Applications of non-traditional stable isotope variations are moving beyond the geosciences into biomedicine, made possible by advances in multiple collector inductively coupled plasma mass spectrometry (MC-ICP-MS) technology. Mass-dependent isotope variation can provide information about the sources of elements and the chemical reactions they undergo. Iron and calcium isotope systematics in biomedicine are relatively unexplored but are of great potential scientific interest because of the essential roles of these elements in metabolism. Iron, a crucial element in biology, fractionates during biochemically relevant reactions. To test the extent of this fractionation in an important reaction process, equilibrium iron isotope fractionation during organic ligand exchange was determined. The results show that iron fractionates during organic ligand exchange and that isotope enrichment increases as a function of the difference in binding constants between ligands. Additionally, to create a mass balance model for iron in a whole organism, iron isotope compositions in a whole mouse and in individual mouse organs were measured. The results indicate that fractionation occurs during transfer between individual organs and that the whole organism was isotopically light compared with its food. These two experiments advance our ability to interpret stable iron isotopes in biomedicine. Previous research demonstrated that calcium isotope variations in urine can be used as an indicator of changes in net bone mineral balance. In order to measure calcium isotopes by MC-ICP-MS, a chemical purification method was developed to quantitatively separate calcium from other elements in a biological matrix. Subsequently, this method was used to evaluate whether calcium isotopes respond when organisms are subjected to conditions known to induce bone loss: 1) rhesus monkeys were given an estrogen-suppressing drug; 2) human patients underwent extended bed rest. In both studies, there were rapid, detectable changes in calcium isotope compositions from baseline, verifying that calcium isotopes can be used to rapidly detect changes in bone mineral balance. By characterizing iron isotope fractionation in biologically relevant processes and by demonstrating that calcium isotopes vary rapidly in response to bone loss, this thesis represents an important step toward utilizing these isotope systems as diagnostic and mechanistic tools to study the metabolism of these elements in vivo.
Contributors: Morgan, Jennifer Lynn Louden (Author) / Anbar, Ariel D. (Thesis advisor) / Wasylenki, Laura E. (Committee member) / Jones, Anne K. (Committee member) / Shock, Everett (Committee member) / Arizona State University (Publisher)
Created: 2011
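Isotope compositions of the kind measured above are conventionally reported in per-mil delta notation relative to a reference standard. The specific isotope ratios and standards used in the dissertation are not listed in the abstract, so the iron example below is illustrative only.

```latex
% Per-mil delta notation: the deviation of a sample's isotope ratio
% R = {}^{56}\mathrm{Fe}/{}^{54}\mathrm{Fe} from that of a reference standard.
\delta^{56}\mathrm{Fe} \;=\;
\left(
  \frac{\left({}^{56}\mathrm{Fe}/{}^{54}\mathrm{Fe}\right)_{\mathrm{sample}}}
       {\left({}^{56}\mathrm{Fe}/{}^{54}\mathrm{Fe}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000 \quad [\text{per mil}]
```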
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can, from a computational perspective, impede the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to a nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
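The entry above ranks candidate genes by their network association with known disease genes. The dissertation's own integrated-network model is not specified in the abstract; the sketch below is only a generic member of the same association-based family (a random walk with restart over a toy network), with made-up gene names and edges, included to make the ranking idea concrete.

```java
import java.util.Arrays;

// Hedged sketch: generic network-propagation (random-walk-with-restart) gene
// prioritization. NOT the dissertation's algorithm; the genes, network, and
// restart parameter are hypothetical.
public class RwrPrioritizerSketch {
    public static void main(String[] args) {
        String[] genes = {"G0", "G1", "G2", "G3", "G4"};      // hypothetical genes
        double[][] adj = {                                    // toy undirected association network
            {0, 1, 1, 0, 0},
            {1, 0, 1, 1, 0},
            {1, 1, 0, 0, 0},
            {0, 1, 0, 0, 1},
            {0, 0, 0, 1, 0}
        };
        boolean[] seed = {true, false, false, false, false};  // known disease genes

        int n = genes.length;

        // Column-normalize the adjacency matrix into a transition matrix W.
        double[][] w = new double[n][n];
        for (int j = 0; j < n; j++) {
            double colSum = 0;
            for (int i = 0; i < n; i++) colSum += adj[i][j];
            for (int i = 0; i < n; i++) w[i][j] = (colSum == 0) ? 0 : adj[i][j] / colSum;
        }

        // Restart vector: uniform over the seed (known disease) genes.
        int seeds = 0;
        for (boolean s : seed) if (s) seeds++;
        double[] p0 = new double[n];
        for (int i = 0; i < n; i++) p0[i] = seed[i] ? 1.0 / seeds : 0.0;

        // Iterate p <- (1 - r) * W * p + r * p0 until convergence.
        double restart = 0.3;
        double[] p = p0.clone();
        for (int iter = 0; iter < 1000; iter++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double acc = 0;
                for (int j = 0; j < n; j++) acc += w[i][j] * p[j];
                next[i] = (1 - restart) * acc + restart * p0[i];
            }
            double diff = 0;
            for (int i = 0; i < n; i++) diff += Math.abs(next[i] - p[i]);
            p = next;
            if (diff < 1e-9) break;                           // converged
        }

        // Rank candidate (non-seed) genes by their steady-state probability.
        final double[] score = p;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(score[b], score[a]));
        for (int idx : order) {
            if (!seed[idx]) System.out.printf("%s  score=%.4f%n", genes[idx], score[idx]);
        }
    }
}
```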
Description
Service-based software (SBS) systems are software systems consisting of services based on the service-oriented architecture (SOA). Each service in an SBS system provides partial functionality and collaborates with other services in workflows to provide the functionality required by the system. These services may be developed and/or owned by different entities and are physically distributed across the Internet. Compared with traditional software components, which are usually designed specifically for the target systems and tightly bound to them, the interfaces of services and their communication protocols are standardized, which allows SBS systems to support late binding and provides better interoperability, greater flexibility in dynamic business logic, and higher fault tolerance. The development process of SBS systems can be divided into three major phases: 1) SBS specification, 2) service discovery and matching, and 3) service composition and workflow execution. This dissertation focuses on the second phase and presents a privacy-preserving service discovery and ranking approach for multiple user QoS requirements. The approach helps service providers register services, and service users search for services, through public but untrusted service directories while protecting their privacy against those directories. The service directories can match the registered services with service requests but do not learn any information about them. The approach also enforces access control on services during the matching process, which prevents unauthorized users from discovering services. After the service directories match a set of services that satisfy the service users' functionality requirements, the service discovery approach presented in this dissertation further considers the users' QoS requirements in two steps. First, it optimizes services' QoS by making tradeoffs among various QoS aspects in light of users' QoS requirements and preferences. Second, it ranks services based on how well they satisfy users' QoS requirements, to help service users select the most suitable service for developing their SBS systems.
Contributors: Yin, Yin (Author) / Yau, Stephen S. (Thesis advisor) / Candan, Kasim (Committee member) / Dasgupta, Partha (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2011
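The second step described in the entry above ranks services by how well they satisfy users' QoS requirements. The dissertation's privacy-preserving matching, access control, and tradeoff optimization are not reproduced here; the sketch below is only a generic multi-attribute QoS scoring and ranking step (min-max normalization plus a weighted sum), with made-up services, attributes, and weights.

```java
import java.util.Arrays;

// Hedged sketch: generic QoS-based service ranking, not the dissertation's
// protocol. Attributes are normalized to [0, 1] so that 1 is always "best",
// then combined with user-specified weights.
public class QosRankingSketch {
    public static void main(String[] args) {
        String[] services = {"S1", "S2", "S3"};                // candidate services (hypothetical)
        // Columns: latency in ms (lower is better), availability in %
        // (higher is better), cost per call in $ (lower is better).
        double[][] qos = {
            {120, 99.5, 0.020},
            { 80, 98.0, 0.035},
            {200, 99.9, 0.010}
        };
        boolean[] higherIsBetter = {false, true, false};
        double[] weights = {0.5, 0.3, 0.2};                    // user's QoS preferences (sum to 1)

        int n = services.length, m = weights.length;
        double[] score = new double[n];
        for (int j = 0; j < m; j++) {
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
            for (int i = 0; i < n; i++) {
                min = Math.min(min, qos[i][j]);
                max = Math.max(max, qos[i][j]);
            }
            for (int i = 0; i < n; i++) {
                // Min-max normalize each attribute, flipping "lower is better" ones.
                double norm = (max == min) ? 1.0
                        : higherIsBetter[j] ? (qos[i][j] - min) / (max - min)
                                            : (max - qos[i][j]) / (max - min);
                score[i] += weights[j] * norm;                 // simple additive weighting
            }
        }

        // Rank services by descending aggregate QoS score.
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(score[b], score[a]));
        for (int idx : order) System.out.printf("%s  score=%.3f%n", services[idx], score[idx]);
    }
}
```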
Description
The electrode-electrolyte interface in electrochemical environments involves complex processes relevant to all electrochemical applications. Some of these processes include electronic structure, charge storage, charge transfer, solvent dynamics and structure, and surface adsorption. Engineering electrochemical systems, whatever their function, requires fundamental intuition about all the processes at the interface. The following work presents different systems in which the electrode-electrolyte interface is highly important. The first is a charge storage electrode that leverages percolation theory to develop an electrode architecture producing high capacities. This is followed by Zn deposition in an ionic liquid, in which the deposition morphology is highly dependent on the charge transfer and surface adsorption at the interface. Electrode Architecture: A three-dimensional manganese oxide supercapacitor electrode architecture is synthesized by leveraging percolation theory to develop a hierarchically designed tri-continuous percolated network. The three percolated phases include a faradaically active material, an electrically conductive material, and pore-former-templated void space. The micropores create pathways for ionic conductivity, while the nanoscale electrically conducting phase provides both bulk conductivity and local electron transfer with the electrochemically active phase. Zn Electrodeposition: Zn redox in air- and water-stable N-ethyl-N-methylmorpholinium bis(trifluoromethanesulfonyl)imide, [C2nmm][NTf2], is presented. Under various conditions, the overpotential, the kinetics and diffusion of Zn species, and the morphological evolution as a function of overpotential and Zn concentration are characterized. The surface stress evolution during Zn deposition is examined, where grain size and texturing play significant roles in compressive stress generation. Morphological repeatability in these ionic liquids led to a novel study of purity, in which it is found that surface adsorption of residual amine and chloride from the organic synthesis affects growth characteristics. The drivers of this work are to understand the processes occurring at the electrode-electrolyte interface and, with that knowledge, to engineer systems yielding optimal performance. With this in mind, a bulk supercapacitor electrode architecture with excellent composite specific capacitance was designed, and conditions producing ideal Zn deposition morphologies were developed.
Contributors: Engstrom, Erika (Author) / Friesen, Cody (Thesis advisor) / Buttry, Daniel (Committee member) / Sieradzki, Karl (Committee member) / Arizona State University (Publisher)
Created: 2011
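For context on the percolation-theory design argument in the entry above, the standard scaling law for transport in a random composite near its percolation threshold is given below. The threshold values and exponents appropriate to the dissertation's tri-continuous architecture are not stated in the abstract, so this is only the generic relation.

```latex
% Effective conductivity of a composite whose conducting phase occupies volume
% fraction p: transport vanishes below the percolation threshold p_c and rises
% as a power law above it (t is a geometry-dependent critical exponent,
% roughly 2 for three-dimensional random composites).
\sigma_{\mathrm{eff}}(p) \;\propto\; (p - p_{c})^{\,t}, \qquad p > p_{c}
```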