Matching Items (1,305)

Description
Emergent environmental issues, ever-shrinking petroleum reserves, and rising fossil fuel costs continue to spur interest in the development of sustainable biofuels from renewable feedstocks. Meanwhile, however, the development and viability of biofuel fermentations remain limited by numerous factors such as feedback inhibition and inefficient, generally energy-intensive product recovery processes. To circumvent both feedback inhibition and recovery issues, researchers have turned their attention to incorporating energy-efficient separation techniques such as adsorption into in situ product recovery (ISPR) approaches. This thesis focused on the characterization of two novel adsorbents for the recovery of alcohol biofuels from model aqueous solutions. First, a hydrophobic silica aerogel was evaluated as a biofuel adsorbent through characterization of its equilibrium behavior for conventional second-generation biofuels (e.g., ethanol and n-butanol). Longer-chain and accordingly more hydrophobic alcohols (i.e., n-butanol and 2-pentanol) were more effectively adsorbed than shorter-chain alcohols (i.e., ethanol and i-propanol), suggesting a mechanism of hydrophobic adsorption. Still, the adsorbed alcohol capacity at biologically relevant conditions was low relative to other 'model' biofuel adsorbents as a result of poor interfacial contact between the aqueous phase and the sorbent. However, sorbent wettability and adsorption are greatly enhanced at high alcohol concentrations in the aqueous phase. Consequently, the sorbent exhibits Type IV adsorption isotherms for all biofuels studied, resulting from significant multilayer adsorption at elevated alcohol concentrations. Additionally, sorbent wettability significantly affects the dynamic binding efficiency within a packed adsorption column. Second, mesoporous carbons were evaluated as biofuel adsorbents through characterization of equilibrium and kinetic behavior. Variations in synthetic conditions enabled tuning of the specific surface area and pore morphology of the adsorbents. The adsorbed alcohol capacity increased with the specific surface area of the adsorbents. While their adsorption capacity is comparable to that of polymeric adsorbents of similar surface area, the pore morphology and structure of mesoporous carbons greatly influenced adsorption rates. Multiple cycles of adsorbent regeneration had no impact on adsorption equilibrium or kinetics. The high chemical and thermal stability of mesoporous carbons provides potentially significant advantages over other commonly examined biofuel adsorbents. Correspondingly, mesoporous carbons should be further studied for biofuel ISPR applications.
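For readers unfamiliar with how such equilibrium adsorption behavior is typically quantified, the sketch below fits a simple single-site Langmuir isotherm to hypothetical alcohol-uptake data with SciPy. The concentrations and capacities are illustrative placeholders, not measurements from this thesis, and the aerogel data described above were better characterized by Type IV (multilayer) behavior, for which a single-site model is only a baseline.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Single-site Langmuir isotherm: adsorbed capacity vs. aqueous concentration."""
    return q_max * K * C / (1.0 + K * C)

# Hypothetical equilibrium data (illustrative values only)
C_aq = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])      # aqueous n-butanol, g/L
q_ads = np.array([0.02, 0.08, 0.13, 0.19, 0.24, 0.27])   # uptake, g alcohol / g sorbent

(q_max, K), _ = curve_fit(langmuir, C_aq, q_ads, p0=[0.3, 0.05])
print(f"fitted q_max = {q_max:.3f} g/g, K = {K:.3f} L/g")
```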
Contributors: Levario, Thomas (Author) / Nielsen, David R (Thesis advisor) / Vogt, Bryan D (Committee member) / Lind, Mary L (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Java is currently making its way into embedded systems and mobile devices such as Android phones. Programs written in Java are compiled into machine-independent bytecode packaged in class files, and a Java Virtual Machine (JVM) executes these classes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems, as it provides a mechanism to interact with libraries specific to the platform. This thesis addresses the overhead incurred in JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access objects directly through their references by pinning their memory locations. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL and XQuery) is overwhelmingly burdensome for most users, as not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides an opportunity to conveniently access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguities, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges due to the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions toward building a full-fledged search engine for structured data.
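To make the notion of ranking keyword-query answers concrete, here is a toy sketch of keyword-coverage scoring over structured records. The records, field names, and scoring function are invented for illustration; they are not the dissertation's algorithms, which operate on relational and XML data with far richer match-identification and result-composition semantics.

```python
from typing import Dict, List

# Invented structured records (placeholder fields and values)
records: List[Dict[str, str]] = [
    {"title": "XML keyword search", "author": "Liu", "venue": "SIGMOD"},
    {"title": "Query expansion for databases", "author": "Chen", "venue": "VLDB"},
]

def score(record: Dict[str, str], keywords: List[str]) -> int:
    """Count how many query keywords appear anywhere in the record's fields."""
    text = " ".join(record.values()).lower()
    return sum(1 for kw in keywords if kw.lower() in text)

query = ["keyword", "SIGMOD"]
ranked = sorted(records, key=lambda r: score(r, query), reverse=True)
for r in ranked:
    print(score(r, query), r["title"])
```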
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing the key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a set of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of which are superior to those of state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns, improving gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and decision-level camera fusion using the product rule has been found to be optimal for gesture recognition with multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained on a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
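As a minimal illustration of the product-rule, decision-level fusion mentioned above, the snippet below combines per-camera class posteriors multiplicatively and renormalizes. The probability values are made up and this is not the dissertation's implementation; it only shows the generic fusion rule.

```python
import numpy as np

# Per-camera posterior probabilities over 3 gesture classes (illustrative values)
posteriors = np.array([
    [0.70, 0.20, 0.10],   # camera 1
    [0.55, 0.30, 0.15],   # camera 2
    [0.40, 0.45, 0.15],   # camera 3
])

# Decision-level fusion by the product rule: multiply across cameras, then renormalize
fused = posteriors.prod(axis=0)
fused /= fused.sum()
print("fused posteriors:", fused, "-> predicted class", int(fused.argmax()))
```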
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Intimate coupling of TiO2 photocatalysis and biodegradation (ICPB) offers potential for degrading biorecalcitrant and toxic organic compounds much better than is possible with conventional wastewater treatments. This study reports on a novel sponge-type, TiO2-coated biofilm carrier that shows significant adherence of TiO2 to its exterior and the ability to accumulate biomass in its interior (protected from UV light and free radicals). First, this carrier was tested for ICPB in a continuous-flow photocatalytic circulating-bed biofilm reactor (PCBBR) to mineralize a biorecalcitrant organic compound, 2,4,5-trichlorophenol (TCP). Four mechanisms possibly acting in ICPB were tested separately: TCP adsorption, UV photolysis, photocatalysis, and biodegradation. The carrier exhibited strong TCP adsorption, while photolysis was negligible. Photocatalysis produced TCP-degradation products that could be mineralized, and the strong adsorption of TCP to the carrier enhanced biodegradation by relieving toxicity. Validating the ICPB concept, biofilm inside the carriers was protected from UV light and free radicals. ICPB significantly lowered the diversity of the bacterial community, but five genera known to biodegrade chlorinated phenols were markedly enriched. Second, decolorization and mineralization of reactive dyes by ICPB were investigated with a refined TiO2-coated biofilm carrier in a PCBBR. Two typical reactive dyes, Reactive Black 5 (RB5) and Reactive Yellow 86 (RY86), showed similar first-order kinetics when photocatalytically decolorized at low pH (~4-5); decolorization was inhibited at neutral pH in the presence of phosphate or carbonate buffer, presumably due to electrostatic repulsion from negatively charged surface sites on TiO2, radical scavenging by phosphate or carbonate, or both. In the PCBBR, photocatalysis alone with TiO2-coated carriers removed RB5 and COD by 97% and 47%, respectively. Adding biofilm inside the macroporous carriers maintained a similar RB5 removal efficiency, but COD removal increased to 65%, which is evidence of ICPB despite the low pH. A proposed ICPB pathway for RB5 suggests that a major intermediate, a naphthol derivative, was responsible for most of the residual COD. Finally, three low-temperature sintering methods, called O, D, and DN, were compared based on photocatalytic efficiency and TiO2 adherence. The DN method gave the best TiO2-coating properties and yielded a successful carrier for ICPB of RB5 in a PCBBR.
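For readers who want to see what fitting the first-order decolorization kinetics described above looks like in practice, here is a minimal SciPy sketch. The time points and dye concentrations are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, C0, k):
    """First-order decay of dye concentration under photocatalysis."""
    return C0 * np.exp(-k * t)

# Hypothetical decolorization data: time (min) vs. RB5 concentration (mg/L)
t = np.array([0, 10, 20, 40, 60, 90])
C = np.array([50.0, 36.0, 26.5, 14.0, 7.5, 3.0])

(C0, k), _ = curve_fit(first_order, t, C, p0=[50.0, 0.05])
print(f"fitted C0 = {C0:.1f} mg/L, k = {k:.3f} 1/min")
```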
Contributors: Li, Guozheng (Author) / Rittmann, Bruce E. (Thesis advisor) / Halden, Rolf (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
This thesis describes a synthetic task environment, CyberCog, created for the purposes of (1) understanding and measuring individual and team situation awareness in the context of a cyber security defense task and (2) providing a context for evaluating algorithms, visualizations, and other interventions intended to improve cyber situation awareness. CyberCog provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of a cyber security defense analyst in response to a cyber-attack scenario. CyberCog generates the performance measures and interaction logs needed for measuring individual and team cyber situation awareness. Moreover, the CyberCog environment provides good experimental control for conducting effective situation awareness studies while retaining realism in the scenario and in the tasks performed.
Contributors: Rajivan, Prashanth (Author) / Femiani, John (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Lindquist, Timothy (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines several applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of an image classification system through unique affine sparse codes is presented, leading to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach toward obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
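As a hedged, self-contained illustration of sparse representation over a dictionary (not the thesis's trained dictionaries or its super-resolution and classification algorithms), the snippet below recovers a sparse code with orthogonal matching pursuit from scikit-learn. The random dictionary and synthetic 5-sparse signal are placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))        # random (untrained) dictionary: 256 atoms in R^64
D /= np.linalg.norm(D, axis=0)            # normalize atoms to unit norm

x_true = np.zeros(256)
x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
y = D @ x_true                            # observed signal that is 5-sparse in D

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D, y)
print("recovered support:", np.nonzero(omp.coef_)[0])
```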
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
The goal of the study was twofold: (i) to investigate the synthesis of hematite-impregnated granular activated carbon (Fe-GAC) by hydrolysis of Fe(III), and (ii) to assess the effectiveness of the fabricated media in removing arsenic from water. Fe-GAC was synthesized by hydrolysis of Fe(III) salts at two initial Fe(III) dosages (0.5 M and 2 M) and two hydrolysis periods (24 hrs and 72 hrs). The iron content of the fabricated Fe-GAC media ranged from 0.9% to 4.4% Fe per gram of dry media. Pseudo-equilibrium batch test data at pH = 7.7 ± 0.2, in 1 mM NaHCO3-buffered ultrapure water and in a challenge groundwater representative of the Arizona-Mexico border region, were fitted to a Freundlich isotherm model. The findings suggest that the arsenic adsorption capacity of the metal (hydr)oxide modified GAC media is primarily controlled by the surface area of the media, while the metal content had a lesser effect. The adsorption capacity in the model Mexican groundwater matrix was significantly lower for all adsorbent media. Continuous-flow short bed adsorber (SBA) tests demonstrated that the adsorption capacity for arsenic in the challenge groundwater was reduced by a factor of 3 to 4 as a result of mass transport effects. When compared on a metal basis, the iron (hydr)oxide modified media performed comparably to existing commercial media for arsenic treatment. On a dry mass basis, the media fabricated in this study removed less arsenic than their commercial counterparts because the metal content of the commercial media was significantly higher.
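For reference, the Freundlich isotherm named above has the standard form

    q_e = K_F \, C_e^{1/n}

where q_e is the adsorbed arsenic per unit mass of media, C_e is the equilibrium arsenic concentration in solution, and K_F and 1/n are the fitted capacity and intensity parameters; no fitted parameter values from the thesis are reproduced here.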
Contributors: Jain, Arti (Author) / Hristovski, Kiril (Thesis advisor) / Olson, Larry (Committee member) / Madar, David (Committee member) / Edwards, David (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis for determining the significance of other candidate genes, which are then ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
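To make the idea of association-based prioritization concrete, here is a toy sketch on a small gene network. The graph, the placeholder gene names (GENE_A through GENE_E), the edge weights, and the direct-neighbor scoring rule are all invented for illustration; they are not the integrated-network method developed in this dissertation.

```python
import networkx as nx

# Toy gene-association network (edge weights = assumed association reliability)
G = nx.Graph()
G.add_weighted_edges_from([
    ("GENE_A", "GENE_B", 0.9), ("GENE_B", "GENE_C", 0.6),
    ("GENE_C", "GENE_D", 0.8), ("GENE_A", "GENE_E", 0.4),
])

known = {"GENE_A", "GENE_C"}          # seed genes known to relate to the disease
candidates = set(G.nodes) - known

def score(gene: str) -> float:
    """Sum of direct-edge weights between a candidate and the known disease genes."""
    return sum(G[gene][k]["weight"] for k in known if G.has_edge(gene, k))

for g in sorted(candidates, key=score, reverse=True):
    print(g, round(score(g), 2))
```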
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Proton exchange membrane fuel cells (PEMFCs) run on pure hydrogen and oxygen (or air), producing electricity, water, and some heat, which makes them an attractive option for clean power generation. PEMFCs also operate at low temperature, which makes them quick to start up and easy to handle. However, PEMFCs have several important limitations that must be overcome before commercial viability can be achieved. Active areas of research into making them commercially viable include reducing the cost, size, and weight of fuel cells while also increasing their durability and performance. A growing and important part of this research involves the computer modeling of fuel cells. High-quality computer modeling and simulation of fuel cells can help speed up the discovery of optimized fuel cell components and can also improve fundamental understanding of the mechanisms and reactions that take place within the fuel cell. The work presented in this thesis describes a procedure for creating high-quality fuel cell simulations using Ansys Fluent 12.1. Methods for creating computer-aided design (CAD) models of fuel cells are discussed. Detailed simulation parameters are described, and emphasis is placed on establishing convergence criteria, which are essential for producing consistent results. A mesh sensitivity study of the catalyst and membrane layers is presented, showing the importance of adhering to strictly defined convergence criteria. A study of the simulation's iteration sensitivity at low and high current densities is also performed, demonstrating the variation in convergence rate and the absolute difference between solution values obtained after small and large numbers of iterations.
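As a generic illustration of the kind of convergence criterion discussed above (not tied to the thesis's Fluent setup, residual names, or tolerances, all of which are assumed here), the snippet below declares a run converged only when the scaled residuals drop below a tolerance and a monitored quantity stops changing over a trailing window.

```python
# Illustrative convergence check: residuals below tolerance AND a monitored
# quantity (e.g., average current density) stable over the last `window` iterations.
def converged(residuals, monitor, res_tol=1e-6, monitor_tol=1e-4, window=50):
    if len(monitor) < window or max(residuals[-1].values()) > res_tol:
        return False
    recent = monitor[-window:]
    rel_change = abs(recent[-1] - recent[0]) / max(abs(recent[-1]), 1e-30)
    return rel_change < monitor_tol

# Hypothetical iteration history (made-up decay of residuals and monitor value)
residual_history = [{"continuity": 1e-2 * 0.97 ** i, "energy": 1e-3 * 0.97 ** i} for i in range(500)]
current_density = [0.6 + 0.05 * 0.97 ** i for i in range(500)]   # A/cm^2, illustrative
print(converged(residual_history, current_density))
```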
Contributors: Arvay, Adam (Author) / Madakannan, Arunachalanadar (Thesis advisor) / Peng, Xihong (Committee member) / Liang, Yong (Committee member) / Subach, James (Committee member) / Arizona State University (Publisher)
Created: 2011