This collection includes both ASU Theses and Dissertations, submitted by graduate students, and theses from Barrett, The Honors College, submitted by undergraduate students.

Description

Emission of CO2 into the atmosphere has become an increasingly concerning issue as we progress into the 21st century. Flue gas from coal-burning power plants accounts for 40% of all carbon dioxide emissions. The key to successful separation and sequestration is to separate CO2 directly from flue gas (10-15% CO2, 70% N2), whose temperature can range from a few hundred to as high as 1000°C. Conventional microporous membranes (carbons/silicas/zeolites) are capable of separating CO2 from N2 at low temperatures, but cannot achieve separation above 200°C. To overcome the limitations of microporous membranes, a novel ceramic-carbonate dual-phase membrane for high temperature CO2 separation was proposed. The membrane was synthesized from porous La0.6Sr0.4Co0.8Fe0.2O3-δ (LSCF) supports and infiltrated with molten carbonate (Li2CO3/Na2CO3/K2CO3). The CO2 permeation mechanism involves a reaction between CO2 in the gas phase and oxide ions (O2-) in the solid phase to form carbonate ions (CO32-), which are then transported through the molten carbonate (liquid phase) to achieve separation. The effects of membrane thickness, temperature and CO2 partial pressure were studied. Decreasing the thickness from 3.0 to 0.375 mm increased the flux at 900°C from 0.186 to 0.322 mL·min-1·cm-2. CO2 flux increased with temperature from 700 to 900°C. The activation energy for permeation was similar to that for oxygen ion conduction in LSCF. For partial pressures above 0.05 atm, the membrane exhibited a nearly constant flux. From these observations, it was determined that oxygen ion conductivity limits CO2 permeation and that the equilibrium oxygen vacancy concentration in LSCF depends on the partial pressure of CO2 in the gas phase. Finally, the dual-phase membrane was used as a membrane reactor. Separation at high temperatures can produce warm, highly concentrated streams of CO2 that could be used as a chemical feedstock for the synthesis of syngas (H2 + CO). Towards this, three different membrane reactor configurations were examined: 1) a blank system, 2) an LSCF catalyst and 3) a 10% Ni/γ-alumina catalyst. Performance increased in the order blank system < LSCF catalyst < Ni/γ-alumina catalyst. Favorable conditions for syngas production were high temperature (850°C), low sweep gas flow rate (10 mL·min-1) and high methane concentration (50%) using the Ni/γ-alumina catalyst.
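The permeation mechanism summarized above can be restated as a pair of surface reactions; the notation below is only an illustrative rewriting of the abstract's description, not equations taken from the dissertation:

% Feed side: gas-phase CO2 combines with an oxide ion supplied by the LSCF
% phase to form a carbonate ion that enters the molten carbonate.
\[ \mathrm{CO_2\,(gas)} + \mathrm{O^{2-}\,(LSCF)} \;\rightleftharpoons\; \mathrm{CO_3^{2-}\,(molten\ carbonate)} \]
% Permeate side: the reverse reaction releases CO2 and returns the oxide ion
% to the ceramic phase, so the net CO2 flux is coupled to oxygen-ion
% conduction through the LSCF support, consistent with the rate-limiting
% role of oxygen ion conductivity noted above.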
Contributors: Anderson, Matthew Brandon (Author) / Lin, Jerry (Thesis advisor) / Alford, Terry (Committee member) / Rege, Kaushal (Committee member) / Anderson, James (Committee member) / Rivera, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Java is currently making its way into embedded systems and mobile devices such as Android. Programs written in Java are compiled into machine-independent binary class files containing bytecode. A Java Virtual Machine (JVM) executes these classes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems as it provides a mechanism to interact with libraries specific to the platform. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access objects by reference by pinning their memory locations. The Android emulator was used to evaluate the performance of these techniques, and we observed a 5-10% performance gain with the new Java Native Interface.
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing various key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the resulting pose features extracted using the proposed methods exhibit excellent invariance properties to changes in view angles and body shapes. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these proposed algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled. A measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
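The decision-level product-rule camera fusion mentioned above can be illustrated with a short sketch. This is a minimal example under assumed inputs (per-camera class posteriors available as arrays), not code from the dissertation:

import numpy as np

def product_rule_fusion(per_camera_posteriors):
    """Combine per-camera class posteriors with the product rule.

    per_camera_posteriors: array of shape (n_cameras, n_classes), where each
    row holds one camera's posterior probabilities over the gesture classes.
    Returns the index of the gesture class selected after fusion.
    """
    posteriors = np.asarray(per_camera_posteriors, dtype=float)
    # Product rule: multiply posteriors across cameras (done in log space
    # for numerical stability), then pick the class with the largest score.
    log_scores = np.sum(np.log(posteriors + 1e-12), axis=0)
    return int(np.argmax(log_scores))

# Hypothetical example: three uncalibrated cameras, four gesture classes.
cams = [[0.70, 0.10, 0.10, 0.10],
        [0.40, 0.30, 0.20, 0.10],
        [0.25, 0.25, 0.25, 0.25]]
print(product_rule_fusion(cams))  # -> 0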
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions, drawn from different modalities, that defined the categories. The concept of a "crucial dimension" was introduced and the role it plays in category definition was explained. It was hypothesized that the effects of partial experience are explained not by a shifting of attention between dimensions (Taylor & Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Intuitive decision making refers to decision making based on situational pattern recognition, which happens without deliberation. It is a fast and effortless process that occurs without complete awareness. Moreover, it is believed that implicit learning is one means by which a foundation for intuitive decision making is developed. Accordingly, the present study investigated several factors that affect implicit learning and the development of intuitive decision making in a simulated real-world environment: (1) simple versus complex situational patterns; (2) the diversity of the patterns to which an individual is exposed; and (3) the underlying mechanisms. The results showed that simple patterns led to higher levels of implicit learning and intuitive decision-making accuracy than complex patterns; increased diversity enhanced implicit learning and intuitive decision-making accuracy; and an embodied mechanism, labeling, contributed to the development of intuitive decision making in a simulated real-world environment. The results suggest that simulated real-world environments can provide the basis for training intuitive decision making, that diversity is influential in the process of training intuitive decision making, and that labeling contributes to the development of intuitive decision making. These results are interpreted in the context of applied situations such as military applications involving remotely piloted aircraft.
Contributors: Covas-Smith, Christine Marie (Author) / Cooke, Nancy J. (Thesis advisor) / Patterson, Robert (Committee member) / Glenberg, Arthur (Committee member) / Homa, Donald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, hardware complexity, etc. Real-time video streaming, gaming and social networking are a few such examples. Over the years many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. With the motivation of addressing some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even for very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed, namely, a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network coded sequence decoder.
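As a rough sketch of the two-phase physical-layer network coding exchange described above (the notation and the simple additive-noise channel model are illustrative assumptions, not taken from the dissertation):

% Phase 1 (multiple access): both nodes transmit simultaneously and the relay
% observes the superposition of the two signals plus noise.
\[ y_R = h_1 x_1 + h_2 x_2 + n_R \]
% The relay does not decode the two messages separately; it decodes the
% network-coded (XOR) message directly.
\[ \hat{m}_R = m_1 \oplus m_2 \]
% Phase 2 (broadcast): the relay broadcasts \hat{m}_R, and each node removes
% its own message to recover the other's, e.g.
\[ \hat{m}_2 = \hat{m}_R \oplus m_1 . \]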
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more important than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be overexpressed and perhaps a cause of cancer. These misclassifications are typically higher in the presence of outlier data points. The aim of this thesis is to develop a maximum margin classifier that is suited to address the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
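The exact RSVM loss is not reproduced here, but a commonly used robust alternative to the standard hinge loss, the capped (truncated) hinge, illustrates the underlying idea of bounding the influence of outliers; the cap value and the example margins below are illustrative choices, not the thesis's formulation:

import numpy as np

def hinge_loss(margins):
    """Standard SVM hinge loss: grows without bound for badly misclassified points."""
    return np.maximum(0.0, 1.0 - margins)

def capped_hinge_loss(margins, cap=2.0):
    """Truncated hinge loss: the penalty saturates at `cap`, so a single
    outlier with a very negative margin cannot dominate the objective."""
    return np.minimum(hinge_loss(margins), cap)

# Hypothetical margins y_i * f(x_i): the last point is a gross outlier.
margins = np.array([1.5, 0.3, -0.5, -8.0])
print(hinge_loss(margins))         # [0.  0.7 1.5 9. ]
print(capped_hinge_loss(margins))  # [0.  0.7 1.5 2. ]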
Contributors: Gupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A statement appearing in social media provides a very significant challenge for determining the provenance of the statement. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale to online social media, and consequently there is a gap in provenance methodologies and technologies, presenting exciting research opportunities. The guiding vision is the use of social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches for data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information that is not readily made available to the average social media user. This research introduces an approach and builds a foundation aimed at realizing a provenance data capability for social media users that is not accessible today.
Contributors: Barbier, Geoffrey P. (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Thin films of ever-reducing thickness are used in a plethora of applications, and their performance is highly dependent on their microstructure. Computer simulations could therefore play a vital role in predicting the microstructure of thin films as a function of processing conditions. FACET is one such software tool, designed by our research group to model polycrystalline thin film growth, including texture evolution and grain growth of polycrystalline films in 2D. Several modifications to the original FACET code were made to enhance its usability and accuracy. Simulations of sputtered silver thin films are presented here with FACET 2.0, with qualitative and semi-quantitative comparisons to previously published experimental results. Comparisons of grain size, texture and film thickness between simulations and experiments are presented, describing growth modes due to various deposition factors such as flux angle and substrate temperature. These simulations provide reasonable agreement with the experimental data over a diverse range of process parameters. Preliminary deposition experiments on silver films were also attempted with varying substrates and thicknesses in order to generate complementary experimental and simulation studies of microstructure evolution. Overall, based on the comparisons, FACET provides interesting insights into thin film growth processes and the effects of various deposition conditions on thin film structure and microstructure. Lastly, simple molecular dynamics simulations of deposition on bi-crystals were attempted to gain insight into texture-based grain competition during film growth. These simulations predict texture-based grain coarsening mechanisms, such as twinning and grain boundary migration, that have been commonly reported in FCC films.
Contributors: Rairkar, Asit (Author) / Adams, James B (Thesis advisor) / Krause, Stephen (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing the irrelevant, redundant, and noisy information while considering the correlation among different labels in multi-label learning. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis. The relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is also investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first approach is a direct least squares approach which allows the use of different regularization penalties, but is applicable under a certain assumption; the second one is a two-stage approach which can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed when the data comes sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully in the Drosophila gene expression pattern image annotation. The experimental results on some benchmark data sets in multi-label learning also demonstrate the effectiveness and efficiency of the proposed algorithms.
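As a rough sketch of the least squares view of CCA for multi-label data discussed above (the whitened-label target, the ridge regularizer, and all function names are illustrative assumptions of mine, not the released toolbox's implementation, and the equivalence holds only under the rank-type assumption noted in the abstract):

import numpy as np

def cca_via_least_squares(X, Y, reg=1e-6):
    """Sketch of a least-squares route to a CCA-style projection for
    multi-label data.

    X: (n_samples, n_features) data matrix (centered inside the function).
    Y: (n_samples, n_labels) binary label indicator matrix.
    Returns W with one projection direction per label dimension.
    """
    Xc = X - X.mean(axis=0)                    # center the data
    G = Y.T @ Y                                # (n_labels, n_labels) Gram matrix
    # Inverse square root of Y^T Y via its eigendecomposition, giving the
    # "whitened" label targets T = Y (Y^T Y)^(-1/2).
    evals, evecs = np.linalg.eigh(G)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    T = Y @ inv_sqrt
    # Ridge-regularized least squares: (Xc^T Xc + reg I) W = Xc^T T
    A = Xc.T @ Xc + reg * np.eye(Xc.shape[1])
    W = np.linalg.solve(A, Xc.T @ T)
    return W

# Hypothetical multi-label toy data: 6 samples, 4 features, 2 labels.
X = np.random.RandomState(0).randn(6, 4)
Y = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [1, 1]], dtype=float)
print(cca_via_least_squares(X, Y).shape)  # (4, 2)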
Contributors: Sun, Liang (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Liu, Huan (Committee member) / Mittelmann, Hans D. (Committee member) / Arizona State University (Publisher)
Created: 2011