Matching Items (129)

Description
Chemical and mineralogical data from Mars show that the surface has been chemically weathered on local to regional scales. Chemical trends, together with the types and abundances of chemical weathering products present on the surface, can elucidate past aqueous processes. Thermal-infrared (TIR) data and their respective models are essential for interpreting Martian mineralogy and geologic history. However, previous studies have shown that chemical weathering and the precipitation of fine-grained secondary silicates can adversely affect the accuracy of TIR spectral models. Furthermore, spectral libraries used to identify minerals on the Martian surface lack some important weathering products, including poorly-crystalline aluminosilicates like allophane, thus precluding their identification in TIR spectral models. It is essential to accurately interpret TIR spectral data from chemically weathered surfaces to understand the evolution of aqueous processes on Mars. Laboratory experiments were performed to improve interpretations of TIR data from weathered surfaces. To test the accuracy of deriving the chemistry of weathered rocks from TIR spectroscopy, chemistry was derived from TIR models of weathered basalts from Baynton, Australia, and compared to actual weathering rind chemistry. To determine how specific secondary silicates affect the TIR spectroscopy of weathered basalts, mixtures of basaltic minerals and small amounts of secondary silicates were modeled. Poorly-crystalline aluminosilicates were synthesized and their TIR spectra were added to spectral libraries. Regional Thermal Emission Spectrometer (TES) data were modeled using libraries containing these poorly-crystalline aluminosilicates to test for their presence on Mars. Chemistry derived from models of weathered Baynton basalts is not accurate, but broad chemical weathering trends can be interpreted from the data. 
TIR models of mineral mixtures show that small amounts of crystalline and amorphous silicate weathering products (2.5-5 wt.%) can be detected in TIR models and can adversely affect modeled plagioclase abundances. Poorly-crystalline aluminosilicates are identified in Northern Acidalia, Solis Planum, and Meridiani. Previous studies have suggested that acid sulfate weathering was the dominant surface alteration process for the past 3.5 billion years; however, the identification of allophane indicates that alteration at near-neutral pH occurred on regional scales and that acid sulfate weathering is not the only weathering process on Mars.
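The TIR spectral modeling described above treats a measured spectrum as a linear mixture of library endmember spectra and solves for mineral abundances. A minimal sketch of that idea, using non-negative least squares and entirely made-up toy spectra (the Gaussian "endmembers" below are illustrative stand-ins, not real library spectra):

```python
# Sketch of linear TIR spectral unmixing with non-negative least squares.
# Endmember spectra and the "measured" spectrum are synthetic toy data.
import numpy as np
from scipy.optimize import nnls

wavenumbers = np.linspace(400, 1400, 50)  # cm^-1, toy grid

def endmember(center, width):
    """Toy emissivity spectrum: a flat continuum with one absorption band."""
    return 1.0 - 0.3 * np.exp(-((wavenumbers - center) / width) ** 2)

# Hypothetical stand-ins for plagioclase, pyroxene, and allophane
library = np.column_stack([
    endmember(1100, 120),
    endmember(950, 100),
    endmember(700, 150),
])

# Simulate a measured spectrum: a 60/35/5 mixture plus small noise
true_abundances = np.array([0.60, 0.35, 0.05])
noise = np.random.default_rng(0).normal(0, 0.002, wavenumbers.size)
measured = library @ true_abundances + noise

# NNLS yields non-negative modeled abundances; normalize to sum to 1
abundances, _ = nnls(library, measured)
abundances /= abundances.sum()
print(dict(zip(["plagioclase", "pyroxene", "allophane"], abundances.round(3))))
```

If an important endmember (e.g., a poorly-crystalline aluminosilicate) is simply absent from `library`, the solver must redistribute its spectral contribution among the remaining phases, which is the library-incompleteness problem the abstract describes.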
Contributors: Rampe, Elizabeth Barger (Author) / Sharp, Thomas G (Thesis advisor) / Christensen, Phillip (Committee member) / Hervig, Richard (Committee member) / Shock, Everett (Committee member) / Williams, Lynda (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled. A measure to quantify the complementary strength across cameras has been proposed. 
Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
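The decision-level product-rule fusion mentioned above can be sketched in a few lines: each camera's classifier emits per-class posterior probabilities, and the fused decision multiplies them across cameras. The posteriors below are illustrative numbers, not values from the actual dataset:

```python
# Minimal sketch of decision-level camera fusion with the product rule.
# Rows are cameras; columns are posterior probabilities over 3 gesture classes.
import numpy as np

camera_posteriors = np.array([
    [0.60, 0.30, 0.10],   # camera 1
    [0.20, 0.50, 0.30],   # camera 2
    [0.55, 0.25, 0.20],   # camera 3
])

fused = camera_posteriors.prod(axis=0)   # product rule across cameras
fused /= fused.sum()                     # renormalize to a distribution
print("fused posteriors:", fused.round(3), "-> class", int(fused.argmax()))
```

Note how the product rule lets a confident camera veto a weak one: even though camera 2 favors class 1, the agreement of cameras 1 and 3 dominates the fused decision.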
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Applications of non-traditional stable isotope variations are moving beyond the geosciences to biomedicine, made possible by advances in multiple collector inductively coupled plasma mass spectrometry (MC-ICP-MS) technology. Mass-dependent isotope variation can provide information about the sources of elements and the chemical reactions that they undergo. Iron and calcium isotope systematics in biomedicine are relatively unexplored but are of great scientific interest because both elements are essential to metabolism. Iron, a crucial element in biology, fractionates during biochemically relevant reactions. To test the extent of this fractionation in an important reaction process, equilibrium iron isotope fractionation during organic ligand exchange was determined. The results show that iron fractionates during organic ligand exchange, and that isotope enrichment increases as a function of the difference in binding constants between ligands. Additionally, to create a mass balance model for iron in a whole organism, iron isotope compositions were measured in a whole mouse and in individual mouse organs. The results indicate that fractionation occurs during transfer between individual organs, and that the whole organism was isotopically light compared with its food. These two experiments advance our ability to interpret stable iron isotopes in biomedicine. Previous research demonstrated that calcium isotope variations in urine can be used as an indicator of changes in net bone mineral balance. In order to measure calcium isotopes by MC-ICP-MS, a chemical purification method was developed to quantitatively separate calcium from other elements in a biological matrix. Subsequently, this method was used to evaluate whether calcium isotopes respond when organisms are subjected to conditions known to induce bone loss: 1) Rhesus monkeys were given an estrogen-suppressing drug; 2) human patients underwent extended bed rest. 
In both studies, there were rapid, detectable changes in calcium isotope compositions from baseline, verifying that calcium isotopes can be used to rapidly detect changes in bone mineral balance. By characterizing iron isotope fractionation in biologically relevant processes and by demonstrating that calcium isotopes vary rapidly in response to bone loss, this thesis represents an important step toward utilizing these isotope systems as diagnostic and mechanistic tools to study the metabolism of these elements in vivo.
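Isotope variations like those above are conventionally reported in delta notation: the per-mil deviation of a sample's isotope ratio from a reference standard. A minimal sketch, with illustrative (not measured) ratios:

```python
# Sketch of standard delta notation for stable isotope ratios.
# All ratio values below are illustrative placeholders, not real data.
def delta_permil(r_sample, r_standard):
    """Delta value in per mil: 1000 * (R_sample / R_standard - 1)."""
    return (r_sample / r_standard - 1.0) * 1000.0

r_std = 0.021518               # hypothetical 44Ca/40Ca reference ratio
r_urine_baseline = 0.021515    # near the standard before bone loss
r_urine_bed_rest = 0.021490    # isotopically lighter during bone loss

print(delta_permil(r_urine_baseline, r_std))   # small negative delta
print(delta_permil(r_urine_bed_rest, r_std))   # more negative delta
```

A shift toward more negative delta values, as in the toy numbers above, is the kind of baseline-relative change the bed rest and drug studies tracked.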
Contributors: Morgan, Jennifer Lynn Louden (Author) / Anbar, Ariel D. (Thesis advisor) / Wasylenki, Laura E. (Committee member) / Jones, Anne K. (Committee member) / Shock, Everett (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Molybdenum (Mo) is a key trace nutrient for biological assimilation of nitrogen, either as nitrogen gas (N2) or nitrate (NO3-). Although Mo is the most abundant metal in seawater (105 nM), its concentration is low (<5 nM) in most freshwaters today, and it was scarce in the ocean before 600 million years ago. The use of Mo for nitrogen assimilation can be understood in terms of the changing Mo availability through time; for instance, the higher Mo content of eukaryotic vs. prokaryotic nitrate reductase may have stalled proliferation of eukaryotes in low-Mo Proterozoic oceans. Field and laboratory experiments were performed to study Mo requirements for NO3- assimilation and N2 fixation, respectively. Molybdenum-nitrate addition experiments at Castle Lake, California revealed interannual and depth variability in plankton community response, perhaps resulting from differences in species composition and/or ammonium availability. Furthermore, lake sediments were elevated in Mo compared to soils and bedrock in the watershed. Box modeling suggested that the largest source of Mo to the lake was particulate matter from the watershed. Month-long laboratory experiments with heterocystous cyanobacteria (HC) showed that <1 nM Mo led to low N2 fixation rates, while 10 nM Mo was sufficient for optimal rates. At 1500 nM Mo, freshwater HC hyperaccumulated Mo intercellularly, whereas coastal HC did not. These differences in storage capacity were likely due to the presence in freshwater HC of the small molybdate-binding protein, Mop, and its absence in coastal and marine cyanobacterial species. Expression of the mop gene was regulated by Mo availability in the freshwater HC species Nostoc sp. PCC 7120. Under low Mo (<1 nM) conditions, mop gene expression was up-regulated compared to higher Mo (150 and 3000 nM) treatments, but the subunit composition of the Mop protein changed, suggesting that Mop does not bind Mo in the same manner at <1 nM Mo that it can at higher Mo concentrations. 
These findings support a role for Mop as a Mo storage protein in HC and suggest that freshwater HC control Mo cellular homeostasis at the post-translational level. Mop's widespread distribution in prokaryotes lends support to the theory that it may be an ancient protein inherited from low-Mo Precambrian oceans.
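The box modeling mentioned above balances Mo input and output fluxes for the lake to identify the dominant source. A toy steady-state budget in that spirit, where every flux and the lake inventory are hypothetical placeholders rather than the measured Castle Lake values:

```python
# Toy box-model budget for lake molybdenum. All numbers are illustrative
# placeholders, not measured values from the study.
inputs = {                       # mol Mo per year, hypothetical
    "watershed_particulates": 8.0,
    "stream_dissolved": 2.5,
    "atmospheric_deposition": 0.5,
}
outputs = {                      # mol Mo per year, hypothetical
    "sediment_burial": 9.0,
    "outflow": 2.0,
}

total_in = sum(inputs.values())
total_out = sum(outputs.values())
largest_source = max(inputs, key=inputs.get)

lake_inventory = 50.0            # mol Mo in the water column, hypothetical
residence_time = lake_inventory / total_out   # years, at steady state

print(f"largest source: {largest_source}")
print(f"imbalance: {total_in - total_out:+.1f} mol/yr, residence ~{residence_time:.1f} yr")
```

With fluxes chosen this way, the watershed particulate term dominates the inputs, mirroring the study's conclusion that particulate matter from the watershed was the largest Mo source.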
Contributors: Glass, Jennifer (Author) / Anbar, Ariel D (Thesis advisor) / Shock, Everett L (Committee member) / Jones, Anne K (Committee member) / Hartnett, Hilairy E (Committee member) / Elser, James J (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, and hardware complexity. Real-time video streaming, gaming, and social networking are a few such examples. Over the years many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. Motivated by some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even for very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. 
For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed, namely, a near optimal decoding scheme implemented using list decoding, and a reduced complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network coded sequence decoder.
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Bridging the semantic gap is one of the fundamental problems in multimedia computing and pattern recognition. The challenge of associating low-level signals with their high-level semantic interpretations arises mainly because semantics are often conveyed implicitly in a context, relying on interactions among multiple levels of concepts or low-level data entities. Also, additional domain knowledge may often be indispensable for uncovering the underlying semantics, but in most cases such domain knowledge is not readily available from the acquired media streams. Thus, making use of various types of contextual information and leveraging the corresponding domain knowledge are vital for accurately associating high-level semantics with low-level signals in multimedia computing problems. In this work, novel computational methods are explored and developed for incorporating contextual information and domain knowledge, in different forms, into multimedia computing and pattern recognition problems. Specifically, a novel Bayesian approach with statistical-sampling-based inference is proposed for incorporating a special type of domain knowledge, a spatial prior on the underlying shapes; cross-modality correlations are explored via Kernel Canonical Correlation Analysis, and the learnt space is then used for associating multimedia content in different forms; and contextual information modeled as a graph is leveraged to regulate interactions between high-level semantic concepts (e.g., category labels) and the low-level input signal (e.g., spatial/temporal structure). Four real-world applications, including visual-to-tactile face conversion, photo tag recommendation, wild web video classification, and unconstrained consumer video summarization, are selected to demonstrate the effectiveness of the approaches. These applications range from classic research challenges to emerging tasks in multimedia computing. 
Results from experiments on large-scale real-world data with comparisons to other state-of-the-art methods and subjective evaluations with end users confirmed that the developed approaches exhibit salient advantages, suggesting that they are promising for leveraging contextual information/domain knowledge for a wide range of multimedia computing and pattern recognition problems.
Contributors: Wang, Zhesheng (Author) / Li, Baoxin (Thesis advisor) / Sundaram, Hari (Committee member) / Qian, Gang (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A new challenge on the horizon is to utilize the large amounts of protein found in the atmosphere to identify the organisms from which the protein originated. Included here is work investigating the presence of identifiable patterns of different proteins collected from the air and from biological samples for the purposes of remote identification. Protein patterns were generated using high performance liquid chromatography (HPLC); the patterns created could distinguish high-traffic from low-traffic indoor spaces. Samples were collected from the air using pumps to draw air through filter paper, trapping particulates that include large amounts of shed protein matter. In complementary research, aerosolized biological samples were collected from various ecosystems throughout Ecuador to explore the relationship between environmental setting and aerosolized protein concentration. To further enhance protein separation and produce more detailed patterns for the identification of individual organisms of interest, a novel separation device was constructed and characterized. The device incorporates a longitudinal gradient as well as insulating dielectrophoretic features within a single channel. This design produces strong local field gradients along a global gradient: particles enter the channel, are initially transported by electrophoresis and electroosmosis, and are isolated according to their characteristic physical properties, including charge, polarizability, deformability, surface charge mobility, dielectric features, and local capacitance. Thus, different types of particles are simultaneously separated at different points along the channel given small variations in their properties. The device has shown the ability to separate analytes over a large dynamic range of size, from 20 nm to 1 μm, roughly the size of proteins to the size of cells. 
In this study, different-sized sulfate-capped polystyrene particles were shown to be selectively captured and concentrated by factors of 10^3 to 10^6. Qualitative capture and manipulation of β-amyloid fibrils were also demonstrated. The results demonstrate the selective focusing ability of the technique, which may form the foundation for a versatile tool for separating complex mixtures. Combined, this work shows promise for future identification of individual organisms from aerosolized protein as well as for applications in biomedical research.
Contributors: Staton, Sarah J. R (Author) / Hayes, Mark A. (Committee member) / Anbar, Ariel D (Committee member) / Shock, Everett (Committee member) / Williams, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A statement appearing in social media poses a significant challenge: determining the statement's provenance. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale for online social media, and consequently there is a gap in provenance methodologies and technologies, presenting exciting research opportunities. The guiding vision is the use of social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information not readily available to the average social media user. This research introduces an approach and builds a foundation aimed at realizing a provenance data capability for social media users that is not accessible today.
Contributors: Barbier, Geoffrey P (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Historically, uranium has received intense study of its chemical and isotopic properties for use in the nuclear industry, but it has been largely ignored by geoscientists despite properties that make it an intriguing target for geochemists and cosmochemists alike. Uranium was long thought to have an invariant 238U/235U ratio in natural samples, making it uninteresting for isotopic work. However, recent advances in mass spectrometry have made it possible to detect slight differences in the 238U/235U ratio, creating many exciting new opportunities for U isotopic research. Using uranium ore samples from diverse depositional settings around the world, it is shown that the low-temperature redox transition of uranium (U6+ to U4+) causes measurable fractionation of the 238U/235U ratio. Moreover, it is shown experimentally that a coordination change of U can also cause measurable fractionation in the 238U/235U ratio. This improved understanding of the fractionation mechanisms of U allows the 238U/235U ratio to be used as a paleoredox proxy. The 238U/235U ratios of carbonates deposited spanning the end-Permian extinction horizon provide evidence of pronounced and persistent widespread ocean anoxia at, or immediately preceding, the extinction boundary. Variable 238U/235U ratios correlated with proxies for initial Cm/U in the Solar System's earliest objects demonstrate the existence of 247Cm in the early Solar System. Proof of variable 238U/235U ratios in meteoritic material forces a substantive change in the previously established procedures of Pb-Pb dating, which assumed an invariant 238U/235U ratio. This advancement improves the accuracy not only of the Pb-Pb chronometer, which directly utilizes the 238U/235U ratio, but also of short-lived radiometric dating techniques that indirectly use the 238U/235U ratio to calculate ages of Solar System material.
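The Pb-Pb chronometer's dependence on 238U/235U, which the abstract above highlights, can be sketched numerically: the radiogenic 207Pb*/206Pb* ratio at age t is (235U/238U) · (e^(λ235·t) − 1)/(e^(λ238·t) − 1), so a revised 238U/235U value shifts the computed age. The decay constants below are the standard values; the Pb ratio and the "revised" U ratio are illustrative:

```python
# Sketch of how the assumed 238U/235U ratio feeds into a Pb-Pb age.
# Decay constants are standard; the Pb isotope ratio is illustrative.
from math import exp
from scipy.optimize import brentq

LAMBDA_238 = 1.55125e-10   # 238U decay constant, per year
LAMBDA_235 = 9.8485e-10    # 235U decay constant, per year

def pb_ratio(t, u238_u235):
    """Radiogenic 207Pb*/206Pb* ratio at age t (years)."""
    return (1.0 / u238_u235) * (exp(LAMBDA_235 * t) - 1.0) / (exp(LAMBDA_238 * t) - 1.0)

def pb_pb_age(ratio, u238_u235):
    """Invert pb_ratio for age by root finding."""
    return brentq(lambda t: pb_ratio(t, u238_u235) - ratio, 1e6, 5e9)

measured = 0.625                         # illustrative 207Pb*/206Pb* value
age_old = pb_pb_age(measured, 137.88)    # historically assumed 238U/235U
age_new = pb_pb_age(measured, 137.79)    # a slightly lower, "measured" ratio
print(f"age shift from revised 238U/235U: {age_old - age_new:.2e} yr")
```

Even a shift in the fourth significant figure of 238U/235U moves the computed age by on the order of a million years, which is why sample-specific U isotope measurements matter for high-precision chronology.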
Contributors: Brennecka, Gregory A (Author) / Anbar, Ariel D (Thesis advisor) / Wadhwa, Meenakshi (Thesis advisor) / Herrmann, Achim D (Committee member) / Hervig, Richard (Committee member) / Young, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing irrelevant, redundant, and noisy information while considering the correlation among different labels. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis, and the relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is also investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first is a direct least squares approach that allows the use of different regularization penalties but is applicable only under a certain assumption; the second is a two-stage approach that can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed for data that arrive sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation, and experimental results on benchmark multi-label learning data sets demonstrate the effectiveness and efficiency of the proposed algorithms.
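Classical CCA, central to the abstract above, finds projections of two views whose correlations are maximal; one standard computation takes the SVD of the whitened cross-covariance matrix, whose singular values are the canonical correlations. A minimal sketch on toy random data (no regularization, unlike the thesis's setting):

```python
# Minimal sketch of classical Canonical Correlation Analysis via SVD of
# the whitened cross-covariance. Toy synthetic data with a shared
# 2-dimensional latent signal; not the thesis's regularized variants.
import numpy as np

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 2))                       # shared signal
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(n, 5))
Y = latent @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(n, 4))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)                  # center both views
Cxx = Xc.T @ Xc / n
Cyy = Yc.T @ Yc / n
Cxy = Xc.T @ Yc / n

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Singular values of the whitened cross-covariance = canonical correlations
U, corrs, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
print("canonical correlations:", corrs.round(3))
```

Because the two views share a strong 2-dimensional latent signal, the top two canonical correlations come out near 1 while the rest reflect noise; the explicit inversion of Cxx and Cyy is also exactly where regularization enters when these matrices are ill-conditioned, motivating the regularized formulations the thesis studies.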
Contributors: Sun, Liang (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Liu, Huan (Committee member) / Mittelmann, Hans D. (Committee member) / Arizona State University (Publisher)
Created: 2011