Matching Items (2,420)

Description

Metal hydride materials have been intensively studied for hydrogen storage applications. In addition to potential hydrogen economy applications, metal hydrides offer a wide variety of other interesting properties. For example, hydrogen-dominant materials, which are hydrides with the highest hydrogen content for a particular metal/semimetal composition, are predicted to display high-temperature superconductivity. On the other side of the spectrum are hydrides with small amounts of hydrogen (0.1-1 at.%) that are investigated as viable magnetic, thermoelectric or semiconducting materials. Research on metal hydride materials is generally important for gaining a fundamental understanding of metal-hydrogen interactions. Hydrogenation of Zintl phases, which are defined as compounds between an active metal (alkali, alkaline earth, rare earth) and a p-block metal/semimetal, was attempted by a hot sintering method utilizing an autoclave loaded with gaseous hydrogen (< 9 MPa). Hydride formation competes with oxidative decomposition of a Zintl phase. The oxidative decomposition, which leads to a mixture of a binary active metal hydride and the p-block element, was observed for the investigated aluminum (Al) and gallium (Ga) containing Zintl phases. However, a new phase, Li2Al, was discovered during the synthesis of Zintl phase precursors. Using single crystal X-ray diffraction (SCXRD), Li2Al was found to crystallize in an orthorhombic unit cell (Cmcm) with the lattice parameters a = 4.6404(8) Å, b = 9.719(2) Å, and c = 4.4764(8) Å.

Increased demand for materials with improved properties necessitates the exploration of alternative synthesis methods. Conventional metal hydride synthesis methods, such as ball milling and the autoclave technique, cannot keep pace with the demand for new materials. A viable alternative synthesis method is the application of high pressure for the preparation of hydrogen-dominant materials. Extreme pressures in the gigapascal range can open access to new metal hydrides with novel structures and properties because of the drastically increased chemical potential of hydrogen. Pressures up to 10 GPa can be easily achieved using multi-anvil (MA) hydrogenations while maintaining sufficient sample volume for structure and property characterization. Gigapascal MA hydrogenations using ammonia borane (BH3NH3) as an internal hydrogen source were employed in the search for new hydrogen-dominant materials. Ammonia borane has a high gravimetric hydrogen content, and its thermally activated decomposition at high pressures leads to complete hydrogen release at reasonably low temperatures. These properties make ammonia borane a desirable hydrogen source material. The missing member Li2PtH6 of the series of A2PtH6 compounds (A = Na to Cs) was accessed by employing the MA technique. Like its known heavier analogs, Li2PtH6 crystallizes in a cubic K2PtCl6-type structure with a cell edge length of 6.7681(3) Å. Further gigapascal hydrogenations afforded the compounds K2SiH6 and Rb2SiH6, which are isostructural to Li2PtH6. The cubic K2SiH6 and Rb2SiH6 are built from unique hypervalent SiH6^2- entities, with lattice parameters of 7.8425(9) Å and 8.1572(4) Å, respectively. Spectroscopic analysis of the hexasilicides confirmed the presence of hypervalent bonding. The Si-H stretching frequencies at 1550 cm-1 are considerably lower than the normal-valent (2c-2e) Si-H stretching frequencies in SiH4 at around 2200 cm-1. However, the observed stretching modes in the hypervalent hexasilicides are in reasonable agreement with Ph3SiH2- (1520 cm-1), where the hydrogen occupies the axial (3c-4e bonded) position in a trigonal bipyramidal environment.
ContributorsPuhakainen, Kati (Author) / Häussermann, Ulrich (Thesis advisor) / Seo, Dong (Thesis advisor) / Kouvetakis, John (Committee member) / Wolf, George (Committee member) / Arizona State University (Publisher)
Created2013
Description

At present, almost 70% of the electric energy in the United States is produced utilizing fossil fuels. Combustion of fossil fuels contributes CO2 to the atmosphere, potentially exacerbating global warming. To make the electric power system (EPS) more sustainable for the future, there has been an emphasis on scaling up generation of electric energy from wind and solar resources. These resources are renewable in nature and operate pollution-free. Various states in the US have set goals for the amount of electrical energy to be produced from renewable resources. The Southwestern region of the United States receives significant solar radiation throughout the year. High solar radiation makes concentrated solar power and solar PV the most suitable means of renewable energy production in this region. However, the majority of the projects presently being developed are either residential or utility-owned solar PV plants. This research explores the impact of significant PV penetration on the steady-state voltage profile of the electric power transmission system. This study also identifies the impact of PV penetration on the dynamic response of the transmission system, such as rotor angle stability, frequency response, and voltage response after a contingency. The light load case of spring 2010 and the peak load case of summer 2018 have been considered for analyzing the impact of PV. Where the impact is found to be detrimental to the normal operation of the EPS, mitigation measures are devised and presented in the thesis. Commercially available software tools/packages such as PSLF, PSS/E, and DSA Tools have been used to analyze the power network and validate the results.
ContributorsPrakash, Nitin (Author) / Heydt, Gerald T. (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created2013
Description

A proposed visible spectrum nanoscale imaging method requires a material with permittivity values much larger than those available in real-world materials in order to shrink the visible wavelength and attain the desired resolution. It has been proposed that the extraordinarily slow propagation experienced by light guided along plasmon resonant structures is a viable approach to obtaining these short wavelengths. To assess the feasibility of such a system, an effective medium model of a chain of noble metal plasmonic nanospheres is developed, leading to a straightforward calculation of the waveguiding properties. Evaluation of other models for such structures that have appeared in the literature, including a nearest-neighbor eigenvalue-problem approximation, a multi-neighbor approximation with retardation, and a method-of-moments solution for a finite chain, shows conflicting expectations of such a structure. In particular, recent publications suggest the possibility of regions of invalidity for eigenvalue problem solutions that are considered far below the onset of guidance, and for solutions that assume the loss is low enough to justify perturbation approximations. Even the published method-of-moments approach suffers from an unjustified assumption in the original interpretation, leading to overly optimistic estimations of the attenuation of the plasmon guided wave. In this work it is shown that the method-of-moments solution was dominated by the radiation from the source dipole, and not the waveguiding behavior claimed. If this dipolar radiation is removed, the remaining fields ought to contain the desired guided wave information. Using a Prony's-method-based algorithm, the dispersion properties of the chain of spheres are assessed at two frequencies and shown to be dramatically different from the optimistic expectations in much of the literature. A reliable alternative to these models is to replace the chain of spheres with an effective medium model, thus mapping the chain problem into the well-known problem of the dielectric rod. The solution of the Green's function problem for excitation of the symmetric longitudinal mode (TM01) is performed by numerical integration. Using this method, the frequency ranges over which the rod guides and the associated attenuation are clearly seen. The effective medium model readily allows for variation of the sphere size and separation, and can be taken to the limit where, instead of a chain of spheres, we have a solid noble metal rod. This latter case turns out to be optimal for minimizing the attenuation of the guided wave. Future work is proposed to simulate the chain of plasmonic nanospheres and the nanowire using finite-difference time-domain methods to verify the guided behavior observed in the Green's function method devised in this thesis and to simulate the proposed nanosensing devices.
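As context for the effective-medium step described above, the block below is only the textbook Maxwell-Garnett mixing rule for spherical inclusions in a host medium, not the chain-specific effective medium model developed in this thesis; the symbols (inclusion permittivity, host permittivity, fill fraction) are generic notation introduced here for illustration.

```latex
% Standard Maxwell-Garnett mixing rule for spherical inclusions of permittivity
% \varepsilon_i occupying volume fraction f in a host of permittivity \varepsilon_h.
% Shown only as the textbook starting point for an effective-medium description.
\begin{equation}
\varepsilon_{\mathrm{eff}} \;=\; \varepsilon_h\,
\frac{\varepsilon_i + 2\varepsilon_h + 2f\,(\varepsilon_i - \varepsilon_h)}
     {\varepsilon_i + 2\varepsilon_h - f\,(\varepsilon_i - \varepsilon_h)}
\end{equation}
```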
ContributorsHale, Paul (Author) / Diaz, Rodolfo E (Thesis advisor) / Goodnick, Stephen (Committee member) / Aberle, James T., 1961- (Committee member) / Palais, Joseph (Committee member) / Arizona State University (Publisher)
Created2013
Description

Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo, dividing them into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
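To make the group-structured sparsity idea concrete, here is a minimal sketch of the proximal (block soft-thresholding) step used by group-lasso-type solvers, assuming non-overlapping groups; the overlapping-group case treated in the thesis requires additional machinery, and the function and variable names below are illustrative rather than taken from the thesis.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the (non-overlapping) group-lasso penalty
    lam * sum_g ||w_g||_2, applied group by group."""
    w = w.copy()
    for g in groups:                      # each g is an array of feature indices
        norm_g = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - lam / norm_g) if norm_g > 0 else 0.0
        w[g] *= scale                     # shrink the whole group toward zero
    return w

# toy usage: two groups over a 6-dimensional weight vector
w = np.array([0.9, -0.1, 0.05, 1.5, -2.0, 0.3])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(group_soft_threshold(w, groups, lam=0.5))
```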
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
Description

Practical communication systems are subject to errors due to imperfect time alignment among the communicating nodes. Timing errors can occur in different forms depending on the underlying communication scenario. This doctoral study considers two different classes of asynchronous systems: point-to-point (P2P) communication systems with synchronization errors, and asynchronous cooperative systems. In particular, the focus is on an information theoretic analysis of P2P systems with synchronization errors and on developing new signaling solutions for several asynchronous cooperative communication systems. The first part of the dissertation presents several bounds on the capacity of P2P systems with synchronization errors. First, binary insertion and deletion channels are considered, where lower bounds on the mutual information between the input and output sequences are computed for independent uniformly distributed (i.u.d.) inputs. Then, a channel suffering from both synchronization errors and additive noise is considered as a serial concatenation of a synchronization error-only channel and an additive noise channel. It is proved that the capacity of the original channel is lower bounded in terms of the synchronization error-only channel capacity and the parameters of both channels. On a different front, to better characterize the deletion channel capacity, the capacities of three independent deletion channels with different deletion probabilities are related through an inequality, resulting in the tightest upper bound on the deletion channel capacity for deletion probabilities larger than 0.65. Furthermore, the first non-trivial upper bound on the 2K-ary input deletion channel capacity is provided by relating the 2K-ary input deletion channel capacity to the binary deletion channel capacity through an inequality. The second part of the dissertation develops two new relaying schemes to alleviate asynchronism issues in cooperative communications. The first is a single carrier (SC)-based scheme providing a spectrally efficient Alamouti code structure at the receiver under flat fading channel conditions, reducing the overhead needed to overcome the asynchronism and obtain spatial diversity. The second is an orthogonal frequency division multiplexing (OFDM)-based approach, useful for asynchronous cooperative systems experiencing excessive relative delays among the relays under frequency-selective channel conditions, which achieves a delay diversity structure at the receiver and extracts spatial diversity.
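For readers unfamiliar with the channel model, the following is a minimal sketch of a binary i.i.d. deletion channel, the object whose capacity the bounds above address; it illustrates only the channel model, not the bounding techniques of the dissertation, and all names and parameters are made up for the example.

```python
import numpy as np

def deletion_channel(bits, d, rng=None):
    """Pass a binary sequence through a deletion channel: each bit is deleted
    independently with probability d; surviving bits keep their order."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(len(bits)) >= d          # True where the bit survives
    return bits[keep]

# toy usage: i.u.d. input bits through a channel with deletion probability 0.2
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=20)
print(x)
print(deletion_channel(x, d=0.2, rng=rng))
```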
ContributorsRahmati, Mojtaba (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created2013
Description

Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely, directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. The simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
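As a concrete illustration of the backend stages mentioned above, here is a minimal sketch of envelope detection followed by log compression for a single RF scan line; it uses SciPy's FFT-based analytic signal rather than the FIR Hilbert filter studied in the thesis, and the signal parameters are invented for the example.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_log_compress(rf_line, dynamic_range_db=60.0):
    """B-mode style backend for one RF scan line: envelope detection via the
    analytic signal, then log compression to a fixed dynamic range."""
    envelope = np.abs(hilbert(rf_line))              # analytic-signal magnitude
    envelope /= envelope.max() + 1e-12               # normalize to [0, 1]
    log_img = 20.0 * np.log10(envelope + 1e-12)      # convert to dB
    return np.clip(log_img, -dynamic_range_db, 0.0)  # clip to dynamic range

# toy usage: a decaying 5 MHz burst sampled at 40 MHz
t = np.arange(0, 2e-6, 1 / 40e6)
rf = np.exp(-2e6 * t) * np.sin(2 * np.pi * 5e6 * t)
print(envelope_log_compress(rf)[:5])
```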
ContributorsWei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2013
Description

The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of humongous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing the human labeling effort required to induce classification models. To utilize the possible presence of multiple labeling agents, there have been attempts towards a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question, (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions, (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality, and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
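For orientation only, the snippet below sketches the simplest possible batch selection rule, fixed-size entropy-based uncertainty sampling; the dissertation's contributions (adaptive batch sizing, fuzzy labels, IQP relaxations, active matrix completion) go well beyond this baseline, and every name in the code is illustrative.

```python
import numpy as np

def select_batch_by_entropy(proba, batch_size):
    """Pick the batch_size unlabeled instances whose predicted class
    distributions have the highest entropy (i.e., are most uncertain)."""
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]           # indices into the pool

# toy usage: predicted class probabilities for 5 unlabeled instances
proba = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.70, 0.30],
                  [0.50, 0.50],
                  [0.99, 0.01]])
print(select_batch_by_entropy(proba, batch_size=2))    # the two most ambiguous rows
```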
ContributorsChakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description

The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose RAProp, a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
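As a rough illustration of propagating scores over an agreement graph, here is a generic iterative propagation sketch (a personalized-PageRank-style mixing of each tweet's base reputation with the scores of tweets it agrees with); it is not RAProp itself, and the weights, damping factor, and names are assumptions made for the example.

```python
import numpy as np

def propagate_reputation(agreement, base_score, alpha=0.85, iters=50):
    """Propagate per-tweet reputation scores over a content-agreement graph:
    repeatedly mix each tweet's own base score with a row-normalized weighted
    average of the scores of tweets that agree with it."""
    W = agreement / (agreement.sum(axis=1, keepdims=True) + 1e-12)
    s = base_score.copy()
    for _ in range(iters):
        s = alpha * (W @ s) + (1.0 - alpha) * base_score
    return s

# toy usage: 3 tweets, where tweets 0 and 1 strongly agree with each other
agreement = np.array([[0.0, 1.0, 0.1],
                      [1.0, 0.0, 0.1],
                      [0.1, 0.1, 0.0]])
base_score = np.array([0.6, 0.2, 0.9])
print(propagate_reputation(agreement, base_score))
```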
ContributorsRavikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
Description

Solar energy, including solar heating, solar architecture, solar thermal electricity and solar photovoltaics, is one of the primary energy sources replacing fossil fuels. As photovoltaics is one of the most important of these technologies, significant research has been conducted on improving solar cell efficiency. Simulation of various solar cell structures and materials provides a deeper understanding of device operation and of ways to improve efficiency. Over the last two decades, polycrystalline thin-film cadmium sulfide/cadmium telluride (CdS/CdTe) solar cells fabricated on glass substrates have been considered among the most promising candidates in photovoltaic technology, owing to their comparable efficiency and lower cost relative to traditional silicon-based solar cells. In this work, a fast one-dimensional time-dependent/steady-state drift-diffusion simulator for modeling solar cells, accelerated by an adaptive non-uniform mesh and automatic time-step control, has been developed and used to simulate a CdS/CdTe solar cell. These models are used to reproduce transients of carrier transport in response to step-function signals of different bias and varied light intensity. The time-step control models are also used to aid convergence in steady-state simulations where constrained material constants, such as carrier lifetimes on the order of nanoseconds and carrier mobilities on the order of 100 cm2/Vs, must be applied.
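For reference, the block below gives the standard one-dimensional drift-diffusion system (Poisson plus electron continuity with a drift-diffusion current) that such a simulator advances in time; the notation is generic textbook notation rather than the thesis's, and the analogous hole equations are omitted.

```latex
% Standard one-dimensional drift-diffusion system (electrons shown; holes analogous).
% A solver of this kind advances these equations with adaptive mesh and time-step control.
\begin{align}
\frac{\partial^2 \psi}{\partial x^2} &= -\frac{q}{\varepsilon}\left(p - n + N_D^{+} - N_A^{-}\right)
  && \text{(Poisson)}\\
\frac{\partial n}{\partial t} &= \frac{1}{q}\frac{\partial J_n}{\partial x} + G - R
  && \text{(electron continuity)}\\
J_n &= q\,\mu_n\, n\, E + q\, D_n \frac{\partial n}{\partial x}
  && \text{(drift + diffusion current)}
\end{align}
```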
ContributorsGuo, Da (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M (Committee member) / Sankin, Igor (Committee member) / Arizona State University (Publisher)
Created2013
Description

Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
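To give a flavor of the waveguide-style synthesis discussed above, the code below is a deliberately simplified stand-in: one comb-filter-like delay-line feedback loop per mode, with the outputs summed. It is not the banded waveguide implementation proposed in this work (which uses bandpass-filtered delay loops and carefully chosen parameters); all function names and parameter values are invented for the sketch.

```python
import numpy as np

def comb_resonator_bank(excitation, sample_rate, mode_freqs, feedback=0.995):
    """Sum of simple delay-line feedback loops, one per modal frequency.
    A crude stand-in for a banded-waveguide-style resonator."""
    out = np.zeros(len(excitation))
    for f in mode_freqs:
        delay = max(2, int(round(sample_rate / f)))   # loop length in samples
        buf = np.zeros(delay)                          # circular delay line
        band = np.zeros(len(excitation))
        for n in range(len(excitation)):
            y = buf[n % delay]                         # sample written `delay` steps ago
            band[n] = y
            buf[n % delay] = excitation[n] + feedback * y
        out += band
    return out / len(mode_freqs)

# toy usage: an impulse ("strike") exciting modes near 220, 440 and 660 Hz
fs = 22050
excitation = np.zeros(fs // 2)
excitation[0] = 1.0
signal = comb_resonator_bank(excitation, fs, [220.0, 440.0, 660.0])
```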
ContributorsFink, Alex M (Author) / Spanias, Andreas S (Thesis advisor) / Cook, Perry R. (Committee member) / Turaga, Pavan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2013