This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description

Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels significantly deteriorates due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) a proposal to use orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch compensates for the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility to exploit advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
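
As a rough sketch of the multiple-resampling idea summarized above (my own illustration, not code from the thesis; the Doppler factors and block length are hypothetical), each receiver branch resamples the received block by the inverse of one estimated Doppler scaling factor before OFDM demodulation:

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def resample_branch(rx, doppler_a):
    """Undo a Doppler scaling factor (1 + a) by resampling one received block.
    The rational approximation of 1/(1 + a) is needed by resample_poly."""
    ratio = Fraction(1.0 / (1.0 + doppler_a)).limit_denominator(100000)
    return resample_poly(rx, ratio.numerator, ratio.denominator)

def multiple_resampling_frontend(rx, doppler_factors):
    """One resampled copy of the block per estimated path/user Doppler factor;
    each branch would then be OFDM-demodulated and equalized separately."""
    return [resample_branch(rx, a) for a in doppler_factors]

# Toy usage: a random stand-in for a received baseband block, two users with
# different (hypothetical) Doppler scaling factors.
rng = np.random.default_rng(0)
rx = rng.normal(size=4096) + 1j * rng.normal(size=4096)
branches = multiple_resampling_frontend(rx, doppler_factors=[1e-4, -2e-4])
```

In the actual designs, each branch is followed by its own FFT and ICI-aware equalization; the sketch only shows the resampling front end.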
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and faulty-module information. Various topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies, evaluates the percentage gain in generated power that would be obtained by reconfiguring the array, and considers other factors to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator. The simulation results are validated with the experimental data provided by the PACECO Company.
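
A highly simplified sketch of the topology-comparison step (my own idealized illustration, not the thesis implementation: modules are treated as current sources proportional to irradiance at a fixed operating voltage) shows how an SP and a TCT connection can be ranked for a given shading pattern:

```python
import numpy as np
# Hypothetical, idealized illustration: module current ~ irradiance, fixed module voltage.

V_MODULE = 35.0   # assumed module operating voltage (V)
I_STC = 8.0       # assumed module current at full irradiance (A)

def sp_power(irr):
    """Series-parallel: each column is a series string; string current is limited
    by the most shaded module in that string; strings add in parallel."""
    string_current = I_STC * irr.min(axis=0)          # per-string limiting current
    return np.sum(string_current) * V_MODULE * irr.shape[0]

def tct_power(irr):
    """Total cross-tied: each row is a parallel tie whose currents add; the series
    connection of rows is limited by the weakest (lowest-current) row."""
    row_current = I_STC * irr.sum(axis=1)             # current available per row
    return row_current.min() * V_MODULE * irr.shape[0]

# Example shading pattern for a 3x3 array (1.0 = full sun, 0.3 = shaded module).
irradiance = np.array([[1.0, 1.0, 0.3],
                       [1.0, 1.0, 1.0],
                       [1.0, 0.3, 1.0]])
best = max([("SP", sp_power(irradiance)), ("TCT", tct_power(irradiance))],
           key=lambda t: t[1])
print("preferred topology under this shading pattern:", best)
```

The actual study uses SPICE-level module models and additional selection criteria, but the flow is the same: evaluate each candidate topology under the measured conditions and reconfigure to the one with the highest predicted output.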
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Radiation-induced gain degradation in bipolar devices is considered to be the primary threat to linear bipolar circuits operating in the space environment. The damage is primarily caused by charged particles trapped in the Earth's magnetosphere, the solar wind, and cosmic rays. This constant radiation exposure leads to early end-of-life expectancies for many electronic parts. Exposure to ionizing radiation increases the density of oxide and interfacial defects in bipolar oxides, leading to an increase in base current in bipolar junction transistors. Radiation-induced excess base current is the primary cause of current gain degradation. Analysis of the base current response can therefore enable the measurement of defects generated by radiation exposure. In addition to radiation, the space environment is also characterized by extreme temperature fluctuations. Temperature, like radiation, has a very strong impact on base current. Thus, a technique for separating the effects of radiation from thermal effects is necessary in order to accurately measure radiation-induced damage in space. This thesis focuses on the extraction of radiation damage in lateral PNP bipolar junction transistors operating in the space environment. It also describes the measurement techniques used and provides a quantitative analysis methodology for separating radiation and thermal effects on the bipolar base current.
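
As a minimal sketch of the thermal/radiation separation idea (my own illustration, assuming an ideal-diode temperature dependence with a recombination-type ideality factor and neglecting the temperature dependence of the saturation current), measured base currents can be referred to a common temperature before computing the radiation-induced excess:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def refer_to_temperature(i_b, v_be, t_meas, t_ref, n=2.0):
    """Scale a base current measured at t_meas (K) to t_ref (K) using an
    ideal-diode dependence I_B ~ exp(V_BE / (n*k*T)); the saturation-current
    temperature dependence is ignored in this simplified sketch."""
    return i_b * np.exp((v_be / (n * K_B)) * (1.0 / t_ref - 1.0 / t_meas))

def excess_base_current(i_b_post, t_post, i_b_pre, t_pre, v_be, t_ref=300.0):
    """Radiation-induced excess base current with both measurements referred to
    t_ref; this is the quantity tied to interface-trap buildup."""
    return (refer_to_temperature(i_b_post, v_be, t_post, t_ref)
            - refer_to_temperature(i_b_pre, v_be, t_pre, t_ref))

# Hypothetical numbers: pre-rad measured at 300 K, post-rad at 320 K, V_BE = 0.6 V.
print(excess_base_current(5e-9, 320.0, 1e-9, 300.0, v_be=0.6))
```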
Contributors: Campola, Michael J. (Author) / Barnaby, Hugh J. (Thesis advisor) / Holbert, Keith E. (Committee member) / Vasileska, Dragica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, hardware complexity, etc. Real-time video streaming, gaming, and social networking are a few such examples. Over the years many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. With the motivation of addressing some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even for very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed, namely, a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network coded sequence decoder.
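
To make the physical-layer network coding step concrete, here is a toy illustration (my own, not the dissertation's decoder): in the multiple-access phase the relay observes the noisy superposition of two BPSK symbols and computes a log-likelihood ratio for the XOR of the underlying bits, which is the quantity it needs to re-encode and broadcast.

```python
import numpy as np

def xor_llr(y, noise_var):
    """LLR of b1 XOR b2 from y = s1 + s2 + n, where s_i = 1 - 2*b_i (BPSK)
    and n ~ N(0, noise_var). Equal priors assumed (illustrative only)."""
    # XOR = 0  <=>  (b1, b2) in {(0,0), (1,1)}  <=>  s1 + s2 in {+2, -2}
    # XOR = 1  <=>  (b1, b2) in {(0,1), (1,0)}  <=>  s1 + s2 = 0
    p_xor0 = (np.exp(-(y - 2.0) ** 2 / (2 * noise_var))
              + np.exp(-(y + 2.0) ** 2 / (2 * noise_var)))
    p_xor1 = 2.0 * np.exp(-(y ** 2) / (2 * noise_var))
    return np.log(p_xor0) - np.log(p_xor1)

# Example: both nodes send bit 1 (s1 = s2 = -1), so the XOR is 0.
rng = np.random.default_rng(0)
y = -2.0 + rng.normal(scale=0.5)
print("LLR(XOR) =", xor_llr(y, noise_var=0.25))   # positive => XOR likely 0
```

In an actual system this soft information would feed a channel decoder for the network-coded sequence rather than being used symbol by symbol.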
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The increased use of commercial complementary metal-oxide-semiconductor (CMOS) technologies in harsh radiation environments has resulted in a new approach to radiation effects mitigation. This approach utilizes simulation to support the design of integrated circuits (ICs) to meet targeted tolerance specifications. Modeling the deleterious impact of ionizing radiation on ICs fabricated in advanced CMOS technologies requires understanding and analyzing the basic mechanisms that result in the buildup of radiation-induced defects in specific sensitive regions. Extensive experimental studies have demonstrated that the sensitive regions are shallow trench isolation (STI) oxides. Nevertheless, very little work has been done to model the physical mechanisms that result in the buildup of radiation-induced defects and the radiation response of devices fabricated in these technologies. A comprehensive study of the physical mechanisms contributing to the buildup of radiation-induced oxide trapped charges and the generation of interface traps in advanced CMOS devices is presented in this dissertation. The basic mechanisms contributing to the buildup of radiation-induced defects are explored using a physical model that utilizes kinetic equations that capture total ionizing dose (TID) and dose rate effects in silicon dioxide (SiO2). These mechanisms are formulated into analytical models that calculate oxide trapped charge density (Not) and interface trap density (Nit) in sensitive regions of deep-submicron devices. Experiments performed on field-oxide field-effect transistors (FOXFETs) and metal-oxide-semiconductor (MOS) capacitors permit investigating TID effects and provide a comparison for the radiation response of advanced CMOS devices. When used in conjunction with closed-form expressions for surface potential, the analytical models enable an accurate description of radiation-induced degradation of transistor electrical characteristics. In this dissertation, the incorporation of TID effects in advanced CMOS devices into surface-potential-based compact models is also presented. This incorporation is accomplished through modifications of the corresponding surface potential equations (SPE), allowing the inclusion of radiation-induced defects (i.e., Not and Nit) in the calculation of surface potential. Verification of the compact modeling approach is achieved via comparison with experimental data obtained from FOXFETs fabricated in a 90 nm low-standby-power commercial bulk CMOS technology and numerical simulations of fully-depleted (FD) silicon-on-insulator (SOI) n-channel transistors.
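
As a small worked example of how Not and Nit enter device-level calculations (my own first-order illustration, not the dissertation's compact model), the radiation-induced defect densities can be converted into voltage shifts through the oxide capacitance:

```python
# Illustrative only: first-order voltage shifts from radiation-induced defect densities.
Q_E = 1.602e-19        # elementary charge (C)
EPS_OX = 3.45e-13      # permittivity of SiO2 (F/cm)

def oxide_capacitance(t_ox_cm):
    """Oxide capacitance per unit area (F/cm^2)."""
    return EPS_OX / t_ox_cm

def delta_v_ot(n_ot, t_ox_cm):
    """Shift from oxide trapped charge density n_ot (cm^-2); net positive oxide
    charge shifts the threshold in the negative direction."""
    return -Q_E * n_ot / oxide_capacitance(t_ox_cm)

def delta_v_it(n_it, t_ox_cm):
    """First-order shift from interface-trap density n_it (cm^-2); the sign depends
    on channel type and surface potential, taken positive here for an n-channel device."""
    return Q_E * n_it / oxide_capacitance(t_ox_cm)

# Example: a thick STI-like oxide (300 nm) with 1e12 cm^-2 trapped charges.
t_ox = 300e-7  # cm
print(delta_v_ot(1e12, t_ox), delta_v_it(2e11, t_ox))
```

In the surface-potential-based compact models described above, these defect densities enter the surface potential equation directly rather than as simple threshold shifts.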
Contributors: Sanchez Esqueda, Ivan (Author) / Schroder, Dieter (Thesis advisor) / Barnaby, Hugh J. (Committee member) / Schroder, Dieter K. (Committee member) / Holbert, Keith E. (Committee member) / Gildenblat, Gennady (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Sparse learning is a technique in machine learning for feature selection and dimensionality reduction that finds a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating relevant information from the irrelevant information has been a topic of focus. In supervised learning such as regression, the data consist of many features and only a subset of the features may be responsible for the result. Also, the features might carry special structural requirements, which introduces additional complexity for feature selection. The sparse learning package (SLEP) provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: the features may be grouped together, there may exist hierarchies and overlapping groups among them, and there may be requirements for selecting the most relevant groups among them. In spite of yielding sparse solutions, the solutions are not guaranteed to be robust. For the selection to be robust, there are certain techniques that provide theoretical justification of why certain features are selected. Stability selection is a method for feature selection which allows the use of existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning selection probabilities to the features: the training data is sub-sampled, a specific sparse learning technique is used to learn the relevant features, this procedure is repeated a large number of times, and the probability is computed as the fraction of runs in which a feature is selected. Cross-validation is then used to determine the best parameter value over a range of values, by selecting the parameter value which gives the maximum accuracy score. With such a combination of algorithms, with good convergence guarantees, stable feature selection properties, and the inclusion of various structural dependencies among features, the sparse learning package will be a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear-algebra subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and these can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease which gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
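
The stability-selection procedure described above is straightforward to sketch. The example below is my own illustration using scikit-learn's Lasso as the base sparse learner (the SLEP solvers would be used in the actual workflow); it sub-samples the training data repeatedly and reports the fraction of runs in which each feature receives a nonzero coefficient:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_rounds=100, subsample=0.5, seed=0):
    """Fraction of sub-sampled fits in which each feature gets a nonzero coefficient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        counts += (model.coef_ != 0)
    return counts / n_rounds

# Toy example: only the first 3 of 20 features actually drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)
probs = stability_selection(X, y)
print("stable features:", np.where(probs > 0.8)[0])
```

Features whose selection frequency exceeds a chosen threshold form the stable set; the regularization parameter itself can be chosen by cross-validation as described above.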
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Ever-reducing time to market, along with short product lifetimes, has created a need to shorten the microprocessor design time. Verification of the design and its analysis are two major components of this design cycle. Design validation techniques can be broadly classified into two major categories: simulation-based approaches and formal techniques. Simulation-based microprocessor validation involves running millions of cycles using random or pseudo-random tests and allows verification of the register transfer level (RTL) model against an architectural model, i.e., that the processor executes instructions as required. The validation effort involves model checking against a high-level description, or simulation of the design against the RTL implementation. Formal techniques exhaustively analyze parts of the design but do not verify the RTL against the architecture specification. The focus of this work is to implement a fully automated validation environment for a MIPS-based radiation-hardened microprocessor using simulation-based approaches. The basic framework uses the classical validation approach in which the design to be validated is described in a hardware description language (HDL) such as VHDL or Verilog. To implement a simulation-based approach, a number of random or pseudo-random tests are generated. The output of the HDL-based design is compared against the output obtained from a "perfect" model implementing similar functionality; a mismatch in the results would thus indicate a bug in the HDL-based design. Effort is made to design the environment in such a manner that it can support validation during different stages of the design cycle. The validation environment includes appropriate changes so as to support the architecture changes introduced by radiation hardening. The manner in which the validation environment is built is highly dependent on the specifications of the perfect model used for comparisons. This work implements the validation environment using two MIPS simulators as the reference models. Two bugs have been discovered in the RTL model using simulation-based approaches through the validation environment.
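
The core compare loop of such an environment can be sketched as follows (script, tool, and file names are hypothetical; the real flow invokes the HDL simulator and the MIPS reference simulators with project-specific options): run the same random test on both models and diff the architectural state they dump.

```python
import subprocess

# Hypothetical command lines; the real flow would invoke the HDL simulator and a
# MIPS reference simulator with project-specific options.
RTL_SIM = ["./run_rtl_sim.sh"]        # assumed to produce rtl_state.log
REF_SIM = ["./run_ref_sim.sh"]        # assumed to produce ref_state.log

def run_test(test_file):
    """Run one random test on both models and return their dumped register states."""
    subprocess.run(RTL_SIM + [test_file], check=True)
    subprocess.run(REF_SIM + [test_file], check=True)
    with open("rtl_state.log") as f_rtl, open("ref_state.log") as f_ref:
        return f_rtl.read().splitlines(), f_ref.read().splitlines()

def compare(test_file):
    """Report the first architectural-state mismatch; a mismatch suggests an RTL bug."""
    rtl, ref = run_test(test_file)
    for line_no, (a, b) in enumerate(zip(rtl, ref), start=1):
        if a != b:
            return f"{test_file}: mismatch at state line {line_no}: RTL={a!r} REF={b!r}"
    return f"{test_file}: PASS"

if __name__ == "__main__":
    for i in range(100):                      # 100 random tests as an example
        print(compare(f"random_test_{i}.s"))
```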
Contributors: Sharma, Abhishek (Author) / Clark, Lawrence (Thesis advisor) / Holbert, Keith E. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-task learning (MTL) aims to improve the generalization performance (of the resulting classifiers) by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks and thus facilitate the individual task learning. It is particularly desirable to share the domain knowledge (among the tasks) when there are a number of related tasks but only limited training data is available for each task. Modeling the relationship of multiple tasks is critical to the generalization performance of the MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solution efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches on different real-world applications: (1) Automated annotation of the Drosophila gene expression pattern images; (2) Categorization of the Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
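
A generic form of the shared low-dimensional feature-subspace formulation mentioned above can be written as follows (an illustrative formulation with assumed notation, not necessarily the exact objective used in the dissertation):

```latex
\min_{W=[w_1,\dots,w_m],\;\Theta:\;\Theta^{\top}\Theta=I_k}\;
\sum_{i=1}^{m}\frac{1}{n_i}\sum_{j=1}^{n_i}
\ell\!\left(w_i^{\top}x_{ij},\,y_{ij}\right)
\;+\;\lambda\,\Omega(W,\Theta)
```

Here the m tasks share a k-dimensional subspace spanned by Θ (k ≪ d), ℓ is the per-task empirical loss, and Ω is a structure-inducing regularizer (for example, a trace-norm or group-sparsity penalty) that couples the task weight vectors through the shared subspace.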
Contributors: Chen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

High Voltage Direct Current (HVDC) technology is being considered for several long-distance point-to-point overhead transmission lines because of its lower losses and higher transmission capability when compared to AC systems. Insulators are used to support and isolate the conductors mechanically and electrically. Composite insulators are gaining popularity for both AC and DC lines because of their light weight and good performance under contaminated conditions. This research illustrates the electric potential and field computation on HVDC composite insulators using the charge simulation method. The electric field is calculated under both dry and wet conditions. Under dry conditions, the field distributions along insulators whose voltage levels range from 500 kV to 1200 kV are calculated and compared. The results indicate that the HVDC insulator produces a higher electric field when compared to an AC insulator. Under wet conditions, a 500 kV insulator is modeled with discrete water droplets on the surface. In this case, the field distribution is affected by surface resistivity and the separation between droplets. The corona effects on insulators are analyzed for both dry and wet conditions. Corona discharge is created when the electric field strength exceeds the threshold value. Corona and grading rings are placed near the end-fittings of the insulators to reduce the occurrence of corona. The dimensions of these rings, specifically their radius, tube thickness, and projection from the end-fittings, are optimized. This will help the utilities design proper corona and grading rings to reduce the corona phenomena.
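
The charge simulation method itself reduces to a small linear-algebra problem. The sketch below is my own illustration with free-space point charges (actual insulator models typically use ring or line charges and account for the dielectric): fictitious charges placed inside the electrodes are solved for so that the known potentials are reproduced at boundary contour points, after which the field can be evaluated anywhere.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def potential_coeff(contour_pts, charge_pts):
    """Potential coefficient matrix P[i, j]: potential at contour point i due to a
    unit point charge at charge point j (free-space point charges, illustrative)."""
    d = np.linalg.norm(contour_pts[:, None, :] - charge_pts[None, :, :], axis=2)
    return 1.0 / (4.0 * np.pi * EPS0 * d)

def solve_charges(contour_pts, charge_pts, contour_potentials):
    """Fictitious charge magnitudes that reproduce the boundary potentials."""
    P = potential_coeff(contour_pts, charge_pts)
    return np.linalg.solve(P, contour_potentials)

def e_field(eval_pts, charge_pts, q):
    """Electric field at eval_pts from the fitted point charges."""
    r = eval_pts[:, None, :] - charge_pts[None, :, :]
    d3 = np.linalg.norm(r, axis=2, keepdims=True) ** 3
    return np.sum(q[None, :, None] * r / (4.0 * np.pi * EPS0 * d3), axis=1)

# Toy example: charges placed just inside a "conductor", contour points on its surface.
charge_pts = np.array([[0.00, 0.0], [0.02, 0.0], [0.04, 0.0]])
contour_pts = np.array([[0.00, 0.01], [0.02, 0.01], [0.04, 0.01]])
q = solve_charges(contour_pts, charge_pts, contour_potentials=np.full(3, 500e3))  # 500 kV
print(e_field(np.array([[0.02, 0.05]]), charge_pts, q))
```

With more contour points than charges, a least-squares solve (np.linalg.lstsq) would replace the square solve for a better-conditioned fit.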
Contributors: He, Jiahong (Author) / Gorur, Ravi S. (Committee member) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Chalcogenide glass (ChG) materials have gained wide attention because of their applications in conductive bridge random access memory (CBRAM), phase change memories (PC-RAM), optical rewritable disks (CD-RW and DVD-RW), microelectromechanical systems (MEMS), microfluidics, and optical communications. One of the significant properties of ChG materials is the change in the resistivity of the material when a metal such as Ag or Cu is added to it by diffusion. This study demonstrates the potential radiation-sensing capabilities of two metal/chalcogenide-glass device configurations. Lateral and vertical device configurations sense the radiation-induced migration of Ag+ ions in germanium selenide glasses via changes in electrical resistance between electrodes on the ChG. Before irradiation, these devices exhibit a high-resistance 'OFF' state (on the order of 10^12 ohms), but following irradiation with either 60-Co gamma-rays or UV light, their resistance drops to a low-resistance 'ON' state (around 10^3 ohms). Lateral devices have exhibited cyclical recovery with room-temperature annealing of the Ag-doped ChG, which suggests potential uses in reusable radiation sensor applications. The feasibility of producing inexpensive flexible radiation sensors has been demonstrated by studying the effects of mechanical strain and temperature stress on sensors formed on a flexible polymer substrate. The mechanisms of radiation-induced Ag/Ag+ transport and reactions in ChG have been modeled using a finite element device simulator, ATLAS. The essential reactions captured by the simulator are radiation-induced carrier generation, combined with reduction/oxidation of Ag species in the chalcogenide film. Metal-doped ChGs are solid electrolytes that have both ionic and electronic conductivity. The ChG-based Programmable Metallization Cell (PMC) is a technology platform that offers electric-field-dependent resistance switching via the formation and dissolution of nano-sized conductive filaments in a ChG solid electrolyte between oxidizable and inert electrodes. This study identifies silver anode agglomeration in PMC devices following large radiation dose exposure and considers device failure mechanisms via electrical and material characterization. The results demonstrate that, by changing device structural parameters, silver agglomeration in PMC devices can be suppressed and reliable resistance switching may be maintained for extremely high doses ranging from 4 Mrad(GeSe) to more than 10 Mrad(ChG).
Contributors: Dandamudi, Pradeep (Author) / Kozicki, Michael N. (Thesis advisor) / Barnaby, Hugh J. (Committee member) / Holbert, Keith E. (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013