Matching Items (159)

Description
As crystalline silicon solar cells continue to get thinner, the recombination of carriers at the cell surfaces plays an increasingly important role in determining cell efficiency. One tool to minimize surface recombination is field-effect passivation from the charges present in the thin films applied on the cell surfaces. The focus of this work is to understand the properties of charges present in SiNx films and then to develop a mechanism to manipulate the polarity of the charges to either negative or positive based on the end application. Specific silicon-nitrogen dangling bonds (·Si-N), known as K center defects, are the primary charge-trapping defects present in SiNx films. A custom-built corona charging tool was used to externally inject positive or negative charges into the SiNx film. Detailed capacitance-voltage (C-V) measurements taken on corona-charged SiNx samples confirmed the presence of a net positive or negative charge density, as high as ±8 × 10¹² cm⁻², in the SiNx film. High-energy (~4.9 eV) UV radiation was used to control and neutralize the charges in the SiNx films. The electron spin resonance (ESR) technique was used to detect and quantify the density of neutral K⁰ defects, which are paramagnetically active. The density of the neutral K⁰ defects increased after UV treatment and decreased after high-temperature annealing and charging treatments. Etch-back C-V measurements on SiNx films showed that the K centers are spread throughout the bulk of the SiNx film and not just near the SiNx-Si interface. It was also shown that the injected negative charges in the SiNx film were stable and still present after 1 year under indoor room-temperature conditions. Lastly, a stack of SiO2/SiNx dielectric layers applicable to standard commercial solar cells was developed using a low-temperature (< 400 °C) PECVD process. Excellent surface passivation on FZ and CZ Si substrates for both n- and p-type samples was achieved by manipulating and controlling the charge in the SiNx films.
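As an aside on how such charge densities are extracted, the sketch below converts a flat-band voltage shift measured from a C-V curve into an effective areal charge density via the standard MOS relation N_eff = -C_ins·ΔV_fb/q; the film permittivity, thickness, and voltage shift are hypothetical placeholders, not values from this work.

```python
# Effective areal charge density in a dielectric film from the flat-band
# voltage shift of a C-V curve; the charge is assumed to sit at the
# dielectric/Si interface. All film parameters below are hypothetical.

EPS0 = 8.854e-14   # vacuum permittivity [F/cm]
Q_E = 1.602e-19    # elementary charge [C]

def effective_charge_density(delta_vfb, eps_r, thickness_cm):
    """N_eff [cm^-2] = -C_ins * dV_fb / q, with C_ins per unit area."""
    c_ins = EPS0 * eps_r / thickness_cm      # insulator capacitance [F/cm^2]
    return -c_ins * delta_vfb / Q_E          # negative shift -> positive charge

# Example: a SiNx-like film (eps_r ~ 7, 80 nm) with a -1.5 V flat-band shift
n_eff = effective_charge_density(delta_vfb=-1.5, eps_r=7.0, thickness_cm=80e-7)
print(f"Effective charge density: {n_eff:.2e} cm^-2")
```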
Contributors: Sharma, Vivek (Author) / Bowden, Stuart (Thesis advisor) / Schroder, Dieter (Committee member) / Honsberg, Christiana (Committee member) / Roedel, Ronald (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Distributed inference has applications in a wide range of fields such as source localization, target detection, environment monitoring, and healthcare. In this dissertation, distributed inference schemes which use bounded transmit power are considered, and the performance of the proposed schemes is studied for a variety of inference problems. In the first part of the dissertation, a distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent is considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also favorably compared with existing amplify-and-forward (AF) and detect-and-forward (DF) schemes. The effect of fading is shown to be detrimental to the detection performance, and simulations are provided to corroborate the analytical results. The second part of the dissertation studies a distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel. The conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variance of the noise samples is bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique, and its robustness to impulsive sensing noise distributions is highlighted. It is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate our analytical results. In the third part of this dissertation, the problem of estimating the average of samples distributed at the nodes of a sensor network is considered. A distributed average consensus algorithm in which every sensor transmits with bounded peak power is proposed. In the presence of communication noise, it is shown that the nodes reach consensus asymptotically to a finite random variable whose expectation is the desired sample average of the initial observations, with a variance that depends on the step size of the algorithm and the variance of the communication noise. The asymptotic performance is characterized by deriving the asymptotic covariance matrix using results from stochastic approximation theory. It is shown that using bounded transmissions results in slower convergence compared to the linear consensus algorithm based on the Laplacian heuristic. Simulations corroborate our analytical findings. Finally, a robust distributed average consensus algorithm in which every sensor performs nonlinear processing at the receiver is proposed. It is shown that the nonlinearity at the receiver nodes makes the algorithm robust to a wide range of channel noise distributions, including impulsive ones, and that the nodes reach consensus asymptotically with results similar to the case of transmit nonlinearity. Simulations corroborate our analytical findings and highlight the robustness of the proposed algorithm.
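To make the deflection coefficient concrete, the sketch below estimates it by Monte Carlo for constant-modulus transmissions over a Gaussian multiple access channel; the sensing model, receiver statistic, and all parameters are simplified stand-ins for the dissertation's scheme, chosen only to illustrate the quantity being optimized.

```python
# Monte Carlo estimate of a deflection coefficient for constant-modulus
# transmissions over a Gaussian multiple access channel (simplified model).
import numpy as np

rng = np.random.default_rng(0)
K, trials = 50, 20000                  # number of sensors, Monte Carlo runs
theta, omega, power = 1.0, 0.5, 1.0    # signal level, phase parameter, power

def receiver_statistic(signal_present):
    x = (theta if signal_present else 0.0) + rng.standard_normal((trials, K))
    y = np.sqrt(power) * np.exp(1j * omega * x)      # constant-modulus mapping
    z = y.sum(axis=1) + rng.standard_normal(trials)  # superposition + MAC noise
    return z.real                                    # simple test statistic

t0, t1 = receiver_statistic(False), receiver_statistic(True)
deflection = (t1.mean() - t0.mean()) ** 2 / t0.var()
print(f"Empirical deflection coefficient: {deflection:.2f}")
```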
Contributors: Dasarathan, Sivaraman (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Reisslein, Martin (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with low RMSE. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures the EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
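For readers unfamiliar with the underlying machinery, the sketch below is a minimal bootstrap particle filter for a scalar random-walk state; it shows the propagate-weight-resample cycle that PPF-IMH parallelizes, not the IMH variant itself, and all model parameters are illustrative.

```python
# Minimal bootstrap particle filter: scalar random-walk state observed in
# Gaussian noise. Illustrates the propagate/weight/resample cycle only.
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 100              # particles, time steps
q_std, r_std = 0.1, 0.5       # process and measurement noise std

x_true = np.cumsum(q_std * rng.standard_normal(T))   # ground-truth state
y_obs = x_true + r_std * rng.standard_normal(T)      # noisy observations

particles = np.zeros(N)
estimates = np.empty(T)
for t in range(T):
    particles = particles + q_std * rng.standard_normal(N)   # propagate
    w = np.exp(-0.5 * ((y_obs[t] - particles) / r_std) ** 2) # weight
    w /= w.sum()
    estimates[t] = np.dot(w, particles)                      # MMSE estimate
    particles = rng.choice(particles, size=N, p=w)           # resample

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(f"Tracking RMSE: {rmse:.3f}")
```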
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Scaling of the classical planar MOSFET below 20 nm gate length is facing not only technological difficulties but also limitations imposed by short channel effects, gate and junction leakage current due to quantum tunneling, high body doping induced threshold voltage variation, and carrier mobility degradation. Non-classical multiple-gate structures such as double-gate (DG) FinFETs and surrounding-gate field-effect transistors (SGFETs) have good electrostatic integrity and are an alternative to planar MOSFETs for sub-20 nm technology nodes. Circuit design with these devices needs compact models for SPICE simulation. In this work, physics-based compact models for the common-gate symmetric DG-FinFET, independent-gate asymmetric DG-FinFET, and SGFET are developed. Despite the complex device structure and boundary conditions for the Poisson-Boltzmann equation, the core structure of the DG-FinFET and SGFET models is kept similar to that of surface-potential-based compact models for planar MOSFETs such as SP and PSP. TCAD simulations show differences in the transient behavior and the capacitance-voltage characteristics of bulk and SOI FinFETs if the gate-voltage swing includes the accumulation region. This effect can be captured by a compact model of FinFETs only if it includes the contribution of both types of carriers in the Poisson-Boltzmann equation. An accurate implicit input voltage equation valid in all regions of operation is proposed for common-gate symmetric DG-FinFETs with intrinsic or lightly doped bodies. A closed-form algorithm is developed for solving the new input voltage equation, including ambipolar effects. The algorithm is verified for both the surface potential and its derivatives and includes a previously published analytical approximation for the surface potential as a special case when ambipolar effects can be neglected. The symmetric linearization method for common-gate symmetric DG-FinFETs is developed in a form free of the charge-sheet approximation present in its original formulation for bulk MOSFETs. The accuracy of the proposed technique is verified by comparison with exact results. An alternative and computationally efficient description of the boundary between the trigonometric and hyperbolic solutions of the Poisson-Boltzmann equation for the independent-gate asymmetric DG-FinFET is developed in terms of the Lambert W function. An efficient numerical algorithm is proposed for solving the input voltage equation. Analytical expressions for the terminal charges of an independent-gate asymmetric DG-FinFET are derived. The new charge model is C-infinity continuous, valid under weak- as well as strong-inversion conditions of both channels, and does not involve the charge-sheet approximation. This is accomplished by developing the symmetric linearization method in a form that does not require identical boundary conditions at the two Si-SiO2 interfaces and allows for volume inversion in the DG-FinFET. Verification of the model is performed with both numerical computations and 2D TCAD simulations under a wide range of biasing conditions. The model is implemented in a standard circuit simulator through Verilog-A code. Simulation examples for both digital and analog circuits verify good model convergence and demonstrate the capabilities of new circuit topologies that can be implemented using independent-gate asymmetric DG-FinFETs.
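The numerical core of such surface-potential-based models is the solution of an implicit input voltage equation. As a stand-in for the DG-FinFET equations developed here, the sketch below solves the classical bulk-MOSFET surface-potential equation with a bracketing root finder; the device parameters are illustrative only.

```python
# Solving an implicit input-voltage equation for the surface potential,
# the numerical core of surface-potential-based compact models. The
# bulk-MOSFET equation below is a stand-in for the DG-FinFET equations;
# all device parameters are illustrative.
import numpy as np
from scipy.optimize import brentq

PHI_T = 0.0259   # thermal voltage kT/q at 300 K [V]
GAMMA = 0.4      # body-effect coefficient [V^0.5] (hypothetical)
PHI_F = 0.42     # Fermi potential [V] (hypothetical)
VFB = -0.9       # flat-band voltage [V] (hypothetical)

def f(psi_s, vgb):
    """Implicit equation: (Vgb - Vfb - psi_s)^2 - gamma^2 * g(psi_s) = 0."""
    u = psi_s / PHI_T
    g = PHI_T * (np.exp(-u) + u - 1.0
                 + np.exp(-2.0 * PHI_F / PHI_T) * (np.exp(u) - u - 1.0))
    return (vgb - VFB - psi_s) ** 2 - GAMMA ** 2 * g

for vgb in (0.0, 0.5, 1.0, 1.5):
    psi = brentq(f, 1e-6, 2.0 * PHI_F + 0.5, args=(vgb,))
    print(f"Vgb = {vgb:4.1f} V  ->  psi_s = {psi:.4f} V")
```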
Contributors: Dessai, Gajanan (Author) / Gildenblat, Gennady (Committee member) / McAndrew, Colin (Committee member) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft, rotorcraft, and weapon systems to ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking, and delamination. An important element in achieving reliable composite systems is a strong capability for assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated, reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage, which limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied, and the results indicate that this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure, and the model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
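As a simple picture of what the micromechanics layer provides, the sketch below computes the Voigt and Reuss rule-of-mixtures bounds on the effective modulus of a fiber/matrix unit cell; these bounds are far cruder than the generalized method of cells, and the constituent values are generic carbon/epoxy figures, not data from this study.

```python
# Simplest homogenization estimates: Voigt (iso-strain) and Reuss
# (iso-stress) bounds on the effective Young's modulus of a two-phase
# composite. Constituent values are typical carbon/epoxy figures.

E_FIBER, E_MATRIX = 230.0, 3.5   # Young's moduli [GPa]
VF = 0.60                        # fiber volume fraction

e_voigt = VF * E_FIBER + (1 - VF) * E_MATRIX          # upper bound (axial)
e_reuss = 1.0 / (VF / E_FIBER + (1 - VF) / E_MATRIX)  # lower bound (transverse)

print(f"Voigt (upper) bound: {e_voigt:6.1f} GPa")
print(f"Reuss (lower) bound: {e_reuss:6.1f} GPa")
```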
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and to adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both of these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the unavailability of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases to cluster the ECG data, which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach places no restriction on the number of disease classes and requires no training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
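A flavor of the Dirichlet process mixture clustering can be had with scikit-learn's truncated variational approximation, sketched below; the two synthetic "beat feature" clusters stand in for the model parameters estimated from real ECG records in this work.

```python
# Dirichlet-process-style clustering of synthetic ECG beat features using
# scikit-learn's truncated variational approximation. The data are
# synthetic stand-ins; n_components is an upper bound, not a class count.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
beats = np.vstack([
    rng.normal([0.8, 0.1], 0.05, size=(100, 2)),   # e.g., normal beats
    rng.normal([0.3, 0.6], 0.05, size=(40, 2)),    # e.g., an arrhythmic class
])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(beats)

labels = dpgmm.predict(beats)
print("Clusters actually used:", np.unique(labels).size)
```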
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Ball grid array (BGA) packages using lead-free or lead-rich solder materials are widely used as second-level interconnects (SLI) in mounting packaged components to the printed circuit board (PCB). The reliability of these solder joints is of significant importance to the performance of microelectronic components and systems. Product design/form factor, solder material, manufacturing process, and use conditions, as well as the inherent variabilities present in the system, greatly influence product reliability. Accurate reliability analysis requires an integrated approach that concurrently accounts for all these factors and their synergistic effects. Such an integrated and robust methodology can be used in the design and development of new and advanced microelectronic systems and can provide significant improvements in cycle time, cost, and reliability. The IMPRPK approach is based on a probabilistic methodology focusing on three major tasks: (1) characterization of BGA solder joints to identify failure mechanisms and obtain statistical data, (2) finite element modeling (FEM) to predict the system response needed for life prediction, and (3) development of a probabilistic methodology to predict the reliability, as well as the sensitivity of the system to various parameters and variabilities. These tasks and the predictive capabilities of IMPRPK in microelectronic reliability analysis are discussed.
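As one illustration of the probabilistic step, the sketch below propagates scatter in an FEM-predicted plastic strain range through a Coffin-Manson-type fatigue law by Monte Carlo; the fatigue constants and strain statistics are generic illustrative values, not calibrated IMPRPK parameters.

```python
# Monte Carlo propagation of strain variability through a Coffin-Manson
# fatigue law for solder joint life. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Cyclic plastic strain range from thermal-cycling FEM, with scatter
d_eps_p = rng.lognormal(mean=np.log(0.012), sigma=0.15, size=N)

EPS_F, C_EXP = 0.325, -0.5   # fatigue ductility coefficient and exponent
n_f = 0.5 * (d_eps_p / (2.0 * EPS_F)) ** (1.0 / C_EXP)   # cycles to failure

for p in (1, 10, 50):
    print(f"B{p:02d} life: {np.percentile(n_f, p):8.0f} cycles")
```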
Contributors: Fallah-Adl, Ali (Author) / Tasooji, Amaneh (Thesis advisor) / Krause, Stephen (Committee member) / Alford, Terry (Committee member) / Jiang, Hanqing (Committee member) / Mahajan, Ravi (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Test cost has become a significant portion of device cost and a bottleneck in high volume manufacturing. Increasing integration density and shrinking feature sizes increase test time and cost and reduce observability. Test engineers have to put in a tremendous effort in order to keep test cost within an acceptable budget. Unfortunately, there is no single straightforward solution to the problem. Tested products span several application domains and distinct customer profiles. Some products are required to operate for long periods of time, while others must be optimized for low cost. The multitude of constraints and goals makes it impossible to find a single solution that works for all cases. Hence, test development and optimization are typically design/circuit dependent and even process specific. Therefore, test optimization cannot be performed using a single test approach, but necessitates a diversity of approaches. This work aims at addressing test cost minimization and test quality improvement at various levels. In the first chapter, we investigate pre-silicon strategies, such as design for test and pre-silicon statistical simulation optimization. In the second chapter, we investigate efficient post-silicon test strategies, such as adaptive test, adaptive multi-site test, outlier analysis, and process shift detection/tracking.
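To make one of the post-silicon strategies concrete, the sketch below screens a parametric test population for outlier die using a robust (median/MAD) z-score; the data and the screening limit are illustrative, not taken from this work.

```python
# Outlier analysis on parametric test data with a robust z-score based on
# the median and the median absolute deviation (MAD). Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
measurements = rng.normal(1.00, 0.02, size=500)   # e.g., a bias current [mA]
measurements[::97] += 0.15                        # inject a few outlier die

med = np.median(measurements)
mad = np.median(np.abs(measurements - med))
robust_z = 0.6745 * (measurements - med) / mad    # ~N(0,1) for clean data

outliers = np.flatnonzero(np.abs(robust_z) > 3.5) # common screening limit
print(f"Flagged {outliers.size} of {measurements.size} parts:", outliers)
```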
Contributors: Yilmaz, Ender (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Immunosignaturing is a medical test for assessing the health status of a patient by applying microarrays of random-sequence peptides to determine the patient's immune fingerprint by associating antibodies from a biological sample with immune responses. The immunosignature measurements can potentially provide pre-symptomatic diagnosis for infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering, integrated with immunosignature feature extraction approaches, are proposed. The techniques are based on extracting novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. In order to extend the clustering methods to diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows for further clustering if additional patient data are received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
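The Dirichlet process at the heart of this adaptive clustering can be visualized through its stick-breaking construction, sketched below; the concentration parameter and truncation level are arbitrary illustrative choices.

```python
# Stick-breaking construction of Dirichlet process mixture weights:
# w_k = beta_k * prod_{j<k} (1 - beta_j), with beta_k ~ Beta(1, alpha).
import numpy as np

rng = np.random.default_rng(5)
alpha, truncation = 1.0, 25

betas = rng.beta(1.0, alpha, size=truncation)
remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
weights = betas * remaining

print("Largest mixture weights:", np.sort(weights)[::-1][:5].round(3))
print(f"Weight mass within truncation: {weights.sum():.3f}")
```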
Contributors: Malin, Anna (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption, and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase change random access memory (PRAM) and spin-transfer torque magnetic random access memory (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits, and material property shifts. To address the reliability concerns of these NVM technologies, multi-level low cost solutions are proposed for each of them. My approach consists of first building a comprehensive error model. Next, the error characteristics are exploited to develop low cost multi-level strategies to compensate for the errors. For instance, for NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. Next, a flexible product code is designed to migrate to a stronger ECC scheme as the number of program/erase cycles increases. An adaptive data refresh scheme is also proposed to improve memory reliability at low energy cost for applications with different data update frequencies. For PRAM, soft error and hard error models are built based on shifts in the resistance distributions. Next, I developed a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level, and programming current profile tuning at the device level. This approach reduced the error rate significantly, so that a low cost ECC scheme was sufficient to satisfy the memory reliability constraint. I also studied the reliability of a PRAM+DRAM hybrid memory system and analyzed the tradeoffs between memory performance, programming energy, and lifetime. For STT-MRAM, I first developed an error model based on process variations. I then developed a multi-level approach to reduce the error rates that consists of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell, and adjusting the current profile during the write operation. This approach enabled the use of a low cost BCH-based ECC scheme to achieve very low block failure rates.
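As a sketch of the first modeling step for a resistive memory, the snippet below estimates a raw read error rate from the overlap of two log-normal resistance distributions around a read threshold; the distribution parameters are hypothetical, not from the device characterization in this work.

```python
# Raw read error rate of a two-state resistive cell from the overlap of
# its SET/RESET resistance distributions around a read threshold.
# Log10-resistance statistics below are hypothetical.
import math

MU_SET, SIG_SET = 3.5, 0.15   # log10(R) of the SET state (~3.2 kOhm)
MU_RST, SIG_RST = 5.5, 0.30   # log10(R) of the RESET state (~320 kOhm)
R_READ = 4.5                  # log10 of the read threshold resistance

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

p_set_err = q_func((R_READ - MU_SET) / SIG_SET)   # SET read above threshold
p_rst_err = q_func((MU_RST - R_READ) / SIG_RST)   # RESET read below threshold
raw_ber = 0.5 * (p_set_err + p_rst_err)           # equiprobable data assumed
print(f"Raw bit error rate: {raw_ber:.3e}")
```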
Contributors: Yang, Chengen (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2014