Matching Items (125)
Description
Many products undergo several stages of testing, ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. At times, however, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to derive data mining or pattern recognition criteria for manufacturing process or upstream test data by means of support vector machines (SVMs), in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be used to drive improvement of the product or, at a minimum, to improve via screening the reliability of the product delivered to the customer. Such models can also aid reliability risk assessment based on detectable correlations between product test performance and the sources of supply, test stands, or other factors related to product manufacture. To enhance the usefulness of the SVM or hyperplane classifier in this context, L-moments and the Western Electric Company (WECO) rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data composed of single-parameter, time-series, or multivariate real-valued data. The methodology also provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greatest influence on the downstream failure outcomes.
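A minimal sketch of the kind of pipeline this abstract describes: L-moment features are extracted from upstream test waveforms and fed to a linear SVM whose coefficients act as input weighting factors. The synthetic data, failure signature, and model settings are illustrative assumptions, not the dissertation's implementation, and the WECO-rule features are omitted.

```python
import numpy as np
from scipy.special import comb
from sklearn.svm import SVC

def sample_l_moments(x):
    """First four unbiased sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b = np.empty(4)
    for r in range(4):
        # b_r = mean over i of C(i, r)/C(n-1, r) * x_(i); weights vanish for i < r
        b[r] = np.mean(comb(np.arange(n), r) / comb(n - 1, r) * x)
    return np.array([b[0],                                  # l1: location
                     2*b[1] - b[0],                         # l2: scale
                     6*b[2] - 6*b[1] + b[0],                # l3: ~skewness
                     20*b[3] - 30*b[2] + 12*b[1] - b[0]])   # l4: ~kurtosis

# toy data: one upstream test waveform per unit; "failing" units carry a drift
rng = np.random.default_rng(0)
waves = rng.normal(size=(200, 500))
waves[:100] += np.linspace(0.0, 1.0, 500)       # hypothetical failure signature
X = np.array([sample_l_moments(w) for w in waves])
y = np.r_[np.ones(100), np.zeros(100)]          # 1 = later downstream failure
clf = SVC(kernel="linear").fit(X, y)
print("feature weights (l1..l4):", clf.coef_.ravel())
```

For a linear kernel, the magnitude of each coefficient hints at which L-moment summary of the upstream data carries the most weight in the failure prediction, mirroring the weighting-factor use described above.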
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
One of the challenges in future semiconductor device design is the excessive rise of power dissipation and device temperatures. With the introduction of new geometrically confined device structures such as SOI, FinFETs, and nanowires, and the continued incorporation of new materials with poor thermal conductivities in the device active region, the device thermal problem is expected to become more challenging in the coming years. This work examines the degradation in the ON-current due to self-heating effects in 10 nm channel length silicon nanowire transistors. As part of this dissertation, a 3D electrothermal device simulator is developed that self-consistently solves the electron Boltzmann transport equation with 3D energy balance equations for both the acoustic and the optical phonons. The simulator predicts temperature variations and other physical and electrical parameters across the device for different bias and boundary conditions. The simulation results show insignificant current degradation due to nanowire self-heating because of the pronounced velocity overshoot effect. In addition, this work explores the role of various placements of the source and drain contacts on the magnitude of the self-heating effect in nanowire transistors. It also investigates the simultaneous influence of self-heating and random charge effects on the magnitude of the ON-current for both positively and negatively charged single charges. This research suggests that self-heating affects the ON-current in two ways: (1) by lowering the barrier at the source end of the channel, thus allowing more carriers to go through, and (2) via the screening effect of the Coulomb potential. To examine the effect of the temperature-dependent thermal conductivity of thin silicon films in nanowire transistors, Selberherr's thermal conductivity model is used in the device simulator. The simulation results show larger current degradation because of self-heating due to the decreased thermal conductivity. Crystallographic-direction-dependent thermal conductivity is also included in the device simulations. Larger degradation is observed in the current along the [100] direction than along the [110] direction, in agreement with the values for the thermal conductivity tensor provided by Zlatan Aksamija.
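For reference, a common form of Selberherr's model expresses bulk-silicon thermal conductivity as kappa(T) = 1/(a + b*T + c*T^2). The sketch below uses the widely quoted bulk coefficients; the dissertation additionally accounts for thin-film and crystallographic-orientation effects not shown here.

```python
def kappa_si(T):
    """Bulk-Si thermal conductivity in W/(cm*K) at lattice temperature T [K],
    Selberherr-type fit kappa = 1/(a + b*T + c*T^2)."""
    a, b, c = 0.03, 1.56e-3, 1.65e-6   # units: cm*K/W, cm/W, cm/(W*K)
    return 1.0 / (a + b * T + c * T * T)

for T in (300.0, 400.0, 500.0):
    print(f"T = {T:5.1f} K  ->  kappa = {kappa_si(T):.3f} W/(cm*K)")
# Rising channel temperature lowers kappa, which raises the local temperature
# further -- the feedback loop that worsens the current degradation noted above.
```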
ContributorsHossain, Arif (Author) / Vasileska, Dragica (Thesis advisor) / Ahmed, Shaikh (Committee member) / Bakkaloglu, Bertan (Committee member) / Goodnick, Stephen (Committee member) / Arizona State University (Publisher)
Created2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration techniques to improve image classification. The method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in their characteristics; registration therefore yields an image that matches the previously obtained image more closely, enabling better classification. To demonstrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. The implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration improves naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets used.
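A schematic of the register-then-classify idea, assuming synthetic data: phase correlation is used here as a simple translation-only stand-in for the thesis's ICP registration step, and Gaussian naïve Bayes classifies per-pixel features.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def register_translation(ref, img):
    """Integer (dy, dx) with img ~ np.roll(ref, (dy, dx)), via phase correlation."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def features(img):
    """Per-pixel features: intensity and 4-neighbor local mean."""
    local = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
             np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.stack([img.ravel(), local.ravel()], axis=1)

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))                    # previously obtained image
img = np.roll(ref, (3, -5), axis=(0, 1)) + 0.1 * rng.normal(size=(64, 64))

labels = (ref > 0).astype(int).ravel()             # toy per-pixel classes
clf = GaussianNB().fit(features(ref), labels)

dy, dx = register_translation(ref, img)
aligned = np.roll(img, (-dy, -dx), axis=(0, 1))    # register before classifying
acc_raw = (clf.predict(features(img)) == labels).mean()
acc_reg = (clf.predict(features(aligned)) == labels).mean()
print(f"accuracy without registration: {acc_raw:.2f}, with: {acc_reg:.2f}")
```

The same pipeline shape applies with ICP in place of phase correlation when the registration operates on point clouds rather than raster images.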
ContributorsMuralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth limitations. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include: (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communication experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems: the design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
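The multiple-resampling idea can be sketched compactly: each receiver branch resamples the received waveform by its own estimated Doppler scaling factor before standard OFDM demodulation. The signal, scaling factors, and single-carrier stand-in below are simplified placeholder assumptions.

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

fs = 48_000                              # receiver sampling rate [Hz] (assumed)
t = np.arange(0, 0.1, 1 / fs)
tx = np.cos(2 * np.pi * 12_000 * t)      # stand-in for one OFDM block

# two paths/users, each with its own Doppler scale a = 1 + v/c (assumed known)
a_hat = [1.0008, 0.9994]
rx = sum(np.interp(a * t, t, tx) for a in a_hat)   # scaled, superposed arrivals

branches = []
for a in a_hat:
    f = Fraction(a).limit_denominator(10_000)      # rational approximation of a
    # resampling by up/down ~ a maps rx(n/(a*fs)) back onto the nominal grid,
    # undoing this branch's time-scale distortion
    branches.append(resample_poly(rx, f.numerator, f.denominator))
# each branch now sees "its" path restored to the nominal time base and can
# feed a standard OFDM demodulator (CP removal, FFT, per-carrier equalization)
print([len(b) for b in branches])
```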
ContributorsTu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2011
Description
Following the success of incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to overcome the high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed include: (1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and (2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem addresses the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency pruning and detector pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table of representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming that auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
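A toy version of the frequency-pruning idea: auditory channels whose excitation falls far below the peak are skipped before the loudness summation. The uniform FFT filterbank, pruning threshold, and compressive exponent below are crude illustrative stand-ins for the dissertation's full auditory model, intended only to show the accuracy-versus-work trade.

```python
import numpy as np

def band_excitations(x, n_bands=40):
    """Crude excitation pattern: power in uniform FFT bands (a placeholder
    for critical-band filtering and spreading)."""
    P = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.sum() for b in np.array_split(P, n_bands)])

def total_loudness(E, prune_db=None):
    """Compressive loudness sum over channels; optionally prune weak ones."""
    keep = np.ones(E.size, dtype=bool)
    if prune_db is not None:        # skip channels > prune_db below the peak
        keep = E > E.max() * 10 ** (-prune_db / 10)
    return np.sum(E[keep] ** 0.23), int(keep.sum())

n = np.arange(4096)
x = np.sin(2 * np.pi * 0.05 * n) + 0.5 * np.sin(2 * np.pi * 0.12 * n)
x = x + 1e-3 * np.random.default_rng(0).normal(size=n.size)

E = band_excitations(x)
full, n_full = total_loudness(E)
pruned, n_kept = total_loudness(E, prune_db=40)
print(f"relative loudness error: {abs(full - pruned) / full:.2%}; "
      f"channels evaluated: {n_kept}/{n_full}")
```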
ContributorsKrishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage, and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two sparse decomposition algorithms, based on well-known existing algorithms, are modified to be practical on modest computational resources and are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle, and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude more processing time than classical SAR image formation.
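The flavor of such a reconstruction can be shown with a generic sparse solver: below, iterative soft thresholding (ISTA) recovers a sparse scene from a randomly masked 2-D Fourier aperture. This is a textbook stand-in under simplified assumptions (unit point scatterers, random sample masking), not the two modified algorithms the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
scene = np.zeros((n, n))
scene[rng.integers(0, n, 25), rng.integers(0, n, 25)] = 1.0   # point scatterers
mask = rng.random((n, n)) < 0.4                 # keep 40% of aperture samples
y = mask * np.fft.fft2(scene, norm="ortho")     # sparsely sampled phase history

def ista(y, mask, lam=0.02, step=1.0, n_iter=300):
    """Iterative soft thresholding for min 0.5*||mask*F(x) - y||^2 + lam*||x||_1."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        resid = mask * (np.fft.fft2(x, norm="ortho") - y)
        z = x - step * np.fft.ifft2(resid, norm="ortho")     # gradient step
        mag = np.abs(z)
        x = z * np.maximum(1 - lam * step / np.maximum(mag, 1e-12), 0)  # shrink
    return x

x_hat = ista(y, mask)
x_zf = np.fft.ifft2(y, norm="ortho")            # classical zero-filled image
err = lambda x: np.linalg.norm(np.abs(x) - scene) / np.linalg.norm(scene)
print(f"zero-filled error: {err(x_zf):.2f}, ISTA error: {err(x_hat):.2f}")
```

The per-iteration FFT pair is what makes such solvers tractable on modest hardware, and the iteration count is also why they cost orders of magnitude more than a single classical image formation pass.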
ContributorsWerth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image formation processing is the fractional Fourier transform (FRFT). This transform has recently been introduced to the signal processing community and has shown many promising applications in SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This is achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band and providing imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data are processed into imagery using the conventional chirp scaling algorithm, and the results are compared with those from a new implementation of the CSA that uses the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
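Since focusing quality is judged here by the peak-to-sidelobe ratio (PSLR), a small sketch of that metric may help. The unweighted sinc response below stands in for a focused-target cut from real imagery, and the mainlobe-detection heuristic is an illustrative assumption.

```python
import numpy as np

def pslr_db(cut):
    """PSLR of a 1-D impulse-response cut: highest sidelobe relative to peak."""
    p = np.abs(cut) ** 2
    k = int(np.argmax(p))
    # walk outward from the peak to the first local minimum (mainlobe null)
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < p.size - 1 and p[right + 1] < p[right]:
        right += 1
    side = np.r_[p[:left], p[right + 1:]]          # everything outside mainlobe
    return 10 * np.log10(side.max() / p[k])

x = np.linspace(-8, 8, 4097)
cut = np.sinc(x)                        # ideal unweighted compressed pulse
print(f"PSLR = {pslr_db(cut):.2f} dB")  # ~ -13.3 dB for an unweighted sinc
```

Better focusing raises the peak relative to the sidelobes, so a more negative PSLR (in dB) indicates the improvement the FRFT-based CSA aims for.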
ContributorsNorthrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2011
Description
The tracking of multiple targets becomes more challenging in complex environments due to the additional degrees of nonlinearity in the measurement model. In urban terrain, for example, there are multiple reflection-path measurements that need to be exploited, since line-of-sight observations are not always available. Multiple target tracking in urban terrain is traditionally implemented using sequential Monte Carlo filtering algorithms and data association techniques. However, data association techniques can be computationally intensive and require very strict conditions for efficient performance. This thesis investigates the probability hypothesis density (PHD) method for tracking multiple targets in urban environments. The PHD is based on the theory of random finite sets, and it is implemented using the particle filter. Unlike data association methods, it can be used to estimate the number of targets as well as their corresponding tracks. A modified maximum-likelihood version of the PHD (MPHD) is proposed to automatically and adaptively estimate the measurement types available at each time step. Specifically, the MPHD allows measurement-to-nonlinearity associations such that the best-matched measurement can be used at each time step, resulting in improved radar coverage and scene visibility. Numerical simulations demonstrate the effectiveness of the MPHD in improving tracking performance, both for multiple targets and for targets in clutter.
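A bare-bones sketch of a particle (sequential Monte Carlo) PHD filter for 1-D targets with Gaussian likelihoods and uniform clutter is given below; it shows the predict/update/resample cycle and the cardinality estimate, but omits the multipath measurement-model selection that defines the MPHD. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p_d, clutter, sigma = 0.9, 0.1, 0.5    # detection prob., clutter density, meas. std
area = (0.0, 50.0)

def phd_step(parts, w, meas, n_birth=200):
    # predict: survival + motion noise, plus birth particles over the area
    parts = parts + rng.normal(0, 0.3, parts.size)
    w = 0.99 * w
    parts = np.r_[parts, rng.uniform(*area, n_birth)]
    w = np.r_[w, np.full(n_birth, 0.1 / n_birth)]       # expected 0.1 births
    # update: standard PHD measurement-update equation on particle weights
    g = np.array([np.exp(-0.5 * ((z - parts) / sigma) ** 2)
                  / (sigma * np.sqrt(2 * np.pi)) for z in meas])
    denom = clutter + p_d * (g * w).sum(axis=1)         # one term per measurement
    w = (1 - p_d) * w + (p_d * g / denom[:, None]).sum(axis=0) * w
    # resample back to a fixed particle budget, preserving total PHD mass
    keep = rng.choice(parts.size, 1000, p=w / w.sum())
    return parts[keep], np.full(1000, w.sum() / 1000)

parts = rng.uniform(*area, 1000)
w = np.full(1000, 2.0 / 1000)                  # initial mass ~ 2 targets
truth = np.array([12.0, 31.0])
for _ in range(10):
    meas = np.r_[truth + rng.normal(0, sigma, 2), rng.uniform(*area, 3)]
    parts, w = phd_step(parts, w, meas)
print(f"estimated target count: {w.sum():.2f}")   # integral of the PHD
```

The integral of the PHD (the sum of particle weights) estimates the number of targets directly, which is exactly the property that removes the need for explicit data association.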
ContributorsZhou, Meng (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2011
Description
In very small electronic devices, the alternate capture and emission of carriers at an individual defect site located at the Si:SiO2 interface of a MOSFET generates discrete switching in the device conductance, referred to as a random telegraph signal (RTS) or random telegraph noise (RTN). In this research, random defects positioned across the channel at the Si:SiO2 interface, from the source end to the drain end, are combined with different random dopant distributions to conduct Ensemble Monte Carlo (EMC) based numerical simulation of key device performance metrics for a 45 nm gate length MOSFET. The two main performance parameters that affect RTS-based reliability measurements are the percentage change in threshold voltage and the percentage change in drain current fluctuation in the saturation region. The simulations show that both of these quantities moderately decrease as the defect position is gradually moved from the source end to the drain end of the channel. A precise analytical model based on device physics is needed to explain and assess the higher threshold voltage (VT) fluctuations observed in the EMC simulations for trap positions on the source side. A new analytical model has been developed that simultaneously accounts for dopant number variations in the channel and the depletion region underneath, and for carrier mobility fluctuations resulting from fluctuations in surface potential barriers. This new model, along with existing analytical models, is shown to correlate with the 3D EMC simulations in assessing the percentage VT fluctuation induced by a single interface trap. With scaling of devices beyond the 32 nm node, halo doping at the source and drain is routinely incorporated to combat the threshold voltage roll-off that takes place with effective channel length reduction. As a final study, 3D EMC simulations of threshold voltage fluctuations have been performed for varying source and drain halo pocket lengths, illustrating that the threshold voltage fluctuation reliability problems are aggravated by trap positions near the source at the interface, compared to the conventional 45 nm MOSFET.
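To make the RTS phenomenon concrete, the toy generator below toggles a normalized drain current between two levels using exponentially distributed capture and emission times. The time constants and current step are assumed values for illustration, not fitted device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_c, tau_e = 1e-3, 4e-4       # mean capture / emission times [s] (assumed)
dI, steps, dt = 0.05, 2000, 1e-5  # fractional Id step, samples, sample period

state, t_next, trace = 0, rng.exponential(tau_c), []
for n in range(steps):
    while n * dt >= t_next:      # trap toggles: capture <-> emission
        state ^= 1
        t_next += rng.exponential(tau_e if state else tau_c)
    trace.append(1.0 - dI * state)   # normalized Id drops while trap is filled
trace = np.array(trace)

occ = (1.0 - trace).mean() / dI      # fraction of time the trap is occupied
print(f"trap occupancy ~ {occ:.2f} "
      f"(expect tau_e/(tau_c+tau_e) = {tau_e / (tau_c + tau_e):.2f})")
```

The capture/emission time-constant ratio sets the occupancy, and the current step dI is what shifts with trap position along the channel in the EMC study above.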
ContributorsAshraf, Nabil Shovon (Author) / Vasileska, Dragica (Thesis advisor) / Schroder, Dieter (Committee member) / Goodnick, Stephen (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2011
Description
As existing solar cell technologies come closer to their theoretical efficiency, new concepts that overcome the Shockley-Queisser limit and exceed 50% efficiency need to be explored. New materials systems are often investigated to achieve this, but the use of existing solar cell materials in advanced-concept approaches is compelling for multiple theoretical and practical reasons. In order to incorporate advanced-concept approaches into existing materials, nanostructures are used, as they alter the physical properties of these materials. To explore advanced nanostructured concepts with existing materials such as III-V alloys, silicon, and/or silicon/germanium and associated alloys, fundamental aspects of using these materials in advanced-concept nanostructured solar cells must be understood. Chief among these is the determination and prediction of optimum electronic band structures, including effects such as strain on the band structure, and the material's opto-electronic properties. Nanostructures have a large impact on band structure and electronic properties through quantum confinement. An additional large effect is the change in band structure due to elastic strain caused by lattice mismatch between the barrier and nanostructured (usually self-assembled QD) materials. To develop a material model for advanced-concept solar cells, the band structure is calculated for a single quantum dot as well as for a vertical array of quantum dots, including realistic effects, such as strain, associated with the epitaxial growth of these materials. The results show a significant effect of strain on the band structure. More importantly, the band diagrams of a vertical array of QDs with different spacer layer thicknesses show significant changes in band offsets, especially for the heavy- and light-hole valence bands, when the spacer layer thickness is reduced. These results are ultimately significant for developing a material model for advanced-concept solar cells that use QD nanostructures as the absorbing medium. The band structure calculations serve as the basis for multiple other calculations; chief among these, the model allows the design of a practical QD advanced-concept solar cell that meets key design criteria, such as a negligible valence band offset between the QD and barrier materials and close-to-optimum band gaps, resulting in the prediction of optimum material combinations.
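As a flavor of the underlying calculation, the sketch below solves a 1-D finite-difference effective-mass Schrödinger equation for a conduction-band well with a uniform strain-induced shift, a drastic simplification of the full 3-D strained QD band structure model. The well depth, width, strain shift, and effective mass are all assumed illustrative values.

```python
import numpy as np

hbar, m0, q = 1.0546e-34, 9.109e-31, 1.602e-19
m_eff = 0.067 * m0                   # GaAs-like conduction-band mass (assumed)
n, L = 400, 40e-9                    # grid points, domain length
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

well = np.abs(x) < 4e-9                        # 8 nm wide well
# 0.30 eV band offset, partially lifted by a 0.05 eV strain shift (assumed)
V = np.where(well, -0.30 + 0.05, 0.0) * q

# H = -hbar^2/(2 m*) d^2/dx^2 + V(x), discretized on a uniform grid
t = hbar ** 2 / (2 * m_eff * dx ** 2)
H = (np.diag(np.full(n, 2 * t) + V)
     - np.diag(np.full(n - 1, t), 1)
     - np.diag(np.full(n - 1, t), -1))
E = np.linalg.eigvalsh(H)
bound = E[E < 0] / q                 # confined levels below the barrier
print("bound-state energies [eV]:", np.round(bound, 4))
```

Even this crude model reproduces the qualitative point made above: strain shifts the confined levels and hence the effective band offsets, which is why it must be included before optimizing QD/barrier material combinations.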
ContributorsDahal, Som Nath (Author) / Honsberg, Christiana (Thesis advisor) / Goodnick, Stephen (Committee member) / Roedel, Ronald (Committee member) / Ponce, Fernando (Committee member) / Arizona State University (Publisher)
Created2011