Matching Items (151)

Item 149993
Description
Many products undergo several stages of testing, ranging from tests on individual components to end-item tests. These products may be further "tested" through customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. At times, however, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field-reliability data to derive data-mining or pattern-recognition criteria for manufacturing-process or upstream test data by means of support vector machines (SVMs), yielding reliability prediction models. In concert with a risk/benefit analysis, these models can be used to drive improvement of the product or, at minimum, to improve the reliability of the delivered product through screening. Such models can also aid reliability risk assessment based on detectable correlations between product test performance and the sources of supply, test stands, or other factors related to product manufacture. To enhance the usefulness of the SVM (hyperplane) classifier in this context, L-moments and the Western Electric Company (WECO) rules are used to augment or replace the native process or test data used as classifier inputs. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance from upstream test data composed of single-parameter, time-series, or multivariate real-valued data. The methodology also provides input-parameter weighting factors that have proved useful in failure analysis and root-cause investigations as indicators of which upstream product parameters have the greater influence on downstream failure outcomes. (A toy classification sketch along these lines follows this entry.)
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
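
The following is a minimal, hypothetical sketch of the kind of classifier the abstract describes: sample L-moments computed from synthetic upstream test traces feed a linear SVM (scikit-learn's SVC is assumed available), and the hyperplane weights play the role of the input-weighting factors. The data, feature choice, and parameters are illustrative stand-ins, not the study's.

```python
# Hypothetical sketch: predict downstream pass/fail from upstream test
# traces using L-moment features and a linear SVM. Synthetic data only.
import numpy as np
from sklearn.svm import SVC

def l_moments(x):
    """First two sample L-moments (Hosking's b-estimators)."""
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    return b0, 2.0 * b1 - b0           # l1 (location), l2 (dispersion)

rng = np.random.default_rng(0)
# 200 units, each with a 50-sample upstream test trace; "failing" units drift.
traces = [rng.normal(0, 1, 50) for _ in range(100)] + \
         [rng.normal(0.5, 1.8, 50) for _ in range(100)]
y = np.array([0] * 100 + [1] * 100)    # 1 = later field failure

X = np.array([l_moments(t) for t in traces])
clf = SVC(kernel="linear").fit(X, y)
# The hyperplane weights hint at which feature carries more influence,
# loosely analogous to the weighting factors the study reports.
print("weights:", clf.coef_, "training accuracy:", clf.score(X, y))
```
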
Item 149996
Description
One of the challenges in future semiconductor device design is the excessive rise of power dissipation and device temperatures. With the introduction of geometrically confined device structures such as SOI, FinFETs, and nanowires, and the continued incorporation of new materials with poor thermal conductivity into the device active region, the device thermal problem is expected to become more challenging in the coming years. This work examines the degradation of the ON-current due to self-heating effects in 10 nm channel-length silicon nanowire transistors. As part of this dissertation, a 3D electrothermal device simulator is developed that self-consistently solves the electron Boltzmann transport equation with 3D energy-balance equations for both the acoustic and the optical phonons. This device simulator predicts temperature variations and other physical and electrical parameters across the device for different bias and boundary conditions. The simulation results show insignificant current degradation from nanowire self-heating because of the pronounced velocity overshoot effect. In addition, this work explores the role of the placement of the source and drain contacts on the magnitude of the self-heating effect in nanowire transistors. It also investigates the simultaneous influence of self-heating and random charge effects on the ON-current for both positively and negatively charged single charges. This research suggests that self-heating affects the ON-current in two ways: (1) by lowering the barrier at the source end of the channel, thus allowing more carriers through, and (2) via the screening effect of the Coulomb potential. To examine the effect of the temperature-dependent thermal conductivity of thin silicon films in nanowire transistors, Selberherr's thermal conductivity model is used in the device simulator; the simulation results show larger current degradation because of the decreased thermal conductivity (see the sketch after this entry). Crystallographic-direction-dependent thermal conductivity is also included in the device simulations. Larger current degradation is observed along the [100] direction than along the [110] direction, in agreement with the thermal-conductivity tensor values provided by Zlatan Aksamija.
Contributors: Hossain, Arif (Author) / Vasileska, Dragica (Thesis advisor) / Ahmed, Shaikh (Committee member) / Bakkaloglu, Bertan (Committee member) / Goodnick, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
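
Below is a minimal sketch of Selberherr's temperature-dependent thermal conductivity model for silicon, kappa(T) = 1/(a + b*T + c*T^2), which the abstract cites. The coefficients are the commonly quoted bulk-silicon values; the thin-film and crystallographic-direction corrections studied in the dissertation are not modeled here.

```python
# Selberherr's thermal conductivity model for bulk silicon.
# Coefficients are the standard bulk values; thin-film reduction omitted.
A, B, C = 0.03, 1.56e-3, 1.65e-6   # cm*K/W, cm/W, cm/(W*K^2)

def kappa_si(T):
    """Thermal conductivity of bulk silicon, W/(cm*K)."""
    return 1.0 / (A + B * T + C * T**2)

for T in (300.0, 400.0, 500.0):
    print(f"T = {T:5.1f} K   kappa = {kappa_si(T):.3f} W/(cm*K)")
# A hotter lattice conducts heat worse, which raises the hot-spot
# temperature further -- the feedback behind the larger current
# degradation reported when this model replaces a constant kappa.
```
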
Item 150353
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications through imagery analysis techniques. Applications such as autonomous navigation and terrain classification that rely on image classification are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration to improve image classification. The method reduces the classification error rate by registering new images against previously obtained images before classifying them. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in their characteristics; registration therefore yields an image that matches the previously obtained image more closely, enabling better classification. To demonstrate the proposed method, the naïve Bayes and iterative closest point (ICP) algorithms are used for the classification and registration stages, respectively (a toy ICP alignment follows this entry). The implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration does improve naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the real datasets.
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
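
A toy version of the registration stage: a 2-D iterative closest point loop (nearest neighbours via scipy's cKDTree, closed-form rigid fit via SVD) aligning synthetic point sets that stand in for image features. The naïve Bayes classification stage and the LAGR data are not reproduced here.

```python
# Toy 2-D ICP: iterate nearest-neighbour matching and a Kabsch rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, ref, iters=20):
    src = src.copy()
    tree = cKDTree(ref)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest reference point
        matched = ref[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:   # avoid reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T                      # optimal rotation (Kabsch)
        src = (src - mu_s) @ R.T + mu_m     # apply the rigid transform
    return src

rng = np.random.default_rng(1)
ref = rng.uniform(0, 10, (200, 2))
theta = 0.15
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
src = ref @ rot.T + np.array([0.5, -0.3])   # rotated + shifted copy
aligned = icp(src, ref)
print("mean residual after alignment:",
      np.linalg.norm(aligned - ref, axis=1).mean())
```
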
Item 150398
Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth limitations. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. Motivated by the need for enabling techniques for OFDM over UWA channels, the major contributions of this thesis include: (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple-access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors (a baseline one-tap equalizer sketch follows this entry). The multiple-resampling designs, where each branch handles the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also shown to be necessary for UWA OFDMA systems: the design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel-capacity evaluation results.
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
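
As a baseline for the equalization ideas above, here is a numpy-only sketch of per-subcarrier (one-tap) frequency-domain equalization of a cyclic-prefixed OFDM symbol over a static multipath channel. The thesis targets the harder time-varying UWA case, where ICI couples subcarriers and the equalizer becomes banded rather than diagonal; this toy shows only the diagonal special case.

```python
# One-tap frequency-domain equalization of a CP-OFDM symbol over a
# static multipath channel (the ICI-free special case).
import numpy as np

rng = np.random.default_rng(2)
N, CP = 64, 16
h = np.array([1.0, 0.5, 0.25])               # multipath impulse response

bits = rng.integers(0, 2, (N, 2))
syms = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK

tx = np.fft.ifft(syms) * np.sqrt(N)
tx_cp = np.concatenate([tx[-CP:], tx])        # add cyclic prefix
rx = np.convolve(tx_cp, h)[CP:CP + N]         # channel, then CP removal

Y = np.fft.fft(rx) / np.sqrt(N)
H = np.fft.fft(h, N)                          # channel frequency response
est = Y / H                                   # one tap per subcarrier
print("max symbol error:", np.abs(est - syms).max())
```

With a CP longer than the channel memory, the channel acts as a circular convolution, so each subcarrier sees a single complex gain and the division recovers the symbols exactly; rapid channel variation breaks this diagonal structure, which is what the thesis's ICI equalizers address.
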
Item 150389
Description
Radiation-induced gain degradation in bipolar devices is considered the primary threat to linear bipolar circuits operating in the space environment. The damage is caused primarily by charged particles trapped in the Earth's magnetosphere, by the solar wind, and by cosmic rays. This constant radiation exposure leads to early end-of-life expectancies for many electronic parts. Exposure to ionizing radiation increases the density of oxide and interfacial defects in bipolar oxides, leading to an increase in base current in bipolar junction transistors. This radiation-induced excess base current is the primary cause of current-gain degradation, and analysis of the base-current response enables measurement of the defects generated by radiation exposure. In addition to radiation, the space environment is characterized by extreme temperature fluctuations. Temperature, like radiation, has a very strong impact on base current, so a technique for separating radiation effects from thermal effects is necessary to accurately measure radiation-induced damage in space. This thesis focuses on the extraction of radiation damage in lateral PNP bipolar junction transistors operating in the space environment, describes the measurement techniques used, and provides a quantitative analysis methodology for separating radiation and thermal effects on the bipolar base current (a schematic model follows this entry).
Contributors: Campola, Michael J. (Author) / Barnaby, Hugh J. (Thesis advisor) / Holbert, Keith E. (Committee member) / Vasileska, Dragica (Committee member) / Arizona State University (Publisher)
Created: 2011
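
A schematic numpy model of the separation problem described above: radiation damage is represented as growth of the non-ideal (n ≈ 2) recombination component of the base current, while temperature rescales the whole characteristic through the thermal voltage. All saturation-current values are invented for illustration.

```python
# Schematic only: invented saturation currents; radiation raises the
# n ~ 2 recombination component, temperature rescales everything via kT/q.
import numpy as np

K_B = 8.617e-5                  # Boltzmann constant, eV/K

def i_base(v_be, T, i_s1=1e-16, i_s2=1e-12):
    vt = K_B * T                # thermal voltage, volts
    ideal  = i_s1 * np.exp(v_be / vt)        # n = 1 diffusion component
    recomb = i_s2 * np.exp(v_be / (2 * vt))  # n = 2 recombination component
    return ideal + recomb

v = 0.6
pre  = i_base(v, 300.0)                   # pre-irradiation baseline
post = i_base(v, 300.0, i_s2=50e-12)      # radiation-induced defect growth
print(f"excess base current at matched T: {post - pre:.3e} A")
# Comparing pre and post at the *same* temperature isolates the
# radiation-induced excess; an uncorrected temperature shift would
# masquerade as damage, which is the separation problem addressed above.
```
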
Item 149867
Description
Following the success of incorporating perceptual models into audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimizing an objective function that directly or indirectly incorporates properties of human perception. This dissertation investigates the problems associated with directly embedding an auditory model in the objective-function formulation and proposes solutions to the high-complexity issues that arise in real-time speech/audio algorithms. The specific problems addressed are: (1) the development of approximate but computationally efficient auditory-model implementations that remain consistent with the principles of psychoacoustics, and (2) the development of a mapping scheme that allows a time/frequency-domain representation to be synthesized from its equivalent auditory-model output. The first problem addresses the high computational cost of perceptual objective functions that require repeated application of the auditory model to evaluate candidate solutions. Frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory-model stages (a toy pruning illustration follows this entry). The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database; experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for sinusoidal signals, employing the proposed auditory-pattern combining technique together with a look-up table of representative auditory patterns. The second problem involves estimating the auditory representation that minimizes a perceptual objective function and transforming that auditory pattern back to its equivalent time/frequency representation, which avoids repeatedly applying the auditory-model stages to candidate time/frequency vectors. A constrained mapping scheme is developed by linearizing certain auditory-model stages, ensuring a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated into a perceptual speech-enhancement algorithm and a sinusoidal component-selection task.
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
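
A toy illustration of the frequency-pruning idea: spectral components far below the frame maximum contribute almost nothing to per-band excitation, so they can be skipped before the expensive auditory-model stages. The band structure, compressive exponent, and threshold below are crude stand-ins chosen only to show the cost/error trade, not the dissertation's auditory model.

```python
# Toy frequency pruning: drop components > 60 dB below the frame maximum
# before the per-band excitation/loudness computation.
import numpy as np

rng = np.random.default_rng(3)
spec = 10 ** rng.uniform(-9, 0, (24, 85))   # 24 bands x 85 components

def total_loudness(power):
    E = power.sum(axis=1)                   # per-band excitation
    return np.sum(E ** 0.23)                # compressive specific loudness

full = total_loudness(spec)
mask = spec > spec.max() * 1e-6             # keep components within 60 dB
pruned = total_loudness(np.where(mask, spec, 0.0))

print(f"components processed: {mask.mean():.1%}")
print(f"relative loudness error: {abs(full - pruned) / full:.3%}")
```
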
Item 149937
Description
There will always be a need for high-current/voltage transistors. A transistor that can handle either or both is the silicon metal-semiconductor field effect transistor (MESFET). An additional advantage of silicon MESFETs is that they can be integrated into the standard silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process flow. This makes the silicon MESFET valuable in any standard CMOS circuit that would otherwise need a separate integrated circuit (IC) to switch power at high current or voltage, because it allows this function to be performed on a single chip, thereby cutting costs. The MESFET's ability to cost-effectively serve this and many other high-current/voltage application markets is what drives the study of MESFET optimization. Silicon MESFETs that are integrated into standard SOI CMOS processes often receive dopings during fabrication that would not ideally be present in a process made exclusively for MESFETs. Since these remnants of SOI CMOS processing affect the operation of a MESFET, their effect can be seen in the current-voltage characteristics of a measured device (a first-order I-V sketch follows this entry). Device simulations are compared to measured silicon MESFET data in order to deduce the cause and effect of many of these SOI CMOS remnants. MESFETs can be made in both fully depleted (FD) and partially depleted (PD) SOI CMOS technologies, and device simulations are used to compare FD and PD MESFETs, showing the advantages and disadvantages of MESFETs fabricated in each technology. It is shown that PD MESFETs have the highest current-per-area capability. Accordingly, a layout optimization method that further increases the current-per-area capability of the PD silicon MESFET is presented, derived, and proven to first order.
Contributors: Sochacki, John (Author) / Thornton, Trevor J. (Thesis advisor) / Schroder, Dieter (Committee member) / Vasileska, Dragica (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2011
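
For orientation, here is a sketch of the Curtice quadratic MESFET drain-current model, a standard first-order description of MESFET I-V behavior. The parameter values are illustrative, not fitted to the SOI CMOS devices studied in the dissertation.

```python
# Curtice quadratic model for an n-channel MESFET:
#   Id = beta * (Vgs - Vt)^2 * (1 + lambda*Vds) * tanh(alpha*Vds)
# tanh(alpha*Vds) spans the linear region; (1 + lambda*Vds) models
# finite output conductance in saturation. Illustrative parameters.
import numpy as np

def curtice_id(v_gs, v_ds, beta=2e-3, v_t=-1.2, lam=0.05, alpha=2.0):
    """Drain current (A) of an n-channel MESFET, Curtice model."""
    v_ov = np.maximum(v_gs - v_t, 0.0)       # no current below threshold
    return beta * v_ov**2 * (1 + lam * v_ds) * np.tanh(alpha * v_ds)

v_ds = np.linspace(0, 5, 6)
for v_gs in (-1.0, -0.5, 0.0):
    i_d = curtice_id(v_gs, v_ds) * 1e3       # in mA
    print(f"Vgs = {v_gs:+.1f} V:", np.array2string(i_d, precision=2), "mA")
```
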
Item 149915
Description
Spotlight-mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating the acquisition of large amounts of data to form even a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, limitations on SAR data acquisition, storage, and communication are common and can lead to data corruption and a resulting degradation of image quality. It is convenient to treat corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and adapted to be practical on modest computational resources (a generic baseline appears after this entry). The two algorithms are demonstrated on real-world SAR images, and their performance with respect to super-resolution, noise, coherent speckle, and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude more processing time than classical SAR image formation.
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
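
A textbook orthogonal matching pursuit (OMP) sketch for recovering a sparse scene from a sparsely sampled Fourier aperture, the family of sparse-decomposition methods the thesis builds on. This is the generic baseline, not either of the two modified algorithms developed in the work.

```python
# OMP recovery of point targets from a random subset of DFT samples,
# a toy stand-in for a sparsely sampled SAR aperture.
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 128, 48, 5                     # scene size, samples kept, sparsity
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT "aperture" matrix
rows = rng.choice(n, m, replace=False)   # sparsely sampled aperture
A = F[rows]

x = np.zeros(n, complex)
x[rng.choice(n, k, replace=False)] = rng.normal(1, 0.2, k)  # point targets
y = A @ x                                # measured phase-history samples

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))  # best new atom
    As = A[:, support]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)           # re-fit support
    r = y - As @ coef                                       # new residual

x_hat = np.zeros(n, complex)
x_hat[support] = coef
print("support recovered:", sorted(support) == sorted(np.flatnonzero(x)))
```
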
Item 149902
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target-focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range-Doppler and range-migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can enhance SAR image-formation processing is the fractional Fourier transform (FRFT). This transform has been introduced to the signal processing community relatively recently, and it has shown many promising applications in SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This is achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolutions between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data are first processed into imagery using the conventional chirp scaling algorithm. These results are then compared against a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets (a dechirp-and-focus sketch follows this entry). The FRFT can also provide focusing enhancements at extended ranges.
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
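
A compact numpy illustration of the dechirp-and-focus principle underlying both chirp scaling and fractional Fourier processing: multiplying a linear-FM return by a matched reference chirp turns range delay into a single tone, which an FFT focuses to a peak whose peak-to-sidelobe ratio can then be measured. All parameters are invented, not RADARSAT-1 values.

```python
# Dechirp a linear-FM echo and focus it with an FFT; measure the
# resulting beat frequency and peak-to-sidelobe ratio.
import numpy as np

fs, T, kr = 1e6, 1e-3, 5e8           # sample rate (Hz), pulse (s), chirp rate (Hz/s)
t = np.arange(int(fs * T)) / fs
tau = 1.85e-4                        # toy round-trip delay within the window
echo = np.exp(1j * np.pi * kr * (t - tau) ** 2)       # linear-FM return

deramped = echo * np.exp(-1j * np.pi * kr * t ** 2)   # matched reference chirp
spectrum = np.abs(np.fft.fft(deramped * np.hamming(len(t))))
freqs = np.fft.fftfreq(len(t), 1 / fs)

peak = int(spectrum.argmax())
print(f"beat frequency ~ {abs(freqs[peak]) / 1e3:.1f} kHz "
      f"(expected {kr * tau / 1e3:.1f} kHz)")
sidelobes = np.delete(spectrum, range(peak - 5, peak + 6))  # mask mainlobe
psr_db = 20 * np.log10(spectrum[peak] / sidelobes.max())
print(f"peak-to-sidelobe ratio ~ {psr_db:.1f} dB")
```
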
Item 150071
Description
In semiconductor physics, many material properties and phenomena can be brought to light through specific changes to the material, so a tool for defining new material properties to highlight a given phenomenon greatly increases the ability to understand it. The generalized Monte Carlo tool allows the user to do exactly that by keeping every parameter used to define a material, within the non-parabolic band approximation, under the user's control. A material is defined by specifying its valleys, their energies, and the valley effective masses and directions; the types of scattering to be included can also be chosen (a minimal sketch of the transport kernel follows this entry). With its deployment on www.nanoHUB.org, the tool is available to users around the world, making it a useful educational resource that can be incorporated into curricula. The tool is integrated with the Rappture interface to provide user-friendly access: the user can define a material in an easy, systematic way without having to worry about the underlying code, output results are automatically graphed, and, since the code uses an analytic band-structure model, it runs relatively fast. The versatility of the tool has been investigated, producing results that closely match experimental values for some common materials, and users can easily modify the current parameter sets to obtain even more accurate results.
Contributors: Hathwar, Raghuraj (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M. (Committee member) / Saraniti, Marco (Committee member) / Arizona State University (Publisher)
Created: 2011
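
A stripped-down, single-valley Monte Carlo sketch in the spirit of the generalized tool: the non-parabolic dispersion E(1 + alpha*E) = hbar^2 k^2 / (2 m*), exponentially distributed free flights against a constant total scattering rate, and a crude one-dimensional momentum randomization at each scattering event. Every parameter is an illustrative stand-in, not a calibrated material definition.

```python
# Toy 1-D ensemble Monte Carlo with a non-parabolic band.
import numpy as np

HBAR, Q = 1.0546e-34, 1.602e-19
M = 0.26 * 9.109e-31              # effective mass (Si-like), kg
ALPHA = 0.5 / Q                   # non-parabolicity, 1/J (= 0.5 per eV)
GAMMA = 1e13                      # constant total scattering rate, 1/s
E_FIELD = 1e5                     # applied field along k, V/m
rng = np.random.default_rng(5)

def velocity(k):
    """Group velocity of the non-parabolic band, v = hbar*k / (m*(1+2aE))."""
    e = (np.sqrt(1 + 2 * ALPHA * (HBAR * k) ** 2 / M) - 1) / (2 * ALPHA)
    return HBAR * k / (M * (1 + 2 * ALPHA * e))

def drift_velocity(t_total=2e-12):
    k, t, v_dt = 0.0, 0.0, 0.0
    while t < t_total:
        dt = min(rng.exponential(1 / GAMMA), t_total - t)
        k -= Q * E_FIELD * dt / HBAR     # free flight: hbar dk/dt = -qE
        v_dt += velocity(k) * dt         # time-weighted velocity sample
        t += dt
        k = abs(k) * rng.uniform(-1, 1)  # crude 1-D direction randomization
    return v_dt / t_total

v = np.mean([drift_velocity() for _ in range(200)])
print(f"ensemble drift velocity ~ {v:.3e} m/s")
```
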