This collection includes both ASU theses and dissertations, submitted by graduate students, and theses from Barrett, The Honors College, submitted by undergraduate students.


Description

Test cost has become a significant portion of device cost and a bottleneck in high-volume manufacturing. Increasing integration density and shrinking feature sizes have increased test time and cost while reducing observability. Test engineers must put in a tremendous effort to keep test cost within an acceptable budget. Unfortunately, there is no single straightforward solution to the problem. The products being tested span several application domains and distinct customer profiles: some are required to operate for long periods of time, while others must be optimized for low cost. This multitude of constraints and goals makes it impossible to find a single solution that works for all cases. Hence, test development and optimization are typically design- and circuit-dependent, and even process-specific; test optimization therefore cannot be performed with a single test approach but necessitates a diversity of approaches. This work aims to address test cost minimization and test quality improvement at various levels. In the first chapter, we investigate pre-silicon strategies, such as design-for-test and pre-silicon statistical simulation optimization. In the second chapter, we investigate efficient post-silicon test strategies, such as adaptive test, adaptive multi-site test, outlier analysis, and process shift detection and tracking.
Contributors: Yilmaz, Ender (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
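
As a rough illustration of the outlier-analysis idea this abstract mentions, the sketch below derives adaptive pass/fail limits from recent production data using a median/MAD estimate. It is a minimal, hypothetical Python example, not the dissertation's actual method; the window size, the constant k, and the measurement itself are assumptions.

```python
# Minimal sketch of statistical outlier screening for adaptive test.
# Hypothetical example: window size, k, and the measurement are assumed,
# not taken from the dissertation.
import numpy as np

def robust_limits(window: np.ndarray, k: float = 6.0):
    """Adaptive pass/fail limits from recent parts, using the median and
    MAD so the limits themselves resist contamination by outliers."""
    med = np.median(window)
    sigma = 1.4826 * np.median(np.abs(window - med))  # MAD -> sigma (normal)
    return med - k * sigma, med + k * sigma

def screen(value: float, window: np.ndarray) -> bool:
    """True if the part passes; False flags an outlier for retest/binning."""
    lo, hi = robust_limits(window)
    return lo <= value <= hi

# Usage: limits track the process as the window slides over recent parts.
rng = np.random.default_rng(0)
window = rng.normal(1.20, 0.02, size=500)  # e.g., recent readings of an assumed 1.2 V node
print(screen(1.21, window), screen(1.45, window))  # True False
```

Comparing the limits across consecutive windows gives a crude process-shift detector in the same spirit.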
Description

The front end of almost all ADCs consists of a sample-and-hold circuit, which ensures that a constant analog value is digitized by the ADC. The design of a track-and-hold amplifier (THA) mainly focuses on the following parameters: input frequency, sampling frequency, dynamic range, hold pedestal, and feedthrough error. This thesis discusses the importance of these parameters to ADCs and reviews commonly used THA architectures. A new architecture using SiGe HBT transistors in a 130 nm BiCMOS technology is presented. Without complicated circuitry, the proposed topology achieves a high spurious-free dynamic range (SFDR) and low total harmonic distortion (THD), both important figures of merit that measure the nonlinearity of any THA. The proposed topology, implemented in the IBM 8HP 130 nm BiCMOS process, combines the emitter-follower switch typical of bipolar THAs with the output-steering technique proposed in previous work. With these techniques, and with an input cascode transistor that isolates the switch from the input during hold mode, better results have been achieved. The THA is designed for a maximum input frequency of 250 MHz at a sampling frequency of 500 MHz, with input currents of no more than 5 mA, achieving an SFDR of 78.49 dB. Simulation results are presented, illustrating the advantages and trade-offs of the proposed topology.
Contributors: Rao, Nishita Ramakrishna (Author) / Barnaby, Hugh (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
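
Since this abstract quotes SFDR and THD as its figures of merit, here is a small self-contained sketch of how both are typically computed from an FFT of a coherently sampled output tone. The signal, its synthetic distortion, and all numbers are illustrative assumptions, not the thesis's simulation data.

```python
# Hedged sketch: SFDR and THD from a coherently sampled tone, as one would
# measure at a THA output. The HD2/HD3 levels here are made up.
import numpy as np

fs, N = 500e6, 4096
M = 401                                   # odd bin count -> coherent sampling
fin = M / N * fs                          # ~48.95 MHz test tone
t = np.arange(N) / fs
x = (np.sin(2*np.pi*fin*t)
     + 1e-4*np.sin(2*2*np.pi*fin*t)      # synthetic HD2
     + 3e-5*np.sin(3*2*np.pi*fin*t))     # synthetic HD3

X = np.abs(np.fft.rfft(x)) / (N/2)        # single-sided amplitude spectrum
X[0] = 0.0                                # ignore DC
fund = int(np.argmax(X))
spur = float(np.max(np.delete(X, fund)))  # largest non-fundamental tone
sfdr_db = 20*np.log10(X[fund] / spur)

# THD over harmonics 2..5, folding aliased bins into the first Nyquist zone.
bins = [min((h*fund) % N, N - (h*fund) % N) for h in range(2, 6)]
thd_db = 10*np.log10(sum(X[b]**2 for b in bins) / X[fund]**2)
print(f"SFDR = {sfdr_db:.1f} dBc, THD = {thd_db:.1f} dBc")
```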
Description

As feature sizes shrink, achieving high gain becomes very difficult in deep sub-micron technologies. As supply voltages drop, cascodes become difficult to implement, and cascaded amplifiers are needed to achieve sufficient gain with the required output swing. This sets a fundamental limit on the SNR and hence on the maximum resolution an ADC can achieve. With the redundant signed digit (RSD) algorithm and range overlap, the sub-ADC can tolerate large comparator offsets, leaving the linearity and accuracy requirements to the DAC and the residue gain stage. Typically, the multiplying DAC requires a high-gain, wide-bandwidth op-amp, whose design becomes challenging in deep sub-micron technologies. This work presents a 12-bit, 25 MSPS, 1.2 V pipelined ADC using the split correlated level shifting (CLS) technique, implemented in the IBM 130 nm 8HP process using only CMOS devices, for Large Hadron Collider (LHC) applications. The CLS technique relaxes the op-amp gain requirement and improves the signal-to-noise ratio with rail-to-rail swing, without increasing power or the input sampling capacitor. An op-amp sharing technique has been incorporated with the split CLS technique, further decreasing the number of op-amps and hence the power. The entire pipelined converter is implemented as six 2.5-bit RSD stages, which decreases the latency associated with the pipelined architecture, one of the main requirements for the LHC along with low power. Two different OTAs have been designed for use in the split-CLS technique. Bootstrapped switches and pass-gate switches are used in the circuit, along with a low-power, dynamic, kickback-compensated comparator.
Contributors: Swaminathan, Visu Vaithiyanathan (Author) / Barnaby, Hugh (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
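
To make the "2.5-bit RSD stage" idea concrete, below is a behavioral sketch of such a stage and its digital code alignment: six comparators pick a decision d in {-3, ..., 3}, the amplified residue 4*vin - d*VREF is passed to the next stage, and the redundancy lets later stages absorb comparator offsets. The thresholds, VREF, and ideal components are textbook assumptions, not this thesis's circuit.

```python
# Behavioral sketch of a 2.5-bit RSD pipeline stage with digital code
# alignment; ideal components, standard 2.5-bit flash thresholds.
import numpy as np

VREF = 0.6  # assumed full scale: +/- VREF

def stage_25b(vin: float):
    """Sub-ADC decision d in {-3..3} and amplified residue 4*vin - d*VREF."""
    thresholds = np.array([-5, -3, -1, 1, 3, 5]) * VREF / 8   # six comparators
    d = int(np.sum(vin > thresholds)) - 3
    return d, 4 * vin - d * VREF

def convert(vin: float, n_stages: int = 6) -> int:
    """Cascade stages and align codes by powers of 4 (the interstage gain).
    The RSD overlap is what lets later stages correct comparator offsets."""
    code = 0
    for _ in range(n_stages):
        d, vin = stage_25b(vin)
        code = 4 * code + d
    return code  # spans [-4095, 4095] for six stages: ~12 bits plus redundancy

print(convert(0.123), convert(-0.123))
```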
Description

Fraud is defined as the use of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue worldwide to financial crimes and fraud, which can significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction comes with a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants employ various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Most proposed solutions focus on fraud detection accuracy and ignore financial considerations; the highly effective manual review process is likewise overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints and (b) is cost- and profit-sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction, and I show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM works with many supervised learners and obtains convincing results, using probability outputs directly from the trained model can pose problems, especially in deep learning, since the softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has brought unintended consequences in many situations, such as the infamous case of Google Photos' racist image-recognition algorithm, and has necessitated quantified uncertainty for each prediction. There have been recent efforts toward quantifying uncertainty in conventional deep learning methods (e.g., dropout as a Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present MIPSC, a mixed-integer programming framework for selective classification that investigates and combines model uncertainty and the predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC), focus on the critical real-world problem of online fraud management, and show that my approach significantly outperforms industry-standard methods in real-world settings.
Contributors: Yildirim, Mehmet Yigit (Author) / Davulcu, Hasan (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Huang, Dijiang (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2019
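
As a toy illustration of the cost- and profit-sensitive routing that PONRM-style selective classification performs, the sketch below picks the cheapest expected action (approve, decline, or send to manual review) for a transaction, given a model's fraud probability. All costs, the margin, and the assumption that an analyst always decides correctly are hypothetical, not the dissertation's model.

```python
# Hedged sketch of cost-sensitive selective classification for fraud
# management: route each transaction to the action with lowest expected cost.
# Every constant here is an illustrative assumption.
def route(p_fraud: float,
          amount: float,
          review_cost: float = 3.0,
          chargeback_penalty: float = 1.5) -> str:
    """approve: lose amount * penalty if it turns out to be fraud
    decline: lose the sale margin if it was actually legitimate
    review : pay the analyst; assume the analyst decides correctly"""
    margin = 0.05 * amount  # assumed profit margin on a legitimate sale
    costs = {
        "approve": p_fraud * amount * chargeback_penalty,
        "decline": (1 - p_fraud) * margin,
        "review": review_cost,
    }
    return min(costs, key=costs.get)

for p in (0.01, 0.10, 0.60):
    print(p, route(p_fraud=p, amount=120.0))  # approve, review, decline
```

The interesting regime is the middle one: paying for review is optimal exactly when both automated decisions carry a higher expected cost, which is the intuition behind learning classification and rejection regions jointly.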