Matching Items (4)

Description
High-speed current-steering DACs with high linearity are needed in applications such as wired and wireless communications, instrumentation, radar, and other direct digital synthesis (DDS) systems. However, a trade-off exists between the speed and resolution of Nyquist-rate current-steering DACs: as the resolution increases, more transistor area is required to meet the matching requirements for optimal linearity, and thus the overall speed of the DAC is limited.
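For context, a standard Pelgrom-model estimate (a textbook relation, not a derivation from this thesis) makes the area penalty explicit: the relative mismatch of a unit current source scales inversely with the square root of its gate area, and holding INL to half an LSB at N-bit resolution forces the unit area, and hence the total current-source array area, to grow exponentially with N:

\[
\sigma\!\left(\frac{\Delta I}{I}\right) = \frac{A_{\beta}}{\sqrt{WL}}
\quad\Longrightarrow\quad
(WL)_{\min} \propto 2^{N},
\qquad
A_{\mathrm{array}} = 2^{N}\,(WL)_{\min} \propto 2^{2N}.
\]

Each additional bit therefore roughly quadruples the current-source array area, which is the matching-size constraint that calibration is meant to relax.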

In this thesis work, a 12-bit current-steering DAC was designed with current sources scaled below the required matching size to decrease the area and increase the overall speed of the DAC. Scaling the current sources, however, introduces errors due to random mismatch between the current sources, so additional calibration hardware is necessary to ensure 12-bit linearity. This work presents the implementation of a self-calibrating DAC that corrects these amplitude errors while maintaining a lower overall area. Additionally, the DAC designed in this thesis investigates the implementation feasibility of a data-interleaved architecture: data interleaving doubles the total bandwidth of the DACs and increases the SQNR by an additional 3 dB.
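The 3 dB figure follows from the standard quantization-noise budget (again a textbook relation, not specific to this design): interleaving two DACs doubles the effective update rate, spreading the fixed quantization-noise power over twice the bandwidth, so for a fixed signal band the in-band SQNR improves by 10·log10(2) ≈ 3 dB:

\[
\mathrm{SQNR} \approx 6.02\,N + 1.76 + 10\log_{10}\!\left(\frac{f_{s,\mathrm{eff}}}{2\,f_{\mathrm{BW}}}\right)\ \mathrm{dB}.
\]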

The final results show that the calibration method effectively improves the linearity of the DAC. The DAC runs at update rates up to 400 MSPS with 75 dB SFDR, and achieves above 87 dB SFDR at update rates of 200 MSPS.
Contributors: Jankunas, Benjamin (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
RF transmitter manufacturers go to great extremes and expense to ensure that their products meet the RF output power requirements for which they are designed. There is therefore an urgent need for in-field monitoring of output power and gain to bring down the cost of RF transceiver testing and ensure product reliability. Built-in self-test (BIST) techniques can perform such monitoring without requiring expensive RF test equipment. In most BIST techniques, on-chip resources such as peak detectors, power detectors, or envelope detectors are used along with frequency down-conversion to analyze the output of the design under test (DUT). However, this conversion circuitry is subject to the same process, voltage, and temperature (PVT) variations as the DUT, which affects the measurement accuracy. It is therefore important to monitor BIST performance over time, voltage, and temperature so that accurate in-field measurements can be performed.

In this research, a multistep BIST solution using only baseband signals for test analysis is presented. An on-chip signal-generation circuit, which is robust with respect to time, supply voltage, and temperature variations, is used for self-calibration of the BIST system before the DUT measurement. Using mathematical modeling, an analytical expression for the output signal is first derived, and test signals are then devised to extract the output power of the DUT. Using a standard 180 nm IBM7RF CMOS process, a 2.4 GHz low-power RF IC incorporating the proposed BIST circuitry and an on-chip test signal source was designed and fabricated. Experimental results are presented, showing that this BIST method can monitor the DUT's output power with ±0.35 dB accuracy over a 20 dB power dynamic range.
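The calibrate-then-measure flow can be illustrated with a minimal sketch. Everything below is a hypothetical model, not the thesis design: the linear-in-dB detector response, the reference power levels, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a multistep BIST flow: first calibrate the baseband
# detector path against a known on-chip reference, then invert the fitted
# response to convert detector readings into DUT output power.
import numpy as np

def calibrate_detector(ref_powers_dbm, detector_volts):
    """Fit detector voltage vs. known reference power (assumed linear in dB)."""
    slope, intercept = np.polyfit(ref_powers_dbm, detector_volts, 1)
    return slope, intercept

def measure_output_power(detector_volt, slope, intercept):
    """Invert the calibrated detector response to estimate DUT power in dBm."""
    return (detector_volt - intercept) / slope

# Step 1: self-calibration with the on-chip signal source (known power levels).
ref_powers = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])      # dBm, assumed
ref_readings = np.array([0.21, 0.33, 0.45, 0.57, 0.69])   # V, illustrative
slope, intercept = calibrate_detector(ref_powers, ref_readings)

# Step 2: DUT measurement using only the calibrated baseband response.
print(f"Estimated DUT power: {measure_output_power(0.50, slope, intercept):.2f} dBm")
```

Because the same baseband path measures both the reference and the DUT, slow PVT drift in the detector cancels to first order, which is the rationale for calibrating immediately before each measurement.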
Contributors: Gangula, Sudheer Kumar Reddy (Author) / Kitchen, Jennifer (Thesis advisor) / Ozev, Sule (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
In this work, a 12-bit ADC with three types of calibration is proposed for high-speed security applications as well as a precision application. The converter serves both applications because it satisfies the necessary specifications: minimal device mismatch and offset, programmability to counteract aging effects, high SNR for increased ENOB, and a fast conversion rate. The converter implements three types of calibration for offset and gain error: a correlated double sampling integrator used in the first stage of the ADC, a power-up auto-zero technique that stores any offset in the digital code and subtracts it out when necessary, and automatic startup and manual calibration to control the common-mode voltages. The ADC is designed to monitor DC voltages for both the precision and high-speed applications, and its conversion time is programmable to 7 µs or 910 ns for the precision or high-speed application, respectively. The range of the input and reference supply is 0 to 1.25 V. The ADC was designed in Intel's 10 nm technology using a 1.8 V supply and consumes an area of 0.0705 mm². This thesis explores the challenges of designing a dual-purpose analog-to-digital converter, including: 1) increased offset in 10 nm technology, 2) a dual-application ADC that is both accurate and fast, 3) reducing the parasitic capacitance of the ADC, and 4) gain error that occurs in ADCs.
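A minimal sketch of the power-up auto-zero idea follows. The ADC model, mid-scale zero code, and names are illustrative assumptions, not the thesis circuit: at startup the input is conceptually driven to a known reference, the resulting code is stored as the offset, and it is subtracted from every later conversion.

```python
# Minimal auto-zero sketch (illustrative, not the thesis implementation):
# measure the code produced by a known zero input at power-up, store it as
# the offset, and subtract it from subsequent raw conversion codes.

ADC_BITS = 12
ZERO_CODE = 2048  # mid-scale code expected for a zero (reference) input

def auto_zero(adc_read_zero_input):
    """Measure and store the offset code at power-up."""
    return adc_read_zero_input() - ZERO_CODE

def corrected_sample(raw_code, offset_code):
    """Subtract the stored offset and clamp to the 12-bit code range."""
    code = raw_code - offset_code
    return max(0, min((1 << ADC_BITS) - 1, code))

# Example: a hypothetical ADC with a +13-code offset.
offset = auto_zero(lambda: ZERO_CODE + 13)
print(corrected_sample(2100, offset))  # -> 2087, offset removed
```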
Contributors: Schmelter, Brooke (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ogras, Umit Y. (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2017

Description
In this thesis, a test-time reduction (low-cost test) methodology for digitally calibrated pipeline analog-to-digital converters (ADCs) is presented. A long calibration time is required in the final test to validate the performance of these designs. To reduce total test time, an optimized calibration technique and prediction of the calibrated effective number of bits (ENOB) from the calibration coefficients are presented. With the prediction technique, failed devices can be identified without running the actual calibration, significantly reducing the total test time.
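One way such a predictor could work is sketched below. This is a hedged illustration only: the linear model, the coefficient-to-ENOB mapping, and all data are made-up assumptions, and the thesis's actual predictor may differ.

```python
# Illustrative ENOB-prediction sketch: learn a mapping from digital-calibration
# coefficients to post-calibration ENOB on fully characterized training devices,
# then screen production parts by predicted ENOB without running full calibration.
import numpy as np

# Training data (values invented for illustration): calibration coefficients
# and measured post-calibration ENOB from characterized devices.
coeffs = np.array([[0.98, 0.01], [1.02, -0.03], [0.95, 0.04], [1.05, -0.06]])
enob   = np.array([11.2, 11.0, 10.6, 10.1])

# Least-squares linear predictor: ENOB ~ w . [coeffs, 1]
X = np.hstack([coeffs, np.ones((len(coeffs), 1))])
w, *_ = np.linalg.lstsq(X, enob, rcond=None)

def predict_enob(c):
    """Predict post-calibration ENOB from raw calibration coefficients."""
    return np.append(c, 1.0) @ w

# Screening: fail a device whose predicted ENOB misses a spec of, say, 10.5 bits.
print("pass" if predict_enob([1.00, 0.00]) >= 10.5 else "fail")
```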
Contributors: Kim, Kibeom (Author) / Ozev, Sule (Thesis advisor) / Kitchen, Jennifer (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2013