Matching Items (6)

Description
High speed current-steering DACs with high linearity are needed in today's applications such as wired and wireless communications, instrumentation, radar, and other direct digital synthesis (DDS) applications. However, a trade-off exists between the speed and resolution of Nyquist rate current-steering DACs. As the resolution increases, more transistor area is required to meet matching requirements for optimal linearity and thus, the overall speed of the DAC is limited.

In this thesis work, a 12-bit current-steering DAC was designed with current sources scaled below the required matching size to decrease the area and increase the overall speed of the DAC. By scaling the current sources, however, errors due to random mismatch between current sources arise, and additional calibration hardware is necessary to ensure 12-bit linearity. This work presents how to implement a self-calibration DAC that corrects these amplitude errors while maintaining a lower overall area. Additionally, the DAC designed in this thesis investigates the implementation feasibility of a data-interleaved architecture. Data interleaving can increase the total bandwidth of the DACs by a factor of 2 while increasing SQNR by an additional 3 dB.

The final results show that the calibration method effectively improves the linearity of the DAC. The DAC runs at update rates up to 400 MSPS with 75 dB SFDR performance, and achieves above 87 dB SFDR at update rates of 200 MSPS.
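The 3 dB figure follows from the ideal quantization-noise model: doubling the effective update rate for a fixed signal bandwidth spreads the quantization noise over twice the bandwidth. Below is a minimal sketch of that relationship; the 12-bit resolution matches the thesis, but the 100 MHz signal bandwidth and the specific rates are illustrative assumptions, not values from the work.

```python
import numpy as np

def ideal_sqnr_db(bits, fs, signal_bw):
    """Ideal in-band SQNR of an N-bit converter, including the
    oversampling (processing) gain: 6.02*N + 1.76 + 10*log10(fs / (2*BW))."""
    return 6.02 * bits + 1.76 + 10 * np.log10(fs / (2 * signal_bw))

bits = 12
signal_bw = 100e6                                      # assumed signal bandwidth
single      = ideal_sqnr_db(bits, 400e6, signal_bw)    # one DAC at 400 MSPS
interleaved = ideal_sqnr_db(bits, 800e6, signal_bw)    # two data-interleaved DACs

print(f"single DAC:       {single:.1f} dB")
print(f"interleaved DACs: {interleaved:.1f} dB  (+{interleaved - single:.1f} dB)")
```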
Contributors: Jankunas, Benjamin (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
RF transmitter manufacturers go to great extremes and expense to ensure that their products meet the RF output power requirements for which they are designed. Therefore, there is an urgent need for in-field monitoring of output power and gain to bring down the costs of RF transceiver testing and ensure product reliability. Built-in self-test (BIST) techniques can perform such monitoring without the requirement for expensive RF test equipment. In most BIST techniques, on-chip resources, such as peak detectors, power detectors, or envelope detectors, are used along with frequency down-conversion to analyze the output of the design under test (DUT). However, this conversion circuitry is subject to similar process, voltage, and temperature (PVT) variations as the DUT and affects the measurement accuracy. It is therefore important to monitor BIST performance over time, voltage, and temperature so that accurate in-field measurements can be performed.

In this research, a multistep BIST solution using only baseband signals for test analysis is presented. An on-chip signal generation circuit, which is robust with respect to time, supply voltage, and temperature variations, is used for self-calibration of the BIST system before the DUT measurement. Using mathematical modelling, an analytical expression for the output signal is derived first, and then test signals are devised to extract the output power of the DUT. Using a standard 180nm IBM7RF CMOS process, a 2.4GHz low-power RF IC incorporating the proposed BIST circuitry and on-chip test signal source is designed and fabricated. Experimental results are presented which show this BIST method can monitor the DUT's output power with +/- 0.35dB accuracy over a 20dB power dynamic range.
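As a rough illustration of why relative, self-calibrated measurements desensitize the result to detector PVT variation, the sketch below first estimates a simple detector's gain and offset from two known on-chip reference levels and then uses that estimate to read back a DUT power. The linear detector model, reference levels, and numbers are assumptions for illustration, not the signal model derived in the thesis.

```python
import numpy as np

def detector(p_dbm, gain, offset):
    # Simplified detector: baseband reading assumed linear in input power (dBm).
    return gain * p_dbm + offset

rng = np.random.default_rng(0)
gain = 0.021 * (1 + 0.1 * rng.standard_normal())   # PVT-varied detector gain
offset = 0.05 + 0.01 * rng.standard_normal()       # PVT-varied detector offset

# Step 1: self-calibration against two known on-chip reference levels (assumed)
p_ref = np.array([-10.0, 0.0])                     # dBm
v_ref = detector(p_ref, gain, offset)
g_est = (v_ref[1] - v_ref[0]) / (p_ref[1] - p_ref[0])
o_est = v_ref[0] - g_est * p_ref[0]

# Step 2: measure the DUT through the now-characterized detector
p_dut_true = 6.3                                   # dBm, unknown to the BIST
p_dut_est = (detector(p_dut_true, gain, offset) - o_est) / g_est
print(f"estimated DUT output power: {p_dut_est:.2f} dBm")
```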
Contributors: Gangula, Sudheer Kumar Reddy (Author) / Kitchen, Jennifer (Thesis advisor) / Ozev, Sule (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
As integrated technologies are scaling down, there is an increasing trend in the process, voltage and temperature (PVT) variations of highly integrated RF systems. Accounting for these variations during the design phase requires a tremendous amount of time for predicting RF performance and optimizing it accordingly. Thus, there is an increasing gap between the need to relax the RF performance requirements at the design phase for rapid development and the need to provide high performance and low cost RF circuits that function with PVT variations. No matter how carefully designed, RF integrated circuits (ICs) manufactured with advanced technology nodes necessitate lengthy post-production calibration and test cycles with expensive RF test instruments. Hence, design-for-test (DFT) is proposed for low-cost and fast measurement of performance parameters during both post-production and in-field operation. For example, built-in self-test (BIST) is a DFT solution for low-cost on-chip measurement of RF performance parameters. In this dissertation, three aspects of automated test and calibration, including the DFT mathematical model, BIST hardware and built-in calibration, are covered for RF front-end blocks.

First, the theoretical foundation of a post-production test of RF integrated phased array antennas is proposed by developing a mathematical model to measure gain and phase mismatches between antenna elements without any electrical contact. The proposed technique is fast, cost-efficient, and uses near-field measurement of radiated power from the antennas; hence it requires a single test setup, is easy to implement, and is short in time, which makes it viable for industrialized high-volume IC production test.

Second, a BIST model intended for the characterization of I/Q offset, gain and phase mismatch of IQ transmitters without relying on external equipment is introduced. The proposed BIST method is based on on-chip amplitude measurement as in prior works; however, here the variations in the BIST circuit do not affect the target parameter estimation accuracy since the measurements are designed to be relative. The BIST circuit is implemented in 130nm technology and can be used for post-production and in-field calibration.

Third, a programmable low noise amplifier (LNA) is proposed which is adaptable to different application scenarios depending on the specification requirements. Its performance is optimized with regard to the required specifications, e.g. distance, power consumption, BER, data rate, etc. Statistical modeling is used to capture the correlations among measured performance parameters and calibration modes for fast adaptation. A machine learning technique is used to capture these non-linear correlations and build the probability distribution of a target parameter based on measurement results of the correlated parameters. The proposed concept is demonstrated by embedding built-in tuning knobs in an LNA design in 130nm technology. The tuning knobs are carefully designed to provide independent combinations of important performance parameters such as gain and linearity. A minimum number of switches is used to provide the desired tuning range without the need for an external analog input.
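The sketch below illustrates, with synthetic data, how a machine-learning regressor might map a few correlated, easy-to-measure parameters to a distribution over a harder-to-measure target parameter. The parameter names, numbers, and the random-forest choice are assumptions for illustration, not the model used in the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for per-device measurements of correlated parameters
# (e.g. gain, input match, bias current) and a target parameter (e.g. IIP3).
rng = np.random.default_rng(1)
n = 500
gain  = 15.0 + rng.normal(0, 0.8, n)     # dB
s11   = -12.0 + rng.normal(0, 1.2, n)    # dB
ibias = 2.0 + rng.normal(0, 0.15, n)     # mA
iip3  = -5.0 + 0.6 * (gain - 15) - 0.3 * (s11 + 12) + rng.normal(0, 0.2, n)

X = np.column_stack([gain, s11, ibias])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], iip3[:400])

# Per-tree predictions approximate a probability distribution of the target
# parameter for a new device, from which a tuning-knob setting could be chosen.
preds = np.stack([tree.predict(X[400:401]) for tree in model.estimators_])
print(f"predicted IIP3: {preds.mean():.2f} dBm  (spread {preds.std():.2f} dB)")
```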
Contributors: Shafiee, Maryam (Author) / Ozev, Sule (Thesis advisor) / Diaz, Rodolfo (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this work, a 12-bit ADC with three types of calibration is proposed for high-speed security applications as well as a precision application. The converter serves both applications because it satisfies all the necessary specifications, such as minimal device mismatch and offset, programmability to decrease aging effects, high SNR for increased ENOB, and a fast conversion rate. The designed converter implements three types of calibration for offset and gain error: a correlated double sampling integrator used in the first stage of the ADC, a power-up auto-zero technique implemented in the digital code to store any offset and subtract it out if necessary, and an automatic startup and manual calibration to control the common-mode voltages. The proposed ADC was designed in Intel's 10nm technology and monitors DC voltages for the precision and high-speed applications. The conversion rate of the analog-to-digital converter is programmable to 7µs or 910ns, depending on the precision or high-speed application, respectively. The range of the input and reference supply is 0 to 1.25V. The ADC uses a 1.8V supply and consumes an area of 0.0705mm2. This thesis explores challenges of designing a dual-purpose analog-to-digital converter, which include: 1.) increased offset in 10nm technology, 2.) a dual-application ADC that is both accurate and fast, 3.) reducing the parasitic capacitance of the ADC, and 4.) gain error that occurs in ADCs.
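A conceptual sketch of the correlated double sampling idea is given below: the same amplifier offset appears in both the reset sample and the signal sample, so subtracting the two removes it to first order. The offset magnitude, noise level, and input value are invented for illustration and are not taken from the design.

```python
import numpy as np

rng = np.random.default_rng(2)
offset = rng.normal(0, 5e-3)            # hypothetical first-stage offset, volts
vin = 0.42                              # input voltage being converted

sample_reset  = 0.0 + offset + rng.normal(0, 50e-6)   # phase 1: offset only
sample_signal = vin + offset + rng.normal(0, 50e-6)   # phase 2: signal + offset

vout_cds = sample_signal - sample_reset                # offset cancels
print(f"uncorrected error:  {abs(offset) * 1e3:.2f} mV")
print(f"CDS residual error: {abs(vout_cds - vin) * 1e3:.3f} mV")
```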
Contributors: Schmelter, Brooke (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ogras, Umit Y. (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
In this thesis, a test time reduction (low-cost test) methodology for digitally-calibrated pipeline analog-to-digital converters (ADCs) is presented. A long calibration time is required in the final test to validate the performance of these designs. To reduce the total test time, an optimized calibration technique and prediction of the calibrated effective number of bits (ENOB) from the calibration coefficients are presented. With the prediction technique, failing devices can be identified without performing the actual calibration, which removes a significant amount of time from the total test time.
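A minimal sketch of the screening idea follows: fit a simple model on a characterization lot that maps calibration coefficients to the post-calibration ENOB, then flag devices whose predicted ENOB misses a limit without running the full calibration. The coefficient model, the coefficient-to-ENOB relationship, and the 10.5-bit limit are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
coeffs = rng.normal(0, 1, (n, 4))       # per-device digital calibration coefficients
enob = 11.2 - 0.4 * np.abs(coeffs[:, 0]) - 0.2 * np.abs(coeffs[:, 1]) \
       + rng.normal(0, 0.05, n)         # assumed link between coefficients and ENOB

# Least-squares fit on the first 200 devices (features: |coefficients| + intercept)
X = np.column_stack([np.abs(coeffs), np.ones(n)])
w, *_ = np.linalg.lstsq(X[:200], enob[:200], rcond=None)

pred = X[200:] @ w                      # predicted ENOB for the remaining devices
spec = 10.5                             # hypothetical pass/fail limit, bits
print(f"predicted failures: {(pred < spec).sum()} of {len(pred)} devices")
```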
Contributors: Kim, Kibeom (Author) / Ozev, Sule (Thesis advisor) / Kitchen, Jennifer (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Micro Electro Mechanical Systems (MEMS) based accelerometers are among the most commonly used sensors. They are found in devices such as airbags, smartphones, airplanes, and many more. Although they are very accurate, they degrade with time or develop offsets due to damage. To fix this, they must be recalibrated using a physical calibration technique, which is an expensive process to conduct. However, these sensors can also be calibrated in-field by applying an on-chip electrical stimulus to the sensor. Electrical stimulus-based calibration could bring the cost of testing and calibration down significantly compared to factory testing. In this thesis, simulations are presented to formulate a statistical prediction model based on an electrical stimulus. Results from two different approaches of electrical calibration are discussed, and a prediction model with a root mean square error of 1% is presented. Experiments were conducted on commercially available accelerometers to validate the techniques used for the simulations.
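The sketch below illustrates the prediction step on synthetic data: a device's response to an on-chip electrical stimulus is assumed to correlate with its mechanical sensitivity, so a model fit on a characterization lot can predict the physical calibration factor from the electrical test alone. The linear relationship and noise levels are assumptions, not the thesis's measured data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
elec_resp = 1.0 + rng.normal(0, 0.05, n)            # normalized electrical-stimulus response
sens = 0.98 * elec_resp + rng.normal(0, 0.01, n)    # normalized mechanical sensitivity

a, b = np.polyfit(elec_resp[:150], sens[:150], 1)   # fit on a characterization lot
pred = a * elec_resp[150:] + b                      # predict held-out devices

rmse_pct = np.sqrt(np.mean((pred - sens[150:]) ** 2)) / np.mean(sens[150:]) * 100
print(f"prediction RMSE: {rmse_pct:.2f} % of nominal sensitivity")
```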
Contributors: Bassi, Ishaan (Author) / Ozev, Sule (Thesis advisor) / Christen, Jennifer Blain (Committee member) / Vasileska, Dragica (Committee member) / Arizona State University (Publisher)
Created: 2020