Matching Items (3)
Description
High-speed, high-linearity current-steering DACs are needed in today's applications such as wired and wireless communications, instrumentation, radar, and other direct digital synthesis (DDS) systems. However, a trade-off exists between the speed and resolution of Nyquist-rate current-steering DACs: as the resolution increases, more transistor area is required to meet the matching requirements for optimal linearity, which in turn limits the overall speed of the DAC.
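
For context, a commonly cited sizing relation (not stated in the abstract) connects the unit-current matching needed for N-bit static linearity at a given yield to the minimum gate area of each current source:

\[
\frac{\sigma_I}{I} \le \frac{1}{2C\sqrt{2^{N}}},
\qquad
\sigma^{2}\!\left(\frac{\Delta I}{I}\right) \approx \frac{1}{WL}\left[A_{\beta}^{2} + \frac{4A_{V_T}^{2}}{(V_{GS}-V_{T})^{2}}\right],
\]

where C is a yield coefficient (about 3 for roughly 99.7% yield) and A_beta, A_VT are process matching coefficients. For N = 12 and C = 3 this demands a unit-current mismatch below roughly 1/384, or about 0.26%, and since the required gate area WL grows as 2^N, each additional bit roughly doubles the unit current-source area, which is the speed-versus-resolution trade-off described above.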

In this thesis work, a 12-bit current-steering DAC was designed with current sources scaled below the required matching size to decrease the area and increase the overall speed of the DAC. Scaling the current sources, however, introduces errors due to random mismatch between the current sources, so additional calibration hardware is necessary to ensure 12-bit linearity. This work presents a self-calibrating DAC that corrects these amplitude errors while maintaining a lower overall area. Additionally, the DAC designed in this thesis investigates the implementation feasibility of a data-interleaved architecture, in which interleaving two DACs doubles the total bandwidth and improves the SQNR by an additional 3 dB.
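
As a sanity check on the interleaving figures (an illustrative relation, not taken from the thesis): for a fixed signal bandwidth f_BW, the in-band SQNR of an ideal N-bit converter is

\[
\mathrm{SQNR} \approx 6.02\,N + 1.76 + 10\log_{10}\!\left(\frac{f_{s,\mathrm{eff}}}{2 f_{BW}}\right)\ \mathrm{dB},
\]

so doubling the effective update rate by interleaving two DACs adds 10*log10(2), or about 3 dB; equivalently, at a fixed per-DAC update rate, interleaving doubles the usable Nyquist bandwidth.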

The final results show that the calibration method effectively improves the linearity of the DAC. The DAC runs at update rates up to 400 MSPS with 75 dB SFDR and achieves better than 87 dB SFDR at update rates of 200 MSPS.
Contributors: Jankunas, Benjamin (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As integrated technologies are scaling down, there is an increasing trend in the process, voltage and temperature (PVT) variations of highly integrated RF systems. Accounting for these variations during the design phase requires a tremendous amount of time for predicting RF performance and optimizing it accordingly. Thus, there is an increasing gap between the need to relax the RF performance requirements at the design phase for rapid development and the need to provide high-performance, low-cost RF circuits that function with PVT variations. No matter how carefully designed, RF integrated circuits (ICs) manufactured with advanced technology nodes necessitate lengthy post-production calibration and test cycles with expensive RF test instruments. Hence, design-for-test (DFT) is proposed for low-cost and fast measurement of performance parameters during both post-production and in-field operation. For example, built-in self-test (BIST) is a DFT solution for low-cost on-chip measurement of RF performance parameters. In this dissertation, three aspects of automated test and calibration, namely the DFT mathematical model, BIST hardware and built-in calibration, are covered for RF front-end blocks.

First, the theoretical foundation of a post-production test of RF integrated phased-array antennas is proposed by developing a mathematical model that measures gain and phase mismatches between antenna elements without any electrical contact. The proposed technique is fast and cost-efficient: it uses near-field measurements of the power radiated by the antenna elements, so it requires only a single test setup, is easy to implement, and keeps test time short, which makes it viable for industrial high-volume production test of integrated ICs.
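
For intuition only (a generic power-only identity under assumed notation, not necessarily the model developed in the dissertation), let x_k denote the complex contribution of element k at a near-field probe. Single-element and pairwise power measurements then encode the relative gain and phase of the elements:

\[
P_k = |x_k|^{2},
\qquad
P_{kl}(\theta) = \left|x_k + e^{j\theta}x_l\right|^{2}
             = P_k + P_l + 2\sqrt{P_k P_l}\,\cos\!\big(\theta + \angle x_l - \angle x_k\big),
\]

so measuring P_kl at two known excitation offsets, e.g. theta = 0 and theta = 90 degrees, yields both the relative amplitude sqrt(P_l/P_k) and the relative phase angle(x_l) - angle(x_k); the element gain and phase mismatches then follow once the known element-to-probe path terms are removed.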

Second, a BIST model intended for characterization of the I/Q offset, gain and phase mismatch of IQ transmitters without relying on external equipment is introduced. The proposed BIST method is based on on-chip amplitude measurement, as in prior work; here, however, variations in the BIST circuit itself do not affect the accuracy of the target parameter estimation, since the measurements are designed to be relative. The BIST circuit is implemented in 130nm technology and can be used for post-production and in-field calibration.
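
For background, a standard baseband impairment model (textbook relations, not specific to this dissertation) writes the transmitter output with DC offsets d_I and d_Q, gain mismatch g and phase mismatch phi as

\[
s(t) = \big(I(t) + d_I\big)\cos(\omega t) - g\,\big(Q(t) + d_Q\big)\sin(\omega t + \varphi),
\]

which for a single-tone test signal produces carrier (LO) leakage and an image sideband with image-rejection ratio

\[
\mathrm{IRR} = \frac{1 + 2g\cos\varphi + g^{2}}{1 - 2g\cos\varphi + g^{2}},
\]

so the relative tone levels seen by an on-chip amplitude (envelope) detector encode g, phi and the offsets, which is what makes power-only estimation of these impairments feasible.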

Third, a programmable low noise amplifier (LNA) is proposed which is adaptable to different application scenarios depending on the specification requirements. Its performance is optimized with regard to the required specifications, e.g. distance, power consumption, BER, data rate, etc. Statistical modeling is used to capture the correlations among measured performance parameters and calibration modes for fast adaptation. A machine learning technique is used to capture these non-linear correlations and build the probability distribution of a target parameter based on measurement results of the correlated parameters. The proposed concept is demonstrated by embedding built-in tuning knobs in an LNA designed in 130nm technology. The tuning knobs are carefully designed to provide independent combinations of important performance parameters such as gain and linearity. A minimum number of switches is used to provide the desired tuning range without the need for an external analog input.
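
As a loose illustration of that last idea (a minimal sketch with invented data and variable names, not the dissertation's actual model or results), one simple way to turn correlated measurements into a probability distribution for an unmeasured target parameter is a conditional Gaussian fit over characterization data:

    import numpy as np

    # Hypothetical characterization data: rows = parts, columns =
    # [gain_dB, nf_dB, iip3_dBm]; iip3 plays the role of the "target"
    # parameter inferred from the two cheaply measured ones.
    rng = np.random.default_rng(0)
    gain = 15.0 + rng.normal(0.0, 0.8, 200)
    nf = 3.0 - 0.2 * (gain - 15.0) + rng.normal(0.0, 0.1, 200)
    iip3 = -5.0 + 0.6 * (gain - 15.0) + rng.normal(0.0, 0.3, 200)
    data = np.column_stack([gain, nf, iip3])

    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)

    def predict_target(measured, mu=mu, cov=cov):
        """Conditional Gaussian: mean and sigma of the target (last
        column) given the measured correlated parameters."""
        k = len(measured)                  # number of measured params
        s_mm = cov[:k, :k]                 # measured-measured block
        s_tm = cov[k:, :k]                 # target-measured block
        s_tt = cov[k:, k:]                 # target-target block
        w = s_tm @ np.linalg.inv(s_mm)
        mean = mu[k:] + w @ (np.asarray(measured) - mu[:k])
        var = s_tt - w @ s_tm.T
        return mean.item(), float(np.sqrt(var.item()))

    # Example: a part measures 16.1 dB gain and 2.8 dB NF on-chip;
    # infer its likely IIP3 (with uncertainty) without a direct
    # linearity measurement, then pick a calibration mode accordingly.
    mean, sigma = predict_target([16.1, 2.8])
    print(f"predicted IIP3 ~ {mean:.2f} dBm (sigma {sigma:.2f} dB)")

In practice the synthetic arrays would be replaced with measured characterization data and, as in the dissertation, a more expressive non-linear model could capture the correlations; the conditional-distribution idea stays the same.
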
Contributors: Shafiee, Maryam (Author) / Ozev, Sule (Thesis advisor) / Diaz, Rodolfo (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this work, a 12-bit ADC with three types of calibration is proposed for high-speed security applications as well as a precision application. The converter serves both applications because it satisfies the necessary specifications: minimal device mismatch and offset, programmability to reduce aging effects, high SNR for increased ENOB, and a fast conversion rate. The converter implements three types of calibration to correct offset and gain error: a correlated double sampling integrator in the first stage of the ADC, a power-up auto-zero technique implemented in the digital code to store any offset and subtract it out if necessary, and an automatic startup and manual calibration to control the common-mode voltages. The ADC is designed to monitor DC voltages for both the precision and high-speed applications. Its conversion time is programmable to 7 µs or 910 ns, for the precision or high-speed application, respectively, and the input and reference range is 0 to 1.25 V. The ADC is designed in Intel's 10 nm technology with a 1.8 V supply and occupies an area of 0.0705 mm². This thesis explores the challenges of designing a dual-purpose analog-to-digital converter, including: 1) the increased offset in 10 nm technology, 2) a dual-application ADC that is both accurate and fast, 3) reducing the parasitic capacitance of the ADC, and 4) gain error in ADCs.
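
For reference, a generic description of the first of those calibration techniques (not the exact circuit in the thesis): correlated double sampling takes two closely spaced samples, one containing only the error terms and one containing the signal plus the same error terms, and subtracts them,

\[
V_{\mathrm{out}}[n] = \big(V_{\mathrm{in}}[n] + V_{os} + v_{1/f}[n]\big) - \big(V_{os} + v_{1/f}[n - \tfrac{1}{2}]\big) \approx V_{\mathrm{in}}[n],
\]

so the offset cancels exactly and the 1/f noise, which changes little between the two samples, is strongly attenuated, which is why such a technique helps with the increased offset noted as a challenge above.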
Contributors: Schmelter, Brooke (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ogras, Umit Y. (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2017