Matching Items (5)
Description
Radio frequency (RF) transceivers require a disproportionately high effort in terms of test development time, test equipment cost, and test time. The relatively high test cost stems from two contributing factors. First, RF transceivers require the measurement of a diverse set of specifications, requiring multiple test set-ups and long test times, which complicates load-board design, debug, and diagnosis. Second, high-frequency operation necessitates the use of expensive equipment, resulting in a higher per-second test cost compared with mixed-signal or digital circuits. Moreover, in terms of non-recurring engineering cost, the need to measure complex specifications complicates the test development process and necessitates a long learning process for test engineers. Test time is dominated by the changing and settling time for each test set-up, so single-set-up test solutions are desirable. The loop-back configuration, in which the transmitter output is connected to the receiver input, is the desirable test set-up for RF transceivers, since it eliminates the reliance on expensive instrumentation for RF signal analysis and enables measuring multiple parameters at once.

In-phase and quadrature (IQ) imbalance, non-linearity, DC offset, and IQ time skews are some of the most detrimental imperfections in transceiver performance. Measuring these parameters in the loop-back mode is challenging due to the coupling between the receiver (RX) and transmitter (TX) parameters. Loop-back based solutions are proposed in this work to resolve this issue. A calibration algorithm for a subset of the above-mentioned impairments is also presented.

Error Vector Magnitude (EVM) is a system-level parameter that is specified for most advanced communication standards. EVM measurement often takes extensive test development effort, tester resources, and long test times. EVM is analytically related to system impairments, which are typically measured in a production test environment. Thus, the EVM test can be eliminated from the test list if the relations between EVM and system impairments are derived independently of the circuit implementation and manufacturing process. In this work, the focus is on the WLAN standard and on deriving the relations between EVM and three of the most detrimental impairments for QAM/OFDM-based systems (IQ imbalance, non-linearity, and noise).

Low-cost test techniques for measuring RF transceiver imperfections, combined with the ability to analytically compute EVM from the measured parameters, constitute a complete test solution for RF transceivers. These techniques, along with the proposed calibration method, can be used to improve yield by widening the pass/fail boundaries for transceiver imperfections. For all of the proposed methods, simulation and hardware measurements show that the proposed techniques provide accurate characterization of RF transceivers.
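As a rough illustration of the kind of analytical EVM model referred to above (an illustrative sketch, not the derivation from this work), the Python snippet below simulates a 16-QAM signal with TX IQ imbalance and additive noise and compares the simulated EVM against a simple closed-form approximation; the impairment values (eps, phi, snr_db) and helper names are assumptions chosen for the example.

```python
# Rough sketch (illustrative model, not the thesis' derivation): relate EVM to
# IQ imbalance and noise for a 16-QAM signal and check a closed-form
# approximation by simulation.  eps, phi, and snr_db are assumed example values.
import numpy as np

rng = np.random.default_rng(0)

def qam16_symbols(n):
    # Unit-average-power 16-QAM symbols.
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    return (rng.choice(levels, n) + 1j * rng.choice(levels, n)) / np.sqrt(10.0)

def evm_rms(rx, ref):
    # Root-mean-square error vector magnitude, normalized to reference power.
    return np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

n = 200_000
eps, phi, snr_db = 0.05, np.deg2rad(3.0), 30.0   # gain/phase imbalance, SNR

x = qam16_symbols(n)

# Baseband-equivalent TX IQ imbalance: y = alpha*x + beta*conj(x)
g = (1.0 + eps) * np.exp(1j * phi)
alpha, beta = (1.0 + g) / 2.0, (1.0 - np.conj(g)) / 2.0
y = alpha * x + beta * np.conj(x)

# Additive noise for the requested SNR (signal power is 1 by construction)
nvar = 10.0 ** (-snr_db / 10.0)
y = y + np.sqrt(nvar / 2.0) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Receiver removes the common complex gain; the image term and noise remain as error
measured = evm_rms(y / alpha, x)
predicted = np.sqrt(np.abs(beta / alpha) ** 2 + nvar / np.abs(alpha) ** 2)
print(f"simulated EVM = {100*measured:.2f}%, model EVM = {100*predicted:.2f}%")
```

Under small-impairment assumptions the simulated and predicted EVM agree closely, which is the property that allows a set of measured impairments to stand in for a direct EVM test.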
ContributorsNassery, Afsaneh (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Kiaei, Sayfe (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created2013
Description
High-speed current-steering DACs with high linearity are needed in today's applications, such as wired and wireless communications, instrumentation, radar, and other direct digital synthesis (DDS) applications. However, a trade-off exists between the speed and resolution of Nyquist-rate current-steering DACs. As the resolution increases, more transistor area is required to meet the matching requirements for optimal linearity, and thus the overall speed of the DAC is limited.

In this thesis work, a 12-bit current-steering DAC was designed with current sources scaled below the required matching size to decrease the area and increase the overall speed of the DAC. By scaling the current sources, however, errors due to random mismatch between current sources arise, and additional calibration hardware is necessary to ensure 12-bit linearity. This work presents how to implement a self-calibrating DAC that corrects amplitude errors while maintaining a lower overall area. Additionally, the DAC designed in this thesis investigates the implementation feasibility of a data-interleaved architecture. Data interleaving can double the total bandwidth of the DACs while increasing the SQNR by an additional 3 dB.
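A quick back-of-the-envelope check of that interleaving figure, assuming the quantization noise of an ideal 12-bit converter is spread uniformly over the doubled Nyquist band (a standard oversampling argument, not a result quoted from this thesis):

```latex
\mathrm{SQNR} \approx 6.02\,N + 1.76 + 10\log_{10}(\mathrm{OSR})\;\text{dB}
\;\Rightarrow\;
6.02 \times 12 + 1.76 + 10\log_{10}(2) \approx 74.0 + 3.0 = 77.0\;\text{dB}
```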

The final results show that the calibration method can effectively improve the linearity of the DAC. The DAC is able to run at update rates up to 400 MSPS with 75 dB SFDR performance, and achieves above 87 dB SFDR performance at update rates of 200 MSPS.
ContributorsJankunas, Benjamin (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created2014
Description
RF transmitter manufacturers go to great extremes and expense to ensure that their product meets the RF output power requirements for which it is designed. Therefore, there is an urgent need for in-field monitoring of output power and gain to bring down the cost of RF transceiver testing and ensure product reliability. Built-in self-test (BIST) techniques can perform such monitoring without the requirement for expensive RF test equipment. In most BIST techniques, on-chip resources, such as peak detectors, power detectors, or envelope detectors, are used along with frequency down-conversion to analyze the output of the design under test (DUT). However, this conversion circuitry is subject to similar process, voltage, and temperature (PVT) variations as the DUT and affects the measurement accuracy. So, it is important to monitor BIST performance over time, voltage, and temperature, such that accurate in-field measurements can be performed.

In this research, a multistep BIST solution using only baseband signals for test analysis is presented. An on-chip signal generation circuit, which is robust with respect to time, supply voltage, and temperature variations, is used for self-calibration of the BIST system before the DUT measurement. Using mathematical modelling, an analytical expression for the output signal is derived first, and then test signals are devised to extract the output power of the DUT. Using a standard 180 nm IBM7RF CMOS process, a 2.4 GHz low-power RF IC incorporating the proposed BIST circuitry and on-chip test signal source is designed and fabricated. Experimental results are presented, which show that this BIST method can monitor the DUT's output power with ±0.35 dB accuracy over a 20 dB power dynamic range.
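To make the self-calibration idea concrete, the sketch below (assumed function and signal names, not the chip's actual interface or procedure) shows how a baseband power reading taken on a known on-chip reference can remove the unknown, PVT-dependent detector/down-conversion gain from the DUT measurement:

```python
# Sketch of reference-based self-calibration for a baseband power readout.
# Names and levels are assumptions for illustration only.
import numpy as np

def baseband_power_db(i, q):
    # Mean-square power of the detected baseband I/Q samples, in dB.
    return 10.0 * np.log10(np.mean(i ** 2 + q ** 2))

def calibrated_output_power_dbm(i_dut, q_dut, i_ref, q_ref, p_ref_dbm):
    # The reference reading fixes the unknown detector/conversion gain, which
    # is common to both measurements and therefore cancels.
    offset = p_ref_dbm - baseband_power_db(i_ref, q_ref)
    return baseband_power_db(i_dut, q_dut) + offset

# Synthetic check: the detector gain drifts with PVT but drops out of the result.
t = np.arange(4096) / 4096.0
gain = 0.73                                    # unknown, PVT-dependent gain
i_ref, q_ref = gain * np.cos(2 * np.pi * 50 * t), gain * np.sin(2 * np.pi * 50 * t)
i_dut, q_dut = 2.0 * i_ref, 2.0 * q_ref        # DUT sits 6 dB above the reference
print(calibrated_output_power_dbm(i_dut, q_dut, i_ref, q_ref, p_ref_dbm=-10.0))
# -> about -4.0 dBm
```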
ContributorsGangula, Sudheer Kumar Reddy (Author) / Kitchen, Jennifer (Thesis advisor) / Ozev, Sule (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created2015
ContributorsBerry, John (Performer) / Morgan, Lanny (Performer) / Concert Jazz Band (Performer) / ASU Library. Music Library (Publisher)
Created1986-03-05
Description
Power management circuits are employed in most electronic integrated systems, including applications for automotive, IoT, and smart wearables. Oftentimes, these power management circuits become a single point of system failure, and since they are present in most modern electronic devices, they become a target for hardware security attacks. Digital circuits are typically more prone to security attacks than analog circuits, but malfunctions in digital circuitry can affect the analog performance and parameters of power management circuits. This research studies the effect that these hacks have on the analog performance of power circuits, specifically linear and switching power regulators/converters.

Apart from security attacks, these circuits suffer from performance degradation due to temperature, aging, and load stress. Power management circuits usually consist of regulators or converters that regulate the load's voltage supply by employing a feedback loop, and the stability of the feedback loop is a critical parameter in the system design. Oftentimes, the passive components employed in these circuits shift in value over varying conditions and may cause instability within the power converter. Therefore, variations in the passive components, as well as malicious hardware security attacks, can degrade regulator performance and affect the system's stability.

The traditional ways of detecting phase margin, which indicates system stability, employ techniques that require the converter to be in open loop, and hence cannot be used while the system is deployed in the field under normal operation. Aging of components and security attacks may occur after the power management systems have completed post-production test and have been deployed, and they may not cause catastrophic failure of the system, making them difficult to detect. These two issues of component variations and security attacks can be detected during normal operation over the product lifetime if the frequency response of the power converter can be monitored in situ and in the field.

This work presents a method to monitor the phase margin (stability) of a power converter without affecting its normal mode of operation by injecting a white-noise-like pseudo-random binary sequence (PRBS). Furthermore, this work investigates the analog performance parameters, including phase margin, that are affected by various digital hacks on the control circuitry associated with power converters. A case study of potential hardware attacks is completed for a linear low-dropout regulator (LDO).
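The following Python sketch illustrates the measurement idea in simulation. The second-order loop model, injection point, and scipy-based spectral estimator are assumptions chosen for illustration, not this work's on-chip implementation: a small ±1 sequence excites the loop, the loop gain is estimated from the cross-spectrum between the injected sequence and the measured response, and the phase margin is read off at the 0 dB crossover.

```python
# Sketch: estimate loop gain and phase margin from a small injected PRBS-like
# sequence.  The continuous-time loop L(s) = wc/s * 1/(1 + s/wp) is a stand-in
# for a real regulator's loop, and the estimator runs offline in scipy.
import numpy as np
from scipy import signal

fs = 1.0e6                                     # assumed monitor sample rate
wc, wp = 2 * np.pi * 20e3, 2 * np.pi * 200e3   # crossover ~20 kHz, extra pole

# Discrete-time stand-in for the loop gain L(s) = wc*wp / (s^2 + wp*s)
bz, az = signal.bilinear([wc * wp], [1.0, wp, 0.0], fs=fs)

n = 1 << 16
rng = np.random.default_rng(0)
u = 1e-3 * (2.0 * rng.integers(0, 2, n) - 1.0)   # small +/-1 excitation
y = signal.lfilter(bz, az, u)                    # loop response to the injection

# Non-parametric loop-gain estimate: L(f) = S_uy(f) / S_uu(f)
f, s_uu = signal.welch(u, fs=fs, nperseg=4096)
_, s_uy = signal.csd(u, y, fs=fs, nperseg=4096)
L = s_uy / s_uu

# Phase margin: 180 degrees plus the phase of L at the 0 dB magnitude crossover
k = 1 + np.argmin(np.abs(20.0 * np.log10(np.abs(L[1:]))))
pm = 180.0 + np.degrees(np.angle(L[k]))
print(f"crossover ~ {f[k] / 1e3:.1f} kHz, estimated phase margin ~ {pm:.1f} deg")
```

In a deployed converter the same estimate would come from two monitored loop nodes while the regulator keeps serving its load, which is what makes the approach usable in the field rather than only on the bench.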
ContributorsMalakar, Pragya Priya (Author) / Kitchen, Jennifer (Thesis advisor) / Ozev, Sule (Committee member) / Brunhaver, John (Committee member) / Arizona State University (Publisher)
Created2019