Matching Items (10)

Investigation of CO2 tracer gas-based calibration of multi-zone airflow models

Description

The modeling and simulation of airflow dynamics in buildings has many applications including indoor air quality and ventilation analysis, contaminant dispersion prediction, and the calculation of personal occupant exposure. Multi-zone airflow model software programs provide such capabilities in a manner that is practical for whole building analysis. This research addresses the need for calibration methodologies to improve the prediction accuracy of multi-zone software programs. Of particular interest is accurate modeling of airflow dynamics in response to extraordinary events, i.e. chemical and biological attacks. This research developed and explored a candidate calibration methodology which utilizes tracer gas (e.g., CO2) data. A key concept behind this research was that calibration of airflow models is a highly over-parameterized problem and that some form of model reduction is imperative. Model reduction was achieved by proposing the concept of macro-zones, i.e. groups of rooms that can be combined into one zone for the purposes of predicting or studying dynamic airflow behavior under different types of stimuli. The proposed calibration methodology consists of five steps: (i) develop a "somewhat" realistic or partially calibrated multi-zone model of a building so that the subsequent steps yield meaningful results, (ii) perform an airflow-based sensitivity analysis to determine influential system drivers, (iii) perform a tracer gas-based sensitivity analysis to identify macro-zones for model reduction, (iv) release CO2 in the building and measure tracer gas concentrations in at least one room within each macro-zone (some replication in other rooms is highly desirable) and use these measurements to further calibrate aggregate flow parameters of macro-zone flow elements so as to improve the model fit, and (v) evaluate model adequacy of the updated model based on some metric. 
The proposed methodology was first evaluated with a synthetic building and subsequently refined using actual measured airflows and CO2 concentrations for a real building. The airflow dynamics of the buildings analyzed were found to be dominated by the HVAC system. In such buildings, rectifying differences between measured and predicted tracer gas behavior should focus on factors impacting room air change rates first and flow parameter assumptions between zones second.
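The tracer-gas step (iv) rests on the standard single-zone mass balance, which for a well-mixed macro-zone in decay gives C(t) = C_amb + (C0 - C_amb)·exp(-ACH·t). A minimal sketch of using that relation to back out an aggregate air change rate from two concentration measurements (the release and ambient concentrations below are hypothetical, not values from the study):

```python
import math

def tracer_decay(c0, c_amb, ach, t_hours):
    """CO2 concentration (ppm) in a well-mixed macro-zone after t_hours,
    given air change rate ach (1/h): C(t) = C_amb + (C0 - C_amb)*exp(-ach*t)."""
    return c_amb + (c0 - c_amb) * math.exp(-ach * t_hours)

def estimate_ach(c0, c_amb, c_t, t_hours):
    """Invert the decay equation to recover the aggregate air change rate
    of a macro-zone from an initial and a later concentration measurement."""
    return -math.log((c_t - c_amb) / (c0 - c_amb)) / t_hours

# Hypothetical release: 2000 ppm initial, 400 ppm ambient, true ACH = 1.5/h
c_after_2h = tracer_decay(2000.0, 400.0, 1.5, 2.0)
ach_hat = estimate_ach(2000.0, 400.0, c_after_2h, 2.0)
```

In practice the recovered ACH for each macro-zone would be compared against the multi-zone model's prediction, and flow parameters adjusted until the two agree.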

Date Created
2011

Digital calibration and prediction of effective number of bits for pipeline ADC

Description

In this thesis, a test-time reduction (low-cost test) methodology for digitally calibrated pipeline analog-to-digital converters (ADCs) is presented. These designs require a long calibration time in the final test to validate performance. To reduce total test time, an optimized calibration technique and the prediction of the calibrated effective number of bits (ENOB) from the calibration coefficients are presented. With the prediction technique, failed devices can be identified without performing the actual calibration, which saves a significant portion of the total test time.
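The screening idea can be sketched as follows: convert a measured SNDR to ENOB via the standard relation ENOB = (SNDR − 1.76)/6.02, or skip the measurement entirely and predict ENOB from calibration coefficients. The linear predictor and its weights below are hypothetical placeholders, not the thesis's actual model:

```python
def enob_from_sndr(sndr_db):
    # Standard relation: ENOB = (SNDR - 1.76) / 6.02
    return (sndr_db - 1.76) / 6.02

def predict_enob(coeffs, weights, intercept):
    # Hypothetical linear predictor: ENOB ~ intercept + sum(w_i * c_i),
    # with weights fit from characterization data
    return intercept + sum(w * c for w, c in zip(weights, coeffs))

def passes_screen(coeffs, weights, intercept, enob_min):
    """Identify failing devices from calibration coefficients alone,
    without running the full calibration and SNDR measurement."""
    return predict_enob(coeffs, weights, intercept) >= enob_min
```

For example, a device whose predicted ENOB falls below the spec limit is binned as a failure immediately, saving the calibration time that would otherwise be spent on it.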

Date Created
2013

Design and calibration of a 12-bit current-steering DAC using data-interleaving

Description

High speed current-steering DACs with high linearity are needed in today's applications such as wired and wireless communications, instrumentation, radar, and other direct digital synthesis (DDS) applications. However, a trade-off exists between the speed and resolution of Nyquist rate current-steering DACs. As the resolution increases, more transistor area is required to meet matching requirements for optimal linearity and thus, the overall speed of the DAC is limited.

In this thesis work, a 12-bit current-steering DAC was designed with current sources scaled below the required matching size to decrease the area and increase the overall speed of the DAC. By scaling the current sources, however, errors due to random mismatch between current sources arise, and additional calibration hardware is necessary to ensure 12-bit linearity. This work presents how to implement a self-calibrating DAC that corrects amplitude errors while maintaining a lower overall area. Additionally, the DAC designed in this thesis investigates the implementation feasibility of a data-interleaved architecture. Data interleaving can double the total bandwidth of the DACs while increasing the SQNR by an additional 3 dB.

The final results show that the calibration method effectively improves the linearity of the DAC. The DAC runs at update rates up to 400 MSPS with 75 dB SFDR performance, and achieves above 87 dB SFDR at update rates of 200 MSPS.
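The amplitude-error correction can be sketched as measuring each scaled current source against its nominal value and storing a per-source trim, alongside the ideal SQNR relation (6.02·N + 1.76 dB) that the interleaved architecture improves on by ~3 dB. The measured values below are hypothetical, in arbitrary units:

```python
def calibrate_sources(measured, nominal):
    """Per-source correction values (as would be stored in on-chip trim
    DACs): the deviation of each measured current from nominal."""
    return [nominal - m for m in measured]

def corrected(measured, corrections):
    # Apply the stored trims; ideally every source lands back on nominal
    return [m + c for m, c in zip(measured, corrections)]

def ideal_sqnr_db(bits):
    # Quantization-limited SQNR of an N-bit converter; two interleaved
    # DACs add ~3 dB on top of this
    return 6.02 * bits + 1.76

trims = calibrate_sources([0.98, 1.03, 1.00], nominal=1.00)
fixed = corrected([0.98, 1.03, 1.00], trims)
```

This is a sketch of the calibration concept only; the thesis's actual hardware measures and corrects the errors on-chip.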

Date Created
2014

Fisheye camera calibration and applications

Description

Fisheye cameras are special cameras that have a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the images captured by such cameras. Despite this drawback, fisheye cameras are used increasingly in computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive applications.

The images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms.

This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for fisheye camera calibration in one package. This enables an inexperienced user to calibrate his or her own camera without needing a theoretical understanding of computer vision and camera calibration. This thesis also explores applications of calibration such as distortion correction and 3D reconstruction.
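Distortion correction can be illustrated with the simplest fisheye model, the equidistant projection r = f·θ, remapped to the perspective projection r = f·tan(θ). This sketch assumes a single focal length and a principal point at the image origin; the toolbox supports more general models:

```python
import math

def undistort_point(x, y, f):
    """Map a point from an equidistant fisheye image (r = f*theta) to the
    perspective image (r = f*tan(theta)) a pinhole camera would produce.
    Assumes the principal point is at the origin (sketch only)."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    theta = r / f               # incidence angle recovered from fisheye radius
    r_persp = f * math.tan(theta)
    # Rescale along the same ray from the principal point
    return (x * r_persp / r, y * r_persp / r)

# A point 30 degrees off-axis, imaged with hypothetical f = 500 px
p = undistort_point(500.0 * math.pi / 6.0, 0.0, 500.0)
```

Points far from the center move outward under this remapping, which is exactly the stretching seen when a fisheye image is rectified.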

Date Created
2014

A new camera calibration accuracy standard for three-dimensional image reconstruction using Monte Carlo simulations

Description

Camera calibration has applications in the fields of robotic motion, geographic mapping, semiconductor defect characterization, and many more. This thesis considers camera calibration for the purpose of high-accuracy three-dimensional reconstruction when characterizing ball grid arrays within the semiconductor industry. Bouguet's calibration method is used following a set of criteria with the purpose of studying the method's performance according to newly proposed standards. The performance of a camera calibration method is currently measured using standards such as pixel error and computational time. This thesis proposes the standard deviation of the intrinsic parameter estimates within a Monte Carlo simulation as a new performance measure. It specifically shows that the standard deviation decreases as the number of images input into the calibration routine increases. It is also shown that the default thresholds of the calibration method's non-linear maximum likelihood estimation problem must be changed to improve computational time; however, the accuracy lost is negligible even for high-accuracy requirements such as ball grid array characterization.
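The proposed metric can be sketched with a toy Monte Carlo: treat each calibration image as one noisy observation of an intrinsic parameter, repeat the "calibration" many times, and report the standard deviation of the estimate. The focal length and noise level below are hypothetical; the point is only that the standard deviation shrinks as more images are used:

```python
import random
import statistics

def mc_focal_std(n_images, n_trials=2000, f_true=1200.0, noise=5.0, seed=0):
    """Monte Carlo estimate of the standard deviation of a toy focal-length
    estimate: each image contributes one noisy observation and the
    'calibration result' is their mean (sketch of the proposed metric)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        obs = [f_true + rng.gauss(0.0, noise) for _ in range(n_images)]
        estimates.append(statistics.fmean(obs))
    return statistics.stdev(estimates)

# Std of the estimate should shrink roughly as 1/sqrt(n_images)
s5, s20 = mc_focal_std(5), mc_focal_std(20)
```

Quadrupling the image count roughly halves the standard deviation, mirroring the thesis's observation that more input images yield more stable intrinsic estimates.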

Date Created
2012

In-field built-in self-test for measuring RF transmitter power and gain

Description

RF transmitter manufacturers go to great extremes and expense to ensure that their product meets the RF output power requirements for which they are designed. Therefore, there is an urgent need for in-field monitoring of output power and gain to bring down the costs of RF transceiver testing and ensure product reliability. Built-in self-test (BIST) techniques can perform such monitoring without the requirement for expensive RF test equipment. In most BIST techniques, on-chip resources, such as peak detectors, power detectors, or envelope detectors are used along with frequency down conversion to analyze the output of the design under test (DUT). However, this conversion circuitry is subject to similar process, voltage, and temperature (PVT) variations as the DUT and affects the measurement accuracy. So, it is important to monitor BIST performance over time, voltage and temperature, such that accurate in-field measurements can be performed.

In this research, a multistep BIST solution using only baseband signals for test analysis is presented. An on-chip signal generation circuit, which is robust with respect to time, supply voltage, and temperature variations is used for self-calibration of the BIST system before the DUT measurement. Using mathematical modelling, an analytical expression for the output signal is derived first and then test signals are devised to extract the output power of the DUT. By utilizing a standard 180nm IBM7RF CMOS process, a 2.4GHz low power RF IC incorporated with the proposed BIST circuitry and on-chip test signal source is designed and fabricated. Experimental results are presented, which show this BIST method can monitor the DUT’s output power with +/- 0.35dB accuracy over a 20dB power dynamic range.
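The final power extraction can be sketched as converting a detector amplitude to power in dBm and checking it against the reported ±0.35 dB accuracy bound. The 50-ohm reference load is a common convention assumed here, not a detail stated in the abstract:

```python
import math

def power_dbm(v_peak, r_ohm=50.0):
    """Power in dBm from a peak voltage amplitude, referred to a load
    (50 ohms assumed): P = 10*log10((Vpeak/sqrt(2))^2 / R / 1 mW)."""
    v_rms = v_peak / math.sqrt(2.0)
    p_watts = v_rms ** 2 / r_ohm
    return 10.0 * math.log10(p_watts / 1e-3)

def within_spec(measured_dbm, expected_dbm, tol_db=0.35):
    # The reported BIST accuracy over the 20 dB dynamic range is +/-0.35 dB
    return abs(measured_dbm - expected_dbm) <= tol_db
```

For instance, a 0.316 V peak swing into 50 ohms corresponds to 0 dBm, and any BIST reading within 0.35 dB of the expected power passes.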

Date Created
2015

DFT Solutions for Automated Test and Calibration of Forthcoming RF Integrated Transceivers

Description

As integrated technologies scale down, process, voltage, and temperature (PVT) variations of highly integrated RF systems are increasing. Accounting for these variations during the design phase requires a tremendous amount of time for predicting RF performance and optimizing it accordingly. Thus, there is a growing gap between the need to relax RF performance requirements at the design phase for rapid development and the need to provide high-performance, low-cost RF circuits that function under PVT variations. No matter how carefully designed, RF integrated circuits (ICs) manufactured in advanced technology nodes necessitate lengthy post-production calibration and test cycles with expensive RF test instruments. Hence, design-for-test (DFT) is proposed for low-cost and fast measurement of performance parameters during both post-production and in-field operation. For example, built-in self-test (BIST) is a DFT solution for low-cost on-chip measurement of RF performance parameters. In this dissertation, three aspects of automated test and calibration are covered for RF front-end blocks: a DFT mathematical model, BIST hardware, and built-in calibration.

First, the theoretical foundation of a post-production test for RF integrated phased array antennas is developed: a mathematical model for measuring gain and phase mismatches between antenna elements without any electrical contact. The proposed technique is fast and cost-efficient; because it uses near-field measurement of the power radiated from the antennas, it requires only a single test setup, is easy to implement, and is short in duration, making it viable for industrial high-volume IC production test.

Second, a BIST model for characterizing the I/Q offset, gain mismatch, and phase mismatch of IQ transmitters without relying on external equipment is introduced. The proposed BIST method is based on on-chip amplitude measurement, as in prior work; here, however, variations in the BIST circuit do not affect the estimation accuracy of the target parameters, since the measurements are designed to be relative. The BIST circuit is implemented in 130nm technology and can be used for post-production and in-field calibration.

Third, a programmable low noise amplifier (LNA) is proposed that adapts to different application scenarios depending on the specification requirements. Its performance is optimized with regard to the required specifications, e.g. distance, power consumption, BER, and data rate. Statistical modeling is used to capture the correlations among measured performance parameters and calibration modes for fast adaptation, and a machine learning technique builds the probability distribution of a target parameter from measurement results of the correlated parameters. The proposed concept is demonstrated by embedding built-in tuning knobs in an LNA design in 130nm technology. The tuning knobs are carefully designed to provide independent combinations of important performance parameters such as gain and linearity. A minimum number of switches is used to provide the desired tuning range without the need for an external analog input.
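The adaptation step for the programmable LNA can be sketched as selecting, among characterized tuning-knob modes, the one that meets the application's gain and power specifications with the best linearity. The mode table below is entirely hypothetical, and this rule-based selection is a simplification of the dissertation's statistical/machine-learning approach:

```python
def best_mode(modes, max_power_mw, min_gain_db):
    """Pick the tuning-knob setting that satisfies the gain and power
    specs and maximizes linearity (IIP3); return None if none qualifies."""
    feasible = [m for m in modes
                if m["power_mw"] <= max_power_mw and m["gain_db"] >= min_gain_db]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m["iip3_dbm"])

# Hypothetical characterization of three knob combinations
modes = [
    {"name": "low-power", "gain_db": 12.0, "power_mw": 1.2, "iip3_dbm": -8.0},
    {"name": "balanced",  "gain_db": 16.0, "power_mw": 2.5, "iip3_dbm": -5.0},
    {"name": "high-lin",  "gain_db": 15.0, "power_mw": 4.0, "iip3_dbm": -1.0},
]
choice = best_mode(modes, max_power_mw=3.0, min_gain_db=14.0)
```

In the dissertation the correlations between measured parameters and modes are learned statistically, so the selection can be made from a few cheap measurements rather than a full characterization of each device.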

Date Created
2018

Dual Application ADC using Three Calibration Techniques in 10nm Technology

Description

In this work, a 12-bit ADC with three types of calibration is proposed for high speed security applications as well as a precision application. This converter performs for both applications because it satisfies all the necessary specifications such as minimal device mismatch and offset, programmability to decrease aging effects, high SNR for increased ENOB and fast conversion rate. The designed converter implements three types of calibration necessary for offset and gain error, including: a correlated double sampling integrator used in the first stage of the ADC, a power up auto zero technique implemented in the digital code to store any offset and subtract out if necessary, and an automatic startup and manual calibration to control the common mode voltages. The proposed ADC was designed in Intel’s 10nm technology. This ADC is designed to monitor DC voltages for the precision and high speed applications. The conversion rate of the analog to digital converter is programmable to 7µs or 910ns, depending on the precision or high speed application, respectively. The range of the input and reference supply is 0 to 1.25V. The ADC is designed in Intel 10nm technology using a 1.8V supply consuming an area of 0.0705mm2. This thesis explores challenges of designing a dual-purpose analog to digital converter, which include: 1.) increased offset in 10nm technology, 2.) dual application ADC that can be accurate and fast, 3.) reducing the parasitic capacitance of the ADC, and 4.) gain error that occurs in ADCs.
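The correlated double sampling used in the first stage can be sketched arithmetically: the integrator takes one sample containing only the offset (and low-frequency noise) and one containing signal plus offset, and subtracts them so the offset cancels. The voltage values below are hypothetical:

```python
def cds(reset_sample, signal_sample):
    """Correlated double sampling: subtracting the reset (offset-only)
    sample from the signal sample cancels amplifier offset and
    low-frequency (1/f) noise common to both samples."""
    return signal_sample - reset_sample

offset = 0.013          # hypothetical integrator offset (V)
true_signal = 0.500     # hypothetical input (V)
recovered = cds(offset, true_signal + offset)
```

Because the offset appears identically in both samples, it drops out of the difference, which is why CDS addresses the increased offset this thesis reports for the 10nm process.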

Date Created
2017

Electrical Stimulation Based Statistical Calibration Model For MEMS Accelerometer And Other Sensors

Description

Micro Electro Mechanical Systems (MEMS) accelerometers are among the most commonly used sensors. They appear in devices such as airbags, smartphones, and airplanes. Although they are very accurate, they degrade with time or develop an offset due to damage. To fix this, they must be recalibrated using a physical calibration technique, which is an expensive process to conduct. However, these sensors can also be calibrated in the field by applying an on-chip electrical stimulus to the sensor. Electrical stimulus-based calibration could bring the cost of testing and calibration down significantly compared to factory testing. In this thesis, simulations are presented to formulate a statistical prediction model based on an electrical stimulus. Results from two different approaches to electrical calibration are discussed. A prediction model with a root mean square error of 1% is presented. Experiments were conducted on commercially available accelerometers to test the techniques used in the simulations.
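The simplest form of such a statistical prediction model is an ordinary least-squares fit from the electrical-stimulus response to the physical sensitivity, evaluated by its RMSE. The per-device data below are hypothetical and the model is a one-variable sketch, not the thesis's actual model:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: a minimal statistical
    model mapping electrical-stimulus response (x) to sensitivity (y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def rmse(xs, ys, a, b):
    # Root mean square prediction error over the fitted devices
    return math.sqrt(sum((y - (a * x + b)) ** 2
                         for x, y in zip(xs, ys)) / len(xs))

# Hypothetical devices: normalized electrical response vs. sensitivity
xs = [0.98, 1.01, 1.05, 0.95, 1.02]
ys = [0.97, 1.00, 1.06, 0.94, 1.03]
a, b = fit_line(xs, ys)
err = rmse(xs, ys, a, b)
```

A fitted model like this lets the electrical response, measured cheaply in the field, stand in for the physical stimulus when re-deriving calibration constants.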

Date Created
2020

Quantifying Chalcophile Elements and Heavy Halogens by Secondary Ion Mass Spectrometry and Demonstrating the Significant Effect of Different Secondary Ion Normalizing Procedures

Description

A novel technique for measuring heavy trace elements in geologic materials with secondary ion mass spectrometry (SIMS) is presented. This technique combines moderate levels of mass resolving power (MRP) with energy filtering in order to remove molecular ion interferences while maintaining enough sensitivity to measure trace elements. The technique was evaluated by measuring a set of heavy chalcophilic elements in two sets of doped glasses similar in composition to rhyolites and basalts, respectively. The normalized count rates of Cu, As, Se, Br, and Te were plotted against concentrations to test that the signal increased linearly with concentration. The signal from any residual molecular ion interferences (e.g. ²⁹Si³⁰Si¹⁶O on ⁷⁵As) represented apparent concentrations ≤ 1 μg/g for most of the chalcophiles in rhyolitic matrices and between 1 and 10 μg/g in basaltic compositions. This technique was then applied to two suites of melt inclusions from the Bandelier Tuff: Ti-rich, primitive and Ti-poor, evolved rhyolitic compositions. The results showed that Ti-rich inclusions contained ~30 μg/g Cu and ~3 μg/g As while the Ti-poor inclusions contained near background Cu and ~6 μg/g As. Additionally, two of the Ti-rich inclusions contained > 5 μg/g of Sb and Te, well above background. Other elements were at or near background. This suggests certain chalcophilic elements may be helpful in unraveling processes relating to diversity of magma sources in large eruptions. Additionally, an unrelated experiment is presented demonstrating changes in the matrix effect on SIMS counts when normalizing against ³⁰Si⁺ versus ²⁸Si²⁺. If one uses doubly charged silicon as a reference, (common when using large-geometry SIMS instruments to study the light elements Li - C) it is important that the standards closely match the major element chemistry of the unknown.
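The quantification workflow described above (normalize counts to a reference signal, then convert to concentration via a relative sensitivity factor derived from doped standards) can be sketched as follows; all count rates and concentrations are hypothetical:

```python
def normalized_rate(counts_elem, counts_ref):
    """Normalize an element's count rate to a matrix reference signal
    (e.g. a Si isotope) to correct for primary-beam and yield drift."""
    return counts_elem / counts_ref

def concentration(norm_rate, rsf):
    """Convert a normalized rate to concentration (ug/g here) via a
    relative sensitivity factor (RSF) measured on doped standards."""
    return norm_rate * rsf

# Hypothetical standard: 10 ug/g of an element gives a normalized
# rate of 200 / 100000 = 0.002, so RSF = 5000 ug/g per unit rate
rsf = 10.0 / normalized_rate(200.0, 100000.0)

# Apply to an unknown with a normalized rate of 0.0015
unknown_ug_g = concentration(normalized_rate(150.0, 100000.0), rsf)
```

The choice of reference species matters: as the abstract's ³⁰Si⁺ versus ²⁸Si²⁺ comparison shows, a different normalizing ion changes the matrix effect and hence the RSF, so standards must match the unknown's major-element chemistry.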

Date Created
2021