Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice when phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity. Thus, bilinear interpolation is chosen for our system.
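As a rough illustration of the backend steps described in this abstract, the following Python sketch implements envelope detection with an FIR Hilbert transformer, log compression, and bilinear sampling for scan conversion. The tap count, window, and dynamic range are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def fir_hilbert_envelope(rf_line, num_taps=63):
    # Envelope via a windowed ideal Hilbert impulse response: filter the RF
    # line to get the quadrature part, then take the analytic-signal magnitude.
    m = (num_taps - 1) // 2
    n = np.arange(-m, m + 1)
    h = np.zeros(num_taps)
    odd = n % 2 != 0
    h[odd] = 2.0 / (np.pi * n[odd])   # ideal response is 2/(pi*n) on odd taps
    h *= np.hamming(num_taps)         # window to control ripple
    q = np.convolve(rf_line, h, mode="same")
    return np.sqrt(rf_line ** 2 + q ** 2)

def log_compress(env, dynamic_range_db=60.0):
    # Normalize, convert to dB, and clip to the display dynamic range.
    db = 20.0 * np.log10(env / (env.max() + 1e-12) + 1e-12)
    return np.clip(db, -dynamic_range_db, 0.0) + dynamic_range_db

def bilinear_sample(img, r, c):
    # Bilinear interpolation at a fractional (row, col) position, the
    # scan-conversion choice favored in the abstract.
    r0, c0 = int(r), int(c)
    r1, c1 = min(r0 + 1, img.shape[0] - 1), min(c0 + 1, img.shape[1] - 1)
    dr, dc = r - r0, c - c0
    top = (1 - dc) * img[r0, c0] + dc * img[r0, c1]
    bot = (1 - dc) * img[r1, c0] + dc * img[r1, c1]
    return (1 - dr) * top + dr * bot
```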
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos, and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor, and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort when inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real-world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition, and active learning for regression). These contributions are validated on face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security, and assistive technology for the blind and visually impaired) and also on collaborative filtering applications like movie recommendation.
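For orientation, a generic batch-mode selection heuristic (top-k entropy-based uncertainty sampling with a scikit-learn classifier) is sketched below. It is a simplification for illustration only; the dissertation's formulations (adaptive batch sizing, fuzzy labels, IQP relaxations, active matrix completion) are substantially more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_batch(model, X_unlabeled, batch_size):
    # Rank unlabeled instances by predictive entropy and return the
    # indices of the most uncertain batch to send for manual annotation.
    proba = model.predict_proba(X_unlabeled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]

# Typical loop: fit on the labeled pool, query a batch, label it, retrain.
# model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
# query_idx = select_batch(model, X_unlabeled, batch_size=10)
```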
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Distributed estimation uses many inexpensive sensors to compose an accurate estimate of a given parameter. It is frequently implemented using wireless sensor networks. There have been several studies on optimizing power allocation in wireless sensor networks used for distributed estimation, the vast majority of which assume linear radio-frequency amplifiers. Linear amplifiers are inherently inefficient, so in this dissertation nonlinear amplifiers are examined as a way to gain efficiency while operating distributed sensor networks. This research presents a method to boost efficiency by operating the amplifiers in the nonlinear region of operation. Operating amplifiers nonlinearly presents new challenges. First, nonlinear amplifier characteristics change across manufacturing process variation, temperature, operating voltage, and aging. Second, the equations conventionally used for estimators and performance expectations in linear amplify-and-forward systems fail. To address the first challenge, predistortion is utilized, not to linearize the amplifiers, but rather to force them to fit a common nonlinear limiting amplifier model close to the inherent amplifier performance. This minimizes the power impact and the training requirements for predistortion. To address the second, new estimators are required that account for transmitter nonlinearity. This research derives analytically, and confirms via simulation, new estimators and performance expectation equations for use in nonlinear distributed estimation. An additional complication when operating nonlinear amplifiers in a wireless environment is the influence of varied and potentially unknown channel gains. The impact of these varied gains, along with measurement and channel noise, on estimation performance is analyzed in this work. Techniques for minimizing the estimate variance are developed. It is shown that optimizing transmitter power allocation to minimize estimate variance for the most-compressed parameter measurement is equivalent to the corresponding problem for linear sensors. Finally, a method for operating distributed estimation in a multipath environment is presented that is capable of developing robust estimates for a wide range of Rician K-factors. This dissertation demonstrates that implementing distributed estimation using nonlinear sensors can boost system efficiency and is compatible with existing techniques from the literature for boosting efficiency at the system level via sensor power allocation. Nonlinear transmitters work best when channel gains are known and channel noise and receiver noise levels are low.
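As a point of reference for the "common nonlinear limiting amplifier model" mentioned above, a Rapp-style soft limiter is one standard choice; the sketch below assumes that model with illustrative parameters, not the dissertation's specific formulation.

```python
import numpy as np

def limiting_amplifier(x, gain=10.0, a_sat=1.0, p=2.0):
    # Rapp-style soft limiter: nearly linear for small inputs, with the
    # output compressing smoothly toward the saturation level a_sat.
    y = gain * x
    return y / (1.0 + np.abs(y / a_sat) ** (2 * p)) ** (1.0 / (2 * p))
```

Under the approach described in the abstract, predistortion would adjust each real amplifier's input so that its measured response matches a shared model of this kind.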
Contributors: Santucci, Robert (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Bakkaloglu, Bertan (Committee member) / Tsakalis, Kostas (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic resonance imaging using spiral trajectories has many advantages: speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms, which lead to sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
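To make the delay effect concrete, here is a simplified Python sketch in which a constant delay on one gradient channel shifts the waveform before integration into a k-space trajectory. This is a toy model: the thesis estimates time-varying delays per channel from data collected on three orthogonal cylinders, which this sketch does not attempt.

```python
import numpy as np

def delayed_trajectory(grad, dt, delay):
    # Shift the gradient waveform by a pure delay, then integrate to get
    # the k-space trajectory actually traversed (rad/m).
    t = np.arange(len(grad)) * dt
    g_delayed = np.interp(t - delay, t, grad, left=0.0)
    gamma = 2 * np.pi * 42.58e6      # 1H gyromagnetic ratio, rad/(s*T)
    return gamma * np.cumsum(g_delayed) * dt
```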
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease through non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA exam plus a calcium scoring exam. Agatston calcium scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, that utilized support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
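For context on the scoring mentioned above, the textbook Agatston formulation (130 HU threshold, density weights 1 to 4, summed per lesion per slice) is sketched below. It is background only; the dissertation's contribution is deriving such scores from a dual energy CTA exam, which involves additional processing this sketch omits.

```python
import numpy as np

def agatston_weight(max_hu):
    # Standard density weighting by the lesion's peak attenuation.
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_slice_score(slice_hu, lesion_mask, pixel_area_mm2):
    # Score contribution of one lesion in one axial slice:
    # lesion area (mm^2) times its density weight.
    lesion = slice_hu[lesion_mask]
    if lesion.size == 0:
        return 0.0
    area = lesion_mask.sum() * pixel_area_mm2
    return area * agatston_weight(lesion.max())
```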
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Radio frequency (RF) transceivers require a disproportionately high effort in terms of test development time, test equipment cost, and test time. The relatively high test cost stems from two contributing factors. First, RF transceivers require the measurement of a diverse set of specifications, requiring multiple test set-ups and long test times, which complicates load-board design, debug, and diagnosis. Second, high frequency operation necessitates the use of expensive equipment, resulting in a higher per-second test cost compared with mixed-signal or digital circuits. Moreover, in terms of non-recurring engineering cost, the need to measure complex specifications complicates the test development process and necessitates a long learning process for test engineers. Test time is dominated by the changing and settling time for each test set-up, so single set-up test solutions are desirable. A loop-back configuration, in which the transmitter output is connected to the receiver input, is the preferred test set-up for RF transceivers, since it eliminates the reliance on expensive instrumentation for RF signal analysis and enables measuring multiple parameters at once. In-phase and Quadrature (IQ) imbalance, non-linearity, DC offset, and IQ time skews are some of the most detrimental imperfections in transceiver performance. Measurement of these parameters in loop-back mode is challenging due to the coupling between receiver (RX) and transmitter (TX) parameters. Loop-back based solutions are proposed in this work to resolve this issue, and a calibration algorithm for a subset of the above-mentioned impairments is also presented. Error Vector Magnitude (EVM) is a system-level parameter that is specified for most advanced communication standards. EVM measurement often takes extensive test development effort, tester resources, and long test times. EVM is analytically related to system impairments, which are typically measured in a production test environment. Thus, the EVM test can be eliminated from the test list if the relations between EVM and system impairments are derived independently of the circuit implementation and manufacturing process. In this work, the focus is on the WLAN standard, and on deriving the relations between EVM and three of the most detrimental impairments for QAM/OFDM based systems (IQ imbalance, non-linearity, and noise). Having low cost test techniques for measuring RF transceiver imperfections, together with the ability to analytically compute EVM from the measured parameters, constitutes a complete test solution for RF transceivers. These techniques, along with the proposed calibration method, can be used to improve yield by widening the pass/fail boundaries for transceiver imperfections. For all of the proposed methods, simulation and hardware measurements prove that the proposed techniques provide accurate characterization of RF transceivers.
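As a reference point for the EVM metric discussed above, the direct RMS computation from a measured constellation is shown below; normalization by average reference power is a common WLAN convention assumed here. The dissertation's closed-form relations between EVM and IQ imbalance, non-linearity, and noise go beyond this direct measurement.

```python
import numpy as np

def evm_percent(received, reference):
    # RMS error vector magnitude, normalized by average reference power.
    err = received - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) /
                           np.mean(np.abs(reference) ** 2))

# Example: QPSK reference symbols plus additive noise.
rng = np.random.default_rng(0)
ref = (np.sign(rng.standard_normal(1000)) +
       1j * np.sign(rng.standard_normal(1000))) / np.sqrt(2)
rx = ref + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(evm_percent(rx, ref))   # roughly 7% for this noise level
```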
Contributors: Nassery, Afsaneh (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Kiaei, Sayfe (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Fluxgate sensors are magnetic field sensors that can measure DC and low-frequency AC magnetic fields. They can measure much lower magnetic fields than other magnetic sensors such as Hall effect and magnetoresistive sensors, and they offer high linearity, high sensitivity, and low noise. The major application of fluxgate sensors is in magnetometers for measuring Earth's magnetic field; such magnetometers are used in navigation systems and electronic compasses. Fluxgate sensors can also be used to measure high DC currents. Integrated micro-fluxgate sensors have been developed in recent years. These sensors have much lower power consumption and area than their PCB counterparts, but their output voltage is very low, which makes the analog front end more complex and increases the power consumption of the system. In this thesis, a new analog front-end circuit for micro-fluxgate sensors is developed. It uses a charge-pump-based excitation circuit and a phase-delay-based read-out chain, and these two features reduce the power consumption of the analog front end. The output is digital, produced without an ADC, and immune to amplitude noise at the sensor output. A SPICE model of the micro-fluxgate sensor is used to verify the operation of the analog front end, and the simulation results show very good linearity.
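To illustrate the phase-delay read-out idea (field information encoded as a delay between an excitation edge and the sensor response, digitized by a counter rather than an ADC), here is a toy Python model. The threshold, edge definitions, and counter clock are assumptions; the on-chip circuit in the thesis differs in detail.

```python
import numpy as np

def phase_delay_to_code(sensor_out, excitation, fs, counter_clk_hz):
    # Find the first rising edge of the excitation reference, then the first
    # subsequent threshold crossing of the sensor output; the elapsed delay,
    # counted in cycles of a fast clock, is the digital output word.
    exc_edge = int(np.argmax(np.diff((excitation > 0).astype(int)) > 0))
    tail = sensor_out[exc_edge:]
    sense_edge = exc_edge + int(np.argmax(tail > 0.5 * tail.max()))
    delay_s = (sense_edge - exc_edge) / fs
    return int(delay_s * counter_clk_hz)
```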
Contributors: Pappu, Karthik (Author) / Bakkaloglu, Bertan (Thesis advisor) / Christen, Jennifer Blain (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Synchronous buck converters have become the obvious design choice for high-efficiency voltage down-conversion applications and find wide-scale usage in today's IC industry. The use of digital control in synchronous buck converters is becoming increasingly popular because of its advantages over traditional analog counterparts in terms of design flexibility, reduced use of off-chip components, and better programmability to enable advanced controls. Digital controllers also demonstrate better immunity to noise, enhanced tolerance to process, voltage, and temperature (PVT) variations, low chip area, and, as a result, low cost. Processing in the digital domain requires analog-digital interfacing circuits, namely an Analog to Digital Converter (ADC) and a Digital to Analog Converter (DAC). A Digital to Pulse Width Modulator (DPWM) acts as the time-domain DAC required in the control loop to modulate the ON time of the power MOSFETs. The accuracy and efficiency of the DPWM set the upper limit on the steady-state voltage ripple of the DC-DC converter and on its efficiency under low-load conditions. This thesis discusses the prevalent DPWM architectures in switched-mode DC-DC converters and presents the design of a hybrid DPWM. The DPWM is 9-bit accurate and is targeted for a synchronous buck converter with a switching frequency of 1.0 MHz. The design supports low-power modes for the buck converter in Pulse Frequency Modulation (PFM) operation as well as other fail-safe features. The implementation is digital-centric, making it robust across PVT variations and portable to lower technology nodes, and a key target of the design is reduced design time. The design has been tested across large process (±3σ), voltage (1.8 V ± 10%), and temperature (-55 °C to 125 °C) variations and is in the process of tape-out.
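A hybrid DPWM typically combines a coarse clocked counter with a fine delay line. The sketch below splits a 9-bit duty code under an assumed 5-bit/4-bit partition for illustration; the thesis's actual architecture and bit allocation may differ.

```python
def hybrid_dpwm_split(duty_code, coarse_bits=5, fine_bits=4):
    # Upper bits set the number of system-clock cycles counted; lower bits
    # select a delay-line tap that refines the pulse edge between clocks.
    assert 0 <= duty_code < 2 ** (coarse_bits + fine_bits)
    coarse = duty_code >> fine_bits
    fine = duty_code & ((1 << fine_bits) - 1)
    return coarse, fine

# At 1.0 MHz switching, 2**9 = 512 steps give an on-time LSB of about
# 1 us / 512 = 1.95 ns.
```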
Contributors: Kumar, Amit (Author) / Bakkaloglu, Bertan (Thesis advisor) / Song, Hongjiang (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Doppler radar can be used to measure respiration and heart rate without contact and through obstacles. In this work, a Doppler radar architecture at 2.4 GHz and a new signal processing algorithm to estimate respiration and heart rate are presented. The received signal is dominated by transceiver noise, LO phase noise, and clutter, which reduce the signal-to-noise ratio of the desired signal. The proposed architecture and algorithm mitigate these issues and obtain an accurate estimate of the heart and respiration rates. A quadrature low-IF transceiver architecture is adopted to resolve the null-point problem as well as to avoid 1/f noise and DC offset due to mixer-LO coupling. An adaptive clutter cancellation algorithm is used to enhance receiver sensitivity, coupled with a novel Pattern Search in Noise Subspace (PSNS) algorithm to estimate respiration and heart rate. PSNS is a modified MUSIC algorithm that uses the phase noise to enhance Doppler shift detection. A prototype system was implemented using off-the-shelf TI and RFMD transceivers, and tests were conducted with eight individuals. The measured results show accurate estimates of the cardiopulmonary signals in low-SNR conditions at distances of up to 6 meters.
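Since PSNS is described as a modified MUSIC algorithm, a plain MUSIC pseudo-spectrum is sketched below for orientation. The covariance dimension and search grid are assumptions, and the sketch omits the thesis's modifications (the noise-subspace pattern search and the exploitation of phase noise).

```python
import numpy as np

def music_spectrum(x, model_order, freqs, fs, m=32):
    # Build an m x n data matrix of sliding windows, estimate the sample
    # covariance, and evaluate 1/||noise-subspace projection||^2 on a grid.
    x = np.asarray(x)
    n = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(n)], axis=1)
    R = X @ X.conj().T / n
    _, v = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = v[:, : m - model_order]         # noise subspace
    k = np.arange(m)
    return np.array([1.0 / np.linalg.norm(
        En.conj().T @ np.exp(-2j * np.pi * f / fs * k)) ** 2 for f in freqs])
```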
Contributors: Khunti, Hitesh Devshi (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Bliss, Daniel (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A cerebral aneurysm is a bulging of a blood vessel in the brain. Aneurysmal rupture affects 25,000 people each year and is associated with a 45% mortality rate, so it is critically important to treat cerebral aneurysms effectively before they rupture. Endovascular coiling is the most effective treatment for cerebral aneurysms. During the coiling process, a series of metallic coils is deployed into the aneurysmal sac with the intent of reaching a sufficient packing density (PD). Coil packing can facilitate thrombus formation and help seal off the aneurysm from circulation over time. While coiling is effective, high rates of treatment failure have been associated with basilar tip aneurysms (BTAs). Treatment failure may be related to geometrical features of the aneurysm. The purpose of this study was to investigate the influence of dome size, parent vessel (PV) angle, and PD on post-treatment aneurysmal hemodynamics using both computational fluid dynamics (CFD) and particle image velocimetry (PIV). Flows in four idealized BTA models with a combination of dome sizes and two different PV angles were simulated using CFD and then validated against PIV data. Percent reductions in post-treatment aneurysmal velocity and cross-neck (CN) flow, as well as percent coverage of the low wall shear stress (WSS) area, were analyzed. In all models, aneurysmal velocity and CN flow decreased after coiling, while the low WSS area increased. However, with increasing PD, further reductions were observed in aneurysmal velocity and CN flow, but minimal changes were observed in the low WSS area. Overall, coil PD had the greatest impact on aneurysmal hemodynamics, and dome size had a greater impact than PV angle. These findings lead to the conclusion that combinations of treatment goals and geometric factors may play key roles in coil embolization treatment outcomes, and they support the idea that treatment timing may be a critical factor in treatment optimization.
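The reported hemodynamic metrics reduce to straightforward post-processing of the simulated fields; a minimal Python sketch follows. The low-WSS threshold is a placeholder, since the abstract does not state the study's criterion.

```python
import numpy as np

def percent_reduction(pre, post):
    # Percent reduction of a spatially averaged quantity after coiling,
    # e.g. aneurysmal velocity or cross-neck flow.
    return 100.0 * (np.mean(pre) - np.mean(post)) / np.mean(pre)

def low_wss_coverage_percent(wss, threshold=0.4):
    # Percent of wall points below a low-WSS threshold (Pa); the 0.4 Pa
    # value here is an assumed placeholder, not the study's criterion.
    return 100.0 * np.mean(np.asarray(wss) < threshold)
```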
Contributors: Indahlastari, Aprinda (Author) / Frakes, David (Thesis advisor) / Chong, Brian (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2013