Description
Magnetic Resonance Imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, requires high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms, which create sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
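The distortion this abstract describes can be illustrated with a minimal sketch: a per-channel timing delay shifts the gradient waveform in time, and since the k-space trajectory is the integral of the gradient, the sampled trajectory drifts away from the ideal one. This is an illustrative model of the error only, not the thesis's estimation method; the function name and the whole-sample delay approximation are assumptions.

```python
import numpy as np

GAMMA_BAR = 42.577e6  # proton gyromagnetic ratio / 2*pi, in Hz/T

def kspace_trajectory(grad, dt, delay=0.0):
    """Integrate a gradient waveform (T/m, sampled every dt seconds) into a
    1D k-space trajectory (rad/m), optionally applying a timing delay
    (rounded to whole samples) to mimic a hardware channel delay."""
    n = int(round(delay / dt))
    shifted = np.roll(grad, n)
    if n > 0:
        shifted[:n] = 0.0  # the gradient has not started playing out yet
    # k(t) = 2*pi*gamma_bar * integral of g(t') dt' from 0 to t
    return 2 * np.pi * GAMMA_BAR * np.cumsum(shifted) * dt
```

Comparing the delayed and ideal trajectories for the same waveform shows the sampling discrepancy that, if ignored during reconstruction, produces the shading and blurring artifacts described above.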
ContributorsBhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created2013
Description
This dissertation describes the development of a procedure for obtaining high-quality, optical-grade sand coupons from frozen sand specimens of Ottawa 20/30 sand for image processing and analysis to quantify soil structure, along with a methodology for quantifying the microstructure from the images. A technique for thawing and stabilizing frozen core samples was developed using optical-grade Buehler® Epo-Tek® epoxy resin, a modified triaxial cell, a vacuum/reservoir chamber, a desiccator, and a moisture gauge. Uniform epoxy resin impregnation required proper drying of the soil specimen, application of appropriate confining pressure and vacuum levels, and careful epoxy mixing, de-airing, and curing. The resulting stabilized sand specimen was sectioned into 10 mm thick coupons that were planed, ground, and polished with progressively finer diamond abrasive grit levels using the modified Allied HTP Inc. polishing method so that the soil structure could be accurately quantified from images obtained with an optical microscopy technique. Illumination via bright field microscopy was used to capture the images for subsequent image processing and sand microstructure analysis. The quality of the resulting images, and the validity of the subsequent image morphology analysis, hinged largely on a polishing and grinding technique that produced a flat, scratch-free, reflective coupon surface characterized by minimal microstructure relief and good contrast between the sand particles and the surrounding epoxy resin. Subsequent image processing involved conversion of the color images first to grayscale images and then to binary images, with the use of contrast and image adjustments, removal of noise and image artifacts, image filtering, and image segmentation. Mathematical morphology algorithms were used on the resulting binary images to further enhance image quality.
The binary images were then used to calculate soil structure parameters that included particle roundness and sphericity, particle orientation variability represented by rose diagrams, statistics on local void ratio variability as a function of sample size, and local void ratio distribution histograms computed using Oda's method and the Voronoi tessellation method, including the skewness, kurtosis, and entropy of a gamma cumulative probability distribution fit to the local void ratio distribution.
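The grayscale-to-binary-to-morphology pipeline described above can be sketched in a few lines. This is a generic illustration using `scipy.ndimage`, not the dissertation's actual algorithm: the mean-based threshold and the 3×3 structuring element are assumptions standing in for the contrast adjustments and filtering described in the text.

```python
import numpy as np
from scipy import ndimage

def segment_particles(gray, threshold=None):
    """Threshold a grayscale image, clean it with morphological opening,
    fill holes, and label the remaining connected regions (particles)."""
    if threshold is None:
        threshold = gray.mean()  # placeholder; real work would tune this
    binary = gray > threshold
    # opening (erosion then dilation) removes small noise specks
    cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    # fill interior holes so each particle is a solid region
    filled = ndimage.binary_fill_holes(cleaned)
    labels, n = ndimage.label(filled)
    return labels, n
```

The labeled regions are the starting point for per-particle measurements such as roundness, sphericity, and orientation, and the inverse (epoxy) phase supports the local void ratio computations.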
ContributorsCzupak, Zbigniew David (Author) / Kavazanjian, Edward (Thesis advisor) / Zapata, Claudia (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created2011
Description
The Arizona State University Herbarium began in 1896 when Professor Fredrick Irish collected the first recorded Arizona specimen, a Parkinsonia microphylla, for what was then called the Tempe Normal School. Since then, the collection has grown to approximately 400,000 specimens of vascular plants and lichens. The most recent project includes the digitization, both imaging and databasing, of approximately 55,000 vascular plant specimens from Latin America. To accomplish this efficiently, non-traditional methods, including both new and existing technologies, were explored. SALIX (semi-automatic label information extraction) was developed as the central tool to handle automatic parsing, along with BarcodeRenamer (BCR) to automate image file renaming by barcode. These two developments, combined with existing technologies, make up the SALIX Method. The SALIX Method provides a way to digitize herbarium specimens more efficiently than the traditional approach of entering data solely through keystroking. Using digital imaging, optical character recognition, and automatic parsing, I found that the SALIX Method processes data at an average rate 30% faster than typing. Data entry speed is dependent on user proficiency, label quality, and, to a lesser degree, label length. This method is used to capture full specimen records, including close-up images where applicable. Access to biodiversity data is limited by the time and resources required to digitize, but I have found that it is possible to do so at a rate faster than typing. Finally, I experiment with the use of digital field guides to advance access to biodiversity data and stimulate public engagement with natural history collections.
ContributorsBarber, Anne Christine (Author) / Landrum, Leslie R. (Thesis advisor) / Wojciechowski, Martin F. (Thesis advisor) / Gilbert, Edward (Committee member) / Lafferty, Daryl (Committee member) / Arizona State University (Publisher)
Created2012
Description
The challenge of radiation therapy is to maximize the dose to the tumor while simultaneously minimizing the dose elsewhere. Proton therapy is well suited to this challenge due to the way protons slow down in matter. As a proton slows down, its rate of energy loss per unit path length continuously increases, leading to a sharp dose peak near the end of its range. Unlike conventional radiation therapy, protons stop inside the patient, sparing tissue beyond the tumor. Proton therapy should therefore be superior to existing modalities; however, because protons stop inside the patient, there is uncertainty in the range. This "range uncertainty" causes doctors to take a conservative approach in treatment planning, counteracting the advantages offered by proton therapy and preventing it from reaching its full potential.

A new method of delivering protons, pencil-beam scanning (PBS), has become the standard for treatment over the past few years. PBS uses magnets to raster-scan a thin proton beam across the tumor at discrete locations, using many discrete pulses of typically 10 ms duration each. The depth is controlled by changing the beam energy. This discretization in time of the proton delivery allows for new methods of dose verification; however, few devices have been developed that can meet the bandwidth demands of PBS.

In this work, two devices were developed to perform dose verification and monitoring, with an emphasis on fast response times. Measurements were performed at the Mayo Clinic. One detector addresses range uncertainty by measuring prompt gamma rays emitted during treatment. The range detector presented in this work is able to measure the proton range in vivo to within 1.1 mm at depths up to 11 cm in less than 500 ms, and up to 7.5 cm in less than 200 ms. A beam fluence detector presented in this work is able to measure the position and shape of each beam spot. It is hoped that this work will lead to further maturation of detection techniques in proton therapy, helping the treatment reach its full potential to improve patient outcomes.
ContributorsHolmes, Jason M (Author) / Alarcon, Ricardo (Thesis advisor) / Bues, Martin (Committee member) / Galyaev, Eugene (Committee member) / Chamberlin, Ralph (Committee member) / Arizona State University (Publisher)
Created2019
Description
Introduction: There are 350 to 400 pediatric heart transplants annually, according to the Pediatric Heart Transplant Database (Dipchand et al. 2014). Finding appropriate donors can be challenging, especially for the pediatric population. The current standard of care is a donor-to-recipient weight ratio. This ratio is not necessarily a parameter directly indicative of the size of a heart, potentially leading to ill-fitting allografts (Tang et al. 2010). In this paper, a regression model, developed by correlating total cardiac volume to non-invasive imaging parameters and patient characteristics, is presented for use in determining ideal allograft fit with respect to total cardiac volume.
Methods: A virtual, 3D library of clinically defined normal hearts was compiled from reconstructed CT and MR scans. Non-invasive imaging parameters and patient characteristics were collected and subjected to backward elimination linear regression to define a model relating patient parameters to total cardiac volume. This regression model was then used to retrospectively accept or reject an 'ideal' donor graft from the library for three patients who had undergone heart transplantation. Oversized and undersized grafts were also virtually transplanted to qualitatively analyze the specificity of the virtual transplantation.
Results: The backward elimination approach applied to the data for the 20 patients rejected the factors of BMI, BSA, sex, and both end-systolic and end-diastolic left ventricular measurements from echocardiography. Height and weight were retained in the linear regression model, yielding an adjusted R-squared of 82.5%. Height and weight showed statistical significance, with p-values of 0.005 and 0.02, respectively. The final equation for the linear regression model was TCV = -169.320 + 2.874h + 3.578w ± 73 (h = height, w = weight, TCV = total cardiac volume).
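The reported regression is simple enough to encode directly. A minimal sketch follows, using the coefficients quoted above; the units (cm, kg, mL) are assumptions, as the abstract does not state them, and the acceptance rule using the ±73 residual band is a hypothetical illustration rather than the study's exact matching protocol.

```python
def predicted_tcv(height, weight):
    """Total cardiac volume from the reported model:
    TCV = -169.320 + 2.874*h + 3.578*w (+/- 73).
    Units assumed: height in cm, weight in kg, TCV in mL."""
    return -169.320 + 2.874 * height + 3.578 * weight

def graft_fits(donor_tcv, recipient_height, recipient_weight, tolerance=73.0):
    """Hypothetical acceptance rule: accept a donor graft whose TCV falls
    within the regression's residual band around the recipient's
    predicted TCV."""
    expected = predicted_tcv(recipient_height, recipient_weight)
    return abs(donor_tcv - expected) <= tolerance
```

For example, `predicted_tcv(150, 40)` gives about 404.9, so a donor heart of roughly that volume would be accepted, while one of 600 would be rejected under this rule.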
Discussion: With the current regression model, height and weight significantly correlate with total cardiac volume. This regression model and the virtual normal-heart library make virtual transplantation and size-matching for transplantation possible. The study and regression model are, however, limited by a small sample size. Additionally, the lack of volumetric resolution in the MR datasets is a potentially limiting factor. Despite these limitations, the virtual library has the potential to be a critical tool for clinical care that will continue to grow as normal hearts are added.
ContributorsSajadi, Susan (Co-author) / Lindquist, Jacob (Co-author) / Frakes, David (Thesis director) / Ryan, Justin (Committee member) / Harrington Bioengineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection on images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibility of detecting edges from the phase of the spectral data alone, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, also allowing for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector which is shown to be resilient under noisy, intermittent phase data. Lastly, a thresholding technique is applied to give an explicit enhanced edge detector ready to be used. Examples throughout are demonstrated on both signals and images.
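The classical concentration factor method that this thesis builds on can be sketched compactly: the jump function of a periodic signal is approximated by a weighted conjugate partial sum of its Fourier coefficients. The sketch below uses the polynomial factor family sigma_p(eta) = p*pi*eta^p, one standard choice and not necessarily the thesis's; note it uses both magnitude and phase of the spectral data, whereas the thesis develops a phase-only variant.

```python
import numpy as np

def concentration_jump(f_samples, p=1):
    """Approximate the jump function [f](x) of a periodic signal from its
    Fourier coefficients, using the polynomial concentration factor
    sigma(eta) = p*pi*eta**p. Peaks in the output mark discontinuities,
    with height approximating the jump size."""
    N = len(f_samples)
    fhat = np.fft.fft(f_samples) / N        # Fourier coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers
    eta = np.abs(k) / (N // 2)              # normalized frequency in [0, 1]
    sigma = p * np.pi * eta**p              # concentration factor
    # [f](x) ~ i * sum_k sgn(k) * sigma(|k|/k_max) * fhat_k * e^{ikx}
    return np.real(np.fft.ifft(1j * np.sign(k) * sigma * fhat) * N)
```

Applied to a square wave, the output is near zero away from the discontinuities and peaks at roughly +2 and -2 at the jump locations, which is exactly the behavior a phase-only detector must reproduce without access to |fhat|.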
ContributorsReynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Oxygen delivery is crucial for the development of healthy, functional tissue. Low tissue oxygenation, or hypoxia, is a characteristic common to many tumors. Hypoxia contributes to tumor malignancy and can reduce the success of chemotherapy and radiation treatment. There is a current need to noninvasively measure tumor oxygenation (pO2) in patients to determine a personalized treatment method. This project focuses on creating and characterizing nanoemulsions using the pO2 reporter molecule hexamethyldisiloxane (HMDSO) and its longer-chain variants, as well as assessing their cytotoxicity. We also explored creating multi-modal (MRI/fluorescence) nanoemulsions.
ContributorsGrucky, Marian Louise (Author) / Kodibagkar, Vikram (Thesis director) / Rege, Kaushal (Committee member) / Stabenfeldt, Sarah (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created2013-05
Description
Traumatic brain injury (TBI) is a major public health concern due to its prevalence and effects. Every year, about 1.7 million TBIs are reported [7]. According to the Centers for Disease Control and Prevention (CDC), 5.5% of all emergency department visits, hospitalizations, and deaths from 2002 to 2006 were due to TBI [8]. The brain's natural defense, the blood-brain barrier (BBB), prevents the entry of most substances into the brain through the blood stream, including medicines administered to treat TBI [11]. TBI may cause the breakdown of the BBB and may result in increased permeability, providing an opportunity for nanoparticles (NPs) to enter the brain [3,4]. Dr. Stabenfeldt's lab has previously established that intravenously injected NPs will accumulate near the injury site after focal brain injury [4]. The current project focuses on confirming the accumulation, or extravasation, of NPs after brain injury using 2-photon microscopy. Specifically, the project used controlled cortical impact injury mouse models that were intravenously injected with 40 nm NPs post-injury. The MATLAB code analyzes the brain images through registration, segmentation, and intensity measurement to evaluate whether fluorescent NPs accumulate in the extravascular tissue of injured mouse models. The code was developed with 2D bicubic interpolation, subpixel image registration, drawn-dimension and fixed-dimension segmentation, and dynamic image analysis. A statistical difference was found between the extravascular tissue of injured and uninjured mouse models, demonstrating that NPs do extravasate through the permeable cranial blood vessels in injured cranial tissue.
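The registration step mentioned above can be illustrated with a standard phase-correlation sketch. The thesis code is MATLAB with subpixel refinement; this Python version shows only the integer-pixel core (subpixel methods interpolate around the correlation peak), so treat it as an illustrative stand-in rather than the project's implementation.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer translation between two same-sized images via
    phase correlation. Returns the (row, col) shift that, applied to
    `moving` with np.roll, aligns it back onto `ref`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross))     # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into the signed range [-dim/2, dim/2]
    shifts = [p if p <= d // 2 else p - d for p, d in zip(peak, corr.shape)]
    return tuple(int(s) for s in shifts)
```

Once frames are registered, vessel and extravascular regions can be segmented in a common coordinate frame and their intensities compared across time points.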
ContributorsIrwin, Jacob Aleksandr (Author) / Stabenfeldt, Sarah (Thesis director) / Bharadwaj, Vimala (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Imaging using electric fields could provide a cheaper, safer, and easier alternative to standard imaging methods. The viability of electric field imaging at very low frequencies using D-dot sensors has already been investigated and proven. The new goal is to determine whether imaging is viable at high frequencies. To accomplish this, the operational amplifiers used in the very-low-frequency imaging test setup must be replaced with ones that have higher bandwidth. The trade-off of using these amplifiers is a typically higher input leakage current, on the order of 100 compared to the standard. Using a modified circuit design technique that reduces the input leakage current of the operational amplifiers used in the imaging test setup, a printed circuit board with D-dot sensors was fabricated to identify the frequency limitations of electric field imaging. Data were collected at both low and high frequencies as well as at low peak voltage. The data were then analyzed to determine the range of electric field intensity and frequency over which this low-leakage circuit design can accurately detect a signal. Data were also collected using another printed circuit board that uses the standard circuit design technique. The data from the two boards were compared to determine whether the modified circuit design technique allows for higher-sensitivity imaging. In conclusion, this research supports the finding that low-leakage design techniques can allow for signal detection comparable to that of the standard circuit design. The low-leakage design allowed for sensitivity within a factor of two of the standard design. Although testing at higher frequencies was limited, signal detection for the low-leakage design was reliable up to 97 kHz; further experimentation is needed to determine the upper frequency limits.
ContributorsLin, Richard (Co-author) / Angell, Tyler (Co-author) / Allee, David (Thesis director) / Chung, Hugh (Committee member) / Electrical Engineering Program (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Readout Integrated Circuits (ROICs) are important components of infrared (IR) imaging systems. The performance of ROICs affects the quality of images obtained from IR imaging systems. Contemporary infrared imaging applications demand ROICs that can support a large dynamic range, high frame rate, and high output data rate at low cost, size, and power. Some of these applications are military surveillance, remote sensing in space and earth science missions, and medical diagnosis. This work focuses on developing a ROIC unit cell prototype for National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL) space applications. These space applications also demand high sensitivity, long integration times (large well capacity), a wide operating temperature range, a wide input current range, and immunity to radiation events such as Single Event Latchup (SEL).

This work proposes a digital ROIC (DROIC) unit cell prototype of 30 µm × 30 µm size, to be used mainly with NASA JPL's High Operating Temperature Barrier Infrared Detectors (HOT BIRDs). Current state-of-the-art DROICs achieve a dynamic range of 16 bits using advanced 65-90 nm CMOS processes, which adds significant cost overhead. The DROIC pixel proposed in this work uses a low-cost 180 nm CMOS process and supports a dynamic range of 20 bits operating at a low frame rate of 100 frames per second (fps), and a dynamic range of 12 bits operating at a high frame rate of 5 kfps. The total electron well capacity of this DROIC pixel is 1.27 billion electrons, enabling integration times as long as 10 ms to achieve better dynamic range. The DROIC unit cell uses an in-pixel 12-bit coarse ADC and an external 8-bit DAC-based fine ADC. The proposed DROIC uses layout techniques that make it immune to radiation up to 300 krad(Si) of total ionizing dose (TID) and to single event latch-up (SEL). It also has a wide input current range from 10 pA to 1 µA and supports detectors operating from the short-wave infrared (SWIR) to long-wave infrared (LWIR) regions.
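The well capacity, integration time, and input current range quoted above can be sanity-checked against each other with the relation I = q·N/t (charge filling the well over one integration period). This is a back-of-the-envelope consistency check, not a calculation from the thesis.

```python
Q_E = 1.602e-19  # elementary charge, in coulombs

def full_well_current(well_electrons, t_int):
    """Photocurrent that fills a well of `well_electrons` electrons in
    exactly `t_int` seconds: I = q * N / t."""
    return well_electrons * Q_E / t_int

# 1.27e9 electrons over the quoted 10 ms integration time
i_fw = full_well_current(1.27e9, 10e-3)  # about 20 nA
```

The result, roughly 20 nA, sits comfortably inside the stated 10 pA to 1 µA input current range, consistent with the long-integration operating point described in the abstract.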
ContributorsPraveen, Subramanya Chilukuri (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Long, Yu (Committee member) / Arizona State University (Publisher)
Created2019