Matching Items (343)


Characterization of coronary atherosclerotic plaques by dual energy computed tomography

Description

Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the buildup of plaque on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Because of the prognostic importance of early-stage identification, non-invasive assessment of plaque characterization is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies to determine the energy dependency of tissue attenuation values for each voxel. They generate virtual monochromatic energy images, as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since the overlap between the CT attenuations measured in plaque components and in contrast material shows that a single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection, and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity in separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT offer valuable information for separating lipid from fibrous plaque, since their attenuations change differently as the energy level changes. My second study proposed these attenuation data as input to supervised learners for a more precise classification of lipid and fibrous plaques.
My last study aimed at automatic segmentation of coronary arteries, characterizing plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extraction of features from regions of interest. The study proposed feature extraction strategies and selection of the most important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
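As a sketch of the classification idea in the second study — lipid and fibrous plaque attenuate differently at the two DECT energy levels, so a supervised learner on per-voxel attenuation pairs can separate them — the following toy example trains a nearest-centroid classifier on synthetic two-energy attenuation features. The Hounsfield-unit means, the noise level, and the nearest-centroid learner are all illustrative assumptions, not the dissertation's actual features or classifier.

```python
import random

random.seed(0)

# Hypothetical mean attenuations (HU) at a low and a high energy level;
# illustrative values only, not measurements from the dissertation.
MEANS = {"lipid": (35.0, 50.0), "fibrous": (90.0, 80.0)}

def sample(kind, n):
    mu = MEANS[kind]
    return [(random.gauss(mu[0], 8), random.gauss(mu[1], 8)) for _ in range(n)]

train = [(f, "lipid") for f in sample("lipid", 50)] + \
        [(f, "fibrous") for f in sample("fibrous", 50)]

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Nearest-centroid learner over the two-energy feature space.
cents = {k: centroid([f for f, lab in train if lab == k]) for k in MEANS}

def classify(feat):
    return min(cents, key=lambda k: (feat[0] - cents[k][0]) ** 2
                                    + (feat[1] - cents[k][1]) ** 2)

held_out = [(f, "lipid") for f in sample("lipid", 20)] + \
           [(f, "fibrous") for f in sample("fibrous", 20)]
acc = sum(classify(f) == lab for f, lab in held_out) / len(held_out)
print(f"held-out accuracy: {acc:.2f}")
```

With the attenuation pairs this well separated the toy classifier is nearly perfect; the dissertation's point is that a single mean density (one feature) would not separate the classes as cleanly as the two-energy pair does.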


Date Created
2013

MRI visualization and mathematical modeling of local drug delivery

Description

Controlled release formulations for local, in vivo drug delivery are of growing interest to device manufacturers, research scientists, and clinicians; however, most research characterizing controlled release formulations occurs in vitro because the spatial and temporal distribution of drug delivery is difficult to measure in vivo. In this work, in vivo magnetic resonance imaging (MRI) of local drug delivery is performed to visualize and quantify the time-resolved distribution of MRI contrast agents. I find it is possible to visualize contrast agent distributions in near real time from local delivery vehicles using MRI. Three-dimensional T1 maps are processed to produce in vivo concentration maps of contrast agent for individual animal models. The method for obtaining concentration maps is analyzed to estimate errors introduced at various steps in the process. The method is used to evaluate different controlled release vehicles, vehicle placement, and type of surgical wound in rabbits as a model for antimicrobial delivery to orthopaedic infection sites. I am able to see differences between all these factors; however, all images show that contrast agent remains fairly local to the wound site and does not distribute to tissues far from the implant in therapeutic concentrations. I also produce a mathematical model that investigates important mechanisms in the transport of antimicrobials in a wound environment. It is determined from both the images and the mathematical model that antimicrobial distribution in an orthopaedic wound depends on both diffusive and convective mechanisms. Furthermore, I began development of MRI-visible therapeutic agents to examine active drug distributions. I hypothesize that this work can be developed into a non-invasive, patient-specific clinical tool to evaluate the success of interventional procedures using local drug delivery vehicles.
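The step from T1 maps to concentration maps commonly relies on the linear relaxivity relation 1/T1 = 1/T1,0 + r1·C; the sketch below inverts that relation for a single voxel. The relaxivity value and the example T1 numbers are illustrative assumptions, not the values used in this work.

```python
# Linear relaxivity model (an assumption of this sketch): the relaxation rate
# R1 = 1/T1 grows linearly with contrast-agent concentration C,
#   1/T1 = 1/T1_0 + r1 * C,
# so C can be recovered from a measured T1 map voxel by voxel.

def concentration_from_t1(t1_s, t1_baseline_s, r1_per_mM_s=4.5):
    """Estimate contrast-agent concentration (mM) from measured and baseline T1 (s).

    r1_per_mM_s is an illustrative relaxivity, not necessarily the agent
    used in this work.
    """
    return (1.0 / t1_s - 1.0 / t1_baseline_s) / r1_per_mM_s

# Example voxel: baseline tissue T1 of 1.2 s shortened to 0.4 s by the agent.
c = concentration_from_t1(0.4, 1.2)
print(f"estimated concentration: {c:.3f} mM")
```

Applied voxel-wise to a 3D T1 map against a pre-contrast baseline map, this inversion yields the kind of in vivo concentration map described above.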


Date Created
2013

Lead identification, optimization and characterization of novel cancer treatment strategies using repositioned drugs

Description

Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and an expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to the market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue culture plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.


Date Created
2013

Frequency response characteristics of respiratory flow-meters

Description

Flow measurement has always been one of the most critical processes in many industrial and clinical applications. The dynamic behavior of flow helps to define the state of a process. An industrial example is an aircraft, where the rate of airflow passing the aircraft is used to determine the speed of the plane. A clinical example is the flow of a patient's breath, which can help determine the state of the patient's lungs. This project is focused on the flow-meters used for airflow measurement in human lungs. For these measurements, resistive-type flow-meters are commonly used in respiratory measurement systems. This method consists of passing the respiratory flow through a fluid resistive component while measuring the resulting pressure drop, which is linearly related to the volumetric flow rate. These types of flow-meters typically have a low frequency response but are adequate for most applications, including spirometry and respiration monitoring. In the case of lung parameter estimation methods, such as the Quick Obstruction Method, it becomes important to have a higher frequency response in the flow-meter so that the high-frequency components in the flow are measurable. The following three types of flow-meters were characterized:
a. Capillary type
b. Screen pneumotach type
c. Square-edge orifice type
To measure the frequency response, a sinusoidal flow is generated with a small speaker and passed through the flow-meter, which is connected to a large, rigid container. True flow is proportional to the derivative of the pressure inside the container. True flow is then compared with the measured flow, which is proportional to the pressure drop across the flow-meter. To perform the characterization, two LabVIEW data acquisition programs have been developed: one for transducer calibration, and another that records flow and pressure data for frequency response testing of the flow-meter.
In addition, a model that explains the behavior exhibited by the flow-meters has been proposed and simulated. This model contains a fluid resistor and a fluid inductor in series. The final step in this project was to fit the frequency response data to the developed model, expressed as a transfer function.
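The series resistor-inductor model described above implies a pressure-to-flow transfer function DeltaP(s)/Q(s) = R + L·s, whose magnitude rises with frequency. The sketch below evaluates that magnitude; the R and L values are illustrative placeholders, not the fitted parameters from this project.

```python
import math

# First-order fluid R-L model of a resistive flow-meter: DeltaP = R*Q + L*dQ/dt,
# i.e. DeltaP(s)/Q(s) = R + L*s.  R and L below are illustrative placeholders.
R = 0.98    # fluid resistance
L = 0.012   # fluid inertance

def gain(f_hz):
    """Magnitude of the pressure-to-flow transfer function at frequency f."""
    w = 2 * math.pi * f_hz
    return math.hypot(R, w * L)   # |R + j*w*L|

# At low frequencies the meter looks purely resistive (gain ~ R); the inertance
# term makes the apparent resistance rise with frequency, which is why a plain
# resistive calibration under-reports high-frequency flow components.
for f in (0.1, 1.0, 10.0, 50.0):
    print(f"{f:5.1f} Hz  |DeltaP/Q| = {gain(f):.3f}")
```

Fitting R and L to measured gain data at several frequencies is exactly the model-fitting step the abstract describes as the final stage of the project.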


Date Created
2013

Non-invasive method to detect the changes of glucose concentration in whole blood using photometric technique

Description

A noninvasive optical method is developed to monitor rapid changes in blood glucose levels in diabetic patients. The system depends on an optical cell built with an LED that emits light at a wavelength of 535 nm, a peak absorbance of hemoglobin. As the glucose concentration in the blood decreases, its osmolarity also decreases, and the RBCs swell, decreasing the absorption coefficient along the light path. The decreasing absorption coefficient increases the transmission of light through the whole blood. The system was tested with a constructed optical cell that held whole blood in a capillary tube. As expected, the light transmitted to the photodiode increases with decreasing glucose concentration. The average response time of the system was between 30 and 40 seconds. The changes in the size of the RBCs in response to glucose concentration changes were confirmed using a cell counter and visually under a microscope. This method does not allow measuring the glucose concentration against an absolute concentration calibration; it is directed towards development of a device to monitor changes in glucose concentration as an aid to diabetes management. The method could be refined for precision and resolution and developed into a ring or an earring that patients can wear.
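The sensing principle above can be sketched with the Beer-Lambert law: transmitted intensity falls exponentially with the absorption coefficient times the path length, so a lower absorption coefficient (low glucose, swollen RBCs) means more light at the photodiode. The linear glucose-to-absorption mapping, path length, and coefficients below are made-up placeholders for illustration only.

```python
import math

# Beer-Lambert sketch of the sensing principle: transmitted intensity through
# the blood-filled capillary is I = I0 * exp(-mu * d).  The linear mapping from
# glucose concentration to absorption coefficient mu is a made-up placeholder.
I0 = 1.0        # incident LED intensity at 535 nm (arbitrary units)
PATH_CM = 0.1   # capillary inner diameter (cm), illustrative

def absorption_coeff(glucose_mg_dl):
    # Hypothetical: RBC swelling at low glucose lowers the effective mu (1/cm).
    return 5.0 + 0.01 * glucose_mg_dl

def transmitted(glucose_mg_dl):
    return I0 * math.exp(-absorption_coeff(glucose_mg_dl) * PATH_CM)

# Lower glucose -> lower absorption -> more light reaches the photodiode.
print(transmitted(70) > transmitted(180))   # True
```

Because only the relative change in transmission is modeled, this matches the abstract's point that the device tracks changes in glucose rather than providing an absolute calibration.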


Date Created
2013

Designing m-health modules with sensor interfaces for DSP education

Description

Advancements in mobile technologies have significantly enhanced the capabilities of mobile devices to serve as powerful platforms for sensing, processing, and visualization. Surges in sensing technology and the abundance of data have enabled the use of these portable devices for real-time data analysis and decision-making in digital signal processing (DSP) applications. Most of the current efforts in DSP education focus on building tools to facilitate understanding of the mathematical principles. However, there is a disconnect between real-world data processing problems and the material presented in a DSP course. Sophisticated mobile interfaces and apps can potentially play a crucial role in providing hands-on experience with modern DSP applications to students. In this work, a new paradigm of DSP learning is explored by building an interactive, easy-to-use health monitoring application for use in DSP courses. This is motivated by the increasing commercial interest in employing mobile phones for real-time health monitoring tasks. The idea is to exploit the computational abilities of the Android platform to build m-Health modules with sensor interfaces. In particular, appropriate sensing modalities have been identified, and a suite of software functionalities has been developed. Within the existing framework of the AJDSP app, a graphical programming environment, interfaces to on-board and external sensor hardware have also been developed to acquire and process physiological data. The set of sensor signals that can be monitored includes electrocardiogram (ECG), photoplethysmogram (PPG), accelerometer signal, and galvanic skin response (GSR). The proposed m-Health modules can be used to estimate parameters such as heart rate, oxygen saturation, step count, and heart rate variability. A set of laboratory exercises has been designed to demonstrate the use of these modules in DSP courses.
The app was evaluated through several workshops involving graduate and undergraduate students in signal processing majors at Arizona State University. The usefulness of the software modules in enhancing student understanding of signals, sensors, and DSP systems was analyzed. Student opinions about the app and the proposed m-health modules evidenced the merits of integrating tools for mobile sensing and processing into a DSP curriculum and of familiarizing students with the challenges in modern data-driven applications.
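As a minimal illustration of one of the modules' estimation tasks — heart rate from a pulsatile signal — the sketch below counts peaks in a synthetic PPG-like sinusoid. This is a stand-in under assumed sampling and signal parameters, not the AJDSP implementation.

```python
import math

# Heart-rate estimate from a synthetic PPG-like signal by peak counting;
# a stand-in under assumed parameters, not the AJDSP module's code.
FS = 100            # sample rate (Hz)
HR_TRUE = 72        # simulated heart rate (bpm)
N = 10 * FS         # 10 seconds of data
sig = [math.sin(2 * math.pi * (HR_TRUE / 60.0) * n / FS) for n in range(N)]

# A sample is a peak if it exceeds its left neighbour, is not exceeded by its
# right neighbour, and clears an amplitude threshold (clean signal assumed).
peaks = [n for n in range(1, N - 1)
         if sig[n] > sig[n - 1] and sig[n] >= sig[n + 1] and sig[n] > 0.5]

bpm = len(peaks) * 60.0 / (N / FS)
print(f"estimated heart rate: {bpm:.0f} bpm")
```

Real PPG data would need band-pass filtering and a refractory period between peaks, which is exactly the kind of design discussion a laboratory exercise built on these modules can surface.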


Date Created
2013

Characterizing retinotopic mapping using conformal geometry and Beltrami coefficient: a preliminary study

Description

Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores using concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models for characterizing the geometry of early visual areas in the human brain. The software framework includes a graphical user interface built on top of a selected core conformal flattening algorithm and various software tools compiled specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and for how well they preserve the conformal metric. All three algorithms performed well in preserving the conformal metric, but their speed and stability varied. The software framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for this early data set shows that selected regions of V1 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for the analysis of early and higher visual cortex.
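The Beltrami coefficient mentioned above, mu = f_zbar / f_z, vanishes exactly where a map is conformal; the sketch below approximates it with central finite differences and checks it on a holomorphic map and on an anisotropic stretch. The test maps and step size are illustrative, not the thesis's algorithms or data.

```python
# Beltrami coefficient mu = f_zbar / f_z of a planar map f: R^2 -> C,
# approximated by central finite differences.  |mu| = 0 iff the map is
# (locally) conformal.  Test maps and step size are illustrative.

def beltrami(f, x, y, h=1e-6):
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)      # df/dx (complex)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)      # df/dy (complex)
    f_z = (fx - 1j * fy) / 2                        # Wirtinger derivatives
    f_zbar = (fx + 1j * fy) / 2
    return f_zbar / f_z

conformal = lambda x, y: (x + 1j * y) ** 2   # z^2: holomorphic, so mu = 0
stretch = lambda x, y: 2 * x + 1j * y        # anisotropic stretch: |mu| = 1/3

print(abs(beltrami(conformal, 1.0, 0.5)))    # ~0
print(abs(beltrami(stretch, 1.0, 0.5)))      # ~0.333
```

Small |mu| over a cortical patch is the quantitative signature of "significant local conformality" that the preliminary V1 analysis looks for.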


Date Created
2013

Vital sign estimation through Doppler radar

Description

Doppler radar can be used to measure respiration and heart rate without contact and through obstacles. In this work, a Doppler radar architecture at 2.4 GHz and a new signal processing algorithm to estimate the respiration and heart rate are presented. The received signal is dominated by transceiver noise, LO phase noise, and clutter, which reduce the signal-to-noise ratio of the desired signal. The proposed architecture and algorithm mitigate these issues and obtain an accurate estimate of the heart and respiration rates. A quadrature low-IF transceiver architecture is adopted to resolve the null point problem as well as to avoid 1/f noise and DC offset due to mixer-LO coupling. An adaptive clutter cancellation algorithm is used to enhance receiver sensitivity, and a novel Pattern Search in Noise Subspace (PSNS) algorithm is used to estimate respiration and heart rate. PSNS is a modified MUSIC algorithm that uses the phase noise to enhance Doppler shift detection. A prototype system was implemented using off-the-shelf TI and RFMD transceivers, and tests were conducted with eight individuals. The measured results show accurate estimates of the cardiopulmonary signals in low-SNR conditions, and the system has been tested up to a distance of 6 meters.
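The front-end idea — chest motion phase-modulates the 2.4 GHz carrier, and the quadrature baseband phase is proportional to displacement — can be sketched as follows, with the rate recovered by picking the strongest DFT bin in the respiration band. This is a simplified stand-in (no clutter, noise, or PSNS), with illustrative motion and rate parameters.

```python
import cmath
import math

# Chest displacement x(t) phase-modulates the carrier; the quadrature baseband
# signal is exp(j * 4*pi*x(t)/lambda).  All motion/rate parameters are
# illustrative, and clutter/noise are omitted.
FS = 50            # baseband sample rate (Hz)
WAVELEN = 0.125    # wavelength at 2.4 GHz (m)
RESP_HZ = 0.25     # simulated respiration: 15 breaths/min
T = 20             # record length (s)

sig = [cmath.exp(1j * 4 * math.pi / WAVELEN
                 * 0.004 * math.sin(2 * math.pi * RESP_HZ * n / FS))
       for n in range(T * FS)]

# Small displacement => no phase wrapping, so the phase tracks the motion.
phase = [cmath.phase(s) for s in sig]

def dft_mag(f_hz):
    return abs(sum(p * cmath.exp(-2j * math.pi * f_hz * n / FS)
                   for n, p in enumerate(phase)))

# Pick the strongest bin in a plausible respiration band (0.10-0.50 Hz).
cands = [0.05 * k for k in range(2, 11)]
est = max(cands, key=dft_mag)
print(f"estimated respiration rate: {est * 60:.0f} breaths/min")
```

In the real system the weaker heartbeat component sits under phase noise and clutter, which is where the adaptive clutter cancellation and the PSNS subspace method come in.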


Date Created
2013

The influence of dome size, parent vessel angle, and coil packing density on coil embolization treatment in cerebral aneurysms

Description

A cerebral aneurysm is a bulging of a blood vessel in the brain. Aneurysmal rupture affects 25,000 people each year and is associated with a 45% mortality rate. Therefore, it is critically important to treat cerebral aneurysms effectively before they rupture. Endovascular coiling is the most effective treatment for cerebral aneurysms. During the coiling process, a series of metallic coils is deployed into the aneurysmal sac with the intent of reaching a sufficient packing density (PD). Coil packing can facilitate thrombus formation and help seal off the aneurysm from circulation over time. While coiling is effective, high rates of treatment failure have been associated with basilar tip aneurysms (BTAs). Treatment failure may be related to geometrical features of the aneurysm. The purpose of this study was to investigate the influence of dome size, parent vessel (PV) angle, and PD on post-treatment aneurysmal hemodynamics using both computational fluid dynamics (CFD) and particle image velocimetry (PIV). Flows in four idealized BTA models with combinations of dome sizes and two different PV angles were simulated using CFD and then validated against PIV data. Percent reductions in post-treatment aneurysmal velocity and cross-neck (CN) flow, as well as percent coverage of low wall shear stress (WSS) area, were analyzed. In all models, aneurysmal velocity and CN flow decreased after coiling, while low WSS area increased. With increasing PD, further reductions were observed in aneurysmal velocity and CN flow, but minimal changes were observed in low WSS area. Overall, coil PD had the greatest impact on aneurysmal hemodynamics, while dome size had a greater impact than PV angle. These findings lead to the conclusion that combinations of treatment goals and geometric factors may play key roles in coil embolization treatment outcomes, and support the idea that different treatment timing may be a critical factor in treatment optimization.
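Packing density, the key treated variable above, is conventionally the total inserted coil volume divided by the aneurysm volume; the sketch below computes it for a spherical dome, treating each coil as a cylinder of wire. The coil and dome dimensions are illustrative, not the study's models.

```python
import math

# Packing density = total coil volume / aneurysm volume.  Coils are modeled as
# cylinders of wire and the dome as a sphere; all dimensions are illustrative.

def coil_volume_mm3(wire_diameter_mm, length_cm):
    r = wire_diameter_mm / 2.0
    return math.pi * r ** 2 * (length_cm * 10.0)

def dome_volume_mm3(diameter_mm):
    return (4.0 / 3.0) * math.pi * (diameter_mm / 2.0) ** 3

# Three hypothetical 0.010-inch (0.254 mm) coils in a 6 mm spherical dome.
coils = [(0.254, 10.0), (0.254, 8.0), (0.254, 6.0)]  # (wire dia mm, length cm)
pd = sum(coil_volume_mm3(d, l) for d, l in coils) / dome_volume_mm3(6.0)
print(f"packing density: {pd:.1%}")
```

Because dome volume grows with the cube of diameter, a larger dome needs disproportionately more coil length to reach the same PD, one reason dome size interacts with PD in the hemodynamic results.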


Date Created
2013

Electrocorticographic analysis of spontaneous conversation to localize receptive and expressive language areas

Description

When surgical resection becomes necessary to alleviate a patient's epileptiform activity, that patient is monitored by video synchronized with electrocorticography (ECoG) to determine the type and location of the seizure focus. This provides a unique opportunity for researchers to gather neurophysiological data with high temporal and spatial resolution; these data are assessed prior to surgical resection to ensure the preservation of the patient's quality of life, e.g., to avoid the removal of brain tissue required for speech processing. Currently considered the "gold standard" for the mapping of cortex, electrical cortical stimulation (ECS) involves the systematic activation of pairs of electrodes to localize functionally specific brain regions. This method has distinct limitations, which often include pain experienced by the patient. Even in the best cases, the technique suffers from subjective assessments on the parts of both patients and physicians, and from high inter- and intra-observer variability. Recent advances have been made as researchers have reported the localization of language areas through several signal processing methodologies, all necessitating patient participation in a controlled experiment. The development of a quantification tool to localize speech areas while a patient is engaged in an unconstrained interpersonal conversation would eliminate the dependence on biased patient and reviewer input, as well as unnecessary discomfort to the patient. Post-hoc ECoG data were gathered from five patients with intractable epilepsy while each was engaged in a conversation with family members or clinicians. After the data were separated into different speech conditions, the power of each was compared to baseline to determine statistically significantly activated electrodes. The results of several analytical methods are presented here.
The algorithms did not yield exclusively language-specific areas, as broad activation of statistically significant electrodes was apparent across cortical areas. For one patient, 15 adjacent contacts along the superior temporal gyrus (STG) and the posterior part of the temporal lobe were determined to be language-significant through a controlled experiment. The task involved the patient lying in bed listening to repeated words, and it yielded statistically significant activations that aligned with those of clinical evaluation. The results of this study do not support the hypothesis that unconstrained conversation may be used to localize areas required for receptive and productive speech, yet they suggest that a simple listening task may be an adequate alternative to direct cortical stimulation.
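The power-versus-baseline comparison described above can be sketched as a two-sample test on per-trial band power: simulate baseline and speech-condition trials, compute mean power per trial, and form a t statistic. The trial counts, gain, and significance threshold are illustrative assumptions, not the study's statistics.

```python
import math
import random
import statistics

random.seed(1)

# Per-trial band power for one electrode under baseline and speech conditions,
# compared with a two-sample t statistic.  Simulated data; the gain, trial
# counts, and threshold are illustrative assumptions.
def trial_power(gain, n_samples=200):
    samples = [gain * random.gauss(0.0, 1.0) for _ in range(n_samples)]
    return sum(s * s for s in samples) / n_samples

baseline = [trial_power(1.0) for _ in range(30)]
speech = [trial_power(1.4) for _ in range(30)]   # ~2x power during speech

def t_stat(a, b):
    """Welch-style t statistic for mean(b) - mean(a)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t = t_stat(baseline, speech)
print(f"t = {t:.1f}  significant at threshold 2.0: {t > 2.0}")
```

Repeating this per electrode and per speech condition, with an appropriate multiple-comparisons correction, gives the kind of electrode-significance map the analysis above produces.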


Date Created
2013