Matching Items (45)
Description
Computed tomography (CT) is one of the essential imaging modalities for medical diagnosis. Since its introduction in 1972, CT technology has improved dramatically, especially in terms of acquisition speed. However, the main principle of CT, acquiring only density information, did not change until recently. Different materials may have the same CT number, which can lead to uncertainty or misdiagnosis. Dual-energy CT (DECT) was recently reintroduced to address this problem by using the additional spectral information of X-ray attenuation, aiming for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information because pixel noise is amplified in the resulting difference image. In this work, a new model and an image enhancement technique for DECT are proposed, based on the fact that the attenuation of a high-density material decreases more rapidly as X-ray energy increases, a fact that has been ignored in most previous DECT image enhancement techniques. The proposed technique consists of offset correction, spectral error correction, and adaptive noise suppression. It reduced noise, improved contrast effectively, and showed better material differentiation in real patient images as well as in phantom studies.
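The noise-amplification issue described in this abstract can be made concrete with a minimal sketch (not the author's proposed technique): subtracting a high-energy image from a low-energy image adds their noise variances, and a simple local-variance-weighted smoothing of the difference image is one generic form of adaptive noise suppression. All images, attenuation factors, and noise levels below are invented assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_suppress(diff, window=5, noise_var=2.0 * 20.0**2):
    """Generic local-variance-weighted (Wiener-like) smoothing of a
    dual-energy difference image; illustrative only."""
    local_mean = uniform_filter(diff, window)
    local_var = uniform_filter(diff**2, window) - local_mean**2
    # Shrink toward the local mean wherever the local variance is close to
    # the (assumed known) noise variance, i.e. where there is little signal.
    gain = np.clip((local_var - noise_var) / np.maximum(local_var, 1e-9), 0.0, 1.0)
    return local_mean + gain * (diff - local_mean)

rng = np.random.default_rng(0)
phantom = np.zeros((128, 128))
phantom[40:90, 40:90] = 100.0                            # a "dense" insert
low = phantom * 1.0 + rng.normal(0, 20, phantom.shape)   # low-energy image
high = phantom * 0.7 + rng.normal(0, 20, phantom.shape)  # high-energy image (attenuates less)

diff = low - high                                        # spectral contrast lives here
print(np.std((low - phantom)[0:30, 0:30]))               # single-image noise (~20)
print(np.std(diff[0:30, 0:30]))                          # difference noise (~sqrt(2) larger)
print(np.std(adaptive_suppress(diff)[0:30, 0:30]))       # reduced after suppression
```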
Contributors: Park, Kyung Kook (Author) / Akay, Metin (Thesis advisor) / Pavlicek, William (Committee member) / Akay, Yasemin (Committee member) / Towe, Bruce (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The technology expansion seen in the last decade for genomics research has permitted the generation of large-scale data sources pertaining to molecular biological assays, genomics, proteomics, transcriptomics, and other modern omics catalogs. New methods to analyze, integrate, and visualize these data types are essential to unveil relevant disease mechanisms. Towards these objectives, this research focuses on data integration within two scenarios: (1) transcriptomic, proteomic, and functional information and (2) real-time sensor-based measurements motivated by single-cell technology. To assess relationships between protein abundance, transcriptomic, and functional data, a nonlinear model was explored at static and temporal levels. The successful integration of these heterogeneous data sources through the stochastic gradient boosted tree approach and its improved predictability are highlights of this work. Through the development of an innovative validation subroutine based on a permutation approach and the use of external information (i.e., operons), the lack of a priori knowledge for undetected proteins was overcome. The integrative methodologies allowed for the identification of undetected proteins in Desulfovibrio vulgaris and Shewanella oneidensis for further laboratory exploration of functional relationships. In an effort to better understand diseases such as cancer at different developmental stages, the Microscale Life Science Center headquartered at Arizona State University is pursuing single-cell studies by developing novel technologies. This research arranged and applied a statistical framework that tackled the following challenges: random noise, heterogeneous dynamic systems with multiple states, and understanding cell behavior within and across different Barrett's esophageal epithelial cell lines using oxygen consumption curves. These curves were characterized with good empirical fit using nonlinear models with simple structures, which allowed extraction of a large number of features. Application of a supervised classification model to these features and the integration of experimental factors allowed for identification of subtle patterns among different cell types, visualized through multidimensional scaling. Motivated by the challenges of analyzing real-time measurements, we further explored a unique two-dimensional representation of multiple time series using a wavelet approach, which showed promising results towards less complex approximations. The benefits of external information were also explored to improve the image representation.
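As a rough illustration of the integration idea described above (not the dissertation's actual pipeline or data), the sketch below fits a gradient boosted tree model that predicts a protein-abundance response from synthetic transcript-level features and checks it against a permutation baseline, loosely mirroring the permutation-based validation mentioned in the abstract. All feature names and data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
mrna = rng.normal(size=(n, 3))                # hypothetical transcript-level features
protein = 2.0 * np.tanh(mrna[:, 0]) + 0.5 * mrna[:, 1] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(mrna, protein, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
r2_real = model.score(X_te, y_te)

# Permutation baseline: refit on label-shuffled training data; a real
# transcript-protein relationship should clearly beat the permuted fits.
perm_scores = []
for _ in range(20):
    y_perm = rng.permutation(y_tr)
    perm_scores.append(
        GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
        .fit(X_tr, y_perm).score(X_te, y_te))
print(f"R^2 real = {r2_real:.2f}, permuted mean = {np.mean(perm_scores):.2f}")
```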
Contributors: Torres Garcia, Wandaliz (Author) / Meldrum, Deirdre R. (Thesis advisor) / Runger, George C. (Thesis advisor) / Gel, Esma S. (Committee member) / Li, Jing (Committee member) / Zhang, Weiwen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Ionizing radiation used in patient diagnosis or therapy has short- and long-term negative effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information about the organ dose received is available to the patient or to quality assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the amount of absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the amount of absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize the angle and table location while minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a mixed integer program to determine the radiation field size and corresponding X-ray tube energy. In the computational experiments, results show further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation that hedges against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motion within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios corresponding to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
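A toy version of the kind of mixed integer program described above can be sketched with an off-the-shelf modeler; the candidate angles, energies, dose values, and image-quality coefficients below are invented for illustration, and the formulation is far simpler than the dissertation's large-scale model.

```python
# pip install pulp  (open-source MILP modeler with a bundled CBC solver)
import pulp

angles = ["LAO30", "RAO30", "PA"]          # candidate gantry angles (assumed)
energies = [70, 80, 90]                    # candidate tube kVp levels (assumed)
dose = {("LAO30", 70): 4.0, ("LAO30", 80): 5.5, ("LAO30", 90): 7.0,
        ("RAO30", 70): 3.0, ("RAO30", 80): 4.2, ("RAO30", 90): 6.1,
        ("PA", 70): 5.0, ("PA", 80): 6.0, ("PA", 90): 8.0}       # organ dose (a.u.)
quality = {("LAO30", 70): 0.55, ("LAO30", 80): 0.75, ("LAO30", 90): 0.90,
           ("RAO30", 70): 0.50, ("RAO30", 80): 0.70, ("RAO30", 90): 0.88,
           ("PA", 70): 0.65, ("PA", 80): 0.85, ("PA", 90): 0.95}  # image-quality index

prob = pulp.LpProblem("dose_minimization", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", dose.keys(), cat="Binary")       # pick a setting or not
prob += pulp.lpSum(dose[k] * x[k] for k in dose)                  # minimize organ dose
prob += pulp.lpSum(x[k] for k in dose) == 1                       # exactly one setting
prob += pulp.lpSum(quality[k] * x[k] for k in dose) >= 0.70       # image-quality floor
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("chosen (angle, kVp):", [k for k in dose if x[k].value() == 1])
```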
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Hepatocellular carcinoma (HCC) is a malignant tumor and the seventh most common cancer in humans. Every year there is a significant rise in the number of patients suffering from HCC. Most clinical research has focused on early detection of HCC so that the patient's chances of survival are high. Emerging advancements in functional and structural imaging techniques have provided the ability to detect microscopic changes in the tumor microenvironment and microstructure. The prime focus of this thesis is to validate the applicability of an advanced imaging modality, Magnetic Resonance Elastography (MRE), for HCC diagnosis. The research was carried out on data from three HCC patients, and three sets of experiments were conducted. The main focus was on the quantitative aspect of MRE in conjunction with texture analysis, an advanced image processing pipeline, and a multivariate machine learning method for accurate HCC diagnosis. We analyzed techniques to handle unbalanced data and evaluated the efficacy of sampling techniques. Along with this, we studied different machine learning algorithms and developed models using them. Performance metrics such as prediction accuracy, sensitivity, and specificity were used to evaluate the final model. We were able to identify the significant features in the dataset, and the selected classifier was robust in predicting the response class variable with high accuracy.
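The abstract's points about unbalanced classes and about reporting sensitivity and specificity can be illustrated with a small generic sketch (synthetic features, not MRE or texture data): oversample the minority class in the training split only, then derive sensitivity and specificity from the confusion matrix. Class sizes, features, and the classifier choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.utils import resample

rng = np.random.default_rng(2)
# Hypothetical texture features: 180 benign (0) vs 20 tumor (1) samples.
X = np.vstack([rng.normal(0, 1, (180, 4)), rng.normal(1.5, 1, (20, 4))])
y = np.array([0] * 180 + [1] * 20)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Simple handling of imbalance: oversample the minority class (training data only).
n_major = int((y_tr == 0).sum())
X_bal = np.vstack([X_tr[y_tr == 0],
                   resample(X_tr[y_tr == 1], n_samples=n_major, random_state=0)])
y_bal = np.array([0] * n_major + [1] * n_major)

clf = SVC(kernel="rbf").fit(X_bal, y_bal)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy    :", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity :", tp / (tp + fn))   # true positive rate
print("specificity :", tn / (tn + fp))   # true negative rate
```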
Contributors: Bansal, Gaurav (Author) / Wu, Teresa (Thesis advisor) / Mitchell, Ross (Thesis advisor) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the buildup of plaque on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Due to the prognostic importance of early-stage identification, non-invasive assessment of plaque characterization is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies in order to determine the energy dependence of tissue attenuation values for each voxel. They generate virtual monochromatic energy images as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since the overlap between the CT attenuations measured in plaque components and in contrast material shows that a single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection, and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity in separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT offer valuable information for separating lipid from fibrous plaque, since their attenuation changes differently as the energy level changes. My second study proposed these data as the input to supervised learners for a more precise classification of lipid and fibrous plaques. My last study aimed at automatic segmentation of coronary arteries, characterizing plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extraction of features from regions of interest. The study proposed feature extraction strategies and selection of the important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
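A generic sketch of the idea that lipid and fibrous plaque differ in how their attenuation changes with energy, and that such energy-dependence features can feed a supervised learner, is given below. The mean attenuation values, the slope feature, and the feature-selection/classifier choices are illustrative assumptions, not the dissertation's measurements or pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 150
# Assumed mean attenuations (HU) at 40 keV and 100 keV per plaque type; the
# only point being modeled is that the energy dependence differs by class.
lipid_40, lipid_100 = rng.normal(20, 15, n), rng.normal(45, 15, n)
fib_40, fib_100 = rng.normal(120, 20, n), rng.normal(85, 20, n)
noise = rng.normal(0, 1, (2 * n, 3))                     # uninformative extra features

hu40 = np.concatenate([lipid_40, fib_40])
hu100 = np.concatenate([lipid_100, fib_100])
slope = (hu100 - hu40) / 60.0                            # HU change per keV
X = np.column_stack([hu40, hu100, slope, noise])
y = np.array([0] * n + [1] * n)                          # 0 = lipid, 1 = fibrous

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=3),         # keep the informative features
                    LogisticRegression())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```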
Contributors: Yamak, Didem (Author) / Akay, Metin (Thesis advisor) / Muthuswamy, Jit (Committee member) / Akay, Yasemin (Committee member) / Pavlicek, William (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease through non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, which utilized support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid versus fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
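The conventional Agatston score that serves as the reference in this work can be sketched as follows. The 130 HU threshold and the density weights (1 to 4 by peak attenuation band) are the standard published values; the single-slice connected-component lesion finding, pixel area, and minimum lesion size here are simplified assumptions, not the author's scoring implementation.

```python
import numpy as np
from scipy.ndimage import label

def agatston_slice(hu_slice, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Simplified per-slice Agatston score: area of each calcified lesion
    times a weight based on its peak attenuation."""
    mask = hu_slice >= threshold
    lesions, n = label(mask)                       # connected calcified regions
    score = 0.0
    for i in range(1, n + 1):
        region = lesions == i
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                               # ignore tiny, likely noisy specks
        peak = hu_slice[region].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score

# Tiny synthetic slice: one 3x3-pixel plaque peaking above 400 HU.
slice_hu = np.zeros((64, 64))
slice_hu[30:33, 30:33] = [[150, 300, 150], [300, 450, 300], [150, 300, 150]]
print(agatston_slice(slice_hu, pixel_area_mm2=0.25))   # 9 px * 0.25 mm^2 * weight 4 = 9.0
```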
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A P-value based method is proposed for statistical monitoring of various types of profiles in Phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals an out-of-control condition. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and the error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect that the number of observations within a sample has on the performance of the proposed method is investigated. The proposed method was also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
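One plausible reading of the monitoring rule described above can be sketched for a simple linear profile in Phase II, assuming the in-control intercept, slope, and error standard deviation are known: a P-value is computed for the standardized residual at each x-level, and the chart signals if any P-value falls below the significance level. The specific test statistic, significance level, and shift size below are illustrative assumptions, not necessarily the dissertation's.

```python
import numpy as np
from scipy.stats import norm

def profile_pvalues(x, y, beta0=3.0, beta1=2.0, sigma=1.0):
    """Two-sided P-value of the standardized residual at each x-level,
    against the known in-control line beta0 + beta1 * x."""
    z = (y - (beta0 + beta1 * x)) / sigma
    return 2.0 * (1.0 - norm.cdf(np.abs(z)))

def signals(x, y, alpha=0.005):
    # Chart rule: flag the sample if any level's P-value is below alpha.
    return bool(np.any(profile_pvalues(x, y) < alpha))

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 10)                               # fixed x-levels per sample
in_control = 3.0 + 2.0 * x + rng.normal(0, 1.0, x.size)
shifted = 5.5 + 2.0 * x + rng.normal(0, 1.0, x.size)    # 2.5-sigma intercept shift
# The in-control sample should rarely signal; the shifted one usually will.
print(signals(x, in_control), signals(x, shifted))
```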
Contributors: Adibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Rapid advances in sensor and information technology have resulted in a spatially and temporally data-rich environment, which creates a pressing need to develop novel statistical methods and the associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges in addressing these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity, and data uncertainty. Besides the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness, and numerical stability in practice. On the other hand, recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four kinds of general complex datasets, including "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets" and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery, and manufacturing. We describe the development of novel statistical models to analyze complex datasets that fall under these four categories, and we show how these models can be applied to real-world applications, such as Alzheimer's disease research, the nursing care process, and manufacturing.
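As a generic illustration of the sparse-learning idea mentioned above for high-dimensional data (many more variables than samples), and not of the dissertation's specific models, a lasso fit with a cross-validated penalty recovers a handful of truly relevant variables out of hundreds of candidates. The data dimensions and coefficients below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p = 80, 500                                   # far more variables than samples
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]      # only the first 5 variables matter
y = X @ true_coef + rng.normal(0, 1.0, n)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)  # penalty chosen by cross-validation
selected = np.flatnonzero(lasso.coef_ != 0)
print("non-zero coefficients:", selected.size)
print("true signals recovered:", np.intersect1d(selected, np.arange(5)).size, "of 5")
```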
Contributors: Huang, Shuai (Author) / Li, Jing (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The Department of Defense (DoD) acquisition system is a complex system riddled with cost and schedule overruns. These cost and schedule overruns are very serious issues, as the acquisition system is responsible for aiding U.S. warfighters. Hence, if the acquisition process is failing, that could be a potential threat to our nation's security. Furthermore, the DoD acquisition system is responsible for the proper allocation of billions of taxpayers' dollars and employs many civilians and military personnel. Much research has been done in the past on the acquisition system with little impact or success. One reason for this lack of success in improving the system is the lack of accurate models to test theories. This research is a continuation of the effort on the Enterprise Requirements and Acquisition Model (ERAM), a discrete event simulation model of the DoD acquisition system. We propose to extend ERAM using agent-based simulation principles because of the many interactions among the subsystems of the acquisition system. We initially identify ten sub-models needed to simulate the acquisition system. This research focuses on three sub-models related to the budget of acquisition programs. In this thesis, we present the data collection, data analysis, initial implementation, and initial validation needed to facilitate these sub-models and lay the groundwork for a full agent-based simulation of the DoD acquisition system.
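A minimal agent-based flavor of the budget dynamics described above (purely illustrative; not ERAM and not calibrated to any DoD data) can be written as program agents that request annual funding from a shared budget and accumulate schedule slip when their requests are cut pro rata. All names, funding rules, and numbers are assumptions.

```python
import random

class ProgramAgent:
    """Hypothetical acquisition program competing for annual budget."""
    def __init__(self, name, annual_need):
        self.name, self.annual_need, self.slip_years = name, annual_need, 0.0

    def step(self, funded):
        shortfall = max(0.0, self.annual_need - funded)
        # Assumed rule of thumb: unfunded work pushes the schedule out.
        self.slip_years += shortfall / self.annual_need

def run(years=10, total_budget=100.0, seed=0):
    random.seed(seed)
    programs = [ProgramAgent(f"P{i}", random.uniform(20, 50)) for i in range(4)]
    for _ in range(years):
        need = sum(p.annual_need for p in programs)
        scale = min(1.0, total_budget / need)      # pro-rata cut when over-subscribed
        for p in programs:
            p.step(p.annual_need * scale)
    return {p.name: round(p.slip_years, 1) for p in programs}

print(run())   # cumulative schedule slip (years) per program
```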
Contributors: Bucknell, Sophia Robin (Author) / Wu, Teresa (Thesis director) / Li, Jing (Committee member) / Colombi, John (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand, and the economic environment. A critical issue for companies to survive increasing competition is to monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework is proposed using simulation and online calibration methods to enable the adaptive management of large-scale complex supply chain systems. The design, implementation, and verification of the integrated approach are studied in this dissertation. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework of simulation and advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state and parameters by considering errors in both the simulation models and in measurements of the real-world system. Data fusion methods - Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering - are examined and discussed under varied conditions of system chaotic level, data quality, and data availability. Second, the proposed framework is developed, validated, and demonstrated in proof-of-concept case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data and emulation data to effectively improve the accuracy of detecting abnormalities. In the case study of the beer game supply chain model, the system's chaotic level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found to be more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement of simulation accuracy is up to 80% in a chaotic system and 60% in a stable system. In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is not only useful in supply chain management but also suitable for modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
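The data-fusion step described above can be illustrated with a scalar Kalman filter that calibrates a simulated inventory level against noisy measurements of the real system; the dynamics, drift, and noise levels below are invented assumptions, not the dissertation's case-study settings.

```python
import numpy as np

rng = np.random.default_rng(6)
T, q, r = 50, 0.5, 4.0          # steps, process-noise variance, measurement-noise variance
true = np.zeros(T); sim = np.zeros(T); P = np.zeros(T)
true[0] = 103.0                 # real initial inventory (unknown to the simulation)
sim[0], P[0] = 100.0, 10.0      # simulated initial inventory and its variance

for t in range(1, T):
    drift = -1.0                                     # assumed daily net depletion
    true[t] = true[t - 1] + drift + rng.normal(0, np.sqrt(q))
    z = true[t] + rng.normal(0, np.sqrt(r))          # noisy warehouse count
    # Predict with the simulation model, then correct with the measurement.
    x_pred = sim[t - 1] + drift
    P_pred = P[t - 1] + q
    K = P_pred / (P_pred + r)                        # Kalman gain
    sim[t] = x_pred + K * (z - x_pred)
    P[t] = (1 - K) * P_pred

print("final |error| with fusion   :", abs(sim[-1] - true[-1]).round(2))
print("final |error| open-loop sim :", abs((100.0 - 1.0 * (T - 1)) - true[-1]).round(2))
```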
Contributors: Wang, Shanshan (Author) / Wu, Teresa (Thesis advisor) / Fowler, John (Thesis advisor) / Pfund, Michele (Committee member) / Li, Jing (Committee member) / Pavlicek, William (Committee member) / Arizona State University (Publisher)
Created: 2010