Matching Items (25)
Description
Ionizing radiation used in patient diagnosis or therapy has negative short- and long-term effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information is available to the patient or to Quality Assurance about the amount of organ dose received. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the dose absorbed in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize the angle and table location while minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motion within a radiology procedure. We minimize the absorbed dose for the critical organs across all input-data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
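A minimal sketch of the flavor of the first-stage optimization described above, written in Python with the PuLP library and assuming a toy set of gantry angles, tube energies, and invented dose and image-quality coefficients; the dissertation's actual model is a large-scale mixed integer program with detailed geometry and dose physics.

```python
# Toy version of the first-stage model: pick one (angle, energy) configuration
# that minimizes critical-organ dose while meeting a minimum image-quality
# score. All names and coefficients below are invented for illustration.
import pulp

dose = {("AP", 60): 1.0, ("AP", 80): 1.8, ("AP", 100): 2.9,
        ("LAO30", 60): 0.7, ("LAO30", 80): 1.2, ("LAO30", 100): 2.1,
        ("RAO30", 60): 0.9, ("RAO30", 80): 1.5, ("RAO30", 100): 2.4}
quality = {("AP", 60): 0.55, ("AP", 80): 0.75, ("AP", 100): 0.95,
           ("LAO30", 60): 0.50, ("LAO30", 80): 0.70, ("LAO30", 100): 0.90,
           ("RAO30", 60): 0.60, ("RAO30", 80): 0.80, ("RAO30", 100): 0.97}
min_quality = 0.70

prob = pulp.LpProblem("dose_minimization", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(dose), cat="Binary")   # 1 if config chosen
prob += pulp.lpSum(dose[k] * x[k] for k in dose)           # minimize organ dose
prob += pulp.lpSum(x[k] for k in dose) == 1                # exactly one config
prob += pulp.lpSum(quality[k] * x[k] for k in dose) >= min_quality
prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [k for k in dose if x[k].value() == 1]
print("chosen configuration:", chosen)                     # [('LAO30', 80)]
```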
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model was developed, with training data from the beating heart phantom and plaques, which utilized support vector machines to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while the classification model developed here with dual energy CTA images exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
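A minimal sketch of the support-vector-machine classification step mentioned above, assuming synthetic low- and high-energy attenuation values as features rather than the beating-heart-phantom data used in the dissertation.

```python
# SVM separating lipid from fibrous soft-plaque pixels using low/high-kVp
# attenuation (HU) as features. Training values are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: [HU at low energy, HU at high energy]
lipid = rng.normal(loc=[30.0, 45.0], scale=8.0, size=(200, 2))
fibrous = rng.normal(loc=[90.0, 80.0], scale=8.0, size=(200, 2))
X = np.vstack([lipid, fibrous])
y = np.array([0] * 200 + [1] * 200)                  # 0 = lipid, 1 = fibrous

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[40.0, 50.0], [95.0, 85.0]]))     # expected: [0 1]
```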
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Multicore processors have proliferated in nearly all forms of computing, from servers and desktops to smartphones. The primary reason for this wide adoption of multicore processors is their ability to overcome the power wall by providing higher performance at a lower power consumption rate. With multi-cores, there is an increased need for dynamic energy management (DEM), much more so than for single-core processors, as DEM for multi-cores is no longer just a mechanism to ensure that a processor is kept under specified temperature limits, but also a set of techniques that manage various processor controls, such as dynamic voltage and frequency scaling (DVFS), task migration, and fan speed, to achieve a stated objective. The objectives span a wide range, from maximizing throughput, minimizing power consumption, reducing peak temperature, and maximizing energy efficiency to maximizing processor reliability, all subject to temperature, power, timing, and reliability constraints. Thus DEM can be very complex and challenging to achieve. Since many DEM techniques often operate together on a single processor, there is a need to unify them. This dissertation addresses that need. In this work, a framework for DEM is proposed that provides a unifying processor model including power, thermal, timing, and reliability models, and that supports various DEM control mechanisms, many different objective functions, and equally diverse constraint specifications. Using the framework, a range of novel solutions is derived for instances of DEM problems, including maximizing processor performance or energy efficiency and minimizing power consumption or peak temperature under constraints on maximum temperature, memory reliability, and task deadlines. Finally, a robust closed-loop controller is proposed to implement the above solutions on a real processor platform with very low operational overhead. Along with the controller design, a model identification methodology for obtaining the power and thermal models required by the controller is also discussed. The controller is architecture independent and hence easily portable across many platforms. The controller has been successfully deployed on an Intel Sandy Bridge processor, and its use has increased the energy efficiency of the processor by over 30%.
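A toy closed-loop controller in the spirit of the controller discussed above, assuming a fabricated first-order thermal model and made-up gains and frequency limits rather than the identified models and control design of the dissertation.

```python
# Scale frequency down when temperature approaches a cap and back up when
# there is thermal headroom. The "plant" is a fake first-order thermal model.
def plant(temp, freq, ambient=45.0, alpha=0.9, beta=12.0):
    """Fake thermal response: temperature relaxes toward ambient + beta*freq."""
    return alpha * temp + (1 - alpha) * (ambient + beta * freq)

def controller(temp, freq, t_cap=80.0, gain=0.02, f_min=0.8, f_max=3.4):
    """Proportional adjustment of frequency (GHz) based on thermal headroom."""
    freq += gain * (t_cap - temp)
    return min(max(freq, f_min), f_max)

temp, freq = 55.0, 3.4
for _ in range(200):                 # iterate until roughly steady
    temp = plant(temp, freq)
    freq = controller(temp, freq)
print(f"steady state: {temp:.1f} C at {freq:.2f} GHz")   # near 80 C, ~2.9 GHz
```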
Contributors: Hanumaiah, Vinay (Author) / Vrudhula, Sarma (Thesis advisor) / Chatha, Karamvir (Committee member) / Chakrabarti, Chaitali (Committee member) / Rodriguez, Armando (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Vehicles powered by electricity and alternative fuels are becoming a more popular form of transportation since they have less of an environmental impact than standard gasoline vehicles. Unfortunately, their success is currently inhibited by the sparseness of locations where the vehicles can refuel as well as the fact that many of the vehicles have a shorter range than gasoline-powered vehicles. These factors together create a "range anxiety" in drivers, which causes them to worry about the utility of alternative-fuel and electric vehicles and makes them less likely to purchase these vehicles. For the new vehicle technologies to thrive, it is critical that range anxiety is minimized and performance is increased as much as possible through proper routing and scheduling. In the case of long-distance trips taken by individual vehicles, routes must be chosen such that the vehicles take the shortest paths while not running out of fuel on the trip. When many vehicles are to be routed during the day and the refueling stations have limited capacity, care must be taken to avoid having too many vehicles arrive at the stations at any one time. If the vehicles that will need to be routed in the future are unknown, this problem is stochastic. For fleets of vehicles serving scheduled operations, switching to alternative fuels requires ensuring that the schedules do not cause the vehicles to run out of fuel. This is especially problematic since the locations where the vehicles may refuel are limited because the technology is new. This dissertation covers three related optimization problems: routing a single electric or alternative-fuel vehicle on a long-distance trip; routing many electric vehicles in a network where the stations have limited capacity and arrivals into the system are stochastic; and scheduling fleets of electric or alternative-fuel vehicles with limited locations to refuel. Different algorithms, some exact and some heuristic, are proposed to solve each of the three problems. The algorithms are tested on both random data and data relating to the State of Arizona.
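A small sketch of the single-vehicle routing idea described above, assuming an invented road network, station set, and vehicle range: a shortest-path search over (node, remaining range) states in which an edge may be traversed only if enough range remains, and range is restored at refueling stations.

```python
# Shortest feasible path for a range-limited vehicle; network data are invented.
import heapq

edges = {"A": [("B", 80), ("C", 120)], "B": [("D", 90)],
         "C": [("D", 60)], "D": []}
stations = {"C"}              # nodes where the vehicle can refuel to full range
full_range = 150

def shortest_feasible_path(start, goal):
    heap = [(0, start, full_range)]        # (distance so far, node, range left)
    seen = {}
    while heap:
        dist, node, rng = heapq.heappop(heap)
        if node == goal:
            return dist
        if seen.get((node, rng), float("inf")) <= dist:
            continue
        seen[(node, rng)] = dist
        for nxt, length in edges[node]:
            if length <= rng:              # enough range to reach the next node
                left = full_range if nxt in stations else rng - length
                heapq.heappush(heap, (dist + length, nxt, left))
    return None                            # no feasible route

print(shortest_feasible_path("A", "D"))    # 180, going via the station at C
```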
Contributors: Adler, Jonathan D (Author) / Mirchandani, Pitu B. (Thesis advisor) / Askin, Ronald (Committee member) / Gel, Esma (Committee member) / Xue, Guoliang (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Major advancements in biology and medicine have been realized during recent decades, including massively parallel sequencing, which allows researchers to collect millions or billions of short reads from a DNA or RNA sample. This capability opens the door to a renaissance in personalized medicine if effectively deployed. Three projects that address major and necessary advancements in massively parallel sequencing are included in this dissertation. The first study involves a pair of algorithms to verify patient identity based on single nucleotide polymorphisms (SNPs). In brief, we developed a method that allows de novo construction of sample relationships, e.g., determining which samples are from the same individual and which are from different individuals. We also developed a method to confirm the hypothesis that a tumor came from a known individual. The second study derives an algorithm to multiplex multiple Polymerase Chain Reaction (PCR) reactions while minimizing the interference between reactions that would compromise results. PCR is a powerful technique that amplifies pre-determined regions of DNA and is often used to selectively amplify DNA and RNA targets destined for sequencing. It is highly desirable to multiplex reactions to save on reagent and assay setup costs as well as to equalize the effect of minor handling issues across gene targets. Our solution involves a binary integer program that minimizes events likely to cause interference between PCR reactions. The third study involves the design and analysis methods required to analyze gene expression and copy number results against a reference range in a clinical setting for guiding patient treatments. Our goal is to determine which events are present in a given tumor specimen; these events may involve mutation, DNA copy number, or RNA expression. All three techniques are being used for their intended purposes in major research and diagnostic projects at the time of writing this manuscript. The SNP matching solution has been selected by The Cancer Genome Atlas to determine sample identity. Paradigm Diagnostics, Viomics, and International Genomics Consortium utilize the PCR multiplexing technique to multiplex various types of PCR reactions on multi-million dollar projects. The reference-range-based normalization method is used by Paradigm Diagnostics to analyze results from every patient.
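A toy illustration of the SNP-based identity check described above, assuming simulated genotype vectors and an arbitrary concordance threshold rather than the calibrated statistics developed in the dissertation.

```python
# Genotype concordance between two samples versus an assumed cutoff.
import numpy as np

def concordance(geno_a, geno_b):
    """Fraction of SNP sites with identical genotype calls (0/1/2 alt alleles)."""
    a, b = np.asarray(geno_a), np.asarray(geno_b)
    called = (a >= 0) & (b >= 0)             # -1 marks a missing call
    return np.mean(a[called] == b[called])

rng = np.random.default_rng(1)
sample1 = rng.integers(0, 3, size=1000)
sample2 = sample1.copy()
noisy = rng.choice(1000, size=30, replace=False)
sample2[noisy] = rng.integers(0, 3, size=30)          # simulate call errors
unrelated = rng.integers(0, 3, size=1000)

SAME_INDIVIDUAL_THRESHOLD = 0.90                      # assumed, for illustration
print(concordance(sample1, sample2) >= SAME_INDIVIDUAL_THRESHOLD)    # True
print(concordance(sample1, unrelated) >= SAME_INDIVIDUAL_THRESHOLD)  # False
```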
Contributors: Morris, Scott (Author) / Gel, Esma S (Thesis advisor) / Runger, George C. (Thesis advisor) / Askin, Ronald (Committee member) / Paulauskis, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the buildup of plaque on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Because of the prognostic importance of early-stage identification, non-invasive assessment of plaque composition is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies in order to determine the energy dependency of tissue attenuation values for each voxel. They generate virtual monochromatic energy images as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since the overlap between CT attenuation values measured in plaque components and in contrast material shows that a single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection, and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity in separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT are valuable for separating lipid from fibrous plaque, since the attenuation of each changes differently as the energy level changes. My second study proposed these data as the input to supervised learners for a more precise classification of lipid and fibrous plaques. My last study aimed at automatic segmentation of the coronary arteries, characterizing plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extracting features from regions of interest, and the study proposed feature extraction strategies and the selection of the important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
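A short sketch of one DECT-derived feature of the kind described above: the slope of a voxel's attenuation across virtual monochromatic energy levels, fit here by least squares on synthetic values.

```python
# Per-voxel attenuation-versus-energy slope as a material-discriminating feature.
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])        # keV levels
# rows = voxels, columns = attenuation (HU) at each energy level (synthetic)
hu = np.array([[120.0, 95.0, 80.0, 70.0, 64.0],   # calcified-like: steep drop
               [ 45.0, 42.0, 40.0, 39.0, 38.0],   # fibrous-like: mild drop
               [ 20.0, 22.0, 23.0, 24.0, 24.5]])  # lipid-like: nearly flat

# Fit HU ~ slope * energy + intercept for each voxel.
design = np.vstack([energies, np.ones_like(energies)]).T
coeffs, *_ = np.linalg.lstsq(design, hu.T, rcond=None)
slopes = coeffs[0]
print(slopes)   # more negative slope suggests a denser, more attenuating material
```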
Contributors: Yamak, Didem (Author) / Akay, Metin (Thesis advisor) / Muthuswamy, Jit (Committee member) / Akay, Yasemin (Committee member) / Pavlicek, William (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Computed tomography (CT) is one of the essential imaging modalities for medical diagnosis. Since its introduction in 1972, CT technology has improved dramatically, especially in terms of its acquisition speed. However, the main principle of CT, which consists in acquiring only density information, did not change at all until recently. Different materials may have the same CT number, which may lead to uncertainty or misdiagnosis. Dual-energy CT (DECT) was recently reintroduced to solve this problem by using the additional spectral information of X-ray attenuation, and it aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information due to the amplification of pixel noise in the resulting difference image. In this work, a new model and an image enhancement technique for DECT are proposed, based on the fact that the attenuation of a high-density material decreases more rapidly as X-ray energy increases, a fact that has been ignored in most previous DECT image enhancement techniques. The proposed technique consists of offset correction, spectral error correction, and adaptive noise suppression. It reduced noise, improved contrast effectively, and showed better material differentiation in real patient images as well as in phantom studies.
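A rough sketch of the kind of processing described above, assuming placeholder weights, window size, and blending rule; it is not the dissertation's offset-correction, spectral-error-correction, or noise-suppression algorithm.

```python
# Weighted low/high-energy difference ("spectral" signal) followed by a simple
# variance-adaptive smoothing step. All parameters are placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_image(low_kv, high_kv, w=1.3, offset=0.0):
    """Weighted difference image; w and offset would come from calibration."""
    return low_kv - w * high_kv - offset

def adaptive_suppress(img, window=5, noise_var=25.0):
    """Blend toward a local mean in proportion to the estimated local noise."""
    local_mean = uniform_filter(img, size=window)
    local_var = uniform_filter(img**2, size=window) - local_mean**2
    gain = np.clip(1.0 - noise_var / np.maximum(local_var, 1e-6), 0.0, 1.0)
    return local_mean + gain * (img - local_mean)

rng = np.random.default_rng(2)
low = rng.normal(100.0, 5.0, size=(64, 64))      # stand-in low-energy image
high = rng.normal(70.0, 5.0, size=(64, 64))      # stand-in high-energy image
out = adaptive_suppress(spectral_image(low, high))
print(out.shape, round(out.mean(), 2))
```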
Contributors: Park, Kyung Kook (Author) / Akay, Metin (Thesis advisor) / Pavlicek, William (Committee member) / Akay, Yasemin (Committee member) / Towe, Bruce (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Rapid advances in sensor and information technology have resulted in a spatially and temporally data-rich environment, which creates a pressing need to develop novel statistical methods and the associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges in addressing these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity, and data uncertainty. Besides the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness, and numerical stability in practice. On the other hand, recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four general kinds of complex datasets, namely "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets", and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery, and manufacturing. We describe the development of novel statistical models to analyze complex datasets that fall under these four categories, and we show how these models can be applied to real-world applications such as Alzheimer's disease research, nursing care processes, and manufacturing.
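A small example of the "sparse learning" idea named above: a lasso regression recovering a handful of relevant predictors from simulated high-dimensional data. Nothing here reproduces the dissertation's models.

```python
# Lasso on a simulated dataset with far more features than samples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 100, 500                                   # 100 samples, 500 features
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]       # only 5 features matter
y = X @ true_coef + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("non-zero coefficients at indices:", selected)   # mostly 0..4
```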
Contributors: Huang, Shuai (Author) / Li, Jing (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis presents a successful application of operations research techniques in a nonprofit distribution system to improve distribution efficiency and increase customer service quality. It focuses on the truck routing problems faced by St. Mary’s Food Bank Distribution Center. The problem is modeled as a capacitated vehicle routing problem to improve distribution efficiency and is extended to a capacitated vehicle routing problem with time windows to increase customer service quality. Several heuristics are applied to solve these vehicle routing problems and are tested on well-known benchmark problems. The algorithms are also evaluated by comparing their results with the plan currently used by St. Mary’s Food Bank Distribution Center. The results suggest the heuristics are quite competitive: on average, the heuristic solutions use 17% fewer trucks and 28.52% less travel time.
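A toy version of a constructive heuristic for the capacitated routing problem described above, assuming invented stop locations, demands, and truck capacity; it is not one of the heuristics evaluated in the thesis.

```python
# Nearest-neighbor construction that opens a new truck whenever the next stop
# would exceed capacity. Assumes every single demand fits on one truck.
import math

depot = (0.0, 0.0)
stops = {"s1": ((2, 3), 4), "s2": ((5, 1), 6), "s3": ((6, 6), 3),
         "s4": ((1, 7), 5), "s5": ((8, 2), 2)}    # name: (location, demand)
capacity = 10

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_routes():
    unvisited, routes = set(stops), []
    while unvisited:
        route, load, here = [], 0, depot
        while True:
            feasible = [s for s in unvisited if load + stops[s][1] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda s: dist(here, stops[s][0]))
            route.append(nxt)
            load += stops[nxt][1]
            here = stops[nxt][0]
            unvisited.discard(nxt)
        routes.append(route)
    return routes

print(nearest_neighbor_routes())    # two routes, each within the capacity of 10
```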
Contributors: Li, Xiaoyan (Author) / Askin, Ronald (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The shift in focus of manufacturing systems to high-mix and low-volume production poses a challenge to both efficient scheduling of manufacturing operations and effective assessment of production capacity. This thesis considers the problem of scheduling a set of jobs that require machine and worker resources to complete their manufacturing operations. Although planners in manufacturing contexts typically focus solely on machines, schedules that consider only machining requirements may be problematic during implementation because machines need skilled workers and cannot run unsupervised. The model used in this research will be beneficial in these environments, as planners would be able to determine more realistic assignments and operation sequences to minimize the total time required to complete all jobs. This thesis presents a mathematical formulation for concurrent scheduling of machines and workers that can optimally schedule a set of jobs while accounting for changeover times between operations. The mathematical formulation is based on disjunctive constraints that capture the conflict between operations when trying to schedule them on the same machine or worker. An additional formulation extends the previous one to consider how cross-training may impact production capacity and, for a given budget, provides training recommendations for specific workers and operations to reduce the makespan. If training a worker is advantageous for increasing production capacity, the model recommends the best time window in which to complete it so that overlaps with work assignments are avoided. It is assumed that workers can perform tasks involving the newly acquired skills as soon as training is complete. As an alternative to the mixed-integer programming formulations, this thesis provides a math-heuristic approach that fixes the order of some operations based on Largest Processing Time (LPT) and Shortest Processing Time (SPT) procedures, while allowing the exact formulation to find the optimal schedule for the remaining operations. Computational experiments include using the solution of the no-training problem as a starting feasible solution for the training problem. Although the models provided are general, the manufacturing of printed circuit boards is used as a case study.
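A small illustration of the LPT rule used to fix operation order in the math-heuristic described above, assuming made-up processing times and two machines; the thesis formulation also handles worker assignments and changeover times.

```python
# Largest-processing-time-first list scheduling on parallel machines.
import heapq

processing_times = {"op1": 7, "op2": 3, "op3": 9, "op4": 4, "op5": 6}
n_machines = 2

def lpt_schedule(times, machines):
    free_at = [(0, m) for m in range(machines)]   # (time machine frees up, id)
    heapq.heapify(free_at)
    assignment = {}
    for op, p in sorted(times.items(), key=lambda kv: -kv[1]):
        start, machine = heapq.heappop(free_at)
        assignment[op] = (machine, start, start + p)
        heapq.heappush(free_at, (start + p, machine))
    makespan = max(end for _, _, end in assignment.values())
    return assignment, makespan

schedule, makespan = lpt_schedule(processing_times, n_machines)
print(schedule)
print("makespan:", makespan)   # 16 here; the optimum for this instance is 15
```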
Contributors: Adams, Katherine Bahia (Author) / Sefair, Jorge (Thesis advisor) / Askin, Ronald (Thesis advisor) / Webster, Scott (Committee member) / Arizona State University (Publisher)
Created: 2019