Matching Items (11)
Description
Major advancements in biology and medicine have been realized during recent decades, including massively parallel sequencing, which allows researchers to collect millions or billions of short reads from a DNA or RNA sample. This capability opens the door to a renaissance in personalized medicine if effectively deployed. Three projects that address major and necessary advancements in massively parallel sequencing are included in this dissertation. The first study involves a pair of algorithms to verify patient identity based on single nucleotide polymorphisms (SNPs). In brief, we developed a method that allows de novo construction of sample relationships, e.g., which samples are from the same individual and which are from different individuals. We also developed a method to confirm the hypothesis that a tumor came from a known individual. The second study derives an algorithm to multiplex multiple Polymerase Chain Reaction (PCR) reactions while minimizing the interference between reactions that would compromise results. PCR is a powerful technique that amplifies pre-determined regions of DNA and is often used to selectively amplify DNA and RNA targets that are destined for sequencing. It is highly desirable to multiplex reactions to save on reagent and assay setup costs, and to equalize the effect of minor handling issues across gene targets. Our solution involves a binary integer program that minimizes events that are likely to cause interference between PCR reactions. The third study involves design and analysis methods required to analyze gene expression and copy number results against a reference range in a clinical setting for guiding patient treatments. Our goal is to determine which events are present in a given tumor specimen; these events may be mutations, DNA copy number alterations, or RNA expression changes. All three techniques are being used in major research and diagnostic projects for their intended purposes at the time of writing. The SNP matching solution has been selected by The Cancer Genome Atlas to determine sample identity. Paradigm Diagnostics, Viomics, and International Genomics Consortium utilize the PCR multiplexing technique to multiplex various types of PCR reactions on multi-million-dollar projects. The reference range-based normalization method is used by Paradigm Diagnostics to analyze results from every patient.
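As a rough illustration of the kind of binary integer program described above, here is a minimal pool-assignment sketch in Python using the open-source PuLP modeling library. The interaction scores, problem sizes, and the absence of pool-capacity constraints are hypothetical simplifications, not the dissertation's actual formulation.

```python
# Sketch of a pool-assignment binary integer program: assign PCR reactions to
# pools so total predicted interference among co-pooled reactions is minimized.
# Interaction scores are hypothetical; a real design would also bound pool sizes.
import pulp

n_reactions, n_pools = 6, 2
pairs = [(i, j) for i in range(n_reactions) for j in range(i + 1, n_reactions)]
score = {(i, j): (3 * i + j) % 4 for (i, j) in pairs}  # hypothetical interference

prob = pulp.LpProblem("pcr_multiplexing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n_reactions), range(n_pools)), cat="Binary")
y = {(i, j, p): pulp.LpVariable(f"y_{i}_{j}_{p}", cat="Binary")
     for (i, j) in pairs for p in range(n_pools)}

# Objective: total interference among reactions that share a pool.
prob += pulp.lpSum(score[i, j] * y[i, j, p]
                   for (i, j) in pairs for p in range(n_pools))
for i in range(n_reactions):                  # each reaction in exactly one pool
    prob += pulp.lpSum(x[i][p] for p in range(n_pools)) == 1
for (i, j) in pairs:                          # y = 1 when i and j share pool p
    for p in range(n_pools):
        prob += y[i, j, p] >= x[i][p] + x[j][p] - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
pools = {i: next(p for p in range(n_pools) if x[i][p].value() > 0.5)
         for i in range(n_reactions)}
print(pools)
```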
Contributors: Morris, Scott (Author) / Gel, Esma S. (Thesis advisor) / Runger, George C. (Thesis advisor) / Askin, Ronald (Committee member) / Paulauskis, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Technological advances have enabled the generation and collection of various data from complex systems, thus creating ample opportunity to integrate knowledge in many decision-making applications. This dissertation introduces holistic learning as the integration of a comprehensive set of relationships that are used towards the learning objective. The holistic view of the problem allows for richer learning from data and thereby improves decision making.

The first topic of this dissertation is the prediction of several target attributes using a common set of predictor attributes. In a holistic learning approach, the relationships between target attributes are embedded into the learning algorithm created in this dissertation. Specifically, a novel tree-based ensemble that leverages the relationships between target attributes to construct a diverse, yet strong, model is proposed. The method is justified through its connection to existing methods and through experimental evaluations on synthetic and real data.
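For context, the sketch below shows the standard multi-output baseline that such work builds on, not the novel ensemble itself: scikit-learn's random forest fits trees whose splits are shared across all targets, so correlated targets inform a common structure. The synthetic data is illustrative.

```python
# Baseline multi-target prediction with a shared-structure tree ensemble.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
# Two correlated targets (synthetic): both depend on the same features.
signal = X[:, 0] + 0.5 * X[:, 1]
Y = np.column_stack([signal + 0.1 * rng.normal(size=500),
                     2 * signal + 0.1 * rng.normal(size=500)])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)
print(model.predict(X[:3]))  # one row per sample, one column per target
```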

The second topic pertains to monitoring complex systems that are modeled as networks. Such systems present a rich set of attributes and relationships for which holistic learning is important. In social networks, for example, in addition to friendship ties, various attributes concerning the users' gender, age, topic of messages, time of messages, etc. are collected. A restricted form of monitoring fails to take the relationships among multiple attributes into account, whereas the holistic view embeds such relationships in the monitoring methods. The focus is on the difficult task of detecting a change that might impact only a small subset of the network and occur only in a sub-region of the high-dimensional space of the network attributes. One contribution is a monitoring algorithm based on a network statistical model. Another contribution is a transactional model that transforms the task into an expedient structure for machine learning, along with a generalizable algorithm to monitor the attributed network. A learning step in this algorithm adapts to changes that may only be local to sub-regions (with broader potential for other learning tasks). Diagnostic tools to interpret the change are provided. This robust, generalizable, holistic monitoring method is demonstrated on synthetic and real networks.
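One way to make the transactional framing concrete is a classifier two-sample test: encode each window of the attributed network as transactions, then check whether a classifier can tell the current window from a reference window. The sketch below follows that spirit on synthetic transactions; it is not the dissertation's exact algorithm.

```python
# Classifier-based change monitoring on a transactional encoding: each row is
# one edge/node record with its attributes. If a classifier separates the
# current window from the reference window better than chance, signal a change.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
ref = rng.normal(0, 1, size=(400, 5))          # reference-window transactions
cur = rng.normal(0, 1, size=(400, 5))
changed = rng.choice(len(cur), 80, replace=False)
cur[changed, 2] += 1.5                          # local change in a sub-region

X = np.vstack([ref, cur])
y = np.r_[np.zeros(len(ref)), np.ones(len(cur))]
auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, y, cv=5, scoring="roc_auc").mean()
print(f"separability AUC = {auc:.2f}")  # persistently above 0.5 suggests change
```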
Contributors: Azarnoush, Bahareh (Author) / Runger, George C. (Thesis advisor) / Bekki, Jennifer (Thesis advisor) / Pan, Rong (Committee member) / Saghafian, Soroush (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Ports play a critical role in the supply chains of contemporary companies and in global commerce. Because ports' operational effectiveness is critical to the development of competitive supply chains, their contribution to regional economies is essential. With the globalization of markets, the traffic of containers flowing through ports has increased significantly in recent decades. In order to attract additional container traffic and improve their comparative advantages over the competition, ports serving the same hinterlands explore ways to improve their operations to become more attractive to shippers. This research explores the hypothesis that by lowering the variability of the service time observed in the handling of containers, a port reduces the total logistics costs of its customers and increases its competitiveness as well as that of its customers. This thesis proposes a methodology to quantify the variability in a port's services arising from factors such as inefficient internal operations, vessel congestion, or external disruptions, and focuses on assessing the impact of this variability on users' logistics costs. The methodology also allows a port to define competitive strategies that take into account its variability and that of competing ports. These competitive strategies are then translated into specific parameters that can be used to design and adjust internal operations. The methodology includes (1) the definition of an economic model to measure the logistics impact of a port's variability, (2) a network analysis approach to the defined problem, and (3) a systematic procedure to determine competitive service time parameters for a port. After the methodology is developed, a case study is presented in which it is applied to the Port of Guaymas, finding service time parameters for this port that yield lower logistics costs than those observed at competing ports.
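Standard inventory theory gives one concrete channel through which port service-time variability raises users' logistics costs: lead-time variance inflates the safety stock required for a target service level. The sketch below uses the textbook safety-stock formula with hypothetical numbers; it illustrates the mechanism, not the thesis's economic model.

```python
# Safety stock SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2): higher lead-time
# (port service time) variability means more inventory for the same service
# level. All parameters are hypothetical.
import math

z = 1.645                      # safety factor for a 95% cycle service level
d, sigma_d = 100.0, 20.0       # daily demand mean and std dev (units)
h = 0.5                        # holding cost per unit per day

def safety_stock_cost(lead_mean, lead_std):
    ss = z * math.sqrt(lead_mean * sigma_d**2 + (d * lead_std)**2)
    return h * ss

print(safety_stock_cost(10, 1.0))  # low-variability port
print(safety_stock_cost(10, 3.0))  # high-variability port: higher daily cost
```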
Contributors: Meneses Preciado, Cesar (Author) / Villalobos, Jesus R. (Thesis advisor) / Gel, Esma S. (Committee member) / Maltz, Arnold B. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This research is motivated by a deterministic scheduling problem that is fairly common in manufacturing environments, where certain processes call for a machine working on multiple jobs at the same time. An example of such an environment is wafer fabrication in the semiconductor industry, where some stages can be modeled as batch processes. Significant work has been done in the past on a single stage of parallel machines that process jobs in batches. The primary motivation behind this research is to extend that work to a two-stage flow shop where jobs arrive with unequal ready times and belong to incompatible job families, with the goal of minimizing total weighted tardiness. As a first step, a mixed-integer programming model is developed for the problem at hand. The problem is NP-hard, and thus the mathematical program can solve only small problem instances in a reasonable amount of time. The next step is to build heuristics that provide feasible solutions in polynomial time for larger problem instances. The proposed heuristics are based on time-window decomposition, where jobs within a moving time frame are considered for batching each time a machine becomes available on either stage. The Apparent Tardiness Cost (ATC) rule is used to build batches and is modified to calculate ATC indices at the batch as well as the job level. An improvement to this heuristic is proposed, in which the heuristic is run iteratively, each time assigning the start times of jobs on the second stage as due dates for the jobs on the first stage. The underlying logic behind the iterative approach is to improve the way due dates are estimated for the first stage based on the assigned due dates for jobs on the second stage. An important study carried out as part of this research analyzes the bottleneck stage in terms of its location and how it affects the performance measure. Extensive experimentation is carried out to test how the quality of the solution varies when input parameters are varied between high and low values.
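The ATC priority index referenced above has a standard form, I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * p_bar)). The sketch below computes it for a few hypothetical jobs; the look-ahead parameter k, the job data, and the simple ranking are illustrative, not the thesis's batch-level variant.

```python
# Rank jobs by the Apparent Tardiness Cost (ATC) index at the current time.
import math

def atc_index(weight, proc_time, due, now, k, p_bar):
    slack = max(due - proc_time - now, 0.0)
    return (weight / proc_time) * math.exp(-slack / (k * p_bar))

jobs = [  # (family, weight, processing time, due date) -- hypothetical
    ("A", 2.0, 4.0, 20.0), ("A", 1.0, 4.0, 12.0), ("B", 3.0, 5.0, 15.0),
]
now, k = 8.0, 2.0
p_bar = sum(j[2] for j in jobs) / len(jobs)     # average processing time
for fam, w, p, d in sorted(jobs, key=lambda j: -atc_index(j[1], j[2], j[3],
                                                          now, k, p_bar)):
    print(fam, round(atc_index(w, p, d, now, k, p_bar), 3))
```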
Contributors: Tewari, Anubha Alokkumar (Author) / Fowler, John W. (Thesis advisor) / Monch, Lars (Thesis advisor) / Gel, Esma S. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Today's competitive markets force companies to constantly engage in the complex task of managing their demand. In make-to-order manufacturing or service systems, the demand for a product is shaped by price and lead times: high price and lead time quotes ensure profitability for the supplier but discourage customers from placing orders, while low prices and lead times generally result in high demand but do not necessarily ensure profitability. The price and lead time quotation problem considers the trade-off between offering high and low prices and lead times. Recent practices in make-to-order manufacturing companies reveal the importance of dynamic quotation strategies, under which the price and lead time quotes change flexibly depending on the status of the system. In this dissertation, the objective is to model a make-to-order manufacturing system and explore various aspects of dynamic quotation strategies, such as the behavior of optimal price and lead time decisions, the impact of customer preferences on optimal decisions, the benefits of employing dynamic quotation in comparison to simpler quotation strategies, and the benefits of coordinating price and lead time decisions. I first consider a manufacturer that receives demand from spot purchasers (who are quoted dynamic prices and lead times), as well as from contract customers who have agreements with the manufacturer with fixed price and lead time terms. I analyze how customer preferences affect the optimal price and lead time decisions, the benefits of dynamic quotation, and the optimal mix of spot purchasers and contract customers. These analyses necessitate the computation of the expected tardiness of customer orders at the moment a customer enters the system. Hence, in the second part of the dissertation, I develop methodologies to compute the expected tardiness in multi-class priority queues. For the single-class case, a closed-form expression is obtained. For the more complex multi-class case, numerical inverse Laplace transformation algorithms are developed. In the last part of the dissertation, I model a decentralized system with two components: a marketing department that determines the price quotes with the objective of maximizing revenues, and a manufacturing department that determines the lead time quotes to minimize lateness costs. I discuss the benefits of coordinating price and lead time decisions, and develop an incentivization scheme to reduce the negative impacts of the lack of coordination.
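To make the closed-form single-class case concrete under one common set of assumptions (FCFS M/M/1, where the sojourn time S is exponential with rate mu - lambda), expected tardiness against a quoted lead time l is E[(S - l)+] = exp(-(mu - lambda) * l) / (mu - lambda). The sketch below evaluates this; the queueing assumptions are mine, not necessarily the dissertation's exact model.

```python
# Expected tardiness of a quoted lead time in a stable FCFS M/M/1 queue, using
# the exponential sojourn-time distribution with rate mu - lambda.
import math

def expected_tardiness_mm1(lam, mu, lead_time):
    assert lam < mu, "queue must be stable"
    rate = mu - lam
    return math.exp(-rate * lead_time) / rate

print(expected_tardiness_mm1(lam=0.8, mu=1.0, lead_time=5.0))
```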
Contributors: Hafizoglu, Ahmet Baykal (Author) / Gel, Esma S. (Thesis advisor) / Villalobos, Jesus R. (Committee member) / Mirchandani, Pitu (Committee member) / Keskinocak, Pinar (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Healthcare operations have enjoyed reduced costs, improved patient safety, and innovation in healthcare policy over a huge variety of applications by tackling problems via the creation and optimization of descriptive mathematical models to guide decision-making. Despite these accomplishments, models are stylized representations of real-world applications, reliant on accurate estimations from historical data to justify their underlying assumptions. To protect against unreliable estimations, which can adversely affect the decisions generated from applications dependent on fully-realized models, techniques that are robust against misspecifications are utilized while still making use of incoming data for learning. Hence, new robust techniques are applied that (1) allow the decision-maker to express a spectrum of pessimism against model uncertainties while (2) still utilizing incoming data for learning. Two main applications are investigated with respect to these goals, the first being a percentile optimization technique for a multi-class queueing system, with application in hospital Emergency Departments. The second studies the use of robust forecasting techniques to improve developing countries' vaccine supply chains via (1) an innovative outside-of-cold-chain policy and (2) a district-managed approach to inventory control. Both of these research application areas utilize data-driven approaches that feature learning and pessimism-controlled robustness.
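A hedged sketch of percentile optimization under model uncertainty, in the spirit of the Emergency Department application: each candidate staffing level is scored by a high percentile of its simulated cost across sampled demand scenarios, so the chosen percentile tunes the decision-maker's pessimism. The cost model and all numbers are illustrative, not from the dissertation.

```python
# Percentile optimization: pick the decision minimizing a high percentile of
# cost over parameter-uncertainty scenarios (higher percentile = more pessimism).
import numpy as np

rng = np.random.default_rng(2)
multipliers = rng.uniform(0.6, 1.4, size=1000)   # uncertain demand scenarios

def cost(servers, demand_multiplier):
    congestion = max(demand_multiplier * 10.0 - servers, 0.0)  # crude proxy
    return 10.0 * servers + 50.0 * congestion

percentile = 90   # 50 recovers an average-case choice
best = min(range(5, 20),
           key=lambda s: np.percentile([cost(s, m) for m in multipliers],
                                       percentile))
print("staffing level under the 90th-percentile criterion:", best)
```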
Contributors: Bren, Austin (Author) / Saghafian, Soroush (Thesis advisor) / Mirchandani, Pitu (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In recent years, Operations Research (OR) has had a significant impact on improving the performance of hospital Emergency Departments (EDs). This includes improving a wide range of processes involving patient flow from the initial call to the ED through disposition, discharge home, or admission to the hospital. We mainly seek to illustrate the benefit of OR in EDs, and provide an overview of research performed in this vein to assist both researchers and practitioners. We also elaborate on possibilities for future researchers by shedding light on some less studied aspects that can have valuable impacts on practice.
Contributors: Austin, Garrett Alexander (Author) / Saghafian, Soroush (Thesis director) / Gel, Esma (Committee member) / Traub, Stephen (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2013-12
Description
In mixture-process variable experiments, it is common that the number of runs is greater than in mixture-only or process-variable experiments. These experiments have to estimate the parameters for the mixture components, the process variables, and the interactions between both types of variables. In some of these experiments there are variables that are hard to change or cannot be controlled under normal operating conditions. These situations often prohibit complete randomization of the experimental runs due to practical and economic considerations. Furthermore, the process variables can be categorized into two types: variables that are controllable and directly affect the response, and variables that are uncontrollable and primarily affect the variability of the response. The uncontrollable variables are called noise factors and are assumed controllable in a laboratory environment for the purpose of conducting experiments. A model containing both noise variables and control factors can be used to determine settings for the control factors that make the response "robust" to the variability transmitted from the noise factors. These types of experiments can be analyzed with a model for the mean response and a model for the slope of the response within a split-plot structure. When considering the experimental designs, low prediction variances for the mean and slope models are desirable. Methods for mixture-process variable designs with noise variables under restricted randomization are demonstrated, and several mixture-process variable designs that are robust to the coefficients of interaction with noise variables are evaluated using fraction of design space plots with respect to their prediction variance properties. Finally, a G-optimal design that minimizes the maximum prediction variance over the entire design region is created using a genetic algorithm.
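As a toy illustration of searching for a G-optimal design with a genetic algorithm (vastly simpler than the designs studied above, and for a plain first-order model rather than a mixture-process model), the sketch below evolves a small exact design to minimize the maximum scaled prediction variance x'(X'X)^{-1}x over a candidate grid.

```python
# Tiny genetic algorithm for G-optimality: choose n design points from a grid
# to minimize the maximum prediction variance of a first-order model.
import numpy as np

rng = np.random.default_rng(3)
grid = np.array([[1, a, b] for a in np.linspace(-1, 1, 11)
                 for b in np.linspace(-1, 1, 11)])   # model terms: 1, x1, x2

def max_pred_var(idx):
    X = grid[idx]
    try:
        M = np.linalg.inv(X.T @ X)
    except np.linalg.LinAlgError:
        return np.inf                                # singular: infeasible design
    return max(f @ M @ f for f in grid)

n, pop_size = 6, 30
pop = [rng.choice(len(grid), n, replace=False) for _ in range(pop_size)]
for _ in range(200):                                 # keep best half, mutate rest
    pop.sort(key=max_pred_var)
    for i in range(pop_size // 2, pop_size):
        child = pop[i - pop_size // 2].copy()
        child[rng.integers(n)] = rng.integers(len(grid))  # point mutation
        pop[i] = child
print("best max prediction variance:", round(max_pred_var(pop[0]), 3))
```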
Contributors: Cho, Tae Yeon (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Shunk, Dan L. (Committee member) / Gel, Esma S. (Committee member) / Kulahci, Murat (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
One of the critical issues in the U.S. healthcare sector is medication management. Mismanagement of medications not only brings unfavorable medical outcomes for patients, but also imposes avoidable medical expenditures, which partially account for the enormous $750 billion that the American healthcare system wastes annually. The lack of efficiency in medical outcomes can be due to several reasons. One of them is the problem of drug intensification: a problem associated with more aggressive management of medications and its negative consequences for patients.

To address this and many other challenges regarding medication mismanagement, I take advantage of data-driven methodologies in which a decision-making framework for identifying optimal medication management strategies is established based on real-world data. This data-driven approach has the advantage of supporting decision-making processes with data analytics, so that the decisions made can be validated against verifiable data. Thus, compared to purely theoretical methods, my methodology is more applicable to patients as the ultimate beneficiaries of the healthcare system.

Based on this premise, in this dissertation I analyze and advance three streams of research that are influenced by issues involving the management of medications and treatments in different medical contexts. In particular, I discuss (1) the management of medications and treatment modalities for new-onset diabetes after solid-organ transplantation and (2) the epidemic of opioid prescription and abuse.
Contributors: Boloori, Alireza (Author) / Saghafian, Soroush (Thesis advisor) / Fowler, John (Thesis advisor) / Gel, Esma (Committee member) / Cook, Curtiss B. (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Hypertensive disorders of pregnancy (HDP) affect 5%-15% of pregnancies around the globe and are a leading cause of maternal and neonatal morbidity and mortality. HDP are progressive disorders for which the only cure is to deliver the baby. An increasing trend in the prevalence of HDP has been observed in recent years, and this trend is anticipated to continue due to the rise in the prevalence of conditions that strongly influence hypertension, such as obesity and metabolic syndrome. In order to lessen the adverse outcomes due to HDP, we need to study (1) the natural progression of HDP, (2) the risks of adverse outcomes associated with these disorders, and (3) the optimal timing of delivery for women with HDP.

In the first study, the natural progression of HDP in the third trimester of pregnancy is modeled with a discrete-time Markov chain (DTMC). The transition probabilities of the DTMC are estimated from clinical data with an order-restricted inference model that maximizes the likelihood function subject to a set of order restrictions between the transition probabilities. The results provide useful insights into the progression of HDP, and the estimated transition probabilities are used to parametrize the decision models in the third study.
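The order-restricted estimation step can be made concrete with a small example: maximize a multinomial likelihood for per-state progression probabilities subject to an order constraint, solved here with scipy. The counts and the single restriction (progression is likelier from the more severe state) are hypothetical stand-ins for the model described above.

```python
# Order-restricted MLE: estimate progression probabilities from transition
# counts subject to p_severe >= p_mild. Counts are hypothetical.
import numpy as np
from scipy.optimize import minimize

# counts[s] = (# stayed in state s, # progressed from s); s = 0: mild, 1: severe
counts = np.array([[80, 20], [30, 25]])

def neg_log_lik(p):          # p[s] = progression probability from state s
    return -sum(c[1] * np.log(p[s]) + c[0] * np.log(1 - p[s])
                for s, c in enumerate(counts))

res = minimize(neg_log_lik, x0=[0.2, 0.4], bounds=[(1e-6, 1 - 1e-6)] * 2,
               constraints=[{"type": "ineq", "fun": lambda p: p[1] - p[0]}])
print("restricted MLE:", res.x)   # satisfies the order restriction
```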

In the second study, the risks of maternal and neonatal adverse outcomes for women with HDP are quantified with a composite measure of childbirth morbidity, and the estimated risks are compared with respect to the type of HDP at delivery, gestational age at delivery, and type of delivery in a retrospective cohort study. Furthermore, the safety of child delivery with respect to the same variables is assessed with a provider survey and the technique for order preference by similarity to ideal solution (TOPSIS). The methods and results of this study are used to parametrize the decision models in the third study.
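TOPSIS itself is a published, mechanical procedure; a compact implementation follows. The decision matrix, weights, and criterion directions below are hypothetical; in the study they would come from the provider survey.

```python
# TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    V = matrix / np.linalg.norm(matrix, axis=0) * weights  # normalize, weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                    # closeness in [0, 1]

M = np.array([[0.7, 0.2], [0.5, 0.1], [0.9, 0.4]])  # rows: delivery options
scores = topsis(M, weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))     # column 2 is a cost criterion
print(scores.argsort()[::-1])                        # best option first
```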

In the third study, the decision problem of timing of delivery for women with HDP is formulated as a discrete-time Markov decision process (MDP) model that minimizes the risks of maternal and neonatal adverse outcomes. We additionally formulate a robust MDP model that gives the worst-case optimal policy when transition probabilities are allowed to vary within their confidence intervals. The results of the decision models are assessed within a probabilistic sensitivity analysis (PSA) that considers the uncertainty in the estimated risk values. In our PSA, the performance of candidate delivery policies is evaluated using a large number of problem instances that are constructed according to the orders between model parameters to incorporate physicians' intuition.
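A minimal backward-induction sketch shows the shape of such an optimal-stopping MDP: at each gestational week, either deliver now and incur the current risk, or wait and face possible progression plus the future value. All numbers are hypothetical placeholders, not clinical estimates from the dissertation.

```python
# Finite-horizon backward induction for a deliver-vs-wait MDP.
weeks = range(34, 41)        # gestational weeks at which a decision is made
states = (0, 1)              # 0: mild HDP, 1: severe HDP

def deliver_risk(week, state):            # hypothetical composite risk
    return (41 - week) * 0.01 + state * 0.05

progress = 0.15                           # weekly chance mild becomes severe
V = {(41, s): deliver_risk(41, s) for s in states}   # delivery forced at term

policy = {}
for w in reversed(weeks):
    for s in states:
        wait = (progress * V[w + 1, 1] + (1 - progress) * V[w + 1, 0]
                if s == 0 else V[w + 1, 1])
        now = deliver_risk(w, s)
        policy[w, s] = "deliver" if now <= wait else "wait"
        V[w, s] = min(now, wait)

print([(w, policy[w, 0]) for w in weeks])  # optimal action per week, mild HDP
```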
Contributors: Demirtas, Aysegul (Author) / Gel, Esma S. (Thesis advisor) / Saghafian, Soroush (Thesis advisor) / Bekki, Jennifer (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2018