Matching Items (77)

Description

The success of genetically-modified T-cells in treating hematological malignancies has accelerated the research timeline for Chimeric Antigen Receptor-T (CAR-T) cell therapy. Since there are only two approved products (Kymriah and Yescarta), process knowledge is limited. This leads to low efficiency at the manufacturing stage, with serious challenges in cost and scalability. In addition, the individualized nature of the therapy limits inventory and creates a high risk of product loss due to supply chain failure. The sector needs a new manufacturing paradigm capable of quickly responding to individualized demands while accounting for complex system dynamics.

This research formulates the problem of CAR-T manufacturing design and studies its performance for large-scale production of personalized therapies. The solution develops a simulation environment for bio-manufacturing systems with single-use equipment. The result is BioMan: a discrete-event simulation model that captures the effects of the therapy's individualized nature, the type of processing, and quality-management policies on process yield and time, while simultaneously handling the available resource constraints. The tool is useful for understanding the impact of varying factor inputs on CAR-T cell manufacturing and will eventually help decision-makers select strategies that achieve better processing, higher resource utilization, and lower failure rates.
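As a concrete illustration of the discrete-event modeling style that BioMan implements, below is a minimal sketch in Python using the SimPy library. It is not the dissertation's code: the bioreactor capacity, processing times, arrival rate, and quality-failure probability are hypothetical placeholders.

```python
# Minimal discrete-event sketch of patient-specific batch manufacturing
# with single-use equipment (SimPy). All rates, capacities, and the
# quality-failure probability below are hypothetical placeholders.
import random
import simpy

BIOREACTORS = 2     # single-use bioreactor slots (hypothetical)
FAIL_PROB = 0.1     # chance a batch fails quality control (hypothetical)

def batch(env, name, bioreactors, results):
    with bioreactors.request() as slot:
        yield slot                                 # wait for a free single-use unit
        yield env.timeout(random.uniform(8, 12))   # expansion/processing (days)
    passed = random.random() > FAIL_PROB           # quality-management check
    results.append((name, env.now, passed))

def arrivals(env, bioreactors, results):
    for i in range(20):                            # 20 individualized orders
        env.process(batch(env, f"patient-{i}", bioreactors, results))
        yield env.timeout(random.expovariate(1 / 2))  # ~1 order every 2 days

random.seed(0)
env = simpy.Environment()
results = []
env.process(arrivals(env, simpy.Resource(env, capacity=BIOREACTORS), results))
env.run()
yield_rate = sum(p for _, _, p in results) / len(results)
print(f"completed {len(results)} batches, yield {yield_rate:.0%}")
```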
Contributors: Sharma, Gaurav (Author) / Pedrielli, Giulia (Thesis advisor) / Fainekos, Georgios (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Modern manufacturing systems are part of a complex supply chain where customer preferences are constantly evolving. The rapidly evolving market demands manufacturing organizations that are increasingly agile and flexible. Medium-term capacity planning for manufacturing systems employs queueing network models based on stationary demand assumptions. However, these assumptions are impractical for rapidly evolving supply chains. Nonstationary demand processes provide a reasonable framework for capturing the time-varying nature of modern markets, but the analysis of queues and queueing networks with time-varying parameters is mathematically intractable. In this dissertation, heuristics that draw upon existing steady-state queueing results are proposed to provide computationally efficient approximations for dynamic multi-product manufacturing systems, modeled as time-varying queueing networks with multiple customer classes (product types).

This dissertation considers two key aspects of dynamic multi-product manufacturing systems: performance evaluation and optimal server resource allocation. First, performance evaluation of systems with infinite queueing room and a first-come, first-served service discipline is considered. Second, systems with finite queueing room and priorities between product types are considered. Finally, the optimal server allocation problem is addressed in the context of dynamic multi-product manufacturing systems. The performance estimates developed earlier in the dissertation are leveraged within a simulated annealing framework to obtain server resource allocations.
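To illustrate the general idea of reusing steady-state results for time-varying systems (without reproducing the dissertation's specific heuristics), the sketch below applies the stationary M/M/c Erlang-C formula pointwise in time at the instantaneous arrival rate; the service rate, server count, and demand profile are hypothetical.

```python
# Pointwise stationary approximation (PSA): evaluate the steady-state
# M/M/c Erlang-C expected wait at each time point, using the demand
# rate at that instant. A generic illustration only.
import math

def erlang_c_wait(lam, mu, c):
    """Expected queueing delay in a stationary M/M/c system (Erlang C)."""
    rho = lam / (c * mu)
    if rho >= 1:
        return math.inf                         # unstable at this instant
    a = lam / mu
    tail = a**c / (math.factorial(c) * (1 - rho))
    denom = sum(a**k / math.factorial(k) for k in range(c)) + tail
    prob_wait = tail / denom                    # probability an arrival waits
    return prob_wait / (c * mu - lam)

mu, c = 1.0, 5                                  # service rate, servers (hypothetical)
lam_t = lambda t: 3.0 + 1.5 * math.sin(t)       # time-varying demand (hypothetical)
for t in [0, 2, 4, 6, 8]:
    print(f"t={t}: approx expected wait = {erlang_c_wait(lam_t(t), mu, c):.3f}")
```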
Contributors: Jampani Hanumantha, Girish (Author) / Askin, Ronald (Thesis advisor) / Ju, Feng (Committee member) / Yan, Hao (Committee member) / Mirchandani, Pitu (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Complex systems appear when interaction among system components creates emergent behavior that is difficult to predict from component properties alone. The growth of the Internet of Things (IoT) and embedded technology has increased complexity across several sectors (e.g., automotive, aerospace, agriculture, city infrastructures, home technologies, healthcare), where the paradigm of cyber-physical systems (CPSs) has become a standard. While CPSs enable unprecedented capabilities, they raise new challenges in system design, certification, control, and verification. Optimizing system performance often requires computationally expensive simulation tools, so search algorithms that sequentially interrogate a simulator to learn promising solutions are in great demand. This class of algorithms is known as black-box optimization. However, the generality that makes black-box optimization desirable also causes computational efficiency difficulties when it is applied to real problems. This thesis focuses on Bayesian optimization, a prominent black-box optimization family, and proposes new principles, translated into implementable algorithms, to scale Bayesian optimization to highly expensive, large-scale problems. Four problem contexts are studied, and approaches are proposed for practically applying Bayesian optimization concepts: (1) increasing the sample efficiency of a highly expensive simulator in the presence of other sources of information, where multi-fidelity optimization leverages complementary information sources; (2) accelerating global optimization in the presence of local searches by avoiding over-exploitation through adaptive restart behavior; (3) scaling optimization to high-dimensional input spaces by integrating game-theoretic mechanisms with traditional techniques; (4) accelerating optimization by embedding function structure when the reward function is a minimum of several functions. In the first context, this thesis produces two multi-fidelity algorithms, a sample-driven and a model-driven approach, implemented to optimize a serial production line; in the second, the Stochastic Optimization with Adaptive Restart (SOAR) framework is produced and analyzed with multiple applications to CPS falsification problems; in the third, the Bayesian optimization with sample fictitious play (BOFiP) algorithm is developed with an implementation in high-dimensional neural network training; in the last, the minimum surrogate optimization (MSO) framework is produced and combined with both Bayesian optimization and the SOAR framework, with applications in simultaneous falsification of multiple CPS requirements.
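For readers unfamiliar with the baseline the thesis builds on, below is a minimal Bayesian optimization loop with a Gaussian-process surrogate and expected-improvement acquisition. It is a generic sketch, not SOAR, BOFiP, or MSO; the test function and settings are arbitrary stand-ins for an expensive simulator.

```python
# Generic Bayesian optimization: fit a GP surrogate to all evaluations,
# maximize expected improvement (EI) on a grid, evaluate, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                        # stand-in for an expensive black-box simulator
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))             # small initial design
y = f(X).ravel()
grid = np.linspace(-3, 3, 400).reshape(-1, 1)   # candidate points

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()                               # minimization convention
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, 1)   # point that maximizes EI
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print(f"best found: f={y.min():.4f} at x={X[np.argmin(y)].item():.3f}")
```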
Contributors: Mathesen, Logan (Author) / Pedrielli, Giulia (Thesis advisor) / Candan, Kasim (Committee member) / Fainekos, Georgios (Committee member) / Gel, Esma (Committee member) / Montgomery, Douglas (Committee member) / Zabinsky, Zelda (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Natural disasters are occurring increasingly around the world, causing significant economic losses. To alleviate their adverse effects, it is crucial to plan proactively what should be done in response to them. This research develops proactive and real-time recovery algorithms for large-scale power networks exposed to weather events under uncertainty. These algorithms support recovery decisions that mitigate the disaster's impact, resulting in faster recovery of the network. The challenges associated with developing these algorithms are summarized below:

1. Even ignoring uncertainty, when the operating cost of the network is considered, the problem becomes a bi-level optimization, which is NP-hard.
2. To meet the requirement for real-time decision making under uncertainty, the problem can be formulated as a Stochastic Dynamic Program with the aim of minimizing total cost. However, considering the operating cost of the network violates the underlying assumptions of this approach.
3. The Stochastic Dynamic Programming approach is also not applicable to realistic problem sizes, due to the curse of dimensionality.
4. Uncertainty-based approaches for failure modeling rely on point-generation of failures and ignore the network structure.

To deal with the first challenge, chapter 2 proposes a heuristic solution framework and evaluates its performance through numerical experiments. To address the second challenge, chapter 3 formulates the problem as a Stochastic Dynamic Program and proposes an approximate dynamic programming heuristic to solve it; numerical experiments on synthetic and realistic test-beds show the satisfactory performance of the proposed approach. To address the third challenge, chapter 4 proposes an efficient base heuristic policy and an aggregation scheme in the action space; numerical experiments on a realistic test-bed verify the ability of the proposed method to recover the network more efficiently. Finally, to address the fourth challenge, chapter 5 proposes a simulation-based model that, using historical data and accounting for the interaction between network components, allows for analyzing the impact of adverse events on regional service level. A realistic case study showcases the applicability of the approach.
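One standard way to pair an approximate-dynamic-programming formulation with a base heuristic policy, in the spirit of chapters 3 and 4, is a rollout: evaluate each candidate first action by completing the schedule with the base heuristic, then pick the cheapest. The sketch below illustrates this on a toy repair-sequencing instance; the components, loads, and repair times are invented, and this is not the dissertation's model.

```python
# One-step rollout over a greedy base policy on a toy repair-sequencing
# problem. Each component is (name, load served, repair hours); cost is
# total load-hours lost while components await repair.
def greedy_order(components):
    # Base heuristic: repair highest load-per-repair-hour first.
    return sorted(components, key=lambda c: -c[1] / c[2])

def unserved_cost(order):
    t, cost = 0.0, 0.0
    for _, load, repair_time in order:
        t += repair_time            # single crew repairs sequentially
        cost += load * t            # this load was down until time t
    return cost

def rollout_step(components):
    # Try each component as the first repair, complete with the base
    # policy, and keep the first action with the lowest projected cost.
    best = None
    for c in components:
        rest = [x for x in components if x is not c]
        cost = unserved_cost([c] + greedy_order(rest))
        if best is None or cost < best[1]:
            best = (c, cost)
    return best

components = [("line-A", 40.0, 3.0), ("sub-B", 90.0, 8.0), ("line-C", 25.0, 1.0)]
action, cost = rollout_step(components)
print(f"repair {action[0]} first (projected cost {cost:.0f} load-hours)")
```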
Contributors: Inanlouganji, Alireza (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Reddy, T. Agami (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Drinking water quality violations are widespread in the United States and elsewhere in the world. More than half of Americans are not confident in the safety of their tap water, especially after the 2014 Flint, Michigan water crisis. Other than accidental contamination events, stagnation is a major cause of water quality degradation. Thus, there is a pressing need to build a real-time control system that can make control decisions quickly and proactively so that water quality can be maintained at all times. Toward this end, however, modeling the dynamics of water distribution systems is very challenging due to the complex fluid dynamics and chemical reactions in the system; this challenge must be addressed before moving on to the optimal control problem. The research in this dissertation leverages statistical machine learning approaches to approximate the complex water system dynamics and then develops different optimization models for proactive and real-time water quality control. The research focuses on two effective ways to maintain water quality, flushing of taps and injection of chlorine or other disinfectants; both of these actions decrease the equivalent “water age”, a useful proxy for water quality related to bacterial growth. The research first develops linear predictive models for water quality and subsequently linear programming optimization models for proactive water age control via flushing. The second part of the research considers both flushing and disinfectant injections in the control problem and develops mixed-integer quadratically constrained optimization models for controlling water age. Different control strategies for disinfectant injections are also evaluated: binary on-off injections and continuous injections. In the third part of the research, water demand is assumed to be uncertain and stochastic. The developed approach learns optimal real-time flushing decisions by combining temporal-difference reinforcement learning with linear value function approximation to approximately solve the underlying Markov decision processes. Computational results on widely used simulation models demonstrate that the developed control systems are effective for water quality control with known demands as well as when demands are uncertain and stochastic.
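The learning scheme named above, temporal-difference learning with linear value function approximation, has a compact core. Below is a minimal semi-gradient TD(0) sketch on a toy stand-in for the water-network problem; the feature map, dynamics, and rewards are hypothetical placeholders, not the dissertation's model.

```python
# Semi-gradient TD(0) with linear value-function approximation on a toy
# "water age" process. State is a scalar age proxy; the action is a
# flush decision. Features, dynamics, and rewards are invented.
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma = 0.05, 0.95
w = np.zeros(4)                                    # linear VFA weights

def phi(state):
    # Hypothetical feature map, e.g. functions of water age at a node.
    return np.array([1.0, state, state**2, np.sin(state)])

state = 0.5
for step in range(5000):
    action = rng.choice([0.0, 1.0])                # flush or not (toy policy)
    next_state = max(0.0, state + 0.1 - 0.4 * action + rng.normal(0, 0.02))
    reward = -next_state - 0.1 * action            # penalize age and flushing
    td_error = reward + gamma * w @ phi(next_state) - w @ phi(state)
    w += alpha * td_error * phi(state)             # semi-gradient TD(0) update
    state = next_state

print("learned weights:", np.round(w, 3))
```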
Contributors: Li, Xiushuang (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Biodiversity has been declining during the last decades due to habitat loss, landscape deterioration, environmental change, and human-related activities. In addition to its economic and cultural value, biodiversity plays an important role in keeping an environment's ecosystem in balance. Disrupting these processes can reduce the provision of natural resources such as food and water, which in turn poses a direct threat to human health. Protecting and restoring natural areas is fundamental to preserving biodiversity and mitigating the effects of ongoing environmental change. Unfortunately, it is impossible to protect every critical area due to resource limitations, requiring the use of advanced decision tools for the design of conservation plans. This dissertation studies three problems on the design of wildlife corridors and reserves that include patch-specific conservation decisions under spatial, operational, ecological, and biological requirements. In addition to the ecological impact of each problem's solution, this dissertation contributes a set of formulations, valid inequalities, and pre-processing and solution algorithms for optimization problems with spatial requirements. The first problem is a utility-based corridor design problem to connect fragmented habitats, where each patch has a utility value reflecting its quality. The corridor must satisfy geometry requirements such as connectivity and minimum width. We propose a mixed-integer programming (MIP) model to maximize the total utility of the corridor under the given geometry requirements, as well as a budget constraint to reflect the acquisition (or restoration) cost of the selected patches. To overcome the computational difficulty of solving large-scale instances, we develop multiple acceleration techniques, including a branch-and-cut algorithm enhanced with problem-specific valid inequalities and a bound-improving heuristic triggered at each integer node in the branch-and-bound exploration. We test the proposed model and solution algorithm on large-scale fabricated instances and a real case study for the design of an ecological corridor for the Florida Panther. Our modeling framework is able to solve instances of up to 1500 patches within 2 hours to optimality or with a small optimality gap. The second problem introduces species movement across the fragmented landscape into the corridor design problem. The premise is that dispersal dynamics, if available, must inform the design to account for the corridor's usage by the species. To this end, we propose a spatial discrete-time absorbing Markov chain (DTMC) approach to represent species dispersal and develop short- and long-term landscape usage metrics. We explore two different types of design problems: open and closed corridors. An open corridor is a sequence of landscape patches used by the species to disperse out of a habitat. For this case, we devise a dynamic programming algorithm that implicitly enumerates possible corridors and finds the one of maximum probability. The closed-corridor variant seeks a corridor of maximum probability connecting two fragmented habitats. To solve this variant, we extend the framework from the utility-based corridor design problem by blending the recursive Markov chain equations with a nonlinear network flow formulation. The third problem leverages the DTMC approach to explore a reserve design problem with spatial requirements such as connectivity and compactness. We approximate compactness using the concept of maximum reserve diameter, i.e., the largest distance allowed between two patches in the reserve. To solve this problem, we devise a two-stage approach that balances the trade-off between reserve usage probability and compactness. The first stage detects a subset of patches of maximum usage probability, while the second stage imposes the geometry requirements on the optimal solution obtained from the first stage. To overcome the computational difficulty of large-scale landscapes, we develop tailored solution algorithms, including a warm-up heuristic to initialize the branch-and-bound exploration, problem-specific valid inequalities, and a decomposition strategy that sequentially solves smaller problems on landscape partitions.
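The dispersal machinery behind the second and third problems, a spatial absorbing DTMC, reduces to the standard fundamental-matrix computation. The sketch below shows it for an invented landscape with three transient patches and two absorbing habitat patches; Q holds transient-to-transient movement probabilities and R the transitions into the habitats.

```python
# Absorption probabilities and expected dispersal time for a spatial
# absorbing DTMC. The 3-patch / 2-habitat numbers are hypothetical;
# each row of [Q | R] sums to 1.
import numpy as np

Q = np.array([[0.2, 0.3, 0.1],     # transient patch -> transient patch
              [0.3, 0.1, 0.2],
              [0.1, 0.2, 0.2]])
R = np.array([[0.4, 0.0],          # transient patch -> habitat A, habitat B
              [0.1, 0.3],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected patch visits
B = N @ R                          # B[i, j] = P(absorbed in habitat j | start i)
steps = N.sum(axis=1)              # expected steps before reaching a habitat

print("absorption probabilities:\n", np.round(B, 3))
print("expected time to absorption:", np.round(steps, 2))
```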
Contributors: Wang, Chao (Author) / Sefair, Jorge A. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pavlic, Theodore (Committee member) / Tong, Daoqin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

The analysis of clinical workflow offers many challenges to clinical stakeholders and researchers, especially in environments characterized by dynamic and concurrent processes. Workflow analysis in such environments is essential for monitoring performance and finding bottlenecks and sources of error. Clinical workflow analysis has been enhanced by the inclusion of modern technologies. One such intervention is automated location tracking, a system that detects the movement of clinicians and equipment. Utilizing the data produced by automated location tracking technologies can lead to novel workflow analytics that complement more traditional approaches such as ethnography and grounded-theory-based qualitative methods. The goals of this research are to: (i) develop a series of analytic techniques to derive deeper workflow-related insight in an emergency department setting, (ii) overlay data from disparate sources (quantitative and qualitative) to develop strategies that facilitate workflow redesign, and (iii) incorporate visual analytics methods to improve the targeted visual feedback providers receive based on the findings. The overarching purpose is to create a framework demonstrating the utility of automated location tracking data used in conjunction with clinical data, such as EHR logs, and its vital role in the future of clinical workflow analysis and analytics. This document is organized around the two primary aims of the research. The first aim deals with the use of automated location tracking data to develop a novel methodological/exploratory framework for clinical workflow. The second aim is to overlay the quantitative data generated from the first aim on data from qualitative observation and shadowing studies (mixed methods) to develop a deeper view of clinical workflow that can be used to facilitate workflow redesign. The final sections of the document speculate on the direction of this work, discussing its potential for the creation of fully integrated clinical environments, i.e., environments with state-of-the-art location tracking and other data collection mechanisms. The main purpose of this research is to demonstrate ways by which clinical processes can be continuously monitored, allowing for proactive adaptations in the face of technological and process changes to minimize any negative impact on the quality of patient care and provider satisfaction.
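As a small illustration of the kind of workflow analytic that automated location tracking enables, the sketch below derives dwell time per zone from timestamped location events. The event schema (clinician, zone, time) is a hypothetical stand-in for a real RTLS feed, not the system studied in this work.

```python
# Dwell time per zone from raw location-tracking events: for each
# clinician, the time spent in a zone is the gap until their next
# observed event. Toy data with a hypothetical schema.
import pandas as pd

events = pd.DataFrame({
    "clinician": ["RN1", "RN1", "RN1", "MD1", "MD1"],
    "zone": ["triage", "room_3", "station", "room_3", "station"],
    "time": pd.to_datetime([
        "2018-06-01 08:00", "2018-06-01 08:12", "2018-06-01 08:30",
        "2018-06-01 08:05", "2018-06-01 08:40",
    ]),
})

events = events.sort_values(["clinician", "time"])
events["dwell_min"] = (
    events.groupby("clinician")["time"].diff(-1).abs().dt.total_seconds() / 60
)
# Last event per clinician has no successor, so its dwell is dropped.
print(events.dropna().groupby("zone")["dwell_min"].sum())
```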
Contributors: Vankipuram, Akshay (Author) / Patel, Vimla L. (Thesis advisor) / Wang, Dongwen (Thesis advisor) / Shortliffe, Edward H (Committee member) / Kaufman, David R. (Committee member) / Traub, Stephen J (Committee member) / Arizona State University (Publisher)
Created: 2018