Matching Items (30)

Description
Arc Routing Problems (ARPs) are routing problems that seek routes of minimum total cost covering the edges or arcs of a graph representing a street or road network. They arise in many essential services such as residential waste collection and winter gritting. Because ARPs are NP-hard, solutions are usually found using heuristic methods. This dissertation contributes heuristics for ARPs, with a focus on the Capacitated Arc Routing Problem (CARP) with additional constraints. In operations such as residential waste collection, vehicle breakdowns occur frequently. A new variant, the Capacitated Arc Re-routing Problem for Vehicle Breakdown (CARP-VB), is introduced to address the need to re-route using only the remaining vehicles so that no services are missed. A new heuristic, Probe, is developed to solve CARP-VB. Experiments on benchmark instances show that Probe reduces the makespan and is hence effective in reducing delays and avoiding missed services. In addition to total cost, operators are interested in solutions that are attractive, that is, routes that are contiguous, compact, and non-overlapping, which makes the work easier to manage. Operators may not adopt an unattractive solution even if it is optimal. They are also interested in solutions with balanced workloads that meet equity requirements. A new multi-objective memetic algorithm, MA-ABC, is developed to optimize three objectives: attractiveness, makespan, and total cost. In tests on benchmark instances, MA-ABC was effective in providing attractive and balanced route solutions without degrading total cost. Changes in the problem specification, such as demand and topology, occur frequently in business operations. Machine learning can be applied to learn the distribution behind these changes and generate solutions quickly at inference time. Splice is a machine learning framework for CARP that quickly generates near-optimal solutions using a graph neural network and deep Q-learning. Splice can solve several variants of node and arc routing problems with the same architecture, without modification. Trained and tested on randomly generated instances, Splice generated solutions that are both faster to obtain and better than those of popular metaheuristics.
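
As a rough illustration of how CARP construction heuristics work, the following sketch implements a simplified path-scanning greedy: from the depot, each vehicle repeatedly serves the nearest unserved required edge that fits its remaining capacity. The instance data, capacity, and tie-breaking rule are invented for illustration; this is not the dissertation's Probe heuristic or Splice.

```python
import itertools

# Toy undirected network; depot at node 0.
# edge (u, v) -> (traversal cost, demand); demand > 0 means the edge must be serviced.
edges = {(0, 1): (4, 2), (1, 2): (3, 3), (2, 3): (5, 4), (0, 3): (6, 3), (1, 3): (2, 0)}
nodes = range(4)
CAPACITY, DEPOT = 6, 0

# All-pairs shortest-path costs (Floyd-Warshall) for deadheading between tasks.
INF = float("inf")
dist = {(i, j): 0 if i == j else INF for i in nodes for j in nodes}
for (u, v), (cost, _) in edges.items():
    dist[u, v] = dist[v, u] = min(dist[u, v], cost)
for k, i, j in itertools.product(nodes, repeat=3):
    dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

required = {e for e, (_, d) in edges.items() if d > 0}
routes = []
while required:
    load, pos, route = 0, DEPOT, []
    while True:
        # Nearest unserved task whose demand still fits in the vehicle.
        feasible = [e for e in required if load + edges[e][1] <= CAPACITY]
        if not feasible:
            break
        e = min(feasible, key=lambda t: min(dist[pos, t[0]], dist[pos, t[1]]))
        u, v = e if dist[pos, e[0]] <= dist[pos, e[1]] else (e[1], e[0])
        route.append((u, v))          # deadhead pos -> u, then serve u -> v
        load += edges[e][1]
        required.remove(e)
        pos = v
    routes.append(route)              # vehicle then deadheads back to the depot

print(routes)
```
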
Contributors: Ramamoorthy, Muhilan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Forrest, Stephanie (Committee member) / Mirchandani, Pitu (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The Partition of Variance (POV) method is a simple way to identify large sources of variation in manufacturing systems. The method partitions the variance by estimating the variance of the group means (between variance) and the mean of the within-group variances (within variance). The project shows that the method correctly identifies the variance source when compared with the ANOVA method. Although the variance estimators deteriorate when varying degrees of non-normality are introduced through simulation, the POV method is shown to be a more stable measure of variance in the aggregate. The POV method also provides non-negative, stable estimates of interaction when compared with the ANOVA method, and it is more stable particularly in small-sample situations. Based on these findings, it is suggested that POV is not a replacement for more complex analysis methods but rather a supplement to them. POV is well suited to preliminary analysis because of its ease of implementation, simplicity of interpretation, and lack of dependence on statistical analysis packages or specialized statistical knowledge.
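
A minimal numerical sketch of the partition described above, assuming a simple grouped dataset (five tools, twenty measurements each): the between variance is estimated as the variance of the group means, and the within variance as the mean of the within-group variances.

```python
import numpy as np

rng = np.random.default_rng(0)
# 5 tools (groups), 20 measurements each; tool means differ (a between-variance source).
data = rng.normal(loc=[0.0, 0.2, 0.4, 0.6, 0.8], scale=0.5, size=(20, 5))

group_means = data.mean(axis=0)
group_vars = data.var(axis=0, ddof=1)

between = group_means.var(ddof=1)   # variance of the group means
within = group_vars.mean()          # mean of the within-group variances

print(f"between-group variance estimate: {between:.3f}")
print(f"within-group variance estimate:  {within:.3f}")
# The dominant term flags the larger variation source; compare with a one-way ANOVA.
```
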
Contributors: Little, David John (Author) / Borror, Connie (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Model-based clustering is a sub-field of statistical modeling and machine learning. Mixture models use probabilities to describe the degree to which a data point belongs to each cluster, and these probabilities are updated iteratively during clustering. While mixture models have demonstrated superior performance in handling noisy data in many fields, challenges remain for high-dimensional datasets. Among a large number of features, some may not actually contribute to delineating the cluster profiles. Including these “noisy” features hinders the model from identifying the real cluster structure and increases computational time. Recognizing this issue, in this dissertation I first propose a new feature selection algorithm for continuous datasets and then extend it to mixed datatypes. Finally, as the third topic, I conduct uncertainty quantification of the feature selection results.

The first topic is an embedded feature selection algorithm termed the Expectation-Selection-Maximization (ESM) model, which automatically selects features while optimizing the parameters of a Gaussian mixture model. I introduce a relevancy index (RI) that reveals each feature's contribution to the clustering process to assist feature selection. I demonstrate the efficacy of ESM on two synthetic datasets, four benchmark datasets, and an Alzheimer’s Disease dataset.
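
The ESM internals are not reproduced here; the sketch below scores per-feature cluster relevance with a hypothetical proxy, the BIC improvement of a two-component Gaussian mixture over a single Gaussian, fitted per feature with scikit-learn. This score is only a stand-in for the dissertation's relevancy index (RI), whose exact definition differs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n = 300
# Two true clusters expressed only in feature 0; features 1 and 2 are noise.
X = np.column_stack([
    np.concatenate([rng.normal(-2, 1, n), rng.normal(2, 1, n)]),  # informative
    rng.normal(0, 1, 2 * n),                                      # noise
    rng.normal(0, 1, 2 * n),                                      # noise
])

for j in range(X.shape[1]):
    col = X[:, [j]]
    bic1 = GaussianMixture(1, random_state=0).fit(col).bic(col)
    bic2 = GaussianMixture(2, random_state=0).fit(col).bic(col)
    # Positive score: a 2-component mixture explains the feature better,
    # so the feature likely carries cluster structure.
    print(f"feature {j}: relevance score = {bic1 - bic2:.1f}")
```
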

The second topic focuses on extending the ESM algorithm to handle mixed datatypes. The Gaussian mixture model is generalized to the Generalized Model of Mixture (GMoM), which handles not only continuous features but also binary and nominal features.

The last topic concerns uncertainty quantification (UQ) of the feature selection. A new algorithm termed ESOM is proposed, which takes variance information into consideration while conducting feature selection. In addition, a set of outliers is generated in the feature selection process to infer the uncertainty in the input data. Finally, the selected features and detected outlier instances are evaluated by visual comparison.
Contributors: Fu, Yinlin (Author) / Wu, Teresa (Thesis advisor) / Mirchandani, Pitu (Committee member) / Li, Jing (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This dissertation focuses on three large-scale optimization problems and on devising algorithms to solve them. In addition to the societal impact of each problem’s solution, this dissertation contributes to the optimization literature a set of decomposition algorithms for problems whose optimal solution is sparse. These algorithms exploit problem-specific properties and use tailored strategies based on iterative refinement (outer-approximations). The proposed algorithms are not rooted in duality theory, providing an alternative to existing methods based on linear programming relaxations. However, it is possible to embed existing decomposition methods into the proposed framework. These general decomposition principles extend to other combinatorial optimization problems.
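
The following sketch illustrates the iterative-refinement (outer-approximation) principle on a toy convex problem, not the dissertation's models: a linear master problem is solved over an outer polyhedral approximation, and a tangent cut is added whenever the master solution violates the true nonlinear constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize -x - y subject to x**2 + y**2 <= 1 (a convex feasible set),
# solved by iteratively refining an outer polyhedral approximation with tangent cuts.
c = np.array([-1.0, -1.0])
A, b = [], []                       # accumulated cuts: A @ [x, y] <= b
bounds = [(-2, 2), (-2, 2)]         # box keeps the first relaxed master LP bounded

for it in range(100):
    res = linprog(c, A_ub=np.array(A) if A else None,
                  b_ub=np.array(b) if b else None, bounds=bounds)
    x = res.x
    g = x[0] ** 2 + x[1] ** 2 - 1.0            # constraint violation at the master solution
    if g <= 1e-6:
        break
    # Tangent cut at the infeasible point: g(x*) + grad(x*).(y - x*) <= 0
    # with grad = 2 x*, which simplifies to 2 x* . y <= 1 + ||x*||^2.
    A.append(2 * x)
    b.append(1.0 + x @ x)

print(f"iterations: {it + 1}, solution ~ {x}; optimum is (1/sqrt(2), 1/sqrt(2))")
```
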

The first problem is a route assignment and scheduling problem in which a set of vehicles needs to traverse a directed network while maintaining a minimum inter-vehicle distance at all times. This problem is inspired by applications in hazmat logistics and the coordination of autonomous agents. The proposed approach includes realistic features such as continuous-time vehicle scheduling, heterogeneous speeds, and minimum and maximum waiting times at any node.

The second problem is a fixed-charge network design problem, which aims to find a minimum-cost plan to transport a target amount of a commodity between known origins and destinations. In addition to the typical flow decisions, the model chooses the capacity of each arc and selects sources and sinks. The proposed algorithms admit any nondecreasing piecewise linear cost structure. This model is applied to the Carbon Capture and Storage (CCS) problem, which is to design a minimum-cost pipeline network to transport CO2 between industrial sources and geologic reservoirs for long-term storage.

The third problem extends the proposed decomposition framework to a special case of joint chance constraint programming with independent random variables. This model is applied to the probabilistic transportation problem, where demands are assumed stochastic and independent. Using an empirical probability distribution, this problem is formulated as an integer program with the goal of finding a minimum-cost distribution plan that satisfies all the demands with at least a given probability. The proposed scalable algorithm is based on a concave envelope approximation of the empirical probability function, which is iteratively refined as needed.
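
Because the demands are independent, the joint chance constraint factors into a product of marginal probabilities, which a log transform makes separable across destinations. A small numerical check of this identity, under assumed gamma-distributed demand samples (the distribution and supply levels below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
p_target = 0.9
samples = rng.gamma(5.0, 2.0, size=(3, 500))   # empirical demand samples, 3 destinations

def log_empirical_cdf(dest_samples, x):
    """log of the empirical P(demand <= x); independence makes the joint separable."""
    return np.log(np.mean(dest_samples <= x))

x = np.array([14.0, 15.0, 16.0])               # candidate supply levels
joint_log = sum(log_empirical_cdf(s, xi) for s, xi in zip(samples, x))
print("joint service probability ~", round(float(np.exp(joint_log)), 3),
      " meets target?", joint_log >= np.log(p_target))
```
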
Contributors: Matin Moghaddam, Navid (Author) / Sefair, Jorge (Thesis advisor) / Mirchandani, Pitu (Committee member) / Escobedo, Adolfo (Committee member) / Grubesic, Anthony (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Modern manufacturing systems are part of complex supply chains where customer preferences constantly evolve. Rapidly evolving markets demand that manufacturing organizations be increasingly agile and flexible. Medium-term capacity planning for manufacturing systems employs queueing network models based on stationary demand assumptions. However, these assumptions are not very practical for rapidly evolving supply chains. Nonstationary demand processes provide a reasonable framework for capturing the time-varying nature of modern markets, but the analysis of queues and queueing networks with time-varying parameters is mathematically intractable. In this dissertation, heuristics that draw upon existing steady-state queueing results are proposed to provide computationally efficient approximations for dynamic multi-product manufacturing systems modeled as time-varying queueing networks with multiple customer classes (product types). This dissertation addresses the performance evaluation of such manufacturing systems.

This dissertation considers two key aspects of dynamic multi-product manufacturing systems: performance evaluation and optimal server resource allocation. First, the performance evaluation of systems with infinite queueing room and a first-come, first-served service discipline is considered. Second, systems with finite queueing room and priorities between product types are considered. Finally, the optimal server allocation problem is addressed in the context of dynamic multi-product manufacturing systems; the performance estimates developed earlier in the dissertation are leveraged within a simulated annealing framework to obtain server resource allocations.
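
One classical way to draw upon steady-state results for time-varying queues, shown below, is the pointwise stationary approximation: evaluate stationary M/M/s (Erlang C) formulas at the instantaneous arrival rate. The rates and server count are illustrative, and this is a generic textbook heuristic rather than the dissertation's specific approximations.

```python
import math

def erlang_c_wait_prob(lam, mu, s):
    """Steady-state probability of waiting in an M/M/s queue (Erlang C)."""
    a = lam / mu                      # offered load
    rho = a / s
    if rho >= 1.0:
        return 1.0                    # unstable: everyone waits
    summ = sum(a ** k / math.factorial(k) for k in range(s))
    top = a ** s / math.factorial(s) / (1 - rho)
    return top / (summ + top)

mu, s = 1.0, 5                        # service rate per server, number of servers
for t in range(0, 24, 3):             # arrival rate follows a daily demand cycle
    lam_t = 3.0 + 1.5 * math.sin(2 * math.pi * t / 24)
    pw = erlang_c_wait_prob(lam_t, mu, s)
    wq = pw / (s * mu - lam_t)        # mean wait in queue implied by Erlang C
    print(f"t={t:2d}h  lambda={lam_t:.2f}  P(wait)={pw:.3f}  Wq={wq:.3f}")
```
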
Contributors: Jampani Hanumantha, Girish (Author) / Askin, Ronald (Thesis advisor) / Ju, Feng (Committee member) / Yan, Hao (Committee member) / Mirchandani, Pitu (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Short-notice disasters such as hurricanes involve uncertainties in many facets, from the time of their occurrence to the magnitude of their impacts. Failure to incorporate these uncertainties can reduce the effectiveness of emergency responses. In a hurricane event, uncertainties and their corresponding impacts can quickly cascade. Over the past decades, various storm forecast models have been developed to predict storm uncertainties; however, access to these models is limited. Hence, in the first part of this research, a data-driven simulation model is developed to generate spatial-temporal predicted storm hazards for each possible hurricane track modeled. The simulation model represents uncertainty in the storm's movement and its associated potential hazards as a probabilistic scenario tree in which each branch is associated with a scenario-level storm track and weather profile. Storm hazards such as strong winds, torrential rain, and storm surge can inflict significant damage on the road network and affect the population's ability to move during the storm event. A cascading network failure algorithm is introduced in the second part of the research; it takes the scenario-level storm hazards and predicts uncertainties in mobility states over the storm event. In the third part of the research, a methodology is proposed to generate a sequence of actions that simultaneously solves evacuation flow scheduling and route selection, minimizing the total flow time, or makespan, of the evacuation process from origins to destinations in the resulting stochastic time-dependent network. The methodology is applied to a case study of 2017 Hurricane Irma to recommend an evacuation policy for Manatee County, FL. The results are compared with evacuation plans for assumed scenarios; the research suggests that evacuation recommendations based on single scenarios reduce the effectiveness of the evacuation procedure. The overall contributions of this research are new methodologies to (1) predict and visualize the spatial-temporal impacts of an oncoming storm event, (2) predict uncertainties in the impacts on transportation infrastructure and mobility, and (3) determine the quickest evacuation schedule and routes under these uncertainties in the resulting stochastic transportation networks.
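
A minimal sketch of the probabilistic scenario-tree representation described above; the branch structure, labels, and probabilities are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One branch point of the storm scenario tree."""
    track: str                 # storm position / hazard profile at this stage
    prob: float                # conditional branch probability
    children: list = field(default_factory=list)

# Stage 0: current position; stage 1: the track veers or holds; stage 2: intensity.
root = Node("landfall-approach", 1.0, [
    Node("track-north", 0.4, [Node("cat-2", 0.7), Node("cat-3", 0.3)]),
    Node("track-hold", 0.6, [Node("cat-1", 0.5), Node("cat-2", 0.5)]),
])

def scenarios(node, path=(), p=1.0):
    """Enumerate leaf scenarios with their path probabilities."""
    path, p = path + (node.track,), p * node.prob
    if not node.children:
        yield path, p
    for child in node.children:
        yield from scenarios(child, path, p)

for path, p in scenarios(root):
    print(f"P={p:.2f}  " + " -> ".join(path))
```
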
Contributors: Gita, Ketut (Author) / Mirchandani, Pitu (Thesis advisor) / Maciejewski, Ross (Committee member) / Sefair, Jorge (Committee member) / Zhou, Xuesong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Drinking water quality violations are widespread in the United States and elsewhere in the world. More than half of Americans are not confident in the safety of their tap water, especially after the 2014 Flint, Michigan water crisis. Apart from accidental contamination events, stagnation is a major cause of water quality degradation. Thus, there is a pressing need for a real-time control system that makes control decisions quickly and proactively so that water quality can be maintained at all times. However, modeling the dynamics of water distribution systems is very challenging due to the complex fluid dynamics and chemical reactions in the system; this challenge must be addressed before the optimal control problem can be modeled. The research in this dissertation leverages statistical machine learning approaches to approximate the complex water system dynamics and then develops optimization models for proactive and real-time water quality control. It focuses on two effective ways to maintain water quality, flushing of taps and injection of chlorine or other disinfectants; both actions decrease the equivalent “water age”, a useful proxy for water quality related to bacterial growth. The research first develops linear predictive models for water quality and subsequently linear programming models for proactive water age control via flushing. The second part considers both flushing and disinfectant injections in the control problem and develops mixed-integer quadratically constrained optimization models for controlling water age. Two control strategies for disinfectant injection are also evaluated: binary on-off injections and continuous injections. In the third part, water demand is assumed to be uncertain and stochastic; the developed approach learns optimal real-time flushing decisions by combining temporal-difference reinforcement learning with linear value function approximation to approximately solve the underlying Markov decision processes. Computational results on widely used simulation models demonstrate that the developed control systems are effective for water quality control both with known demands and when demands are uncertain and stochastic.
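
A toy version of the proactive flushing model, assuming a hypothetical learned linear map from flushing volumes to water-age reduction (the coefficients below are made up, standing in for the learned predictive model): minimize total flushed water subject to a water-age cap at every node, solved as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear model: age_i = base_age_i - sum_j reduction[i, j] * flush_j,
# where flush_j is the flushing volume at tap j.
base_age = np.array([30.0, 26.0, 34.0])          # predicted water age (hours) per node
reduction = np.array([[2.0, 0.5, 0.0],           # hours of age removed per unit flushed
                      [0.5, 2.0, 0.5],
                      [0.0, 0.5, 2.0]])
MAX_AGE = 24.0                                   # quality target
MAX_FLUSH = 10.0                                 # per-tap flushing limit

# Minimize total flushed water s.t. age_i <= MAX_AGE at every node:
#   base_age - reduction @ flush <= MAX_AGE  <=>  -reduction @ flush <= MAX_AGE - base_age
res = linprog(c=np.ones(3),
              A_ub=-reduction, b_ub=MAX_AGE - base_age,
              bounds=[(0, MAX_FLUSH)] * 3)
print("flushing volumes:", res.x.round(2))
print("resulting ages:  ", (base_age - reduction @ res.x).round(2))
```
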
Contributors: Li, Xiushuang (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Natural disasters are occurring increasingly often around the world, causing significant economic losses. To alleviate their adverse effects, it is crucial to plan proactively what should be done in response to them. This research aims at developing proactive and real-time recovery algorithms for large-scale power networks exposed to weather events under uncertainty. These algorithms support recovery decisions that mitigate the disaster impact, resulting in faster recovery of the network. The challenges associated with developing these algorithms are summarized below: 1. Even ignoring uncertainty, when the operating cost of the network is considered the problem becomes a bi-level optimization problem, which is NP-hard. 2. To meet the requirement for real-time decision making under uncertainty, the problem can be formulated as a stochastic dynamic program with the aim of minimizing total cost; however, considering the operating cost of the network violates the underlying assumptions of this approach. 3. The stochastic dynamic programming approach is also not applicable to realistic problem sizes, due to the curse of dimensionality. 4. Uncertainty-based approaches for failure modeling rely on point-generation of failures and ignore the network structure. To address the first challenge, chapter 2 proposes a heuristic solution framework and evaluates its performance through numerical experiments. To address the second challenge, chapter 3 formulates the problem as a stochastic dynamic program and proposes an approximate dynamic programming heuristic to solve it; numerical experiments on synthetic and realistic test-beds show the satisfactory performance of the proposed approach. To address the third challenge, chapter 4 proposes an efficient base heuristic policy and an aggregation scheme in the action space; numerical experiments on a realistic test-bed verify the ability of the proposed method to recover the network more efficiently. Finally, to address the fourth challenge, chapter 5 proposes a simulation-based model that, using historical data and accounting for the interactions between network components, allows the impact of adverse events on regional service level to be analyzed. A realistic case study is then conducted to showcase the applicability of the approach.
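
The recovery algorithms themselves are not reproduced here; the sketch below shows the temporal-difference mechanic that approximate dynamic programming heuristics of this kind build on, using TD(0) with linear (here one-hot) value-function approximation on a toy restoration chain. The transition probabilities and costs are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GAMMA, ALPHA = 5, 0.95, 0.05

def features(s):
    """One-hot features; a real problem would use compact state descriptors."""
    phi = np.zeros(N_STATES)
    phi[s] = 1.0
    return phi

w = np.zeros(N_STATES)                       # value function V(s) ~ w . phi(s)
for episode in range(2000):
    s = 0                                    # start from the fully damaged state
    while s < N_STATES - 1:                  # terminal state: network fully restored
        s_next = min(s + rng.choice([0, 1], p=[0.3, 0.7]), N_STATES - 1)
        r = -1.0                             # cost per period until restoration
        target = r + GAMMA * (w @ features(s_next) if s_next < N_STATES - 1 else 0.0)
        td_error = target - w @ features(s)
        w += ALPHA * td_error * features(s)  # TD(0) update on the weights
        s = s_next

print("estimated state values:", w.round(2))
```
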
Contributors: Inanlouganji, Alireza (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Reddy, T. Agami (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Production systems are commonly restricted by time windows. For example, perishability is a major concern in food processing and requires that products such as yogurt, beer, and meat not stay in a buffer for long. Semiconductor manufacturing faces oxidation and moisture absorption issues if a product in a buffer is exposed to air for long. Machine reliability is a major source of uncertainty in production systems that causes residence time constraints to be violated, leading to potential product quality issues. Rapid advances in sensor technology and automation offer the potential to manage production in real time, but the system complexity introduced by residence time constraints makes it difficult to optimize system performance while guaranteeing product quality. To this end, this dissertation is dedicated to the modeling, analysis, and control of production systems with constrained time windows. The study starts with a small-scale serial production line with two machines and one buffer. Even the simplest serial lines can have an excessively large state space once residence time constraints are considered. A Markov chain model is developed to approximately analyze the line's transient behavior with high accuracy, and an iterative learning algorithm is proposed to perform real-time control. The analysis of the two-machine serial line supports the further analysis of more general and complex serial lines with multiple machines, where residence time constraints can be imposed at multiple stages. To deal with this, a two-machine-one-buffer subsystem isolated from a multi-stage serial production line is first analyzed and then acts as a building block in an aggregation method for overall system performance. The proposed aggregation method substantially reduces the complexity of the problem while maintaining high accuracy. A decomposition-based control approach is then proposed for multi-stage serial production lines: the production system is decomposed into small-scale subsystems, and an iterative aggregation procedure is used to generate a production control policy. This approach outperforms general-purpose reinforcement learning methods, delivering significant system performance improvement and a substantial reduction in computational overhead. The semiconductor assembly line is a typical production system in which products are restricted by time windows and production can be disrupted by machine failures. A production control problem for semiconductor assembly lines is presented and studied, minimizing total lot delay time and residence time constraint violations.
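
A minimal sketch of transient Markov chain analysis, the mechanic behind the two-machine model: the state distribution is propagated by repeated multiplication with the transition matrix. The single-machine up/down chain and its parameters are illustrative only; the dissertation's chain is far larger.

```python
import numpy as np

# Two-state machine: up (0) or down (1); per-slot failure and repair probabilities.
P_FAIL, P_REPAIR = 0.05, 0.4
P = np.array([[1 - P_FAIL, P_FAIL],
              [P_REPAIR, 1 - P_REPAIR]])

p = np.array([1.0, 0.0])                 # start in the up state
for t in range(1, 21):
    p = p @ P                            # transient distribution after t slots
    if t % 5 == 0:
        print(f"t={t:2d}  P(up)={p[0]:.4f}")

# Steady-state availability for comparison: repair / (repair + fail).
print("steady state P(up) =", P_REPAIR / (P_REPAIR + P_FAIL))
```
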
Contributors: Wang, Feifan (Author) / Ju, Feng (Thesis advisor) / Askin, Ronald (Committee member) / Mirchandani, Pitu (Committee member) / Patel, Nital (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of Distribution Centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs, selection among multiple truck sizes, and routed delivery to retailers. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially when transshipments and routing are involved. To find a good solution effectively, a two-phase solution methodology is derived: Phase I solves an integer programming model that includes all the constraints of the original model except that the routings are simplified to direct shipments using estimated routing cost parameters. The Phase II model then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing costs and the effectiveness of the two-phase methodology are evaluated, and the computational performance is found to be promising: the problem can be solved heuristically within a reasonable time frame for a broad range of problem sizes (one hour for an instance with 200 retailers). In addition, a model is formulated for a similar network design problem that considers direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios.
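
A stripped-down genetic algorithm for a DC-opening subproblem in the spirit of the network design above; the cost data and operators (truncation selection, uniform crossover, bit-flip mutation, elitism) are illustrative, not the dissertation's tailored heuristic.

```python
import numpy as np

rng = np.random.default_rng(2)
N_DC, N_RETAIL, POP, GENS = 6, 20, 40, 100
fixed_cost = rng.uniform(50, 100, N_DC)              # cost of opening each DC
assign_cost = rng.uniform(1, 20, (N_DC, N_RETAIL))   # DC-to-retailer service cost

def fitness(mask):
    """Total cost: opened-DC fixed costs + cheapest open assignment per retailer."""
    if not mask.any():
        return np.inf
    return fixed_cost[mask].sum() + assign_cost[mask].min(axis=0).sum()

pop = rng.random((POP, N_DC)) < 0.5                  # random initial population
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)
    parents = pop[order[: POP // 2]]                 # truncation selection
    # Uniform crossover between random parent pairs, then bit-flip mutation.
    i, j = rng.integers(0, len(parents), (2, POP))
    cross = rng.random((POP, N_DC)) < 0.5
    pop = np.where(cross, parents[i], parents[j])
    pop ^= rng.random((POP, N_DC)) < 0.02
    pop[0] = parents[0]                              # elitism: keep the best

best = min(pop, key=fitness)
print("open DCs:", np.flatnonzero(best), " cost:", round(float(fitness(best)), 1))
```
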
Contributors: Xia, Mingjun (Author) / Askin, Ronald (Thesis advisor) / Mirchandani, Pitu (Committee member) / Zhang, Muhong (Committee member) / Kierstead, Henry (Committee member) / Arizona State University (Publisher)
Created: 2013