Matching Items (32)
Description
Arc Routing Problems (ARPs) are routing problems that seek routes of minimum total cost covering the edges or arcs of a graph representing a street or road network. They find application in many essential services such as residential waste collection and winter gritting. Being NP-hard, they are usually solved using heuristic methods. This dissertation contributes heuristics for ARPs, with a focus on the Capacitated Arc Routing Problem (CARP) with additional constraints. In operations such as residential waste collection, vehicle breakdown disruptions occur frequently. A new variant, the Capacitated Arc Re-routing Problem for Vehicle Break-down (CARP-VB), is introduced to address the need to re-route using only the remaining vehicles to avoid missing services. A new heuristic, Probe, is developed to solve CARP-VB. Experiments on benchmark instances show that Probe is better at reducing the makespan and hence effective in reducing delays and avoiding missed services. In addition to total cost, operators are also interested in solutions that are attractive, that is, routes that are contiguous, compact, and non-overlapping, to ease managing the work. Operators may not adopt a solution that is not attractive even if it is optimal. They are also interested in solutions that are balanced in workload to meet equity requirements. A new multi-objective memetic algorithm, MA-ABC, is developed that optimizes three objectives: attractiveness, makespan, and total cost. On benchmark instances, MA-ABC was found to be effective in providing attractive and balanced route solutions without sacrificing total cost. Changes in the problem specification, such as demand and topology, occur frequently in business operations. Machine learning can be applied to learn the distribution behind these changes and generate solutions quickly at inference time.
Splice is a machine learning framework for CARP that quickly generates near-optimum solutions using a graph neural network and deep Q-learning. Splice can solve several variants of node and arc routing problems using the same architecture without modification. Splice was trained and tested using randomly generated instances; it generated solutions faster than popular metaheuristics, and of better quality.
Contributors: Ramamoorthy, Muhilan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Forrest, Stephanie (Committee member) / Mirchandani, Pitu (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This dissertation investigates congestion mitigation during the ingress of a planned special event (PSE). PSEs impact the regular operation of the transportation system within certain time periods due to increased travel demand or reduced capacity on certain road segments. For individual attendees, cruising for parking during a PSE can be a struggle given the severe congestion and scarcity of parking spaces in the network. With the growth of smartphone-based ridesharing services such as Uber and Lyft, more and more attendees are turning to ridesharing rather than driving themselves. This study explores congestion mitigation during a planned special event considering parking, ridesharing, and network configuration from both the attendees' and the planner's perspectives.

Parking availability (the occupancy of a parking facility) is the fundamental building block for both travelers and planners to make parking-related decisions. It is highly valued by travelers and is one of the most important inputs to many parking models. This dissertation proposes a model-based practical framework to predict future occupancy from historical occupancy data alone. The framework consists of two modules: estimation of model parameters and occupancy prediction. At the core of the predictive framework, a queuing model is employed to describe the stochastic occupancy change of a parking facility.
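As an illustration of this kind of queuing model (a minimal sketch, not the dissertation's calibrated framework), the code below treats a parking facility as a birth-death chain over occupancy levels and propagates the occupancy distribution forward in time. The arrival rate, per-car departure rate, capacity, and initial occupancy are all assumed for the example.

```python
import numpy as np

def birth_death_generator(lam, mu, capacity):
    """Generator of a birth-death chain whose state i is the number of
    occupied spaces: an arrival fills a space (rate lam, while space is
    left) and each parked car departs independently at rate mu."""
    n = capacity + 1
    Q = np.zeros((n, n))
    for i in range(n):
        if i < capacity:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = i * mu
        Q[i, i] = -Q[i].sum()
    return Q

def predict_occupancy(p0, lam, mu, capacity, horizon, dt=0.01):
    """Propagate the occupancy distribution forward by `horizon` time
    units with small Euler steps p <- p (I + Q dt)."""
    Q = birth_death_generator(lam, mu, capacity)
    step = np.eye(len(p0)) + Q * dt
    p = p0.copy()
    for _ in range(int(horizon / dt)):
        p = p @ step
    return p

capacity = 50
p0 = np.zeros(capacity + 1)
p0[10] = 1.0                        # 10 spaces occupied right now
p_future = predict_occupancy(p0, lam=8.0, mu=0.5, capacity=capacity, horizon=2.0)
expected_occupancy = p_future @ np.arange(capacity + 1)
```

The full distribution, not just a point forecast, is what a downstream parking model would consume; here the mean drifts from 10 toward the steady-state level lam/mu = 16 over the prediction horizon.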

From an attendee's perspective, the probability of finding parking at a particular facility is more valuable than occupancy information when searching for parking. However, it is hard to estimate parking probabilities even with accurate occupancy data in a dynamic environment. The second part of this dissertation goes one step further and investigates introducing learning algorithms into parking guidance and information systems that employ a central server, in order to provide estimated optimal parking search strategies to travelers. With the help of a Markov Decision Process (MDP), the parking search process on a network with uncertain parking availability can be modeled and analyzed.
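To make the MDP framing concrete, here is a minimal value-iteration sketch on a hypothetical three-lot network. Availability probabilities, walking costs, and driving costs are invented, and a failed attempt is assumed to re-sample availability independently on each visit; the dissertation's formulation is richer than this.

```python
# Hypothetical toy network: lot i has availability probability p[i],
# walking cost w[i] to the destination if parked there, and driving
# cost d[i, j] between lots. The searcher chooses which lot to try next.
p = {"A": 0.2, "B": 0.5, "C": 0.9}
w = {"A": 1.0, "B": 3.0, "C": 6.0}
d = {("A", "B"): 1.0, ("A", "C"): 2.0, ("B", "A"): 1.0,
     ("B", "C"): 1.0, ("C", "A"): 2.0, ("C", "B"): 1.0}

lots = list(p)
V = {i: 0.0 for i in lots}   # expected cost-to-go upon arriving at lot i
for _ in range(200):         # value iteration to a fixed point
    V_new = {}
    for i in lots:
        # if no space is found, drive to the cheapest next lot and continue
        cont = min(d[i, j] + V[j] for j in lots if j != i)
        V_new[i] = p[i] * w[i] + (1 - p[i]) * cont
    V = V_new
```

The fixed point trades off availability against walking distance: lot C almost always has space but is far from the destination, so arriving there is still costly in expectation.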

Finally, from a planner’s perspective, a bi-level model is proposed to generate a comprehensive PSE traffic management plan considering parking, ridesharing and route recommendations at the same time. The upper level is an optimization model aiming to minimize total travel time experienced by travelers. In the lower level, a link transmission model incorporating parking and ridesharing is used to evaluate decisions from and provide feedback to the upper level. A congestion relief algorithm is proposed and tested on a real-world network.
Contributors: Xiao, Jun, Ph.D. (Author) / Lou, Yingyan (Thesis advisor) / Pendyala, Ram (Committee member) / Zhou, Xuesong (Committee member) / Mirchandani, Pitu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Model-based clustering is a sub-field of statistical modeling and machine learning. Mixture models use probabilities to describe the degree to which a data point belongs to a cluster, and these probabilities are updated iteratively during clustering. While mixture models have demonstrated superior performance in handling noisy data in many fields, challenges remain for high-dimensional datasets. Among a large number of features, some may not actually help delineate the cluster profiles; including these "noisy" features confuses the model's identification of the real cluster structure and increases computational cost. Recognizing this issue, in this dissertation I first propose a new feature selection algorithm for continuous data, then extend it to mixed datatypes, and finally conduct uncertainty quantification of the feature selection results as the third topic.

The first topic is an embedded feature selection algorithm termed the Expectation-Selection-Maximization (ESM) model that can automatically select features while optimizing the parameters of a Gaussian Mixture Model (GMM). I introduce a relevancy index (RI) revealing each feature's contribution to the clustering process to assist feature selection. I demonstrate the efficacy of the ESM by studying two synthetic datasets, four benchmark datasets, and an Alzheimer's Disease dataset.
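As a toy illustration of feature relevance in mixture-model clustering (not the ESM algorithm, and not the dissertation's exact RI definition), the sketch below fits a small diagonal-covariance GMM by EM on synthetic data with two informative features and one noise feature, then scores each feature by the spread of the fitted component means relative to the within-component variance. A noise feature receives a near-zero score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features 0-1 separate two clusters; feature 2 is pure noise.
n = 400
X = np.vstack([
    rng.normal([0, 0, 0], 1.0, size=(n // 2, 3)),
    rng.normal([4, 4, 0], 1.0, size=(n // 2, 3)),
])

# Tiny diagonal-covariance GMM fitted with EM (K = 2 components).
K, d = 2, X.shape[1]
means = X[[0, -1]].copy()            # deterministic init, one point per cluster
var = np.ones((K, d))
pi = np.full(K, 1.0 / K)
for _ in range(50):
    # E-step: responsibilities from component-wise Gaussian log-densities
    logp = -0.5 * (((X[:, None, :] - means) ** 2) / var
                   + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances
    nk = r.sum(axis=0)
    pi = nk / len(X)
    means = (r.T @ X) / nk[:, None]
    var = (r.T @ (X ** 2)) / nk[:, None] - means ** 2 + 1e-6

# Relevancy-style score per feature: spread of component means relative
# to the (weighted) within-component variance.
ri = means.std(axis=0) ** 2 / (pi @ var)
```

With well-separated clusters, `ri` is large for features 0 and 1 and close to zero for the noise feature, which is the behavior an embedded feature selector exploits.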

The second topic focuses on extending the ESM algorithm to handle mixed datatypes. The Gaussian mixture model is generalized to the Generalized Model of Mixture (GMoM), which can handle not only continuous features but also binary and nominal ones.

The last topic concerns Uncertainty Quantification (UQ) of the feature selection. A new algorithm termed ESOM is proposed, which takes variance information into consideration while conducting feature selection. In addition, a set of outliers is generated in the feature selection process to infer the uncertainty in the input data. Finally, the selected features and detected outlier instances are evaluated by visual comparison.
Contributors: Fu, Yinlin (Author) / Wu, Teresa (Thesis advisor) / Mirchandani, Pitu (Committee member) / Li, Jing (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This dissertation focuses on three large-scale optimization problems and devising algorithms to solve them. In addition to the societal impact of each problem’s solution, this dissertation contributes to the optimization literature a set of decomposition algorithms for problems whose optimal solution is sparse. These algorithms exploit problem-specific properties and use tailored strategies based on iterative refinement (outer-approximations). The proposed algorithms are not rooted in duality theory, providing an alternative to existing methods based on linear programming relaxations. However, it is possible to embed existing decomposition methods into the proposed framework. These general decomposition principles extend to other combinatorial optimization problems.

The first problem is a route assignment and scheduling problem in which a fleet of vehicles must traverse a directed network while maintaining a minimum inter-vehicle distance at all times. This problem is inspired by applications in hazmat logistics and the coordination of autonomous agents. The proposed approach includes realistic features such as continuous-time vehicle scheduling, heterogeneous speeds, and minimum and maximum waiting times at any node.

The second problem is a fixed-charge network design problem, which aims to find a minimum-cost plan to transport a target amount of a commodity between known origins and destinations. In addition to the typical flow decisions, the model chooses the capacity of each arc and selects sources and sinks. The proposed algorithms admit any nondecreasing piecewise linear cost structure. This model is applied to the Carbon Capture and Storage (CCS) problem, which is to design a minimum-cost pipeline network to transport CO2 between industrial sources and geologic reservoirs for long-term storage.

The third problem extends the proposed decomposition framework to a special case of joint chance-constrained programming with independent random variables. This model is applied to the probabilistic transportation problem, where demands are assumed stochastic and independent. Using an empirical probability distribution, this problem is formulated as an integer program with the goal of finding a minimum-cost distribution plan that satisfies all demands with at least a given probability. The proposed scalable algorithm is based on a concave envelope approximation of the empirical probability function, which is iteratively refined as needed.
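One building block of such an envelope can be illustrated generically (this is a standard upper-concave-hull computation, not the dissertation's refinement scheme): given sample points of an empirical, nondecreasing probability function, keep only the vertices of the upper hull, so the piecewise-linear interpolation through them over-estimates every input point, which is the safe direction for an outer approximation.

```python
def concave_envelope(points):
    """Upper concave envelope of 2-D points (x, y) with distinct x:
    a left-to-right monotone-chain scan that pops any middle point
    lying on or below the chord of its neighbors."""
    pts = sorted(points)
    hull = []
    for px, py in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # cross >= 0 means hull[-1] is on or below the chord -> drop it
            if (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull

# Empirical points with a "dip" at x = 2 that the envelope smooths over.
pts = [(0, 0.0), (1, 0.5), (2, 0.6), (3, 0.9), (4, 1.0)]
env = concave_envelope(pts)
```

Here the point (2, 0.6) falls below the chord from (1, 0.5) to (3, 0.9), so it is excluded from the envelope; an iterative algorithm would add cuts only where such gaps turn out to matter.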
Contributors: Matin Moghaddam, Navid (Author) / Sefair, Jorge (Thesis advisor) / Mirchandani, Pitu (Committee member) / Escobedo, Adolfo (Committee member) / Grubesic, Anthony (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Modern manufacturing systems are part of a complex supply chain where customer preferences are constantly evolving. The rapidly evolving market demands that manufacturing organizations be increasingly agile and flexible. Medium-term capacity planning for manufacturing systems typically employs queueing network models based on stationary demand assumptions. However, these stationary demand assumptions are not very practical for rapidly evolving supply chains. Nonstationary demand processes provide a reasonable framework to capture the time-varying nature of modern markets. The analysis of queues and queueing networks with time-varying parameters is mathematically intractable. In this dissertation, heuristics that draw upon existing steady-state queueing results are proposed to provide computationally efficient approximations for dynamic multi-product manufacturing systems modeled as time-varying queueing networks with multiple customer classes (product types). This dissertation addresses the problem of performance evaluation of such manufacturing systems.
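One classical way to reuse steady-state results for time-varying systems is the pointwise stationary approximation: evaluate a stationary formula at each instant's arrival rate. The sketch below applies it to a single M(t)/M/c station with an assumed linearly increasing arrival rate; it is a generic illustration of the idea, not the dissertation's multi-class network heuristics.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Stationary M/M/c expected wait in queue (Erlang C formula);
    requires the stability condition lam < c * mu."""
    rho = lam / (c * mu)
    a = lam / mu
    inv_p0 = sum(a**k / math.factorial(k) for k in range(c)) \
             + a**c / (math.factorial(c) * (1 - rho))
    p_wait = (a**c / (math.factorial(c) * (1 - rho))) / inv_p0
    return p_wait / (c * mu - lam)

def psa_wait_profile(lam_of_t, mu, c, times):
    """Pointwise stationary approximation: plug the instantaneous
    arrival rate into the steady-state formula at each time point
    (reasonable when demand varies slowly relative to service)."""
    return [erlang_c_wait(lam_of_t(t), mu, c) for t in times]

# Demand ramps up over a shift, e.g. lam(t) = 2 + 0.5 t jobs per hour.
profile = psa_wait_profile(lambda t: 2 + 0.5 * t, mu=1.0, c=5, times=range(5))
```

The approximation breaks down when the arrival rate changes quickly or approaches capacity, which is exactly the regime where tailored nonstationary heuristics are needed.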

This dissertation considers the two key aspects of dynamic multi-product manufacturing systems - namely, performance evaluation and optimal server resource allocation. First, the performance evaluation of systems with infinite queueing room and a first-come first-serve service paradigm is considered. Second, systems with finite queueing room and priorities between product types are considered. Finally, the optimal server allocation problem is addressed in the context of dynamic multi-product manufacturing systems. The performance estimates developed in the earlier part of the dissertation are leveraged in a simulated annealing algorithm framework to obtain server resource allocations.
Contributors: Jampani Hanumantha, Girish (Author) / Askin, Ronald (Thesis advisor) / Ju, Feng (Committee member) / Yan, Hao (Committee member) / Mirchandani, Pitu (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Short-notice disasters such as hurricanes involve uncertainties in many facets, from the time of occurrence to the magnitude of impacts. Failure to incorporate these uncertainties can reduce the effectiveness of emergency responses. In the case of a hurricane, uncertainties and corresponding impacts during a storm event can quickly cascade. Over the past decades, various storm forecast models have been developed to predict storm uncertainties; however, access to these models is limited. Hence, in the first part of this research, a data-driven simulation model is developed to generate spatial-temporal predicted storm hazards for each possible hurricane track modeled. The simulation model represents uncertainty in a storm's movement and its associated potential hazards in the form of a probabilistic scenario tree, where each branch is associated with a scenario-level storm track and weather profile. Storm hazards, such as strong winds, torrential rain, and storm surges, can inflict significant damage on the road network and affect the population's ability to move during the storm event. A cascading network failure algorithm is introduced in the second part of the research. The algorithm takes the scenario-level storm hazards and predicts uncertainties in mobility states over the storm event. In the third part of the research, a methodology is proposed to generate a sequence of actions that jointly determines the evacuation flow schedule and suggested routes minimizing the total flow time, or makespan, of the evacuation process from origins to destinations in the resulting stochastic time-dependent network. The methodology is implemented for a 2017 Hurricane Irma case study to recommend an evacuation policy for Manatee County, FL.
The results are compared with evacuation plans for assumed scenarios; the research suggests that evacuation recommendations that are based on single scenarios reduce the effectiveness of the evacuation procedure. The overall contributions of the research presented here are new methodologies to: (1) predict and visualize the spatial-temporal impacts of an oncoming storm event, (2) predict uncertainties in the impacts to transportation infrastructure and mobility, and (3) determine the quickest evacuation schedule and routes under the uncertainties within the resulting stochastic transportation networks.
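The probabilistic scenario tree above can be illustrated with a tiny two-stage example (branch probabilities and hazard values are invented for the sketch): a leaf scenario's probability is the product of the branch probabilities along its path, and scenario-weighted expectations follow directly.

```python
# Hypothetical two-stage scenario tree for a storm track: each branch
# carries a probability and a hazard increment along that segment.
tree = {
    "p": 1.0, "hazard": 0.0,
    "children": [
        {"p": 0.6, "hazard": 1.0,
         "children": [{"p": 0.7, "hazard": 2.0, "children": []},
                      {"p": 0.3, "hazard": 4.0, "children": []}]},
        {"p": 0.4, "hazard": 0.5,
         "children": [{"p": 0.5, "hazard": 1.0, "children": []},
                      {"p": 0.5, "hazard": 3.0, "children": []}]},
    ],
}

def leaf_scenarios(node, prob=1.0, hazard=0.0):
    """Enumerate (probability, cumulative hazard) for every leaf scenario."""
    prob *= node["p"]
    hazard += node["hazard"]
    if not node["children"]:
        return [(prob, hazard)]
    out = []
    for child in node["children"]:
        out += leaf_scenarios(child, prob, hazard)
    return out

scenarios = leaf_scenarios(tree)
total_prob = sum(p for p, _ in scenarios)
expected_hazard = sum(p * h for p, h in scenarios)
```

Planning against the whole set of leaf scenarios, rather than a single assumed track, is what distinguishes the scenario-tree approach from single-scenario evacuation plans.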
Contributors: Gita, Ketut (Author) / Mirchandani, Pitu (Thesis advisor) / Maciejewski, Ross (Committee member) / Sefair, Jorge (Committee member) / Zhou, Xuesong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Biodiversity has been declining during the last decades due to habitat loss, landscape deterioration, environmental change, and human-related activities. In addition to its economic and cultural value, biodiversity plays an important role in keeping an environment's ecosystem in balance. Disrupting these processes can reduce the provision of natural resources such as food and water, which in turn poses a direct threat to human health. Protecting and restoring natural areas is fundamental to preserving biodiversity and mitigating the effects of ongoing environmental change. Unfortunately, it is impossible to protect every critical area due to resource limitations, requiring advanced decision tools for the design of conservation plans. This dissertation studies three problems on the design of wildlife corridors and reserves that include patch-specific conservation decisions under spatial, operational, ecological, and biological requirements. In addition to the ecological impact of each problem's solution, this dissertation contributes a set of formulations, valid inequalities, and pre-processing and solution algorithms for optimization problems with spatial requirements. The first problem is a utility-based corridor design problem to connect fragmented habitats, where each patch has a utility value reflecting its quality. The corridor must satisfy geometry requirements such as connectivity and a minimum width. We propose a mixed-integer programming (MIP) model to maximize the total utility of the corridor under the given geometry requirements, as well as a budget constraint to reflect the acquisition (or restoration) cost of the selected patches. To overcome the computational difficulty of large-scale instances, we develop multiple acceleration techniques, including a branch-and-cut algorithm enhanced with problem-specific valid inequalities and a bound-improving heuristic triggered at each integer node of the branch-and-bound exploration.
We test the proposed model and solution algorithm using large-scale fabricated instances and a real case study on the design of an ecological corridor for the Florida Panther. Our modeling framework is able to solve instances of up to 1500 patches within 2 hours to optimality or with a small optimality gap. The second problem introduces species movement across the fragmented landscape into the corridor design problem. The premise is that dispersal dynamics, if available, must inform the design to account for the corridor's usage by the species. To this end, we propose a spatial discrete-time absorbing Markov chain (DTMC) approach to represent species dispersal and develop short- and long-term landscape usage metrics. We explore two different types of design problems: open and closed corridors. An open corridor is a sequence of landscape patches used by the species to disperse out of a habitat. For this case, we devise a dynamic programming algorithm that implicitly enumerates possible corridors and finds the one of maximum probability. The second variant is to find a closed corridor of maximum probability that connects two fragmented habitats. To solve it, we extend the framework from the utility-based corridor design problem by blending the recursive Markov chain equations with a nonlinear network flow formulation. The third problem leverages the DTMC approach to explore a reserve design problem with spatial requirements such as connectivity and compactness. We approximate compactness using the concept of a maximum reserve diameter, i.e., the largest distance allowed between two patches in the reserve. To solve this problem, we devise a two-stage approach that balances the trade-off between reserve usage probability and compactness. The first stage detects a subset of patches of maximum usage probability, while the second stage imposes the geometry requirements on the optimal solution obtained from the first stage.
To overcome the computational difficulty of large-scale landscapes, we develop tailored solution algorithms, including a warm-up heuristic to initialize the branch-and-bound exploration, problem-specific valid inequalities, and a decomposition strategy that sequentially solves smaller problems on landscape partitions.
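The core absorbing-DTMC machinery behind such usage metrics can be sketched with the standard formulas: with transient-to-transient matrix Q and transient-to-absorbing matrix R, the fundamental matrix N = (I - Q)^(-1) gives expected patch visits and B = NR gives absorption probabilities per starting patch. The 3-patch landscape below is invented for illustration.

```python
import numpy as np

# Hypothetical landscape: 3 transient patches (0-2) and 2 absorbing
# states (target habitat, exit/death). Q holds transient-to-transient
# dispersal probabilities; R holds transient-to-absorbing ones.
Q = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.4],
              [0.1, 0.3, 0.0]])
R = np.array([[0.1, 0.2],     # each row of [Q | R] sums to 1
              [0.2, 0.1],
              [0.5, 0.1]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities per start patch
expected_visits = N.sum(axis=1)    # expected time spent among the patches
```

The column of B for the target habitat is a natural long-term "usage" score for each patch, which is the kind of quantity the design problems above optimize over.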
Contributors: Wang, Chao (Author) / Sefair, Jorge A. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pavlic, Theodore (Committee member) / Tong, Daoqin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Drinking water quality violations are widespread in the United States and elsewhere in the world. More than half of Americans are not confident in the safety of their tap water, especially after the 2014 Flint, Michigan water crisis. Other than accidental contamination events, stagnation is a major cause of water quality degradation. Thus, there is a pressing need to build a real-time control system that can make control decisions quickly and proactively so that the quality of water can be maintained at all times. Towards this end, however, modeling the dynamics of water distribution systems is very challenging due to the complex fluid dynamics and chemical reactions in the system; this challenge must be addressed before modeling the optimal control problem. The research in this dissertation leverages statistical machine learning approaches to approximate the complex water system dynamics and then develops different optimization models for proactive and real-time water quality control. This research focuses on two effective ways to maintain water quality, flushing of taps and injection of chlorine or other disinfectants; both of these actions decrease the equivalent "water age", a useful proxy for water quality related to bacterial growth. This research first develops linear predictive models for water quality and subsequently linear programming optimization models for proactive water age control via flushing. The second part of the research considers both flushing and disinfectant injections in the control problem and develops mixed-integer quadratically constrained optimization models for controlling water age. Different control strategies for disinfectant injections are also evaluated: binary on-off injections and continuous injections. In the third part of the research, water demand is assumed to be uncertain and stochastic.
The developed approach learns optimal real-time flushing decisions by combining temporal-difference reinforcement learning with linear value function approximation to approximately solve the underlying Markov decision processes. Computational results on widely used simulation models demonstrate that the developed control systems are effective for water quality control with known demands as well as when demands are uncertain and stochastic.
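A minimal sketch of temporal-difference learning with linear value-function approximation, on a toy 5-state chain standing in for the water-network state (the states, rewards, and random policy are all illustrative, not the dissertation's model):

```python
import numpy as np

rng = np.random.default_rng(1)

n_states = 5
phi = np.eye(n_states)         # one-hot features; any feature map works
w = np.zeros(n_states)         # value estimate is V(s) = phi[s] @ w
alpha, gamma = 0.1, 0.9

def step(s):
    """Random policy on a bounded chain; reward 1 on reaching the far end
    (a stand-in for achieving a low-water-age state)."""
    s2 = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward

for _ in range(2000):          # episodes
    s = 0
    for _ in range(50):        # steps per episode
        s2, r = step(s)
        # TD(0): w += alpha * (r + gamma V(s') - V(s)) * grad_w V(s)
        td_error = r + gamma * (phi[s2] @ w) - (phi[s] @ w)
        w += alpha * td_error * phi[s]
        s = s2
```

The learned weights increase toward the rewarding end of the chain; with a richer feature map the same update rule scales to state spaces far too large to tabulate, which is the point of the linear approximation.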
Contributors: Li, Xiushuang (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Natural disasters are occurring increasingly around the world, causing significant economic losses. To alleviate their adverse effects, it is crucial to plan proactively what should be done in response to them. This research aims at developing proactive and real-time recovery algorithms for large-scale power networks exposed to weather events under uncertainty. These algorithms support recovery decisions that mitigate the disaster impact, resulting in faster recovery of the network. The challenges associated with developing these algorithms are summarized below:
1. Even ignoring uncertainty, when the operating cost of the network is considered, the problem is a bi-level optimization, which is NP-hard.
2. To meet the requirement of real-time decision making under uncertainty, the problem could be formulated as a Stochastic Dynamic Program with the aim of minimizing total cost. However, considering the operating cost of the network violates the underlying assumptions of this approach.
3. The Stochastic Dynamic Programming approach is also not applicable to realistic problem sizes, due to the curse of dimensionality.
4. Uncertainty-based approaches for failure modeling rely on point-generation of failures and ignore the network structure.
To address the first challenge, in chapter 2 a heuristic solution framework is proposed and its performance is evaluated through numerical experiments. To address the second challenge, in chapter 3, after formulating the problem as a Stochastic Dynamic Program, an approximate dynamic programming heuristic is proposed to solve the problem. Numerical experiments on synthetic and realistic test-beds show the satisfactory performance of the proposed approach. To address the third challenge, in chapter 4, an efficient base heuristic policy and an aggregation scheme in the action space are proposed.
Numerical experiments on a realistic test-bed verify the ability of the proposed method to recover the network more efficiently. Finally, to address the fourth challenge, in chapter 5 a simulation-based model is proposed that, using historical data and accounting for the interaction between network components, allows analysis of the impact of adverse events on regional service levels. A realistic case study is then conducted to showcase the applicability of the approach.
Contributors: Inanlouganji, Alireza (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Reddy, T. Agami (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
A production system is commonly restricted by time windows. For example, perishability is a major concern in food processing and requires products such as yogurt, beer, and meat not to stay in the buffer for long. Semiconductor manufacturing faces oxidation and moisture absorption issues if a product in the buffer is exposed to air for too long. Machine reliability is a major source of uncertainty in production systems that causes residence time constraints to be violated, leading to potential product quality issues. Rapid advances in sensor technology and automation provide the potential to manage production in real time, but the system complexity brought by residence time constraints makes it difficult to optimize system performance while guaranteeing product quality. To contribute to this end, this dissertation is dedicated to the modeling, analysis, and control of production systems with constrained time windows. The study starts with a small-scale serial production line with two machines and one buffer. Even the simplest serial lines can have an intractably large state space once residence time constraints are considered. A Markov chain model is developed to approximately analyze the line's transient behavior with high accuracy. An iterative learning algorithm is proposed to perform real-time control. The analysis of the two-machine serial line supports the further analysis of more general and complex serial lines with multiple machines. Residence time constraints can be imposed at multiple stages. To handle this, a two-machine-one-buffer subsystem isolated from a multi-stage serial production line is first analyzed and then used as a building block in an aggregation method for overall system performance. The proposed aggregation method substantially reduces the complexity of the problem while maintaining high accuracy. A decomposition-based control approach is proposed to control a multi-stage serial production line.
A production system is decomposed into small-scale subsystems, and an iterative aggregation procedure is then used to generate a production control policy. The decomposition-based control approach outperforms a general-purpose reinforcement learning method by delivering significant system performance improvement and a substantial reduction in computation overhead. A semiconductor assembly line is a typical production system where products are restricted by time windows and production can be disrupted by machine failures. A production control problem for a semiconductor assembly line is presented and studied, minimizing total lot delay time and residence time constraint violations.
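A minimal version of such a transient Markov chain analysis, for a toy buffer-level chain without the residence-time bookkeeping (the probabilities and capacity are illustrative, and blocking/starvation are simplified):

```python
import numpy as np

# Toy discrete-time chain for the buffer of a two-machine line: each
# slot, the upstream machine adds a part w.p. p1 (if space remains) and
# the downstream machine removes one w.p. p2 (if any part is present).
p1, p2, cap = 0.6, 0.5, 4
n = cap + 1
P = np.zeros((n, n))
for b in range(n):
    up = p1 * (1 - p2) if b < cap else 0.0    # net +1 move
    down = p2 * (1 - p1) if b > 0 else 0.0    # net -1 move
    P[b, min(b + 1, cap)] += up
    P[b, max(b - 1, 0)] += down
    P[b, b] += 1.0 - up - down

# Transient distribution after 20 slots, starting from an empty buffer.
p_t = np.zeros(n)
p_t[0] = 1.0
for _ in range(20):
    p_t = p_t @ P
mean_buffer = p_t @ np.arange(n)
```

Tracking the full transient distribution, rather than only steady-state averages, is what lets residence-time violations be bounded during ramp-up and after disruptions; the dissertation's model augments the state with residence-time information, which is what blows up the state space.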
Contributors: Wang, Feifan (Author) / Ju, Feng (Thesis advisor) / Askin, Ronald (Committee member) / Mirchandani, Pitu (Committee member) / Patel, Nital (Committee member) / Arizona State University (Publisher)
Created: 2021