Matching Items (31)
Description

This dissertation focuses on three large-scale optimization problems and the design of algorithms to solve them. In addition to the societal impact of each problem’s solution, this dissertation contributes to the optimization literature a set of decomposition algorithms for problems whose optimal solution is sparse. These algorithms exploit problem-specific properties and use tailored strategies based on iterative refinement (outer approximation). The proposed algorithms are not rooted in duality theory, providing an alternative to existing methods based on linear programming relaxations; however, existing decomposition methods can be embedded into the proposed framework. These general decomposition principles extend to other combinatorial optimization problems.

The first problem is a route assignment and scheduling problem in which a set of vehicles must traverse a directed network while maintaining a minimum inter-vehicle distance at all times. This problem is inspired by applications in hazmat logistics and the coordination of autonomous agents. The proposed approach includes realistic features such as continuous-time vehicle scheduling, heterogeneous speeds, and minimum and maximum waiting times at each node.

The second problem is a fixed-charge network design problem, which aims to find a minimum-cost plan to transport a target amount of a commodity between known origins and destinations. In addition to the typical flow decisions, the model chooses the capacity of each arc and selects sources and sinks. The proposed algorithms admit any nondecreasing piecewise linear cost structure. This model is applied to the Carbon Capture and Storage (CCS) problem, which is to design a minimum-cost pipeline network to transport CO2 between industrial sources and geologic reservoirs for long-term storage.
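For context, a generic single-commodity fixed-charge network design formulation is sketched below in LaTeX; the notation is illustrative only and omits the dissertation’s piecewise linear costs and source/sink selection.

\[
\begin{aligned}
\min_{x,\,y} \quad & \sum_{(i,j) \in A} \left( f_{ij}\, y_{ij} + c_{ij}\, x_{ij} \right) \\
\text{s.t.} \quad & \sum_{j:(i,j) \in A} x_{ij} \;-\; \sum_{j:(j,i) \in A} x_{ji} \;=\; b_i \qquad \forall i \in N, \\
& 0 \le x_{ij} \le u_{ij}\, y_{ij}, \qquad y_{ij} \in \{0,1\} \qquad \forall (i,j) \in A,
\end{aligned}
\]

where $x_{ij}$ is the flow on arc $(i,j)$, $y_{ij}$ indicates whether the arc (pipeline) is built, $f_{ij}$ and $c_{ij}$ are its fixed and variable costs, $u_{ij}$ is its capacity, and $b_i$ is the net supply at node $i$.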

The third problem extends the proposed decomposition framework to a special case of joint chance-constrained programming with independent random variables. This model is applied to the probabilistic transportation problem, where demands are assumed stochastic and independent. Using an empirical probability distribution, this problem is formulated as an integer program with the goal of finding a minimum-cost distribution plan that satisfies all the demands with at least a given probability. The proposed scalable algorithm is based on a concave envelope approximation of the empirical probability function, which is iteratively refined as needed.
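Under the independence assumption, the joint chance constraint factors into a product of marginal distribution functions, and taking logarithms yields a separable constraint that can be outer-approximated. In illustrative notation (not the dissertation’s exact formulation):

\[
\Pr\!\left( x_j \ge d_j \;\; \forall j \right) \;=\; \prod_{j} F_j(x_j) \;\ge\; p
\quad \Longleftrightarrow \quad
\sum_{j} \log F_j(x_j) \;\ge\; \log p,
\]

where $x_j$ is the amount shipped to demand point $j$, $d_j$ is its random demand with marginal distribution $F_j$, and $p$ is the required joint service probability.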
Contributors: Matin Moghaddam, Navid (Author) / Sefair, Jorge (Thesis advisor) / Mirchandani, Pitu (Committee member) / Escobedo, Adolfo (Committee member) / Grubesic, Anthony (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Short-notice disasters such as hurricanes involve uncertainties in many facets, from the time of occurrence to the magnitude of impacts. Failure to incorporate these uncertainties can reduce the effectiveness of emergency responses. In the case of a hurricane, uncertainties and corresponding impacts during a storm event can quickly cascade. Over the past decades, various storm forecast models have been developed to predict storm uncertainties; however, access to these models is limited. Hence, as the first part of this research, a data-driven simulation model is developed to generate spatial-temporal predicted hazards for each possible hurricane track modeled. The simulation model represents uncertainty in the storm’s movement and its associated potential hazards as a probabilistic scenario tree, where each branch is associated with a scenario-level storm track and weather profile. Storm hazards, such as strong winds, torrential rain, and storm surges, can inflict significant damage on the road network and affect the population’s ability to move during the storm event. A cascading network failure algorithm is introduced in the second part of the research. The algorithm takes the scenario-level storm hazards to predict uncertainties in mobility states over the storm event. In the third part of the research, a methodology is proposed to generate a sequence of actions that simultaneously determines the evacuation flow schedule and suggested routes minimizing the total flow time, or makespan, of the evacuation process from origins to destinations in the resulting stochastic time-dependent network. The methodology is implemented for the 2017 Hurricane Irma case study to recommend an evacuation policy for Manatee County, FL. The results are compared with evacuation plans for assumed scenarios; the research suggests that evacuation recommendations based on single scenarios reduce the effectiveness of the evacuation procedure. The overall contributions of the research presented here are new methodologies to: (1) predict and visualize the spatial-temporal impacts of an oncoming storm event, (2) predict uncertainties in the impacts to transportation infrastructure and mobility, and (3) determine the quickest evacuation schedule and routes under the uncertainties within the resulting stochastic transportation networks.
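To make the scenario-tree idea concrete, the Python sketch below enumerates branch combinations of storm track and intensity and assigns each leaf scenario a probability; the branch names, probabilities, and independence assumption are hypothetical, not taken from the dissertation.

from itertools import product

# Hypothetical branch options with probabilities (illustrative values only).
tracks = {"landfall_north": 0.3, "landfall_central": 0.5, "landfall_south": 0.2}
intensities = {"cat2": 0.4, "cat3": 0.45, "cat4": 0.15}

# Each leaf scenario pairs a track with an intensity; assuming the branches
# are independent, its probability is the product of the branch probabilities.
scenarios = [
    {"track": t, "intensity": i, "prob": pt * pi}
    for (t, pt), (i, pi) in product(tracks.items(), intensities.items())
]

assert abs(sum(s["prob"] for s in scenarios) - 1.0) < 1e-9
for s in sorted(scenarios, key=lambda s: -s["prob"]):
    print(f"{s['track']:>16} / {s['intensity']}: {s['prob']:.3f}")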
Contributors: Gita, Ketut (Author) / Mirchandani, Pitu (Thesis advisor) / Maciejewski, Ross (Committee member) / Sefair, Jorge (Committee member) / Zhou, Xuesong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

In today’s rapidly changing world and competitive business environment, firms are challenged to build production and distribution systems that provide the desired customer service at the lowest possible cost. Designing an optimal supply chain by optimizing its operations and decisions is key to achieving these goals.

In this research, a capacity planning and production scheduling mathematical model is first proposed for a multi-facility, multi-product supply chain network with significant capital and labor costs. The model considers the key levers of capacity configuration at production plants, namely shifts, run rate, down periods, finished-goods inventory management, and overtime. It suggests a minimum-cost plan for meeting medium-range demand forecasts, indicating production and inventory levels at each plant by time period, the associated manpower plan, and outbound shipments over the planning horizon. This dissertation then investigates two model extensions: production flexibility and pricing. In the first extension, the costs and benefits of investing in production flexibility are studied. In the second extension, product pricing decisions are added to the model for demand shaping, taking into account the price elasticity of demand.
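A simplified statement of the capacity logic behind such a model, in illustrative LaTeX notation only, links the configuration levers to the production that can be scheduled in each period:

\[
\mathrm{cap}_{p,t} \;=\; r_p \left( h\, s_{p,t} + o_{p,t} - d_{p,t} \right),
\qquad
\sum_{k} x_{p,k,t} \;\le\; \mathrm{cap}_{p,t},
\]

where $s_{p,t}$ is the number of shifts operated at plant $p$ in period $t$, $h$ the hours per shift, $o_{p,t}$ the overtime hours, $d_{p,t}$ the down hours, $r_p$ the run rate in units per hour, and $x_{p,k,t}$ the production of product $k$.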

The research develops methodologies to optimize supply chain operations by determining the optimal capacity plan and optimal flows of products among facilities based on a nonlinear mixed-integer programming formulation. For large, real-life instances the problem is intractable, so an alternate formulation and an iterative heuristic algorithm are proposed and tested, and the performance and bounds of the heuristic are evaluated. A real-life case study in the automotive industry is considered for the implementation of the proposed models. The implementation results illustrate that the proposed method provides valuable insights for the decision-making process in the supply chain and significant improvement over current practice.
Contributors: Almatooq, Nourah (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis develops a chance-constrained knapsack optimization model that breeders can use to make better decisions about seed progression and to reduce the risk in their selections. The model’s objective is to select seed varieties out of a larger pool and maximize the average yield of the “knapsack” while meeting a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to reduce the risk of a poor yield while still maximizing total yield. The second model considers the possibility of adverse environmental effects and seeks to mitigate the negative effects they could have on total yield. In practice, breeders can use these models to better quantify uncertainty in selecting seed varieties.
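One common way to state such a model, sketched here under a normal-approximation assumption with illustrative notation (not necessarily the thesis formulation), is:

\[
\max_{x \in \{0,1\}^n} \;\; \sum_{j=1}^{n} \mu_j x_j
\quad \text{s.t.} \quad
\Pr\!\left( \sum_{j} Y_j x_j \ge T \right) \ge 1 - \alpha,
\qquad \sum_{j} x_j = k,
\]

which, for independent normal yields $Y_j \sim \mathcal{N}(\mu_j, \sigma_j^2)$, reduces to the deterministic constraint

\[
\sum_{j} \mu_j x_j \;-\; z_{1-\alpha} \sqrt{\sum_{j} \sigma_j^2\, x_j} \;\ge\; T,
\]

where $x_j$ selects variety $j$, $T$ is the minimum acceptable total yield, and $k$ is the number of varieties to progress.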
Contributors: Ozcan, Ozkan Meric (Author) / Armbruster, Dieter (Thesis advisor) / Gel, Esma (Thesis advisor) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

To maintain long-term success, a manufacturing company should be managed and operated under the guidance of properly designed capacity, production, and logistics plans that are formulated in coordination with its manufacturing footprint, so that its managerial goals on both strategic and tactical levels can be fulfilled. In particular, sufficient flexibility and efficiency should be ensured so that future customer demand can be met at a profit. This dissertation is motivated by an automobile manufacturer's mid-term and long-term decision problems, but applies to any multi-plant, multi-product manufacturer with evolving product portfolios and significant fixed and variable production costs. By introducing the concepts of effective capacity and product-specific flexibility, two mixed-integer programming (MIP) models are proposed to help manufacturers shape their mid-term capacity plans and long-term product allocation plans. With fixed tooling flexibility, production and logistics considerations are integrated into a mid-term capacity planning model to develop well-informed and balanced tactical plans, which utilize various capacity adjustment options to coordinate production, inventory, and shipping schedules throughout the planning horizon so that overall operational and capacity adjustment costs are minimized. For long-term product allocation planning, strategic tooling configuration plans that enable the production of multi-generation products at minimal configuration and operational costs are established for all plants throughout the planning horizon, considering product-specific commonality and compatibility. New product introductions and demand uncertainty over the planning horizon are incorporated. As a result, potential production sites for each product and the corresponding process flexibility are determined. An efficient heuristic method is developed and shown to perform well in solution quality and computational requirements.
Contributors: Yao, Xufeng (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Escobedo, Adolfo (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Computer vision and tracking have become an area of great interest for many reasons, including self-driving cars, identification of vehicles and drivers on roads, and security camera monitoring, all of which are expanding in the modern digital era. When working with practical systems that are constrained in multiple ways, such as video quality or viewing angle, algorithms that work well theoretically can have a high error rate in practice. This thesis studies several ways in which that error can be minimized. It describes an application in a practical system: detecting, tracking, and counting people entering different lanes at an airport security checkpoint, using CCTV videos as the primary source. This thesis improves an existing algorithm that is not optimized for this particular problem and has a high error rate when its counts are compared with the true volume of users. The high error rate is caused by many people crowding into security lanes at the same time. The camera from which footage was captured is located at a poor angle, so many of the people occlude each other and cause the existing algorithm to miss them. One solution is to count only heads: since heads are smaller than a full body, they occlude less, and because the camera is angled from above, the heads of people in back appear higher in the frame and are not occluded by people in front. One of the primary improvements to the algorithm is therefore to combine person detections and head detections to improve counting accuracy. The proposed algorithm also improves the accuracy of the detections themselves. The existing algorithm used the COCO training dataset, which works well in scenarios where people are visible and not occluded. However, the available video quality in this project was not very good, with people often blocking each other from the camera’s view, so a different training set was needed that could detect people even in poor-quality frames and under occlusion. The new training set is the first algorithmic improvement and, although occasionally performing worse, reduced the error by 7.25% on average.
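A minimal Python sketch of the detection-merging idea, with hypothetical boxes and thresholds rather than the thesis implementation: a head detection adds to the count only if it is not already covered by a counted person box.

def head_overlap(head, person):
    # Fraction of the head box (x1, y1, x2, y2) covered by the person box.
    ix1, iy1 = max(head[0], person[0]), max(head[1], person[1])
    ix2, iy2 = min(head[2], person[2]), min(head[3], person[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    head_area = (head[2] - head[0]) * (head[3] - head[1])
    return inter / head_area if head_area else 0.0

def merge_counts(person_boxes, head_boxes, thresh=0.5):
    # Count every full-body detection, then add heads that match no body
    # (likely people whose bodies are occluded by others in front).
    count = len(person_boxes)
    for head in head_boxes:
        if all(head_overlap(head, p) < thresh for p in person_boxes):
            count += 1
    return count

# Hypothetical frame: two body detections, three heads (one body occluded).
people = [(100, 50, 180, 300), (200, 60, 280, 310)]
heads = [(120, 50, 160, 100), (220, 60, 260, 110), (320, 40, 360, 90)]
print(merge_counts(people, heads))  # -> 3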
Contributors: Larsen, Andrei (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Matching or stratification is commonly used in observational studies to remove bias due to confounding variables. Analyzing matched data sets requires specific methods that handle dependency among observations within a stratum. Moreover, modern studies often include hundreds or thousands of variables. Traditional methods for matched data sets are challenged by high-dimensional settings, mixed-type variables (numerical and categorical), and nonlinear and interaction effects, and machine learning research for such structured data is quite limited. This dissertation addresses this important gap and proposes machine learning models for identifying informative variables from high-dimensional matched data sets. The first part of this dissertation proposes a machine learning model to identify informative variables from high-dimensional matched case-control data sets. The outcome of interest in this study design is binary (case or control), and each stratum is assumed to have one unit from each outcome level. The proposed method, referred to as Matched Forest (MF), is effective for large numbers of variables and for identifying interaction effects. The second part of this dissertation proposes three enhancements of the MF algorithm. First, a regularization framework is proposed to improve variable selection performance in excessively high-dimensional settings. Second, a classification method is proposed to classify unlabeled pairs of data. Third, two metrics are proposed to estimate the effects of the important variables identified by MF. The third part proposes a machine learning model based on neural networks to identify important variables from a more general matched case-control data set where each stratum has one unit from the case outcome level and more than one unit from the control outcome level. This method, referred to as Matched Neural Network (MNN), performs better than current algorithms at identifying variables with interaction effects. Lastly, a generalized machine learning model is proposed to identify informative variables from high-dimensional matched data sets where the outcome has more than two levels. This method outperforms existing algorithms in the literature in identifying variables with complex nonlinear and interaction effects.
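As a rough illustration of how tree ensembles can be adapted to 1:1 matched case-control data, the Python sketch below ranks variables by forest importance computed on within-pair differences; this captures only the general idea on synthetic data and is not the MF algorithm itself.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pairs, n_vars = 500, 20

# Synthetic matched pairs in which only variable 0 separates cases from controls.
controls = rng.normal(size=(n_pairs, n_vars))
cases = controls + rng.normal(scale=0.3, size=(n_pairs, n_vars))
cases[:, 0] += 1.0

# Within-pair differences with randomized sign, so the label is informative.
signs = rng.choice([-1, 1], size=n_pairs)
X = (cases - controls) * signs[:, None]   # possibly flipped case-minus-control
y = (signs == 1).astype(int)              # 1 if the difference is case minus control

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(np.argsort(forest.feature_importances_)[::-1][:5])  # variable 0 should rank first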
Contributors: Shomal Zadeh, Nooshin (Author) / Runger, George (Thesis advisor) / Montgomery, Douglas (Committee member) / Shinde, Shilpa (Committee member) / Escobedo, Adolfo (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

In recent years, the drive for sustainability within large corporations has increased drastically. One critical measure of sustainability is the diversion rate, the amount of waste diverted from landfills to recycling, repurposing, or reselling. There are a variety of ways in which a company can improve its diversion rate, such as repurposing paper. A conventional method would be simply to have a recycling bin for collecting all paper, but for large companies this raises a security concern, as confidential papers may not be safe in a traditional recycling bin. Salt River Project (SRP) has tackled this issue by hiring a third-party vendor (TPV) and having all paper placed into designated, secure shredding bins whose content is shredded upon collection and ultimately recycled into new material. However, while this effort is improving their diversion rate, the question has arisen of how to make the program viable in the long term given the costs required to sustain it. To tackle this issue, this thesis focuses on creating a methodology and sampling plan to determine the appropriate level of third-party recycling service required and to guide efficient bin-sizing solutions. This will in turn allow SRP to understand how much paper waste is being produced and how accurately it is being charged for TPV services.
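A minimal Python sketch of the sample-size logic behind such a sampling plan, using the standard formula n = (z·s / E)² for estimating a mean bin weight; the numbers are hypothetical and the thesis’s actual plan may differ.

import math

z = 1.96   # two-sided 95% confidence level
s = 4.0    # assumed standard deviation of paper weight per bin (lb)
E = 1.0    # desired margin of error on the mean (lb)

n = math.ceil((z * s / E) ** 2)
print(n)   # -> 62 bins to sample under these assumptions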
Contributors: Holladay, Amy E. (Author) / Escobedo, Adolfo (Thesis director) / Kucukozyigit, Ali (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that this form of input elicitation be more widely considered for future work in crowdsourcing and incorporated into future platforms to improve accuracy and efficiency.
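A minimal Python sketch of one way to aggregate the two input formats, combining Borda scores from rankings with rescaled numerical estimates; the data and the simple averaging rule are illustrative, not the consensus-based models used in the thesis.

import numpy as np

# Three workers rank four items (1 = best) and also give numerical estimates.
rankings = np.array([[1, 2, 3, 4],
                     [2, 1, 3, 4],
                     [1, 3, 2, 4]])
estimates = np.array([[9.0, 7.5, 6.0, 2.0],
                      [8.0, 8.5, 5.0, 3.0],
                      [9.5, 6.0, 7.0, 2.5]])

n_items = rankings.shape[1]
borda = (n_items - rankings).mean(axis=0)   # ordinal signal, higher = better
cardinal = estimates.mean(axis=0)           # cardinal signal

def scale(v):
    # Rescale a signal to [0, 1] so the two formats are comparable.
    return (v - v.min()) / (v.max() - v.min())

combined = 0.5 * scale(borda) + 0.5 * scale(cardinal)
print(np.argsort(combined)[::-1])           # aggregate order, best item first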
Contributors: Kemmer, Ryan Wyeth (Author) / Escobedo, Adolfo (Thesis director) / Maciejewski, Ross (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

The coronavirus outbreak has impacted retailers and the food industry, which were forced to switch to delivery services due to social distancing measures. During this time, online sales and local deliveries saw an increase in demand, making these methods the new way of staying in business. For this reason, this research seeks to identify strategies that delivery service companies could implement to improve their operations, by comparing two types of p-median models (node-based and edge-based). To simulate demand, geographical data will be analyzed for the cities of San Diego and Paris. Districting models will allow determining how balanced and compact the service regions within the districts are. After analyzing the variability of each demand simulation run, conclusions will be made on whether one model is better than the other.
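For reference, the standard node-based p-median formulation these comparisons build on can be written in LaTeX as below (standard textbook notation; the edge-based variant instead places demand along the network’s edges):

\[
\begin{aligned}
\min \;\; & \sum_{i \in I} \sum_{j \in J} w_i\, d_{ij}\, x_{ij} \\
\text{s.t.} \;\; & \sum_{j \in J} x_{ij} = 1 \qquad \forall i \in I, \\
& x_{ij} \le y_j \qquad \forall i \in I,\; j \in J, \\
& \sum_{j \in J} y_j = p, \qquad x_{ij},\, y_j \in \{0,1\},
\end{aligned}
\]

where $y_j$ opens candidate facility site $j$, $x_{ij}$ assigns demand node $i$ (with weight $w_i$) to site $j$, and $d_{ij}$ is the travel distance between them.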
Contributors: Aguilar, Sarbith Anabella (Author) / Escobedo, Adolfo (Thesis director) / Juarez, Joseph (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12