Matching Items (82)
Description
Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis develops a chance-constrained knapsack optimization model that breeders can use to make better seed-progression decisions and to reduce the risk in their selections. The model selects seed varieties out of a larger pool and maximizes the expected yield of the “knapsack” subject to a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to limit the probability of a poor yield while still maximizing total yield. The second model accounts for the possibility of adverse environmental events and seeks to mitigate the negative effects they could have on total yield. In practice, breeders can use these models to better quantify uncertainty when selecting seed varieties.
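For concreteness, here is a hedged sketch of a chance-constrained knapsack in this spirit (the notation is assumed for illustration; the thesis's exact formulation may differ). With binary selection variables $x_i$, independent random yields $Y_i \sim N(\mu_i, \sigma_i^2)$, a minimum acceptable total yield $y_{\min}$, risk level $\epsilon$, and a cap of $K$ selected varieties:

```latex
\max_{x \in \{0,1\}^n} \; \sum_{i=1}^{n} \mu_i x_i
\quad \text{s.t.} \quad
\Pr\!\left( \sum_{i=1}^{n} Y_i x_i \ge y_{\min} \right) \ge 1 - \epsilon,
\qquad
\sum_{i=1}^{n} x_i \le K
```

Under the normality assumption, the chance constraint has the deterministic equivalent $\sum_i \mu_i x_i - z_{1-\epsilon} \sqrt{\sum_i \sigma_i^2 x_i} \ge y_{\min}$ (using $x_i^2 = x_i$ for binaries), which keeps the model solvable with standard integer programming machinery.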
Contributors: Ozcan, Ozkan Meric (Author) / Armbruster, Dieter (Thesis advisor) / Gel, Esma (Thesis advisor) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Biodiversity has been declining over the last decades due to habitat loss, landscape deterioration, environmental change, and human-related activities. In addition to its economic and cultural value, biodiversity plays an important role in keeping an environment’s ecosystem in balance. Disrupting such processes can reduce the provision of natural resources such as food and water, which in turn poses a direct threat to human health. Protecting and restoring natural areas is fundamental to preserving biodiversity and mitigating the effects of ongoing environmental change. Unfortunately, it is impossible to protect every critical area due to resource limitations, requiring the use of advanced decision tools for the design of conservation plans. This dissertation studies three problems on the design of wildlife corridors and reserves that include patch-specific conservation decisions under spatial, operational, ecological, and biological requirements. In addition to the ecological impact of each problem’s solution, this dissertation contributes a set of formulations, valid inequalities, and pre-processing and solution algorithms for optimization problems with spatial requirements. The first problem is a utility-based corridor design problem to connect fragmented habitats, where each patch has a utility value reflecting its quality. The corridor must satisfy geometry requirements such as connectivity and minimum width. We propose a mixed-integer programming (MIP) model to maximize the total utility of the corridor under the given geometry requirements as well as a budget constraint reflecting the acquisition (or restoration) cost of the selected patches. To overcome the computational difficulty of solving large-scale instances, we develop multiple acceleration techniques, including a branch-and-cut algorithm enhanced with problem-specific valid inequalities and a bound-improving heuristic triggered at each integer node in the branch-and-bound exploration. We test the proposed model and solution algorithm using large-scale fabricated instances and a real case study for the design of an ecological corridor for the Florida Panther. Our modeling framework is able to solve instances of up to 1,500 patches to optimality, or with a small optimality gap, within 2 hours. The second problem introduces species movement across the fragmented landscape into the corridor design problem. The premise is that dispersal dynamics, if available, must inform the design to account for the corridor’s usage by the species. To this end, we propose a spatial discrete-time absorbing Markov chain (DTMC) approach to represent species dispersal and develop short- and long-term landscape usage metrics. We explore two different types of design problems: open and closed corridors. An open corridor is a sequence of landscape patches used by the species to disperse out of a habitat. For this case, we devise a dynamic programming algorithm that implicitly enumerates possible corridors and finds the one of maximum probability. The closed-corridor variant seeks a corridor of maximum probability that connects two fragmented habitats. To solve this variant, we extend the framework from the utility-based corridor design problem by blending the recursive Markov chain equations with a nonlinear network flow formulation. The third problem leverages the DTMC approach to explore a reserve design problem with spatial requirements such as connectivity and compactness. We approximate compactness using the concept of maximum reserve diameter, i.e., the largest distance allowed between two patches in the reserve. To solve this problem, we devise a two-stage approach that balances the trade-off between reserve usage probability and compactness. The first stage detects a subset of patches of maximum usage probability, while the second stage imposes the geometry requirements on the optimal solution obtained from the first stage. To overcome the computational difficulty of large-scale landscapes, we develop tailored solution algorithms, including a warm-up heuristic to initialize the branch-and-bound exploration, problem-specific valid inequalities, and a decomposition strategy that sequentially solves smaller problems on landscape partitions.
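As a hedged, self-contained illustration of the absorbing-DTMC machinery (the numbers and the 4-patch landscape are invented for this sketch, not taken from the dissertation), the long-term usage quantities follow from the fundamental matrix of the chain:

```python
# Hedged sketch: absorbing discrete-time Markov chain for species dispersal.
# Q[i, j] = probability of moving from transient patch i to patch j;
# R[i, k] = probability of moving from patch i into absorbing habitat k.
# All numbers are illustrative.
import numpy as np

Q = np.array([[0.1, 0.3, 0.2, 0.0],
              [0.2, 0.1, 0.3, 0.1],
              [0.0, 0.3, 0.1, 0.3],
              [0.1, 0.0, 0.3, 0.1]])
R = np.array([[0.4, 0.0],
              [0.2, 0.1],
              [0.0, 0.3],
              [0.1, 0.4]])

N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix: expected visits to each patch
B = N @ R                          # B[i, k]: prob. a disperser starting in patch i
print(B.round(3))                  # is eventually absorbed in habitat k
```

Corridor and reserve design then amount to selecting patches so that quantities derived from N and B (expected visits, absorption probabilities) are maximized subject to the geometry constraints.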
Contributors: Wang, Chao (Author) / Sefair, Jorge A. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pavlic, Theodore (Committee member) / Tong, Daoqin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Drinking water quality violations are widespread in the United States and elsewhere in the world. More than half of Americans are not confident in the safety of their tap water, especially after the 2014 Flint, Michigan water crisis. Other than accidental contamination events, stagnation is a major cause of water quality degradation. Thus, there is a pressing need for a real-time control system that can make control decisions quickly and proactively so that water quality can be maintained at all times. However, modeling the dynamics of water distribution systems is very challenging due to the complex fluid dynamics and chemical reactions in the system, and this challenge must be addressed before the optimal control problem can be modeled. The research in this dissertation leverages statistical machine learning approaches to approximate the complex water system dynamics and then develops different optimization models for proactive, real-time water quality control. The research focuses on two effective ways to maintain water quality, flushing of taps and injection of chlorine or other disinfectants; both of these actions decrease the equivalent “water age”, a useful proxy for water quality related to bacterial growth. The research first develops linear predictive models for water quality and subsequently linear programming optimization models for proactive water age control via flushing. The second part of the research considers both flushing and disinfectant injections in the control problem and develops mixed-integer quadratically constrained optimization models for controlling water age. Different control strategies for disinfectant injections are also evaluated: binary on-off injections and continuous injections. In the third part of the research, water demand is assumed to be uncertain and stochastic. The developed approach learns optimal real-time flushing decisions by combining temporal-difference reinforcement learning with linear value function approximation to approximately solve the underlying Markov decision processes. Computational results on widely used simulation models demonstrate that the developed control systems are effective for water quality control with known demands as well as when demands are uncertain and stochastic.
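For a flavor of the first part, here is a minimal, hedged sketch of flushing control as a linear program: choose tap-flushing durations that drive a linearly predicted water age below a target. The linear response model (coefficients `a`, `B`) is an assumed stand-in for the dissertation's fitted predictive models, and all numbers are illustrative:

```python
# Hedged sketch: minimize total flushing time subject to keeping the
# predicted water age at every node below a target.  The linear model
# age = a - B @ f is an assumed stand-in for a model fit from simulation.
import numpy as np
from scipy.optimize import linprog

a = np.array([30.0, 42.0, 55.0])   # predicted water age (h) with no flushing
B = np.array([[4.0, 1.0, 0.5],     # B[j, k]: age reduction at node j per hour
              [1.5, 5.0, 1.0],     #          of flushing at tap k
              [0.5, 2.0, 6.0]])
target = 24.0                       # maximum allowed water age (h)

# a - B f <= target  is rewritten as  -B f <= target - a
res = linprog(c=np.ones(3), A_ub=-B, b_ub=target - a, bounds=[(0, None)] * 3)
print("flushing hours per tap:", res.x.round(2))
```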
Contributors: Li, Xiushuang (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Natural disasters are occurring increasingly around the world, causing significant economic losses. To alleviate their adverse effects, it is crucial to plan proactively what should be done in response. This research aims to develop proactive and real-time recovery algorithms for large-scale power networks exposed to weather events under uncertainty. These algorithms support recovery decisions that mitigate the disaster impact, resulting in faster recovery of the network. The challenges associated with developing these algorithms are summarized below:
1. Even ignoring uncertainty, when the operating cost of the network is considered, the problem becomes a bi-level optimization problem, which is NP-hard.
2. To meet the requirement for real-time decision-making under uncertainty, the problem could be formulated as a stochastic dynamic program with the aim of minimizing total cost. However, considering the operating cost of the network violates the underlying assumptions of this approach.
3. The stochastic dynamic programming approach is also not applicable to realistic problem sizes, due to the curse of dimensionality.
4. Uncertainty-based approaches for failure modeling rely on point-generation of failures and ignore the network structure.
To address the first challenge, chapter 2 proposes a heuristic solution framework and evaluates its performance through numerical experiments. To address the second challenge, chapter 3 formulates the problem as a stochastic dynamic program and proposes an approximate dynamic programming heuristic to solve it. Numerical experiments on synthetic and realistic test-beds show the satisfactory performance of the proposed approach. To address the third challenge, chapter 4 proposes an efficient base heuristic policy and an aggregation scheme in the action space. Numerical experiments on a realistic test-bed verify the ability of the proposed method to recover the network more efficiently. Finally, to address the fourth challenge, chapter 5 proposes a simulation-based model that, using historical data and accounting for the interaction between network components, allows for analyzing the impact of adverse events on regional service level. A realistic case study is then conducted to showcase the applicability of the approach.
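In generic notation (assumed here for illustration, not quoted from the dissertation), the recovery problem of chapters 3 and 4 follows the standard stochastic dynamic programming recursion, with state $s_t$ capturing component status and crew locations and action $a_t$ the repair-dispatch decision:

```latex
V_t(s_t) \;=\; \min_{a_t \in \mathcal{A}(s_t)}
\Big\{ c(s_t, a_t) \;+\; \mathbb{E}\big[\, V_{t+1}(s_{t+1}) \mid s_t, a_t \,\big] \Big\}
```

Approximate dynamic programming replaces $V_{t+1}$ with a tractable approximation and the expectation with sampled scenarios, which is what makes real-time use possible despite the curse of dimensionality.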
Contributors: Inanlouganji, Alireza (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Reddy, T. Agami (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Computational social choice theory is an emerging research area that studies the computational aspects of decision-making. It continues to be relevant in modern society because people often work and make decisions in group settings. Among its many research topics, rank aggregation is a central problem in computational social choice theory. Oftentimes, rankings may involve a large number of alternatives, contain ties, and/or be incomplete, all of which complicate the use of robust aggregation methods. To address these challenges, this work first introduces a correlation coefficient designed to deal with a variety of ranking formats, including those containing non-strict (i.e., with-ties) and incomplete (i.e., unknown) preferences. The new measure, which can be regarded as a generalization of the seminal Kendall tau correlation coefficient, is proven to satisfy a set of metric-like axioms and to be equivalent to a recently developed ranking distance function associated with Kemeny aggregation. Second, this work derives an exact binary programming formulation for the generalized Kemeny rank aggregation problem, whose ranking inputs may be complete or incomplete, with or without ties. It leverages the equivalence of minimizing the Kemeny-Snell distance and maximizing the Kendall tau correlation to compare the newly introduced binary programming formulation with a modified version of an existing integer programming formulation associated with the Kendall tau distance. Third, this work introduces a new social choice property for decomposing large problems into smaller subproblems, which allows solving the problem in a distributed fashion. The new property is adequate for handling complete rankings with ties, and it is leveraged to develop a structural decomposition algorithm through which certain large instances of the NP-hard Kemeny rank aggregation problem can be solved exactly in a practical amount of time. Lastly, this work applies these rank aggregation mechanisms in novel contexts for extracting collective wisdom in crowdsourcing tasks. Through a crowdsourcing experiment, the capability of the aggregation frameworks to recover the underlying ground truth and the usefulness of multimodal information in overcoming anchoring effects are assessed, demonstrating the frameworks' ability to enhance the wisdom of crowds and their applicability to real-world problems.
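For intuition, here is a brute-force sketch of classical Kemeny aggregation over complete, strict rankings (a toy version only: the generalized framework in this dissertation additionally handles ties and incomplete rankings, and exhaustive search is feasible only for a handful of alternatives):

```python
# Hedged sketch: exact Kemeny aggregation by exhaustive search.
from itertools import combinations, permutations

def kendall_tau_dist(r1, r2):
    """Number of item pairs on which two complete strict rankings disagree."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum((pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
               for a, b in combinations(r1, 2))

def kemeny(rankings):
    """Return a ranking minimizing the total Kendall tau distance to all inputs."""
    return min(permutations(rankings[0]),
               key=lambda cand: sum(kendall_tau_dist(cand, r) for r in rankings))

votes = [list("abcd"), list("abdc"), list("bacd")]
print(kemeny(votes))   # ('a', 'b', 'c', 'd') for these votes
```

Because minimizing the Kemeny-Snell distance is equivalent to maximizing the (generalized) Kendall tau correlation, the same consensus can be recovered through the binary programming formulation described above, which scales far beyond what exhaustive search allows.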
Contributors: Yoo, Yeawon (Author) / Escobedo, Adolfo R (Thesis advisor) / Mirchandani, Pitu B (Committee member) / Pavlic, Ted P (Committee member) / Chiou, Erin K (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Complex systems appear when interactions among system components create emergent behavior that is difficult to predict from component properties alone. The growth of the Internet of Things (IoT) and embedded technology has increased complexity across several sectors (e.g., automotive, aerospace, agriculture, city infrastructures, home technologies, healthcare) where the paradigm of cyber-physical systems (CPSs) has become a standard. While CPSs enable unprecedented capabilities, they raise new challenges in system design, certification, control, and verification. When optimizing system performance, computationally expensive simulation tools are often required, and search algorithms that sequentially interrogate a simulator to learn promising solutions are in great demand. This class of algorithms is known as black-box optimization. However, the generality that makes black-box optimization desirable also causes computational difficulties when it is applied to real problems. This thesis focuses on Bayesian optimization, a prominent family of black-box methods, and proposes new principles, translated into implementable algorithms, to scale Bayesian optimization to highly expensive, large-scale problems. Four problem contexts are studied and approaches are proposed for practically applying Bayesian optimization concepts, namely: (1) increasing the sample efficiency of a highly expensive simulator in the presence of other sources of information, where multi-fidelity optimization is used to leverage complementary information sources; (2) accelerating global optimization in the presence of local searches by avoiding over-exploitation through adaptive restart behavior; (3) scaling optimization to high-dimensional input spaces by integrating game-theoretic mechanisms with traditional techniques; (4) accelerating optimization by embedding function structure when the reward function is a minimum of several functions. In the first context, this thesis produces two multi-fidelity algorithms, a sample-driven and a model-driven approach, and implements them to optimize a serial production line; in the second context, the Stochastic Optimization with Adaptive Restart (SOAR) framework is produced and analyzed, with multiple applications to CPS falsification problems; in the third context, the Bayesian optimization with sample fictitious play (BOFiP) algorithm is developed, with an implementation in high-dimensional neural network training; in the last context, the minimum surrogate optimization (MSO) framework is produced and combined with both Bayesian optimization and the SOAR framework, with applications in the simultaneous falsification of multiple CPS requirements.
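As background, here is a minimal, hedged sketch of a generic Bayesian optimization loop (a Gaussian-process surrogate with expected improvement over a 1-D grid). It illustrates the sequential simulator-interrogation idea only, not the SOAR, BOFiP, or MSO algorithms themselves; the toy objective stands in for an expensive simulator:

```python
# Hedged sketch: generic Bayesian optimization (minimization) with a GP
# surrogate and the expected-improvement acquisition function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulator(x):            # illustrative stand-in objective
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, 4).reshape(-1, 1)        # small initial design
y = expensive_simulator(X).ravel()
grid = np.linspace(0, 3, 200).reshape(-1, 1)   # candidate points

for _ in range(10):                            # sequential sampling budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_simulator(x_next).ravel())

print("best x:", X[y.argmin()].item(), "best y:", round(y.min(), 4))
```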
Contributors: Mathesen, Logan (Author) / Pedrielli, Giulia (Thesis advisor) / Candan, Kasim (Committee member) / Fainekos, Georgios (Committee member) / Gel, Esma (Committee member) / Montgomery, Douglas (Committee member) / Zabinsky, Zelda (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Ramping up a semiconductor wafer fabrication facility is a challenging endeavor. One of the key components of this process is scheduling the large number of activities involved in installing and qualifying (Install/Qual) the capital-intensive and sophisticated manufacturing equipment. Activities in the Install/Qual process share multiple types of expensive and scarce resources, and each activity may have multiple processing options. In this dissertation, the semiconductor capital equipment Install/Qual scheduling problem is modeled as a multi-mode resource-constrained project scheduling problem (MRCPSP) with multiple special extensions. Three phases of research are carried out. The first phase studies the special problem characteristics of the Install/Qual process, including multiple activity processing options, time-varying resource availability levels, resource vacations, and activity splitting without preemption. A modified precedence-tree-based branch-and-bound algorithm is proposed to solve small academic problem instances to optimality. Heuristic-based methodologies are the main focus of phase 2. Modified priority-rule-based simple heuristics and a modified random-key-based genetic algorithm (RKGA) are proposed to search for Install/Qual schedules with short makespans subject to resource constraints. The methodologies are tested on both small and large random academic problem instances, as well as on instances similar to the actual Install/Qual process of a major semiconductor manufacturer. In phase 3, a decision-making framework is proposed to strategically plan the Install/Qual capacity ramp. Product market demand, product market price, resource consumption cost, and payments for capital equipment are all considered. A modified simulated annealing (SA)-based optimization module is integrated with a Monte Carlo simulation module to search for good capacity-ramping strategies under uncertain market information. The decision-making framework can be used during the Install/Qual schedule planning phase as well as during schedule execution, when a portion of the equipment has already been installed or qualified. Computational experiments demonstrate the effectiveness of the decision-making framework.
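To illustrate the kind of building block phase 2 relies on, here is a hedged sketch of a priority-rule serial schedule generation scheme for a simplified single-mode, single-resource RCPSP. The MRCPSP studied here adds processing modes, time-varying capacities, resource vacations, and activity splitting; all data below are illustrative:

```python
# Hedged sketch: serial schedule generation for a toy RCPSP instance.
# acts[a] = (duration, resource demand, predecessors); activities 1 and 6
# are dummy start/end.  Priority order is simply increasing activity id,
# which is topological for this instance.
acts = {1: (0, 0, []), 2: (3, 2, [1]), 3: (2, 3, [1]),
        4: (4, 2, [2]), 5: (2, 1, [2, 3]), 6: (0, 0, [4, 5])}
CAP = 4                      # renewable resource capacity per period
finish, usage = {}, {}       # finish times; resource usage per period

for a in sorted(acts):
    dur, dem, preds = acts[a]
    t = max((finish[p] for p in preds), default=0)   # precedence-feasible start
    while any(usage.get(tau, 0) + dem > CAP for tau in range(t, t + dur)):
        t += 1                                       # delay until resource-feasible
    for tau in range(t, t + dur):
        usage[tau] = usage.get(tau, 0) + dem
    finish[a] = t + dur

print("makespan:", max(finish.values()))   # 9 for this toy instance
```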
Contributors: Cheng, Junzilan (Author) / Fowler, John W (Thesis advisor) / Kempf, Karl (Thesis advisor) / Mason, Scott J. (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
To address concerns about the dominance of petroleum-fueled vehicles, the transition to alternative-fueled counterparts is urgently needed. The top barriers preventing the widespread diffusion of alternative-fuel vehicles (AFVs) are their limited range and the scarcity of refueling or recharging infrastructure in convenient locations. Researchers have been developing models for optimally locating refueling facilities for range-limited vehicles, and recently a strategy has emerged to cluster refueling stations to encourage consumers to purchase alternative-fuel vehicles by building a critical mass of stations. However, clustering approaches have not yet been developed based on flow-based demand. This study proposes a Threshold Coverage extension to the original Flow Refueling Location Model (FRLM). The new model optimally locates p refueling stations on a network so as to maximize the weighted number of origin zones whose refuelable outbound round trips exceed a given threshold, thereby building critical mass based on flow-based demand on the network. Unlike other clustering approaches, this model explicitly ensures that flow demands “covered” in the model are refuelable given the limited driving range of AFVs. Despite not explicitly including local intra-zonal trips, numerical experiments on a statewide highway network demonstrate the effectiveness of the model in clustering stations based on inter-city flow volumes. The model’s policy implementation will provide managerial insights into some key concerns of the industry, such as geographic equity versus critical mass, from a new perspective. This project serves as a step toward supporting a more successful public transition to alternative-fuel vehicles.
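A hedged sketch of the threshold-coverage idea, in notation assumed here rather than taken from the thesis: let $f_q$ be the flow volume of round trip $q$, $Q_o$ the trips originating in zone $o$, $w_o$ the zone weight, $\tau$ the coverage threshold, and $H_q$ the station combinations capable of refueling trip $q$ given the vehicle range:

```latex
\max \; \sum_{o} w_o z_o
\quad \text{s.t.} \quad
\sum_{q \in Q_o} f_q \, y_q \;\ge\; \tau \Big( \sum_{q \in Q_o} f_q \Big) z_o \;\; \forall o,
\qquad
y_q \le \sum_{h \in H_q} v_h, \quad
v_h \le x_k \;\; \forall k \in h,
\qquad
\sum_{k} x_k = p
```

with binary $x_k$ (open a station at site $k$), $v_h$ (all stations in combination $h$ are open), $y_q$ (trip $q$ is refuelable), and $z_o$ (zone $o$ meets the threshold). The combination variables are the standard FRLM device for guaranteeing that a trip is refuelable end to end under a limited driving range.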
Contributors: Hong, Shuyao (Author) / Kuby, Michael J (Thesis advisor) / Parker, Nathan C. (Committee member) / Lou, Yingyan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Construction management research has not been successful in changing the practices of the construction industry. The grant-funding process and the peer-reviewed publication system that academics rely on for promotion do not align with researchers becoming experts who can bring change to industry practices. Poor construction industry performance has been documented for the past 25 years in the international construction management field. However, after 25 years and billions of dollars of research investment, the solution remains elusive. Research has shown that very few researchers form a hypothesis, run cycles of tests in industry, and end up changing industry practices.

The most impactful research identified in this thesis has led to the conclusions that pre-planning is critical, that hiring contractors who have expertise results in better performance, and that risk is mitigated when supply chain partners work together and expertise is utilized at the beginning of projects.

The problems with construction non-performance have persisted. Legal contract issues have become more important. Traditional research approaches have not identified the severity and the source of construction non-performance. The problem seems to be as complex as ever. Construction industry practice and the academic research community remain in silos. This research proposes that the problem may lie in the traditional construction management research structure and methodology. The research has identified a unique non-traditional research program that has documented over 1,700 industry tests, which have resulted in a decrease in client management of up to 79%, contractors adding value by up to 38%, increased customer satisfaction by up to 140%, change order rates as low as -0.6%, and decreased cost of services by up to 31%.

The purpose of this thesis is to document the performance of this non-traditional research program with respect to the results identified above. Documenting such an effort will shed more light on what is required for a sustainable, industry-impacting, expertise-based academic research program.
Contributors: Rivera, Alfredo O (Author) / Kashiwagi, Dean T. (Thesis advisor) / Sullivan, Kenneth (Committee member) / Kashiwagi, Jacob S (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The outbreak of the coronavirus has impacted retailers and the food industry, which were forced to switch to delivery services due to social distancing measures. During this time, online sales and local deliveries saw an increase in demand, making these methods the new way of staying in business. For this reason, this research seeks to identify strategies that delivery service companies could implement to improve their operations by comparing two types of p-median models (node-based and edge-based). To simulate demand, geographical data will be analyzed for the cities of San Diego and Paris. The use of districting models will allow determining how balanced and compact the service regions within the districts are. After analyzing the variability of each demand simulation run, conclusions will be drawn on whether one model is better than the other.
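For reference, a hedged sketch of the classic node-based p-median model that underlies the comparison (demand nodes $i$, candidate facility sites $j$, demands $h_i$, distances $d_{ij}$):

```latex
\min \; \sum_{i} \sum_{j} h_i \, d_{ij} \, x_{ij}
\quad \text{s.t.} \quad
\sum_{j} x_{ij} = 1 \;\; \forall i,
\qquad
x_{ij} \le y_j \;\; \forall i, j,
\qquad
\sum_{j} y_j = p,
\qquad
x_{ij}, y_j \in \{0,1\}
```

Here $y_j = 1$ opens a facility at site $j$ and $x_{ij} = 1$ assigns demand node $i$ to it. In the edge-based variant, demand is represented along network edges rather than aggregated at nodes, which changes how the distances and assignments are defined.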
Contributors: Aguilar, Sarbith Anabella (Author) / Escobedo, Adolfo (Thesis director) / Juarez, Joseph (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12