Matching Items (7)

Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, the large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan, and the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios.

Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the trade-off between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model, and computational results are provided to compare the performance of the different solution methods.
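As an illustrative aside (not the dissertation's formulation), the sketch below captures the flavor of the assignment decisions inside such a MILP, written in PuLP: lots are assigned to qualified parallel machines, a family setup is incurred on any machine that runs that family, and makespan is minimized. The lot data, qualification matrix, and setup time are hypothetical placeholders, and the full model additionally handles flow-line stages, time periods, and batching.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical data: (product family, processing hours) per lot, plus a
# product-machine qualification matrix.
lots = {"L1": ("famA", 10), "L2": ("famA", 8), "L3": ("famB", 12)}
machines = ["M1", "M2"]
qualified = {("famA", "M1"): 1, ("famA", "M2"): 1,
             ("famB", "M1"): 0, ("famB", "M2"): 1}
families = {f for f, _ in lots.values()}
setup_hours = 2.0

m = LpProblem("backend_assignment", LpMinimize)
x = LpVariable.dicts("assign", [(l, k) for l in lots for k in machines], cat=LpBinary)
y = LpVariable.dicts("setup", [(f, k) for f in families for k in machines], cat=LpBinary)
cmax = LpVariable("makespan", lowBound=0)

m += cmax                                              # minimize makespan
for lot, (fam, _) in lots.items():
    m += lpSum(x[lot, k] for k in machines) == 1       # each lot assigned exactly once
    for k in machines:
        m += x[lot, k] <= qualified[fam, k]            # only on qualified machines
        m += x[lot, k] <= y[fam, k]                    # running family fam needs a setup
for k in machines:                                     # machine load bounds the makespan
    m += lpSum(h * x[lot, k] for lot, (_fam, h) in lots.items()) \
         + lpSum(setup_hours * y[f, k] for f in families) <= cmax

m.solve()
print({key: x[key].value() for key in x})              # lot-to-machine assignment
```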
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W. (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Functional or dynamic responses are prevalent in experiments in the fields of engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs.



First, we demonstrate how traditional exchange methods can be improved to produce a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of candidate points for the point exchange algorithm (PEA) and then improves the G-efficiency using a coordinate exchange algorithm (CEA).



The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix is improved by exploiting the sparsity of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm with respect to both computational efficiency and D-efficiency.
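As a hedged illustration of the kind of determinant update such exchange algorithms rely on (not the dissertation's exact formula, which further exploits sparsity), the snippet below numerically checks the rank-one identity det(M + xxᵀ) = det(M)(1 + xᵀM⁻¹x) with toy data.

```python
import numpy as np

# Numerical check of the rank-one determinant update
#   det(M + x x^T) = det(M) * (1 + x^T M^{-1} x),
# the identity exchange algorithms use to update the determinant of the
# information matrix cheaply when a design point is swapped in, instead of
# recomputing it from scratch. Toy data only.

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # toy design matrix: 30 runs, 5 model terms
M = X.T @ X                           # information matrix (up to scaling)
x_new = rng.normal(size=5)            # candidate point to add to the design

direct = np.linalg.det(M + np.outer(x_new, x_new))
updated = np.linalg.det(M) * (1.0 + x_new @ np.linalg.solve(M, x_new))

print(direct, updated)                # the two values agree to rounding error
```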



The third contribution is a study of optimal experimental designs for more general functional response models. First, the B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of the spectrum variable. Second, a two-step algorithm is proposed for finding the optimal design over both sampling points and experimental settings. In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed-effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach yield superior performance over other designs found in the literature.
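As a rough sketch of the first step only, the toy search below greedily swaps chosen sampling points against a candidate grid whenever the swap increases det(FᵀF). It is a hypothetical illustration: it uses a simple cubic polynomial basis as a stand-in for the B-spline smoother and is not the dissertation's algorithm.

```python
import numpy as np

# Toy point-exchange search for D-optimal sampling points of a spectrum
# variable t, using a cubic polynomial basis as a placeholder regressor set.

def basis(t):
    # cubic polynomial regressors; a fuller sketch would evaluate
    # B-spline basis functions here instead
    return np.array([1.0, t, t**2, t**3])

def logdet(points):
    F = np.array([basis(t) for t in points])
    sign, val = np.linalg.slogdet(F.T @ F)
    return val if sign > 0 else -np.inf

candidates = np.linspace(0.0, 1.0, 101)            # candidate sampling times
rng = np.random.default_rng(1)
design = list(rng.choice(candidates, size=8, replace=False))

improved = True
while improved:                                    # stop at a local optimum
    improved = False
    for i in range(len(design)):
        for c in candidates:
            trial = design[:i] + [c] + design[i + 1:]
            if logdet(trial) > logdet(design) + 1e-9:
                design, improved = trial, True

print(sorted(design))                              # near-D-optimal sampling times
```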
Contributors: Saleh, Moein (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Runger, George C. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis presents a successful application of operations research techniques to a nonprofit distribution system, improving distribution efficiency and customer service quality. It focuses on the truck routing problems faced by St. Mary’s Food Bank Distribution Center. The problem is modeled as a capacitated vehicle routing problem to improve distribution efficiency and is extended to a capacitated vehicle routing problem with time windows to increase customer service quality. Several heuristics are applied to these vehicle routing problems and tested on well-known benchmark problems. The algorithms are also evaluated by comparing their results with the plan currently used by St. Mary’s Food Bank Distribution Center. The results suggest the heuristics are quite competitive: on average, the heuristic solutions use 17% fewer trucks and 28.52% less travel time.
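For readers unfamiliar with this class of heuristics, the sketch below is a minimal Clarke-Wright savings construction for the capacitated vehicle routing problem. It is offered only as an illustration of the heuristic family evaluated here; the depot location, customer coordinates, demands, and vehicle capacity are hypothetical, not food bank data, and the thesis's heuristics are not necessarily this one.

```python
import math

# Minimal Clarke-Wright savings heuristic for a toy CVRP instance.
depot = (0.0, 0.0)
coord = {1: (2, 9), 2: (7, 3), 3: (6, 8), 4: (1, 4), 5: (8, 7)}
demand = {1: 4, 2: 6, 3: 5, 4: 3, 5: 7}
capacity = 12

def d(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

routes = [[i] for i in coord]                      # start: one route per customer

def route_of(i):
    return next(r for r in routes if i in r)

# savings from serving i and j consecutively instead of on separate trips
savings = sorted(((d(depot, coord[i]) + d(depot, coord[j]) - d(coord[i], coord[j]), i, j)
                  for i in coord for j in coord if i < j), reverse=True)

for s, i, j in savings:                            # merge the best savings first
    ri, rj = route_of(i), route_of(j)
    if ri is rj or sum(demand[k] for k in ri + rj) > capacity:
        continue
    if ri[-1] == i and rj[0] == j:                 # join only at route ends
        routes.remove(rj)
        ri.extend(rj)
    elif rj[-1] == j and ri[0] == i:
        routes.remove(ri)
        rj.extend(ri)

print(routes)                                      # capacity-feasible delivery routes
```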
Contributors: Li, Xiaoyan (Author) / Askin, Ronald (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
One of the greatest 21st-century challenges is meeting the needs of a growing world population expected to increase 35% by 2050, given projected trends in diets, consumption, and income. This in turn requires a 70-100% improvement in current production capability, even as the world undergoes systemic climate pattern changes. This growth not only translates into higher demand for staple products, such as rice, wheat, and beans, but also creates demand for high-value products such as fresh fruits and vegetables (FVs), fueled by better economic conditions and a more health-conscious consumer. These trends would seem to present opportunities for the economic development of environmentally well-suited regions to produce high-value products. Interestingly, many regions with production potential still exhibit a considerable gap between their current and ‘true’ maximum capability, especially in places where poverty is more common. Paradoxically, high-value horticultural products could often be produced in these regions if relatively small capital investments are made and proper marketing and distribution channels are created. The hypothesis is that small farmers within local agricultural systems are well positioned to take advantage of existing sustainable and profitable opportunities, specifically in high-value agricultural production. Unearthing these opportunities can entice investment in small-farm development and help these farmers enter the horticultural industry, thus expanding the volume, variety, and/or quality of products available for global consumption. The objective of this dissertation is three-fold: (1) to demonstrate the hidden production potential that exists within local agricultural communities, (2) to highlight the importance of supply chain modeling tools in the strategic design of local agricultural systems, and (3) to demonstrate the application of optimization and machine learning techniques to strategize the implementation of protective agricultural technologies.

As part of this dissertation, a yield approximation method is developed and integrated with a mixed-integer program to estimate a region’s potential to produce non-perennial vegetable crops. This integration offers practical approximations that help decision-makers identify the technologies needed to protect agricultural production and alter harvesting patterns to better match market behavior, and it provides an analytical framework through which external investment entities can assess different production options.
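A hedged sketch of this integration follows: yield estimates produced by an approximation model enter a small planting-selection MIP as parameters. The crop names, yields, prices, and land limit are hypothetical placeholders, and the model shown is far simpler than the dissertation's (no harvest timing, protective technologies, or market dynamics).

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

crops = ["tomato", "pepper"]
weeks = [1, 2, 3]
yield_est = {("tomato", 1): 30, ("tomato", 2): 34, ("tomato", 3): 28,   # t/ha from the
             ("pepper", 1): 18, ("pepper", 2): 20, ("pepper", 3): 17}   # yield model
price = {"tomato": 400, "pepper": 650}        # $/t expected at harvest
land_ha, plot_ha = 5, 1                       # total land and plot size per planting

m = LpProblem("planting_plan", LpMaximize)
x = LpVariable.dicts("plant", [(c, w) for c in crops for w in weeks], cat=LpBinary)

m += lpSum(price[c] * yield_est[c, w] * plot_ha * x[c, w]
           for c in crops for w in weeks)                      # maximize revenue
m += lpSum(plot_ha * x[c, w] for c in crops for w in weeks) <= land_ha
for w in weeks:                                                # one planting per week
    m += lpSum(x[c, w] for c in crops) <= 1

m.solve()
print([(c, w) for c in crops for w in weeks if x[c, w].value() > 0.5])
```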
Contributors: Flores, Hector M. (Author) / Villalobos, Rene (Thesis advisor) / Pan, Rong (Committee member) / Wu, Teresa (Committee member) / Parker, Nathan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Project portfolio selection (PPS) is a significant problem faced by most organizations. How best to select, from the many innovative ideas a company has developed, those to deploy in a proper and sustained manner with a balanced allocation of resources over multiple time periods is a question of vital importance to a company's goals. This dissertation details the steps involved in deploying a more intuitive portfolio selection framework that facilitates bringing analysts and management to a consensus on ongoing company efforts and buying into final decisions. A binary integer programming selection model is discussed that constructs an efficient frontier, allows portfolios to be evaluated on many different criteria, and lets decision makers (DMs) bring their experience and insight to the table when making a decision. A binary fractional integer program that provides additional choices by optimizing portfolios on cost-benefit ratios over multiple time periods is also presented. By combining this framework with an ‘elimination by aspects’ model of decision making, DMs evaluate portfolios on various objectives and ensure the selection of a portfolio most in line with their goals. By presenting a modeling framework that easily captures a large number of project inter-dependencies, together with an evolutionary algorithm that is intelligently guided in the search for attractive portfolios by a beam search heuristic, practitioners are given a ready recipe for solving large problem instances and generating attractive project portfolios for their organizations.

Finally, this dissertation addresses the problem of risk and uncertainty in project portfolio selection. After exploring the selection of portfolios based on trade-offs between a primary benefit and a primary cost, the third important dimension is examined: uncertainty of outcome and the risk a decision maker is willing to take on in the quest to select the best portfolio for their organization.
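As an illustrative aside (not the dissertation's model), the sketch below sweeps a small binary selection model over a budget grid to trace an efficient (cost, benefit) frontier of the kind decision makers would compare. Project names, benefits, and costs are hypothetical, and the dissertation's formulation adds multiple periods, interdependencies, and further criteria.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

benefit = {"P1": 12, "P2": 9, "P3": 15, "P4": 6, "P5": 11}
cost = {"P1": 5, "P2": 3, "P3": 8, "P4": 2, "P5": 6}

frontier = []
for budget in range(2, 25, 2):                   # epsilon-constraint on total cost
    m = LpProblem("portfolio", LpMaximize)
    x = LpVariable.dicts("pick", benefit, cat=LpBinary)
    m += lpSum(benefit[p] * x[p] for p in benefit)           # maximize total benefit
    m += lpSum(cost[p] * x[p] for p in benefit) <= budget    # within the budget
    m.solve()
    chosen = [p for p in benefit if x[p].value() > 0.5]
    frontier.append((budget, sum(benefit[p] for p in chosen), sorted(chosen)))

for point in frontier:
    print(point)     # (budget, benefit, portfolio) points for the DM to compare
```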
Contributors: Sampath, Siddhartha (Author) / Gel, Esma (Thesis advisor) / Fowler, John W. (Thesis advisor) / Kempf, Karl G. (Committee member) / Pan, Rong (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Healthcare operations have enjoyed reduced costs, improved patient safety, and innovation in healthcare policy over a huge variety of applications by tackling problems via the creation and optimization of descriptive mathematical models to guide decision-making. Despite these accomplishments, models are stylized representations of real-world applications, reliant on accurate estimations from historical data to justify their underlying assumptions. To protect against unreliable estimations, which can adversely affect the decisions generated from applications dependent on fully realized models, techniques that are robust against misspecifications are utilized while still making use of incoming data for learning. Hence, new robust techniques are applied that (1) allow the decision-maker to express a spectrum of pessimism against model uncertainties while (2) still utilizing incoming data for learning. Two main applications are investigated with respect to these goals, the first being a percentile optimization technique for a multi-class queueing system, applied to hospital Emergency Departments. The second studies the use of robust forecasting techniques to improve developing countries' vaccine supply chains via (1) an innovative outside-of-cold-chain policy and (2) a district-managed approach to inventory control. Both of these research application areas utilize data-driven approaches that feature learning and pessimism-controlled robustness.
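As an illustrative aside (not drawn from the dissertation), the sketch below shows the basic idea of pessimism-controlled, percentile-based decision making: each candidate decision is scored by the alpha-percentile of its simulated outcomes, with small alpha expressing strong pessimism. The staffing scenario, reward model, and all numbers are hypothetical.

```python
import numpy as np

# Toy percentile optimization: pick the staffing level whose alpha-percentile
# simulated reward is best. alpha near 0 is maximally pessimistic; alpha = 0.5
# recovers a median criterion.

rng = np.random.default_rng(7)
alpha = 0.10                                   # degree of pessimism
staffing_options = [2, 3, 4, 5]

def simulated_reward(staff, n=2000):
    # hypothetical reward: service benefit minus staffing cost, under noisy demand
    demand = rng.poisson(lam=3.0, size=n)
    served = np.minimum(demand, staff)
    return 10.0 * served - 4.0 * staff

best = max(staffing_options,
           key=lambda s: np.percentile(simulated_reward(s), 100 * alpha))
print(best)                                    # robust choice at this pessimism level
```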
Contributors: Bren, Austin (Author) / Saghafian, Soroush (Thesis advisor) / Mirchandani, Pitu (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This research addresses the design optimization of systems for a specified reliability level, considering the dynamic nature of component failure rates. When designing a mechanical system (especially a load-sharing system), the failure of one component leads to an increase in the probability of failure of the remaining components. Many engineering systems, such as aircraft, automobiles, and bridges, experience this phenomenon.

To design such systems, a Reliability-Based Design Optimization (RBDO) framework using the Sequential Optimization and Reliability Assessment (SORA) method is developed. The dynamic nature of component failure probability is considered in the system reliability model. Stress-Strength Interference (SSI) theory is used to build the limit state functions of the components, and the First Order Reliability Method (FORM) lies at the heart of the reliability assessment. In situations where the user needs to determine the optimum number of components and reduce component redundancy, the method can also be used to optimally allocate the number of components required to carry the system load. The main advantages of this method are its high computational efficiency and its ability to incorporate any optimization and reliability assessment technique. Several numerical examples are provided to validate the methodology.
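For readers unfamiliar with stress-strength interference, the minimal sketch below computes a component failure probability for normally distributed strength and stress, and shows how the load increase after a companion component fails raises that probability. Parameter values are hypothetical, and nonlinear or non-normal limit states would require a full FORM iteration rather than this closed form.

```python
from math import sqrt
from statistics import NormalDist

# Stress-strength interference for one component with normal strength R and
# normal stress S:
#   beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),   P_f = Phi(-beta).
# The second call illustrates the load-sharing effect: after a companion
# component fails, the survivor's stress mean (and spread) rises, so its
# failure probability jumps. All numbers are illustrative.

def ssi_failure_prob(mu_R, sd_R, mu_S, sd_S):
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # reliability index
    return NormalDist().cdf(-beta)

print(ssi_failure_prob(mu_R=50.0, sd_R=5.0, mu_S=30.0, sd_S=4.0))  # both components sharing load
print(ssi_failure_prob(mu_R=50.0, sd_R=5.0, mu_S=42.0, sd_S=6.0))  # after one component fails
```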
Contributors: Bala Subramaniyan, Arun (Author) / Pan, Rong (Thesis advisor) / Askin, Ronald (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2016