Matching Items (47)

Description

A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more suitable for scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast for a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. Then the scheduler sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Then experimental design is applied to understand the behavior of the DSS and identify the best configuration of the DSS under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of different solution methods.
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
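The scheduling model above is described only in words. The sketch below is a deliberately tiny, single-stage analogue written with the open-source PuLP library and its bundled CBC solver (an assumption, not the dissertation's implementation): it assigns lot quantities of a few hypothetical products to qualified parallel machines, charges a product-family setup on any machine that runs a family, and penalizes unmet demand.

```python
# Minimal single-stage sketch, not the dissertation's full flexible-flow-line MILP.
# All data are hypothetical.
import pulp

products = ["P1", "P2", "P3"]
machines = ["M1", "M2"]
families = {"P1": "F1", "P2": "F1", "P3": "F2"}
family_set = sorted(set(families.values()))
demand = {"P1": 40, "P2": 30, "P3": 50}                       # lots due this period
rate = {("P1", "M1"): 2.0, ("P1", "M2"): 1.5,                  # lots/hour (unrelated machines)
        ("P2", "M1"): 1.8, ("P2", "M2"): 2.2,
        ("P3", "M1"): 1.0, ("P3", "M2"): 1.6}
qualified = {("P1", "M1"), ("P2", "M1"), ("P2", "M2"), ("P3", "M2")}
setup_hours, capacity_hours = 2.0, 60.0

m = pulp.LpProblem("backend_stage", pulp.LpMinimize)
x = pulp.LpVariable.dicts("lots", (products, machines), lowBound=0)
y = pulp.LpVariable.dicts("setup", (family_set, machines), cat="Binary")
short = pulp.LpVariable.dicts("short", products, lowBound=0)   # unmet demand

# Heavily penalize unmet demand, lightly penalize setups.
m += 100 * pulp.lpSum(short[p] for p in products) + pulp.lpSum(
    y[f][k] for f in family_set for k in machines)

for p in products:
    m += pulp.lpSum(x[p][k] for k in machines) + short[p] >= demand[p]
    for k in machines:
        if (p, k) not in qualified:
            m += x[p][k] == 0                                  # product-machine qualification
        else:
            m += x[p][k] <= demand[p] * y[families[p]][k]      # production requires a family setup
for k in machines:
    m += (pulp.lpSum((1.0 / rate[(p, k)]) * x[p][k] for p in products if (p, k) in qualified)
          + setup_hours * pulp.lpSum(y[f][k] for f in family_set) <= capacity_hours)

m.solve(pulp.PULP_CBC_CMD(msg=False))
for p in products:
    for k in machines:
        if (p, k) in qualified and x[p][k].value():
            print(p, k, round(x[p][k].value(), 1))
```

The full MILP adds multiple bottleneck stages, lot sequencing, and the factory-specific resource constraints handled by the DSS, which is what makes real-size instances hard to solve directly.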
Description

This thesis was conducted to study and analyze the fund allocation processes adopted by different states in the United States to reduce the impact of the COVID-19 virus. Seven states and their funding methodologies were compared against the case counts within each state. The study also focused on the development of a physical distancing index based on three significant attributes. This index was then compared to the expenditure and case counts to support decision making.
A regression model was developed to analyze how each state's case counts played out against the model's predictions and the risk index.

Contributors: Jaisinghani, Shaurya (Author) / Mirchandani, Pitu (Thesis director) / Clough, Michael (Committee member) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
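To make the index-plus-regression idea above concrete, here is a hedged sketch using pandas and statsmodels. The state figures, attribute names, column names, and equal weighting are all invented for illustration; they are not the thesis data.

```python
# Hypothetical sketch: build a simple physical-distancing index from three
# attributes and regress state case counts on relief spending and the index.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "state":             ["AZ", "CA", "NY", "TX", "FL", "WA", "IL"],
    "cases_per_100k":    [9500, 8700, 10200, 9100, 9900, 6800, 8800],
    "relief_per_capita": [1200, 1500, 1800, 1100, 1000, 1600, 1400],
    "mobility_drop":     [0.25, 0.40, 0.45, 0.20, 0.18, 0.42, 0.35],   # attribute 1
    "mask_adherence":    [0.55, 0.75, 0.80, 0.50, 0.52, 0.78, 0.70],   # attribute 2
    "gathering_limits":  [0.30, 0.60, 0.70, 0.25, 0.20, 0.65, 0.55],   # attribute 3
})

# Equal-weight index over the three attributes (illustrative weighting).
df["distancing_index"] = df[["mobility_drop", "mask_adherence", "gathering_limits"]].mean(axis=1)

X = sm.add_constant(df[["relief_per_capita", "distancing_index"]])
fit = sm.OLS(df["cases_per_100k"], X).fit()
print(fit.params)

# States whose actual case counts sit far above the fitted values flag possible
# gaps between spending, distancing behavior, and outcomes.
df["residual"] = fit.resid
print(df[["state", "residual"]].sort_values("residual", ascending=False))
```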
Description

This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time-ordering of the data to extract features and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem. One uses an out-of-bag sampling method, called OOBForest, and the other, based on the new concept of a partial permutation test, is called a pForest. Experimental results show that the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
Contributors: Deng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2011
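The interval-feature idea behind the time series forest can be illustrated with a short, self-contained sketch: summarize intervals of each toy series by mean, standard deviation, and slope, and train scikit-learn's RandomForestClassifier on those features. The interval scheme and classifier choice are assumptions for illustration, not the dissertation's exact TSF algorithm.

```python
# Illustrative interval features in the spirit of a time series forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def interval_features(series, intervals):
    feats = []
    for start, end in intervals:
        seg = series[start:end]
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0]
        feats.extend([seg.mean(), seg.std(), slope])           # three interpretable summaries
    return feats

# Toy data: class 0 is flat noise, class 1 carries an upward drift.
n, length = 100, 60
X_raw = rng.normal(size=(n, length))
y = rng.integers(0, 2, size=n)
X_raw[y == 1] += np.linspace(0, 2, length)

# A handful of shared intervals (a real TSF samples many intervals per tree).
starts = rng.integers(0, length - 15, size=8)
intervals = [(int(s), int(s) + int(rng.integers(5, 15))) for s in starts]

X = np.array([interval_features(s, intervals) for s in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("largest feature importances:", np.round(np.sort(clf.feature_importances_)[-3:], 3))
```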
Description

Ionizing radiation used in patient diagnosis or therapy has negative short-term and long-term effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information is available to the patient or to Quality Assurance regarding the amount of organ dose received. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we develop a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the amount of absorbed dose in the critical organs while maintaining the image quality required for the diagnosis. The model is a large-scale mixed-integer program. We perform polyhedral analysis and derive several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the amount of absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize the angle and table location while minimizing dose in the critical organs with respect to the image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose in comparison with the previous method. Lastly, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for the organ motions within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
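A toy version of the geometry/energy selection step may help fix ideas: choose one (angle, energy) combination that minimizes a hypothetical critical-organ dose while meeting a minimum image-quality score. All coefficients below are invented, and the tiny PuLP model stands in for the dissertation's much larger MIP with its polyhedral strengthening.

```python
# Toy geometry/energy selection; dose and quality coefficients are illustrative only.
import pulp

angles = [0, 30, 60, 90]           # candidate gantry angles (degrees)
energies = [60, 80, 100]           # candidate tube energies (kVp)
dose = {(a, e): 0.5 + 0.01 * a + 0.02 * e for a in angles for e in energies}
quality = {(a, e): 0.4 + 0.005 * e + (0.2 if a in (30, 60) else 0.0)
           for a in angles for e in energies}
min_quality = 1.0

m = pulp.LpProblem("geometry_energy_selection", pulp.LpMinimize)
z = pulp.LpVariable.dicts("pick", list(dose), cat="Binary")

m += pulp.lpSum(dose[k] * z[k] for k in z)                     # minimize organ dose
m += pulp.lpSum(z[k] for k in z) == 1                          # choose exactly one setting
m += pulp.lpSum(quality[k] * z[k] for k in z) >= min_quality   # maintain image quality

m.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [k for k in z if z[k].value() == 1]
print("chosen (angle, energy):", chosen, " dose:", round(pulp.value(m.objective), 3))
```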
Description

In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of Distribution Centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs with multiple truck size selection and routing delivery to retailers. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially when transshipments and routing are involved. In order to find a good solution effectively, a two-phase solution methodology is derived: Phase I solves an integer programming model which includes all the constraints in the original model except that the routings are simplified to direct shipments by using estimated routing cost parameters. The Phase II model then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing cost and the effectiveness of the two-phase solution methodology are evaluated, and the computational performance is found to be promising. The problem can be solved heuristically within a reasonable time frame for a broad range of problem sizes (one hour for the instance of 200 retailers). In addition, a model is generated for a similar network design problem considering direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios.
Contributors: Xia, Mingjun (Author) / Askin, Ronald (Thesis advisor) / Mirchandani, Pitu (Committee member) / Zhang, Muhong (Committee member) / Kierstead, Henry (Committee member) / Arizona State University (Publisher)
Created: 2013
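The Phase I idea can be sketched in a few lines of PuLP: open DCs and assign retailers using an estimated per-retailer routing cost, deferring the actual vehicle routing to Phase II. The fixed costs, the capacity of three retailers per DC, and the cost estimates below are hypothetical.

```python
# Phase-I-style sketch with estimated routing costs standing in for real routes.
import pulp

dcs = ["DC1", "DC2", "DC3"]
retailers = ["R1", "R2", "R3", "R4", "R5"]
open_cost = {"DC1": 500, "DC2": 400, "DC3": 450}
est_route_cost = {(d, r): 10 + 5 * ((i + 2 * j) % 7)
                  for i, d in enumerate(dcs) for j, r in enumerate(retailers)}

m = pulp.LpProblem("phase1_network_design", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", dcs, cat="Binary")
x = pulp.LpVariable.dicts("assign", [(d, r) for d in dcs for r in retailers], cat="Binary")

m += (pulp.lpSum(open_cost[d] * y[d] for d in dcs)
      + pulp.lpSum(est_route_cost[(d, r)] * x[(d, r)] for d in dcs for r in retailers))

for r in retailers:
    m += pulp.lpSum(x[(d, r)] for d in dcs) == 1          # every retailer assigned once
for d in dcs:
    for r in retailers:
        m += x[(d, r)] <= y[d]                            # assign only to open DCs
    m += pulp.lpSum(x[(d, r)] for r in retailers) <= 3    # crude DC capacity (assumed)

m.solve(pulp.PULP_CBC_CMD(msg=False))
plan = {d: [r for r in retailers if x[(d, r)].value() == 1] for d in dcs if y[d].value() == 1}
print(plan)
# Phase II would then solve an inventory-routing problem for each opened DC and
# its assigned retailers, replacing the estimated routing costs used here.
```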
Description

The smart grid initiative is the impetus behind changes that are expected to culminate in an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price-responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem: a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price-sensitive resources for the transmission system problem, and vice versa, is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed-integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price-responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load consumption deviated by up to 25 percent when using a real-time price. The superiority of the DLMP is more pronounced when important distribution network conditions are not reflected by contemporary prices. The individual load consumption incentivized by the real-time price deviated by up to 90 percent from the optimal consumption in a congested distribution network. While the DLMP internalizes congestion management, the consumption incentivized by the real-time price caused overloads.
Contributors: Akinbode, Oluwaseyi Wemimo (Author) / Hedman, Kory W (Thesis advisor) / Heydt, Gerald T (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2013
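The piecewise-linear loss approximation mentioned above follows a standard construction that can be shown outside the OPF: the quadratic line loss r·f² is replaced by segments whose slopes are the derivative at each segment midpoint. The resistance, flow limit, and segment count below are assumed values for illustration.

```python
# Standalone illustration of a piecewise-linear loss approximation.
import numpy as np

r = 0.02        # per-unit line resistance (assumed)
f_max = 1.0     # per-unit flow limit (assumed)
K = 5           # number of linear segments

breaks = np.linspace(0.0, f_max, K + 1)
slopes = r * (breaks[:-1] + breaks[1:])        # 2*r*(segment midpoint)

def approx_loss(flow):
    """Fill segments in order; each contributes slope * (portion of its width used)."""
    f = abs(flow)
    widths = np.clip(f - breaks[:-1], 0.0, np.diff(breaks))
    return float(np.dot(slopes, widths))

for f in [0.2, 0.5, 0.9]:
    print(f"flow {f:.1f}: exact loss {r * f**2:.5f}, piecewise {approx_loss(f):.5f}")
# Inside the OPF each segment becomes a bounded variable; the segments fill in the
# right order only under positive prices, which is exactly the breakdown with
# non-positive DLMPs/LMPs that the mixed-integer correction addresses.
```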
Description

With the rapid development of mobile sensing technologies like GPS, RFID, and sensors in smartphones, capturing position data in the form of trajectories has become easy. Moving object trajectory analysis is a growing area of interest owing to its applications in various domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from the raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about the traffic in the city, a change in these trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in HMMs, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the method for change detection. Secondly, sequential pattern mining is used to develop a model to detect changes in frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are frequent, has the time interval distribution in the pattern changed? Two different approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. A challenge with clustering semantic trajectories is that both numeric and categorical attributes are present. Another problem to be addressed while clustering is that trajectories can be of different lengths and can also have missing values. A tree-based ensemble is used to address these problems. The approach is extended to outlier detection in semantic trajectories.
Contributors: Kondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
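The likelihood-based change-detection idea can be sketched with the standard Baum-Welch implementation in hmmlearn (the dissertation's m-BaumWelch variant is not reproduced here): fit an HMM to baseline trajectories, then score new windows, where a sustained drop in per-point log-likelihood suggests the underlying movement has changed. The toy trajectories below are simulated.

```python
# Likelihood-drop sketch for trajectory change detection using hmmlearn.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)

def simulate(shift=0.0, n=300):
    """Toy 2-D positions drifting along a line, optionally shifted sideways."""
    t = np.linspace(0, 10, n)
    return np.column_stack([t + shift, 0.5 * t]) + rng.normal(scale=0.1, size=(n, 2))

baseline = simulate()
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
model.fit(baseline)
ref = model.score(baseline) / len(baseline)

for label, data in [("no change", simulate()), ("changed", simulate(shift=2.0))]:
    ll = model.score(data) / len(data)            # average log-likelihood per point
    print(label, "drop in log-likelihood per point:", round(ref - ll, 3))
# A threshold on this drop would flag a change, analogous to the likelihood
# estimates under assumed changes described in the abstract.
```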
Description

The main objective of this research is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, the ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution in exploring and representing concrete managerial insights for ECASs and offering new optimized actions and modeling paradigms in agent-based simulation.
Contributors: Haghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
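A bare-bones sketch in the spirit of the consumer-based ABM is given below: each agent adjusts its electricity consumption toward its own price-responsive target, nudged by the average behavior of its network neighbors, while a simple price rule closes the loop. The agent parameters, price rule, and network are assumptions, not the dissertation's calibrated platform.

```python
# Minimal consumer-agent loop (illustrative only).
import random

random.seed(0)
N, T = 20, 30
base_demand = [random.uniform(5, 15) for _ in range(N)]       # kWh preferred at price 1.0
elasticity = [random.uniform(0.2, 0.8) for _ in range(N)]
neighbors = {i: random.sample([j for j in range(N) if j != i], 3) for i in range(N)}
consumption = base_demand[:]

for t in range(T):
    total = sum(consumption)
    price = 0.5 + 0.01 * total                                 # simple market-clearing proxy
    for i in range(N):
        own_target = base_demand[i] * price ** (-elasticity[i])   # price response
        peer_avg = sum(consumption[j] for j in neighbors[i]) / 3  # social-network pull
        consumption[i] = 0.7 * own_target + 0.3 * peer_avg
    if t % 10 == 0:
        print(f"t={t:2d}  price={price:.2f}  total load={total:.1f}")
# Whether price and load settle down or oscillate is an emergent property of these
# local rules, which is the kind of pattern the multi-layer framework analyzes.
```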
Description

A P-value-based method is proposed for statistical monitoring of various types of profiles in Phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals out-of-control. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and the error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect that the number of observations within a sample has on the performance of the proposed method is investigated. The proposed method is also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
Contributors: Adibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2013
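A simplified rendition of the single-chart mechanism is sketched below: for each new profile sample, compute P-values for the intercept, the slope, and the error variance against their in-control values, and signal if any falls below a significance level. The abstract's method computes P-values at each level within a sample, so this per-parameter variant, with made-up in-control values and shifts, is only meant to illustrate how one chart can watch several parameters.

```python
# Simplified profile-monitoring sketch: signal when any parameter's P-value < alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)                         # fixed levels within each sample
a0, b0, sigma0, alpha = 3.0, 2.0, 0.5, 0.01       # in-control model and significance level

def profile_pvalues(y):
    X = np.column_stack([np.ones_like(x), x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(x) - 2
    resid = y - X @ beta
    s2 = resid @ resid / dof
    cov = s2 * np.linalg.inv(X.T @ X)             # covariance of the OLS estimates
    p_coef = [2 * stats.t.sf(abs((beta[i] - [a0, b0][i]) / np.sqrt(cov[i, i])), dof)
              for i in range(2)]
    chi2 = dof * s2 / sigma0 ** 2                 # two-sided test on the error variance
    p_var = 2 * min(stats.chi2.cdf(chi2, dof), stats.chi2.sf(chi2, dof))
    return p_coef + [p_var]

def sample(intercept=a0, slope=b0, sigma=sigma0):
    return intercept + slope * x + rng.normal(scale=sigma, size=len(x))

for label, y in [("in control", sample()), ("slope shift", sample(slope=5.0))]:
    p = profile_pvalues(y)
    print(label, [round(v, 4) for v in p], "signal" if min(p) < alpha else "ok")
```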
Description

This dissertation addresses product design optimization, including reliability-based design optimization (RBDO) and robust design with epistemic uncertainty. It is divided into four major components, as outlined below. Firstly, a comprehensive study of uncertainties is performed, in which sources of uncertainty are listed and categorized and their impacts are discussed. Epistemic uncertainty, which is due to lack of knowledge and can be reduced by taking more observations, is of primary interest. In particular, strategies to address epistemic uncertainties arising from an implicit constraint function are discussed. Secondly, a sequential sampling strategy to improve RBDO under an implicit constraint function is developed. In modern engineering design, an RBDO task is often performed by a computer simulation program, which can be treated as a black box, as its analytical function is implicit. An efficient sampling strategy for learning the probabilistic constraint function within the design optimization framework is presented. The method is a sequential experimentation around the approximate most probable point (MPP) at each step of the optimization process. It is compared with MPP-based sampling, a lifted surrogate function, and non-sequential random sampling. Thirdly, a particle-splitting-based reliability analysis approach is developed for design optimization. In reliability analysis, traditional simulation methods such as Monte Carlo simulation may provide accurate results but are often accompanied by high computational cost. To increase efficiency, particle splitting is integrated into RBDO. It is an improvement of subset simulation that uses multiple particles to enhance the diversity and stability of simulation samples. This method is further extended to address problems with multiple probabilistic constraints and is compared with the MPP-based methods. Finally, a reliability-based robust design optimization (RBRDO) framework is provided to integrate the consideration of design reliability and design robustness simultaneously. The quality loss objective in robust design, considered together with the production cost in RBDO, is used to formulate a multi-objective optimization problem. With the epistemic uncertainty from the implicit performance function, the sequential sampling strategy is extended to RBRDO, and a combined metamodel is proposed to handle both controllable and uncontrollable variables. The solution is a Pareto frontier, in contrast to the single optimal solution of RBDO.
Contributors: Zhuang, Xiaotian (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Zhang, Muhong (Committee member) / Du, Xiaoping (Committee member) / Arizona State University (Publisher)
Created: 2012
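To anchor the reliability vocabulary above, here is a small, generic sketch that estimates a failure probability P[g(X) ≤ 0] by crude Monte Carlo; the dissertation's contribution is replacing exactly this expensive step with MPP-based sequential sampling and particle splitting. The limit-state function and distributions are textbook-style assumptions, not the dissertation's examples.

```python
# Crude Monte Carlo estimate of a failure probability for a hypothetical limit state.
import numpy as np

rng = np.random.default_rng(0)

def g(x1, x2):
    """Hypothetical limit-state function: failure when g <= 0."""
    return x1 ** 2 * x2 / 20.0 - 1.0

n = 200_000
x1 = rng.normal(4.0, 0.3, n)        # assumed distributions of the design variables
x2 = rng.normal(2.0, 0.3, n)
fail = g(x1, x2) <= 0.0
pf = fail.mean()
se = np.sqrt(pf * (1 - pf) / n)
print(f"estimated Pf = {pf:.4f} +/- {1.96 * se:.4f}")
# In RBDO this estimate sits inside an optimization loop: the design (here, the
# means of x1 and x2) is chosen to minimize cost subject to Pf staying below a
# target, which is why cheaper reliability evaluations matter so much.
```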