Matching Items (5,849)
Description
Nonregular screening designs can be an economical alternative to traditional resolution IV 2^(k-p) fractional factorials. Recently, 16-run nonregular designs, referred to as no-confounding designs, were introduced in the literature. These designs have the property that no pair of main effect (ME) and two-factor interaction (2FI) estimates is completely confounded. In this dissertation, orthogonal arrays were evaluated with many popular design-ranking criteria in order to identify optimal 20-run and 24-run no-confounding designs. Monte Carlo simulation was used to empirically assess the model-fitting effectiveness of the recommended no-confounding designs. The simulation results demonstrated that these new designs, particularly the 24-run designs, detect active effects over 95% of the time given sufficient model effect sparsity. The final chapter presents a screening design selection methodology, based on decision trees, to aid in the selection of a screening design from a list of published options. The methodology determines which of a candidate set of screening designs has the lowest expected experimental cost.
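To make the simulation study concrete, here is a minimal sketch of the Monte Carlo approach described above: sparse main-effects models are drawn at random, responses are simulated over a design, and the fraction of runs in which every active effect is detected is recorded. The 16-run Hadamard-based stand-in design, effect sizes, and t-test detection rule are illustrative assumptions, not the dissertation's exact protocol.

```python
import numpy as np
from scipy import stats
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

def detection_rate(X, n_active=3, beta=2.0, sigma=1.0, n_sims=2000, alpha=0.05):
    """Fraction of simulated sparse models whose active main effects are all
    declared significant by an OLS t-test on the main-effects model."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    crit = stats.t.ppf(1 - alpha / 2, df=n - k)
    hits = 0
    for _ in range(n_sims):
        active = rng.choice(k, size=n_active, replace=False)
        b = np.zeros(k)
        b[active] = beta * rng.choice([-1.0, 1.0], size=n_active)
        y = X @ b + rng.normal(0.0, sigma, size=n)
        bhat = XtX_inv @ X.T @ y                      # OLS estimates
        s2 = np.sum((y - X @ bhat) ** 2) / (n - k)    # residual variance
        t = np.abs(bhat) / np.sqrt(s2 * np.diag(XtX_inv))
        hits += np.all(t[active] > crit)
    return hits / n_sims

# Stand-in design: 8 columns of a 16-run Hadamard matrix (a regular design,
# used here only as a placeholder for an actual no-confounding design).
X = hadamard(16)[:, 1:9].astype(float)
print(detection_rate(X))
```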
Contributors: Stone, Brian (Author) / Montgomery, Douglas C. (Thesis advisor) / Silvestrini, Rachel T. (Committee member) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This dissertation explores different methodologies for combining two popular design paradigms in the field of computer experiments. Space-filling designs are commonly used to ensure good coverage of the design space, but they may not have good properties for model fitting. Optimal designs traditionally perform very well in terms of model fitting, particularly when a polynomial model is intended, but can result in problematic replication when factors are insignificant. Bringing these two design types together retains the positive properties of each while mitigating their potential weaknesses. Hybrid space-filling designs, generated as Latin hypercubes augmented with I-optimal points, are compared to designs of each contributing component. A second design type called a bridge design is also evaluated, which further integrates the disparate design types. Bridge designs are the result of a Latin hypercube undergoing coordinate exchange to reach constrained D-optimality, ensuring zero replication of factors in any one-dimensional projection. Lastly, bridge designs were augmented with I-optimal points with two goals in mind. Augmentation with candidate points generated under the same underlying analysis model reduces the prediction variance without greatly compromising the space-filling property of the design, while augmentation with candidate points generated under a different underlying analysis model can greatly reduce the impact of model misspecification during the design phase. Each of these composite designs is compared to pure space-filling and optimal designs. They typically outperform pure space-filling designs in terms of prediction variance and alphabetic efficiency while remaining comparable to pure optimal designs at small sample sizes. This makes them excellent candidates for initial experimentation.
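As an illustration of the hybrid construction, the following sketch generates a Latin hypercube and then greedily augments it with the candidate point that most reduces the I-criterion (average scaled prediction variance) for an assumed full quadratic model in two factors. The design sizes, candidate set, and greedy (rather than exact) augmentation are assumptions made for illustration, not the dissertation's exact algorithm.

```python
import numpy as np
from scipy.stats import qmc

def quad_model(X):
    """Model matrix for a full quadratic in 2 factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def i_criterion(X, grid):
    """Average scaled prediction variance over a reference sample of the region."""
    F = quad_model(X)
    FtF_inv = np.linalg.inv(F.T @ F)
    G = quad_model(grid)
    return np.mean(np.sum((G @ FtF_inv) * G, axis=1))  # diag(G B G') averaged

base = qmc.LatinHypercube(d=2, seed=1).random(n=10)    # space-filling core
cands = qmc.LatinHypercube(d=2, seed=2).random(n=200)  # candidate points
grid = qmc.LatinHypercube(d=2, seed=3).random(n=500)   # integration sample

design = base
for _ in range(4):  # augment with 4 approximately I-optimal points
    scores = [i_criterion(np.vstack([design, c]), grid) for c in cands[:, None, :]]
    design = np.vstack([design, cands[int(np.argmin(scores))]])

print(i_criterion(base, grid), i_criterion(design, grid))  # variance should drop
```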
Contributors: Kennedy, Kathryn (Author) / Montgomery, Douglas C. (Thesis advisor) / Johnson, Rachel T. (Thesis advisor) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling; a robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model, and computational results are provided to compare the performance of the different solution methods.
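To give a feel for the formulation, here is a minimal sketch using the open-source PuLP modeler: lots are assigned to qualified unrelated parallel machines at a single bottleneck stage to minimize makespan. The toy data, single-stage scope, and omission of family setups and batching are simplifying assumptions; the full MILP in the dissertation is far richer.

```python
import pulp

lots = ["L1", "L2", "L3", "L4"]
machines = ["M1", "M2"]
proc = {("L1", "M1"): 3, ("L1", "M2"): 4, ("L2", "M1"): 2, ("L2", "M2"): 2,
        ("L3", "M1"): 5, ("L3", "M2"): 3, ("L4", "M1"): 4, ("L4", "M2"): 6}
qualified = {("L3", "M1"): False}  # hypothetical: L3 is not qualified on M1

m = pulp.LpProblem("backend_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (lots, machines), cat="Binary")
cmax = pulp.LpVariable("makespan", lowBound=0)

m += cmax                                      # objective: minimize makespan
for l in lots:                                 # each lot goes to exactly one machine
    m += pulp.lpSum(x[l][k] for k in machines) == 1
for (l, k), ok in qualified.items():           # product-machine qualification
    if not ok:
        m += x[l][k] == 0
for k in machines:                             # every machine's load bounds makespan
    m += pulp.lpSum(proc[(l, k)] * x[l][k] for l in lots) <= cmax

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: next(k for k in machines if x[l][k].value() > 0.5) for l in lots})
```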
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes has been a strategy many organizations have adopted to reduce cost and to increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked towards Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment. Finally, a case study demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing; the research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges of deploying the methodology in both spaces.
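A minimal sketch of the multi-period 0-1 knapsack idea described above, written with PuLP: binary selection variables maximize expected net savings subject to a budget in each period. The project data and two-period horizon are invented for illustration; the published model spans the full deployment life cycle.

```python
import pulp

projects = {  # name: (expected net savings, cost in period 1, cost in period 2)
    "P1": (120, 40, 10), "P2": (90, 25, 30),
    "P3": (200, 70, 40), "P4": (60, 10, 20),
}
budget = {1: 90, 2: 60}  # hypothetical per-period budgets

m = pulp.LpProblem("lss_portfolio", pulp.LpMaximize)
y = pulp.LpVariable.dicts("select", projects, cat="Binary")

m += pulp.lpSum(projects[p][0] * y[p] for p in projects)   # expected net savings
for t in (1, 2):                                           # budget in each period
    m += pulp.lpSum(projects[p][t] * y[p] for p in projects) <= budget[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in projects if y[p].value() > 0.5])  # selected portfolio
```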
Contributors: Duarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This research is motivated by a deterministic scheduling problem that is fairly common in manufacturing environments, where certain processes call for a machine working on multiple jobs at the same time. An example of such an environment is wafer fabrication in the semiconductor industry, where some stages can be modeled as batch processes. Significant work has been done in the past on a single stage of parallel machines that process jobs in batches. The primary motivation behind this research is to extend that work to a two-stage flow shop where jobs arrive with unequal ready times and belong to incompatible job families, with the goal of minimizing total weighted tardiness. As a first step, a mixed-integer mathematical model is developed for the problem at hand. The problem is NP-hard, so the mathematical program can only solve problem instances of smaller sizes in a reasonable amount of time. The next step is to build heuristics that can provide feasible solutions in polynomial time for larger problem instances. The proposed heuristics are based on time-window decomposition, where jobs within a moving time frame are considered for batching each time a machine becomes available on either stage. The Apparent Tardiness Cost (ATC) rule is used to build batches, and is modified to calculate ATC indices at the batch level as well as the job level. A refinement of this heuristic is proposed in which it is run iteratively, each time assigning the start times of jobs on the second stage as due dates for the jobs on the first stage. The underlying logic of this iterative approach is to improve the due-date estimates for the first stage based on the assigned due dates of jobs in the second stage. An important study carried out as part of this research analyzes the bottleneck stage in terms of its location and how it affects the performance measure. Extensive experimentation is carried out to test how solution quality varies as input parameters are varied between high and low values.
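For reference, here is a minimal sketch of the standard ATC priority index that the batching heuristics build on. The job data and look-ahead parameter k are illustrative, and the dissertation's modification additionally scores whole batches rather than only individual jobs.

```python
import math

def atc_index(weight, proc, due, now, k, p_bar):
    """Standard ATC priority: weighted shortest processing time, discounted
    exponentially by slack; higher values mean more urgent."""
    slack = max(due - proc - now, 0.0)
    return (weight / proc) * math.exp(-slack / (k * p_bar))

jobs = [  # (weight, processing time, due date) -- hypothetical values
    (2.0, 4.0, 20.0), (1.0, 3.0, 12.0), (3.0, 5.0, 30.0),
]
now, k = 8.0, 2.0
p_bar = sum(p for _, p, _ in jobs) / len(jobs)  # average processing time
ranked = sorted(jobs, key=lambda j: -atc_index(j[0], j[1], j[2], now, k, p_bar))
print(ranked[0])  # job to schedule (or to seed a batch with) next
```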
Contributors: Tewari, Anubha Alokkumar (Author) / Fowler, John W (Thesis advisor) / Monch, Lars (Thesis advisor) / Gel, Esma S (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This article summarizes exploratory research conducted on private and public hospital systems in Australia and Costa Rica, analyzing the trends observed within supply chain procurement. Physician preferences and a general lack of available comparative effectiveness research, both challenges unique to the health care industry, were found to be barriers to effective supply chain performance in both systems. Among other insights, the ability of policy to catalyze improved procurement performance in public hospital systems was also observed. The role of centralization was found to be fundamental to the success of the systems examined, allowing hospitals to focus on strategic rather than operational decisions and to conduct value-streaming activities that generate increased cost savings.
Contributors: Budgett, Alexander Jay (Author) / Schneller, Eugene (Thesis director) / Gopalakrishnan, Mohan (Committee member) / Barrett, The Honors College (Contributor) / Department of Supply Chain Management (Contributor) / Department of English (Contributor)
Created: 2015-05
Description
This thesis focuses on the supply chain of the wine industry from a smaller-scale operational perspective. A standard process for converting grapes to wine has been identified and confirmed. The sequential order of harvest, destemmer/crusher, fermentation, press, barrels, bottling, and distribution constitutes the main tasks in the red wine conversion process. Variations in production between red and white wines are observed, but the overall process is roughly the same, with white wines swapping the fermentation and press steps and eliminating the barrels task. In addition, it is established that supply chain considerations do affect overall quality, such as taste, aroma, and smell. The ability to utilize a combination of diverse techniques, such as wooden barrels or stainless steel tanks for aging, is what contributes to the differentiation of each wine and makes it unique. While the production methodology and the use of specific materials and inputs alter the quality of wine, the majority of wine quality is influenced directly by the grape itself. The use of technology and machinery in the wine-making process is investigated and determined to be pivotal to the creation of wine and the survival of any size of winery. Technology has facilitated the wine-making process, and the current production path could not occur without it. Wine operations will adapt and incorporate new procedures to take advantage of advances in technology, especially in automation. The information used to assess the wine supply chain was obtained from an extensive literature review, interviews with industry professionals, and onsite tours of production facilities. Given the results and data, it is evident that wine production can greatly benefit from supply chain practices and concepts. The ability to reduce variation in the process and to determine which aspects contribute most to wine quality is vital for small-scale winery operations to remain competitive and become successful.
Contributors: Clarke, Tanya N (Author) / Oke, Adegoke (Thesis director) / Gopalakrishnan, Mohan (Committee member) / Barrett, The Honors College (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor)
Created: 2014-05
Description
Despite significant growth in research about supply chain integration, many questions remain unanswered regarding the path to integration and the benefits that can be accrued. This dissertation examines three aspects of supply chain integration in the health sector, leveraging the healthcare context to extend the theoretical boundaries, as well as applying supply chain knowledge to an industry known to be immature in terms of its supply chain practices.

In the first chapter, a supply chain operating model that breaks away from the traditional healthcare supply chain structures is examined. Consolidated Service Centers (CSCs) embody a shared services strategy, consolidating supply chain functions across multiple hospitals (i.e. horizontal integration) and disintermediating several key roles in healthcare supply chains such as the group purchasing organizations and national distributors. Through case studies, key characteristics of CSCs that enable them to reduce the level of supply chain complexity are examined.

The second chapter investigates buyer-supplier relationships in healthcare (i.e. supplier integration), where a high level of distrust exists between hospitals and their suppliers. This context is leveraged to study both enablers and barriers to buyer-supplier trust. The results suggest that contracting counteracts the negative effects of dependence on trust. Furthermore, the study reveals that hospital buyers may, in some situations, perceive dedicated resource investments made by suppliers as trust barriers, associating such investments with supplier upselling and entrenchment tactics. This runs contrary to how dedicated investments are perceived in most other industries.

In the third chapter, the triadic relationship between the hospital, supplier, and physician is taken into consideration. Given their professional autonomy and power, physicians commonly undermine hospital efforts in supply base rationalization and standardization. This study examines whether physician-hospital integration (i.e. customer integration) can drive physicians towards supply selection practices that align with the hospital’s sourcing strategies and ultimately result in better supply chain performance. The study draws on theory about agency triads and professionalism and tests its hypotheses with a random-effects regression model applied to data on hospital financial performance and physician-hospital arrangements.
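A minimal sketch of a random-effects regression of the kind described, using statsmodels' MixedLM with hospital-level random intercepts. The variable names (supply_cost, phys_integration, hospital_id) and the simulated data are hypothetical stand-ins for the study's actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_hosp, n_obs = 6, 4
hospital = np.repeat(np.arange(n_hosp), n_obs)
integration = rng.integers(0, 2, size=n_hosp * n_obs)     # arrangement indicator
hosp_effect = np.repeat(rng.normal(0.0, 0.3, n_hosp), n_obs)  # hospital-level effect
cost = 10.0 - 0.8 * integration + hosp_effect + rng.normal(0.0, 0.5, n_hosp * n_obs)

df = pd.DataFrame({"hospital_id": hospital,
                   "phys_integration": integration,
                   "supply_cost": cost})

# Random-intercept model: supply_cost ~ phys_integration, grouped by hospital
model = smf.mixedlm("supply_cost ~ phys_integration", df, groups=df["hospital_id"])
print(model.fit().summary())
```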
Contributors: Abdulsalam, Yousef J (Author) / Schneller, Eugene S (Thesis advisor) / Gopalakrishnan, Mohan (Committee member) / Maltz, Arnold (Committee member) / Dooley, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Surgery is one of the most important functions in a hospital with respect to operational cost, patient flow, and resource utilization. Planning and scheduling the Operating Room (OR) is important for hospitals to improve efficiency and achieve a high quality of service. At the same time, it is a complex task due to conflicting objectives and the uncertain nature of surgeries. In this dissertation, three different methodologies are developed to address the OR planning and scheduling problem. First, a simulation-based framework is constructed to analyze the factors that affect the utilization of a catheterization lab and to provide decision support for improving the efficiency of operations in a hospital with different priorities of patients. Both operational costs and patient satisfaction metrics are considered. A detailed parametric analysis is performed to provide generic recommendations; overall, it is found that the 75th percentile of process duration is always on the efficient frontier and is a good compromise between the two objectives. Next, the general OR planning and scheduling problem is formulated as a mixed-integer program. The objectives include reducing staff overtime, OR idle time, and patient waiting time, as well as satisfying surgeon preferences and regulating patient flow from the OR to the Post Anesthesia Care Unit (PACU). Exact solutions are obtained using real data. Heuristics and a random-keys genetic algorithm (RKGA) are used in the scheduling phase and compared with the optimal solutions. Interacting effects between planning and scheduling are also investigated. Lastly, a multi-objective simulation optimization approach is developed, which relaxes the deterministic assumption of the second study by integrating an optimization module, an RKGA implementation of the Non-dominated Sorting Genetic Algorithm II (NSGA-II), to search for Pareto-optimal solutions, with a simulation module that evaluates the performance of a given schedule. This approach is experimentally shown to be effective at finding Pareto-optimal solutions.
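A minimal sketch of the random-keys encoding behind the RKGA mentioned above: each individual is a real-valued vector whose sorted order decodes to a surgery sequence, so ordinary crossover always yields a feasible permutation. The surgery list is illustrative; in the study this decoder sits inside NSGA-II with a simulation-based evaluator.

```python
import numpy as np

rng = np.random.default_rng(42)
surgeries = ["hip", "knee", "cardiac", "cataract", "spine"]  # hypothetical cases

def decode(keys):
    """Sort positions by their random keys to obtain a surgery sequence."""
    return [surgeries[i] for i in np.argsort(keys)]

chromosome = rng.random(len(surgeries))   # one individual's random keys
print(decode(chromosome))

# Uniform crossover of two parents stays in [0, 1]^n, so every child still
# decodes to a valid permutation -- the property that makes random keys
# convenient inside a genetic algorithm like NSGA-II.
p1, p2 = rng.random(5), rng.random(5)
mask = rng.random(5) < 0.5
child = np.where(mask, p1, p2)
print(decode(child))
```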
Contributors: Li, Qing (Author) / Fowler, John W (Thesis advisor) / Mohan, Srimathy (Thesis advisor) / Gopalakrishnan, Mohan (Committee member) / Askin, Ronald G. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This paper details my journey into children's publishing (as a Supply Chain major fairly unfamiliar with the industry) and culminates with my attempt at writing a picture book.
Contributors: Gillmore, Lauren Emily (Author) / Brooks, Dan (Thesis director) / Gopalakrishnan, Mohan (Committee member) / Mokwa, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Supply Chain Management (Contributor)
Created: 2013-05