Matching Items (17)

Description
Nonregular screening designs can be an economical alternative to traditional resolution IV 2^(k-p) fractional factorials. Recently, 16-run nonregular designs, referred to as no-confounding designs, were introduced in the literature. These designs have the property that no pair of main effect (ME) and two-factor interaction (2FI) estimates is completely confounded. In this dissertation, orthogonal arrays were evaluated with many popular design-ranking criteria in order to identify optimal 20-run and 24-run no-confounding designs. Monte Carlo simulation was used to empirically assess the model-fitting effectiveness of the recommended no-confounding designs. The simulation results demonstrated that these new designs, particularly the 24-run designs, detect active effects over 95% of the time given sufficient model effect sparsity. The final chapter presents a screening design selection methodology, based on decision trees, to aid in the selection of a screening design from a list of published options. The methodology determines which of a candidate set of screening designs has the lowest expected experimental cost.
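
As a rough illustration of the no-confounding property described above (a sketch, not code from the dissertation), the fragment below checks how strongly the main-effect columns of a ±1-coded design matrix correlate with its two-factor-interaction columns; complete confounding corresponds to an absolute correlation of 1:

```python
import numpy as np
from itertools import combinations

def max_me_2fi_correlation(X):
    """Largest |correlation| between any main-effect (ME) column of a
    two-level design X (rows = runs, entries +/-1) and any two-factor-
    interaction (2FI) column. A value of 1.0 means some ME/2FI pair is
    completely confounded; no-confounding designs keep this below 1."""
    _, k = X.shape
    worst = 0.0
    for a, b in combinations(range(k), 2):
        tfi = X[:, a] * X[:, b]              # 2FI column for factors a, b
        for m in range(k):
            worst = max(worst, abs(np.corrcoef(X[:, m], tfi)[0, 1]))
    return worst

# Example: a regular 2^(4-1) fraction with D = ABC is resolution IV, so
# MEs are clear of 2FIs and the statistic comes out as 0.0.
base = np.array([[i, j, l] for i in (-1, 1) for j in (-1, 1) for l in (-1, 1)])
X = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])
print(max_me_2fi_correlation(X))
```
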
Contributors: Stone, Brian (Author) / Montgomery, Douglas C. (Thesis advisor) / Silvestrini, Rachel T. (Committee member) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
This dissertation explores different methodologies for combining two popular design paradigms in the field of computer experiments. Space-filling designs are commonly used to ensure good coverage of the design space, but they may not have good properties when it comes to model fitting. Optimal designs traditionally perform very well in terms of model fitting, particularly when a polynomial model is intended, but can result in problematic replication when factors turn out to be insignificant. Bringing these two design types together retains the positive properties of each while mitigating their potential weaknesses. Hybrid space-filling designs, generated as Latin hypercubes augmented with I-optimal points, are compared to designs of each contributing component. A second design type, called a bridge design, is also evaluated; it further integrates the two design types. Bridge designs are the result of a Latin hypercube undergoing coordinate exchange to reach constrained D-optimality, ensuring zero replication of factor levels in any one-dimensional projection. Lastly, bridge designs were augmented with I-optimal points with two goals in mind: augmentation with candidate points generated under the same underlying analysis model reduces the prediction variance without greatly compromising the space-filling property of the design, while augmentation with candidate points generated under a different underlying analysis model can greatly reduce the impact of model misspecification during the design phase. Each of these composite designs is compared to pure space-filling and optimal designs. They typically outperform pure space-filling designs in terms of prediction variance and alphabetic efficiency, while remaining comparable with pure optimal designs at small sample sizes. This justifies them as excellent candidates for initial experimentation.
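
A minimal sketch of the hybrid idea, under assumed simplifications (two factors, a full quadratic model, and greedy D-optimal augmentation standing in for the I-optimal augmentation used in the dissertation):

```python
import numpy as np
from scipy.stats import qmc

def model_matrix(X):
    """Columns of a full quadratic model in two factors:
    intercept, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def augment_greedy(X, candidates, n_add):
    """Greedily append the candidate point that most increases log|F'F|
    (a D-optimality criterion, used here only because it is compact to
    code; the dissertation's augmentation targets I-optimality)."""
    for _ in range(n_add):
        best, best_score = None, -np.inf
        for c in candidates:
            F = model_matrix(np.vstack([X, c]))
            score = np.linalg.slogdet(F.T @ F)[1]
            if score > best_score:
                best, best_score = c, score
        X = np.vstack([X, best])
    return X

# 10 space-filling runs from a Latin hypercube, scaled to [-1, 1] ...
lhs = qmc.LatinHypercube(d=2, seed=7).random(n=10) * 2 - 1
# ... augmented with 4 runs drawn from a 3x3 candidate grid.
grid = np.array([[a, b] for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
design = augment_greedy(lhs, grid, n_add=4)
print(design.shape)   # (14, 2)
```
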
Contributors: Kennedy, Kathryn (Author) / Montgomery, Douglas C. (Thesis advisor) / Johnson, Rachel T. (Thesis advisor) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
This thesis presents a model for the buying behavior of consumers in a technology market. In this model, a potential consumer is not perfectly rational but exhibits bounded rationality following the axioms of prospect theory: reference dependence, diminishing returns, and loss sensitivity. To evaluate the products on different criteria, the analytic hierarchy process is used, which allows for relative comparisons. The analytic hierarchy process proposes that when making a choice between several alternatives, one should measure the products by comparing them relative to each other; this allows the user to put numbers to subjective criteria. Additionally, evidence suggests that a consumer will often consider not only their own evaluation of a product but also the choices of other consumers. Thus, the model in this thesis applies prospect theory to products with multiple attributes, using word of mouth as a criterion in the evaluation.
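
To make the two ingredients concrete, here is a small illustrative sketch (the parameter values are the classic Tversky-Kahneman estimates, not the thesis's calibration, and the criteria are invented): a prospect-theory value function applied to attribute gains and losses around a reference product, weighted by AHP priorities derived from a pairwise-comparison matrix.

```python
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, convex and
    steeper over losses (loss aversion), measured from a reference point."""
    x = np.asarray(x, dtype=float)
    v = np.empty_like(x)
    gains = x >= 0
    v[gains] = x[gains] ** alpha
    v[~gains] = -lam * (-x[~gains]) ** beta
    return v

def ahp_weights(P):
    """Criterion weights from a pairwise-comparison matrix P, where
    P[i, j] says how strongly criterion i is preferred to criterion j:
    the principal eigenvector of P, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical criteria: price, battery life, word of mouth.
P = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w = ahp_weights(P)
# A product's attributes relative to the consumer's reference point:
deltas = np.array([0.4, -0.2, 0.1])     # gain, loss, gain
print(float(w @ prospect_value(deltas)))
```
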
Contributors: Elkholy, Alexander (Author) / Armbruster, Dieter (Thesis advisor) / Kempf, Karl (Committee member) / Li, Hongmin (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, the large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling: a robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of the different solution methods.
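
As a toy fragment of the kind of formulation described (a few lots on unrelated parallel machines with a qualification matrix, minimizing makespan; the dissertation's full model with batching, setups, and bottleneck stages is far larger), using the PuLP modeling library:

```python
# pip install pulp
import pulp

# Invented data: 4 lots, 2 unrelated machines (times differ by machine).
lots, machines = range(4), range(2)
proc = {(0, 0): 3, (0, 1): 5, (1, 0): 4, (1, 1): 4,
        (2, 0): 2, (2, 1): 6, (3, 0): 5, (3, 1): 3}
qual = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0,   # lot 1 is only
        (2, 0): 1, (2, 1): 1, (3, 0): 0, (3, 1): 1}   # qualified on machine 0

prob = pulp.LpProblem("qualified_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (lots, machines), cat="Binary")
cmax = pulp.LpVariable("makespan", lowBound=0)
prob += cmax
for j in lots:
    prob += pulp.lpSum(x[j][m] for m in machines) == 1   # assign every lot once
    for m in machines:
        prob += x[j][m] <= qual[j, m]                    # only to qualified machines
for m in machines:
    prob += pulp.lpSum(proc[j, m] * x[j][m] for j in lots) <= cmax  # load bound
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(cmax),
      [(j, m) for j in lots for m in machines if x[j][m].value() > 0.5])
```
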
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes is a strategy many organizations have adopted to reduce cost and increase their global footprint, but it has resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked to Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, it generally lacks a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas of the business on which to focus a deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion faces the decision of selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment, and a case study demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing; the research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges of deploying the methodology in both spaces.
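
A minimal sketch of the multi-period 0-1 knapsack idea (all numbers invented for illustration; PuLP assumed as the solver interface): each candidate project either enters the portfolio or not, expected net savings are maximized, and a budget constraint applies in every period of the deployment.

```python
import pulp

# Invented data: 5 candidate projects, 3 budget periods.
savings = [120, 95, 180, 60, 140]          # expected net savings per project (k$)
effort = [[10, 5, 0], [8, 8, 4], [12, 10, 6], [4, 2, 2], [9, 7, 7]]
budget = [25, 20, 12]                      # available capacity per period

prob = pulp.LpProblem("lss_portfolio", pulp.LpMaximize)
pick = [pulp.LpVariable(f"project_{i}", cat="Binary") for i in range(len(savings))]
prob += pulp.lpSum(s * p for s, p in zip(savings, pick))
for t in range(len(budget)):               # one knapsack constraint per period
    prob += pulp.lpSum(effort[i][t] * pick[i]
                       for i in range(len(savings))) <= budget[t]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i, p in enumerate(pick) if p.value() > 0.5],
      pulp.value(prob.objective))
```
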
Contributors: Duarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
This research is motivated by a deterministic scheduling problem that is fairly common in manufacturing environments, where certain processes call for a machine working on multiple jobs at the same time. An example of such an environment is wafer fabrication in the semiconductor industry, where some stages can be modeled as batch processes. Significant work has been done in the past on a single stage of parallel machines that process jobs in batches. The primary motivation behind this research is to extend that work to a two-stage flow shop where jobs arrive with unequal ready times and belong to incompatible job families, with the goal of minimizing total weighted tardiness. As a first step, a mixed integer mathematical model is developed for the problem at hand. The problem is NP-hard, so the mathematical program can only solve problem instances of smaller sizes in a reasonable amount of time. The next step is to build heuristics that can provide feasible solutions in polynomial time for larger problem instances. The heuristics proposed are based on time window decomposition, where jobs within a moving time frame are considered for batching each time a machine becomes available on either stage. The Apparent Tardiness Cost (ATC) rule is used to build batches, and is modified to calculate ATC indices at both the batch and job level. An improvement to this heuristic is also proposed, in which the heuristic is run iteratively, each time assigning the start times of jobs on the second stage as due dates for the jobs on the first stage. The underlying logic is to improve the way due dates are estimated for the first stage based on the assigned due dates of jobs on the second stage. An important study carried out as part of this research analyzes the bottleneck stage in terms of its location and how it affects the performance measure. Extensive experimentation is carried out to test how the solution quality varies when input parameters are varied between high and low values.
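
For reference, a minimal job-level sketch of the ATC rule in its standard form (the batch-level modifications developed in the thesis are not reproduced here): at each dispatch instant, the job with the largest index is chosen.

```python
import math

def atc_index(w, p, d, t, k, p_bar):
    """Apparent Tardiness Cost index: weighted shortest processing time
    (w/p) discounted exponentially by the job's slack at time t. k is a
    look-ahead scaling parameter and p_bar the average processing time
    of the jobs currently waiting."""
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

# (weight, processing time, due date) for the jobs waiting at t = 10:
jobs = [(2.0, 4.0, 18.0), (1.0, 3.0, 12.0), (3.0, 6.0, 30.0)]
p_bar = sum(p for _, p, _ in jobs) / len(jobs)
best = max(jobs, key=lambda j: atc_index(*j, t=10.0, k=2.0, p_bar=p_bar))
print(best)   # the job the machine should start next
```
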
Contributors: Tewari, Anubha Alokkumar (Author) / Fowler, John W (Thesis advisor) / Monch, Lars (Thesis advisor) / Gel, Esma S (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Surgery is one of the most important functions in a hospital with respect to operational cost, patient flow, and resource utilization. Planning and scheduling the Operating Room (OR) is important for hospitals seeking to improve efficiency and achieve a high quality of service. At the same time, it is a complex task due to conflicting objectives and the uncertain nature of surgeries. In this dissertation, three methodologies are developed to address the OR planning and scheduling problem. First, a simulation-based framework is constructed to analyze the factors that affect the utilization of a catheterization lab and to provide decision support for improving the efficiency of operations in a hospital with different patient priorities. Both operational costs and patient satisfaction metrics are considered, and detailed parametric analysis is performed to provide generic recommendations. Overall, it is found that the 75th percentile of process duration is always on the efficient frontier and is a good compromise between the two objectives. Next, the general OR planning and scheduling problem is formulated as a mixed integer program. The objectives include reducing staff overtime, OR idle time, and patient waiting time, as well as satisfying surgeon preferences and regulating patient flow from the OR to the Post Anesthesia Care Unit (PACU). Exact solutions are obtained using real data. Heuristics and a random keys genetic algorithm (RKGA) are used in the scheduling phase and compared with the optimal solutions, and the interacting effects between planning and scheduling are investigated. Lastly, a multi-objective simulation optimization approach is developed that relaxes the deterministic assumption of the second study by integrating an optimization module, an RKGA implementation of the Non-dominated Sorting Genetic Algorithm II (NSGA-II), to search for Pareto optimal solutions, with a simulation module that evaluates the performance of a given schedule. The approach is experimentally shown to be an effective technique for finding Pareto optimal solutions.
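
A small sketch of the random-keys encoding that makes the RKGA convenient inside NSGA-II (illustrative only; the dissertation's actual chromosome and operators are not reproduced): sorting a vector of real-valued keys decodes it into a surgery sequence, so standard crossover always yields a feasible permutation.

```python
import numpy as np

def decode(keys):
    """Decode a random-keys chromosome into a sequence: the surgery with
    the smallest key goes first, and so on. Any real vector decodes to a
    valid permutation, so no repair step is ever needed."""
    return np.argsort(keys)

rng = np.random.default_rng(0)
parent_a, parent_b = rng.random(6), rng.random(6)   # chromosomes for 6 surgeries
mask = rng.random(6) < 0.7                          # biased uniform crossover
child = np.where(mask, parent_a, parent_b)
print(decode(parent_a), decode(child))              # both valid sequences
```
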
Contributors: Li, Qing (Author) / Fowler, John W (Thesis advisor) / Mohan, Srimathy (Thesis advisor) / Gopalakrishnan, Mohan (Committee member) / Askin, Ronald G. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2010

Description
In 2020, China's GDP surpassed the 100-trillion-yuan mark for the first time, ranking second in the world, and China was the only major economy to post positive growth, realizing its "growth miracle." In recent years, however, the income of ordinary company employees has grown far more slowly than the economy as a whole. The twenty-first century is an era of competition for talent, and the key to enterprise transformation and upgrading lies in employees' capacity for independent innovation. According to compensation incentive theory, paying employees more can mobilize their enthusiasm and motivation, strengthen their capacity for innovation, and raise both innovation performance and firm value. This dissertation therefore studies the relationship between employee compensation and firm value and explores whether innovation performance mediates that relationship. Building on the domestic and international literature on employee compensation, innovation performance, and firm value, it takes 214 companies listed on China's STAR Market as its sample. Theoretical analysis and empirical work yield the following results. (1) In the full sample of STAR Market companies, the regressions show that employee compensation is significantly positively related to firm value and to innovation performance, that innovation performance is significantly positively related to firm value, and that innovation performance mediates the relationship between employee compensation and firm value. (2) After distinguishing firms by ownership, the private-firm subsample yields results essentially consistent with the full sample. In the non-private subsample, the coefficients of employee compensation on innovation performance and firm value are positive but insignificant, indicating a positive but insignificant incentive effect; innovation performance has a positive but insignificant effect on firm value; and innovation performance does not mediate the compensation-value relationship but instead plays a masking (suppression) role. (3) After distinguishing firms by operating location, the subsample of firms outside first-tier cities yields results essentially consistent with the full sample. For firms in first-tier cities, the compensation coefficients are positive but insignificant, innovation performance is significantly positively related to firm value, and innovation performance plays a masking role in the relationship between employee compensation and firm value.
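
A hedged sketch of the mediation logic the study tests (simulated data, not the STAR Market sample; the three-regression scheme follows the classic Baron-Kenny steps): compensation should predict firm value (total effect), compensation should predict innovation performance, and the direct effect should shrink once the mediator enters.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 214   # matches the study's sample size, but the data here is simulated
pay = rng.normal(size=n)                                  # employee compensation
innov = 0.5 * pay + rng.normal(size=n)                    # innovation performance
value = 0.3 * pay + 0.4 * innov + rng.normal(size=n)      # firm value

def ols(y, *xs):
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

total = ols(value, pay)          # c: total effect of pay on firm value
a_path = ols(innov, pay)         # a: pay -> mediator
joint = ols(value, pay, innov)   # c' (direct effect) and b (mediator) together
print(total.params[1], a_path.params[1], joint.params[1], joint.params[2])
# Mediation: a and b significant with |c'| < |c|; a masking (suppression)
# effect appears when the indirect and direct paths carry opposite signs.
```
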
Contributors: Jin, Jian (Author) / Huang, Xiaochuan (Thesis advisor) / Chang, Chun (Thesis advisor) / Li, Hongmin (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
Grounded in consumer cognition theory, Maslow's hierarchy of needs, and brand value theory, this study analyzes the dimensions of perceived quality and builds an evaluation system around two components: perceived extrinsic quality and perceived intrinsic quality. Taking Moutai, China's national liquor, as its case, it traces the mechanism by which perceived quality influences perceived value and brand premium, constructs a theoretical framework, proposes research hypotheses, and examines the relationships among the three in depth, offering guidance for modern enterprises. Empirically, a structural equation model is used to regress the relationships among four latent variables: perceived extrinsic quality, perceived intrinsic quality, perceived value, and brand premium. Before the formal regression, descriptive statistics, reliability and validity tests, and correlation analysis are applied to the initial data and model to judge fit and adequacy, and the model is adjusted according to modification indices (MI) and path-coefficient significance tests until the results reach an acceptable level. The regression results show that the main path from perceived quality to brand premium is "perceived extrinsic quality - perceived value - brand premium"; perceived intrinsic quality has no significant effect on either perceived value or brand premium, and brand premium is not directly and significantly affected by either quality dimension. Perceived value therefore mediates between perceived quality and brand premium, and mediation analysis confirms that it acts as a significant mediating variable: baijiu consumers shape brand premium mainly through the effect of perceived extrinsic quality on perceived value. As for drinking experience and health concern, the moderation regressions show that drinking experience significantly and negatively moderates the "perceived intrinsic quality - perceived value" and "perceived intrinsic quality - brand premium" paths, while health concern positively moderates, to some degree, the links from both quality dimensions to perceived value and brand premium. In subgroup regressions for four different liquors, the path coefficients and their significance differ, mainly on the path from perceived intrinsic quality to perceived value: Moutai Yingbin and Feitian Moutai differ clearly from Xijiu and Moutai Chun, suggesting that when liquor quality is especially high or especially low, the effect of perceived intrinsic quality on perceived value is not significant.
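
As a hedged illustration of the moderation tests reported above (simulated data, with a plain moderated regression standing in for the full structural equation model): a significant interaction term is what signals that drinking experience moderates the quality-value path.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300                                  # simulated respondents, not the survey
quality = rng.normal(size=n)             # perceived intrinsic quality
experience = rng.normal(size=n)          # drinking experience (moderator)
# The negative interaction mirrors the reported direction of moderation:
value = (0.5 * quality + 0.2 * experience
         - 0.3 * quality * experience + rng.normal(size=n))

X = sm.add_constant(np.column_stack([quality, experience, quality * experience]))
res = sm.OLS(value, X).fit()
print(res.params[3], res.pvalues[3])     # interaction term: sign and significance
```
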
Contributors: Xiang, Jian (Author) / Li, Hongmin (Thesis advisor) / Shi, Weilei (Thesis advisor) / Dong, Xiaodan (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
The A-share and B-share markets on mainland China's securities exchanges form a segmented market that is unique in the world. For dual-listed companies, A and B shares carry the same rights, yet B shares have long traded at a discount to A shares, a phenomenon known as the "B Share Puzzle" that remains an active question in international capital markets research. This dissertation studies the relationship between the B-share discount and the policies the Chinese government has issued to regulate the long-term development of the stock market. Reviewing the history of the A- and B-share markets, it identifies two such interventions: the February 2001 decision to allow mainland Chinese residents to invest in B shares (Policy 1), and the split-share structure reform of China's securities market that began on April 29, 2005 (Policy 2). Econometric analysis shows that both policies are significantly correlated with the B-share discount rate, and that the interventions were targeted: under each policy, the change in the discount was realized through significant movements in A-share or B-share prices, respectively. The study also finds that the average B-share discount exhibits volatility clustering, small fluctuations, and mean reversion, and is therefore predictable.
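
A hedged sketch of the two time-series properties reported for the discount series (simulated data standing in for the actual B-share discount): an AR(1) slope below one indicates mean reversion, and Engle's ARCH LM test on the residuals probes for volatility clustering.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(3)
d = np.empty(500)
d[0] = 0.4
for t in range(1, 500):                       # simulated mean-reverting discount
    d[t] = 0.02 + 0.95 * d[t - 1] + 0.01 * rng.normal()

ar1 = sm.OLS(d[1:], sm.add_constant(d[:-1])).fit()
print(ar1.params[1])                          # slope < 1 => mean reversion

lm_stat, lm_pval, _, _ = het_arch(ar1.resid, nlags=5)
print(lm_pval)                                # small p-value => volatility clustering
```
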
Contributors: Liu, Li (Author) / Li, Hongmin (Thesis advisor) / Zhang, Jie (Thesis advisor) / Chen, Hui (Committee member) / Arizona State University (Publisher)
Created: 2023