This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.


Description
Photolithography is among the key phases in chip manufacturing. It is also among the most expensive, with manufacturing equipment valued in the hundreds of millions of dollars. It is paramount that the process is run efficiently, guaranteeing high resource utilization and low product cycle times. A key element in the operation of a photolithography system is the effective management of the reticles that are responsible for imprinting the circuit path on the wafers. Managing reticles means determining which are appropriate to mount on the very expensive scanners as a function of the product types being released to the system. Given the importance of the problem, several heuristic policies have been developed in industry practice in an attempt to guarantee that the expensive tools are never idle. However, such policies have difficulty reacting to unforeseen events (e.g., unplanned failures, unavailability of reticles). On the other hand, the semiconductor industry's technological advances in sensing at the system and process level should be harnessed to improve on these "expert policies". In this thesis, a system for real-time reticle management is developed that not only retrieves information from the real system but can also embed commonly used policies in order to improve upon them. A new digital twin for the photolithography process is developed that efficiently and accurately predicts system performance, thus enabling forecasts of future behavior as a function of possible decisions. The results demonstrate the validity of the developed model and the feasibility of the overall approach, showing a statistically significant improvement in performance compared to the current policy.
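As a rough illustration of how a digital twin can be used to compare dispatch rules, the sketch below plays a simple "prefer a scanner that already holds the reticle" policy through a toy simulation and reports average scanner utilization. Everything here (the product set, time distributions, and swap penalty) is an invented placeholder for illustration, not the model from the thesis.

```python
import random

# Hypothetical illustration: play a reticle-dispatch policy forward through a
# toy photolithography model and report average scanner utilization.

def simulate(policy, n_lots=200, n_scanners=3, seed=0):
    """Run one replication: each lot needs the reticle for its product type."""
    rng = random.Random(seed)
    mounted = [None] * n_scanners          # reticle currently on each scanner
    busy_until = [0.0] * n_scanners        # time each scanner frees up
    clock, busy_time = 0.0, 0.0
    for _ in range(n_lots):
        product = rng.choice(["A", "B", "C", "D"])
        s = policy(mounted, busy_until, product)
        start = max(clock, busy_until[s])
        setup = 0.0 if mounted[s] == product else 2.0   # reticle swap penalty
        proc = rng.uniform(1.0, 3.0)                    # processing time
        mounted[s] = product
        busy_until[s] = start + setup + proc
        busy_time += proc
        clock += rng.expovariate(1.0)                   # lot inter-arrival time
    makespan = max(busy_until)
    return busy_time / (n_scanners * makespan)          # average utilization

def greedy_match(mounted, busy_until, product):
    """Expert-style rule: prefer a scanner that already holds the reticle."""
    for s, r in enumerate(mounted):
        if r == product:
            return s
    return min(range(len(busy_until)), key=busy_until.__getitem__)

utils = [simulate(greedy_match, seed=k) for k in range(20)]
print(f"mean utilization over 20 replications: {sum(utils)/len(utils):.3f}")
```

In this spirit, a candidate policy can be swapped in for `greedy_match` and scored on the same replications before it ever touches the real system.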
Contributors: Sivasubramanian, Chandrasekhar (Author) / Pedrielli, Giulia (Thesis advisor) / Jevtic, Petar (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2023

Description
With the explosion of autonomous systems under development, complex simulation models are being tested and relied on far more than in the recent past. This uptick in autonomous systems being modeled and then tested magnifies both the advantages and disadvantages of simulation experimentation. An inherent problem in autonomous systems development is that small changes in factor settings can result in large changes in a response's performance. These occurrences look like cliffs in a metamodel's response surface and are referred to as performance mode boundary regions. These regions represent areas of interest in the autonomous system's decision-making process; therefore, performance mode boundary regions are areas of interest for autonomous systems developers.

Traditional augmentation methods aid experimenters seeking different objectives, often by improving a certain design property of the factor space (such as variance) or a design's modeling capabilities. While useful, these augmentation techniques do not target the response-focused areas of interest that need attention in autonomous systems testing. The Boundary Explorer Adaptive Sampling Technique, or BEAST, is a set of design augmentation algorithms. The adaptive sampling algorithm targets performance mode boundaries with additional samples; the gap-filling augmentation algorithm targets sparsely sampled areas of the factor space. BEAST allows sampling to adapt to information obtained from previous iterations of experimentation and to target these regions of interest. Exploiting the advantages of simulation model experimentation, BEAST can provide additional iterations of experimentation, yielding clarity and high fidelity in areas of interest along potentially steep gradient regions. The objective of this thesis is to research and present BEAST and then compare BEAST's algorithms to other design augmentation techniques. Comparisons are made against traditional methods already implemented in SAS Institute's JMP software and against emerging adaptive sampling techniques, such as the Range Adversarial Planning Tool (RAPT). The goal is to gain a deeper understanding of how BEAST works and where it stands in the design augmentation space for practical applications. With this understanding of how BEAST operates and how well it performs, future research recommendations are presented to improve BEAST's capabilities.
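To make the boundary-targeting idea concrete, here is a minimal, generic sketch of an adaptive augmentation loop: it scores pairs of existing runs by how sharply the response jumps between them and adds midpoints of the steepest pairs. The toy response function and the jump heuristic are invented for illustration; they are not BEAST's actual algorithms.

```python
import numpy as np

# Illustrative sketch only: a generic boundary-seeking augmentation loop in
# the spirit described above, with an invented cliff at x0 = 0.5.

def response(x):
    """Toy simulation output with a performance mode boundary at x0 = 0.5."""
    return 1.0 if x[0] > 0.5 else 0.0

def augment_toward_boundaries(X, y, k=5):
    """Add midpoints of the k point-pairs with the largest response jump
    relative to their distance (steep-gradient candidates)."""
    n = len(X)
    scores = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j]) + 1e-9
            scores.append((abs(y[i] - y[j]) / d, i, j))
    scores.sort(reverse=True)
    return np.array([(X[i] + X[j]) / 2 for _, i, j in scores[:k]])

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(20, 2))          # initial space-filling design
y = np.array([response(x) for x in X])
for _ in range(3):                           # adaptive iterations
    X_new = augment_toward_boundaries(X, y)
    X = np.vstack([X, X_new])
    y = np.concatenate([y, [response(x) for x in X_new]])
print(f"{len(X)} runs; samples near the cliff (|x0-0.5| < 0.05):",
      int(np.sum(np.abs(X[:, 0] - 0.5) < 0.05)))
```

A gap-filling step would do the opposite of `augment_toward_boundaries`, placing new runs where the nearest existing run is farthest away.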
Contributors: Simpson, Ryan James (Author) / Montgomery, Douglas (Thesis advisor) / Karl, Andrew (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Wisnowski, James (Committee member) / Arizona State University (Publisher)
Created: 2024

Description
Nonregular designs are a preferable alternative to regular resolution IV designs because they avoid confounding two-factor interactions. As a result, nonregular designs can estimate and identify a few active two-factor interactions. However, due to the sometimes complex alias structure of nonregular designs, standard screening strategies can fail to identify all active effects. In this research, two-level nonregular screening designs with orthogonal main effects are discussed. By utilizing knowledge of the alias structure, a design-based model selection process for analyzing nonregular designs is proposed.

The Aliased Informed Model Selection (AIMS) strategy is a design-specific approach that is compared to three generic model selection methods: stepwise regression, the least absolute shrinkage and selection operator (LASSO), and the Dantzig selector. The AIMS approach substantially increases the power to detect active main effects and two-factor interactions relative to the aforementioned generic methodologies. This research identifies design-specific model spaces: sets of models that have strong heredity, are all estimable, and exhibit no model confounding. These spaces are then used in the AIMS method, along with design-specific aliasing rules, for model selection decisions. Model spaces and alias rules are identified for three designs: the 16-run no-confounding 6-, 7-, and 8-factor designs. The designs are demonstrated with several examples as well as simulations that show the superiority of AIMS in model selection.
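For intuition, the sketch below shows a generic version of design-based model selection: enumerate a restricted model space in which every two-factor interaction obeys strong heredity, fit each candidate model, and keep the one with the best information criterion. The heredity enumeration, the random stand-in design, and the BIC criterion are illustrative assumptions, not the actual AIMS rules or its published model spaces.

```python
import itertools
import numpy as np

# Hedged sketch of design-based model selection: search a strong-heredity
# model space and pick the best-fitting model by BIC.

def strong_heredity_models(n_factors, max_main=3):
    """Yield models as (mains, interactions); interactions need both parents."""
    factors = range(n_factors)
    for k in range(1, max_main + 1):
        for mains in itertools.combinations(factors, k):
            pairs = list(itertools.combinations(mains, 2))
            for r in range(len(pairs) + 1):
                for ints in itertools.combinations(pairs, r):
                    yield mains, ints

def fit_bic(X, y, mains, ints):
    """Least-squares fit of the chosen terms; return BIC (lower is better)."""
    cols = [np.ones(len(y))]
    cols += [X[:, m] for m in mains]
    cols += [X[:, a] * X[:, b] for a, b in ints]
    M = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    rss = np.sum((y - M @ beta) ** 2)
    n, p = M.shape
    return n * np.log(rss / n + 1e-12) + p * np.log(n)

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(16, 6))      # stand-in for a 16-run design
y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 0] * X[:, 1] + rng.normal(0, 0.3, 16)
best = min(strong_heredity_models(6), key=lambda m: fit_bic(X, y, *m))
print("selected mains:", best[0], "interactions:", best[1])
```

The design-specific part of the real method lies in restricting this enumeration further, to models that the particular 16-run design can estimate without confounding.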

A final piece of the research provides a method for augmenting no-confounding designs based on the model spaces and maximum average D-efficiency. Several augmented designs are provided for different situations. A final simulation with the augmented designs shows strong results for augmenting with four additional runs if time and resources permit.
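The D-efficiency criterion behind that augmentation can be computed directly: for an n-run model matrix X with p columns, D-eff = det(XᵀX)^(1/p) / n. The sketch below ranks candidate four-run augmentations of a stand-in base design by this score for a single assumed model, rather than averaging over a model space as the research does; the base design, candidate set, and model terms are all illustrative placeholders.

```python
import itertools
import numpy as np

# Sketch of scoring augmentations by D-efficiency for one assumed model.

def model_matrix(design):
    """Intercept + main effects + all two-factor interactions."""
    k = design.shape[1]
    cols = [np.ones(len(design))] + [design[:, i] for i in range(k)]
    cols += [design[:, a] * design[:, b]
             for a, b in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

def d_efficiency(design):
    X = model_matrix(design)
    n, p = X.shape
    det = np.linalg.det(X.T @ X)
    return max(det, 0.0) ** (1 / p) / n

rng = np.random.default_rng(2)
base = rng.choice([-1.0, 1.0], size=(16, 4))          # stand-in 16-run design
candidates = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
best = max(itertools.combinations(range(len(candidates)), 4),
           key=lambda idx: d_efficiency(np.vstack([base, candidates[list(idx)]])))
print("best 4-run augmentation:\n", candidates[list(best)])
```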
Contributors: Metcalfe, Carly E (Author) / Montgomery, Douglas C. (Thesis advisor) / Jones, Bradley (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
This thesis is developed in the context of the biomanufacturing of modern products that share the following properties: they require short design-to-manufacturing times, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles has revolutionized the science of vaccines due to the COVID-19 pandemic; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of manufacturing systems is not yet agreed upon. This work focuses on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods enabling personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, the methods presented attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparisons between structures) as well as to learn problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for predicting the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function, so efficient prediction tools can make all the difference for a scientist trying to understand the optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions the tool produces. (ii) BioMAN, a discrete event simulation tool for the study of single-use biomanufacturing of personalized cancer therapies. The discrete event simulation engine is tailored to efficiently schedule the many parallel events caused by the presence of single-use resources; this is the first simulator of this type for individual therapies. (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation for managing computational effort.
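As a loose illustration of the last idea, combining simulation with search while rationing the simulation budget, the sketch below allocates runs of a toy one-step bioreactor model across candidate feed rates using the UCB1 rule. The model, the action set, and the use of UCB1 in place of optimal computing budget allocation are all assumptions of this sketch; it is not Part-MCTS.

```python
import math
import random

ACTIONS = [0.2, 0.5, 0.8]                 # hypothetical feed-rate settings

def simulate_yield(action, rng):
    """Toy one-step bioreactor model: noisy yield peaking near feed rate 0.5."""
    return 1.0 - (action - 0.5) ** 2 + rng.gauss(0, 0.1)

def ucb_allocate(budget=300, c=0.5, seed=0):
    """Spend `budget` simulation runs, steering effort toward promising
    actions with the UCB1 rule (mean reward + exploration bonus)."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ACTIONS}
    totals = {a: 0.0 for a in ACTIONS}
    for a in ACTIONS:                      # one warm-up run per action
        counts[a], totals[a] = 1, simulate_yield(a, rng)
    for t in range(len(ACTIONS), budget):
        a = max(ACTIONS, key=lambda a: totals[a] / counts[a]
                + c * math.sqrt(math.log(t) / counts[a]))
        counts[a] += 1
        totals[a] += simulate_yield(a, rng)
    return max(ACTIONS, key=lambda a: totals[a] / counts[a]), counts

best, counts = ucb_allocate()
print("recommended action:", best, "| runs per action:", counts)
```

A tree search extends this one level of allocation recursively, applying the same budget-steering logic at every decision node of the control horizon.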
Contributors: Liu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2023