This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Each record includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 31 - 40 of 82
Description
This thesis is developed in the context of biomanufacturing of modern products that share the following properties: they require short design-to-manufacturing times, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles has revolutionized the science of vaccines due to the COVID-19 pandemic; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of manufacturing systems is not yet agreed upon. This work focuses on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods for enabling personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, methods are presented that attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparisons between structures) as well as learn problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for the prediction of the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function, so efficient prediction tools can make all the difference for a scientist trying to understand optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions that the tool produces; (ii) BioMAN, a discrete event simulation tool for the study of single-use biomanufacturing of personalized cancer therapies. The discrete event simulation engine was designed and tailored to handle the efficient scheduling of many parallel events, which is caused by the presence of single-use resources. This is the first simulator of this type for individual therapies; (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation for managing the computational effort.
Contributors: Liu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
With the explosion of autonomous systems under development, complex simulation models are being tested and relied on far more than in the recent past. This uptick in autonomous systems being modeled and then tested magnifies both the advantages and disadvantages of simulation experimentation. An inherent problem in autonomous systems development arises when small changes in factor settings result in large changes in a response's performance. These occurrences look like cliffs in a metamodel's response surface and are referred to as performance mode boundary regions. These regions represent areas of interest in the autonomous system's decision-making process and are therefore areas of interest for autonomous systems developers. Traditional augmentation methods aid experimenters seeking different objectives, often by improving a certain design property of the factor space (such as variance) or a design's modeling capabilities. While useful, these augmentation techniques do not target the response-focused areas of interest that need attention in autonomous systems testing. Boundary Explorer Adaptive Sampling Technique, or BEAST, is a set of design augmentation algorithms. The adaptive sampling algorithm targets performance mode boundaries with additional samples; the gap-filling augmentation algorithm targets sparsely sampled areas in the factor space. BEAST allows sampling to adapt to information obtained from previous iterations of experimentation and to target these regions of interest. Exploiting the advantages of simulation model experimentation, BEAST can provide additional iterations of experimentation, giving clarity and high fidelity in areas of interest along potentially steep gradient regions. The objective of this thesis is to research and present BEAST, then compare BEAST's algorithms to other design augmentation techniques. Comparisons are made against traditional methods already implemented in SAS Institute's JMP software and against emerging adaptive sampling techniques, such as the Range Adversarial Planning Tool (RAPT). The goal is to gain a deeper understanding of how BEAST works and where it stands in the design augmentation space for practical applications. With this understanding of how BEAST operates and how well it performs, future research recommendations are presented to improve BEAST's capabilities.
Contributors: Simpson, Ryan James (Author) / Montgomery, Douglas (Thesis advisor) / Karl, Andrew (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Wisnowski, James (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Recent technological advances enable the collection of various complex, heterogeneous, and high-dimensional data in biomedical domains. The increasing availability of high-dimensional biomedical data creates the need for new machine learning models for effective data analysis and knowledge discovery. This dissertation introduces several unsupervised and supervised methods to help understand the data, discover patterns, and improve decision making. All the proposed methods can generalize to other industrial fields.

The first topic of this dissertation focuses on data clustering. Data clustering is often the first step in analyzing a dataset without label information. Clustering high-dimensional data with mixed categorical and numeric attributes remains a challenging yet important task. A clustering algorithm based on tree ensembles, CRAFTER, is proposed to tackle this task in a scalable manner.

The second part of this dissertation aims to develop data representation methods for genome sequencing data, a special type of high-dimensional data in the biomedical domain. The proposed data representation method, Bag-of-Segments, can summarize the key characteristics of the genome sequence into a small number of features with good interpretability.

The third part of this dissertation introduces an end-to-end deep neural network model, GCRNN, for time series classification with emphasis on both the accuracy and the interpretation. GCRNN contains a convolutional network component to extract high-level features, and a recurrent network component to enhance the modeling of the temporal characteristics. A feed-forward fully connected network with the sparse group lasso regularization is used to generate the final classification and provide good interpretability.

The last topic centers around the dimensionality reduction methods for time series data. A good dimensionality reduction method is important for the storage, decision making and pattern visualization for time series data. The CRNN autoencoder is proposed to not only achieve low reconstruction error, but also generate discriminative features. A variational version of this autoencoder has great potential for applications such as anomaly detection and process control.
Contributors: Lin, Sangdi (Author) / Runger, George C. (Thesis advisor) / Kocher, Jean-Pierre A (Committee member) / Pan, Rong (Committee member) / Escobedo, Adolfo R. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Optimal design theory provides a general framework for the construction of experimental designs for categorical responses. For a binary response, where the possible result is one of two outcomes, the logistic regression model is widely used to relate a set of experimental factors with the probability of a positive (or negative) outcome. This research investigates and proposes alternative designs to alleviate the problem of separation in small-sample D-optimal designs for the logistic regression model. Separation causes the non-existence of maximum likelihood parameter estimates and presents a serious problem for model fitting purposes.

First, it is shown that exact, multi-factor D-optimal designs for the logistic regression model can be susceptible to separation. Several logistic regression models are specified, and exact D-optimal designs of fixed sizes are constructed for each model. Sets of simulated response data are generated to estimate the probability of separation in each design. This study proves through simulation that small-sample D-optimal designs are prone to separation and that separation risk is dependent on the specified model. Additionally, it is demonstrated that exact designs of equal size constructed for the same models may have significantly different chances of encountering separation.

The second portion of this research establishes an effective strategy for augmentation, where additional design runs are judiciously added to eliminate separation that has occurred in an initial design. A simulation study is used to demonstrate that augmenting runs in regions of maximum prediction variance (MPV), where the predicted probability of either response category is 50%, most reliably eliminates separation. However, it is also shown that MPV augmentation tends to yield augmented designs with lower D-efficiencies.

The final portion of this research proposes a novel compound optimality criterion, DMP, that is used to construct locally optimal and robust compromise designs. A two-phase coordinate exchange algorithm is implemented to construct exact locally DMP-optimal designs. To address design dependence issues, a maximin strategy is proposed for designating a robust DMP-optimal design. A case study demonstrates that the maximin DMP-optimal design maintains comparable D-efficiencies to a corresponding Bayesian D-optimal design while offering significantly improved separation performance.
Contributors: Park, Anson Robert (Author) / Montgomery, Douglas C. (Thesis advisor) / Mancenido, Michelle V (Thesis advisor) / Escobedo, Adolfo R. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Image-based process monitoring has recently attracted increasing attention due to the advancement of sensing technologies. However, existing process monitoring methods fail to fully utilize the spatial information of images due to their complex characteristics, including high dimensionality and complex spatial structures. Recent advances in unsupervised deep models such as the generative adversarial network (GAN) and the adversarial autoencoder (AAE) have made it possible to learn complex spatial structures automatically. Inspired by this advancement, we propose an AAE-based framework for unsupervised anomaly detection in images. The AAE combines the power of the GAN with the variational autoencoder, serving as a nonlinear dimension reduction technique with regularization from the discriminator. Based on this, we propose a monitoring statistic that efficiently captures changes in the image data. The performance of the proposed AAE-based anomaly detection algorithm is validated through a simulation study and a real case study of rolling defect detection.
Contributors: Yeh, Huai-Ming (Author) / Yan, Hao (Thesis advisor) / Pan, Rong (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
With the development of computer and sensing technology, rich datasets have become available in many fields such as health care, manufacturing, and transportation, to name just a few. Also, data often come from multiple heterogeneous sources or modalities, a common phenomenon in health care systems. While multi-modality data fusion is a promising research area, health care applications pose several special challenges. (1) The integration of biological and statistical models is a big challenge. (2) It is commonplace that data from some modalities are not available for every patient due to cost, accessibility, and other reasons. This results in a special missing data structure in which different modalities may be missing in "blocks", and training a predictive model on such a dataset poses a significant challenge to statistical learning. (3) It is well known that different modality data may contain different aspects of information about the response, which existing studies do not address. My dissertation includes new statistical learning model development to address each of the aforementioned challenges, as well as application case studies using real health care datasets, presented in three chapters (Chapters 2, 3, and 4), respectively. Collectively, this dissertation is expected to provide a new set of statistical learning models, algorithms, and theory for multi-modality heterogeneous data fusion, driven by the unique challenges in this area. Application of these new methods to important medical problems using real-world datasets is expected to provide solutions to these problems, thereby contributing to the application domains.
Contributors: Liu, Xiaonan (Ph.D.) (Author) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Fatyga, Mirek (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The demand for cleaner energy technology is increasing very rapidly. Hence it is important to increase the efficiency and reliability of these emerging clean energy technologies. This thesis focuses on the modeling and reliability of solar micro inverters. In order to make photovoltaics (PV) cost competitive with traditional energy sources, economies of scale have been guiding inverter design in two directions: large, centralized, utility-scale (500 kW) inverters vs. small, modular, module-level (300 W) power electronics (MLPE). MLPE, such as microinverters and DC power optimizers, offer advantages in safety, system operations and maintenance, energy yield, and component lifetime due to their smaller size, lower power handling requirements, and module-level power point tracking and monitoring capability [1]. However, they suffer from two main disadvantages: first, depending on array topology (especially the proximity to the PV module), they can be subjected to more extreme environments (i.e. temperature cycling) during the day, resulting in a negative impact on reliability; second, since solar installations can have tens of thousands to millions of modules (and as many MLPE units), it may be difficult or impossible to track and repair units as they go out of service. Therefore, identifying the weak links in this system is of critical importance for developing more reliable micro inverters.

While an overwhelming majority of time and research has focused on PV module efficiency and reliability, these issues have been largely ignored for the balance of system components. As a relatively nascent industry, the PV power electronics industry does not have the extensive, standardized reliability design and testing procedures that exist in the module industry or other more mature power electronics industries (e.g. automotive). To develop such procedures, the critical components at risk and their impact on system performance have to be studied. This thesis identifies and addresses some of the issues related to the reliability of solar micro inverters.

This thesis presents detailed discussions of the various components of a solar micro inverter and their design. A micro inverter with electrical specifications very similar to a commercial micro inverter is modeled in detail and verified. Components in the various stages of the micro inverter are listed, and their typical failure mechanisms are reviewed. A detailed FMEA is conducted for a typical micro inverter to identify the weak links of the system. Based on the severity (S), occurrence (O), and detection (D) metrics, a risk priority number (RPN) is calculated to rank the critical at-risk components. Degradation of the DC bus capacitor is identified as one failure mechanism, and a degradation model is built to study its effect on system performance. The system is tested for surge immunity using standard ring and combination surge waveforms per the IEEE C62.41 and IEC 61000-4-5 standards. All the simulations presented in this thesis are performed using PLECS simulation software.
Contributors: Manchanahalli Ranganatha, Arkanatha Sastry (Author) / Ayyanar, Raja (Thesis advisor) / Karady, George G. (Committee member) / Qin, Jiangchao (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The electromagnetic fields near power lines that may produce adverse effects on humans are of increasing interest in a variety of situations, thus making it worthwhile to develop general-purpose software that accurately estimates both the electric and magnetic fields. This study deals with simulations of the electric and magnetic fields near high-voltage power lines for triangular, horizontal, and vertical conductor arrangements under both balanced and unbalanced conditions.

For all three conductor arrangements, the shapes of the electric field distribution curves differ, with the vertical arrangement best for minimizing right-of-way requirements, while the shapes of the magnetic field distribution curves are similar. Except for the horizontal arrangement, the maximum electric field magnitudes with shield conductors are larger than those without shield conductors. Among the three arrangements, the maximum field value of the vertical arrangement is the most vulnerable to unbalanced conditions.

For both the electric and magnetic fields, increasing the heights of the phase conductors yields gradually diminishing returns in field reduction. In this work, both the maximum electric field magnitudes and the maximum magnetic field magnitudes produced by 500 kV power lines at a height of 1 m above the ground are within the permissible exposure levels for the general public. Finally, the dynamic trajectories of both fields over time are simulated and interpreted, with each field represented by a vector rotating in a plane and describing an ellipse, where the vector values can be compared to high-speed vector measurements.
Contributors: Xiao, Lei (Author) / Holbert, Keith E. (Thesis advisor) / Karady, George G. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
ABSTRACT

This dissertation introduces a real-time topology monitoring scheme for power systems intended to provide enhanced situational awareness during major system disturbances. The topology monitoring scheme requires accurate real-time topology information to be effective. This scheme is supported by advances in transmission line outage detection based on data-mining phasor measurement unit (PMU) measurements.

A network flow analysis scheme is proposed to track changes in user-defined minimal cut sets within the system. This work introduces a new algorithm to update a previous network flow solution after the loss of a single system branch. The proposed algorithm provides the significantly decreased solution time desired in a real-time environment. This method of topology monitoring can provide system operators with visual indications of potential problems in the system caused by changes in topology.

This work also presents a method of determining all singleton cut sets within a given network topology called the one line remaining (OLR) algorithm. During operation, if a singleton cut set exists, then the system cannot withstand the loss of any one line and still remain connected. The OLR algorithm activates after the loss of a transmission line and determines if any singleton cut sets were created. These cut sets are found using properties of power transfer distribution factors and minimal cut sets.

The topology analysis algorithms proposed in this work are supported by line outage detection using PMU measurements aimed at providing accurate real-time topology information. This process uses a decision tree (DT) based data-mining approach to characterize a lost tie line in simulation. The trained DT is then used to analyze PMU measurements to detect line outages. The trained decision tree was applied to real PMU measurements to detect the loss of a 500 kV line and had no misclassifications.

The work presented has the objective of enhancing situational awareness during significant system disturbances in real time. This dissertation presents all parts of the proposed topology monitoring scheme and justifies and validates the methodology using a real system event.
Contributors: Werho, Trevor Nelson (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald (Committee member) / Hedman, Kory (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Optimal experimental design for generalized linear models is often done using a pseudo-Bayesian approach that integrates the design criterion across a prior distribution on the parameter values. This approach ignores the lack of utility of certain models contained in the prior, and a case is demonstrated where the heavy focus on such hopeless models results in a design with poor performance and with wild swings in coverage probabilities for Wald-type confidence intervals. Design construction using a utility-based approach is shown to result in much more stable coverage probabilities in the area of greatest concern.

The pseudo-Bayesian approach can be applied to the problem of optimal design construction under dependent observations. Often, correlation between observations exists due to restrictions on randomization. Several techniques for optimal design construction are proposed for the case in which the conditional response distribution is a natural exponential family member with a normally distributed block effect. The reviewed pseudo-Bayesian approach is compared to an approach based on substituting the marginal likelihood with the joint likelihood and to an approach based on projections of the score function (often called quasi-likelihood). These approaches are compared for several models with normal, Poisson, and binomial conditional response distributions via the true determinant of the expected Fisher information matrix, where the dispersion of the random blocks is considered a nuisance parameter. A case study using the developed methods is performed.

The joint and quasi-likelihood methods are then extended to address the case when the magnitude of random block dispersion is of concern. Again, a simulation study over several models is performed, followed by a case study when the conditional response distribution is a Poisson distribution.
Contributors: Hassler, Edgar (Author) / Montgomery, Douglas C. (Thesis advisor) / Silvestrini, Rachel T. (Thesis advisor) / Borror, Connie M. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2015