Matching Items (70)
Description
The main objective of this research is to develop reliability assessment methodologies to quantify the effect of various environmental factors on photovoltaic (PV) module performance degradation. The manufacturers of these photovoltaic modules typically provide a warranty of about 25 years against 20% power degradation from the initial specified power rating. To quantify the reliability of such PV modules, Accelerated Life Testing (ALT) plays an important role. However, several obstacles need to be tackled before conducting such experiments, since not enough historical field data is available. Even when some time-series performance data on maximum output power (Pmax) is available, it may not be useful for developing failure/degradation mode-specific accelerated tests. This is because, to study a specific failure mode, it is essential to use a failure mode-specific performance variable (such as short-circuit current, open-circuit voltage, or fill factor) that is directly affected by that failure mode, rather than the overall power, which may be affected by one or more of these performance variables. Hence, to address the above-mentioned issues, this research is divided into three phases. The first phase develops models to study climate-specific failure modes using failure mode-specific parameters instead of power degradation. The limited field data collected after a long service time (say 18-21 years) is used to model the degradation rate, and the developed model is then calibrated to account for several unknown environmental effects using the available qualification testing data. The second phase discusses a cumulative damage modeling method to quantify the effects of various environmental variables on the overall power production of the photovoltaic module. This cumulative degradation modeling approach is mainly used to model the power degradation path and to quantify the effects of high-frequency, multi-variable environmental input data (such as temperature and humidity measured every minute or hour) when the response data are very sparse (power measurements taken quarterly or annually). The third phase develops an optimal planning and inference framework using the Iterative-Accelerated Life Testing (I-ALT) methodology. All the proposed methodologies are demonstrated and validated using appropriate case studies.
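As a rough illustration of the degradation-path idea described above, the sketch below fits a constant annual degradation rate to a handful of sparse Pmax field measurements and projects the time to the 20% warranty threshold. The data values and the linear degradation form are hypothetical assumptions for illustration only, not the thesis's calibrated models.

```python
# Illustrative sketch only: fit a constant annual degradation rate to sparse
# Pmax field measurements and extrapolate to the 20% warranty limit.
# All values below are hypothetical.
import numpy as np

years = np.array([0.0, 5.0, 10.0, 18.0, 21.0])          # inspection times (years)
pmax = np.array([250.0, 243.8, 237.1, 227.5, 223.9])    # measured Pmax (W)

# Least-squares fit of Pmax(t) = P0 * (1 - r * t): a constant annual
# degradation rate r relative to the initial rating P0.
p0 = pmax[0]
r, = np.linalg.lstsq((-p0 * years).reshape(-1, 1), pmax - p0, rcond=None)[0]

years_to_threshold = 0.2 / r   # time at which Pmax(t) = 0.8 * P0

print(f"Estimated degradation rate: {100 * r:.2f} %/year")
print(f"Projected years to reach the 20% warranty limit: {years_to_threshold:.1f}")
```

In practice, the research calibrates failure mode-specific models (e.g., on short-circuit current) and accounts for environmental covariates, which a simple linear fit like this cannot capture.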
Contributors: Bala Subramaniyan, Arun (Author) / Pan, Rong (Thesis advisor) / Tamizhmani, Govindasamy (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Kuitche, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
High-dimensional data is omnipresent in modern industrial systems. An imaging sensor in a manufacturing plant can take images of millions of pixels, or a sensor may collect months of data at very granular time steps. Dimensionality reduction techniques are commonly used for dealing with such data. In addition, outliers typically exist in such data, and they may be of direct or indirect interest given the nature of the problem being solved. Current research does not address the interdependent nature of dimensionality reduction and outliers. Some works ignore the existence of outliers altogether, which discredits the robustness of these methods in real life, while others provide suboptimal, often band-aid solutions. In this dissertation, I propose novel methods to achieve outlier-awareness in various dimensionality reduction methods. The problem is considered from many different angles depending on the dimensionality reduction technique used (e.g., deep autoencoder, tensors), the nature of the application (e.g., manufacturing, transportation), and the outlier structure (e.g., sparse point anomalies, novelties).
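As a baseline illustration of why dimensionality reduction and outlier detection interact, the following sketch scores outliers by PCA reconstruction error on synthetic data; it is not one of the dissertation's proposed methods, and all data here is simulated.

```python
# Illustrative sketch: flag outliers by reconstruction error after a linear
# dimensionality reduction (PCA). Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Nominal process: 50-dimensional observations driven by 5 latent factors.
W = rng.normal(size=(5, 50))
def sample(n):
    return rng.normal(size=(n, 5)) @ W + 0.1 * rng.normal(size=(n, 50))

X_train = sample(1000)        # in-control reference data (assumed outlier-free)
X_test = sample(200)
X_test[:5] += 3.0             # inject a few sparse point anomalies

pca = PCA(n_components=5).fit(X_train)
resid = X_test - pca.inverse_transform(pca.transform(X_test))
score = np.linalg.norm(resid, axis=1)      # per-sample reconstruction error

threshold = np.quantile(score, 0.975)      # simple empirical cutoff
print("Flagged test samples:", np.where(score > threshold)[0])
```

Note that the subspace is learned from a clean reference set; when outliers contaminate the data used to fit it, they bias the projection itself, which is exactly the interdependence this dissertation targets.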
Contributors: Sergin, Nurettin Dorukhan (Author) / Yan, Hao (Thesis advisor) / Li, Jing (Committee member) / Wu, Teresa (Committee member) / Tsung, Fugee (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The gas turbine engine for aircraft propulsion is one of the most physically complex and safety-critical systems in the world. Its failure diagnosis is challenging due to the complexity of the system model, the difficulty involved in practical testing, and the infeasibility of creating homogeneous diagnostic performance evaluation criteria for the diverse engine makes.

NASA has designed and publicized a standard benchmark problem for propulsion engine gas path diagnostics that enables comparisons among different engine diagnostic approaches. Traditional model-based approaches, as well as novel, purely data-driven approaches such as machine learning, have been applied to this problem.

This study focuses on a different machine learning approach to the diagnostic problem. Some of the most common machine learning techniques, such as the support vector machine, multi-layer perceptron, and self-organizing map, are used to help gain insight into the different engine failure modes from the perspective of big data. They are organically integrated to achieve good performance based on a sound understanding of the complex dataset.

The study presents a new hierarchical machine learning structure to enhance classification accuracy in NASA's engine diagnostic benchmark problem. The designed hierarchical structure produces an average diagnostic accuracy of 73.6%, outperforming the most recently published comparable studies.
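The following sketch shows the general shape of a two-stage hierarchical classifier (an SVM for fault detection feeding an MLP for fault-mode identification) on synthetic data; it illustrates the hierarchical idea only and is not the specific structure or dataset of this study.

```python
# Illustrative two-stage hierarchical classifier on synthetic data:
# stage 1 detects "faulty vs. healthy", stage 2 identifies the failure mode.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
# Treat class 0 as "healthy" and classes 1-3 as distinct failure modes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: SVM decides healthy vs. faulty.
det = SVC(kernel="rbf").fit(X_tr, (y_tr > 0).astype(int))

# Stage 2: MLP identifies the failure mode, trained only on faulty samples.
faulty = y_tr > 0
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr[faulty], y_tr[faulty])

# Hierarchical prediction: healthy if stage 1 says so, otherwise ask stage 2.
pred = np.where(det.predict(X_te) == 0, 0, clf.predict(X_te))
print("Hierarchical accuracy:", (pred == y_te).mean())
```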
Contributors: Wu, Qiyu (Author) / Si, Jennie (Thesis advisor) / Wu, Teresa (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The primary purpose of this paper is to evaluate the energy impacts of faults in building heating, ventilation, and air conditioning systems and to determine which systems' faults have the greatest effect on energy consumption. With the knowledge obtained through the results described in this paper, building engineers and technicians will be better able to implement a data-driven solution to building fault detection and diagnostics.

In the United States alone, commercial buildings consume 18% of the country's energy. Because of this large share of energy consumption, many efforts are being made to make buildings more energy efficient. Heating, ventilation, and air conditioning (HVAC) systems are designed to provide acceptable air quality and thermal comfort to building occupants. In large buildings, a demand-controlled HVAC system is used to save energy by dynamically adjusting the ventilation of the building. These systems rely on a multitude of sensors, actuators, dampers, and valves to keep the building ventilation efficient. Using a fault analysis framework developed by the University of Alabama and the National Renewable Energy Laboratory, building fault modes were simulated in the EnergyPlus whole-building energy simulation program. The model and framework are based on the Department of Energy's Commercial Prototype Building – Medium Office variant. A total of 3,002 simulations were performed in the Atlanta climate zone, covering 129 fault cases and 41 fault types. These simulations serve two purposes: to validate the previously developed fault simulation framework, and to analyze how each fault mode affects the building over the simulation period.

The results demonstrate the effects of faults on HVAC systems and validate the scalability of the framework. The most critical fault cases for the Medium Office building are those that affect the building's water systems, as they cause the most harm to overall energy costs and occupant comfort.
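A post-processing step of the kind described above can be sketched as follows: aggregate the simulated annual energy use by fault case and rank faults by their deviation from the fault-free baseline. The file and column names are hypothetical placeholders, not the actual outputs of the framework.

```python
# Illustrative ranking of fault impact from collected simulation results.
# File and column names are hypothetical.
import pandas as pd

runs = pd.read_csv("medium_office_fault_runs.csv")   # one row per simulation
baseline = runs.loc[runs["fault_case"] == "baseline", "annual_energy_kwh"].mean()

impact = (
    runs[runs["fault_case"] != "baseline"]
    .groupby("fault_case")["annual_energy_kwh"]
    .mean()
    .sub(baseline)                      # extra energy attributable to each fault
    .sort_values(ascending=False)
)
print(impact.head(10))                  # ten most energy-critical fault cases
```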

Contributors: Ajjampur, Vivek (Author) / Wu, Teresa (Thesis director) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
Rapid advancements in Artificial Intelligence (AI), Machine Learning, and Deep Learning technologies are widening the playing field for automated decision assistants in healthcare. The field of radiology offers a unique platform for this technology due to its repetitive work structure, its ability to leverage large data sets, and its high potential for clinical and social impact. Several technologies in cancer screening, such as Computer Aided Detection (CAD), have broken through the barrier between research and reality with successful outcomes on patient data (Morton, Whaley, Brandt, & Amrami, 2006; Patel et al., 2018). Technologies such as the IBM Medical Sieve are generating excitement about the potential for increased impact through the addition of medical record information ("Medical Sieve Radiology Grand Challenge", 2018). As the capabilities of automation increase and become part of expert decision-making jobs, however, careful consideration of its integration into human systems is often overlooked. This paper aims to identify how healthcare professionals and system engineers implementing and interacting with automated decision-making aids in radiology should take bureaucratic, legal, professional, and political accountability concerns into consideration. This Accountability Framework is modeled after Romzek and Dubnick's (1987) public administration framework and expanded through an analysis of literature on accountability definitions and examples in the military, healthcare, and research sectors. A cohesive understanding of this framework and the human concerns it raises helps drive the questions that, if fully addressed, create the potential for a successful integration and adoption of AI in radiology and, ultimately, the care environment.
Contributors: Gilmore, Emily Anne (Author) / Chiou, Erin (Thesis director) / Wu, Teresa (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This research studies the computational performance of four different mixed integer programming (MIP) formulations for single machine scheduling problems of varying complexity. These formulations are based on (1) start and completion time variables, (2) time index variables, (3) linear ordering variables, and (4) assignment and positional date variables. The objective functions studied in this paper are total weighted completion time, maximum lateness, number of tardy jobs, and total weighted tardiness. Based on the computational results, discussion and recommendations are made on which MIP formulation might work best for these problems. The performance of these formulations depends strongly on the objective function, the number of jobs, and the sum of the processing times of all the jobs. Two sets of inequalities are presented that can be used to improve the performance of the formulation with assignment and positional date variables. Further, this research is extended to single machine bicriteria scheduling problems in which jobs belong to one of two disjoint sets, each set having its own performance measure. These problems have been referred to as interfering job sets in the scheduling literature and have also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first single machine interfering problem (P1), the criteria of minimizing total completion time and the number of tardy jobs for the two sets of jobs are studied. A Forward SPT-EDD heuristic is presented that attempts to generate a set of non-dominated solutions. This specific problem is NP-hard. The computational efficiency of the heuristic is compared against the pseudo-polynomial algorithm proposed by Ng et al. [2006]. In the second single machine interfering job sets problem (P2), the criteria of minimizing total weighted completion time and maximum lateness are studied. This is an established NP-hard problem for which a Forward WSPT-EDD heuristic is presented that attempts to generate a set of supported points, and the solution quality is compared with the MIP formulations. For both of these problems, all jobs are available at time zero and the jobs are not allowed to be preempted.
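For concreteness, the sketch below writes down one member of the second formulation family (time index variables) for minimizing total weighted completion time on a three-job instance, using PuLP with its bundled CBC solver. The data is made up and the model omits the strengthening inequalities discussed in this research.

```python
# Illustrative time-indexed MIP for single-machine total weighted completion
# time: x[j, t] = 1 if job j starts at time t. Toy data only.
import pulp

p = {1: 3, 2: 2, 3: 4}          # processing times
w = {1: 2, 2: 1, 3: 3}          # weights
T = sum(p.values())             # planning horizon

prob = pulp.LpProblem("single_machine_TWC", pulp.LpMinimize)
x = {(j, t): pulp.LpVariable(f"x_{j}_{t}", cat="Binary")
     for j in p for t in range(T - p[j] + 1)}

# Objective: sum of weighted completion times.
prob += pulp.lpSum(w[j] * (t + p[j]) * x[j, t] for (j, t) in x)

# Each job starts exactly once.
for j in p:
    prob += pulp.lpSum(x[j, t] for t in range(T - p[j] + 1)) == 1

# At most one job occupies the machine in any period t.
for t in range(T):
    prob += pulp.lpSum(x[j, s] for (j, s) in x if s <= t < s + p[j]) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
starts = {j: t for (j, t) in x if x[j, t].value() > 0.5}
print("Optimal start times:", starts)
```

Note that the number of variables in this formulation grows with the sum of the processing times, which is one reason formulation performance depends on that quantity.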
Contributors: Khowala, Ketan (Author) / Fowler, John (Thesis advisor) / Keha, Ahmet (Thesis advisor) / Balasubramanian, Hari J (Committee member) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis is developed in the context of the biomanufacturing of modern products that have the following properties: they require a short design-to-manufacturing time, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles has revolutionized the science of vaccines due to the COVID-19 pandemic; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of manufacturing systems is not yet agreed upon. In this work, the focus is on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods that enable personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, methods are presented that attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparisons between structures) as well as learn the problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for the prediction of the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function, so having efficient prediction tools can make all the difference for a scientist trying to understand the optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions that the tool produces. (ii) BioMAN, a discrete event simulation tool for the study of single-use biomanufacturing of personalized cancer therapies. The discrete event simulation engine was tailored to efficiently schedule the many parallel events caused by the presence of single-use resources. This is the first simulator of this type for individual therapies. (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation to manage the computational effort.
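To give a flavor of the discrete event simulation machinery underlying a tool like BioMAN, the toy sketch below schedules patient-specific batches on a limited pool of single-use bioreactors with an event heap; the resource counts, durations, and arrival times are hypothetical, and this is not BioMAN's engine.

```python
# Toy discrete-event simulation: patient-specific batches request a
# bioreactor, run, and release it. Event heap + resource counter only.
import heapq

N_REACTORS, RUN_TIME = 2, 10.0          # single-use bioreactors, days per batch
arrivals = [0.0, 1.0, 2.5, 3.0, 12.0]   # batch request times (days), hypothetical

events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
heapq.heapify(events)
free, queue, finish = N_REACTORS, [], {}

while events:
    t, kind, batch = heapq.heappop(events)
    if kind == "arrival":
        queue.append(batch)
    else:                                # "done": reactor becomes available
        free += 1
        finish[batch] = t
    while free and queue:                # start waiting batches if possible
        free -= 1
        nxt = queue.pop(0)
        heapq.heappush(events, (t + RUN_TIME, "done", nxt))

print("Batch completion times:", finish)
```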
Contributors: Liu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Background: Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM.

Methods: We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set.

Results: We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients).

Conclusion: Multi-parametric MRI and texture analysis can help characterize and visualize GBM’s spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.
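The modeling step in the Methods can be illustrated, in spirit, by a cross-validated classifier over texture features; the sketch below uses synthetic stand-in features and labels rather than the study's MRI data or its specific texture and ML algorithms.

```python
# Illustrative cross-validated classification of high- vs. low-tumor-content
# samples from texture features. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_biopsies, n_features = 60, 30
X = rng.normal(size=(n_biopsies, n_features))       # texture features per biopsy
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_biopsies) > 0).astype(int)
# y = 1 ~ ">=80% tumor nuclei", y = 0 ~ "<80% tumor nuclei"

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```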

Contributors: Hu, Leland S. (Author) / Ning, Shuluo (Author) / Eschbacher, Jennifer M. (Author) / Gaw, Nathan (Author) / Dueck, Amylou C. (Author) / Smith, Kris A. (Author) / Nakaji, Peter (Author) / Plasencia, Jonathan (Author) / Ranjbar, Sara (Author) / Price, Stephen J. (Author) / Tran, Nhan (Author) / Loftus, Joseph (Author) / Jenkins, Robert (Author) / O'Neill, Brian P. (Author) / Elmquist, William (Author) / Baxter, Leslie C. (Author) / Gao, Fei (Author) / Frakes, David (Author) / Karis, John P. (Author) / Zwart, Christine (Author) / Swanson, Kristin R. (Author) / Sarkaria, Jann (Author) / Wu, Teresa (Author) / Mitchell, J. Ross (Author) / Li, Jing (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-11-24
Description

Asymmetry of bilateral mammographic tissue density and patterns is a potentially strong indicator of having or developing breast abnormalities or early cancers. The purpose of this study is to design and test global asymmetry features from bilateral mammograms to predict the near-term risk of women developing detectable high-risk breast lesions or cancer in the next sequential screening mammography examination. The image dataset includes mammograms acquired from 90 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including image preprocessing, suspicious region segmentation, image feature extraction, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.754 ± 0.024 when applying the new computer-aided diagnosis (CAD) scheme to our testing dataset. The positive predictive value and the negative predictive value were 0.58 and 0.80, respectively.
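For reference, the reported summary statistics (AUC, positive and negative predictive value) can be computed from a scheme's risk scores as in the sketch below; the labels and scores here are synthetic placeholders, not the study's data or its CAD scheme.

```python
# Illustrative computation of AUC, PPV, and NPV from risk scores.
# Labels and scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=90)     # 1 = developed a high-risk lesion/cancer
scores = np.clip(0.35 * y_true + rng.normal(0.3, 0.2, size=90), 0, 1)  # risk scores
y_pred = (scores >= 0.5).astype(int)     # binary call at a fixed threshold

auc = roc_auc_score(y_true, scores)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUC = {auc:.3f}, PPV = {ppv:.2f}, NPV = {npv:.2f}")
```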

Contributors: Sun, Wenqing (Author) / Zheng, Bin (Author) / Lure, Fleming (Author) / Wu, Teresa (Author) / Zhang, Jianying (Author) / Wang, Benjamin Y. (Author) / Saltzstein, Edward C. (Author) / Qian, Wei (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-07-01
Description
Alzheimer's disease (AD) and Alzheimer's Related Dementias (ADRD) are projected to affect 50 million people globally in the coming decades. Clinical research suggests that Mild Cognitive Impairment (MCI), a precursor to dementia, offers a critical window for lifestyle interventions to delay or prevent the progression of AD/ADRD. Previous research indicates that lifestyle changes, including increased physical exercise, reduced caloric intake, and mentally stimulating exercises, can reduce the risk of MCI. Early detection of MCI is challenging because cognitive decline is subtle and often unnoticed, and it has traditionally been monitored through infrequent clinical tests. As part of this research, the Smart Driving System was proposed: a novel, unobtrusive, and economical technology to detect early stages of neurodegenerative diseases. This system, leveraging a multi-modal biosensing array (MMS) and AI algorithms, assesses daily driving behavior, offering insights into a driver's cognitive function. The ultimate goal is to develop the Smart Driving Device and App, integrate it into vehicles, and validate its effectiveness in detecting MCI through comprehensive pilot studies. The Smart Driving System represents a breakthrough in AD/ADRD management, promising significant improvements in early detection and offering a scalable, cost-effective solution for monitoring cognitive health in real-world settings.
Contributors: Serhan, Peter (Author) / Forzani, Erica (Thesis advisor) / Wu, Teresa (Committee member) / Hihath, Joshua (Committee member) / Arizona State University (Publisher)
Created: 2024