Matching Items (44)

171944-Thumbnail Image.png
Description
Over the past few decades, medical imaging has become increasingly important in medicine for disease diagnosis, prognosis, treatment assessment, and health monitoring. As medical imaging has progressed, imaging biomarkers are being rapidly developed for early diagnosis and staging of disease. Detecting and segmenting objects from images are often the first steps in quantitative measurement of these biomarkers. While large objects can often be automatically or semi-automatically delineated, segmenting small objects (blobs) is challenging. The small objects of particular interest in this dissertation are glomeruli in kidney magnetic resonance (MR) images. This problem has its unique challenges. First, glomeruli are extremely small and closely resemble image noise. Second, glomeruli are massive in number (e.g., over one million in a human kidney), and their intensity distribution is heterogeneous. Third, a large portion of glomeruli overlap or touch each other in the images. The goal of this dissertation is to develop computational algorithms to identify glomeruli and discover related imaging biomarkers. The first phase develops a U-Net joined with a Hessian-based Difference of Gaussians (UH-DoG) blob detector; incorporating deep learning alleviates the over-detection issue of Hessian analysis. Next, as an extension of UH-DoG, a small blob detector using Bi-Threshold Constrained Adaptive Scales (BTCAS) is proposed, in which the deep learning output is treated as a prior for the Difference of Gaussians (DoG) to improve its efficiency; adopting BTCAS addresses the under-segmentation issue of deep learning. The second phase develops a denoising convexity-consistent Blob Generative Adversarial Network (BlobGAN), which achieves high denoising performance and selectively denoises the image without affecting the blobs.
These detectors are validated on datasets of 2D fluorescence images, 3D synthetic images, and 3D MR images (18 mice, 3 humans) and are shown to outperform competing detectors. In the last phase, a Fréchet Descriptors Distance based Coreset approach (FDD-Coreset) is proposed for accelerating BlobGAN's training. Experiments show that BlobGAN trained on the FDD-Coreset not only significantly reduces training time but also achieves higher denoising performance, while maintaining blob-identification performance comparable to training on the entire dataset.
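As a minimal, illustrative sketch of the classical DoG step that underlies detectors like UH-DoG (the dissertation's actual methods add Hessian analysis and deep-learning priors on top of it), the band-pass response can be computed from two Gaussian smoothings and thresholded at local maxima; all parameter values below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_blob_detect(image, sigma=2.0, k=1.6, threshold=0.05):
    """Detect small bright blobs via a Difference of Gaussians (DoG) response.

    Subtracting a wider Gaussian smoothing from a narrower one yields a
    band-pass response that peaks at blob centers of roughly matching scale.
    """
    img = image.astype(float)
    g1 = gaussian_filter(img, sigma)
    g2 = gaussian_filter(img, sigma * k)
    dog = g1 - g2  # band-pass response: large at blob centers
    # Keep strict local maxima of the response that exceed the threshold.
    local_max = maximum_filter(dog, size=3) == dog
    peaks = np.argwhere(local_max & (dog > threshold))
    return peaks, dog

# Synthetic 64x64 image with one Gaussian blob on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32) ** 2 + (xx - 20) ** 2) / (2 * 2.0 ** 2))
peaks, _ = dog_blob_detect(img)
```

On this synthetic input the detector recovers the blob center; in practice the scale `sigma` and `threshold` must be tuned to the blob size and noise level of the imaging modality.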
ContributorsXu, Yanzhe (Author) / Wu, Teresa (Thesis advisor) / Iquebal, Ashif (Committee member) / Yan, Hao (Committee member) / Beeman, Scott (Committee member) / Arizona State University (Publisher)
Created2022
171949-Thumbnail Image.png
Description
Global decarbonization requires a large-scale shift to sustainable energy sources. Innovation will be a key enabler of this global energy transition. Although the energy transition and innovation literatures overwhelmingly focus on the Global North, energy innovation is arguably even more important for the Global South, because it can enable these countries to meet growing energy demand and power their development with sustainable resources. This dissertation examines three aspects of energy innovation, focusing on Mexico, to advance the understanding of innovation systems and identify policy levers for accelerating energy innovation in emerging economies. The first project utilizes econometric models to assess patenting drivers for renewable energy (wind and solar) and enabling technologies (energy storage, high-voltage direct current technologies, hydrogen technologies, and fuel cells) across 34 countries, including Mexico. The examination of enabling technologies is a particular contribution, since most research on energy innovation focuses on renewable generation technologies. This research finds that policies have differential effects on renewable versus enabling technologies, with innovation in enabling technologies lagging behind the deployment of renewable energy. Although renewable energy policies have some spillover effects on enabling technologies, this research suggests that targeted policy instruments for enabling technologies may be needed for global decarbonization. The second and third projects apply the innovation systems framework to understand energy innovation in Mexico. The second project analyzes the sectoral innovation system (SIS) for wind and solar technologies, using expert interviews to systematically evaluate SIS structure and functions. It finds that this innovation system is susceptible to changes in its structure, specifically institutional modifications, and faces cultural and social factors that reduce its performance.
Further, it finds that non-governmental organizations and local governments are trying to support the SIS, but their efforts are hampered by low participation from the federal government. The third project studies the technology innovation system (TIS) for green hydrogen, an emerging industrial opportunity for Latin America. By interviewing green hydrogen experts in Mexico, it evaluates this TIS's functionality and identifies 22 initiatives to improve its performance. The most important initiatives for strengthening the green hydrogen TIS are information campaigns, policy and regulation (taxes, subsidies, standards, and industrial policies), pilot or demonstration projects, and professional training. Overall, this dissertation contributes to the nexus of energy transition and innovation studies by advancing the understanding of energy innovation in an emerging economy.
ContributorsAguiar Hernandez, Carlos Gabriel (Author) / Breetz, Hanna (Thesis advisor) / Parker, Nathan (Committee member) / Solis, Dario (Committee member) / Arizona State University (Publisher)
Created2022
171633-Thumbnail Image.png
Description
Additive manufacturing consists of successively fabricating materials layer upon layer to manufacture three-dimensional items. Several key problems, such as poor quality of finished products and excessive operational costs, are yet to be addressed before it becomes widely applicable in industry. Retroactive/offline actions such as post-manufacturing inspections for defect detection in finished products are not only extremely expensive and ineffective but are also incapable of issuing corrective-action signals during the build. In-situ monitoring and optimal control methods, on the other hand, can provide viable alternatives to aid with the online detection of anomalies and control of the process. Nevertheless, the complexity of process assumptions, the unique structure of collected data, and the high-frequency data acquisition rate severely deteriorate the performance of traditional and parametric control and process monitoring approaches. Out of the diverse categories of additive manufacturing, Large-Scale Additive Manufacturing (LSAM) by material extrusion and Laser Powder Bed Fusion (LPBF) suffer the most due to their more advanced technologies and are therefore the subjects of study in this work. In LSAM, the geometry of large parts can impact heat dissipation and lead to large thermal gradients between distant locations on the surface. The surface's temperature profile is captured by an infrared thermal camera and translated into a non-linear regression model that formulates the surface cooling dynamics. The surface temperature prediction methodology is then incorporated into an optimization model with probabilistic constraints for real-time layer-time and material-flow control. On-axis optical high-speed cameras can capture streams of melt pool images of laser-powder interaction in real time during the process.
Model-agnostic deep learning methods offer a great deal of flexibility when facing such unstructured big data and are thus appealing alternatives to their physics-based and regression-based modeling counterparts. A Convolutional Long Short-Term Memory (ConvLSTM) autoencoder configuration is proposed to learn a deep spatio-temporal representation from sequences of melt pool images collected from experimental builds. The unfolded bottleneck tensors are then further mined to construct a high-accuracy, low-false-alarm-rate anomaly detection and monitoring procedure.
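The monitoring idea behind autoencoder-based anomaly detection can be sketched generically: frames the model reconstructs poorly receive high anomaly scores, and a control limit is fitted on in-control data. The sketch below uses synthetic arrays and a simple mean-plus-k-sigma limit as stand-ins; the dissertation's actual procedure mines ConvLSTM bottleneck tensors, which is not reproduced here:

```python
import numpy as np

def anomaly_scores(frames, reconstructions):
    """Per-frame anomaly score: mean squared reconstruction error."""
    err = (frames - reconstructions) ** 2
    return err.reshape(len(frames), -1).mean(axis=1)

def control_limit(train_scores, k=3.0):
    """Simple mean + k*std limit fitted on in-control training scores."""
    return train_scores.mean() + k * train_scores.std()

rng = np.random.default_rng(0)
# In-control frames that the (hypothetical) autoencoder reconstructs well.
normal = rng.normal(0.0, 0.1, size=(100, 8, 8))
recon = normal + rng.normal(0.0, 0.01, size=normal.shape)
train_scores = anomaly_scores(normal, recon)
limit = control_limit(train_scores)

# A frame the model cannot reconstruct (e.g., a melt-pool irregularity)
# scores far above the limit and would trigger an alarm.
anomalous = rng.normal(0.0, 0.1, size=(1, 8, 8)) + 1.0
scores = anomaly_scores(anomalous, np.zeros_like(anomalous))
```

The choice of `k` trades off detection power against false-alarm rate, which is the tension the dissertation's procedure is designed to manage.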
ContributorsFathizadan, Sepehr (Author) / Ju, Feng (Thesis advisor) / Wu, Teresa (Committee member) / Lu, Yan (Committee member) / Iquebal, Ashif (Committee member) / Arizona State University (Publisher)
Created2022
168288-Thumbnail Image.png
Description
Intensified food production on large farms across the world has led to discussions on how to facilitate sustainable policies and practices to reduce nutrient pollution. In Chapter 1, I evaluated the co-variability of agricultural intensification, environmental degradation, and socio-economic indicators throughout the US to explore potential evidence for the existence of sustainable intensification of agriculture in the US. I identified distinct agro-social-eco regions in the US that provide background for future regional studies of sustainable intensification (SI) in the US and beyond. I observed regions of moderate agricultural intensity and lower environmental degradation within the Great Plains, and regions of high agricultural intensity and higher environmental degradation throughout portions of the Midwest. Insights gained from this study can provide roadmaps for improved sustainable agricultural intensification within the US. Chapter 2 summarizes state regulations controlling a key nutrient input: the land application of biosolids from human wastewater treatment and manures from regulated concentrated animal feeding operations (CAFOs). Results indicate high variability of both manure and biosolids regulations among the states and stark differences in the regulation of land application of biosolids versus manures. This work can be used to identify opportunities for strengthening regulatory frameworks so that these resources are managed with minimal risk to the environment. In Chapter 3, I combined aspects of the previous chapters to understand the potential impact of specific CAFO land application regulations on nutrient pollution and to assess whether stricter regulations were related to better environmental outcomes.
I compared total nitrogen (TN) and total phosphorus (TP) accumulated yields in surface waters across US states with state-specific CAFO land application regulations. Policy scenario tests revealed that more restrictions were associated with higher nutrient levels, indicating reactive policymaking and delayed nonpoint-source pollution responses. Overall, I found that fostering adaptive capacity and management within delineated agro-social-eco regions will likely facilitate sustainable food systems in the US.
ContributorsRauh, Eleanor (Author) / Muenich, Rebecca (Thesis advisor) / Compton, Jana (Committee member) / Parker, Nathan (Committee member) / Hamilton, Kerry (Committee member) / Arizona State University (Publisher)
Created2021
161801-Thumbnail Image.png
Description
High-dimensional data is omnipresent in modern industrial systems. An imaging sensor in a manufacturing plant can take images of millions of pixels, or a sensor may collect months of data at very granular time steps. Dimensionality reduction techniques are commonly used for dealing with such data. In addition, such data typically contains outliers, which may be of direct or indirect interest depending on the nature of the problem being solved. Current research does not address the interdependent nature of dimensionality reduction and outliers. Some works ignore the existence of outliers altogether, which undermines the robustness of these methods in real life, while others provide suboptimal, often band-aid solutions. In this dissertation, I propose novel methods to achieve outlier-awareness in various dimensionality reduction methods. The problem is considered from many different angles depending on the dimensionality reduction technique used (e.g., deep autoencoder, tensors), the nature of the application (e.g., manufacturing, transportation), and the outlier structure (e.g., sparse point anomalies, novelties).
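The interplay between dimensionality reduction and outliers can be illustrated with a toy linear case: points far from the learned low-dimensional subspace get large reconstruction errors. This PCA-based sketch is a generic stand-in, not the dissertation's deep autoencoder or tensor methods, and all data below is synthetic:

```python
import numpy as np

def pca_outlier_scores(X, n_components=2):
    """Score outliers by reconstruction error under a linear PCA model.

    Points that deviate from the low-dimensional subspace spanned by the
    top principal directions receive large scores.
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt's rows.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]          # (k, d) projection basis
    X_hat = Xc @ P.T @ P           # project onto the subspace and back
    return np.linalg.norm(Xc - X_hat, axis=1)

rng = np.random.default_rng(1)
# Inliers lie near a 1-D line in 3-D; the last point sits far off that line.
t = rng.normal(size=(200, 1))
inliers = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))
X = np.vstack([inliers, [[0.0, 0.0, 10.0]]])
scores = pca_outlier_scores(X, n_components=1)
```

Note the circularity the dissertation targets: the outlier itself distorts the mean and the fitted subspace, which is why naive reconstruction-error scoring degrades as contamination grows and outlier-aware formulations are needed.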
ContributorsSergin, Nurettin Dorukhan (Author) / Yan, Hao (Thesis advisor) / Li, Jing (Committee member) / Wu, Teresa (Committee member) / Tsung, Fugee (Committee member) / Arizona State University (Publisher)
Created2021
190990-Thumbnail Image.png
Description
This thesis is developed in the context of biomanufacturing of modern products that have the following properties: they require short design-to-manufacturing times, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles, driven by the COVID-19 pandemic, has revolutionized the science of vaccines; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of manufacturing systems is not yet agreed upon. This work focuses on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods for enabling personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, the presented methods attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparison between structures) as well as to learn problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for predicting the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function, so efficient prediction tools can make all the difference for a scientist trying to understand the optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions that the tool produces; (ii) BioMAN, a discrete event simulation tool for studying single-use biomanufacturing of personalized cancer therapies.
The discrete event simulation engine was tailored to efficiently schedule the many parallel events caused by the presence of single-use resources. This is the first simulator of this type for individual therapies; (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation for managing the computational effort.
ContributorsLiu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2023
193841-Thumbnail Image.png
Description
Recent advancements in computer vision models have largely been driven by supervised training on labeled data. However, the process of labeling datasets remains both costly and time-intensive. This dissertation delves into enhancing the performance of deep neural networks when faced with limited or no labeling information. I address this challenge through four primary methodologies: domain adaptation, self-supervision, input regularization, and label regularization. In situations where labeled data is unavailable but a similar dataset exists, domain adaptation emerges as a valuable strategy for transferring knowledge from the labeled dataset to the target dataset. This dissertation introduces three innovative domain adaptation methods that operate at the pixel, feature, and output levels. Another approach to tackling the absence of labels involves a novel self-supervision technique tailored to training Vision Transformers to extract rich features. The third and fourth approaches focus on scenarios where only a limited amount of labeled data is available. In such cases, I present novel regularization techniques designed to mitigate overfitting by modifying the input data and the target labels, respectively.
ContributorsChhabra, Sachin (Author) / Li, Baoxin (Thesis advisor) / Venkateswara, Hemanth (Committee member) / Yang, Yezhou (Committee member) / Wu, Teresa (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created2024
193491-Thumbnail Image.png
Description
With the exponential growth of multi-modal data in the field of computer vision, the ability to perform inference effectively across multiple modalities, such as visual, textual, and auditory data, presents significant opportunities. The rapid development of cross-modal applications such as retrieval and association is primarily attributed to their ability to bridge the gap between different modalities of data. However, current mainstream cross-modal methods rely heavily on the availability of fully annotated paired data, presenting a significant challenge due to the scarcity of precisely matched datasets in real-world scenarios. In response to this bottleneck, several sophisticated deep learning algorithms are designed to substantially improve inference capabilities across a broad spectrum of cross-modal applications. This dissertation introduces novel deep learning algorithms aimed at enhancing inference capabilities in cross-modal applications, addressing four primary aspects. First, it introduces an algorithm for image retrieval by learning hashing codes; this algorithm utilizes the other modality's data only in the form of weakly supervised tags rather than supervised labels. Second, it designs a novel framework for learning joint embeddings of images and texts for cross-modal retrieval tasks; the framework efficiently learns binary codes from the continuous CLIP feature space and can even deliver competitive performance compared with results from non-hashing methods. Third, it develops a method to learn fragment-level embeddings that capture fine-grained cross-modal associations in images and texts, using fragment proposals obtained in an unsupervised manner. Lastly, this dissertation outlines an algorithm to enhance the mask-text association ability of pre-trained semantic segmentation models with zero examples provided. Extensive future plans to further improve this algorithm for semantic segmentation tasks are also discussed.
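The core appeal of hashing for cross-modal retrieval can be shown with a deliberately simple sketch: continuous embeddings (e.g., CLIP-like image and text features) are sign-thresholded into binary codes, and retrieval reduces to cheap Hamming-distance comparisons. This is only the binarization-and-lookup idea, not the dissertation's learned hashing objective, and the embeddings below are random stand-ins:

```python
import numpy as np

def binarize_embeddings(emb):
    """Sign-threshold a continuous embedding into a binary hash code."""
    return (emb > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.sum(a != b))

rng = np.random.default_rng(2)
img_emb = rng.normal(size=64)
# A paired text embedding sits near the image embedding; an unrelated one
# is an independent draw, so far more of its sign bits disagree.
txt_emb_match = img_emb + 0.1 * rng.normal(size=64)
txt_emb_other = rng.normal(size=64)

img_code = binarize_embeddings(img_emb)
d_match = hamming_distance(img_code, binarize_embeddings(txt_emb_match))
d_other = hamming_distance(img_code, binarize_embeddings(txt_emb_other))
```

Because a 64-bit code fits in a machine word, Hamming ranking scales to very large galleries; the learned-hashing methods in the dissertation aim to make these codes preserve cross-modal semantic similarity rather than raw sign agreement.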
ContributorsZhuo, Yaoxin (Author) / Li, Baoxin (Thesis advisor) / Wu, Teresa (Committee member) / Davulcu, Hasan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2024
156528-Thumbnail Image.png
Description
Technology advancements in diagnostic imaging, smart sensing, and health information systems have resulted in a data-rich environment in health care, which offers a great opportunity for Precision Medicine. The objective of my research is to develop data fusion and system informatics approaches for quality and performance improvement of health care. In my dissertation, I focus on three emerging problems in health care and develop novel statistical models and machine learning algorithms to tackle these problems from diagnosis to care to system-level decision-making.

The first topic is diagnosis/subtyping of migraine to customize effective treatment for different subtypes of patients. Existing clinical definitions of subtypes use somewhat arbitrary boundaries primarily based on patient self-reported symptoms, which are subjective and error-prone. My research develops a novel Multimodality Factor Mixture Model that discovers subtypes of migraine from multimodality MRI imaging data, which provides complementary, accurate measurements of the disease. Patients in the different subtypes show significantly different clinical characteristics of the disease. Treatment tailored and optimized for patients of the same subtype paves the way toward Precision Medicine.

The second topic focuses on coordinated patient care. Care coordination between nurses and with other health care team members is important for providing high-quality and efficient care to patients. The recently developed Nurse Care Coordination Instrument (NCCI) is the first of its kind that enables large-scale quantitative data to be collected. My research develops a novel Multi-response Multi-level Model (M3) that enables transfer learning in NCCI data fusion. M3 identifies key factors that contribute to improving care coordination, and facilitates the design and optimization of nurses’ training, workload assignment, and practice environment, which leads to improved patient outcomes.

The last topic is about system-level decision-making for early detection of Alzheimer's disease (AD) at the early stage of Mild Cognitive Impairment (MCI), by predicting each MCI patient's risk of converting to AD using imaging and proteomic biomarkers. My research proposes a systems engineering approach that integrates multiple perspectives, including prediction accuracy, biomarker cost/availability, patient heterogeneity, and diagnostic efficiency, and allows for system-wide optimized decisions regarding the biomarker testing process for predicting MCI conversion.
ContributorsSi, Bing (Author) / Li, Jing (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Schwedt, Todd (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2018
157564-Thumbnail Image.png
Description
Semi-supervised learning (SSL) is a sub-field of statistical machine learning that is useful for problems that involve having only a few labeled instances with predictor (X) and target (Y) information and an abundance of unlabeled instances that have only predictor (X) information. SSL harnesses the target information available in the limited labeled data, as well as the information in the abundant unlabeled data, to build strong predictive models. However, not all the included information is useful. For example, some features may correspond to noise, and including them will hurt the predictive model's performance. Additionally, some instances may not be as relevant to model building, and their inclusion will increase training time and potentially hurt model performance. The objective of this research is to develop novel SSL models that balance data inclusivity and usability. My dissertation research focuses on applications of SSL in healthcare, driven by problems in brain cancer radiomics, migraine imaging, and Parkinson's Disease telemonitoring.

The first topic introduces an integration of machine learning (ML) and a mechanistic model (PI) to develop an SSL model applied to predicting cell density of glioblastoma brain cancer using multi-parametric medical images. The proposed ML-PI hybrid model integrates imaging information from unbiopsied regions of the brain as well as underlying biological knowledge from the mechanistic model to predict spatial tumor density in the brain.

The second topic develops a multi-modality imaging-based diagnostic decision support system (MMI-DDS). MMI-DDS consists of modality-wise principal components analysis to incorporate imaging features at different aggregation levels (e.g., voxel-wise, connectivity-based, etc.), a constrained particle swarm optimization (cPSO) feature selection algorithm, and a clinical utility engine that utilizes inverse operators on chosen principal components for white-box classification models.

The final topic develops a new SSL regression model with integrated feature and instance selection called s2SSL (with “s2” referring to selection in two different ways: feature and instance). s2SSL integrates cPSO feature selection and graph-based instance selection to simultaneously choose the optimal features and instances and build accurate models for continuous prediction. s2SSL was applied to smartphone-based telemonitoring of Parkinson’s Disease patients.
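The graph-based side of instance selection in SSL builds on the idea that labels can propagate along edges connecting similar instances. As a generic illustration of that principle only (s2SSL's actual cPSO-plus-graph formulation is not reproduced here), a minimal label-propagation sketch on a hand-built chain graph:

```python
import numpy as np

def label_propagation(W, y_init, labeled_mask, n_iter=100):
    """Graph-based SSL: iteratively average neighbor labels, clamping
    the known (labeled) nodes after every sweep."""
    D_inv = 1.0 / W.sum(axis=1, keepdims=True)  # inverse node degrees
    y = y_init.astype(float).copy()
    for _ in range(n_iter):
        y = (D_inv * W) @ y                     # propagate along edges
        y[labeled_mask] = y_init[labeled_mask]  # clamp labeled nodes
    return y

# Chain graph 0-1-2-3-4 with only the endpoints labeled (-1 and +1).
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
y_init = np.array([-1.0, 0.0, 0.0, 0.0, 1.0])
labeled = np.array([True, False, False, False, True])
y = label_propagation(W, y_init, labeled)
```

On the chain the propagated labels converge to a linear interpolation between the two labeled endpoints, showing how unlabeled instances inherit soft labels from their graph neighborhood.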
ContributorsGaw, Nathan (Author) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Yan, Hao (Committee member) / Hu, Leland (Committee member) / Arizona State University (Publisher)
Created2019