Matching Items (7)
Description
Increased left ventricular (LV) wall thickness is frequently encountered in transthoracic echocardiography (TTE). While accurate and early diagnosis is clinically important, given the differences in available therapeutic options and prognosis, an extensive workup is often required to establish the diagnosis. I propose the first echo-based, automated deep learning model with a fusion architecture to facilitate the evaluation and diagnosis of increased LV wall thickness. Patients with an established diagnosis for increased LV wall thickness (hypertrophic cardiomyopathy (HCM), cardiac amyloidosis (CA), and hypertensive heart disease (HTN)/others) between 1/2015 and 11/2019 at Mayo Clinic Arizona were identified. The cohort was divided into 80%/10%/10% training, validation, and testing sets, respectively. Six baseline TTE views were each used to optimize a pre-trained InceptionResnetV2 model, and the output of each view-dependent model was used to train a meta-learner under a fusion architecture. Model performance was assessed by multiclass area under the receiver operating characteristic curve (AUROC). A total of 586 patients were used for the final analysis (194 HCM, 201 CA, and 191 HTN/others). The mean age was 55.0 years, and 57.8% were male. Among the individual view-dependent models, the apical 4-chamber model had the best performance (AUROC: HCM: 0.94, CA: 0.73, and HTN/other: 0.87). The final fusion model outperformed all the view-dependent models (AUROC: HCM: 0.93, CA: 0.90, and HTN/other: 0.92). I successfully established an automatic end-to-end deep learning framework that accurately differentiates the major etiologies of increased LV wall thickness, including HCM and CA, from the background of HTN/other diagnoses.
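To make the late-fusion architecture described above concrete, here is a minimal PyTorch sketch: one model per TTE view produces class scores, and a small meta-learner combines them into the final prediction. The tiny stand-in CNN, layer sizes, and input shapes are illustrative assumptions; the thesis itself fine-tunes a pre-trained InceptionResnetV2 per view.

```python
import torch
import torch.nn as nn

NUM_VIEWS = 6      # six baseline TTE views, as in the abstract
NUM_CLASSES = 3    # HCM, CA, HTN/others

class ViewModel(nn.Module):
    """Stand-in for one view-dependent backbone (the thesis uses a
    pre-trained InceptionResnetV2 per view; a tiny CNN is used here so
    the sketch runs without pretrained weights)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, NUM_CLASSES)

    def forward(self, x):
        return self.head(self.features(x))  # per-view class logits

class FusionModel(nn.Module):
    """Late fusion: concatenate the per-view outputs and feed them to a
    small meta-learner that produces the final class prediction."""
    def __init__(self):
        super().__init__()
        self.view_models = nn.ModuleList([ViewModel() for _ in range(NUM_VIEWS)])
        self.meta_learner = nn.Sequential(
            nn.Linear(NUM_VIEWS * NUM_CLASSES, 32), nn.ReLU(),
            nn.Linear(32, NUM_CLASSES),
        )

    def forward(self, views):  # views: list of NUM_VIEWS tensors
        per_view = [m(v) for m, v in zip(self.view_models, views)]
        return self.meta_learner(torch.cat(per_view, dim=1))

# Smoke test with random "echo frames" (batch of 4, one 1x64x64 image per view).
model = FusionModel()
fake_views = [torch.randn(4, 1, 64, 64) for _ in range(NUM_VIEWS)]
print(model(fake_views).shape)  # torch.Size([4, 3])
```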
Contributors: Li, James Shuyue (Author) / Patel, Bhavik (Thesis advisor) / Li, Baoxin (Thesis advisor) / Banerjee, Imon (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Image denoising, a fundamental task in computer vision, poses significant challenges due to its inherently inverse and ill-posed nature. Despite advancements in traditional methods and supervised learning approaches, particularly in medical imaging such as Magnetic Resonance Imaging (MRI) scans, the reliance on paired datasets and known noise distributions remains a practical hurdle. Recent progress in noise statistical independence theory and diffusion models has revitalized research interest, offering promising avenues for unsupervised denoising. However, existing methods often yield overly smoothed results or introduce hallucinated structures, limiting their clinical applicability. This thesis tackles the core challenge of progressing towards unsupervised denoising of MRI scans. It aims to retain intricate details without smoothing or introducing artificial structures, thus ensuring the production of high-quality MRI images. The thesis makes a three-fold contribution: First, it presents a detailed analysis of traditional techniques, early machine learning algorithms for denoising, and new statistical-based models, with an extensive evaluation study on self-supervised denoising methods highlighting their limitations. Second, it conducts an evaluation study on an emerging class of diffusion-based denoising methods, accompanied by additional empirical findings and discussions on their effectiveness and limitations, proposing solutions to enhance their utility. Last, it introduces a novel approach: Unsupervised Multi-stage Ensemble Deep Learning with diffusion models for denoising MRI scans (MEDL). Leveraging diffusion models, this approach operates independently of signal or noise priors and incorporates weighted rescaling of multi-stage reconstructions to balance over-smoothing and hallucination tendencies. Evaluation on benchmark datasets demonstrates an average gain of 1 dB in PSNR and 2% in SSIM over existing approaches.
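The weighted rescaling of multi-stage reconstructions described above can be sketched as a weighted pixel-wise ensemble: early stages tend to over-smooth while late stages tend to hallucinate detail, and the weights trade the two off. The function name, uniform default weights, and normalization below are illustrative assumptions, not the exact MEDL weighting scheme.

```python
import numpy as np

def weighted_multistage_ensemble(reconstructions, weights=None):
    """Combine denoised reconstructions from several diffusion stages
    into one image via a weighted pixel-wise average."""
    stack = np.stack(reconstructions)        # (stages, H, W)
    if weights is None:
        weights = np.ones(len(reconstructions))  # uniform by default
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights to sum to 1
    return np.tensordot(w, stack, axes=1)    # weighted mean over stages

# Toy example: three stage outputs of a 64x64 scan, favoring the middle stage.
stages = [np.random.rand(64, 64) for _ in range(3)]
fused = weighted_multistage_ensemble(stages, weights=[0.2, 0.5, 0.3])
print(fused.shape)  # (64, 64)
```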
Contributors: Vora, Sahil (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Zhou, Yuxiang (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Deep learning architectures have been widely explored in computer vision and have depicted commendable performance in a variety of applications. A fundamental challenge in training deep networks is the requirement of large amounts of labeled training data. While gathering large quantities of unlabeled data is cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. Thus, developing algorithms that minimize the human effort in training deep models is of immense practical importance. Active learning algorithms automatically identify salient and exemplar samples from large amounts of unlabeled data and can augment maximal information to supervised learning models, thereby reducing the human annotation effort in training machine learning models. The goal of this dissertation is to fuse ideas from deep learning and active learning and design novel deep active learning algorithms. The proposed learning methodologies explore diverse label spaces to solve different computer vision applications. Three major contributions have emerged from this work: (i) a deep active framework for multi-class image classification, (ii) a deep active model with and without label correlation for multi-label image classification and (iii) a deep active paradigm for regression. Extensive empirical studies on a variety of multi-class, multi-label and regression vision datasets corroborate the potential of the proposed methods for real-world applications. Additional contributions include: (i) a multimodal emotion database consisting of recordings of facial expressions, body gestures, vocal expressions and physiological signals of actors enacting various emotions, (ii) four multimodal deep belief network models and (iii) an in-depth analysis of the effect of transfer of multimodal emotion features between source and target networks on classification accuracy and training time. These related contributions help comprehend the challenges involved in training deep learning models and motivate the main goal of this dissertation.
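The core active learning loop this dissertation builds on, query the unlabeled samples the current model is least certain about, annotate them, and retrain, can be sketched with entropy-based uncertainty sampling. A logistic regression on synthetic data stands in for the deep models; the pool size, seed set, and query budget are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pool-based active learning with uncertainty (entropy) sampling.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, :2].sum(axis=1) > 0).astype(int)   # synthetic binary labels

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

for rnd in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    queried = np.argsort(entropy)[-20:]       # 20 most uncertain samples
    queried_set = set(queried.tolist())
    labeled += [unlabeled[i] for i in queried]      # "annotate" the queries
    unlabeled = [s for j, s in enumerate(unlabeled) if j not in queried_set]
    print(f"round {rnd}: {len(labeled)} labels, acc={model.score(X, y):.3f}")
```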
Contributors: Ranganathan, Hiranmayi (Author) / Sethuraman, Panchanathan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Li, Baoxin (Committee member) / Chakraborty, Shayok (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Alzheimer’s Disease (AD) is a progressive neurodegenerative disease that gradually affects the brain and worsens over time. Reliable and early diagnosis of AD and its prodromal stages (i.e., Mild Cognitive Impairment (MCI)) is essential. Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic AD patients. PET scans provide functional information that is unique and unavailable using other types of imaging. The computational efficacy of FDG-PET data alone for the classification of the various Alzheimer’s diagnostic categories (AD, MCI (LMCI, EMCI), and control) has not been studied. This serves as motivation to correctly classify the various diagnostic categories using FDG-PET data. Deep learning has recently been applied to the analysis of structural and functional brain imaging data. This thesis is an introduction to a deep learning based classification technique using neural networks with dimensionality reduction techniques to classify the different stages of AD based on FDG-PET image analysis.

This thesis develops a classification method to investigate the performance of FDG-PET as an effective biomarker for Alzheimer's clinical group classification. This involves dimensionality reduction using Probabilistic Principal Component Analysis on max-pooled and mean-pooled data, followed by a multilayer feed-forward neural network that performs binary classification. Max-pooled features result in better classification performance than mean-pooled features. Additionally, experiments investigate whether the addition of important clinical and demographic features, such as Functional Activities Questionnaire (FAQ) scores and gene information, helps improve performance. Classification results indicate that the designed classifiers achieve competitive results, which improve further with the addition of these features.
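A minimal sketch of the pipeline just described: max-pool each volume into coarse region features, reduce dimensionality, then classify with a feed-forward network. Scikit-learn's PCA is used here as a stand-in for probabilistic PCA, and the data shapes, pooling size, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
scans = rng.random((200, 32, 32, 32))     # 200 synthetic "PET" volumes
labels = rng.integers(0, 2, size=200)     # binary group labels (e.g. AD vs. control)

def max_pool(volume, k=4):
    """Max-pool a cubic volume over non-overlapping k^3 blocks."""
    d = volume.shape[0] // k
    return volume.reshape(d, k, d, k, d, k).max(axis=(1, 3, 5)).ravel()

features = np.stack([max_pool(v) for v in scans])   # (200, 512)

clf = make_pipeline(
    PCA(n_components=50),                           # dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
clf.fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```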
Contributors: Singh, Shibani (Author) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Liang, Jianming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Interpersonal strain is linked with depressive symptoms in middle-aged adults. Self-compassion is an emerging resilience construct that may be advantageous in navigating relationship strain by helping individuals respond to emotions in a kind and nonjudgmental way. Although theory and empirical evidence suggest that self-compassion is protective against the impact of stress on mental health outcomes, many studies have not investigated how self-compassion operates in the context of relationship strain. In addition, few studies have examined psychological or physiological mechanisms by which self-compassion protects against mental health outcomes, depression in particular. Thus, this study examined (1) the extent to which trait self-compassion buffers the relation between family strain and depressive symptoms, and (2) whether these buffering effects are mediated by hope and inflammatory processes (IL-6) in a sample of 762 middle-aged, community-dwelling adults. Results from structural equation models indicated that family strain was unrelated to depressive symptoms and the relation was not moderated by self-compassion. Hope, but not IL-6, mediated the relation between family strain and depressive symptoms, and the indirect effect was not conditional on levels of self-compassion. Taken together, the findings suggest that family strain may lead individuals to experience less hope and subsequent increases in depressive symptoms, and further, that a self-compassionate attitude does not affect this relation. Implications for future self-compassion interventions are discussed.
Contributors: Mistretta, Erin (Author) / Davis, Mary C. (Thesis advisor) / Karoly, Paul (Committee member) / Infurna, Frank (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Lifespan psychological perspectives have long suggested that the context in which individuals live has the potential to shape the course of development across the adult lifespan. Thus, it is imperative to examine the role of both the objective and subjective neighborhood context in mitigating the consequences of lifetime adversity on mental and physical health. To address the research questions, data were drawn from a sample of 362 individuals in midlife who were assessed on lifetime adversity, multiple outcomes of mental and physical health, and aspects of the objective and subjective neighborhood. Results showed that reporting more lifetime adversity was associated with poorer mental and physical health. Aspects of the objective and subjective neighborhood, such as green spaces, moderated these relationships. The discussion focuses on potential mechanisms underlying why objective and subjective indicators of the neighborhood are protective against lifetime adversity.
Contributors: Staben, Omar E (Author) / Infurna, Frank J. (Thesis advisor) / Luthar, Suniya S. (Committee member) / Grimm, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
There is intense interest in adopting computer-aided diagnosis (CAD) systems, particularly those developed based on deep learning algorithms, for applications in a number of medical specialties. However, the success of these CAD systems relies heavily on large annotated datasets; otherwise, deep learning often results in algorithms that perform poorly and lack generalizability. Therefore, this dissertation seeks to address a critical problem: how to develop efficient and effective deep learning algorithms for medical applications where large annotated datasets are unavailable. In doing so, we have outlined three specific aims: (1) acquiring necessary annotations efficiently from human experts; (2) utilizing existing annotations effectively through advanced architectures; and (3) extracting generic knowledge directly from unannotated images. Our extensive experiments indicate that, with only a small part of the dataset annotated, the developed deep learning methods can match, or even outperform, those that require annotating the entire dataset. The last part of this dissertation presents the importance and application of imaging in healthcare, elaborating on how the developed techniques can impact several key facets of a CAD system for detecting pulmonary embolism. Further research is necessary to determine the feasibility of applying these advanced deep learning technologies in clinical practice, particularly when annotation is limited. Progress in this area has the potential to enable deep learning algorithms to generalize to real clinical data and eventually allow CAD systems to be employed in clinical medicine at the point of care.
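A hedged sketch of the third aim above: learn generic features from unannotated images with a reconstruction pretext task, then fine-tune a classifier head on a small annotated subset. The tiny autoencoder, the sizes, and the pulmonary-embolism-style binary label are illustrative assumptions; the dissertation's actual pretext tasks and CAD architectures differ.

```python
import torch
import torch.nn as nn

# Self-supervised pretraining on plentiful unannotated scans.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
decoder = nn.Sequential(nn.Linear(128, 64 * 64))
unlabeled = torch.rand(256, 1, 64, 64)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(50):
    recon = decoder(encoder(unlabeled))                 # reconstruction pretext task
    loss = nn.functional.mse_loss(recon, unlabeled.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# Fine-tune a classifier head using only a small annotated subset.
labeled_x = torch.rand(32, 1, 64, 64)                   # e.g. 32 annotated cases
labeled_y = torch.randint(0, 2, (32,))                  # binary: PE vs. no PE
head = nn.Linear(128, 2)
opt2 = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)
for _ in range(50):
    loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
    opt2.zero_grad(); loss.backward(); opt2.step()
acc = (head(encoder(labeled_x)).argmax(1) == labeled_y).float().mean()
print("train accuracy:", acc.item())
```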
Contributors: Zhou, Zongwei (Author) / Liang, Jianming (Thesis advisor) / Shortliffe, Edward H (Committee member) / Greenes, Robert A (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2021