![156080-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-09/156080-Thumbnail%20Image.png)
The ability to accurately edit genomes at scale has remained elusive. Novel techniques
have recently been introduced to aid in the writing of DNA sequences. While writing
DNA has become more accessible, it remains expensive, justifying the increased interest in
in silico predictions of cell behavior. To accurately predict the behavior of
cells, it is necessary to extensively model the cell environment, including gene-to-gene
interactions, as completely as possible.
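As a toy illustration of what inferring gene-to-gene interactions means in practice (and not one of the methods developed in this dissertation), a common baseline builds a co-expression network by thresholding pairwise correlations of expression profiles; the threshold value below is arbitrary:

```python
import numpy as np

def correlation_network(expr, threshold=0.8):
    """Infer an undirected gene-gene network by thresholding the absolute
    Pearson correlation between expression profiles.

    expr: (genes x samples) expression matrix.
    Returns a boolean adjacency matrix with no self-edges.
    """
    corr = np.corrcoef(expr)            # pairwise gene-gene correlations
    adj = np.abs(corr) >= threshold     # keep only strong co-expression
    np.fill_diagonal(adj, False)        # drop self-edges
    return adj

# Toy example: genes 0 and 1 are perfectly correlated, gene 2 is not
expr = np.array([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.0, 6.0, 8.0],
                 [4.0, 1.0, 5.0, 2.0]])
adj = correlation_network(expr)
```

Such correlation networks illustrate the underdetermination problem discussed next: with few samples, many spurious edges pass any fixed threshold.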
Significant algorithmic advances have been made in identifying these interactions,
but despite these improvements, current techniques fail to infer some edges and
fail to capture some complexities in the network. Much of this limitation stems from
heavily underdetermined problems, in which tens of thousands of variables must be
inferred from datasets with the power to resolve only a small fraction of them.
Additionally, the failure to correctly resolve gene isoforms using short reads contributes
significantly to noise in gene quantification measures.
This dissertation introduces novel mathematical models, machine learning techniques,
and biological techniques to address the problems described above. Mathematical
models are proposed for the simulation of gene network motifs and of raw reads.
Machine learning techniques are presented for DNA sequence matching and DNA
sequence correction.
Results provide novel insights into the low-level functionality of gene networks. Also
shown is the ability to use normalization techniques to aggregate data for gene network
inference, yielding larger datasets while minimizing increases in inter-experimental
noise. Results also demonstrate that the high error rates experienced by third-generation
sequencing differ significantly from previous error profiles, and that these errors can be modeled, simulated, and rectified. Finally, techniques are provided for amending this DNA error that preserve the benefits of third-generation sequencing.
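As a sketch of what such error modeling involves, the toy simulator below corrupts a template sequence with an indel-heavy error profile of the kind associated with third-generation platforms; the rates are illustrative placeholders, not parameters fitted in this work:

```python
import random

def simulate_read(template, p_sub=0.02, p_ins=0.06, p_del=0.06, seed=None):
    """Corrupt a DNA template with substitutions, insertions, and deletions.
    Long-read (third-generation) error profiles are indel-dominated, unlike
    the substitution-dominated profiles of short-read platforms."""
    rng = random.Random(seed)
    bases = "ACGT"
    read = []
    for base in template:
        r = rng.random()
        if r < p_del:
            continue                              # deletion: base is lost
        if r < p_del + p_ins:
            read.append(rng.choice(bases))        # insertion before the base
            read.append(base)
        elif r < p_del + p_ins + p_sub:
            others = [b for b in bases if b != base]
            read.append(rng.choice(others))       # substitution
        else:
            read.append(base)                     # base copied correctly
    return "".join(read)

noisy = simulate_read("ACGT" * 25, seed=0)        # a corrupted 100 bp read
```

Setting all three rates to zero passes the template through unchanged, which makes the model easy to sanity-check.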
![155457-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-09/155457-Thumbnail%20Image.png)
This thesis develops a classification method to investigate the performance of FDG-PET as an effective biomarker for Alzheimer's clinical group classification. The approach involves dimensionality reduction using Probabilistic Principal Component Analysis on max-pooled and mean-pooled data, followed by a multilayer feed-forward neural network that performs binary classification. Max-pooled features yield better classification performance than mean-pooled features. Additionally, experiments investigate whether adding important demographic features, such as Functional Activities Questionnaire (FAQ) scores and gene information, helps improve performance. Classification results indicate that the designed classifiers achieve competitive results, which improve further with the addition of demographic features.
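As a rough sketch of this pipeline (using ordinary SVD-based PCA as a stand-in for probabilistic PCA, which additionally models isotropic observation noise), the pooling and reduction steps might look like:

```python
import numpy as np

def max_pool2d(img, k):
    """Non-overlapping k x k max pooling; image dims must be divisible by k."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def pca_reduce(X, n_components):
    """Project row-vector samples onto their top principal components."""
    Xc = X - X.mean(axis=0)                       # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # coordinates in PC basis

# Toy pipeline: pool each "scan", flatten, then reduce before classification
rng = np.random.default_rng(0)
scans = rng.random((10, 8, 8))                    # 10 fake 8x8 image slices
feats = np.stack([max_pool2d(s, 2).ravel() for s in scans])
low_dim = pca_reduce(feats, 3)                    # input to a downstream classifier
```

The low-dimensional features would then be concatenated with demographic variables before the neural network stage.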
![155389-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-09/155389-Thumbnail%20Image.png)
In this dissertation, I carry out research along this direction, with a particular focus on scaling up the optimization of sparse learning for supervised and unsupervised learning problems. For supervised learning, I first propose an asynchronous parallel solver to optimize large-scale sparse learning models in a multithreading environment. Moreover, I propose a distributed framework to conduct the learning process when the dataset is stored across different machines. The proposed model is then extended to the study of genetic risk factors for Alzheimer's Disease (AD) across different research institutions, integrating a group feature selection framework to rank the top risk SNPs for AD. For unsupervised learning, I propose a highly efficient solver, termed Stochastic Coordinate Coding (SCC), to scale up the optimization of dictionary learning and sparse coding problems. A common issue in medical imaging research is that the longitudinal features of patients across different time points are beneficial to study together. To further improve the dictionary learning model, I propose a multi-task dictionary learning method that learns the different tasks simultaneously and utilizes shared and individual dictionaries to encode both consistent and changing imaging features.
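To give a flavor of the optimization involved (a serial sketch, not the asynchronous parallel or SCC solvers proposed here), the sparse coding subproblem for a fixed dictionary is a lasso, whose coordinate-wise update is a soft-thresholding step:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(D, y, lam, n_iters=100):
    """Coordinate descent for min_a 0.5*||y - D a||^2 + lam*||a||_1.
    D: (m x k) dictionary with nonzero columns; y: (m,) signal."""
    k = D.shape[1]
    a = np.zeros(k)
    col_sq = (D ** 2).sum(axis=0)               # per-coordinate curvature
    for _ in range(n_iters):
        for j in range(k):
            r = y - D @ a + D[:, j] * a[j]      # residual excluding coordinate j
            a[j] = soft_threshold(D[:, j] @ r, lam) / col_sq[j]
    return a
```

With an orthogonal dictionary the coordinates decouple and the solver reduces to one soft-thresholding pass, which makes it easy to verify.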
![158676-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-08/158676-Thumbnail%20Image.png)
In this study, I aim to achieve multimodal brain image fusion by referring to intrinsic properties of the data, e.g., the geometry of the embedding structures where the commonly used image features reside. Since the image features investigated in this study share an identical embedding space, i.e., they are defined either on a brain surface or on a brain atlas, where a graph structure is easy to define, it is straightforward to consider the mathematically meaningful properties of the shared structures from a geometric perspective.
I first introduce the background of multimodal fusion of brain image data and insights into the geometric properties that potentially link different modalities. Then, several proposed computational frameworks, using either solid and efficient geometric algorithms or current geometric deep learning models, are fully discussed. I show how these frameworks each handle distinct geometric properties, and their applications in real healthcare scenarios, e.g., enhanced detection of fetal brain diseases or abnormal brain development.
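As a minimal example of the kind of shared graph structure involved, the symmetric normalized graph Laplacian (the basic operator behind many spectral and geometric deep learning methods) can be built directly from a vertex adjacency matrix; the tiny triangle graph below is purely illustrative:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2},
    where D is the diagonal degree matrix of adjacency A."""
    d = A.sum(axis=1)                                 # vertex degrees
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Tiny "mesh": 3 mutually connected vertices (one triangle)
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
L = normalized_laplacian(A)
```

On a real brain surface mesh or atlas graph, the eigenvectors of this operator supply the spectral basis that geometric deep learning models filter over.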
![158811-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-09/158811-Thumbnail%20Image.png)
Given a low-resolution image, single image super-resolution aims to reconstruct a high-resolution
image. The problem is ill-posed, since more than one high-resolution
image can correspond to the same low-resolution image. To address this problem, a
number of machine learning-based approaches have been proposed.
In this dissertation, I present my work on single image super-resolution (SISR)
and accelerated magnetic resonance imaging (MRI) (a.k.a. super-resolution on MR
images), followed by an investigation of transfer learning for accelerated MRI reconstruction.
For SISR, a dictionary-based approach and two reconstruction-based
approaches are presented. To be precise, a convex dictionary learning (CDL)
algorithm is proposed by constraining the dictionary atoms to be formed by nonnegative
linear combinations of the training data, which is a natural, desired property.
Also, two reconstruction-based methods are presented, which make use
of (i) joint regularization, where a group-residual-based regularization (GRR) and
a ridge-regression-based regularization (3R) are combined, and (ii) collaborative representation
and non-local self-similarity. After that, two deep learning approaches
are proposed, aiming at reconstructing high-quality images from accelerated MRI
acquisitions. Residual Dense Blocks (RDB) and feedback connections are introduced
in the proposed models. In the last chapter, the feasibility of transfer learning for
accelerated MRI reconstruction is discussed.
![158066-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-09/158066-Thumbnail%20Image.png)
To overcome the above data scarcity and generalization issues, in this dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to solve the incomplete data and missing label problems. I further introduce a deep multi-domain adaptation network to leverage the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also introduce a novel time-sequence dynamically hierarchical network that adaptively simplifies the network to cope with scarce data.
To learn a large number of unseen concepts, lifelong machine learning enjoys many advantages, including abstracting knowledge from prior learning and using that experience to help future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements in a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data. This work automatically expands the network to increase model capability when necessary, then compresses the model to maintain efficiency.
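As a sketch of the consolidation idea (in the style of elastic weight consolidation; the actual deep multi-task weight consolidation objective is more involved), a quadratic penalty anchors weights that were important for earlier tasks while a new task is learned:

```python
import numpy as np

def consolidated_loss(task_loss, theta, theta_old, importance, lam=1.0):
    """New-task loss plus a quadratic penalty pulling each weight back
    toward its old value, scaled by how important that weight was for
    previously learned tasks (e.g. a diagonal Fisher information estimate)."""
    penalty = np.sum(importance * (theta - theta_old) ** 2)
    return task_loss + 0.5 * lam * penalty

theta_old = np.array([1.0, -2.0])           # weights after the old tasks
importance = np.array([10.0, 0.1])          # first weight matters far more
theta = np.array([1.5, 0.0])                # candidate weights for the new task
loss = consolidated_loss(0.3, theta, theta_old, importance)
```

Moving an important weight incurs a large penalty while unimportant weights stay free to adapt, which is what lets knowledge accumulate across tasks.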
In my ongoing work, I propose an alternative method of supervised learning that does not require direct labels. It utilizes various forms of supervision from an image or object as target values for the target tasks, and it turns out to be surprisingly effective. The proposed method requires only few-shot labeled data for training, and can learn the information it needs in a self-supervised manner and generalize to datasets not seen during training.
![129562-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-04/129562-Thumbnail%20Image.png)
The objective of articulating sustainability visions through modeling is to enhance the outcomes and process of visioning in order to successfully move the system toward a desired state. Models emphasize approaches to develop visions that are viable and resilient and are crafted to adhere to sustainability principles. This approach is largely assembled from visioning processes (resulting in descriptions of desirable future states generated from stakeholder values and preferences) and participatory modeling processes (resulting in systems-based representations of future states co-produced by experts and stakeholders). Vision modeling is distinct from normative scenarios and backcasting processes in that the structure and function of the future desirable state is explicitly articulated as a systems model. Crafting, representing and evaluating the future desirable state as a systems model in participatory settings is intended to support compliance with sustainability visioning quality criteria (visionary, sustainable, systemic, coherent, plausible, tangible, relevant, nuanced, motivational and shared) in order to develop rigorous and operationalizable visions. We provide two empirical examples to demonstrate the incorporation of vision modeling in research practice and education settings. In both settings, vision modeling was used to develop, represent, simulate and evaluate future desirable states. This allowed participants to better identify, explore and scrutinize sustainability solutions.
![129574-Thumbnail Image.png](https://d1rbsgppyrdqq4.cloudfront.net/s3fs-public/styles/width_400/public/2021-04/129574-Thumbnail%20Image.png)
It has become common for sustainability science and resilience theory to be considered as complementary approaches. Occasionally the terms have been used interchangeably. Although these two approaches share some working principles and objectives, they also are based on some distinct assumptions about the operation of systems and how we can best guide these systems into the future. Each approach would benefit from some scholars keeping sustainability science and resilience theory separate and focusing on further developing their distinctiveness and other scholars continuing to explore them in combination. Three areas of research in which following different procedures might be beneficial are whether to prioritize outcomes or system dynamics, how best to take advantage of community input, and increasing the use of knowledge of the past as a laboratory for potential innovations.