Description
Imaging genetics is an emerging and promising technique that investigates how genetic variations affect brain development, structure, and function. By exploiting disorder-related neuroimaging phenotypes, this class of studies provides a novel direction for revealing and understanding complex genetic mechanisms. Imaging genetics studies are often challenging due to the relatively small number of subjects and the extremely high dimensionality of both the imaging and genomic data. In this dissertation, I carry out research on imaging genetics with a particular focus on two tasks: building predictive models between neuroimaging data and genomic data, and identifying disorder-related genetic risk factors through image-based biomarkers. To this end, I consider a suite of structured sparse methods, which produce interpretable models and are robust to overfitting, for imaging genetics. With carefully designed sparsity-inducing regularizers, different biological priors are incorporated into the learning models. More specifically, in the Allen brain image-gene expression study, I adopt an advanced sparse coding approach for image feature extraction and employ a multi-task learning approach for multi-class annotation. Moreover, I propose a label-structure-based two-stage learning framework, which utilizes the hierarchical structure among labels, for multi-label annotation. In the Alzheimer's Disease Neuroimaging Initiative (ADNI) imaging genetics study, I employ the Lasso together with EDPP (enhanced dual polytope projections) screening rules to rapidly identify Alzheimer's disease risk SNPs. I also adopt the tree-structured group Lasso with MLFre (multi-layer feature reduction) screening rules to incorporate linkage disequilibrium information into the modeling. Moreover, I propose a novel absolute fused Lasso model for ADNI imaging genetics. This method utilizes the spatial structure of SNPs and is robust to the choice of reference alleles in genotype coding. In addition, I propose a two-level structured sparse model that incorporates gene-level networks through a graph penalty into SNP-level model construction. Lastly, I explore a convolutional neural network approach for accurately predicting Alzheimer's disease-related imaging phenotypes. Experimental results on real-world imaging genetics applications demonstrate the efficiency and effectiveness of the proposed structured sparse methods.
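For reference, the Lasso and fused-Lasso objectives underlying the screening-rule work take the following standard forms (a sketch of the textbook models only; the dissertation's absolute fused Lasso modifies the fusion term so the model is invariant to the reference-allele coding of genotypes):

$$
\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1,
\qquad
\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda_1\|\beta\|_1 + \lambda_2\sum_{j=2}^{p}|\beta_j - \beta_{j-1}|,
$$

where the columns of $X$ hold SNP genotypes, $y$ is an imaging phenotype, and the fusion term encourages spatially adjacent SNPs to receive similar coefficients.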
Contributors: Yang, Tao (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Thesis advisor) / He, Jingrui (Committee member) / Li, Baoxin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The power and communication networks are highly interdependent and form part of the critical infrastructure of a country. Similarly, dependencies exist within the networks themselves. Owing to cascading failures, interdependent and intradependent networks are extremely susceptible to widespread vulnerabilities. In recent times the research community has shown significant interest in models that capture these dependencies. However, many such models are simplistic in nature, which limits their applicability to real-world systems. This dissertation presents a Boolean-logic-based model, termed the Implicative Interdependency Model (IIM), to capture the complex dependencies and the cascading failures resulting from an initial failure of one or more entities of either network.
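As a rough illustration (not the dissertation's formal IIM), implicative dependencies can be read as disjunctions of conjunctions: an entity fails once every entity in at least one of its dependency minterms has failed. A minimal fixed-point cascade sketch under that reading:

```python
# Each entity maps to a list of minterms (AND-sets); the entity fails if
# every entity in at least one minterm has failed. Iterate to a fixed point.
def cascade(dependencies, initial_failures):
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for entity, minterms in dependencies.items():
            if entity in failed:
                continue
            if any(all(e in failed for e in m) for m in minterms):
                failed.add(entity)
                changed = True
    return failed

# Hypothetical example: power entity a1 fails if (b1 AND b2) or b3 fail.
deps = {"a1": [{"b1", "b2"}, {"b3"}], "b2": [{"a1"}]}
print(sorted(cascade(deps, {"b1"})))         # ['b1'] -- no cascade
print(sorted(cascade(deps, {"b1", "b2"})))   # ['a1', 'b1', 'b2']
```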

Utilizing the IIM, four pertinent problems encompassing the vulnerability and protection of critical infrastructures are formulated and solved. For protection analysis, the Entity Hardening Problem, the Targeted Entity Hardening Problem, and the Auxiliary Entity Allocation Problem are formulated. Qualitatively, under a resource budget, these problems maximize the number of entities protected from failure following an initial failure of a set of entities. Additionally, the model is used to derive a metric for analyzing the robustness of critical infrastructure systems. All of these problems are NP-complete. Accordingly, integer linear programming solutions (to obtain the optimal solution) and polynomial-time sub-optimal heuristic solutions are proposed. To analyze the efficacy of the heuristic solutions, comparative studies are performed on real-world and test-system data.
Contributors: Banerjee, Joydeep (Author) / Sen, Arunabha (Thesis advisor) / Dasgupta, Partha (Committee member) / Xue, Guoliang (Committee member) / Raravi, Gurulingesh (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Major Depression, clinically called Major Depressive Disorder, is a mood disorder that affects about one eighth of the population in the US and is projected to be the second leading cause of disability in the world by the year 2020. Recent advances in biotechnology have enabled us to collect a great variety of data that could potentially offer a deeper understanding of the disorder as well as advance personalized medicine.

This dissertation focuses on developing methods for three different aspects of predictive analytics related to the disorder: automatic diagnosis, prognosis, and prediction of long-term treatment outcome. The data used for each task have their own characteristics and present unique problems. Automatic diagnosis of melancholic depression is made on the basis of metabolic profiles and microarray gene expression profiles, where missing values and strong empirical correlations between variables are not unusual. To deal with these problems, a method of generating a representative set of features is proposed. Prognosis is made on data collected from rating scales and questionnaires, which consist mainly of categorical and ordinal variables and thus favor decision-tree-based predictive models. Decision tree models are known for the notorious problem of overfitting. A decision tree pruning method is proposed that overcomes the greedy nature of, and reliance on heuristics in, traditional decision tree pruning approaches. The method is further extended to prune Gradient Boosting Decision Trees and is tested on the task of prognosis of treatment outcome. Follow-up studies evaluating the long-term effect of treatments usually measure patients' depressive symptom severity monthly, so the actual time of relapse is only upper-bounded by the observed time of relapse. To resolve this uncertainty in the response, a general loss function in which the hypothesis can take different forms is proposed to predict the risk of relapse in situations where only an interval for the time of relapse can be derived from the observed data.
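As a hedged sketch of the interval-censoring idea (the dissertation's general loss admits different hypothesis forms; the specific penalty below is only illustrative), a prediction $f(x)$ of the relapse time can be penalized only when it falls outside the interval $[l, u]$ implied by the monthly observations:

$$
L\big(f(x), [l, u]\big) = \max\big(0,\ l - f(x)\big) + \max\big(0,\ f(x) - u\big),
$$

so any hypothesis predicting within the censoring interval incurs no loss.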
Contributors: Nie, Zhi (Author) / Ye, Jieping (Thesis advisor) / He, Jingrui (Thesis advisor) / Li, Baoxin (Committee member) / Xue, Guoliang (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Imagine that we have a piece of matter that can change its physical properties, such as its shape, density, conductivity, or color, in a programmable fashion based on either user input or autonomous sensing. This is the vision behind what is commonly known as programmable matter. Envisioned as systems of nano-scale sensor devices, programmable matter consists of systems of simple computational elements, called particles, that can establish and release bonds, compute, and actively move in a self-organized way. In this dissertation, the feasibility of solving fundamental problems relevant to programmable matter is investigated. As a model for such self-organizing particle systems (SOPS), the geometric amoebot model is introduced. In this model, particles have only local information and modest computational power. They achieve locomotion by expanding and contracting, resembling the behavior of an amoeba. Under this model, efficient local-control algorithms for the leader election problem in SOPS are presented. As a central problem for programmable matter, shape formation problems are then studied. The limitations of solving the leader election problem and the shape formation problem on a more general version of the amoebot model are also discussed. The "smart paint" problem is also studied, which aims at having the particles self-organize in order to uniformly coat the surface of an object of arbitrary shape and size, forming multiple coating layers if necessary. A Universal Coating algorithm is presented and shown to be asymptotically worst-case optimal both in terms of work and, with high probability, in terms of time. In particular, the algorithm always terminates within a linear number of rounds with high probability. A linear lower bound on the competitive gap between fully local coating algorithms and coating algorithms that rely on global information is presented, which implies that the proposed algorithm is also optimal in a competitive sense. Simulation results show that the competitive ratio of the proposed algorithm may be better than linear in practice. The developed algorithms utilize only local control, require particles with only constant-size memory, and are asymptotically optimal in terms of the total number of particle movements needed to reach the desired shape configuration.
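The expand-contract locomotion primitive can be sketched as follows (a simplified, hypothetical rendering of the geometric amoebot model: a particle occupies one node of the triangular lattice when contracted, or two adjacent nodes when expanded):

```python
import random

# Axial coordinates on the triangular lattice: each node has 6 neighbors.
DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def neighbors(node):
    q, r = node
    return [(q + dq, r + dr) for dq, dr in DIRS]

class Particle:
    def __init__(self, node):
        self.head = node   # occupied node
        self.tail = node   # equals head while contracted

    def is_expanded(self):
        return self.head != self.tail

def move(particle, occupied):
    """One locomotion step: contract if expanded, else expand into a free neighbor."""
    if particle.is_expanded():
        occupied.discard(particle.tail)     # contract: pull the tail into the head
        particle.tail = particle.head
    else:
        free = [n for n in neighbors(particle.head) if n not in occupied]
        if free:
            new_head = random.choice(free)  # expand: head moves, tail stays put
            occupied.add(new_head)
            particle.head = new_head

# Tiny demo: a single particle wandering over the lattice.
p = Particle((0, 0))
occupied = {p.head}
for _ in range(10):
    move(p, occupied)
```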
Contributors: Derakhshandeh, Zahra (Author) / Richa, Andrea (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Scheideler, Christian (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Large-scale $\ell_1$-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. In many applications, it remains challenging to apply sparse learning models to large-scale problems with massive data samples and high-dimensional features. One popular and promising strategy is to scale up the optimization in parallel. Parallel solvers run multiple cores on a shared-memory system or in a distributed environment to speed up the computation, but their practical usage is limited by the huge dimensionality of the feature space and by synchronization problems.

In this dissertation, I carry out research along this direction with a particular focus on scaling up the optimization of sparse learning for supervised and unsupervised learning problems. For supervised learning, I first propose an asynchronous parallel solver to optimize large-scale sparse learning models in a multithreading environment. Moreover, I propose a distributed framework to conduct the learning process when the dataset is stored across different machines. The proposed model is then extended to the study of genetic risk factors for Alzheimer's Disease (AD) across different research institutions, integrating a group feature selection framework to rank the top risk SNPs for AD. For the unsupervised learning problem, I propose a highly efficient solver, termed Stochastic Coordinate Coding (SCC), that scales up the optimization of dictionary learning and sparse coding problems. A common issue in medical imaging research is that the longitudinal features of patients across different time points are best studied together. To further improve the dictionary learning model, I propose a multi-task dictionary learning method that learns the different tasks simultaneously and utilizes shared and individual dictionaries to encode both consistent and changing imaging features.
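The per-coordinate soft-thresholding update that such parallel and asynchronous solvers distribute across threads can be sketched for the plain Lasso as follows (a minimal sequential reference, not the dissertation's asynchronous solver):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=100):
    """Cyclic coordinate descent for min_w 0.5*||Xw - y||^2 + lam*||w||_1.
    Assumes the columns of X are nonzero."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)    # precomputed ||x_j||^2
    r = y - X @ w                    # residual
    for _ in range(n_iters):
        for j in range(p):
            r += X[:, j] * w[j]      # remove coordinate j from the residual
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]      # add the updated coordinate back
    return w

# Toy usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, :3] @ np.array([1.0, -2.0, 3.0]) + 0.01 * rng.normal(size=100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))
```

An asynchronous solver runs these coordinate updates concurrently on a shared weight vector; a distributed one partitions the coordinates (or samples) across machines.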
Contributors: Li, Qingyang (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Thesis advisor) / He, Jingrui (Committee member) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Social media bot detection has been a signature challenge in recent years in online social networks. Many scholars agree that the bot detection problem has become an "arms race" between malicious actors, who seek to create bots to influence opinion on these networks, and the social media platforms that seek to remove these accounts. Despite this acknowledged issue, bots remain present on social media networks. It has therefore become necessary to monitor different bots over time to identify changes in their activities or domain. Since monitoring individual accounts is not feasible (the bots may get suspended or deleted), bots should be observed in smaller groups, based on their characteristics, as types. Yet most existing research on social media bot detection focuses on labeling bot accounts by only distinguishing them from human accounts, and may ignore differences between individual bot accounts. Considering bot types may be the best solution for researchers and social media companies alike, as it is in both of their best interests to study these types separately. However, up until this point, bot categorization has only been theorized or done manually. Thus, the goal of this research is to automate this process of grouping bots by their respective types. To accomplish this goal, the author experimentally demonstrates that it is possible to use unsupervised machine learning to categorize bots into types based on an existing typology, by creating an aggregated dataset, determining that the accounts within it are bots, and grouping them according to that typology. Having the ability to differentiate between types of bots automatically will allow social media experts to analyze bot activity from a new perspective and on a more granular level. This way, researchers can identify patterns related to a given bot type's behaviors over time and determine whether certain detection methods are more viable for that type.
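A minimal sketch of the unsupervised grouping step (the feature names and cluster count are hypothetical; the dissertation's actual features and typology come from its aggregated dataset):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-account features for accounts already confirmed as bots,
# e.g. posting rate, retweet ratio, URL share, follower/friend ratio.
rng = np.random.default_rng(0)
bot_features = rng.random((500, 4))   # placeholder for real feature vectors

X = StandardScaler().fit_transform(bot_features)
types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(types))             # size of each discovered bot type
```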
Contributors: Davis, Matthew William (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Morstatter, Fred (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Hyperbolic geometry, the geometry of hyperbolic space, has lately caught the eye of certain circles in the machine learning community. Lauded for its ability to encapsulate strong clustering as well as latent hierarchies in complex and social networks, hyperbolic geometry has proven itself an enduring presence in the network science community throughout the 2010s, with no signs of fading into obscurity anytime soon. Hyperbolic embeddings, which map a given graph into hyperbolic space, have proven to be a particularly powerful and dynamic tool for studying complex networks. Hyperbolic embeddings are exploited in this thesis to illustrate centrality in a graph. In network science, centrality quantifies the influence of individual nodes in a graph. Eigenvector centrality is one such measure, and it assigns an influence weight to each node in a graph by solving an eigenvector equation. A procedure is defined to embed a given network in a model of hyperbolic space, known as the Poincaré disk, according to the influence weights computed by three eigenvector centrality measures: the PageRank algorithm, the Hyperlink-Induced Topic Search (HITS) algorithm, and the Pinski-Narin algorithm. The resulting embeddings are shown to accurately and meaningfully reflect each node's influence and proximity to influential nodes.
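A minimal sketch of the idea (a simplified stand-in for the thesis's embedding procedure, with a hypothetical placement rule): compute eigenvector-centrality weights with PageRank and place influential nodes nearer the center of the Poincaré disk.

```python
import math
import networkx as nx

G = nx.karate_club_graph()             # small example graph
rank = nx.pagerank(G)                  # influence weight per node

# Hypothetical placement rule: radius shrinks as influence grows, so the
# most influential nodes sit near the disk's center; angles spread evenly.
max_r = max(rank.values())
embedding = {
    v: ((1.0 - rank[v] / max_r) * 0.99,          # radial coordinate in [0, 1)
        2.0 * math.pi * i / G.number_of_nodes()) # angular coordinate
    for i, v in enumerate(G.nodes())
}
print(embedding[0])                    # (r, theta) for node 0
```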
Contributors: Chang, Alena (Author) / Xue, Guoliang (Thesis advisor) / Yang, Dejun (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Social media has become an important means of user-centered information sharing and communication in a gamut of domains, including news consumption, entertainment, marketing, public relations, and many more. The low cost, easy access, and rapid dissemination of information on social media draw a large audience but also exacerbate the wide propagation of disinformation, including fake news, i.e., news with intentionally false information. Disinformation on social media is growing fast in volume and can have detrimental societal effects. Despite the importance of this problem, our understanding of disinformation in social media is still limited. Recent advances in computational approaches to detecting disinformation and fake news have shown some early promising results. Novel challenges remain abundant due to the complexity, diversity, dynamics, and multi-modality of disinformation, and the costs of fact-checking or annotation.

Social media data opens the door to interdisciplinary research and allows one to collectively study large-scale human behaviors that would otherwise be impossible to observe. For example, user engagements with information such as news articles, including posting about, commenting on, or recommending the news on social media, contain abundant rich information. Since social media data is big, incomplete, noisy, and unstructured, with abundant social relations, relying solely on user engagements can be sensitive to noisy user feedback. To alleviate the problem of limited labeled data, it is important to combine content with this new (but weak) type of information as supervision signals, i.e., weak social supervision, to advance fake news detection.
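A hedged sketch of the weak-social-supervision idea (the feature names and the thresholding heuristic are hypothetical, not the dissertation's actual signals): derive weak labels from engagement signals and use them, alongside content features, to train a detector when clean labels are scarce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
content = rng.random((1000, 50))   # placeholder content features per article
credibility = rng.random(1000)     # hypothetical per-article score aggregated
                                   # from the engaging users' credibility

# Weak labels: articles pushed mainly by low-credibility engagements are
# weakly marked fake (1); high-credibility ones weakly marked real (0).
weak_label = (credibility < 0.3).astype(int)
confident = (credibility < 0.3) | (credibility > 0.7)  # drop ambiguous cases

clf = LogisticRegression(max_iter=1000)
clf.fit(content[confident], weak_label[confident])
```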

The goal of this dissertation is to understand disinformation by proposing and exploiting weak social supervision for learning with little labeled data, and to effectively detect disinformation via innovative research and novel computational methods. In particular, I investigate learning with weak social supervision for understanding disinformation through the following computational tasks: bringing in heterogeneous social context as auxiliary information for effective fake news detection; discovering explanations of fake news from social media for explainable fake news detection; modeling multiple sources of weak social supervision for early fake news detection; and transferring knowledge across domains with adversarial machine learning for cross-domain fake news detection. The findings of the dissertation significantly expand the boundaries of disinformation research and establish a novel paradigm of learning with weak social supervision that has important implications for broad applications in social media.
Contributors: Shu, Kai (Author) / Liu, Huan (Thesis advisor) / Bernard, H. Russell (Committee member) / Maciejewski, Ross (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Cyber-Physical Systems (CPS) are becoming increasingly prevalent around the world. Co-simulation of cyber and physical components has been shown to be an effective way toward the development of time-sensitive and reliable CPS. Correctly combining continuous models with discrete models for co-simulation can often be challenging. In this thesis, the Functional Mock-up Interface (FMI) is used to develop an adapter called DEVS-FMI for the DEVS-Suite simulator. The adapter, implemented using JavaFMI 2.0, allows any Functional Mock-Up Unit (FMU) to be co-simulated with a Discrete Event System Specification (DEVS) model. This approach enables taking advantage of the parallel DEVS formalism to model cyber systems and using Modelica to model physical systems. An FMU serves as the slave simulator, while DEVS-Suite serves as the master simulator. The Four-Variable model is used as a guide to define the requirements for the inputs and outputs of actuator and sensor devices used in cyber and physical systems. The input and output data serve as non-functional abstractions of the sensor and actuator devices. Selected cyber and physical parts of an electric scooter are modeled, simulated, and evaluated using the integrated OpenModelica and DEVS-Suite simulators. Closely related research is briefly examined, and extensions of this work to support implicit state changes in continuous models and distributed co-simulation are noted.
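The master-slave stepping pattern at the heart of such co-simulation can be sketched as follows (a hypothetical wrapper whose methods mirror the FMI 2.0 co-simulation calls such as fmi2SetupExperiment and fmi2DoStep; this is illustrative, not the DEVS-FMI adapter or the JavaFMI API):

```python
# Hypothetical FMU slave with a toy continuous state; a real slave would
# delegate these calls to an actual FMU through an FMI binding.
class FmuSlave:
    def __init__(self):
        self.x = 0.0                       # toy continuous state
        self.u = 0.0                       # input set by the master
    def setup(self, start_time, stop_time):
        self.x, self.u = 0.0, 0.0
    def set_inputs(self, u):
        self.u = u
    def do_step(self, t, h):
        self.x += h * (self.u - self.x)    # toy ODE: dx/dt = u - x
    def get_outputs(self):
        return self.x

def run_master(fmu, stop_time=1.0, h=0.1):
    """Fixed-step master loop: the discrete master drives the continuous FMU."""
    fmu.setup(0.0, stop_time)
    t, y = 0.0, 0.0
    while t < stop_time:
        fmu.set_inputs(1.0 if t < 0.5 else 0.0)  # discrete-event input signal
        fmu.do_step(t, h)                        # advance the slave by h
        y = fmu.get_outputs()                    # sample the continuous output
        t += h
    return y

print(run_master(FmuSlave()))
```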
Contributors: Lin, Xuanli (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Pedrielli, Giulia (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In brain imaging studies, 3D surface-based algorithms may provide advantages over volume-based methods, owing to their sub-voxel accuracy in representing subtle subregional changes and the solid mathematical foundations on which global shape analyses can be performed on complicated topological structures, such as the convoluted cortical surfaces. On the other hand, given the enormous amount of data being generated daily, it is still challenging to develop effective and efficient surface-based methods for analyzing brain shape morphometry. There are two major problems in surface-based shape analysis research: correspondence and similarity. This dissertation covers both topics by proposing novel surface registration and indexing algorithms based on conformal geometry for brain morphometry analysis.

First, I propose a surface fluid registration system, which extends traditional image fluid registration to surfaces. With surface conformal parameterization, the complexity of the proposed registration formulation is greatly reduced compared to prior methods. Inverse consistency is also incorporated to drive a symmetric correspondence between surfaces. After registration, multivariate tensor-based morphometry (mTBM) is computed to measure local shape deformations. The algorithm was applied to study hippocampal atrophy associated with Alzheimer's disease (AD).

Next, I propose a ventricular surface registration algorithm based on hyperbolic Ricci flow, which computes a global conformal parameterization for each ventricular surface without introducing any singularity. Furthermore, in the parameter space, unique hyperbolic geodesic curves are introduced to guide consistent correspondences across subjects, a technique called geodesic curve lifting. A tensor-based morphometry (TBM) statistic is computed from the registration to measure shape changes. This algorithm was applied to study ventricular enlargement in mild cognitive impairment (MCI) converters.

Finally, a new shape index, the hyperbolic Wasserstein distance, is introduced. This algorithm computes the Wasserstein distance between general topological surfaces as a measure of shape similarity. It is based on hyperbolic Ricci flow, the hyperbolic harmonic map, and the optimal mass transportation map, which is extended to hyperbolic space. This method fills a gap in Wasserstein distance research, where prior work dealt only with images or genus-0 closed surfaces. The algorithm was applied in an AD-versus-control cortical shape classification study and achieved a promising accuracy rate.
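For reference, the underlying (standard) Wasserstein distance between measures $\mu$ and $\nu$ on a metric space $(M, d)$, which the thesis extends to hyperbolic surfaces, is

$$
W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{M \times M} d(x, y)^p \, \mathrm{d}\gamma(x, y) \right)^{1/p},
$$

where $\Gamma(\mu, \nu)$ is the set of couplings (transport plans) whose marginals are $\mu$ and $\nu$.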
Contributors: Shi, Jie, Ph.D. (Author) / Wang, Yalin (Thesis advisor) / Caselli, Richard (Committee member) / Li, Baoxin (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2016