Matching Items (85)

Description
In this thesis, pixel-based vertical axes within parallel coordinate plots are explored in an attempt to improve how existing tools explain complex multivariate interactions across temporal data. Several promising visualization techniques are combined: visual boosting to allow quicker consumption of large data sets, the bond energy algorithm to find finer patterns and anomalies through contrast, multi-dimensional scaling, flow lines, user-guided clustering, and row-column ordering. User input is applied to precomputed data sets to provide real-time interaction. The general applicability of the techniques is tested against industrial trade, social networking, financial, and sparse data sets of varying dimensionality.
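As a rough illustration of one technique named in the abstract, the bond energy algorithm reorders a matrix so that similar columns end up adjacent, exposing patterns through contrast. The sketch below is not the thesis's implementation; the greedy column-insertion strategy and all names are illustrative only:

```python
import numpy as np

def adjacent_energy(M, cols):
    """Sum of element-wise products of horizontally adjacent columns,
    taken in the given column order."""
    P = M[:, cols]
    return float((P[:, :-1] * P[:, 1:]).sum())

def bea_column_order(M):
    """Greedy bond energy algorithm: insert each column at the position
    that maximizes the energy between adjacent columns."""
    n_cols = M.shape[1]
    order = [0]
    for j in range(1, n_cols):
        best_pos, best_gain = 0, -np.inf
        for pos in range(len(order) + 1):
            gain = adjacent_energy(M, order[:pos] + [j] + order[pos:])
            if gain > best_gain:
                best_gain, best_pos = gain, pos
        order.insert(best_pos, j)
    return order
```

Rows can be grouped the same way by applying the procedure to `M.T`; the full algorithm alternates row and column placement.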
ContributorsHayden, Thomas (Author) / Maciejewski, Ross (Thesis advisor) / Wang, Yalin (Committee member) / Runger, George C. (Committee member) / Mack, Elizabeth (Committee member) / Arizona State University (Publisher)
Created2014
Description
The coordination of group behavior in the social insects is representative of a broader phenomenon in nature: emergent biological complexity. In such systems, it is believed that large-scale patterns result from the interaction of relatively simple subunits. This dissertation involved the study of one such system: the social foraging of the ant Temnothorax rugatulus. Physically tiny with small population sizes, these cavity-dwelling ants provide a good model system to explore the mechanisms and ultimate origins of collective behavior in insect societies. My studies showed that colonies robustly exploit sugar water. Given a choice between feeders unequal in quality, colonies allocate more foragers to the better feeder. If the feeders change in quality, colonies are able to reallocate their foragers to the new location of the better feeder. These qualities of flexibility and allocation could be explained by the nature of positive feedback (tandem run recruitment) that these ants use. By observing foraging colonies with paint-marked ants, I was able to determine the 'rules' that individuals follow: foragers recruit more and give up less when they find a better food source. By altering the nutritional condition of colonies, I found that these rules are flexible and attuned to the colony state. In starved colonies, individual ants are more likely to explore and recruit to food sources than in well-fed colonies. Similar to honeybees, Temnothorax foragers appear to modulate their exploitation and recruitment behavior in response to environmental and social cues. Finally, I explored the influence of ecology (resource distribution) on the foraging success of colonies. Larger colonies showed increased consistency and a greater rate of harvest than smaller colonies, but this advantage was mediated by the distribution of resources. While patchy or rare food sources exaggerated the relative success of large colonies, regular (or easily found) distributions leveled the playing field for smaller colonies. Social foraging in ant societies can best be understood when we view the colony as a single organism and the phenotype - group size, communication, and individual behavior - as integrated components of a homeostatic unit.
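The recruit-more/give-up-less rules described in the abstract can be illustrated with a toy simulation. All rates below are hypothetical, chosen only to show how positive feedback (uniform recruitment by already-employed foragers, quality-dependent acceptance and giving-up) concentrates foragers on the better feeder; this is not a model from the dissertation:

```python
import random

def simulate_foraging(quality, n_ants=60, steps=800, seed=42):
    """Minimal positive-feedback model: recruitment succeeds more often
    and giving-up happens less often at higher-quality feeders
    (all rate constants are hypothetical)."""
    rng = random.Random(seed)
    site = [None] * n_ants            # None = in nest, else feeder index
    for _ in range(steps):
        for i in range(n_ants):
            if site[i] is None:
                # Rare independent discovery, or recruitment by a fed ant.
                if rng.random() < 0.01:
                    site[i] = rng.randrange(len(quality))
                else:
                    recruiters = [s for s in site if s is not None]
                    if recruiters and rng.random() < 0.05:
                        s = rng.choice(recruiters)
                        if rng.random() < quality[s]:   # better food -> more recruiting
                            site[i] = s
            else:
                # Better food -> lower probability of abandoning the feeder.
                if rng.random() < 0.02 * (1 - quality[site[i]]):
                    site[i] = None
    return [site.count(k) for k in range(len(quality))]
```

With a strongly asymmetric pair of feeders (e.g. qualities 0.9 and 0.3), the majority of foragers end up at the better one, mirroring the colony-level allocation described above.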
ContributorsShaffer, Zachary (Author) / Pratt, Stephen C (Thesis advisor) / Hölldobler, Bert (Committee member) / Janssen, Marco (Committee member) / Fewell, Jennifer (Committee member) / Liebig, Juergen (Committee member) / Arizona State University (Publisher)
Created2014
Description
In Latin America, food insecurity still prevails in regions where extreme poverty and political instability are common. Tseltal communities are experiencing changes due to religious conversions and the incursion of external political institutions. These changes have diminished the importance of traditional reciprocal and redistributive institutions that historically have been essential for personal and community survival. This dissertation investigated the impact that variations in governance systems and the presence of reciprocal and redistributive exchanges have on the food security status of communities. Qualitative data collected in four communities through 117 free lists and 117 semi-structured interviews were used to elaborate six scales corresponding to the traditional and civic authority systems and to inter-community and intra-community reciprocity and redistribution. I explore the relationship between the communities' scores on those scales and the food security status of their inhabitants, based on their results in the National Health and Nutrition Survey 2012. Findings from this study suggest that in marginalized communities that many scientists would describe as experiencing market failure, participation in inter-community reciprocity, intra-community reciprocity, and intra-community redistribution is a better predictor of food security than enrollment in food security programs. Additionally, the communities that participated most in these non-market mechanisms have stronger traditional institutions. In contrast, communities that participated more in inter-community redistribution scored higher on the civic authority scale and are enrolled in more food aid programs, but are less food secure.
ContributorsDe La Torre Pacheco, Sindy Yaneth (Author) / Janssen, Marco (Thesis advisor) / Eakin, Hallie (Committee member) / BurnSilver, Shauna (Committee member) / Arizona State University (Publisher)
Created2015
Description
Sparse learning is a powerful tool for generating models of high-dimensional data with high interpretability, and it has many important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be powerful for improving the performance of sparse learning models. A graph is a fundamental way to represent structural information about features. This dissertation focuses on graph-based sparse learning. The first part aims to integrate a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered. Three models are proposed, along with efficient solvers, to achieve simultaneous feature grouping and selection, enhancing estimation accuracy. One major challenge is that large-scale graph-based sparse learning problems remain computationally demanding. An efficient, scalable, and parallel algorithm for one widely used graph-based sparse learning approach, anisotropic total variation regularization, is therefore proposed by explicitly exploiting the structure of the graph. The second part of this dissertation focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered: the joint estimation of multiple graphical models using a fused lasso penalty, and the estimation of hierarchical graphical models. The key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented that reduces the size of the optimization problem, dramatically reducing the computational cost.
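One simple way to see how a graph penalty encourages feature grouping is to add a quadratic (Laplacian) smoother over the graph's edges to a lasso objective and solve with ISTA. The sketch below is only in that spirit; it is not one of the dissertation's three models, nor its anisotropic total variation solver:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def graph_sparse_learn(X, y, edges, lam1=0.1, lam2=1.0, n_iter=500):
    """ISTA for  0.5*||Xw - y||^2 + lam1*||w||_1
                 + 0.5*lam2 * sum_{(i,j) in E} (w_i - w_j)^2.
    The quadratic graph term (a Laplacian penalty) pushes connected
    features toward similar weights, i.e. feature grouping."""
    d = X.shape[1]
    L = np.zeros((d, d))              # graph Laplacian from the edge list
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    # Step size from a Lipschitz bound on the smooth part's gradient.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam2 * np.linalg.norm(L, 2) + 1e-12)
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + lam2 * (L @ w)
        w = soft_threshold(w - step * grad, step * lam1)
    return w
```

On synthetic data where two connected features jointly generate the response, the solver drives their weights together while zeroing the irrelevant feature.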
ContributorsYang, Sen (Author) / Ye, Jieping (Thesis advisor) / Wonka, Peter (Thesis advisor) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created2014
Description
Tessellation and Screen-Space Ambient Occlusion are algorithms that have been widely used in real-time rendering in the past decade. They aim to enhance the details of the mesh, cast better shadow effects, and improve the quality of rendered images in real time. WebGL is a web-based graphics library derived from OpenGL ES, used for rendering in web applications. It is relatively new and has been rapidly evolving; as a result, it supports only a subset of the rendering features normally supported by desktop applications. This thesis focuses on evaluating Curved PN-Triangles tessellation with Screen-Space Ambient Occlusion (SSAO), Horizon-Based Ambient Occlusion (HBAO), and Horizon-Based Ambient Occlusion Plus (HBAO+) in a WebGL-based real-time application, comparing its performance to a desktop-based application, and discussing the capabilities, limitations, and bottlenecks of WebGL 1.0.
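A representative SSAO building block, independent of the API used, is the hemisphere sample kernel that gets uploaded to the fragment shader. Below is a minimal Python sketch of kernel generation; the 0.1/0.9 scaling constants are a conventional heuristic, not values taken from the thesis:

```python
import math, random

def ssao_kernel(n_samples=64, seed=1):
    """Generate a tangent-space hemisphere sampling kernel for SSAO
    (z >= 0), with sample lengths scaled so that points cluster near
    the origin -- a common heuristic so nearby geometry contributes
    more occlusion."""
    rng = random.Random(seed)
    kernel = []
    for i in range(n_samples):
        # Random direction in the z >= 0 hemisphere.
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        z = rng.uniform(0.0, 1.0)
        norm = math.sqrt(x * x + y * y + z * z) or 1.0
        x, y, z = x / norm, y / norm, z / norm
        # Accelerating interpolation: more samples close to the fragment.
        scale = i / n_samples
        scale = 0.1 + 0.9 * scale * scale
        kernel.append((x * scale, y * scale, z * scale))
    return kernel
```

In a WebGL 1.0 renderer, such a kernel would typically be passed as a uniform array to the SSAO fragment shader, since compute shaders and several newer pipeline stages are unavailable.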
ContributorsLi, Chenyang (Author) / Amresh, Ashish (Thesis advisor) / Wang, Yalin (Thesis advisor) / Kobayashi, Yoshihiro (Committee member) / Arizona State University (Publisher)
Created2017
Description
While techniques for reading DNA in some capacity have been possible for decades, the ability to accurately edit genomes at scale has remained elusive. Novel techniques have recently been introduced to aid in the writing of DNA sequences. Although writing DNA is now more accessible, it remains expensive, justifying the increased interest in in silico predictions of cell behavior. Accurately predicting the behavior of cells requires modeling the cell environment, including gene-to-gene interactions, as completely as possible.

Significant algorithmic advances have been made for identifying these interactions, but despite these improvements current techniques fail to infer some edges and fail to capture some complexities in the network. Much of this limitation stems from heavily underdetermined problems, in which tens of thousands of variables must be inferred from datasets with the power to resolve only a small fraction of them. Additionally, failure to correctly resolve gene isoforms using short reads contributes significantly to noise in gene quantification measures.

This dissertation introduces novel mathematical models, machine learning techniques, and biological techniques to solve the problems described above. Mathematical models are proposed for the simulation of gene network motifs and for raw read simulation. Machine learning techniques are shown for DNA sequence matching and DNA sequence correction.

The results provide novel insights into the low-level functionality of gene networks. Also shown is the ability to use normalization techniques to aggregate data for gene network inference, yielding larger data sets while minimizing increases in inter-experimental noise. The results also demonstrate that the high error rates experienced by third-generation sequencing differ significantly from previous error profiles, and that these errors can be modeled, simulated, and rectified. Finally, techniques are provided for amending these DNA errors while preserving the benefits of third-generation sequencing.
ContributorsFaucon, Philippe Christophe (Author) / Liu, Huan (Thesis advisor) / Wang, Xiao (Committee member) / Crook, Sharon M (Committee member) / Wang, Yalin (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created2017
Description
The Internet and climate change are two forces that are poised to both cause and enable changes in how we provide our energy infrastructure. The Internet has catalyzed enormous changes across many sectors by shifting the feedback and organizational structure of systems towards more decentralized users. Today’s energy systems require colossal shifts toward a more sustainable future. However, energy systems face enormous socio-technical lock-in and, thus far, have been largely unaffected by these destabilizing forces. More distributed information offers not only the ability to craft new markets, but to accelerate learning processes that respond to emerging user or prosumer centered design needs. This may include values and needs such as local reliability, transparency and accountability, integration into the built environment, and reduction of local pollution challenges.

The same institutions (rules, norms, and strategies) that dominated the hierarchical infrastructure system of the twentieth century are unlikely to be a good fit if a more distributed infrastructure grows in dominance. As information is produced at more distributed points, it becomes more difficult to coordinate and manage as an interconnected system. This research examines several aspects of these historically dominant infrastructure provisioning strategies to understand the implications of managing more distributed information. The first chapter experimentally examines information search and sharing strategies under different information protection rules. The second and third chapters focus on strategies to model and compare the effects of distributed energy production on shared electricity grid infrastructure. Finally, the fourth chapter dives into the literature on co-production and explores connections between concepts in co-production and modularity (an engineering approach to information encapsulation), using the distributed energy resource regulations for San Diego, CA. Each of these sections highlights different ways in which information rules offer a design space for a more adaptive, innovative, and sustainable energy system that can more easily react to the shocks of the twenty-first century.
ContributorsTyson, Madeline (Author) / Janssen, Marco (Thesis advisor) / Tuttle, John (Committee member) / Allenby, Braden (Committee member) / Potts, Jason (Committee member) / Arizona State University (Publisher)
Created2018
Description
Alzheimer’s Disease (AD) is a progressive neurodegenerative disease that gradually worsens over time. Reliable and early diagnosis of AD and its prodromal stages (i.e., Mild Cognitive Impairment (MCI)) is essential. Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic AD patients. PET scans provide functional information that is unique and unavailable from other types of imaging. The computational efficacy of FDG-PET data alone for classifying the various Alzheimer’s diagnostic categories (AD, MCI (LMCI, EMCI), Control) has not been studied, which motivates the effort to correctly classify these diagnostic categories using FDG-PET data. Deep learning has recently been applied to the analysis of structural and functional brain imaging data. This thesis introduces a deep-learning-based classification technique that uses neural networks with dimensionality reduction to classify the different stages of AD based on FDG-PET image analysis.

This thesis develops a classification method to investigate the performance of FDG-PET as an effective biomarker for Alzheimer's clinical group classification. The pipeline applies dimensionality reduction using Probabilistic Principal Component Analysis on max-pooled and mean-pooled data, followed by a multilayer feed-forward neural network that performs binary classification. Max-pooled features yield better classification performance than mean-pooled features. Additionally, experiments investigate whether adding important demographic features, such as the Functional Activities Questionnaire (FAQ) and gene information, helps improve performance. Classification results indicate that the designed classifiers achieve competitive results, which improve further with the addition of demographic features.
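The pooling and dimensionality-reduction stages of such a pipeline can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the thesis code; the projection below uses the classical PCA eigenvectors, which a maximum-likelihood PPCA fit would also recover:

```python
import numpy as np

def max_pool3d(vol, k=2):
    """Non-overlapping k x k x k max pooling of a 3-D volume
    (dimensions are truncated to multiples of k)."""
    x, y, z = (d - d % k for d in vol.shape)
    v = vol[:x, :y, :z].reshape(x // k, k, y // k, k, z // k, k)
    return v.max(axis=(1, 3, 5))

def pca_reduce(X, n_components):
    """Project samples (rows of X) onto the top principal components
    via the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

In a full pipeline, each pooled and flattened scan would become one row of `X`, and the reduced representation would feed the feed-forward classifier; mean pooling is the same reshape with `.mean` in place of `.max`.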
ContributorsSingh, Shibani (Author) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Liang, Jianming (Committee member) / Arizona State University (Publisher)
Created2017
Description
Large-scale $\ell_1$-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. In many applications, it remains challenging to apply the sparse learning model to large-scale problems that have massive data samples with high-dimensional features. One popular and promising strategy is to scale up the optimization problem in parallel. Parallel solvers run multiple cores on a shared-memory system or in a distributed environment to speed up the computation, but their practical usage is limited by the huge dimension of the feature space and by synchronization problems.

In this dissertation, I carry out research along this direction, with a particular focus on scaling up the optimization of sparse learning for supervised and unsupervised learning problems. For supervised learning, I first propose an asynchronous parallel solver to optimize the large-scale sparse learning model in a multithreading environment. Moreover, I propose a distributed framework to conduct the learning process when the dataset is stored across different machines. The proposed model is then further extended to studies of genetic risk factors for Alzheimer's Disease (AD) across different research institutions, integrating a group feature selection framework to rank the top risk SNPs for AD. For the unsupervised learning problem, I propose a highly efficient solver, termed Stochastic Coordinate Coding (SCC), that scales up the optimization of dictionary learning and sparse coding problems. A common issue in medical imaging research is that the longitudinal features of patients at different time points benefit from being studied together. To further improve the dictionary learning model, I propose a multi-task dictionary learning method that learns the different tasks simultaneously and utilizes shared and individual dictionaries to encode both consistent and changing imaging features.
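The coordinate-wise flavor of optimization behind a solver like SCC can be illustrated with stochastic coordinate descent for the lasso. This is an illustrative analogue only, not the dissertation's SCC algorithm (which operates on dictionary learning and sparse coding), and all names are hypothetical:

```python
import numpy as np

def scd_lasso(X, y, lam=0.1, n_epochs=50, seed=0):
    """Stochastic coordinate descent for 0.5/n*||Xw - y||^2 + lam*||w||_1.
    Each step exactly minimizes the objective in one randomly chosen
    coordinate via a closed-form soft-threshold update; the residual is
    maintained incrementally so each update is O(n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                     # maintained residual y - Xw
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_epochs * d):
        j = rng.integers(d)
        if col_sq[j] == 0:
            continue
        rho = X[:, j] @ r / n + col_sq[j] * w[j]
        w_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        r += X[:, j] * (w[j] - w_new)   # keep residual consistent
        w[j] = w_new
    return w
```

Because each update touches only one coordinate and the shared residual, this style of solver is a natural candidate for the asynchronous, multithreaded execution described above, at the cost of potential synchronization conflicts on the shared state.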
ContributorsLi, Qingyang (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Thesis advisor) / He, Jingrui (Committee member) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created2017
Description
The rapid development of multimodal neuroimaging data acquisition provides opportunities to systematically characterize human brain structure and function. For example, in brain magnetic resonance imaging (MRI), a typical non-invasive imaging technique, different acquisition sequences (modalities) lead to different descriptions of brain functional activities or anatomical biomarkers. Nowadays, in addition to the traditional voxel-level analysis of images, there is a trend toward processing and investigating cross-modality relationships at a higher level of image representation, e.g., surfaces and networks.

In this study, I aim to achieve multimodal brain image fusion by exploiting intrinsic properties of the data, e.g., the geometry of the embedding structures in which the commonly used image features reside. Since the image features investigated in this study share an identical embedding space, i.e., they are defined either on a brain surface or on a brain atlas, where a graph structure is easy to define, it is straightforward to consider the mathematically meaningful properties of the shared structures from a geometric perspective.

I first introduce the background of multimodal fusion of brain image data and insights into the geometric properties that can potentially link different modalities. Then, several proposed computational frameworks, using either solid and efficient geometric algorithms or current geometric deep learning models, are fully discussed. I show how these frameworks handle distinct geometric properties, and describe their applications in real healthcare scenarios, e.g., enhanced detection of fetal brain diseases or abnormal brain development.
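A small example of a geometric operation on such a shared graph structure is smoothing a per-vertex signal (e.g., a cortical measure defined on a triangulated surface) toward its graph neighbours. This sketch is illustrative only, assumes an undirected edge list, and is not one of the dissertation's frameworks:

```python
import numpy as np

def laplacian_smooth(signal, edges, n_nodes, alpha=0.5, n_iter=10):
    """Iterative graph smoothing: each step moves a node's value a
    fraction alpha toward the mean of its neighbours, i.e. repeated
    application of (I - alpha * L_rw) with the random-walk Laplacian."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    deg = A.sum(axis=1)
    safe_deg = np.where(deg > 0, deg, 1.0)   # avoid division by zero
    s = np.asarray(signal, dtype=float).copy()
    for _ in range(n_iter):
        neighbour_mean = A @ s / safe_deg
        # Isolated nodes (degree 0) keep their value unchanged.
        s = np.where(deg > 0, (1 - alpha) * s + alpha * neighbour_mean, s)
    return s
```

On a real surface mesh, the same update with a sparse adjacency matrix serves as a cheap stand-in for heat-kernel smoothing, and its spectral view (filtering by Laplacian eigenvalues) is the starting point for the geometric deep learning models mentioned above.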
ContributorsZhang, Wen (Author) / Wang, Yalin (Thesis advisor) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Braden, B. Blair (Committee member) / Arizona State University (Publisher)
Created2020