Matching Items (89)

Description

The rapid growth of social media in recent years has produced a large volume of user-generated visual objects, e.g., images and videos. Advanced semantic understanding approaches for such visual objects are desired to better serve applications such as human-machine interaction and image retrieval. Semantic visual attributes have been proposed and utilized in multiple visual computing tasks to bridge the so-called "semantic gap" between extractable low-level feature representations and high-level semantic understanding of the visual objects.

Despite years of research, some problems in semantic attribute learning remain unsolved. First, real-world applications usually involve hundreds of attributes, which requires great effort to acquire a sufficient amount of labeled data for model learning. Second, existing attribute learning work for visual objects focuses primarily on images, leaving semantic analysis of videos largely unexplored.

In this dissertation I conduct innovative research and propose novel approaches to tackle the aforementioned problems. In particular, I propose robust and accurate learning frameworks for both attribute ranking and prediction by exploring the correlation among multiple attributes and utilizing various types of label information. Furthermore, I propose a video-based skill coaching framework that extends attribute learning to the video domain for robust motion skill analysis. Experiments on various applications and datasets, together with comparisons against multiple state-of-the-art baseline approaches, confirm that the proposed approaches achieve significant performance improvements on the general attribute learning problem.
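
As a rough illustration of the attribute ranking idea above (and not the formulation actually proposed in this dissertation), the following minimal Python sketch learns a linear attribute ranker from pairwise comparisons by classifying feature differences; the synthetic data, dimensions, and the reduction to logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic image features and a hidden attribute strength (e.g., "natural").
X = rng.normal(size=(200, 32))               # 200 images, 32-dim features
true_w = rng.normal(size=32)
strength = X @ true_w                        # latent attribute strength per image

# Pairwise comparisons: label 1 if image i shows the attribute more than image j.
pairs = rng.integers(0, 200, size=(500, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
labels = (strength[pairs[:, 0]] > strength[pairs[:, 1]]).astype(int)

# Reduce ranking to binary classification on feature differences:
# the sign of w . (x_i - x_j) should agree with each comparison.
diffs = X[pairs[:, 0]] - X[pairs[:, 1]]
ranker = LogisticRegression().fit(diffs, labels)

# The learned weight vector scores attribute strength for any image.
scores = X @ ranker.coef_.ravel()
print("ranking accuracy on training pairs:", ranker.score(diffs, labels))
```
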
Contributors: Chen, Lin (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

In many fields, such as information retrieval, computer vision, and biomedical informatics, one needs to build predictive models for a set of related machine learning tasks. Traditionally these tasks are treated independently and the inference is done separately for each task, which ignores important connections among the tasks. Multi-task learning aims at simultaneously building models for all tasks in order to improve generalization performance by leveraging the inherent relatedness of the tasks. In this thesis, I first propose a clustered multi-task learning (CMTL) formulation, which simultaneously learns task models and performs task clustering. I provide theoretical analysis to establish the equivalence between the CMTL formulation and alternating structure optimization, which learns a shared low-dimensional hypothesis space for different tasks. I then present two real-world biomedical informatics applications that can benefit from multi-task learning. In the first application, I study the disease progression problem and present multi-task learning formulations for disease progression. In these formulations, the prediction at each time point is a regression task, and multiple tasks at different time points are learned simultaneously by leveraging the temporal smoothness among the tasks. The proposed formulations have been tested extensively on predicting the progression of Alzheimer's disease, and experimental results demonstrate the effectiveness of the proposed models. In the second application, I present a novel data-driven framework for densifying electronic medical records (EMR) to overcome the sparsity problem in predictive modeling using EMR. The densification of each patient's record is a learning task, and the proposed algorithm densifies all patients' records simultaneously. As such, the densification of one patient's record leverages useful information from other patients' records.
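
To make the temporal-smoothness idea above concrete, here is a minimal Python sketch of multi-task regression in which each time point is a task and adjacent task models are encouraged to stay close. The synthetic data, penalty weights, and plain gradient-descent solver are illustrative assumptions, not the formulations or algorithms proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n patients, d features, T time points (each time point is a task).
n, d, T = 100, 20, 5
X = rng.normal(size=(n, d))
W_true = np.cumsum(rng.normal(scale=0.3, size=(d, T)), axis=1)  # smooth over time
Y = X @ W_true + rng.normal(scale=0.1, size=(n, T))

lam_ridge, lam_smooth, lr = 0.1, 1.0, 1e-3
W = np.zeros((d, T))

# Objective: sum_t ||X w_t - y_t||^2 + lam_ridge * ||W||_F^2
#            + lam_smooth * sum_t ||w_{t+1} - w_t||^2   (temporal smoothness)
for _ in range(2000):
    grad = 2 * X.T @ (X @ W - Y) + 2 * lam_ridge * W
    diff = np.diff(W, axis=1)                      # w_{t+1} - w_t
    grad_smooth = np.zeros_like(W)
    grad_smooth[:, :-1] -= 2 * diff                # gradient w.r.t. w_t
    grad_smooth[:, 1:] += 2 * diff                 # gradient w.r.t. w_{t+1}
    W -= lr * (grad + lam_smooth * grad_smooth)

print("relative error vs. ground truth:",
      np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```
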
Contributors: Zhou, Jiayu (Author) / Ye, Jieping (Thesis advisor) / Mittelmann, Hans (Committee member) / Li, Baoxin (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Many learning models have been proposed for various tasks in visual computing. Popular examples include hidden Markov models and support vector machines. Recently, sparse-representation-based learning methods have attracted a lot of attention in the computer vision field, largely because of their impressive performance in many applications. In the literature, many of these sparse learning methods focus on designing or applying learning techniques in a given feature space, without much explicit consideration of possible interactions between the underlying semantics of the visual data and the employed learning technique. The rich semantic information in most visual data, if properly incorporated into algorithm design, should help achieve improved performance while delivering intuitive interpretations of the algorithmic outcomes. My study addresses the problem of how to explicitly consider the semantic information of the visual data in sparse learning algorithms. In this work, we identify four problems of great importance and broad interest to the community. Specifically, a novel approach is proposed to incorporate label information to learn a dictionary that is not only reconstructive but also discriminative; considering the formation process of face images, a novel image decomposition approach for an ensemble of correlated images is proposed, where a subspace is built from the decomposition and applied to face recognition; based on the observation that foreground (or salient) objects are sparse in the spatial domain while the background is sparse in the frequency domain, a novel and efficient spatio-temporal saliency detection algorithm is proposed to identify the salient regions in video; and a novel hidden Markov model learning approach is proposed that utilizes a sparse set of pairwise comparisons among the data, which are easier to obtain and more meaningful and consistent than traditional labels in many scenarios, e.g., evaluating motion skills in surgical simulations. In these four problems, different types of semantic information are modeled and incorporated into the design of sparse learning algorithms for the corresponding visual computing tasks. Several real-world applications are selected to demonstrate the effectiveness of the proposed methods, including face recognition, spatio-temporal saliency detection, abnormality detection, spatio-temporal interest point detection, motion analysis, and emotion recognition. In these applications, data of different modalities are involved, ranging from audio signals and images to video. Experiments on large-scale real-world data, with comparisons to state-of-the-art methods, confirm that the proposed approaches deliver clear advantages, showing that incorporating such semantic information dramatically improves the performance of general sparse learning methods.
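
The following minimal Python sketch is loosely in the spirit of the saliency observation above (background sparse in the frequency domain): it treats the few dominant Fourier coefficients of a frame as background and keeps the residual as a saliency map. It is a single-frame toy, not the spatio-temporal algorithm proposed in the dissertation; the threshold and test image are illustrative assumptions.

```python
import numpy as np

def frequency_saliency(img, keep_ratio=0.02):
    """Rough single-image saliency sketch: treat the largest-magnitude
    Fourier coefficients as the (sparse) background and keep what is left."""
    F = np.fft.fft2(img)
    mag = np.abs(F)
    # The few dominant coefficients are assumed to encode the repetitive background.
    thresh = np.quantile(mag, 1.0 - keep_ratio)
    F_bg = np.where(mag >= thresh, F, 0.0)
    background = np.real(np.fft.ifft2(F_bg))
    residual = np.abs(img - background)          # what the background misses
    return residual / (residual.max() + 1e-12)

# Toy frame: periodic texture (background) plus a small bright blob (foreground).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
frame = 0.5 + 0.3 * np.sin(2 * np.pi * xx / 8) + 0.05 * rng.normal(size=(128, 128))
frame[60:70, 60:70] += 1.0                       # salient object

sal = frequency_saliency(frame)
print("mean saliency inside object:", sal[60:70, 60:70].mean())
print("mean saliency elsewhere:   ", sal.mean())
```
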
Contributors: Zhang, Qiang (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

In this thesis, the application of pixel-based vertical axes within parallel coordinate plots is explored in an attempt to improve how existing tools can explain complex multivariate interactions across temporal data. Several promising visualization techniques are combined, such as visual boosting to allow quicker consumption of large data sets, the bond energy algorithm to find finer patterns and anomalies through contrast, multi-dimensional scaling, flow lines, user-guided clustering, and row-column ordering. User input is applied to precomputed data sets to provide real-time interaction. The general applicability of the techniques is tested against industrial trade, social networking, financial, and sparse data sets of varying dimensionality.
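
For readers unfamiliar with the base technique, here is a minimal Python sketch of a standard parallel coordinates plot, in which each record becomes a polyline across vertical axes. The toy data are assumed for illustration; the pixel-based axes, visual boosting, and other enhancements studied in the thesis are not reproduced here.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Toy multivariate data: three clusters across four dimensions.
rng = np.random.default_rng(0)
frames = []
for label, center in zip("ABC", ([0, 0, 0, 0], [2, 1, 3, 0], [1, 3, 0, 2])):
    block = rng.normal(loc=center, scale=0.4, size=(30, 4))
    df = pd.DataFrame(block, columns=["dim1", "dim2", "dim3", "dim4"])
    df["cluster"] = label
    frames.append(df)
data = pd.concat(frames, ignore_index=True)

# Each record becomes a polyline across the vertical axes, so correlated
# dimensions and cluster structure show up as bundles of similar lines.
ax = parallel_coordinates(data, class_column="cluster", alpha=0.4)
ax.set_title("Parallel coordinates (basic sketch)")
plt.savefig("parallel_coordinates_sketch.png", dpi=150)
```
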
Contributors: Hayden, Thomas (Author) / Maciejewski, Ross (Thesis advisor) / Wang, Yalin (Committee member) / Runger, George C. (Committee member) / Mack, Elizabeth (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

As a developing nation, China is currently faced with the challenge of providing safe, reliable and adequate energy resources to the country's growing urban areas as well as to its expanding rural populations. To meet this demand, the country has initiated massive construction projects to expand its national energy infrastructure, particularly in the form of natural gas pipelines. The most notable of these projects is the ongoing West-East Gas Pipeline Project. This project is currently in its third phase, which will supply clean and efficient natural gas to nearly sixty million users located in the densely populated Yangtze River Delta.

Trenchless technologies, in particular the construction method of Horizontal Directional Drilling (HDD), have played a critical role in executing this project by providing economical, practical and environmentally responsible ways to install buried pipeline systems. HDD has proven to be the most popular method selected to overcome challenges along the path of the pipeline, which include mountainous terrain, extensive farmland and numerous bodies of water. The Yangtze River, among other large-scale water bodies, has proven to be the most difficult obstacle for the pipeline installation, as it widens and changes course numerous times along its path to the East China Sea. The purpose of this study is to examine the practices being used in China and compare them with long-established practices in North America in order to understand the advantages of the Chinese advancements.

Developing countries would benefit from the Chinese advancements in large-scale HDD installation. In developed areas, such as North America, studying Chinese execution may generate new ideas that help improve long-established methods. These factors combined further solidify China's role as the global leader in trenchless technology methods and provide the opportunity for Chinese HDD contractors to contribute to the world's knowledge of best practices for the Horizontal Directional Drilling method.
Contributors: Carlin, Maureen Cassin (Author) / Ariaratnam, Samuel T (Thesis advisor) / Chong, Oswald (Committee member) / Bearup, Wylie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Sparse learning is a powerful tool for generating models of high-dimensional data with high interpretability, and it has many important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be powerful for improving the performance of sparse learning models. A graph is a fundamental way to represent structural information about features. This dissertation focuses on graph-based sparse learning. The first part of this dissertation aims to integrate a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered. Three models are proposed, along with efficient solvers, to achieve simultaneous feature grouping and selection, enhancing estimation accuracy. A major remaining challenge is that solving large-scale graph-based sparse learning problems is computationally demanding. An efficient, scalable, and parallel algorithm is therefore proposed for one widely used graph-based sparse learning approach, anisotropic total variation regularization, by explicitly exploiting the structure of the graph. The second part of this dissertation focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered: one is the joint estimation of multiple graphical models using a fused lasso penalty, and the other is the estimation of hierarchical graphical models. The key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which reduces the size of the optimization problem and thereby dramatically reduces the computational cost.
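
A minimal Python sketch of the kind of penalty discussed above: least-squares regression with an anisotropic graph total variation term (absolute differences over graph edges) plus an l1 term, minimized here with a naive subgradient method. The chain graph, weights, and solver are illustrative assumptions; the dissertation's models and efficient, scalable solvers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem whose true coefficients are piecewise constant
# over a chain graph: features connected by an edge should share values.
n, d = 80, 30
X = rng.normal(size=(n, d))
w_true = np.repeat([0.0, 2.0, 0.0], 10)          # three groups of 10 features
y = X @ w_true + 0.1 * rng.normal(size=n)

# Edges of the feature graph (here a simple chain i -- i+1).
edges = [(i, i + 1) for i in range(d - 1)]
E = np.zeros((len(edges), d))
for k, (i, j) in enumerate(edges):               # incidence matrix: (Ew)_k = w_i - w_j
    E[k, i], E[k, j] = 1.0, -1.0

lam_tv, lam_l1 = 2.0, 0.5

def objective(w):
    return (0.5 * np.sum((X @ w - y) ** 2)
            + lam_tv * np.sum(np.abs(E @ w))      # anisotropic graph total variation
            + lam_l1 * np.sum(np.abs(w)))         # plain sparsity

# Plain normalized subgradient descent: simple, not fast, but enough to show the effect.
w = np.zeros(d)
for t in range(1, 5001):
    subgrad = (X.T @ (X @ w - y)
               + lam_tv * E.T @ np.sign(E @ w)
               + lam_l1 * np.sign(w))
    w -= (0.5 / (np.linalg.norm(subgrad) * np.sqrt(t))) * subgrad

print("final objective:", round(objective(w), 2))
print("recovered groups (rounded):", np.round(w, 1))
```
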
Contributors: Yang, Sen (Author) / Ye, Jieping (Thesis advisor) / Wonka, Peter (Thesis advisor) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The solar energy sector has been growing rapidly over the past decade. Growth in renewable electricity generation using photovoltaic (PV) systems is accompanied by an increased awareness of the fault conditions that develop during the operational lifetime of these systems. While the annual energy losses caused by faults in PV systems can reach up to 18.9% of their total capacity, emerging technologies and models are driving greater efficiency to assure the reliability of a product under its actual application. The objectives of this dissertation are (1) to review the state of the art and practice of prognostics and health management for the Direct Current (DC) side of photovoltaic systems; (2) to assess the corrosion of the driven posts supporting PV structures in utility-scale plants; and (3) to assess the probabilistic risk associated with the failure of polymeric materials used in tracker and fixed-tilt systems.

As photovoltaic systems age under relatively harsh and changing environmental conditions, several potential fault conditions can develop during the operational lifetime, including corrosion of supporting structures and failures of polymeric materials. The ability to accurately predict the remaining useful life of photovoltaic systems is critical for plants' continuous operation. This research contributes to the body of knowledge on PV system reliability by (1) developing a meta-model of the expected service life of mounting structures; (2) creating decision frameworks and tools to support practitioners in mitigating risks; and (3) supporting material selection for fielded and future photovoltaic systems. The newly developed frameworks were validated by a global solar company.
Contributors: Chokor, Abbas (Author) / El Asmar, Mounir (Thesis advisor) / Chong, Oswald (Committee member) / Ernzen, James (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Civil infrastructures undergo frequent spatial changes such as deviations between the as-designed model and the as-is condition, rigid body motions of the structure, and deformations of individual elements of the structure. These spatial changes can occur during the design phase, the construction phase, or the service life of a structure. Inability to accurately detect and analyze the impact of such changes may result in missed opportunities for early detection of pending structural integrity and stability issues. Commercial Building Information Modeling (BIM) tools can hardly track differences between as-designed and as-built conditions, as they mainly focus on design changes and rely on project managers to manually update and analyze the impact of field changes on project performance. Structural engineers collect detailed onsite data of a civil infrastructure to perform manual updates of the model for structural analysis, but such an approach tends to become tedious and complicated when handling large civil infrastructures.

Previous studies have collected detailed geometric data generated by 3D laser scanners for defect detection and geometric change analysis of structures. However, they have not yet systematically examined methods for relating the detected geometric changes to the behavior of the structural system. Manually checking every possible loading combination that could lead to an observed geometric change is tedious and sometimes error-prone. The work presented in this dissertation develops a spatial change analysis framework that utilizes spatiotemporal data collected using 3D laser scanning technology, together with the as-designed models of the structures, to automatically detect, classify, and correlate the spatial changes of a structure. The change detection part of the framework is computationally efficient and can automatically detect spatial changes between the as-designed model and as-built data, or between two sets of as-built data collected using 3D laser scanning technology. A spatial change classification algorithm then automatically classifies the detected spatial changes as global changes (rigid body motions) or local deformations (tension, compression). Finally, a change correlation technique utilizes a qualitative shape-based reasoning approach to identify correlated deformations of structural elements connected at joints that contradict joint equilibrium. Such contradicting deformations help eliminate improbable loading combinations, thereby guiding the loading path analysis of the structure.
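
As a rough sketch of only the detection step described above, the following Python snippet measures nearest-neighbor deviations between an as-designed point set and a simulated as-built scan using a k-d tree. The synthetic beam, noise level, and tolerance are illustrative assumptions, not the dissertation's actual framework or data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# As-designed geometry: points sampled on a straight beam along the x-axis.
x = np.linspace(0.0, 10.0, 2000)
as_designed = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])

# As-built scan: same beam with noise plus a local deflection near mid-span.
as_built = as_designed + rng.normal(scale=0.002, size=as_designed.shape)
mid = (x > 4.5) & (x < 5.5)
as_built[mid, 2] -= 0.05                      # 5 cm sag, a "local deformation"

# Change detection: distance from every as-built point to the design geometry.
tree = cKDTree(as_designed)
deviation, _ = tree.query(as_built)

threshold = 0.01                               # illustrative 1 cm tolerance
changed = deviation > threshold
print(f"{changed.sum()} of {len(as_built)} points exceed the tolerance")
print("max deviation (m):", deviation.max().round(3))
```
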
Contributors: Kalasapudi, Vamsi Sai (Author) / Tang, Pingbo (Thesis advisor) / Chong, Oswald (Committee member) / Hjelmstad, Keith (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

The Alpha Sprayed Polyurethane Foam (SPF) roofing system is perceived as not being an economical option when compared to a 20-year modified bitumen roofing system. Today, the majority of roofs are being replaced rather than newly installed. The coating manufacturer, Neogard, implemented the Alpha roofing program to identify the best contractors in the industry and to measure their roof performance. The Alpha roof system has shown consistently high performance on over 230 million square feet of surveyed roof. The author proposes to determine whether the Alpha roof system is renewable, has proven performance that competes with the traditional modified bitumen roofing system, and is a more economical option, by evaluating an Alpha roof system installation and the performance of a 29-year-old Alpha roof system. The Dallas Independent School District utilized the Alpha program for William Lipscomb Elementary School in 2016. Dallas Fort Worth Urethane installed the Alpha SPF roof system with high customer satisfaction ratings. This installation showed the value of the Alpha roof system by saving over 20% on installation costs, and it will save approximately 69% of the cost of recoating the roof in 20 years. The Casa View Elementary School roof system was installed with a Neogard Permathane roof system in 1987. This roof was hail tested with ten drops of a 1-3/4-inch steel ball from 17 feet 9 inches (9 out of 10 passed) and four drops of a 3-inch-diameter steel ball from 17 feet 9 inches (2 out of 4 passed). The analysis of the passing and failing core samples shows that the thickness of the top and base Alpha SPF coating is one of the major differences between a roof passing or failing the FM-SH hail test. Over the 40-year service life, the main difference between purchasing a 61,000-square-foot Alpha SPF roof and a modified bitumen roof is a savings of approximately $1,067,500. Past hail tests on Alpha SPF roof systems show their cost-effectiveness, with high customer satisfaction (9.8 out of 10), a service life of over 40 years after a $6.00/SF recoat, and savings of over $1M for DISD.
Contributors: Zulanas, Charles J., IV (Author) / Kashiwagi, Dean T. (Thesis advisor) / Kashiwagi, Jacob S (Thesis advisor) / Chong, Oswald (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

While techniques for reading DNA in some capacity have been possible for decades, the ability to accurately edit genomes at scale has remained elusive. Novel techniques have been introduced recently to aid in the writing of DNA sequences. While writing DNA is more accessible, it still remains expensive, justifying the increased interest in in silico predictions of cell behavior. In order to accurately predict the behavior of cells, it is necessary to model the cell environment extensively, including gene-to-gene interactions, as completely as possible.

Significant algorithmic advances have been made for identifying these interactions, but despite these improvements, current techniques fail to infer some edges and fail to capture some complexities in the network. Much of this limitation is due to heavily underdetermined problems, whereby tens of thousands of variables are to be inferred using datasets with the power to resolve only a small fraction of the variables. Additionally, failure to correctly resolve gene isoforms using short reads contributes significantly to noise in gene quantification measures.

This dissertation introduces novel mathematical models, machine learning techniques, and biological techniques to solve the problems described above. Mathematical models are proposed for the simulation of gene network motifs and raw read simulation. Machine learning techniques are shown for DNA sequence matching and DNA sequence correction.

Results provide novel insights into the low-level functionality of gene networks. Also shown is the ability to use normalization techniques to aggregate data for gene network inference, leading to larger data sets while minimizing increases in inter-experimental noise. Results also demonstrate that the high error rates experienced by third-generation sequencing are significantly different from previous error profiles, and that these errors can be modeled, simulated, and rectified. Finally, techniques are provided for amending this DNA error that preserve the benefits of third-generation sequencing.
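
As a rough illustration of the underdetermined network-inference setting described above, the following minimal Python sketch infers a gene network by sparse (lasso) regression of each gene on all the others, so that nonzero coefficients become inferred edges. The synthetic expression data, regularization value, and per-gene lasso formulation are illustrative assumptions, not the models introduced in the dissertation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy expression data: 40 samples x 50 genes, far fewer samples than
# potential edges, mimicking the underdetermined setting described above.
n_samples, n_genes = 40, 50
true_net = np.zeros((n_genes, n_genes))
for target in range(n_genes):                      # each gene has ~2 regulators
    regulators = rng.choice(np.delete(np.arange(n_genes), target), size=2, replace=False)
    true_net[target, regulators] = rng.normal(scale=1.0, size=2)

expr = rng.normal(size=(n_samples, n_genes))
expr += expr @ true_net.T * 0.5                    # crude one-step propagation of regulation

# Sparse regression per target gene: nonzero coefficients are inferred edges.
inferred = np.zeros_like(true_net)
for target in range(n_genes):
    predictors = np.delete(np.arange(n_genes), target)
    model = Lasso(alpha=0.1).fit(expr[:, predictors], expr[:, target])
    inferred[target, predictors] = model.coef_

edges = np.abs(inferred) > 1e-3
print("inferred edges:", edges.sum(), "of", n_genes * (n_genes - 1), "possible")
```
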
Contributors: Faucon, Philippe Christophe (Author) / Liu, Huan (Thesis advisor) / Wang, Xiao (Committee member) / Crook, Sharon M (Committee member) / Wang, Yalin (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2017