Matching Items (13)
Description
With the increase in computing power and availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within it. Thus, knowledge discovery by machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution when there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy; aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the predictor classes. Asymmetric Random Forest (ARF) was proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. Matched Random Forest (MRF) was proposed to find variables that are able to distinguish case and control without the restrictions that exist in linear models. MRF detects variables that are able to distinguish case and control even in the presence of interactions and qualitative variables.
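
As a rough illustration of the asymmetric-loss idea (a standard cost-sensitive SVM with class weights, not the dissertation's aSVM; the synthetic data and weight values below are assumptions), a regular SVM can be pushed toward higher precision on a class of interest by penalizing errors on the other class more heavily:

```python
# Minimal cost-sensitive SVM sketch: weight errors asymmetrically so the
# classifier avoids false positives on the class of interest (class 1).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_score

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Penalizing class-0 errors more heavily trades recall for higher precision
# on class 1, since points are labeled 1 only when the evidence is strong.
clf = SVC(kernel="rbf", class_weight={0: 5.0, 1: 1.0}).fit(X_tr, y_tr)
print("precision on class 1:", precision_score(y_te, clf.predict(X_te)))
```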
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction, used to find a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features might also carry special structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: features may be grouped together, there may exist hierarchies and overlapping groups among them, and there may be requirements for selecting the most relevant groups. In spite of producing sparse solutions, the solutions are not guaranteed to be robust. For the selection to be robust, there are techniques that provide theoretical justification of why certain features are selected. Stability selection is one such method, which allows the use of existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning a probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the probability is computed as the fraction of runs in which a feature is selected. Cross-validation, which is used to determine the best parameter value over a range of values, further allows selecting the best parameter value by choosing the one that gives the maximum accuracy score. With such a combination of algorithms, with good convergence guarantees, stable feature selection properties, and the inclusion of various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable functionalities of the SLEP package, and these features can be used in a variety of fields to infer relevant elements. Alzheimer's Disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
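
A minimal sketch of the stability selection procedure described above, assuming a Lasso as the sparse learner and synthetic regression data (this is an illustration, not the SLEP implementation):

```python
# Stability selection sketch: sub-sample the data, fit a sparse learner,
# and count how often each feature receives a nonzero coefficient.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=1.0, random_state=0)

n_runs, frac = 100, 0.5
rng = np.random.default_rng(0)
counts = np.zeros(X.shape[1])
for _ in range(n_runs):
    idx = rng.choice(X.shape[0], size=int(frac * X.shape[0]), replace=False)
    model = Lasso(alpha=0.5).fit(X[idx], y[idx])
    counts += model.coef_ != 0

selection_prob = counts / n_runs               # per-feature selection probability
stable = np.where(selection_prob >= 0.8)[0]    # features above a stability threshold
print("stable feature indices:", stable)
```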
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the advent of the Internet, data is being added online at an enormous rate. Though search engines use IR techniques to serve users' search requests, the results are not always effective for the user's query: the search engine user often has to go through several web pages before reaching the one he/she wanted. This problem of information overload can be addressed by automatic text summarization. Summarization is the process of obtaining an abridged version of documents so that a user can take a quick view and understand what the document is about. Email threads from W3C are used in this system. Apart from common IR features like Term Frequency and Inverse Document Frequency, Term Rank, a variation of PageRank based on a graph model that can cluster words with respect to word ambiguity, is implemented. Term Rank also considers the possibility of co-occurrence of words within the corpus and evaluates the rank of a word accordingly. Sentences of the email threads are ranked according to these features, and summaries are generated. The system implements the concept of pyramid evaluation for content selection and can be considered a framework for unsupervised learning in text summarization.
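
For illustration only, a graph-based ranking sketch in the spirit of the PageRank variation described above (the toy sentences, co-occurrence weighting, and scoring rule are assumptions, not the thesis's Term Rank):

```python
# Build a word co-occurrence graph, run PageRank to score words, then rank
# sentences by the average score of the words they contain.
import itertools
import networkx as nx

sentences = [
    "the server crashed during the nightly build",
    "the nightly build failed because the server ran out of memory",
    "lunch is at noon on friday",
]

graph = nx.Graph()
for sent in sentences:
    words = sent.split()
    for w1, w2 in itertools.combinations(set(words), 2):   # co-occurrence edges
        weight = graph.get_edge_data(w1, w2, {"weight": 0})["weight"] + 1
        graph.add_edge(w1, w2, weight=weight)

word_rank = nx.pagerank(graph, weight="weight")             # PageRank over words
scores = [sum(word_rank.get(w, 0) for w in s.split()) / len(s.split())
          for s in sentences]
summary = max(zip(scores, sentences))[1]                     # top-ranked sentence
print(summary)
```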
Contributors: Nadella, Sravan (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This research starts by utilizing an efficient sparse inverse covariance matrix (precision matrix) estimation technique to identify a set of highly correlated discriminative perspectives between radical and counter-radical groups. A ranking system is developed that utilizes the ranked perspectives to map Islamic organizations onto a set of socio-cultural, political and behavioral scales based on their web site corpus. In parallel, a gold-standard ranking of these organizations was created by domain experts; expert-to-expert agreements are computed, and experimental results are presented comparing the performance of the QUIC-based scaling system to a baseline method. The QUIC-based algorithm not only outperforms the baseline methods, but it is also the only system that consistently performs at area-expert-level accuracies for all scales. In addition, a multi-scale ideological model is developed to investigate the correlates of Islamic extremism in Indonesia, Nigeria and the UK. This analysis demonstrates that violence does not correlate strongly with broad Muslim theological or sectarian orientations; rather, intolerance of religious diversity is the only consistent and statistically significant ideological correlate of Islamic extremism in these countries, alongside a desire for political change in the UK and Indonesia, and for social change in Nigeria. Next, a dynamic issue and community tracking system based on an NMF (Non-negative Matrix Factorization) co-clustering algorithm is built to better understand the dynamics of virtual communities. The system is applied to the case of Iran and Saudi Arabia to build and apply a multi-party agent-based model that demonstrates the role of wedges and spoilers in a complex environment where coalitions are dynamic. Lastly, a visual intelligence platform called LookingGlass is developed for tracking the diffusion of online social movements: it tracks the geographical footprint, shifting positions and flows of individuals, topics and perspectives between groups. The algorithm utilizes large amounts of text collected from a wide variety of organizations' media outlets to discover their hotly debated topics and the discriminative perspectives voiced by opposing camps, organized into multiple scales. The discriminative perspectives are then utilized to classify and map individual Twitter users' message content to social movements based on the perspectives expressed in their tweets.
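
As a hedged illustration of the first step, the sketch below estimates a sparse precision matrix with scikit-learn's GraphicalLasso as a stand-in for the QUIC solver; the synthetic feature matrix is an assumption, not the organizations' web site corpus:

```python
# Sparse inverse covariance (precision matrix) estimation sketch: nonzero
# off-diagonal entries indicate conditionally dependent feature pairs.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:, 1] += 0.8 * X[:, 0]          # induce a dependency between features 0 and 1

est = GraphicalLasso(alpha=0.1).fit(X)
precision = est.precision_        # estimated sparse precision matrix
pairs = np.argwhere(np.triu(np.abs(precision) > 1e-3, k=1))
print("dependent pairs:", pairs.tolist())
```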
Contributors: Kim, Nyunsu (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Hsiao, Sharon (Committee member) / Corman, Steven (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In healthcare facilities, health information systems (HISs) are used to serve different purposes. The radiology department adopts multiple HISs in managing its operations and patient care. In general, the HISs that touch radiology fall into two categories: tracking HISs and archive HISs. The Electronic Health Record (EHR) is a typical tracking HIS, which tracks the care each patient receives across multiple encounters and facilities. Archive HISs are typically specialized databases that store the large volumes of data collected as part of patient care. A typical example of an archive HIS is the Picture Archive and Communication System (PACS), which provides economical storage and convenient access to diagnostic images from multiple modalities. How to integrate such HISs and best utilize their data remains a challenging problem due to the disparity of HISs as well as the high dimensionality and heterogeneity of the data. My PhD dissertation research includes three inter-connected and integrated topics and focuses on designing integrated HISs and further developing statistical models and machine learning algorithms for process and patient care improvement.

Topic 1: Design of a super-HIS and tracking of quality of care (QoC). My research developed an information technology that integrates multiple HISs in radiology and proposed QoC metrics, defined on the integrated data, that measure various dimensions of care. The DDD assisted clinical practice and enabled an effective intervention for reducing lengthy radiologist turnaround times for patients.

Topic 2: Monitoring and change detection of QoC data streams for process improvement. With the super-HIS in place, high-dimensional data streams of QoC metrics are generated. I developed a statistical model for monitoring high-dimensional data streams that integrates Singular Value Decomposition (SVD) and process control. The algorithm was applied to the QoC metrics data and additionally extended to another application: monitoring traffic data in communication networks.
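
A hedged sketch of the general SVD-plus-control-statistic idea (the subspace size, monitoring statistics, and synthetic data are illustrative assumptions, not the dissertation's exact algorithm):

```python
# Learn a low-dimensional subspace from in-control data with SVD, then monitor
# each new observation via Hotelling T^2 (within the subspace) and SPE (residual).
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 50))            # in-control observations
mean = reference.mean(axis=0)
U, S, Vt = np.linalg.svd(reference - mean, full_matrices=False)
k = 5                                              # retained components
V_k, var_k = Vt[:k].T, (S[:k] ** 2) / (len(reference) - 1)

def monitor(x):
    centered = x - mean
    scores = centered @ V_k
    t2 = np.sum(scores ** 2 / var_k)               # variation within the subspace
    spe = np.sum((centered - scores @ V_k.T) ** 2)  # residual outside the subspace
    return t2, spe

t2, spe = monitor(rng.normal(size=50) + 3.0)       # a shifted observation
print(f"T2={t2:.1f}, SPE={spe:.1f}")               # compare against control limits
```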

Topic 3: Deep transfer learning of archive HIS data for computer-aided diagnosis (CAD). The novelty of the CAD system is the development of a deep transfer learning algorithm that combines the ideas of transfer learning and multi-modality image integration under the deep learning framework. Our system achieved high accuracy in breast cancer diagnosis compared with conventional machine learning algorithms.
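
For illustration, a hedged Keras sketch of transfer learning with two imaging modalities (a frozen ImageNet-pretrained backbone shared across two inputs); the architecture and input sizes are assumptions, not the dissertation's CAD system:

```python
# Transfer learning with multi-modality integration: one pretrained backbone
# embeds each modality, and the embeddings are fused before a small classifier.
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                           pooling="avg")
backbone.trainable = False                        # transfer: freeze pretrained weights

inp_a = tf.keras.Input(shape=(224, 224, 3))       # hypothetical modality A
inp_b = tf.keras.Input(shape=(224, 224, 3))       # hypothetical modality B
feats = tf.keras.layers.Concatenate()([backbone(inp_a), backbone(inp_b)])
hidden = tf.keras.layers.Dense(128, activation="relu")(feats)
out = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)

model = tf.keras.Model([inp_a, inp_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```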
Contributors: Wang, Kun (Author) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Zwart, Christine M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Recent technological advances enable the collection of various complex, heterogeneous and high-dimensional data in biomedical domains. The increasing availability of high-dimensional biomedical data creates the need for new machine learning models for effective data analysis and knowledge discovery. This dissertation introduces several unsupervised and supervised methods to help understand the data, discover patterns and improve decision making. All the proposed methods can generalize to other industrial fields.

The first topic of this dissertation focuses on data clustering. Data clustering is often the first step in analyzing a dataset without label information. Clustering high-dimensional data with mixed categorical and numeric attributes remains a challenging, yet important, task. A clustering algorithm based on tree ensembles, CRAFTER, is proposed to tackle this task in a scalable manner.
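
As a rough, hedged analogue of tree-ensemble clustering (not CRAFTER itself, and numeric-only rather than mixed-type): fit a random forest to separate real rows from column-permuted synthetic rows, derive a leaf-co-occurrence proximity, and cluster it:

```python
# Unsupervised clustering via a tree ensemble: proximity between rows is the
# fraction of trees in which both rows land in the same leaf.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
real = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
synthetic = np.column_stack([rng.permutation(col) for col in real.T])

X = np.vstack([real, synthetic])
y = np.r_[np.ones(len(real)), np.zeros(len(synthetic))]
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

leaves = forest.apply(real)                        # leaf index per (row, tree)
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(proximity)
print(np.bincount(labels))                         # cluster sizes
```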

The second part of this dissertation aims to develop data representation methods for genome sequencing data, a special type of high-dimensional data in the biomedical domain. The proposed data representation method, Bag-of-Segments, can summarize the key characteristics of the genome sequence into a small number of features with good interpretability.

The third part of this dissertation introduces an end-to-end deep neural network model, GCRNN, for time series classification with emphasis on both the accuracy and the interpretation. GCRNN contains a convolutional network component to extract high-level features, and a recurrent network component to enhance the modeling of the temporal characteristics. A feed-forward fully connected network with the sparse group lasso regularization is used to generate the final classification and provide good interpretability.

The last topic centers around dimensionality reduction methods for time series data. A good dimensionality reduction method is important for the storage, decision making and pattern visualization of time series data. The CRNN autoencoder is proposed to not only achieve low reconstruction error, but also generate discriminative features. A variational version of this autoencoder has great potential for applications such as anomaly detection and process control.
Contributors: Lin, Sangdi (Author) / Runger, George C. (Thesis advisor) / Kocher, Jean-Pierre A (Committee member) / Pan, Rong (Committee member) / Escobedo, Adolfo R. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Image-based process monitoring has recently attracted increasing attention due to the advancement of sensing technologies. However, existing process monitoring methods fail to fully utilize the spatial information of images due to their complex characteristics, including high dimensionality and complex spatial structures. Recent advances in unsupervised deep models, such as the generative adversarial network (GAN) and the adversarial autoencoder (AAE), have made it possible to learn these complex spatial structures automatically. Inspired by this advancement, we propose an AAE-based framework for unsupervised anomaly detection in images. The AAE combines the power of the GAN with the variational autoencoder, serving as a nonlinear dimension reduction technique with regularization from the discriminator. Based on this, we propose a monitoring statistic that efficiently captures changes in the image data. The performance of the proposed AAE-based anomaly detection algorithm is validated through a simulation study and a real case study on rolling defect detection.
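
A minimal sketch of the monitoring idea using a plain autoencoder rather than the AAE (the architecture, synthetic images, and control limit are assumptions): reconstruction error on in-control images sets an empirical limit, and images exceeding it are flagged:

```python
# Train an autoencoder on in-control images, then use reconstruction error as
# the monitoring statistic with an empirical control limit.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal_imgs = rng.normal(0.5, 0.05, size=(500, 16, 16, 1)).astype("float32")

inp = tf.keras.Input(shape=(16, 16, 1))
x = tf.keras.layers.Flatten()(inp)
x = tf.keras.layers.Dense(32, activation="relu")(x)       # low-dimensional code
x = tf.keras.layers.Dense(16 * 16, activation="sigmoid")(x)
out = tf.keras.layers.Reshape((16, 16, 1))(x)
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal_imgs, normal_imgs, epochs=5, verbose=0)

def statistic(imgs):
    return np.mean((imgs - ae.predict(imgs, verbose=0)) ** 2, axis=(1, 2, 3))

limit = np.quantile(statistic(normal_imgs), 0.99)          # empirical control limit
defect = normal_imgs[:1] + 0.3                             # simulated defect image
print(statistic(defect) > limit)                           # True -> signal an anomaly
```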
Contributors: Yeh, Huai-Ming (Author) / Yan, Hao (Thesis advisor) / Pan, Rong (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In supervised learning, machine learning techniques can be applied to learn a model on a small set of labeled documents, which can then be used to classify a larger set of unknown documents. Machine learning techniques can also be used to analyze a political scenario in a given society, and a lot of research has been going on in this field to understand how various people in a society interact in response to actions taken by their organizations.

This thesis focuses on understanding the Russian influence on people in Latvia. This is done by building an effective model learned on an initial set of documents consisting of official party web pages and the social networking sites of important political leaders. Since Twitter is a micro-blogging site that allows people to post their opinions on any topic, the model built is used for estimating the tweets supporting the Russian and Latvian political organizations in Latvia. All the documents collected for analysis are in Latvian and Russian, languages that are rich in vocabulary, resulting in a huge number of features. Hence, feature selection techniques can be used to reduce the vocabulary to the set relevant to the classification model. This thesis provides a comparative analysis of traditional feature selection techniques and the implementation of a new iterative feature selection method using EM and cross-domain training, along with a supportive visualization tool. This method outperformed other feature selection methods by reducing the number of features by up to 50% while maintaining good model accuracy. The results from the classification are used to interpret user behavior and political influence patterns across organizations in Latvia using an interactive dashboard with a combination of powerful widgets.
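
As a hedged sketch of the overall pipeline (tiny hypothetical English documents, chi-squared feature selection, and a self-training loop stand in for the thesis's corpus and its exact EM-based, cross-domain method):

```python
# Small labeled set -> feature-selected text classifier -> iterative refit
# with confidently pseudo-labeled unlabeled documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder documents; the real corpus is Latvian/Russian text.
labeled = ["support the reform agenda now", "we stand with the eastern policy"]
labels = np.array([0, 1])                      # 0 / 1 = the two political camps
unlabeled = ["the reform agenda needs support", "the eastern policy must stand"]

clf = make_pipeline(TfidfVectorizer(), SelectKBest(chi2, k=5), MultinomialNB())
X_l, y_l = list(labeled), labels.copy()
for _ in range(3):                             # self-training iterations
    clf.fit(X_l, y_l)
    proba = clf.predict_proba(unlabeled)
    keep = proba.max(axis=1) > 0.6             # confident pseudo-labels only
    X_l = list(labeled) + [d for d, ok in zip(unlabeled, keep) if ok]
    y_l = np.concatenate([labels, proba.argmax(axis=1)[keep]])

print(clf.predict(unlabeled))                  # predicted camp per tweet-like text
```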
Contributors: Bollapragada, Lakshmi Gayatri Niharika (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The accurate monitoring of the bulk transmission system of the electric power grid by sensors, such as Remote Terminal Units (RTUs) and Phasor Measurement Units (PMUs), is essential for maintaining the reliability of the modern power system. One of the primary objectives of power system monitoring is the identification of the snapshots of the system at regular intervals by performing state estimation using the available measurements from the sensors. The process of state estimation corresponds to the estimation of the complex voltages at all buses of the system. PMU measurements play an important role in this regard, because of the time-synchronized nature of these measurements as well as the faster rates at which they are produced. However, a model-based linear state estimator created using PMU-only data requires complete observability of the system by PMUs for its continuous functioning. The conventional model-based techniques also make certain assumptions in the modeling of the physical system, such as the constant values of the line parameters. The measurement error models in the conventional state estimators are also assumed to follow a Gaussian distribution. In this research, a data mining technique using Deep Neural Networks (DNNs) is proposed for performing a high-speed, time-synchronized state estimation of the transmission system of the power system. The proposed technique uses historical data to identify the correlation between the measurements and the system states as opposed to directly using the physical model of the system. Therefore, the highlight of the proposed technique is its ability to provide an accurate, fast, time-synchronized estimate of the system states even in the absence of complete system observability by PMUs.
The state estimator is formulated for the IEEE 118-bus system and its reliable performance is demonstrated in the presence of redundant observability, complete observability, and incomplete observability. The robustness of the state estimator is also demonstrated by performing the estimation in the presence of non-Gaussian measurement errors and varying line parameters. The consistency of the DNN state estimator is demonstrated by performing state estimation for an entire day.
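
A hedged sketch of the data-driven mapping on synthetic data (the measurement model, network size, and heavy-tailed noise are assumptions, not the IEEE 118-bus setup): a feedforward network learns to regress system states from historical measurement-state pairs:

```python
# Learn the inverse mapping from PMU-style measurements to bus states from
# historical data, rather than from an explicit physical model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_meas, n_states = 60, 20                       # hypothetical measurement/state sizes
A = rng.normal(size=(n_meas, n_states))         # unknown measurement-to-state relation
states = rng.normal(1.0, 0.05, size=(5000, n_states))
measurements = states @ A.T + 0.01 * rng.standard_t(df=3, size=(5000, n_meas))

X_tr, X_te, y_tr, y_te = train_test_split(measurements, states, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                   random_state=0).fit(X_tr, y_tr)
print("test R^2:", dnn.score(X_te, y_te))       # quality of the learned estimator
```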
Contributors: Chandrasekaran, Harish (Author) / Pal, Anamitra (Thesis advisor) / Sen, Arunabha (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Quantum computing holds the potential to revolutionize various industries by solving problems that classical computers cannot solve efficiently. However, building quantum computers is still in its infancy, and simulators are currently the best available option to explore the potential of quantum computing. Therefore, developing comprehensive benchmarking suites for quantum computing simulators is essential to evaluate their performance and guide the development of future quantum algorithms and hardware. This study presents a systematic evaluation of quantum computing simulators’ performance using a benchmarking suite. The benchmarking suite is designed to meet the industry-standard performance benchmarks established by the Defense Advanced Research Projects Agency (DARPA) and includes standardized test data and comparison metrics that encompass a wide range of applications, deep neural network models, and optimization techniques. The thesis is divided into two parts to cover basic quantum algorithms and variational quantum algorithms for practical machine-learning tasks. In the first part, the run time and memory performance of quantum computing simulators are analyzed using basic quantum algorithms. The performance is evaluated using standardized test data and comparison metrics that cover fundamental quantum algorithms, including Quantum Fourier Transform (QFT), Inverse Quantum Fourier Transform (IQFT), Quantum Adder, and Variational Quantum Eigensolver (VQE). The analysis provides valuable insights into the simulators’ strengths and weaknesses and highlights the need for further development to enhance their performance. In the second part, benchmarks are developed using variational quantum algorithms for practical machine learning tasks such as image classification, natural language processing, and recommendation. The benchmarks address several unique challenges posed by benchmarking quantum machine learning (QML), including the effect of optimizations on time-to-solution, the stochastic nature of training, the inclusion of hybrid quantum-classical layers, and the diversity of software and hardware systems. The findings offer valuable insights into the simulators’ ability to solve practical machine-learning tasks and pinpoint areas for future research and enhancement. In conclusion, this study provides a rigorous evaluation of quantum computing simulators’ performance using a benchmarking suite that meets industry-standard performance benchmarks.
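
As a hedged, minimal illustration of a run-time benchmark in the spirit of the first part (a dense NumPy state-vector simulation with arbitrary qubit counts, not any of the simulators actually under test):

```python
# Time a dense Quantum Fourier Transform applied to n-qubit basis states.
import time
import numpy as np

def qft_matrix(n_qubits):
    dim = 2 ** n_qubits
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim))
    return omega ** (j * k) / np.sqrt(dim)        # dense QFT unitary

for n in (6, 8, 10):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                # |0...0> input state
    start = time.perf_counter()
    qft_matrix(n) @ state                         # apply the QFT
    elapsed = time.perf_counter() - start
    print(f"{n} qubits: {elapsed * 1e3:.2f} ms")  # run-time metric per problem size
```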
Contributors: Sathyakumar, Rajesh (Author) / Spanias, Andreas (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023