Matching Items (182)

Description
Mobile ad hoc networks (MANETs) have attracted attention for mission-critical applications. This dissertation investigates techniques of statistical monitoring and control for overhead reduction in a proactive MANET routing protocol. Proactive protocols transmit overhead periodically; instead, we propose that the local conditions of a node should determine this transmission decision. While the goal is to minimize overhead, a balance between the amount of overhead transmitted and the performance achieved is required. Statistical monitoring consists of techniques to determine whether a characteristic has shifted away from an in-control state. A basic tool for monitoring is a control chart, a time-oriented representation of the characteristic. When a sample deviates outside the control limits, a significant change has occurred and corrective actions are required to return to the in-control state. We investigate the use of statistical monitoring of local conditions in the Optimized Link State Routing (OLSR) protocol. Three versions are developed. In A-OLSR, each node uses a Shewhart chart to monitor the betweenness of its two-hop neighbourhood. Betweenness is a social network metric that measures a node's influence; betweenness is larger when a node has more influence. Changes in topology are associated with changes in betweenness. In A+-OLSR, we incorporate additional local node conditions, including speed, density, packet arrival rate, and the number of flows the node forwards. Response Surface Methodology (RSM) is used to optimize timer values. In addition, the Shewhart chart is replaced by an Exponentially Weighted Moving Average (EWMA) chart, which is more sensitive to small changes in the characteristic. It is known that control charts do not work as well in the presence of correlation. Hence, in A*-OLSR the autocorrelation in the time series is removed and an Auto-Regressive Integrated Moving Average (ARIMA) model is fitted; this removes the dependence on node speed. A*-OLSR also extends monitoring to two characteristics concurrently using multivariate cumulative sum (MCUSUM) charts. The protocols are evaluated in simulation and compared to OLSR and its variants. The techniques for statistical monitoring and control are general and have great potential to be applied to the adaptive control of many network protocols.
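To make the control-chart machinery concrete, the following is a minimal sketch of an EWMA chart over a stream of samples; it is a generic textbook construction rather than code from the dissertation, and the smoothing weight and limit width are common illustrative defaults.

```python
# Minimal EWMA control chart: flag samples whose smoothed value leaves
# control limits estimated from an in-control calibration window.
# Generic textbook construction; lambda_=0.2 and L=3.0 are common defaults.
import random
import statistics

def ewma_alarms(samples, calibration=30, lambda_=0.2, L=3.0):
    """Yield (index, ewma) whenever the EWMA statistic exits the limits."""
    baseline = samples[:calibration]
    mu = statistics.mean(baseline)        # in-control center line
    sigma = statistics.stdev(baseline)    # in-control standard deviation
    # Steady-state EWMA half-width (Montgomery's formulation).
    half_width = L * sigma * (lambda_ / (2.0 - lambda_)) ** 0.5
    z = mu                                # EWMA statistic, started at the mean
    for i, x in enumerate(samples[calibration:], start=calibration):
        z = lambda_ * x + (1.0 - lambda_) * z
        if abs(z - mu) > half_width:
            yield i, round(z, 3)          # significant shift: take action

# Example: an in-control stream with a mean shift injected at sample 40.
random.seed(1)
stream = [random.gauss(10.0, 0.5) for _ in range(60)]
stream[40:] = [x + 1.5 for x in stream[40:]]
print(list(ewma_alarms(stream)))
```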
Contributors: Shaukat, Kahkashan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Colbourn, Charles J. (Committee member) / Montgomery, Douglas C. (Committee member) / Sarjoughian, Hessam S. (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Analysis of political texts, which contain a huge amount of personal political opinions, sentiments, and emotions towards powerful individuals, leaders, organizations, and large numbers of people, is an interesting task that can lead to the discovery of interesting interactions between political parties and people. Recently, the political blogosphere has played an increasingly important role in politics as a forum for debating political issues. Most political weblogs are biased towards their political parties, and they generally express their sentiments towards their own issues (i.e., leaders, topics, etc.) and also towards the issues of the opposing parties. In this thesis, I have modeled the above interactions/debate as a sentimental bipartite graph: a bipartite graph with blogs forming the vertices of one disjoint set, the issues (i.e., leaders, topics, etc.) forming the other disjoint set, and the edges between the two sets representing the sentiment of the blogs towards the issues. I have used American political blog data to model the sentimental bipartite graph; in particular, a set of popular liberal and conservative political blogs that have clearly declared positions. These blogs contain discussion of social, political, and economic issues and related key individuals from their conservative/liberal viewpoints. To be more focused and more polarized, the 22 most popular liberal/conservative blogs from a particular time period, May 2008 - October 2008 (chosen for its high intensity of debate and discussion just before the presidential elections), were considered, involving around 23,800 articles. This thesis addresses the questions: a) Which are the most liberal/conservative blogs on the web? b) Who is on which side of the debate, and what are the issues? c) Who are the important leaders? d) How do you model the relationship between the participants of the debate and the underlying issues?
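A minimal sketch of such a sentimental bipartite graph is given below, with blogs on one side, issues on the other, and signed edge weights for sentiment; the data, blog and issue names, and the leaning score are invented for illustration and do not reproduce the thesis's scoring.

```python
# Toy sentimental bipartite graph: blogs -> issues with signed sentiment
# weights in [-1, 1]. Data and scoring rule are illustrative only.
sentiment = {
    ("blog_a", "issue_x"): 0.8,
    ("blog_a", "issue_y"): -0.6,
    ("blog_b", "issue_x"): -0.7,
    ("blog_b", "issue_y"): 0.5,
}

# Assumed issue polarities: +1 liberal-leaning issue, -1 conservative-leaning.
polarity = {"issue_x": +1, "issue_y": -1}

def blog_score(blog):
    """Aggregate a blog's leaning: positive sentiment toward an issue
    contributes that issue's polarity to the blog's score."""
    return sum(s * polarity[issue]
               for (b, issue), s in sentiment.items() if b == blog)

for blog in ["blog_a", "blog_b"]:
    print(blog, round(blog_score(blog), 2))   # most positive = most liberal here
```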
Contributors: Thirumalai, Dananjayan (Author) / Davulcu, Hasan (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access protocols which are robust against strong adversarial jamming. More specifically, four medium access control (MAC) protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activities under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network in the presence of a strong adversary. Our protocols can not only deal with internal interference without exact knowledge of the number of participants in the network, but they are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary which can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. We allow the adversary to be adaptive and to have complete knowledge of the entire protocol history. Moreover, in case the adversary is also reactive, it uses carrier sensing to make informed decisions about when to disrupt communications. Among the proposed protocols, JADE, ANTIJAM, and COMAC are able to achieve Θ(1)-competitive throughput in the presence of the strong adversary, while SINRMAC is the first attempt to apply the SINR (Signal to Interference plus Noise Ratio) model to the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; SELECT is an exemplary study of leader election, one of the most fundamental tasks in distributed computing.
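To give a flavor of how such protocols adapt, the sketch below runs a toy slotted-access simulation in which each node multiplicatively adjusts its send probability based on channel feedback, loosely in the spirit of JADE-style updates; it is a schematic illustration only, with invented constants and an invented jamming pattern, and it does not reproduce any of the protocols above.

```python
# Toy slotted medium access with multiplicative probability adaptation,
# loosely in the spirit of JADE-style protocols. Schematic only: the
# constants, jamming pattern, and feedback rule are invented.
import random
random.seed(7)

GAMMA, P_MAX = 1.05, 0.25     # assumed update factor and probability cap
N_NODES, SLOTS = 8, 2000

p = [0.05] * N_NODES          # per-node access probabilities
successes = 0
for t in range(SLOTS):
    jammed = random.random() < 0.6        # adversary jams ~60% of slots
    senders = [i for i in range(N_NODES) if random.random() < p[i]]
    if not jammed and len(senders) == 1:
        successes += 1                    # exactly one sender: success
        i = senders[0]
        p[i] = min(P_MAX, p[i] * GAMMA)   # reward the successful sender
    elif len(senders) > 1:
        for i in senders:                 # collision: back off
            p[i] /= GAMMA
print(f"throughput ~ {successes / SLOTS:.3f} of all slots "
      f"(non-jammed fraction is 0.4)")
```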
Contributors: Zhang, Jin (Author) / Richa, Andréa W. (Thesis advisor) / Scheideler, Christian (Committee member) / Sen, Arunabha (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Muslim radicalism is recognized as one of the greatest security threats for the United States and the rest of the world. The use of force to eliminate specific radical entities is ineffective in containing radicalism as a whole. There is a need to understand the origin, ideologies, and behavior of radical and counter-radical organizations, and how they evolve over time. Recognizing and supporting counter-radical organizations is one of the most important steps towards impeding radical organizations. A great deal of research has already been done to categorize and recognize organizations, to understand their behavior, their interactions with other organizations, their target demographics, and their areas of influence; this research has produced a huge amount of information. This thesis provides a powerful and interactive way to navigate through all this information using a visualization dashboard. The dashboard makes it easier for social scientists, policy analysts, military personnel, and others to visualize an organization's propensity towards violence and radicalism. It also tracks peaking religious, political, and socio-economic markers, along with target demographics and locations. A powerful search interface with parametric search helps to narrow down to specific scenarios and view the corresponding information related to the organizations. This tool helps to identify moderate counter-radical organizations and also has the potential to predict the orientation of various organizations based on current information.
Contributors: Nair, Shreejay (Author) / Davulcu, Hasan (Thesis advisor) / Dasgupta, Partha (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Gray codes are perhaps the best-known structures for listing sequences of combinatorial objects, such as binary strings. Simply defined as minimal change listings, Gray codes vary greatly both in structure and in the types of objects that they list. More specific types of Gray codes are universal cycles and overlap sequences. Universal cycles are Gray codes on a set of strings of length n in which the first n-1 letters of one object are the same as the last n-1 letters of its predecessor in the listing. Overlap sequences allow this overlap to vary between 1 and n-1. Our main contributions to the areas of Gray codes and universal cycles include a new Gray code algorithm for fixed-weight m-ary words, and results on the existence of universal cycles for weak orders on [n]. Overlap cycles are a relatively new structure with very few published results. We prove the existence of s-overlap cycles for k-permutations of [n], which had been an open research problem for several years, as well as constructing 1-overlap cycles for Steiner triple and quadruple systems of every order. Also included are various other results of a similar nature covering other structures such as binary strings, m-ary strings, subsets, permutations, weak orders, partitions, and designs. These listing structures lend themselves readily to some classes of combinatorial objects, such as binary n-tuples and m-ary n-tuples. Others, such as k-subsets of an n-set, weak orders, and designs, require more work to find an appropriate structure. Still others, such as partitions, require a modification in the representation of the objects to fit these structures. Determining when and how we can fit these sets of objects into our three listing structures is the focus of this dissertation.
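For concreteness, the classic binary reflected Gray code illustrates the minimal change property, with successive n-bit strings differing in exactly one bit; this standard construction is background only, not one of the new algorithms contributed here.

```python
# Binary reflected Gray code: successive n-bit strings differ in one bit.
# Standard construction (background), not the dissertation's new algorithms.
def gray_code(n):
    if n == 0:
        return [""]
    prev = gray_code(n - 1)
    # Prefix the previous listing with 0, then its reflection with 1.
    return ["0" + s for s in prev] + ["1" + s for s in reversed(prev)]

codes = gray_code(3)
print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100']
# Verify the minimal-change property, including the wrap-around step.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```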
Contributors: Horan, Victoria E. (Author) / Hurlbert, Glenn H. (Thesis advisor) / Czygrinow, Andrzej (Committee member) / Fishel, Susanna (Committee member) / Colbourn, Charles (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
With the advent of the Internet, the amount of data added online is increasing at an enormous rate. Though search engines use IR techniques to serve users' search requests, the results are often not effective for the user's query: the user has to go through several webpages before reaching the one he/she wanted. This problem of information overload can be solved using automatic text summarization. Summarization is the process of obtaining an abridged version of a document so that the user can get a quick view of what the document is about. Email threads from W3C are used in this system. Apart from common IR features like Term Frequency and Inverse Document Frequency, the system implements Term Rank, a variation of PageRank based on a graph model that can cluster words with respect to word ambiguity. Term Rank also considers the possibility of co-occurrence of words within the corpus and evaluates the rank of each word accordingly. Sentences of the email threads are ranked according to these features, and summaries are generated. The system implements the concept of pyramid evaluation in content selection, and can be considered a framework for unsupervised learning in text summarization.
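A minimal sketch of feature-based sentence ranking is shown below, scoring sentences by summed TF-IDF weights; the Term Rank graph feature is omitted, and the email thread is a toy stand-in for the W3C data.

```python
# Minimal extractive summarization: score sentences by the sum of the
# TF-IDF weights of their words and keep the top-k. TF-IDF only; the
# thesis's Term Rank graph feature is not reproduced here.
import math
from collections import Counter

def summarize(sentences, k=1):
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequency
    def score(doc):
        tf = Counter(doc)
        return sum(tf[w] / len(doc) * math.log(n / df[w]) for w in tf)
    ranked = sorted(sentences, key=lambda s: score(s.lower().split()),
                    reverse=True)
    return ranked[:k]

thread = ["The working group meets on Tuesday.",
          "Please review the updated schema draft before the meeting.",
          "Thanks."]
print(summarize(thread, k=1))
```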
Contributors: Nadella, Sravan (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interactions based on the presence of these agents. Interaction Primitives is a theoretical framework that performs interaction learning from demonstrations in a two-agent work environment. This document is an in-depth description of a new Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and of an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures the correlation between more than two agents while performing a single task. The new framework is an intuitive, generic, easy-to-install, and easy-to-use Python library for applying Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment; the results and benchmarks are reported in the related sections of this document.
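The statistical core of Interaction Primitives is Gaussian conditioning over the agents' correlated trajectory parameters. The sketch below illustrates that step under simplifying assumptions (weight vectors observed directly; no basis functions, phase estimation, or observation noise as in the full framework) and is not the API of the library described above.

```python
# Gaussian conditioning at the heart of Interaction Primitives: given a
# joint distribution over human and robot trajectory weights learned from
# demonstrations, infer the robot's weights from the observed human ones.
# Simplified sketch: weights are "observed" directly, with no basis
# functions, phase estimation, or noise model as in the full framework.
import numpy as np

def fit_joint(demos_h, demos_r):
    """Stack per-demo human/robot weight vectors and fit a joint Gaussian."""
    W = np.hstack([demos_h, demos_r])          # one row per demonstration
    return W.mean(axis=0), np.cov(W, rowvar=False)

def condition_on_human(mu, Sigma, w_h, d_h):
    """Posterior mean of robot weights given observed human weights w_h."""
    mu_h, mu_r = mu[:d_h], mu[d_h:]
    S_hh, S_rh = Sigma[:d_h, :d_h], Sigma[d_h:, :d_h]
    return mu_r + S_rh @ np.linalg.solve(S_hh, w_h - mu_h)

rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 2))              # shared demo-to-demo variation
demos_h = latent + 0.05 * rng.normal(size=(20, 2))
demos_r = 2.0 * latent + 0.05 * rng.normal(size=(20, 2))
mu, Sigma = fit_joint(demos_h, demos_r)
print(condition_on_human(mu, Sigma, np.array([0.5, -0.2]), d_h=2))
```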
Contributors: Kumar, Ashish, M.S. (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Computer Vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation, some novel approaches to representation learning and task learning are studied. Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning approach is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem. For representation learning, the generality of neural features is investigated first. This investigation leads to some critical understanding of, and results on, feature generality among datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks.
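The flavor of k-NN-based multiple-instance learning can be sketched with a minimal bag-to-bag distance and a majority vote among the nearest training bags; this is a generic illustration of the approach family, with toy data, not the dissertation's proposed algorithm.

```python
# Sketch of k-NN multiple-instance learning: bags of instances are
# compared with a minimal-Hausdorff-style distance, then classified by a
# majority vote among the k nearest training bags. Generic illustration
# with toy data, not the dissertation's specific algorithm.
import numpy as np

def bag_distance(A, B):
    """Minimum Euclidean distance over all instance pairs of bags A, B."""
    diffs = A[:, None, :] - B[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).min()

def knn_mil_predict(train_bags, train_labels, bag, k=3):
    d = [bag_distance(bag, tb) for tb in train_bags]
    nearest = np.argsort(d)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)   # majority label

rng = np.random.default_rng(1)
pos = [rng.normal(3.0, 0.5, size=(4, 2)) for _ in range(5)]   # positive bags
neg = [rng.normal(0.0, 0.5, size=(4, 2)) for _ in range(5)]   # negative bags
bags, labels = pos + neg, [1] * 5 + [0] * 5
print(knn_mil_predict(bags, labels, rng.normal(3.0, 0.5, size=(4, 2))))
```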
Contributors: Venkatesan, Ragav (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
With the rise of the Big Data Era, an exponential amount of network data is being generated at an unprecedented rate across a wide range of high-impact micro and macro areas of research---from protein interaction to social networks. The critical challenge is translating this large-scale network data into actionable information.

A key task in this data translation is the analysis of network connectivity via marked nodes---the primary focus of our research. We have developed a framework for analyzing network connectivity via marked nodes in large-scale graphs, utilizing novel algorithms in three interrelated areas: (1) analysis of a single seed node via its ego-centric network (AttriPart algorithm); (2) pathway identification between two seed nodes (K-Simple Shortest Paths Multithreaded and Search Reduced (KSSPR) algorithm); and (3) tree detection, defining the interaction between three or more seed nodes (Shortest Path MST algorithm).

In an effort to address both fundamental and applied research issues, we have developed the LocalForecasting algorithm to explore how network connectivity analysis can be applied to local community evolution and recommender systems. The goal is to apply the LocalForecasting algorithm to various domains---e.g., friend suggestions in social networks or future collaborations in co-authorship networks. This algorithm utilizes link prediction in combination with the AttriPart algorithm to predict future connections in local graph partitions.

Results show that our proposed AttriPart algorithm finds up to 1.6x denser local partitions while running approximately 43x faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm demonstrates a significant improvement over baseline methods in the number of nodes and edges correctly predicted. Furthermore, results for the KSSPR algorithm demonstrate a speed-up of up to 2.5x over the standard k-simple shortest paths algorithm.
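For reference, the baseline that KSSPR accelerates, enumerating the k simple shortest paths between two seed nodes, can be written in a few lines with NetworkX; the toy graph below is invented, and this is the standard primitive, not the KSSPR algorithm itself.

```python
# Baseline pathway identification between two seed nodes: enumerate the
# k simple shortest paths with NetworkX. This is the standard primitive
# that KSSPR accelerates, not KSSPR itself; the graph is a toy example.
from itertools import islice
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "d"), ("a", "c"),
                  ("c", "d"), ("b", "c"), ("a", "d")])

def k_shortest_paths(G, source, target, k):
    # shortest_simple_paths yields simple paths in order of length.
    return list(islice(nx.shortest_simple_paths(G, source, target), k))

for path in k_shortest_paths(G, "a", "d", k=3):
    print(path)   # shortest simple paths from seed 'a' to seed 'd'
```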
Contributors: Freitas, Scott (Author) / Tong, Hanghang (Thesis advisor) / Maciejewski, Ross (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired external factors of variation, and it should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.

The feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires the knowledge of domain experts and manual labor; however, the feature extraction process is interpretable and explainable. The next group contains the latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be directly measured from the input. Latent features impose a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features. These are obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.
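As a concrete instance of the latent-feature group, the sketch below uses plain PCA to project raw samples onto a low-dimensional subspace; PCA is chosen here as the simplest member of that category, not as a method from this dissertation.

```python
# Latent-feature extraction in its simplest form: PCA projects raw
# high-dimensional samples onto the top principal directions. Chosen as
# the most basic example of the category, not a method from this work.
import numpy as np

def pca_features(X, n_components=2):
    """Rows of X are samples; returns their coordinates along the top
    n_components principal directions."""
    Xc = X - X.mean(axis=0)                     # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # project onto the subspace

rng = np.random.default_rng(0)
# 100 samples in 20-D that actually vary along only 2 latent directions.
basis = rng.normal(size=(2, 20))
X = rng.normal(size=(100, 2)) @ basis + 0.01 * rng.normal(size=(100, 20))
Z = pca_features(X, n_components=2)
print(Z.shape)   # (100, 2) latent representation
```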

In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aestheticism. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For the latter two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance for most visual computing tasks.
Contributors: Chandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017