Matching Items (55)
Description
A statement appearing in social media poses a significant challenge for determining its provenance. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale to online social media; consequently, there is a gap in provenance methodologies and technologies that presents exciting research opportunities. The guiding vision is the use of social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information that is not readily made available to the average social media user. This research introduces an approach and builds a foundation aimed at realizing a provenance data capability for social media users that is not accessible today.
Contributors: Barbier, Geoffrey P (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this can be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method which can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, which is the key to this approach, is the process of analyzing the necessity of each element in a solution. The post-optimization algorithm presented here uses the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from the solution. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be represented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previously best known results, but it can also be added as a refinement step to any construction technique, and in most cases further improvements are expected. The post-optimization algorithm is then modified to accommodate every k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. This generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored, which can further improve the quality of the solution.
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
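
To make the necessity-analysis idea concrete, here is a minimal sketch for binary covering arrays of strength t: a row is unnecessary if every t-way interaction it covers also appears in some other row, so it can be deleted without breaking coverage. The greedy removal order and function names are illustrative assumptions, not the dissertation's algorithm.

```python
from itertools import combinations

def covered_by(row, t):
    """All t-way interactions (column tuple, symbol tuple) covered by a single row."""
    return {(cols, tuple(row[c] for c in cols))
            for cols in combinations(range(len(row)), t)}

def post_optimize(rows, t):
    """Greedily drop any row whose t-way interactions are all covered by other rows."""
    rows = [list(r) for r in rows]
    changed = True
    while changed:
        changed = False
        for i in range(len(rows)):
            others = set()
            for j, r in enumerate(rows):
                if j != i:
                    others |= covered_by(r, t)
            if covered_by(rows[i], t) <= others:   # row i is unnecessary
                del rows[i]
                changed = True
                break                              # re-scan after each removal
    return rows

# A strength-2 binary covering array on 3 columns with two redundant rows appended.
ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 1]]
print(len(post_optimize(ca, 2)))   # 4: both redundant rows are eliminated
```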
Description
This dissertation studies routing in small-world networks such as grids plus long-range edges and real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency first observed by Milgram. A counterpart for the grid-based model is provided; it creates all edges deterministically and shows an asymptotically matching upper bound on the route length. The main goal is to improve greedy routing through a decentralized machine learning process. The two methods considered are based on weighted majority and an algorithm of de Farias and Megiddo, both of which learn from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of the method and that of the shortest path. Adaptive routing overtakes greedy routing after about a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment of a third to a half of that incurred by greedy routing. The problem is made more difficult by eliminating the knowledge of neighbors' locations or by introducing uncooperative nodes. Even under these conditions, the learned routes are usually better than the greedy routes. The second part of the dissertation is related to the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping communities), where each node locally decides to which potential communities it belongs. To measure the quality of a cover of overlapping communities, a notion of a node's contribution to modularity is introduced, and subsequently the notion of modularity is extended from partitions to covers. The final part considers a problem of network anonymization, mostly by means of edge deletion. The point of interest is utility preservation. It is shown that a concentration on the preservation of routing abilities might damage the preservation of community structure, and vice versa.
Contributors: Bakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created: 2011
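
The greedy geographic routing that serves as the baseline above can be sketched directly: each node forwards the message to whichever neighbour (grid or long-range) is closest to the target in Manhattan distance. A minimal sketch follows, assuming a Kleinberg-style randomized grid; the dissertation's deterministic edge construction and the learning-based improvements are not reproduced here.

```python
import random

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def build_small_world(n, seed=0):
    """n x n grid; each node also gets one long-range contact drawn with prob ~ d^(-2)."""
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(n)]
    nbrs = {u: [] for u in nodes}
    for u in nodes:
        i, j = u
        for v in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
            if v in nbrs:
                nbrs[u].append(v)
        others = [v for v in nodes if v != u]
        weights = [manhattan(u, v) ** -2 for v in others]   # Kleinberg-style distance exponent
        nbrs[u].append(rng.choices(others, weights=weights)[0])
    return nbrs

def greedy_route(nbrs, src, dst):
    """Forward to the neighbour closest to the destination until it is reached."""
    path = [src]
    while path[-1] != dst:
        path.append(min(nbrs[path[-1]], key=lambda v: manhattan(v, dst)))
    return path

nbrs = build_small_world(20)
print(len(greedy_route(nbrs, (0, 0), (19, 19))) - 1, "hops")
```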
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from techniques for building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013
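
The propositional stable model semantics that the dissertation generalizes can be made concrete with a brute-force checker: an interpretation is a stable model of a ground normal program exactly when it equals the least model of the Gelfond-Lifschitz reduct of the program with respect to it. The sketch below is only an illustration of that base semantics, assuming rules of the form head :- positive body, negated body; it does not capture the generalized quantifiers or intensional functions introduced in the dissertation.

```python
from itertools import chain, combinations

# A ground rule "p :- q, not r." is written ('p', {'q'}, {'r'}).
def reduct(program, interp):
    """Gelfond-Lifschitz reduct: delete rules whose negative body meets interp,
    then delete the negative bodies of the remaining rules."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & interp)]

def least_model(positive_program):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(program):
    atoms = sorted(set(chain.from_iterable({h} | pos | neg for h, pos, neg in program)))
    for r in range(len(atoms) + 1):
        for subset in combinations(atoms, r):
            interp = set(subset)
            if least_model(reduct(program, interp)) == interp:
                yield interp

# p :- not q.    q :- not p.
prog = [('p', set(), {'q'}), ('q', set(), {'p'})]
print(list(stable_models(prog)))   # [{'p'}, {'q'}]
```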
Description
Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), in which software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven spatio-temporal aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties.
Contributors: Banerjee, Ayan (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Poovendran, Radha (Committee member) / Fainekos, Georgios (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
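
A hybrid automaton of the kind underlying STHA pairs discrete modes with continuous dynamics and switches modes when guards fire. The toy simulation below (an invented two-mode "infuse"/"hold" controller with made-up rates, thresholds, and safety bound) only illustrates that style of model and a trajectory-level safety check; it is not the spatio-temporal formalism or the analysis algorithms of the dissertation.

```python
def simulate(t_end=60.0, dt=0.01, high=8.0, low=2.0):
    """Simulate a two-mode hybrid automaton and record its trajectory."""
    mode, level, t = "infuse", 0.0, 0.0
    trace = []
    while t < t_end:
        rate = 0.5 if mode == "infuse" else -0.3     # continuous dynamics of the active mode
        level += rate * dt                           # forward-Euler integration step
        if mode == "infuse" and level >= high:       # guard: stop infusing at the high mark
            mode = "hold"
        elif mode == "hold" and level <= low:        # guard: resume infusing at the low mark
            mode = "infuse"
        trace.append((t, mode, level))
        t += dt
    return trace

trace = simulate()
safe = all(level <= 10.0 for _, _, level in trace)   # safety property: level never exceeds 10
print("safety bound respected:", safe)
```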
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient design of networks. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and the region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
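
One hedged reading of the region-based largest component size (RBLCS): given a set of candidate fault regions, each a set of nodes assumed to fail together, take the size of the largest component that survives each region's failure and report the worst case over regions. The sketch below computes that quantity for a small ring; the precise definitions in the dissertation may differ in detail.

```python
from collections import defaultdict

def components(nodes, edges):
    """Connected components of the graph induced on `nodes` (simple DFS)."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v); adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); seen.add(u)
            stack.extend(adj[u] - comp)
        comps.append(comp)
    return comps

def rblcs(nodes, edges, regions):
    """Worst case (minimum over regions) of the largest surviving component size."""
    return min(max(len(c) for c in components(nodes - set(r), edges))
               for r in regions)

nodes = {1, 2, 3, 4, 5, 6}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]   # a 6-cycle
regions = [{1, 2}, {3, 4}, {2, 5}]                         # spatially correlated failures
print(rblcs(nodes, edges, regions))                        # 2: region {2, 5} splits the ring
```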
Description
While network problems have traditionally been addressed using a central administrative domain with a single objective, the devices in most networks are actually not owned by a single entity but by many individual entities. These entities make their decisions independently and selfishly, and may cooperate with a small group of other entities only when this form of coalition yields a better return. The interaction among multiple independent decision-makers necessitates the use of game theory, including economic notions related to markets and incentives. In this dissertation, we are interested in modeling, analyzing, and addressing network problems caused by the selfish behavior of network entities. First, we study how the selfish behavior of network entities affects system performance while users are competing for limited resources. For this resource allocation domain, we aim to study the selfish routing problem in networks with fair queuing on links, the relay assignment problem in cooperative networks, and the channel allocation problem in wireless networks. Another important aspect of this dissertation is the design of efficient mechanisms to incentivize network entities to achieve certain system objectives. For this incentive mechanism domain, we aim to motivate wireless devices to serve as relays for cooperative communication, and to recruit smartphones for crowdsourcing. In addition, we apply different game-theoretic approaches to problems in the security and privacy domain. For this domain, we aim to analyze how a user could defend against a smart jammer, which can quickly learn the user's transmission power. We also design mechanisms to encourage mobile phone users to participate in location privacy protection, in order to achieve k-anonymity.
Contributors: Yang, Dejun (Author) / Xue, Guoliang (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
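
How selfish behavior can degrade system performance is often illustrated with a simple congestion game (a Pigou-style two-link network, used here as an illustrative stand-in rather than any model from the dissertation): players repeatedly best-respond between a congestible link and a constant-latency link, and the resulting equilibrium is costlier than the socially optimal split.

```python
def congestible_latency(load, n):
    """Link 0: per-user latency grows with congestion (load/n). Link 1: constant latency 1."""
    return load / n

def best_response_dynamics(n=10, rounds=100):
    """Players repeatedly switch to whichever link is cheaper for them (ties keep link 0)."""
    choice = [1] * n                    # everyone starts on the constant-latency link
    for _ in range(rounds):
        stable = True
        for i in range(n):
            load0 = sum(1 for c in choice if c == 0)
            would_be = load0 if choice[i] == 0 else load0 + 1
            best = 0 if congestible_latency(would_be, n) <= 1.0 else 1
            if best != choice[i]:
                choice[i], stable = best, False
        if stable:
            break
    return choice

def social_cost(load0, n):
    return load0 * congestible_latency(load0, n) + (n - load0) * 1.0

n = 10
eq_load0 = best_response_dynamics(n).count(0)
opt_cost = min(social_cost(k, n) for k in range(n + 1))
# Selfish play drives everyone onto the congestible link (cost 10.0); the optimum is 7.5.
print("selfish equilibrium cost:", social_cost(eq_load0, n), "vs optimum:", opt_cost)
```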
Description
Mobile ad hoc networks (MANETs) have attracted attention for mission-critical applications. This dissertation investigates techniques of statistical monitoring and control for overhead reduction in a proactive MANET routing protocol. Proactive protocols transmit overhead periodically; instead, we propose that the local conditions of a node should determine this transmission decision. While the goal is to minimize overhead, a balance between the amount of overhead transmitted and the performance achieved is required. Statistical monitoring consists of techniques to determine if a characteristic has shifted away from an in-control state. A basic tool for monitoring is a control chart, a time-oriented representation of the characteristic. When a sample deviates outside control limits, a significant change has occurred and corrective actions are required to return to the in-control state. We investigate the use of statistical monitoring of local conditions in the Optimized Link State Routing (OLSR) protocol. Three versions are developed. In A-OLSR, each node uses a Shewhart chart to monitor the betweenness of its two-hop neighbourhood. Betweenness is a social network metric that measures a node's influence; betweenness is larger when a node has more influence. Changes in topology are associated with changes in betweenness. In A+-OLSR, we incorporate additional local node conditions, including speed, density, packet arrival rate, and the number of flows a node forwards. Response Surface Methodology (RSM) is used to optimize timer values. As well, the Shewhart chart is replaced by an Exponentially Weighted Moving Average (EWMA) chart, which is more sensitive to small changes in the characteristic. It is known that control charts do not work as well in the presence of correlation. Hence, in A*-OLSR the autocorrelation in the time series is removed and an Auto-Regressive Integrated Moving Average (ARIMA) model is found; this removes the dependence on node speed. A*-OLSR also extends monitoring to two characteristics concurrently using multivariate cumulative sum (MCUSUM) charts. The protocols are evaluated in simulation and compared to OLSR and its variants. The techniques for statistical monitoring and control are general and have great potential to be applied to the adaptive control of many network protocols.
Contributors: Shaukat, Kahkashan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Colbourn, Charles J. (Committee member) / Montgomery, Douglas C. (Committee member) / Sarjoughian, Hessam S. (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
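
The control-chart machinery is concrete enough to sketch. Below, a node-local characteristic (synthetic numbers standing in for two-hop betweenness samples) is monitored with a 3-sigma Shewhart chart and an EWMA chart; a point outside the control limits signals a shift that, in the protocol, would feed the transmission decision. The limits and smoothing weight are textbook defaults, not the RSM-tuned values of the dissertation.

```python
from statistics import mean, stdev

def shewhart(samples, baseline, k=3.0):
    """Flag samples outside baseline mean +/- k*sigma (classic 3-sigma Shewhart chart)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(samples) if abs(x - mu) > k * sigma]

def ewma(samples, baseline, lam=0.2, k=3.0):
    """EWMA chart: z_i = lam*x_i + (1-lam)*z_{i-1}; flag when z leaves its control limits."""
    mu, sigma = mean(baseline), stdev(baseline)
    limit = k * sigma * (lam / (2 - lam)) ** 0.5   # asymptotic EWMA control limit
    z, alarms = mu, []
    for i, x in enumerate(samples):
        z = lam * x + (1 - lam) * z
        if abs(z - mu) > limit:
            alarms.append(i)
    return alarms

baseline = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]   # in-control samples
samples  = [10.0, 10.1, 10.2, 10.3, 10.4, 10.5, 10.6]      # small sustained upward drift
print("Shewhart alarms:", shewhart(samples, baseline))     # []: the small shift stays in-limits
print("EWMA alarms:    ", ewma(samples, baseline))         # [5, 6]: EWMA accumulates the drift
```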
Description
Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access protocols which are robust against strong adversarial jamming. More specifically, four medium access control (MAC) protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activities under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network in the presence of a strong adversary. Our protocols can not only deal with internal interference without exact knowledge of the number of participants in the network, but they are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary which can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. We allow the adversary to be adaptive and to have complete knowledge of the entire protocol history. Moreover, in case the adversary is also reactive, it uses carrier sensing to make informed decisions to disrupt communications. Among the proposed protocols, JADE, ANTIJAM, and COMAC are able to achieve Θ(1)-competitive throughput in the presence of the strong adversary, while SINRMAC is the first attempt to apply the SINR (Signal to Interference plus Noise Ratio) model to the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; as an exemplary study, we present SELECT for leader election, one of the most fundamental tasks in distributed computing.
Contributors: Zhang, Jin (Author) / Richa, Andréa W. (Thesis advisor) / Scheideler, Christian (Committee member) / Sen, Arunabha (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
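
The throughput notion being competed for can be illustrated with a toy slotted simulation: a jammer blocks roughly a (1 − ε) fraction of slots, each node transmits with a fixed small probability, and a slot is useful only if it is unjammed and exactly one node transmits. The jammer below is oblivious and random, and the fixed access probability is an assumption; the adaptive/reactive adversary and the probability-adaptation mechanisms of JADE, ANTIJAM, COMAC, and SINRMAC are not modeled here.

```python
import random

def simulate(n_nodes=20, slots=100_000, eps=0.25, seed=1):
    """Fraction of useful slots in slotted random access against heavy jamming.

    A slot is useful if it is unjammed and exactly one node transmits in it.
    """
    rng = random.Random(seed)
    p = 1.0 / n_nodes                                  # fixed per-node access probability
    useful = 0
    for _ in range(slots):
        jammed = rng.random() < (1 - eps)              # jammer blocks ~ (1 - eps) of the slots
        transmitters = sum(rng.random() < p for _ in range(n_nodes))
        if not jammed and transmitters == 1:
            useful += 1
    return useful / slots

thr = simulate()
print(f"useful-slot fraction: {thr:.3f}  (cannot exceed eps = 0.25)")
```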
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that is comprised mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
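
The WCET-style adaptation amounts, at its core, to a longest-path computation over a control-flow graph whose blocks carry monetary costs (per-request or per-GB charges, say) instead of cycle counts. The sketch below finds the worst-case cost path in an invented acyclic CFG and reports it when it exceeds a threshold; the graph, costs, and threshold are illustrative assumptions, not the dissertation's algorithm or data.

```python
# A toy acyclic control-flow graph: block -> successors, with a monetary cost per block
# (e.g., dollars charged for the storage/API calls the block performs).
cfg  = {"entry": ["read", "cached"], "read": ["transform"], "cached": ["transform"],
        "transform": ["write", "skip"], "write": ["exit"], "skip": ["exit"], "exit": []}
cost = {"entry": 0.0, "read": 0.004, "cached": 0.0005, "transform": 0.001,
        "write": 0.005, "skip": 0.0, "exit": 0.0}

def worst_case(block):
    """(worst-case cost, path) from `block` to exit, by recursion over the acyclic CFG."""
    if not cfg[block]:
        return cost[block], [block]
    best_cost, best_path = max(worst_case(s) for s in cfg[block])
    return cost[block] + best_cost, [block] + best_path

wc_cost, wc_path = worst_case("entry")
threshold = 0.008
if wc_cost > threshold:
    print(f"worst-case cost ${wc_cost:.4f} exceeds ${threshold} on path {' -> '.join(wc_path)}")
```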