Matching Items (20)
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from work on propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
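
To make the propositional baseline concrete, here is a minimal sketch (not code from the dissertation) that computes the stable models of a tiny ground program by guess-and-check over the Gelfond-Lifschitz reduct, i.e., the classical semantics that the first-order framework above generalizes. The two-rule program is purely illustrative.

```python
from itertools import chain, combinations

# A program is a list of rules (head, positive_body, negative_body),
# e.g. ("p", ["q"], ["r"]) encodes  p :- q, not r.
PROGRAM = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
]

def atoms(program):
    return set(chain.from_iterable([h] + pb + nb for h, pb, nb in program))

def least_model(positive_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def stable_models(program):
    """Guess-and-check: X is stable iff X is the least model of the
    Gelfond-Lifschitz reduct of the program w.r.t. X."""
    universe = sorted(atoms(program))
    candidates = chain.from_iterable(
        combinations(universe, k) for k in range(len(universe) + 1))
    for bits in candidates:
        candidate = set(bits)
        # Reduct: drop rules whose negative body intersects the candidate,
        # then delete the remaining negative literals.
        reduct = [(h, pb, []) for h, pb, nb in program
                  if not any(a in candidate for a in nb)]
        if least_model(reduct) == candidate:
            yield candidate

print(list(stable_models(PROGRAM)))  # [{'p'}, {'q'}]: two stable models

```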
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), where software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software, and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven spatio-temporal aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties.
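
As rough intuition for the modeling and analysis phases, the sketch below simulates a toy thermostat hybrid automaton (two modes with guarded switches) and checks a safety envelope over the simulated trace. The dynamics, constants, and envelope are invented for illustration; the STHA formalism described above is far richer than this toy.

```python
# Simulation-based safety analysis for a toy hybrid automaton (thermostat).
# All constants are illustrative, not drawn from the thesis.

DT, HORIZON = 0.01, 20.0          # integration step and time horizon (s)
SAFE_LOW, SAFE_HIGH = 17.0, 24.0  # assumed safety envelope (deg C)

def flow(mode, temp):
    """Continuous dynamics per discrete mode (Euler integration)."""
    return temp + DT * (1.5 if mode == "heat" else -0.8)

def jump(mode, temp):
    """Guarded mode switches: the discrete part of the automaton."""
    if mode == "heat" and temp >= 22.0:
        return "cool"
    if mode == "cool" and temp <= 18.0:
        return "heat"
    return mode

def simulate(mode="heat", temp=18.0):
    t, trace = 0.0, []
    while t < HORIZON:
        trace.append((t, mode, temp))
        temp = flow(mode, temp)
        mode = jump(mode, temp)
        t += DT
    return trace

# Safety check over the trace: temperature must stay inside the envelope.
violations = [(t, x) for t, _, x in simulate() if not SAFE_LOW <= x <= SAFE_HIGH]
print("safe" if not violations else f"violated at t={violations[0][0]:.2f}s")

```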
Contributors: Banerjee, Ayan (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Poovendran, Radha (Committee member) / Fainekos, Georgios (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
A storage system requiring file redundancy and on-line repairability can be represented as a Steiner system, a combinatorial design with the property that every t-subset of its points occurs in exactly one of its blocks. Under this representation, files are the points and storage units are the blocks of the Steiner system, or vice versa. Often, the popularities of the files of such storage systems run the gamut, with some files receiving hardly any attention and others receiving most of it. For such systems, minimizing the difference in collective popularity between any two storage units is nontrivial; this is the access balancing problem. With regard to the representative Steiner system, the access balancing problem in its simplest form amounts to constructing either a point or block labelling: an assignment of a set of integer labels (popularity ranks) to the Steiner system's point set or block set, respectively. The former assignment requires that the label sums of any two blocks differ as little as possible, and the latter that the sums of the labels assigned to the blocks containing any two distinct points differ as little as possible. The central aim of this dissertation is to supply point and block labellings for Steiner systems of block size greater than three, for which, up to this point, no attempt has been made. Four major results are given in this connection. First, motivated by the close connection between the size of the independent sets of a Steiner system and the quality of its labellings, a Steiner triple system of any admissible order is constructed with a pair of disjoint independent sets of maximum cardinality. Second, the spectrum of resolvable Bose triple systems is determined in order to label some Steiner 2-designs with block size four. Third, several kinds of independent sets are used to point-label Steiner 2-designs with block size four. Finally, optimal and close-to-optimal block labellings are given for an infinite class of 1-rotational resolvable Steiner 2-designs with arbitrarily large block size by exploiting their underlying group-theoretic properties.
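
For a concrete instance of the objects involved, the sketch below builds the Fano plane (the Steiner triple system of order 7), verifies the defining property that every pair of points lies in exactly one block, and evaluates the block label sums of a point labelling, the quantity that access balancing asks to equalize. The identity labelling is illustrative only, not one of the dissertation's constructions.

```python
from itertools import combinations

# Fano plane: the unique Steiner triple system of order 7, obtained by
# developing the base block {0, 1, 3} (a perfect difference set) mod 7.
points = range(7)
blocks = [frozenset((i + s) % 7 for s in (0, 1, 3)) for i in points]

# Steiner property: every 2-subset of points lies in exactly one block.
for pair in combinations(points, 2):
    assert sum(set(pair) <= b for b in blocks) == 1

# Access balancing view: a point labelling assigns popularity ranks to
# points; the quality measure is the spread of the block label sums.
labelling = {p: p + 1 for p in points}   # identity labelling, for illustration
sums = sorted(sum(labelling[p] for p in b) for b in blocks)
print("block sums:", sums, "spread =", sums[-1] - sums[0])

```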
Contributors: Lusi, Dylan (Author) / Colbourn, Charles J (Thesis advisor) / Czygrinow, Andrzej (Committee member) / Fainekos, Georgios (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2021

Description
Autonomous Vehicles (AVs) are inevitable entities in future mobility systems that demand safety and adaptability as two critical factors in replacing/assisting human drivers. Safety arises in defining, standardizing, quantifying, and monitoring requirements for all autonomous components. Adaptability, on the other hand, involves efficient handling of uncertainty and inconsistencies in models and data. First, I address safety by presenting a search-based test-case generation framework that can be used in training and testing deep-learning components of AVs. Next, to address adaptability, I propose a framework based on multi-valued linear temporal logic syntax and semantics that allows autonomous agents to perform model checking on systems with uncertainties. The search-based test-case generation framework provides safety assurance guarantees through formalizing and monitoring Responsibility Sensitive Safety (RSS) rules. I use the RSS rules in signal temporal logic as qualification specifications for monitoring and screening the quality of generated test-drive scenarios. Furthermore, to extend the expressivity of existing temporal formal languages, I propose a new spatio-temporal perception logic that enables formalizing qualification specifications for perception systems. Altogether, my test-generation framework can be used for reasoning about the quality of perception, prediction, and decision-making components in AVs. Finally, my efforts resulted in publicly available software. One is an offline monitoring algorithm based on the proposed logic to reason about the quality of perception systems. The other is an optimal planner (model checker) that accepts mission specifications and model descriptions in the form of multi-valued logic and multi-valued sets, respectively. My monitoring framework is distributed with the publicly available S-TaLiRo and Sim-ATAV tools.
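
As a minimal illustration of offline monitoring against an RSS-flavoured requirement, the sketch below computes the robustness of "the gap to the lead vehicle always stays above d_min" over a recorded trace. The trace and d_min are hypothetical, and real monitors such as S-TaLiRo handle full signal temporal logic rather than this single pattern.

```python
# Minimal offline monitor for the STL-style requirement
# "globally, the gap to the lead vehicle stays above d_min".
# Trace and threshold are made up for illustration.

D_MIN = 5.0  # assumed minimum safe gap in meters

def robustness_always_ge(trace, threshold):
    """Robustness of G(x >= threshold): the worst-case margin over the
    trace. Positive means satisfied with that much slack; negative means
    falsified by that much."""
    return min(x - threshold for x in trace)

gap_trace = [12.0, 9.5, 7.2, 6.1, 5.8, 6.4, 8.0]   # hypothetical gaps (m)
rho = robustness_always_ge(gap_trace, D_MIN)
print(f"robustness = {rho:+.2f} m ->", "satisfied" if rho > 0 else "falsified")

```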
Contributors: Hekmatnejad, Mohammad (Author) / Fainekos, Georgios (Thesis advisor) / Deshmukh, Jyotirmoy V (Committee member) / Karam, Lina (Committee member) / Pedrielli, Giulia (Committee member) / Shrivastava, Aviral (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021

Description
Many real-world engineering problems require simulations to evaluate the design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcome this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, in which empirical approximations of the output are performed given the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs have achieved excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand for the inference phase of deep NNs in cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized, placed, and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2X more energy-efficient and 3.6X more area-efficient than the baseline LSTM. In the second part of this work, a robust framework is developed based on an alternate data-driven surrogate model referred to as polynomial chaos expansion (PCE) for addressing UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. This framework is evaluated on several real-world applications from different domains and is extended for classification tasks as well.
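
A minimal sketch of the PCE idea in one dimension, assuming a standard normal input: fit probabilists' Hermite coefficients by least squares, then read the output mean and variance directly off the coefficients (using E[He_m He_n] = n! for m = n, 0 otherwise), cross-checked against plain Monte Carlo. The target function is a stand-in for an expensive simulation and is not from the dissertation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(0)
f = lambda x: np.exp(0.3 * x) + 0.5 * x**2       # stand-in "simulation"

degree, n_samples = 6, 2000
x = rng.standard_normal(n_samples)               # input with assumed N(0,1) law
V = hermevander(x, degree)                       # columns: He_0(x) .. He_6(x)
coeffs, *_ = np.linalg.lstsq(V, f(x), rcond=None)

mean = coeffs[0]                                 # the He_0 term is the mean
variance = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"PCE mean ~ {mean:.3f}, PCE variance ~ {variance:.3f}")

# Cross-check against Monte Carlo on fresh samples.
y = f(rng.standard_normal(100_000))
print(f"MC  mean ~ {y.mean():.3f}, MC  variance ~ {y.var():.3f}")

```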
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021

Description
Recent advances in Artificial Intelligence (AI) have brought AI closer to laypeople than ever before. This leads to a pervasive problem: how would a user ascertain whether an AI system will be safe, reliable, or useful in a given situation? This problem becomes particularly challenging when it is considered that most autonomous systems are not designed by their users; the internal software of these systems may be unavailable or difficult to understand; and the functionality of these systems may even change from initial specifications as a result of learning. To overcome these challenges, this dissertation proposes a paradigm for third-party autonomous assessment of black-box taskable AI systems. The four main desiderata of such assessment systems are: (i) interpretability: generating a description of the AI system's functionality in a language that the target user can understand; (ii) correctness: ensuring that the description of the AI system's workings is accurate; (iii) generalizability: creating a solution approach that works well for different types of AI systems; and (iv) minimal requirements: creating an assessment system that does not place complex requirements on AI systems to support the third-party assessment, as otherwise the manufacturers of AI systems might not support such an assessment. To satisfy these properties, this dissertation presents algorithms and requirements that enable user-aligned autonomous assessment, helping the user understand the limits of a black-box AI system's safe operability. This dissertation proposes a personalized AI assessment module that discovers the high-level "capabilities" of an AI system with arbitrary internal planning algorithms/policies and learns an accurate symbolic description of these capabilities in terms of concepts that a user understands. Furthermore, the dissertation includes the associated theoretical results and empirical evaluations. The results show that (i) a primitive query-response interface can enable the development of autonomous assessment modules that can efficiently derive a causally accurate, user-interpretable model of the system's capabilities, and (ii) such descriptions are easier for users to understand and reason with than the agent's primitive actions.
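
As a toy rendering of the query-response idea, assuming an agent that can answer "can you execute this action in this state?" queries over known boolean fluents, the sketch below infers a purely conjunctive action precondition by flipping one fluent at a time. Every name here is hypothetical, and the dissertation's algorithms handle far more general capability descriptions than this.

```python
# Toy assessment via a primitive query-response interface: the assessor
# never reads the agent's code, only poses executability queries.

FLUENTS = ["door_open", "has_key", "battery_ok"]

def agent_can_execute(state):
    """Black box: hidden ground truth the assessor cannot inspect."""
    return state["door_open"] and state["battery_ok"]

def infer_precondition(query):
    """A fluent is a precondition iff flipping it off in an otherwise
    permissive state makes the query fail. Only sound for purely
    conjunctive preconditions over the known fluents."""
    base = {f: True for f in FLUENTS}
    assert query(base), "action must be executable in the all-true state"
    return [f for f in FLUENTS if not query({**base, f: False})]

print("learned precondition:", " AND ".join(infer_precondition(agent_can_execute)))
# -> learned precondition: door_open AND battery_ok

```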
Contributors: Verma, Pulkit (Author) / Srivastava, Siddharth (Thesis advisor) / Cooke, Nancy (Committee member) / Fainekos, Georgios (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2024

Description
There has been a vast increase in applications of Unmanned Aerial Vehicles (UAVs) in civilian domains. To operate in the civilian airspace, a UAV must be able to sense and avoid both static and moving obstacles for flight safety. While indoor and low-altitude environments are mainly occupied by static obstacles, risks at higher altitudes primarily come from moving obstacles such as other aircraft or flying vehicles in the airspace. Therefore, the ability to avoid moving obstacles becomes a necessity for Unmanned Aerial Vehicles.

Towards enabling a UAV to autonomously sense and avoid moving obstacles, this thesis makes the following contributions. Initially, an image-based reactive motion planner is developed for a quadrotor to avoid a fast-approaching obstacle. Furthermore, a Dubins-curve-based geometry method is developed as a global path planner for a fixed-wing UAV to avoid collisions with aircraft. The image-based method is unable to produce an optimal path, and the geometry method uses a simplified UAV model. To compensate for these two disadvantages, a series of algorithms built upon the Closed-Loop Rapidly-exploring Random Tree are developed as global path planners to generate collision avoidance paths in real time. The algorithms are validated in Software-In-the-Loop (SITL) and Hardware-In-the-Loop (HIL) simulations using a fixed-wing UAV model, and in real flight experiments using quadrotors. It is observed that the algorithms enable a UAV to avoid moving obstacles approaching it from different directions and at different speeds.
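
For orientation, the sketch below is a bare-bones 2-D RRT with a static circular obstacle, showing only the tree-growing skeleton that closed-loop planners build upon; it does not model vehicle dynamics or moving obstacles, and all coordinates are invented.

```python
import math, random

random.seed(1)
START, GOAL, STEP = (0.0, 0.0), (10.0, 10.0), 0.5
OBSTACLE, RADIUS = (5.0, 5.0), 2.0            # hypothetical obstacle

def collision_free(p):
    return math.dist(p, OBSTACLE) > RADIUS

def steer(src, dst):
    """Move from src toward dst by at most STEP."""
    d = math.dist(src, dst)
    if d <= STEP:
        return dst
    return (src[0] + STEP * (dst[0] - src[0]) / d,
            src[1] + STEP * (dst[1] - src[1]) / d)

parent = {START: None}
for _ in range(5000):
    # Goal-biased sampling: head for the goal 10% of the time.
    sample = GOAL if random.random() < 0.1 else (random.uniform(0, 11),
                                                 random.uniform(0, 11))
    nearest = min(parent, key=lambda n: math.dist(n, sample))
    new = steer(nearest, sample)
    if collision_free(new):
        parent[new] = nearest
        if math.dist(new, GOAL) <= STEP:
            break

# Walk back from the node closest to the goal to recover the path.
node, path = min(parent, key=lambda n: math.dist(n, GOAL)), []
while node is not None:
    path.append(node)
    node = parent[node]
print(f"tree size {len(parent)}, path length {len(path)} waypoints")

```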
Contributors: Lin, Yucong (Author) / Saripalli, Srikanth (Thesis advisor) / Scowen, Paul (Committee member) / Fainekos, Georgios (Committee member) / Thangavelautham, Jekanthan (Committee member) / Youngbull, Cody (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of the programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead for gathering execution information may change the execution of a program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis largely consists of three parts. First, it discusses a deterministic replay framework to provide reproducible execution. Once a program execution is recorded, software instrumentation can be safely applied during replay without probe effect. Second, the probe effect is discussed, and a simulation-based analysis is proposed to detect execution changes of a program caused by instrumentation overhead. The simulation-based analysis examines whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help to remove data races for the correctness of the replay and the simulation-based analysis. The focus is on making the detection efficient for C/C++ programs and on increasing the scalability of the detection on multi-core machines.
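
As a pointer to how such detectors work, the sketch below runs the classic lockset discipline (in the spirit of Eraser) over a synthetic trace: a warning is raised when a shared variable's candidate lockset becomes empty. The trace is fabricated, and the thesis's detectors add the machinery needed for efficiency and multi-core scalability on C/C++ programs.

```python
# Lockset-based race detection sketch: every shared variable must be
# consistently protected by at least one common lock across accesses.

# Trace entries: (thread, op, target), op in {acquire, release, access}.
trace = [
    ("T1", "acquire", "L"), ("T1", "access", "x"), ("T1", "release", "L"),
    ("T2", "acquire", "L"), ("T2", "access", "x"), ("T2", "release", "L"),
    ("T1", "access", "y"),                       # y accessed with no lock...
    ("T2", "acquire", "L"), ("T2", "access", "y"), ("T2", "release", "L"),
]

held = {}        # thread -> set of locks currently held
candidates = {}  # variable -> intersection of locksets at its accesses

for thread, op, target in trace:
    locks = held.setdefault(thread, set())
    if op == "acquire":
        locks.add(target)
    elif op == "release":
        locks.discard(target)
    else:  # access: refine this variable's candidate lockset
        candidates[target] = candidates.get(target, locks.copy()) & locks

for var, lockset in candidates.items():
    if not lockset:
        print(f"potential data race on '{var}' (empty candidate lockset)")

```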
Contributors: Song, Young Wn (Author) / Lee, Yann-Hang (Thesis advisor) / Shrivastava, Aviral (Committee member) / Fainekos, Georgios (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Robotic technology is advancing to the point where it will soon be feasible to deploy massive populations, or swarms, of low-cost autonomous robots to collectively perform tasks over large domains and time scales. Many of these tasks will require the robots to allocate themselves around the boundaries of regions or features of interest and achieve target objectives that derive from their resulting spatial configurations, such as forming a connected communication network or acquiring sensor data around the entire boundary. We refer to this spatial allocation problem as boundary coverage. Possible swarm tasks that will involve boundary coverage include cooperative load manipulation for applications in construction, manufacturing, and disaster response.

In this work, I address the challenges of controlling a swarm of resource-constrained robots to achieve boundary coverage, which I refer to as the problem of stochastic boundary coverage. I first examined an instance of this behavior in the biological phenomenon of group food retrieval by desert ants, and developed a hybrid dynamical system model of this process from experimental data. Subsequently, with the aid of collaborators, I used a continuum abstraction of swarm population dynamics, adapted from a modeling framework used in chemical kinetics, to derive stochastic robot control policies that drive a swarm to target steady-state allocations around multiple boundaries in a way that is robust to environmental variations.

Next, I determined the statistical properties of the random graph that is formed by a group of robots, each with the same capabilities, that have attached to a boundary at random locations. I also computed the probability density functions (pdfs) of the robot positions and inter-robot distances for this case.

I then extended this analysis to cases in which the robots have heterogeneous communication/sensing radii and attach to a boundary according to non-uniform, non-identical pdfs. I proved that these more general coverage strategies generate random graphs whose probability of connectivity is #P-hard to compute. Finally, I investigated possible approaches to validating our boundary coverage strategies in multi-robot simulations with realistic Wi-Fi communication.
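
For the homogeneous case described above, connectivity admits a simple Monte Carlo estimate, assuming a unit-length segment boundary and a common radius r: the induced graph is connected exactly when no gap between neighbouring attachment points exceeds r. The parameters below are illustrative; the heterogeneous-radius case is where exact computation becomes #P-hard.

```python
import random

def connectivity_probability(n_robots, radius, trials=100_000, seed=7):
    """Estimate P(connected) for n robots attached independently and
    uniformly on a length-1 segment with common communication radius."""
    rng = random.Random(seed)
    connected = 0
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(n_robots))
        gaps = (b - a for a, b in zip(xs, xs[1:]))
        connected += all(g <= radius for g in gaps)
    return connected / trials

for r in (0.1, 0.2, 0.3):
    print(f"n=10, r={r}: P(connected) ~ {connectivity_probability(10, r):.3f}")

```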
Contributors: Peruvemba Kumar, Ganesh (Author) / Berman, Spring M (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Bazzi, Rida (Committee member) / Syrotiuk, Violet (Committee member) / Taylor, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
The critical infrastructures of the nation are a large and complex network of human, physical and cyber-physical systems. In recent times, it has become increasingly apparent that individual critical infrastructures, such as the power and communication networks, do not operate in isolation, but instead are part of a complex interdependent ecosystem where a failure involving a small set of network entities can trigger a cascading event resulting in the failure of a much larger set of entities through the failure propagation process.

In recognition of the need for a deeper understanding of the interdependent relationships between such critical infrastructures, several models have been proposed and analyzed in the last few years. However, most of these models are over-simplified and fail to capture the complex interdependencies that may exist between critical infrastructures. To overcome the limitations of existing models, this dissertation presents a new model, the Implicative Interdependency Model (IIM), that is able to capture such complex interdependency relations. As the potential for a failure cascade in critical interdependent networks poses several risks that can jeopardize the nation, this dissertation explores relevant research problems in the interdependent power and communication networks using the proposed IIM, and lays the foundations for further study using this model.
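
To give a flavour of the model, the sketch below propagates failures to a fixpoint under made-up implicative dependency relations of the form "a is operational if (b and c) or d is", the reading the IIM assigns to a relation such as a <- b.c + d. The entities and relations are hypothetical, not drawn from the dissertation.

```python
# Toy cascade simulation in the spirit of the IIM: a dependent entity
# survives only if at least one of its minterms (conjunctions of other
# entities) is fully operational.

dependencies = {
    "a": [{"b", "c"}, {"d"}],   # a <- b.c + d
    "c": [{"e"}],               # c <- e
    "d": [{"e", "f"}],          # d <- e.f
}

def cascade(initial_failures):
    """Propagate failures to a fixpoint; return the set of dead entities."""
    dead = set(initial_failures)
    while True:
        newly_dead = {
            ent for ent, minterms in dependencies.items()
            if ent not in dead and all(m & dead for m in minterms)
        }
        if not newly_dead:
            return dead
        dead |= newly_dead

print(sorted(cascade({"e"})))  # killing e takes down c and d, then a

```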

Apart from exploring problems in interdependent critical infrastructures, this dissertation also explores resource allocation techniques for environments enabled with cyber-physical systems. Specifically, the problem of efficient path planning for data collection using mobile cyber-physical systems is explored. Two such environments are considered: a Radio-Frequency IDentification (RFID) environment with mobile “Tags” and “Readers”, and a sensor data collection environment where both the sensors and the data mules (data collectors) are mobile.

Finally, from an applied research perspective, this dissertation presents Raptor, an advanced network planning and management tool for mitigating the impact of spatially correlated, or region-based, faults on infrastructure networks. Raptor consolidates a wide range of studies conducted in the last few years on region-based faults, and provides an interface for network planners, designers, and operators to use the results of these studies for designing robust and resilient networks in the presence of spatially correlated faults.
Contributors: Das, Arun (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Fainekos, Georgios (Committee member) / Qiao, Chunming (Committee member) / Arizona State University (Publisher)
Created: 2016