Matching Items (3)
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control-flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
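The static analysis sketched in this abstract adapts WCET-style reasoning from time to money. A minimal illustration of that idea in Python follows, assuming an acyclic control-flow graph whose blocks are annotated with estimated per-execution dollar costs; the block names, prices, and function names are invented for illustration and are not the thesis's actual algorithm.

# Hypothetical sketch: worst-case monetary cost over an acyclic control-flow
# graph (CFG).  Per-block costs stand in for cloud charges (priced API calls,
# storage writes, etc.); loops would need iteration bounds, as in WCET analysis.

def worst_case_cost(cfg, block_cost, block):
    """Return (cost, path) for the most expensive path from `block` to an exit."""
    best_cost, best_tail = 0.0, []
    for succ in cfg[block]:
        c, tail = worst_case_cost(cfg, block_cost, succ)
        if c > best_cost:
            best_cost, best_tail = c, tail
    return block_cost[block] + best_cost, [block] + best_tail

def paths_over_threshold(cfg, block_cost, block, threshold, cost=0.0, path=()):
    """Yield complete paths whose accumulated cost exceeds `threshold`."""
    cost += block_cost[block]
    path += (block,)
    if not cfg[block]:                      # exit block reached
        if cost > threshold:
            yield cost, path
        return
    for succ in cfg[block]:
        yield from paths_over_threshold(cfg, block_cost, succ, threshold, cost, path)

if __name__ == "__main__":
    # Illustrative CFG: the entry block branches to a cache hit or a storage read.
    cfg = {"entry": ["read_blob", "cached"], "read_blob": ["transform"],
           "cached": ["transform"], "transform": ["write_blob"], "write_blob": []}
    cost = {"entry": 0.0, "read_blob": 0.004, "cached": 0.0005,
            "transform": 0.001, "write_blob": 0.005}
    print(worst_case_cost(cfg, cost, "entry"))
    for c, p in paths_over_threshold(cfg, cost, "entry", threshold=0.008):
        print(f"${c:.4f}", " -> ".join(p))

In this toy graph the worst-case path goes through the storage read and write, and only that path crosses the $0.008 threshold, which mirrors how the analysis isolates the path sections that contribute most to cost excess.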
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The processing of large volumes of RDF data requires an efficient storage and query processing engine that can scale well with the volume of data. The initial attempts to address this issue focused on optimizing native RDF stores as well as conventional relational database management systems. But as the volume of RDF data grew to exponential proportions, the limitations of these systems became apparent and researchers began to focus on using big data analysis tools, most notably Hadoop, to process RDF data. Various studies and benchmarks that evaluate these tools for RDF data processing have been published. In the past two and a half years, however, heavy users of big data systems, like Facebook, noted limitations with the query performance of these big data systems and began to develop new distributed query engines for big data that do not rely on map-reduce. Facebook's Presto is one such example.

This thesis deals with evaluating the performance of Presto in processing big RDF data against Apache Hive. A comparative analysis was also conducted against 4store, a native RDF store. To evaluate the performance of Presto for big RDF data processing, a map-reduce program and a compiler, based on Flex and Bison, were implemented. The map-reduce program loads RDF data into HDFS, while the compiler translates SPARQL queries into a subset of SQL that Presto (and Hive) can understand. The evaluation was done on four- and eight-node Linux clusters installed on the Microsoft Windows Azure platform with RDF datasets of 10, 20, and 30 million triples. The results of the experiment show that Presto has much higher performance than Hive and can be used to process big RDF data. The thesis also proposes an architecture based on Presto, called Presto-RDF, that can be used to process big RDF data.
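The SPARQL-to-SQL step described above follows a common pattern for running SPARQL on SQL engines: each triple pattern in a basic graph pattern becomes a scan of a single triples(subject, predicate, object) table, and variables shared between patterns become join conditions. A rough Python sketch of that translation follows; the table layout, function name, and the tiny subset of SPARQL handled are assumptions for illustration, whereas the thesis's actual compiler is built with Flex and Bison and covers a larger subset.

# Hypothetical sketch of basic-graph-pattern translation to SQL over a single
# triples(subject, predicate, object) table, a layout assumed in many
# SPARQL-on-SQL systems.

def bgp_to_sql(patterns, select_vars):
    """patterns: list of (s, p, o) terms; strings starting with '?' are variables."""
    froms, wheres, var_cols = [], [], {}
    for i, triple in enumerate(patterns):
        alias = f"t{i}"
        froms.append(f"triples {alias}")
        for col, term in zip(("subject", "predicate", "object"), triple):
            ref = f"{alias}.{col}"
            if term.startswith("?"):
                if term in var_cols:              # variable seen before: join condition
                    wheres.append(f"{ref} = {var_cols[term]}")
                else:                             # first occurrence: remember its column
                    var_cols[term] = ref
            else:                                 # constant IRI or literal
                wheres.append(f"{ref} = '{term}'")
    cols = ", ".join(f"{var_cols[v]} AS {v.lstrip('?')}" for v in select_vars)
    sql = f"SELECT {cols} FROM {', '.join(froms)}"
    return sql + (" WHERE " + " AND ".join(wheres) if wheres else "")

if __name__ == "__main__":
    # SELECT ?name WHERE { ?p rdf:type foaf:Person . ?p foaf:name ?name }
    print(bgp_to_sql([("?p", "rdf:type", "foaf:Person"),
                      ("?p", "foaf:name", "?name")], ["?name"]))

The generated SELECT joins two aliases of the triples table on the shared ?p variable, a flat SQL form that both Presto and Hive can execute, which is what allows the same translated queries to be compared across the two engines.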
Contributors: Mammo, Mulugeta (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

System and software verification is a vital component in the development and reliability of cyber-physical systems, especially in critical domains where the margin of error is minimal. In the case of autonomous driving systems (ADS), the vision perception subsystem is necessary to ensure correct maneuvering through the environment and identification of objects. The challenge posed in perception systems involves verifying the accuracy and rigidity of detections. The use of Spatio-Temporal Perception Logic (STPL) enables the user to express requirements for the perception system to verify, validate, and ensure its behavior; however, a drawback of STPL is its accessibility. It is limited to individuals with expert-level knowledge of temporal and spatial logics, and the formally written requirements become quite verbose as more restrictions are imposed. In this thesis, I propose a domain-specific language (DSL) tailored to Spatio-Temporal Perception Logic that enables non-expert users to capture requirements for perception subsystems while reducing the need for an extensive background in the logic. The DSL for Spatio-Temporal Perception Logic is built upon the formal language with two abstractions. The main abstraction captures simple programming statements that are translated to lower-level STPL expressions accepted by the testing monitor. The STPL DSL provides a seamless interface for writing formal expressions while maintaining the power and expressiveness of STPL. These translated, equivalent expressions can serve as a standard for perception systems, helping to ensure safety and reduce the risks involved in ill-formed detections.
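The two-level idea the abstract describes, a readable statement lowered into a more verbose formal expression, can be shown with a very small sketch. Everything below is an assumption made for illustration: the statement form, the regular-expression parser, and the output notation are placeholders rather than actual STPL or STPL DSL syntax.

# Hypothetical sketch: lower one readable DSL statement into a formal-logic-style
# string.  The output notation is a placeholder, not real STPL.
import re

RULE = re.compile(
    r"always every (?P<cls>\w+) detection has confidence >= (?P<conf>[\d.]+)")

def lower_to_formula(stmt):
    """Translate one DSL statement into a lower-level formula string (illustrative)."""
    m = RULE.fullmatch(stmt.strip())
    if not m:
        raise ValueError(f"unsupported statement: {stmt!r}")
    cls, conf = m.group("cls"), m.group("conf")
    # Placeholder lowering: globally (G), for every detection id, having the given
    # class implies a confidence at or above the bound.
    return f"G (forall id . (class(id) == {cls!r}) -> (confidence(id) >= {conf}))"

if __name__ == "__main__":
    print(lower_to_formula("always every car detection has confidence >= 0.8"))

Even this toy lowering shows the trade-off the thesis targets: the one-line statement stays readable, while the generated formula carries the quantifier and operator detail that a testing monitor needs.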

Contributors: Anderson, Jacob (Author) / Fainekos, Georgios (Thesis director) / Yezhou, Yang (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05