Matching Items (13)

New methodology of automatic design collaboration

Description

When software design teams attempt to collaborate on different design documents, they suffer from a serious collaboration problem. Designers collaborate either in person or remotely. In-person collaboration is expensive but effective; remote collaboration is inexpensive but inefficient. To gain the most benefit from collaboration, remote collaboration must be not only cheap but also as efficient as physical collaboration.

Remote collaboration on software design relies on general-purpose tools such as Word and Excel. These tools are then shared inefficiently by email, cloud-based file-locking tools, or services like Google Docs. Because these tools either increase the number of design building blocks or limit the times at which one can work on a specific document, they drastically decrease productivity.

This thesis outlines a new methodology to increase design productivity, accomplished by providing design-specific collaboration. Using version control systems, this methodology allows effective project collaboration between remotely located design teams. The methodology encompasses role management, policy management, and design artifact management, including nonfunctional requirements. Version control can be used for different design products, improving communication and productivity amongst design teams. This thesis outlines the methodology and then presents a proof-of-concept tool that embodies the core of these principles.
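
As a purely hypothetical sketch of how such policy management might hook into a version control system, the check below validates a JSON design artifact before it is committed. The file layout, field names, and pre-commit usage are invented for illustration and are not the thesis's tool.

```python
import json
import sys

# Hypothetical policy: every design artifact must declare these fields,
# including its nonfunctional requirements, before it may be committed.
REQUIRED_FIELDS = {"component", "owner_role", "nonfunctional_requirements"}

def check_artifact(path):
    """Return a list of policy violations for one JSON design artifact."""
    with open(path) as f:
        artifact = json.load(f)
    missing = REQUIRED_FIELDS - artifact.keys()
    return [f"{path}: missing field '{name}'" for name in sorted(missing)]

if __name__ == "__main__":
    # Intended to run as a version-control pre-commit hook over staged files.
    problems = [p for path in sys.argv[1:] for p in check_artifact(path)]
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```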

Date Created
  • 2016

Covering arrays: generation and post-optimization

Description

Exhaustive testing is generally infeasible except in the smallest of systems. Research has shown that testing the interactions among fewer (up to 6) components is generally sufficient while retaining the capability to detect up to 99% of defects. This leads to a substantial decrease in the number of tests. Covering arrays are combinatorial objects that guarantee that every interaction is tested at least once.

In the absence of direct constructions, forming small covering arrays is generally an expensive computational task. Algorithms to generate covering arrays have been extensively studied, yet no single algorithm provides the smallest solution. More recently, research has been directed towards a new technique called post-optimization. These algorithms take an existing covering array and attempt to reduce its size.

This thesis presents a new idea for post-optimization by representing covering arrays as graphs. Some properties of these graphs are established, and the results are contrasted with existing post-optimization algorithms. The idea is then generalized to close variants of covering arrays, with surprising results that in some cases reduce the size by 30%. Applications of the method to generation and test prioritization are studied, and some interesting results are reported.
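
For intuition about what a covering array guarantees, here is a small verifier, written for this summary rather than taken from the thesis, that checks the pairwise (strength-2) case: every value pair for every column pair must appear in at least one row.

```python
from itertools import combinations, product

def is_pairwise_covering(rows, levels):
    """Check that `rows` covers every 2-way interaction at least once.

    `levels[i]` is the number of values of factor i; each row maps
    factor index -> chosen value in range(levels[i]).
    """
    for i, j in combinations(range(len(levels)), 2):
        required = set(product(range(levels[i]), range(levels[j])))
        seen = {(row[i], row[j]) for row in rows}
        if required - seen:
            return False
    return True

# A classic example: 4 binary factors covered pairwise by only 5 rows
# instead of the 2**4 = 16 rows exhaustive testing would need.
rows = [(0,0,0,0), (0,1,1,1), (1,0,1,1), (1,1,0,1), (1,1,1,0)]
print(is_pairwise_covering(rows, [2, 2, 2, 2]))  # True
```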

Date Created
  • 2015

Consumption in the age of digital plenty: three essays into an emerging phenomenon

Description

The recent changes in the software markets gave users an unprecedented number of alternatives for any given task. In such a competitive environment, it is imperative to understand what drives user behavior. To that end, the research presented in this dissertation tries to uncover the impact of business strategies often used in the software markets.

The dissertation is organized into three distinct studies into user choice and post-choice use of software. First, using social judgment theory as a foundation, the effects of zero-price strategies on user choice are investigated with respect to product features, consumer characteristics, and context effects. Second, the role of social features in moderating network effects on user choice is studied. Finally, the role of social features in the effectiveness of an add-on content strategy for continued user engagement is investigated.

The findings of this dissertation highlight the alignments between popular business strategies and the broad software context. The dissertation contributes to the literature by uncovering hitherto overlooked complementarities between business strategy and product features: (1) a zero-price strategy enhances utilitarian features but not non-utilitarian features in software choice, (2) social features enhance network externalities but not social influence in user choice, and (3) social features enhance the effect of an add-on content strategy in extending software engagement.

Date Created
  • 2016

Generating mixed-level covering arrays of lambda = 2 and test prioritization

Description

In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.

Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact on the overall testing of the system.

This thesis presents three algorithms that generate covering arrays testing the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that, on average, the resulting covering arrays are 40% to 50% smaller than a covering array generated through brute force.
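
The greedy prioritization idea can be sketched independently of the thesis's specific algorithms: repeatedly pick the test that covers the most pairs still short of the desired coverage multiplicity (λ = 2 here). The function below is an illustrative reimplementation of that generic strategy, not the thesis's code.

```python
from itertools import combinations
from collections import Counter

def greedy_prioritize(tests, lam=2):
    """Order tests so earlier ones add the most pairs not yet covered lam times."""
    coverage = Counter()   # (col_i, col_j, val_i, val_j) -> times covered so far
    remaining = list(tests)
    ordered = []
    while remaining:
        def gain(test):
            return sum(1 for i, j in combinations(range(len(test)), 2)
                       if coverage[(i, j, test[i], test[j])] < lam)
        best = max(remaining, key=gain)   # test covering the most needed pairs
        remaining.remove(best)
        ordered.append(best)
        for i, j in combinations(range(len(best)), 2):
            coverage[(i, j, best[i], best[j])] += 1
    return ordered

tests = [(0,0,0), (0,1,1), (1,0,1), (1,1,0), (0,0,1), (1,1,1)]
print(greedy_prioritize(tests))  # highest-gain tests come first
```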

Date Created
  • 2015

Characterization of cost excess in cloud applications

Description

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances.

An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
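
As a rough illustration of the static idea, a worst-case monetary cost can be computed like a WCET bound: take the maximum-cost path through a control flow graph whose blocks have been assigned per-execution prices. The sketch below assumes an acyclic (or loop-bounded) CFG and uses invented block names and costs; it is not the thesis's algorithm.

```python
from functools import lru_cache

def worst_case_cost(successors, block_cost, entry):
    """Max-cost path from `entry` through an acyclic control flow graph."""
    @lru_cache(maxsize=None)
    def cost_from(node):
        nexts = successors.get(node, [])
        tail = max((cost_from(n) for n in nexts), default=0.0)
        return block_cost[node] + tail
    return cost_from(entry)

# Invented example: a branch where one arm calls a priced cloud API.
successors = {"entry": ["cheap", "api_call"], "cheap": ["exit"], "api_call": ["exit"]}
block_cost = {"entry": 0.0, "cheap": 0.0001, "api_call": 0.01, "exit": 0.0}
print(worst_case_cost(successors, block_cost, "entry"))  # 0.01: the API arm dominates
```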

Date Created
  • 2012

E-mail header injections - an analysis of the World Wide Web

Description

E-Mail header injection is a class of vulnerability that can occur in web applications that use user input to construct e-mail messages. E-Mail injection is possible when the mailing script fails to check for the presence of e-mail headers in user input (either form fields or URL parameters). The vulnerability exists in the reference implementation of the built-in “mail” functionality in popular languages like PHP, Java, Python, and Ruby. With the proper injection string, this vulnerability can be exploited to inject additional headers and/or modify existing headers in an e-mail message, allowing an attacker to completely alter the content of the e-mail.
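
The attack hinges on CR/LF characters smuggled into a header value, so a typical defense is to reject or strip them before the message is assembled. The snippet below is a generic illustration of that check, not code from the thesis or from any particular language's mail library.

```python
import re

CRLF = re.compile(r"[\r\n]")

def safe_header_value(value):
    """Refuse user input that could start a new header line."""
    if CRLF.search(value):
        raise ValueError("possible e-mail header injection attempt")
    return value

# A classic injection payload: the second line would become a Bcc header.
payload = "alice@example.com\r\nBcc: victim@example.com"
try:
    safe_header_value(payload)
except ValueError as err:
    print(err)
```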

This thesis develops a scalable mechanism to automatically detect E-Mail Header Injection vulnerabilities and uses this mechanism to quantify their prevalence on the Internet. Using a black-box testing approach, the system crawled 21,675,680 URLs to find URLs that contained form fields. The system found 6,794,917 such forms, of which 1,132,157 contained e-mail fields. The system used this data feed to discern the forms that could be fuzzed with malicious payloads. Among the 934,016 forms tested, 52,724 forms were found to be injectable with further malicious payloads. The system tested 46,156 of these and was able to find 496 vulnerable URLs across 222 domains, which shows that the threat is widespread and deserves future research attention.

Date Created
  • 2016

Modeling, simulation and analysis for software-as-service in cloud

Description

Software-as-a-Service (SaaS) has received significant attention in recent years as major computer companies such as Google, Microsoft, Amazon, and Salesforce are adopting this new approach to developing software and systems. Cloud computing is a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamic, scalable, and virtualized manner. Computer simulations are widely utilized to analyze the behavior of software and to test it before full implementation. Simulation can further benefit SaaS applications in a cost-effective way by taking advantage of cloud properties such as customizability, configurability, and multi-tenancy.

This research introduces modeling, simulation, and analysis for Software-as-a-Service in the cloud. The research covers the following topics: service modeling, policy specification, code generation, dynamic simulation, timing, event, and log analysis. Moreover, the framework integrates current advantages of the cloud: configurability, multi-tenancy, scalability, and recoverability.

The architecture is presented in the following chapters:
  • Multi-Tenancy Simulation Software-as-a-Service.
  • Policy Specification for the MTA simulation environment.
  • Model-Driven PaaS-Based SaaS modeling.
  • Dynamic analysis and dynamic calibration for timing analysis.
  • Event-driven Service-Oriented Simulation Framework (sketched below).
  • LTBD: A Triage Solution for SaaS.
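
To make the event-driven simulation idea concrete, here is a minimal discrete-event loop of the generic kind such a framework builds on. The event names and handlers are invented for this example and do not reflect the thesis's framework.

```python
import heapq

def simulate(initial_events, handlers, horizon=float("inf")):
    """Minimal discrete-event loop: pop the earliest event, dispatch it,
    and push any follow-up events its handler schedules."""
    queue = list(initial_events)        # entries are (time, kind, payload)
    heapq.heapify(queue)
    clock = 0.0
    while queue:
        time, kind, payload = heapq.heappop(queue)
        if time > horizon:
            break
        clock = time
        for follow_up in handlers[kind](clock, payload):
            heapq.heappush(queue, follow_up)
    return clock

# Invented example: each service request produces a response 0.3 time units later.
handlers = {
    "request": lambda t, tenant: [(t + 0.3, "response", tenant)],
    "response": lambda t, tenant: print(f"{t:.1f}: reply to {tenant}") or [],
}
print(simulate([(0.0, "request", "tenant-A"), (0.5, "request", "tenant-B")], handlers))
```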

Date Created
  • 2015

Cognitive software complexity analysis

Description

A well-defined software complexity theory which captures the cognitive means of algorithmic information comprehension is needed in the domain of cognitive informatics and computing. The existing complexity heuristics are vague and empirical. Industrial software is a combination of implemented algorithms. However, it would be wrong to conclude that algorithmic space and time complexity is software complexity. An algorithm with more lines of pseudocode might sometimes be simpler to understand than one with fewer lines. So, it is crucial to determine the algorithmic understandability of an algorithm in order to better understand software complexity. This work deals with understanding software complexity from a cognitive angle. It is also vital to compute the effect of reducing cognitive complexity. The work aims to establish three important statements. The first is that, while algorithmic complexity is a part of software complexity, software complexity does not solely and entirely mean algorithmic complexity. Second, the work intends to bring to light the importance of the cognitive understandability of algorithms. The third concerns the impact that reducing cognitive complexity would have on software design and development.
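
The distinction between algorithmic and cognitive complexity is easy to illustrate: the two functions below have the same asymptotic cost, but the terse one demands that the reader know a bit-manipulation identity, while the longer one is self-explanatory. This example is ours, not the thesis's.

```python
def is_power_of_two_terse(n):
    # One line, but correctness rests on knowing that a power of two
    # has exactly one set bit, so n & (n - 1) clears it to zero.
    return n > 0 and (n & (n - 1)) == 0

def is_power_of_two_explicit(n):
    # More lines and the same O(log n) running time, yet every step
    # can be verified without any special background knowledge.
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

assert all(is_power_of_two_terse(n) == is_power_of_two_explicit(n)
           for n in range(-4, 1025))
```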

Date Created
  • 2016

A study of text mining framework for automated classification of software requirements in enterprise systems

Description

Text classification is a rapidly evolving area of data mining, while requirements engineering is a less-explored area of software engineering that deals with the process of defining, documenting, and maintaining a software system's requirements. When researchers blended these two streams, research emerged on automating the classification of software requirement statements into categories easily comprehensible to developers for faster development and delivery, a task that until now was mostly done manually by software engineers and is indeed tedious. However, most of that research focused on the classification of non-functional requirements, those pertaining to intangible features such as security, reliability, and quality. It is a challenging task to automatically classify functional requirements, those pertaining to how the system will function, especially requirements belonging to different and large enterprise systems. This requires exploitation of text mining capabilities. This thesis investigates the results of text classification applied to functional software requirements by creating a framework in R that uses algorithms and techniques such as k-nearest neighbors and support vector machines, along with boosting, bagging, maximum entropy, neural networks, and random forests in an ensemble approach. The study was conducted by collecting and visualizing relevant enterprise data that had been manually classified previously and was subsequently used for training the model. Key factors in training included the frequency of terms in the documents and the cleanliness of the data. The model was applied to test data and validated by studying and comparing parameters such as precision, recall, and accuracy.
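
The thesis's framework is in R; purely for illustration, the Python sketch below shows the same shape of pipeline (term-frequency features feeding a classifier, evaluated by precision and recall) using scikit-learn. The requirement sentences and category labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Invented functional-requirement statements with invented category labels.
docs = [
    "The system shall export monthly invoices as PDF files",
    "Users shall reset their password via a one-time e-mail link",
    "The dashboard shall refresh order totals every five minutes",
    "Administrators shall deactivate accounts after 90 days of inactivity",
]
labels = ["reporting", "account", "reporting", "account"]

# Term-frequency features feeding a linear SVM, one of the model
# families the thesis combines in its ensemble.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(docs, labels)
print(classification_report(labels, model.predict(docs)))
```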

Date Created
  • 2016

Dynamic analysis of embedded software

Description

Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of the programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead for gathering execution information may change the execution of a program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis largely consists of three parts. First, it discusses a deterministic replay framework to provide reproducible execution. Once a program execution is recorded, software instrumentation can be safely applied during replay without probe effect. Second, the probe effect is discussed and a simulation-based analysis is proposed to detect execution changes of a program caused by instrumentation overhead. The simulation-based analysis examines whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help to remove data races for the correctness of the replay and the simulation-based analysis. The focus is to make the detection efficient for C/C++ programs and to increase the scalability of the detection on multi-core machines.
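
For a flavor of what race detection involves, the snippet below is a textbook-style lockset check (in the spirit of the Eraser algorithm): a shared variable's candidate lock set is intersected on every access, and an empty set flags a potential race. This is a generic illustration, not the detection algorithm developed in the thesis.

```python
def lockset_races(accesses):
    """Flag variables whose accesses share no common protecting lock.

    `accesses` is an ordered list of (variable, locks_held_during_access).
    """
    candidates = {}   # variable -> set of locks held on every access so far
    races = set()
    for var, held in accesses:
        held = set(held)
        candidates[var] = candidates.get(var, held) & held
        if not candidates[var]:
            races.add(var)
    return races

# Invented trace: `counter` is always guarded by lock "m", but `flag`
# is touched once with no lock at all, so it is reported as racy.
trace = [
    ("counter", {"m"}),
    ("flag", {"m"}),
    ("counter", {"m"}),
    ("flag", set()),
]
print(lockset_races(trace))  # {'flag'}
```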

Date Created
  • 2015