Matching Items (16)

Exploration of Sea Ice Concentrations using Graph Metrics

Description

As an example of "big data," we consider a repository of Arctic sea ice concentration data collected from satellites over the years 1979-2005. The data is represented by a graph, where vertices correspond to measurement points, and an edge is inserted between two vertices if the Pearson correlation coefficient between them exceeds a threshold. We investigate new questions about the structure of the graph related to betweenness, closeness centrality, vertex degrees, and characteristic path length. We also investigate whether an offset of weeks and years in graph generation results in a cosine similarity value that differs significantly from expected values. Finally, we relate the computational results to trends in Arctic ice.
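
To make the construction concrete, here is a minimal sketch of the graph-building step and the four metrics, using NumPy and NetworkX; the threshold value and the array layout are assumptions for illustration, not details from the thesis.

```python
import numpy as np
import networkx as nx

def build_correlation_graph(series, threshold=0.9):
    """Vertices are measurement points; an edge joins two points whose
    anomaly time series have Pearson correlation above `threshold`.

    series: (n_points, n_weeks) array of sea ice concentration anomalies.
    """
    corr = np.corrcoef(series)            # pairwise Pearson coefficients
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i, j in zip(*np.triu_indices(len(series), k=1)):
        if corr[i, j] > threshold:
            g.add_edge(int(i), int(j))
    return g

# The metrics investigated in the thesis, as NetworkX calls:
# nx.betweenness_centrality(g), nx.closeness_centrality(g),
# dict(g.degree()), and nx.average_shortest_path_length(g)
# (the characteristic path length; requires a connected graph).
```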

Date Created
  • 2015-05

Graph Analysis of Arctic Ice

Description

Polar ice masses can be valuable indicators of trends in global climate. In an effort to better understand the dynamics of Arctic ice, this project analyzes sea ice concentration anomaly data collected over gridded regions (cells) and builds graphs based upon high correlations between cells. These graphs offer the opportunity to use metrics such as clustering coefficients and connected components to isolate representative trends in ice masses. Based upon this analysis, the structure of sea ice graphs differs at a statistically significant level from random graphs, and several regions show erratically decreasing trends in sea ice concentration.
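
The comparison against random graphs can be made concrete as below: a sketch, assuming NetworkX and Erdős–Rényi baselines matched for edge density, that contrasts the observed clustering coefficient and component count with randomized averages. The project's actual significance test is not reproduced here.

```python
import networkx as nx

def compare_to_random(g, trials=100, seed=0):
    """Contrast g's average clustering coefficient and number of connected
    components with Erdos-Renyi graphs of matching density (illustrative)."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    p = 2.0 * m / (n * (n - 1))           # matching edge probability
    rand_cc, rand_comp = [], []
    for t in range(trials):
        r = nx.gnp_random_graph(n, p, seed=seed + t)
        rand_cc.append(nx.average_clustering(r))
        rand_comp.append(nx.number_connected_components(r))
    return {
        "clustering": (nx.average_clustering(g), sum(rand_cc) / trials),
        "components": (nx.number_connected_components(g), sum(rand_comp) / trials),
    }
```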

Date Created
  • 2013-05

Testbed Implementation of the Meta-MAC Protocol

Description

The meta-MAC protocol is a systematic and automatic method to dynamically combine any set of existing Medium Access Control (MAC) protocols into a single higher-level MAC protocol. The meta-MAC concept was proposed more than a decade ago, but until now it had not been implemented in a testbed environment due to a lack of suitable hardware. This thesis presents a proof-of-concept implementation of the meta-MAC protocol using a programmable radio platform, the Wireless MAC Processor (WMP), in combination with a host-level software module. The implementation of this host module, and the requirements and challenges faced therein, is the primary subject of this thesis. The implementation can combine, with certain constraints, a set of protocols each represented as an extended finite state machine for easy programmability. To illustrate the combination principle, protocols of the same type but with varying parameters are combined in a testbed environment, in what is termed parameter optimization. Specifically, a set of TDMA protocols with differing slot assignments is combined experimentally. This experiment demonstrates that the meta-MAC implementation rapidly converges to non-conflicting TDMA slot assignments for the nodes, with results similar to those in simulation. This both validates that the presented implementation properly realizes the meta-MAC protocol and verifies that meta-MAC can be as effective on real wireless hardware as it is in simulation.
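
The combination rule at the heart of meta-MAC can be sketched in a few lines. The toy class below takes a multiplicative-weights vote over the component protocols' per-slot transmit decisions, which is the flavor of combination meta-MAC performs; the names and the learning rate are illustrative, and this is not the WMP/host-module implementation.

```python
import random

class MetaMAC:
    """Toy weighted combination of MAC protocols. Each component protocol
    is assumed to expose decide(slot) -> 1 (transmit) or 0 (defer)."""

    def __init__(self, protocols, eta=0.5):
        self.protocols = protocols
        self.weights = [1.0] * len(protocols)   # one weight per protocol
        self.eta = eta                          # learning rate (assumed)
        self._votes = []

    def decide(self, slot):
        """Transmit with probability equal to the weighted fraction of
        component protocols voting to transmit in this slot."""
        self._votes = [p.decide(slot) for p in self.protocols]
        p_tx = sum(w for w, v in zip(self.weights, self._votes) if v) / sum(self.weights)
        return random.random() < p_tx

    def feedback(self, correct_decision):
        """After observing the slot outcome, shrink the weights of the
        protocols whose vote turned out to be wrong."""
        for k, vote in enumerate(self._votes):
            if vote != correct_decision:
                self.weights[k] *= (1.0 - self.eta)
```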

Date Created
  • 2016-05

Constructing Locating Arrays with Constraints using Constraint Satisfaction

Description

When designing screening experiments for many factors, two problems quickly arise. The first is that testing all the different combinations of the factors and interactions creates an experiment that is too large to conduct in a practical amount of time. One way this problem is solved is with a combinatorial design called a locating array (LA), which can efficiently identify the factors and interactions most influential on a response. The second problem is how to ensure that prohibited combinations of factor levels, a common requirement in real-world systems, never appear in any test. This research proposes a solution to the second problem using constraint satisfaction.
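
A minimal sketch of the constraint side of the problem is shown below, assuming the prohibited combinations are given as partial assignments of factor levels; the brute-force enumeration merely stands in for the constraint satisfaction search used in the research.

```python
from itertools import product

def violates(test, forbidden):
    """True if `test` (a tuple of factor levels) contains any forbidden
    combination, each given as a {factor_index: level} dict."""
    return any(all(test[f] == lvl for f, lvl in combo.items())
               for combo in forbidden)

def feasible_tests(levels, forbidden):
    """All tests over the given factor levels that satisfy the constraints,
    i.e. the candidate pool from which a locating array may be built."""
    return [t for t in product(*levels) if not violates(t, forbidden)]

# Hypothetical example: three binary factors, where level 1 of factor 0
# must never co-occur with level 0 of factor 2.
pool = feasible_tests([[0, 1], [0, 1], [0, 1]], forbidden=[{0: 1, 2: 0}])
```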

Date Created
  • 2019-05

Using an Open-Source Solution to Implement a Drone Cyber-Physical System

Description

The goal of this project is to use an open-source solution to implement a drone Cyber-Physical System that can fly autonomously and accurately. The proof of concept used to analyze the drone's flight capabilities is to fly a pattern corresponding to the outline of an image, a task that requires both stability and precision to depict the image accurately. In this project, we found that building a Cyber-Physical System is difficult because designing and testing its hardware and software components is tedious and complex. Furthermore, we reflect on the difficulties that arose from using open-source hardware and software.
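
For a feel of the outline-flight idea, here is a hypothetical helper that turns an ordered list of outline pixels into local-frame waypoints; the scale, altitude, and sampling step are invented for illustration and are not the project's parameters.

```python
def outline_to_waypoints(outline_px, scale_m_per_px=0.05, altitude_m=2.0, step=10):
    """Convert an ordered (x, y) pixel path tracing an image outline into
    (north, east, altitude) waypoints in a local frame. Every `step`-th
    point is kept so the flight controller gets a manageable list."""
    waypoints = []
    for x, y in outline_px[::step]:
        north = -y * scale_m_per_px    # image y grows downward
        east = x * scale_m_per_px
        waypoints.append((north, east, altitude_m))
    return waypoints
```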

Date Created
  • 2018-05

R^2IM: Reliable and Robust Intersection Manager Robust to Rogue Vehicles

Description

At modern-day intersections, traffic lights and stop signs help human drivers cross safely. Traffic congestion in urban road networks is a costly problem that affects all major cities. Efficient operation of an intersection depends largely on the accuracy and precision of human drivers, leaving lingering uncertainty about attaining both safety and high throughput. To improve the efficiency of the existing traffic network and mitigate the effects of human error at the intersection, many studies have proposed autonomous, intelligent transportation systems. These studies often involve connected autonomous vehicles, a supervisory system, or both. Implementing a supervisory system is relatively more popular due to the security concerns of vehicle-to-vehicle communication. Even though supervisory systems are a step in the right direction for security, many of them rely solely on the promise that connected data is correct, making system reliability difficult to achieve. To increase fault tolerance and decrease the effects of position uncertainty, this thesis proposes the Reliable and Robust Intersection Manager (R2IM), a supervisory system that uses a separate surveillance system to dependably detect vehicles present in the intersection, creating data redundancy for more accurate scheduling of connected autonomous vehicles. Adding the surveillance system ensures that the temporal safety buffers between arrival times of connected autonomous vehicles are maintained. This guarantees that connected autonomous vehicles can traverse the intersection safely even in the event of large vehicle controller error, a single rogue car entering the intersection, or a Sybil attack. To test the proposed system under these fault models, MATLAB® simulations were used to compare the behavior of R2IM with the state-of-the-art supervisory system, the Robust Intersection Manager. Though R2IM is less efficient than the Robust Intersection Manager, it considers more fault models. The Robust Intersection Manager failed to maintain safety in the event of large vehicle controller errors and rogue cars; R2IM, by contrast, resulted in zero collisions.
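
The data-redundancy idea can be sketched as a cross-check between vehicle-reported positions and surveillance detections; the names, the tolerance, and the 2-D position model below are illustrative assumptions, not the thesis's MATLAB code. A scheduler built on such a check would grant intersection access only to confirmed vehicles and hold safety buffers around anything flagged as rogue.

```python
def cross_check(reported, detected, tolerance_m=1.0):
    """Return (confirmed vehicle ids, rogue positions).

    reported: {vehicle_id: (x, y)} positions from V2I messages.
    detected: [(x, y)] positions seen by the surveillance system.
    A detection with no nearby report is treated as a rogue vehicle; a
    report far from every detection indicates excessive position error.
    """
    def near(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= tolerance_m ** 2

    confirmed = {vid for vid, pos in reported.items()
                 if any(near(pos, d) for d in detected)}
    rogues = [d for d in detected
              if not any(near(pos, d) for pos in reported.values())]
    return confirmed, rogues
```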

Date Created
  • 2019

Generating mixed-level covering arrays of lambda = 2 and test prioritization

Description

In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.

Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact on the overall testing of the system.

This thesis presents three algorithms that generate covering arrays testing the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that, on average, the resulting covering arrays are two-fifths to one-half smaller than a covering array generated through brute force.
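
A bare-bones version of the greedy idea is sketched below: it tracks how many more times each pair of values must still appear and repeatedly picks the row covering the most outstanding pairs. It enumerates every candidate row, so it is practical only for small inputs, and it omits the prioritization weighting the thesis algorithms add.

```python
from itertools import combinations, product

def greedy_ca_lambda2(levels):
    """Greedy mixed-level covering array with every value pair from every
    two factors appearing at least twice (lambda = 2). Illustrative only.

    levels: number of levels per factor, e.g. [2, 3, 2].
    """
    need = {(f, g, a, b): 2
            for f, g in combinations(range(len(levels)), 2)
            for a in range(levels[f]) for b in range(levels[g])}
    rows = []
    while any(c > 0 for c in need.values()):
        # Pick the row that covers the most still-needed pairs.
        best, best_gain = None, -1
        for row in product(*(range(k) for k in levels)):
            gain = sum(1 for (f, g, a, b), c in need.items()
                       if c > 0 and row[f] == a and row[g] == b)
            if gain > best_gain:
                best, best_gain = row, gain
        rows.append(best)
        for (f, g, a, b) in need:
            if best[f] == a and best[g] == b and need[(f, g, a, b)] > 0:
                need[(f, g, a, b)] -= 1
    return rows
```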

Date Created
  • 2015

Context-aware search principles in automated learning environments

Description

Many web search improvements have been developed since the advent of the modern search engine, but one underrepresented area is the application of specific customizations to search results for educational web sites. To address this issue and improve the relevance of search results in automated learning environments, this work integrates context-aware search principles with preference-based re-ranking and query modifications. The research investigates several aspects of context-aware search, specifically context-sensitive, preference-based re-ranking of results that takes user input about preferred content, combined with query modifications that automatically search for a variety of modified terms derived from the given query and integrate those results into the overall re-ranking for the context. The result of this work is a novel web search algorithm that could be applied to any online learning environment attempting to collect relevant resources for learning about a given topic. The algorithm has been evaluated through user studies comparing traditional search results to the context-aware results returned by the algorithm for a given topic. These studies explore how this integration of methods can improve the relevance of the returned results when compared against other modern search engines.
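
The two mechanisms compose naturally, as the sketch below shows: modified queries are issued alongside the original, and the merged results are re-ranked by blending engine rank with a preference score. `search`, the modifier list, and the blending weight `alpha` are placeholders, not the evaluated algorithm.

```python
def context_rerank(query, search, preferences, modifiers, alpha=0.7):
    """Re-rank results for `query` using preference scores and
    automatically modified query variants (illustrative sketch).

    search(q)    -> ordered list of result URLs (assumed interface).
    preferences  -> {url: score in [0, 1]} expressing content preferences.
    modifiers    -> terms appended to the query to form variants.
    """
    queries = [query] + [f"{query} {m}" for m in modifiers]
    scores = {}
    for q in queries:
        for rank, url in enumerate(search(q)):
            base = 1.0 / (rank + 1)               # engine-rank component
            pref = preferences.get(url, 0.0)      # preference component
            score = alpha * base + (1 - alpha) * pref
            scores[url] = max(scores.get(url, 0.0), score)
    return sorted(scores, key=scores.get, reverse=True)
```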

Date Created
  • 2014

Policy Conflict Management in Distributed SDN Environments

Description

The ease of programmability in Software-Defined Networking (SDN) makes it a great platform for implementing initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. However, implementing security solutions in such an environment is fraught with policy conflicts and consistency issues, and the hardness of this problem depends on the distribution scheme used for the SDN controllers.

In this dissertation, a formalism for flow rule conflicts in SDN environments is introduced. This formalism is realized in Brew, a security policy analysis framework implemented on an OpenDaylight SDN controller. Brew has comprehensive conflict detection and resolution modules to ensure that no two flow rules in a distributed SDN-based cloud environment conflict at any layer, thereby assuring consistent, conflict-free security policy implementation and preventing information leakage. Techniques for global prioritization of flow rules in a decentralized environment are presented, with which all SDN flow rule conflicts are recognized and classified. Strategies for unassisted resolution of these conflicts are also detailed. Alternatively, if administrator input is desired to resolve conflicts, a novel visualization scheme helps administrators view the conflicts in an aesthetically pleasing manner. The correctness, feasibility, and scalability of the Brew proof-of-concept prototype are demonstrated. Finally, flow rule conflict avoidance using a buddy address space management technique is studied as an alternative to conflict detection and resolution in highly dynamic cloud systems that implement SDN-based Moving Target Defense (MTD) countermeasures.
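
The core of flow-rule conflict detection can be illustrated with a pairwise overlap check: two rules conflict when their match spaces intersect but their actions disagree. The representation below (exact-match fields, single actions) is a deliberate simplification and not Brew's data model.

```python
def overlaps(match_a, match_b):
    """Two matches overlap if no field they both specify disagrees
    (fields absent from a match act as wildcards)."""
    return all(match_a[f] == match_b[f]
               for f in match_a.keys() & match_b.keys())

def find_conflicts(rules):
    """Flag rule pairs with overlapping matches but different actions."""
    conflicts = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            a, b = rules[i], rules[j]
            if overlaps(a["match"], b["match"]) and a["action"] != b["action"]:
                conflicts.append((i, j))
    return conflicts

# Hypothetical example: a broad ALLOW shadowed by a narrower DROP.
rules = [
    {"match": {"src": "10.0.0.5"}, "action": "ALLOW"},
    {"match": {"src": "10.0.0.5", "dst": "10.0.0.9"}, "action": "DROP"},
]
print(find_conflicts(rules))   # [(0, 1)]
```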

Date Created
  • 2017

Pingo: A Framework for the Management of Storage of Intermediate Outputs of Computational Workflows

Description

Scientific workflows allow scientists to easily model and express all the steps of a data processing pipeline, typically as a directed acyclic graph (DAG). These workflows comprise collections of tasks that usually take a long time to compute and that produce a considerable number of intermediate datasets. Because of the nature of scientific exploration, a workflow can be modified and re-run multiple times, and new workflows may be created that make use of past intermediate datasets. Storing intermediate datasets therefore has the potential to save computation time. Since storage is limited, a central problem is determining which intermediate datasets to save at creation time in order to minimize the computational time of workflows run in the future. This thesis proposes the design and implementation of Pingo, a system that manages the computation of scientific workflows as well as the storage, provenance, and deletion of intermediate datasets. Pingo uses the history of workflows submitted to the system to predict the datasets most likely to be needed in the future, and it makes dataset deletion decisions that optimize the computational time of future workflows.
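
The storage decision can be caricatured as a budgeted greedy choice over expected compute savings per unit of storage; the fields and scoring below are assumptions for illustration, since Pingo estimates reuse likelihood from the history of submitted workflows rather than taking it as an input.

```python
def select_datasets(datasets, capacity_gb):
    """Keep the intermediate datasets with the best expected compute
    savings per gigabyte, within a storage budget (illustrative).

    datasets: list of dicts with 'name', 'size_gb', 'compute_hours'
              (cost to regenerate), and 'reuse_prob' (estimated from
              workflow history).
    """
    ranked = sorted(
        datasets,
        key=lambda d: d["reuse_prob"] * d["compute_hours"] / d["size_gb"],
        reverse=True)
    kept, used = [], 0.0
    for d in ranked:
        if used + d["size_gb"] <= capacity_gb:
            kept.append(d["name"])
            used += d["size_gb"]
    return kept
```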

Date Created
  • 2017