Matching Items (22)

Post-Optimization of Permutation Coverings

Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
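
As a rough illustration of the removal step only (not the thesis's actual implementation), the Python sketch below checks whether a permutation is "essential," i.e., whether it covers some target subsequence that no other permutation in the set covers, and drops permutations that are not essential. The local perturbation heuristics that make a permutation non-essential in the first place are not shown, and all helper names are hypothetical.

```python
from itertools import combinations, permutations

def covers(perm, subseq):
    """True if subseq occurs in perm as a (not necessarily contiguous)
    subsequence, i.e., its events appear in the same relative order."""
    pos = {event: i for i, event in enumerate(perm)}
    idx = [pos[e] for e in subseq]
    return all(a < b for a, b in zip(idx, idx[1:]))

def essential(perm, others, targets):
    """A permutation is essential if it covers some target subsequence
    that no other permutation in the set covers."""
    return any(covers(perm, t) and not any(covers(o, t) for o in others)
               for t in targets)

def drop_non_essential(perm_set, targets):
    """Remove permutations that are not essential; the total coverage
    of the target subsequences is preserved."""
    perm_set = list(perm_set)
    i = 0
    while i < len(perm_set):
        others = perm_set[:i] + perm_set[i + 1:]
        if not essential(perm_set[i], others, targets):
            perm_set = others          # safe to remove, start over
            i = 0
        else:
            i += 1
    return perm_set

# Example targets: all ordered triples over 5 events (strength-3 coverage).
events = range(5)
targets = [t for c in combinations(events, 3) for t in permutations(c)]
```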

Date Created
  • 2014-12

Efficient Algorithms for the Construction of Low-Density Parity-Check Codes

Description

Error-correcting codes are fundamental in modern digital communication, with applications in data storage and data transmission. Interest in a class of error-correcting codes called low-density parity-check (LDPC) codes has been growing since their recent rediscovery because of their low decoding complexity and high performance. However, practical applications have been limited by the difficulty of finding good LDPC codes for practical parameters. This paper proposes an exhaustive and a randomized algorithm for constructing a family of LDPC codes with practical parameters whose matrix representations meet the following requirements: every pair of rows shares exactly one common nonzero position, each row has odd weight of at least one, and each column has weight at least two. These conditions improve the performance of the resulting codes and simplify their conversion into codes for quantum systems. Both algorithms use a maximal clique algorithm to construct LDPC matrices from graphs whose vertices are candidate rows of such a matrix and which are adjacent whenever the first condition holds between them. While these algorithms were found to be suitable for small parameters, future work that optimizes the resulting codes for their expected applications could also dramatically improve the performance of the algorithms themselves.
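
A minimal sketch of the graph construction described above, assuming candidate rows are represented by their sets of nonzero column positions and using networkx for maximal clique enumeration; the odd row weight and column weight conditions are not checked here, and all function names are illustrative.

```python
import networkx as nx
from itertools import combinations

def compatible(row_a, row_b):
    """First condition from the abstract: the two rows share exactly one
    common nonzero position (rows are sets of nonzero column indices)."""
    return len(row_a & row_b) == 1

def compatibility_graph(rows):
    """Vertices are candidate rows; edges join compatible pairs."""
    g = nx.Graph()
    g.add_nodes_from(range(len(rows)))
    for i, j in combinations(range(len(rows)), 2):
        if compatible(rows[i], rows[j]):
            g.add_edge(i, j)
    return g

def clique_rows(rows):
    """A maximal clique gives a set of rows that pairwise satisfy the
    overlap condition and can be stacked into a parity-check matrix."""
    g = compatibility_graph(rows)
    best = max(nx.find_cliques(g), key=len)
    return [rows[i] for i in best]
```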

Date Created
  • 2018-12

Exploration of Sea Ice Concentrations using Graph Metrics

Description

As an example of "big data," we consider a repository of Arctic sea ice concentration data collected from satellites over the years 1979-2005. The data is represented by a graph, where vertices correspond to measurement points, and an edge is inserted between two vertices if the Pearson correlation coefficient between them exceeds a threshold. We investigate new questions about the structure of the graph related to betweenness, closeness centrality, vertex degrees, and characteristic path length. We also investigate whether an offset of weeks and years in graph generation results in a cosine similarity value that differs significantly from expected values. Finally, we relate the computational results to trends in Arctic ice.
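
A minimal sketch, assuming the anomaly time series are stored as a NumPy array and using networkx, of how such a correlation-threshold graph and the metrics listed above might be computed; the 0.95 threshold is an assumed placeholder, not the value used in the study.

```python
import numpy as np
import networkx as nx

def correlation_graph(anomalies, threshold=0.95):
    """Vertices are measurement points; an edge joins two points whose
    Pearson correlation exceeds the threshold.  `anomalies` is an
    (n_points, n_observations) array of concentration values."""
    corr = np.corrcoef(anomalies)              # pairwise Pearson correlations
    g = nx.Graph()
    g.add_nodes_from(range(anomalies.shape[0]))
    upper = np.triu(corr > threshold, k=1)     # no self-loops or duplicates
    g.add_edges_from(zip(*np.nonzero(upper)))
    return g

def structure_metrics(g):
    """The graph metrics mentioned above."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "mean_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
        "betweenness": nx.betweenness_centrality(g),
        "closeness": nx.closeness_centrality(g),
        # characteristic path length is computed on the largest component
        "char_path_length": nx.average_shortest_path_length(giant),
    }
```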

Date Created
  • 2015-05

Graph Analysis of Arctic Ice

Description

Polar ice masses can be valuable indicators of trends in global climate. In an effort to better understand the dynamics of Arctic ice, this project analyzes sea ice concentration anomaly data collected over gridded regions (cells) and builds graphs based upon high correlations between cells. These graphs offer the opportunity to use metrics such as clustering coefficients and connected components to isolate representative trends in ice masses. Based upon this analysis, the structure of sea ice graphs differs at a statistically significant level from random graphs, and several regions show erratically decreasing trends in sea ice concentration.
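
One plausible way to carry out the comparison against random graphs, sketched here under the assumption that Erdos-Renyi graphs of matching size and density serve as the null model; the study's actual null model and statistical test are not specified in this summary.

```python
import networkx as nx

def compare_to_random(g, trials=100, seed=0):
    """Compare the clustering of an ice-correlation graph against
    Erdos-Renyi graphs with the same number of vertices and edges."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    random_clustering = [
        nx.average_clustering(nx.gnm_random_graph(n, m, seed=seed + t))
        for t in range(trials)
    ]
    return {
        "observed_clustering": nx.average_clustering(g),
        "mean_random_clustering": sum(random_clustering) / trials,
        "observed_components": nx.number_connected_components(g),
    }
```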

Date Created
  • 2013-05

The Necessity of Error Correction In The Quantum World

Description

Quantum computers promise a future in which computationally difficult problems can be solved exponentially faster than on the classical computers in use today. While there is tremendous research and development in the creation of quantum computers, a fundamental challenge exists in the quantum world: quantum states are fragile. Because of this fragility, error correction methods have been developed since 1995 to tackle the problem. Since the birth of the idea that these powerful computers can process information beyond the limits of current computers, several mathematical error-correcting codes have been proposed that could potentially provide the required stability in this fragile, fault-prone setting. While there has been a multitude of possible solutions, no single error-correcting code is the key to solving the problem; almost every solution presented carries a limiting factor or an issue that prevents it from becoming the breakthrough that is desperately needed.

This paper gives an introductory overview of the quantum world and why error-correcting topologies are needed. Finally, it introduces one recent topology that could be added to the list of possible solutions to this central problem. Rather than focusing on the mathematical frameworks, the paper introduces the main concepts so that readers outside the field of computer science can understand what the main problem is and how this topology attempts to solve it.

Date Created
  • 2020-05

Constructing Locating Arrays with Constraints Using Constraint Satisfaction

Description

When designing screening experiments for many factors, two problems quickly arise. The first is that testing all the different combinations of the factors and interactions creates an experiment that is too large to conduct in a practical amount of time. One way this problem is solved is with a combinatorial design called a locating array (LA), which can efficiently identify the factors and interactions most influential on a response. The second problem is how to ensure that prohibited combinations, which are common in real-world systems, are absent from every test. This research proposes a solution to the second problem using constraint satisfaction.
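
A toy sketch of the constraint idea: the factors, levels, and forbidden combinations below are entirely hypothetical, and a real locating-array constructor would additionally have to enforce the locating property, which is not shown.

```python
from itertools import product

# Hypothetical screening factors and their levels.
factors = {
    "os":      ["linux", "windows"],
    "browser": ["firefox", "chrome", "safari"],
    "network": ["wifi", "ethernet"],
}

# Hypothetical constraints: (factor, level) pairs that may never co-occur.
forbidden = [
    {("os", "linux"), ("browser", "safari")},
]

def satisfies_constraints(test):
    """A test (one level chosen for every factor) is feasible when it
    contains no forbidden combination."""
    assignment = set(test.items())
    return not any(bad <= assignment for bad in forbidden)

def feasible_tests():
    """Enumerate only the tests that respect every constraint; a locating
    array with constraints would then be assembled from this feasible set."""
    names = list(factors)
    for levels in product(*(factors[n] for n in names)):
        test = dict(zip(names, levels))
        if satisfies_constraints(test):
            yield test
```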

Date Created
  • 2019-05

Smoothed Airtime Linear Tuning and Optimized REACT with Multi-hop Extensions

Description

Medium access control (MAC) is a fundamental problem in wireless networks. In ad-hoc wireless networks especially, many of the performance and scaling issues these networks face can be attributed to their use of the core IEEE 802.11 MAC protocol: the distributed coordination function (DCF). Smoothed Airtime Linear Tuning (SALT) is a new contention window tuning algorithm proposed to address some of the deficiencies of DCF in 802.11 ad-hoc networks. SALT works alongside a new, optimized user-level implementation of REACT, a distributed resource allocation protocol, to ensure that each node secures the amount of airtime allocated to it by REACT. The algorithm accomplishes this by tuning the contention window size parameter that is part of the 802.11 backoff process. SALT converges more tightly on airtime allocations than a contention window tuning algorithm from previous work, which increases fairness in transmission opportunities and reduces jitter more than either 802.11 DCF or the other tuning algorithm. REACT and SALT were also extended to the multi-hop flow scenario with the introduction of a new airtime reservation algorithm. With a reservation in place, multi-hop TCP throughput actually increased when running SALT and REACT as compared to 802.11 DCF, and the combination of protocols still maintained its fairness and jitter advantages. All experiments were performed on a wireless testbed, not in simulation.
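
The abstract does not give pseudocode, so the following is only a schematic of what a smoothed, linear contention-window adjustment driven by an airtime target might look like; the smoothing factor, step size, and window bounds are assumed placeholders rather than SALT's actual parameters.

```python
def tune_contention_window(cw, measured_airtime, allocated_airtime,
                           smoothed_error=0.0, alpha=0.25,
                           step=8, cw_min=15, cw_max=1023):
    """One schematic tuning step: smooth the airtime error, then move the
    802.11 contention window linearly so the node's observed airtime share
    drifts toward the share allocated to it (here, by REACT)."""
    error = measured_airtime - allocated_airtime
    smoothed_error = (1 - alpha) * smoothed_error + alpha * error
    if smoothed_error > 0:               # using more airtime than allocated
        cw = min(cw + step, cw_max)      # back off harder
    elif smoothed_error < 0:             # using less airtime than allocated
        cw = max(cw - step, cw_min)      # contend more aggressively
    return cw, smoothed_error
```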

Date Created
  • 2018

Coloring graphs from almost maximum degree sized palettes

Description

Every graph can be colored with one more color than its maximum degree. A well-known theorem of Brooks gives the precise conditions under which a graph can be colored with maximum degree colors. It is natural to ask for the required conditions on a graph to color with one less color than the maximum degree; in 1977 Borodin and Kostochka conjectured a solution for graphs with maximum degree at least 9: as long as the graph doesn't contain a maximum-degree-sized clique, it can be colored with one fewer than the maximum degree colors. This study attacks the conjecture on multiple fronts. The first technique is an extension of a vertex shuffling procedure of Catlin and is used to prove the conjecture for graphs with edgeless high vertex subgraphs. This general approach also bears more theoretical fruit. The second technique is an extension of a method Kostochka used to reduce the Borodin-Kostochka conjecture to the maximum degree 9 case. Results on the existence of independent transversals are used to find an independent set intersecting every maximum clique in a graph. The third technique uses list coloring results to exclude induced subgraphs in a counterexample to the conjecture. The classification of such excludable graphs that decompose as the join of two graphs is the backbone of many of the results presented here. The fourth technique uses the structure theorem for quasi-line graphs of Chudnovsky and Seymour in concert with the third technique to prove the Borodin-Kostochka conjecture for claw-free graphs. The fifth technique adds edges to proper induced subgraphs of a minimum counterexample to gain control over the colorings produced by minimality. The sixth technique adapts a recoloring technique originally developed for strong coloring by Haxell and by Aharoni, Berger and Ziv to general coloring. Using this recoloring technique, the Borodin-Kostochka conjecture is proved for graphs where every vertex is in a large clique. The final technique is naive probabilistic coloring as employed by Reed in the proof of the Borodin-Kostochka conjecture for large maximum degree. The technique is adapted to prove the Borodin-Kostochka conjecture for list coloring for large maximum degree.
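
For reference, the greedy bound, Brooks' theorem, and the conjecture discussed above can be stated formally (standard phrasing, not quoted from the dissertation):

```latex
Every graph $G$ satisfies $\chi(G) \le \Delta(G) + 1$ (greedy coloring).

\textbf{Brooks' theorem.} If $G$ is connected and is neither a complete
graph nor an odd cycle, then $\chi(G) \le \Delta(G)$.

\textbf{Borodin--Kostochka conjecture (1977).} If $\Delta(G) \ge 9$ and
$\omega(G) < \Delta(G)$, then $\chi(G) \le \Delta(G) - 1$.
```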

Date Created
  • 2013

Energy-efficient digital circuit design using threshold logic gates

Description

Improving energy efficiency has always been the prime objective of custom and automated digital circuit design techniques. As a result, a multitude of methods to reduce power without sacrificing performance have been proposed. However, as the field of design automation has matured over the last few decades, no new automated design techniques have emerged that provide considerable improvements in circuit power, leakage, and area. Although emerging nano-devices are expected to replace existing MOSFET devices, they are far from being as mature as semiconductor devices, and their full potential is many years away from being practical.

The research described in this dissertation consists of four main parts. First is a new circuit architecture of a differential threshold logic flipflop called PNAND. The PNAND gate is an edge-triggered multi-input sequential cell whose next-state function is a threshold function of its inputs. Second, a new approach, called hybridization, that replaces flipflops and parts of their logic cones with PNAND cells is described. The resulting hybrid circuit, which consists of conventional logic cells and PNANDs, is shown to have significantly lower power consumption, smaller area, lower standby power, and less power variation.
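
To make "threshold function" concrete, here is a minimal sketch: a gate that fires exactly when a weighted sum of its binary inputs reaches a threshold. The weights below encode an ordinary two-input NAND and are purely illustrative; they are not the PNAND circuit's parameters.

```python
def threshold_gate(inputs, weights, T):
    """A threshold function: output 1 exactly when the weighted sum of the
    binary inputs reaches the threshold T."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

# Two-input NAND as a threshold function (illustrative weights, not the
# actual PNAND parameters): the output is 0 only when both inputs are 1.
nand = lambda a, b: threshold_gate((a, b), weights=(-1, -1), T=-1)
assert [nand(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [1, 1, 1, 0]
```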

Third, a new architecture of a field programmable array, called the field programmable threshold logic array (FPTLA), in which the standard lookup table (LUT) is replaced by a PNAND, is described. Using the well-known FPGA modeling tool VPR, the FPTLA is shown to have as much as 50% lower energy-delay product than a conventional FPGA.

Fourth, a novel clock skewing technique that makes use of the completion detection feature of differential-mode flipflops is described. This clock skewing method improves the area and power of ASIC circuits by increasing slack on timing paths. An additional advantage of this method is the elimination of hold time violations on short paths.

Several circuit design methodologies such as retiming and asynchronous circuit design can use the proposed threshold logic gate effectively. Therefore, the use of threshold logic flipflops in conventional design methodologies opens new avenues of research towards more energy-efficient circuits.

Date Created
  • 2015

Generating mixed-level covering arrays of lambda = 2 and test prioritization

Description

In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.

Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact on the overall testing of the system.

This thesis presents three algorithms that generate covering arrays that test the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that, on average, the resulting covering arrays are two-fifths to one-half smaller than a covering array generated through brute force.
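
A small sketch of the coverage condition these algorithms target: with lambda = 2, every pair of values for every pair of factors must appear together in at least two tests. The checker below is an assumed illustration of that definition, not the thesis's generation algorithm.

```python
from itertools import combinations, product
from collections import Counter

def is_covering_array(tests, levels, lam=2):
    """Check that, for every pair of factors, every pair of values appears
    together in at least `lam` tests.  `levels[i]` is the number of levels
    of factor i; each test is a tuple with one level chosen per factor."""
    for f1, f2 in combinations(range(len(levels)), 2):
        seen = Counter((t[f1], t[f2]) for t in tests)
        for pair in product(range(levels[f1]), range(levels[f2])):
            if seen[pair] < lam:
                return False
    return True

# Tiny example: with three binary factors, the exhaustive test set covers
# every value pair exactly twice, so it satisfies lambda = 2.
tests = list(product((0, 1), repeat=3))
assert is_covering_array(tests, levels=[2, 2, 2], lam=2)
```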

Date Created
  • 2015