Matching Items (22)
Description
This dissertation studies routing in small-world networks, such as grids augmented with long-range edges, and in real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency observed since Milgram. A counterpart for the grid-based model is provided: it creates all edges deterministically and admits an asymptotically matching upper bound on route length. The main goal is to improve on greedy routing through a decentralized machine learning process. The two methods considered are based on weighted majority and on an algorithm of de Farias and Megiddo, both of which learn from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of a method's route and that of the shortest path. Adaptive routing overtakes greedy routing after roughly a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment between one-third and one-half of that incurred by greedy routing. The problem is made harder by removing knowledge of neighbors' locations or by introducing uncooperative nodes; even under these conditions, the learned routes are usually better than the greedy routes. The second part of the dissertation concerns the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping communities), where each node locally decides which potential communities it belongs to.
To measure the quality of a cover by overlapping communities, a notion of a node's contribution to modularity is introduced, and the notion of modularity is then extended from partitions to covers. The final part considers the problem of network anonymization, mostly by means of edge deletion, with a focus on utility preservation. It is shown that concentrating on preserving routing ability can damage the preservation of community structure, and vice versa.
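The Kleinberg-style greedy routing discussed above can be sketched in a few lines. The grid size, the uniform long-range contacts, and all names below are illustrative assumptions, not the dissertation's implementation: each node knows its grid neighbors plus one long-range contact, and forwards the message to whichever known contact is geographically closest to the target.

```python
import random

def manhattan(a, b):
    """Grid (Manhattan) distance between two lattice points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_route(n, source, target, long_range):
    """Greedy geographic routing on an n x n grid augmented with one
    long-range contact per node (a Kleinberg-style small world).
    Each step moves to the known contact closest to the target."""
    path = [source]
    current = source
    while current != target:
        x, y = current
        # Grid neighbors that stay inside the n x n lattice ...
        candidates = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < n and 0 <= y + dy < n]
        # ... plus this node's single long-range contact.
        candidates.append(long_range[current])
        current = min(candidates, key=lambda c: manhattan(c, target))
        path.append(current)
    return path
```

Because some grid neighbor always strictly decreases the distance to the target, every step makes progress and the route terminates; the long-range contacts are what make routes short on average.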
Contributors: Bakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations whose size is close or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths; most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
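As an illustration of the post-optimization idea, here is a simplified sketch (not the thesis's algorithm, which computes local changes within permutations rather than only deleting whole ones): it checks t-subsequence coverage and greedily removes any permutation whose deletion leaves the family a covering.

```python
from itertools import permutations as perms

def covers(perm, subseq):
    """True if subseq appears as a (not necessarily contiguous)
    subsequence of the permutation perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(subseq, subseq[1:]))

def is_covering(family, n, t):
    """True if every t-permutation of {0..n-1} occurs as a
    subsequence of some permutation in the family."""
    return all(any(covers(p, s) for p in family)
               for s in perms(range(n), t))

def drop_nonessential(family, n, t):
    """Greedy post-processing sketch: repeatedly delete any
    permutation that is non-essential, i.e. whose removal still
    leaves every t-subsequence covered."""
    family = list(family)
    changed = True
    while changed:
        changed = False
        for i in range(len(family)):
            reduced = family[:i] + family[i + 1:]
            if is_covering(reduced, n, t):
                family = reduced
                changed = True
                break
    return family
```

Starting from all 24 permutations of 4 symbols with t = 3, this shrinks the family while preserving coverage; since any 3 symbols appear in only one relative order per permutation, no covering family can have fewer than 3! = 6 members.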
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
In this paper we explore the design, implementation, and analysis of two different approaches to providing music recommendations to targeted users by implementing the Gram-ART unsupervised learning algorithm. We provide a content filtering approach using a dataset of one million songs, which includes various metadata tags, and a collaborative filtering approach using the listening histories of over one million users. The two methods are evaluated by their results in the Million Song Dataset Challenge. While both placed near the top third of the 150 challenge participants, the knowledge gained from the experiments will help further refine the process and would likely produce much better results in a system with the potential to scale by several orders of magnitude.
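For a flavor of the collaborative filtering side, here is a toy co-occurrence recommender. This is not Gram-ART, and all names are hypothetical: it scores candidate songs by how often they co-occur with the user's songs in other listeners' histories.

```python
from collections import Counter

def recommend(histories, user_songs, k):
    """Toy collaborative filter: count, over all listening histories
    that share at least one song with the user, how often each new
    song appears, and return the k top-scoring songs."""
    scores = Counter()
    for history in histories:
        if user_songs & set(history):
            # Credit every song in this overlapping history that the
            # user has not already listened to.
            scores.update(set(history) - user_songs)
    return [song for song, _ in scores.most_common(k)]
```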
Contributors: Meiss, Trevor (Author) / Meuth, Ryan (Thesis director) / Miller, Phill (Committee member) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
Optical communications are currently of great interest to the space engineering community. After successful projects like the Lunar Laser Communications Demonstration (LLCD), NASA has become interested in augmenting its current Deep Space Network (DSN) with optical communication links. One such link is Deep Space Optical Communications (DSOC), which will launch with the Psyche mission. To convey the advantages of this network, this thesis reviews the history and benefits of optical communications both on Earth and in space, and then examines NASA's DSOC project in depth through an algorithmic implementation of the communications channel.
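Deep-space optical links of this class commonly signal with pulse-position modulation (PPM), where each group of bits selects which time slot in a frame carries the laser pulse. The following toy encoder/decoder is an illustrative assumption about the channel, not code from the thesis.

```python
import math

def ppm_encode(bits, M):
    """Map each log2(M)-bit group to a length-M frame with a single
    'pulse' (1) in the slot selected by the group's binary value."""
    k = int(math.log2(M))
    assert len(bits) % k == 0, "bit stream must divide into k-bit groups"
    frames = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)
        frames.append([1 if j == slot else 0 for j in range(M)])
    return frames

def ppm_decode(frames):
    """Recover the bit stream from the pulse position in each frame."""
    bits = []
    for frame in frames:
        slot = frame.index(1)
        k = len(frame).bit_length() - 1  # log2 of the frame length
        bits.extend(int(b) for b in format(slot, f"0{k}b"))
    return bits
```

PPM trades bandwidth for photon efficiency, which is why it suits power-starved deep-space downlinks.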
Contributors: Horton, Paul Alexander (Author) / Mauskopf, Philip (Thesis director) / Sandy, Douglas (Committee member) / Martin, Thomas (Committee member) / Software Engineering (Contributor) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, which iterates Fourier transforms with constraints imposed in each domain on every iteration, and phase lifting, which converts the problem to trace minimization, allowing convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal can be constructed, starting from such an eigenvector, that will have a greater return.
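The alternating-projections idea can be sketched as an error-reduction loop, assuming a real, nonnegative signal so the signal-domain constraint is simple; the hand-rolled DFT keeps the example self-contained. This is an illustrative sketch, not the thesis code.

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (and its inverse)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def error_reduction(magnitudes, n_iters=200, seed=0):
    """Alternating projections for phase retrieval: in the Fourier
    domain, keep the current phases but impose the measured
    magnitudes; in the signal domain, impose real nonnegativity."""
    rng = random.Random(seed)
    n = len(magnitudes)
    x = [rng.random() for _ in range(n)]  # random initial guess
    for _ in range(n_iters):
        X = dft(x)
        # Fourier-domain projection: measured magnitude, current phase.
        X = [m * cmath.exp(1j * cmath.phase(v))
             for m, v in zip(magnitudes, X)]
        x = dft(X, inverse=True)
        # Signal-domain projection: real and nonnegative.
        x = [max(v.real, 0.0) for v in x]
    return x
```

Fienup showed that the magnitude-mismatch error is non-increasing under this iteration, though it can stagnate at local minima, which is one reason convex alternatives such as phase lifting are attractive.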
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Computer Science and Dance are choice-driven disciplines. The outputs of their processes are compositions of experience. Dancers are not computers and computers are not people, but there are comparable traces of humanity in the way each interprets and interacts with its respective inputs, outputs, and environments. These overlaps are perhaps not obvious, but in an increasingly specialized world it is important to discuss them. Dynamic Programming and improvisational movement exist within exclusive corners of their respective fields and are characterized by their inherent adaptation to change. Inspired by the work of Ivar Hagendoorn, John Cage, and other interdisciplinary artists, complexMovement is motivated by the need to create space for intersections between these two powerful disciplines and to find overlaps in the questions they ask to achieve their goals. Dance and Computer Science are just one example of such hidden partnerships. The two allow ample side-by-side comparisons, but for the purposes of this work we focus on two smaller sectors of their studies: improvisational movement and the design of Dynamic Programming algorithms.
Contributors: Ohlsen, Lai Yi Ni (Author) / Britt, Melissa (Thesis director) / Crissman, Angel (Committee member) / Standley, Eileen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Art (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper is a supplement to our interactive algorithmic art generator project, which can be found at weiverlyroe.github.io/waverlyplace. For this thesis, we demonstrate how, with certain input, we can algorithmically generate art, specifically a playable random maze with exactly one solution. We explore interactive and algorithmic art and how our mazes are a form of both. Through examining several maze generation algorithms, we show that an ideal representation of a single-solution maze, called a perfect maze, is a spanning tree of a planar graph. The final algorithm is a re-imagining of Kruskal's minimum spanning tree algorithm with two adjustments: (1) potential edges are ordered randomly rather than sorted, and (2) certain edges are forced into the maze so that the wall structure displays the player's text input. Lastly, we discuss improvements that could be made and features we plan to add to the project in the future.
Contributors: Roeger, Waverly Wu (Author) / Richa, Andrea (Thesis director) / Ellsworth, Angela (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Predicting the binding sites of proteins has historically relied on the determination of protein structural data. However, the ability to take binding data obtained from a simple assay and computationally make the same predictions using only sequence information would be more efficient, both in time and resources. The purpose of this study was to evaluate the effectiveness of an algorithm developed to predict regions of high binding on proteins, as applied to determining the regions of interaction between binding partners. This approach was applied to tumor necrosis factor alpha (TNFα), its receptor TNFR2, programmed cell death protein-1 (PD-1), and one of its ligands, PD-L1. The algorithms accurately predicted the binding region between TNFα and TNFR2, in which the interacting residues are sequential on TNFα, but failed to predict discontinuous regions of binding as accurately. The interface of PD-1 and PD-L1 contained continuous residues interacting with each other; however, this region was predicted to bind more weakly than regions on the external portions of the molecules. Limitations of this approach include the use of a linear search window (resulting in an inability to predict discontinuous binding residues) and, in the case of PD-1 and PD-L1, the use of proteins with unnaturally exposed regions (resulting in observed interactions that would not occur naturally). However, this method was overall very effective in using the available information to make accurate predictions. The combination of a microarray to obtain binding information and a computer algorithm to analyze it is a versatile tool that can be adapted to refine accuracy.
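The linear search window mentioned among the limitations can be sketched as follows. The scoring and names are hypothetical placeholders for the assay-derived per-residue binding data, not the study's actual algorithm:

```python
def top_windows(scores, window, k):
    """Slide a fixed-width linear window over per-residue binding
    scores, average each window, and return the start positions of
    the k highest-scoring windows (candidate high-binding regions)."""
    averages = [(sum(scores[i:i + window]) / window, i)
                for i in range(len(scores) - window + 1)]
    # Highest average first; ties broken by earlier position.
    averages.sort(key=lambda t: (-t[0], t[1]))
    return [i for _, i in averages[:k]]
```

A contiguous binding patch shows up as a high-average window, which is exactly why this scheme cannot flag discontinuous binding residues: their signal is diluted across separate windows.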
Contributors: Brooks, Meilia Catherine (Author) / Woodbury, Neal (Thesis director) / Diehnelt, Chris (Committee member) / Ghirlanda, Giovanna (Committee member) / Department of Psychology (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter which could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line of particles to count to and store values much larger than their individual capacities. We then present an algorithm that takes a matrix and a vector as input and then sets up and uses a rectangular block of particles to compute the matrix-vector multiplication. This setup also utilizes the counter implementation to store the resulting vector from the matrix-vector multiplication. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or to analyze the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar types of computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
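The counter idea, a line of constant-memory particles jointly storing a value far beyond any single particle's capacity, can be sketched sequentially. This toy ripple-carry version only illustrates the principle and ignores the asynchronous, message-passing nature of the real model:

```python
class Particle:
    """Each particle stores a single base-b digit (constant memory);
    a line of them acts as one large counter."""
    def __init__(self, base):
        self.base = base
        self.digit = 0

def increment(line):
    """Ripple-carry increment: a carry token travels along the line
    until some particle absorbs it without overflowing."""
    for p in line:
        p.digit += 1
        if p.digit < p.base:
            return True        # carry absorbed, increment done
        p.digit = 0            # overflow: pass the carry onward
    return False               # every particle overflowed: counter full

def value(line):
    """Read the counter: digits are little-endian along the line."""
    return sum(p.digit * p.base ** i for i, p in enumerate(line))
```

A line of m particles with base b counts up to b**m - 1, so capacity grows exponentially in the line length even though each particle's state stays constant.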
Contributors: Porter, Alexandra Marie (Author) / Richa, Andrea (Thesis director) / Xue, Guoliang (Committee member) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (referred to as particles) with limited computational power that each execute fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle uses local rules to create an overall convex structure containing no holes; (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains; and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. For the Markov chain algorithm for compression, we improve the original bounds on the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it is an important prerequisite for compression algorithms that use a predetermined seed particle.
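The bias parameter lambda enters through a Metropolis-style acceptance rule. The following is a schematic sketch of that filter under simplified assumptions (only the neighbor-pair counts before and after a candidate move are considered), not the algorithm's full implementation with its locality and connectivity checks:

```python
import random

def accept_move(edges_before, edges_after, lam, rng=random):
    """Metropolis filter used in Markov chain compression schemes:
    accept a candidate particle move with probability
    min(1, lam ** (edges_after - edges_before)), where the edge
    counts tally pairs of neighboring particles. lam > 1 favors
    gaining neighbors (compression); lam < 1 favors losing them
    (expansion)."""
    return rng.random() < min(1.0, lam ** (edges_after - edges_before))
```

Moves that gain neighbors are always accepted when lam > 1, while moves that lose neighbors are accepted only occasionally, so over time the system drifts toward tightly packed configurations.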
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05