Matching Items (11)
Description
This thesis discusses three recent optimization problems that seek to reduce disease spread on arbitrary graphs by deleting edges, along with the three approximation algorithms developed for these problems. Important definitions are presented, including the Linear Threshold and Triggering Set models and the set function properties of submodularity and monotonicity. Important results regarding the Linear Threshold model and the computation of the influence function are also presented, along with proof sketches. The three main problems are formally stated, and NP-hardness results with proof sketches are given where applicable. The first problem seeks to reduce the spread of infection over the Linear Threshold process by making use of an efficient tree data structure. The second problem seeks to reduce the spread of infection over the Linear Threshold process while preserving the PageRank distribution of the input graph. The third problem seeks to minimize the spectral radius of the input graph. The algorithms designed for these problems are described in prose and pseudocode, and their approximation bounds are stated along with their time complexities. The discussion considers how these algorithms could see real-world use, noting the challenges they face and the ways in which they do or do not overcome them. Two related works, one presenting an edge-deletion disease-spread-reduction problem over a deterministic threshold process and the other considering a graph modification problem aimed at minimizing worst-case disease spread, are compared with the three main works to provide additional perspective. Finally, a new problem is proposed that could avoid some issues faced by the three main problems, and directions for future work are suggested.
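For intuition about the third problem, the sketch below shows one common greedy baseline for spectral-radius reduction by edge deletion: score each remaining edge by the product of the leading eigenvector's entries at its endpoints and delete the highest-scoring edge, repeating k times. This is a hedged illustration only, not the approximation algorithm analyzed in the thesis, and the function name is hypothetical.

```python
# Illustrative greedy heuristic for reducing a graph's spectral radius by
# deleting k edges. This is NOT the thesis's algorithm; it is a standard
# baseline that scores each edge (u, v) by x[u] * x[v], where x is the
# leading eigenvector of the adjacency matrix.
import numpy as np

def greedy_spectral_edge_deletion(adj, k):
    """adj: symmetric 0/1 NumPy adjacency matrix; k: number of edges to delete."""
    adj = adj.astype(float).copy()
    n = adj.shape[0]
    for _ in range(k):
        _, vecs = np.linalg.eigh(adj)      # eigenvalues ascending
        x = np.abs(vecs[:, -1])            # eigenvector of the largest eigenvalue
        best, best_score = None, -1.0
        for u in range(n):
            for v in range(u + 1, n):
                if adj[u, v] > 0 and x[u] * x[v] > best_score:
                    best, best_score = (u, v), x[u] * x[v]
        if best is None:
            break                           # no edges left to delete
        u, v = best
        adj[u, v] = adj[v, u] = 0.0
    return adj

# Example: delete 2 edges from a 5-cycle plus one chord and compare radii.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[u, v] = A[v, u] = 1
A2 = greedy_spectral_edge_deletion(A, 2)
print(np.linalg.eigvalsh(A)[-1], np.linalg.eigvalsh(A2)[-1])
```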
ContributorsStanton, Andrew Warren (Author) / Richa, Andrea (Thesis director) / Czygrinow, Andrzej (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties (such as color, density, and even shape) based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
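The sketch below illustrates one way such a particle abstraction can be encoded: a particle on a triangular lattice that is either contracted (occupying one node) or expanded (occupying two adjacent nodes). It is loosely modeled on amoebot-style particle systems; the names, lattice encoding, and move rules are illustrative assumptions, not the model defined in the thesis.

```python
# Minimal sketch of a particle on a triangular lattice (axial coordinates).
# A contracted particle occupies one node; an expanded particle occupies two
# adjacent nodes (head and tail). Illustrative only.
DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]  # 6 lattice neighbors

class Particle:
    def __init__(self, node):
        self.head = node          # always occupied
        self.tail = None          # occupied only while expanded

    def is_expanded(self):
        return self.tail is not None

    def expand(self, direction, occupied):
        """Expand the head into an adjacent empty lattice node."""
        dx, dy = DIRS[direction]
        target = (self.head[0] + dx, self.head[1] + dy)
        if not self.is_expanded() and target not in occupied:
            self.tail, self.head = self.head, target
            occupied.add(target)
            return True
        return False

    def contract_tail(self, occupied):
        """Contract by releasing the tail node."""
        if self.is_expanded():
            occupied.discard(self.tail)
            self.tail = None
            return True
        return False

occupied = {(0, 0)}
p = Particle((0, 0))
p.expand(0, occupied)        # now occupies (0, 0) and (1, 0)
p.contract_tail(occupied)    # contracted again, at (1, 0)
```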
ContributorsLaff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
As technologies advance, so does the curiosity and exploration of humankind. There are many domains across this planet that remain unexplored, the depths of Earth's ocean being one of the most prominent. While the ocean covers seventy percent of Earth's surface, a vast ninety-five percent of this realm remains untouched and unseen by the human eye. The biggest cause of this can be identified in the limitations of current technologies and the large expense associated with delving into these dangerous and uncharted areas. Underwater communication between unmanned devices is the solution to this problem. With the oceanic deployment of wirelessly connected unmanned underwater vehicles (UUVs), researchers can limit risks to human safety and retrieve invaluable oceanographic data from unimaginable depths. However, before this system can be physically deployed, the network topology and environmental interactions must be simulated. More specific to the application: how does an optical signal attenuate as it propagates between transmitter and receiver? A widely used open-source network simulator is the ns series: ns-1, ns-2, and ns-3. Ns-3 is the most recent version and is a valuable tool for modeling network interactions. However, underwater simulation poses a limitation: a three-dimensional consideration of pressure. To properly model this interaction, it is vital that an extension to ns-3 be provided in order to account for the effects pressure has on the propagation of a signal at varying depths.
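As a rough illustration of what a depth-aware propagation-loss model encodes, the sketch below applies a Beer-Lambert style decay with an attenuation coefficient that is assumed to grow with depth. It is written as standalone Python rather than ns-3 C++, the coefficients and the linear depth dependence are assumptions, and it is not the thesis's actual ns-3 extension.

```python
# Hedged sketch of a depth-dependent optical attenuation model.
# Received intensity follows I = I0 * exp(-c(depth) * distance), where the
# attenuation coefficient c is assumed (for illustration) to increase
# linearly with depth as a stand-in for pressure and water-property effects.
import math

def received_intensity(tx_intensity, distance_m, depth_m,
                       c_surface=0.05, c_per_meter_depth=1e-4):
    """Return the received optical intensity after travelling distance_m
    at an (average) depth of depth_m. Coefficient values are illustrative."""
    c = c_surface + c_per_meter_depth * depth_m   # assumed depth dependence
    return tx_intensity * math.exp(-c * distance_m)

# Example: a 10 W source over 30 m at 200 m depth.
print(received_intensity(10.0, 30.0, 200.0))
```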
ContributorsSowa, Ryan John (Author) / Richa, Andrea (Thesis director) / Saripalli, Srikanth (Committee member) / Zhou, Chenyang (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2013-05
Description
Many systems in the world, such as cellular networks, the postal service, or transportation pathways, can be modeled as networks or graphs. The practical applications of graph algorithms generally seek to achieve some goal while minimizing some cost such as money or distance. While the minimum linear arrangement (MLA) problem has been widely studied among graph ordering and embedding problems, there have been no developments into versions of the problem involving degree higher than 2. An application of our problem can be seen in overlay networks in telecommunications. An overlay network is a virtual network that is built on top of another network; it is a logical network where the links between nodes represent the physical paths connecting the nodes in the underlying infrastructure. The underlying physical network may be incomplete, but as long as it is connected, we can build a complete overlay network on top of it. Since some nodes may be overloaded by traffic, we can reduce the strain on the overlay network by limiting the communication between nodes. Some edges, however, may have more importance than others, so we must be careful about our selection of which nodes are allowed to communicate with each other. The balance of reducing the degree of the network while maximizing communication forms the basis of our d-degree minimum arrangement problem. In this thesis we will look at several approaches to solving the generalized d-degree minimum arrangement (d-MA) problem, where we embed a graph onto a subgraph of a given degree. We first look into the requirements and challenges of solving the d-MA problem. We will then present a polynomial-time heuristic and compare its performance with the optimal solution derived from integer linear programming. We will show that a simple (d-1)-ary tree construction provides the optimal structure for uniform graphs with large request sets. Finally, we will present experimental data gathered from running simulations on a variety of graphs to evaluate the efficiency of our heuristic and tree construction.
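The sketch below gives a straightforward reading of the complete (d-1)-ary tree overlay mentioned above: every internal node has one parent and d-1 children, so no node exceeds degree d. It is an assumption-laden illustration of that structure, not the thesis's exact construction or heuristic.

```python
# Hedged sketch: build the edge list of a complete (d-1)-ary tree overlay
# on n nodes. The root gets d-1 children and every other internal node gets
# one parent plus d-1 children, so the maximum degree is d.
def d_minus_1_ary_tree(n, d):
    """Return (parent, child) edges of a complete (d-1)-ary tree on nodes 0..n-1."""
    if d < 2:
        raise ValueError("need d >= 2 to form a tree")
    k = d - 1                        # branching factor
    edges = []
    for child in range(1, n):
        parent = (child - 1) // k    # standard array encoding of a k-ary tree
        edges.append((parent, child))
    return edges

print(d_minus_1_ary_tree(10, 3))     # d = 3 gives a binary-tree overlay, max degree 3
```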
ContributorsWang, Xiao (Author) / Richa, Andrea (Thesis director) / Nakamura, Mutsumi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This paper is a supplement to our interactive algorithmic art generator project, which can be found at weiverlyroe.github.io/waverlyplace. For this thesis, we demonstrate how, with certain input, we can algorithmically generate art, specifically a playable random maze with exactly one solution. We explore interactive and algorithmic art and how our mazes are a form of both. Through examining several maze generation algorithms, we show that an ideal representation of a single-solution maze, called a perfect maze, is a spanning tree of a planar graph. The final algorithm is a re-imagining of Kruskal's Minimum Spanning Tree Algorithm with these adjustments: (1) potential edges are ordered randomly rather than sorted, and (2) certain edges are forced into the maze in order for the wall structure to display the player's text input. Lastly, we discuss improvements which could be made and features which we plan to add to the project in the future.
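The sketch below shows the randomized-Kruskal idea in its simplest form: grid edges are shuffled instead of sorted by weight, and a union-find structure accepts an edge (removes the wall between two cells) only when it joins two different components, which yields a spanning tree and hence a perfect maze. The forced text-display edges from the thesis would simply be unioned before the random pass; function names here are illustrative.

```python
# Hedged sketch of randomized Kruskal maze generation on a width x height grid.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def random_kruskal_maze(width, height, seed=None):
    rng = random.Random(seed)
    cells = [(x, y) for y in range(height) for x in range(width)]
    parent = {c: c for c in cells}
    # All candidate walls between horizontally or vertically adjacent cells.
    edges = [((x, y), (x + 1, y)) for x in range(width - 1) for y in range(height)]
    edges += [((x, y), (x, y + 1)) for x in range(width) for y in range(height - 1)]
    rng.shuffle(edges)                  # random order replaces sorting by weight
    passages = []
    for a, b in edges:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:                    # different components: knock down the wall
            parent[ra] = rb
            passages.append((a, b))
    return passages                     # exactly width*height - 1 passages

print(len(random_kruskal_maze(5, 5, seed=1)))   # -> 24
```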
ContributorsRoeger, Waverly Wu (Author) / Richa, Andrea (Thesis director) / Ellsworth, Angela (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Implementing a distributed algorithm is more complicated than implementing a non-distributed algorithm. This is because distributed systems involve coordination of different processes, each of which has only a partial view of the global system state. The only way to share information in a distributed system is by message passing. Tasks that are straightforward in a non-distributed system, like deciding on the value of a global system state, can be quite complicated to achieve in a distributed system [1]. On top of the difficulties caused by the distributed nature of the computations, distributed systems typically need to be able to operate normally even if some of the nodes in the system are faulty, which further adds to the uncertainty that processes have about the global state. Many factors make the implementation of distributed algorithms difficult.

Design patterns [2] are useful in simplifying the development of general algorithms. A design pattern describes a high-level solution to a common, abstract problem that many systems may face. Common structural, creational, and behavioral problems are identified and elegantly solved by design patterns. By identifying features that an algorithm uses, and framing each feature as one of the common problems that a specific design pattern solves, designing a robust implementation of an algorithm becomes more manageable. In this way, design patterns can aid the implementation of algorithms. Unfortunately, design patterns are typically not discussed when developing distributed algorithms. Because correctly developing a distributed algorithm is difficult, many papers (e.g., [1], [3], [4]) focus on verifying the correctness of the developed algorithm. More practical papers ([5], [6]) establish both the correctness of their algorithm and that it is efficient enough to be practical. However, papers on distributed algorithms usually make little mention of design patterns.

The goal of this work was to gain experience implementing distributed systems, including learning the application of design patterns and of related technical topics. This was achieved by implementing a currently unpublished algorithm that is tentatively called Bakery Consensus. Bakery Consensus is a replicated state-machine protocol that can tolerate servers with Byzantine faults, but assumes non-faulty clients. The algorithm also establishes non-skipping timestamps for each operation completed by the replicated state-machine. The design of the structure, communication, and creation of the different system parts depended heavily upon the book Design Patterns [2]. After implementing the system, the success of its various parts was evaluated based upon their ability to satisfy the SOLID principles [7] as well as their ability to establish low coupling and high cohesion [8].

The rest of this paper is organized as follows. We begin by providing background information about distributed algorithms, including replicated state-machine protocols and the Bakery Consensus algorithm. Section 3 gives background on several design patterns and software engineering principles that were used in the development process. Section 4 discusses the well-designed parts of the system that used design patterns, and how these design patterns were chosen. Section 5 discusses well-designed system parts that relied upon other technical topics. Section 6 discusses system parts that need redesign. The conclusion summarizes what was accomplished by the implementation process and the lessons learned about design patterns for distributed algorithms.
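As a small illustration of the kind of pattern use discussed here, the sketch below applies one behavioral pattern (Strategy) to a server's message dispatch: each message type is handled by its own handler object, so new message types can be added without touching the dispatch loop, supporting low coupling and high cohesion. The class names, message types, and state fields are hypothetical and do not come from the Bakery Consensus implementation.

```python
# Hedged illustration of the Strategy pattern for message handling in a
# replicated state-machine server. Names are hypothetical.
from abc import ABC, abstractmethod

class MessageHandler(ABC):
    @abstractmethod
    def handle(self, payload, state):
        ...

class RequestHandler(MessageHandler):
    def handle(self, payload, state):
        state["pending"].append(payload)                       # queue the client request

class TimestampHandler(MessageHandler):
    def handle(self, payload, state):
        state["clock"] = max(state["clock"], payload) + 1      # non-skipping bump

class Server:
    def __init__(self):
        self.state = {"pending": [], "clock": 0}
        self.handlers = {"REQUEST": RequestHandler(),
                         "TIMESTAMP": TimestampHandler()}

    def dispatch(self, msg_type, payload):
        self.handlers[msg_type].handle(payload, self.state)

s = Server()
s.dispatch("REQUEST", "op-1")
s.dispatch("TIMESTAMP", 5)
print(s.state)   # {'pending': ['op-1'], 'clock': 6}
```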
ContributorsStoutenburg, Tristan Kaleb (Author) / Bazzi, Rida (Thesis director) / Richa, Andrea (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter that could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant-memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line of particles to count to and store values much larger than their individual capacities. We then present an algorithm that takes a matrix and a vector as input and then sets up and uses a rectangular block of particles to compute the matrix-vector multiplication. This setup also utilizes the counter implementation to store the resulting vector from the matrix-vector multiplication. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than they would be on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or analyzing the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar types of computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
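The sketch below conveys the counter idea in miniature: a line of particles, each holding only a single base-k digit (constant memory), together stores a number far larger than any one particle could, with increments propagating a carry along the line much like a token passed between neighbors. The class names and the base are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch of a distributed counter over a line of constant-memory particles.
class DigitParticle:
    BASE = 4                       # constant per-particle capacity

    def __init__(self):
        self.digit = 0

    def increment(self):
        """Add 1 to this digit; return True if a carry must pass to the next particle."""
        self.digit += 1
        if self.digit == self.BASE:
            self.digit = 0
            return True
        return False

class ParticleLineCounter:
    def __init__(self, length):
        self.line = [DigitParticle() for _ in range(length)]

    def increment(self):
        for p in self.line:        # least-significant particle first
            if not p.increment():
                return
        raise OverflowError("the particle line is full")

    def value(self):
        return sum(p.digit * DigitParticle.BASE ** i
                   for i, p in enumerate(self.line))

c = ParticleLineCounter(3)
for _ in range(37):
    c.increment()
print(c.value())                   # 37, stored as base-4 digits 1, 1, 2
```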
ContributorsPorter, Alexandra Marie (Author) / Richa, Andrea (Thesis director) / Xue, Guoliang (Committee member) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds on the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
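The sketch below isolates the Metropolis-style acceptance rule at the heart of Markov-chain compression: a proposed particle move is accepted with probability min(1, lambda^(new_neighbors - old_neighbors)), so a bias lambda greater than 1 favors gaining neighbors (compression) and a bias less than 1 favors losing them (expansion). The local validity checks of the full algorithm (connectivity and hole conditions) are deliberately omitted, and the function name is illustrative.

```python
# Hedged sketch of the biased acceptance rule used in Markov-chain compression.
import random

def accept_move(old_neighbors, new_neighbors, lam, rng=random):
    """Return True if a proposed move should be accepted under bias lam."""
    prob = min(1.0, lam ** (new_neighbors - old_neighbors))
    return rng.random() < prob

# A move from a spot with 2 neighbors to one with 4 is always accepted when
# lambda = 4; the reverse move is accepted with probability 4**(-2) = 1/16.
print(accept_move(2, 4, 4.0))
```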
ContributorsDaymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description

Secure Scuttlebutt is a digital social network in which the network data is distributed among the users. This is done to secure several benefits, like offline browsing, censorship resistance, and imitation of natural social networks, but it comes with downsides, like the lack of an obvious implementation of a recommendation algorithm. This paper proposes Whuffie, an algorithm that uses conditional probabilities to track each user's reputation for having information that is interesting to a given user. Some errors in the main Secure Scuttlebutt network prevent current large-scale testing of the usefulness of the algorithm, but testing on my own personal account leads me to believe it is a success.
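The sketch below shows one simple way a conditional-probability reputation score of this kind can be estimated: each peer's reputation is the estimated probability that content they surface turns out to be interesting, with Laplace smoothing so unseen peers start at 0.5. This is an illustration under those assumptions, not the Whuffie algorithm's exact formula.

```python
# Hedged sketch of a conditional-probability reputation score.
from collections import defaultdict

class Reputation:
    def __init__(self):
        self.shown = defaultdict(int)    # items surfaced by each peer
        self.liked = defaultdict(int)    # of those, items I found interesting

    def record(self, peer, interesting):
        self.shown[peer] += 1
        if interesting:
            self.liked[peer] += 1

    def score(self, peer):
        """Estimate P(interesting | surfaced by peer), Laplace-smoothed."""
        return (self.liked[peer] + 1) / (self.shown[peer] + 2)

r = Reputation()
for liked in (True, True, False, True):
    r.record("alice", liked)
print(round(r.score("alice"), 2))        # 0.67
```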

ContributorsVermillion, Alexander J (Author) / Bazzi, Rida (Thesis director) / Richa, Andrea (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

In the age of growing technology, Computer Science (CS) professionals have come into high demand. However, despite this demand, there are not enough computer scientists to fill these roles. The current demographic of computer scientists consists mainly of white men. This apparent gender gap must be addressed to promote diversity and inclusivity in a career that requires high creativity and innovation. To understand what reinforces gender stereotypes and the gender gap within CS, survey and interview data were collected from both male and female senior students studying CS and from those who have left the CS program at Arizona State University. Students were asked what experiences either diminished or reinforced their sense of belonging in this field, as well as other questions related to their involvement in CS. Interview and survey data reveal that a lack of representation within courses, as well as a lack of peer support, are key factors that influence the involvement and retention of students in CS, especially women. These data were used to identify key factors that influence retention and what can be done to remedy the growing deficit of professionals in this field.

ContributorsKent, Victoria (Author) / Kappes, Janelle (Thesis director) / Forrest, Stephanie (Committee member) / Richa, Andrea (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor)
Created2023-05