Matching Items (143)
Description

Developed a business product with a team of CS students.

Contributors: Perri, Cole Thomas (Co-author) / Hernandez, Maximilliano (Co-author) / Schneider, Kaitlin (Co-author) / Call, Andy (Thesis director) / Hunt, Neil (Committee member) / School of Accountancy (Contributor) / Watts College of Public Service & Community Solutions (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This dissertation studies routing in small-world networks, such as grids augmented with long-range edges, as well as real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency observed since Milgram's experiments. A counterpart for the grid-based model is provided: it creates all edges deterministically and shows an asymptotically matching upper bound on the route length. The main goal is to improve greedy routing through a decentralized machine learning process. Two methods are considered, based on weighted majority and on an algorithm of de Farias and Megiddo, both learning from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of the method and that of the shortest path. Adaptive routing overtakes greedy routing after about a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment of one-third to one-half of that incurred by greedy routing. The problem is made more difficult by eliminating knowledge of neighbors' locations or by introducing uncooperative nodes; even under these conditions, the learned routes are usually better than the greedy ones. The second part of the dissertation concerns the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping ones), where each node locally decides to which potential communities it belongs. To measure the quality of a cover of overlapping communities, a notion of a node's contribution to modularity is introduced, and the notion of modularity is then extended from partitions to covers. The final part considers the problem of network anonymization, chiefly by means of edge deletion, with utility preservation as the point of interest. It is shown that concentrating on preserving routing ability can damage the preservation of community structure, and vice versa.
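For a concrete reference point on the routing model above, the sketch below (an illustration under Kleinberg's model, not the dissertation's code) simulates greedy geographic routing on an n × n grid where each node has its grid neighbors plus one long-range contact drawn with probability proportional to d^-2, and each step forwards to whichever neighbor is closest to the target:

```python
import random

def manhattan(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def long_range_contact(u, n):
    # Pr[v] proportional to manhattan(u, v)^-2, per Kleinberg's model.
    candidates = [(x, y) for x in range(n) for y in range(n) if (x, y) != u]
    weights = [manhattan(u, v) ** -2 for v in candidates]
    return random.choices(candidates, weights)[0]

def greedy_route_length(src, dst, n):
    # Forward greedily to whichever neighbor is closest to dst; count steps.
    cur, steps = src, 0
    while cur != dst:
        x, y = cur
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        nbrs.append(long_range_contact(cur, n))  # resampled each step (a simplification)
        cur = min(nbrs, key=lambda v: manhattan(v, dst))
        steps += 1
    return steps
```

Resampling the long-range contact at every step keeps the sketch short; the expected route length remains polylogarithmic in the grid size under this model.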
Contributors: Bakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Contention-based IEEE 802.11 MAC uses the binary exponential backoff (BEB) algorithm for contention resolution. The protocol performs poorly in heavily loaded networks and MANETs, suffering a high collision rate, packet drops, only probabilistic delay guarantees, and unfairness. Many backoff strategies have been proposed to improve the performance of IEEE 802.11, but all ignore the network topology and demand. Persistence is defined as the fraction of time a node is allowed to transmit; when this allowance takes topology and load into account, it is called topology- and load-aware (TLA) persistence. We develop a relation between the contention window size and the TLA persistence, and we implement a new backoff strategy in which the TLA persistence is defined as the lexicographic max-min channel allocation. A centralized algorithm calculates each node's TLA persistence and then converts it into a contention window size. The new backoff strategy is evaluated in simulation against IEEE 802.11 using BEB. In most static scenarios, such as the exposed terminal, flow in the middle, star topology, and heavily loaded multi-hop networks, as well as in MANETs, the simulation study shows that the new backoff strategy achieves higher overall average throughput than IEEE 802.11 using BEB.
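To make the persistence-to-window conversion concrete, here is a minimal sketch assuming the common approximation that a node with contention window CW attempts transmission at a rate of roughly 2/(CW + 1); the exact relation developed in the thesis may differ:

```python
def persistence_to_cw(p, cw_min=15, cw_max=1023):
    # Invert the approximation p ~ 2 / (CW + 1), clamping to 802.11-style bounds.
    if not 0.0 < p <= 1.0:
        raise ValueError("persistence must lie in (0, 1]")
    cw = 2.0 / p - 1.0
    return int(min(max(cw, cw_min), cw_max))

# e.g. a node entitled to 10% of the channel gets CW = 19:
# persistence_to_cw(0.10) -> 19
```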
Contributors: Bhyravajosyula, Sai Vishnu Kiran (Author) / Syrotiuk, Violet R. (Thesis advisor) / Sen, Arunabha (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to degrade gracefully in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far apart. However, in many real-life scenarios there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. Accordingly, the main thrust of this research is the design and analysis of networks that are robust to region-based faults. One important finding of this research is that prior knowledge of the maximum size of the region that might be affected by a region-based fault can be effectively utilized for resource-efficient network design; it is shown in this dissertation that in some scenarios such knowledge yields substantial savings in transmission power in wireless networks. The impact of region-based faults on the connectivity of wireless networks is studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and the region-based largest component size (RBLCS), are proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
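The two proposed fragmentation metrics are straightforward to compute for a given fault region. A minimal sketch (assuming networkx, with fault regions given as node sets; an illustration, not the dissertation's code):

```python
import networkx as nx

def region_fault_metrics(G, region):
    # Remove every node in the fault region, then measure what survives.
    H = G.copy()
    H.remove_nodes_from(region)
    comps = list(nx.connected_components(H))
    rbcdn = len(comps)                               # component decomposition number
    rblcs = max((len(c) for c in comps), default=0)  # largest component size
    return rbcdn, rblcs

def worst_case_region(G, regions):
    # Most damaging candidate region: most fragments, then smallest survivor.
    return max(regions,
               key=lambda r: (region_fault_metrics(G, r)[0],
                              -region_fault_metrics(G, r)[1]))
```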
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
While network problems have traditionally been addressed within a central administrative domain pursuing a single objective, the devices in most networks are not owned by a single entity but by many individual entities. These entities make their decisions independently and selfishly, and may cooperate with a small group of other entities only when such a coalition yields a better return. The interaction among multiple independent decision-makers necessitates the use of game theory, including economic notions related to markets and incentives. This dissertation models, analyzes, and addresses network problems caused by the selfish behavior of network entities. First, we study how selfish behavior affects system performance when users compete for limited resources. In this resource allocation domain, we study the selfish routing problem in networks with fair queuing on links, the relay assignment problem in cooperative networks, and the channel allocation problem in wireless networks. A second aspect of this dissertation is the design of efficient mechanisms that incentivize network entities to achieve a given system objective. In this incentive mechanism domain, we aim to motivate wireless devices to serve as relays for cooperative communication and to recruit smartphones for crowdsourcing. Finally, we apply game-theoretic approaches to problems in the security and privacy domain: we analyze how a user can defend against a smart jammer that quickly learns the user's transmission power, and we design mechanisms that encourage mobile phone users to participate in location privacy protection in order to achieve k-anonymity.
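As a toy illustration of the selfish behavior studied here (not a model from the dissertation itself), the sketch below runs best-response dynamics in an atomic congestion game on parallel links; such games admit a potential function, so the iteration is guaranteed to reach a pure Nash equilibrium:

```python
from collections import Counter

def parallel_links_equilibrium(num_players, num_links):
    # Each player's cost is the load of its chosen link; iterate best responses.
    choice = {i: i % num_links for i in range(num_players)}
    load = Counter(choice.values())
    improved = True
    while improved:                      # terminates: the game has a potential function
        improved = False
        for i in range(num_players):
            cur = choice[i]
            # Cost of staying is load[cur]; cost of moving to l is load[l] + 1.
            best = min(range(num_links),
                       key=lambda l: load[l] + (0 if l == cur else 1))
            if best != cur and load[best] + 1 < load[cur]:
                load[cur] -= 1
                load[best] += 1
                choice[i] = best
                improved = True
    return choice, dict(load)
```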
Contributors: Yang, Dejun (Author) / Xue, Guoliang (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Broadcast Encryption is the task of cryptographically securing communication in a broadcast environment so that only a dynamically specified subset of subscribers, called the privileged subset, may decrypt the communication. In practical applications, it is desirable for a Broadcast Encryption Scheme (BES) to be resilient against attacks by colluding, unprivileged subscribers. Minimal Perfect Hash Families (PHFs) have been shown to provide a basis for the construction of memory-efficient t-resilient Key Pre-distribution Schemes (KPSs) from multiple instances of 1-resilient KPSs. Using this technique, the task of constructing a large t-resilient BES is reduced to finding a near-minimal PHF of appropriate parameters. While combinatorial and probabilistic constructions exist for minimal PHFs with certain parameters, the complexity of constructing them in general is currently unknown. This thesis introduces a new type of hash family, called a Scattering Hash Family (ScHF), designed to allow the scalable and ingredient-independent design of memory-efficient BESs for large parameters, specifically resilience and total number of subscribers. A general BES construction using ScHFs is shown, which builds t-resilient KPSs from other KPSs of any resilience w ≤ t. In addition to demonstrating how ScHFs can be used to produce BESs, this thesis explores several ScHF construction techniques. The first is a probabilistic, non-constructive proof of existence for ScHFs. This construction is then derandomized into a direct, polynomial-time construction of near-minimal ScHFs using the method of conditional expectations. As an alternative to direct construction, representing ScHFs as a k-restriction problem allows their indirect construction via randomized post-optimization. Using these methods, ScHFs are constructed and the effect of the parameters on solution size is analyzed. For large strengths, constructive techniques lose significant performance, so asymptotic analysis is performed using the non-constructive existential results. This work concludes with an analysis of the benefits and disadvantages of BESs based on the constructed ScHFs. Due to the novel nature of ScHFs, the results of this analysis serve as the foundation for an empirical comparison between ScHF-based and PHF-based BESs, with construction efficiency, key material requirements, and message transmission overhead as the primary bases of comparison.
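For orientation, the sketch below checks the classical PHF separation property that ScHFs relax: a PHF of strength t is an N × n array over m symbols in which every choice of t columns contains a row whose t values are all distinct. The representation and names are illustrative, not the thesis's code:

```python
from itertools import combinations

def is_perfect_hash_family(rows, t):
    # rows: an N x n array over m symbols; strength t means every set of
    # t columns is separated (all values distinct) by at least one row.
    n = len(rows[0])
    return all(
        any(len({row[c] for c in cols}) == t for row in rows)
        for cols in combinations(range(n), t)
    )

# e.g. a PHF(3; 4, 3, 3): every 3 of the 4 columns are separated by some row.
# is_perfect_hash_family([[0, 1, 2, 0], [0, 1, 0, 2], [0, 1, 1, 2]], 3) -> True
```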
Contributors: O'Brien, Devon James (Author) / Colbourn, Charles J (Thesis advisor) / Bazzi, Rida (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The Dual Marching Tetrahedra (DMT) algorithm is a generalization of the Dual Marching Cubes algorithm, used to build a boundary surface around points that have been assigned a scalar density value, such as the data produced by a Magnetic Resonance Imaging or Computed Tomography scanner. This boundary acts as a skin between points determined to be "inside" and "outside" an object. However, the DMT is vague about exactly where each vertex of the boundary should be placed, so it does not necessarily produce smooth results. Mesh smoothing algorithms that ignore the DMT data structures may distort the output mesh so that it incorrectly includes or excludes density points. Thus, an algorithm is presented here that smooths the output mesh while obeying the underlying data structures of the DMT algorithm.
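A hedged sketch of the general approach, constraint-respecting Laplacian smoothing, where a per-vertex feasible box stands in for the DMT cell constraint that the thesis derives from its tetrahedral data structures:

```python
import numpy as np

def constrained_smooth(verts, neighbors, lo, hi, alpha=0.5, iters=10):
    # verts: (V, 3) array; neighbors: list of neighbor-index lists per vertex;
    # lo, hi: (V, 3) bounds approximating where each vertex may legally sit
    # without flipping any density point's inside/outside classification.
    v = verts.astype(float).copy()
    for _ in range(iters):
        centroids = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v += alpha * (centroids - v)   # pull toward neighbor centroid (Laplacian step)
        v = np.clip(v, lo, hi)         # re-impose the placement constraint
    return v
```

The clamp after each step is the key design point: unconstrained smoothing alone is exactly what the abstract warns can distort the mesh past the density points.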
Contributors: Johnson, Sean (Author) / Farin, Gerald (Thesis advisor) / Richa, Andrea (Committee member) / Nallure Balasubramanian, Vineeth (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This project explores how modern mobile technology can be used to support victims of domestic violence. The goal of the project is to create a proof-of-concept iOS mobile application that maintains a discreet front and provides domestic violence victims with resources and safety planning. The design and implementation are disguised as a hair salon app to maintain a low profile on the user's phone. The HairHelp app features quick-exit navigation, a secure database for storing a user's private and personal documents in case of emergency, and a checklist of safety planning measures. The steps taken in this project serve as the foundation for a larger long-term project.

Contributors: Shovkovy, Sophia (Author) / Balasooriya, Janaka (Thesis director) / Wilkey, Douglas (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

HackerHero is an educational game designed to teach children, especially those from marginalized backgrounds, the computational thinking skills needed for STEAM fields; it also teaches them about social injustice. This project focused on creating an audio visualization for an AI character within the HackerHero game. The visualization consists of a static silhouette of a face and a wave-like form representing the mouth. Audio content analysis was performed on audio sampled from the character's voice lines, and the pitch and amplitude derived from this analysis were used to animate the character's visual features, such as its brightness, color, and mouth movement. The mouth's movement and color were driven by the audio's pitch, while Wave's lights were controlled by the audio's amplitude. Design considerations were made to accommodate users with visual disabilities such as color blindness and epilepsy. Overall, the final audio visualization satisfied the project sponsor and built upon existing audio visualization work; user feedback will be essential for improving it in the future.
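A simplified sketch of the kind of audio content analysis described, assuming numpy and a mono floating-point signal (the project's actual analysis pipeline is not specified here): per-frame RMS amplitude would drive the lights' brightness, and an autocorrelation pitch estimate would drive mouth movement and color.

```python
import numpy as np

def frame_features(signal, sr, frame=2048, hop=512, fmin=60.0, fmax=500.0):
    # Returns (rms, pitch_hz) per frame: RMS -> brightness,
    # autocorrelation peak restricted to [fmin, fmax] -> pitch.
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        rms = float(np.sqrt(np.mean(x * x)))
        ac = np.correlate(x, x, mode="full")[frame - 1:]   # non-negative lags
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((rms, sr / lag))
    return feats
```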

Contributors: Nguyen, Joshep D (Author) / Chavez-Echaegaray, Helen (Thesis director) / Waggoner, Trae (Committee member) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties, such as color, density, and even shape, based on predefined rules or on continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
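The sketch below illustrates the flavor of such a particle model (a toy under assumptions not in the paper): particles occupy grid cells and act asynchronously, and an activated particle expands into an adjacent empty cell and then contracts forward; the bonding and connectivity rules a full expansion algorithm needs are omitted.

```python
import random

GRID_NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def activate_one(occupied):
    # occupied: set of (x, y) cells, one particle per cell. A randomly chosen
    # particle expands into an adjacent empty cell, then contracts forward.
    x, y = random.choice(sorted(occupied))
    empty = [(x + dx, y + dy) for dx, dy in GRID_NBRS
             if (x + dx, y + dy) not in occupied]
    if empty:
        occupied.discard((x, y))
        occupied.add(random.choice(empty))
    return occupied
```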
Contributors: Laff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05