Description
The original version of Helix, the one I pitched when first deciding to make a video game
for my thesis, is an action-platformer, with the intent of metroidvania-style progression
and an interconnected world map.

The current version of Helix is a turn-based role-playing game, with the intent of roguelike gameplay and a dark fantasy theme. We will first explore the challenges that came with programming my own game - not quite from scratch, but also without a prebuilt engine - then transition into game design and how Helix has evolved from its original form to what we see today.
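
The abstract describes building a turn-based role-playing game without a prebuilt engine. Purely as a hypothetical illustration (none of this is taken from Helix itself), a minimal turn-based combat loop of the kind such a game rests on might look like the sketch below; the combatant stats and damage rule are invented.

```python
# Illustrative sketch only: a minimal turn-based combat loop of the kind an
# engine-free RPG might build on. Names and numbers are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Combatant:
    name: str
    hp: int
    attack: int

    def alive(self) -> bool:
        return self.hp > 0

def take_turn(attacker: Combatant, defender: Combatant) -> None:
    # Simple damage roll; a real game would dispatch on the player's chosen action.
    damage = random.randint(1, attacker.attack)
    defender.hp -= damage
    print(f"{attacker.name} hits {defender.name} for {damage} ({defender.hp} HP left)")

def battle(player: Combatant, enemy: Combatant) -> Combatant:
    turn_order = [player, enemy]
    while player.alive() and enemy.alive():
        for actor in turn_order:
            target = enemy if actor is player else player
            if actor.alive() and target.alive():
                take_turn(actor, target)
    return player if player.alive() else enemy

if __name__ == "__main__":
    winner = battle(Combatant("Hero", 30, 6), Combatant("Ghoul", 25, 5))
    print(f"{winner.name} wins")
```
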
Contributors: Discipulo, Isaiah K (Author) / Meuth, Ryan (Thesis director) / Kobayashi, Yoshihiro (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Global violent conflict has become an increasing problem in recent decades, especially on the African continent. Civil wars, terrorism, riots, and political violence have wrought havoc not only on civilian lives, but also on economic foundations. Trade networks are a way to measure these economic foundations. To summarize trade networks, the clustering coefficient as well as trade quantity/value summation measures are used. To understand the effects of global trade on violent conflict, Pearson product-moment correlations are utilized. This work details a comparison of African national economies and violent conflict events using the clustering coefficient, trade summation measures, and the Pearson correlation coefficient.
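
As a rough illustration of the two measures named in this abstract, the sketch below computes an average clustering coefficient on a toy trade graph and a Pearson product-moment correlation between invented trade and conflict series; it assumes the networkx and scipy libraries and is not the thesis's actual data pipeline.

```python
# Illustrative sketch of the two measures named in the abstract: a network
# clustering coefficient and a Pearson correlation. The toy trade network and
# conflict counts below are made up for demonstration.
import networkx as nx
from scipy.stats import pearsonr

# Hypothetical trade network: nodes are countries, edges are trade relationships.
trade = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])

# Average clustering coefficient summarizes how tightly trade partners interlink.
clustering = nx.average_clustering(trade)
print(f"average clustering coefficient: {clustering:.3f}")

# Pearson product-moment correlation between a trade summary measure and
# counts of violent-conflict events (toy yearly values).
trade_value = [120, 95, 143, 160, 150]
conflict_events = [14, 22, 9, 7, 8]
r, p = pearsonr(trade_value, conflict_events)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```
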
Contributors: Kadambi, Sagarika Sanjay (Author) / Maciejewski, Ross (Thesis director) / Shutters, Shade (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties - such as color, density, and even shape - based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
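
As a loose illustration only, the sketch below grows a particle system outward from a seed on a plain square grid; the thesis's model has far stricter local rules (bonds, expansion and contraction moves), so this conveys the flavor of such a simulation rather than the actual algorithm.

```python
# Highly simplified sketch of particle expansion on a square grid: starting
# from a seed, empty neighbors of the current structure are filled one particle
# per step, growing the system outward. This only illustrates the flavor of
# the simulation, not the thesis's particle model.
from collections import deque

def expand(seed=(0, 0), steps=10):
    occupied = {seed}                # cells currently holding a particle
    frontier = deque([seed])         # particles that may still recruit neighbors
    for _ in range(steps):
        if not frontier:
            break
        x, y = frontier.popleft()
        for ax, ay in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (ax, ay) not in occupied:
                occupied.add((ax, ay))
                frontier.append((ax, ay))
    return occupied

if __name__ == "__main__":
    print(sorted(expand(steps=5)))
```
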
Contributors: Laff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
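
To make the notion of a "non-essential" permutation concrete, the sketch below checks whether a set of permutations covers every ordered t-tuple as a subsequence and lists the permutations that could individually be removed without losing coverage; the permutations and parameters are illustrative and are not the constructions studied in the thesis.

```python
# Sketch of the coverage check behind the post-optimization idea: every ordered
# t-tuple of distinct symbols must appear as a subsequence of at least one
# permutation, and a permutation is "non-essential" if removing it leaves every
# tuple still covered. Parameters and permutations are illustrative.
from itertools import permutations

def covers(perm, tup):
    # True if tup appears (in order, not necessarily contiguously) in perm.
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(tup, tup[1:]))

def is_covering(perms, n, t):
    return all(any(covers(p, tup) for p in perms)
               for tup in permutations(range(n), t))

def non_essential(perms, n, t):
    # Permutations whose individual removal keeps full coverage of all t-tuples.
    return [p for p in perms
            if is_covering([q for q in perms if q is not p], n, t)]

if __name__ == "__main__":
    perms = [(0, 1, 2, 3), (3, 2, 1, 0), (1, 3, 0, 2), (2, 0, 3, 1)]
    print(is_covering(perms, n=4, t=2))    # True: every ordered pair is covered
    print(non_essential(perms, n=4, t=2))  # candidates for removal, one at a time
```
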
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, we detail different motivations for bots, we describe previous work in bot detection and observation, and then we perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a vastly untapped focus in bot detection: the diffusion of extremist ideals through bots.
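
The abstract does not list the five heuristics that were measured, so the sketch below invents a few generic account-level heuristics purely to show how such a scoring scheme might be assembled; every feature name and threshold here is hypothetical.

```python
# Hypothetical sketch of simple per-account bot heuristics. The five heuristics
# actually used in the thesis are not listed in the abstract, so these features
# and thresholds are invented for illustration.
def heuristic_score(account):
    score = 0
    if account["tweets_per_day"] > 100:           # unusually high posting rate
        score += 1
    if account["duplicate_tweet_ratio"] > 0.5:    # mostly repeated content
        score += 1
    if account["followers"] < 0.01 * account["following"]:  # follow-spam pattern
        score += 1
    if account["default_profile_image"]:          # no customized profile
        score += 1
    if account["account_age_days"] < 30:          # very new account
        score += 1
    return score  # higher score = more bot-like

if __name__ == "__main__":
    suspect = {"tweets_per_day": 240, "duplicate_tweet_ratio": 0.8,
               "followers": 3, "following": 1200,
               "default_profile_image": True, "account_age_days": 12}
    print(heuristic_score(suspect))  # 5: flagged as likely bot
```
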
Contributors: Karlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
This is a report of a study that investigated the thinking of a high-achieving precalculus student when responding to tasks that required him to define linear formulas to relate covarying quantities. Two interviews were conducted for analysis. A team of us in the mathematics education department at Arizona State University initially identified mental actions that we conjectured were needed for constructing meaningful linear formulas. This guided the development of tasks for the sequence of clinical interviews with one high-performing precalculus student. Analysis of the interview data revealed that in instances when the subject engaged in meaning making that led to him imagining and identifying the relevant quantities and how they change together, he was able to give accurate definitions of variables and was usually able to define a formula to relate the two quantities of interest. However, we found that the student sometimes had difficulty imagining how the two quantities of interest were changing together. At other times he exhibited a weak understanding of the operation of subtraction and the idea of constant rate of change. He did not appear to conceptualize subtraction as a quantitative comparison. His inability to conceptualize a constant rate of change as a proportional relationship between the changes in two quantities also presented an obstacle in his developing a meaningful formula that relied on this understanding. The results further stress the need to develop a student's ability to engage in mental operations that involve covarying quantities and a more robust understanding of constant rate of change since these abilities and understanding are critical for student success in future courses in mathematics.
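
As a concrete illustration of the mathematical idea at stake (not one of the study's interview tasks), a constant rate of change means the change in one quantity is a fixed multiple of the change in the other, which is exactly what yields a linear formula:

```latex
% Illustrative example only; not taken from the study's interview tasks.
% A constant rate of change m makes the changes in the two quantities
% proportional, which is what produces the linear formula.
\[
  \Delta y = m\,\Delta x
  \qquad\Longrightarrow\qquad
  y = y_0 + m\,(x - x_0)
\]
% Example: a 20 cm candle burning at 1.5 cm per hour gives y = 20 - 1.5x.
```
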
Contributors: Klinger, Tana Paige (Author) / Carlson, Marilyn (Thesis director) / Thompson, Pat (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
This work explores the development of a visual analytics tool for geodemographic exploration in an online environment. We mine 78 million records from the United States white pages, link the location data to demographic data (specifically income) from the United States Census Bureau, and allow users to interactively compare distributions of names with regard to spatial location similarity and income. To enable interactive similarity exploration, we investigate methods of pre-processing the data as well as on-the-fly lookups. As data becomes larger and more complex, the development of appropriate data storage and analytics solutions has become even more critical for enabling online visualization. We discuss problems faced in implementation, design decisions, and directions for future work.
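
One plausible shape for the pre-processed lookups described here (an assumption, not the tool's actual design) is a per-region count table for each surname compared with a similarity measure; the sketch below uses invented counts and cosine similarity to illustrate the idea.

```python
# Illustrative sketch of a pre-aggregated name lookup: per-region counts for
# each surname, compared with cosine similarity so two surnames with similar
# spatial distributions score near 1. The counts below are invented; the real
# system mines 78 million white-pages records.
import math

# Hypothetical pre-aggregated counts: surname -> {region: count}
name_counts = {
    "Nguyen": {"West": 900, "South": 300, "Midwest": 150, "Northeast": 250},
    "Tran":   {"West": 700, "South": 260, "Midwest": 120, "Northeast": 180},
    "Olson":  {"West": 200, "South": 150, "Midwest": 800, "Northeast": 220},
}

def cosine_similarity(a, b):
    regions = sorted(set(a) | set(b))
    va = [a.get(r, 0) for r in regions]
    vb = [b.get(r, 0) for r in regions]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    print(cosine_similarity(name_counts["Nguyen"], name_counts["Tran"]))   # high
    print(cosine_similarity(name_counts["Nguyen"], name_counts["Olson"]))  # lower
```
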
Contributors: Ibarra, Jose Luis (Author) / Maciejewski, Ross (Thesis director) / Mack, Elizabeth (Committee member) / Longley, Paul (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05
Description
Due to the popularity of the movie industry, a film's opening weekend box-office performance is of great interest not only to movie studios, but to the general public as well. In hopes of maximizing a film's opening weekend revenue, movie studios invest heavily in pre-release advertisement. The most visible advertisement is the movie trailer, which, in no more than two minutes and thirty seconds, serves as many people's first introduction to a film. The question, however, is how we can be confident that a trailer will succeed in its promotional task and bring in the audience a studio expects. In this thesis, we use machine learning classification techniques to determine the effectiveness of a movie trailer in the promotion of its namesake. We accomplish this by creating a predictive model that automatically analyzes the audio and visual characteristics of a movie trailer to determine whether or not a film's opening will be successful by earning at least 35% of its production budget during its first U.S. box office weekend. Our predictive model performed reasonably well, achieving an accuracy of 68.09% in a binary classification. Accuracy increased to 78.62% when genre was included in the predictive model.
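
A minimal sketch of this classification setup is shown below, assuming scikit-learn and stand-in feature vectors rather than real trailer audio/visual features; only the 35%-of-budget labeling rule comes from the abstract.

```python
# Sketch of the binary-classification setup described in the abstract: a film
# is labeled "successful" if its opening U.S. weekend earns at least 35% of its
# production budget, and a classifier is fit to trailer features. The feature
# values and films here are randomly generated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((40, 6))                      # stand-in audio/visual feature vectors
budget = rng.uniform(20, 200, size=40)       # production budget ($M)
opening = rng.uniform(1, 120, size=40)       # opening weekend gross ($M)
y = (opening >= 0.35 * budget).astype(int)   # the 35%-of-budget success label

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```
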
Contributors: Williams, Terrance D'Mitri (Author) / Pon-Barry, Heather (Thesis director) / Zafarani, Reza (Committee member) / Maciejewski, Ross (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05
Description
The areas of cloud computing and web services have grown rapidly in recent years, resulting in software that is more interconnected and widely used than ever before. As a result of this proliferation, there needs to be a way to assess the quality of these web services in order to ensure their reliability and accuracy. This project explores different ways in which services can be tested and evaluated through the design of various testing techniques and their implementations in a web application, which can be used by students or developers to test their web services.
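
As one hypothetical example of the kind of functional test such a tool might run against a web service, the sketch below calls an endpoint and asserts on its status code, latency, and payload; the URL, parameters, and expected fields are placeholders, not part of the project described.

```python
# Minimal sketch of a functional web-service test: call an endpoint, then
# assert on status code, response time, and payload. The URL and expected
# fields are hypothetical placeholders.
import unittest
import requests

SERVICE_URL = "http://localhost:8080/api/add"   # placeholder endpoint

class AddServiceTest(unittest.TestCase):
    def test_addition_endpoint(self):
        resp = requests.get(SERVICE_URL, params={"a": 2, "b": 3}, timeout=5)
        self.assertEqual(resp.status_code, 200)             # service is reachable
        self.assertLess(resp.elapsed.total_seconds(), 1.0)  # simple performance check
        self.assertEqual(resp.json().get("result"), 5)      # correctness of the result

if __name__ == "__main__":
    unittest.main()
```
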
Contributors: Hilliker, Mark Paul (Author) / Chen, Yinong (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Many systems in the world - such as cellular networks, the postal service, or transportation pathways - can be modeled as networks or graphs. The practical applications of graph algorithms generally seek to achieve some goal while minimizing some cost such as money or distance. While the minimum linear arrangement (MLA) problem has been widely studied amongst graph ordering and embedding problems, there have been no developments into versions of the problem involving degrees higher than 2. An application of our problem can be seen in overlay networks in telecommunications. An overlay network is a virtual network that is built on top of another network. It is a logical network where the links between nodes represent the physical paths connecting the nodes in the underlying infrastructure. The underlying physical network may be incomplete, but as long as it is connected, we can build a complete overlay network on top of it. Since some nodes may be overloaded by traffic, we can reduce the strain on the overlay network by limiting the communication between nodes. Some edges, however, may have more importance than others, so we must be careful in selecting which nodes are allowed to communicate with each other. The balance between reducing the degree of the network and maximizing communication forms the basis of our d-degree minimum arrangement problem. In this thesis we will look at several approaches to solving the generalized d-degree minimum arrangement (d-MA) problem, where we embed a graph onto a subgraph of a given degree. We first look into the requirements and challenges of solving the d-MA problem. We will then present a polynomial-time heuristic and compare its performance with the optimal solution derived from integer linear programming. We will show that a simple (d-1)-ary tree construction provides the optimal structure for uniform graphs with large request sets. Finally, we will present experimental data gathered from running simulations on a variety of graphs to evaluate the efficiency of our heuristic and tree construction.
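
To illustrate the (d-1)-ary tree construction mentioned at the end, the sketch below builds a complete (d-1)-ary tree, so no node exceeds degree d, and scores the arrangement by total tree distance over a hypothetical request set; it assumes the networkx library, and the cost measure is a stand-in rather than the thesis's exact objective.

```python
# Sketch of a (d-1)-ary tree arrangement: place n nodes in a complete
# (d-1)-ary tree so no node exceeds degree d, then score the arrangement by
# summing tree distances over a request set. The request set is illustrative.
import networkx as nx

def dma_tree(n, d):
    # Complete (d-1)-ary tree on n nodes: an internal node has d-1 children
    # plus one parent, so its degree is at most d.
    return nx.full_rary_tree(d - 1, n)

def arrangement_cost(tree, requests):
    # Total hop distance the overlay must route for the given communication requests.
    return sum(nx.shortest_path_length(tree, u, v) for u, v in requests)

if __name__ == "__main__":
    tree = dma_tree(n=15, d=3)                       # binary tree, max degree 3
    requests = [(0, 14), (3, 7), (1, 2), (5, 12)]    # hypothetical node pairs
    print(arrangement_cost(tree, requests))
```
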
Contributors: Wang, Xiao (Author) / Richa, Andrea (Thesis director) / Nakamura, Mutsumi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05