Matching Items (11)
Description
In this project, the use of deep neural networks for selecting actions to execute within an environment in order to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies which form a dependency tree. An agent operating within these environments has access to very little data about the environment before interacting with it, so it is crucial that the agent is able to effectively utilize the tree of dependencies and its environmental surroundings to make judgements about which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency tree heuristics or human-like approaches to make sub-goal oriented choices), with an average performance advantage of 33.86% (with a standard deviation of 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little prior information.
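As a rough illustration of the Q-learning component described above, the following Python sketch shows a single temporal-difference update for choosing among candidate sub-goals. The feature encoding, reward values, and linear approximator here are hypothetical stand-ins; the thesis itself uses a deep neural network to approximate Q-values.

```python
import numpy as np

# Minimal sketch of a Q-learning update for sub-goal selection.
# The feature encoding and reward are hypothetical; the thesis uses a
# deep neural network rather than this linear stand-in.

N_FEATURES = 8      # hypothetical features describing a (state, sub-goal) pair
ALPHA, GAMMA = 0.1, 0.95

weights = np.zeros(N_FEATURES)

def q_value(features: np.ndarray) -> float:
    """Approximate Q(s, a) as a linear function of the features."""
    return float(weights @ features)

def q_update(features, reward, next_candidates):
    """One temporal-difference step toward reward + gamma * max Q(s', a')."""
    best_next = max((q_value(f) for f in next_candidates), default=0.0)
    td_error = reward + GAMMA * best_next - q_value(features)
    weights[:] = weights + ALPHA * td_error * features

# Example: the agent completed a cheap sub-goal (cost expressed as negative reward).
current = np.random.rand(N_FEATURES)
next_options = [np.random.rand(N_FEATURES) for _ in range(3)]
q_update(current, reward=-1.0, next_candidates=next_options)
```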
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The functional programming paradigm is able to provide clean and concise solutions to many common programming problems, as well as promote safer, more testable code by encouraging an isolation of state-modifying behavior. Functional programming is finding its way into traditionally object-oriented and imperative languages, most notably with the introduction of Java 8 and with LINQ for C#. However, no functional programming language has achieved widespread adoption, meaning that students without a formal computer science background who learn technology on demand for personal projects or for business may not come across functional programming in a significant way. Programmers need a reason to spend time learning these concepts so as not to miss out on the subtle but profound benefits they provide. I propose the use of a video game as an environment in which learning functional programming is the player's goal. In this carefully constructed video game, learning functional programming is the key to progression. Players will be motivated to learn and will be given an immediate chance to test and demonstrate their understanding. The game, named Lambda Starship (stylized as (lambda () starship)), is a 3D first-person video game. It takes place in a spaceship that, due to extreme magnetic interference, has lost all on-board software while leaving the hardware completely intact. The player is tasked with writing software using functional programming paradigms to replace the old software and bring the spaceship back to a working state. Throughout the process, the player is guided by an in-game manual and other descriptive resources. The game is implemented in Unity and scripted using C#. The game's educational and entertainment value was evaluated with a case study: 24 undergraduate students at Arizona State University (ASU) played the game and were surveyed about their experience. During play, user statistics were recorded automatically, providing a data-driven way to analyze where players struggled with the concepts introduced in the game. Reception was neutral or positive on both the entertainment and educational sides of the game. A few players expressed concerns about the manual's form factor and engagement value.
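For readers unfamiliar with the paradigm the game teaches, the short Python sketch below contrasts an imperative accumulation loop with an equivalent pipeline of pure, higher-order functions, in the spirit of Java 8 streams or LINQ. It is a generic illustration only and is not code from Lambda Starship.

```python
from functools import reduce

readings = [3.2, 7.8, 1.1, 9.4, 5.0]

# Imperative version: mutates an accumulator step by step.
total = 0.0
for r in readings:
    if r > 4.0:
        total += r * 2

# Functional version: the same result as a pipeline of pure steps,
# with no variable mutated along the way.
total_fp = reduce(
    lambda acc, x: acc + x,
    map(lambda r: r * 2, filter(lambda r: r > 4.0, readings)),
    0.0,
)

assert total == total_fp
```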
Contributors: Compton, Tyler Alexander (Author) / Gonzalez-Sanchez, Javier (Thesis director) / Bansal, Srividya (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing man on the moon to helping to understand how the universe works at the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers has consistently widened, to the point where computers help brains with their thinking in countless everyday situations around the world. The first purpose of this research project was to conduct a brief review in order to gain a sound understanding of how both brains and computers operate at fundamental levels, and of what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review of the most advanced BCI technology available today and expanding upon the findings to argue for the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.
Contributors: Thum, Giuseppe Edwardo (Author) / Gaffar, Ashraf (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The Internet of Things (IoT) is a term used to refer to the billions of Internet-connected, embedded devices that communicate with one another for the purpose of sharing data or performing actions. One of the core uses of such a network is the ability of its devices and services to interact with one another to automate daily tasks and routines. For example, IoT devices are often used to automate tasks within the household, such as turning the lights on and off or starting the coffee pot. However, designing a modular system to create and schedule these routines is a difficult task.

Current IoT integration utilities attempt to help simplify this task, but most fail to satisfy a requirement many users want in such a system: simplified integration with third-party devices. This project seeks to solve this issue through the creation of an easily extendable, modular integration utility. It is open-source and does not require a cloud-based server; instead, users host the server themselves. With a server and data controller implemented in pure Python and a library for embedded ESP8266 microcontroller-powered devices, the solution seeks to satisfy both casual users and those interested in developing their own integrations.
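The following sketch hints at what a self-hosted, pure-Python control endpoint of this kind might look like, using only the standard library. The device registry, route naming, and toggle behavior are hypothetical and are not the project's actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory registry of device states; a real integration
# utility would populate this from its third-party device modules.
devices = {"living_room_light": {"on": False}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report the current state of all registered devices as JSON.
        body = json.dumps(devices).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Toggle the device named in the path, e.g. POST /living_room_light
        name = self.path.strip("/")
        if name in devices:
            devices[name]["on"] = not devices[name]["on"]
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Self-hosted: the user runs this on their own machine, no cloud needed.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```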
Contributors: Beagle, Bryce Edward (Author) / Acuna, Ruben (Thesis director) / Jordan, Shawn (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Affective computing allows computers to monitor and influence people's affects, that is, their emotions. Currently, there is a great deal of research exploring what can be done with this technology, and there are many fields, such as education, healthcare, and marketing, that it could transform. However, it is important to question what should be done. There are unique ethical considerations with regard to affective computing that have not yet been explored. The purpose of this study is to understand the user's perspective on affective computing with regard to the Association for Computing Machinery (ACM) Code of Ethics, and ultimately to start developing a better understanding of these ethical concerns. For this study, participants were required to watch three different videos and answer a questionnaire, all while wearing an Emotiv EPOC+ EEG headset that measured their emotions. Using the information gathered, the study explores the ethics of affective computing through the user's perspective.

Contributors: Injejikian, Angelica (Author) / Gonzalez-Sanchez, Javier (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The aim of this project is to understand the basic algorithmic components of the transformer deep learning architecture. At a high level, a transformer is a machine learning model, developed as a successor to recurrent neural networks, that adopts a self-attention mechanism capable of weighing the significant parts of sequential input data; this makes it very useful for solving problems in natural language processing and computer vision. Other approaches to these problems have been implemented in the past (i.e., convolutional neural networks and recurrent neural networks), but those architectures suffer from the vanishing gradient problem when an input becomes too long (which essentially means the network loses its memory and halts learning) and have slow training times in general. The transformer architecture's features enable a much better "memory" and a faster training time, which makes it a better architecture for solving these problems. Most of this project will be spent producing a survey that captures the current state of research on the transformer, along with the background material needed to understand it. First, I will do a keyword search of the most well-cited and up-to-date peer-reviewed publications on transformers to understand them conceptually. Next, I will investigate any programming frameworks required to implement the architecture, and use them to implement a simplified version of the architecture or follow an easy-to-use guide or tutorial in implementing it. Once the programming aspect of the architecture is understood, I will then implement a transformer based on the academic paper "Attention Is All You Need" and slightly tweak this model using my understanding of the architecture to improve performance. Once finished, the details (i.e., successes, failures, process, and inner workings) of the implementation will be evaluated and reported, as well as the fundamental concepts surveyed. The motivation behind this project is to explore the rapidly growing area of AI algorithms, and the transformer in particular was chosen because it is a major milestone for engineering with AI and software. Since their introduction, transformers have provided a very effective way of solving natural language processing tasks, allowing related applications to succeed with high speed while maintaining accuracy. This type of model can also be applied to more cutting-edge natural language processing applications, such as extracting semantic information from a text description and generating an image to satisfy it.
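Since the abstract centers on the self-attention mechanism, a compact NumPy sketch of scaled dot-product attention, as defined in "Attention Is All You Need", is included below. The toy shapes and random projection matrices are illustrative only, not part of the thesis implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                   # weighted sum of values

# Toy example: 4 tokens, model dimension 8, random learned projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (4, 8): one attended representation per token
```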

Contributors: Cereghini, Nicola (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created: 2023-05
Description

The seamless integration of autonomous vehicles (AVs) into highly interactive and dynamic driving environments requires AVs to safely and effectively communicate with human drivers. Furthermore, the design of motion planning strategies that satisfy safety constraints inherits the challenges involved in implementing a safety-critical and dynamics-aware motion planning algorithm that produces feasible motion trajectories. Driven by the complexities of arriving at such a motion planner, this thesis leverages a motion planning toolkit that utilizes spline parameterization to compute the optimal motion trajectory within a dynamic environment. Our approach combines techniques originating from optimal control, vehicle dynamics, and spline interpolation. To ensure dynamic feasibility of the computed trajectories, we formulate the optimal control problem in relation to the intrinsic state constraints derived from the bicycle state space model. In addition, we apply input constraints to bound the rate of change of the steering angle and the acceleration provided to the system. To produce collision-averse trajectories, we enforce extrinsic state constraints extracted from the static and dynamic obstacles in the surrounding environment. We also exploit the mathematical properties of B-splines, such as the convex hull property and the piecewise composition of polynomial functions. Next, we focus on constructing a highly interactive environment in which the configured optimal control problem is deployed. Vehicle interactions are categorized into two distinct cases: Case 1 represents a single-agent interaction, whereas Case 2 represents a multi-agent interaction. The computed motion trajectories for each case are displayed in simulation.
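To make the intrinsic state constraints derived from the bicycle state space model more concrete, the sketch below forward-integrates a standard kinematic bicycle model under bounded steering and acceleration inputs. The wheelbase, bounds, and inputs are illustrative assumptions; the thesis itself solves a spline-parameterized optimal control problem rather than a simple rollout.

```python
import numpy as np

L = 2.7  # hypothetical wheelbase [m]

def bicycle_step(state, accel, steer, dt=0.05):
    """Propagate state [x, y, heading, speed] one step under the kinematic bicycle model."""
    x, y, theta, v = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += (v / L) * np.tan(steer) * dt
    v += accel * dt
    return np.array([x, y, theta, v])

# Roll out a short trajectory under bounded inputs
# (illustrative bounds: |steer| <= 0.4 rad, |accel| <= 2 m/s^2).
state = np.array([0.0, 0.0, 0.0, 10.0])
trajectory = [state]
for _ in range(40):
    state = bicycle_step(state, accel=0.5, steer=0.05)
    trajectory.append(state)
```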

Contributors: Ganti, Sruti (Author) / Zhang, Wenlong (Thesis director) / Acuna, Ruben (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created: 2023-05
Description

In this thesis, several different methods for detecting and removing satellite streaks from astronomical images were evaluated and compared with a new machine learning based approach. Simulated data was generated under a variety of conditions, and the performance of each method was evaluated both quantitatively, using Mean Absolute Error (MAE) against a ground-truth detection mask and the processing throughput of the method, and qualitatively, by examining the situations in which each model performs well and poorly. Detection methods from the existing systems Pyradon and ASTRiDE were implemented and tested. A machine learning (ML) image segmentation model was trained on simulated data and used to detect streaks in test data. The ML model performed favorably relative to the traditional methods tested and demonstrated superior robustness in general. However, the model also exhibited some unpredictable behavior in certain scenarios, which should be taken into account. This demonstrates that machine learning is a viable tool for the detection of satellite streaks in astronomical images; however, special care must be taken to prevent, or at least minimize, the effects of unpredictable behavior in such models.
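As a small illustration of the quantitative metric mentioned above, the sketch below computes the Mean Absolute Error between a predicted streak mask and a ground-truth mask. The random masks are placeholders for real detector output and simulated ground truth.

```python
import numpy as np

def mask_mae(predicted: np.ndarray, truth: np.ndarray) -> float:
    """MAE over all pixels of two same-sized binary (or soft) detection masks."""
    return float(np.mean(np.abs(predicted.astype(float) - truth.astype(float))))

rng = np.random.default_rng(1)
truth = rng.random((256, 256)) > 0.98        # sparse ground-truth streak pixels (placeholder)
predicted = rng.random((256, 256)) > 0.98    # hypothetical detector output (placeholder)
print(f"MAE: {mask_mae(predicted, truth):.4f}")
```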

Contributors: Jeffries, Charles (Author) / Acuna, Ruben (Thesis director) / Martin, Thomas (Committee member) / Bansal, Ajay (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created: 2023-05
Description
In this paper, I explore practical applications of neural networks for automated skin lesion identification. Visual characteristics are of primary importance in the recognition of skin diseases; hence, the development of deep neural network models proven capable of classifying skin lesions can potentially change the face of modern medicine by extending the availability and lowering the cost of diagnostic care. Previous work has demonstrated the effectiveness of convolutional neural networks in image classification in general, with even higher accuracy achievable through data augmentation techniques, such as cropping, rotating, and flipping input images, along with more advanced, computationally intensive approaches. In this research, I provide an overview of convolutional neural networks (CNNs) and of CNN implementation with TensorFlow and the Keras API in the context of image recognition and classification. I also experiment with a custom convolutional neural network architecture trained on the HAM10000 dataset. The dataset used for the case study is obtained from Harvard Dataverse and is maintained by the Medical University of Vienna. The HAM10000 dataset is a large collection of multi-source dermatoscopic images of common pigmented skin lesions and is available for academic research under the Creative Commons Attribution-NonCommercial 4.0 International Public License. With over ten thousand dermatoscopic images covering seven classes of benign and malignant skin lesions, the dataset is well suited to academic multiclass image classification work. I discuss the successes and shortcomings of the model with respect to its application to this dataset.
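The sketch below shows the general shape of a Keras CNN for seven-class lesion classification of the kind the abstract describes. The layer sizes and input resolution are illustrative assumptions, not the thesis's actual architecture.

```python
from tensorflow.keras import layers, models

# Illustrative seven-class CNN; not the architecture used in the thesis.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # downscaled dermatoscopic image (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),    # HAM10000 has seven lesion classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```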
Contributors: Karaliova, Natallia (Author) / Bansal, Ajay (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

With the recent shift of attention toward remote work and mobile computing, the possibility of taking a powerful workstation wherever needed is enticing. However, even emerging laptops today struggle to compete with desktops in terms of cost, maintenance, and future upgrades. The price point of a powerful laptop is considerably higher than that of an equally powerful desktop computer, and most laptops are manufactured in a way that makes upgrading parts of the machine difficult or impossible, forcing a complete replacement when a component fails or needs an upgrade. In the case where someone already owns a desktop computer and must be mobile, instead of purchasing a second device at full price, it may be possible to develop a low-cost computer that has just enough power to connect to the existing desktop and run all processing there, using the mobile device only as a user interface. This thesis will explore the development of a custom PCB that utilizes a Raspberry Pi Compute Module 4, as well as the development of a fork of the open-source project Moonlight to stream a host machine's screen to a remote client. This implementation will be compared against existing remote desktop solutions to analyze its performance and quality.

Contributors: Lathrum, Dylan (Author) / Heinrichs, Robert (Thesis director) / Acuna, Ruben (Committee member) / Jordan, Shawn (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created: 2022-05