Description
Graph theory is a critical component of computer science and software engineering, with algorithms for graph traversal and comprehension powering many of the largest problems in both industry and research. Engineers and researchers often have an accurate view of their target graph; however, they struggle to implement a correct and efficient search over that graph.

To facilitate rapid, correct, efficient, and intuitive development of graph-based solutions, we propose a new programming language construct: the search statement. Given a supra-root node, a procedure that determines the children of a given parent node, and optional definitions of the fail-fast acceptance or rejection of a solution, the search statement can conduct a search over any graph or network. Structurally, the statement is modelled after the common switch statement and is placed in a largely imperative/procedural context to allow for immediate and intuitive adoption by most programmers. The Go programming language has been used as the foundation and proof of concept for the search statement, and a Go compiler implementing the construct is provided.
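The semantics described above can be sketched in ordinary code. The thesis realizes this as a Go language construct; the breadth-first strategy, function names, and toy numeric graph below are illustrative assumptions, not the thesis's implementation:

```python
from collections import deque

def search(root, children, accept, reject=lambda n: False):
    """Sketch of the search statement's semantics: a supra-root node,
    a children() procedure, and fail-fast accept/reject predicates."""
    queue = deque([root])
    seen = {root}
    while queue:
        node = queue.popleft()
        if reject(node):      # fail-fast rejection: prune this branch
            continue
        if accept(node):      # fail-fast acceptance: stop at first solution
            return node
        for child in children(node):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return None

# Toy graph: find the first multiple of 7 reachable from 1 by doubling
# or incrementing, rejecting anything above 100.
result = search(1,
                children=lambda n: [n * 2, n + 1],
                accept=lambda n: n % 7 == 0,
                reject=lambda n: n > 100)
```

In the proposed construct, the children procedure and the accept/reject arms would be written as clauses of the statement itself rather than passed as closures.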
ContributorsHenderson, Christopher (Author) / Bansal, Ajay (Thesis advisor) / Lindquist, Timothy (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created2018
Description
UVLabel was created to enable radio astronomers to view and annotate their own data so that they can expand their future research paths. It simplifies the data rendering process by providing a simple user interface for accessing sections of their data. Furthermore, it provides an interface for tracking trends in the data through a labelling feature.

The tool was developed following the incremental development process in order to quickly create a functional and testable tool. The incremental process also allowed for feedback from radio astronomers to help guide the project's development.

UVLabel provides both a functional product and a modifiable, scalable code base for radio astronomer developers, giving astronomers who study various kinds of astronomical interferometric data the ability to label it. The tool can then be used to improve their filtering methods, pursue machine learning solutions, and discover new trends. Finally, UVLabel will be open source, putting customization, scalability, and adaptability in the hands of these researchers.
ContributorsLa Place, Cecilia (Author) / Bansal, Ajay (Thesis advisor) / Jacobs, Daniel (Thesis advisor) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created2019
Description
Capturing the information in an image in a natural language sentence is considered a difficult problem for computers to solve. Image captioning involves not just detecting objects in images but understanding the interactions between those objects so that they can be translated into relevant captions. Expertise in computer vision must therefore be paired with natural language processing. The sequence-to-sequence modelling strategy of deep neural networks is the traditional approach to generating the sequential list of words that describes an image, but these models suffer from high variance and fail to generalize well beyond the training data.

The main focus of this thesis is reducing that variance, which helps in generating better captions. To achieve this, ensemble learning techniques, which have a reputation for solving the high-variance problem in machine learning algorithms, have been explored. Three ensemble techniques, namely k-fold ensemble, bootstrap-aggregation (bagging) ensemble, and boosting ensemble, have been evaluated in this thesis. For each of these techniques, three output-combination approaches have been analyzed. Extensive experiments have been conducted on the Flickr8k dataset, a collection of 8,000 images with 5 different captions per image. The BLEU score, a standard performance metric for evaluating natural language processing (NLP) problems, is used to evaluate the predictions. Based on this metric, the analysis shows that ensemble learning performs significantly better and generates more meaningful captions than any of the individual models used.
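The abstract does not specify the three output-combination approaches; one common scheme for combining caption models is to average the per-model next-word probability distributions at each decoding step. The sketch below assumes that scheme, with entirely hypothetical vocabularies and distributions:

```python
def ensemble_next_word(distributions):
    """Average the next-word probability distributions of several
    captioning models and pick the highest-scoring word (one possible
    output-combination scheme; others include voting or max-picking)."""
    vocab = distributions[0].keys()
    n = len(distributions)
    avg = {w: sum(d[w] for d in distributions) / n for w in vocab}
    return max(avg, key=avg.get)

# Hypothetical distributions from three independently trained captioners:
models = [
    {"dog": 0.6, "cat": 0.3, "car": 0.1},
    {"dog": 0.5, "cat": 0.4, "car": 0.1},
    {"dog": 0.4, "cat": 0.5, "car": 0.1},
]
word = ensemble_next_word(models)
```

Averaging smooths out the idiosyncratic errors of any single model, which is exactly the variance reduction the thesis targets.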
ContributorsKatpally, Harshitha (Author) / Bansal, Ajay (Thesis advisor) / Acuna, Ruben (Committee member) / Gonzalez-Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created2019
Description
The development of new Ultra-Violet/Visible/IR range (UV/Vis/IR) astronomical instrumentation that uses novel approaches to imaging and increases the accessibility of observing time for more research groups is essential for rapid innovation within the community. Unique focal planes that are rapid-prototyped, low cost, and high resolution are key.

In this dissertation, the designs of three unique focal planes are discussed. These focal planes were each designed for a different astronomical platform: suborbital balloon, suborbital rocket, and ground-based observatory. The balloon-based payload is a hexapod-actuated focal plane, the HExapod Resolution-Enhancement SYstem (HERESY), that uses tip-tilt motion to increase angular resolution by removing jitter. The suborbital rocket payload is a Jet Propulsion Laboratory (JPL) delta-doped charge-coupled device (CCD) packaged to survive the rigors of launch and image far-ultra-violet (FUV) spectra. The ground-based observatory payload is a star-centroid-tracking modification of the balloon version of HERESY for tip-tilt correction of atmospheric turbulence.

The design, construction, verification, and validation of each focal plane payload are discussed in detail. For HERESY's balloon implementation, pointing-error data from the Stratospheric Terahertz Observatory (STO) Antarctic balloon mission were used to build an experimental lab setup demonstrating that the hexapod can eliminate jitter in flight-like conditions. For the suborbital rocket focal plane, a harsh set of unit-level tests ensuring the payload could survive launch and space conditions, as well as the characterization and optimization of the JPL detector, are detailed. Finally, a modification that co-mounts a fast-read detector on the HERESY focal plane for use at ground-based observatories, intended to reduce atmospherically induced tip-tilt error through centroid tracking of bright natural guidestars, is described.
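The measurement underlying guidestar-based tip-tilt correction is the intensity-weighted centroid of the star image; the displacement of that centroid between frames drives the hexapod's corrective motion. A minimal sketch of the centroid step, with a hypothetical 3x3 pixel stamp (the dissertation's actual tracking pipeline is not reproduced here):

```python
def centroid(image):
    """Intensity-weighted centroid (x, y) of a small star image, the
    basic measurement behind guidestar tip-tilt tracking."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            x_sum += x * value
            y_sum += y * value
    return x_sum / total, y_sum / total

# Hypothetical stamp: flux split between the center pixel and its right
# neighbor, so the centroid sits half a pixel right of center.
stamp = [[0, 0, 0],
         [0, 4, 4],
         [0, 0, 0]]
cx, cy = centroid(stamp)
```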
ContributorsMiller, Alexander Duke (Author) / Scowen, Paul (Thesis advisor) / Groppi, Christopher (Committee member) / Mauskopf, Philip (Committee member) / Jacobs, Daniel (Committee member) / Butler, Nathaniel (Committee member) / Arizona State University (Publisher)
Created2019
Description
In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent effectively use the tree of dependencies and its environmental surroundings to judge which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network combined with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (standard deviation 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little prior information.
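The project uses a deep network as the Q-function; the tabular form below shows only the underlying Q-learning update, with hypothetical crafting sub-goals as the action set:

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    The thesis replaces this table with a deep neural network."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Q = {}
actions = ["gather_wood", "craft_table"]   # hypothetical sub-goals
q_update(Q, "start", "gather_wood", reward=1.0,
         next_state="has_wood", actions=actions)
```

Because sub-goal rewards propagate backward through the max term, the agent learns which branch of the dependency tree is cheapest to pursue from each state.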
ContributorsKoleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The Epoch of Reionization (EoR) is the period in the evolution of the universe during which neutral hydrogen was ionized by the first luminous sources, and is closely linked to the formation of structure in the early universe. The Hydrogen Epoch of Reionization Array (HERA) is a radio interferometer currently under construction in South Africa designed to study this era. Specifically, HERA is dedicated to studying the large-scale structure during the EoR and the preceding Cosmic Dawn by measuring the redshifted 21-cm line from neutral hydrogen. However, the 21-cm signal from the EoR is extremely faint relative to galactic and extragalactic radio foregrounds, and instrumental and environmental systematics make measuring the signal all the more difficult. Radio frequency interference (RFI) from terrestrial sources is one such systematic. In this thesis, we explore various methods of removing RFI from early science-grade HERA data and characterize the effects of different removal patterns on the final 21-cm power spectrum. In particular, we focus on the impact of masking narrowband signals, such as those characteristic of FM radio and aircraft or satellite communications, in the context of the algorithms currently used by the HERA collaboration for analysis.
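HERA's actual flagging algorithms are not reproduced in this abstract; as a rough illustration of masking narrowband RFI, one can flag channels whose power deviates strongly from a robust per-spectrum baseline. The detector below (median plus median-absolute-deviation threshold) and the toy spectrum are assumptions, not the collaboration's pipeline:

```python
import statistics

def flag_narrowband(spectrum, n_sigma=3.0):
    """Return a per-channel boolean mask flagging narrowband outliers,
    using a median/MAD threshold as a crude stand-in for a real RFI
    excision algorithm."""
    med = statistics.median(spectrum)
    mad = statistics.median([abs(x - med) for x in spectrum]) or 1e-12
    # 1.4826 scales MAD to an equivalent Gaussian standard deviation.
    return [abs(x - med) / (1.4826 * mad) > n_sigma for x in spectrum]

# Hypothetical band with a strong narrowband transmitter in channel 3:
power = [1.0, 1.1, 0.9, 50.0, 1.0, 1.2, 0.95, 1.05]
mask = flag_narrowband(power)
```

Masks like this one remove contaminated channels before the power spectrum is formed, which is why the pattern of the mask itself can imprint on the final 21-cm estimate.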
ContributorsWhitler, Lily (Author) / Jacobs, Daniel (Thesis director) / Bowman, Judd (Committee member) / Beardsley, Adam (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
The Internet of Things (IoT) is a term used to refer to the billions of Internet-connected, embedded devices that communicate with one another to share data or perform actions. One of the core uses of this network is the ability of its devices and services to interact with one another to automate daily tasks and routines. For example, IoT devices are often used to automate tasks within the household, such as turning the lights on and off or starting the coffee pot. However, designing a modular system to create and schedule these routines is a difficult task.

Current IoT integration utilities attempt to simplify this task, but most fail to satisfy a requirement many users have for such a system: simplified integration with third-party devices. This project seeks to solve this issue through the creation of an easily extendable, modular integration utility. It is open source and does not require a cloud-based server; users host the server themselves. With a server and data controller implemented in pure Python and a library for embedded ESP8266 microcontroller-powered devices, the solution seeks to satisfy both casual users and those interested in developing their own integrations.
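The project's actual server API is not shown in this abstract; a modular integration layer of the kind described is often built around a registry that third-party device handlers plug into. The class and device names below are hypothetical:

```python
class IntegrationHub:
    """Minimal sketch of a modular integration registry: third-party
    device handlers register by name, and routines dispatch to them."""

    def __init__(self):
        self._devices = {}

    def register(self, name, handler):
        """Plug in a device handler without touching the core server."""
        self._devices[name] = handler

    def run_routine(self, steps):
        """Execute a routine: a list of (device, action) pairs."""
        return [self._devices[name](action) for name, action in steps]

# Hypothetical household routine wired up with two toy handlers:
hub = IntegrationHub()
hub.register("lights", lambda action: f"lights:{action}")
hub.register("coffee", lambda action: f"coffee:{action}")
log = hub.run_routine([("lights", "on"), ("coffee", "brew")])
```

Keeping device handlers behind a uniform interface is what makes third-party integration "simplified": a new device only has to register a callable.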
ContributorsBeagle, Bryce Edward (Author) / Acuna, Ruben (Thesis director) / Jordan, Shawn (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Accurate pointing is essential for any space mission with an imaging payload. The Phoenix CubeSat mission is being designed to take thermal images of major US cities from Low Earth Orbit in order to study the Urban Heat Island effect. Accurate pointing is vital to mission success, so the satellite's Attitude Determination and Control System (ADCS) must be properly tested and calibrated on the ground to ensure that it performs to its requirements. A commercial ADCS unit, the MAI-400, has been selected for this mission. The expected environmental disturbances must be characterized and modeled to inform operations planning for this system. Appropriate control gains must also be selected to ensure an optimal satellite response. These gains are derived through a system model in Simulink and its response-optimization tool, then tested in a supplier-provided Dynamic Simulator.
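The MAI-400's internal control law and the thesis's tuned gains are not given in this abstract; the gain-selection problem it describes is of the familiar proportional-derivative (PD) form, sketched here for a single axis with made-up gains and state:

```python
def pd_torque(kp, kd, angle_error, rate_error):
    """One axis of a generic PD attitude control law: the commanded
    torque opposes both the pointing error and the body rate error.
    Gains kp and kd are what the Simulink response optimization tunes."""
    return -kp * angle_error - kd * rate_error

# Hypothetical state: 0.1 rad off target, drifting at 0.01 rad/s.
torque = pd_torque(kp=2.0, kd=8.0, angle_error=0.1, rate_error=0.01)
```

The trade the optimization resolves is the usual one: larger kp tightens pointing but risks overshoot, while larger kd damps the response at the cost of speed.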
ContributorsWofford, Justin Michael (Author) / Bowman, Judd (Thesis director) / Jacobs, Daniel (Committee member) / School of Earth and Space Exploration (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description

CubeSats can encounter a myriad of difficulties in space, such as cosmic rays, temperature issues, and loss of control. Better, more reliable software can mitigate these problems and increase the chance of mission success. This research sets out to answer the question of how to create reliable flight software for CubeSats by providing a concentrated list of the best flight software development practices. The CubeSat used in this research is the Deployable Optical Receiver Aperture (DORA) CubeSat, a 3U CubeSat that seeks to demonstrate optical communication data rates of 1 Gbps over long distances. We present an analysis of many of the flight software development practices currently in use in the industry, drawing on industry leaders such as NASA, and identify three key areas of focus: memory, concurrency, and error handling. Within each of these areas, the best practices were defined. These practices were also developed using experience from creating the flight software for the DORA CubeSat, in order to drive the design and testing of the system. We analyze DORA's effectiveness in the three areas of focus and discuss how following the identified best practices helped to create a more reliable flight software system for the DORA CubeSat.
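In the error-handling area, one widely used flight-software pattern is bounding every fault-prone operation with a retry limit and a graceful fallback rather than letting a transient fault crash or hang the system. The sketch below is a generic illustration of that pattern, not DORA's code, and the flaky sensor is a stand-in for a transient bus fault:

```python
def with_retries(operation, attempts=3, fallback=None):
    """Bounded-retry wrapper: contain a transient fault (e.g. a
    radiation-induced bus glitch) without hanging the flight loop."""
    for _ in range(attempts):
        try:
            return operation()
        except IOError:   # real flight code catches specific fault types
            continue
    return fallback       # degrade gracefully instead of crashing

# Simulated sensor that fails twice before succeeding:
calls = {"n": 0}
def flaky_sensor_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient bus error")
    return 42

value = with_retries(flaky_sensor_read)
```

The fixed attempt count matters as much as the retry itself: an unbounded loop is just a different way for the flight computer to hang.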

ContributorsHoffmann, Zachary Christian (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Jacobs, Daniel (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The aim of this project is to understand the basic algorithmic components of the transformer deep learning architecture. At a high level, a transformer is a machine learning model built around a self-attention mechanism, which weighs the significant parts of sequential input data and is very useful for solving problems in natural language processing and computer vision. Other approaches to these problems have been implemented in the past (i.e., convolutional neural networks and recurrent neural networks), but those architectures suffer from the vanishing gradient problem when an input becomes too long (the network effectively loses its memory and halts learning) and have slow training times in general. The transformer architecture's features enable a much better "memory" and faster training, which makes it the more suitable architecture for these problems. Most of this project will be spent producing a survey that captures the current state of research on the transformer, along with the background material needed to understand it. First, I will do a keyword search of the most well-cited and up-to-date peer-reviewed publications on transformers to understand them conceptually. Next, I will investigate the programming frameworks required to implement the architecture, and use them to implement a simplified version of it, following an easy-to-use guide or tutorial where helpful. Once the programming aspect of the architecture is understood, I will implement a transformer based on the academic paper "Attention Is All You Need". I will then slightly tweak this model, using my understanding of the architecture, to improve performance. Once finished, the details of the implementation (i.e., successes, failures, process, and inner workings) will be evaluated and reported, along with the fundamental concepts surveyed.
The motivation behind this project is to explore the rapidly growing area of AI algorithms; the transformer in particular was chosen because it is a major milestone for engineering with AI and software. Since their introduction, transformers have provided a very effective way of solving natural language processing problems, allowing related applications to succeed with high speed while maintaining accuracy. This type of model can also be applied to more cutting-edge natural language processing applications, such as extracting semantic information from a text description and generating an image to satisfy it.
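The self-attention core of the architecture is the scaled dot-product attention of "Attention Is All You Need", softmax(QK^T / sqrt(d_k))V. A plain-Python sketch of that single equation (a real implementation would use batched tensors and multiple heads):

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors:
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs; it aligns with the first
# key, so the output is pulled toward the first value (10) over the second (20).
result = attention(Q=[[1.0, 0.0]], K=[[1.0, 0.0], [0.0, 1.0]], V=[[10.0], [20.0]])
```

Because every position attends to every other in one step, gradients never have to traverse a long recurrent chain, which is the mechanism behind the "better memory" claimed above.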

ContributorsCereghini, Nicola (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created2023-05