Matching Items (107)

Description

In this project, the use of deep neural networks for selecting actions to execute within an environment in order to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent is able to effectively use a tree of dependencies and its environmental surroundings to make judgments about which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (with a standard deviation of 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little pre-known information.
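As a rough illustration of the Q-learning update underlying such an agent (the thesis pairs Q-learning with a deep neural network as the value-function approximator; the tabular form, the state encoding, and the constants below are assumptions made only for this sketch):

```c
#define NUM_STATES  64   /* assumed: coarse encoding of which sub-goals remain */
#define NUM_ACTIONS 8    /* assumed: one action per pursuable sub-goal */

static double q[NUM_STATES][NUM_ACTIONS];

/* One Q-learning update: the agent pursued `action` in `state`, paid `cost`
 * to do so, and arrived in `next_state`. Cost is negated so that minimizing
 * cost corresponds to maximizing reward. */
void q_update(int state, int action, double cost, int next_state,
              double alpha, double gamma)
{
    double best_next = q[next_state][0];
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (q[next_state][a] > best_next)
            best_next = q[next_state][a];

    double target = -cost + gamma * best_next;
    q[state][action] += alpha * (target - q[state][action]);
}
```

In the thesis the table is replaced by a network that maps an encoding of the environment and dependency tree to Q-values, but the update follows the same pattern.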
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Pandora is a play exploring our relationship with gendered technology through the lens of artificial intelligence. Can women be subjective under patriarchy? Do robots who look like women have subjectivity? Hoping to create a better version of ourselves, The Engineer must navigate the loss of her creation, and Pandora must navigate their new world. The original premiere ran March 27-28, 2018, with a cast of Caitlin Andelora, Rikki Tremblay, and Michael Tristano Jr.
Contributors: Toye, Abigail Elizabeth (Author) / Linde, Jennifer (Thesis director) / Abele, Kelsey (Committee member) / Department of Information Systems (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective on both the software and hardware level. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components and then analyzed for functionality by relating it back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis on how training proceeds for an individual neuron and for an entire network. Using the technique of supervised learning, the neural network is trained on three main factors for housing price classification: the total number of rooms, the number of bathrooms, and the square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, which makes the collected profiler data more precise during the program's execution. The program is designed to break down each stage of the neuron's training process into distinct functions. In addition to using more functional code, the struct data type is used as the underlying data structure for this project, not only to represent the neuron but also to hold its training and test data. Once fully trained, the neuron's test results are graphed to visually depict how well the neuron learned from its sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective on the software and hardware level.
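A minimal sketch, in C, of what such a struct-based neuron and one supervised-learning step could look like (the field names, the step activation, and the perceptron-style update are assumptions for illustration, not the thesis's actual code):

```c
#define NUM_INPUTS 3   /* rooms, bathrooms, square footage */

struct neuron {
    double weights[NUM_INPUTS];
    double bias;
};

/* Weighted sum of the inputs followed by a step activation (assumed):
 * returns 1 for one price class, 0 for the other. */
int classify(const struct neuron *n, const double x[NUM_INPUTS])
{
    double sum = n->bias;
    for (int i = 0; i < NUM_INPUTS; i++)
        sum += n->weights[i] * x[i];
    return sum >= 0.0 ? 1 : 0;
}

/* One supervised-learning update: nudge the weights toward the target label. */
void train_step(struct neuron *n, const double x[NUM_INPUTS],
                int target, double rate)
{
    int error = target - classify(n, x);
    for (int i = 0; i < NUM_INPUTS; i++)
        n->weights[i] += rate * error * x[i];
    n->bias += rate * error;
}
```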
Contributors: Richards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

The vastly growing field of supercomputing is in dire need of a new measurement system to optimize JMRAM (Josephson junction magnetoresistive random access memory) devices. To effectively measure these devices, an ultra-low-noise, low-cost cryogenic dipping probe with a dynamic voltage range is required. This dipping probe has been designed by ASU with <100 nVp-p noise, <10 nV offsets, a 10 pV to 16 mV voltage range, and negligible thermoelectric drift. No other research group or company can currently match both these low noise levels and this wide voltage range. Two different dipping probes can be created with these specifications: one for high-use applications and one for low-use applications. The only difference between the probes is the outer shell: the high-use probe has a shell made of G-10 fiberglass at a higher price, and the low-use probe has a shell made of AISI 310 steel at a lower price. Both types of probes can be assembled in less than 8 hours for less than $2,500, requiring only soldering expertise. The low cost and short build time make wide profit margins possible. The market for these cryogenic dipping probes is currently untapped, as most research groups and companies that use such probes build their own, which leaves room for rapid business growth. These potential customers can be easily reached by marketing the probes at superconducting conferences. After several years of selling >50 probes, mass production becomes feasible by hiring several technicians while still maintaining wide profit margins.
Contributors: Hudson, Brooke Ashley (Author) / Adams, James (Thesis director) / Anwar, Shahriar (Committee member) / Materials Science and Engineering Program (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Although wind turbine bearings are designed to operate for 18-20 years, in recent years premature failure among these bearings has reduced this life to as little as a few months to a year. One of the leading causes of premature failure, white structure flaking (WSF), is a mechanism that was first cited in the literature decades ago, yet little is understood about it even today. This mode of failure results from the initiation of white etched cracks (WECs). In this report, different failure mechanisms, especially premature failure mechanisms, were tested and analyzed as a pathway to understanding this phenomenon. Using various tribometers, samples were tested in diverse and extreme conditions in order to study the effect of these different operational conditions on the specimens. Analysis of the tested samples allowed their microstructural alterations to be compared with those of field bearings affected by WSF.
Contributors: Sharma, Aman (Author) / Foy, Joseph (Thesis director) / Adams, James (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description

A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative versus mildly positive or negative, the ELM flags knots that lead to large changes in beam deflection while not over-emphasizing knots that may not be a problem. Using the ELM with a regression-fit Young's modulus on three-point bending of Douglas Fir, it is possible to estimate the effect a knot will have on the shape of the resulting displacement curve.
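A minimal sketch, in C, of the kind of L1-regularized logistic-regression update an ELM-style classifier could use (the feature count, the binary severe/mild label, and the proximal-gradient formulation are assumptions made for illustration, not taken from the thesis):

```c
#include <math.h>

#define NUM_FEATURES 4   /* assumed: e.g. ellipse axes, knot position, orientation */

/* Logistic (sigmoid) prediction of the probability that a knot is "severe". */
static double predict(const double w[NUM_FEATURES], double b,
                      const double x[NUM_FEATURES])
{
    double z = b;
    for (int i = 0; i < NUM_FEATURES; i++)
        z += w[i] * x[i];
    return 1.0 / (1.0 + exp(-z));
}

/* Soft-threshold operator: the proximal step for the L1 penalty. */
static double soft_threshold(double v, double t)
{
    if (v >  t) return v - t;
    if (v < -t) return v + t;
    return 0.0;
}

/* One proximal-gradient step on a single example: gradient step on the
 * logistic loss, then shrink the weights toward zero (L1 regularization). */
void elm_step(double w[NUM_FEATURES], double *b,
              const double x[NUM_FEATURES], int label /* 1 = severe knot */,
              double rate, double lambda)
{
    double err = predict(w, *b, x) - (double)label;
    for (int i = 0; i < NUM_FEATURES; i++)
        w[i] = soft_threshold(w[i] - rate * err * x[i], rate * lambda);
    *b -= rate * err;   /* bias left unregularized, as is conventional */
}
```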
Created: 2015-05
Description

The following is a report that evaluates the microstructure of the nickel-based superalloy Hastelloy X and its relationship to mechanical properties under different load conditions. Hastelloy X is of interest to the company AORA because its strength and oxidation resistance at high temperatures are directly applicable to their needs in a hybrid concentrated solar module. The literature review shows that the microstructure will produce different carbides at various temperatures, which can be beneficial to the strength of the alloy. These precipitates are found along the grain boundaries and act as pins that limit dislocation flow, as well as grain boundary sliding, and improve the rupture strength of the material. Over time, harmful precipitates form which counteract the strengthening effect of the carbides and reduce rupture strength, leading to failure. A combination of indentation and microstructure mapping was used in an effort to link local mechanical behavior to microstructure variability. Electron backscatter diffraction (EBSD) and energy dispersive spectroscopy (EDS) were initially used to characterize the microstructure prior to testing. Then, a series of room-temperature Vickers hardness tests at 50 and 500 gram-force were used to evaluate the variation in the local response as a function of indentation size. The room-temperature study concluded that both the hardness and the standard deviation increased at lower loads, which is consistent with the grain size distribution seen in the microstructure scan. The material was then subjected to high-temperature spherical indentation. Load-displacement curves were essential in evaluating the decrease in strength of the material with increasing temperature. Through linear regression of the unloading portion of the curve, the plastic deformation was determined and compared at different temperatures as a qualitative method to evaluate local strength.
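As a rough sketch of that last step, a linear fit of the unloading portion of a load-displacement curve could look like the following in C (the data layout and the use of the zero-load intercept as the residual, plastic displacement are assumptions for illustration):

```c
/* Least-squares line fit P = m*h + c over points taken from the unloading
 * portion of a load-displacement curve. The slope m approximates the
 * unloading stiffness; the zero-load intercept h = -c/m approximates the
 * residual (plastic) displacement. */
void fit_unloading(const double h[], const double p[], int n,
                   double *stiffness, double *h_plastic)
{
    double sh = 0.0, sp = 0.0, shh = 0.0, shp = 0.0;
    for (int i = 0; i < n; i++) {
        sh  += h[i];
        sp  += p[i];
        shh += h[i] * h[i];
        shp += h[i] * p[i];
    }
    double m = (n * shp - sh * sp) / (n * shh - sh * sh);
    double c = (sp - m * sh) / n;

    *stiffness = m;
    *h_plastic = -c / m;
}
```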
Contributors: Celaya, Andrew Jose (Author) / Peralta, Pedro (Thesis director) / Solanki, Kiran (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description

The objective of this research is to determine an approach for automating the learning of the initial lexicon used in translating natural language sentences to their formal knowledge representations based on lambda-calculus expressions. Using a universal knowledge representation and its associated parser, this research attempts to use word alignment techniques to align natural language sentences to the linearized parses of their associated knowledge representations in order to learn the meanings of individual words. The work includes proposing and analyzing an approach that can be used to learn some of the initial lexicon.
Contributors: Baldwin, Amy Lynn (Author) / Baral, Chitta (Thesis director) / Vo, Nguyen (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description

The transition to lead-free solder in the electronics industry has benefited the environment in many ways. However, with new materials systems come new materials issues. During the processing of copper pads, a protective surface treatment is needed to prevent the copper from oxidizing. Characterizing the copper oxidation underneath the surface treatment is challenging but necessary for product reliability and failure analysis. Currently, FIB-SEM, which is time-consuming and expensive, is used to understand and analyze the surface treatment-copper oxide(s)-copper system. This project's goals were to determine a characterization methodology that cuts both characterization time and cost in half for copper oxidation beneath a surface treatment, and to determine which protective surface treatment is best as defined by multiple criteria such as cost, sustainability, and reliability. Two protective surface treatments, organic solderability preservative (OSP) and chromium zincate, were investigated, and multiple characterization techniques were researched. Six techniques were tested, and three were deemed promising. Through our studies, it was determined that the best surface treatment is OSP and that the ideal characterization methodology uses FIB-SEM to calibrate a QCM model, along with SERA to confirm the QCM model results. The methodology we propose would result in a 91% reduction in characterization cost and a 92% reduction in characterization time. Future work includes further calibration of the QCM model using more FIB-SEM data points and eventually creating a model for oxide layer thickness as a function of exposure time and processing temperature, using QCM as the primary data source. An additional short essay on the role of SEM in the continuing miniaturization of integrated circuits is included at the end. This essay explores the intertwined histories of the scanning electron microscope and the integrated circuit, highlighting how advances in SEM have influenced advances in integrated circuits.
Contributors: Smith, Bethany Blair (Co-author) / Marion, Branch Kelly (Co-author) / Cruz, Hernandez (Co-author) / Kimberly, McGuiness (Co-author) / Adams, James (Thesis director) / Krause, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Materials Science and Engineering Program (Contributor)
Created: 2015-05
Description

Artificial intelligence (AI) is a burgeoning technology, industry, and field of study. While interest in its marketing applications has not yet translated into widespread adoption, AI holds tremendous potential for vastly altering how marketing is done. As such, AI in marketing is a crucial topic to research. By analyzing its current applications, its potential use cases in the near future, how to implement it, and its areas for improvement, we can achieve a high-level understanding of AI's long-term implications in marketing. AI offers an improvement to current marketing tactics, as well as entirely new ways of creating and distributing value to customers. For example, programmatic advertising and social media marketing can allow for a more comprehensive view of customer behavior, predictive analytics, and deeper insights through integration with AI. New marketing tools like biometrics, voice, and conversational user interfaces offer novel ways to add value for brands and consumers alike. These innovations all share the characteristics of hyper-personalization, efficient spending, scalable experiences, and deep insights. There are important issues that need to be addressed before AI is extensively implemented, including the potential for it to be used maliciously, its effects on job displacement, and the technology itself. The recent progression of AI in marketing indicates that it will soon be adopted by a majority of companies. The long-term implications of widespread implementation are crucial to consider, as an AI-powered industry entails fundamental changes to the skill sets required to thrive, the way marketers and brands work, and consumer expectations.
Contributors: Cannella, James (Author) / Ostrom, Amy (Thesis director) / Giles, Charles (Committee member) / Department of Marketing (Contributor) / Department of Management and Entrepreneurship (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05