Matching Items (563)
Description
This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective at both the software and hardware levels. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components and then analyzed for functionality by relating it back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis placed on how training occurs for an individual neuron and for the network as a whole. Using the technique of supervised learning, the neural network is trained for housing price classification on three main factors: a house's total number of rooms, number of bathrooms, and square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, making the profiler data collected during the program's execution more precise. The program is designed to break down each stage of the neuron's training process into distinct functions. In addition to this functional decomposition, the struct data type is used as the underlying data structure for this project, not only to represent the neuron but also to hold the neuron's training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well the neuron learned from its sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective at the software and hardware levels.
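As a concrete picture of the struct-based neuron the abstract describes, here is a minimal C sketch of a single neuron with a perceptron-style supervised update; the type, field, and function names are illustrative assumptions, not the thesis's actual code.

```c
#define N_INPUTS 3   /* rooms, bathrooms, square footage */

/* Hypothetical neuron representation: weights plus a bias term. */
typedef struct {
    double weights[N_INPUTS];
    double bias;
} Neuron;

/* Weighted sum followed by a step activation:
   1 = "above the price threshold", 0 = "below". */
int neuron_classify(const Neuron *n, const double in[N_INPUTS]) {
    double sum = n->bias;
    for (int i = 0; i < N_INPUTS; i++)
        sum += n->weights[i] * in[i];
    return sum >= 0.0 ? 1 : 0;
}

/* One supervised-learning update: weights move only when the
   computed output disagrees with the target label. */
void neuron_train(Neuron *n, const double in[N_INPUTS],
                  int target, double rate) {
    double err = (double)(target - neuron_classify(n, in));
    for (int i = 0; i < N_INPUTS; i++)
        n->weights[i] += rate * err * in[i];
    n->bias += rate * err;
}
```

A training loop would call neuron_train repeatedly over the labeled portion of the data set, then measure accuracy by running neuron_classify on the held-out remainder, mirroring the train/test split described above.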
Contributors: Richards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05

Description
The need for automated/computational fact checking has grown substantially in recent times due to the high volume of false information and the limited workforce of human fact checkers. This need has spawned research and new developments in the field, producing many different systems and approaches to this complex problem. This paper not only explains the most popular methods currently in use, but also provides experimental results comparing two different systems, a replication of the results from their respective papers, and an annotated data set of test sentences for use in these systems.
Contributors: Rosenkilde, Trevor Curtis (Author) / Papotti, Paolo (Thesis director) / Candan, Kasim (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12

Description
This paper details the specification and implementation of a single-machine blockchain simulator. It also includes a brief introduction to the history and underlying concepts of blockchain, with explanations of features such as decentralization, openness, trustlessness, and consensus. The introduction features a brief overview of public interest in and current implementations of blockchain before stating potential use cases for blockchain simulation software. The paper then gives a brief literature review of blockchain's role as both a disruptive and a foundational technology. The literature review also addresses the potential and the difficulties of using blockchain in Internet of Things (IoT) networks, and describes the limitations of blockchain in general regarding computational intensity, storage capacity, and network architecture. Next, the paper gives the specification for a generic blockchain structure, with summaries of the behaviors and purposes of transactions, blocks, nodes, miners, public- and private-key cryptography, signature validation, and hashing. Finally, the author gives an overview of their specific implementation of the blockchain using C/C++ and OpenSSL. The overview includes a brief description of all the classes and data structures involved in the implementation, including their function and behavior. While the implementation meets the requirements set forward in the specification, the results are more qualitative and intuitive, as time constraints did not allow for quantitative measurements of the network simulation. The paper concludes by discussing potential applications for the simulator and the possibility of future hardware implementations of blockchain.
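To make the hashing step concrete, the sketch below shows one way a block's hash could be computed with OpenSSL's one-shot SHA256 function, in the spirit of the C/C++ implementation described; the Block layout is a hypothetical stand-in, not the paper's actual class design.

```c
#include <stddef.h>
#include <time.h>
#include <openssl/sha.h>

/* Hypothetical block layout for illustration. */
typedef struct {
    unsigned int  index;                            /* position in the chain */
    unsigned char prev_hash[SHA256_DIGEST_LENGTH];  /* link to the parent    */
    time_t        timestamp;
    char          data[256];                        /* transaction payload   */
    unsigned char hash[SHA256_DIGEST_LENGTH];       /* this block's own hash */
} Block;

/* Hash every field that precedes the hash member itself. */
void block_hash(Block *b) {
    SHA256((const unsigned char *)b, offsetof(Block, hash), b->hash);
}
```

Hashing the raw struct bytes is the simplest possible approach and folds compiler padding into the digest, so a real implementation would serialize the fields explicitly before hashing; the chaining itself comes from storing the parent's digest in prev_hash, which is what makes earlier blocks tamper-evident.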
Contributors: Rauschenbach, Timothy Rex (Author) / Vrudhula, Sarma (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12

Description
As the need for data concerning the health of the world's oceans increases, it becomes necessary to develop large, networked communication systems underwater. This research involves the development of an embedded operating system suited for optically-linked underwater wireless sensor networks (WSNs). Optical WSNs are unique in that large volumes of data may be received relatively infrequently, so the operating system on each node must be very responsive. Additionally, the volatile nature of the underwater environment means that the operating system must be accurate while still maintaining a low profile on a relatively small microprocessor core. The first part of this research concerns the actual implementation of the operating system's task scheduler and additional libraries to maintain synchronization, and the second part involves testing the operating system for responsiveness to interrupts and overall performance.
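The abstract centers on the task scheduler; one common shape for such a scheduler on a small core is a cooperative, tick-driven loop. The sketch below is a generic illustration under that assumption, not the thesis's actual design.

```c
#include <stddef.h>

typedef void (*task_fn)(void);

typedef struct {
    task_fn  run;         /* task entry point              */
    unsigned period_ms;   /* how often the task should run */
    unsigned elapsed_ms;  /* time since the task last ran  */
} task_t;

#define MAX_TASKS 8
static task_t tasks[MAX_TASKS];
static size_t n_tasks = 0;

/* Register a task; returns -1 if the table is full. */
int scheduler_add(task_fn fn, unsigned period_ms) {
    if (n_tasks == MAX_TASKS) return -1;
    tasks[n_tasks] = (task_t){ fn, period_ms, 0 };
    n_tasks++;
    return 0;
}

/* Called from a 1 ms timer interrupt. In real firmware the counters
   shared with the main loop would need volatile/atomic handling. */
void scheduler_tick(void) {
    for (size_t i = 0; i < n_tasks; i++)
        tasks[i].elapsed_ms++;
}

/* Main loop: run each task whose period has elapsed. */
void scheduler_run(void) {
    for (;;) {
        for (size_t i = 0; i < n_tasks; i++) {
            if (tasks[i].elapsed_ms >= tasks[i].period_ms) {
                tasks[i].elapsed_ms = 0;
                tasks[i].run();
            }
        }
    }
}
```

A design like this keeps the interrupt handler tiny, which is one way to stay responsive to infrequent, bursty data arrivals while fitting in a small memory footprint.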
Contributors: Tueller, Peter Michael (Author) / Youngbull, Cody (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
Situations present themselves in which someone needs to navigate inside a building, for example, to reach the exit or to retrieve an object. Sometimes vision is not a reliable sense of spatial awareness, perhaps because of a smoky environment, a dark environment, or distractions. I propose a wearable haptic device, a belt or vest, that provides haptic feedback to help people navigate inside a building without relying on the user's vision. The first proposed device has an obstacle avoidance component and a navigation component. This paper discusses the challenges of designing and implementing this kind of technology in the context of indoor navigation, where GPS signal is poor. This project explored analyzing accelerometer data for the purpose of indoor navigation and then delivering navigation cues through the wearable haptic device, and the device is promising.
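One way accelerometer data can be turned into an indoor position estimate where GPS is unavailable is dead reckoning by double integration. The sketch below illustrates the idea only; it is an assumption about approach, not the algorithm the thesis used.

```c
#include <stdio.h>

typedef struct { double x, y; } vec2;

/* Integrate acceleration twice (Euler steps) to update velocity
   and position over one sample interval dt. */
void dead_reckon_step(vec2 *pos, vec2 *vel, vec2 accel, double dt) {
    vel->x += accel.x * dt;
    vel->y += accel.y * dt;
    pos->x += vel->x * dt;
    pos->y += vel->y * dt;
}

int main(void) {
    vec2 pos = {0, 0}, vel = {0, 0};
    /* Simulated samples: 0.5 m/s^2 along x for 1 s at 100 Hz. */
    for (int i = 0; i < 100; i++) {
        vec2 a = {0.5, 0.0};
        dead_reckon_step(&pos, &vel, a, 0.01);
    }
    printf("position after 1 s: (%.3f, %.3f) m\n", pos.x, pos.y);
    return 0;
}
```

In practice, sensor noise and bias make doubly-integrated position drift quickly, which is part of why vision-free indoor navigation is such a hard problem.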
Contributors: Berk, Emily Marie (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
Radiometric dating estimates the age of rocks by comparing the concentration of a decaying radioactive isotope to the concentrations of its decay byproducts. Radiometric dating has been instrumental in the calculation of the Earth's age, the Moon's age, and the age of our solar system. Geochronologists in the School of Earth and Space Exploration at ASU use radiometric dating extensively in their research and have very specific procedures, hardware, and software to perform the dating calculations. Researchers use lasers to drill small holes, or ablations, in rock faces, collect the masses of various isotopes using a mass spectrometer, and scan the pit with an interferometer, which records the z heights of the pit on an x-y grid. This scan is then processed by custom-made software to determine the volume of the pit, which is then used along with the isotope masses and known decay rates to determine the age of the rock. My research has focused on improving this volume calculation through computational geometry methods of surface reconstruction. During the process, I created a web application that reads interferometer scans, reconstructs a surface from those scans with Poisson reconstruction, renders the surface in the browser, and calculates the volume of the pit based on parameters provided by the researcher. The scans are stored in a central cloud datastore for future analysis, allowing researchers in the geochronology community to collaborate on scans from various rocks in their individual labs. The result of the project is a complete and functioning application that is accessible to any researcher and reproducible from any computer. The 3D representation of the scan data allows researchers to easily understand the topology of the pit ablation and determine early on whether the interferometer's measurements are trustworthy for a particular ablation. The new software also reduces the variability in the volume calculation, which hopefully indicates that the process removes noise from the scan data and performs the calculation on a more realistic representation of the actual ablation. In the future, this research will serve as the groundwork for more robust testing and closer approximations through the implementation of different reconstruction algorithms. As the project grows and becomes more usable, it will hopefully be adopted by the community and become a reproducible standard for geochronologists performing radiometric dating.
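The volume computation starts from the interferometer's x-y grid of z heights. A back-of-the-envelope version simply sums the depth below a reference plane over each grid cell, as in the sketch below; the actual application first reconstructs a surface with Poisson reconstruction before integrating, and the names and parameters here are illustrative.

```c
#include <stddef.h>

/* Estimate pit volume from an nx-by-ny grid of z heights (row-major):
   sum the depth below the reference plane over each grid cell. */
double pit_volume(const double *z, size_t nx, size_t ny,
                  double cell_area, double reference_z) {
    double volume = 0.0;
    for (size_t i = 0; i < nx * ny; i++) {
        double depth = reference_z - z[i];
        if (depth > 0.0)          /* count only material removed */
            volume += depth * cell_area;
    }
    return volume;
}
```

The resulting volume, combined with the measured isotope masses, feeds the standard decay relation t = ln(1 + D/P) / λ (daughter concentration D, parent concentration P, decay constant λ, in the simplest case of no initial daughter product) to yield an age.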
Contributors: Pruitt, Jacob Richard (Author) / Hodges, Kip (Thesis director) / Mercer, Cameron (Committee member) / van Soest, Matthijs (Committee member) / Department of Economics (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
Many researchers aspire to create robotic systems that assist humans in common office tasks, especially by taking over delivery and messaging duties. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to successfully navigate the office environment. While mobile robots are well suited for navigating and interacting with elements inside a deterministic office environment, interacting with human beings remains a challenge due to the limits on the amount of cost-efficient compute power onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in such a stochastic environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented a human-robot interaction system on a mobile robot using cloud APIs, enabling it to navigate the office environment, identify humans within the environment, and communicate with them.
Created: 2017-05

Description
Millions of people every day log onto their computers to play competitive games with others around the world. Each of these players has their own unique personality and their own reasons for playing. To explore the relationship between player personalities and gameplay, this study asked participants to report their Myers-Briggs personality type (one of sixteen) and complete a survey about their behavior while playing games competitively online, including their preferred in-game archetype and how they interact with other players online. The survey also included the Grit Scale test, which was intended to explore players' perseverance. Nearly 700 people participated in the study, and all responses were analyzed based on the participants' Myers-Briggs personality type. While this study revealed that Myers-Briggs personality type alone cannot determine a player's mindset while playing online, it was found to be an indicator of how players feel about socializing with others online. The implications of these results are discussed in this paper.
Contributors: Keyvani, Kurosh (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05

Description
Lie detection is used prominently in contemporary society for many purposes, such as pre-employment screenings, granting security clearances, and determining whether criminals or potential subjects are lying, but it is by no means limited to that scope. However, lie detection has been criticized for being subjective, unreliable, inaccurate, and susceptible to deliberate manipulation. Critics also believe that the administrator of the test influences the outcome. As a result, the polygraph machine, the contemporary device used for lie detection, has come under scrutiny when used as evidence in the courts. The purpose of this study is to use three entirely different tools and concepts to determine whether eye tracking systems, electroencephalography (EEG), and Facial Expression Emotion Analysis (FACET) are reliable tools for lie detection. This study found that certain constructs, such as where the left eye looks relative to its usual position (eye tracking) and engagement levels (EEG), could distinguish between truths and lies. However, FACET proved the most reliable tool of the three, providing not just one distinguishing variable but seven, all related to emotions derived from movements of the facial muscles during the study. The FACET emotions documented to distinguish between truthful and lying responses were joy, anger, fear, confusion, and frustration. In addition, overall measures of the subject's neutral and positive emotional expression were found to be distinguishing factors. The implications of this study and future directions are discussed.
Contributors: Seto, Raymond Hua (Author) / Atkinson, Robert (Thesis director) / Runger, George (Committee member) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05

Description
Global violent conflict has become an increasing problem in recent decades, especially on the African continent. Civil wars, terrorism, riots, and political violence have wrought havoc not only on civilian lives but also on economic foundations. Trade networks are a way to measure these economic foundations. To summarize trade networks, the clustering coefficient as well as trade quantity/value summation measures are used. To understand the effects of global trade on violent conflict, Pearson product-moment correlations are utilized. This work details a comparison of African national economies and violent conflict events using the clustering coefficient, trade summation measures, and the Pearson correlation coefficient.
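Both summary measures the abstract names are short computations. For a node with k trading partners, the local clustering coefficient is 2e / (k(k-1)), where e is the number of trade links among those partners; the Pearson product-moment correlation then relates such network measures to conflict counts. The C sketch below is illustrative, not the thesis's code or data.

```c
#include <math.h>
#include <stddef.h>

/* Local clustering coefficient of node v in an undirected trade network
   given as an n-by-n adjacency matrix (row-major 0/1 entries). */
double clustering_coefficient(const int *adj, size_t n, size_t v) {
    size_t k = 0, links = 0;
    for (size_t i = 0; i < n; i++)
        if (adj[v * n + i]) k++;                 /* degree of v */
    if (k < 2) return 0.0;
    for (size_t i = 0; i < n; i++) {
        if (!adj[v * n + i]) continue;
        for (size_t j = i + 1; j < n; j++)       /* pairs of neighbors */
            if (adj[v * n + j] && adj[i * n + j]) links++;
    }
    return 2.0 * (double)links / ((double)k * (double)(k - 1));
}

/* Pearson product-moment correlation of two length-n series
   (assumes both series have nonzero variance). */
double pearson(const double *x, const double *y, size_t n) {
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / (double)n;
    double vx  = sxx - sx * sx / (double)n;
    double vy  = syy - sy * sy / (double)n;
    return cov / sqrt(vx * vy);
}
```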
Contributors: Kadambi, Sagarika Sanjay (Author) / Maciejewski, Ross (Thesis director) / Shutters, Shade (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05