Matching Items (44)

Description
This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective at both the software and hardware levels. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components, which are then analyzed for functionality by relating them back to their biological counterparts. The role of a neuron is then described in the context of a neural network, with equal emphasis on how training proceeds for an individual neuron and for an entire network. Using supervised learning, the neural network is trained on three main factors for housing price classification: the total number of rooms, the number of bathrooms, and the square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs matches the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, which makes the profiler data collected during the program's execution more precise. The program breaks each stage of the neuron's training process into distinct functions. In addition to this functional decomposition, the struct data type serves as the underlying data structure for the project, representing not only the neuron but also its training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well it learned from the sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective at the software and hardware levels.
Contributors: Richards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05

Description
Company X has developed RealSense™ technology, a depth-sensing camera that gives machines the ability to capture three-dimensional spaces along with motion within these spaces. The goal of RealSense was to give machines human-like senses, such as knowing how far away objects are and perceiving the surrounding environment. The key issue for Company X is how to commercialize RealSense's depth recognition capabilities. This thesis tackles the problem by examining which markets to enter and how to monetize this technology. The first part of the analysis identified potential markets for RealSense. This was achieved by evaluating current markets that could benefit from the camera's gesture recognition, 3D scanning, and depth sensing abilities. After identifying seven industries where RealSense could add value, a model of the available, addressable, and obtainable market sizes was developed for each segment. Key competitors and market dynamics were used to estimate the portion of the market that Company X could capture. These models provided a forecast of the discounted gross profits that could be earned over the next five years. These forecasted gross profits, combined with an examination of the competitive landscape and synergistic opportunities, resulted in the selection of the three segments thought to be most profitable to Company X. These segments are smart home, consumer drones, and automotive. The final part of the analysis investigated entrance strategies. Company X's competitive advantages in each space were found by examining the competition, both for the RealSense camera in general and for other technologies specific to each industry. Finally, ideas about ways to monetize RealSense were developed by exploring various revenue models and channels.
Contributors: Dunn, Nicole (Co-author) / Boudreau, Thomas (Co-author) / Kinzy, Chris (Co-author) / Radigan, Thomas (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / WPC Graduate Programs (Contributor) / Department of Psychology (Contributor) / Department of Finance (Contributor) / School of Accountancy (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Science (Contributor) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
The constant evolution of technology has greatly shifted the way in which we access information, and this, in turn, has an effect on how we learn. Long gone are the days when students sat in libraries for hours, flipping through numerous books to find one specific piece of information. With the advent of Google, modern students can arrive at the same information within fifteen seconds. This technology, the internet, is reshaping the way we learn. As a result, the academic integrity policies set forth at the college level seem outdated, often prohibiting the use of technology as a resource for learning. The purpose of this paper is to explore why exactly these resources are prohibited. By contrasting a subject such as Computer Science with the humanities, the paper explores the need for the internet as a resource in some fields as opposed to others. Looking at the knowledge presented in Computer Science, the course structure, and the role that professors play in teaching this knowledge, this thesis evaluates the epistemology of engineering subjects. By juxtaposing Computer Science with the less technology-reliant humanities, it becomes clear that one common academic integrity policy does not suffice for an entire university. Instead, amendments should be made to the policy specific to each subject, in order to best foster an environment of learning at the university level. To conclude, Arizona State University's Academic Integrity Policy is analyzed, and suggestions are made to remove ambiguity in the language of the document in order to promote learning at the university.
Contributors: Mohan, Sishir Basavapatna (Author) / Brake, Elizabeth (Thesis director) / Martin, William (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
The objective of this research is to determine an approach for automating the learning of the initial lexicon used in translating natural language sentences to their formal knowledge representations based on lambda-calculus expressions. Using a universal knowledge representation and its associated parser, this research attempts to use word alignment techniques to align natural language sentences to the linearized parses of their associated knowledge representations in order to learn the meanings of individual words. The work includes proposing and analyzing an approach that can be used to learn some of the initial lexicon.
Contributors: Baldwin, Amy Lynn (Author) / Baral, Chitta (Thesis director) / Vo, Nguyen (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05

Description
The modern technology tools in use today all serve a purpose: supporting a variety of human tasks. Ambient Intelligence is the next step in transforming modern technology. Ambient Intelligence (AmI) refers to an electronic environment that is sensitive and responsive to human interaction and activity; it concentrates on connectivity within a person's environment, and the purpose of each new connection is to make life simpler. Today, technology is transitioning toward a new lifestyle in which it lives with us discreetly. Ambient Intelligence is still a work in progress, but we can analyze how the technology we have today relates to it. To examine this question, I investigated how much awareness and knowledge users ranging from Generation X to Xennials had of replacing habitual items with the technologies they use daily. I mainly wanted three questions answered: What kinds of technologies, software, or tech services replace items you use daily? What benefits did the technology give you, and did it change the way you think or act in any activities? What expectations or concerns do you have for future technologies? To accomplish this, I gathered information by interviewing multiple groups: millennials and older generations (33+ years old). I collected data from students at Arizona State University, Intel Corporation, and a local clinic. From this study, I discovered that both groups agree that modern technology is rapidly growing to the point where computers think like humans. Through multiple interviews and research, I found that today's technology makes an impact on all aspects of our lives, including through artificial intelligence. Furthermore, I discuss and predict what society will encounter as new technology discreetly arises.
Contributors: Pascua, Roman Paolo Bustos (Author) / Yang, Yezhou (Thesis director) / Caviedes, Jorge (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05

Description
Penetration testing is regarded as the gold standard for understanding how well an organization can withstand sophisticated cyber-attacks. However, the recent prevalence of darknet markets specializing in zero-day exploits makes such exploits widely available to potential attackers. The cost associated with these sophisticated kits generally precludes penetration testers from simply obtaining them, so an alternative approach is needed to understand which exploits an attacker is most likely to purchase and how to defend against them. In this paper, we introduce a data-driven security game framework to model an attacker and provide policy recommendations to the defender. In addition to providing a formal framework and algorithms for developing strategies, we present experimental results from applying our framework, for various system configurations, to real-world exploit market data actively mined from the darknet.
Contributors: Robertson, John James (Author) / Shakarian, Paulo (Thesis director) / Doupe, Adam (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05

Description
The purpose of this thesis is to examine the current state of the brick-and-mortar bookselling industry, with particular focus on independent bookstores and their strengths and weaknesses, and to synthesize recommendations for how these bookstores can reinvent themselves in a rapidly changing market. This examination is highly relevant given recent concerns that, with the rise of e-retailers like Amazon and the closure of the bookstore chain Borders, brick-and-mortar bookstores may be superseded by new digital vendors. Independent bookstores are thought to be at a particular disadvantage, given their limited size and resources, as well as their lack of the capital or consumer base that a larger chain like Barnes and Noble can draw upon to invest in emerging technology. With these more limited financial opportunities, independent bookstores must find different ways not only to keep abreast of the technology that consumers have come to expect from modern businesses, but also to attract customers.
To gain insight into the state of the industry and the current position of independent bookstores, I first examine the past fifty years of the brick-and-mortar bookstore, followed by a Porter's Five Forces analysis of industry threats and a SWOT analysis comparing the strengths and weaknesses of independent bookstores. Next, the patrons of independent bookstores are discussed, with a focus on the two largest consumer groups, Millennials and Baby Boomers, their characteristics, and the opportunities they provide to bookstores. This is followed by an exploration of the competitors to brick-and-mortar bookstores, focusing on Amazon and then touching on some of the other rivals for bookstores' consumer base. The next section is an in-depth analysis of a variety of bookstores across the United States, with attention to their successful practices, goals, concerns, and failures. First, industry success and failure are compared through case studies of the Borders and Powell's bookstores. Next, five beloved independent bookstores across the country are compared to share the varied competitive advantages that are the secret to their success. Then, primary-source interviews with the employees of three major Phoenix bookstores provide insight into the goals, current projects, attitudes, and inner strengths of these businesses. The thesis concludes with a section offering solutions and suggestions for independent bookstores based on the primary and secondary research discussed above. These recommendations are focused on five key areas:
• Community
• Consumers
• Store Design
• Technology
• Diversification
Ultimately, the information provided by this research and these interviews indicates that while vital business changes are being pursued by independent and chain bookstores across the United States, the independent bookstore shows no signs of disappearing in favor of online vendors or e-readers.
Contributors: Porrell, Kelly Maria (Author) / Montoya, Detra (Thesis director) / Schlacter, John (Committee member) / School of Historical, Philosophical and Religious Studies (Contributor) / Department of Marketing (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05

Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized "role" of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, suggesting a promising new framework for approaching generalized planning problems.
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05

Description
Medical records are increasingly being kept as electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize the clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is prescription information. Existing techniques for extracting prescription information, which includes medication names, dosages, frequencies, reasons for taking, and modes of administration, from unstructured text have focused on rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and conduct effective feature engineering. This paper presents a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without requiring significant feature engineering. Experiments on the i2b2 2009 dataset achieve a macro F1 score of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
Contributors: Rawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05

Description
Hackathons are 24-36 hour events where participants are encouraged to learn, collaborate, and build technological inventions alongside leaders, companies, and peers in the tech community. Hackathons have been sweeping the nation in recent years, especially at the collegiate level; however, there is no substantial research or documentation of their actual effects. This makes it difficult for tech companies and academic institutions to justify spending valuable time and resources to host hackathons. This thesis examines the effects of collegiate hackathons by running one, known as Desert Hacks, at Arizona State University (ASU). The participants of Desert Hacks were surveyed at the start and at the end of the event to analyze its effects. The survey results indicate that participants grew in foundational computer programming skills, sense of inclusion in the tech community, overall confidence, and motivation for the technological field. These results can help justify the value of collegiate hackathons and similar events.
Contributors: Le, Peter Thuan (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12