Matching Items (291)
Description
The foundations of legacy media, especially the news media, are not as strong as they once were. A digital revolution has changed operating models, and journalistic organizations are trying to find their place in the new market. This project is intended to analyze the effects of new and emerging technologies on the journalism industry. Five categories of technology will be explored: the semantic web, automation software, data analysis and aggregators, virtual reality, and drone journalism. The potential of these technologies will be broken up according to four guidelines: ethical implications, effects on the reportorial process, business impacts, and changes to the consumer experience. Upon my examination, it is apparent that no single technology will offer the journalism industry the remedy it has been searching for. Some combination of emerging technologies, however, may form the basis for the next generation of news. Findings are presented on a website that features video, visuals, linked content, and original graphics; the website can be found at http://www.explorenewstech.com/
Created: 2016-05
Description
Company X has developed RealSense™ technology, a depth-sensing camera that provides machines the ability to capture three-dimensional spaces along with motion within these spaces. The goal of RealSense was to give machines human-like senses, such as knowing how far away objects are and perceiving the surrounding environment. The key issue for Company X is how to commercialize RealSense's depth recognition capabilities. This thesis addresses the problem by examining which markets to address and how to monetize this technology. The first part of the analysis identified potential markets for RealSense. This was achieved by evaluating current markets that could benefit from the camera's gesture recognition, 3D scanning, and depth-sensing abilities. After identifying seven industries where RealSense could add value, a model of the available, addressable, and obtainable market sizes was developed for each segment. Key competitors and market dynamics were used to estimate the portion of the market that Company X could capture. These models provided a forecast of the discounted gross profits that could be earned over the next five years. These forecasted gross profits, combined with an examination of the competitive landscape and synergistic opportunities, resulted in the selection of the three segments thought to be most profitable to Company X: smart home, consumer drones, and automotive. The final part of the analysis investigated entrance strategies. Company X's competitive advantages in each space were found by examining the competition, both for the RealSense camera in general and for other technologies specific to each industry. Finally, ideas about ways to monetize RealSense were developed by exploring various revenue models and channels.
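The market-sizing step described above lends itself to a short illustration. The sketch below shows the general shape of a discounted gross-profit calculation (a total market narrowed to an addressable and obtainable share, forecast over five years, and discounted to present value); the function name and every figure in it are placeholders for illustration, not values from the thesis.

```python
# Sketch of the kind of market-sizing arithmetic described above: narrow a
# total market to an obtainable share, forecast five years of gross profit,
# and discount it to present value. All figures are placeholders, not data
# from the thesis.

def discounted_gross_profit(total_market, addressable_pct, obtainable_pct,
                            gross_margin, growth_rate, discount_rate, years=5):
    """Present value of forecast gross profits for one market segment."""
    pv = 0.0
    for year in range(1, years + 1):
        market = total_market * (1 + growth_rate) ** year   # market growth
        revenue = market * addressable_pct * obtainable_pct  # obtainable share
        pv += revenue * gross_margin / (1 + discount_rate) ** year
    return pv

# Hypothetical segment: $2B market, 40% addressable, 5% obtainable share,
# 30% gross margin, 10% annual growth, discounted at 12%.
print(f"${discounted_gross_profit(2e9, 0.40, 0.05, 0.30, 0.10, 0.12):,.0f}")
```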
Contributors: Dunn, Nicole (Co-author) / Boudreau, Thomas (Co-author) / Kinzy, Chris (Co-author) / Radigan, Thomas (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / WPC Graduate Programs (Contributor) / Department of Psychology (Contributor) / Department of Finance (Contributor) / School of Accountancy (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Science (Contributor) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper presents work to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. For facial expression datasets, no such large datasets are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN originally trained for another computer vision task is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from the CNN, and classify these extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
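As an illustration of the naïve domain adaptation pipeline described above, the sketch below extracts fixed feature vectors from a pre-trained CNN and trains a separate classifier on them. The choice of torchvision's VGG-16, the layer used, and the data handling are assumptions made for the example and do not reflect the thesis's actual configuration.

```python
# Sketch: extract features from a pre-trained CNN and train a separate
# classifier on them (naive domain adaptation). Model, layer, and data
# loading are illustrative assumptions, not the thesis's configuration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC  # used in the commented training step below

# Pre-trained CNN originally trained for generic object recognition.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.eval()

# Keep the convolutional layers plus the first fully connected layer as a
# fixed feature extractor; its 4096-dim output becomes the feature vector.
feature_extractor = torch.nn.Sequential(
    cnn.features,
    torch.nn.Flatten(),
    *list(cnn.classifier.children())[:2],  # first Linear layer + ReLU
)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Map a list of face images to fixed-length CNN feature vectors."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        return feature_extractor(batch).numpy()

# train_images / train_labels would come from a (small) labeled
# facial-expression dataset; the SVM, not the CNN, is what gets trained.
# clf = LinearSVC().fit(extract_features(train_images), train_labels)
# predictions = clf.predict(extract_features(test_images))
```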
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The constant evolution of technology has greatly shifted the way in which we gain knowledge and information. This, in turn, has an effect on how we learn. Long gone are the days when students sit in libraries for hours flipping through numerous books to find one specific piece of information. With the advent of Google, modern-day students are able to arrive at the same information within 15 seconds. This technology, the internet, is reshaping the way we learn. As a result, the academic integrity policies that are set forth at the college level seem to be outdated, often prohibiting the use of technology as a resource for learning. The purpose of this paper is to explore why exactly these resources are prohibited. By contrasting a subject such as Computer Science with the Humanities, the paper explores the need for the internet as a resource in some fields as opposed to others. Taking a look at the knowledge presented in Computer Science, the course structure, and the role that professors play in teaching this knowledge, this thesis evaluates the epistemology of Engineering subjects. By juxtaposing Computer Science with the less technology-reliant humanities subjects, it is clear that one common policy outlining academic integrity does not suffice for an entire university. Instead, there should be amendments made to the policy specific to each subject, in order to best foster an environment of learning at the university level. In the conclusion of this thesis, Arizona State University's Academic Integrity Policy is analyzed and suggestions are made to remove ambiguity in the language of the document, in order to promote learning at the university.
Contributors: Mohan, Sishir Basavapatna (Author) / Brake, Elizabeth (Thesis director) / Martin, William (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This project was centered on designing a processor model (using the C programming language) based on the Coldfire computer architecture that will run on third-party software known as Open Virtual Platforms. The end goal is to have a fully functional processor that can run Coldfire instructions and utilize peripheral devices in the same way as the hardware used in the embedded systems lab at ASU. This project would cut down the substantial amount of time students spend commuting to the lab. Having the processor directly at their disposal would also encourage them to spend more time outside of class learning the hardware and familiarizing themselves with development on an embedded microcontroller. The model will be accurate, fast, and reliable. These aspects will be achieved through rigorous unit testing and use of the OVP platform, which provides instruction-accurate simulations at hundreds of MIPS (million instructions per second) for the specified model. The end product was able to accurately simulate a subset of the Coldfire instructions at very high rates.
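The following toy fetch-decode-execute loop illustrates what an instruction-accurate simulation does: architectural state is updated once per instruction, with no modeling of pipeline timing. The two-instruction ISA is invented for the example and is not the Coldfire encoding or the OVP API.

```python
# Toy fetch-decode-execute loop illustrating "instruction accurate" simulation:
# state changes once per retired instruction, with no cycle-level timing.
# The 2-instruction ISA here is invented for the example.

def run(program, steps=100):
    regs = [0] * 8          # register file
    pc = 0                  # program counter
    for _ in range(steps):
        if pc >= len(program):
            break
        op, a, b = program[pc]        # fetch + decode
        if op == "movei":             # execute: regs[a] = immediate b
            regs[a] = b
        elif op == "add":             # execute: regs[a] += regs[b], 32-bit wrap
            regs[a] = (regs[a] + regs[b]) & 0xFFFFFFFF
        pc += 1                       # retire one instruction per step
    return regs

print(run([("movei", 0, 5), ("movei", 1, 7), ("add", 0, 1)]))  # regs[0] == 12
```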
Contributors: Dunning, David Connor (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-12
Description
The abortion debate has been a heated topic since the early 1970s, when the monumental case Roe v. Wade was decided. Roe v. Wade, alongside its sister case Doe v. Bolton, ruled that no law restricting abortion could be passed and set the precedent that life did not exist before birth. Before this time, people were largely unaware of what life inside the womb looked like and therefore had no reason to believe that life truly began at conception. As medical technology has revealed more about life inside the womb, the pro-life movement has been tasked with the uphill battle to shift the discussion around the topic. Because people now spend so much time using various forms of technology, it has become an effective way for groups and organizations to come in contact with large numbers of people. This is something the pro-life movement has not only done, but has excelled at. By successfully utilizing advancing technology combined with new medical tools and discoveries, the pro-life movement has gained increasingly large momentum and a following in a relatively short amount of time. Recognizing that technology alone does not have the ability to change people's hearts, but must be backed up with arguments and strong evidence, this paper will explore the medical advances that helped drive pro-life arguments, the technological advances that have become a platform to disseminate this information, and the ways the pro-life movement has utilized each new form of technology. Lastly, this paper will explore the amount of growth the pro-life movement has experienced since the early 1970s. In the end, the pro-life movement has successfully combined all these different advances to create a movement that has reached a vast audience and gained exponential awareness and momentum. It has used everything from social media, the Internet, and videos to spread the truth about abortion. As a result, minds are being changed, people are driven into action, and babies are being saved.
Contributors: Snyder, Lorne Lynn (Author) / Critchlow, Donald (Thesis director) / Anderson, Owen (Committee member) / Barrett, The Honors College (Contributor) / Department of Supply Chain Management (Contributor) / School of Politics and Global Studies (Contributor)
Created: 2014-12
Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
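A minimal sketch of what a spatiotemporal vibration pattern might look like in software is given below: each facial action unit maps to a timed sequence of motor activations. The motor layout, pattern shapes, and AU labels are invented for illustration and are not the study's actual mappings.

```python
# Sketch of a spatiotemporal vibration pattern: a facial action unit is
# mapped to a timed sequence of (motor_index, start_ms, duration_ms)
# activations on a back-mounted motor grid. The grid layout and patterns
# here are invented for illustration, not the study's actual mappings.
AU_PATTERNS = {
    "AU12_lip_corner_puller": [   # e.g. smiling: sweep outward along a bottom row
        (4, 0, 150), (3, 100, 150), (5, 100, 150), (2, 200, 150), (6, 200, 150),
    ],
    "AU4_brow_lowerer": [         # e.g. frowning: pulse the top-center motors
        (0, 0, 200), (1, 0, 200), (0, 300, 200), (1, 300, 200),
    ],
}

def play(pattern, motor_on, motor_off, now_ms=0):
    """Schedule on/off events for one pattern (driver callbacks are assumed)."""
    events = []
    for motor, start, duration in pattern:
        events.append((now_ms + start, motor_on, motor))
        events.append((now_ms + start + duration, motor_off, motor))
    for t, fn, motor in sorted(events, key=lambda e: e[0]):
        fn(t, motor)   # in a real system these calls would drive PWM outputs

# Example with stub drivers that just print the schedule.
play(AU_PATTERNS["AU12_lip_corner_puller"],
     lambda t, m: print(f"{t} ms: motor {m} ON"),
     lambda t, m: print(f"{t} ms: motor {m} OFF"))
```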
Contributors: Bala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
This thesis examines contemporary cinematic adaptations of the Ovidian Pygmalion story. The films Blade Runner (1982), Lars and the Real Girl (2007), Ruby Sparks (2012), and Her (2013) are analyzed. This thesis seeks to understand why this particular myth is so resonant in today's popular culture and what this relevance reveals about modern society. The roles of female subjugation, sexualization, and the relationship with technology will be major areas of concern. Research includes film criticism, Ovidian scholarship, and new advances in computer technology.
Contributors: Story, Sara Katherine (Author) / Corse, Taylor (Thesis director) / Ellis, Lawrence (Committee member) / Barrett, The Honors College (Contributor) / Department of English (Contributor)
Created: 2015-05
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties (such as color, density, and even shape) based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
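As a toy illustration of the kind of expansion behavior described (not the thesis's particle model), the sketch below grows a set of particles on a square grid until a target region is enclosed; the grid geometry and growth rule are simplifying assumptions made for the example.

```python
# Toy illustration (not the thesis's model): particles on a square grid
# repeatedly "expand" into empty neighboring cells until every cell of a
# target region has an occupied neighbor, i.e. the target is enclosed.
import random

NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def expand(particles, target, steps=10000):
    """Grow the particle set until the target region is enclosed."""
    particles = set(particles)
    for _ in range(steps):
        if all(any((tx + dx, ty + dy) in particles for dx, dy in NEIGHBORS)
               for tx, ty in target):
            return particles                      # target enclosed
        # candidate cells: empty, adjacent to a particle, and not inside the target
        frontier = [(x + dx, y + dy) for x, y in particles
                    for dx, dy in NEIGHBORS
                    if (x + dx, y + dy) not in particles
                    and (x + dx, y + dy) not in target]
        particles.add(random.choice(frontier))    # expand into one of them
    return particles

# Enclose a 2x2 "object" starting from a single seed particle.
result = expand({(-3, 0)}, {(0, 0), (1, 0), (0, 1), (1, 1)})
print(len(result), "particles placed")
```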
Contributors: Laff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the smallest number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
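A small sketch of the coverage bookkeeping behind this idea is given below: it enumerates the length-t subsequences each permutation covers and drops any permutation whose coverage is duplicated by the rest of the set. The local-modification step of the post-optimization method is not shown; this only illustrates how a non-essential permutation can be removed without losing total coverage.

```python
# Sketch: t-subsequence coverage of a set of permutations, and removal of
# any permutation that is non-essential (everything it covers is covered
# elsewhere). The local-modification step of the post-optimization method
# is not shown; this only illustrates the coverage bookkeeping.
from itertools import permutations as all_orders

def covered(perm, t):
    """All length-t subsequences (ordered, not contiguous) covered by perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return {s for s in all_orders(perm, t)
            if all(pos[s[i]] < pos[s[i + 1]] for i in range(t - 1))}

def remove_nonessential(perms, t):
    """Drop permutations whose coverage is duplicated by the rest of the set."""
    kept = [tuple(p) for p in perms]
    i = 0
    while i < len(kept):
        others = kept[:i] + kept[i + 1:]
        union = set().union(*(covered(q, t) for q in others)) if others else set()
        if covered(kept[i], t) <= union:
            kept.pop(i)      # non-essential: total coverage is unchanged
        else:
            i += 1
    return kept

# Example: a needlessly large set of permutations of {0, 1, 2} covering all
# ordered pairs; the redundant members are removed.
example = [(0, 1, 2), (2, 1, 0), (1, 0, 2), (0, 1, 2)]
print(remove_nonessential(example, 2))
```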
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12