Matching Items (572)
Description

Cyber threats are growing in number and sophistication, making it important to continually study and improve all dimensions of digital forensics. Teamwork in forensic analysis has been overlooked in existing systems even though forensics relies on collaboration. Forensic analysis also lacks a system that is flexible and available on the different electronic devices incorporated into everyday life, such as cellphones and tablets, which are easy to bring on the go to sites where the first steps of forensic analysis are performed. Given the present-day shift toward online accessibility, most electronic devices connect to the internet. Squeegee is a proof of concept that forensic analysis can be done on the web. Expanding forensic analysis to the web opens many doors to collaboration and accessibility.
Contributors: Juntiff, Samantha Maria (Author) / Ahn, Gail-Joon (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description

Bhairavi is a solo performance that investigates belonging and dis-belonging in diaspora communities, especially as they relate to the female body. Specifically, through my experience as a second-generation Indian-American woman, I expose and challenge the notion of ‘tradition’ as it is forced into women’s bodies and displaces them in their own homes. Bhairavi is a story told through movement and theatrical narrative composition, with research and material collected through structured and unstructured observation of my family, cultural community, and myself.

Note: This work of creative scholarship is rooted in collaboration between three female artist-scholars: Carly Bates, Raji Ganesan, and Allyson Yoder. Working from a common intersectional, feminist framework, we served as artistic co-directors of each other’s solo pieces and co-producers of Negotiations, in which we share these pieces in relationship to each other. Thus, Negotiations is not a showcase of three individual works, but rather a conversation among three voices. As collaborators, we have been uncompromising in the pursuit of our own unique inquiries and voices, and each of our works of creative scholarship stands alone. However, we believe that all of the parts are best understood in relationship to each other, and to the whole. For this reason, we have chosen to cross-reference our thesis documents.

French Vanilla: An Exploration of Biracial Identity Through Narrative Performance by Carly Bates

Deep roots, shared fruits: Emergent creative process and the ecology of solo performance through “Dress in Something Plain and Dark” by Allyson Yoder

Bhairavi: A Performance-Investigation of Belonging and Dis-Belonging in Diaspora Communities by Raji Ganesan
Contributors: Ganesan, Raji J (Author) / Underiner, Tamara (Thesis director) / Stephens, Mary (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Keyboard input biometric authentication systems are software systems that record keystroke information and use it to identify a typist. The primary statistics used to determine the accuracy of a keyboard biometric authentication system are the false acceptance rate (FAR) and false rejection rate (FRR), both of which should be as low as possible [1]. However, even if a system has a low FAR and FRR, nothing stops an attacker from monitoring an individual's typing habits in the same way a legitimate authentication system would. The attacker can then use knowledge of those habits to recreate virtual keyboard events that type arbitrary text with timing precisely mimicking the user's habits, which would theoretically spoof a legitimate keyboard biometric authentication system into thinking the intended user is doing the typing. A proof of concept of this attack, called keyboard input biometric authentication spoofing, is the focus of this paper; its purpose is to show that even a reasonably accurate biometric authentication system, with a low FAR and FRR, can still be very vulnerable to a well-crafted spoofing system. A rudimentary keyboard input biometric authentication system was written in C and C++, drawing on existing methods and attempting new methods of authentication as well. A spoofing system was then built that exploited the authentication system's statistical representation of a user's typing habits to recreate keyboard events as described above. This proof of concept aims to raise doubts about relying too heavily on keyboard-input-based biometric authentication, since a user's typing can demonstrably be spoofed in this way if an attacker has full access to the system, even when the system itself is accurate. When run on a database of typing event logs recorded from 15 users across 4 sessions, the authentication system built for this study had a 0% FAR and FRR (more detailed analysis of FAR and FRR is also presented), yet it was still very susceptible to being spoofed, with a 44% to 71% spoofing success rate in some instances.
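The timing-mimicry step at the heart of this attack can be illustrated with a small, hypothetical sketch: given per-digraph timing statistics (mean and standard deviation of the delay between consecutive key presses) harvested from a victim, synthetic key events are scheduled with delays drawn from those distributions. This is not the author's C/C++ implementation; the TimingProfile class, the Gaussian timing model, and the event format are assumptions made purely for illustration.

```python
import random
from dataclasses import dataclass, field

@dataclass
class TimingProfile:
    """Hypothetical per-user model: mean/std (seconds) of the delay between
    consecutive key presses, keyed by digraph, e.g. ('t', 'h')."""
    digraph_stats: dict = field(default_factory=dict)  # (prev, next) -> (mean, std)
    default_stats: tuple = (0.18, 0.05)                # fallback for unseen digraphs

    def sample_delay(self, prev_key: str, next_key: str) -> float:
        mean, std = self.digraph_stats.get((prev_key, next_key), self.default_stats)
        # Gaussian noise around the victim's observed timing; clamp to stay plausible.
        return max(0.02, random.gauss(mean, std))

def synthesize_events(profile: TimingProfile, text: str, start: float = 0.0):
    """Produce (timestamp, key) pairs whose spacing mimics the modeled typing habits.
    A real attack would feed these to a virtual keyboard driver; here they are printed."""
    events, t, prev = [], start, None
    for ch in text:
        if prev is not None:
            t += profile.sample_delay(prev, ch)
        events.append((round(t, 4), ch))
        prev = ch
    return events

if __name__ == "__main__":
    profile = TimingProfile({("t", "h"): (0.09, 0.02), ("h", "e"): (0.11, 0.03)})
    for ts, key in synthesize_events(profile, "the password"):
        print(f"{ts:7.3f}s  press {key!r}")
```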
Contributors: Johnson, Peter Thomas (Author) / Nelson, Brian (Thesis director) / Amresh, Ashish (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper explores how US Cold War nuclear testing in the Pacific Islands has been approached in three different regions: the affected Pacific Islands, the US, and Japan. Because the US has failed to adequately address its nuclear past in the Pacific Islands, and Pacific Islander narratives struggle to reach the international community on their own, my study considers the possibility of Pacific Islanders finding a greater outlet for their perspectives within dominant Japanese narratives, which also feature nuclear memory. Whereas the US government has remained largely evasive and aloof about the consequences of its nuclear testing in the Pacific, Japan encourages active, anti-nuclear war memory that could be congruent with Pacific Islander interests. After examining the historical events, context, and prevailing sentiments surrounding this issue in each region, however, my study finds that even within Japanese narratives, Pacific Islander narratives can only go so far because of Japan's own nuclear power industry, its hierarchical relationship with the Pacific Islands, and its strong ties to the US in what can be interpreted as enduring Cold War politics.
Contributors: Hinze, Rie Victoria (Author) / Benkert, Volker (Thesis director) / Moore, Aaron (Committee member) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description

Company X has developed RealSense™ technology, a depth-sensing camera that gives machines the ability to capture three-dimensional spaces along with motion within those spaces. The goal of RealSense was to give machines human-like senses, such as knowing how far away objects are and perceiving the surrounding environment. The key issue for Company X is how to commercialize RealSense's depth recognition capabilities. This thesis addresses the problem by examining which markets to address and how to monetize this technology. The first part of the analysis identified potential markets for RealSense. This was achieved by evaluating current markets that could benefit from the camera's gesture recognition, 3D scanning, and depth-sensing abilities. After identifying seven industries where RealSense could add value, a model of the available, addressable, and obtainable market sizes was developed for each segment. Key competitors and market dynamics were used to estimate the portion of the market that Company X could capture. These models provided a forecast of the discounted gross profits that could be earned over the next five years. These forecasted gross profits, combined with an examination of the competitive landscape and synergistic opportunities, resulted in the selection of the three segments thought to be most profitable to Company X: smart home, consumer drones, and automotive. The final part of the analysis investigated entrance strategies. Company X's competitive advantages in each space were found by examining the competition, both for the RealSense camera in general and for other technologies specific to each industry. Finally, ideas for monetizing RealSense were developed by exploring various revenue models and channels.
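The layered market model and five-year discounted gross-profit forecast described above follow standard sizing arithmetic, sketched below in Python. Every figure used here (market sizes, capture shares, margin, discount rate, growth ramp) is a placeholder rather than a number from the thesis.

```python
def obtainable_market(available: float, addressable_share: float, obtainable_share: float) -> float:
    """Narrow the available market to the slice the company could realistically capture."""
    return available * addressable_share * obtainable_share

def discounted_gross_profit(revenues, gross_margin: float, discount_rate: float) -> float:
    """Sum gross profits over the forecast horizon, discounted back to present value."""
    return sum(
        (revenue * gross_margin) / (1.0 + discount_rate) ** year
        for year, revenue in enumerate(revenues, start=1)
    )

if __name__ == "__main__":
    # Illustrative inputs only, for a single hypothetical segment.
    obtainable = obtainable_market(available=2.0e9, addressable_share=0.40, obtainable_share=0.10)
    revenues = [obtainable * ramp for ramp in (0.2, 0.4, 0.7, 1.0, 1.2)]  # five-year adoption ramp
    value = discounted_gross_profit(revenues, gross_margin=0.55, discount_rate=0.12)
    print(f"Discounted five-year gross profit: ${value:,.0f}")
```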
Contributors: Dunn, Nicole (Co-author) / Boudreau, Thomas (Co-author) / Kinzy, Chris (Co-author) / Radigan, Thomas (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / WPC Graduate Programs (Contributor) / Department of Psychology (Contributor) / Department of Finance (Contributor) / School of Accountancy (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Science (Contributor) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better by recognizing the different features present within it. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors, and no facial expression datasets of that size are available. Due to this limited availability of data for training a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN originally trained for another computer vision task is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from the CNN, and classify these extracted features. Once this system was built, different aspects of it were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
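The feature-extraction pipeline described above (a frozen, pre-trained CNN acting as a generic feature extractor that feeds a separate classifier) might look roughly like the following sketch. The original work evaluated several pre-trained networks, layers, and classifiers; this version assumes an ImageNet-trained ResNet-18 from torchvision and a linear SVM from scikit-learn purely as stand-ins.

```python
import torch
import torchvision
from torchvision import transforms
from sklearn.svm import LinearSVC

# Frozen, ImageNet-pretrained backbone with its final classification layer removed,
# so a forward pass yields a 512-dimensional feature vector per image.
# (The string-valued `weights` argument requires torchvision >= 0.13.)
backbone = torchvision.models.resnet18(weights="DEFAULT")
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Map a list of PIL images to an (N, 512) NumPy array of CNN features."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return feature_extractor(batch).flatten(start_dim=1).numpy()

def train_expression_classifier(train_images, train_labels):
    """Fit a linear classifier on the frozen features instead of fine-tuning the CNN."""
    clf = LinearSVC()
    clf.fit(extract_features(train_images), train_labels)
    return clf
```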
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds on the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
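The role of the bias parameter lambda can be illustrated with a simplified Metropolis-style acceptance rule: a proposed particle move is weighted by lambda raised to the change in neighbor count, so moves that tighten the configuration are favored whenever lambda > 1. This sketch omits the connectivity and local-structure checks of the actual algorithm and is a simplification for intuition only, not the algorithm analyzed in the thesis.

```python
import random

def acceptance_probability(lam: float, old_neighbors: int, new_neighbors: int) -> float:
    """Metropolis-style filter: weight a move by lam ** (change in neighbor count),
    so moves that gain neighbors are always accepted when lam > 1 (compression bias)
    and moves that lose neighbors are accepted only with probability lam ** delta < 1."""
    delta = new_neighbors - old_neighbors
    return min(1.0, lam ** delta)

def accept_move(lam: float, old_neighbors: int, new_neighbors: int) -> bool:
    """Decide whether a proposed particle move is taken."""
    return random.random() < acceptance_probability(lam, old_neighbors, new_neighbors)

if __name__ == "__main__":
    lam = 4.0  # lam > 1 biases the system toward compression; lam < 1 toward expansion
    print(accept_move(lam, old_neighbors=2, new_neighbors=4))  # always True (gains neighbors)
    print(accept_move(lam, old_neighbors=4, new_neighbors=2))  # True with probability 1/16
```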
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The two authors completed the entirety of their schooling in the United States, from preschool to university. Both authors lost interest in their education with each successive year and assumed the nature of learning and education was to blame. In their second year at Arizona State University, the two students took a class on the Kashiwagi Information Measurement Theory and applied the concepts taught in that class to their past experiences in the United States education system to determine the cause of their waning interest in their education. Using KSM principles, the authors identified that the environment produced by an ineffectual and inefficient educational system was what led to their growing dissatisfaction with their education, as well as that of the majority of their peers. A negative correlation was found between GPA and control: as the control in a student's environment increased, their GPA decreased. The data collected for this thesis also support the conclusion that as a student is exposed to a high-stress environment, their GPA and average amount of sleep per night decrease.
Contributors: Kulanathan, Shivaan (Co-author) / Westlake, Kyle (Co-author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Gunnoe, Jake (Committee member) / Computer Science and Engineering Program (Contributor) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Food safety is vital to the well-being of society; therefore, it is important to inspect food products to ensure that minimal health risks are present. A crucial phase of food inspection is the identification of foreign particles found in the sample, such as insect body parts. The presence of certain species of insects, especially storage beetles, is a reliable indicator of possible contamination during storage and food processing. However, the current approach to identifying species is visual examination by human analysts; this method is rather subjective and time-consuming. Furthermore, confident identification requires extensive experience and training. To aid this inspection process, we have developed, in collaboration with FDA analysts, image analysis-based machine intelligence that achieves species identification with up to 90% accuracy. The current project is a continuation of this development effort. Here we present an image analysis environment that allows practical deployment of the machine intelligence on computers with limited processing power and memory. Using this environment, users can prepare input sets by selecting images for analysis and inspect these images through the integrated pan, zoom, and color analysis capabilities. After species analysis, the results panel allows the user to compare the analyzed images with reference images of the proposed species. Further additions to this environment should include a log of previously analyzed images and, eventually, interaction with a central cloud repository of images through a web-based interface. Additional issues to address include standardization of image layout, extension of the feature-extraction algorithm, and the use of image classification to build a central search engine for widespread use.
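The results-panel step, pairing an analyzed image with reference images of the top candidate species, can be sketched as follows. The species names, the reference-image directory layout, and the hard-coded classifier scores are all hypothetical; the actual classifier developed with the FDA is not reproduced here.

```python
from pathlib import Path

REFERENCE_DIR = Path("reference_images")  # hypothetical layout: one sub-folder per species

def top_candidates(probabilities: dict, k: int = 3):
    """Return the k most likely species with their confidence scores, best first."""
    return sorted(probabilities.items(), key=lambda item: item[1], reverse=True)[:k]

def build_results_panel(sample_path: str, probabilities: dict, k: int = 3):
    """Pair the analyzed fragment with reference images of each candidate species
    so an analyst can visually confirm or reject the proposed identification."""
    panel = []
    for species, score in top_candidates(probabilities, k):
        reference_images = sorted((REFERENCE_DIR / species).glob("*.png"))[:2]
        panel.append({"sample": sample_path, "species": species,
                      "confidence": score, "reference_images": reference_images})
    return panel

if __name__ == "__main__":
    # Scores would come from the species classifier; hard-coded here for illustration.
    scores = {"Tribolium castaneum": 0.62, "Oryzaephilus surinamensis": 0.24,
              "Sitophilus oryzae": 0.09}
    for entry in build_results_panel("fragment_001.png", scores):
        print(f"{entry['species']}: {entry['confidence']:.0%} "
              f"({len(entry['reference_images'])} reference images found)")
```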
Contributors: Martin, Daniel Luis (Author) / Ahn, Gail-Joon (Thesis director) / Doupé, Adam (Committee member) / Xu, Joshua (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This project looks into elementary school lunches around the world, with a focus on nutrition and government involvement. The project uses recent obesity research to determine the extent of childhood obesity and draws connections between obesity rates and each country's school food policies and resulting school lunch meals. The countries researched are Greece, the United States, Japan, and France. An effort is made to find accurate representations by using real, unstaged pictures of the school lunches as well as real, recent school lunch menus. Analysis of the nutritive balance of each country's overall school lunch meals includes explanation of possible reasons for lower-quality or less balanced school lunch meals. In Greece, the steadily rising child obesity rates are possibly due to Greece's struggling economy and the loss of traditional Greek foods in school lunches. In the U.S., the culprit behind uncontrolled obesity rates may be a combination of budget constraints and an unhealthful food culture that cannot easily adopt wholesome meals and meal-preparation methods. However, there have been recent efforts to improve school lunches through reimbursement to schools that comply with the new USDA NSLP meal pattern, and in combination with a general increased interest in making school lunches better, school lunches in the U.S. have been improving. In Japan, where obesity rates are fairly low, the retention of traditional cuisine, wholesome foods, and wholesome cooking methods, in combination with a higher meal budget, are probable reasons why child obesity rates are under control. In France, the combination of a higher budget with school lunches carefully calculated for balance, along with traditional foods cooked by skilled chefs, results in possibly the most healthful and palatable school lunches of the countries analyzed. Overall, it is concluded that major predictors of healthier, less obese children are higher food budgets, greater use of traditional foods, and a preference for wholesome foods and cooking methods over packaged foods.
Contributors: Osugi, Mallory Nicole (Author) / Grgich, Traci (Thesis director) / Mason, Maureen (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05