Matching Items (1,386)

Description
As mobile devices have risen to prominence over the last decade, their importance has been increasingly recognized. Workloads for mobile devices are often very different from those on desktop and server computers, and solutions that worked in the past are not always the best fit for the resource- and energy-constrained computing that characterizes mobile devices. While this is most commonly seen in CPU and graphics workloads, this device-class difference extends to I/O as well. However, while a few tools exist to help analyze mobile storage solutions, there exists a gap in the available software that prevents quality analysis of certain research initiatives, such as I/O deduplication on mobile devices. This honors thesis demonstrates a new tool that is capable of capturing I/O at the filesystem layer of mobile devices running the Android operating system, in support of new mobile storage research. Uniquely, it captures both the metadata of writes and the actual written data, transparently to the apps running on the devices. Based on a modification of the strace program, fstrace and its companion tool fstrace-replay can record and replay filesystem I/O of actual Android apps. Using this new tracing tool, several traces from popular Android apps such as Facebook and Twitter were collected and analyzed.
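As an illustration of the tracing approach described above (and not the thesis's actual fstrace implementation, which modifies strace internally so that written payloads are also captured), the following Python sketch shows how write metadata can be logged transparently to a traced program using an unmodified strace on a Linux host. The traced command, log path, and parsing regex are assumptions made for this example.

```python
# Minimal strace-based write capture, assuming a Linux host with strace installed.
# This is only an illustration of the idea; fstrace itself modifies strace to also
# record the written data and targets Android's filesystem layer.
import re
import subprocess

def trace_writes(cmd, logfile="trace.log"):
    """Run `cmd` under strace, logging write() calls, then parse the log."""
    subprocess.run(
        ["strace", "-f", "-e", "trace=openat,write", "-s", "64", "-o", logfile] + cmd,
        check=True,
    )
    writes = []
    pattern = re.compile(r'write\((\d+), "(.*)", (\d+)\)\s+=\s+(-?\d+)')
    with open(logfile) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                fd, data, requested, returned = m.groups()
                writes.append((int(fd), data, int(requested), int(returned)))
    return writes

if __name__ == "__main__":
    for fd, data, req, ret in trace_writes(["sh", "-c", "echo hello > /tmp/out.txt"]):
        print(f"fd={fd} bytes={ret} data={data!r}")
```

fstrace extends this idea to Android and pairs it with fstrace-replay so that recorded app workloads can be replayed for storage experiments.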
ContributorsMor, Omri (Author) / Zhao, Ming (Thesis director) / Zhao, Ziming (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Bioscience High School, a small magnet high school located in Downtown Phoenix with a STEAM (Science, Technology, Engineering, Arts, Math) focus, has been pushing to establish a computer science curriculum for all of its students from freshman to senior year. The school's combined Mission and Vision is to "...provide a rigorous, collaborative, and relevant academic program emphasizing an innovative, problem-based curriculum that develops literacy in the sciences, mathematics, and the arts, thus cultivating critical thinkers, creative problem-solvers, and compassionate citizens, who are able to thrive in our increasingly complex and technological communities." Computational thinking is an important part of developing the future problem-solvers that Bioscience High School aims to produce. Bioscience High School is also unique in that every student has a computer available to use, so adding computer science to the curriculum fits the school's goal of utilizing its resources to their full potential. However, the school's attempt at computer science integration has fallen short due to a lack of expertise among the math and science teachers. The lack of training and support has postponed the development of the program, and the school is in need of someone with expertise in the field to help reboot it. As a result, I decided to create a course focused on teaching students the concepts of computational thinking and their application through Scratch and Arduino programming.
ContributorsLiu, Deming (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Technology both stimulates and is stimulated by people and society. As a classic example, games have shaped and have been shaped by people's preferences. Today, online computer games are everywhere, attracting and connecting players all over the world. At the same time, however, a lower-tech form of gaming has emerged alongside online computer games: trading card games (TCGs). Surprisingly, TCGs have been able to compete on the same level as online computer games. Drawing on my past experiences, I offer a theory that encompasses three forms of social interaction to explain TCGs' success. The three types of interaction are: interaction of identities, interaction of interests, and interaction of influences. Interaction of identities is the constant interchange of player information and knowledge through different mediums. Interaction of interests involves the exchange and quality of the tangible and intangible. Interaction of influences deals with the fluid flow of communication and ideas that players use to change an outcome, however consciously or unconsciously done. None of these three forms of interaction acts alone; each is part of a grand picture of interaction of interactions. Altogether, these factors explain, at least in part, the current popularity of trading card games as a relatively simple technology in a society with a plethora of technologically advanced entertainments and games, such as virtual computer gaming. A web-based survey was devised to further examine the effects that the three forms of interaction have on online computer game and trading card game players. The results were consistent with the premises of the three-factor interaction theory.
ContributorsQin, Sheng (Author) / Eaton, John P. (Thesis director) / Olsen, G. Douglas (Committee member) / Department of Psychology (Contributor) / Department of Management (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Observations of four times ionized iron and nickel (Fe V & Ni V) in the G191-B2B white dwarf spectrum have been used to test for variations in the fine structure constant, α, in the presence of strong gravitational fields. The laboratory wavelengths for these ions were thought to be the cause of inconsistent conclusions regarding the variation of α as observed through the white dwarf spectrum. This thesis presents 129 revised Fe V wavelengths (1200 Å to 1600 Å) and 161 revised Ni V wavelengths (1200 Å to 1400 Å) with uncertainties of approximately 3 mÅ. A systematic calibration error is identified in the previous Ni V wavelengths and is corrected in this work. The evaluation of the fine structure variation is significantly improved with the results found in this thesis.
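For context on how revised laboratory wavelengths feed into such an α-variation test, the sketch below maps a wavelength shift onto Δα/α using the standard sensitivity-coefficient relation ω = ω₀ + q·x with x = (α/α₀)² − 1. The line, shift, and q value are purely illustrative and are not results from this thesis.

```python
# Hedged illustration of the standard alpha-sensitivity relation
#   omega = omega_0 + q * x,   x = (alpha/alpha_0)**2 - 1,
# where omega is the transition wavenumber (cm^-1) and q is a sensitivity
# coefficient from atomic-structure calculations. All numbers are hypothetical.

def delta_alpha_over_alpha(lab_wavelength_A, observed_wavelength_A, q_cm1):
    """Infer d(alpha)/alpha ~ x/2 from a lab vs. observed (gravity-corrected)
    wavelength pair, to first order in small shifts."""
    omega_lab = 1e8 / lab_wavelength_A        # cm^-1 (wavelength in Angstroms)
    omega_obs = 1e8 / observed_wavelength_A
    x = (omega_obs - omega_lab) / q_cm1       # x = (alpha/alpha_0)^2 - 1
    return x / 2.0                            # d(alpha)/alpha for |x| << 1

if __name__ == "__main__":
    # Hypothetical Fe V line near 1400 A with a 3 mA residual shift
    print(delta_alpha_over_alpha(1400.000, 1400.003, q_cm1=1500.0))
```

The 3 mÅ uncertainties quoted above matter precisely because shifts of this size translate into the small Δα/α values being tested.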
ContributorsWard, Jacob Wolfgang (Author) / Treacy, Michael (Thesis director) / Alarcon, Ricardo (Committee member) / Nave, Gillian (Committee member) / Department of Physics (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or cause disruption of processes and daily operations. One of the most important parts is predicting and foreshadowing failures in the system, so that they are fixed before they turn into large issues. One specific area where preventive maintenance plays a large part in daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human looking at a car and diagnosing any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System is an internal system designed to meet all these criteria and more. The EPMS consists of a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data, and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in different parts of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set in order to encompass all normal ranges of output. If the sensor data is outside the margins, the warning and deviation are recorded and a severity level is calculated. In addition to the individual component models, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle. All of these results are analyzed by a simple heuristic algorithm and a decision is made for the vehicle's health status, which is sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
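The following Python sketch (not the actual EPMS code) illustrates the margin-checking and heuristic roll-up described above; component names, operating ranges, and the maintenance threshold are hypothetical.

```python
# Minimal sketch of margin checks, severity scoring, and a vehicle-wide decision.
from dataclasses import dataclass

@dataclass
class OperatingRange:
    low: float
    high: float

def severity(value, rng):
    """0 inside the margins; otherwise deviation as a fraction of the range width."""
    if rng.low <= value <= rng.high:
        return 0.0
    deviation = (rng.low - value) if value < rng.low else (value - rng.high)
    return deviation / (rng.high - rng.low)

def vehicle_health(readings, ranges, maintenance_threshold=0.25):
    """Aggregate per-component severities with a simple heuristic rule."""
    severities = {name: severity(readings[name], ranges[name]) for name in ranges}
    worst = max(severities.values())
    status = "SEND_FOR_MAINTENANCE" if worst > maintenance_threshold else "OK"
    return status, severities

if __name__ == "__main__":
    ranges = {"battery_voltage": OperatingRange(11.8, 14.6),
              "lidar_temp_c": OperatingRange(-10.0, 60.0)}
    readings = {"battery_voltage": 11.2, "lidar_temp_c": 45.0}
    print(vehicle_health(readings, ranges))
```

In the full system described in the thesis, the per-component results are additionally fed through vehicle-specific mathematical models and reported to the Fleet Management System.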
ContributorsMian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Background: Noninvasive MRI methods that can accurately detect subtle brain changes are highly desirable when studying disease-modifying interventions. Texture analysis is a novel imaging technique that extracts a large number of image features with high specificity and predictive power. In this investigation, we use texture analysis to assess and classify age-related changes in the right and left hippocampal regions, the areas known to show some of the earliest changes in Alzheimer's disease (AD). Apolipoprotein E (APOE)'s e4 allele confers an increased risk for AD, so studying differences in APOE e4 carriers may help to ascertain subtle brain changes before there has been an obvious change in behavior. We examined texture analysis measures that predict age-related changes, which reflect atrophy, in a group of cognitively normal individuals. We hypothesized that the APOE e4 carriers would exhibit significant age-related differences in texture features compared to non-carriers, so that the predictive texture features hold promise for early assessment of AD.
Methods: 120 normal adults between the ages of 32 and 90 were recruited for this neuroimaging study from a larger parent study at Mayo Clinic Arizona studying longitudinal cognitive functioning (Caselli et al., 2009). As part of the parent study, the participants were genotyped for APOE genetic polymorphisms and received comprehensive cognitive testing every two years, on average. Neuroimaging was done at Barrow Neurological Institute, and a 3D T1-weighted magnetic resonance image was obtained during scanning that allowed for subsequent texture analysis processing. Voxel-based features of the appearance, structure, and arrangement of these regions of interest were extracted using the Mayo Clinic Python Texture Analysis Pipeline (pyTAP). Algorithms applied in feature extraction included Grey-Level Co-Occurrence Matrix (GLCM), Gabor Filter Banks (GFB), Local Binary Patterns (LBP), Discrete Orthogonal Stockwell Transform (DOST), and Laplacian-of-Gaussian Histograms (LoGH). Principal component (PC) analysis was used to reduce the dimensionality of the algorithmically selected features to 13 PCs. A stepwise forward regression model was used to determine the effect of APOE status (APOE e4 carriers vs. noncarriers) and the texture feature principal components on age (as a continuous variable). After identification of 5 significant predictors of age in the model, the individual feature coefficients of those principal components were examined to determine which features contributed most significantly to the prediction of an aging brain.
Results: 70 texture features were extracted for the two regions of interest in each participant's scan. The texture features were coded as 70 initial components and were rotated to generate 13 principal components (PCs) that contributed 75% of the variance in the dataset by scree plot analysis. The forward stepwise regression model used in this exploratory study significantly predicted age, accounting for approximately 40% of the variance in the data. The regression model revealed 5 significant regressors (2 right PCs, APOE status, and 2 left PC by APOE interactions). Finally, the specific texture features that contributed to each significant PC were identified.
Conclusion: Analysis of image texture features resulted in a statistical model that was able to detect subtle changes in brain integrity associated with age in a group of participants who are cognitively normal but have an increased risk of developing AD based on the presence of the APOE e4 allele. This is an important finding, given that detecting subtle changes in regions vulnerable to the effects of AD could allow certain texture features to serve as noninvasive, sensitive biomarkers predictive of AD. Even with only a small number of patients, the ability to identify sensitive imaging biomarkers could greatly improve the speed of detection and the effectiveness of AD interventions.
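The sketch below illustrates the general shape of this pipeline using generic scikit-learn and statsmodels calls on stand-in data; it is not the Mayo Clinic pyTAP pipeline, and it fits a single OLS model rather than the stepwise forward selection used in the study.

```python
# Hedged sketch: reduce many texture features to principal components, then
# regress age on the PCs, APOE carrier status, and PC x APOE interactions.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 120, 70
texture = rng.normal(size=(n_subjects, n_features))   # stand-in texture features
age = rng.uniform(32, 90, size=n_subjects)            # stand-in ages
apoe_e4 = rng.integers(0, 2, size=n_subjects)         # 1 = e4 carrier

# Reduce the 70 features to principal components (the study retained 13)
pcs = PCA(n_components=13).fit_transform(StandardScaler().fit_transform(texture))

# Design matrix: PCs, carrier status, and PC x APOE interaction terms
X = np.column_stack([pcs, apoe_e4, pcs * apoe_e4[:, None]])
model = sm.OLS(age, sm.add_constant(X)).fit()
print(model.rsquared)   # variance in age explained by texture + APOE terms
```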
ContributorsSilva, Annelise Michelle (Author) / Baxter, Leslie (Thesis director) / McBeath, Michael (Committee member) / Presson, Clark (Committee member) / School of Life Sciences (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
A growing number of jobs in the US require a college degree or technical education, and the wage difference between jobs requiring a high school diploma and those requiring a college education has increased to over $17,000 per year. Enrollment levels in postsecondary education have been rising for at least the past decade, and this paper attempts to tease out how much of the increasing enrollment is due to changes in companies' demand for workers. A Bartik Instrument, a measure of local-area labor demand, was constructed for each county in the US from 2007 to 2014, and the effect of changing labor demand on local postsecondary education enrollment rates was examined using multivariate linear regression. A small positive effect was found, but its size relative to the total change in enrollment levels was diminutive. From the start to the end of the recession (2007 to 2010), unemployment as calculated from the Bartik Instrument increased from 5.3% nationally to 8.2%. This level of labor demand contraction would lead to a 0.42% increase in enrollment between 2008 and 2011. The true enrollment increase over this period was 7.6%, so by the model's calculation about 5.5% of the enrollment increase was attributable to changes in labor demand.
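For readers unfamiliar with the instrument, the sketch below shows the standard shift-share construction of a Bartik measure of local labor demand, using hypothetical industry shares and national growth rates rather than the data used in this paper.

```python
# Hedged sketch of a Bartik (shift-share) instrument: hold each county's
# base-year industry employment shares fixed and interact them with national
# industry growth rates. Industry labels and values are hypothetical.
import pandas as pd

def bartik_instrument(local_shares: pd.DataFrame, national_growth: pd.Series) -> pd.Series:
    """local_shares: counties x industries, base-year employment shares (rows sum to 1).
    national_growth: industry-level national employment growth rates.
    Returns predicted local employment growth for each county."""
    return local_shares.mul(national_growth, axis=1).sum(axis=1)

if __name__ == "__main__":
    shares = pd.DataFrame(
        {"manufacturing": [0.4, 0.1], "services": [0.5, 0.7], "construction": [0.1, 0.2]},
        index=["county_A", "county_B"],
    )
    growth_2007_2010 = pd.Series(
        {"manufacturing": -0.15, "services": -0.03, "construction": -0.25}
    )
    print(bartik_instrument(shares, growth_2007_2010))
```

Because the shares are fixed at base-year values and the growth rates are national, the resulting measure varies across counties only through their industry mix, which is what makes it usable as a demand-side instrument in the regression described above.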
ContributorsHerder, Daniel Steven (Author) / Dillon, Eleanor (Thesis director) / Schoellman, Todd (Committee member) / Economics Program in CLAS (Contributor) / Department of Psychology (Contributor) / Sandra Day O'Connor College of Law (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
In this study, potential differences in the manifestation and rates of eating disorders and symptoms (body dissatisfaction, weight and shape concerns, food restriction, and compensatory behaviors) in college women across sexual orientations were examined. The sociocultural model of eating disorders was also examined for these women across sexual orientations. The participants were organized into three sexual orientation groups for analysis: heterosexual (group 1); bisexual, pansexual, and polysexual (group 2); and lesbian, gay, queer, transsexual, asexual, and other (group 3). Cross-sectional data revealed significant differences among the three sexual orientation groups on loss of control over eating, but no significant group differences on body dissatisfaction, thin-ideal internalization, weight-related eating pathology, or total eating disorder symptom scores. The sociocultural model was not predictive of eating disorder symptoms among non-heterosexual groups. Longitudinal analyses revealed that the sociocultural model of eating disorders prospectively predicts eating disorder symptoms among heterosexual women, but not non-heterosexual women. Both cross-sectional and longitudinal analyses indicate that non-heterosexual women may be protected from societal pressure to subscribe to the thin ideal and its subsequent internalization. However, the comparison group of heterosexual women in our study may not have been completely representative of undergraduate women in terms of total eating disorder symptoms or eating pathology. Additionally, regardless of sexual orientation, our sample reported more total eating disorder symptoms and emotional eating than previous studies. These findings have both clinical and research implications. Future research is needed to determine what risk factors and treatment target variables are relevant for non-heterosexual women.
ContributorsNorman, Elizabeth Blair (Author) / Perez, Marisol (Thesis director) / Presson, Clark (Committee member) / Cavanaugh Toft, Carolyn (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Company X has developed RealSense™ technology, a depth-sensing camera that provides machines the ability to capture three-dimensional spaces along with motion within these spaces. The goal of RealSense was to give machines human-like senses, such as knowing how far away objects are and perceiving the surrounding environment. The key issue for Company X is how to commercialize RealSense's depth recognition capabilities. This thesis addresses the problem by examining which markets to address and how to monetize this technology. The first part of the analysis identified potential markets for RealSense. This was achieved by evaluating current markets that could benefit from the camera's gesture recognition, 3D scanning, and depth sensing abilities. After identifying seven industries where RealSense could add value, a model of the available, addressable, and obtainable market sizes was developed for each segment. Key competitors and market dynamics were used to estimate the portion of the market that Company X could capture. These models provided a forecast of the discounted gross profits that could be earned over the next five years. These forecasted gross profits, combined with an examination of the competitive landscape and synergistic opportunities, resulted in the selection of the three segments thought to be most profitable to Company X. These segments are smart home, consumer drones, and automotive. The final part of the analysis investigated entrance strategies. Company X's competitive advantages in each space were found by examining the competition, both for the RealSense camera in general and other technologies specific to each industry. Finally, ideas about ways to monetize RealSense were developed by exploring various revenue models and channels.
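The sketch below illustrates the kind of segment-level calculation described above, discounting the gross profits from an obtainable share of a market over a five-year horizon; all inputs are hypothetical placeholders, not figures from the analysis of Company X.

```python
# Minimal market-sizing arithmetic: obtainable share of a segment, times unit
# margin, discounted over a five-year forecast. All numbers are hypothetical.

def discounted_gross_profit(units_per_year, capture_rate, price, unit_cost,
                            discount_rate=0.10, years=5):
    """Sum of discounted annual gross profits for one market segment."""
    total = 0.0
    for t in range(1, years + 1):
        annual_profit = units_per_year * capture_rate * (price - unit_cost)
        total += annual_profit / (1 + discount_rate) ** t
    return total

if __name__ == "__main__":
    # Hypothetical consumer-drone segment: 2M units/yr, 5% capture, $30 module margin
    print(f"${discounted_gross_profit(2_000_000, 0.05, 45.0, 15.0):,.0f}")
```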
ContributorsDunn, Nicole (Co-author) / Boudreau, Thomas (Co-author) / Kinzy, Chris (Co-author) / Radigan, Thomas (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / WPC Graduate Programs (Contributor) / Department of Psychology (Contributor) / Department of Finance (Contributor) / School of Accountancy (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Science (Contributor) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. For facial expression recognition, no datasets of this size are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and training a new CNN specifically to extract features related to FER, a CNN originally trained for another computer vision task is reused. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization used on input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage advantages of deep CNNs for facial expression recognition.
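The sketch below illustrates naïve domain adaptation in this sense: an ImageNet-pretrained ResNet-18 from torchvision stands in for the pre-trained CNN (the specific network, layer, classifier, and normalization used in the thesis are not assumed here), features are taken from the penultimate layer, and a linear SVM is trained on them.

```python
# Hedged sketch: reuse a CNN trained on another vision task as a fixed feature
# extractor, then train a conventional classifier on the extracted features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pre-trained CNN with the final classification layer removed
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Return one 512-d penultimate-layer feature vector per input face image."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        return backbone(batch).numpy()

# Usage (assuming train_images/test_images are lists of PIL face crops and
# train_labels are expression labels such as 'happy', 'sad', ...):
# clf = SVC(kernel="linear").fit(extract_features(train_images), train_labels)
# predictions = clf.predict(extract_features(test_images))
```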
ContributorsEusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05