Matching Items (612)
Description
As mobile devices have risen to prominence over the last decade, their importance has been increasingly recognized. Workloads on mobile devices often differ substantially from those on desktop and server computers, and solutions that worked in the past are not always the best fit for the resource- and energy-constrained computing that characterizes mobile devices. While this difference is most commonly seen in CPU and graphics workloads, it extends to I/O as well. However, while a few tools exist to help analyze mobile storage solutions, gaps in the available software prevent quality analysis of certain research initiatives, such as I/O deduplication on mobile devices. This honors thesis demonstrates a new tool that captures I/O at the filesystem layer of mobile devices running the Android operating system, in support of new mobile storage research. Uniquely, it captures both the metadata of writes and the actual written data, transparently to the apps running on the device. Based on a modification of the strace program, fstrace and its companion tool fstrace-replay can record and replay the filesystem I/O of real Android apps. Using this new tracing tool, several traces from popular Android apps such as Facebook and Twitter were collected and analyzed.
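For readers unfamiliar with how strace-style tools operate, the sketch below shows the core ptrace loop this family of tools is built on: the tracer stops the traced process at each system call and reads its registers. This is a minimal x86-64 Linux illustration under assumed conventions, not fstrace itself, which targets Android and additionally captures the written data.

```c
/* Minimal sketch of strace/fstrace-style syscall interception.
 * Illustrative only: logs write() metadata on x86-64 Linux. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <sys/syscall.h>

int main(int argc, char *argv[]) {
    if (argc < 2) { fprintf(stderr, "usage: %s <cmd> [args]\n", argv[0]); return 1; }
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(1);
    }
    int status, entering = 1;
    waitpid(child, &status, 0);                  /* stop at exec */
    while (!WIFEXITED(status)) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
        waitpid(child, &status, 0);
        if (WIFEXITED(status)) break;
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        if (entering && regs.orig_rax == SYS_write)   /* syscall entry only */
            printf("write(fd=%lld, len=%lld)\n",
                   (long long)regs.rdi, (long long)regs.rdx);
        entering = !entering;                    /* stops alternate entry/exit */
    }
    return 0;
}
```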
Contributors: Mor, Omri (Author) / Zhao, Ming (Thesis director) / Zhao, Ziming (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Bioscience High School, a small magnet high school in downtown Phoenix with a STEAM (Science, Technology, Engineering, Arts, Math) focus, has been pushing to establish a computer science curriculum for all of its students, from freshman to senior year. The school's Mision (Mission and Vision) is to "...provide a rigorous, collaborative, and relevant academic program emphasizing an innovative, problem-based curriculum that develops literacy in the sciences, mathematics, and the arts, thus cultivating critical thinkers, creative problem-solvers, and compassionate citizens, who are able to thrive in our increasingly complex and technological communities." Computational thinking is an important part of developing the kind of future problem solver Bioscience High School is looking to produce. Bioscience High School is unique in that every student has a computer available to use, so adding computer science to the curriculum fits the school's goal of utilizing its resources to their full potential. However, the school's attempt at computer science integration has fallen short due to a lack of expertise among the math and science teachers. The lack of training and support has postponed the development of the program, and the school is in desperate need of someone with expertise in the field to help reboot it. As a result, I have created a course focused on teaching students the concepts of computational thinking and their application through Scratch and Arduino programming.
Contributors: Liu, Deming (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This project's goal was to design a central processing unit (CPU) with a fairly large instruction set and a multistage pipeline design, with the potential to be used in a multi-core system. The CPU was coded and synthesized in Verilog. This was accomplished by building on the CPU design fundamentals learned in CSE 320 and expanding the instruction set to resemble a proper Reduced Instruction Set Computing (RISC) system. A multistage pipeline was incorporated into the CPU to increase instruction throughput (instructions completed per second). A major area of focus was the multi-core design, which is master-slave in nature: the master core tells the sub-cores where to begin execution, the idea being that the operating system or kernel runs on the master core while "user space" programs run on the sub-cores. The rationale is that the system would specialize in running many small functions across its many supported cores. The system supports around 45 instructions, including several types of jumps and branches (for changing the program counter based on conditions), arithmetic operations (addition, subtraction, or, and, etc.), and system calls (for controlling core execution). The system achieves a very low clocks-per-instruction (CPI) ratio, but to do so its second stage contains several modules and would most likely be a performance bottleneck if implemented in hardware. The CPU is not perfect and contains a few errors and oversights, but the system as a whole functions as intended.
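The Verilog source is not reproduced here, but the decode step of a design like this can be illustrated with a small C model. The 32-bit field layout, opcode values, and names below are assumptions chosen only to show how a RISC-style instruction word splits into the classes the abstract lists (arithmetic, branches, jumps, system calls).

```c
/* Illustrative RISC-style decode, NOT the thesis's Verilog design.
 * The field layout and opcode values are assumed for demonstration. */
#include <stdint.h>
#include <stdio.h>

enum op_class { OP_ARITH = 0x0, OP_BRANCH = 0x1, OP_JUMP = 0x2, OP_SYSCALL = 0x3 };

typedef struct {
    uint8_t  cls;         /* instruction class (arith, branch, jump, syscall) */
    uint8_t  rd, rs, rt;  /* register fields */
    uint16_t imm;         /* immediate / branch offset */
} decoded_t;

static decoded_t decode(uint32_t word) {
    decoded_t d;
    d.cls = (word >> 28) & 0xF;   /* top 4 bits: class (assumed layout)  */
    d.rd  = (word >> 23) & 0x1F;  /* next 5 bits: destination register   */
    d.rs  = (word >> 18) & 0x1F;  /* source register 1                   */
    d.rt  = (word >> 13) & 0x1F;  /* source register 2                   */
    d.imm = word & 0x1FFF;        /* low 13 bits: immediate              */
    return d;
}

int main(void) {
    /* Encode a hypothetical "add r1, r2, r3" and decode it back. */
    uint32_t word = ((uint32_t)OP_ARITH << 28) | (1u << 23) | (2u << 18) | (3u << 13);
    decoded_t d = decode(word);
    printf("class=%u rd=%u rs=%u rt=%u imm=%u\n", d.cls, d.rd, d.rs, d.rt, d.imm);
    return 0;
}
```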
Contributors: Kolden, Brian Andrew (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Observations of four-times-ionized iron and nickel (Fe V and Ni V) in the spectrum of the white dwarf G191-B2B have been used to test for variations in the fine structure constant, α, in the presence of strong gravitational fields. The laboratory wavelengths for these ions were thought to be the cause of inconsistent conclusions regarding the variation of α as observed through the white dwarf spectrum. This thesis presents 129 revised Fe V wavelengths (1200 Å to 1600 Å) and 161 revised Ni V wavelengths (1200 Å to 1400 Å), with uncertainties of approximately 3 mÅ. A systematic calibration error in the previous Ni V wavelengths is identified and corrected in this work. These results significantly improve the evaluation of the fine structure variation.
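For context, analyses of this kind conventionally use the q-coefficient parameterization of a transition's sensitivity to α. The relation below is the standard form used in the literature, not an equation reproduced from the thesis; the q coefficients are ion- and line-dependent.

```latex
% Standard parameterization used in searches for \alpha variation:
\omega = \omega_0 + q\left[\left(\frac{\alpha}{\alpha_0}\right)^2 - 1\right],
\qquad
\frac{\Delta\alpha}{\alpha} \approx \frac{\omega - \omega_0}{2q}
\quad \text{for } \left|\Delta\alpha/\alpha\right| \ll 1,
```

so a few-mÅ improvement in the laboratory wavelengths (hence in ω₀) translates directly into a tighter constraint on Δα/α.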
Contributors: Ward, Jacob Wolfgang (Author) / Treacy, Michael (Thesis director) / Alarcon, Ricardo (Committee member) / Nave, Gillian (Committee member) / Department of Physics (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or disrupt processes and daily operations. One of the most important parts is being able to predict failures in the system so that they can be fixed before they turn into large issues. One area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time) or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that: instead of a human looking at a car and diagnosing any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet all these criteria and more. It comprises a central computer that monitors all major electronic components in an autonomous vehicle through standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component is compared to pre-set operating conditions, which are chosen to encompass all normal ranges of output. If the sensor data falls outside these margins, the warning and the deviation are recorded and a severity level is calculated. In addition to the per-component models, a vehicle-wide model predicts how urgently the vehicle needs maintenance. All of these results are analyzed by a simple heuristic algorithm, and a decision about the vehicle's health status is sent to the Fleet Management System. This system allows accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that determines maintenance needs. With it, human inspectors are no longer necessary for a fleet of autonomous vehicles: the Fleet Management System oversees inspections, and the system operator sets the parameters that decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle; the models and operating margins are created from empirical data collected during normal testing operations. The system is modular and can be used in a variety of vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
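A minimal sketch of the margin check described above, assuming illustrative component names, bounds, and a simple relative-deviation severity formula; the thesis's actual models are empirical and vehicle-specific.

```c
/* Hedged sketch: compare a sensor reading against pre-set operating
 * margins, and compute a severity level from the deviation. All names,
 * bounds, and the severity formula are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    const char *name;
    double low, high;    /* pre-set normal operating margins */
} component_t;

/* Returns 0.0 inside the margins; otherwise the deviation relative
 * to the margin span, treated here as the severity level. */
static double severity(const component_t *c, double reading) {
    double span = c->high - c->low;
    if (reading < c->low)  return (c->low - reading) / span;
    if (reading > c->high) return (reading - c->high) / span;
    return 0.0;
}

int main(void) {
    component_t motor_temp = { "drive motor temperature (C)", 20.0, 80.0 };
    double reading = 95.0;                    /* sample out-of-range reading */
    double s = severity(&motor_temp, reading);
    if (s > 0.0)
        printf("WARNING %s: reading=%.1f severity=%.2f\n",
               motor_temp.name, reading, s);  /* severity=0.25 here */
    return 0;
}
```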
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Company X has developed RealSense™ technology, a depth-sensing camera that gives machines the ability to capture three-dimensional spaces along with motion within those spaces. The goal of RealSense was to give machines human-like senses, such as knowing how far away objects are and perceiving the surrounding environment. The key issue for Company X is how to commercialize RealSense's depth recognition capabilities. This thesis addresses that problem by examining which markets to enter and how to monetize the technology. The first part of the analysis identified potential markets for RealSense by evaluating current markets that could benefit from the camera's gesture recognition, 3D scanning, and depth-sensing abilities. After identifying seven industries where RealSense could add value, a model of the available, addressable, and obtainable market sizes was developed for each segment. Key competitors and market dynamics were used to estimate the portion of the market that Company X could capture. These models provided a forecast of the discounted gross profits that could be earned over the next five years. The forecasted gross profits, combined with an examination of the competitive landscape and synergistic opportunities, led to the selection of the three segments thought to be most profitable for Company X: smart home, consumer drones, and automotive. The final part of the analysis investigated entrance strategies. Company X's competitive advantages in each space were found by examining the competition, both for the RealSense camera in general and for other technologies specific to each industry. Finally, ideas for monetizing RealSense were developed by exploring various revenue models and channels.
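The forecasting step described above can be illustrated with a small discounted gross profit calculation. All figures below (the obtainable-revenue ramp, margin, and discount rate) are placeholders, not the thesis's actual estimates.

```c
/* Hedged sketch of a five-year discounted gross profit model.
 * Every number here is an assumed placeholder for illustration. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double obtainable_revenue[5] = { 10e6, 25e6, 45e6, 60e6, 70e6 };  /* assumed */
    double gross_margin  = 0.40;   /* assumed gross margin            */
    double discount_rate = 0.10;   /* assumed annual discount rate    */
    double total = 0.0;

    for (int year = 1; year <= 5; year++) {
        double gp = obtainable_revenue[year - 1] * gross_margin;
        total += gp / pow(1.0 + discount_rate, year);  /* discount to present */
    }
    printf("Discounted 5-year gross profit: $%.0f\n", total);
    return 0;   /* compile with -lm */
}
```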
Contributors: Dunn, Nicole (Co-author) / Boudreau, Thomas (Co-author) / Kinzy, Chris (Co-author) / Radigan, Thomas (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / WPC Graduate Programs (Contributor) / Department of Psychology (Contributor) / Department of Finance (Contributor) / School of Accountancy (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Science (Contributor) / W. P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs), and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors; the extracted information can be used to better understand the image through the different features present within it. Deep CNNs, however, can require training sets of more than a million pictures to fine-tune their feature detectors, and no facial expression dataset of that size is available. Due to this limited availability of training data, the idea of naïve domain adaptation is explored: instead of creating and training a new CNN specifically to extract features related to FER, a CNN previously trained for another computer vision task is reused. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of it were tested and tuned, including the choice of pre-trained CNN, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The constant evolution of technology has greatly shifted the way in which we gain knowledge and information, and this in turn has an effect on how we learn. Long gone are the days when students sat in libraries for hours, flipping through numerous books to find one specific piece of information; with the advent of Google, modern students can arrive at the same information within 15 seconds. This technology, the internet, is reshaping the way we learn. As a result, the academic integrity policies set forth at the college level seem outdated, often prohibiting the use of technology as a resource for learning. The purpose of this paper is to explore why exactly these resources are prohibited. By contrasting a subject such as Computer Science with the humanities, the paper explores the need for the internet as a resource in some fields as opposed to others. Looking at the knowledge presented in Computer Science, the course structure, and the role professors play in teaching that knowledge, this thesis evaluates the epistemology of engineering subjects. By juxtaposing Computer Science with the less technology-reliant humanities, it becomes clear that one common academic integrity policy does not suffice for an entire university. Instead, the policy should be amended for each subject in order to best foster an environment of learning at the university level. In conclusion, Arizona State University's Academic Integrity Policy is analyzed and suggestions are made to remove ambiguity in the language of the document, in order to promote learning at the university.
Contributors: Mohan, Sishir Basavapatna (Author) / Brake, Elizabeth (Thesis director) / Martin, William (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The purpose of this project was to construct and write code for a vehicle to take advantage of the benefits of combining stepper motors with mecanum wheels. This process involved building the physical vehicle, designing a custom PCB for the vehicle, writing code for the onboard microprocessor, and implementing motor control algorithms.
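A sketch of the inverse kinematics that makes the stepper/mecanum combination attractive: a commanded body velocity maps to four independent wheel speeds, which stepper motors can track precisely in open loop. The geometry constants and sign conventions below are assumptions (they depend on roller orientation and frame layout), not the project's actual firmware.

```c
/* Hedged sketch: body velocity (vx, vy, omega) -> per-wheel step rates
 * for a mecanum base driven by stepper motors. Constants are assumed. */
#include <stdio.h>
#include <math.h>

#define WHEEL_RADIUS  0.03   /* m, assumed */
#define HALF_LENGTH   0.10   /* m, wheel axle to chassis center, assumed */
#define HALF_WIDTH    0.12   /* m, wheel to chassis centerline, assumed */
#define STEPS_PER_REV 200.0  /* full steps per motor revolution, assumed */

/* Angular speed (rad/s) of each wheel for the requested body motion. */
static void mecanum_ik(double vx, double vy, double omega, double w[4]) {
    double k = HALF_LENGTH + HALF_WIDTH;
    w[0] = (vx - vy - k * omega) / WHEEL_RADIUS;  /* front-left  */
    w[1] = (vx + vy + k * omega) / WHEEL_RADIUS;  /* front-right */
    w[2] = (vx + vy - k * omega) / WHEEL_RADIUS;  /* rear-left   */
    w[3] = (vx - vy + k * omega) / WHEEL_RADIUS;  /* rear-right  */
}

int main(void) {
    double w[4];
    mecanum_ik(0.0, 0.2, 0.0, w);   /* pure sideways strafe at 0.2 m/s */
    for (int i = 0; i < 4; i++)
        printf("wheel %d: %.1f steps/s\n", i,
               w[i] / (2.0 * M_PI) * STEPS_PER_REV);  /* rad/s -> steps/s */
    return 0;
}
```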
Contributors: Davis, Severin Jan (Author) / Burger, Kevin (Thesis director) / Vannoni, Greg (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description
This project centered around designing a processor model (using the C programming language) based on the ColdFire computer architecture that runs on third-party software known as Open Virtual Platforms (OVP). The end goal is a fully functional processor model that can run ColdFire instructions and utilize peripheral devices in the same way as the hardware used in the embedded systems lab at ASU. Such a model would cut down on the substantial amount of time students spend commuting to the lab, and having the processor directly at their disposal would encourage them to spend more time outside of class learning the hardware and familiarizing themselves with development on an embedded microcontroller. The model is intended to be accurate, fast, and reliable; these aspects were pursued through rigorous unit testing and use of the OVP platform, which provides instruction-accurate simulation at hundreds of MIPS (millions of instructions per second) for the specified model. The end product was able to accurately simulate a subset of the ColdFire instructions at very high rates.
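OVP models are written against its VMI modeling API, which is not reproduced here; the generic C loop below only illustrates the fetch/decode/execute structure an instruction-accurate model follows, using an invented two-instruction toy ISA rather than actual ColdFire encodings.

```c
/* Hedged sketch of an instruction-accurate simulation loop with a toy
 * ISA (NOT ColdFire encodings, NOT the OVP VMI API). */
#include <stdint.h>
#include <stdio.h>

#define MEM_SIZE 256

typedef struct {
    uint32_t pc;
    uint32_t regs[8];
    uint8_t  mem[MEM_SIZE];
    int      halted;
} cpu_t;

static void step(cpu_t *c) {
    uint8_t op  = c->mem[c->pc];          /* fetch opcode byte */
    uint8_t rd  = c->mem[c->pc + 1] & 7;  /* register field    */
    uint8_t imm = c->mem[c->pc + 2];      /* immediate byte    */
    switch (op) {                         /* decode + execute  */
    case 0x01: c->regs[rd] += imm; c->pc += 3; break;  /* ADDI rd, imm */
    case 0xFF: c->halted = 1;                  break;  /* HALT         */
    default:   c->halted = 1;                  break;  /* unknown op   */
    }
}

int main(void) {
    cpu_t c = { 0 };
    uint8_t prog[] = { 0x01, 0x00, 0x05,   /* ADDI r0, 5 */
                       0x01, 0x00, 0x02,   /* ADDI r0, 2 */
                       0xFF };             /* HALT       */
    for (unsigned i = 0; i < sizeof prog; i++) c.mem[i] = prog[i];
    while (!c.halted) step(&c);            /* one instruction per step */
    printf("r0 = %u\n", c.regs[0]);        /* prints 7 */
    return 0;
}
```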
Contributors: Dunning, David Connor (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-12