Description
Construction is a defining characteristic of geometry classes. In a traditional classroom, teachers and students use physical tools (e.g., a compass and straightedge) in their constructions. With modern technology, however, construction is also possible through digital applications such as GeoGebra and Geometer's Sketchpad.
Many studies have researched the benefits of digital manipulatives and digital environments through student completion of tasks and testing. This study instead examines students' use of the digital tools and manipulatives, along with their interactions with the digital environment. To this end, I conducted exploratory teaching experiments with two Calculus I students.
In the exploratory teaching experiments, students were introduced to a GeoGebra application developed by Fischer (2019), which includes instructional videos and corresponding quizzes, as well as exercises and interactive notepads where students could use digital tools to construct line segments and circles (corresponding to the physical straightedge and compass). The application built up the students' foundational knowledge, culminating in the construction and verbal proof of Proposition 1 of Euclid's Elements (Euclid, 1733).
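For reference, Proposition 1 constructs an equilateral triangle on a given segment; a sketch of the standard compass-and-straightedge steps the students reproduce digitally (paraphrased from Euclid, not quoted from the thesis):

```latex
% Euclid, Elements I.1: on a given segment AB, construct an equilateral triangle.
% 1. Draw circle $C_1$ with center $A$ and radius $AB$.
% 2. Draw circle $C_2$ with center $B$ and radius $BA$.
% 3. Let $C$ be a point where $C_1$ and $C_2$ intersect.
% Then $AC = AB$ (radii of $C_1$) and $BC = BA$ (radii of $C_2$), so
AC = AB = BC \quad\Longrightarrow\quad \triangle ABC \text{ is equilateral.}
```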
The central findings of this thesis concern the students' interactions with the digital environment: observed changes in their conceptions of radii and circles and in their use of tools. The students were observed to hold conceptions of radii as a process, as a geometric shape, and as a geometric object. I observed the students' conceptions of a circle change from a geometric shape to a geometric object and, with that change, observed their use of tools shift from a measuring focus to a property focus.
I summarize the students' work, classify their reasoning and actions into the above categories, and analyze how the digital environment impacts the students' conceptions. I also briefly discuss the implications of the findings for pedagogy and future research.
Contributors: Sakauye, Noelle Marie (Author) / Roh, Kyeong Hah (Thesis director) / Zandieh, Michelle (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The original version of Helix, the one I pitched when first deciding to make a video game for my thesis, is an action-platformer, with the intent of metroidvania-style progression and an interconnected world map.

The current version of Helix is a turn-based role-playing game, with the intent of roguelike gameplay and a dark fantasy theme. I will first explore the challenges that came with programming my own game - not quite from scratch, but also without a prebuilt engine - then transition into game design and how Helix has evolved from its original form to what we see today.
Contributors: Discipulo, Isaiah K (Author) / Meuth, Ryan (Thesis director) / Kobayashi, Yoshihiro (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to determine whether a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results comparable to those obtained with a conventional chassis dynamometer. Torque curves were generated with the experimental device under a variety of circumstances and also obtained professionally by a precision engine-testing company. Metrics were created to measure how consistently the experimental device generates torque curves and how closely those curves match the professionally obtained ones. The results revealed that although the test device does not quite match the precision of the professional chassis dynamometer, it does create torque curves that closely resemble the chassis-dynamometer curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy most consumers interested in measuring their vehicle's engine performance, but probably lacks the accuracy and precision needed to appeal to professionals.
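The abstract does not give the device's actual reduction algorithm; below is a minimal sketch of one plausible pipeline from wheel-edge acceleration to torque and horsepower curves, assuming a known effective rotational inertia and a standing start. All names and parameters are hypothetical, not the thesis author's code.

```python
import numpy as np

def torque_power_curves(a_t, t, wheel_radius, inertia_eff, omega0=0.0):
    """Hypothetical reduction of wheel-edge accelerometer data to torque
    and horsepower curves. Assumes inertia_eff, the effective rotational
    inertia seen at the wheel (vehicle mass and drivetrain reflected to
    the wheel), is known.

    a_t          -- tangential acceleration at the accelerometer (m/s^2)
    t            -- sample times (s)
    wheel_radius -- hub-to-accelerometer distance (m)
    omega0       -- initial angular speed (rad/s)
    """
    a_t, t = np.asarray(a_t, float), np.asarray(t, float)
    alpha = a_t / wheel_radius                       # angular acceleration (rad/s^2)
    # angular speed by trapezoidal integration of alpha
    omega = omega0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(t))))
    torque = inertia_eff * alpha                     # N*m at the wheel
    horsepower = torque * omega / 745.7              # P = tau * omega, in hp
    rpm = omega * 60.0 / (2.0 * np.pi)
    return rpm, torque, horsepower
```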
Contributors: King, Michael (Author) / Ren, Yi (Thesis director) / Spanias, Andreas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The main goal of this project is to study approximations of functions on circular and spherical domains using the cubed-sphere discretization. On each subdomain, the function is approximated by windowed Fourier expansions. Of particular interest is how accuracy depends on the choice of window and the size of the overlapping regions. We use Matlab to vary each of the parameters involved in these computations and to measure the overall error, enabling us to determine which choices produce the most accurate results. This work is motivated by problems arising in atmospheric research.
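To make the idea concrete, here is a one-dimensional toy version of the overlapped windowed-Fourier approach, assuming numpy rather than the project's Matlab code: two overlapping subdomains each get a local trigonometric least-squares fit, and smooth windows summing to one blend the pieces. The overlap width and mode count are the knobs the thesis varies; the specific values below are illustrative only.

```python
import numpy as np

def fourier_design(x, n_modes, period=2.0):
    """Design matrix for a truncated Fourier series (period > subdomain length)."""
    cols = [np.ones_like(x)]
    for k in range(1, n_modes + 1):
        cols += [np.cos(2 * np.pi * k * x / period),
                 np.sin(2 * np.pi * k * x / period)]
    return np.stack(cols, axis=1)

def local_fit(x, y, n_modes):
    """Least-squares Fourier fit on one subdomain; returns an evaluator."""
    coef, *_ = np.linalg.lstsq(fourier_design(x, n_modes), y, rcond=None)
    return lambda xq: fourier_design(xq, n_modes) @ coef

x = np.linspace(0.0, 1.0, 601)
f = np.exp(np.sin(3 * x))            # smooth test function
c, d = 0.5, 0.1                      # overlap centered at c with half-width d

left, right = x <= c + d, x >= c - d                 # overlapping subdomains
fit_l = local_fit(x[left], f[left], 12)
fit_r = local_fit(x[right], f[right], 12)

# smooth partition of unity: w_l + w_r == 1 everywhere
s = np.clip((x - (c - d)) / (2 * d), 0.0, 1.0)
w_r = 0.5 * (1.0 - np.cos(np.pi * s))
w_l = 1.0 - w_r

u = w_l * fit_l(x) + w_r * fit_r(x)                  # blended approximation
print("max abs error:", np.max(np.abs(u - f)))
```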
Contributors: Sopa, Megan Grace (Author) / Platte, Rodrigo (Thesis director) / Kostelich, Eric (Committee member) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression-modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP.

For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to analyze how well each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data from a Kevlar-49 Swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical solutions for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the range of double-precision floating-point values.

To create a multivariate fit, the multilinear solution was created as the simplest solution to the multivariate regression problem. This solution was then extended to nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for multilinear regression, and then numerically using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion.

For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution addresses the shortcomings of simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, and combines the strengths of each. Loess smoothing weights each point in a partition of the data set and then fits either a line or a polynomial within that partition. Both linear and quadratic variants were applied to a carbon-fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape more useful for analyzing the experimental data.

Finally, the EDP program itself was examined to assess its current functionality for processing data, as illustrated by shear tests on carbon-fiber data, and the functionality still to be developed. The probabilistic and raw-data-processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R.
With the functionality and relevant considerations for these methods developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full streamlined structural analysis on experimental data.
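The point about avoiding matrix inversion can be made concrete. Below is a minimal sketch of multilinear regression via QR decomposition, including a nonlinear model linearized into separable terms as the abstract describes; it assumes numpy and is illustrative, not EDP's actual implementation.

```python
import numpy as np

def multilinear_fit_qr(X, y):
    """Least-squares coefficients for y ~ [1, X] via QR decomposition.

    Solving R b = Q^T y by a triangular solve avoids forming (A^T A)^{-1},
    whose condition number is the square of A's -- the instability the
    normal equations suffer from. Illustrative sketch, not EDP's code.
    """
    A = np.column_stack([np.ones(len(y)), X])   # design matrix with intercept
    Q, R = np.linalg.qr(A)                      # A = Q R, Q orthonormal
    return np.linalg.solve(R, Q.T @ y)          # no explicit inversion of A^T A

# Linearizable nonlinear model: y = a * u^p * v^q, so
# log y = log a + p log u + q log v is multilinear in the logs.
rng = np.random.default_rng(0)
u, v = rng.uniform(1, 2, 200), rng.uniform(1, 2, 200)
y = 3.0 * u**1.5 * v**-0.5 * np.exp(rng.normal(0, 0.01, 200))
b = multilinear_fit_qr(np.column_stack([np.log(u), np.log(v)]), np.log(y))
print(np.exp(b[0]), b[1], b[2])   # approximately 3.0, 1.5, -0.5
```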
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
There is a disconnect between the way people are taught to find success and happiness and the results observed. Society teaches us that success will lead to happiness; instead, it is argued here that success is ingrained in happiness. Case studies of four established, successful people (Jack Ma, Elon Musk, Ricardo Semler, and William Gore) have been conducted in order to observe an apparent pattern. This data, coupled with data from Michael Boehringer's story, is used to formulate a solution to the proposed problem. Each case study is designed to observe the characteristics that allow these individuals to be successful and to exhibit traits of happiness. Happiness is analyzed in terms of passion and the desire to perform consistently: someone who does what they love, paired with the ability to perform on a regular basis, is considered a happy person. The data indicates an observable pattern within the results. From this pattern, certain traits have been highlighted and used to formulate guidelines to aid someone falling short of success and happiness. The results indicate that there are simple questions that can guide people to a happier life. Three basic questions are defined: Is it something you love? Can you see yourself doing this every day? Does it add value? If someone can answer yes to all three, that person will be able to find happiness, with success following. These guidelines can be applied to those struggling with unhappiness and failure. By creating such a formula, the youth can be taught a new way of thinking that helps to eliminate these issues that many people are facing.
Contributors: Boehringer, Michael Alexander (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Department of Management (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The purpose of our research was to develop recommendations and strategies for Company A's data center group in the context of the server CPU chip industry. We used data collected from the International Data Corporation (IDC), provided by our team coaches, along with data accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand its server CPU chip product mix to meet the new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with its x86 server chips and a 95% market segment share. The cloud industry is dominated by seven companies that Company A calls the "Super 7": Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. Additionally, the substantial long-run growth of the Super 7 could fuel the development of their own design teams and a move toward making their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and the key terminology relevant to our project, narrowing our scope to focus on the cloud computing aspect of the server industry. We then researched what Company A has already been doing in the context of cloud computing and what it is currently doing to address the problem. Next, using our market analysis, we identified the key areas we think Company A's data center group should focus on. Using the information available to us, we developed strategies and recommendations that we believe will help Company A's Data Center Group position itself well in an extremely fast-growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
As mobile devices have risen to prominence over the last decade, their importance has been increasingly recognized. Workloads for mobile devices are often very different from those on desktop and server computers, and solutions that worked in the past are not always the best fit for the resource- and energy-constrained computing that characterizes mobile devices. While this is most commonly seen in CPU and graphics workloads, the difference extends to I/O as well. However, while a few tools exist to help analyze mobile storage solutions, there is a gap in the available software that prevents quality analysis of certain research initiatives, such as I/O deduplication on mobile devices. This honors thesis demonstrates a new tool that captures I/O at the filesystem layer of mobile devices running the Android operating system, in support of new mobile storage research. Uniquely, it captures both the metadata of writes and the actual written data, transparently to the apps running on the devices. Based on a modification of the strace program, fstrace and its companion tool fstrace-replay can record and replay the filesystem I/O of actual Android apps. Using this new tracing tool, traces from several popular Android apps, such as Facebook and Twitter, were collected and analyzed.
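The abstract does not detail fstrace's capture mechanism beyond its strace lineage. For flavor, vanilla strace can already log each write's metadata and (truncated) payload, which hints at the approach; the command, placeholder pid, and parsing below are illustrative only, not fstrace itself.

```python
import re
import subprocess

# Illustrative only: attach plain strace to a process and log each write()
# with its fd, (possibly truncated) payload, and byte count -- the kind of
# record fstrace captures at the filesystem layer. `pid` is a placeholder;
# attaching requires ptrace permission.
pid = 1234
proc = subprocess.Popen(
    ["strace", "-f", "-e", "trace=write", "-s", "4096", "-p", str(pid)],
    stderr=subprocess.PIPE, text=True)  # strace writes its log to stderr

write_re = re.compile(r'write\((\d+), "(.*)"(?:\.\.\.)?, (\d+)\)\s*=\s*(-?\d+)')
for line in proc.stderr:
    m = write_re.search(line)
    if m:
        fd, data, requested, returned = m.groups()
        print(f"fd={fd} requested={requested} returned={returned} data={data[:40]!r}")
```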
Contributors: Mor, Omri (Author) / Zhao, Ming (Thesis director) / Zhao, Ziming (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
A specific species of the genus Geobacter exhibits useful electrical properties when processing a molecule often found in wastewater. A team at ASU including Dr. César Torres and Dr. Sudeep Popat used that species to create a special type of solid oxide fuel cell we refer to as a microbial fuel cell. Possible chemical processes and properties of the reactions used by the Geobacter are investigated indirectly by taking Electrochemical Impedance Spectroscopy (EIS) measurements of the electrode-electrolyte interface of the microbial fuel cell to obtain the fuel cell's complex impedance at specific frequencies. Because the multiple polarization processes that give rise to the measured impedance values are difficult to investigate directly, we examine the distribution function of relaxation times (DRT) instead.

The DRT is related to the measured complex impedance values through a general, non-physical equivalent-circuit model. That model is originally given in terms of a Fredholm integral equation with a non-square-integrable kernel, which makes the inverse problem of determining the DRT from the impedance measurements ill-posed. The original integral equation is rewritten, in new variables, as an equation relating the complex impedance to the convolution of a function based on the original integral kernel with a related but separate distribution function, which we call the convolutional distribution function. This convolutional equation is solved by reducing the convolution to a pointwise product via the Fourier transform, solving the inverse problem by pointwise division together with a filter function (equivalent to regularization), and applying the inverse Fourier transform to recover the convolutional distribution function.

In the literature, the convolutional distribution function is then examined and certain values of a specific, less general equivalent-circuit model are calculated, from which aspects of the original chemical processes are derived. We attempted instead to determine the original DRT directly from the calculated convolutional distribution function. This method proved less useful in practice: certain values fixed at the time of the experiment meant the original DRT could be recovered only in a window that would not normally contain the desired information, which limits any attempt to extend the solution for the convolutional distribution function to the original DRT. Further research may yield a method for interpreting the convolutional distribution function without an equivalent-circuit model, as is done with the regularization method used to solve directly for the original DRT.
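The change of variables described above can be written out explicitly. Below is a standard form of the DRT equation and its convolution rewriting, reconstructed from the DRT literature; the notation is not necessarily the thesis's own.

```latex
% Standard DRT model: the measured impedance is a Fredholm integral over
% relaxation times \tau, with ohmic offset R_\infty and polarization
% resistance R_{\mathrm{pol}},
Z(\omega) = R_{\infty} + R_{\mathrm{pol}} \int_{-\infty}^{\infty}
            \frac{\gamma(\ln\tau)}{1 + i\omega\tau} \, d\ln\tau .
% With x = \ln\tau and y = -\ln\omega we have \omega\tau = e^{-(y-x)},
% so the kernel depends only on y - x and the integral is a convolution:
Z(y) - R_{\infty} = R_{\mathrm{pol}} \int_{-\infty}^{\infty}
                    \gamma(x)\, k(y - x)\, dx
                  = R_{\mathrm{pol}}\,(\gamma * k)(y),
\qquad k(u) = \frac{1}{1 + i e^{-u}} .
% The Fourier transform turns this into the pointwise product
% \hat{Z} = R_{\mathrm{pol}}\,\hat{\gamma}\,\hat{k}, inverted by filtered
% (regularized) pointwise division, as described in the abstract.
```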
Contributors: Baker, Robert Simpson (Author) / Renaut, Rosemary (Thesis director) / Kostelich, Eric (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A problem of interest in theoretical physics is the evaporation of black holes via Hawking radiation on a fixed background. We approach this problem by considering an electromagnetic analogue, in which Hawking radiation is replaced by the Schwinger effect. We treat the case of massless QED in 1+1 dimensions using the path-integral approach to quantum field theory and discuss the resulting Feynman diagrams from our analysis. The results of this thesis may be useful in finding a version of the Schwinger effect that can be solved both exactly and perturbatively, as such a version may provide insights into the gravitational problem of Hawking radiation.
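For context, the textbook Schwinger result that motivates the analogy is quoted below from standard QFT references, not from the thesis itself.

```latex
% Schwinger (1951): pair-production rate per unit volume in a constant
% electric field E, in 3+1 dimensions with \hbar = c = 1,
\Gamma = \frac{(eE)^{2}}{4\pi^{3}} \sum_{n=1}^{\infty} \frac{1}{n^{2}}
         \exp\!\left(-\frac{n\pi m^{2}}{eE}\right),
% a non-perturbative effect: the exponential suppression vanishes in the
% massless limit m \to 0 relevant to the 1+1-dimensional model above.
```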
Contributors: Dhumuntarao, Aditya (Author) / Parikh, Maulik (Thesis director) / Davies, Paul C. W. (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05