Matching Items (518)
Description
The purpose of our research was to develop recommendations and/or strategies for Company A's Data Center Group in the context of the server CPU chip industry. We used data collected by the International Data Corporation (IDC) and provided by our team coaches, along with data publicly accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand its server CPU chip product mix to meet the new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with its x86 server chips and a 95% market segment share. The cloud industry is dominated by seven companies that Company A calls "the Super 7": Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. Additionally, in the long run, the substantial growth of the Super 7 could fuel the development of their own design teams and a push to make their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and the key terminology relevant to our project, then narrowed our scope by focusing on the cloud computing segment of the server industry. We next researched what Company A has already been doing in the context of cloud computing and what it is currently doing to address the problem. Using our market analysis, we identified key areas we think Company A's Data Center Group should focus on. Finally, using the information available to us, we developed the strategies and recommendations that we believe will position Company A's Data Center Group well in an extremely fast-growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
As mobile devices have risen to prominence over the last decade, their importance has been increasingly recognized. Workloads for mobile devices are often very different from those on desktop and server computers, and solutions that worked in the past are not always the best fit for the resource- and energy-constrained computing that characterizes mobile devices. While this is most commonly seen in CPU and graphics workloads, this device-class difference extends to I/O as well. However, while a few tools exist to help analyze mobile storage solutions, there is a gap in the available software that prevents quality analysis of certain research initiatives, such as I/O deduplication on mobile devices. This honors thesis demonstrates a new tool that is capable of capturing I/O at the filesystem layer of mobile devices running the Android operating system, in support of new mobile storage research. Uniquely, it is able to capture both the metadata of writes and the actual written data, transparently to the apps running on the devices. Based on a modification of the strace program, fstrace and its companion tool fstrace-replay can record and replay the filesystem I/O of actual Android apps. Using this new tracing tool, several traces from popular Android apps such as Facebook and Twitter were collected and analyzed.
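A minimal sketch of the replay side of such a tool, assuming a hypothetical one-record-per-line text trace format ("path offset hexdata"); the actual fstrace/fstrace-replay record layout is not described in this abstract:

```python
# Illustrative only: the trace format below is an assumption for the sketch,
# not the real fstrace record layout.
import os

def replay_writes(trace_path, sandbox_dir):
    """Re-issue recorded writes inside a sandbox so on-disk state mirrors the traced run."""
    with open(trace_path) as trace:
        for line in trace:
            rel_path, offset, hex_data = line.split()
            target = os.path.join(sandbox_dir, rel_path.lstrip("/"))
            os.makedirs(os.path.dirname(target), exist_ok=True)
            mode = "r+b" if os.path.exists(target) else "w+b"
            with open(target, mode) as f:
                f.seek(int(offset))               # move to the recorded file offset
                f.write(bytes.fromhex(hex_data))  # replay the captured payload

# replay_writes("twitter_session.trace", "/tmp/replay")
```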
Contributors: Mor, Omri (Author) / Zhao, Ming (Thesis director) / Zhao, Ziming (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
A specific species of the genus Geobacter exhibits useful electrical properties when processing a molecule often found in wastewater. A team at ASU including Dr. César Torres and Dr. Sudeep Popat used that species to create a special type of solid oxide fuel cell we refer to as a microbial fuel cell. The possible chemical processes and properties of the reactions used by the Geobacter are investigated indirectly by taking Electrochemical Impedance Spectroscopy measurements of the electrode-electrolyte interface of the microbial fuel cell, which yield the fuel cell's complex impedance at specific frequencies. Investigating the multiple polarization processes that give rise to the measured impedance values is difficult to do directly, so the distribution function of relaxation times (DRT) is examined instead. The DRT is related to the measured complex impedance values through a general, non-physical equivalent circuit model. That model is originally given as a Fredholm integral equation with a non-square-integrable kernel, which makes the inverse problem of determining the DRT from the impedance measurements ill-posed. The original integral equation is rewritten in terms of new variables as an equation relating the complex impedance to the convolution of a function based upon the original integral kernel with a related but separate distribution function, which we call the convolutional distribution function. This new convolutional equation is solved by reducing the convolution to a pointwise product using the Fourier transform, solving the inverse problem by pointwise division and application of a filter function (equivalent to regularization), and then taking the inverse Fourier transform to obtain the convolutional distribution function. In the literature, the convolutional distribution function is then examined and certain values of a specific, less general equivalent circuit model are calculated, from which aspects of the original chemical processes are derived. We attempted instead to determine the original DRT directly from the calculated convolutional distribution function. This method proved less useful in practice because of certain values fixed at the time of the experiment, which meant the original DRT could only be recovered in a window that would not normally contain the desired information. This limits any attempt to extend the solution for the convolutional distribution function to the original DRT. Further research may determine a method for interpreting the convolutional distribution function without an equivalent circuit model, as is done with the regularization method used to solve directly for the original DRT.
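A minimal numerical sketch of the deconvolution step described above, with a synthetic kernel and distribution standing in for the transformed integral kernel and the convolutional distribution function; the laboratory's actual kernel, data, and filter function are not reproduced here:

```python
# Synthetic sketch: recover a "convolutional distribution" from its convolution
# with a kernel via FFT, pointwise division, and a Gaussian filter function.
import numpy as np

n = 1024
x = np.linspace(-10, 10, n, endpoint=False)
dx = x[1] - x[0]
kernel = 1.0 / np.cosh(x)                  # stand-in for the transformed kernel
g_true = np.exp(-((x - 1.0) ** 2))         # stand-in convolutional distribution

# Forward model: the convolution becomes a pointwise product in Fourier space.
K = np.fft.fft(kernel)
Z = K * np.fft.fft(g_true)                 # synthetic "measured" spectrum

# Inverse problem: pointwise division stabilized by a filter function (equivalent
# to regularization) that suppresses the frequencies where |K| is tiny and
# division would amplify measurement noise.
freqs = np.fft.fftfreq(n, d=dx)
filt = np.exp(-(freqs / 0.5) ** 2)         # Gaussian low-pass filter function
g_rec = np.real(np.fft.ifft(filt * Z / K)) # filtered recovery of g_true
```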
Contributors: Baker, Robert Simpson (Author) / Renaut, Rosemary (Thesis director) / Kostelich, Eric (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A problem of interest in theoretical physics is the issue of the evaporation of black holes via Hawking radiation subject to a fixed background. We approach this problem by considering an electromagnetic analogue, where we have substituted Hawking radiation with the Schwinger effect. We treat the case of massless QED in 1+1 dimensions with the path integral approach to quantum field theory, and discuss the Feynman diagrams that result from our analysis. The results from this thesis may be useful in finding a version of the Schwinger effect that can be solved exactly and perturbatively, as such a version may provide insights into the gravitational problem of Hawking radiation.
Contributors: Dhumuntarao, Aditya (Author) / Parikh, Maulik (Thesis director) / Davies, Paul C. W. (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The Clean Power Plan (CPP) seeks to reduce CO2 emissions from the energy industry, which is the largest source of CO2 emissions in the United States. In order to comply with the Clean Power Plan, electric utilities in Arizona will need to meet electricity demand while reducing the use of fossil fuel sources in generation. The study first outlines the organization of the power sector in the United States and the structural and price changes attempted in the industry during the period of restructuring. The recent final rule of the Clean Power Plan is then described in detail, with a narrowed focus on Arizona. Data from APS, a utility representative of Arizona, is used for the remainder of the analysis to determine the price increase necessary to cut Arizona's CO2 emissions enough to meet the federal goal. The first regression models the variables that affect total demand, and thus generation load, from which we estimate the marginal effect of price on demand. The second regression models CO2 emissions as a function of different levels of generation. This allows the effect of generation on emissions to vary across ranges of load, following the logic of the merit order of plants and the differing emission rates of different sources. Two methods are used to find the percentage increase in price necessary to meet the CPP goals: one based on the mass-based goal for Arizona and the other based on the percentage reduction for Arizona. A price increase is then calculated for a projection into the future using known changes in energy supply.
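The two regressions could be set up along the following lines; the column names, controls, and quartile binning are illustrative assumptions rather than the thesis's actual APS specification:

```python
# Illustrative only: hypothetical hourly utility data with columns
# load, price, temp, month, co2; not the thesis's actual model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aps_hourly.csv")   # hypothetical file of hourly utility data

# Regression 1: (log) demand on (log) retail price plus controls; the coefficient
# on np.log(price) estimates the price elasticity, i.e. the marginal effect of
# price on demand used to back out the required price increase.
demand_model = smf.ols("np.log(load) ~ np.log(price) + temp + C(month)", data=df).fit()

# Regression 2: CO2 emissions as a function of generation, with the slope allowed
# to differ across load ranges to mimic the merit order of plants.
df["load_bin"] = pd.qcut(df["load"], 4, labels=["low", "mid", "high", "peak"])
emissions_model = smf.ols("co2 ~ load:C(load_bin)", data=df).fit()

print(demand_model.params["np.log(price)"])   # estimated price elasticity of demand
```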
Contributors: Herman, Laura Alexandra (Author) / Silverman, Daniel (Thesis director) / Kuminoff, Nicolai (Committee member) / Department of Economics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
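For context, a sketch of leapfrog time stepping with the classic second-order Robert-Asselin filter on one-dimensional linear advection; the fourth-order filter studied in the thesis refines this filter and is not reproduced here:

```python
# One-dimensional linear advection u_t + c*u_x = 0 on a periodic domain, stepped
# with leapfrog in time and centered differences in space, plus the classic
# second-order Robert-Asselin time filter that damps the computational mode.
import numpy as np

nx, c, nu = 200, 1.0, 0.1            # grid points, wave speed, filter coefficient
dx = 1.0 / nx
dt = 0.4 * dx / c                    # CFL-stable time step
x = np.arange(nx) * dx

u_old = np.exp(-200.0 * (x - 0.5) ** 2)   # u^{n-1}: Gaussian pulse
# A single forward-Euler step generates the second time level needed by leapfrog.
u_now = u_old - c * dt / (2 * dx) * (np.roll(u_old, -1) - np.roll(u_old, 1))

for _ in range(1000):
    # Leapfrog: u^{n+1}_j = u^{n-1}_j - (c*dt/dx) * (u^n_{j+1} - u^n_{j-1})
    u_new = u_old - c * dt / dx * (np.roll(u_now, -1) - np.roll(u_now, 1))
    # Robert-Asselin filter applied to the middle time level:
    #   u_filtered^n = u^n + nu * (u^{n-1} - 2*u^n + u^{n+1})
    u_now_filtered = u_now + nu * (u_old - 2.0 * u_now + u_new)
    u_old, u_now = u_now_filtered, u_new
```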
Created: 2016-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or cause disruption of processes and daily operations. One of the most important parts is being able to predict and anticipate failures in the system, in order to make sure they are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet all of these criteria and more. The EPMS comprises a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set so as to encompass all normal ranges of output. If the sensor data falls outside the margins, the warning and deviation are recorded and a severity level is calculated. In addition to the component-level models, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle as a whole. All of these results are analyzed by a simple heuristic algorithm, and a decision on the vehicle's health status is made and sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters that decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
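A minimal sketch of the margin-check-and-severity idea described above; the component names, operating ranges, severity formula, and heuristic are illustrative assumptions, not the EPMS's empirically derived models:

```python
# Illustrative assumptions only: sensor names, ranges, and severity rule are made up.
OPERATING_RANGES = {                     # (low, high) normal output bounds per sensor
    "battery_temp_c": (10.0, 45.0),
    "motor_current_a": (0.0, 30.0),
    "lidar_supply_v": (11.5, 12.5),
}

def check_reading(sensor, value):
    """Compare a reading to its pre-set operating range; record deviation and severity."""
    low, high = OPERATING_RANGES[sensor]
    if low <= value <= high:
        return None                      # within margins: nothing to record
    deviation = (low - value) if value < low else (value - high)
    # Severity grows with the deviation relative to the width of the normal range.
    severity = min(3, 1 + int(2 * deviation / (high - low)))
    return {"sensor": sensor, "deviation": deviation, "severity": severity}

readings = [("battery_temp_c", 52.0), ("motor_current_a", 12.0), ("lidar_supply_v", 12.1)]
warnings = [w for w in (check_reading(s, v) for s, v in readings) if w]
# Simple vehicle-wide heuristic: any medium-or-worse warning triggers maintenance.
needs_maintenance = any(w["severity"] >= 2 for w in warnings)
```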
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system with a median survival of 14.6 months under standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. In particular, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain cases of stochastic parameterization improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and the anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
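A one-dimensional sketch of a proliferation-diffusion model with a stochastically perturbed proliferation rate; the parameter values are illustrative only, and the thesis's MRI-driven, spatially realistic simulations are not reproduced here:

```python
# One-dimensional sketch of the proliferation-diffusion model
#   du/dt = D * d2u/dx2 + rho * u * (1 - u),
# with the proliferation rate rho perturbed by lognormal noise at each step to
# mimic a stochastic parameterization. Values are illustrative, not fitted to GL261 data.
import numpy as np

nx, L = 200, 10.0                     # grid points, domain length
dx = L / nx
D, rho, dt = 0.05, 0.2, 0.01          # diffusion rate, proliferation rate, time step
u = np.zeros(nx)
u[nx // 2] = 1.0                      # small initial tumor seed (normalized density)

rng = np.random.default_rng(0)
for _ in range(5000):
    rho_t = rho * rng.lognormal(mean=0.0, sigma=0.2)         # stochastic proliferation
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2 # periodic Laplacian
    u = u + dt * (D * lap + rho_t * u * (1.0 - u))           # explicit Euler update
```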
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Aberrant glycosylation has been shown to be linked to specific cancers, and using this idea, it was proposed that the levels of glycans in the blood could predict stage I adenocarcinoma. To track this glycosylation, glycans were broken down into glycan nodes via methylation analysis. This analysis utilized information from N-, O-, and lipid-linked glycans detected by gas chromatography-mass spectrometry. The resulting glycan node ratios represent the initial quantitative data used in this experiment.
For this experiment, two sets of 50 µl blood plasma samples were provided by NYU Medical School. These samples were analyzed by Dr. Borges's lab to obtain normalized biomarker levels from patients with stage I adenocarcinoma and from control patients matched for age, smoking status, and gender. ROC curves were constructed under individual and paired conditions and the AUC was calculated in Wolfram Mathematica 10.2. Methods such as increasing the size of the training set, using hard versus soft margins, and processing biomarkers together and individually were used in order to increase the AUC. Using a soft margin for this particular data set proved most useful, raising the AUC from 0.6013 with the initial hard margin to 0.6585. As for which biomarkers yielded the better value, the 6-Glc/6-Man and 3,6-Gal glycan node ratios performed best, with an AUC of 0.7687, a sensitivity of 0.7684, and a specificity of 0.6051. While this is not enough accuracy to serve as a primary diagnostic tool for stage I adenocarcinoma, the methods examined in this paper warrant further evaluation; by comparison, the current clinical standard blood test for prostate cancer has an AUC of only 0.67.
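The analysis was carried out in Wolfram Mathematica; an equivalent sketch in Python/scikit-learn of the soft- versus hard-margin comparison and the ROC/AUC computation, with synthetic stand-in data in place of the plasma biomarker levels, might look like this:

```python
# Stand-in sketch (the thesis used Wolfram Mathematica 10.2): a linear SVM with a
# soft margin (small C) versus a near-hard margin (large C), scored by ROC AUC.
# X and y below are synthetic placeholders for the glycan node-ratio biomarkers.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                       # e.g. 6-Glc/6-Man and 3,6-Gal ratios
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.5, size=100) > 0).astype(int)  # case/control

for C, label in [(1e6, "near-hard margin"), (0.1, "soft margin")]:
    clf = SVC(kernel="linear", C=C)
    # Cross-validated decision scores keep the AUC estimate honest on small samples.
    scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")
    print(label, "AUC:", round(roc_auc_score(y, scores), 4))
```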
Contributors: De Jesus, Celine Spicer (Author) / Taylor, Thomas (Thesis director) / Borges, Chad (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection on images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibilities of detecting edges from the phase of the spectral data, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, which also allows for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector that is shown to be resilient to noisy, intermittent phase data. Lastly, a thresholding technique is applied to give an explicit enhanced edge detector ready to be used. Examples throughout are demonstrated on both signals and images.
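A sketch of the baseline concentration factor edge detector acting directly on Fourier coefficients, using the first-order polynomial concentration factor; the phase-only and post-processed detectors developed in the thesis are not reproduced here:

```python
# Baseline concentration factor edge detection from spectral data on a toy signal,
# with the first-order polynomial concentration factor sigma(eta) = pi*eta.
import numpy as np

N = 64                                       # highest Fourier mode retained
x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
f = np.where(x < 0, -1.0, 1.0)               # test signal: jump of height 2 at x = 0

k = np.arange(-N, N + 1)
# Fourier coefficients f_hat(k) = (1/2pi) * integral of f(x) exp(-ikx) dx
f_hat = np.array([np.mean(f * np.exp(-1j * kk * x)) for kk in k])

sigma = np.pi * np.abs(k) / N                # concentration factor sigma(|k|/N)
# Jump approximation: i * sum_k sign(k) * sigma(|k|/N) * f_hat(k) * exp(ikx)
jump = np.real(1j * np.sum(np.sign(k)[:, None] * sigma[:, None] * f_hat[:, None]
                           * np.exp(1j * np.outer(k, x)), axis=0))
# "jump" is near 2 at x = 0, near -2 at the periodic boundary, and near 0 elsewhere.
```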
Contributors: Reynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05