Matching Items (30)
Description
In the realm of network science, many topics can be abstracted as graph problems, such as routing, connectivity enhancement, resource/frequency allocation, and so on. Though most of them are NP-hard to solve, heuristics as well as approximation algorithms have been proposed to achieve reasonably good results. Accordingly, this dissertation studies graph-related problems encountered in real applications. Two of the problems studied are derived from wireless networks, two more arise in FiWi and optical networks, another lies in the Radio-Frequency Identification (RFID) domain, and the last is inspired by satellite deployment.

The objective of most relay node placement problems is to place the fewest relay nodes in the deployment area so that the network formed by the sensors and the relay nodes is connected. Under a fixed-budget scenario, the expense involved in procuring the minimum number of relay nodes to make the network connected may exceed the budget. In this dissertation, we study a family of problems whose goal is to design a network with “maximal connectedness” or “minimal disconnectedness”, subject to a fixed budget constraint. Apart from “connectivity”, we also study a relay node problem in which a degree constraint is considered. The balance of reducing the degree of the network while maximizing communication forms the basis of our d-degree minimum arrangement (d-MA) problem. In this dissertation, we look at several approaches to solving the generalized d-MA problem, in which we embed a graph onto a subgraph of a given degree.
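The abstract does not spell out the heuristics; purely as an illustration of the fixed-budget variant, the sketch below greedily places relays so as to reduce the number of connected components. The range-based connectivity model and the greedy rule are assumptions, not the dissertation's method.

import math
from itertools import combinations

def components(points, r):
    # Count connected components when nodes within range r are linked,
    # using a small union-find with path halving.
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if math.dist(p, q) <= r:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

def greedy_relays(sensors, candidates, r, budget):
    # Spend the budget one relay at a time, always picking the candidate
    # site that yields the fewest components ("maximal connectedness").
    placed, remaining = [], list(candidates)
    for _ in range(min(budget, len(remaining))):
        best = min(remaining,
                   key=lambda c: components(sensors + placed + [c], r))
        placed.append(best)
        remaining.remove(best)
    return placed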

In recent years, considerable research has been conducted on optical and FiWi networks. Utilizing the recently proposed concept of “candidate trees” in optical networks, this dissertation studies a counting problem on complete graphs. Closed-form expressions are given for certain cases, and a polynomial counting algorithm for general cases is also presented. Routing plays a major role in FiWi networks. Based on a novel path-length metric that emphasizes the “heaviest edge”, this dissertation proposes a polynomial algorithm for single-path computation. An NP-completeness proof as well as an approximation algorithm are presented for multi-path routing.
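The metric is not defined in the abstract; one natural reading of a “heaviest edge” objective is the classic bottleneck (minimax) path, sketched below as a Dijkstra variant. This is an illustrative stand-in, not necessarily the dissertation's algorithm.

import heapq

def minimax_path_cost(graph, src, dst):
    # graph: {u: [(v, w), ...]}. Returns the smallest achievable weight of
    # the heaviest edge on any src -> dst path, or None if dst is unreachable.
    best = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dst:
            return cost
        if cost > best.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            new = max(cost, w)  # path cost = heaviest edge seen so far
            if new < best.get(v, float("inf")):
                best[v] = new
                heapq.heappush(heap, (new, v))
    return None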

Radio-frequency identification (RFID) technology is extensively used at present for identification and tracking of a multitude of objects. In many configurations, simultaneous activation of two readers may cause a “reader collision” when tags are present in the intersection of the sensing ranges of both readers. This dissertation addresses slotted time access for readers and tries to provide a collision-free scheduling scheme while minimizing total reading time.
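A common way to frame such scheduling, offered here only as an illustration, is to color a conflict graph of readers: readers whose ranges overlap on a tag may not share a time slot, and each color class becomes a slot. The greedy largest-degree-first coloring below is a heuristic sketch, not the dissertation's scheme.

def schedule_readers(readers, conflicts):
    # conflicts: set of frozenset({a, b}) reader pairs that collide.
    # Returns {reader: slot}; each slot (color class) fires collision-free.
    order = sorted(readers, key=lambda r: -sum(r in c for c in conflicts))
    slot_of = {}
    for r in order:
        taken = {slot_of[o] for o in slot_of if frozenset({r, o}) in conflicts}
        slot_of[r] = next(s for s in range(len(readers)) if s not in taken)
    return slot_of

# e.g. schedule_readers(["r1", "r2", "r3"], {frozenset({"r1", "r2"})})
# puts r1 and r2 in different slots and lets r3 share a slot with either.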

Finally, this dissertation studies a monitoring problem on the surface of the earth for significant environmental, social/political, and extreme events, using satellites as sensors. It is assumed that the impact of a significant event spills into neighboring regions and that there will be corresponding indicators. Careful deployment of sensors, utilizing “Identifying Codes”, can ensure that even though the number of deployed sensors is smaller than the number of regions, the region where the event has taken place can still be uniquely identified.
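To make the definition concrete (this check is illustrative, not the deployment algorithm): a sensor set C is an identifying code if every region's closed neighborhood intersects C in a nonempty, distinct signature, so the pattern of triggered sensors pinpoints the event's region.

def is_identifying_code(adj, code):
    # adj: {region: set(neighboring regions)}; an event in a region also
    # triggers sensors in its neighbors. code: regions that hold sensors.
    signatures = {}
    for region, nbrs in adj.items():
        sig = frozenset((nbrs | {region}) & code)  # sensors that fire
        if not sig or sig in signatures.values():
            return False  # some region is invisible or ambiguous
        signatures[region] = sig
    return True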
Contributors: Zhou, Chenyang (Author) / Richa, Andrea (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Walkowiak, Krzysztof (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The purpose of our research was to develop recommendations and/or strategies for Company A's data center group in the context of the server CPU chip industry. We used data collected from the International Data Corporation (IDC) that was provided by our team coaches, as well as data that is accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand its server CPU chip product mix to meet new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with its x86 server chips and a 95% market segment share. The cloud industry is dominated by seven companies that Company A calls "The Super 7": Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. The Super 7's substantial growth could also fuel the development of their own design teams and a push toward making their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and key terminology relevant to our project. We narrowed our scope by focusing most on the cloud computing aspect of the server industry. We then researched what Company A has already been doing in the context of cloud computing and what it is currently doing to address the problem. Next, using our market analysis, we identified key areas we think Company A's data center group should focus on. Using the information available to us, we developed the strategies and recommendations that we think will help Company A's Data Center Group position itself well in an extremely fast-growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A Guide to Financial Mathematics is a comprehensive and easy-to-use study guide for students studying for one of the first actuarial exams, Exam FM. While there are many resources available to students studying for these exams, this guide is free to students and offers an approach to the material similar to that presented in class at ASU. The guide is available to students and professors in the new Actuarial Science degree program offered by ASU. There are twelve chapters, including financial calculator tips, detailed notes, examples, and practice exercises. Included at the end of the guide is a list of referenced material.
Contributors: Dougher, Caroline Marie (Author) / Milovanovic, Jelena (Thesis director) / Boggess, May (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
This paper provides evidence through an event study, portfolio simulation, and regression analysis that insider trading, when appropriately aggregated, has predictive power for abnormal risk-adjusted returns on some country and sector exchange traded funds (ETFs). I examine ETFs because of their broad scope and liquidity. ETF markets are relatively efficient and, thus, effects like those I document are a priori unlikely to appear there. My evidence that aggregated insider trading nevertheless predicts abnormal returns in some ETFs suggests that it is likely to have predictive power for financial assets traded in less efficient markets. My analysis depends on specialized insider trading data covering 88 countries, generously provided by 2iQ.
Contributors: Kerker, Mackenzie Alan (Author) / Coles, Jeffrey (Thesis director) / Mcauley, Daniel (Committee member) / Licon, Wendell (Committee member) / Barrett, The Honors College (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor)
Created: 2014-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the fewest permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
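The invariant the post-optimization must preserve is total coverage. As a hedged illustration of that property only (not the thesis's optimizer), the sketch below checks whether a set of permutations covers every length-t subsequence, i.e. contains its symbols in order.

from itertools import permutations

def covers(perm, subseq):
    # True if perm contains the symbols of subseq in the given order.
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(subseq, subseq[1:]))

def covers_all(perm_set, symbols, t):
    # True if every length-t sequence of distinct symbols is covered.
    return all(any(covers(p, s) for p in perm_set)
               for s in permutations(symbols, t))

# e.g. covers_all([(0, 1, 2), (2, 1, 0)], range(3), 2) -> True: the two
# permutations together cover all six ordered pairs.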
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been evaluated only on the merit of their cost savings; by adding the dimension of time, we hope to forecast implementation time from a number of variables. With such a forecast, we can then apply it to an expense project prioritization model that relates time and cost savings, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables to be used in the analysis portion of the paper. Before forming a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) a rules-of-thumb intuitive model and (2) an expense project prioritization tool.
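The prioritization idea reduces to discounting savings that only begin once implementation finishes. The sketch below is a hedged illustration of that present-value comparison; the cash-flow timing convention and the figures are assumptions, not the Company's model.

def project_pv(annual_savings, implementation_years, horizon_years, rate):
    # Present value of annual savings that begin the year after
    # implementation completes and run through the horizon.
    return sum(annual_savings / (1 + rate) ** t
               for t in range(int(implementation_years) + 1,
                              int(horizon_years) + 1))

# e.g. a $100k/yr project that takes 1 year to implement, over a 5-year
# horizon at a 10% discount rate:
# project_pv(100_000, 1, 5, 0.10) -> about $288,170 (savings in years 2-5)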
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Description
Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF’s listed price and the net asset value of the ETF’s underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF’s standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF’s standard deviation of premia when combined with the categorical variable. These findings differ somewhat from the existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF’s standard deviation of premia.
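As a hedged illustration of the headline regression (the column names, toy data, and use of statsmodels are assumptions; the paper's dataset is not shown here): regress the standard deviation of premia on a market category and the time difference from the US.

import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in data: one row per ETF.
df = pd.DataFrame({
    "sd_premium":  [0.12, 0.45, 0.80, 0.10, 0.95, 0.50],
    "category":    ["US", "Developed", "Emerging", "US", "Emerging", "Developed"],
    "tz_offset_h": [0, 6, 12, 0, 13, 8],  # hours from US trading hours
})

# Categorical + discrete regressors, mirroring the paper's two-variable model.
model = smf.ols("sd_premium ~ C(category) + tz_offset_h", data=df).fit()
print(model.rsquared)  # the paper reports >70% explained on real data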
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF’s listed price and the net asset value of the ETF’s underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF’s standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF’s standard deviation of premia. These findings differ somewhat from the existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF’s standard deviation of premia.
Contributors: Henning, Thomas Louis (Co-author) / Zhang, Jingbo (Co-author) / Simonson, Mark (Thesis director) / Licon, Wendell (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This paper investigates whether measures of investor sentiment can be used to predict future total returns of the S&P 500 index. Rolling regressions and other statistical techniques are used to determine which indicators contain the most predictive information and which time horizons' returns are "easiest" to predict in a three-year data set. The five "most predictive" indicators are used to predict 180-calendar-day future returns of the market, and simulated investment of hypothetical accounts is conducted in an independent six-year data set based on the rolling regression future return predictions. Some indicators, most notably the VIX index, appear to contain predictive information that led to outperformance by the accounts that invested based on the rolling regression model's predictions.
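A hedged sketch of the rolling-regression setup described above: refit a linear model of 180-day-forward returns on sentiment indicators over a trailing window, then predict one step out of sample. The window length, column layout, and plain least-squares fit are assumptions, not the paper's exact procedure.

import numpy as np
import pandas as pd

def rolling_forecasts(X, y, window=252):
    # X: DataFrame of sentiment indicators (e.g. a VIX column);
    # y: aligned Series of 180-calendar-day forward total returns.
    preds = {}
    for end in range(window, len(X)):
        Xw = np.column_stack([np.ones(window), X.iloc[end - window:end]])
        beta, *_ = np.linalg.lstsq(Xw, y.iloc[end - window:end].to_numpy(),
                                   rcond=None)
        preds[X.index[end]] = np.r_[1.0, X.iloc[end]] @ beta  # out of sample
    return pd.Series(preds)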
Contributors: Dundas, Matthew William (Author) / Boggess, May (Thesis director) / Budolfson, Arthur (Committee member) / Hedegaard, Esben (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor)
Created: 2013-12
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, which iterates Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, which converts the problem to one of trace minimization, allowing convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return from a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal constructed starting from such an eigenvector will have a greater return.
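As an illustration of the alternating-projections family (an error-reduction-style sketch under the assumed constraints of known Fourier magnitudes plus a real, supported signal; the thesis's exact per-domain manipulations may differ):

import numpy as np

def alternating_projections(fourier_mag, support, iters=200, seed=0):
    # fourier_mag: measured |FFT| of the unknown signal; support: 0/1 mask
    # where the signal may be nonzero. Alternate between enforcing the
    # magnitude constraint in the Fourier domain and the support/realness
    # constraint in the signal domain.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(fourier_mag.shape) * support
    for _ in range(iters):
        X = np.fft.fftn(x)
        X = fourier_mag * np.exp(1j * np.angle(X))  # keep phase, fix magnitude
        x = np.fft.ifftn(X).real * support          # project onto constraints
    return x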
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05