Matching Items (356)

Description

The PPP Loan Program was created by the CARES Act and carried out by the Small Business Administration (SBA) to provide support to small businesses in maintaining their payroll during the Coronavirus pandemic. The program was approved for $350 billion, but this amount was expanded by an additional $320 billion to meet the demand from struggling businesses, since the initial funding was exhausted in under two weeks.

Significant controversy surrounds the program. In December 2020, the Department of Justice reported that 90 individuals had been charged with fraudulent use of funds totaling $250 million. The loans, which were intended for small businesses, were in fact approved for 450 public companies. Furthermore, the methods of approval are shrouded in mystery. In an effort to be transparent, the SBA has released information about all loan recipients, with detailed information for the 661,218 recipients who received a PPP loan in excess of $150,000. These recipients are the central point of this research.

This research sought to answer two primary questions: how did the SBA determine which loans, and therefore which industries, were approved, and did the industries most affected by the pandemic receive the most in PPP loans, as intended by Congress? It was determined that, generally, PPP loans were approved on the basis of employment percentages relative to the individual state. Furthermore, the loans were generally approved fairly with respect to the size of the industry. When adjusted for GDP and employment factors, the loans yielded a clear ranking that prioritized vulnerable industries first.

However, significant questions remain. The effectiveness of the PPP has been hindered by unclear incentives and negative outcomes, characteristic of a government program rushed into service. Furthermore, the data available for regressing and comparing the SBA's approved loans are limited and not representative of small businesses as a whole.

Contributors: Maglanoc, Julian (Author) / Kenchington, David (Thesis director) / Cassidy, Nancy (Committee member) / Department of Finance (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The Covid-19 pandemic has made a significant impact on both the stock market and the global economy. The resulting volatility in stock prices has provided an opportunity to examine the Efficient Market Hypothesis. This study aims to gain insights into the efficiency of markets based on stock price performance in the Covid era. Specifically, it investigates the market's ability to anticipate significant events during the Covid-19 timeline beginning November 1, 2019 and ending March 31, 2021. To examine the efficiency of markets, our team created a Stay-at-Home Portfolio, experiencing economic tailwinds from the Covid lockdowns, and a Pandemic Loser Portfolio, experiencing economic headwinds from the Covid lockdowns. Cumulative returns of each portfolio are benchmarked to the cumulative returns of the S&P 500. The results showed that the Efficient Market Hypothesis is likely to be valid, although a definitive conclusion cannot be made based on the scope of the analysis. Recommendations are given for further research surrounding key events that may be able to draw a more direct conclusion.
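The benchmarking step this abstract describes reduces to compounding daily returns for each basket and comparing the result with the index. A minimal sketch follows, using synthetic prices in place of the study's actual holdings and data:

```python
import numpy as np
import pandas as pd

def cumulative_returns(prices: pd.DataFrame) -> pd.Series:
    """Equal-weighted cumulative return of a basket, compounded daily."""
    daily = prices.pct_change().dropna()   # per-ticker daily returns
    basket = daily.mean(axis=1)            # equal-weight across tickers
    return (1.0 + basket).cumprod() - 1.0

# Synthetic random-walk prices stand in for real data over the study window.
dates = pd.bdate_range("2019-11-01", "2021-03-31")
rng = np.random.default_rng(0)
stay_at_home = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.001, 0.02, (len(dates), 3)), axis=0)),
    index=dates, columns=["A", "B", "C"])
benchmark = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0004, 0.012, (len(dates), 1)), axis=0)),
    index=dates, columns=["SPX"])

excess = cumulative_returns(stay_at_home) - cumulative_returns(benchmark)
print(excess.iloc[-1])   # cumulative out/under-performance vs. the index
```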

Contributors: Brock, Matt Ian (Co-author) / Beneduce, Trevor (Co-author) / Craig, Nicko (Co-author) / Hertzel, Michael (Thesis director) / Mindlin, Jeff (Committee member) / Department of Finance (Contributor) / Economics Program in CLAS (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Music streaming services have affected the music industry from both a financial and legal standpoint. Their current business model affects stakeholders such as artists, users, and investors. These services have recently been scrutinized for their imperfect royalty distribution model, and Covid-19 has made these discussions even more relevant as touring income has come to a halt for musicians and the live entertainment industry.

Under the current per-stream model, it is becoming exceedingly hard for artists to make a living off of streams. This forces artists to tour heavily and to cut corners to create what is essentially "disposable art"; rapidly releasing multiple projects a year has become the norm for many modern artists. This paper examines the licensing framework and royalty payout issues, and proposes a solution.
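For context, the pro-rata payout arithmetic that underlies these per-stream royalty disputes can be sketched in a few lines; the revenue pool and stream counts below are made-up numbers, and real services layer negotiated rates and rights splits on top of this:

```python
def pro_rata_payouts(revenue_pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a revenue pool in proportion to each artist's share of total streams."""
    total = sum(streams.values())
    return {artist: revenue_pool * n / total for artist, n in streams.items()}

# Made-up numbers for illustration only.
payouts = pro_rata_payouts(1_000_000.0, {"artist_a": 9_000_000, "artist_b": 1_000_000})
print(payouts)  # {'artist_a': 900000.0, 'artist_b': 100000.0}
```

One commonly cited criticism of this scheme is that a subscriber's fee flows to the most-streamed artists overall rather than to the artists that subscriber actually played.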

Contributors: Koudssi, Zakaria Corley (Author) / Sadusky, Brian (Thesis director) / Koretz, Lora (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This project developed a business product with a team of CS students.

Contributors: Perri, Cole Thomas (Co-author) / Hernandez, Maximilliano (Co-author) / Schneider, Kaitlin (Co-author) / Call, Andy (Thesis director) / Hunt, Neil (Committee member) / School of Accountancy (Contributor) / Watts College of Public Service & Community Solutions (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The COVID-19 pandemic has radically shifted the workplace and will continue to do so. An increasing percentage of the workforce desires flexible working options and, as such, firms are likely to require less office space going forward. Additionally, the economic downturn caused by the pandemic provides an opportunity for companies to secure favorable rent rates on new lease agreements. This project evaluates and measures Company X's potential cost savings from terminating current leases and downsizing office space in five selected cities. Along with city-specific real estate market research and forecasts, we employ a four-stage model of Company X's real estate negotiation process to analyze whether existing lease agreements in these cities should be renewed or terminated.

Contributors: Hegardt, Brandon Michael (Co-author) / Saker, Logan (Co-author) / Patterson, Jack (Co-author) / Ries, Sarah (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction, used to find a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features might also carry structural requirements, which introduces additional complexity for feature selection.

The sparse learning package (SLEP) provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported: features may be grouped together, hierarchies and overlapping groups may exist among them, and the most relevant groups may need to be selected.

Sparse solutions are not, by themselves, guaranteed to be robust. For the selection to be robust, certain techniques provide theoretical justification of why particular features are selected. Stability selection is one such method: it allows existing sparse learning methods to select a stable set of features for a given training sample. This is done by assigning a probability to each feature: the training data is sub-sampled, a specific sparse learning technique is applied to learn the relevant features, this is repeated a large number of times, and the probability is taken as the fraction of runs in which a feature is selected. Cross-validation is then used to pick the parameter value, over a range of values, that gives the maximum accuracy score.

With such a combination of algorithms, good convergence guarantees, stable feature selection properties, and support for various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and they can be used in a variety of fields to infer relevant elements.

Alzheimer's Disease (AD) is a neurodegenerative disease which gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
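The stability selection loop described above (sub-sample the data, fit a sparse learner, count how often each feature survives) can be sketched as follows; scikit-learn's Lasso stands in for the SLEP solvers, and the 0.8 threshold is an assumed choice:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_rounds=100, frac=0.5, seed=0):
    """Selection probability per feature: fraction of sub-samples
    in which the Lasso gives the feature a nonzero coefficient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # sub-sample
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])          # sparse fit
        counts += model.coef_ != 0                              # tally survivors
    return counts / n_rounds

# Toy data: only the first 3 of 20 features carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=200)
probs = stability_selection(X, y)
print(np.flatnonzero(probs > 0.8))  # stable features, e.g. [0 1 2]
```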
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks, thus facilitating the individual task learning. It is particularly desirable to share domain knowledge among the tasks when there are a number of related tasks but only limited training data is available for each one. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solution efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) automated annotation of Drosophila gene expression pattern images; (2) categorization of Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
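One standard instance of the shared low-dimensional structure idea is least-squares MTL with a nuclear-norm (trace-norm) penalty solved by proximal gradient. The sketch below illustrates that formulation; it is not a reproduction of the dissertation's specific algorithms:

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def mtl_trace_norm(Xs, ys, lam=1.0, eta=1e-3, iters=500):
    """min_W sum_t ||X_t w_t - y_t||^2 + lam * ||W||_*, where column t of W
    is task t's weight vector; the penalty couples tasks through a shared
    low-rank (low-dimensional) structure."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)])
        W = svt(W - eta * G, eta * lam)   # gradient step, then prox step
    return W

# Toy tasks whose true weights share a rank-2 structure.
rng = np.random.default_rng(0)
B = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 5))
Xs = [rng.normal(size=(100, 30)) for _ in range(5)]
ys = [Xs[t] @ B[:, t] + 0.1 * rng.normal(size=100) for t in range(5)]
W = mtl_trace_norm(Xs, ys, lam=5.0)
print(np.linalg.svd(W, compute_uv=False).round(2))  # trailing singular values shrink toward 0
```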
Contributors: Chen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l-1 norm based regularization, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo, dividing them into early and late periods of development within standard stage demarcations. Stage scores help us illuminate global gene activities and changes much better, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
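For the non-overlapping case, group-structured sparsity reduces to block soft-thresholding inside a proximal gradient loop, as sketched below; the overlapping groups treated in this thesis require a more involved proximal step, which is what dedicated solvers such as SLEP provide:

```python
import numpy as np

def prox_group(w, groups, tau):
    """Block soft-thresholding: prox of tau * sum_g ||w_g||_2
    for disjoint groups (overlapping groups need a dedicated solver)."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= tau else (1 - tau / norm) * w[g]
    return out

def group_lasso(X, y, groups, lam=1.0, iters=1000):
    n, d = X.shape
    eta = 1.0 / np.linalg.norm(X, 2) ** 2   # step size = 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(iters):
        w = prox_group(w - eta * X.T @ (X @ w - y), groups, eta * lam)
    return w

# Toy data: features 0-4 form the only informative group.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=200)
groups = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]
w = group_lasso(X, y, groups, lam=5.0)
print([round(float(np.linalg.norm(w[g])), 3) for g in groups])  # only group 0 nonzero
```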
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Nowadays, wireless communications and networks are widely used in our daily lives. One of the most important topics in networking research is the use of optimization tools to improve the utilization of network resources. In this dissertation, we concentrate on optimization for resource-constrained wireless networks and study two fundamental resource-allocation problems: 1) distributed routing optimization and 2) anypath routing optimization. The study of the distributed routing optimization problem is composed of two main thrusts, targeted at understanding distributed routing and resource optimization for multihop wireless networks. The first thrust is dedicated to understanding the impact of full-duplex transmission on wireless network resource optimization. We propose two provably good distributed algorithms to optimize the resources in a full-duplex wireless network. We prove their optimality and also provide network status analysis using dual space information. The second thrust is dedicated to understanding the influence of network entity load constraints on network resource allocation and routing computation. We propose a provably good distributed algorithm to allocate wireless resources. In addition, we propose a new subgradient optimization framework, which can provide fine-grained convergence, optimality, and dual space information at each iteration. This framework can provide a useful theoretical foundation for many networking optimization problems. The study of the anypath routing optimization problem is composed of two main thrusts. The first thrust is dedicated to understanding the computational complexity of multi-constrained anypath routing and designing approximate solutions. We prove that this problem is NP-hard when the number of constraints is larger than one. We present two polynomial-time K-approximation algorithms: one is a centralized algorithm, while the other is a distributed algorithm. For the second thrust, we study directional anypath routing and present a cross-layer design of MAC and routing. For the MAC layer, we present a directional anycast MAC. For the routing layer, we propose two polynomial-time routing algorithms to compute directional anypaths based on two antenna models, and prove their optimality based on the packet delivery ratio metric.
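As a flavor of the dual/subgradient machinery mentioned above, the classic dual decomposition for log-utility rate allocation updates link prices by a projected subgradient while each flow responds locally; this is a textbook sketch, not the dissertation's algorithms:

```python
import numpy as np

def dual_subgradient(routes, cap, steps=2000, step=0.01):
    """Dual decomposition for maximize sum_f log(x_f) s.t. R x <= cap,
    where R[l, f] = 1 if flow f crosses link l. Link prices are updated
    by a projected subgradient on the dual; each flow sets its rate
    locally from the prices along its path (dual space information)."""
    L, F = routes.shape
    price = np.ones(L)
    for _ in range(steps):
        x = 1.0 / (routes.T @ price + 1e-12)                 # per-flow best response
        price = np.maximum(0.0, price + step * (routes @ x - cap))  # price update
    return x, price

# Toy 2-link, 3-flow network: flow 0 uses both links, flows 1-2 one each.
routes = np.array([[1, 1, 0],
                   [1, 0, 1]], dtype=float)
cap = np.array([1.0, 1.0])
x, price = dual_subgradient(routes, cap)
print(x.round(3), price.round(3))  # the two-link flow gets less; prices reflect congestion
```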
Contributors: Fang, Xi (Author) / Xue, Guoliang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ye, Jieping (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient network design. It is shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks is studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), are proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
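A minimal sketch of the region-based largest component size (RBLCS) idea on a unit-disk graph follows; restricting the circular fault region to node-centered disks is a simplifying assumption for illustration, not the dissertation's exact definition:

```python
import numpy as np

def components(alive, adj):
    """Connected component sizes (iterative DFS) among surviving nodes."""
    seen, sizes = set(), []
    for s in alive:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(size)
    return sizes

def rblcs(pos, adj, radius):
    """Worst-case (smallest) largest surviving component over circular
    fault regions, with candidate centers restricted to node locations."""
    worst = len(pos)
    for c in pos:
        failed = {u for u in pos if np.hypot(*(pos[u] - pos[c])) <= radius}
        alive = set(pos) - failed
        worst = min(worst, max(components(alive, adj), default=0))
    return worst

# Toy unit-disk graph: nodes within range 3.0 of each other are linked.
rng = np.random.default_rng(0)
pos = {i: rng.uniform(0, 10, size=2) for i in range(30)}
adj = {i: [j for j in pos if j != i and np.hypot(*(pos[i] - pos[j])) <= 3.0]
       for i in pos}
print(rblcs(pos, adj, radius=2.0))  # largest component size after the worst fault
```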
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013