Matching Items (109)

Description


This survey collects information on participants' beliefs about privacy and security, their general digital knowledge, their demographics, and willingness-to-pay points for deleting information from their social media, to see how an information treatment affects those payment points. The information treatment is meant to make half of the participants think about the deeper ramifications of the information they reveal. The initial hypothesis was that this treatment would make people willing to pay more to remove their information from the web, but the results show a surprising negative correlation with the treatment.

Contributors: Deitrick, Noah Sumner (Author) / Silverman, Daniel (Thesis director) / Kuminoff, Nicolai (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


The PPP Loan Program was created by the CARES Act and carried out by the Small Business Administration (SBA) to provide support to small businesses in maintaining their payroll during the Coronavirus pandemic. The program was initially approved for $350 billion, but this amount was expanded by an additional $320 billion to meet the demand from struggling businesses after the initial funding was exhausted in under two weeks.

Significant controversy surrounds the program. In December 2020, the Department of Justice reported that 90 individuals had been charged with fraudulent use of funds totaling $250 million. The loans, which were intended for small businesses, were also approved for 450 public companies, and the methods of approval remain shrouded in mystery. In an effort to be transparent, the SBA has released information about all loan recipients, with detailed information for the 661,218 recipients who received a PPP loan in excess of $150,000. These recipients are the central point of this research.

This research sought to answer two primary questions: how did the SBA determine which loans, and therefore which industries, were approved, and did the industries most affected by the pandemic receive the most in PPP loans, as Congress intended? It was determined that, generally, PPP loans were approved on the basis of employment percentages relative to the individual state. Furthermore, the loans were generally approved fairly with respect to the size of each industry. When adjusted for GDP and employment factors, the loans yielded a clear ranking that prioritized vulnerable industries first.

However, significant questions remain. The effectiveness of the PPP has been hindered by unclear incentives and negative outcomes, characteristic of a government program that was essentially rushed into service. Furthermore, the available data are limited: the SBA's approved loans that can be regressed and compared are not representative of small businesses overall.
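The GDP- and employment-adjusted ranking described above can be sketched in a few lines. The industry names and dollar figures below are hypothetical placeholders, and the adjustment formula is only one plausible choice, not the thesis's actual method or data:

```python
# Hypothetical industry figures: PPP dollars received, and shares of
# national GDP and employment. All numbers are invented for illustration.
industries = {
    "Accommodation & Food": {"loans": 42e9, "gdp_share": 0.031, "emp_share": 0.09},
    "Construction":         {"loans": 45e9, "gdp_share": 0.043, "emp_share": 0.05},
    "Professional Svcs":    {"loans": 43e9, "gdp_share": 0.077, "emp_share": 0.06},
}

def adjusted_score(row):
    # Loans normalized by the industry's economic footprint: a higher
    # score means more PPP support per unit of GDP/employment share.
    footprint = (row["gdp_share"] + row["emp_share"]) / 2
    return row["loans"] / footprint

ranking = sorted(industries, key=lambda k: adjusted_score(industries[k]),
                 reverse=True)
print(ranking)
```

Raw loan totals alone would rank these three industries almost identically; the footprint adjustment is what separates them.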

Contributors: Maglanoc, Julian (Author) / Kenchington, David (Thesis director) / Cassidy, Nancy (Committee member) / Department of Finance (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


The Covid-19 pandemic has made a significant impact on both the stock market and the global economy. The resulting volatility in stock prices has provided an opportunity to examine the Efficient Market Hypothesis. This study aims to gain insights into the efficiency of markets based on stock price performance in the Covid era. Specifically, it investigates the market's ability to anticipate significant events during the Covid-19 timeline beginning November 1, 2019 and ending March 31, 2021. To examine the efficiency of markets, our team created a Stay-at-Home Portfolio, experiencing economic tailwinds from the Covid lockdowns, and a Pandemic Loser Portfolio, experiencing economic headwinds from the Covid lockdowns. Cumulative returns of each portfolio are benchmarked to the cumulative returns of the S&P 500. The results showed that the Efficient Market Hypothesis is likely to be valid, although a definitive conclusion cannot be made based on the scope of the analysis. There are recommendations for further research surrounding key events that may be able to draw a more direct conclusion.
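Benchmarking a portfolio's cumulative returns against the S&P 500, as described above, reduces to a simple calculation. The price series below are made up for illustration and are not the study's portfolios:

```python
def cumulative_returns(prices):
    """Cumulative return of each observation relative to the first."""
    base = prices[0]
    return [p / base - 1.0 for p in prices]

# Hypothetical closing values: a "Stay-at-Home" basket vs. a benchmark index.
stay_at_home = [100, 104, 110, 123, 131]
benchmark    = [100, 98, 95, 101, 104]

# Excess cumulative return at each date; positive = outperformance.
excess = [a - b for a, b in zip(cumulative_returns(stay_at_home),
                                cumulative_returns(benchmark))]
print(excess)
```

Comparing cumulative (rather than daily) returns keeps the two series on the same footing regardless of when each portfolio's gains arrived.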

Contributors: Brock, Matt Ian (Co-author) / Beneduce, Trevor (Co-author) / Craig, Nicko (Co-author) / Hertzel, Michael (Thesis director) / Mindlin, Jeff (Committee member) / Department of Finance (Contributor) / Economics Program in CLAS (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Hydropower generation is one of the clean renewable energies that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation for their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years; consequently, the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the right timing for rehabilitation is missed, an abrupt long-term outage could occur, followed by additional unnecessary repairs and inefficiencies. Conversely, replacing equipment too early wastes revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, within which only forced-outage faults are used as raw data for the reliability analyses. The mean cumulative repair functions (MCF) of each facility are determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also fitted to characterize equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant.
Two case studies are presented to enhance understanding of the availability of each power plant and to present economic evaluations for modernization. In addition, the equipment in a hydropower plant is categorized into two groups based on reliability to determine modernization timing, and suitable replacement periods are obtained using simulation.
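The nonparametric MCF estimate behind Nelson's graph method can be sketched briefly. The fleet data below are hypothetical, and this simplified version handles only end-of-observation censoring (each system observed from age zero to its current age):

```python
def mean_cumulative_function(failure_times_by_system, observation_ends):
    """Nelson's nonparametric MCF estimate for a fleet of repairable systems.

    failure_times_by_system: list of lists; each inner list holds one
    system's failure (e.g. forced-outage) ages.
    observation_ends: each system's current age (censoring time).
    """
    # Pool all failure ages across the fleet and step through them in order.
    events = sorted(t for system in failure_times_by_system for t in system)
    mcf, curve = 0.0, []
    for t in events:
        # Only systems still under observation at age t are "at risk".
        at_risk = sum(1 for end in observation_ends if end >= t)
        mcf += 1.0 / at_risk          # each failure adds 1/(systems at risk)
        curve.append((t, mcf))
    return curve

# Three hypothetical generators observed to ages 30, 25, and 20 years.
fleet = [[5, 12, 28], [8, 22], [15]]
ages = [30, 25, 20]
for age, m in mean_cumulative_function(fleet, ages):
    print(f"age {age:>2}: MCF = {m:.3f}")
```

Plotting the resulting step curve against age is the "graph" part of the method; a roughly straight line on log-log axes would suggest the power law model mentioned above.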
Contributors: Kwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over a long period. Because most products can last years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal designs for ALT with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed for evaluating candidate designs. Finally, model-checking designs for situations where more than one model is available are discussed.
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

With the increase in computing power and availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within it. Thus, knowledge discovery by machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these areas. We also study variable selection for matched data sets and propose a solution for the case where the matched data are non-linear. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy; the aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are only predictive for a subset of the predictor classes. The Asymmetric Random Forest (ARF) is proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. The Matched Random Forest (MRF) is proposed to find variables that are able to distinguish case from control without the restrictions that exist in linear models; the MRF detects such variables even in the presence of interactions and qualitative variables.
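The asymmetric-loss idea can be illustrated in pure Python with a class-weighted logistic loss. This is only an analogy: the dissertation's aSVM weights the SVM hinge loss, whereas this sketch weights the logistic loss, and the one-dimensional data are invented:

```python
import math

def fit_weighted_logistic(X, y, w_pos=1.0, w_neg=1.0, lr=0.1, epochs=500):
    """Gradient descent on logistic loss with per-class weights.

    Upweighting one class penalizes its misclassification more, shifting
    the decision boundary in its favor -- the core of asymmetric loss.
    """
    n_feat = len(X[0])
    beta = [0.0] * (n_feat + 1)             # intercept + coefficients
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            w = w_pos if yi == 1 else w_neg  # asymmetric penalty
            err = w * (p - yi)
            grad[0] += err
            for j, x in enumerate(xi):
                grad[j + 1] += err * x
        beta = [b - lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

def predict(beta, xi):
    z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
    return 1 if z > 0 else 0

# Toy 1-D data: upweighting the positive class shifts the boundary toward
# the negatives, trading precision on class 0 for recall on class 1.
X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y = [0, 0, 0, 1, 1, 1]
beta = fit_weighted_logistic(X, y, w_pos=5.0, w_neg=1.0)
print([predict(beta, xi) for xi in X])
```

The same weighting principle, applied to the hinge loss of an SVM, raises precision on the favored class at the cost of recall on the other, which matches the aSVM result described above.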
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This dissertation presents methods for addressing research problems that currently can only be solved adequately using Quality Reliability Engineering (QRE) approaches, especially accelerated life testing (ALT) of electronic printed wiring boards, with applications to avionics circuit boards. The methods presented are generally applicable to circuit boards, but the data generated and analyzed here are for high-performance avionics. Aircraft equipment manufacturers typically require a 20-year expected life for avionics equipment, so ALT is the only practical way of producing life estimates. Both thermally and vibration-induced ALT failures are performed and analyzed to resolve industry questions relating to the introduction of lead-free solder products and processes into high-reliability avionics. In Chapter 2, thermal ALT using an industry-standard failure machine implementing the Interconnect Stress Test (IST), which simulates circuit board life data, is compared to real production failure data via likelihood ratio tests to arrive at a mechanical theory. This theory yields a statistically equivalent energy bound such that failure distributions below a specific energy level are considered to come from the same distribution, allowing testers to quantify IST parameter settings prior to life testing. In Chapter 3, vibration ALT comparing tin-lead and lead-free circuit board solder designs uses the likelihood ratio (LR) test to assess both complete failure data and S-N curves, presenting methods for analyzing the data. Failure data are analyzed using regression and two-way analysis of variance (ANOVA) and reconciled with the LR test results, indicating that a costly aging pre-process may be eliminated in certain cases. In Chapter 4, side-by-side tin-lead and lead-free solder black-box designs are life tested under vibration.
Commercial models from strain data do not exist at the low levels associated with life testing and need to be developed; the testing performed and presented here indicates that tin-lead and lead-free solders behave similarly, and that earlier failures due to vibration, such as connector failure modes, will occur before solder interconnect failures.
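A likelihood ratio test of the kind used to compare tin-lead and lead-free failure data can be illustrated with exponential lifetimes. The dissertation's actual models and data differ; the cycle counts below are invented, and the exponential distribution is chosen only because its maximized log-likelihood has a closed form:

```python
import math

def exp_loglik(times):
    """Maximized exponential log-likelihood; the rate MLE is n / sum(t)."""
    n, total = len(times), sum(times)
    rate = n / total
    return n * math.log(rate) - rate * total

def lr_test_common_rate(sample_a, sample_b):
    """LR statistic: 2 * (separate fits - pooled fit).

    Under H0 (one common rate) it is approximately chi-square with 1 df,
    so values above 3.841 reject H0 at the 5% level.
    """
    pooled = exp_loglik(sample_a + sample_b)
    return 2.0 * (exp_loglik(sample_a) + exp_loglik(sample_b) - pooled)

# Hypothetical cycles-to-failure for tin-lead vs. lead-free test coupons.
tin_lead  = [410, 520, 390, 610, 480]
lead_free = [430, 500, 455, 580, 470]
stat = lr_test_common_rate(tin_lead, lead_free)
print(f"LR statistic = {stat:.4f}")
print("reject common rate" if stat > 3.841 else "no evidence of a difference")
```

The same comparison logic carries over to Weibull or lognormal fits: fit each sample separately, fit the pooled sample, and compare twice the log-likelihood difference to the appropriate chi-square quantile.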
Contributors: Juarez, Joseph Moses (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Gel, Esma (Committee member) / Mignolet, Marc (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science, and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning of the relevant patterns. This dissertation proposes TS representations and methods for supervised TS analysis. The approaches combine new representations that handle translations and dilations of patterns with bag-of-features strategies and tree-based ensemble learning. This provides flexibility in handling time-warped patterns in a computationally efficient way. The ensemble learners provide a classification framework that can handle high-dimensional feature spaces, multiple classes, and interactions between features. The proposed representations are useful for classification and interpretation of TS data of varying complexity. The first contribution handles the problem of time warping with a feature-based approach. An interval selection and local feature extraction strategy is proposed to learn a bag-of-features representation. This is distinctly different from common similarity-based time warping, and it allows additional features (such as pattern location) to be easily integrated into the models. The learners can account for temporal information through the recursive partitioning method. The second contribution focuses on the comprehensibility of the models. A new representation is integrated with local feature importance measures from tree-based ensembles to diagnose and interpret the time intervals that are important to the model. Multivariate time series (MTS) are especially challenging because the input consists of a collection of TS, and both features within a TS and interactions between TS can be important to the models.
Another contribution uses a different representation to produce computationally efficient strategies that learn a symbolic representation for MTS. Relationships between the multiple TS, nominal values, and missing values are handled with tree-based learners. Applications such as speech recognition, medical diagnosis, and gesture recognition illustrate the methods. Experimental results show that the TS representations and methods provide better results than competitive methods on a comprehensive collection of benchmark datasets. Moreover, the proposed approaches naturally provide solutions to similarity analysis, predictive pattern discovery, and feature selection.
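An interval-selection and local-feature-extraction step in the spirit described above might look like the sketch below. This is a minimal illustration, not the dissertation's exact representation; the summary statistics (mean, slope, variance) and the random interval scheme are assumptions:

```python
import random

def interval_features(series, n_intervals=4, seed=0):
    """Summarize randomly chosen intervals by mean, slope, and variance.

    Keeping the interval endpoints (i, j) as features preserves pattern
    location, which similarity-based time warping discards.
    """
    rng = random.Random(seed)
    feats = []
    for _ in range(n_intervals):
        i = rng.randrange(0, len(series) - 2)
        j = rng.randrange(i + 2, len(series) + 1)   # at least 2 points
        seg = series[i:j]
        mean = sum(seg) / len(seg)
        slope = (seg[-1] - seg[0]) / (len(seg) - 1)
        var = sum((v - mean) ** 2 for v in seg) / len(seg)
        feats.append((i, j, mean, slope, var))
    return feats

ts = [0, 1, 4, 9, 16, 25, 36, 49]
for f in interval_features(ts):
    print(f)
```

A bag-of-features classifier would pool such tuples across many series and feed them to a tree-based ensemble, which can split on both the interval statistics and the location features.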
Contributors: Baydogan, Mustafa Gokce (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert (Committee member) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

To discover whether Company X's current system of local trucking is the most efficient and cost-effective way to move freight between sites in the Western U.S., we compare the current system to various alternatives to see if there are potential avenues for Company X to create or implement an improved, cost-saving freight movement system.
Contributors: Picone, David (Co-author) / Krueger, Brandon (Co-author) / Harrison, Sarah (Co-author) / Way, Noah (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Supply Chain Management (Contributor) / Department of Finance (Contributor) / Economics Program in CLAS (Contributor) / School of Accountancy (Contributor) / W. P. Carey School of Business (Contributor) / Sandra Day O'Connor College of Law (Contributor)
Created: 2015-05
Description

The NFL is one of the largest and most influential industries in the world. In America there are few companies that have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games were collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was needed, against which every player in the NFL could be compared. Using linear regression by ordinary least squares, the project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff, based on the results of the regression in part one.
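The OLS regression behind such a position-importance model can be sketched via the normal equations. The design matrix, the choice of "rating above average" predictors, and the win totals below are all invented for illustration:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k                          # back substitution
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Columns: intercept, QB rating above average, RB rating above average.
X = [[1, 2.0, 0.5], [1, -1.0, 1.5], [1, 0.5, -0.5], [1, 3.0, 1.0]]
wins = [11.0, 7.0, 8.0, 13.0]
print(ols(X, wins))  # larger coefficient = position matters more to wins
```

With one column per position's rating relative to the league-average benchmark, each fitted coefficient estimates how many extra wins a unit of above-average play at that position is worth.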
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05