Matching Items (3)
Description
The primary objective in time series analysis is forecasting. Raw data often exhibits nonstationary behavior: trends, seasonal cycles, and heteroskedasticity. After the data is transformed to a weakly stationary process, autoregressive moving average (ARMA) models may capture the remaining temporal dynamics to improve forecasting. ARMA estimation can be performed by regressing current values on previous realizations and proxy innovations. This classic paradigm fails when the dynamics are nonlinear; in that case, parametric regime-switching specifications model changes in level, ARMA dynamics, and volatility using a finite number of latent states. If the states can be identified using past endogenous or exogenous information, a threshold autoregressive (TAR) or logistic smooth transition autoregressive (LSTAR) model may reduce complex nonlinear associations to conditionally weakly stationary processes. For ARMA, TAR, and STAR models, order parameters quantify the extent to which past information is associated with the future. Unfortunately, even if model orders are known a priori, over-fitting can lead to sub-optimal forecasting performance. By intentionally overestimating these orders, a linear representation of the full model can be exploited, and Bayesian regularization can be used to achieve sparsity. Global-local shrinkage priors for the AR, MA, and exogenous coefficients are adopted to pull posterior means toward zero without over-shrinking relevant effects. This dissertation introduces, evaluates, and compares Bayesian techniques that automatically perform model selection and coefficient estimation for ARMA, TAR, and STAR models. Multiple Monte Carlo experiments illustrate the accuracy of these methods in recovering the "true" data-generating process, and practical applications demonstrate their efficacy in forecasting.
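To make the over-specification idea concrete, here is a minimal Python sketch. It uses ridge shrinkage as a crude stand-in for the global-local priors described in the abstract, and the simulated AR(2) process, the over-specified order, and the penalty value are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# Deliberately over-specify the AR order, then shrink toward zero.
p = 10
X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])  # lagged values
Y = y[p:]

# Ridge posterior mean (Gaussian prior with precision lam) as a crude
# stand-in for a global-local shrinkage prior such as the horseshoe.
lam = 10.0
phi_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
print(np.round(phi_hat, 2))  # lags 1-2 dominate; spurious higher lags shrink toward 0
```

A full global-local prior would replace the single penalty `lam` with a global scale plus one local scale per coefficient, letting large effects escape shrinkage while noise coefficients collapse to zero.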
Contributors: Giacomazzo, Mario (Author) / Kamarianakis, Yiannis (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Hahn, Richard (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This dissertation contains two research projects: Multiple Change Point Detection in Linear Models and Statistical Inference for Implicit Network Structures. In the first project, a new method is proposed to detect the number and locations of change points in piecewise linear models under stationary Gaussian noise. The method transforms the problem of detecting change points into the detection of local extrema by kernel smoothing and differentiating the data sequence. Change points are detected by computing p-values for all local extrema, using the derived peak-height distributions of smooth Gaussian processes, and then applying the Benjamini-Hochberg procedure to identify significant local extrema. Theoretical results show that the method guarantees asymptotic control of the False Discovery Rate (FDR) and power consistency as the length of the sequence and the sizes of the slope changes and jumps grow large. In addition, compared to traditional change point detection methods based on recursive segmentation, the proposed method tests each candidate local extremum only once, achieving the smallest computational complexity. Numerical studies show that FDR control and power consistency are maintained in non-asymptotic settings. In the second project, identifiability and estimation consistency of the hub model are proved under mild conditions. The hub model is a model-based approach, introduced by Zhao and Weko (2019), for inferring implicit network structures from grouping behavior; it assumes that each group is brought together by one of its members, called the hub. This dissertation generalizes the hub model by introducing a model component that allows hubless groups, in which individual nodes spontaneously appear independent of any other individual. The new model bridges the gap between the hub model and the degenerate case of the mixture model, the Bernoulli product. Furthermore, a penalized likelihood approach is proposed to estimate the set of hubs when it is unknown.
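As a rough illustration of the smooth-then-differentiate idea from the first project, the following Python sketch locates slope changes in a noisy piecewise linear sequence as local extrema of a smoothed second derivative. The signal, bandwidth, and the simple magnitude filter standing in for the peak-height p-values and Benjamini-Hochberg step are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

rng = np.random.default_rng(1)

# Noisy piecewise linear signal with slope changes at t = 300 and t = 700.
t = np.arange(1000, dtype=float)
signal = np.where(t < 300, 0.01 * t,
         np.where(t < 700, 3 - 0.02 * (t - 300), -5 + 0.03 * (t - 700)))
y = signal + rng.normal(scale=0.3, size=t.size)

# Kernel smoothing + differentiation: slope changes become local extrema
# of the smoothed second derivative of the sequence.
bw = 20
d2 = gaussian_filter1d(y, sigma=bw, order=2)
extrema = np.concatenate([argrelextrema(d2, np.greater, order=bw)[0],
                          argrelextrema(d2, np.less, order=bw)[0]])

# Crude magnitude filter in place of the p-value / BH significance step.
keep = extrema[np.abs(d2[extrema]) > 3 * np.median(np.abs(d2))]
print(np.sort(keep))  # surviving candidates cluster near t = 300 and t = 700
```

The actual method replaces the magnitude filter with exact p-values from the peak-height distribution of the smoothed Gaussian noise, which is what yields the FDR guarantee.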
Contributors: He, Zhibing (Author) / Zhao, Yunpeng (Thesis advisor) / Cheng, Dan (Thesis advisor) / Lopes, Hedibert (Committee member) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Nonregular designs are a preferable alternative to regular resolution IV designs because they avoid complete confounding of two-factor interactions. As a result, nonregular designs can estimate and identify a few active two-factor interactions. However, due to the sometimes complex alias structure of nonregular designs, standard screening strategies can fail to identify all active effects. This research discusses two-level nonregular screening designs with orthogonal main effects and, by utilizing knowledge of the alias structure, proposes a design-based model selection process for analyzing nonregular designs.

The Aliased Informed Model Selection (AIMS) strategy is a design-specific approach that is compared to three generic model selection methods: stepwise regression, the least absolute shrinkage and selection operator (LASSO), and the Dantzig selector. The AIMS approach substantially increases the power to detect active main effects and two-factor interactions relative to these generic methodologies. This research identifies design-specific model spaces: sets of models that obey strong heredity, are all estimable, and exhibit no model confounding. These spaces are then used in the AIMS method, along with design-specific aliasing rules, for model selection decisions. Model spaces and alias rules are identified for three designs: the 16-run no-confounding 6-, 7-, and 8-factor designs. The designs are demonstrated with several examples, as well as simulations, to show the superiority of AIMS in model selection.
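The following Python sketch illustrates the general idea of a strong-heredity model-space search, not the AIMS procedure itself: a two-factor interaction enters a candidate model only when both parent main effects are present, each model is fit by least squares, and an information criterion picks the winner. The random design, simulated responses, and size limits are illustrative assumptions; AIMS would use an actual no-confounding design and its alias rules.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# A 16-run, 6-factor, two-level design; random +/-1 levels stand in here
# for a real no-confounding design.
n_runs, n_factors = 16, 6
D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
y = 2.0 * D[:, 0] - 1.5 * D[:, 2] + 1.0 * D[:, 0] * D[:, 2] \
    + rng.normal(scale=0.3, size=n_runs)

def model_matrix(mains, fis):
    """Intercept + selected main effects + selected two-factor interactions."""
    cols = [np.ones(n_runs)]
    cols += [D[:, i] for i in mains]
    cols += [D[:, i] * D[:, j] for i, j in fis]
    return np.column_stack(cols)

def aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n_runs * np.log(rss / n_runs) + 2 * X.shape[1]

# Enumerate strong-heredity models: a 2FI is allowed only if both parents are in.
best = None
for k in range(1, 4):                                  # up to 3 main effects
    for mains in combinations(range(n_factors), k):
        for j in range(3):                             # up to 2 interactions
            for fis in combinations(list(combinations(mains, 2)), j):
                score = aic(model_matrix(mains, fis), y)
                if best is None or score < best[0]:
                    best = (score, mains, fis)
print(best)  # expected: mains (0, 2) with interaction (0, 2)
```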

A final piece of the research provides a method for augmenting no-confounding designs based on model spaces and maximum average D-efficiency. Several augmented designs are provided for different situations. A final simulation with the augmented designs shows strong results for adding four additional runs when time and resources permit.
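A minimal sketch of the augmentation criterion, maximum average D-efficiency over a model space, might look as follows in Python. The candidate run pool, the toy model space of main-effect models, and the exhaustive search over four-run subsets are illustrative assumptions rather than the dissertation's construction.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

def d_efficiency(X):
    """D-efficiency |X'X|^(1/p) / n for an n x p model matrix."""
    n, p = X.shape
    det = np.linalg.det(X.T @ X)
    return det ** (1.0 / p) / n if det > 0 else 0.0

# Existing 16-run design and a pool of candidate follow-up runs (illustrative).
D = rng.choice([-1.0, 1.0], size=(16, 6))
pool = rng.choice([-1.0, 1.0], size=(12, 6))

# A toy model space: each model is a set of main-effect columns.
model_space = [(0, 1, 2), (0, 2, 4), (1, 3, 5)]

def avg_deff(design):
    Xs = [np.column_stack([np.ones(len(design))] + [design[:, i] for i in m])
          for m in model_space]
    return np.mean([d_efficiency(X) for X in Xs])

# Pick the 4 candidate runs whose addition maximizes average D-efficiency.
best = max(combinations(range(len(pool)), 4),
           key=lambda idx: avg_deff(np.vstack([D, pool[list(idx)]])))
print(best, avg_deff(np.vstack([D, pool[list(best)]])))
```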
Contributors: Metcalfe, Carly E. (Author) / Montgomery, Douglas C. (Thesis advisor) / Jones, Bradley (Committee member) / Pan, Rong (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020