Matching Items (11)
Description
The primary objective in time series analysis is forecasting. Raw data often exhibit nonstationary behavior: trends, seasonal cycles, and heteroskedasticity. After the data are transformed to a weakly stationary process, autoregressive moving average (ARMA) models may capture the remaining temporal dynamics to improve forecasting. ARMA models can be estimated by regressing current values on previous realizations and proxy innovations. This classic paradigm fails when the dynamics are nonlinear; in that case, parametric regime-switching specifications model changes in level, ARMA dynamics, and volatility using a finite number of latent states. If the states can be identified from past endogenous or exogenous information, a threshold autoregressive (TAR) or logistic smooth transition autoregressive (LSTAR) model may reduce complex nonlinear associations to conditionally weakly stationary processes. For ARMA, TAR, and STAR models, order parameters quantify the extent to which past information is associated with the future. Unfortunately, even if the model orders are known a priori, over-fitting can lead to sub-optimal forecasting performance. By intentionally overestimating these orders, a linear representation of the full model can be exploited and Bayesian regularization used to achieve sparsity. Global-local shrinkage priors for AR, MA, and exogenous coefficients are adopted to pull posterior means toward zero without over-shrinking relevant effects. This dissertation introduces, evaluates, and compares Bayesian techniques that automatically perform model selection and coefficient estimation for ARMA, TAR, and STAR models. Multiple Monte Carlo experiments illustrate the accuracy of these methods in recovering the "true" data generating process, and practical applications demonstrate their efficacy in forecasting.
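
As a rough illustration of the over-specification idea described above, the following minimal sketch (assumed for illustration, not taken from the dissertation) builds a lagged design matrix for an intentionally over-ordered AR model and uses an L1 penalty from scikit-learn as a simple frequentist stand-in for the Bayesian global-local shrinkage priors; lags beyond the true order should shrink toward zero.

```python
# Sketch: over-specify an AR model and let shrinkage prune irrelevant lags.
# The dissertation uses Bayesian global-local priors; here an L1 penalty
# (scikit-learn's Lasso) stands in as a simple frequentist analogue.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate a stationary AR(2) process: x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Intentionally over-estimated order
p_max = 10
X = np.column_stack([x[p_max - k: n - k] for k in range(1, p_max + 1)])
y = x[p_max:]

fit = Lasso(alpha=0.05).fit(X, y)
print(np.round(fit.coef_, 2))   # coefficients beyond lag 2 should shrink toward zero
```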
Contributors: Giacomazzo, Mario (Author) / Kamarianakis, Yiannis (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Hahn, Richard (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Statistical model selection using the Akaike Information Criterion (AIC) and similar criteria is a useful tool for comparing multiple and non-nested models without the specification of a null model, which has made it increasingly popular in the natural and social sciences. Despite their common usage, model selection methods are not driven by a notion of statistical confidence, so their results entail an unknown degree of uncertainty. This paper introduces a general framework which extends notions of Type-I and Type-II error to model selection. A theoretical method for controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions is given, along with a bootstrap approach that approximates the procedure. Results are presented for simulated experiments using normal distributions, random walk models, nested linear regression, and non-nested regression including nonlinear models. Tests are performed using an R package developed by the author, which will be made publicly available upon journal publication of the research results.
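
The bootstrap idea sketched in the abstract can be illustrated with a small assumed example (normal versus Laplace likelihoods, not the author's R package): the difference-of-AIC statistic is recomputed on datasets simulated from the fitted null model to approximate its null distribution.

```python
# Sketch: parametric bootstrap for the difference-of-AIC statistic when
# comparing two non-nested models (normal vs. Laplace). This mimics the
# general bootstrap idea described above, not the author's R package.
import numpy as np
from scipy import stats

def aic_diff(y):
    """AIC(normal) - AIC(Laplace); both models have 2 parameters."""
    mu, sigma = stats.norm.fit(y)
    loc, scale = stats.laplace.fit(y)
    aic_norm = 2 * 2 - 2 * stats.norm.logpdf(y, mu, sigma).sum()
    aic_lap = 2 * 2 - 2 * stats.laplace.logpdf(y, loc, scale).sum()
    return aic_norm - aic_lap

rng = np.random.default_rng(1)
y = rng.normal(size=200)                    # data actually generated by the "null" model
observed = aic_diff(y)

# Bootstrap the statistic under the fitted null (normal) model
mu, sigma = stats.norm.fit(y)
boot = np.array([aic_diff(rng.normal(mu, sigma, size=y.size)) for _ in range(500)])

# One-sided p-value: how often the bootstrap favors Laplace at least as strongly
p_value = np.mean(boot >= observed)
print(observed, p_value)
```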
Contributors: Cullan, Michael J (Author) / Sterner, Beckett (Thesis advisor) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This dissertation comprises two projects: (i) Multiple testing of local maxima for detection of peaks and change points with non-stationary noise, and (ii) Height distributions of critical points of smooth isotropic Gaussian fields: computations, simulations and asymptotics. The first project introduces a topological multiple testing method for one-dimensional domains to detect signals in the presence of non-stationary Gaussian noise. The approach conducts tests at local maxima under two observation conditions: (i) the noise is smooth with unit variance, and (ii) the noise is not smooth, in which case kernel smoothing is applied to increase the signal-to-noise ratio (SNR). The smoothed signals are then standardized so that the noise of the new sequence has unit variance, making it possible to calculate p-values for all local maxima using random field theory. The true signals are assumed to be unimodal with finite support, and the non-stationary Gaussian noise can be observed repeatedly. The algorithm introduced in this work demonstrates asymptotic strong control of the False Discovery Rate (FDR) and power consistency as the number of sequence repetitions and the signal strength increase. Simulations indicate that FDR levels can also be controlled under non-asymptotic conditions with finite repetitions. Applying the algorithm to change point detection also guarantees FDR control and power consistency. The second project investigates the explicit and asymptotic height densities of critical points of smooth isotropic Gaussian random fields on both Euclidean space and spheres. The formulae are based on characterizing the distribution of the Hessian of the Gaussian field using Gaussian orthogonally invariant (GOI) matrices and Gaussian orthogonal ensemble (GOE) matrices, the latter being a special case of the former. However, as the dimension increases, calculating the explicit formulae becomes computationally challenging, so the project includes two simulation methods for these distributions. Additionally, asymptotic distributions are obtained by utilizing the asymptotic distribution of the eigenvalues (excluding the maximum eigenvalue) of the GOE matrix for large dimensions; for the maximum eigenvalue, the Tracy-Widom distribution is used. Simulation results demonstrate that the asymptotic distribution closely approximates the true distribution when the dimension N is sufficiently large.
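
A minimal sketch of the first project's pipeline, under assumed simplifications, is given below: the sequence is kernel-smoothed, standardized, and tested at its local maxima, with Benjamini-Hochberg used for FDR control. The plain Gaussian tail probability used here is only a stand-in for the peak-height p-values that random field theory actually provides.

```python
# Sketch: test local maxima of a smoothed, standardized 1-D sequence and
# control FDR with Benjamini-Hochberg. The actual method uses the peak-height
# distribution from random field theory; the plain normal tail below is only
# an illustrative stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2000
signal = np.zeros(n)
signal[480:520] += 3.0 * np.hanning(40)      # one unimodal bump with finite support
y = signal + rng.normal(size=n)              # non-smooth noise, unit variance

# Smooth to raise the SNR, then standardize so the noise variance is ~1 again
bw = 5
ys = gaussian_filter1d(y, bw)
noise_sd = np.sqrt(1.0 / (2 * np.sqrt(np.pi) * bw))  # sd of Gaussian-smoothed white noise
z = ys / noise_sd

peaks, _ = find_peaks(z)
pvals = norm.sf(z[peaks])                    # upper-tail p-value at each local maximum

# Benjamini-Hochberg at level q
q = 0.05
order = np.argsort(pvals)
m = len(pvals)
passed = pvals[order] <= q * (np.arange(1, m + 1) / m)
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
detected = peaks[order[:k]]
print(detected)                              # should sit near the true bump around index 500
```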
Contributors: Gu, Shuang (Author) / Cheng, Dan (Thesis advisor) / Lopes, Hedibert (Committee member) / Fricks, John (Committee member) / Lan, Shiwei (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation covers several topics in machine learning and causal inference. First, the question of “feature selection,” a common byproduct of regularized machine learning methods, is investigated theoretically in the context of treatment effect estimation. This involves a detailed review and extension of frameworks for estimating causal effects, together with an in-depth theoretical study. Next, various computational approaches to estimating causal effects with machine learning methods are compared against these theoretical desiderata. Several improvements to current methods for causal machine learning are identified, and compelling angles for further study are pinpointed. Finally, a common method for “explaining” predictions of machine learning algorithms, SHAP, is evaluated critically through a statistical lens.
Contributors: Herren, Andrew (Author) / Hahn, P Richard (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lopes, Hedibert (Committee member) / McCulloch, Robert (Committee member) / Zhou, Shuang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Functional brain imaging experiments are widely conducted in many fields for studying the underlying brain activity in response to mental stimuli. For such experiments, it is crucial to select a good sequence of mental stimuli that allows researchers to collect informative data for making precise and valid statistical inferences at minimum cost. In contrast to most existing studies, the aim of this study is to obtain optimal designs for brain mapping technology with an ultra-high temporal resolution with respect to some common statistical optimality criteria. The first topic of this work is finding optimal designs when the primary interest is in estimating the Hemodynamic Response Function (HRF), a function of time describing the effect of a mental stimulus on the brain. A major challenge here is that the design matrix of the statistical model is greatly enlarged; as a result, it is very difficult, if not infeasible, to compute and compare the statistical efficiencies of competing designs. To tackle this issue, an efficient approach built on subsampling the design matrix, together with an efficient computer algorithm, is proposed. Analytical and simulation results demonstrate that the proposed approach can outperform existing methods in terms of computing time and the quality of the obtained designs. The second topic of this work is finding optimal designs when another popularly used set of basis functions is considered for modeling the HRF, e.g., to detect brain activations. Although the statistical model for analyzing the data remains linear, the parametric functions of interest under this setting are often nonlinear, so the quality of the design depends on the true values of some unknown parameters. To address this issue, the maximin approach is considered to identify designs that maximize the relative efficiencies over the parameter space. As shown in the case studies, these maximin designs yield high performance for detecting brain activation compared to the traditional designs widely used in practice.
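
The following assumed sketch shows how a 0/1 stimulus sequence induces an FIR design matrix for HRF estimation and how a D-criterion value could be used to compare two candidate sequences; the sizes and sequences are illustrative, and this is not the subsampling or maximin algorithm developed in the dissertation.

```python
# Sketch: build the FIR design matrix for HRF estimation from a 0/1 stimulus
# sequence and score candidate sequences with a D-criterion (log-determinant
# of the information matrix). Sizes and sequences are illustrative only.
import numpy as np

def fir_design(stim, n_hrf):
    """Columns are lagged copies of the stimulus indicator (FIR basis for the HRF)."""
    n = len(stim)
    X = np.zeros((n, n_hrf))
    for lag in range(n_hrf):
        X[lag:, lag] = stim[: n - lag]
    return X

def d_value(X):
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(3)
n_scans, n_hrf = 240, 16                     # illustrative sizes

random_seq = rng.integers(0, 2, n_scans)     # random on/off stimulus sequence
blocked_seq = np.tile(np.r_[np.ones(20), np.zeros(20)], n_scans // 40)

print("random :", d_value(fir_design(random_seq, n_hrf)))
print("blocked:", d_value(fir_design(blocked_seq, n_hrf)))
```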
Contributors: Alghamdi, Reem (Author) / Kao, Ming-Hung (Thesis advisor) / Fricks, John (Committee member) / Pan, Rong (Committee member) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Spatial regression is one of the central topics in spatial statistics. Depending on the goal, interpretation or prediction, spatial regression models can be classified into two categories: linear mixed regression models and nonlinear regression models. This dissertation explored these models and their real-world applications, and new methods and models were proposed to overcome challenges that arise in practice. There are three major parts in the dissertation.

In the first part, nonlinear regression models were embedded into a multistage workflow to predict the spatial abundance of reef fish species in the Gulf of Mexico. There were two challenges: zero-inflated data and out-of-sample prediction. The methods and models in the workflow could effectively handle the zero-inflated sampling data without strong assumptions, and three strategies were proposed to solve the out-of-sample prediction problem. The results and discussion showed that the nonlinear prediction had the advantages of high accuracy, low bias, and good performance across multiple resolutions.
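
One common template for zero-inflated abundance data is a hurdle-style two-part model; the sketch below is an assumption for illustration, not necessarily the exact multistage workflow used in the dissertation. It pairs a nonlinear presence/absence classifier with a nonlinear regressor for positive abundance and multiplies their predictions.

```python
# Sketch: a hurdle-style two-part model for zero-inflated abundance data --
# a nonlinear classifier for presence/absence and a nonlinear regressor for
# abundance given presence. This is one common template, not necessarily the
# dissertation's exact workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(4)
n = 1500
X = rng.uniform(size=(n, 2))                       # e.g., two spatial/environmental covariates
presence = rng.random(n) < 1 / (1 + np.exp(-(4 * X[:, 0] - 2)))
abundance = np.where(presence, np.exp(1 + 2 * X[:, 1] + rng.normal(0, 0.3, n)), 0.0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, presence)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    X[presence], abundance[presence]
)

# Predicted abundance = P(presence) * E[abundance | presence]
X_new = rng.uniform(size=(5, 2))
pred = clf.predict_proba(X_new)[:, 1] * reg.predict(X_new)
print(np.round(pred, 2))
```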

In the second part, a two-stage spatial regression model was proposed for analyzing soil carbon stock (SOC) data. The first stage used a spatial linear mixed model to capture the linear and stationary effects; the second stage used a generalized additive model to explain the nonlinear and nonstationary effects. The results illustrated that the two-stage model had good interpretability for understanding the effects of covariates while keeping prediction accuracy competitive with popular machine learning models such as random forests, XGBoost, and support vector machines.
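
The two-stage structure can be illustrated with a minimal sketch: a linear fit for the stationary covariate effects, followed by a nonparametric smoother over spatial coordinates fitted to the residuals. A k-nearest-neighbor smoother is used here purely as a stand-in for the spatial linear mixed model and generalized additive model described above.

```python
# Sketch of the two-stage idea: a linear fit for the stationary covariate
# effects, then a nonparametric smoother over coordinates for what is left.
# A k-NN smoother stands in for the spatial linear mixed model / GAM pair
# actually used in the dissertation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
n = 1000
coords = rng.uniform(size=(n, 2))                      # spatial locations
covars = rng.normal(size=(n, 3))                       # e.g., soil covariates
y = covars @ np.array([1.0, -0.5, 0.2]) \
    + np.sin(4 * coords[:, 0]) * np.cos(4 * coords[:, 1]) \
    + rng.normal(0, 0.2, n)

# Stage 1: linear (stationary) effects of the covariates
stage1 = LinearRegression().fit(covars, y)
resid = y - stage1.predict(covars)

# Stage 2: nonlinear, nonstationary spatial surface fitted to the residuals
stage2 = KNeighborsRegressor(n_neighbors=25).fit(coords, resid)

y_hat = stage1.predict(covars) + stage2.predict(coords)
print("in-sample RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```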

A new nonlinear regression model, Gaussian process BART (Bayesian additive regression tree), was proposed in the third part. Combining the advantages of BART and Gaussian processes, the model can capture the nonlinear effects of both observed and latent covariates. To develop the model, the traditional BART was first generalized to accommodate correlated errors. Then, the failure of likelihood-based Markov chain Monte Carlo (MCMC) for parameter estimation was discussed. Based on the idea of analysis of variation, two remedies, back comparing and tuning range, were proposed to tackle this failure. Finally, the effectiveness of the new model was examined through experiments on both simulated and real data.
Contributors: Lu, Xuetao (Author) / McCulloch, Robert (Thesis advisor) / Hahn, Paul (Committee member) / Lan, Shiwei (Committee member) / Zhou, Shuang (Committee member) / Saul, Steven (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In this dissertation two research questions in the field of applied experimental design were explored. First, methods for augmenting the three-level screening designs called Definitive Screening Designs (DSDs) were investigated. Second, schemes for strategic subdata selection for nonparametric predictive modeling with big data were developed.

Under sparsity, the structure of DSDs can allow for the screening and optimization of a system in one step, but in non-sparse situations estimation of second-order models requires augmentation of the DSD. In this work, augmentation strategies for DSDs were considered, given the assumption that the correct form of the model for the response of interest is quadratic. Series of augmented designs were constructed and explored, and power calculations, model-robustness criteria, model-discrimination criteria, and simulation study results were used to identify the number of augmented runs necessary for (1) effectively identifying active model effects, and (2) precisely predicting a response of interest. When the goal is identification of active effects, it is shown that supersaturated designs are sufficient; when the goal is prediction, it is shown that little is gained by augmenting beyond the design that is saturated for the full quadratic model. Surprisingly, augmentation strategies based on the I-optimality criterion do not lead to better predictions than strategies based on the D-optimality criterion.

Computational limitations can render standard statistical methods infeasible in the face of massive datasets, necessitating subsampling strategies. In the big data context, the primary objective is often prediction, but the correct form of the model for the response of interest is likely unknown. Here, two new methods of subdata selection were proposed. The first is based on clustering, the second is based on space-filling designs, and both are free from model assumptions. The performance of the proposed methods was explored visually via low-dimensional simulated examples, real data applications, and large simulation studies. In all cases the proposed methods were compared to existing, widely used subdata selection methods. The conditions under which the proposed methods provide advantages over standard subdata selection strategies were identified.
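
One plausible, model-free reading of the clustering-based scheme is sketched below (an illustrative assumption, not the exact selection rule proposed in the dissertation): cluster the full predictor matrix and keep the observation nearest each cluster center as the subdata.

```python
# Sketch: clustering-based subdata selection -- cluster the full predictor
# matrix and keep the observation closest to each centroid. This is one
# plausible, model-free reading of the idea; the dissertation's exact
# selection rules may differ.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
N, d, k = 20_000, 5, 200                    # full data size and subdata size
X = rng.normal(size=(N, d))

km = KMeans(n_clusters=k, n_init=3, random_state=0).fit(X)

# Index of the point nearest each cluster center
sub_idx = np.empty(k, dtype=int)
for j in range(k):
    members = np.flatnonzero(km.labels_ == j)
    dists = np.linalg.norm(X[members] - km.cluster_centers_[j], axis=1)
    sub_idx[j] = members[np.argmin(dists)]

X_sub = X[sub_idx]                          # model-free subdata for any downstream fit
print(X_sub.shape)
```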
Contributors: Nachtsheim, Abigael (Author) / Stufken, John (Thesis advisor) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Acoustic emission (AE) signals have been widely employed for tracking material properties and structural characteristics. In this study, the aim is to analyze the AE signals gathered during a scanning probe lithography process in order to classify the known microstructure types and discover unknown surface microstructures/anomalies. To achieve this, a Hidden Markov Model is developed that accounts for the temporal dependency of the high-resolution AE data. Furthermore, the posterior classification probability and the negative likelihood score are computed for microstructure classification and discovery. Subsequently, a diagnostic procedure is presented to identify the dominant AE frequencies used to track the microstructural characteristics. In addition, machine learning methods such as KNN, Naive Bayes, and Logistic Regression classifiers are applied for comparison. Finally, the proposed approach is applied to identify the surface microstructures of additively manufactured Ti-6Al-4V; the results show that it not only achieves high classification accuracy (e.g., more than 90%) but also correctly identifies the microstructural anomalies that may be subject to further investigation to discover new material phases/properties.
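
A compact, assumed illustration of the classification-and-discovery logic follows: each known microstructure class is represented by a Gaussian hidden Markov model, a new AE feature sequence is scored with the forward algorithm, the class with the highest likelihood is chosen, and a sequence scoring poorly under every class is flagged as a potential anomaly. The parameters below are made up for illustration, not fitted to real AE data.

```python
# Sketch: classify an AE feature sequence with per-class Gaussian HMMs via the
# forward algorithm, and flag an anomaly when every class scores poorly.
# The HMM parameters below are illustrative, not fitted to real AE data.
import numpy as np
from scipy.stats import norm

def forward_loglik(y, pi, A, means, sds):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM."""
    log_b = norm.logpdf(y[:, None], means, sds)        # T x K emission log-probs
    alpha = np.log(pi) + log_b[0]
    for t in range(1, len(y)):
        m = alpha.max()                                # log-sum-exp over previous states
        alpha = m + np.log(np.exp(alpha - m) @ A) + log_b[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Two "known microstructure" HMMs with different emission levels
hmm_a = dict(pi=np.array([0.5, 0.5]), A=np.array([[0.9, 0.1], [0.1, 0.9]]),
             means=np.array([0.0, 1.0]), sds=np.array([0.3, 0.3]))
hmm_b = dict(pi=np.array([0.5, 0.5]), A=np.array([[0.8, 0.2], [0.2, 0.8]]),
             means=np.array([2.0, 3.0]), sds=np.array([0.3, 0.3]))

rng = np.random.default_rng(7)
y_new = rng.normal(2.0, 0.3, size=200)                 # sequence resembling class B

scores = {name: forward_loglik(y_new, **p) for name, p in [("A", hmm_a), ("B", hmm_b)]}
best = max(scores, key=scores.get)
threshold = -400.0                                     # illustrative anomaly cutoff
print(scores, "anomaly" if scores[best] < threshold else f"class {best}")
```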
Contributors: Sun, Huifeng (Author) / Yan, Hao (Thesis advisor) / Fricks, John (Thesis advisor) / Cheng, Dan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Inside cells, axonal and dendritic transport by motor proteins is a process that is responsible for supplying cargo, such as vesicles and organelles, to support neuronal function. Motor proteins achieve transport through a cycle of chemical and mechanical processes. Particle tracking experiments are used to study this intracellular cargo transport by recording multi-dimensional, discrete cargo position trajectories over time. However, due to experimental limitations, much of the mechanochemical process cannot be directly observed, making mathematical modeling and statistical inference essential tools for identifying the underlying mechanisms. The cargo movement during transport is modeled using a switching stochastic differential equation framework that involves classification into one of three proposed hidden regimes, each characterized by different levels of velocity and stochasticity. The equations are presented as a state-space model with Markovian properties. Through a stochastic expectation-maximization algorithm, statistical inference can be made based on the observed trajectory: regime and particle location predictions are calculated through an auxiliary particle filter and particle smoother, and, based on these predictions, parameters are estimated through maximum likelihood. Diagnostics are proposed that assess model performance and can therefore also serve as model selection criteria. Model selection is used to find the most accurate regime models and the optimal number of regimes for a given motor-cargo system. A method for incorporating a second positional dimension is also introduced. These methods are tested on both simulated data and different types of experimental data.
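
The filtering step can be illustrated with a simplified, assumed two-regime example: a bootstrap particle filter tracks both the hidden regime and the cargo position for a discretized switching stochastic differential equation. The dissertation itself uses three regimes, an auxiliary particle filter and smoother, and stochastic EM; this sketch covers only a basic filter.

```python
# Sketch: a bootstrap particle filter for a two-regime switching model in
# which drift and diffusion depend on a hidden Markov regime. The dissertation
# uses an auxiliary particle filter / smoother inside stochastic EM with three
# regimes; this minimal version only illustrates the filtering step.
import numpy as np

rng = np.random.default_rng(8)

dt, T = 0.1, 200
drift = np.array([0.0, 1.0])                 # regime-specific velocity
diff = np.array([0.05, 0.2])                 # regime-specific diffusion
P = np.array([[0.95, 0.05], [0.05, 0.95]])   # regime transition matrix
obs_sd = 0.05

# Simulate a trajectory
s = np.zeros(T, dtype=int)
x = np.zeros(T)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    x[t] = x[t - 1] + drift[s[t]] * dt + diff[s[t]] * np.sqrt(dt) * rng.normal()
y = x + obs_sd * rng.normal(size=T)

# Bootstrap particle filter over (position, regime)
Np = 2000
px = np.zeros(Np)                            # particle positions
ps = rng.choice(2, size=Np)                  # particle regimes
regime_prob = np.zeros((T, 2))
for t in range(1, T):
    ps = (rng.random(Np) < P[ps, 1]).astype(int)          # propagate regimes
    px = px + drift[ps] * dt + diff[ps] * np.sqrt(dt) * rng.normal(size=Np)
    logw = -0.5 * ((y[t] - px) / obs_sd) ** 2              # Gaussian observation weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)                     # multinomial resampling
    px, ps = px[idx], ps[idx]
    regime_prob[t] = np.bincount(ps, minlength=2) / Np

print(np.mean((regime_prob[1:, 1] > 0.5) == (s[1:] == 1)))  # filtered regime recovery rate
```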
Contributors: Crow, Lauren (Author) / Fricks, John (Thesis advisor) / McKinley, Scott (Committee member) / Hahn, Paul R (Committee member) / Reiser, Mark (Committee member) / Cheng, Dan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Tracking disease cases is an essential task in public health; however, tracking the number of cases of a disease may be difficult because not every infection is recorded by public health authorities. Notably, this happens with whole-country measles case reports, even in countries with robust registration systems. Eilertson et al. (2019) propose using a state-space model combined with maximum likelihood methods for estimating measles transmission. Here, a Bayesian approach that uses particle Markov chain Monte Carlo (pMCMC) is proposed to estimate the parameters of the non-linear state-space model developed in Eilertson et al. (2019) and similar previous studies. This dissertation illustrates the performance of this approach by calculating posterior estimates of the model parameters and predictions of the unobserved states in simulations and case studies. Iterated filtering (IF2) is also used as a supporting method to verify the Bayesian estimation and to inform the selection of prior distributions. In the second half of the thesis, a birth-death process is proposed to model the unobserved population size of a disease vector, in order to study the effect of the vector population size on a second, affected population. The second population follows a non-homogeneous Poisson process when conditioned on the vector process, with a transition rate given by a scaled version of the vector population. The observation model also captures a potential threshold event in which the host species population size surpasses a certain level, yielding a higher transmission rate. A maximum likelihood procedure is developed for this model, which combines particle filtering with the minorize-maximization (MM) algorithm and extends the work of Crawford et al. (2014).
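
A toy, assumed example of the pMCMC mechanics is sketched below: a bootstrap particle filter provides an unbiased likelihood estimate for a latent AR(1) log-intensity with Poisson counts, and a Metropolis-Hastings step accepts or rejects proposals based on that noisy estimate. It is not the measles model of Eilertson et al. (2019).

```python
# Sketch: particle marginal Metropolis-Hastings (pMCMC) for a toy state-space
# model -- a latent AR(1) log-intensity with Poisson counts. This only shows
# the mechanics (a particle-filter likelihood estimate inside an MH step);
# it is not the measles model of Eilertson et al. (2019).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(9)

T, phi_true, sigma = 100, 0.8, 0.3
z = np.zeros(T)
for t in range(1, T):
    z[t] = phi_true * z[t - 1] + sigma * rng.normal()
y = rng.poisson(np.exp(1.5 + z))                       # observed counts

def pf_loglik(phi, y, n_part=500):
    """Bootstrap particle filter estimate of the log-likelihood given phi."""
    z_p = np.zeros(n_part)
    ll = 0.0
    for t in range(1, len(y)):
        z_p = phi * z_p + sigma * rng.normal(size=n_part)
        w = poisson.pmf(y[t], np.exp(1.5 + z_p))
        ll += np.log(w.mean() + 1e-300)
        w = w / (w.sum() + 1e-300)
        z_p = z_p[rng.choice(n_part, size=n_part, p=w)]
    return ll

# Particle marginal MH over phi with a flat prior on (-1, 1)
phi, ll = 0.5, pf_loglik(0.5, y)
draws = []
for _ in range(300):
    prop = phi + 0.1 * rng.normal()
    if abs(prop) < 1:
        ll_prop = pf_loglik(prop, y)
        if np.log(rng.random()) < ll_prop - ll:        # accept on the noisy estimate
            phi, ll = prop, ll_prop
    draws.append(phi)

print("posterior mean of phi ~", np.round(np.mean(draws[100:]), 2))
```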
Contributors: Martinez Rivera, Wilmer Osvaldo (Author) / Fricks, John (Thesis advisor) / Reiser, Mark (Committee member) / Zhou, Shuang (Committee member) / Cheng, Dan (Committee member) / Lan, Shiwei (Committee member) / Arizona State University (Publisher)
Created: 2022