Matching Items (19)
156576-Thumbnail Image.png
Description
The primary objective in time series analysis is forecasting. Raw data often exhibit nonstationary behavior: trends, seasonal cycles, and heteroskedasticity. After the data are transformed to a weakly stationary process, autoregressive moving average (ARMA) models may capture the remaining temporal dynamics to improve forecasting. Estimation of ARMA models can be performed by regressing current values on previous realizations and proxy innovations. The classic paradigm fails when the dynamics are nonlinear; in this case, parametric regime-switching specifications model changes in level, ARMA dynamics, and volatility using a finite number of latent states. If the states can be identified using past endogenous or exogenous information, a threshold autoregressive (TAR) or logistic smooth transition autoregressive (LSTAR) model may reduce complex nonlinear associations to conditionally weakly stationary processes. For ARMA, TAR, and STAR models, order parameters quantify the extent to which past information is associated with the future. Unfortunately, even if the model orders are known a priori, the possibility of overfitting can lead to suboptimal forecasting performance. By intentionally overestimating these orders, a linear representation of the full model is exploited and Bayesian regularization can be used to achieve sparsity. Global-local shrinkage priors for the AR, MA, and exogenous coefficients are adopted to pull posterior means toward zero without over-shrinking relevant effects. This dissertation introduces, evaluates, and compares Bayesian techniques that automatically perform model selection and coefficient estimation for ARMA, TAR, and STAR models. Multiple Monte Carlo experiments illustrate the accuracy of these methods in recovering the "true" data-generating process. Practical applications demonstrate their efficacy in forecasting.
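For concreteness, a two-regime LSTAR specification of the kind described above, together with a horseshoe-type global-local shrinkage prior, can be sketched as follows (the notation is illustrative and not taken from the dissertation):
\[
y_t = \boldsymbol{\phi}_1'\mathbf{x}_t + G(s_t;\gamma,c)\,\boldsymbol{\phi}_2'\mathbf{x}_t + \varepsilon_t,
\qquad
G(s_t;\gamma,c) = \frac{1}{1+\exp\{-\gamma(s_t-c)\}},
\]
where $\mathbf{x}_t$ collects an intercept and lagged values $y_{t-1},\dots,y_{t-p}$ with $p$ deliberately set large, and $s_t$ is an observed transition variable. Each coefficient $\phi_j$ receives a global-local prior, e.g.
\[
\phi_j \mid \lambda_j,\tau \sim N(0,\lambda_j^2\tau^2),\qquad \lambda_j\sim C^{+}(0,1),\qquad \tau\sim C^{+}(0,1),
\]
so that coefficients on irrelevant lags are shrunk toward zero while relevant effects are left largely untouched.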
ContributorsGiacomazzo, Mario (Author) / Kamarianakis, Yiannis (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Hahn, Richard (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created2018
157026-Thumbnail Image.png
Description
Statistical model selection using the Akaike Information Criterion (AIC) and similar criteria is a useful tool for comparing multiple and non-nested models without the specification of a null model, which has made it increasingly popular in the natural and social sciences. Despite their common usage, model selection methods are not driven by a notion of statistical confidence, so their results entail an unknown degree of uncertainty. This paper introduces a general framework that extends the notions of Type-I and Type-II error to model selection. A theoretical method for controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions is given, along with a bootstrap approach that approximates the procedure. Results are presented for simulated experiments using normal distributions, random walk models, nested linear regression, and non-nested regression including nonlinear models. Tests are performed using an R package developed by the author, which will be made publicly available upon journal publication of the research results.
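A schematic version of the testing idea, using AIC as the goodness-of-fit measure purely for illustration (the exact statistic and bootstrap scheme in the paper may differ), is:
\[
\mathrm{AIC}(M) = -2\log \hat{L}_M + 2k_M,
\qquad
\Delta = \mathrm{AIC}(M_1) - \mathrm{AIC}(M_2),
\]
where $M_2$ is preferred over a baseline $M_1$ only if the observed difference $\Delta$ exceeds the upper $\alpha$ quantile of the bootstrap distribution of $\Delta$ generated under $M_1$; this caps the probability of wrongly selecting $M_2$ (a Type-I error of model selection) at approximately $\alpha$.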
ContributorsCullan, Michael J (Author) / Sterner, Beckett (Thesis advisor) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created2018
162238-Thumbnail Image.png
Description
Understanding the evolution of opinions is a delicate task, as the dynamics of how individuals change their opinions based on their interactions with others are unclear.
ContributorsWeber, Dylan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicolas (Committee member) / Platte, Rodrigo (Committee member) / Armbruster, Dieter (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created2021
189255-Thumbnail Image.png
Description
The human immunodeficiency virus (HIV) pandemic, which causes the syndrome of opportunistic infections that characterize late-stage HIV disease, known as the acquired immunodeficiency syndrome (AIDS), remains a major public health challenge in many parts of the world. This dissertation contributes to providing deeper qualitative insights into the transmission dynamics and control of HIV/AIDS in the men who have sex with men (MSM) community. A new, relatively basic mathematical model, which incorporates some of the pertinent aspects of HIV epidemiology and immunology and was fitted using yearly new-case data for the MSM population of the State of Arizona, was designed and used to assess the population-level impact of awareness of HIV infection status and condom-based intervention on the transmission dynamics and control of HIV/AIDS in an MSM community. Conditions for the existence and asymptotic stability of the various equilibria of the model were derived. The numerical simulations showed that the prospects for the effective control and/or elimination of HIV/AIDS in the MSM community in the United States are very promising using a condom-based intervention, provided the condom efficacy is high and compliance is moderate enough. The model was extended in Chapter 3 to account for the effects of risk structure, the staged-progression property of HIV disease, and the use of pre-exposure prophylaxis (PrEP) on the spread and control of the disease. The model was shown to undergo a PrEP-induced backward bifurcation when the associated control reproduction number is less than one. It was shown that when compliance in PrEP usage is 50% (80%), about 19.1% (34.2%) of the yearly new HIV/AIDS cases recorded at the peak will have been prevented, in comparison to the worst-case scenario in which a PrEP-based intervention is not implemented in the MSM community. It was also shown that elimination of the HIV pandemic from the MSM community is possible even for the scenario in which the effective contact rate is increased 5-fold from its baseline value, provided low-risk individuals take at least 15 years before they change their risky behavior and transition to the high-risk group (regardless of the value of the transition rate from the high-risk to the low-risk susceptible population).
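As a generic illustration of how condom efficacy and compliance typically enter such models (a standard construction, not the dissertation's exact formulation), the force of infection and the associated control reproduction number may take the form
\[
\lambda = \beta\,(1-\varepsilon_c c)\,\frac{I}{N},
\qquad
\mathcal{R}_c = \frac{\beta\,(1-\varepsilon_c c)}{\mu+\gamma},
\]
where $\beta$ is the effective contact rate, $\varepsilon_c$ is condom efficacy, $c$ is compliance (the proportion of contacts protected), and $\mu+\gamma$ is the rate of exiting the infectious class; elimination then requires efficacy and compliance high enough that $\mathcal{R}_c<1$.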
ContributorsTollett, Queen Wiggs (Author) / Gumel, Abba (Thesis advisor) / Crook, Sharon (Committee member) / Fricks, John (Committee member) / Gardner, Carl (Committee member) / Nagy, John (Committee member) / Arizona State University (Publisher)
Created2023
189356-Thumbnail Image.png
Description
This dissertation comprises two projects: (i) Multiple testing of local maxima for detection of peaks and change points with non-stationary noise, and (ii) Height distributions of critical points of smooth isotropic Gaussian fields: computations, simulations, and asymptotics. The first project introduces a topological multiple testing method for one-dimensional domains to detect signals in the presence of non-stationary Gaussian noise. The approach involves conducting tests at local maxima under two observation conditions: (i) the noise is smooth with unit variance, and (ii) the noise is not smooth, in which case kernel smoothing is applied to increase the signal-to-noise ratio (SNR). The smoothed signals are then standardized, which ensures that the noise of the new sequence has unit variance, making it possible to calculate $p$-values for all local maxima using random field theory. Assuming unimodal true signals with finite support and non-stationary Gaussian noise that can be repeatedly observed, the algorithm introduced in this work demonstrates asymptotic strong control of the False Discovery Rate (FDR) and power consistency as the number of sequence repetitions and the signal strength increase. Simulations indicate that FDR levels can also be controlled under non-asymptotic conditions with finite repetitions. The application of this algorithm to change point detection also guarantees FDR control and power consistency. The second project investigates the explicit and asymptotic height densities of critical points of smooth isotropic Gaussian random fields on both Euclidean space and spheres. The formulae are based on characterizing the distribution of the Hessian of the Gaussian field using Gaussian orthogonally invariant (GOI) matrices and Gaussian orthogonal ensemble (GOE) matrices, the latter being a special case of GOI matrices. However, as the dimension increases, calculating explicit formulae becomes computationally challenging. The project includes two simulation methods for these distributions. Additionally, asymptotic distributions are obtained by utilizing the asymptotic distribution of the eigenvalues (excluding the maximum eigenvalue) of the GOE matrix for large dimensions; for the maximum eigenvalue, the Tracy-Widom distribution is utilized. Simulation results demonstrate the close approximation between the asymptotic distribution and the true distribution when $N$ is sufficiently large.
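Once a $p$-value $p_i$ has been attached to each of the $m$ detected local maxima, FDR control of the kind described above is typically enforced with a Benjamini-Hochberg step, shown here schematically (the dissertation's procedure may differ in its details):
\[
k = \max\left\{ i : p_{(i)} \le \frac{i\,\alpha}{m} \right\},
\]
where $p_{(1)}\le\cdots\le p_{(m)}$ are the ordered peak $p$-values; the peaks corresponding to the $k$ smallest $p$-values are declared significant, and no peak is reported if no such $i$ exists.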
Contributorsgu, shuang (Author) / Cheng, Dan (Thesis advisor) / Lopes, Hedibert (Committee member) / Fricks, John (Committee member) / Lan, Shiwei (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created2023
187415-Thumbnail Image.png
Description
A pneumonia-like illness caused by SARS-CoV-2 emerged late in 2019 (coined COVID-19), causing a devastating global pandemic on a scale not seen since the 1918/1919 influenza pandemic. This dissertation contributes to providing deeper qualitative insights into the transmission dynamics and control of the disease in the United States. A basic mathematical model, which incorporates the key pertinent epidemiological features of SARS-CoV-2 and was fitted using observed COVID-19 data, was designed and used to assess the population-level impacts of vaccination and face mask usage in mitigating the burden of the pandemic in the United States. Conditions for the existence and asymptotic stability of the various equilibria of the model were derived. The model was shown to undergo a vaccine-induced backward bifurcation when the associated reproduction number is less than one. Conditions for achieving vaccine-derived herd immunity were derived for three of the four FDA-approved vaccines (namely the Pfizer, Moderna, and Johnson & Johnson vaccines), and the vaccination coverage level needed to achieve it decreases with increasing coverage of moderately and highly effective face masks. It was also shown that using face masks as a singular intervention strategy could lead to the elimination of the pandemic if moderately or highly effective masks are prioritized, and that pandemic elimination prospects are greatly enhanced if the vaccination program is combined with a face mask strategy that emphasizes the use of moderately to highly effective masks with at least moderate coverage. The model was extended in Chapter 3 to allow for the assessment of the impacts of waning and boosting of vaccine-derived and natural immunity against the BA.1 Omicron variant of SARS-CoV-2. It was shown that vaccine-derived herd immunity can be achieved in the United States via a vaccination-boosting strategy that entails fully vaccinating at least 72% of the susceptible populace. Boosting of vaccine-derived immunity was shown to be more beneficial than boosting of natural immunity. Overall, this study showed that the prospects for eliminating the pandemic in the United States are highly promising when the two intervention measures are used together.
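The notion of a vaccine-derived herd immunity threshold used above can be illustrated with the standard relation (a generic formula, not necessarily the exact expression derived in the dissertation):
\[
f_v \;\ge\; \frac{1}{\varepsilon_v}\left(1-\frac{1}{\mathcal{R}_0}\right),
\]
where $f_v$ is the fraction of the population that must be fully vaccinated, $\varepsilon_v$ is the protective efficacy of the vaccine, and $\mathcal{R}_0$ is the basic reproduction number; face masks act by reducing $\mathcal{R}_0$, which in turn lowers the required coverage $f_v$.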
ContributorsSafdar, Salman (Author) / Gumel, Abba (Thesis advisor) / Kostelich, Eric (Committee member) / Kang, Yun (Committee member) / Fricks, John (Committee member) / Espanol, Malena (Committee member) / Arizona State University (Publisher)
Created2023
171851-Thumbnail Image.png
Description
A leading crisis in the United States is the opioid use disorder (OUD) epidemic. Opioid overdose deaths have been increasing, with over 100,000 deaths due to overdose from April 2020 to April 2021. This dissertation presents two mathematical models addressing illicit OUD (IOUD), treatment, and recovery within an epidemiological framework. In the first model, individuals remain in the recovery class unless they relapse. Due to the limited availability of specialty treatment facilities for individuals with OUD, a saturation treatment function was incorporated. The second model extends the first by adding a casual user class and its corresponding specialty treatment class. U.S. population data were scaled to a population of 200,000 to obtain parameter estimates. While the first model used the heroin-only dataset, the second model used both the heroin and all-illicit-opioids datasets. Backward bifurcation was found in the first IOUD model for realistic parameter values. Additionally, bistability was observed in the second IOUD model with the heroin-only dataset. This result implies that it would be beneficial to increase the availability of treatment. An alarming effect of the high overdose death rate was discovered: by 2038, the disease-free equilibrium would be the only stable equilibrium. This consequence is concerning because, although the goal is for the epidemic to end, it would be preferable to end it through treatment rather than through overdose deaths. The IOUD model with a casual user class, its sensitivity results, and the comparison of parameters for both datasets showed the importance of not overlooking the influence that casual users have in driving the all-illicit-opioids epidemic. Casual users stay in the casual user class longer and do not enter treatment as quickly as users in the heroin epidemic. Another result was that users of all illicit opioids were reaching the recovered class by means other than specialty treatment; however, the relapse rates for those individuals were much higher than in the heroin-only epidemic. The results from analyzing these models may inform health and policy officials, leading to more effective treatment options and prevention efforts.
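A saturation treatment term of the kind mentioned above is commonly written in the following generic form, used here only to illustrate the idea of limited treatment capacity:
\[
T(U) \;=\; \frac{\sigma U}{1+\eta U},
\]
where $U$ is the number of individuals with illicit opioid use disorder, $\sigma$ is the per-capita treatment rate when facilities are readily available, and $\eta>0$ controls how quickly the treatment rate saturates as demand outstrips the capacity of specialty facilities.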
ContributorsCole, Sandra (Author) / Wirkus, Stephen (Thesis advisor) / Gardner, Carl (Committee member) / Lanchier, Nicolas (Committee member) / Camacho, Erika (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created2022
171638-Thumbnail Image.png
Description
The high uncertainty of renewables introduces more dynamics into power systems. The conventional way of monitoring and controlling power systems is no longer reliable, and new strategies are needed to ensure their stability and reliability. This work aims to assess the use of machine learning methods in analyzing data from renewable-integrated power systems to aid the decision-making of electricity market participants. Specifically, the work studies electricity price forecasting, solar panel detection, and how to constrain machine learning methods to obey domain knowledge. Chapter 2 proposes diversifying the data sources to obtain a more accurate electricity price forecast. Specifically, the proposed two-stage method, namely the rerouted method, learns two types of mapping rules: the mapping between historical wind power and the historical price, and the forecasting rule for wind generation. Based on the two rules, the price is forecast via the forecasted generation and the learned mapping between power and price. An extensive numerical comparison gives guidance for choosing appropriate machine learning methods and demonstrates the effectiveness of the proposed method. Chapter 3 proposes integrating advanced data compression techniques into machine learning algorithms to either improve predictive accuracy or accelerate computation. New semi-supervised learning and one-class classification methods are proposed, based on autoencoders, to compress the data while refining the nonlinear representation of human behavior and solar behavior. The numerical results show robust detection accuracy, laying the foundation for managing distributed energy resources in distribution grids. Guidance is also provided for determining appropriate machine learning methods for the solar detection problem. Chapter 4 proposes integrating different types of domain-knowledge-based constraints into basic neural networks to guide model selection and enhance interpretability. A hybrid model is proposed that penalizes derivatives and alters the network structure to improve performance. The performance improvement from introducing prior-knowledge-based constraints is verified on both synthetic and real data sets.
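A minimal sketch of the two-stage rerouted idea described in Chapter 2, written with generic scikit-learn regressors and synthetic data (the model choices, features, and variable names are placeholders, not the dissertation's implementation):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history: weather features, wind generation, and electricity price (toy scales).
weather = rng.normal(size=(500, 3))                       # e.g., wind speed, direction, temperature
wind = 5 * weather[:, 0] + rng.normal(size=500)           # wind generation driven mostly by wind speed
price = 40 - 2 * wind + rng.normal(size=500)              # price falls as wind generation rises

# Rule 1: learn the mapping from historical wind generation to historical price.
wind_to_price = GradientBoostingRegressor().fit(wind.reshape(-1, 1), price)

# Rule 2: learn a forecasting rule for wind generation from weather features.
weather_to_wind = GradientBoostingRegressor().fit(weather, wind)

# Rerouted forecast: predict wind first, then map the predicted wind to a price.
new_weather = rng.normal(size=(5, 3))
wind_hat = weather_to_wind.predict(new_weather)
price_hat = wind_to_price.predict(wind_hat.reshape(-1, 1))
print(price_hat)
```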
ContributorsLuo, Shuman (Author) / Weng, Yang (Thesis advisor) / Lei, Qin (Committee member) / Fricks, John (Committee member) / Qin, Jiangchao (Committee member) / Arizona State University (Publisher)
Created2022
171927-Thumbnail Image.png
Description
Tracking disease cases is an essential task in public health; however, tracking the number of cases of a disease may be difficult because not every infection can be recorded by public health authorities. Notably, this may happen with whole-country measles case reports, even in countries with robust registration systems. Eilertson et al. (2019) propose using a state-space model combined with maximum likelihood methods for estimating measles transmission. A Bayesian approach that uses particle Markov chain Monte Carlo (pMCMC) is proposed to estimate the parameters of the non-linear state-space model developed in Eilertson et al. (2019) and similar previous studies. This dissertation illustrates the performance of this approach by calculating posterior estimates of the model parameters and predictions of the unobserved states in simulations and case studies. Iterated filtering (IF2) is also used as a supporting method to verify the Bayesian estimation and to inform the selection of prior distributions. In the second half of the thesis, a birth-death process is proposed to model the unobserved population size of a disease vector. This model studies the effect of the disease vector's population size on a second, affected population. The second population follows a non-homogeneous Poisson process when conditioned on the vector process, with a transition rate given by a scaled version of the vector population. The observation model also captures a potential threshold event in which the host species' population size surpasses a certain level, yielding a higher transmission rate. A maximum likelihood procedure is developed for this model, which combines particle filtering with a minorize-maximize (MM) algorithm and extends the work of Crawford et al. (2014).
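The general setup can be summarized as follows (a standard particle marginal Metropolis-Hastings sketch; the dissertation's measles state-space model has its own transition and observation densities):
\[
x_t \mid x_{t-1} \sim f_\theta(x_t \mid x_{t-1}),
\qquad
y_t \mid x_t \sim g_\theta(y_t \mid x_t),
\]
where the $x_t$ are unobserved states (e.g., true infection counts) and the $y_t$ are reported cases. A particle filter supplies an unbiased estimate $\widehat{p}_\theta(y_{1:T})$ of the likelihood, and a proposed $\theta'$ is accepted with probability
\[
\min\!\left\{1,\;
\frac{\widehat{p}_{\theta'}(y_{1:T})\,\pi(\theta')\,q(\theta \mid \theta')}
     {\widehat{p}_{\theta}(y_{1:T})\,\pi(\theta)\,q(\theta' \mid \theta)}\right\},
\]
which yields a Markov chain targeting the exact posterior $\pi(\theta \mid y_{1:T})$ even though the likelihood is only estimated.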
ContributorsMartinez Rivera, Wilmer Osvaldo (Author) / Fricks, John (Thesis advisor) / Reiser, Mark (Committee member) / Zhou, Shuang (Committee member) / Cheng, Dan (Committee member) / Lan, Shiwei (Committee member) / Arizona State University (Publisher)
Created2022
157719-Thumbnail Image.png
Description
Functional brain imaging experiments are widely conducted in many fields for studying the underlying brain activity in response to mental stimuli. For such experiments, it is crucial to select a good sequence of mental stimuli that allows researchers to collect informative data for making precise and valid statistical inferences at minimum cost. In contrast to most existing studies, the aim of this study is to obtain optimal designs for a brain mapping technology with ultra-high temporal resolution with respect to some common statistical optimality criteria. The first topic of this work is finding optimal designs when the primary interest is in estimating the Hemodynamic Response Function (HRF), a function of time describing the effect of a mental stimulus on the brain. A major challenge here is that the design matrix of the statistical model is greatly enlarged. As a result, it is very difficult, if not infeasible, to compute and compare the statistical efficiencies of competing designs. To tackle this issue, an efficient approach built on subsampling the design matrix, together with an efficient computer algorithm, is proposed. It is demonstrated through analytical and simulation results that the proposed approach can outperform existing methods in terms of computing time and the quality of the obtained designs. The second topic of this work is finding optimal designs when another set of popularly used basis functions is considered for modeling the HRF, e.g., to detect brain activations. Although the statistical model for analyzing the data remains linear, the parametric functions of interest under this setting are often nonlinear, and the quality of the design will then depend on the true values of some unknown parameters. To address this issue, the maximin approach is considered to identify designs that maximize the minimum relative efficiency over the parameter space. As shown in the case studies, these maximin designs yield high performance for detecting brain activation compared to the traditional designs that are widely used in practice.
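The maximin criterion referred to above can be written compactly in generic notation (not necessarily that of the dissertation):
\[
d^{*} \;=\; \arg\max_{d \in \mathcal{D}}\; \min_{\theta \in \Theta}\;
\mathrm{RE}(d;\theta),
\qquad
\mathrm{RE}(d;\theta) \;=\; \frac{\Phi(d;\theta)}{\max_{d' \in \mathcal{D}} \Phi(d';\theta)},
\]
where $\Phi(d;\theta)$ is the chosen optimality criterion (e.g., a D- or A-criterion) evaluated at design $d$ under parameter value $\theta$, so the selected design guards against the worst case over the plausible parameter space $\Theta$.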
ContributorsAlghamdi, Reem (Author) / Kao, Ming-Hung (Thesis advisor) / Fricks, John (Committee member) / Pan, Rong (Committee member) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Arizona State University (Publisher)
Created2019