Matching Items (8)
Description
Daily diaries and other intensive measurement methods are increasingly used to study the relationships between two time-varying variables X and Y. These data are commonly analyzed using longitudinal multilevel or bivariate growth curve models that allow for random effects of intercept (and sometimes also slope) but which do not address the effects of weekly cycles in the data. Three Monte Carlo studies investigated the impact of omitting the weekly cycles in daily diary data under the multilevel model framework. In cases where cycles existed in both the time-varying predictor series (X) and the time-varying outcome series (Y) but were ignored, the effects of the within- and between-person components of X on Y tended to be biased, as were their corresponding standard errors. The direction and magnitude of the bias depended on the phase difference between the cycles in the two series. In cases where cycles existed in only one series but were ignored, the standard errors of the regression coefficients for the within- and between-person components of X tended to be biased, and the direction and magnitude of bias depended on which series contained cyclical components.
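A minimal Python sketch of one such Monte Carlo replication may clarify the setup: it simulates daily diary series with a 7-day cycle in both X and Y (with a chosen phase difference) and fits a random-intercept multilevel model that omits the cycle terms. All parameter values and the statsmodels-based model are illustrative assumptions, not the thesis's actual simulation code.

```python
# Hypothetical sketch of one replication: simulate diary data with weekly
# cycles in X and Y, then fit a multilevel model that ignores the cycles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_persons, n_days = 100, 56
phase = np.pi / 2                       # assumed phase difference between cycles

rows = []
for i in range(n_persons):
    t = np.arange(n_days)
    x = rng.normal(0, 1) + np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, n_days)
    y = (rng.normal(0, 1) + 0.3 * x + np.sin(2 * np.pi * t / 7 + phase)
         + rng.normal(0, 1, n_days))
    rows.append(pd.DataFrame({"id": i, "x": x, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Disaggregate X into between- and within-person components.
data["x_between"] = data.groupby("id")["x"].transform("mean")
data["x_within"] = data["x"] - data["x_between"]

# Random-intercept model omitting the weekly cycle terms; comparing its
# estimates to the generating values reveals the bias the abstract describes.
fit = smf.mixedlm("y ~ x_within + x_between", data, groups=data["id"]).fit()
print(fit.summary())
```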
Contributors: Liu, Yu (Author) / West, Stephen G. (Thesis advisor) / Enders, Craig K. (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Obtaining high-quality experimental designs to optimize statistical efficiency and data quality is quite challenging for functional magnetic resonance imaging (fMRI). The primary fMRI design issue is the selection of the best sequence of stimuli based on a statistically meaningful optimality criterion. Previous studies have provided guidance and powerful computational tools for obtaining good fMRI designs. However, these results are mainly for basic experimental settings with simple statistical models. In this work, a type of modern fMRI experiment is considered in which the design matrix of the statistical model depends not only on the selected design, but also on the experimental subject's probabilistic behavior during the experiment. The design matrix is thus uncertain at the design stage, making it difficult to select good designs. By taking this uncertainty into account, a very efficient approach for obtaining high-quality fMRI designs is developed in this study. The proposed approach is built upon an analytical result and an efficient computer algorithm. It is shown through case studies that the proposed approach can outperform an existing method in terms of both computing time and the quality of the obtained designs.
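As a rough illustration of designing under an uncertain design matrix, the sketch below scores a candidate stimulus sequence by a naive Monte Carlo average of the D-criterion over simulated subject behavior; the thesis's analytical result is precisely the kind of tool that improves on such brute-force averaging. The response model, the placeholder design-matrix construction, and the criterion are assumptions for illustration.

```python
# Score candidate fMRI stimulus sequences when the design matrix depends on
# the subject's random behavior: average det(X'X) over simulated behavior.
import numpy as np

rng = np.random.default_rng(0)

def design_matrix(sequence, behavior):
    """Placeholder model matrix built from the stimulus sequence and one
    realization of the subject's probabilistic behavior."""
    n = len(sequence)
    return np.column_stack([sequence, sequence * behavior, np.ones(n)])

def expected_d_criterion(sequence, p_respond=0.8, n_sims=500):
    """Monte Carlo estimate of E[det(X'X)] over the subject's behavior."""
    n = len(sequence)
    vals = []
    for _ in range(n_sims):
        behavior = rng.binomial(1, p_respond, size=n)  # e.g., respond or not
        X = design_matrix(sequence, behavior)
        vals.append(np.linalg.det(X.T @ X))
    return np.mean(vals)

# Compare two candidate binary stimulus sequences by expected D-value.
seq_a = rng.binomial(1, 0.5, size=60)
seq_b = np.tile([1, 0], 30)
print(expected_d_criterion(seq_a), expected_d_criterion(seq_b))
```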
Contributors: Zhou, Lin (Author) / Kao, Ming-Hung (Thesis advisor) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
When analyzing longitudinal data it is essential to account both for the correlation inherent in the repeated measures of the responses and for the correlation that arises from the feedback between the responses at a particular time and the predictors at other times. A generalized method of moments (GMM) for estimating the coefficients in longitudinal data is presented. The appropriate and valid estimating equations associated with the time-dependent covariates are identified, thus providing substantial gains in efficiency over generalized estimating equations (GEE) with the independent working correlation. Identifying the estimating equations for computation is of utmost importance. This paper provides a technique for identifying the relevant estimating equations through a general method of moments. I develop an approach that makes use of all the valid estimating equations associated with each time-dependent and time-independent covariate. Moreover, my approach does not assume that feedback is always present over time, or present to the same degree. I fit the GMM correlated logistic regression model in SAS with PROC IML. I examine two datasets for illustrative purposes: rehospitalization in a Medicare database, and data on the relationship between body mass index and future morbidity among children in the Philippines. These datasets allow me to compare my results with those from earlier methods of analysis.
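A hedged sketch of the GMM idea follows, in Python rather than the SAS/IML used in the thesis: stack the estimating equations x_is(y_it - mu_it) for whichever (s, t) pairs are judged valid for a time-dependent covariate, then minimize the usual quadratic form. The toy data, the contemporaneous-only choice of valid pairs, and the identity first-step weight matrix are all illustrative assumptions.

```python
# Two-step-style GMM for a longitudinal logistic model (first step shown):
# minimize g(beta)' W g(beta), where g stacks the valid moment conditions.
import numpy as np
from scipy.optimize import minimize

def moments(beta, X, y, valid_pairs):
    """Average moment vector over subjects.

    X: (n_subjects, T, p) covariates; y: (n_subjects, T) binary outcomes;
    valid_pairs: list of (s, t) pairs whose moments are assumed valid.
    """
    g = []
    for s, t in valid_pairs:
        mu_t = 1.0 / (1.0 + np.exp(-X[:, t, :] @ beta))      # logistic mean
        g.extend((X[:, s, :] * (y[:, t] - mu_t)[:, None]).mean(axis=0))
    return np.array(g)

def gmm_objective(beta, X, y, valid_pairs, W):
    g = moments(beta, X, y, valid_pairs)
    return g @ W @ g

# Toy data: 200 subjects, 4 occasions, 2 covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4, 2))
true_beta = np.array([0.8, -0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))

pairs = [(t, t) for t in range(4)]           # contemporaneous moments only
W = np.eye(len(pairs) * 2)                   # first-step identity weight
fit = minimize(gmm_objective, x0=np.zeros(2), args=(X, y, pairs, W))
print(fit.x)
```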
Contributors: Yin, Jianqiong (Author) / Wilson, Jeffrey (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The purpose of this study was to examine whether dispositional sadness predicted children's prosocial behavior, and whether empathy-related responding (i.e., sympathy, personal distress) mediated this relation. It was hypothesized that children who were dispositionally sad, but well-regulated (i.e., moderate to high in effortful control), would experience sympathy rather than personal distress, and thus would engage in more prosocial behaviors than children who were not well-regulated. Constructs were measured across three time points, when children were 18, 30, and 42 months old. In addition, early effortful control (at 18 months) was investigated as a potential moderator of the relation between dispositional sadness and empathy-related responding. Separate path models were computed for sadness predicting prosocial behavior with (1) sympathy and (2) personal distress as the mediator. In path analysis, sadness was found to be a positive predictor of sympathy across time. There was not a significant mediated effect of sympathy on the relation between sadness and prosocial behavior (both reported and observed). In path models with personal distress, sadness was not a significant predictor of personal distress, and personal distress was not a significant predictor of prosocial behavior (therefore, mediation analyses were not pursued). The moderated effect of effortful control was significant for the relation between 18-month sadness and 30-month sympathy; contrary to expectation, sadness was a significant, positive predictor of sympathy only for children who had average and low levels of effortful control (children high in effortful control were high in sympathy regardless of level of sadness). There was no significant moderated effect of effortful control on the path from sadness to personal distress. Findings are discussed in terms of the role of sadness in empathy-related responding and prosocial behavior, as well as the dual role of effortful control and sadness in predicting empathy-related responding.
Contributors: Edwards, Alison (Author) / Eisenberg, Nancy (Thesis advisor) / Spinrad, Tracy (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Threshold regression is used to model regime-switching dynamics in which the effects of the explanatory variables in predicting the response variable depend on whether a certain threshold has been crossed. When regime-switching dynamics are present, new estimation problems arise related to estimating the value of the threshold. Conventional methods use an iterative search procedure that seeks to minimize the sum of squares criterion. However, when unnecessary variables are included in the model, or when certain variables drop out of the model depending on the regime, this method may have high variability. This paper proposes Lasso-type methods as an alternative to ordinary least squares. By incorporating an L1 penalty term, Lasso methods perform variable selection, thus potentially reducing some of the variance in estimating the threshold parameter. This paper discusses the results of a study in which two different underlying model structures were simulated: the first is a regression model with correlated predictors, whereas the second is a self-exciting threshold autoregressive model. Finally, the proposed Lasso-type methods are compared to conventional methods in an application to urban traffic data.
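The following sketch illustrates the general Lasso-type threshold idea under assumed conventions: grid-search the threshold and, at each candidate value, fit an L1-penalized regression on the regime-expanded design, keeping the threshold that minimizes the penalized criterion. The criterion's exact form and the toy data are assumptions, not the thesis's specification.

```python
# Grid-search threshold estimation with a Lasso fit per candidate threshold.
import numpy as np
from sklearn.linear_model import Lasso

def fit_threshold_lasso(X, y, q, grid, alpha=0.1):
    """Return the threshold (and model) minimizing a penalized criterion.

    q: threshold variable (e.g., a lagged response); grid: candidate thresholds.
    """
    best = (np.inf, None, None)
    for tau in grid:
        low = (q <= tau)[:, None]
        Z = np.hstack([X * low, X * ~low])   # separate coefficients per regime
        model = Lasso(alpha=alpha).fit(Z, y)
        rss = np.sum((y - model.predict(Z)) ** 2)
        crit = rss + alpha * np.abs(model.coef_).sum()
        if crit < best[0]:
            best = (crit, tau, model)
    return best[1], best[2]

# Toy self-exciting setup: the regime depends on the first predictor.
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5))
q = X[:, 0]
beta_low = np.array([1.5, 0, 0, 0, 0])
beta_high = np.array([0, -1.0, 0, 0, 0])
y = np.where(q <= 0.0, X @ beta_low, X @ beta_high) + rng.normal(0, 0.5, 300)

tau_hat, model = fit_threshold_lasso(X, y, q, grid=np.linspace(-1, 1, 41))
print(tau_hat)
```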
Contributors: Van Schaijik, Maria (Author) / Kamarianakis, Yiannis (Committee member) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This article proposes a new information-based optimal subdata selection (IBOSS) algorithm, the Squared Scaled Distance Algorithm (SSDA). It is based on the invariance of the determinant of the information matrix under orthogonal transformations, especially rotations. Extensive simulation results show that the new IBOSS algorithm retains the nice asymptotic properties of IBOSS and gives a larger determinant of the subdata information matrix. It has the same order of time complexity as the D-optimal IBOSS algorithm. However, it exploits vectorized calculation, avoiding for loops, and is approximately 6 times as fast as the D-optimal IBOSS algorithm in R. The robustness of SSDA is studied from three aspects: nonorthogonality, the inclusion of interaction terms, and variable misspecification. A new, accurate variable selection algorithm is proposed to help the implementation of IBOSS algorithms when a large number of variables is present but only a few of them are important. By aggregating results from random subsamples, this variable selection algorithm is much more accurate than the LASSO method applied to the full data. Since its time complexity is associated with the number of variables only, it is also very computationally efficient when the number of variables is fixed (and not massively large) as n increases. More importantly, by using subsamples it solves the problem that the full data cannot be stored in memory when a data set is too large.
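A vectorized sketch of the squared-scaled-distance idea as the abstract describes it: keep the k rows farthest from the variable-wise center after scaling, with no explicit Python loop over rows. The particular centering and scaling used here are assumptions; the thesis's exact SSDA rule may differ.

```python
# Vectorized subdata selection by squared scaled distance from the center.
import numpy as np

def ssda_subdata(X, k):
    """Select k rows of X with the largest squared scaled distance."""
    center = X.mean(axis=0)
    scale = X.std(axis=0)
    d2 = (((X - center) / scale) ** 2).sum(axis=1)  # squared scaled distances
    return np.argpartition(d2, -k)[-k:]             # top-k in O(n) time

rng = np.random.default_rng(3)
X = rng.normal(size=(100_000, 10))
idx = ssda_subdata(X, k=1000)
info = X[idx].T @ X[idx]                            # subdata information matrix
print(np.linalg.slogdet(info))                      # log-determinant criterion
```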
Contributors: Zheng, Yi (Author) / Stufken, John (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Functional magnetic resonance imaging (fMRI) is used to study brain activity due to stimuli presented to subjects in a scanner. It is important to conduct statistical inference on the time series fMRI data obtained, and also to select optimal designs for practical experiments. Design selection under autoregressive models has not been thoroughly discussed before. This paper derives general information matrices for orthogonal designs under an autoregressive model with an arbitrary number of correlation coefficients. We further provide the minimum trace of orthogonal circulant designs under the AR(1) model, which is used as a criterion for comparing practical designs such as M-sequence designs and circulant (almost) orthogonal array designs. We also explore optimal designs under the AR(2) model. In practice there can be more than one type of stimulus, but in this paper we consider only the simplest situation with a single stimulus type.
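As a concrete anchor for the information-matrix derivations, the sketch below computes the generalized-least-squares information matrix X'V^{-1}X for a design X under AR(1) errors and compares two toy designs by the trace of the inverse information (the A-criterion). The designs and the use of this particular trace criterion are illustrative assumptions, not the thesis's exact comparison.

```python
# Information matrix of a design under AR(1) errors and a trace comparison.
import numpy as np

def ar1_information(X, rho):
    """Information matrix X' V^{-1} X under AR(1) errors (unit variance)."""
    n = X.shape[0]
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    V = rho ** lags                        # AR(1) correlation matrix
    return X.T @ np.linalg.solve(V, X)

rng = np.random.default_rng(5)
X1 = rng.binomial(1, 0.5, size=(64, 3)).astype(float)   # random binary design
X2 = np.column_stack([np.tile([1.0, 0.0], 32),          # structured design
                      np.tile([1.0, 1.0, 0.0, 0.0], 16),
                      np.tile([1.0, 0.0, 0.0, 0.0], 16)])
for X in (X1, X2):
    M = ar1_information(X, rho=0.3)
    print(np.trace(np.linalg.inv(M)))      # smaller is better (A-criterion)
```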
Contributors: Chen, Chuntao (Author) / Stufken, John (Thesis advisor) / Reiser, Mark R. (Committee member) / Kamarianakis, Ioannis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Whilst linear mixed models offer a flexible approach to handling data with multiple sources of random variability, hypothesis testing for the fixed effects often encounters obstacles when the sample size is small and the underlying distribution of the test statistic is unknown. Consequently, five methods for approximating the denominator degrees of freedom (residual, containment, between-within, Satterthwaite, Kenward-Roger) have been developed to overcome this problem. This study evaluates the performance of these five methods with a mixed model consisting of a random intercept and a random slope. Specifically, simulations are conducted to provide insight into the F-statistics, denominator degrees of freedom, and p-values each method gives under different settings of the sample structure, the fixed-effect slopes, and the missing-data proportion. The simulation results show that the residual method performs the worst in terms of F-statistics and p-values. Also, the Satterthwaite and Kenward-Roger methods tend to be more sensitive to changes in the design. The Kenward-Roger method performs best in terms of F-statistics when the null hypothesis is true.
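A sketch of one simulation replicate under assumed settings: generate small-sample data from a random-intercept, random-slope model, delete some observations to mimic the missing-data conditions, and fit the model. The denominator-df adjustments themselves are not implemented here; Satterthwaite and Kenward-Roger corrections are available in R through the lmerTest and pbkrtest packages, while the statsmodels fit below reports unadjusted Wald tests.

```python
# One replicate of a random-intercept, random-slope simulation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_groups, n_per = 10, 5                  # deliberately small sample
b0, b1 = 1.0, 0.5                        # fixed intercept and slope

rows = []
for g in range(n_groups):
    u0 = rng.normal(0, 1.0)              # random intercept
    u1 = rng.normal(0, 0.5)              # random slope
    t = np.arange(n_per, dtype=float)
    y = (b0 + u0) + (b1 + u1) * t + rng.normal(0, 1, n_per)
    rows.append(pd.DataFrame({"g": g, "t": t, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Drop observations at random to mimic a missing-data proportion of 10%.
data = data.sample(frac=0.9, random_state=0)

fit = smf.mixedlm("y ~ t", data, groups=data["g"], re_formula="~t").fit()
print(fit.summary())                     # Wald tests; no df adjustment applied
```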
Contributors: Huang, Ping-Chieh (Author) / Reiser, Mark R. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2020