Description
Mediation analysis is used to investigate how an independent variable, X, is related to an outcome variable, Y, through a mediator variable, M (MacKinnon, 2008). Even when X represents a randomized intervention, it is difficult to make cause-and-effect inferences about indirect effects without invoking no-unmeasured-confounding assumptions under the potential outcomes framework (Holland, 1988; MacKinnon, 2008; Robins & Greenland, 1992; VanderWeele, 2015), using longitudinal data to establish the temporal order of M and Y (MacKinnon, 2008), or both. The goals of this dissertation were to (1) define all indirect and direct effects in a three-wave longitudinal mediation model using the causal mediation formula (Pearl, 2012), (2) analytically compare traditional estimators (ANCOVA, difference score, and residualized change score) to the potential outcomes-defined indirect effects, and (3) use a Monte Carlo simulation to compare the performance of regression and potential outcomes-based methods for estimating longitudinal indirect effects and apply the methods to an empirical dataset. Application of the causal mediation formula revealed that the potential outcomes definitions of indirect effects are equivalent to the product-of-coefficients estimators in a three-wave longitudinal mediation model with linear and additive relations. Analytical comparisons demonstrated that the ANCOVA, difference score, and residualized change score models’ estimates of the two time-specific indirect effects differ as a function of the respective mediator-outcome relations at each time point. In the Monte Carlo study, the traditional model that performed best on the evaluation criteria was the ANCOVA model, and the best-performing potential outcomes method was sequential G-estimation. Implications and future directions are discussed.
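The equivalence noted above, between potential outcomes indirect effects and the product of coefficients under linear, additive relations, can be illustrated with a minimal single-mediator sketch. The data-generating values and simulated data below are hypothetical illustrations, not the dissertation's three-wave model or its results:

```python
import random
import statistics

def simple_slope(x, y):
    """OLS slope of y regressed on x (single predictor with intercept)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def partial_slope(y, m, x):
    """OLS slope of y on m adjusting for x, via Frisch-Waugh-Lovell:
    residualize y and m on x, then regress the residuals."""
    by, bm = simple_slope(x, y), simple_slope(x, m)
    my_, mm_, mx_ = statistics.fmean(y), statistics.fmean(m), statistics.fmean(x)
    ry = [yi - my_ - by * (xi - mx_) for yi, xi in zip(y, x)]
    rm = [mi - mm_ - bm * (xi - mx_) for mi, xi in zip(m, x)]
    return simple_slope(rm, ry)

# Hypothetical population values (not from the dissertation)
a_true, b_true, c_true = 0.5, 0.4, 0.2
rng = random.Random(1)
n = 5000
x = [rng.gauss(0, 1) for _ in range(n)]
m = [a_true * xi + rng.gauss(0, 1) for xi in x]
y = [b_true * mi + c_true * xi + rng.gauss(0, 1) for mi, xi in zip(m, x)]

a_hat = simple_slope(x, m)        # X -> M path
b_hat = partial_slope(y, m, x)    # M -> Y path, adjusted for X
indirect = a_hat * b_hat          # product-of-coefficients indirect effect
```

With linear, additive relations and no unmeasured confounding, this product converges to the same quantity the potential outcomes definitions identify; with interactions or nonlinearity, the two generally diverge.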
ContributorsValente, Matthew J (Author) / Mackinnon, David P (Thesis advisor) / West, Stephen G. (Committee member) / Grimm, Kevin (Committee member) / Chassin, Laurie (Committee member) / Arizona State University (Publisher)
Created2018
Description
The goal of diagnostic assessment is to discriminate between groups. In many cases, a binary decision is made conditional on a cut score from a continuous scale. Psychometric methods can improve assessment by modeling a latent variable using item response theory (IRT), and IRT scores can subsequently be used to determine a cut score using receiver operating characteristic (ROC) curves. Psychometric methods provide reliable and interpretable scores, but the prediction of the diagnosis is not the primary product of the measurement process. In contrast, machine learning methods, such as regularization or binary recursive partitioning, can build a model from the assessment items to predict the probability of diagnosis. Machine learning predicts the diagnosis directly, but does not provide an inferential framework to explain why item responses are related to the diagnosis. It remains unclear whether psychometric and machine learning methods have comparable accuracy or if one method is preferable in some situations. In this study, Monte Carlo simulation methods were used to compare psychometric and machine learning methods on diagnostic classification accuracy. Results suggest that classification accuracy of psychometric models depends on the diagnostic-test correlation and prevalence of diagnosis. Also, machine learning methods that reduce prediction error have inflated specificity and very low sensitivity compared to the data-generating model, especially when prevalence is low. Finally, machine learning methods that use ROC curves to determine probability thresholds have comparable classification accuracy to the psychometric models as sample size, number of items, and number of item categories increase. Therefore, results suggest that machine learning models could provide a viable alternative for classification in diagnostic assessments. Strengths and limitations for each of the methods are discussed, and future directions are considered.
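A cut score can be chosen from continuous scores by sweeping candidate thresholds along the ROC curve. The sketch below maximizes Youden's J (sensitivity + specificity − 1), one common criterion; the scores and labels are illustrative, not data from the study:

```python
def youden_cut_score(scores, labels):
    """Pick the cut score maximizing Youden's J = sensitivity + specificity - 1.
    Cases scoring at or above the cut are classified as positive."""
    pos = [s for s, d in zip(scores, labels) if d == 1]
    neg = [s for s, d in zip(scores, labels) if d == 0]
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        sens = sum(s >= cut for s in pos) / len(pos)
        spec = sum(s < cut for s in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Toy example: 8 examinees, scores on a continuous scale, binary diagnosis
scores = [1, 2, 3, 4, 5, 6, 7, 8]
labels = [0, 0, 0, 0, 1, 0, 1, 1]
cut, j = youden_cut_score(scores, labels)
```

In the psychometric approach described above, `scores` would be IRT scale scores; the same sweep applies to predicted probabilities from a machine learning model, which is how ROC-based probability thresholds make the two approaches comparable.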
ContributorsGonzález, Oscar (Author) / Mackinnon, David P (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Grimm, Kevin J. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created2018
Description
Statistical inference from mediation analysis applies to populations; however, researchers and clinicians may be interested in making inferences about individual clients or small, localized groups of people. Person-oriented approaches focus on the differences between people, or latent groups of people, to ask how individuals differ across variables, and can help researchers avoid ecological fallacies when making inferences about individuals. Traditional variable-oriented mediation assumes the population undergoes a homogeneous reaction to the mediating process. However, mediation is also described as an intra-individual process in which each person passes from a predictor, through a mediator, to an outcome (Collins, Graham, & Flaherty, 1998). Configural frequency mediation (CFM) is a person-oriented analysis of contingency tables that has not been well studied or widely implemented since its introduction in the literature (von Eye, Mair, & Mun, 2010; von Eye, Mun, & Mair, 2009). The purpose of this study is to describe CFM and investigate its statistical properties while comparing it to traditional and causal inference mediation methods. The results of this study show that the joint significance mediation test yields better Type I error rates but limits the person-oriented interpretations of CFM. Although the estimators for logistic regression and causal mediation are different, both perform well in terms of Type I error and power, although the causal estimator had higher bias than expected, which is discussed in the limitations section.
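The joint significance mediation test mentioned in the results simply requires both constituent paths to be individually significant. A minimal sketch, using hypothetical estimates and standard errors for illustration:

```python
def joint_significance(a_hat, se_a, b_hat, se_b, critical=1.96):
    """Declare mediation only if both the X -> M path (a) and the
    M -> Y path (b) are individually significant at the same level."""
    return abs(a_hat / se_a) > critical and abs(b_hat / se_b) > critical

sig = joint_significance(0.40, 0.10, 0.30, 0.12)     # both |z| > 1.96
nonsig = joint_significance(0.40, 0.10, 0.15, 0.12)  # b path |z| = 1.25
```

The test's appeal is its Type I error control; its limitation, as noted above, is that it tests the two paths separately rather than characterizing person-level mediation configurations.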
ContributorsSmyth, Heather Lynn (Author) / Mackinnon, David P (Thesis advisor) / Grimm, Kevin J. (Committee member) / Edwards, Michael C (Committee member) / Arizona State University (Publisher)
Created2019
Description
Research and theory in social psychology and related fields indicate that people simultaneously hold many cultural identities. It is well evidenced across relevant fields (e.g., sociology, marketing, economics) that salient identities are instrumental in a variety of cognitive and behavioral processes, including decision-making. It is not, however, well understood how the relative salience of various cultural identities factors into the process of making identity-relevant choices, particularly ones that require an actor to choose between conflicting sets of cultural values or beliefs. It is also unclear whether the source of that salience (e.g., chronic or situational) is meaningful in this regard. The current research makes novel predictions concerning the roles of cultural identity centrality and cultural identity situational salience in three distinct aspects of the decision-making process: direction of decision, speed of decision, and emotion related to the decision. In doing so, the research highlights two under-researched forms of culture (i.e., political and religious) and uses as the focal dependent variable a decision-making scenario that forces participants to choose between the values of their religious and political cultures and, to some degree, behave in an identity-inconsistent manner. Results indicate main effects of Christian identity centrality and democrat identity centrality on preference for traditional versus gender-neutral (i.e., non-traditional/progressive) restrooms after statistically controlling for covariates. Additionally, results show a significant main effect of democrat identity centrality and a significant interaction effect of Christian and democrat identity centrality on positive emotion linked to the decision. Post hoc analyses further reveal a significant quadratic relationship between Christian identity centrality and emotion related to the decision.
There was no effect of the situational strength of democrat identity salience on the decision. Neither centrality nor situational strength had any effect on the speed with which participants made their decisions. This research theoretically and empirically advances the study of cultural psychology and carries important implications for identity research and for judgment and decision-making across a variety of fields, including management, behavioral economics, and marketing.
ContributorsBarbour, Joseph Eugene (Author) / Cohen, Adam B. (Thesis advisor) / Kenrick, Douglas T. (Committee member) / Mackinnon, David P (Committee member) / Mandel, Naomi (Committee member) / Arizona State University (Publisher)
Created2019
Description
This dissertation examines a planned missing data design in the context of mediational analysis. The study considered a scenario in which the high cost of an expensive mediator limited sample size, but in which less expensive mediators could be gathered on a larger sample size. Simulated multivariate normal data were generated from a latent variable mediation model with three observed indicator variables, M1, M2, and M3. Planned missingness was implemented on M1 under the missing completely at random mechanism. Five analysis methods were employed: latent variable mediation model with all three mediators as indicators of a latent construct (Method 1), auxiliary variable model with M1 as the mediator and M2 and M3 as auxiliary variables (Method 2), auxiliary variable model with M1 as the mediator and M2 as a single auxiliary variable (Method 3), maximum likelihood estimation including all available data but incorporating only mediator M1 (Method 4), and listwise deletion (Method 5).

The main outcome of interest was empirical power to detect the mediated effect. The main effects of mediation effect size, sample size, and missing data rate performed as expected, with power increasing for larger mediation effect sizes, larger sample sizes, and lower missing data rates. Consistent with expectations, power was greatest for analysis methods that included all three mediators and decreased for analysis methods that included less information. Across all design cells relative to the complete data condition, Method 1 with 20% missingness on M1 produced only a 2.06% loss in power for the mediated effect; with 50% missingness, a 6.02% loss; and with 80% missingness, only an 11.86% loss. Method 2 exhibited a 20.72% power loss at 80% missingness, even though the total amount of data utilized was the same as for Method 1. Methods 3–5 exhibited greater power loss. Compared to an average power loss of 11.55% across all levels of missingness for Method 1, the average power losses for Methods 3, 4, and 5 were 23.87%, 29.35%, and 32.40%, respectively. In conclusion, planned missingness in a multiple mediator design may permit higher quality characterization of the mediator construct at feasible cost.
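Planned missingness under MCAR, as imposed on M1 in the simulation, amounts to deleting each observation independently with a fixed probability. A minimal sketch; the sample size and 20% rate below are illustrative choices, not the study's full design:

```python
import random

def impose_mcar(values, miss_rate, rng):
    """Delete each observation independently with probability miss_rate
    (missing completely at random)."""
    return [None if rng.random() < miss_rate else v for v in values]

rng = random.Random(7)
m1 = [rng.gauss(0, 1) for _ in range(1000)]   # the expensive mediator
m1_planned = impose_mcar(m1, 0.20, rng)       # 20% planned missingness
n_observed = sum(v is not None for v in m1_planned)
```

Because the deletion mechanism is known to be MCAR by design, maximum likelihood and auxiliary variable methods can recover much of the lost information, which is what drives the small power losses reported for Method 1.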
ContributorsBaraldi, Amanda N (Author) / Enders, Craig K. (Thesis advisor) / Mackinnon, David P (Thesis advisor) / Aiken, Leona S. (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created2015
Description
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Information about the mediating processes can be used to make interventions more powerful by enhancing successful program components and by not implementing components that did not significantly change the outcome. Identifying mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to external criteria. However, current methods do not allow researchers to study the relations of the general aspect and the specific facets of a construct to an external criterion simultaneously. This study proposes a bifactor measurement model for the mediating construct as a way to represent the general aspect and specific facets of a construct simultaneously. Monte Carlo simulation results are presented to help determine under what conditions researchers can detect the mediated effect when one of the facets of the mediating construct is the true mediator but the mediator is treated as unidimensional. Results indicate that parameter bias and detection of the mediated effect depend on the facet variance represented in the mediation model. This study contributes to the largely unexplored area of measurement issues in statistical mediation analysis.
ContributorsGonzález, Oscar (Author) / Mackinnon, David P (Thesis advisor) / Grimm, Kevin J. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created2016
Description
Time metric is an important consideration for all longitudinal models because it can influence the interpretation of estimates, parameter estimate accuracy, and model convergence in longitudinal models with latent variables. Currently, the literature on latent difference score (LDS) models does not discuss the importance of time metric. Furthermore, there is little research using simulations to investigate LDS models. This study examined the influence of time metric on model estimation, interpretation, parameter estimate accuracy, and convergence in LDS models using empirical simulations. Results indicated that, for a time structure in which participants had different starting points and unequally spaced intervals, LDS models fit with a restructured and less informative time metric produced biased parameter estimates; however, models fit with the true time metric were less likely to converge than models using the restructured time metric, likely due to missing data. When participants had different starting points but equally spaced intervals, LDS models fit with a restructured time metric produced biased estimates of intercept means while all other parameter estimates were unbiased, and models fit with the true time metric again converged less often, also due to missing data. The findings of this study support prior research on time metric in longitudinal models, and further research should examine these findings under alternative conditions. The importance of these findings for substantive researchers is discussed.
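Restructuring a time metric, in the sense studied here, replaces individually varying measurement ages with a common wave index, discarding between-person differences in starting points and spacing. A toy sketch with hypothetical ages (not data from the study):

```python
def wave_metric(ages_by_person):
    """Restructure individually varying measurement ages into a common
    wave index (0, 1, 2, ...), discarding differences in starting
    points and spacing between persons."""
    return {pid: list(range(len(ages))) for pid, ages in ages_by_person.items()}

# Hypothetical true time metric: different starting ages, unequal spacing
true_metric = {"p1": [10.0, 11.5, 13.0], "p2": [12.0, 12.5, 14.0]}
restructured = wave_metric(true_metric)
# Both persons now share the same time scores even though their true
# starting points and intervals differ.
```

The information thrown away by this mapping is exactly what produced the parameter bias reported above; the convergence advantage comes from the restructured data having no person-specific gaps in the time grid.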
ContributorsO'Rourke, Holly P (Author) / Grimm, Kevin J. (Thesis advisor) / Mackinnon, David P (Thesis advisor) / Chassin, Laurie (Committee member) / Aiken, Leona S. (Committee member) / Arizona State University (Publisher)
Created2016
Description
The comparison of between- versus within-person relations addresses a central issue in psychological research regarding whether group-level relations among variables generalize to individual group members. Between- and within-person effects may differ in magnitude as well as direction, and contextual multilevel models can accommodate this difference. Contextual multilevel models have been explicated mostly for cross-sectional data, but they can also be applied to longitudinal data where level-1 effects represent within-person relations and level-2 effects represent between-person relations. With longitudinal data, estimating the contextual effect allows direct evaluation of whether between-person and within-person effects differ. Furthermore, these models, unlike single-level models, permit individual differences by allowing within-person slopes to vary across individuals. This study examined the statistical performance of the contextual model with a random slope for longitudinal within-person fluctuation data.

A Monte Carlo simulation was used to generate data based on the contextual multilevel model, where sample size, effect size, and intraclass correlation (ICC) of the predictor variable were varied. The effects of simulation factors on parameter bias, parameter variability, and standard error accuracy were assessed. Parameter estimates were in general unbiased. Power to detect the slope variance and contextual effect was over 80% for most conditions, except some of the smaller sample size conditions. Type I error rates for the contextual effect were also high for some of the smaller sample size conditions. Conclusions and future directions are discussed.
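The contextual effect described above is the difference between the between-person and within-person slopes, obtained by decomposing the level-1 predictor into a person mean and a within-person deviation. A minimal simulated sketch with hypothetical effect sizes; a real analysis would fit a multilevel model with a random within-person slope, as in the study:

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y regressed on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical effect sizes (not from the dissertation)
b_within, b_between = 0.3, 0.8
rng = random.Random(3)
xbar, xdev, y = [], [], []
for _ in range(300):                       # 300 persons
    mu_i = rng.gauss(0, 1)                 # person's average predictor level
    for _ in range(5):                     # 5 occasions each
        x_it = mu_i + rng.gauss(0, 1)
        y_it = (b_between * mu_i
                + b_within * (x_it - mu_i)
                + rng.gauss(0, 0.5))
        xbar.append(mu_i)                  # between-person component
        xdev.append(x_it - mu_i)           # within-person component
        y.append(y_it)

# The two components are (nearly) orthogonal by construction, so separate
# simple slopes approximate the multiple-regression coefficients.
contextual = slope(xbar, y) - slope(xdev, y)
```

A nonzero `contextual` value is direct evidence that group-level relations do not generalize to individual group members, which is the substantive question the contextual model answers.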
ContributorsWurpts, Ingrid Carlson (Author) / Mackinnon, David P (Thesis advisor) / West, Stephen G. (Committee member) / Grimm, Kevin J. (Committee member) / Suk, Hye Won (Committee member) / Arizona State University (Publisher)
Created2016
Description
The parent-child relationship is one of the earliest and most formative experiences for social and emotional development. Synchrony, defined as the rhythmic patterning and quality of mutual affect, engagement, and physiological attunement, has been identified as a critical quality of a healthy mother-infant relationship. Although the salience of the quality of family interaction has been well-established, clinical and developmental research has varied widely in methods for observing and identifying influential aspects of synchrony. In addition, modern dynamic perspectives presume multiple factors converge in a complex system influenced by both nature and nurture, in which individual traits, behavior, and environment are inextricably intertwined within the system of dyadic relational units.

The present study aimed to directly examine and compare synchrony from three distinct approaches: observed microanalytic behavioral sequences, observed global dyadic qualities, and physiological attunement between mothers and infants. The sample consisted of 323 Mexican American mothers and their infants followed from the third trimester of pregnancy through the first year of life. Mothers were interviewed prenatally, observed at a home visit at 12 weeks postpartum, and were finally interviewed for child social-emotional problems at child age 12 months. Specific aspects of synchrony (microanalytical, global, and physiological) were examined separately as well as together to identify comparable and divergent qualities within the construct.

Findings indicated that multiple perspectives on synchrony are best examined together, but as independent qualities to account for varying characteristics captured by divergent systems. Dyadic relationships characterized by higher reciprocity, more time and flexibility in mutual non-negative engagement, and less tendency to enter negative or unengaged states were associated with fewer child social-emotional problems at child age 12 months. Lower infant cortisol was associated with higher levels of externalizing problems, and smaller differences between mother and child cortisol were associated with higher levels of child dysregulation. Results underscore the complex but important nature of synchrony as a salient mechanism underlying the social-emotional growth of children. A mutually engaged, non-negative, and reciprocal environment lays the foundation for the successful social and self-regulatory competence of infants in the first year of life.
ContributorsCoburn, Shayna Skelley (Author) / Crnic, Keith A (Thesis advisor) / Dishion, Thomas J (Committee member) / Mackinnon, David P (Committee member) / Luecken, Linda J. (Committee member) / Arizona State University (Publisher)
Created2015
Description
Understanding how adherence affects outcomes is crucial when developing and assigning interventions. However, interventions are often evaluated by conducting randomized experiments and estimating intent-to-treat effects, which ignore actual treatment received. Dose-response effects can supplement intent-to-treat effects when participants are offered the full dose but many receive only a partial dose due to nonadherence. Using these data, we can estimate the magnitude of the treatment effect at different levels of adherence, which serve as a proxy for different levels of treatment. In this dissertation, I conducted Monte Carlo simulations to evaluate when linear dose-response effects can be accurately and precisely estimated in randomized experiments comparing a no-treatment control condition to a treatment condition with partial adherence. Specifically, I evaluated the performance of confounder adjustment and instrumental variable methods when their assumptions were met (Study 1) and when their assumptions were violated (Study 2). In Study 1, the confounder adjustment and instrumental variable methods provided unbiased estimates of the dose-response effect across sample sizes (200, 500, 2,000) and adherence distributions (uniform, right skewed, left skewed). The adherence distribution affected power for the instrumental variable method. In Study 2, the confounder adjustment method provided unbiased or minimally biased estimates of the dose-response effect under no or weak (but not moderate or strong) unobserved confounding. The instrumental variable method provided extremely biased estimates of the dose-response effect under violations of the exclusion restriction (no direct effect of treatment assignment on the outcome); less severe violations of the exclusion restriction warrant further investigation.
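With random assignment serving as the instrument, the simplest instrumental variable estimator of a linear dose-response effect is the Wald ratio: the intent-to-treat effect on the outcome divided by the intent-to-treat effect on dose received. A minimal simulated sketch with hypothetical parameter values, under the assumption that the exclusion restriction holds:

```python
import random

def wald_iv(z, d, y):
    """Wald IV estimator: ratio of the intent-to-treat effect on the
    outcome to the intent-to-treat effect on dose received."""
    mean = lambda v: sum(v) / len(v)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    d1 = mean([di for zi, di in zip(z, d) if zi == 1])
    d0 = mean([di for zi, di in zip(z, d) if zi == 0])
    return (y1 - y0) / (d1 - d0)

effect = 2.0                    # hypothetical dose-response effect
rng = random.Random(11)
z, d, y = [], [], []
for i in range(4000):
    zi = i % 2                  # assignment, independent of confounders
    u = rng.gauss(0, 1)         # unobserved confounder of dose and outcome
    if zi:
        # Partial adherence; dose received also depends on the confounder.
        di = max(0.0, min(1.0, rng.random() + 0.2 * u))
    else:
        di = 0.0                # control group receives no dose
    yi = effect * di + u + rng.gauss(0, 1)   # exclusion restriction holds
    z.append(zi); d.append(di); y.append(yi)

est = wald_iv(z, d, y)          # consistent despite confounding by u
```

Note that a naive regression of the outcome on dose received would be biased by the confounder `u`; the instrument removes that bias, but only as long as assignment affects the outcome solely through the dose, which is the exclusion restriction whose violations Study 2 examined.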
ContributorsMazza, Gina L (Author) / Grimm, Kevin J. (Thesis advisor) / West, Stephen G. (Thesis advisor) / Mackinnon, David P (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created2018