Description

With improvements in technology, intensive longitudinal studies that permit the investigation of daily and weekly cycles in behavior have increased exponentially over the past few decades. Traditionally, when data have been collected on two variables over time, multivariate time series approaches that remove trends, cycles, and serial dependency have been used. These analyses permit the study of the relationship between random shocks (perturbations) in the presumed causal series and changes in the outcome series, but do not permit the study of the relationships between cycles. Liu and West (2016) proposed a multilevel approach that permitted the study of potential between-subject relationships between features of the cycles in two series (e.g., amplitude). However, I show that the application of the Liu and West approach is restricted to a small set of features and types of relationships between the series. Several authors (e.g., Boker & Graham, 1998) proposed a connected mass-spring model that appears to permit modeling of more general cyclic relationships. I show that the undamped connected mass-spring model is also limited and may be unidentified. To test the severity of the restrictions on the motion trajectories producible by the undamped connected mass-spring model, I mathematically derived their connection to the force equations of the undamped connected mass-spring system. The mathematical solution describes the domain of the trajectory pairs that are producible by the undamped connected mass-spring model. This set of producible trajectory pairs is highly restricted, and the restriction places major limitations on the application of the connected mass-spring model to psychological data. I used a simulation to demonstrate that even if a pair of psychological time-varying variables behaved exactly like two masses in an undamped connected mass-spring system, the connected mass-spring model would not yield adequate parameter estimates.
My simulation probed the performance of the connected mass-spring model as a function of several aspects of data quality, including the number of subjects, series length, sampling rate relative to the cycle, and measurement error in the data. The findings can be extended to damped and nonlinear connected mass-spring systems.
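The dynamical model at the heart of this abstract can be made concrete. The sketch below simulates an undamped connected (coupled) mass-spring system from its standard force equations; all parameter values are illustrative assumptions, not values taken from the thesis.

```python
def simulate(m1=1.0, m2=1.0, k1=4.0, k2=9.0, kc=1.0,
             x1=1.0, x2=0.0, v1=0.0, v2=0.0,
             dt=0.01, steps=2000):
    """Semi-implicit Euler integration of two coupled undamped oscillators.

    Force equations (hypothetical parameter values):
        m1 * a1 = -k1*x1 + kc*(x2 - x1)
        m2 * a2 = -k2*x2 - kc*(x2 - x1)
    """
    traj = []
    for _ in range(steps):
        a1 = (-k1 * x1 + kc * (x2 - x1)) / m1
        a2 = (-k2 * x2 - kc * (x2 - x1)) / m2
        v1 += a1 * dt  # update velocities first (keeps energy bounded)
        v2 += a2 * dt
        x1 += v1 * dt  # then positions
        x2 += v2 * dt
        traj.append((x1, x2))
    return traj

trajectory = simulate()
```

Each `(x1, x2)` pair is one sampled observation of the two coupled trajectories; fitting the connected mass-spring model to psychological data amounts to recovering the mass and spring parameters from such paired series.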
Contributors: Martynova, Elena (M.A.) (Author) / West, Stephen G. (Thesis advisor) / Amazeen, Polemnia (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Collider effects pose a major problem in psychological research. Colliders are third variables that bias the relationship between an independent and dependent variable when (1) the composition of a research sample is restricted by the scores on a collider variable or (2) researchers adjust for a collider variable in their statistical analyses. Both cases interfere with the accuracy and generalizability of statistical results. Despite their importance, collider effects remain relatively unknown in the social sciences. This research introduces both the conceptual and the mathematical foundation for collider effects and demonstrates how to calculate a collider effect and test it for statistical significance. Simulation studies examined the efficiency and accuracy of the collider estimation methods and tested the viability of Thorndike’s Case III equation as a potential solution to correcting for collider bias in cases of biased sample selection.
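The first case described above, sample restriction on a collider, is easy to demonstrate by simulation. In the hypothetical sketch below, X and Y are independent in the population, but both cause a collider C; restricting the sample to high values of C induces a spurious negative X-Y correlation. All data-generating values are made up for illustration.

```python
import random
import statistics

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(1)
n = 20000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(0, 1) for _ in range(n)]               # independent of X
C = [x + y + random.gauss(0, 1) for x, y in zip(X, Y)]   # collider: caused by both

sel = [i for i in range(n) if C[i] > 1.0]                # sample restricted on C
r_full = corr(X, Y)                                      # ~0 in the population
r_sel = corr([X[i] for i in sel], [Y[i] for i in sel])   # clearly negative
print(round(r_full, 3), round(r_sel, 3))
```

Adjusting for C in a regression of Y on X and C produces the same kind of bias, which is the second case the abstract describes.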
Contributors: Lamp, Sophia Josephine (Author) / Mackinnon, David P. (Thesis advisor) / Anderson, Samantha F. (Committee member) / Edwards, Michael C. (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Latent profile analysis (LPA), a type of finite mixture model, has grown in popularity due to its ability to detect latent classes, or unobserved subgroups, within a sample. Though numerous methods exist to determine the correct number of classes, past research has repeatedly demonstrated that no one method is consistently best, as each tends to struggle under specific conditions. Recently, the likelihood incremental percentage per parameter (LI3P), a method taking a new approach, was proposed and tested, yielding promising initial results. To evaluate this new method more thoroughly, this study simulated 50,000 datasets, manipulating factors such as sample size, class distance, number of items, and number of classes. After evaluating the performance of the LI3P on simulated data, the LI3P was applied to LPA models fit to an empirical dataset to illustrate the method's application. Results indicate that the LI3P performs in line with standard class enumeration techniques and primarily reflects class separation and the number of classes.
Contributors: Houpt, Russell Paul (Author) / Grimm, Kevin J. (Thesis advisor) / McNeish, Daniel (Committee member) / Edwards, Michael C. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

This research explores tests for statistical suppression. Suppression is a statistical phenomenon whereby the magnitude of an effect becomes larger when another variable is added to the regression equation. From a causal perspective, suppression occurs when there is inconsistent mediation or negative confounding. Several different estimators of suppression are evaluated conceptually and in a statistical simulation study in which suppression and non-suppression conditions were imposed. For each estimator without an existing standard error formula, one was derived in order to conduct significance tests and build confidence intervals. Overall, two of the estimators were biased and had poor coverage; a third worked well but had inflated Type I error rates when the population model was complete mediation. As a result of analyzing these three tests, a fourth was considered in the late stages of the project and showed promising results that address the concerns raised by the other tests. When the tests were applied to real data, they gave similar, consistent results.
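The defining signature of suppression, a coefficient growing in magnitude when another variable enters the equation, can be illustrated with a small simulation of inconsistent mediation. The sketch below is hypothetical and does not reproduce the thesis's estimators: the direct effect of X on Y is positive (c' = 0.5) while the indirect path through M is negative (a = 1.0, b = -0.4), so the total effect is small (0.1) and adjusting for M enlarges the X coefficient.

```python
import random
import statistics

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

random.seed(2)
n = 50000
X = [random.gauss(0, 1) for _ in range(n)]
M = [1.0 * x + random.gauss(0, 1) for x in X]                       # a path
Y = [0.5 * x - 0.4 * m + random.gauss(0, 1) for x, m in zip(X, M)]  # c' and b paths

# Y ~ X: estimates the total effect c' + a*b = 0.1
b_simple = cov(X, Y) / cov(X, X)
# Y ~ X + M (closed-form two-predictor OLS): estimates the direct effect c' = 0.5
den = cov(X, X) * cov(M, M) - cov(X, M) ** 2
b_adjusted = (cov(M, M) * cov(X, Y) - cov(X, M) * cov(M, Y)) / den
print(round(b_simple, 2), round(b_adjusted, 2))
```

The adjusted coefficient is about five times larger in magnitude than the simple one, which is exactly the suppression pattern the abstract defines.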
Contributors: Muniz, Felix (Author) / Mackinnon, David P. (Thesis advisor) / Anderson, Samantha F. (Committee member) / McNeish, Daniel M. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Measurement invariance exists when a scale functions equivalently across people and is therefore essential for making meaningful group comparisons. Often, measurement invariance is examined with independent and identically distributed data; however, there are times when the participants are clustered within units, creating dependency in the data. Researchers have taken different approaches to address this dependency when studying measurement invariance (e.g., Kim, Kwok, & Yoon, 2012; Ryu, 2014; Kim, Yoon, Wen, Luo, & Kwok, 2015), but there are no comparisons of the various approaches. The purpose of this master's thesis was to investigate measurement invariance in multilevel data when the grouping variable was a level-1 variable using five different approaches. Publicly available data from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K) was used as an illustrative example. The construct of early behavior, which was made up of four teacher-rated behavior scales, was evaluated for measurement invariance in relation to gender. In the specific case of this illustrative example, the statistical conclusions of the five approaches were in agreement (i.e., the loading of the externalizing item and the intercept of the approaches to learning item were not invariant). Simulation work should be done to investigate in which situations the conclusions of these approaches diverge.
Contributors: Gunn, Heather (Author) / Grimm, Kevin J. (Thesis advisor) / Aiken, Leona S. (Committee member) / Suk, Hye Won (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Statistical inference from mediation analysis applies to populations; however, researchers and clinicians may be interested in making inferences about individual clients or small, localized groups of people. Person-oriented approaches focus on the differences between people, or latent groups of people, to ask how individuals differ across variables, and can help researchers avoid ecological fallacies when making inferences about individuals. Traditional variable-oriented mediation assumes the population undergoes a homogeneous reaction to the mediating process. However, mediation is also described as an intra-individual process in which each person passes from a predictor, through a mediator, to an outcome (Collins, Graham, & Flaherty, 1998). Configural frequency mediation (CFM) is a person-oriented analysis of contingency tables that has not been well studied or implemented since its introduction in the literature (von Eye, Mair, & Mun, 2010; von Eye, Mun, & Mair, 2009). The purpose of this study is to describe CFM and investigate its statistical properties while comparing it to traditional and causal inference mediation methods. The results show that joint significance mediation tests result in better Type I error rates but limit the person-oriented interpretations of CFM. Although the estimators for logistic regression and causal mediation differ, both perform well in terms of Type I error and power; the causal estimator, however, had higher bias than expected, which is discussed in the limitations section.
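The joint significance test mentioned in the results can be sketched in a few lines: the indirect effect a*b is declared significant only when the X-to-M path (a) and the M-to-Y path controlling for X (b) are each individually significant. The simulation below is a hypothetical illustration; the data-generating values and the large-sample normal approximation are assumptions of this sketch, not details from the thesis.

```python
import math
import random
import statistics

def slope_se(x, y):
    """Simple-regression slope (with intercept) and its standard error."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (len(x) - 2)
    return b, math.sqrt(s2 / sxx)

def p_two_sided(z):
    """Two-sided p-value from a standard normal (large-sample approximation)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(3)
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.3 * x + random.gauss(0, 1) for x in X]   # a path = 0.3 (assumed)
Y = [0.3 * m + random.gauss(0, 1) for m in M]   # b path = 0.3, no direct effect

a, se_a = slope_se(X, M)
# b path controlling for X via Frisch-Waugh: residualize X out of M and Y
c_tot, _ = slope_se(X, Y)
mX, mM, mY = statistics.fmean(X), statistics.fmean(M), statistics.fmean(Y)
rM = [m - mM - a * (x - mX) for m, x in zip(M, X)]
rY = [y - mY - c_tot * (x - mX) for y, x in zip(Y, X)]
b, se_b = slope_se(rM, rY)

# Joint significance: both paths must individually reject the null
significant = p_two_sided(a / se_a) < .05 and p_two_sided(b / se_b) < .05
print(round(a * b, 3), significant)
```

CFM replaces these continuous regressions with cell-wise tests on a contingency table, which is what gives it (and costs it, per the abstract) its person-oriented interpretation.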
Contributors: Smyth, Heather Lynn (Author) / Mackinnon, David P. (Thesis advisor) / Grimm, Kevin J. (Committee member) / Edwards, Michael C. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Scale scores play a significant role in research and practice in a wide range of areas such as education, psychology, and health sciences. Although the methods of scale scoring have advanced considerably over the last 100 years, researchers and practitioners have generally been slow to implement these advances. Many topics fall under this umbrella, but the current study focuses on two. The first is the relationship between subscores and total scores. Many scales in psychological and health research are designed to yield subscores, yet it is common to see total scores reported instead. Simplifying scores in this way, however, may have important implications for researchers and scale users in terms of interpretation and use. The second topic is subscore augmentation: if there are subscores, how much value is there in using a subscore augmentation method? Most people using psychological assessments are unfamiliar with score augmentation techniques and the potential benefits they may have over the traditional sum score approach. The current study borrows methods from education to explore the magnitude of improvement from using augmented scores rather than observed scores. Data were simulated using the Graded Response Model. Factors controlled in the simulation were the number of subscales, number of items per subscale, level of correlation between subscales, and sample size. Four estimates of the true subscore were considered: raw, subscore-adjusted, total score-adjusted, and joint score-adjusted. Results from the simulation suggest that the score adjusted with total score information may perform poorly when the level of inter-subscore correlation is 0.3. Joint scores perform well most of the time, and the subscore-adjusted and joint-adjusted scores always outperformed raw scores. Finally, general advice for applied users is provided.
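The shrinkage idea behind score augmentation can be illustrated with Kelley's classic regressed-score estimate, which pulls an observed subscore toward the group mean in proportion to its unreliability. This is only a simplified stand-in, with made-up numbers: the augmentation methods studied in the thesis also borrow strength from the other subscores and the total score, which Kelley's univariate formula does not.

```python
def kelley(observed, reliability, group_mean):
    """Kelley's regressed true-score estimate: rel*X + (1 - rel)*mean."""
    return reliability * observed + (1.0 - reliability) * group_mean

# A subscore of 28 with reliability .75 in a group averaging 24 (hypothetical)
print(kelley(28.0, 0.75, 24.0))  # 27.0 -- shrunk 25% of the way toward the mean
```

The less reliable a subscore is, the more an augmented estimate leans on the other information available, which is why augmentation tends to help most for short subscales.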
Contributors: Gardner, Molly (Author) / Edwards, Michael C. (Thesis advisor) / McNeish, Daniel (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Psychologists report effect sizes in randomized controlled trials to facilitate interpretation and inform clinical or policy guidance. Since commonly used effect size measures (e.g., standardized mean difference) are not sensitive to heterogeneous treatment effects, methodologists have suggested the use of an alternative effect size δ, a between-subjects causal parameter describing the probability that the outcome of a random participant in the treatment group is better than the outcome of another random participant in the control group. Although this effect size is useful, researchers could mistakenly use δ to describe its within-subject analogue, ψ, the probability that an individual will do better under the treatment than the control. Hand’s paradox describes the situation where ψ and δ are on opposing sides of 0.5: δ may imply most are helped whereas the (unknown) underlying ψ indicates that most are harmed by the treatment. The current study used Monte Carlo simulations to investigate plausible situations under which Hand’s paradox does and does not occur, tracked the magnitude of the discrepancy between ψ and δ, and explored whether the size of the discrepancy could be reduced with a relevant covariate. The findings suggested that although the paradox should not occur under bivariate normal data conditions in the population, there could be sample cases with the paradox. The magnitude of the discrepancy between ψ and δ depended on both the size of the average treatment effect and the underlying correlation between the potential outcomes, ρ. Smaller effects led to larger discrepancies when ρ < 0 and ρ = 1, whereas larger effects led to larger discrepancies when 0 < ρ < 1. It was useful to consider a relevant covariate when calculating ψ and δ. Although ψ and δ were still discrepant within covariate levels, results indicated that conditioning upon relevant covariates is still useful in describing heterogeneous treatment effects.
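The distinction between ψ and δ is straightforward to explore by Monte Carlo simulation. In the hypothetical sketch below, δ compares a random treated person with a different random control person, while ψ compares each person's own two potential outcomes; ρ (rho) is the unobservable correlation between potential outcomes. The effect size and ρ values are illustrative only, not conditions from the thesis.

```python
import math
import random

def psi_delta(effect=0.3, rho=0.8, n=100000, seed=4):
    """Estimate the within-person (psi) and between-person (delta) probabilities."""
    rng = random.Random(seed)
    yc, yt = [], []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        yc.append(z1)                 # potential outcome under control
        yt.append(z2 + effect)        # potential outcome under treatment
    psi = sum(t > c for t, c in zip(yt, yc)) / n          # same person's pair
    shuffled = yc[:]
    rng.shuffle(shuffled)                                  # break the pairing
    delta = sum(t > c for t, c in zip(yt, shuffled)) / n   # different people
    return psi, delta

psi, delta = psi_delta()
print(round(psi, 2), round(delta, 2))
```

With bivariate normal potential outcomes, a positive effect, and ρ = 0.8, both probabilities land on the same side of 0.5 (no paradox), but ψ and δ are visibly discrepant, which is the magnitude question the study tracks.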
Contributors: Liu, Xinran (Author) / Anderson, Samantha F. (Thesis advisor) / McNeish, Daniel (Committee member) / MacKinnon, David (Committee member) / Arizona State University (Publisher)
Created: 2023