Matching Items (5)
Description
Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rules for selecting the covariate and its split value in a Classification Tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to a logistic regression estimation model using the true propensity score model, Random Forests had the additional advantage of producing an unbiased estimated standard error and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between conditions of the simulation. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias of the average treatment effect estimate for the covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
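The weighting-by-the-odds adjustment described above is easy to sketch in code. Assuming propensity scores have already been estimated (by a Random Forest or any other model), a minimal NumPy illustration of the ATT weights and the weighted treatment effect estimate might look like this (function names are illustrative, not from the dissertation):

```python
import numpy as np

def odds_weights(ps, treated):
    """Weighting-by-the-odds (ATT) weights: treated units get weight 1;
    each control unit gets the odds e/(1-e) of its propensity score e."""
    ps = np.asarray(ps, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    return np.where(treated, 1.0, ps / (1.0 - ps))

def att_estimate(y, treated, ps):
    """Average treatment effect on the treated: difference between the
    treated mean and the odds-weighted control mean."""
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    w = odds_weights(ps, treated)
    return y[treated].mean() - np.average(y[~treated], weights=w[~treated])
```

Controls with propensity scores near those of the treated group receive weight near 1, while clearly untreated-looking controls are downweighted, which is what makes the comparison group resemble the treated group.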
Contributors: Cham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / Enders, Stephen G (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Lexical diversity (LD) has been used in a wide range of applications, producing a rich history in the field of speech-language pathology. However, identifying a robust measure to quantify LD has been challenging for clinicians and researchers. Recently, sophisticated techniques have been developed that purport to measure LD. Each is based on its own theoretical assumptions and employs different computational machinery. Therefore, it is not clear to what extent these techniques produce valid scores and how they relate to each other. Further, in the field of speech-language pathology, researchers and clinicians often use different methods to elicit various types of discourse, and it is an empirical question whether inferences drawn from analyzing one type of discourse generalize to other types. The current study examined a corpus of four types of discourse (procedures, eventcasts, storytelling, recounts) from 442 adults. Using four techniques (D; Maas; Measure of Textual Lexical Diversity, MTLD; Moving-Average Type-Token Ratio, MATTR), LD scores were estimated for each type. Subsequently, the data were modeled using structural equation modeling to uncover their latent structure. Results indicated that two estimation techniques (MATTR and MTLD) generated scores that were stronger indicators of the LD of the language samples. For the other two techniques, results were consistent with the presence of method factors representing construct-irrelevant sources of variance. A hierarchical factor analytic model indicated that a common factor underlay all combinations of discourse types and estimation techniques and was interpreted as a general construct of LD. Two discourse types (storytelling and eventcasts) were significantly stronger indicators of the underlying trait. These findings supplement our understanding of the validity of scores generated by different estimation techniques. Further, they enhance our knowledge of how productive vocabulary manifests itself across types of discourse that impose different cognitive and linguistic demands. They also offer clinicians and researchers a point of reference regarding techniques that measure the LD of a language sample and little else, and regarding the types of discourse that might be most informative for measuring the LD of individuals.
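Of the techniques named above, MATTR is the simplest to illustrate: it averages the type-token ratio over every consecutive window of a fixed length, which removes the dependence of raw TTR on sample length. A minimal sketch (the function name and window default are illustrative, not from the study):

```python
def mattr(tokens, window=50):
    """Moving-Average Type-Token Ratio: the mean type-token ratio
    computed over every consecutive window of `window` tokens."""
    if len(tokens) < window:
        # Fall back to the plain type-token ratio for short samples.
        return len(set(tokens)) / len(tokens)
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)
```

A maximally diverse sample scores 1.0; a sample that repeats one word scores 1/window, regardless of how long the sample runs.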
Contributors: Fergadiotis, Gerasimos (Author) / Wright, Heather H (Thesis advisor) / Katz, Richard (Committee member) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study presents a structural model of coping with dating violence. The model integrates abuse frequency and solution attribution to determine a college woman's choice of coping strategy. Three hundred twenty-four undergraduate women reported being targets of some physical abuse from a boyfriend and responded to questions regarding the abuse, their gender role beliefs, their solution attributions, and the coping behaviors they used. Although gender role beliefs and abuse severity were not significant predictors, solution attribution mediated between frequency of the abuse and coping. Abuse frequency had a positive effect on external solution attribution, and external solution attribution had a positive effect on the use of active coping, social support, denial, and acceptance.
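The mediated path reported here (abuse frequency → external solution attribution → coping) is commonly quantified by the product-of-coefficients indirect effect. A hedged sketch using ordinary least squares in NumPy, not the study's actual structural equation model (the function name is hypothetical):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect for simple mediation:
    a = slope of mediator m on predictor x;
    b = slope of outcome y on m, controlling for x;
    the indirect effect is a * b."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(m, dtype=float)
    y = np.asarray(y, dtype=float)
    ones = np.ones_like(x)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b
```

A positive product, as in the study's frequency → attribution → coping chain, indicates that more frequent abuse predicts greater external attribution, which in turn predicts greater use of the coping behaviors.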
Contributors: Bapat, Mona (Author) / Tracey, Terence J.G. (Thesis advisor) / Bernstein, Bianca (Committee member) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can bias model parameter estimation in single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at both the population and sample levels. Datasets were generated from a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation; however, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes raised concern. The effects of fitting the misspecified single-factor models on assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplifying the analysis model could lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
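The data-generating side of a simulation like this can be sketched compactly: under a bifactor model, each indicator loads on one general factor and at most one group factor, with uniqueness chosen so each indicator has unit variance. A minimal NumPy generator under those assumptions (all names and loading values are illustrative, not the dissertation's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def bifactor_sample(n, load_g, load_s, groups):
    """Draw n observations of len(load_g) indicators from a bifactor
    model: indicator j loads load_g[j] on a general factor and
    load_s[j] on group factor groups[j]; factors and uniquenesses are
    independent standard normal, scaled so each indicator variance is 1."""
    p = len(load_g)
    g = rng.standard_normal(n)                       # general factor
    s = rng.standard_normal((n, max(groups) + 1))    # group factors
    y = np.empty((n, p))
    for j in range(p):
        uniq = np.sqrt(max(1e-9, 1.0 - load_g[j] ** 2 - load_s[j] ** 2))
        y[:, j] = (load_g[j] * g
                   + load_s[j] * s[:, groups[j]]
                   + uniq * rng.standard_normal(n))
    return y
```

Fitting a single-factor model to data generated this way is exactly the dimensionality mismatch the study manipulates: the group-factor variance is absorbed into the uniquenesses or distorts the general-factor loadings.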
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
For this thesis, a Monte Carlo simulation was conducted to investigate the robustness of three latent interaction modeling approaches (constrained product indicator; generalized appended product indicator, GAPI; and latent moderated structural equations, LMS) under high degrees of nonnormality of the exogenous indicators, which had not been investigated in the previous literature. Results showed that the constrained product indicator and LMS approaches yielded biased estimates of the interaction effect when the exogenous indicators were highly nonnormal. When the violation of normality was not severe (symmetric with excess kurtosis < 1), the LMS approach with ML estimation yielded the most precise latent interaction effect estimates. The LMS approach with ML estimation also had the highest statistical power among the three approaches, given that the actual Type I error rates of the Wald and likelihood ratio tests of the interaction effect were acceptable. In highly nonnormal conditions, only the GAPI approach with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type I error rates for both the Wald and likelihood ratio tests of the interaction effect. No support was found for the use of the Satorra-Bentler or Yuan-Bentler ML corrections with any of the three methods.
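The product-indicator approaches compared above share one mechanical step: forming cross-products of the (mean-centered) indicators of the two exogenous latent variables to serve as indicators of the latent interaction; GAPI then leaves the product indicators' parameters unconstrained. A hedged sketch of that construction step only, in NumPy (the function name is illustrative):

```python
import numpy as np

def product_indicators(X, Z):
    """Form all mean-centered cross-products x_i * z_j from two
    indicator matrices X (n x p) and Z (n x q). These products serve
    as indicators of the latent interaction in product-indicator
    approaches to latent interaction modeling."""
    Xc = X - X.mean(axis=0)   # center to reduce multicollinearity
    Zc = Z - Z.mean(axis=0)
    prods = [Xc[:, i] * Zc[:, j]
             for i in range(Xc.shape[1])
             for j in range(Zc.shape[1])]
    return np.column_stack(prods)  # n x (p * q)
```

LMS, by contrast, needs no product indicators at all, which is part of why its behavior under nonnormality had to be studied separately.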
Contributors: Cham, Hei Ning (Author) / West, Stephen G. (Thesis advisor) / Aiken, Leona S. (Committee member) / Enders, Craig K. (Committee member) / Arizona State University (Publisher)
Created: 2010