Matching Items (68)
Description

Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rule for selecting the covariate and its split value in a Classification Tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could potentially produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to the logistic regression estimation model using the true propensity score model, Random Forests had an additional advantage in producing an unbiased estimated standard error and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between conditions of the simulation. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias of the average treatment effect estimate for the covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
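As a rough illustration of the estimation approach (a minimal sketch with assumed data-generating values, not the dissertation's simulation design), a random forest can supply propensity scores that are then converted into weighting-by-the-odds weights:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# treatment assignment with an interaction and a nonlinear term
logit = 0.5 * x1 + 0.5 * x1 * x2 + 0.3 * x2 ** 2 - 0.5
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))
y = 2.0 * treat + x1 + x2 + rng.normal(size=n)  # true treatment effect = 2

X = np.column_stack([x1, x2])
# max_features is the number of covariates sampled at each split, one of the
# model specifications the abstract refers to; min_samples_leaf smooths the
# in-sample probabilities
rf = RandomForestClassifier(n_estimators=500, max_features=1,
                            min_samples_leaf=25, random_state=0)
ps = rf.fit(X, treat).predict_proba(X)[:, 1].clip(0.01, 0.99)

# weighting by the odds: treated units get weight 1, controls ps / (1 - ps),
# so the weighted estimate targets the average treatment effect on the treated
w = np.where(treat == 1, 1.0, ps / (1 - ps))
att = y[treat == 1].mean() - np.average(y[treat == 0], weights=w[treat == 0])
```

Clipping the estimated scores away from 0 and 1 keeps the odds weights from exploding for a few extreme observations.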
Contributors: Cham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / Enders, Stephen G (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Research demonstrating the importance of the paternal role has been largely conducted using samples of Caucasian men, leaving a gap in what is known about fathering in minority cultures. Family systems theories highlight the dynamic interrelations between familial roles and relationships, and suggest that comprehensive studies of fathering require attention to the broad family and cultural context. During the early infancy period, mothers' and fathers' postpartum adjustment may represent a critical source of influence on father involvement. For the current study, Mexican American (MA) women (N = 125) and a subset of their romantic partners/biological fathers (N = 57) reported on their depressive symptoms and levels of father involvement (paternal engagement, accessibility, and responsibility) during the postpartum period. Descriptive analyses suggested that fathers are involved in meaningful levels of care during infancy. Greater paternal postpartum depression (PPD) was associated with lower levels of father involvement. Maternal PPD interacted with paternal gender role attitudes to predict father involvement. At higher levels of maternal PPD, involvement increased among fathers adhering to less segregated gender role attitudes and decreased among fathers who endorsed more segregated gender role attitudes. Within select models, differences in the relations were observed between mothers' and fathers' reports of paternal involvement. Results bring attention to the importance of examining contextual influences on early fathering in MA families and highlight the unique information that may be gathered from separate maternal and paternal reports of father involvement.
Contributors: Roubinov, Danielle S (Author) / Luecken, Linda J. (Thesis advisor) / Crnic, Keith A (Committee member) / Enders, Craig K. (Committee member) / Gonzales, Nancy A. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

In accordance with Principal-Agent Theory, Property Rights Theory, Incentive Theory, and Human Capital Theory, firms face agency problems due to the “separation of ownership and management”, which calls for effective corporate governance. Ownership structure is a core element of corporate governance. Differences in ownership structures may therefore result in differential incentives in governance through the selection of senior management and the design of the senior management compensation system. This thesis investigates four firms with four different types of ownership structures: a publicly listed firm with a controlling interest held by the state, a publicly listed firm with a non-state-owned controlling interest, a publicly listed firm with a family-owned controlling interest, and a Sino-foreign joint venture. Using a case study approach, I focus on two dimensions of ownership structure characteristics, ownership diversification and differences in property rights, to document whether there are systematic differences in governance participation and executive compensation design. Specifically, I focus on whether such differences are reflected in management selection (which is linked to adverse selection and moral hazard problems) and in compensation design (the choice of performance measurements, performance pay, and stock options or restricted stock). The results are consistent with my expectation: the nature of the ownership structure does affect senior management compensation design. Policy implications are discussed accordingly.
Contributors: Gao, Shenghua (Author) / Pei, Ker-Wei (Thesis advisor) / Li, Feng (Committee member) / Shen, Wei (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Missing data are common in psychology research and can lead to bias and reduced power if not properly handled. Multiple imputation is a state-of-the-art missing data method recommended by methodologists. Multiple imputation methods can generally be divided into two broad categories: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution (e.g., multivariate normal). FCS, on the other hand, imputes variables one at a time, drawing missing values from a series of univariate distributions. In the single-level context, these two approaches have been shown to be equivalent with multivariate normal data. However, less is known about the similarities and differences of these two approaches with multilevel data, and the methodological literature provides no insight into the situations under which the approaches would produce identical results. This document examined five multilevel multiple imputation approaches (three JM methods and two FCS methods) that have been proposed in the literature. An analytic section shows that only two of the methods (one JM method and one FCS method) used imputation models equivalent to a two-level joint population model that contained random intercepts and different associations across levels. The other three methods employed imputation models that differed from the population model primarily in their ability to preserve distinct level-1 and level-2 covariances. I verified the analytic work with computer simulations, and the simulation results also showed that imputation models that failed to preserve level-specific covariances produced biased estimates. The studies also highlighted conditions that exacerbated the amount of bias produced (e.g., bias was greater for conditions with small cluster sizes). The analytic work and simulations lead to a number of practical recommendations for researchers.
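The JM/FCS distinction the abstract describes can be illustrated in a single-level setting (not the dissertation's multilevel simulations) with scikit-learn's IterativeImputer, an FCS-style chained-equations imputer; the variable names and parameter values below are illustrative assumptions:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)
z = 0.3 * x + 0.3 * y + rng.normal(scale=0.8, size=n)
data = np.column_stack([x, y, z])

# impose ~20% MCAR missingness on y and z; x stays complete
mask = rng.random((n, 2)) < 0.2
data[:, 1][mask[:, 0]] = np.nan
data[:, 2][mask[:, 1]] = np.nan

# FCS logic: each incomplete variable is imputed in turn from a univariate
# model conditioned on the others; sample_posterior=True draws from the
# conditional distribution rather than imputing the conditional mean,
# mimicking one chain of a multiple-imputation run
imp = IterativeImputer(sample_posterior=True, max_iter=10, random_state=1)
completed = imp.fit_transform(data)
```

A JM approach would instead draw all missing values at once from a joint (e.g., multivariate normal) model; in this single-level normal setting the two approaches yield equivalent results, which is the baseline the dissertation extends to multilevel data.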
Contributors: Mistler, Stephen (Author) / Enders, Craig K. (Thesis advisor) / Aiken, Leona (Committee member) / Levy, Roy (Committee member) / West, Stephen G. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Although the issue of factorial invariance has received increasing attention in the literature, the focus is typically on differences in factor structure across groups that are directly observed, such as those denoted by sex or ethnicity. While establishing factorial invariance across observed groups is a requisite step in making meaningful cross-group comparisons, failure to attend to possible sources of latent class heterogeneity in the form of class-based differences in factor structure has the potential to compromise conclusions with respect to observed groups and may result in misguided attempts at instrument development and theory refinement. The present studies examined the sensitivity of two widely used confirmatory factor analytic model fit indices, the chi-square test of model fit and RMSEA, to latent class differences in factor structure. Two primary questions were addressed. The first of these concerned the impact of latent class differences in factor loadings with respect to model fit in a single sample reflecting a mixture of classes. The second question concerned the impact of latent class differences in configural structure on tests of factorial invariance across observed groups. The results suggest that both indices are highly insensitive to class-based differences in factor loadings. Across sample size conditions, models with medium (0.2) sized loading differences were rejected by the chi-square test of model fit at rates just slightly higher than the nominal .05 rate of rejection that would be expected under a true null hypothesis. While rates of rejection increased somewhat when the magnitude of loading difference increased, even the largest sample size with equal class representation and the most extreme violations of loading invariance only had rejection rates of approximately 60%. 
RMSEA was also insensitive to class-based differences in factor loadings, with mean values across conditions suggesting a degree of fit that would generally be regarded as exceptionally good in practice. In contrast, both indices were sensitive to class-based differences in configural structure in the context of a multiple group analysis in which each observed group was a mixture of classes. However, preliminary evidence suggests that this sensitivity may be contingent on the form of the cross-group model misspecification.
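The two fit indices the study examines can be computed from a fitted model's test statistic using the standard formulas (generic CFA machinery, not the study's simulation code; the chi-square value, degrees of freedom, and sample size below are made-up numbers):

```python
import math
from scipy import stats

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 54.2, 40, 500          # hypothetical fitted-model values
p_value = stats.chi2.sf(chi2, df)    # chi-square test of exact fit
fit = rmsea(chi2, df, n)             # roughly .027, "good" by common cutoffs
```

The insensitivity result in the abstract corresponds to mixtures producing chi-square values near their null expectation (p above .05) and RMSEA values well below conventional cutoffs such as .05.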
Contributors: Blackwell, Kimberly Carol (Author) / Millsap, Roger E (Thesis advisor) / Aiken, Leona S. (Committee member) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The theme of this work is the development of fast numerical algorithms for sparse optimization and their applications in medical imaging and source localization using sensor array processing. Owing to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties when solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method is proposed for solving the large-scale TV-$\ell_1$ regularized inverse problem. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm to magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm with the aid of the $\ell_p$-norm can resolve closely distributed sources with higher accuracy than other existing methods.
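The "simple functions with closed form solutions" that splitting and augmented Lagrangian methods exploit include the soft-thresholding operator, the closed-form prox of the $\ell_1$ term. A generic sketch (plain ISTA on a synthetic CS problem, not the dissertation's TV-$\ell_1$ algorithm; all sizes and values are assumptions):

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=3000):
    """Proximal gradient (ISTA) for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 200))           # under-determined sensing matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 1.5]  # 3-sparse signal
b = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = ista(A, b, lam=0.1)
```

Each iteration is a gradient step on the smooth data-fidelity term followed by the componentwise closed-form shrinkage, which is the structural idea the faster augmented Lagrangian scheme builds on.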
Contributors: Shen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Designing studies that use latent growth modeling to investigate change over time calls for optimal approaches for conducting power analysis for a priori determination of required sample size. This investigation (1) studied the impacts of variations in specified parameters, design features, and model misspecification in simulation-based power analyses and (2) compared power estimates across three common power analysis techniques: the Monte Carlo method; the Satorra-Saris method; and the method developed by MacCallum, Browne, and Cai (MBC). Choice of sample size, effect size, and slope variance parameters markedly influenced power estimates; however, level-1 error variance and number of repeated measures (3 vs. 6) when study length was held constant had little impact on resulting power. Under some conditions, having a moderate versus small effect size or using a sample size of 800 versus 200 increased power by approximately .40, and a slope variance of 10 versus 20 increased power by up to .24. Decreasing error variance from 100 to 50, however, increased power by no more than .09 and increasing measurement occasions from 3 to 6 increased power by no more than .04. Misspecification in level-1 error structure had little influence on power, whereas misspecifying the form of the growth model as linear rather than quadratic dramatically reduced power for detecting differences in slopes. Additionally, power estimates based on the Monte Carlo and Satorra-Saris techniques never differed by more than .03, even with small sample sizes, whereas power estimates for the MBC technique appeared quite discrepant from the other two techniques. Results suggest the choice between using the Satorra-Saris or Monte Carlo technique in a priori power analyses for slope differences in latent growth models is a matter of preference, although features such as missing data can only be considered within the Monte Carlo approach. 
Further, researchers conducting power analyses for slope differences in latent growth models should pay greatest attention to estimating slope difference, slope variance, and sample size. Arguments are also made for examining model-implied covariance matrices based on estimated parameters and graphic depictions of slope variance to help ensure parameter estimates are reasonable in a priori power analysis.
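The Monte Carlo technique the abstract compares can be sketched generically: simulate many datasets under the assumed growth process, test the group difference in slopes in each, and take the rejection rate as the power estimate. This is a toy illustration of the principle, not the study's latent growth models; per-subject OLS slopes stand in for a fitted growth model, and all parameter values are assumptions:

```python
import numpy as np
from scipy import stats

def simulate_power(n_per_group=100, slope_diff=0.3, slope_sd=0.5,
                   error_sd=1.0, occasions=4, reps=500, alpha=0.05, seed=3):
    """Monte Carlo power for a two-group difference in growth slopes."""
    rng = np.random.default_rng(seed)
    t = np.arange(occasions)
    sxx = ((t - t.mean()) ** 2).sum()
    rejections = 0
    for _ in range(reps):
        slopes = []
        for mean_slope in (0.0, slope_diff):
            # random slopes around the group mean, plus occasion-level error
            subj = rng.normal(mean_slope, slope_sd, n_per_group)
            ys = subj[:, None] * t + rng.normal(0, error_sd,
                                                (n_per_group, occasions))
            # per-subject OLS slope as a simple stand-in for a growth model
            est = ((ys - ys.mean(1, keepdims=True)) @ (t - t.mean())) / sxx
            slopes.append(est)
        _, p = stats.ttest_ind(slopes[0], slopes[1])
        rejections += p < alpha
    return rejections / reps
```

Calling `simulate_power(slope_diff=0.0)` recovers roughly the nominal alpha level, while a nonzero `slope_diff` yields the estimated power; varying the arguments mirrors the design factors (effect size, sample size, error variance, occasions) studied in the dissertation.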
Contributors: Van Vleet, Bethany Lucía (Author) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Enders, Craig K. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This dissertation examines a planned missing data design in the context of mediational analysis. The study considered a scenario in which the high cost of an expensive mediator limited sample size, but in which less expensive mediators could be gathered on a larger sample size. Simulated multivariate normal data were generated from a latent variable mediation model with three observed indicator variables, M1, M2, and M3. Planned missingness was implemented on M1 under the missing completely at random mechanism. Five analysis methods were employed: latent variable mediation model with all three mediators as indicators of a latent construct (Method 1), auxiliary variable model with M1 as the mediator and M2 and M3 as auxiliary variables (Method 2), auxiliary variable model with M1 as the mediator and M2 as a single auxiliary variable (Method 3), maximum likelihood estimation including all available data but incorporating only mediator M1 (Method 4), and listwise deletion (Method 5).

The main outcome of interest was empirical power to detect the mediated effect. The main effects of mediation effect size, sample size, and missing data rate performed as expected, with power increasing for larger mediation effect sizes, larger sample sizes, and lower missing data rates. Consistent with expectations, power was greatest for analysis methods that included all three mediators, and power decreased for analysis methods that included less information. Across all design cells relative to the complete data condition, Method 1 with 20% missingness on M1 produced only a 2.06% loss in power for the mediated effect; with 50% missingness, a 6.02% loss; and with 80% missingness, only an 11.86% loss. Method 2 exhibited a 20.72% power loss at 80% missingness, even though the total amount of data utilized was the same as for Method 1. Methods 3-5 exhibited greater power loss. Compared to an average power loss of 11.55% across all levels of missingness for Method 1, the average power losses for Methods 3, 4, and 5 were 23.87%, 29.35%, and 32.40%, respectively. In conclusion, planned missingness in a multiple mediator design may permit higher quality characterization of the mediator construct at feasible cost.
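The data-generating side of the design can be sketched as follows (parameter values, loadings, and the missingness rate are illustrative assumptions, not the study's simulation conditions): indicators M1-M3 are generated from a latent mediator, and planned missingness is then imposed on the expensive indicator M1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, b, miss_rate = 200, 0.4, 0.4, 0.5   # assumed a-path, b-path, 50% planned missingness
x = rng.normal(size=n)
latent_m = a * x + rng.normal(size=n)      # latent mediator (a-path from X)
loadings = np.array([0.8, 0.7, 0.7])       # assumed factor loadings for M1-M3
M = latent_m[:, None] * loadings + rng.normal(scale=0.6, size=(n, 3))
y = b * latent_m + rng.normal(size=n)      # outcome (b-path from the mediator)

# planned missingness: delete M1 for a random subset, MCAR by construction
# because the deleted cases are chosen by the design, not by the data
m1 = M[:, 0].copy()
m1[rng.random(n) < miss_rate] = np.nan
```

Because the design controls which cases lack M1, the MCAR assumption holds by construction, which is what licenses the maximum likelihood and auxiliary-variable analyses compared in the study.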
Contributors: Baraldi, Amanda N (Author) / Enders, Craig K. (Thesis advisor) / Mackinnon, David P (Thesis advisor) / Aiken, Leona S. (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Cold chain logistics refers to keeping food within a suitable temperature range from production until consumption, in order to preserve its quality and reduce losses during distribution. Compared with traditional logistics, cold chain logistics is a more complex systemic undertaking, and under the influence of policy and market demand it has been developing rapidly. However, cold chain logistics enterprises have long faced serious obstacles to growth because of their small scale, limited fixed assets, narrow service scope, and weak service standardization; the core problem is financing. The government guides and encourages the development of cold chain logistics industrial parks, pushing the parks' investors and developers to build platforms that create agglomeration effects among the cold chain enterprises in a park and solve those enterprises' financing problems through financial services. Supporting the development of cold chain logistics enterprises through the integration of industry and finance has become the main approach in, and the future trend of, the industry's development.

This study focuses on how the financial services of cold chain logistics industrial parks support the development of cold chain logistics enterprises. The main research contents are as follows. First, based on the theory of industry-finance integration, it maps the relationship between cold chain logistics enterprises and industrial parks and explores, from both the supply and demand sides, the scope, types, and characteristics of the financial services involved. Second, based on platform theory, it builds a research model of cold chain logistics enterprises' adoption of park financial services, explores the operational factors through which financial services affect these enterprises, and analyzes the factors and pathways behind adoption. Third, based on information asymmetry theory, it examines the moderating roles of information technology support and knowledge sharing in the process by which cold chain logistics enterprises adopt the financial services that parks provide. It also surveys the risks industrial parks may face in providing financial services and formulates standards for admitting cold chain logistics enterprises into a park so as to guard against those risks.

Using an empirical approach, the study carried out field visits and expert interviews at 18 domestic enterprises related to cold chain logistics, including industrial parks, logistics parks, cold chain logistics operators, trade and distribution firms, and financial firms; on that basis it developed a questionnaire, collected data from 268 enterprises, and tested the hypotheses with structural equation modeling. The findings show that the tangibility, reliability, empathy, and economy of financial services significantly affect cold chain logistics enterprises' adoption of park financial services, whereas the effect of responsiveness is not significant; likewise, the moderating effects of information technology support and knowledge sharing are not significant. Finally, the study proposes preventive strategies and measures for the risks industrial parks face in attracting cold chain logistics enterprises with financial services, and for the risks those enterprises face in adopting them.
Contributors: Yang, Su (Author) / Shen, Wei (Thesis advisor) / Chen, Xinlei (Thesis advisor) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Wealth management is an industry with a high degree of information asymmetry, so investors need to reduce their own uncertainty as much as possible when making investment decisions. A review of the literature shows that building trust in order to eliminate uncertainty is a strategy many investors adopt to aid their decisions. Historically, the 2007-2008 financial crisis in the United States led to a severe loss of investor trust in wealth management institutions, and the same could happen in China's wealth management market. This dissertation therefore takes that problem as its focus, studying in depth the trust that investors in wealth management companies place in their financial advisors. The study finds that, comparing the platform with the advisor, investors place more weight on the platform's reputation: most investors believe the platform's credibility is higher than the advisor's. This does not mean that advisors are unimportant, however. Further analysis shows that most investors form a personal connection with their advisor, and that this personal relationship helps strengthen the client's tie to the platform. Investors regard industry experience, sincerity, trustworthy speech, and a sense of responsibility as important factors that reinforce this relationship. Finally, investors' trust in the Jupai platform is sustained mainly by their trust in its financial advisors, and that trust in advisors derives mainly from affective trust. These findings have strategic implications for wealth management platforms.
Contributors: Wu, Qimin (Author) / Shen, Wei (Thesis advisor) / Chang, Chun (Thesis advisor) / Zhu, Hongquan (Committee member) / Arizona State University (Publisher)
Created: 2019