Matching Items (22)
136078-Thumbnail Image.png
Description
During the Third Wave of Democratization, the United States has influenced many different cultures through politics and social interests. This influence has been exerted chiefly through American marketing and advertising, and many companies are a major reason that the United States is a superpower today.
ContributorsNebeker, Garrett Albert (Author) / Wilson, Jeffrey (Thesis director) / Reiser, Mark (Committee member) / Barrett, The Honors College (Contributor) / Department of Finance (Contributor) / W. P. Carey School of Business (Contributor)
Created2015-05
136139-Thumbnail Image.png
Description
Objective: To assess and quantify the effect of states' price transparency regulations (hereafter, PTR) on healthcare pricing.

Data Sources: I use the Healthcare Cost and Utilization Project’s Nationwide Inpatient Sample (NIS) from 2000 to 2011. The NIS is a 20% sample of all inpatient claims. The Manhattan Institute supplied data on the availability of health savings accounts in each state. State PTR implementation dates were gathered by Hans Christensen, Eric Floyd, and Mark Maffett of the University of Chicago’s Booth School of Business by contacting the health department, hospital association, or website controller in each state.

Study Design: The NIS data were collapsed by procedure, hospital, and year, providing averages for the dependent variable, Cost, and a host of covariates. Cost is the product of Total Charges within the NIS and the hospital’s cost-to-charge ratio. A new binary variable, PTR, was defined as ‘0’ if the year was strictly before the disclosure website’s implementation year, ‘1’ for the years afterwards, and missing for the year of implementation. Then, using multivariate OLS regression with fixed-effects modeling, the change in cost from before to after the year of implementation is estimated.
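As a hedged sketch of this design (all numbers, names, and the simple within-hospital estimator below are illustrative simulations, not the study's data or code), the fixed-effects estimate of the PTR effect on log cost can be computed like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 50 hospitals, 10 years (all values simulated).
n_hosp, n_years = 50, 10
hospital = np.repeat(np.arange(n_hosp), n_years)
year = np.tile(np.arange(2000, 2010), n_hosp)

# Each hospital's state launches a disclosure website in a random year;
# PTR = 0 before, 1 strictly after (the study drops the launch year itself,
# which is simplified away here).
adopt = rng.integers(2002, 2009, size=n_hosp)
ptr = (year > adopt[hospital]).astype(float)

# log(Cost) with hospital fixed effects and a built-in -7% PTR effect.
hosp_fe = rng.normal(8.0, 0.5, size=n_hosp)
log_cost = hosp_fe[hospital] - 0.07 * ptr + rng.normal(0, 0.05, size=hospital.size)

def demean(x, g, k):
    """Subtract group means: the 'within' (fixed-effects) transformation."""
    means = np.bincount(g, weights=x, minlength=k) / np.bincount(g, minlength=k)
    return x - means[g]

# OLS on demeaned data recovers the within-hospital PTR coefficient.
y_t = demean(log_cost, hospital, n_hosp)
x_t = demean(ptr, hospital, n_hosp)
beta = (x_t @ y_t) / (x_t @ x_t)
print(round(beta, 3))  # should land near the simulated -0.07
```

The within transformation absorbs the hospital fixed effects, so only before-versus-after variation inside each hospital identifies the coefficient.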

Principal Findings: The analysis estimates that PTR decreases the average cost per procedure by 7%. Specifications identify within-state, within-hospital, and within-procedure variation, and report that 78% of the cost decrease is due to within-hospital, within-procedure price discounts. An additional model includes the interaction of PTR with the prevalence of health savings accounts (hereafter, HSAs) and procedure electivity. The results show that PTR lowers costs by an additional 3 percent for each additional 10-percentage-point increase in the availability of HSAs. In contrast, the cost reductions from PTR were much smaller for procedures more frequently coded as elective.

Conclusions: The study concludes that price transparency regulations can decrease a procedure’s average cost, primarily through price discounts and slightly through lower-cost procedures, but not through patients moving to cheaper hospitals. This implies that hospitals take the initiative and lower prices as competitors’ prices become publicly available, suggesting that hospitals – not patients – are the biggest users of price transparency websites. Hospitals are also finding ways to provide cheaper alternatives to more expensive procedures. State regulators should evaluate whether a metric other than charge prices, such as expected out-of-pocket payments, would evoke greater patient participation. Furthermore, states with a higher prevalence of HSAs experience greater effects of PTR, as expected, since patients with HSAs have greater incentives to lower their costs. Patients should expect a shift toward plans that offer these savings accounts, since such accounts have been shown to reduce average health costs per procedure in states where they are more prevalent.
ContributorsSabol, Joshua Lawrence (Author) / Reiser, Mark (Thesis director) / Ketcham, Jonathan (Committee member) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Supply Chain Management (Contributor)
Created2015-05
130389-Thumbnail Image.png
Description
We used sex, observed parenting quality at 18 months, and three variants of the catechol-O-methyltransferase gene (Val[superscript 158]Met [rs4680], intron1 [rs737865], and 3′-untranslated region [rs165599]) to predict mothers' reports of inhibitory and attentional control (assessed at 42, 54, 72, and 84 months) and internalizing symptoms (assessed at 24, 30, 42, 48, and 54 months) in a sample of 146 children (79 male). Although the pattern for all three variants was very similar, Val[superscript 158]Met explained more variance in both outcomes than did intron1, the 3′-untranslated region, or a haplotype that combined all three catechol-O-methyltransferase variants. In separate models, there were significant three-way interactions among each of the variants, parenting, and sex, predicting the intercepts of inhibitory control and internalizing symptoms. Results suggested that Val[superscript 158]Met indexes plasticity, although this effect was moderated by sex. Parenting was positively associated with inhibitory control for methionine–methionine boys and for valine–valine/valine–methionine girls, and was negatively associated with internalizing symptoms for methionine–methionine boys. Using the “regions of significance” technique, genetic differences in inhibitory control were found for children exposed to high-quality parenting, whereas genetic differences in internalizing were found for children exposed to low-quality parenting. These findings provide evidence in support of testing for differential susceptibility across multiple outcomes.
Created2015-08-01
130411-Thumbnail Image.png
Description
The purpose of this study was to examine whether dispositional sadness predicted children's prosocial behavior and if sympathy mediated this relation. Constructs were measured when children (n = 256 at time 1) were 18, 30, and 42 months old. Mothers and non-parental caregivers rated children's sadness; mothers, caregivers, and fathers rated children's prosocial behavior; sympathy (concern and hypothesis testing) and prosocial behavior (indirect and direct, as well as verbal at older ages) were assessed with a task in which the experimenter feigned injury. In a panel path analysis, 30-month dispositional sadness predicted marginally higher 42-month sympathy; in addition, 30-month sympathy predicted 42-month sadness. Moreover, when controlling for prior levels of prosocial behavior, 30-month sympathy significantly predicted reported and observed prosocial behavior at 42 months. Sympathy did not mediate the relation between sadness and prosocial behavior (either reported or observed).
Created2015-01-01
134418-Thumbnail Image.png
Description
We seek a comprehensive measurement for the economic prosperity of persons with disabilities. We survey the current literature and identify the major economic indicators used to describe the socioeconomic standing of persons with disabilities. We then develop a methodology for constructing a statistically valid composite index of these indicators, and build this index using data from the 2014 American Community Survey. Finally, we provide context for further use and development of the index and describe an example application of the index in practice.
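A hedged sketch of one common way to build such a composite index — z-score standardization with sign alignment and equal weights; the indicator names, signs, and weighting here are illustrative assumptions, not the thesis's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical indicator matrix (all simulated): rows = geographic areas,
# columns = e.g. [employment rate, median earnings, poverty rate].
X = rng.normal(size=(100, 3))
signs = np.array([1.0, 1.0, -1.0])   # flip "bad" indicators so higher = better

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each indicator
index = (signs * Z).mean(axis=1)           # equal-weight composite index
print(index.shape)
```

Standardizing first keeps any one indicator's scale from dominating; the equal weights could be replaced by, say, principal-component loadings if the data support it.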
ContributorsTheisen, Ryan (Co-author) / Helms, Tyler (Co-author) / Lewis, Paul (Thesis director) / Reiser, Mark (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
168839-Thumbnail Image.png
Description
The introduction of parameterized loss functions for robustness in machine learning has raised the question of how the hyperparameter(s) of such loss functions can be tuned. This thesis explores how Bayesian methods can be leveraged to tune these hyperparameters. Specifically, a modified Gibbs sampling scheme is used to generate a distribution over the loss parameters of tunable loss functions. The modified Gibbs sampler is a two-block sampler that alternates between sampling the loss parameter and optimizing the other model parameters. The sampling step is performed using slice sampling, while the optimization step is performed using gradient descent. This thesis explores the application of the modified Gibbs sampler to alpha-loss, a tunable loss function with a single parameter $\alpha \in (0,\infty]$ designed for the classification setting. Theoretically, it is shown that the Markov chain generated by the modified Gibbs sampling scheme is ergodic; that is, the chain has, and converges to, a unique stationary (posterior) distribution. Further, the modified Gibbs sampler is implemented in two experiments: a synthetic dataset and a canonical image dataset. The results show that the modified Gibbs sampler performs well under label noise, generating a distribution that indicates a preference for larger values of alpha, matching the outcomes of previous experiments.
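A minimal sketch of the two-block scheme on a toy robust-estimation problem — the loss, prior, data, and all settings below are illustrative stand-ins, not the thesis's alpha-loss or experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: mostly inliers near 0 plus a few outliers (all illustrative).
data = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])

def neg_log_post(a, theta):
    # Stand-in tunable robust loss L(r; a) = log(1 + r^2/a), plus an
    # Exponential(1) prior on the loss parameter a > 0.
    r = data - theta
    return np.sum(np.log1p(r**2 / a)) + a

def gd_theta(a, theta, lr=0.05, steps=100):
    # Block 1: gradient descent on the model parameter with a held fixed.
    for _ in range(steps):
        r = data - theta
        theta -= lr * np.mean(-2.0 * r / (a + r**2))
    return theta

def slice_sample_a(a, theta, w=1.0):
    # Block 2: univariate slice sampling (stepping out + shrinkage) of a.
    logp = lambda v: -neg_log_post(v, theta) if v > 0 else -np.inf
    y = logp(a) - rng.exponential()          # log-height under the curve
    lo = a - w * rng.uniform()
    hi = lo + w
    while logp(lo) > y:
        lo -= w
    while logp(hi) > y:
        hi += w
    while True:
        prop = rng.uniform(lo, hi)
        if logp(prop) > y:
            return prop
        if prop < a:
            lo = prop
        else:
            hi = prop

# Two-block "modified Gibbs" loop: optimize theta, then sample a.
a, theta = 1.0, float(np.median(data))
samples = []
for it in range(200):
    theta = gd_theta(a, theta)
    a = slice_sample_a(a, theta)
    if it >= 100:                            # discard burn-in
        samples.append(a)
print(round(theta, 2), round(float(np.mean(samples)), 2))
```

The alternation mirrors the thesis's structure: one block optimizes model parameters, the other slice-samples the loss hyperparameter, yielding a distribution over it rather than a point estimate.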
ContributorsCole, Erika Lingo (Author) / Sankar, Lalitha (Thesis advisor) / Lan, Shiwei (Thesis advisor) / Pedrielli, Giulia (Committee member) / Hahn, Paul (Committee member) / Arizona State University (Publisher)
Created2022
171508-Thumbnail Image.png
Description
Longitudinal data involving multiple subjects are common in medical and social science areas. I consider generalized linear mixed models (GLMMs) applied to such longitudinal data, and the problem of searching for optimal designs under such models. In this case, based on optimal design theory, the optimality criteria depend on the estimated parameters, which leads to local optimality. Moreover, the information matrix under a GLMM does not have a closed-form expression. My dissertation includes three topics related to this design problem. The first part is searching for locally optimal designs under GLMMs with longitudinal data. I apply the penalized quasi-likelihood (PQL) method to approximate the information matrix and compare several approximations to show the superiority of PQL over the alternatives. Under different local parameters and design restrictions, locally D- and A-optimal designs are constructed based on the approximation. An interesting finding is that locally optimal designs sometimes apply different designs to different subjects. Finally, the robustness of these locally optimal designs is discussed. In the second part, an unknown observational covariate is added to the previous model. With an unknown observational variable in the experiment, expected optimality criteria are considered. Under different assumptions on the unknown variable and different parameter settings, locally optimal designs are constructed and discussed. In the last part, Bayesian optimal designs are considered under logistic mixed models. Considering different priors on the local parameters, Bayesian optimal designs are generated. Bayesian design under such a model is usually computationally expensive; in this dissertation, the running time is reduced to an acceptable level while maintaining accurate results. I also discuss the robustness of these Bayesian optimal designs, which is the motivation for applying such an approach.
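As a hedged illustration of a local optimality criterion — using a plain logistic regression rather than a GLMM, with guessed parameter values; everything below is an illustrative assumption, not the dissertation's models — a locally D-optimal two-point design can be found by maximizing the log-determinant of the Fisher information over candidate designs:

```python
import numpy as np
from itertools import combinations

# Guessed ("local") parameter values for a simple logistic model
# E[y] = 1/(1 + exp(-(b0 + b1*x))) -- illustrative, not from the thesis.
b0, b1 = 0.0, 1.0

def info(x):
    # Fisher information contribution of one design point x.
    eta = b0 + b1 * x
    w = np.exp(eta) / (1.0 + np.exp(eta))**2     # logistic variance weight
    f = np.array([1.0, x])
    return w * np.outer(f, f)

# Search all two-point exact designs on a grid for the largest log det M.
grid = np.linspace(-3, 3, 61)
best = max(combinations(grid, 2),
           key=lambda pts: np.linalg.slogdet(info(pts[0]) + info(pts[1]))[1])
print(best)  # roughly symmetric points near +/-1.5 for these parameters
```

The dependence of `info` on `b0, b1` is exactly why such designs are only *locally* optimal: a different parameter guess moves the support points.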
ContributorsShi, Yao (Author) / Stufken, John (Thesis advisor) / Kao, Ming-Hung (Thesis advisor) / Lan, Shiwei (Committee member) / Pan, Rong (Committee member) / Reiser, Mark (Committee member) / Arizona State University (Publisher)
Created2022
171467-Thumbnail Image.png
Description
A goodness-of-fit test is a hypothesis test of whether a given model fits the data well. It is extremely difficult to find a universal goodness-of-fit test that can cover all types of statistical models. Moreover, the traditional Pearson’s chi-square goodness-of-fit test is sometimes considered an omnibus test rather than a directional test, so it is hard to find the source of poor fit when the null hypothesis is rejected; it also loses validity and effectiveness under certain special conditions. Sparseness is one such condition. One effective way to overcome the adverse effects of sparseness is to use limited-information statistics. This dissertation includes two topics on constructing and using limited-information statistics to overcome sparseness for binary data. In the first topic, the theoretical framework of pairwise concordance is provided, along with the transformation matrix used to extract the corresponding marginals and their generalizations. A series of new chi-square test statistics and corresponding orthogonal components are then proposed, which are used to detect model misspecification for longitudinal binary data. One important conclusion is that the test statistic $X^2_{2c}$ can be taken as an extension of $X^2_{[2]}$, the traditional Pearson’s chi-square statistic computed on second-order marginals. In the second topic, the research interest is the effect of different intercept patterns when using the Lagrange multiplier (LM) test to find the source of misfit for two items in the 2-PL IRT model. Several other directional chi-square test statistics are included for comparison. The simulation results show that the intercept pattern does affect the performance of the goodness-of-fit test, especially the power to find the source of misfit when one exists. More specifically, the power is directly affected by the ‘intercept distance’ between the two misfit variables.
Another finding is that the LM test statistic has the best balance between accurate Type I error rates and high empirical power, which indicates that the LM test is robust.
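A hedged toy illustration of why sparseness motivates limited-information statistics — this is not the proposed $X^2_{2c}$ statistic; the model, sample size, and counts are illustrative. With $T$ binary items the full contingency table has $2^T$ cells, so most cells are empty at moderate $T$, while pairwise margins remain well filled:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Toy setup (illustrative): T binary items, independent with
# probabilities p, observed for n subjects.
T, n = 8, 200
p = rng.uniform(0.3, 0.7, T)
Y = (rng.uniform(size=(n, T)) < p).astype(int)

# Full-table Pearson X^2 against the fitted independence model: the table
# has 2**T = 256 cells for only n = 200 observations, so it is sparse.
phat = Y.mean(axis=0)
obs = {}
for row in map(tuple, Y):
    obs[row] = obs.get(row, 0) + 1
X2_full, n_empty = 0.0, 0
for cell in product((0, 1), repeat=T):
    c = np.array(cell)
    e = n * np.prod(np.where(c == 1, phat, 1 - phat))
    o = obs.get(cell, 0)
    X2_full += (o - e)**2 / e
    n_empty += (o == 0)

# Second-order (pairwise) margins stay well filled even when the full
# table is mostly empty -- the information limited-information tests use.
pair_counts = Y.T @ Y          # counts of "both items = 1" for each pair
print(n_empty, 2**T)
```

With far fewer observations than cells, the full-table expected counts become tiny and the chi-square approximation degrades, whereas the $T(T-1)/2$ pairwise margins each pool all $n$ subjects.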
ContributorsXu, Jinhui (Author) / Reiser, Mark (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / Zheng, Yi (Committee member) / Edwards, Michael (Committee member) / Arizona State University (Publisher)
Created2022
190731-Thumbnail Image.png
Description
Uncertainty Quantification (UQ) is crucial in assessing the reliability of predictive models that make decisions for human experts in a data-rich world. The Bayesian approach to UQ for inverse problems has gained popularity. However, addressing UQ in high-dimensional inverse problems is challenging due to the computational intensity and inefficiency of Markov Chain Monte Carlo (MCMC) based Bayesian inference methods. Consequently, the first primary focus of this thesis is enhancing the efficiency and scalability of UQ in inverse problems. On the other hand, the omnipresence of spatiotemporal data, particularly in areas like traffic analysis, underscores the need to effectively address inverse problems with spatiotemporal observations. Conventional solutions often overlook spatial or temporal correlations, resulting in underutilization of spatiotemporal interactions for parameter learning. Appropriately modeling spatiotemporal observations in inverse problems thus forms another pivotal research avenue. In terms of UQ methodologies, the calibration-emulation-sampling (CES) scheme has emerged as effective for large-dimensional problems. I introduce a novel CES approach that employs deep neural network (DNN) models during the emulation and sampling phases. This approach not only enhances computational efficiency but also diminishes sensitivity to variations in the training set. The newly devised “Dimension-Reduced Emulative Autoencoder Monte Carlo (DREAM)” algorithm scales Bayesian UQ up to thousands of dimensions in physics-constrained inverse problems. The algorithm’s effectiveness is exemplified through elliptic and advection-diffusion inverse problems. In the realm of spatiotemporal modeling, I propose using Spatiotemporal Gaussian processes (STGP) in likelihood modeling and Spatiotemporal Besov processes (STBP) in prior modeling. These approaches highlight the efficacy of incorporating spatial and temporal information for enhanced parameter estimation and UQ.
Additionally, the superiority of STGP is demonstrated in comparison with static and time-averaged methods on a time-dependent advection-diffusion partial differential equation (PDE) and three chaotic ordinary differential equations (ODEs). Expanding upon the Besov process (BP), a method known for sparsity promotion and edge preservation, STBP is introduced to capture spatial data features and model temporal correlations by replacing the random coefficients in the series expansion with stochastic time functions following a Q-exponential process (Q-EP). This advantage is showcased in dynamic computerized tomography (CT) reconstructions through comparison with the classic STGP and a time-uncorrelated approach.
ContributorsLi, Shuyi (Author) / Lan, Shiwei (Thesis advisor) / Hahn, Paul (Committee member) / McCulloch, Robert (Committee member) / Dan, Cheng (Committee member) / Lopes, Hedibert (Committee member) / Arizona State University (Publisher)
Created2023
190789-Thumbnail Image.png
Description
In this work, the author analyzes quantitative and structural aspects of Bayesian inference using Markov kernels, Wasserstein metrics, and Kantorovich monads. In particular, the author shows the following main results: first, that Markov kernels can be viewed as Borel measurable maps with values in a Wasserstein space; second, that the Disintegration Theorem can be interpreted as a literal equality of integrals using an original theory of integration for Markov kernels; third, that the Kantorovich monad can be defined for Wasserstein metrics of any order; and finally, that, under certain assumptions, a generalized Bayes’s Law for Markov kernels provably leads to convergence of the expected posterior distribution in the Wasserstein metric. These contributions provide a basis for studying further convergence, approximation, and stability properties of Bayesian inverse maps and inference processes within a unified theoretical framework that bridges statistical inference, machine learning, and probabilistic programming semantics.
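As a hedged sketch in classical scalar notation (the thesis's kernel-valued formulation is more general and more abstract), the disintegration identity and the Bayes's law being generalized can be written as:

```latex
% Disintegration: for measurable T : X \to Y, pushforward \nu = T_*\mu,
% and a disintegration (\mu_y)_{y \in Y} of \mu along T:
\int_X f \,\mathrm{d}\mu \;=\; \int_Y \left( \int_{T^{-1}(y)} f \,\mathrm{d}\mu_y \right) \nu(\mathrm{d}y)

% Bayes's law: prior \pi on \Theta, likelihood density L(y \mid \theta):
\pi(\mathrm{d}\theta \mid y) \;=\;
  \frac{L(y \mid \theta)\,\pi(\mathrm{d}\theta)}
       {\int_\Theta L(y \mid \theta')\,\pi(\mathrm{d}\theta')}
```

The thesis's contribution is to make such equalities literal statements about Markov kernels viewed as Wasserstein-space-valued maps, rather than pointwise statements about densities.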
ContributorsEikenberry, Keenan (Author) / Cochran, Douglas (Thesis advisor) / Lan, Shiwei (Thesis advisor) / Dasarathy, Gautam (Committee member) / Kotschwar, Brett (Committee member) / Shahbaba, Babak (Committee member) / Arizona State University (Publisher)
Created2023