Matching Items (3)

The Dimensionality of Trust-Relevant Constructs in Four Institutional Domains: Results From Confirmatory Factor Analyses.

Description

Using confirmatory factor analyses and multiple indicators per construct, we examined a number of theoretically derived factor structures pertaining to numerous trust-relevant constructs (from 9 to 12) across four institutional contexts (police, local governance, natural resources, state governance) and multiple participant types (college students via an online survey, community residents as part of a city’s budget engagement activity, a random sample of rural landowners, and a national sample of adult Americans via an Amazon Mechanical Turk study). Across studies, a number of common findings emerged. First, the best-fitting models in each study maintained separate factors for each trust-relevant construct. Furthermore, post hoc analyses that added higher-order factors tended to fit better than those that collapsed factors. Second, dispositional trust was easily distinguishable from the other trust-related constructs, and positive and negative constructs were often distinguishable. However, the items reflecting positive trust attitude constructs or positive trustworthiness perceptions showed low discriminant validity. Differences in findings between studies raise questions warranting further investigation, including correlations among latent constructs that varied from very high (e.g., 12 inter-factor correlations above .9 in Study 2) to more moderate (e.g., only 3 correlations above .8 in Study 4). Further, the results from one study (Study 4) suggested that legitimacy, fairness, and voice were especially highly correlated and may form a single higher-order factor, but the other studies did not show this pattern. Future research is needed to determine when and why different higher-order factor structures may emerge in different institutional contexts or with different samples.

Date Created
  • 2016-03-31

The real American court: immigration courts and the ecology of reform

Description

Immigration courts fail to live up to courtroom ideals. Around 2009, proposals were offered to address the problems of these troubled courts. My study illustrates the inevitable linkage between court reform proposals and conceptions of fairness and efficiency, and ultimately of justice. I ask: (1) From the perspective of attorneys defending immigrants' rights, what are the obstacles to justice, and how should they be addressed? And (2) How do the proposals speak to these attorneys' concerns and proposed resolutions? The proposals reviewed generally favor restructuring the court. Immigration (cause) lawyers, on the other hand, remain unconvinced that current proposals to reform the courts' structure would address the pivotal issues of these courts: confounding laws and problematic personnel. They are particularly concerned about the legal needs and rights of immigrants and how reforms may affect their current and potential clients. With this in mind, they prefer incremental changes to the system, such as extending pro bono programs. These findings suggest the importance of professional location in conceptualizing justice through law. They offer rich ground for theorizing about courts and politics, and about justice in immigration adjudication.

Date Created
  • 2013

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

Description

Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge.

This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely α-loss. The choice of α determines the adversarial action, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to satisfy a composition property and to be robust to side information.
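The α = ∞ endpoint, maximal leakage, has a simple closed form for a discrete channel: the log of the sum, over outputs, of the column-wise maxima of P(Y|X). The sketch below illustrates that standard formula (it is not code from the dissertation):

```python
import numpy as np

def maximal_leakage(channel):
    """Maximal leakage (the alpha = infinity case) of a channel, in bits.

    `channel` is a 2-D array whose rows are the conditional
    distributions P(Y | X = x); leakage = log2 sum_y max_x P(y|x).
    """
    return float(np.log2(np.max(channel, axis=0).sum()))

# A noiseless binary channel reveals X exactly and leaks one full bit ...
identity = np.array([[1.0, 0.0], [0.0, 1.0]])
# ... while a channel whose output ignores X leaks nothing.
independent = np.array([[0.5, 0.5], [0.5, 0.5]])
```

The two example channels mark the extremes: any binary-input channel leaks between 0 and 1 bit under this measure.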

There is a fundamental disconnect between theoretical measures of information leakage and their applications in practice. The second part of this dissertation addresses this issue by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.
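Demographic parity, the group-fairness criterion mentioned above, requires the positive-prediction rate to be equal across sensitive groups. A minimal sketch of how the violation might be measured, using a hypothetical helper for a binary sensitive attribute (not the CFUR implementation):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups.

    Demographic parity holds when P(Yhat = 1 | group = 0) equals
    P(Yhat = 1 | group = 1); the gap quantifies the violation.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

# A classifier that treats both groups identically has gap 0;
# one whose positive predictions track the group itself has gap 1.
```

Censoring a sensitive feature in the learned representation drives this gap toward zero for any downstream classifier trained on that representation.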

Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models.
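The privacy guarantee tracked by the moments accountant comes from the standard DP-SGD recipe: clip each per-example gradient, sum, and add Gaussian noise calibrated to the clipping norm. A minimal sketch of that aggregation step, with a hypothetical noise_multiplier parameter tying the noise scale to the clip norm (an illustration of the recipe, not the dissertation's code):

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step.

    Clip each per-example gradient to L2 norm `clip_norm`, sum the
    clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and average over the batch.
    """
    grads = np.asarray(per_example_grads, dtype=float)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds the clip threshold.
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)
```

Clipping bounds each example's influence on the update, which is what lets the accountant convert the Gaussian noise scale into a per-round privacy cost; tighter accounting of that cost is what buys the extra training rounds.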

Date Created
  • 2020