Using confirmatory factor analyses and multiple indicators per construct, we examined a number of theoretically derived factor structures spanning numerous trust-relevant constructs (from 9 to 12) across four institutional contexts (police, local governance, natural resources, state governance) and multiple participant types (college students via an online survey, community residents taking part in a city’s budget engagement activity, a random sample of rural landowners, and a national sample of adult Americans via an Amazon Mechanical Turk study). Across studies, several common findings emerged. First, the best-fitting models in each study maintained separate factors for each trust-relevant construct; moreover, post hoc models that added higher-order factors tended to fit better than models that collapsed factors. Second, dispositional trust was easily distinguishable from the other trust-related constructs, and positive and negative constructs were often distinguishable. However, the items reflecting positive trust attitude constructs or positive trustworthiness perceptions showed low discriminant validity. Differences in findings between studies raise questions warranting further investigation, including why correlations among latent constructs varied from very high (e.g., 12 inter-factor correlations above .9 in Study 2) to more moderate (e.g., only 3 correlations above .8 in Study 4). Further, results from one study (Study 4) suggested that legitimacy, fairness, and voice were especially highly correlated and may form a single higher-order factor, but the other studies did not show this pattern. Future research is needed to determine when and why different higher-order factor structures may emerge in different institutional contexts or with different samples.
This work introduces a tunable leakage measure, maximal $\alpha$-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely $\alpha$-loss. The choice of $\alpha$ determines the adversarial action, ranging from refining a belief for $\alpha = 1$ to guessing the most likely posterior outcome for $\alpha = \infty$; for these two extremal values, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. Maximal $\alpha$-leakage is proved to satisfy a composition property and to be robust to side information.
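For context, the $\alpha$-loss family referenced above is commonly written in the $\alpha$-leakage literature as follows (a sketch of the standard definition, not reproduced from this abstract):

```latex
\ell_\alpha\bigl(y,\hat{P}\bigr)
  = \frac{\alpha}{\alpha-1}
    \Bigl(1 - \hat{P}(y)^{\frac{\alpha-1}{\alpha}}\Bigr),
  \qquad \alpha \in (1,\infty),
```

with the limiting cases $\ell_1(y,\hat{P}) = -\log \hat{P}(y)$ (log-loss, the belief-refinement regime that recovers mutual information) and $\ell_\infty(y,\hat{P}) = 1 - \hat{P}(y)$ (probability of error, the hard-guessing regime that recovers maximal leakage).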
There is a fundamental disconnect between theoretical measures of information leakage and their application in practice. The second part of this dissertation addresses this issue by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. The framework is formulated as a constrained minimax optimization of the expected $\alpha$-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with $\alpha=1$ is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while preserving accuracy on several \textit{a priori} unknown downstream tasks.
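The censoring idea can be illustrated with a deliberately simplified toy in NumPy. This is not the CFUR minimax training itself (which pits an encoder network against an adversary minimizing expected $\alpha$-loss); the scalar data, the threshold adversary, and all numeric settings below are assumptions made purely for illustration of the privacy-utility tension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a scalar feature x that leaks a sensitive bit s.
n = 2000
s = rng.integers(0, 2, n)                  # sensitive attribute
x = s + 0.5 * rng.standard_normal(n)       # useful feature correlated with s

def adversary_accuracy(z, s):
    """Accuracy of a simple threshold adversary guessing s from z."""
    guess = (z > np.median(z)).astype(int)
    return max(np.mean(guess == s), np.mean(guess != s))

# "Encoder": z = x + sigma * noise.  Raise sigma until the adversary is
# near chance, but cap sigma (a crude stand-in for the utility constraint
# that keeps the representation from being destroyed entirely).
sigma, max_sigma = 0.0, 6.0
while sigma < max_sigma:
    z = x + sigma * rng.standard_normal(n)
    if adversary_accuracy(z, s) < 0.55:    # adversary close to a coin flip
        break
    sigma += 0.25
```

In the actual framework, both the encoder and the adversary are learned models trained against each other, so the censoring generalizes to multiple sensitive features and unseen downstream tasks rather than a single hand-tuned noise scale.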
Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, $(\epsilon,\delta)$-DP and R\'enyi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows roughly 100 additional SGD rounds when training deep learning models.
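The Gaussian-noise mechanism the abstract refers to can be sketched as a single DP-SGD update in the style of the moments-accountant line of work. The clip norm, noise multiplier, learning rate, and the least-squares toy problem below are illustrative assumptions, not the dissertation's settings; the accountant itself (which converts the per-step noise into an $(\epsilon,\delta)$ guarantee) is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise with std noise_mult * clip_norm / batch."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped), size=w.shape)
    return w - lr * (avg + noise)

# Toy usage: noisy full-batch least-squares regression with known weights.
X = rng.standard_normal((64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(300):
    grads = [2.0 * (xi @ w - t) * xi for xi, t in zip(X, y)]
    w = dp_sgd_step(w, grads)
```

Per-example clipping bounds each individual's influence on the update, which is what lets the accountant translate the added Gaussian noise into a privacy guarantee; a tighter R\'enyi-DP-to-$(\epsilon,\delta)$ conversion then stretches the same budget over more such rounds.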