Matching Items (6)

Description

Immigration courts fail to live up to courtroom ideals. Around 2009, proposals were offered to address the problems of these troubled courts. My study illustrates the inevitable linkage between court reform proposals and conceptions of fairness, efficiency, and ultimately justice. I ask: (1) From the perspective of attorneys defending immigrants' rights, what are the obstacles to justice, and how should they be addressed? And (2) How do the proposals speak to these attorneys' concerns and proposed resolutions? The proposals reviewed generally favor restructuring the court. Immigration (cause) lawyers, however, remain unconvinced that current proposals to reform the courts' structure would successfully address the pivotal issues of these courts: confounding laws and problematic personnel. They are particularly concerned about the legal needs and rights of immigrants and how reforms may affect their current and potential clients. With this in mind, they prefer incremental changes to the system, such as extending pro bono programs. These findings suggest the importance of professional location in conceptualizing justice through law, and they offer rich ground for theorizing about courts, politics, and justice in immigration adjudication.
Contributors: Abbott, Katherine R (Author) / Provine, Doris M. (Thesis advisor) / Cruz, Evelyn H. (Committee member) / Johnson, John M. (Committee member) / Zatz, Marjorie S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Using confirmatory factor analyses and multiple indicators per construct, we examined a number of theoretically derived factor structures pertaining to numerous trust-relevant constructs (from 9 to 12) across four institutional contexts (police, local governance, natural resources, state governance) and multiple participant types (college students via an online survey, community residents as part of a city’s budget engagement activity, a random sample of rural landowners, and a national sample of adult Americans via an Amazon Mechanical Turk study). Across studies, a number of common findings emerged. First, the best-fitting models in each study maintained separate factors for each trust-relevant construct. Furthermore, post hoc analyses that added higher-order factors tended to fit better than models that collapsed factors. Second, dispositional trust was easily distinguishable from the other trust-related constructs, and positive and negative constructs were often distinguishable. However, the items reflecting positive trust attitude constructs or positive trustworthiness perceptions showed low discriminant validity. Differences in findings between studies raise questions warranting further investigation, including inter-factor correlations that ranged from very high (e.g., 12 correlations above .9 in Study 2) to more moderate (e.g., only 3 correlations above .8 in Study 4). Further, the results of Study 4 suggested that legitimacy, fairness, and voice were especially highly correlated and may form a single higher-order factor, but the other studies did not show this pattern. Future research is needed to determine when and why different higher-order factor structures may emerge in different institutional contexts or with different samples.
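
To make the model comparison above concrete, here is a minimal sketch of fitting a separate-factors model and a higher-order alternative and comparing their fit. It assumes the semopy package (a lavaan-style SEM library for Python); the item names, factor names, and synthetic data are illustrative placeholders and do not correspond to the instruments used in these studies.

```python
# Minimal CFA sketch: separate (correlated) trust-related factors vs. the same
# factors under a single higher-order factor. Assumes the semopy package;
# all item/factor names and the synthetic data are illustrative placeholders.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 500
# Three correlated latent variables driving nine observed items.
latent = rng.multivariate_normal([0, 0, 0],
                                 [[1.0, 0.6, 0.6],
                                  [0.6, 1.0, 0.6],
                                  [0.6, 0.6, 1.0]], size=n)
items = {f"{name}{i}": latent[:, k] + rng.normal(scale=0.5, size=n)
         for k, name in enumerate(["t", "l", "f"]) for i in (1, 2, 3)}
data = pd.DataFrame(items)

separate_factors = """
trust      =~ t1 + t2 + t3
legitimacy =~ l1 + l2 + l3
fairness   =~ f1 + f2 + f3
"""
higher_order = separate_factors + "confidence =~ trust + legitimacy + fairness\n"

for label, desc in [("separate factors", separate_factors),
                    ("higher-order factor", higher_order)]:
    model = semopy.Model(desc)
    model.fit(data)
    print(label)
    print(semopy.calc_stats(model))   # fit indices such as CFI, RMSEA, AIC
```

With only three first-order factors the higher-order structure adds little; the comparison becomes informative with the 9 to 12 constructs examined in these studies, and low discriminant validity would show up as estimated latent correlations near .9 (e.g., via model.inspect() in semopy).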

Contributors: PytlikZillig, Lisa M. (Author) / Hamm, Joseph A. (Author) / Shockley, Ellie (Author) / Herian, Mitchell N. (Author) / Neal, Tess M.S. (Author) / Kimbrough, Christopher D. (Author) / Tomkins, Alan J. (Author) / Bornstein, Brian H. (Author)
Created: 2016-03-31
Description

Artificial intelligence (AI) has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks of oppression and calamity. For example, social media platforms have knowingly and surreptitiously promoted harmful content, such as rampant disinformation and hate speech. Machine learning algorithms designed to combat hate speech have also been found to be biased against underrepresented and disadvantaged groups. In response, researchers and organizations have been working to publish principles and regulations for the responsible use of AI. However, these conceptual principles also need to be turned into actionable algorithms to materialize AI for good. The broad aim of my research is to design AI systems that responsibly serve users and to develop applications with social impact. This dissertation seeks to develop algorithmic solutions for Socially Responsible AI (SRAI), a systematic framework encompassing responsible AI principles and algorithms as well as the responsible use of AI. In particular, it first introduces an interdisciplinary definition of SRAI and the AI responsibility pyramid, in which four types of AI responsibilities are described. It then elucidates the purpose of SRAI: how to bridge from conceptual definitions to responsible AI practice through three human-centered operations -- to Protect and Inform users, and to Prevent negative consequences. These operations are illustrated in the social media domain, given that social media has revolutionized how people live while also contributing to the rise of many societal issues. The representative tasks for the three dimensions are cyberbullying detection, disinformation detection and dissemination, and unintended bias mitigation. The means of SRAI is to develop responsible AI algorithms. Many issues (e.g., discrimination and poor generalization) can arise when AI systems are trained to improve accuracy without knowledge of the underlying causal mechanism. Causal inference, therefore, is intrinsically related to understanding and resolving these challenging issues in AI. As a result, this dissertation also seeks to gain an in-depth understanding of AI by looking into the precise relationships between causes and effects. For illustration, it introduces a recent work that applies deep learning to estimating causal effects and shows that causal learning algorithms can outperform traditional methods.
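
As a rough illustration of the last point, estimating causal effects with learned models, the sketch below implements a simple T-learner: one outcome model per treatment arm, whose predictions are differenced to estimate individual treatment effects. It is a generic baseline using scikit-learn, not the deep architectures studied in the dissertation, and all variable names and data are hypothetical.

```python
# T-learner sketch: estimate treatment effects by fitting one outcome model per
# treatment arm and differencing their predictions. A generic baseline with
# scikit-learn, not the dissertation's causal-estimation models.
import numpy as np
from sklearn.neural_network import MLPRegressor

def t_learner_cate(X, treatment, y):
    """Return per-unit conditional average treatment effect estimates."""
    treated = treatment == 1
    model_t = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model_c = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model_t.fit(X[treated], y[treated])      # outcome model for treated units
    model_c.fit(X[~treated], y[~treated])    # outcome model for control units
    return model_t.predict(X) - model_c.predict(X)

# Toy data with a known constant effect of +2.0 (treatment assigned at random).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
treatment = rng.integers(0, 2, size=2000)
y = X[:, 0] + 2.0 * treatment + rng.normal(scale=0.1, size=2000)
print(round(t_learner_cate(X, treatment, y).mean(), 2))  # roughly 2.0
```
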
Contributors: Cheng, Lu (Author) / Liu, Huan (Thesis advisor) / Varshney, Kush R. (Committee member) / Silva, Yasin N. (Committee member) / Wu, Carole-Jean (Committee member) / Candan, Kasim S. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

In journalism school, reporters learn to be unbiased, impartial, and objective when covering a story. They are to stay neutral and detached from their reporting. However, this standard has become unrealistic and unachievable for many journalists. "Inside Objectivity" is a five-episode podcast that focuses on what journalistic objectivity looks like in the 21st century. In this podcast, you will hear from journalists, scholars, historians, researchers, and a news consumer. These guests provide their thoughts on journalistic objectivity and whether this ethical standard needs to be modified. To listen to the episodes and learn more about the podcast, visit insideobjectivity.com.
Contributors: Maneshni, Autriya (Author) / Nikpour, Rodmanned (Thesis director) / Russell, Dennis (Committee member) / Barrett, The Honors College (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Department of Psychology (Contributor)
Created: 2023-05
Description

Artificial Intelligence (AI) systems have achieved outstanding performance and have been found to be better than humans at various tasks, such as sentiment analysis and face recognition. However, the majority of these state-of-the-art AI systems use complex Deep Learning (DL) methods, which makes it challenging for human experts to design and evaluate such models with respect to privacy, fairness, and robustness. Recent examination of DL models reveals that their representations may include information that could lead to privacy violations, unfairness, and robustness issues. This results in AI systems that are potentially untrustworthy from a socio-technical standpoint. Trustworthiness in AI is defined by a set of model properties, such as the absence of discriminatory bias, protection of users’ sensitive attributes, and lawful decision-making. The characteristics of trustworthy AI can be grouped into three categories: Reliability, Resiliency, and Responsibility. Past research has shown that the successful integration of an AI model depends on its trustworthiness. Thus, it is crucial for organizations and researchers to build trustworthy AI systems to facilitate the seamless integration and adoption of intelligent technologies. The main issue with existing AI systems is that they are primarily trained to improve technical measures such as accuracy on a specific task but do not account for socio-technical measures. The aim of this dissertation is to propose methods for improving the trustworthiness of AI systems through representation learning. A DL model’s representations contain information about a given input and can be used for tasks such as detecting fake news on social media or predicting the sentiment of a review. The findings of this dissertation significantly expand the scope of trustworthy AI research and establish a new paradigm for modifying data representations to balance between the properties of trustworthy AI. Specifically, this research investigates techniques such as reinforcement learning for addressing users’ privacy, fairness, and robustness in classification tasks like cyberbullying detection and fake news detection. Since most social measures in trustworthy AI cannot be used to fine-tune or train an AI model directly, the main contribution of this dissertation lies in using reinforcement learning to alter an AI system’s behavior based on non-differentiable social measures.
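
The last point, turning to reinforcement learning because social measures are typically non-differentiable, can be sketched with a REINFORCE-style update in which an arbitrary scoring function (computed outside the computation graph) supplies the reward. This is a generic policy-gradient sketch in PyTorch, not the dissertation's actual architecture; the fairness_score function, dimensions, and data are placeholders.

```python
# REINFORCE-style sketch: optimize a policy against a non-differentiable reward
# (e.g., a fairness or privacy score computed outside the computation graph).
# Generic illustration only; placeholder model, data, and reward function.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def fairness_score(actions, groups):
    # Placeholder non-differentiable social measure: reward parity of positive
    # decisions across two groups.
    rates = [actions[groups == g].float().mean().item() for g in (0, 1)]
    return 1.0 - abs(rates[0] - rates[1])

for step in range(200):
    x = torch.randn(64, 16)                      # placeholder inputs
    groups = torch.randint(0, 2, (64,))          # placeholder sensitive attribute
    dist = torch.distributions.Categorical(logits=policy(x))
    actions = dist.sample()                      # stochastic decisions
    reward = fairness_score(actions, groups)     # non-differentiable scalar
    loss = -(dist.log_prob(actions) * reward).mean()  # policy-gradient surrogate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
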
Contributors: Mosallanezhad, Ahmadreza (Author) / Liu, Huan (Thesis advisor) / Mancenido, Michelle (Thesis advisor) / Doupe, Adam (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge.

This work introduces a tunable leakage measure called maximal $\alpha$-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely $\alpha$-loss. The choice of $\alpha$ determines the adversarial action, ranging from refining a belief for $\alpha = 1$ to guessing the best posterior for $\alpha = \infty$; for these two values maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. Maximal $\alpha$-leakage is proved to satisfy a composition property and to be robust to side information.
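
To make the two endpoints concrete, the sketch below computes them for a toy discrete channel: mutual information (the $\alpha = 1$ case) and maximal leakage, log of the sum over outputs of the maximum channel probability (the $\alpha = \infty$ case). This is a numerical illustration of the limiting cases named above, not an implementation of maximal $\alpha$-leakage for general $\alpha$.

```python
# Toy computation of the two endpoints of maximal alpha-leakage for a discrete
# memoryless channel P(y|x) with input distribution P(x):
#   alpha = 1   -> mutual information I(X;Y)
#   alpha = inf -> maximal leakage  log( sum_y max_x P(y|x) )
import numpy as np

P_x = np.array([0.5, 0.5])                 # toy input distribution
P_y_given_x = np.array([[0.9, 0.1],        # toy channel, rows are P(y | x)
                        [0.2, 0.8]])

P_xy = P_x[:, None] * P_y_given_x          # joint distribution
P_y = P_xy.sum(axis=0)                     # output marginal

# Mutual information in nats (alpha = 1 endpoint).
mutual_info = np.sum(P_xy * np.log(P_xy / (P_x[:, None] * P_y[None, :])))

# Maximal leakage in nats (alpha = infinity endpoint), over x with P(x) > 0.
max_leakage = np.log(P_y_given_x[P_x > 0].max(axis=0).sum())

print(f"I(X;Y) = {mutual_info:.4f} nats, maximal leakage = {max_leakage:.4f} nats")
```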

There is a fundamental disconnect between theoretical measures of information leakage and their application in practice. The second part of this dissertation addresses this issue by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. The framework is formulated as a constrained minimax optimization of the expected $\alpha$-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with $\alpha = 1$ is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.
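
The general flavor of learning a censored representation can be sketched as a minimax game between an encoder and an adversary that tries to recover the sensitive attribute from the representation, subject to a utility loss on a target task. The sketch below is a generic adversarial-censoring loop in PyTorch using cross-entropy losses, not the CFUR formulation with expected $\alpha$-loss; all shapes, names, and data are placeholders.

```python
# Generic adversarial-censoring sketch: an encoder learns a representation that
# supports a target task while an adversary tries to recover the sensitive
# attribute from it. Placeholder data and dimensions; cross-entropy losses
# stand in for the expected alpha-loss of the CFUR formulation.
import torch
import torch.nn as nn

encoder   = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 8))
task_head = nn.Linear(8, 2)   # predicts the useful label from the representation
adversary = nn.Linear(8, 2)   # predicts the sensitive attribute from it
ce = nn.CrossEntropyLoss()
opt_enc = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
lam = 1.0                     # utility vs. censoring trade-off

for step in range(500):
    x = torch.randn(128, 20)            # placeholder features
    y = torch.randint(0, 2, (128,))     # placeholder task label
    s = torch.randint(0, 2, (128,))     # placeholder sensitive attribute

    # Adversary step: learn to infer s from the (detached) representation.
    adv_loss = ce(adversary(encoder(x).detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Encoder step: stay accurate on y while making the adversary fail on s.
    z = encoder(x)
    enc_loss = ce(task_head(z), y) - lam * ce(adversary(z), s)
    opt_enc.zero_grad()
    enc_loss.backward()
    opt_enc.step()
```

Maximizing the adversary's loss is only one censoring criterion; common variants instead maximize the adversary's uncertainty so that the encoder cannot simply make the adversary confidently wrong.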

Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, $(\epsilon,\delta)$-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models.
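
For a sense of how Rényi-DP accounting turns composed Gaussian noise into an $(\epsilon, \delta)$ guarantee, the sketch below applies the standard results for the plain (non-subsampled) Gaussian mechanism: Rényi-DP of order alpha equal to T * alpha / (2 * sigma^2) after T compositions, converted via eps = rdp(alpha) + log(1/delta) / (alpha - 1) and minimized over alpha. This illustrates the accounting style only; it omits the subsampling amplification that the moments accountant and the refinements in this dissertation address, and the example parameters are purely illustrative.

```python
# Renyi-DP accounting sketch for T compositions of the Gaussian mechanism
# (sensitivity 1, noise standard deviation sigma), without subsampling.
# Per-step RDP of order alpha: alpha / (2 * sigma**2)
# Conversion to (eps, delta)-DP: eps = rdp(alpha) + log(1/delta) / (alpha - 1)
import numpy as np

def gaussian_rdp_to_dp(sigma: float, steps: int, delta: float) -> float:
    """Return the smallest eps over a grid of Renyi orders alpha > 1."""
    alphas = np.linspace(1.01, 200.0, 5000)
    rdp = steps * alphas / (2.0 * sigma ** 2)          # composed RDP curve
    eps = rdp + np.log(1.0 / delta) / (alphas - 1.0)   # standard conversion
    return float(eps.min())

# Illustrative parameters only (no subsampling amplification accounted for).
print(gaussian_rdp_to_dp(sigma=50.0, steps=1000, delta=1e-5))
```
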
Contributors: Liao, Jiachun (Author) / Sankar, Lalitha (Thesis advisor) / Kosut, Oliver (Committee member) / Zhang, Junshan (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2020