Matching Items (5)

Description
This dissertation focuses on building scalable Attribute Based Security Systems (ABSS), including efficient and privacy-preserving attribute based encryption schemes and their applications to group communications and cloud computing. First, a Constant Ciphertext Policy Attribute Based Encryption (CCP-ABE) scheme is proposed. Existing Attribute Based Encryption (ABE) schemes usually incur large ciphertexts that grow linearly with the number of attributes; the proposed CCP-ABE reduces the ciphertext to a small, constant size and is the first ABE scheme to achieve constant ciphertext size. The proposed CCP-ABE scheme is also fully collusion-resistant, so users cannot combine their attributes to elevate their decryption capability.

Next, efficient ABE schemes are applied to construct optimal group communication and broadcast encryption schemes. An attribute based Optimal Group Key (OGK) management scheme is presented that attains communication-storage optimality without collusion vulnerability. Then, a novel broadcast encryption model, Attribute Based Broadcast Encryption (ABBE), is introduced; it exploits the many-to-many nature of attributes to reduce the storage complexity from linear to logarithmic and to enable expressive attribute based access policies.

Privacy issues in ABSS are also considered and addressed. First, a hidden-policy ABE scheme is proposed to protect receivers' privacy by hiding the access policy. Second, a new concept, Gradual Identity Exposure (GIE), is introduced to overcome the restrictions of hidden-policy ABE schemes. GIE reveals the receivers' information gradually by allowing ciphertext recipients to decrypt the message using their possessed attributes one by one; if the receiver does not possess an attribute in this procedure, the remaining attributes stay hidden. Compared to hidden-policy solutions, GIE provides significant performance improvement in terms of both computation and communication overhead.

Last but not least, ABSS are incorporated into mobile cloud computing scenarios. In the proposed secure mobile cloud data management framework, lightweight mobile devices can securely outsource expensive ABE operations and data storage to untrusted cloud service providers. The reported scheme includes two components: (1) a Cloud-Assisted Attribute-Based Encryption/Decryption (CA-ABE) scheme and (2) an Attribute-Based Data Storage (ABDS) scheme that achieves information-theoretic optimality.
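
The gradual, attribute-by-attribute decryption is the part of GIE most easily made concrete. Below is a minimal, stdlib-only Python mock of the layering logic: the plaintext is wrapped in one layer per policy attribute, a recipient peels layers with the attributes they hold, and a missing attribute leaves everything beneath it opaque. The XOR "cipher", the key derivation, and the attribute names are illustrative assumptions made for this sketch; the dissertation's actual GIE construction is a pairing-based ABE scheme and is not reproduced here.

```python
# Toy mock of the Gradual Identity Exposure (GIE) idea: one encryption
# layer per policy attribute, peeled one-by-one. NOT the dissertation's
# scheme and NOT secure cryptography -- structure illustration only.
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def encrypt_layered(message: bytes, policy_attrs: list[str]) -> bytes:
    ct = message
    for attr in reversed(policy_attrs):   # innermost layer = last attribute
        ct = _xor(ct, attr.encode())
    return ct

def decrypt_gradually(ct: bytes, user_attrs: list[str], policy_attrs: list[str]) -> bytes:
    # Peel one layer per policy attribute; stop at the first attribute the
    # user lacks -- everything beneath remains opaque ciphertext.
    for attr in policy_attrs:
        if attr not in user_attrs:
            return ct
        ct = _xor(ct, attr.encode())
    return ct

policy = ["faculty", "cs-dept", "security-lab"]   # hypothetical attributes
ct = encrypt_layered(b"meeting at noon", policy)
print(decrypt_gradually(ct, ["faculty", "cs-dept"], policy))  # still opaque bytes
print(decrypt_gradually(ct, policy, policy))                  # b'meeting at noon'
```
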
Contributors: Zhou, Zhibin (Author) / Huang, Dijiang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ahn, Gail-Joon (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Mobile devices have penetrated every aspect of the modern world. On the one hand, they are becoming ubiquitous in daily life; on the other, they store more and more data, including sensitive data. Security and privacy on mobile devices are therefore indispensable. This dissertation consists of five parts related to the security and privacy of mobile devices: two authentication schemes, two attacks, and one countermeasure.

Specifically, in Chapter 1, I give an overview of the challenges and existing solutions in these areas. In Chapter 2, a novel authentication scheme is presented, based on a user's tapping or sliding on the touchscreen of a mobile device. In Chapter 3, I focus on mobile app fingerprinting and propose a method based on analyzing the power profiles of targeted mobile devices. In Chapter 4, I explore a novel liveness detection method for face authentication on mobile devices. In Chapter 5, I investigate a novel keystroke inference attack on mobile devices based on users' eye movements. In Chapter 6, a novel authentication scheme is proposed, based on detecting a user's finger gestures through acoustic sensing. In Chapter 7, I discuss future work.
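
As a rough illustration of the Chapter 2 approach, the sketch below featurizes a tap or slide gesture and trains a per-user classifier to separate the owner's gestures from others'. The feature set and the classifier (scikit-learn's RandomForestClassifier as a stand-in) are assumptions made for the sketch, not the dissertation's exact design.

```python
# Behavioral touch authentication, in spirit only: featurize each stroke,
# then fit an owner-vs-others classifier. Features and model are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(gesture):
    # gesture: list of (timestamp, x, y, pressure) samples for one stroke
    g = np.asarray(gesture, dtype=float)
    t, x, y, p = g[:, 0], g[:, 1], g[:, 2], g[:, 3]
    return np.array([
        t[-1] - t[0],                            # stroke duration
        np.hypot(x[-1] - x[0], y[-1] - y[0]),    # end-to-end distance
        p.mean(), p.max(),                       # pressure statistics
        np.hypot(np.diff(x), np.diff(y)).sum(),  # total path length
    ])

def train_authenticator(owner_gestures, other_gestures):
    X = np.array([featurize(g) for g in owner_gestures + other_gestures])
    y = np.array([1] * len(owner_gestures) + [0] * len(other_gestures))
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At unlock time (hypothetical usage): accept if the fresh gesture scores
# as the owner, e.g. clf.predict([featurize(new_gesture)])[0] == 1
```
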
Contributors: Chen, Yimin (Author) / Zhang, Yanchao (Thesis advisor) / Zhang, Junshan (Committee member) / Reisslein, Martin (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Smartphone privacy is a growing concern around the world; smartphone applications routinely take personal information from our phones and monetize it for their own profit. Worse, they are doing it legally: the Terms of Service allow companies to use this information to market, promote, and sell personal data. Most users seem to be either unaware of or unconcerned by this. That has negative implications for the future of privacy, particularly as smart home technology becomes a reality. If this is what privacy looks like now, with only one major type of smart device on the market, what will the future hold when smart home systems come into play? To examine this question, I investigated how much smartphone users in a specific demographic (millennials aged 18-25) know about their smartphones' data and where it goes. I wanted three questions answered:

- For what purposes do millennials use their smartphones?
- What do they know about smartphone privacy and security?
- How will this affect the future of privacy?

To answer these, I gathered information with a survey distributed to millennials attending Arizona State University. Using statistical analysis, I exposed trends for this demographic, discovering that there isn't a lack of knowledge among millennials: most are aware that smartphone apps can collect and share data, and many of the participants are not comfortable with the current state of smartphone privacy. However, more than half of the study participants indicated that they never read an app's Terms of Service. Given the nature of the privacy-versus-convenience argument, users will willingly agree to let apps take their personal information, since they don't want to give up the convenience.
Contributors: Jones, Scott Spenser (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Artificial Intelligence (AI) systems have achieved outstanding performance and have been found to be better than humans at various tasks, such as sentiment analysis and face recognition. However, the majority of these state-of-the-art AI systems use complex Deep Learning (DL) methods, which makes it challenging for human experts to design and evaluate such models with respect to privacy, fairness, and robustness. Recent examination of DL models reveals that their representations may include information that could lead to privacy violations, unfairness, and robustness issues, resulting in AI systems that are potentially untrustworthy from a socio-technical standpoint. Trustworthiness in AI is defined by a set of model properties such as non-discriminatory bias, protection of users' sensitive attributes, and lawful decision-making. The characteristics of trustworthy AI can be grouped into three categories: Reliability, Resiliency, and Responsibility. Past research has shown that the successful integration of an AI model depends on its trustworthiness. Thus, it is crucial for organizations and researchers to build trustworthy AI systems to facilitate the seamless integration and adoption of intelligent technologies.

The main issue with existing AI systems is that they are trained primarily to improve technical measures, such as accuracy on a specific task, without considering socio-technical measures. The aim of this dissertation is to propose methods for improving the trustworthiness of AI systems through representation learning. DL models' representations contain information about a given input and can be used for tasks such as detecting fake news on social media or predicting the sentiment of a review. The findings of this dissertation significantly expand the scope of trustworthy AI research and establish a new paradigm for modifying data representations to balance among the properties of trustworthy AI. Specifically, this research investigates techniques such as reinforcement learning for improving trustworthiness with respect to users' privacy, fairness, and robustness in classification tasks like cyberbullying detection and fake news detection. Since most social measures in trustworthy AI cannot be used directly to train or fine-tune an AI model, the main contribution of this dissertation lies in using reinforcement learning to alter an AI system's behavior based on non-differentiable social measures.
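
The closing claim, that reinforcement learning can optimize non-differentiable social measures that backpropagation cannot, can be illustrated with a toy REINFORCE loop. In this sketch, a Bernoulli masking policy over representation dimensions is nudged toward a reward that combines a task-utility term and a fairness-style penalty, both treated as black boxes. The reward functions, the masking policy, and the hyperparameters are all illustrative assumptions, not the dissertation's method.

```python
# Toy REINFORCE loop: policy gradients need only sampled rewards, so they
# can optimize a NON-differentiable trustworthiness score. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D = 16                       # representation dimensionality (assumed)
theta = np.zeros(D)          # policy logits: keep-probability per dimension

def task_utility(z):         # black-box stand-in for task accuracy
    return float(z @ np.ones(D) > 0)

def fairness_penalty(z):     # stand-in for a demographic-parity-style audit
    return abs(float(z[:D // 2].sum() - z[D // 2:].sum()))

def reward(z):               # non-differentiable social + utility measure
    return task_utility(z) - 0.1 * fairness_penalty(z)

for step in range(2000):
    z = rng.normal(size=D)                    # a representation to modify
    p = 1.0 / (1.0 + np.exp(-theta))          # keep-probabilities
    mask = (rng.random(D) < p).astype(float)  # sampled action: keep/drop dims
    r = reward(z * mask)                      # evaluate the black box
    theta += 0.05 * r * (mask - p)            # REINFORCE: grad log Bernoulli
```
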
Contributors: Mosallanezhad, Ahmadreza (Author) / Liu, Huan (Thesis advisor) / Mancenido, Michelle (Thesis advisor) / Doupe, Adam (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
With the boom of machine learning, a massive amount of data has been used to train machine learning models. A tremendous amount of this data is user-generated, which allows the models to produce accurate results and personalized services. Nevertheless, I recognize the importance of preserving individuals' privacy by protecting their information during training. One privacy attack that affects individuals is the private attribute inference attack: the process of inferring information that individuals do not explicitly reveal, such as age, gender, location, and occupation. The impact goes beyond the disclosure itself, as individuals face potential downstream risks. Furthermore, some applications need sensitive data to train their models and predict helpful insights, so figuring out how to build privacy-preserving machine learning models will increase the capabilities of these applications.

However, improving privacy affects data utility, which leads to a dilemma between privacy and utility. The utility of the data is measured by its quality for different tasks, and this trade-off between privacy and utility must be maintained to satisfy both the privacy requirement and the quality of the results. To achieve more scalable privacy-preserving machine learning models, I investigate the privacy risks that affect individuals' private information in distributed machine learning. Even though distributed machine learning has been driven by privacy concerns, the literature has identified privacy issues in it that threaten individuals' privacy. In this dissertation, I investigate how to measure and protect individuals' privacy in centralized and distributed machine learning models. First, a privacy-preserving text representation learning method is proposed to protect users' privacy that can be revealed from user-generated data. Second, a novel privacy-preserving text classification approach for split learning is presented to improve users' privacy and retain high utility by defending against private attribute inference attacks.
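
To make the threat model concrete, the sketch below measures the leakage a private attribute inference attack exploits: an attacker fits a classifier that maps learned representations to a private attribute, and accuracy above chance quantifies the privacy risk. The synthetic data and the logistic-regression attacker are illustrative assumptions; a defense such as the dissertation's split-learning approach would aim to push this score toward chance while preserving task utility.

```python
# Quantifying attribute leakage from representations, illustrative only:
# if an attacker can predict a never-revealed attribute from the learned
# representations, the representations leak private information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 32
private_attr = rng.integers(0, 2, size=n)   # attribute users never reveal
# Synthetic "leaky" representations: the attribute shifts a few dimensions.
Z = rng.normal(size=(n, d)) + 0.8 * private_attr[:, None] * (np.arange(d) < 4)

Ztr, Zte, ytr, yte = train_test_split(Z, private_attr, random_state=0)
attacker = LogisticRegression(max_iter=1000).fit(Ztr, ytr)
print("attacker accuracy:", attacker.score(Zte, yte))  # well above 0.5 => leakage

# A privacy-preserving representation succeeds when this score drops toward
# 0.5 (chance) while accuracy on the actual task stays high.
```
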
Contributors: Alnasser, Walaa (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Shu, Kai (Committee member) / Bao, Tiffany (Committee member) / Arizona State University (Publisher)
Created: 2022