Matching Items (16)

Description
This thesis, Impressive Mastermind, examines notions of privacy and the law, particularly with regard to the USA PATRIOT Act implemented following the events of 9/11. The author/artist believes that numerous freedoms related to personal privacy, especially those rights protected by the Fourth Amendment, were diminished in order to ostensibly seek out potential terrorists. Through the vehicle of a theatrical dance performance, Impressive Mastermind investigates these privacy issues on a public and personal level and asks the audience to question their own views on government policies regarding personal privacy, including illegal search and seizure. Drawing on the previous work of other intervention artists, this thesis explores the realm of public intervention. Moving away from the usual spectacle of traditional theater, this multi-dimensional piece offers an experiential examination of how the public relates to what is real and what is considered performative.
Contributors: Ferrell, Rebecca A (Author) / Murphey, Claudia (Thesis advisor) / Dove, Simon (Committee member) / Mcgurgan, Melissa (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation focuses on building scalable Attribute-Based Security Systems (ABSS), including efficient and privacy-preserving attribute-based encryption schemes and their applications to group communications and cloud computing. First, a Constant Ciphertext Policy Attribute Based Encryption (CCP-ABE) scheme is proposed. Existing Attribute Based Encryption (ABE) schemes usually incur large ciphertexts that grow linearly with the number of attributes; the proposed CCP-ABE dramatically reduces the ciphertext to a small, constant size and is the first ABE scheme to achieve constant ciphertext size. The proposed CCP-ABE scheme is also fully collusion-resistant: users cannot combine their attributes to elevate their decryption capability. Next, efficient ABE schemes are applied to construct optimal group communication and broadcast encryption schemes. An attribute-based Optimal Group Key (OGK) management scheme is presented that attains communication-storage optimality without collusion vulnerability. Then, a novel broadcast encryption model, Attribute Based Broadcast Encryption (ABBE), is introduced, which exploits the many-to-many nature of attributes to reduce storage complexity from linear to logarithmic and to enable expressive attribute-based access policies. Privacy issues are also addressed in ABSS. First, a hidden-policy ABE scheme is proposed to protect receivers' privacy by hiding the access policy. Second, a new concept, Gradual Identity Exposure (GIE), is introduced to address the restrictions of hidden-policy ABE schemes. GIE reveals the receivers' information gradually by allowing ciphertext recipients to decrypt the message using their possessed attributes one by one; if a receiver lacks one attribute in this procedure, the remaining attributes stay hidden.
Compared to hidden-policy solutions, GIE provides significant performance improvements, reducing both computation and communication overhead. Last but not least, ABSS is incorporated into mobile cloud computing scenarios. In the proposed secure mobile cloud data management framework, lightweight mobile devices can securely outsource expensive ABE operations and data storage to untrusted cloud service providers. The reported scheme includes two components: (1) a Cloud-Assisted Attribute-Based Encryption/Decryption (CA-ABE) scheme and (2) an Attribute-Based Data Storage (ABDS) scheme that achieves information-theoretic optimality.
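The one-attribute-at-a-time reveal behind GIE can be illustrated with a toy model. The sketch below is an assumption-laden stand-in: it uses XOR "layers" keyed by hashed attribute names instead of the pairing-based cryptography an actual GIE construction would require, and the function names are invented for illustration.

```python
import hashlib

def _layer_key(attribute: str) -> int:
    # Toy per-attribute key derived by hashing; stands in for ABE key material.
    return int.from_bytes(hashlib.sha256(attribute.encode()).digest()[:4], "big")

def encrypt_gie(message: int, policy: list) -> int:
    # Wrap the message in one XOR "layer" per policy attribute.
    ciphertext = message
    for attr in reversed(policy):
        ciphertext ^= _layer_key(attr)
    return ciphertext

def decrypt_gie(ciphertext: int, policy: list, owned: set):
    # Peel layers attribute by attribute; stop at the first missing attribute,
    # leaving the remaining attributes (and the message) hidden.
    partial = ciphertext
    for i, attr in enumerate(policy):
        if attr not in owned:
            return None, policy[i:]
        partial ^= _layer_key(attr)
    return partial, []
```

A recipient holding only a prefix of the policy's attributes learns nothing about the rest, mirroring the gradual-exposure property described above.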
Contributors: Zhou, Zhibin (Author) / Huang, Dijiang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ahn, Gail-Joon (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In modern healthcare environments, there is a strong need to create an infrastructure that reduces the time-consuming efforts and costly operations required to obtain a patient's complete medical record, and that uniformly integrates this heterogeneous collection of medical data for delivery to healthcare professionals. As a result, healthcare providers are increasingly willing to shift their electronic medical record (EMR) systems to clouds, which can remove the geographical barriers between providers and patients. Although cloud-based EMRs have received considerable attention for their promise of lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an essential prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives, satisfying access control and compliance requirements in healthcare cloud computing environments. We present a secure medical data sharing framework that supports selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also accommodates privacy concerns when processing access requests to patients' healthcare information. To realize the proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results that demonstrate the effectiveness and efficiency of our approach.
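As a rough illustration of policy-filtered selective sharing, consider the following Python sketch. It is not the framework described in the dissertation; the record layout, role names, and the idea of attaching a per-provider policy map are all simplifying assumptions.

```python
# Each provider contributes a record plus a policy stating which requester
# roles may see which fields of that record (all data here is made up).
COMPOSITE_EHR = [
    {
        "provider": "ClinicA",
        "fields": {"diagnosis": "hypertension", "ssn": "***-**-1234"},
        "policy": {"physician": {"diagnosis", "ssn"}, "researcher": {"diagnosis"}},
    },
    {
        "provider": "LabB",
        "fields": {"lab_result": "A1C 6.1%"},
        "policy": {"physician": {"lab_result"}},
    },
]

def shared_view(role: str) -> list:
    # Build the view of the aggregated EHR that a requester role may see,
    # honoring each contributing provider's own authorization policy.
    view = []
    for record in COMPOSITE_EHR:
        allowed = record["policy"].get(role, set())
        visible = {k: v for k, v in record["fields"].items() if k in allowed}
        if visible:
            view.append({"provider": record["provider"], **visible})
    return view
```

A physician sees both providers' contributions, a researcher only the fields ClinicA chose to release, and an unknown role sees nothing: the aggregation point enforces each provider's policy rather than its own.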
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Access control is one of the most fundamental security mechanisms used in the design and management of modern information systems. However, it remains an open question how formal access control models can be automatically analyzed and fully realized in secure system development. Furthermore, specifying and managing access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this dissertation, I present an Assurance Management Framework (AMF) designed to cope with assurance management requirements from both access control system development and policy-based computing. On one hand, the AMF framework facilitates comprehensive analysis and thorough realization of formal access control models in secure system development; I demonstrate how this method can be applied to build role-based access control systems by adopting the NIST/ANSI RBAC standard as the underlying security model. On the other hand, the AMF framework ensures the correctness of access control policies in policy-based computing through automated reasoning techniques and anomaly management mechanisms. A systematic method is presented for formulating XACML in Answer Set Programming (ASP), allowing users to leverage off-the-shelf ASP solvers for a variety of analysis services. In addition, I introduce a novel anomaly management mechanism, along with a grid-based visualization approach, which enables systematic and effective detection and resolution of policy anomalies. I further evaluate the AMF framework by modeling and analyzing multiparty access control in Online Social Networks (OSNs). A MultiParty Access Control (MPAC) model is formulated to capture the essence of multiparty authorization requirements in OSNs. In particular, I show how AMF can be applied to OSNs to identify and resolve privacy conflicts, and to represent and reason about the MPAC model and policies.
To demonstrate the feasibility of the proposed methodology, a suite of proof-of-concept prototype systems is implemented as well.
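A drastically simplified version of policy anomaly detection can be sketched as pairwise overlap checking between ordered rules. This is an illustrative stand-in, assuming first-applicable evaluation and flat subject/action sets rather than the ASP-based XACML analysis the dissertation actually develops; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    rid: str
    subjects: frozenset
    actions: frozenset
    effect: str  # "Permit" or "Deny"

def _overlaps(a: Rule, b: Rule) -> bool:
    # Two rules can apply to the same request if their subject and action
    # sets both intersect.
    return bool(a.subjects & b.subjects) and bool(a.actions & b.actions)

def find_anomalies(rules: list) -> list:
    # Under first-applicable semantics, a later rule overlapping an earlier
    # one is shadowed where they intersect: a conflict if the effects differ,
    # a redundancy if they agree.
    issues = []
    for i, earlier in enumerate(rules):
        for later in rules[i + 1:]:
            if _overlaps(earlier, later):
                kind = "conflict" if earlier.effect != later.effect else "redundancy"
                issues.append((kind, earlier.rid, later.rid))
    return issues
```

Real XACML targets involve hierarchical resources and combining algorithms, which is precisely why the dissertation resorts to an ASP solver instead of ad-hoc pairwise checks like these.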
Contributors: Hu, Hongxin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Dasgupta, Partha (Committee member) / Ye, Nong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Very little experimental work has been done to investigate the psychological underpinnings of perceptions of privacy. This issue is especially pressing with the advent of powerful and inexpensive technologies that allow access to all but our most private thoughts, and these too are at risk (Farah, Smith, Gawuga, Lindsell, & Foster, 2009). Recently the Supreme Court ruled that the use of a global positioning system (GPS) device to covertly follow a criminal suspect, without first obtaining a search warrant, violates the suspect's Fourth Amendment right to protection from unlawful search and seizure (United States v. Jones, 2012). However, the Court has also ruled in the past that a law enforcement officer can covertly follow a suspect's vehicle and collect the same information without a search warrant, and this is not considered a violation of the suspect's rights (Katz v. United States). In the GPS surveillance case, the Supreme Court Justices did not agree on whether the violation arose from trespass, because the device was placed on the suspect's vehicle (the majority view), or from an infringement of a person's reasonable expectation of privacy. This incongruence is an example of how the absence of a clear and predictable model of privacy makes it difficult for even the country's highest legal authority to articulate when and why privacy has been violated. This research investigated whether public support for each surveillance technique also varies across monitoring types that collect the same information, and whether these differences are mediated by factors similar to those argued by the Supreme Court. Results suggest that under some circumstances participants do demonstrate differential support, and that this is mediated by general privacy concern.
However, under other circumstances differential support results from an interaction between the type of monitoring and its cost to employ, not simply its type; this differential support was mediated by both perceived violations of private space and general privacy. Results are discussed in terms of how these findings might contribute to understanding the psychological foundation of perceived privacy violations and how they might inform policy decisions.
Contributors: Baker, Denise, M.S. (Author) / Schweitzer, Nicholas J (Thesis advisor) / Newman, Matt (Committee member) / Risko, Evan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Sparked by the Virginia Tech shooting of 2007 and the resultant changes to the Family Educational Rights and Privacy Act (FERPA), a review was conducted of FERPA's impact on university policies regarding student privacy and safety. A single private university's policies were reviewed, and a survey was distributed to 500 campus employees who had recently completed the university's FERPA training to determine whether that training effectively prepared employees to understand FERPA's health and safety exceptions clause. The results showed that while the training was effective in teaching employees how to safeguard students' academic records, employees did not have a clear understanding of which information they could or should share in response to a threat to health and safety, or of which university entity should receive safety concerns. The survey suggests that the university's FERPA training should be expanded to cover FERPA's health and safety exceptions, including clear reporting lines for possible threats to campus safety and security.
Contributors: Gilbert, Byron Jesse (Author) / Edwards, David A. (Thesis advisor) / Hild, Nicholas R. (Committee member) / Peterson, Danny M. (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Smartphone privacy is a growing concern around the world; smartphone applications routinely take personal information from our phones and monetize it for their own profit. Worse, they're doing it legally: the Terms of Service allow companies to use this information to market, promote, and sell personal data, and most users seem either unaware of this or unconcerned by it. This has negative implications for the future of privacy, particularly as smart home technology becomes a reality. If this is what privacy looks like now, with only one major type of smart device on the market, what will the future hold when smart home systems come into play? To examine this question, I investigated how much smartphone users in a specific demographic (millennials aged 18-25) know about their smartphone's data and where it goes. I wanted three questions answered:
- For what purposes do millennials use their smartphones?
- What do they know about smartphone privacy and security?
- How will this affect the future of privacy?
To accomplish this, I gathered information through a survey distributed to millennials attending Arizona State University. Statistical analysis exposed trends for this demographic and showed that there is no lack of knowledge among millennials: most are aware that smartphone apps can collect and share data, and many participants are not comfortable with the current state of smartphone privacy. However, more than half of the study participants indicated that they never read an app's Terms of Service. Given the nature of the privacy-versus-convenience argument, users will willingly agree to let apps take their personal information, since they don't want to give up the convenience.
Contributors: Jones, Scott Spenser (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Adversarial threats to deep learning are an increasing concern due to the ubiquitous deployment of deep neural networks (DNNs) in many security-sensitive domains. Among existing threats, adversarial weight perturbation is an emerging class that attempts to perturb the weight parameters of DNNs to breach security and privacy. The first weight perturbation attack introduced in this thesis is the Bit-Flip Attack (BFA), which maliciously flips a small number of bits within the computer's main memory storing the DNN weight parameters. The developed algorithm can achieve three specific attack objectives: (i) an un-targeted accuracy degradation attack, (ii) a targeted attack, and (iii) a Trojan attack. Moreover, BFA uses the Rowhammer technique to demonstrate the bit-flip attack on an actual computer prototype. While the bit-flip attack is conducted in a white-box setting, the subsequent contribution of this thesis is another novel weight perturbation attack in a black-box setting. Accordingly, this thesis presents a new study of DNN model vulnerabilities in a multi-tenant Field Programmable Gate Array (FPGA) cloud under a strict black-box framework. This attack framework lets a malicious tenant inject faults by duplicating specific DNN weight packages during data transmission between the off-chip memory and the on-chip buffer of a victim FPGA. The proposed attack is also experimentally validated in a multi-tenant cloud FPGA prototype. The final part shifts focus to deep learning model privacy, popularly known as model extraction, which can steal partial DNN weight parameters remotely with the aid of a memory side-channel attack. In addition, a novel training algorithm is designed to exploit the partially leaked DNN weight-bit information, making the model extraction attack more effective.
The algorithm effectively leverages the partially leaked bit information and generates a substitute prototype of the victim model with almost identical performance to the victim's.
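The memory-level primitive behind BFA can be shown in a few lines. The sketch below is a toy model in plain Python: it flips one bit of a two's-complement int8 quantized weight, as a Rowhammer-induced DRAM fault would, while the real attack additionally searches for the most damaging (weight, bit) pair; the function name is invented.

```python
def flip_bit_int8(weight: int, bit: int) -> int:
    # Interpret the weight as a two's-complement signed byte, flip one bit
    # (the simulated DRAM fault), and reinterpret as signed.
    assert -128 <= weight <= 127 and 0 <= bit <= 7
    raw = (weight & 0xFF) ^ (1 << bit)
    return raw - 256 if raw >= 128 else raw

# Flipping the sign bit of a small positive weight yields a large negative
# one, which is why a handful of well-chosen flips can wreck a network.
```

For example, flipping bit 7 of the weight 3 produces -125, a swing far larger than any training-time perturbation.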
Contributors: Rakin, Adnan Siraj (Author) / Fan, Deliang (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
With the bloom of machine learning, a massive amount of data has been used to train machine learning models. A tremendous amount of this data is user-generated, which allows the models to produce accurate results and personalized services. Nevertheless, I recognize the importance of preserving individuals' privacy by protecting their information in the training process. One privacy attack that affects individuals is the private attribute inference attack: the process of inferring information that individuals do not explicitly reveal, such as age, gender, location, and occupation. The impact goes beyond the disclosure itself, as individuals face potential downstream risks. Furthermore, some applications need sensitive data to train their models and predict helpful insights, so figuring out how to build privacy-preserving machine learning models will increase the capabilities of these applications. However, improving privacy affects data utility, which leads to a dilemma between privacy and utility. The utility of the data is measured by its quality for different tasks. This trade-off between privacy and utility must be maintained to satisfy both the privacy requirement and the result quality. To achieve more scalable privacy-preserving machine learning models, I investigate the privacy risks that affect individuals' private information in distributed machine learning. Even though distributed machine learning has been driven by privacy concerns, privacy issues that threaten individuals' privacy have been identified in the literature. In this dissertation, I investigate how to measure and protect individuals' privacy in centralized and distributed machine learning models. First, a privacy-preserving text representation learning method is proposed to protect users' privacy that can be revealed from user-generated data.
Second, a novel privacy-preserving text classification for split learning is presented to improve users' privacy and retain high utility by defending against private attribute inference attacks.
Contributors: Alnasser, Walaa (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Shu, Kai (Committee member) / Bao, Tiffany (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Mobile Augmented Reality (MAR) is a portable and powerful technology that integrates 3D virtual content into the physical world in real time. It has been implemented for multiple purposes, as it enhances people's interactions in, e.g., shopping, entertainment, and gaming, and it is expected to grow at a tremendous rate in the upcoming years as its popularity on mobile devices has increased. Unfortunately, the applications that implement MAR, hereby referred to as MAR-Apps, bear security issues. These are reflected in incidents recorded worldwide and caused by MAR-Apps, e.g., robberies and authorities requesting that MAR be banned at specific locations. To further explore these concerns, a case study analyzed several MAR-Apps available in the market to identify the security problems in MAR. The threats found were classified into three categories. First, Space Invasion implies the intrusive modification, through MAR, of sensitive spaces, e.g., hospitals and memorials. Second, Space Affectation means the degradation of users' experience via interaction with undesirable MAR content or malicious entities. Finally, MAR-Apps mishandling sensitive data lead to Privacy Leaks. SpaceMediator, a proof-of-concept MAR-App that imitates the well-known and successful MAR-App Pokémon GO, implements the solution approach of a Policy-Governed MAR-App, which helps prevent the aforementioned security issues. Its feasibility is evaluated through a user study with 40 participants, which revealed a solid understanding of the security issues: participants recognized and prevented them with success rates as high as 92.50%. There is also enriched interest in Policy-Governed MAR-Apps, as 87.50% of participants agreed with restricting MAR-Apps within sensitive spaces and 82.50% would implement constraints in MAR-Apps. These promising results encourage adopting the Policy-Governed solution approach in future MAR-Apps.
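The policy-governed idea of keeping MAR content out of sensitive spaces can be sketched as a simple geofence check. This is an illustrative assumption, not SpaceMediator's actual implementation; the zone list, coordinates, and function name are made up.

```python
from math import hypot

# Hypothetical policy: circular sensitive zones where anchoring AR
# content is disallowed, given as (x, y, radius, label) in meters.
SENSITIVE_ZONES = [
    (0.0, 0.0, 50.0, "hospital"),
    (200.0, 80.0, 30.0, "memorial"),
]

def placement_allowed(x: float, y: float) -> bool:
    # Permit placing an AR object only if it lies outside every zone.
    return all(hypot(x - zx, y - zy) > r for zx, zy, r, _ in SENSITIVE_ZONES)
```

A MAR-App honoring such a policy would refuse to anchor virtual objects inside a hospital's zone, addressing the Space Invasion threat described above.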
Contributors: Claramunt, Luis Manuel (Author) / Ahn, Gail-Joon (Thesis advisor) / Rubio-Medrano, Carlos E (Committee member) / Baek, Jaejong (Committee member) / Arizona State University (Publisher)
Created: 2022