Description
With the advent of advanced analysis tools and access to related published data, it is becoming increasingly difficult for data owners to suppress private information in published data while still providing useful information. This dual problem of providing useful, accurate information while protecting it has been challenging, especially in healthcare. Data owners lack an automated resource that provides layered protection for a published dataset together with validated statistical values for usability. Differential privacy (DP) has gained considerable attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that protects data from adversarial observation while still supporting its intended use. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying the intent of the source data. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The IDC mechanism depends on a set of privacy protection metrics, the differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed around the IDC security mechanism. DPPA is a hub that facilitates a market for DP data security mechanisms: it incorporates standalone IDC mechanisms and provides automation, IDC-protected published datasets, and statistically verified IDC dataset diagnostic reports. DPPA currently runs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test two health datasets; the results further validate the feasibility of the IDC mechanism.
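The abstract describes IDC only at a high level (raw records become clonesets, and a classifier filters out compromising ones). The sketch below fills in concrete details purely for illustration: Gaussian perturbation for cloning, a distance-threshold test for "compromising", and a kept-fraction stand-in for CCSP are all assumptions, not the dissertation's actual design.

```python
# Purely illustrative sketch of the records -> clonesets -> filter pipeline
# described above. The real IDC cloning and classification rules are not
# given in the abstract; everything concrete here is assumed.
import random

def make_cloneset(record, n_clones=5, noise=1.0):
    """Clone a numeric record by perturbing each field with Gaussian noise."""
    return [[x + random.gauss(0, noise) for x in record] for _ in range(n_clones)]

def is_compromising(cloneset, record, min_dist=0.5):
    """Flag a cloneset containing a clone too close to the raw record."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    return any(dist(clone, record) < min_dist for clone in cloneset)

def idc_publish(records):
    published, kept = [], 0
    for rec in records:
        cs = make_cloneset(rec)
        if not is_compromising(cs, rec):  # keep only "good" clonesets
            published.append(cs)
            kept += 1
    # A CCSP-like ratio (assumed reading): fraction of clonesets retained.
    return published, kept / max(len(records), 1)

data = [[120.0, 80.0], [135.0, 85.0]]  # toy blood-pressure-style records
clonesets, ccsp_like = idc_publish(data)
print(f"kept {ccsp_like:.0%} of clonesets")
```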
Contributors: Thomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Microblogging services such as Twitter, Sina Weibo, and Tumblr have emerged and become deeply embedded in people's daily lives. Used by hundreds of millions of users worldwide to connect with others and to share and access information in real time, microblogging services have also become targets of malicious attackers because of their massive user engagement and structural openness. Little is known in the community about new types of vulnerabilities in current microblogging services that could be leveraged by increasingly sophisticated attackers, and, more importantly, about the corresponding defenses that could protect both users and microblogging service providers from attack. This dissertation aims to uncover a number of challenging security and privacy issues in microblogging services and to propose corresponding defenses.

This dissertation makes five contributions. The first part presents the social botnet, a group of collaborative social bots under the control of a single botmaster; demonstrates the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation; and proposes and evaluates corresponding countermeasures. Inspired by PageRank, the second part describes TrueTop, the first sybil-resilient system for finding the top-K influential users in microblogging services, with highly accurate results and strong resilience to sybil attacks; TrueTop has been implemented to handle millions of nodes and 100 times more edges on commodity computers. The third and fourth parts demonstrate that microblogging systems' structural openness and users' carelessness can disclose the latter's sensitive information, such as home city and age. LocInfer, a novel and lightweight system, is presented to uncover the majority of the users in any metropolitan area; the dissertation also proposes MAIF, a novel machine learning framework that leverages public content and interaction information in microblogging services to infer users' hidden ages. Finally, the dissertation proposes the first privacy-preserving social media publishing framework, which lets microblogging service providers publish their data to any third party without disclosing users' private information while preserving the data's commercial utility. This dissertation sheds light on the state-of-the-art security and privacy issues in microblogging services.
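The abstract does not give TrueTop's algorithm; as a rough illustration of the PageRank-style influence ranking it is described as inspired by, here is a minimal power-iteration PageRank returning the top-K nodes. The toy graph is hypothetical, and TrueTop's sybil-resilience machinery is not shown.

```python
# Plain power-iteration PageRank over an endorsement graph, returning the
# top-K nodes by score. Only the PageRank building block is sketched here.
def pagerank_topk(edges, k, damping=0.85, iters=50):
    """edges: list of (u, v) meaning u endorses/retweets v."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    nxt[v] += damping * rank[u] / len(nodes)
        rank = nxt
    return sorted(rank, key=rank.get, reverse=True)[:k]

edges = [("alice", "carol"), ("bob", "carol"), ("carol", "dave")]
print(pagerank_topk(edges, k=2))  # "carol" should rank highest
```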
Contributors: Zhang, Jinxue (Author) / Zhang, Yanchao (Thesis advisor) / Zhang, Junshan (Committee member) / Ying, Lei (Committee member) / Ahn, Gail-Joon (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Data privacy is emerging as one of the most serious concerns of big data analytics, particularly with the growing use of personal data and the ever-improving capability of data analysis. This dissertation first investigates the relation between different privacy notions, and then puts the main focus on developing economic foundations for a market model of trading private data.

The first part characterizes differential privacy, identifiability, and mutual-information privacy by their privacy-distortion functions, which give the optimal achievable privacy level as a function of the maximum allowable distortion. The results show that these notions are fundamentally related and exhibit a certain consistency: (1) the gap between the privacy-distortion functions of identifiability and differential privacy is upper bounded by a constant determined by the prior; (2) identifiability and mutual-information privacy share the same optimal mechanism; (3) the mutual-information optimal mechanism satisfies differential privacy with a level at most a constant away from the optimal level.
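For reference, the three notions compared above can be stated in one standard formulation (the abstract does not give the dissertation's exact definitions), for a mechanism releasing output Y given a database X, with x ~ x' denoting neighboring databases:

```latex
% Differential privacy (epsilon-DP): output distributions on neighbors are close.
\Pr[Y \in S \mid X = x] \;\le\; e^{\epsilon}\, \Pr[Y \in S \mid X = x']
\qquad \forall S,\ \forall x \sim x'

% Identifiability: posteriors over neighboring databases are close
% (prior-dependent, unlike differential privacy).
p_{X \mid Y}(x \mid y) \;\le\; e^{\epsilon}\, p_{X \mid Y}(x' \mid y)
\qquad \forall y,\ \forall x \sim x'

% Mutual-information privacy: the release leaks little information on average.
I(X; Y) \;\le\; \epsilon
```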

The second part studies a market model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The value of epsilon units of privacy is measured by the minimum payment such that an individual's equilibrium strategy is to report data in an epsilon-differentially private manner. For the setting with binary private data representing individuals' knowledge about a common underlying state, asymptotically tight lower and upper bounds on the value of privacy are established as the number of individuals becomes large, and the payment-accuracy tradeoff for learning the state is obtained. The lower bound establishes the impossibility of buying epsilon units of privacy with a lower payment, and the upper bound is attained by a designed reward mechanism. When individuals' valuations of privacy are unknown to the data collector, mechanisms with possibly negative payments (aiming to penalize individuals with "unacceptably" high privacy valuations) are designed to fulfill the accuracy goal and drive the total payment to zero. For the setting with binary private data following a general joint probability distribution with some symmetry, asymptotically optimal mechanisms are designed in the high-data-quality regime.
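The notion of reporting binary data "in an epsilon-differentially private manner" can be made concrete with the classic randomized-response scheme; the minimal sketch below shows that standard technique, not necessarily the equilibrium strategy derived in the dissertation.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report a binary bit in an epsilon-differentially private way.

    The bit is reported truthfully with probability e^eps / (1 + e^eps)
    and flipped otherwise; this satisfies epsilon-DP because the ratio of
    the report probabilities under bit=0 vs. bit=1 is exactly e^eps.
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

# Smaller epsilon (stronger privacy) makes the report less informative,
# which is why, in the market model above, privacy has a price.
if __name__ == "__main__":
    reports = [randomized_response(1, epsilon=1.0) for _ in range(10_000)]
    print(sum(reports) / len(reports))  # roughly 0.73 = e / (1 + e)
```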
Contributors: Wang, Weina (Author) / Ying, Lei (Thesis advisor) / Zhang, Junshan (Thesis advisor) / Scaglione, Anna (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Users often join an online social networking (OSN) site, like Facebook, to remain social, either by staying connected with friends or by expanding their social networks. On an OSN site, users generally share a variety of personal information that is expected to be visible to their friends but is sometimes vulnerable to unwarranted access by others. Recent studies suggest that many personal attributes, including religious and political affiliations, sexual orientation, relationship status, age, and gender, are predictable from users' personal data on an OSN site. The majority of users want to remain socially active while protecting their personal data. This tension leads to user vulnerability, enabling privacy attacks that can cause physical and emotional distress, sometimes with dire consequences; for example, stalkers can exploit personal information available on an OSN site for their personal gain. This dissertation aims to systematically study user vulnerability to such privacy attacks.

User vulnerability can be managed in three steps: (1) identifying, (2) measuring, and (3) reducing it. Researchers have long identified vulnerabilities arising from users' personal data, including user names, demographic attributes, lists of friends, wall posts and associated interactions, multimedia data such as photos, audio, and video, and tagging of friends. Hence, this research first proposes a way to measure and reduce user vulnerability in order to protect such personal data. The dissertation also proposes an algorithm that minimizes a user's vulnerability while maximizing their social utility.
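The abstract does not specify how vulnerability or social utility are scored, so the sketch below assumes simple per-attribute scores and a greedy hide-the-worst-trade-off rule, purely to illustrate the minimize-vulnerability / maximize-utility tension; it is not the dissertation's algorithm.

```python
# Illustrative sketch only: the per-attribute (vulnerability, utility)
# scores and the greedy rule are assumptions for demonstration.
def reduce_vulnerability(attrs, min_utility):
    """attrs: dict name -> (vulnerability, utility). Returns names to hide."""
    visible = dict(attrs)
    hidden = []
    # Consider worst trade-offs first: high vulnerability per unit utility.
    order = sorted(attrs, key=lambda a: attrs[a][0] / max(attrs[a][1], 1e-9),
                   reverse=True)
    for name in order:
        v, u = visible[name]
        remaining_utility = sum(x[1] for x in visible.values()) - u
        # Hide the attribute only if total utility stays above the floor.
        if remaining_utility >= min_utility and v > 0:
            hidden.append(name)
            del visible[name]
    return hidden

profile = {"home_city": (0.9, 0.3), "age": (0.7, 0.2),
           "favorite_band": (0.1, 0.6), "alma_mater": (0.4, 0.5)}
print(reduce_vulnerability(profile, min_utility=1.0))  # ['age', 'home_city']
```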

To address these vulnerability concerns, social networking sites like Facebook usually let their users adjust profile settings to make some of their data invisible. However, users sometimes interact with others through unprotected posts, for example posts from a "Facebook page" (a page dedicated to businesses, brands, and organizations for sharing their stories and connecting with people). Such interactions help users become more social but are publicly accessible to everyone; their visibility is therefore beyond the control of profile settings. I explore such unprotected interactions so that users are aware of these new vulnerabilities and can adopt measures to mitigate them. In particular, are users' personal attributes predictable using only the unprotected interactions? To answer this question, I address the novel problem of predicting users' personal attributes from unprotected interactions alone. The extreme sparsity of users' unprotected interactions poses a serious challenge, so I mitigate it by designing a novel attribute prediction framework that uses only the unprotected interactions. Experimental results on a Facebook dataset demonstrate that the proposed framework can predict users' personal attributes.
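The abstract does not detail the prediction framework; as a minimal illustration of attribute prediction from extremely sparse interaction features, here is a sketch using a bag-of-interactions matrix and plain logistic regression on synthetic data. All names and numbers are made up, and this is not the dissertation's framework.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 5000

# Extremely sparse user-by-page interaction counts (most entries zero),
# mirroring the sparsity challenge noted above.
rows = rng.integers(0, n_users, size=8000)
cols = rng.integers(0, n_pages, size=8000)
X = csr_matrix((np.ones(8000), (rows, cols)), shape=(n_users, n_pages))

# Synthetic binary attribute (e.g., an age bracket) for demonstration.
y = rng.integers(0, 2, size=n_users)

clf = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
# Chance-level here since the labels are random; with real data the score
# would reflect how predictable the attribute is from interactions alone.
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```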
Contributors: Gundecha, Pritam S (Author) / Liu, Huan (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Ye, Jieping (Committee member) / Barbier, Geoffrey (Committee member) / Arizona State University (Publisher)
Created: 2015