Matching Items (34)


Privacy preserving controls for Android applications

Description

Android is currently the most widely used mobile operating system. The permission model in Android governs the resource access privileges of applications. The permission model, however, is amenable to various attacks, including re-delegation attacks, background snooping attacks, and disclosure of private information. This thesis is aimed at understanding, analyzing and performing forensics on application behavior. This research sheds light on several security aspects, including the use of inter-process communications (IPC) to perform permission re-delegation attacks.

The Android permission system is app-driven rather than user-controlled: applications specify their permission requirements, and the only choice left to the user is not to install a particular application based on those requirements. Given this all-or-nothing choice, users often succumb to pressure and accept the requested permissions. This thesis proposes several ways to provide users finer-grained control over application privileges. The same methods can be used to defend against the permission re-delegation attack.

This thesis also proposes and implements a novel methodology in Android that can be used to control the access privileges of an Android application, taking into consideration the context of the running application. This application-context-based permission usage is further used to analyze a set of sample applications. We found evidence of applications spoofing or divulging user-sensitive information such as location, contacts, and phone IDs and numbers in the background. Such activities can be used to track users for a variety of privacy-intrusive purposes. We have developed implementations that minimize several forms of privacy leaks routinely committed by stock applications.
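The thesis implements this control inside Android itself; the sketch below only illustrates the context-based decision logic in plain Python, under assumptions of my own. The policy table, the AppState enum, and the decide() function are hypothetical names, and the choice to return spoofed location data rather than a hard denial is one illustrative option, not the thesis's exact mechanism.

```python
# Hypothetical sketch: a context-aware permission check that maps
# (permission, app state) to a decision, so a background location
# request can be denied or answered with spoofed data.

from dataclasses import dataclass
from enum import Enum


class AppState(Enum):
    FOREGROUND = "foreground"
    BACKGROUND = "background"


@dataclass
class PermissionRequest:
    package: str
    permission: str          # e.g. "ACCESS_FINE_LOCATION"
    state: AppState


# User-defined policy: permission -> app states in which it may be used.
POLICY = {
    "ACCESS_FINE_LOCATION": {AppState.FOREGROUND},
    "READ_CONTACTS": {AppState.FOREGROUND},
    "INTERNET": {AppState.FOREGROUND, AppState.BACKGROUND},
}


def decide(request: PermissionRequest) -> str:
    """Return 'allow', 'deny', or 'spoof' for a permission request."""
    allowed_states = POLICY.get(request.permission, set())
    if request.state in allowed_states:
        return "allow"
    # Returning fake data instead of a hard denial keeps apps from crashing.
    return "spoof" if request.permission == "ACCESS_FINE_LOCATION" else "deny"


if __name__ == "__main__":
    req = PermissionRequest("com.example.app", "ACCESS_FINE_LOCATION",
                            AppState.BACKGROUND)
    print(decide(req))   # -> spoof
```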

Date Created
2014

Establishing distributed social network trust model in MobiCloud system

Description

This thesis proposes a novel approach to establishing a trust model in a social network scenario based on users' emails. Email is one of the most important social connections nowadays. By analyzing email exchange activities among users, a social network trust model can be established to judge the trust rate between any two users. The whole trust-checking process is divided into two steps: local checking and remote checking. Local checking directly contacts the email server to calculate the trust rate based on the user's own email communication history. Remote checking is a distributed computing process that gets help from the user's social network friends to build the trust rate together. The email-based trust model is built upon a cloud computing framework called MobiCloud. Inside MobiCloud, each user occupies a virtual machine which can directly communicate with others. Based on this feature, the distributed trust model is implemented as a combination of local analysis and remote analysis in the cloud. Experiment results show that the trust evaluation model can give accurate trust rates even in a small-scale social network that does not have many social connections. With this trust model, the security of both social network services and email communication can be improved.
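As an illustration only, the sketch below combines a locally computed trust score with scores reported by friends' virtual machines. The frequency-based local score, the averaging of remote scores, and the alpha weighting are all assumptions made for the example; they are not the thesis's actual trust formula.

```python
# Illustrative combination of local checking (one's own e-mail history)
# and remote checking (scores reported by social-network friends).

from collections import Counter


def local_trust(my_history, target):
    """Fraction of my e-mail exchanges that involve the target address."""
    counts = Counter(my_history)
    total = sum(counts.values())
    return counts[target] / total if total else 0.0


def combined_trust(my_history, target, friend_scores, alpha=0.6):
    """Weighted combination of the local and remote trust rates."""
    local = local_trust(my_history, target)
    remote = sum(friend_scores) / len(friend_scores) if friend_scores else 0.0
    return alpha * local + (1 - alpha) * remote


if __name__ == "__main__":
    history = ["alice@x.org", "bob@y.org", "alice@x.org", "carol@z.org"]
    print(combined_trust(history, "alice@x.org", friend_scores=[0.4, 0.7]))
```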

Date Created
2011

Towards demographic information release in LBS k-Anonymization

Description

The increasing number of continually connected mobile persons has created an environment conducive to real-time user data gathering for many uses, both public and private in nature. Publicly, one can envision no longer requiring a census to determine the demographic composition of the country and its subregions. The information provided is vastly more up to date than that of a census and allows civil authorities to be more agile and preemptive with planning. Privately, advertisers take advantage of a person's stated opinions, demographics, and contextual (where and when) information in order to formulate and present pertinent offers.

Regardless of its use, this information can be sensitive in nature and should therefore be under the control of the user. Currently, a user has little say in the manner in which their information is processed once it has been released. An ad-hoc approach is currently in use, where location-based service providers each maintain their own policy over personal information usage.

In order to allow users more control over their personal information while still providing for targeted advertising, a systematic approach to the release of the information is needed. It is for that reason that we propose a User-Centric Context Aware Spatiotemporal Anonymization framework. At its core, the framework unifies current spatiotemporal anonymization with traditional anonymization so that the user-specified anonymization requirement is met or exceeded while allowing more demographic information to be released.
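One common building block of spatiotemporal anonymization is spatial cloaking, sketched below: a user's exact location is generalized to an ever-coarser grid cell until at least k users fall into the same cell. The grid scheme, growth factor, and k value are assumptions for illustration, not the framework's actual algorithm.

```python
# Toy spatial-cloaking sketch for k-anonymity over locations.

def cell(lat, lon, size):
    """Snap a coordinate to the lower-left corner of a square grid cell."""
    return (int(lat // size) * size, int(lon // size) * size)


def cloak(user, others, k=5, start=0.001, factor=2.0, max_size=1.0):
    """Grow the cell until it contains the user plus at least k-1 others."""
    size = start
    while size <= max_size:
        c = cell(*user, size)
        peers = sum(1 for p in others if cell(*p, size) == c)
        if peers + 1 >= k:
            return c, size
        size *= factor
    return cell(*user, max_size), max_size


if __name__ == "__main__":
    user = (33.4255, -111.9400)
    others = [(33.4251, -111.9405), (33.4270, -111.9410),
              (33.4190, -111.9300), (33.4300, -111.9500)]
    print(cloak(user, others, k=5))   # coarse cell shared by >= 5 users
```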

Date Created
2014

Flexible analyst defined viewpoint for malware relationship analysis

Description

The rate at which new malicious software (malware) is created is consistently increasing each year. These new malware samples are designed to bypass the current anti-virus countermeasures employed to protect computer systems. Security Analysts must understand the nature and intent of a malware sample in order to protect computer systems from these attacks. The large number of new malware samples received daily by computer security companies requires Security Analysts to quickly determine the type, threat, and countermeasure for newly identified samples. Our approach provides a visualization tool that assists the Security Analyst in these tasks by allowing the Analyst to visually identify relationships between malware samples.

This approach consists of three steps. First, the received samples are processed by a sandbox environment to perform a dynamic behavior analysis. Second, the reports of the dynamic behavior analysis are parsed to extract identifying features, which are matched against other known and analyzed samples. Lastly, those matches that are determined to express a relationship are visualized as an edge-connected pair of nodes in an undirected graph.
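A simplified sketch of the last two steps is shown below: feature sets extracted from already-parsed sandbox reports are compared pairwise, and an undirected edge is kept when the similarity passes a threshold. The Jaccard measure and the 0.5 threshold are assumptions for illustration; the thesis's actual feature-matching criteria are not reproduced here.

```python
# Build an undirected relationship graph from per-sample feature sets.

from itertools import combinations


def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


def build_relationship_graph(reports, threshold=0.5):
    """reports: dict mapping sample name -> iterable of behavioral features."""
    edges = []
    for (s1, f1), (s2, f2) in combinations(reports.items(), 2):
        score = jaccard(f1, f2)
        if score >= threshold:
            edges.append((s1, s2, score))   # edge between related samples
    return edges


if __name__ == "__main__":
    reports = {
        "sample_a": {"mutex:abc", "connects:evil.example", "drops:svch0st.exe"},
        "sample_b": {"mutex:abc", "connects:evil.example", "drops:winlogin.exe"},
        "sample_c": {"reads:hosts", "connects:update.example"},
    }
    for edge in build_relationship_graph(reports):
        print(edge)
```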

Date Created
2014

Policy-driven security management for gateway-oriented reconfigurable ecosystems

Description

With increasing user demand for low latency, elastic provisioning of computing resources, coupled with ubiquitous and on-demand access to real-time data, cloud computing has emerged as a popular computing paradigm to meet growing user demands. However, with the introduction and rising use of wearable technology and evolving uses of smartphones, the concept of the Internet of Things (IoT) has become a prevailing notion in the currently growing technology industry. Cisco Inc. has projected that approximately 403 Zettabytes (ZB) of data will be created by 2018. The combination of deploying benign devices and connecting them to the web has resulted in exploding service and data aggregation requirements, thus requiring a new and innovative computing platform. This platform should have the capability to provide robust real-time data analytics and resource provisioning to clients, such as IoT users, on demand. Such a computation model would need to function at the edge of the network, forming a bridge between the large cloud data centers and the distributed connected devices.

This research expands on the notion of bringing computational power to the edge of the network and then integrating it with the cloud computing paradigm while providing services to diverse IoT-based applications. This expansion is achieved through the establishment of a new computing model that serves as a platform for IoT-based devices to communicate with services in real time. We name this paradigm Gateway-Oriented Reconfigurable Ecosystem (GORE) computing. Finally, this thesis proposes and discusses the development of a policy management framework for accommodating the proposed computational paradigm. The policy framework is designed to serve both the hosted applications and the GORE paradigm by enabling them to function more efficiently. The goal of the framework is to ensure uninterrupted communication and service delivery between users and their applications.

Date Created
2015

Data Protection over Cloud

Description

Data protection has long been a point of contention and a vastly researched field. With the advent of technology and advances in Internet technologies, securing data has become much more challenging these days. Cloud services have become very popular. Given the ease of access and availability of these systems, it is hard not to use the cloud to store data. This, however, poses a significant risk to data security, as more of one's data becomes available to a third party. Given the easy transmission and almost infinite storage of data, securing one's sensitive information has become a major challenge.

Cloud service providers may not be trusted completely with one's data. It is not uncommon for providers to snoop on the data to find interesting patterns for generating ad revenue, or to divulge information to third parties such as government and law-enforcement agencies. For enterprises that use cloud services, this poses a risk to their intellectual property and business secrets. With more and more employees using the cloud for their day-to-day work, businesses now face the risk of losing or leaking information.

In this thesis, I have focused on ways to protect data and information from the cloud, a third party not authorized to use the data, while still utilizing cloud services for transfer and availability of data. This research proposes an alternative to an on-premise secure infrastructure, giving the user flexibility in protecting the data and control over it. The project uses cryptography to protect data and creates a secure architecture for secret-key migration in order to decrypt the data securely for the intended recipient. It utilizes Intel's technology, which gives it an added advantage over other existing solutions.
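The encrypt-before-upload step can be sketched with the `cryptography` package's Fernet recipe (symmetric authenticated encryption), as below. This is a minimal illustration of the principle only: the cloud provider ever sees ciphertext, never the key. The thesis's secure key-migration architecture built on Intel's technology is not reproduced here.

```python
# Minimal encrypt-before-upload sketch using the `cryptography` package.

from cryptography.fernet import Fernet


def encrypt_for_upload(plaintext: bytes):
    """Return (key, ciphertext); only the ciphertext goes to the cloud."""
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(plaintext)
    return key, token


def decrypt_after_download(key: bytes, token: bytes) -> bytes:
    """The intended recipient decrypts after receiving the key securely."""
    return Fernet(key).decrypt(token)


if __name__ == "__main__":
    key, token = encrypt_for_upload(b"quarterly forecast - confidential")
    # `token` is what would be stored with the cloud provider.
    assert decrypt_after_download(key, token) == b"quarterly forecast - confidential"
    print("round trip ok")
```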

Date Created
2016

E-mail header injections - an analysis of the World Wide Web

Description

E-Mail header injection is a class of vulnerability that can occur in web applications that use user input to construct e-mail messages. E-Mail injection is possible when the mailing script fails to check for the presence of e-mail headers in user input (either form fields or URL parameters). The vulnerability exists in the reference implementation of the built-in “mail” functionality in popular languages like PHP, Java, Python, and Ruby. With the proper injection string, this vulnerability can be exploited to inject additional headers and/or modify existing headers in an e-mail message, allowing an attacker to completely alter the content of the e-mail.
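The sketch below illustrates the missing check the abstract describes: a mailing script must refuse (or strip) CR/LF sequences in user-supplied values before placing them into headers, otherwise an attacker can append headers such as extra Bcc recipients. The function names and the exact pattern are hypothetical, not taken from the thesis.

```python
# Illustrative server-side guard against e-mail header injection.

import re

# Raw or URL-encoded line breaks are the injection vector.
CRLF_PATTERN = re.compile(r"[\r\n]|%0[ad]", re.IGNORECASE)


def is_header_safe(value: str) -> bool:
    """Reject values containing raw or URL-encoded line breaks."""
    return CRLF_PATTERN.search(value) is None


def build_headers(sender: str, subject: str) -> str:
    for field in (sender, subject):
        if not is_header_safe(field):
            raise ValueError("possible e-mail header injection attempt")
    return f"From: {sender}\r\nSubject: {subject}\r\n"


if __name__ == "__main__":
    print(build_headers("user@example.com", "Hello"))
    try:
        build_headers("user@example.com",
                      "Hello\r\nBcc: victim1@example.com,victim2@example.com")
    except ValueError as exc:
        print(exc)
```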

This thesis develops a scalable mechanism to automatically detect E-Mail Header Injection vulnerabilities and uses this mechanism to quantify their prevalence on the Internet. Using a black-box testing approach, the system crawled 21,675,680 URLs to find URLs which contained form fields. 6,794,917 such forms were found by the system, of which 1,132,157 forms contained e-mail fields. The system used this data feed to discern the forms that could be fuzzed with malicious payloads. Among the 934,016 forms tested, 52,724 forms were found to be injectable with more malicious payloads. The system tested 46,156 of these and was able to find 496 vulnerable URLs across 222 domains, which shows that the threat is widespread and deserves future research attention.

Date Created
2016

iGen: Toward Automatic Generation and Analysis of Indicators of Compromise (IOCs) using Convolutional Neural Network

Description

The field of cyber threats is evolving rapidly, and every day a multitude of new information about malware and Advanced Persistent Threats (APTs) is generated in the form of malware reports, blog articles, forum posts, etc. However, current Threat Intelligence (TI) systems have several limitations. First, most TI systems examine and interpret data manually with the help of analysts. Second, some of them generate Indicators of Compromise (IOCs) directly using regular expressions, without understanding the contextual meaning of those IOCs in the data sources, which causes the tools to include many false positives. Third, many TI systems consider only one or two data sources for the generation of IOCs and miss some of the most valuable IOCs from other data sources.

To overcome these limitations, we propose iGen, a novel approach to fully automate the process of IOC generation and analysis. The proposed approach is based on the idea that our model can understand English texts like human beings and extract the IOCs from the different data sources intelligently. Identification of the IOCs is done on the basis of the syntax and semantics of the sentence as well as context words (e.g., "attacked", "suspicious") present in the sentence, which helps the approach work on any kind of data source. Our proposed technique first removes words with no contextual meaning, such as stop words and punctuation. Then, using the remaining words in the sentence and the output label (IOC or non-IOC sentence), our model learns to classify sentences into IOC and non-IOC sentences. Once IOC sentences are identified using this learned Convolutional Neural Network (CNN) based approach, the next step is to identify the IOC tokens (like domains, IPs, and URLs) in those sentences. This CNN-based classification model helps in removing false positives (like IPs which are not malicious). Afterwards, IOCs extracted from different data sources are correlated to find the links between thousands of apparently unrelated attack instances, particularly infrastructures shared between them. Our approach fully automates the process of IOC generation, from gathering data from different sources to creating rules (e.g., OpenIOC, Snort rules, STIX rules) for deployment on the security infrastructure.

iGen has collected around 400K IOCs to date with a precision of 95%, better than any state-of-the-art method.
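The CNN-based sentence classification step described above can be sketched with TensorFlow/Keras as below. The layer sizes, vocabulary limit, and toy training sentences are assumptions made for the example; the thesis's actual architecture, features, and corpus are not reproduced here.

```python
# Minimal CNN sentence classifier: IOC vs. non-IOC sentences.

import tensorflow as tf
from tensorflow.keras import layers

# Toy labelled sentences: 1 = describes an IOC in a malicious context.
sentences = [
    "the sample connected to evil-domain.example over port 8080",
    "traffic to 203.0.113.7 was flagged as suspicious by the sandbox",
    "the blog post thanks the community for feedback",
    "contact us at support@example.com for more information",
]
labels = [1, 1, 0, 0]

vectorize = layers.TextVectorization(max_tokens=2000, output_sequence_length=32)
vectorize.adapt(sentences)

model = tf.keras.Sequential([
    vectorize,                                   # tokenize raw strings
    layers.Embedding(2000, 64),                  # word embeddings
    layers.Conv1D(128, 3, activation="relu"),    # n-gram style filters
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(IOC sentence)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(sentences), tf.constant(labels, dtype=tf.float32),
          epochs=5, verbose=0)

print(model.predict(tf.constant(["outbound beacon to evil-domain.example"]),
                    verbose=0))
```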

Date Created
2017

Attribute-Based Encryption for Fine-Grained Access Control over Sensitive Data

Description

The traditional access control system suffers from the problem of separation of data ownership and management. It poses data security issues in application scenarios such as cloud computing and blockchain, where the data owners either do not trust the data storage provider or do not even know who would have access to their data once they are appended to the chain. In these scenarios, the data owner effectively loses control of the data once they are uploaded to outside storage. Encryption before uploading is the way to solve this issue; however, traditional encryption schemes such as AES, RSA, and ECC bring great key-management overhead on the data owner's end and cannot provide fine-grained access control.

Attribute-Based Encryption (ABE) is a cryptographic way to implement attribute-based access control, which is a fine-grained access control model, thus solving all the aforementioned issues. With ABE, the data owner encrypts the data under a self-defined access control policy before uploading it. The access control policy is an AND-OR boolean formula over attributes. Only users with attributes that satisfy the access control policy can decrypt the ciphertext. However, the existing ABE schemes do not provide some important features needed in practical applications, e.g., user revocation and attribute expiration. Furthermore, most existing work focuses on how to use ABE to protect cloud-stored data rather than blockchain applications.
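The access-control condition behind ABE can be illustrated without any cryptography: a policy is an AND-OR formula over attributes, and a user's attribute set either satisfies it or not. The sketch below only evaluates such a formula; the actual ABE schemes in the thesis enforce the same condition cryptographically, which this example does not attempt. The tuple encoding of policies and the sample attributes are assumptions for illustration.

```python
# Evaluate an AND-OR access policy over a user's attribute set.

# A policy is either an attribute string or a tuple ("AND"/"OR", subpolicy, ...)
POLICY = ("AND", "doctor", ("OR", "cardiology", "emergency"))


def satisfies(policy, attributes):
    """Return True if the attribute set satisfies the boolean policy."""
    if isinstance(policy, str):
        return policy in attributes
    op, *children = policy
    results = (satisfies(child, attributes) for child in children)
    return all(results) if op == "AND" else any(results)


if __name__ == "__main__":
    print(satisfies(POLICY, {"doctor", "cardiology"}))   # True  -> could decrypt
    print(satisfies(POLICY, {"nurse", "cardiology"}))    # False -> cannot decrypt
```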

The main objective of this thesis is to provide solutions that add two important features to ABE schemes, i.e., user revocation and attribute expiration, and also to provide a practical trust framework for using ABE to protect blockchain data. To add the feature of user revocation, I propose to add the user's hierarchical identity into the private attribute key. In this way, only users whose identity is not revoked and whose attributes satisfy the access control policy can decrypt the ciphertext. To add the feature of attribute expiration, I propose to add the attribute's valid time period into the private attribute key. The data are encrypted under an access control policy in which every attribute carries a temporal value. In this way, only users whose attributes satisfy the access policy and have not expired are allowed to decrypt the ciphertext. To use ABE in blockchain applications, I propose an ABE-enabled trust framework in a very popular blockchain platform, Hyperledger Fabric. Based on this design, I implement a lightweight attribute certificate authority for attribute distribution and validation; I implement the proposed ABE schemes and provide a toolkit which supports system setup, key generation, data encryption, and data decryption. All these modules were integrated into a demo system for protecting sensitive files in a blockchain application.

Date Created
2020

IRE: A Framework For Inductive Reverse Engineering

Description

Reverse engineering is critical to reasoning about how a system behaves. While complete access to a system inherently allows for perfect analysis, partial access is inherently uncertain. This is the case for an individual agent in a distributed system. Inductive Reverse Engineering (IRE) enables analysis under such circumstances. IRE does this by producing program spaces consistent with individual input-output examples for a given domain-specific language. Then, IRE intersects those program spaces to produce a generalized program consistent with all examples. IRE, an easy-to-use framework, allows this domain-specific language to be specified in the form of Theorist objects, which produce Theory objects, a succinct way of representing the program space.
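A toy illustration of the intersection idea follows: each input-output example admits a set of candidate programs from a tiny domain-specific language, and intersecting those sets across examples yields the programs consistent with all of them. IRE's Theorist/Theory machinery represents these spaces symbolically rather than by enumeration; the DSL and function names below are assumptions made for the example.

```python
# Intersect per-example program spaces over a tiny string-transformation DSL.

DSL = {
    "upper": str.upper,
    "lower": str.lower,
    "reverse": lambda s: s[::-1],
    "strip": str.strip,
    "identity": lambda s: s,
}


def consistent_programs(example):
    """Names of DSL programs that map the example input to its output."""
    inp, out = example
    return {name for name, fn in DSL.items() if fn(inp) == out}


def synthesize(examples):
    """Intersect the per-example program spaces."""
    spaces = [consistent_programs(ex) for ex in examples]
    return set.intersection(*spaces) if spaces else set()


if __name__ == "__main__":
    examples = [("abc", "ABC"), ("Hello", "HELLO")]
    print(synthesize(examples))   # {'upper'}
```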

Programs are often much more complex than simple string transformations. One of the ways in which they are more complex is that they follow conversation-like behavior, potentially following some underlying protocol. As a result, IRE represents program interactions as Conversations in order to more correctly model a distributed system. This, for instance, enables IRE to model dynamically captured inputs received from other agents in the distributed system.

While domain-specific knowledge provided by a user is extremely valuable, such information is not always available. IRE mitigates this by automatically inferring program grammars, allowing it to still perform efficient searches of the program space. It does this by intersecting conversations prior to synthesis in order to understand which portions of the conversations are constant.

IRE exists to be a tool that can aid in automatic reverse engineering across numerous domains. Further, IRE aspires to be a centralized location and interface for implementing program synthesis and automatic black box analysis techniques.

Date Created
2019