Matching Items (61)

Description
Android is currently the most widely used mobile operating system. The permission model in Android governs the resource access privileges of applications. The permission model, however, is vulnerable to various attacks, including re-delegation attacks, background snooping attacks, and disclosure of private information. This thesis is aimed at understanding, analyzing, and performing forensics on application behavior. This research sheds light on several security aspects, including the use of inter-process communication (IPC) to perform permission re-delegation attacks.

The Android permission system is app-driven rather than user-controlled: applications specify their permission requirements, and all the user can do is decline to install a particular application based on those requirements. Given this all-or-nothing choice, users succumb to pressure and accept the permissions requested. This thesis proposes two ways of providing users finer-grained control over application privileges. The same methods can be used to defeat the permission re-delegation attack.

This thesis also proposes and implements a novel methodology in Android that can be used to control the access privileges of an Android application, taking into consideration the context of the running application. This application-context-based permission usage is further used to analyze a set of sample applications. We found evidence of applications spoofing or divulging sensitive user information, such as location, contacts, phone identifiers, and phone numbers, in the background. Such activities can be used to track users for a variety of privacy-intrusive purposes. We have developed implementations that minimize several forms of privacy leaks routinely committed by stock applications.
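
A minimal sketch of the context-dependent permission check this methodology describes, written in Python for illustration rather than inside Android; the context fields and policy rules are assumptions, not the thesis's implementation:

```python
# Sketch of context-aware permission mediation (illustrative only; the
# thesis implements this inside Android's permission model, not in Python).
from dataclasses import dataclass

@dataclass
class Context:
    app_in_foreground: bool
    screen_on: bool

# Hypothetical policy: deny sensitive permissions to backgrounded apps.
POLICY = {
    "ACCESS_FINE_LOCATION": lambda ctx: ctx.app_in_foreground,
    "READ_CONTACTS": lambda ctx: ctx.app_in_foreground and ctx.screen_on,
}

def check_permission(permission: str, ctx: Context) -> bool:
    """Grant a permission only when the running context allows it."""
    rule = POLICY.get(permission)
    return rule(ctx) if rule is not None else False

background = Context(app_in_foreground=False, screen_on=True)
print(check_permission("ACCESS_FINE_LOCATION", background))  # False: blocks background snooping
```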
Contributors: Gollapudi, Narasimha Aditya (Author) / Dasgupta, Partha (Thesis advisor) / Xue, Guoliang (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The rate at which new malicious software (malware) is created increases consistently each year. These new malware samples are designed to bypass the current anti-virus countermeasures employed to protect computer systems. Security Analysts must understand the nature and intent of a malware sample in order to protect computer systems from these attacks. The large number of new malware samples received daily by computer security companies requires Security Analysts to quickly determine the type, threat, and countermeasure for newly identified samples. Our approach provides a visualization tool that assists the Security Analyst in these tasks by allowing the Analyst to visually identify relationships between malware samples.

This approach consists of three steps. First, the received samples are processed in a sandbox environment to perform dynamic behavior analysis. Second, the reports of the dynamic behavior analysis are parsed to extract identifying features, which are matched against other known and analyzed samples. Lastly, matches that are determined to express a relationship are visualized as an edge-connected pair of nodes in an undirected graph.
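
A toy sketch of the matching and visualization steps, assuming Jaccard similarity over feature sets parsed from sandbox reports; the features and the 0.4 threshold are illustrative:

```python
# Connect malware samples whose extracted behavior features overlap enough
# to suggest a relationship (step 3 of the approach).
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical feature sets extracted from dynamic behavior reports.
samples = {
    "sample_A": {"mutex:xyz", "dns:evil.example", "reg:Run\\svc"},
    "sample_B": {"mutex:xyz", "dns:evil.example", "file:drop.exe"},
    "sample_C": {"dns:other.example"},
}

# Matches strong enough to express a relationship become graph edges.
edges = [
    (u, v) for u, v in combinations(samples, 2)
    if jaccard(samples[u], samples[v]) >= 0.4  # illustrative threshold
]
print(edges)  # [('sample_A', 'sample_B')]: an edge in the undirected graph
```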
Contributors: Holmes, James Edward (Author) / Ahn, Gail-Joon (Thesis advisor) / Dasgupta, Partha (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Web applications remain the most popular method of interaction for businesses over the Internet. With their simplicity of use and management, they often function as the "front door" for many companies. As such, they are a critical component of the security ecosystem, as vulnerabilities present in these systems could potentially allow malicious users access to sensitive business and personal data.

The inherent nature of web applications enables anyone to access them anytime and anywhere, including any malicious actors looking to exploit vulnerabilities present in the web application. In addition, the static configuration of these web applications gives attackers the opportunity to perform reconnaissance at their leisure, increasing their success rate by allowing them time to discover information about the system. Defenders, on the other hand, are often at a disadvantage, as they do not have the same temporal opportunity that attackers possess to perform counter-reconnaissance. Lastly, the unchanging nature of web applications allows undiscovered vulnerabilities to remain open for exploitation, forcing developers either to adopt a reactive approach that is often delayed or to anticipate and prepare for all possible attacks, which is often cost-prohibitive.

Moving Target Defense (MTD) seeks to remove the attackers' advantage by reducing the information asymmetry between the attacker and defender. This research explores the concept of MTD and the various methods of applying MTD to secure Web Applications. In particular, MTD concepts are applied to web applications by implementing an automated application diversifier that aims to mitigate specific classes of web application vulnerabilities and exploits. Evaluation is done using two open source web applications to determine the effectiveness of the MTD implementation. Though developed for the chosen applications, the automation process can be customized to fit a variety of applications.
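
As a hedged illustration of one kind of diversification, the sketch below renames externally visible identifiers on each deployment so that previously gathered reconnaissance goes stale; the renaming scheme and template are stand-ins, not the diversifier implemented in the thesis:

```python
# Toy MTD diversifier: rewrite chosen identifiers to per-deployment aliases.
import re
import secrets

def diversify(source: str, identifiers: list[str]) -> str:
    """Rewrite each listed identifier to a random per-deployment alias."""
    mapping = {name: f"{name}_{secrets.token_hex(4)}" for name in identifiers}
    for name, alias in mapping.items():
        source = re.sub(rf"\b{re.escape(name)}\b", alias, source)
    return source

# Hypothetical form template; a canned exploit hard-coding these field
# names would break after every redeployment.
template = "<input name='user_id'><input name='session_token'>"
print(diversify(template, ["user_id", "session_token"]))
```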
Contributors: Taguinod, Marthony (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Thesis advisor) / Yau, Sik-Sang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Web applications are ubiquitous. Accessible from almost anywhere, web applications support multiple platforms and can be easily customized. Most people interact with web applications daily for social media, communication, research, purchases, etc. Node.js has gained popularity as a platform for web applications: a server-side JavaScript implementation, Node.js allows both the front end and the back end to be coded in JavaScript. Node.js provides many features, such as dynamic inclusion of other modules through a built-in function named require, which dynamically locates and loads code.

To be effective, web applications must perform actions quickly while avoiding unexpected interruptions. However, dynamically linked libraries can cause delays, and thus downtime, because dynamically linked code must load multiple files, often from disk. As loading is one of the slowest operations a computer performs, seeking from disk can have a negative impact on performance, making the server feel less responsive to users. Dynamically linked code can also break when an underlying library is updated. Normally, when updating a server, developers use test servers. However, if a developer accidentally updates a library in a dynamically linked system, it may be incompatible with another portion of the program.

Statically linked code is more reliable and faster to load than dynamically linked code. The static linking process varies by programming language, so different static linkers need to be developed for different languages. This thesis describes the creation of a static linker, called FrozenNode, for the popular back-end web application platform Node.js. FrozenNode resolves a Node.js application into a single file that does not rely on dynamic libraries. FrozenNode was built on top of Closure Compiler to accurately process JavaScript. We found that the resolved application was faster and self-contained, yielding significant advantages over the dynamically loaded application; furthermore, both produced the same output.
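
A naive sketch of the core idea, inlining relative require() calls into a single file; it is written in Python for illustration and the module-wrapping scheme is an assumption, whereas FrozenNode itself builds on Closure Compiler and handles much more (caching, scoping, non-relative modules):

```python
# Toy static "linker" for Node.js-style modules: recursively replace
# require('./x') with the wrapped source of ./x.js.
import re
from pathlib import Path

REQUIRE = re.compile(r"require\(['\"](\./[\w/]+)['\"]\)")

def bundle(entry: Path, seen=None) -> str:
    seen = seen if seen is not None else set()
    src = entry.read_text()

    def inline(match: re.Match) -> str:
        dep = (entry.parent / match.group(1)).with_suffix(".js")
        if dep in seen:                      # avoid inlining a module twice
            return "undefined /* already inlined */"
        seen.add(dep)
        # Wrap the module so its exports become an expression value.
        return ("(function(){var module={exports:{}};"
                + bundle(dep, seen)
                + ";return module.exports;})()")

    return REQUIRE.sub(inline, src)

print(bundle(Path("app.js")))  # one self-contained source, no dynamic loads
```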

Vulnerabilities in web applications can be found using static analysis tools; however, such tools must otherwise reason about dynamically linked applications. FrozenNode can be used to statically link a Node.js application before it is analyzed by a JavaScript static analysis tool.
Contributors: Hutchins, James (Author) / Doupe, Adam (Thesis advisor) / Shoshitaishvili, Yan (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Data breaches have been on the rise, and the financial sector is among the top targets. It can take a few months, and up to a few years, to identify the occurrence of a data breach. A major motivation behind data breaches is financial gain, so most of the stolen data ends up for sale on darkweb websites. It is important to identify the sale of such stolen information in a timely and relevant manner. In this research, we present a system for timely identification of stolen data being sold on darkweb websites. We frame the identification of stolen-data sales as a multi-label classification problem and leverage several machine learning approaches based on the thread content (textual features) and social network analysis of the user communication seen on darkweb websites. The system generates alerts about trends based on popularity among the users of such websites. We evaluate our system using K-fold cross validation as well as manual evaluation of blind (unseen) data. Combining social network and textual features outperforms the baseline method, which uses textual features only, by a 15 to 20% improvement in precision. The alerts provide good insight, and we illustrate our findings with case studies of the results.
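
A hedged sketch of the combined-features idea: concatenate textual (TF-IDF) and social-network features before multi-label classification. The example threads, labels, and the single centrality feature are placeholders, not the thesis's dataset or exact models:

```python
# Multi-label classification over textual + social-network features.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

threads = ["selling fresh cc dumps with cvv", "free proxy list updated daily"]
user_centrality = np.array([[0.9], [0.1]])  # e.g., poster's network centrality
labels = np.array([[1, 1], [0, 0]])         # multi-label: [stolen_data, financial]

X_text = TfidfVectorizer().fit_transform(threads)
X = hstack([X_text, user_centrality])       # textual and social features combined

clf = OneVsRestClassifier(LogisticRegression()).fit(X, labels)
print(clf.predict(X))
```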
Contributors: Dharaiya, Krishna Tushar (Author) / Shakarian, Paulo (Thesis advisor) / Doupe, Adam (Committee member) / Shoshitaishvili, Yan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The ease of programmability in Software-Defined Networking (SDN) makes it a great platform for implementing initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. However, implementing security solutions in such an environment is fraught with policy conflicts and consistency issues, and the hardness of this problem is affected by the distribution scheme for the SDN controllers.

In this dissertation, a formalism for flow rule conflicts in SDN environments is introduced. This formalism is realized in Brew, a security policy analysis framework implemented on an OpenDaylight SDN controller. Brew has comprehensive conflict detection and resolution modules to ensure that no two flow rules in a distributed SDN-based cloud environment conflict at any layer, thereby assuring consistent, conflict-free security policy implementation and preventing information leakage. Techniques for global prioritization of flow rules in a decentralized environment are presented, using which all SDN flow rule conflicts are recognized and classified. Strategies for unassisted resolution of these conflicts are also detailed. Alternatively, if administrator input is desired to resolve conflicts, a novel visualization scheme is implemented to help administrators view the conflicts in an aesthetic manner. The correctness, feasibility, and scalability of the Brew proof-of-concept prototype are demonstrated. Flow rule conflict avoidance using a buddy address space management technique is studied as an alternative to conflict detection and resolution in highly dynamic cloud systems implementing SDN-based Moving Target Defense (MTD) countermeasures.
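
A simplified sketch of pairwise conflict detection: two rules conflict when their match spaces overlap but their actions differ. The rule model below is deliberately reduced; Brew's formalism also covers priorities, rule layers, and distributed controllers:

```python
# Detect contradictory flow rules with overlapping match fields.
from dataclasses import dataclass

@dataclass
class FlowRule:
    src: str      # source match, "*" = wildcard
    dst: str      # destination match
    action: str   # "ALLOW" or "DENY"

def fields_overlap(a: str, b: str) -> bool:
    return a == "*" or b == "*" or a == b

def conflicts(r1: FlowRule, r2: FlowRule) -> bool:
    return (fields_overlap(r1.src, r2.src)
            and fields_overlap(r1.dst, r2.dst)
            and r1.action != r2.action)

r1 = FlowRule("10.0.0.0/24", "*", "ALLOW")
r2 = FlowRule("*", "10.0.1.5", "DENY")
print(conflicts(r1, r2))  # True: overlapping matches, contradictory actions
```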
Contributors: Pisharody, Sandeep (Author) / Huang, Dijiang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Syrotiuk, Violet (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The volume and frequency of cyber attacks have exploded in recent years. Organizations subscribe to multiple threat intelligence feeds to increase their knowledge base and better equip their security teams with the latest information in the threat intelligence domain. Though such subscriptions add intelligence and can support more informed decisions, organizations must put considerable effort into collating and analyzing a large number of threat indicators. The problem worsens further due to the large number of false positives and irrelevant events reported as threat indicators by existing threat feed sources. It is often neither practical nor cost-effective to analyze every single alert, considering the staggering volume of indicators. This motivates solving the problem of overcrowded threat indicators by prioritizing and filtering them.

To overcome this issue, I explain the necessity of determining how likely a reported indicator is to be malicious given the evidence, and of prioritizing it based on that determination. The Confidence Score Measurement system (CSM) introduces the concept of a confidence score, assigning a threat indicator a score of being malicious based on the evaluation of different threat intelligence systems. An indicator propagates maliciousness to adjacent indicators based on relationships determined from the indicator's behavior. The propagation algorithm derives a final confidence score to determine the overall maliciousness of the threat indicator. CSM can prioritize indicators based on the confidence score; however, an analyst may not be interested in the entire result set, so CSM narrows down the results based on analyst-driven input. To this end, CSM introduces the concept of a relevance score, which combines the confidence score with an analyst-driven search by applying full-text search techniques. It prioritizes the results based on the relevance score to provide meaningful results to the analyst. The analysis shows that the propagation algorithm of CSM scales linearly with larger datasets and achieves 92% accuracy in determining threat indicators. The evaluation of the results demonstrates the effectiveness and practicality of the approach.
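
A toy sketch of score propagation over an indicator graph; the damped max-of-neighbors update, fixed iteration count, and example graph are illustrative assumptions rather than CSM's exact algorithm:

```python
# Each indicator inherits a damped share of its most malicious neighbor's
# score; iterate until scores stabilize.
graph = {  # indicator -> related indicators
    "evil.example": ["198.51.100.7", "mal.example"],
    "198.51.100.7": ["evil.example"],
    "mal.example": ["evil.example"],
}
score = {"evil.example": 0.9, "198.51.100.7": 0.1, "mal.example": 0.0}

DAMPING = 0.5
for _ in range(10):  # fixed iterations; a real system would test convergence
    updated = {}
    for node, neighbors in graph.items():
        inherited = max((score[n] for n in neighbors), default=0.0)
        updated[node] = max(score[node], DAMPING * inherited)
    score = updated

print(score)  # neighbors of a highly malicious indicator gain confidence
```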
Contributors: Modi, Ajay (Author) / Ahn, Gail-Joon (Thesis advisor) / Zhao, Ziming (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The Internet is a major source of online news content. Online news is a form of large-scale narrative text with rich, complex contents that embed deep meanings (facts, strategic communication frames, and biases) for shaping and transitioning standards, values, attitudes, and beliefs of the masses. Currently, this body of narrative text remains untapped due, in large part, to human limitations. The human ability to comprehend rich text and extract hidden meanings is far superior to known computational algorithms but does not scale. In this research, computational treatment is given to online news framing to expose a deeper level of expressivity, coined “double subjectivity,” characterized by its cumulative amplification effects. A visual language is offered for extracting the spatial and temporal dynamics of double subjectivity that may give insight into social influence on critical issues, such as environmental, economic, or political discourse. This research offers the benefits of 1) scalability for processing hidden meanings in big data and 2) visibility of the entire network's dynamics over time and space, giving users insight into the current status and future trends of mass communication.
Contributors: Cheeks, Loretta H. (Author) / Gaffar, Ashraf (Thesis advisor) / Wald, Dara M (Committee member) / Ben Amor, Hani (Committee member) / Doupe, Adam (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Internet traffic today consists mostly of Hypertext Transfer Protocol (HTTP) traffic. The first version of the HTTP protocol was standardized in 1991, followed by a major upgrade in May 2015. HTTP/2 is the next generation of the HTTP protocol; it promises to resolve the shortcomings of HTTP/1.1 and provides features that greatly improve upon its performance.

There has been a 1000% increase in the cyber crime rate over the past two years. Since HTTP/2 is a relatively new protocol with a very high adoption rate (around 68% of all HTTPS traffic), there is an urgent need to analyze this protocol from a security vulnerability perspective.

In this thesis, I systematically analyze the security concerns in the HTTP/2 protocol, starting from the specification and testing every variation of frames (the basic entity in the HTTP/2 protocol) and every newly introduced feature.

In this thesis, I also propose the Context-Aware Fuzz Testing for Binary Communication Protocols methodology. Using this testing methodology, I was able to discover a serious security vulnerability with which an attacker can carry out a denial-of-service attack on Apache.
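
As a hedged illustration of context-aware frame fuzzing, the sketch below builds structurally valid HTTP/2 frames (9-byte header: 24-bit length, type, flags, stream identifier) and mutates one field at a time so the target parses deep into the frame; the target host and mutation choices are placeholders, not the thesis's actual test harness:

```python
# Fuzz a SETTINGS frame against an HTTP/2 endpoint.
import random
import socket
import struct

PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"  # client connection preface

def frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    length = struct.pack(">I", len(payload))[1:]           # 24-bit length
    return length + struct.pack(">BBI", ftype, flags, stream_id) + payload

def mutated_settings() -> bytes:
    # SETTINGS (type 0x4): fuzz identifier/value pairs inside a valid frame.
    ident = random.choice([0x1, 0x4, 0xFF])                # includes an unknown id
    value = random.choice([0, 2**31, 2**32 - 1])
    return frame(0x4, 0, 0, struct.pack(">HI", ident, value))

with socket.create_connection(("target.example", 80)) as s:  # placeholder host
    s.sendall(PREFACE + mutated_settings())
    print(s.recv(1024))                                    # observe GOAWAY/reset/crash
```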
Contributors: Tiwari, Naveen (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The field of cyber threats is evolving rapidly, and every day a multitude of new information about malware and Advanced Persistent Threats (APTs) is generated in the form of malware reports, blog articles, forum posts, etc. However, current Threat Intelligence (TI) systems have several limitations. First, most TI systems examine and interpret data manually with the help of analysts. Second, some of them generate Indicators of Compromise (IOCs) directly, using regular expressions, without understanding the contextual meaning of those IOCs in the data sources, which lets the tools include many false positives. Third, many TI systems consider only one or two data sources for the generation of IOCs and miss some of the most valuable IOCs from other data sources.

To overcome these limitations, we propose iGen, a novel approach that fully automates the process of IOC generation and analysis. The proposed approach is based on the idea that our model can understand English text the way human beings do and extract the IOCs from different data sources intelligently. Identification of the IOCs is done on the basis of the syntax and semantics of the sentence as well as context words (e.g., "attacked", "suspicious") present in the sentence, which lets the approach work on any kind of data source. Our technique first removes words with no contextual meaning, such as stop words and punctuation. Then, using the remaining words in the sentence and the output label (IOC or non-IOC sentence), our model learns to classify sentences into IOC and non-IOC sentences. Once IOC sentences are identified using this learned Convolutional Neural Network (CNN) based approach, the next step is to identify the IOC tokens (such as domains, IPs, and URLs) in those sentences. This CNN-based classification model helps remove false positives (such as IPs that are not malicious). Afterwards, IOCs extracted from different data sources are correlated to find links between thousands of apparently unrelated attack instances, particularly infrastructure shared between them. Our approach fully automates the process of IOC generation, from gathering data from different sources to creating rules (e.g., OpenIOC, Snort rules, STIX rules) for deployment on the security infrastructure.
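
A minimal sketch of a CNN sentence classifier of the kind described (IOC vs. non-IOC sentences); the vocabulary size, embedding and filter dimensions, and random input batch are placeholders, not iGen's trained model:

```python
# 1-D CNN with max-over-time pooling for binary sentence classification.
import torch
import torch.nn as nn

class IOCSentenceCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, num_filters=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(num_filters, 2)    # IOC sentence / non-IOC sentence

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)        # (batch, embed, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-over-time pooling
        return self.fc(x)

model = IOCSentenceCNN()
batch = torch.randint(0, 5000, (4, 20))  # four tokenized sentences
print(model(batch).shape)                # torch.Size([4, 2])
```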

iGen has collected around 400K IOCs to date with a precision of 95%, better than any state-of-the-art method.
Contributors: Panwar, Anupam (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017