Description
The volume and frequency of cyber attacks have exploded in recent years. Organizations subscribe to multiple threat intelligence feeds to increase their knowledge base and better equip their security teams with the latest information in the threat intelligence domain. Though such subscriptions add intelligence and can support more informed decisions, organizations must put considerable effort into collecting and analyzing a large number of threat indicators. The problem worsens due to the many false positives and irrelevant events that existing threat feed sources report as indicators. Given the staggering volume of indicators, it is often neither practical nor cost-effective to analyze every single alert, which motivates solving the overcrowded-indicator problem by prioritizing and filtering them.

To overcome this issue, I explain the necessity of determining how likely a reported indicator is to be malicious given the evidence, and of prioritizing it based on that determination. The Confidence Score Measurement system (CSM) introduces the concept of a confidence score: it assigns each threat indicator a score of being malicious based on evaluations from different threat intelligence systems. An indicator propagates maliciousness to adjacent indicators according to relationships determined from its behavior, and the propagation algorithm derives a final confidence that captures the indicator's overall maliciousness. CSM can prioritize indicators by confidence score; however, an analyst may not be interested in the entire result set, so CSM narrows the results based on analyst-driven input. To this end, CSM introduces a relevance score that combines the confidence score with the analyst's query using full-text search techniques, and ranks the results by relevance to present meaningful results to the analyst. The analysis shows that CSM's propagation algorithm scales linearly with larger datasets and achieves 92% accuracy in determining threat indicators, demonstrating the effectiveness and practicality of the approach.
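
The propagation idea can be illustrated with a small sketch. The graph layout, relationship weights, and damping factor below are illustrative assumptions, not the actual CSM algorithm:

    # Sketch of confidence propagation over an indicator graph. The damping
    # factor and weighted-average update are assumptions for illustration,
    # not the exact CSM propagation algorithm.
    def propagate_confidence(base, edges, damping=0.5, iterations=10):
        """base: {indicator: initial score in [0, 1]} from feed evaluation.
        edges: {indicator: [(neighbor, relationship_weight), ...]}."""
        score = dict(base)
        for _ in range(iterations):
            updated = {}
            for node, neighbors in edges.items():
                if neighbors:
                    incoming = sum(score[n] * w for n, w in neighbors)
                    norm = sum(w for _, w in neighbors)
                    # Blend the indicator's own evidence with its neighbors'.
                    updated[node] = (1 - damping) * base[node] + damping * incoming / norm
                else:
                    updated[node] = base[node]
            score = updated
        return score

    # A domain related to a known-bad IP inherits part of its maliciousness.
    base = {"evil.example": 0.4, "203.0.113.7": 0.9}
    edges = {"evil.example": [("203.0.113.7", 1.0)],
             "203.0.113.7": [("evil.example", 1.0)]}
    print(propagate_confidence(base, edges))
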
Contributors: Modi, Ajay (Author) / Ahn, Gail-Joon (Thesis advisor) / Zhao, Ziming (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Mobile telephony is a critical aspect of our modern society: through telephone calls, it is possible to reach almost anyone around the globe. However, every mobile telephone call placed implicitly leaks the user's location to the telephony service provider (TSP). This privacy leakage is due to the fundamental nature of mobile telephony: a phone must connect to a local base station to receive service and place calls, so the TSP can track the physical location of the user for every call placed. While the Internet is similar in this regard, privacy-preserving technologies such as Tor allow users to connect to websites anonymously (without revealing to their ISP the site that they are visiting). In this thesis, a scheme called shadow calling is presented to allow geolocation-anonymous calling from legacy mobile devices. In this way, the call is placed from the same number, yet the TSP does not learn the user's physical location. The scheme requires no changes on the network side and can be used on current mobile networks. The scheme is implemented for the GSM (commonly referred to as 2G) network, as it is the most widely used mode of mobile telephony communication. The feasibility of the scheme is demonstrated through a prototype. Shadow calling, which renders the user's geolocation anonymous, will be beneficial for users such as journalists, human rights activists in hostile nations, or other privacy-demanding users.
Contributors: Pinto, Gerard Lawrence (Author) / Doupe, Adam (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Web applications are an incredibly important aspect of our modern lives. Organizations and developers use automated vulnerability analysis tools, also known as scanners, to automatically find vulnerabilities in their web applications during development. Scanners have traditionally fallen into two types of approaches: black-box and white-box. In a black-box approach the scanner has no access to the source code of the web application, whereas a white-box approach has such access. Today's state-of-the-art black-box vulnerability scanners employ various methods to fuzz and detect vulnerabilities in a web application. However, these scanners fuzz the web application with a number of known payloads in the hope of triggering a vulnerability; this technique is simple but does not understand the web application it is testing. This thesis presents a new approach to vulnerability analysis. The vulnerability analysis module presented uses a novel approach, Inductive Reverse Engineering (IRE), to understand and model the web application. IRE first attempts to understand the behavior of the web application by supplying a number of input/output pairs to it. The IRE module then hypothesizes a set of programs (in a limited language specific to web applications, called AWL) that satisfy the input/output pairs. These hypotheses take the form of a directed acyclic graph (DAG). The AWL vulnerability analysis module can then attempt to detect vulnerabilities in this DAG. Further, it generates a payload based on the DAG, so the payload is precisely tailored to trigger the potential vulnerability (based on our understanding of the program). It then tests this potential vulnerability by sending the generated payload to the actual web application, and creates a verification procedure to check, based on the web application's response, whether the potential vulnerability is actually exploitable.
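
Neither AWL nor the real IRE module is specified in this abstract; the sketch below only illustrates the general hypothesize-and-filter idea of inductive reverse engineering, using a toy set of string transformations (all names are hypothetical):

    # Toy illustration of inductive reverse engineering: enumerate candidate
    # programs in a tiny transformation language and keep those consistent
    # with observed input/output pairs. AWL and the real IRE module are far
    # richer; this only demonstrates the hypothesize-and-filter idea.
    from itertools import product

    # Candidate transformations a web parameter might undergo (assumed).
    PRIMITIVES = {
        "identity": lambda s: s,
        "upper": lambda s: s.upper(),
        "strip_tags": lambda s: s.replace("<", "").replace(">", ""),
        "quote": lambda s: s.replace("'", "''"),
    }

    def synthesize(io_pairs, max_depth=2):
        """Return primitive pipelines consistent with all I/O pairs."""
        hypotheses = []
        for depth in range(1, max_depth + 1):
            for pipeline in product(PRIMITIVES, repeat=depth):
                def run(s, p=pipeline):
                    for name in p:
                        s = PRIMITIVES[name](s)
                    return s
                if all(run(i) == o for i, o in io_pairs):
                    hypotheses.append(pipeline)
        return hypotheses

    # Observing that '<' and '>' disappear suggests the app strips tags.
    print(synthesize([("<b>hi</b>", "bhi/b"), ("a<c", "ac")]))
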
Contributors: Khairnar, Tejas (Author) / Doupe, Adam (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The field of cyber threats is evolving rapidly, and every day a multitude of new information about malware and Advanced Persistent Threats (APTs) is generated in the form of malware reports, blog articles, forum posts, etc. However, current Threat Intelligence (TI) systems have several limitations. First, most TI systems examine and interpret data manually with the help of analysts. Second, some generate Indicators of Compromise (IOCs) directly using regular expressions, without understanding the contextual meaning of those IOCs in the data sources, which lets the tools admit many false positives. Third, many TI systems consider only one or two data sources when generating IOCs and miss some of the most valuable IOCs available in other sources.

To overcome these limitations, we propose iGen, a novel approach that fully automates the process of IOC generation and analysis. The proposed approach is based on the idea that our model can understand English text like a human being and extract IOCs from different data sources intelligently. Identification of the IOCs is based on the syntax and semantics of each sentence as well as context words (e.g., "attacked", "suspicious") present in it, which lets the approach work on any kind of data source. Our technique first removes words with no contextual meaning, such as stop words and punctuation. Then, using the remaining words in the sentence and the output label (IOC or non-IOC sentence), our model learns to classify sentences into IOC and non-IOC sentences. Once IOC sentences are identified using this learned Convolutional Neural Network (CNN) based approach, the next step is to identify the IOC tokens (such as domains, IPs, and URLs) in those sentences. This CNN-based classification model helps remove false positives (such as IPs that are not malicious). Afterwards, IOCs extracted from different data sources are correlated to find links between thousands of apparently unrelated attack instances, particularly infrastructure shared between them. Our approach fully automates the process of IOC generation, from gathering data from different sources to creating rules (e.g., OpenIOC, Snort rules, STIX rules) for deployment on the security infrastructure.

iGen has collected around 400K IOCs to date with a precision of 95%, better than any state-of-the-art method.
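
A minimal sketch of the sentence-level CNN classifier described above, written with tensorflow.keras; the vocabulary size and layer settings are assumptions, not iGen's published hyperparameters:

    # Sketch only: layer sizes are illustrative, not iGen's actual settings.
    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB_SIZE = 20000  # assumed token vocabulary after stop-word removal

    model = tf.keras.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(128, 5, activation="relu"),  # n-gram-like filters over embeddings
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # P(sentence reports a real IOC)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(padded_token_ids, labels, ...) then a second pass extracts the
    # IOC tokens (domains, IPs, URLs) from sentences classified as IOC sentences.
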
Contributors: Panwar, Anupam (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Cloud computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces similar in spirit to existing grid and HPC resource management and programming systems. These types of systems offer a new programming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are proprietary and rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this research, the Xen Server Management API is employed to build a framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS): systems that give users the ability to run and control entire virtual machine instances deployed across a variety of physical resources. The goal of this research is to develop a cloud-based resource and service sharing platform for computer network security education, a.k.a. a Virtual Lab.
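
As a rough sketch of how the Xen management API drives such a platform, the snippet below clones and starts a lab VM through the XenAPI Python bindings; the host URL, credentials, and template label are placeholders, and the thesis's framework adds scheduling and access control on top:

    # Minimal sketch of provisioning a lab VM via the XenAPI Python bindings.
    # Host, credentials, and VM labels are placeholders for illustration.
    import XenAPI

    session = XenAPI.Session("https://xenserver.example.edu")
    session.xenapi.login_with_password("root", "password")
    try:
        # Clone a prepared template for a student lab instance (label assumed).
        template = session.xenapi.VM.get_by_name_label("seclab-template")[0]
        vm = session.xenapi.VM.clone(template, "seclab-student-01")
        session.xenapi.VM.provision(vm)
        session.xenapi.VM.start(vm, False, False)  # start_paused=False, force=False
    finally:
        session.xenapi.session.logout()
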
Contributors: Kadne, Aniruddha (Author) / Huang, Dijiang (Thesis advisor) / Tsai, Wei-Tek (Committee member) / Ahn, Gail-Joon (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
With the recent expansion in the use of wearable technology, a large number of users access personal data with these smart devices. The consumer market of wearables includes smartwatches, health and fitness bands, and gesture control armbands. These smart devices enable users to communicate with each other, control other devices, relax, and work out more effectively. As part of their functionality, these devices store, transmit, and/or process sensitive user personal data, such as biological and location data, making them an abundant source of confidential user information. Thus, prevention of unauthorized access to wearables is necessary. In fact, it is important to effectively authenticate users to prevent intentional misuse or alteration of individual data. Current authentication methods for the legitimate users of smart wearable devices utilize passcodes and graphical pattern-based locks. These methods have the following problems: (1) passcodes can be stolen or copied, (2) they depend on conscious user inputs, which can be undesirable to a user, (3) they authenticate the user only at the beginning of the usage session, and (4) they do not consider user behavior or adapt to evolving user behavior.

In this thesis, an approach is presented for developing software for continuous authentication of the legitimate user of a smart wearable device. With this approach, the legitimate user can be authenticated based on behavioral biometrics in the form of motion gestures extracted from the embedded sensors of the smart wearable device. Continuous authentication is accomplished by adapting the authentication to changes in the user's gesture patterns. The approach is demonstrated on two comprehensive datasets generated by two research groups, and it is shown to achieve better performance than existing methods.
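
The continuous, adaptive loop described above might look like the following sketch; the feature set, distance threshold, and adaptation rate are assumptions rather than the thesis's actual parameters:

    # Hedged sketch of continuous authentication from motion-sensor windows:
    # statistical features are compared against an adaptive user template.
    import numpy as np

    def features(window):
        """window: (n_samples, 3) accelerometer readings -> feature vector."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    class ContinuousAuthenticator:
        def __init__(self, enrollment_windows, threshold=1.0, adapt_rate=0.05):
            self.template = np.mean([features(w) for w in enrollment_windows], axis=0)
            self.threshold = threshold
            self.adapt_rate = adapt_rate

        def verify(self, window):
            f = features(window)
            genuine = np.linalg.norm(f - self.template) < self.threshold
            if genuine:
                # Adapt the template so it tracks gradual gesture changes.
                self.template = (1 - self.adapt_rate) * self.template + self.adapt_rate * f
            return genuine

    # Usage: auth.verify(next_window) runs on every window, not just at unlock.
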
Contributors: Mukherjee, Tamalika (Author) / Yau, Sik-Sang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Data from a total of 282 online web applications was collected, and accounts for 230 of those web applications were created in order to gather data about authentication practices, multistep authentication practices, security question practices, fallback authentication practices, and other security practices for online accounts. The account creation and data collection were done between June 2016 and April 2017. The password strengths for online accounts were analyzed and compared to existing data. Security questions used by online accounts were evaluated for security and usability, and fallback authentication practices were assessed based on their adherence to best practices. Alternative authentication schemes were examined, and other security considerations such as the use of HTTPS and CAPTCHAs were explored. Based on existing data, password policies required stronger passwords for web applications in 2017 than in 2010. Nevertheless, password policies for many accounts are still not adequate. About a quarter of the online web applications examined use security questions, and many of the questions have usability and security concerns. Security mechanisms such as HTTPS and continuous authentication are generally not used in conjunction with security questions for most web applications, which reduces the overall security of the web application. A majority of web applications use email addresses as both the login credential and the password recovery credential and do not follow best practices. About a quarter of accounts use multistep authentication and a quarter employ continuous authentication, yet most accounts fail to combine security measures for defense in depth. The overall conclusion is that some online web applications are using secure practices; however, a majority fail to properly implement and utilize them.
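
As an illustration of the kind of policy check involved in rating password requirements, the following sketch tests a password against an assumed length-plus-character-class rule; it is not the study's actual scoring rubric:

    # Hypothetical policy check: minimum length plus mixed character classes.
    import re

    def satisfies_policy(password, min_length=8, min_classes=3):
        """True if the password meets a length rule and mixes character classes."""
        classes = sum(bool(re.search(p, password))
                      for p in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"))
        return len(password) >= min_length and classes >= min_classes

    print(satisfies_policy("correcthorse"))  # False: long but one character class
    print(satisfies_policy("Tr0ub4dor&3"))   # True: length 11, four classes
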
Contributors: Gutierrez, Garrett (Author) / Bazzi, Rida (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Utility infrastructure like the electric grid has been the target of increasingly sophisticated cyberattacks designed to disrupt its operation and create social unrest and economic losses. In 2016 alone, a cyberattack targeted the Ukrainian power grid and successfully caused a blackout that affected 225,000 customers.

Industrial Control Systems (ICS) are a critical part of this infrastructure. Honeypots are one of the tools that help us capture attack data to better understand new and existing attack methods and strategies. Honeypots are computer systems purposefully left exposed to be broken into. They have no inherent value; instead, their value comes when attackers interact with them. However, state-of-the-art honeypots lack the sophisticated service simulations required to obtain valuable data. Worse, they cannot adapt while ICS malware keeps evolving and attack patterns grow increasingly sophisticated.

This work presents HoneyPLC: A Next-Generation Honeypot for ICS. HoneyPLC is the first medium-interaction ICS honeypot and includes advanced service simulation modeled after the S7-300 and S7-1200 Siemens PLCs, which are widely used in real-life ICS infrastructures.

Additionally, HoneyPLC provides much-needed extensibility features to prepare for new attack tactics, e.g., the exploitation of a new vulnerability found in a new PLC model.

HoneyPLC was deployed in both local and public environments and tested against well-known reconnaissance tools used by attackers, such as Nmap and Shodan's Honeyscore. Results show that HoneyPLC is able to fool both tools with a high level of confidence. HoneyPLC also recorded large numbers of interesting ICS interactions from around the globe, showing not only that attackers are indeed targeting ICS systems, but also that HoneyPLC provides a level of interaction that effectively deceives them.
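
The core honeypot mechanic, exposing a fake service and logging every interaction, can be sketched as below; HoneyPLC's actual S7comm simulation is far richer, and the port and reply bytes here are placeholders:

    # Minimal honeypot sketch: accept connections, log every probe. HoneyPLC's
    # real PLC service simulation is far more elaborate; the reply below is a
    # placeholder, not a valid S7comm response.
    import socket, datetime

    HOST, PORT = "0.0.0.0", 10102  # S7comm normally uses TCP 102 (needs root)

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(4096)
                # Every probe is valuable: honeypots have no legitimate traffic.
                print(f"{datetime.datetime.now()} {addr[0]} sent {data!r}")
                conn.sendall(b"\x03\x00\x00\x0b")  # placeholder TPKT-style bytes
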
Contributors: Lopez Morales, Efren (Author) / Doupe, Adam (Thesis advisor) / Ahn, Gail-Joon (Thesis advisor) / Rubio-Medrano, Carlos (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Many researchers have seen the value blockchain can add to the field of voting, and many protocols have been proposed to allow voting to be conducted in a way that takes advantage of the blockchain's distributed and immutable structure. While the blockchain's immutable structure can take the place of paper records in preventing tampering, it is by itself insufficient to construct a trustworthy voting system with eligibility, privacy, verifiability, and fairness requirements. Many of the protocols that strive to keep voters' votes confidential while also meeting verifiability and eligibility requirements rely either on a blind signature provided by a central authority or on ring signatures to prove membership in the set of voters. A blind signature issued by a central authority introduces a potential vulnerability, as it allows a corrupt central authority to pass a large number of forged ballots into the mix without detection. Ring signatures, on the other hand, tend to be too resource-intensive for practical use with large voter sets. The research in this thesis focuses on improving the trustworthiness of electronic voting systems by providing possible ways of avoiding or detecting corrupt central authorities while still relying on the efficiency benefits the blind signature provides.
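
The blind-signature primitive at issue can be shown with textbook RSA: the authority signs a blinded ballot without seeing its contents, and anyone can later verify the signature. The toy key below is for illustration only; real deployments need full-size keys and proper padding:

    # Textbook RSA blind signature: the authority signs a blinded ballot
    # without seeing it. Tiny insecure key for illustration only.
    from math import gcd

    n, e, d = 3233, 17, 2753  # toy RSA key (p=61, q=53); never use in practice

    def blind(m, r):
        assert gcd(r, n) == 1
        return (m * pow(r, e, n)) % n           # voter blinds the ballot hash m

    def sign_blinded(m_blinded):
        return pow(m_blinded, d, n)             # authority signs without seeing m

    def unblind(s_blinded, r):
        return (s_blinded * pow(r, -1, n)) % n  # voter strips the blinding factor

    m, r = 1234, 7
    s = unblind(sign_blinded(blind(m, r)), r)
    assert pow(s, e, n) == m                    # anyone can check eligibility
    print("valid blind signature on ballot:", s)
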
Contributors: Anderson, Brandon David (Author) / Yau, Stephen S. (Thesis advisor) / Dasgupta, Partha (Committee member) / Marchant, Gary (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Emerging trends in cyber system security breaches in critical cloud infrastructures show that attackers have abundant resources (human and computing power), expertise, and the support of large organizations and possibly foreign governments. To greatly improve the protection of critical cloud infrastructures, human behavior must be incorporated to predict potential security breaches. To achieve such prediction, we envision a probabilistic modeling approach capable of accurately capturing the system-wide causal relationships among observed operational behaviors in the critical cloud infrastructure, and of accurately capturing probabilistic human (users') behaviors on subsystems, as the subsystems interact directly with humans. In our conceptual approach, the system-wide causal relationships are captured by a Bayesian network, and the probabilistic human behavior in the subsystems is captured by Markov Decision Processes (MDPs). The interactions between the dynamically changing state graphs of the MDPs and the dynamic causal relationships in the Bayesian network are key components in such probabilistic modeling applications. In this thesis, two techniques are presented to support the above vision of predicting potential security breaches in critical cloud infrastructures. The first technique evaluates the conformance of the Bayesian network with the multiple MDPs. The second evaluates the dynamically changing Bayesian network structure for conformance with the rules of a Bayesian network using a graph-checker algorithm. A case study and its simulation are presented to show how the two techniques support the specific parts of our conceptual approach to predicting system-wide security breaches in critical cloud infrastructures.
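
The central rule such a graph checker enforces on a dynamically updated Bayesian network is acyclicity; a standard test is Kahn's algorithm, sketched below (the thesis's checker also validates other conformance rules not shown):

    # Sketch of the core rule a graph checker must enforce on a dynamically
    # updated Bayesian network: the directed graph must remain acyclic.
    from collections import deque

    def is_acyclic(nodes, edges):
        """edges: iterable of (parent, child) pairs. True iff the graph is a DAG."""
        indegree = {v: 0 for v in nodes}
        children = {v: [] for v in nodes}
        for u, v in edges:
            children[u].append(v)
            indegree[v] += 1
        queue = deque(v for v, deg in indegree.items() if deg == 0)
        visited = 0
        while queue:
            u = queue.popleft()
            visited += 1
            for v in children[u]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
        return visited == len(indegree)  # all nodes topologically sorted => no cycle

    # A structure update is accepted only if the network stays a DAG.
    print(is_acyclic("ABC", [("A", "B"), ("B", "C")]))              # True
    print(is_acyclic("ABC", [("A", "B"), ("B", "C"), ("C", "A")]))  # False
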
Contributors: Nagaraja, Vinjith (Author) / Yau, Stephen S. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2015