Matching Items (22)

An Algorithm for Merging Identities

Description

In online social networks the identities of users are concealed, often by design. This anonymity makes it possible for a single person to maintain multiple accounts and to engage in malicious activity such as defrauding service providers, leveraging social influence, or hiding activities that would otherwise be detected. There are various methods for detecting whether two online users in a network are the same person in reality, and the simplest way to use this information is to merge their identities and treat the two users as a single user. However, this raises the issue of how to handle the resulting composite identities. To solve this problem, we introduce a mathematical abstraction that represents users and their identities as partitions on a set. We then define a similarity function, SIM, between two partitions, a set of properties that SIM must satisfy, and a threshold that SIM must exceed for two users to be considered the same person. The main theoretical result of our work is a proof that for any given partition and similarity threshold, there is exactly one way to merge the identities of similar users such that no two identities remain similar. We also present two algorithms, COLLAPSE and SIM_MERGE, that merge the identities of users to find this unique set of identities. We prove that both algorithms run in polynomial time, and we demonstrate the runtime of SIM_MERGE experimentally on dark web social network data from over 6,000 users.
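
As a rough sketch of the merging procedure described above, the following Python fragment repeatedly merges any two identities whose similarity exceeds the threshold until no similar pair remains. Representing identities as overlapping attribute sets is a simplification of the thesis's partition formalism, and Jaccard similarity is only a placeholder for SIM, whose actual definition and required properties are given in the thesis.

```python
# Hedged sketch of the SIM_MERGE idea: merge similar identities until no two
# remain similar. Jaccard stands in for the thesis's SIM function.
from itertools import combinations

def jaccard(a: frozenset, b: frozenset) -> float:
    """Placeholder similarity; the thesis defines SIM and its axioms."""
    return len(a & b) / len(a | b)

def sim_merge(identities: set[frozenset], threshold: float, sim=jaccard) -> set[frozenset]:
    """Repeatedly merge any two identities with sim > threshold; the thesis
    proves the resulting set is unique for a given input and threshold."""
    identities = set(identities)
    merged = True
    while merged:
        merged = False
        for a, b in combinations(identities, 2):
            if sim(a, b) > threshold:
                identities -= {a, b}
                identities.add(a | b)
                merged = True
                break  # rescan after every merge
    return identities

# Example: two overlapping identities collapse into one; the third stays separate.
print(sim_merge({frozenset({1, 2}), frozenset({2, 3}), frozenset({7, 8})}, 0.2))
```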

Date Created
  • 2018-05

Data Driven Game Theoretic Cyber Threat Mitigation

Description

Penetration testing is regarded as the gold standard for understanding how well an organization can withstand sophisticated cyber-attacks. However, the recent prevalence of darknet markets specializing in zero-day exploits makes such exploits widely available to potential attackers. The cost associated with these sophisticated kits generally precludes penetration testers from simply obtaining them, so an alternative approach is needed to understand which exploits an attacker is most likely to purchase and how to defend against them. In this paper, we introduce a data-driven security game framework to model an attacker and provide policy recommendations to the defender. In addition to providing a formal framework and algorithms for developing strategies, we present experimental results from applying our framework, for various system configurations, to real-world exploit market data actively mined from the darknet.
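
As a toy illustration of the security-game setting (not the paper's actual framework or algorithms), the defender below picks the system configuration that minimizes worst-case damage over the exploits observed on the market; the configurations, exploit names, and payoff numbers are invented for the example.

```python
# Toy minimax defense: rows are defender configurations, columns are exploits
# an attacker might purchase, cells are assumed expected-damage values.
damage = {
    "patch-aggressively": {"exploit-A": 1.0, "exploit-B": 4.0},
    "segment-network":    {"exploit-A": 3.0, "exploit-B": 2.0},
}

def minimax_config(damage: dict) -> str:
    """Pure-strategy minimax: best configuration against a worst-case attacker."""
    return min(damage, key=lambda cfg: max(damage[cfg].values()))

print(minimax_config(damage))  # "segment-network": worst case 3.0 beats 4.0
```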

Date Created
  • 2016-05

Malicious IP Address Prediction

Description

IP blacklisting is a popular technique for bolstering an enterprise's security, in which access to and from designated IP addresses is explicitly restricted. The fundamental idea behind blacklists is to continually add IP addresses that reputable entities, such as security researchers, have labeled as malicious. Currently, IP blacklisting is a reactive method: malicious IP addresses are identified only after their engagement in malicious activities is detected (e.g., hosting malware samples or sending spam emails). This thesis project aims to address this issue by laying the groundwork for a machine learning tool that proactively identifies malicious IP addresses.

The ground truth data derives from VirusTotal, a company that synthesizes security knowledge from prominent sources such as Symantec, Fortinet, and ESET. I passed 307,621 IP addresses found in posts on the D2web (deep and dark web) through VirusTotal. If VirusTotal associates at least one positively detected URL with an IP address, I label that address positive (malicious); otherwise I label it negative (non-malicious). To give some insight into the ground truth, 6,147 of the original 307,621 IP addresses were identified as positive. Furthermore, to quantify the prediction capabilities of the models, I introduce a metric called lead time: the difference between the date an IP address was first seen on the D2web and its earliest date on VirusTotal. For example, if an IP address was mentioned on the D2web on 1/5/2017 and on VirusTotal on 1/25/2017, its lead time is 20 days.

After feature selection, in which I hand-picked features from the data mined from the D2web, I tried various combinations of classifiers and feature sets to create the best model. The final machine learning models use temporal cross validation, training on data from 1/1/2016 up until a testing month in 2017 and testing on that month's data, with a Random Forest classifier. The following results come from a model tested on January 2017, which exhibits median performance among the final models: the true positive rate is 0.2558, the false positive rate is 0.3612, and the average lead time (for leading true positives) is 193 days, with the model picking up 33.33% of all leading true positives. Although the model finds a respectable number of true positives, it also picks up too many false positives. Thus, the approach in its current state is ineffective at predicting malicious IP addresses, and additional work will be required to turn it into a viable tool.
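
The lead-time metric and the temporal cross-validation loop are concrete enough to sketch. The column names (date, label) and the DataFrame layout below are assumptions for illustration; the thesis's hand-picked D2web features are not reproduced.

```python
# Sketch of lead time and temporal cross validation with a Random Forest.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def lead_time(first_seen_d2web: pd.Timestamp, first_seen_vt: pd.Timestamp) -> int:
    """Days by which the D2web mention precedes the VirusTotal record;
    e.g. D2web on 1/5/2017 and VirusTotal on 1/25/2017 gives 20."""
    return (first_seen_vt - first_seen_d2web).days

def temporal_cv(df: pd.DataFrame, features: list[str], test_month: str):
    """Train on 1/1/2016 up to the testing month, then test on that month."""
    month = pd.Period(test_month)  # e.g. "2017-01"
    train = df[(df.date >= pd.Timestamp("2016-01-01")) & (df.date < month.start_time)]
    test = df[(df.date >= month.start_time) & (df.date <= month.end_time)]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train[features], train.label)
    return clf.predict(test[features]), test.label
```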

Date Created
  • 2018-05

Darkweb Cyber Threat Intelligence Mining through the I2P Protocol

Description

This thesis project focused on malicious hacking community activities accessible through the I2P protocol. We visited 315 distinct I2P sites to identify those with malicious hacking content. We also wrote software to scrape and parse data from relevant I2P sites. The data was integrated into the CySIS databases for further analysis to contribute to the larger CySIS Lab Darkweb Cyber Threat Intelligence Mining research. We found that the I2P cryptonet was slow and had only a small amount of malicious hacking community activity. However, we also found evidence of a growing perception that Tor anonymity could be compromised. This work will contribute to understanding the malicious hacker community as some Tor users, seeking assured anonymity, transition to I2P.
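
A minimal sketch of fetching an eepsite for scraping is shown below. It assumes a local I2P router exposing its default HTTP proxy on 127.0.0.1:4444; the actual crawler, parser, and CySIS database integration are not reproduced, and the example URL is hypothetical.

```python
# Fetch a .i2p page through the local I2P router's HTTP proxy.
import requests

I2P_PROXY = {"http": "http://127.0.0.1:4444"}  # default I2P HTTP proxy

def fetch_eepsite(url: str, timeout: int = 120) -> str:
    """I2P tunnels are slow, so use a generous timeout."""
    resp = requests.get(url, proxies=I2P_PROXY, timeout=timeout)
    resp.raise_for_status()
    return resp.text

# html = fetch_eepsite("http://example.i2p/")  # hypothetical eepsite
```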

Date Created
  • 2016-12

Measuring the Impact of Social Network Interactions

Description

Social links form the backbone of human interactions, both in the offline and the online world. Such interactions harbor network diffusion, or in simpler words, information spreading in a population of connected individuals. With the recent increase in user engagement on social media platforms giving rise to networks of large scale, it has become imperative to understand diffusion mechanisms by considering evolving instances of these network structures. Additionally, I claim that human connections fluctuate over time, and I study empirically grounded models of diffusion that embody these variations through evolving network structures. Patterns of interaction stimulated by these fluctuating connections can be harnessed toward predicting real-world events. This dissertation analyzes and then models such patterns of social network interactions, and I propose how such models can be used to advantage over traditional models of diffusion in various predictions and simulations of real-world events.

The three specific questions rooted in understanding social network interactions that are addressed in this dissertation are: (1) can interactions captured through evolving diffusion networks indicate and predict the phase changes in a diffusion process? (2) can patterns and models of interactions on hacker forums be used to predict cyber-attacks in the real world? and (3) do varying patterns of social influence impact behavior adoption with different success ratios, and can they be used to simulate rumor diffusion?

For the first question, I empirically analyze information cascades from Twitter and Flixster data and conclude that in the evolving network structures characterizing diffusion, the local network neighborhood surrounding a user is a particularly good indicator of approaching phase changes. For the second question, I build an integrated approach that utilizes unconventional signals from "darkweb" forum discussions to predict attacks on a target organization. The study finds that filtering for credible users and measuring the network features surrounding them can be a good indicator of an impending attack. For the third question, I develop an experimental framework in a controlled environment to understand how individuals respond to peer behavior in situations of sequential decision making, and I develop data-driven agent-based models for simulating rumor diffusion.
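
As an illustration of the "local network neighborhood" idea from the first question, the sketch below computes simple egocentric features for a user in a snapshot of the diffusion network using networkx. The feature choices are illustrative assumptions, not the dissertation's exact indicators.

```python
# Egocentric features that could flag an approaching phase change.
import networkx as nx

def neighborhood_features(g: nx.Graph, user) -> dict:
    ego = nx.ego_graph(g, user, radius=1)  # user plus direct neighbors
    return {
        "degree": g.degree(user),
        "ego_density": nx.density(ego),        # how tightly knit the neighborhood is
        "clustering": nx.clustering(g, user),  # local triangle closure
        "active_neighbors": ego.number_of_nodes() - 1,
    }
```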

Date Created
  • 2020

A Hacker-Centric Perspective to Empower Cyber Defense

Description

Malicious hackers utilize the World Wide Web to share knowledge. Previous work has demonstrated that information mined from online hacking communities can be used as a precursor to cyber-attacks. In a threatening scenario where security alert systems face high false positive rates, understanding the people behind cyber incidents can help reduce the risk of attacks. However, the rapidly evolving nature of those communities leaves important questions largely unexplored: who are the skilled and influential individuals forming those groups, how do they self-organize along lines of technical expertise, how do ideas propagate within them, and which internal patterns can signal imminent cyber offensives? In this dissertation, I study four key parts of this complex problem set. Initially, I leverage content, social network, and seniority analysis to mine key hackers on darkweb forums, identifying skilled and influential individuals who are likely to succeed in their cybercriminal goals. Next, since hackers often use Web platforms to advertise and recruit collaborators, I analyze how social influence contributes to user engagement online. On social media, two time constraints are proposed to extend standard influence measures, which increases their correlation with adoption probability and consequently improves hashtag adoption prediction. On darkweb forums, the prediction of where and when hackers will post a message in the near future is accomplished by analyzing their recurrent interactions with other hackers. After that, I demonstrate how vendors of malware and malicious exploits organically form hidden organizations on darkweb marketplaces, obtaining significant consistency across the vendors' communities extracted using the similarity of their products in different networks. Finally, I predict imminent cyber-attacks by correlating malicious hacking activity on darkweb forums with real-world cyber incidents, showing that social indicators are crucial to the performance of the proposed model. This research is a hybrid of social network analysis (SNA), machine learning (ML), evolutionary computation (EC), and temporal logic (TL), and it presents substantive contributions toward empowering cyber defense.
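
As a sketch of the time-constrained influence idea (the dissertation's two time constraints and credit rules are more refined than this), a user below only receives credit for a follower's hashtag adoption if the follower adopts within a fixed window after that user; the 24-hour window is an assumption.

```python
# Time-constrained influence: credit adoptions only within a window.
from datetime import timedelta

def timed_influence(adoptions: dict, followers: dict,
                    window: timedelta = timedelta(hours=24)) -> dict:
    """adoptions: user -> adoption time of a hashtag;
    followers: user -> set of that user's followers."""
    score = {}
    for u, t_u in adoptions.items():
        score[u] = sum(
            1 for v in followers.get(u, ())
            if v in adoptions and t_u < adoptions[v] <= t_u + window
        )
    return score
```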

Date Created
  • 2020

Assessing Influential Users in Live Streaming Social Networks

Description

Live streaming has risen to significant popularity in the recent past, and largely this live streaming is a feature of existing social networks like Facebook, Instagram, and Snapchat. However, there exists at least one social network devoted entirely to live streaming, and specifically to the live streaming of video games: Twitch. This social network is unique for a number of reasons, not least its hyper-focus on live content, and this uniqueness poses challenges for social media researchers.

Despite this uniqueness, almost no scientific work has been performed on this public social network. Thus, it is unclear which user interaction features present on other social networks exist on Twitch. Investigating the interactions between users, and identifying which, if any, of the common user behaviors on social networks exist on Twitch, is an important step in understanding how Twitch fits into the social media ecosystem. For example, some users have large followings on Twitch and amass large numbers of viewers, but do those users exert influence over the behavior of other users the way popular users on Twitter do?

This task, however, is not trivial. The same hyper-focus on live content that makes Twitch unique in the social network space invalidates many traditional approaches to social network analysis. Thus, new algorithms and techniques must be developed to tap this data source. In this thesis, a novel algorithm for finding games whose releases have made a significant impact on the network is described, as well as a novel algorithm for detecting and identifying influential players of games. In addition, the Twitch network is described in detail, along with the data collected to power the two algorithms.
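
A crude stand-in for the release-impact idea (not the thesis's actual algorithm) is sketched below: a game's release is flagged as significant when post-release daily streamer counts jump well above the pre-release baseline. The 3-sigma rule and the invented counts are assumptions.

```python
# Flag a game release whose streaming activity jumps far above baseline.
from statistics import mean, stdev

def release_impact(pre: list[int], post: list[int], sigmas: float = 3.0) -> bool:
    """pre/post: daily streamer counts before and after the release date."""
    mu, sd = mean(pre), stdev(pre)
    return mean(post) > mu + sigmas * max(sd, 1e-9)

print(release_impact(pre=[40, 38, 45, 42], post=[400, 380, 390]))  # True
```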

Date Created
  • 2019

Information source detection in networks

Description

The purpose of the information source detection problem (also called rumor source detection) is to identify the source of information diffusion in a network based on available observations, such as the states of the nodes and the timestamps at which nodes adopted the information (i.e., became infected). The solution to this problem can answer a wide range of important questions in epidemiology, computer network security, and other fields. This dissertation studies the fundamental theory behind the information source detection problem and the design of efficient and robust algorithms for it.

For tree networks, the maximum a posteriori (MAP) estimator of the information source is derived under the independent cascade (IC) model with a complete snapshot, and a Short-Fat Tree (SFT) algorithm is proposed for general networks based on the MAP estimator. Furthermore, the following possibility and impossibility results are established on the Erdos-Renyi (ER) random graph, where $t_u=\left\lceil\frac{\log n}{\log \mu}\right\rceil+2$ and $\mu$ is the average node degree: $(i)$ when the infection duration is $<\frac{2}{3}t_u$, SFT identifies the source with probability one asymptotically; $(ii)$ when the infection duration is $>t_u$, the probability of identifying the source approaches zero asymptotically under any algorithm; and $(iii)$ when the infection duration is …

In practice, side information beyond the nodes' states, such as partial timestamps, may also be available. Such information provides important insights into the diffusion process. To utilize partial timestamps, the information source detection problem is formulated as a ranking problem on graphs, and two ranking algorithms, cost-based ranking (CR) and tree-based ranking (TR), are proposed. Extensive experimental evaluations on synthetic data from different diffusion models and on real-world data demonstrate the effectiveness and robustness of CR and TR compared with existing algorithms.
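
A heavily simplified stand-in for the SFT intuition can be sketched with networkx: among infected nodes, prefer the candidate whose BFS tree over the infected subgraph is shortest, breaking ties toward "fatter" trees. The tie-breaking statistic here (the candidate's own degree) is an assumption; the dissertation's algorithm and analysis are more refined.

```python
# Shortest-BFS-tree heuristic over the infected subgraph.
import networkx as nx

def sft_candidate(g: nx.Graph, infected: set):
    sub = g.subgraph(infected)
    def key(v):
        depths = nx.single_source_shortest_path_length(sub, v)
        if len(depths) < len(infected):  # v cannot reach every infected node
            return (float("inf"), 0)
        return (max(depths.values()), -sub.degree(v))  # shortest, then fattest
    return min(infected, key=key)
```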

Date Created
  • 2015

Semantic feature extraction for narrative analysis

Description

A story is defined as "an actor(s) taking action(s) that culminates in a resolution(s)." I present novel sets of features to facilitate story detection in text via supervised classification and further reveal different forms within stories via unsupervised clustering. First, I investigate the utility of a new set of semantic features, compared to standard keyword features combined with statistical features such as the density of part-of-speech (POS) tags and named entities, for developing a story classifier. The proposed semantic features are based on triplets that can be extracted using a shallow parser. Experimental results show that a memory-based model of semantic linguistic features alongside statistical features achieves better accuracy.

Next, I further improve the performance of story detection with a novel algorithm that aggregates the triplets to produce generalized concepts and relations. A major challenge in automated text analysis is that different words are used for related concepts. Analyzing text at the surface level would treat related concepts (i.e., actors, actions, targets, and victims) as different objects, potentially missing common narrative patterns. The algorithm clusters triplets into generalized concepts using syntactic criteria based on common contexts and semantic, corpus-based statistical criteria based on "contextual synonyms." The generalized-concept representation of text (1) overcomes surface-level differences (which arise when different keywords are used for related concepts) without drift, (2) leads to a higher-level semantic network representation of related stories, and (3) when used as features, yields a significant (36%) boost in performance on the story detection task.

Finally, I implement co-clustering based on generalized concepts/relations to automatically detect story forms. Overlapping generalized concepts and relationships correspond to the archetypes/targets and actions that characterize story forms. I perform co-clustering of stories using standard unigrams/bigrams and using generalized concepts, and I show that the residual factorization error with concept-based features is significantly lower than with standard keyword-based features. I also present qualitative evaluations by a subject matter expert, which suggest that concept-based features yield more coherent, distinctive, and interesting story forms than those produced using standard keyword-based features.
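
A minimal triplet extractor in the spirit of the shallow-parse features above can be sketched with spaCy; the thesis's actual extraction, and the aggregation into generalized concepts, are richer than this subject-verb-object walk over dependency arcs.

```python
# Extract (subject, verb, object) triplets from dependency parses.
import spacy

nlp = spacy.load("en_core_web_sm")

def triplets(text: str) -> list[tuple]:
    out = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
                if subj and obj:
                    out.append((subj[0].lemma_, tok.lemma_, obj[0].lemma_))
    return out

print(triplets("The hero rescued the hostages."))  # [('hero', 'rescue', 'hostage')]
```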

Date Created
  • 2016

Understanding Propagation of Malicious Information Online

Description

The recent proliferation of online platforms has not only revolutionized the way people communicate and acquire information but has also led to the propagation of malicious information (e.g., online human trafficking, the spread of misinformation, etc.). Propagation of such information occurs at an unprecedented scale that could ultimately pose imminent, societally significant threats to the public. To better understand the behavior and impact of malicious actors and to counter their activity, social media authorities need to deploy certain capabilities to reduce their threats. Due to the large volume of this data and limited manpower, the burden usually falls on automatic approaches to identify these malicious activities. However, this is a challenging task for online platforms due to several factors: (1) malicious users have strong incentives to disguise themselves as normal users (e.g., intentional misspellings, camouflaging, etc.); (2) malicious users are highly likely to be key users in making harmful messages go viral and thus need to be detected early in their life span to stop their threats from reaching a vast audience; and (3) the data available for training automatic detection approaches are usually either highly imbalanced (i.e., far more normal users than malicious users) or lack sufficient labels.

To address the above-mentioned challenges, in this dissertation I investigate the propagation of online malicious information from two broad perspectives: (1) content posted by users and (2) information cascades formed by resharing mechanisms in social media. More specifically, first, non-parametric and semi-supervised learning algorithms are introduced to discern potential patterns of human trafficking activities that are of high interest to law enforcement. Second, a time-decay, causality-based framework is introduced for early detection of “Pathogenic Social Media (PSM)” accounts (e.g., terrorist supporters). Third, due to the lack of sufficient annotated data for training PSM detection approaches, a semi-supervised causal framework is proposed that utilizes causality-related attributes from unlabeled instances to compensate for the shortage of labeled data. Fourth, a feature-driven approach for PSM detection is introduced that leverages different sets of attributes drawn from users’ causal activities, account-level and content-related information, and the URLs shared by users.
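
As a loose sketch of the time-decay causality intuition behind the PSM framework (the decay rate and credit rule below are assumptions, not the dissertation's measures), a user's score here is the exponentially decayed fraction of their activity falling in cascades that later went viral.

```python
# Time-decayed share of a user's activity inside viral cascades.
import math

def decayed_causality(user_posts: dict, viral: set, decay: float = 0.1) -> dict:
    """user_posts: user -> list of (cascade_id, hours_after_cascade_start)."""
    score = {}
    for user, posts in user_posts.items():
        credit = sum(math.exp(-decay * dt) for cid, dt in posts if cid in viral)
        total = sum(math.exp(-decay * dt) for _, dt in posts) or 1.0
        score[user] = credit / total
    return score
```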

Date Created
  • 2020