Matching Items (78)
Description
Networks naturally appear in many high-impact applications. The simplest network model is the single-layered network, where all nodes are from the same domain and all links are of the same type. However, as the world is highly coupled, nodes from different application domains tend to depend on one another, forming a more complex model called the multi-layered network.
Among the various aspects of network studies, network connectivity plays an important role in a myriad of applications. The diversified application areas have spurred numerous connectivity measures, each designed for some specific tasks. Although effective in their own fields, none of the connectivity measures is generally applicable to all the tasks. Moreover, existing connectivity measures are predominantly based on single-layered networks, with few attempts made on multi-layered networks.
Most connectivity analysis methods assume that the input network is static and accurate, which is not realistic in many applications. As real-world networks evolve, their connectivity scores vary over time as well, making it imperative to track these changing parameters in a timely manner. Furthermore, as the observed links in the input network may be inaccurate due to noise and incomplete data sources, it is crucial to infer a more accurate network structure to better approximate its connectivity scores.
The ultimate goal of connectivity studies is to optimize the connectivity scores via manipulating the network structures. For most complex measures, the hardness of the optimization problem still remains unknown. Meanwhile, current optimization methods are mainly ad-hoc solutions for specific types of connectivity measures on single-layered networks. No optimization framework has ever been proposed to tackle a wider range of connectivity measures on complex networks.
In this thesis, an in-depth study of connectivity measures, inference, and optimization problems will be proposed. Specifically, a unified connectivity measure model will be introduced to unveil the commonality among existing connectivity measures. For the connectivity inference aspect, an effective network inference method and connectivity tracking framework will be described. Last, a generalized optimization framework will be built to address the connectivity minimization/maximization problems on both single-layered and multi-layered networks.
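To make "connectivity measure" concrete, here is a minimal sketch of two widely used single-layered measures on a toy graph: the edge count and the leading eigenvalue of the adjacency matrix (a common robustness/epidemic-threshold proxy). This is an illustration only, not the unified measure model proposed in the thesis.

```python
# Toy illustration of two classic connectivity measures on a small
# undirected graph given as an adjacency matrix. The leading
# eigenvalue is approximated by power iteration.

def leading_eigenvalue(adj, iters=200):
    """Approximate the largest eigenvalue of a symmetric 0/1 matrix."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0  # max-norm of A*v
        v = [x / lam for x in w]             # renormalize
    return lam

def edge_count(adj):
    """Number of undirected edges."""
    n = len(adj)
    return sum(adj[i][j] for i in range(n) for j in range(i + 1, n))

# A 4-node path graph: 0-1-2-3 (leading eigenvalue is 2*cos(pi/5)).
path4 = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
```

Different measures rank the same edits to a graph differently, which is exactly why a unifying model is useful.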
Contributors: Chen, Chen (Author) / Tong, Hanghang (Thesis advisor) / Davulcu, Hasan (Committee member) / Sen, Arunabha (Committee member) / Subrahmanian, V.S. (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In online social networks the identities of users are concealed, often by design. This anonymity makes it possible for a single person to have multiple accounts and to engage in malicious activity such as defrauding a service provider, leveraging social influence, or hiding activities that would otherwise be detected. There are various methods for detecting whether two online users in a network are the same person in reality, and the simplest way to use this information is to merge their identities and treat the two users as a single user. However, this raises the issue of how we deal with these composite identities. To solve this problem, we introduce a mathematical abstraction for representing users and their identities as partitions on a set. We then define a similarity function, SIM, between two partitions, a set of properties that SIM must have, and a threshold that SIM must exceed for two users to be considered the same person. The main theoretical result of our work is a proof that for any given partition and similarity threshold, there is only a single unique way to merge the identities of similar users such that no two identities are similar. We also present two algorithms, COLLAPSE and SIM_MERGE, that merge the identities of users to find this unique set of identities. We prove that both algorithms execute in polynomial time, and we perform an experiment on dark web social network data from over 6000 users that demonstrates the runtime of SIM_MERGE.
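The merge-until-no-similar-pair idea can be sketched as follows. Note that both the similarity function (Jaccard over identity feature sets) and the greedy fixpoint loop below are illustrative stand-ins, not the thesis's actual SIM definition or the COLLAPSE/SIM_MERGE algorithms.

```python
def jaccard(a, b):
    """Stand-in SIM: Jaccard similarity of two identities' feature sets."""
    return len(a & b) / len(a | b)

def merge_similar(identities, threshold):
    """Repeatedly merge any two identities with SIM >= threshold until
    no similar pair remains (mirrors the fixpoint property: in the
    final state, no two identities are similar)."""
    ids = [set(s) for s in identities]
    changed = True
    while changed:
        changed = False
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                if jaccard(ids[i], ids[j]) >= threshold:
                    ids[i] |= ids.pop(j)  # union the two identities
                    changed = True
                    break
            if changed:
                break
    return ids

# Hypothetical accounts: the first two share the handle "al1ce".
users = [{"alice", "al1ce"}, {"al1ce", "a.smith"}, {"bob99"}]
merged = merge_similar(users, threshold=0.3)
```

The uniqueness result in the abstract says the final set of identities does not depend on the order in which similar pairs are merged.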
Contributors: Polican, Andrew Dominic (Author) / Shakarian, Paulo (Thesis director) / Sen, Arunabha (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The goal of this project is to use an open-source solution to implement a drone Cyber-Physical System that can fly autonomously and accurately. The proof-of-concept to analyze the drone's flight capabilities is to fly in a pattern corresponding to the outline of an image, a process that requires both stability and precision to accurately depict the image. In this project, we found that building a Cyber-Physical System is difficult because of the tedious and complex nature of designing and testing the hardware and software solutions of this system. Furthermore, we reflect on the difficulties that arose from using open-source hardware and software.
Contributors: Dedinsky, Rachel (Co-author) / Lubbers, Harrison James (Co-author) / Shrivastava, Aviral (Thesis director) / Dougherty, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
A community in a social network can be viewed as a structure formed by individuals who share similar interests. Not all communities are explicit; some may be hidden in a large network. Therefore, discovering these hidden communities becomes an interesting problem. Researchers from a number of fields have developed algorithms to tackle this problem.
Besides the common feature above, communities within a social network have two unique characteristics: communities are mostly small and overlapping. Unfortunately, many traditional algorithms have difficulty recognizing these small communities (often called the resolution limit problem) as well as overlapping communities.
In this work, two enhanced community detection techniques are proposed for re-working existing community detection algorithms to find small communities in social networks. One method is to modify the modularity measure within the framework of the traditional Newman-Girvan algorithm so that more small communities can be detected. The second method is to incorporate a preprocessing step into existing algorithms by changing edge weights inside communities. Both methods help improve community detection performance while maintaining or improving computational efficiency.
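For reference, the quantity the first method modifies is the standard Newman-Girvan modularity Q = Σ_c (e_cc − a_c²). A minimal sketch of the unmodified measure on a toy graph (not the modified measure proposed in this work):

```python
def modularity(edges, community):
    """Newman-Girvan modularity: for each community c, e_cc is the
    fraction of edges inside c and a_c the fraction of edge endpoints
    in c; Q = sum_c (e_cc - a_c**2)."""
    m = len(edges)
    q = 0.0
    for c in set(community.values()):
        inside = sum(1 for u, v in edges
                     if community[u] == c and community[v] == c) / m
        ends = sum((community[u] == c) + (community[v] == c)
                   for u, v in edges) / (2 * m)
        q += inside - ends ** 2
    return q

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
```

The resolution limit arises because merging two small, well-separated communities can raise Q even when they are clearly distinct; reweighting or rescaling terms of this formula is one way to counteract that.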
Contributors: Wang, Ran (Author) / Liu, Huan (Thesis advisor) / Sen, Arunabha (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Error correcting systems have put increasing demands on system designers, both due to increasing error correcting requirements and higher throughput targets. These requirements have led to greater silicon area, power consumption and have forced system designers to make trade-offs in Error Correcting Code (ECC) functionality. Solutions to increase the efficiency of ECC systems are very important to system designers and have become a heavily researched area.
Many such systems incorporate the Bose-Chaudhuri-Hocquenghem (BCH) method of error correcting in a multi-channel configuration. BCH is a commonly used code because of its configurability, low storage overhead, and low decoding requirements when compared to other codes. Multi-channel configurations are popular with system designers because they offer a straightforward way to increase bandwidth. The ECC hardware is duplicated for each channel and the throughput increases linearly with the number of channels. The combination of these two technologies provides a configurable and high throughput ECC architecture.
This research proposes a new method to optimize a BCH error correction decoder in multi-channel configurations. In this thesis, I examine how error frequency affects the utilization of BCH hardware. Rather than implementing each decoder as a single pipeline of independent decoding stages, the channels are considered together and served by a pool of decoding stages. Modified hardware blocks for handling common cases are included, and the pool is sized based on an acceptable but negligible decrease in performance.
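The sizing intuition can be illustrated with a back-of-envelope simulation (the channel count, error rate, and pool size below are assumed numbers, not figures from this thesis): if only a fraction of codewords per cycle need a heavyweight decode, a shared pool can be much smaller than one decoder per channel at a tiny risk of overflow.

```python
import random

def pool_overflow_rate(channels, pool_size, p_error, trials=10000, seed=7):
    """Monte Carlo estimate of the fraction of cycles in which more
    channels need a full decode than the shared pool can serve."""
    rng = random.Random(seed)
    overflows = 0
    for _ in range(trials):
        # Each channel independently needs a full decode with prob p_error.
        demand = sum(rng.random() < p_error for _ in range(channels))
        if demand > pool_size:
            overflows += 1
    return overflows / trials

# 8 channels, 1-in-10 codewords erroneous, pool of only 3 decoders.
rate = pool_overflow_rate(channels=8, pool_size=3, p_error=0.1)
```

With these assumed numbers the binomial tail P(demand > 3) is about half a percent, which is the sense in which a 3-stage pool replaces 8 dedicated decoders at a "negligible decrease in performance."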
Contributors: Dill, Russell (Author) / Shrivastava, Aviral (Thesis advisor) / Oh, Hyunok (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In trading, volume is a measure of how much stock has been exchanged in a given period of time. Since every stock is distinct and has a different number of shares outstanding, volume is compared with a stock's own historical volume to spot changes. It is likewise used to confirm price trends, breakouts, and potential reversals. In my thesis, I hypothesize that the concept of trading volume can be extrapolated to social media (Twitter).
The influence of social media, especially Twitter, on financial markets has grown considerably in the past couple of years. With the growth of its usage by news channels, financial experts, and pundits, the global economy does seem to hinge on 140 characters. By analyzing the number of tweets hashtagged to a stock, a strong relation can be established between the number of people talking about a stock and its trading volume.
In my work, I examine this relation and identify a breakout state when the volume goes beyond a defined support or resistance level.
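A breakout rule of this kind can be sketched as follows. The thresholding rule (rolling mean plus k standard deviations as the resistance level) is an assumption for illustration, not this thesis's exact definition:

```python
from statistics import mean, stdev

def breakouts(volumes, window=5, k=2.0):
    """Indices where volume breaks above a rolling resistance level
    set at mean + k * stdev of the previous `window` observations."""
    hits = []
    for t in range(window, len(volumes)):
        recent = volumes[t - window:t]
        resistance = mean(recent) + k * stdev(recent)
        if volumes[t] > resistance:
            hits.append(t)
    return hits

# Hypothetical hourly tweet counts for one stock hashtag: a quiet
# baseline around 100, then a sudden spike.
vols = [100, 102, 98, 101, 99, 100, 240, 101]
```

The same function applies equally to share-trading volume or tweet volume, which is the extrapolation the thesis hypothesizes.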
Contributors: Awasthi, Piyush (Author) / Davulcu, Hasan (Thesis advisor) / Tong, Hanghang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Internet and social media devices have created a new public space for debate on political and social topics (Papacharissi 2002; Himelboim 2010). Hotly debated issues span all spheres of human activity: from liberal vs. conservative politics, to radical vs. counter-radical religious debate, to the climate change debate in the scientific community, to the globalization debate in economics, and to the nuclear disarmament debate in security. Many prominent 'camps' have emerged within Internet debate rhetoric and practice (Dahlberg, n.d.).
In this research I used feature extraction and model fitting techniques to process the rhetoric found on the web sites of 23 Indonesian Islamic religious organizations, and later of 26 similar organizations from the United Kingdom, to profile their ideology and activity patterns along a hypothesized radical/counter-radical scale, and I present an end-to-end system that helps researchers visualize the data interactively on a timeline. The subject data of this study are the articles downloaded from the web sites of these organizations dating from 2001 to 2011, and in 2013. I developed algorithms to rank these organizations by assigning them to probable positions on the scale. I showed that the fitted Rasch model fits the data using Andersen's likelihood-ratio (LR) test. I created a gold standard ranking of these organizations through an expertise elicitation tool. Using my system I computed expert-to-expert agreements, and then present experimental results comparing the performance of three baseline methods, showing that the Rasch model not only outperforms the baselines but is also the only system that performs at expert-level accuracy.
I developed an end-to-end system that receives a list of organizations from experts, mines their web corpus, prepares discourse topic lists with expert support, ranks the organizations on scales with partial expert interaction, and finally presents them in an easy-to-use web-based analytic system.
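The ranking above rests on the Rasch model; here is a minimal sketch of its standard dichotomous form (parameter names and the actual fitting procedure in this thesis may differ): the probability that an organization at scale position theta endorses a discourse topic of "difficulty" b.

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model:
    P(endorse) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An organization exactly at a topic's scale position endorses it with
# probability 0.5; a position well above the topic pushes P toward 1.
p_mid = rasch_prob(0.0, 0.0)
p_high = rasch_prob(3.0, 0.0)
```

Fitting theta for each organization from its observed topic endorsements is what places the organizations on the radical/counter-radical scale; Andersen's LR test then checks whether one set of difficulties b fits all subgroups of the data.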
Contributors: Tikves, Sukru (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Liu, Huan (Committee member) / Woodward, Mark (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Predicting when an individual will adopt a new behavior is an important problem in application domains such as marketing and public health. This thesis examines the performance of a wide variety of social-network-based measurements proposed in the literature, which have not previously been compared directly. This research studies the probability of an individual becoming influenced based on measurements derived from the neighborhood (e.g., number of influencers, personal network exposure), structural diversity, locality, temporal measures, cascade measures, and metadata. It also examines the ability to predict influence based on the choice of classifier and on how the ratio of positive to negative samples in both training and testing affects prediction results, further enabling practical use of these concepts for social influence applications.
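The two neighborhood measurements named in the abstract can be sketched directly (these follow the common definitions from the diffusion literature; the thesis's exact feature set may differ):

```python
def neighborhood_features(neighbors, influenced):
    """Return (number of influencer neighbors, personal network
    exposure), where exposure is the fraction of a node's neighbors
    who have already adopted the behavior."""
    active = [n for n in neighbors if n in influenced]
    exposure = len(active) / len(neighbors) if neighbors else 0.0
    return len(active), exposure

# Hypothetical target node with four friends, two of whom adopted.
friends = ["a", "b", "c", "d"]
adopters = {"a", "c", "x"}
count, exposure = neighborhood_features(friends, adopters)
```

Each such measurement becomes one feature column; the comparison in the thesis asks which of these columns, and which classifier over them, best predicts whether the target node adopts next.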
Contributors: Nanda Kumar, Nikhil (Author) / Shakarian, Paulo (Thesis advisor) / Sen, Arunabha (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In this thesis multiple approaches are explored to enhance sentiment analysis of tweets. A standard sentiment analysis model with customized features is first trained and tested to establish a baseline. This is compared to an existing topic based mixture model and a new proposed topic based vector model both of which use Latent Dirichlet Allocation (LDA) for topic modeling. The proposed topic based vector model has higher accuracies in terms of averaged F scores than the other two models.
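One way such a topic-based vector can be formed is sketched below. Everything here is a simplified stand-in: the lexicon, the two-topic distribution, and the weighting scheme are assumptions for illustration, not the thesis's model, which derives topic proportions with LDA.

```python
POS = {"great", "love", "good"}   # toy sentiment lexicon (assumed)
NEG = {"bad", "hate", "awful"}

def sentiment_score(tokens):
    """Lexicon score in [-1, 1]: (pos - neg) / matched, 0 if no match."""
    pos = sum(t in POS for t in tokens)
    neg = sum(t in NEG for t in tokens)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0

def topic_vector(topic_dist, tokens):
    """Concatenate a tweet's topic proportions with per-topic weighted
    sentiment, so the classifier sees sentiment in topic context."""
    s = sentiment_score(tokens)
    return list(topic_dist) + [p * s for p in topic_dist]

# Assumed LDA output: this tweet is 70% topic 0, 30% topic 1.
vec = topic_vector([0.7, 0.3], "love this great phone".split())
```

The point of the construction is that the same sentiment word contributes differently depending on which topics the tweet belongs to.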
Contributors: Baskaran, Swetha (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In supervised learning, machine learning techniques can be applied to learn a model on a small set of labeled documents, which can then be used to classify a larger set of unknown documents. Machine learning techniques can also be used to analyze a political scenario in a given society. A lot of research has been going on in this field to understand the interactions of various people in a society in response to actions taken by their organizations.
This thesis examines Russian influence on people in Latvia. This is done by building an effective model learned on an initial set of documents containing a combination of official party web pages and important political leaders' social networking sites. Since Twitter is a micro-blogging site that allows people to post their opinions on any topic, the model built is used to estimate the tweets supporting the Russian and Latvian political organizations in Latvia. All the documents collected for analysis are in Latvian and Russian, languages that are rich in vocabulary, resulting in a huge number of features. Hence, feature selection techniques can be used to reduce the vocabulary set relevant to the classification model. This thesis provides a comparative analysis of traditional feature selection techniques and the implementation of a new iterative feature selection method using EM and cross-domain training, along with a supportive visualization tool. This method outperformed other feature selection methods by reducing the number of features by up to 50% while maintaining good model accuracy. The results from the classification are used to interpret user behavior and political influence patterns across organizations in Latvia using an interactive dashboard with a combination of powerful widgets.
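The iterative-shrinking idea behind such a method can be sketched as follows. The scoring function (class-frequency difference) and the halving schedule are illustrative stand-ins: the thesis's method iterates with EM and cross-domain training, which is not reproduced here.

```python
from collections import Counter

def feature_scores(docs, labels):
    """Score each token by how differently it is used in class 1 vs
    class 0 (absolute difference of relative frequencies)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for tokens, y in zip(docs, labels):
        counts[y].update(tokens)
        totals[y] += len(tokens)
    vocab = set(counts[0]) | set(counts[1])
    return {w: abs(counts[1][w] / max(totals[1], 1)
                   - counts[0][w] / max(totals[0], 1)) for w in vocab}

def iterative_select(docs, labels, target_size):
    """Repeatedly keep the top half of features by score until the
    feature set fits within target_size."""
    scores = feature_scores(docs, labels)
    kept = sorted(scores, key=scores.get, reverse=True)
    while len(kept) > target_size:
        kept = kept[:max(target_size, len(kept) // 2)]
    return set(kept)

# Hypothetical tokenized documents: class 0 Latvian-leaning, class 1
# Russian-leaning (tokens are made-up examples).
docs = [["uzvara", "partija"], ["partija", "valsts"],
        ["pobeda", "partiya"], ["partiya", "rossiya"]]
labels = [0, 0, 1, 1]
selected = iterative_select(docs, labels, target_size=2)
```

In a full pipeline, each shrinking round would retrain the classifier on the kept features and re-estimate labels (the EM step) before rescoring.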
Contributors: Bollapragada, Lakshmi Gayatri Niharika (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2016