attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult
to determine who the attacker is, what the attacker's goals are, and how the attack will be
carried out. These three questions essentially entail understanding the attacker's use of
deception, the capabilities available to them, and the intent behind launching the attack. These
three issues are highly interrelated. If an adversary can hide their intent, they can better
deceive a defender. If an adversary's capabilities are not well understood, then determining
their goals becomes difficult, as the defender is uncertain whether the adversary has the
necessary tools to accomplish them. However, understanding these aspects is also mutually
supportive. If we have a clear picture of capabilities, intent can be better deciphered. If we
understand intent and capabilities, a defender may be able to see through deception schemes.
In this dissertation, I present three pieces of work that tackle these questions to obtain
a better understanding of cyber threats. First, we introduce a new reasoning framework
to address deception. We evaluate the framework by building a dataset from a DEF CON
capture-the-flag exercise and using it to identify the person or group responsible for a cyber attack.
We demonstrate that the framework not only handles cases of deception but also provides
transparent decision making in identifying the threat actor. The second task uses a cognitive
learning model to determine the intent, that is, the goals of the threat actor on the target system.
The third task looks at understanding the capabilities of threat actors to target systems by
identifying at-risk systems from hacker discussions on darkweb websites. To achieve this,
we gather discussions from more than 300 darkweb websites related to malicious
hacking.
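The third task's core idea of flagging at-risk systems from hacker discussions can be illustrated with a minimal sketch. The product names, posts, and mention threshold below are hypothetical; the dissertation's actual pipeline over 300+ darkweb sites is far richer than this keyword scan.

```python
import re

# Illustrative sketch only: flag inventory items that hacker discussions
# mention at least `min_mentions` times. All names below are made up.
INVENTORY = ["Apache Struts", "OpenSSL", "Windows Server"]

def at_risk(posts, min_mentions=2):
    """Return inventory items mentioned in at least min_mentions posts."""
    counts = {item: 0 for item in INVENTORY}
    for post in posts:
        for item in INVENTORY:
            if re.search(re.escape(item), post, re.IGNORECASE):
                counts[item] += 1
    return [item for item, n in counts.items() if n >= min_mentions]

posts = ["new apache struts rce dropped",
         "selling openssl heartbleed kit",
         "apache struts exploit works on latest build"]
flagged = at_risk(posts)  # only Apache Struts clears the threshold
```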
Leveraging Machine Learning and Wireless Sensing for Robot Localization - Location Variance Analysis
Modern communication networks depend heavily upon an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels toward a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects, and as a result it varies significantly from one location to another. The localization system described here seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm trained to associate channels with their respective locations. A device in need of localization would then only need to compute a channel estimate and pose it to this algorithm to obtain its location.
As an additional step, this report investigates the effect of location noise. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise on its location labels. This suggests the system could operate in a continued-learning setting, in which user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed in a new environment without extensive data collection prior to release.
The verses generated by the system are evaluated using rhyme, rhythm, syllable counts, and stress patterns. These computational features of language are used to generate haikus, limericks, and iambic pentameter verses. The generated poems are evaluated with a Turing test on both experts and non-experts. The user study finds that only 38% of the computer-generated poems were correctly identified by non-experts, while 65% were correctly identified by experts. Although the system does not pass the Turing test, the results suggest an improvement of over 17% compared with previous methods that use Turing tests to evaluate poetry generators.
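The surface features named above (rhyme, syllable counts) can be approximated with simple heuristics. The rules below are illustrative only; a real evaluator would use a pronunciation dictionary such as CMUdict rather than these vowel-group approximations.

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels, with a silent-'e' fix.
    Heuristic only; English spelling defeats it on many words."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1  # drop a trailing silent 'e', as in "silence"
    return max(n, 1)

def line_syllables(line):
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))

def is_haiku(lines):
    """A haiku's three lines carry 5, 7, and 5 syllables."""
    return [line_syllables(l) for l in lines] == [5, 7, 5]

def rhymes(w1, w2):
    """Crude rhyme test: same spelling from the final vowel group on."""
    def tail(w):
        m = re.search(r"[aeiouy]+[^aeiouy]*$", w.lower())
        return m.group(0) if m else w.lower()
    return tail(w1) == tail(w2) and w1.lower() != w2.lower()

haiku = ["An old silent pond",
         "A frog jumps into the pond",
         "Splash! Silence again"]
```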
received increasing attention in recent years. The availability of sheer amounts of
user-generated data presents data scientists with both opportunities and challenges.
Opportunities arise from additional data sources: the abundant link information
in social networks provides another rich source for deriving implicit information
in social data mining. However, the vast majority of existing studies focus
overwhelmingly on positive links between users, while negative links are also prevalent
in real-world social networks, such as distrust relations in Epinions and foe links
in Slashdot. Though recent studies show that negative links have added value over
positive links, it is difficult to employ them directly because their characteristics
differ from those of positive interactions. Another challenge is that label information
is rather limited in social media, as the labeling process requires human attention
and may be very expensive. Hence, alternative criteria are needed to guide the learning
process for many tasks such as feature selection and sentiment analysis.
To address the above-mentioned issues, I study two novel problems in mining signed
social networks: (1) unsupervised feature selection in signed social networks, and
(2) unsupervised sentiment analysis with signed social networks. To tackle the first
problem, I propose a novel unsupervised feature selection framework, SignedFS. In
particular, I model positive and negative links simultaneously for user preference
learning, and then embed the user preference learning into feature selection. To
study the second problem, I incorporate explicit sentiment signals in textual terms
and implicit sentiment signals from signed social networks into a coherent model,
SignedSenti. Empirical experiments on real-world datasets corroborate the effectiveness
of the two frameworks on the tasks of feature selection and sentiment analysis.
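The idea of modeling positive and negative links simultaneously for user preference learning can be sketched with one signed smoothing step: a user's latent preference is pulled toward trusted (positive) neighbors and pushed away from distrusted (negative) ones. This is an illustrative toy, not the SignedFS algorithm itself.

```python
import numpy as np

def signed_smooth(prefs, pos, neg, alpha=0.5):
    """One smoothing step over a signed network.
    prefs: (n_users, d) latent preferences.
    pos, neg: (n_users, n_users) 0/1 adjacency of positive/negative links."""
    pos_deg = np.maximum(pos.sum(axis=1, keepdims=True), 1)
    neg_deg = np.maximum(neg.sum(axis=1, keepdims=True), 1)
    pull = pos @ prefs / pos_deg  # mean preference of trusted neighbors
    push = neg @ prefs / neg_deg  # mean preference of distrusted neighbors
    return prefs + alpha * (pull - push)

# Toy network: users 0 and 1 trust each other; both distrust user 2.
pos = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
neg = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)
prefs = np.array([[1.0], [0.8], [-1.0]])
updated = signed_smooth(prefs, pos, neg)
```

After the step, the trusting pair's preferences move closer together while user 2's preference moves further away, which is the intuition behind using both link types at once.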
The purpose of this project is to create a useful tool for musicians that utilizes the harmonic content of their playing to recommend new, relevant chords to play. This is done by training various Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) on the lead sheets of 100 different jazz standards. A total of 200 unique datasets were produced and tested, resulting in the prediction of nearly 51 million chords. A note-prediction accuracy of 82.1% and a chord-prediction accuracy of 34.5% were achieved across all datasets. Methods of data representation that were rooted in valid music theory frameworks were found to increase the efficacy of harmonic prediction by up to 6%. Optimal LSTM input sizes were also determined for each method of data representation.
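One kind of music-theory-rooted data representation of the sort the project credits with improving prediction is key-relative encoding, where chord roots are expressed as intervals above the tonic so that the same progression in different keys maps to identical tokens. The project's exact encoding is not specified in the abstract; this sketch is one plausible choice.

```python
# Map pitch names to semitone classes (enharmonic spellings share a class).
PITCHES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
           "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
           "A#": 10, "Bb": 10, "B": 11}

def key_relative(chords, key):
    """Encode (root, quality) chords as (semitones-above-tonic, quality),
    making the representation transposition-invariant."""
    tonic = PITCHES[key]
    return [((PITCHES[root] - tonic) % 12, quality)
            for root, quality in chords]

# The same ii-V-I progression in two keys yields identical token sequences,
# so an LSTM trained on one key's lead sheets generalizes to the other.
in_c = key_relative([("D", "m7"), ("G", "7"), ("C", "maj7")], "C")
in_f = key_relative([("G", "m7"), ("C", "7"), ("F", "maj7")], "F")
```

Feeding such tokens (rather than absolute chord names) to the LSTM shrinks the effective vocabulary, one way a theory-aware representation could yield the reported accuracy gains.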