Matching Items (4)

Description
With the increasing user demand for low latency and elastic provisioning of computing resources, coupled with ubiquitous and on-demand access to real-time data, cloud computing has emerged as a popular computing paradigm to meet these growing demands. However, with the introduction and rising use of wearable technology and the evolving uses of smartphones, the Internet of Things (IoT) has become a prevailing notion in the growing technology industry. Cisco Inc. has projected that approximately 403 zettabytes (ZB) of data will be created by 2018. Connecting these everyday devices to the web has produced exploding service and data-aggregation requirements, demanding a new and innovative computing platform: one capable of providing robust real-time data analytics and on-demand resource provisioning to clients such as IoT users. Such a computation model would need to function at the edge of the network, forming a bridge between large cloud data centers and the distributed connected devices.

This research expands on the notion of bringing computational power to the edge of the network and integrating it with the cloud computing paradigm while providing services to diverse IoT-based applications. This expansion is achieved through the establishment of a new computing model that serves as a platform for IoT-based devices to communicate with services in real time. We name this paradigm Gateway-Oriented Reconfigurable Ecosystem (GORE) computing. Finally, this thesis proposes and discusses the development of a policy management framework to accommodate the proposed computational paradigm. The policy framework serves both the hosted applications and the GORE paradigm by enabling them to function more efficiently; its goal is to ensure uninterrupted communication and service delivery between users and their applications.
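The abstract does not describe the policy framework's internals. As a purely illustrative sketch, a gateway-side, first-match policy check for an IoT request might look like the following; the PolicyRule structure, its fields, and the matching logic are hypothetical assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    device_class: str  # hypothetical, e.g. "wearable" or "smartphone"
    service: str       # hosted application the device is calling
    allow: bool

def evaluate(rules: list[PolicyRule], device_class: str, service: str) -> bool:
    """First-match policy evaluation at the gateway; deny by default."""
    for rule in rules:
        if rule.device_class == device_class and rule.service == service:
            return rule.allow
    return False

rules = [PolicyRule("wearable", "health-analytics", True),
         PolicyRule("smartphone", "health-analytics", False)]
print(evaluate(rules, "wearable", "health-analytics"))  # True
```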
Contributors: Dsouza, Clinton (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Corporations invest considerable resources to create, preserve, and analyze their data; yet while organizations are interested in protecting against unauthorized data transfer, there is no comprehensive metric for discriminating which data are at risk of leaking.

This thesis motivates the need for a quantitative leakage-risk metric and provides a risk assessment system, called Whispers, for computing it. Using unsupervised machine learning techniques, Whispers uncovers themes in an organization's document corpus, including previously unknown or unclassified data. Then, by correlating each document with its authors, Whispers can identify which data are easier to contain and, conversely, which are at risk.

Using the Enron email database, Whispers constructs a social network segmented by topic themes. This graph uncovers communication channels within the organization. Using this social network, Whispers determines the risk of each topic by measuring the rate at which simulated leaks go undetected. For the Enron set, Whispers identified 18 separate topic themes between January 1999 and December 2000. The highest-risk data emanated from the legal department, with a leakage risk as high as 60%.
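The abstract does not name the specific unsupervised technique Whispers uses. As a hedged illustration only, one common way to extract topic themes from an email corpus and group authors by theme is latent Dirichlet allocation; the scikit-learn calls, the toy emails list, and the author-grouping step below are assumptions for illustration, not Whispers' actual pipeline.

```python
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: (author, message body) pairs from an email archive
emails = [("alice", "quarterly legal review of the trading contracts"),
          ("bob", "pipeline maintenance schedule for the gas unit"),
          ("alice", "contract litigation risk memo for legal")]
authors, bodies = zip(*emails)

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(bodies)

# 18 components mirrors the 18 themes Whispers found in the Enron set
lda = LatentDirichletAllocation(n_components=18, random_state=0)
doc_topics = lda.fit_transform(X)  # one row per document, one column per theme

# Group authors by each document's dominant theme -- the raw material
# for a topic-segmented social network
theme_members = defaultdict(set)
for author, weights in zip(authors, doc_topics):
    theme_members[int(weights.argmax())].add(author)
print(dict(theme_members))
```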
Contributors: Wright, Jeremy (Author) / Syrotiuk, Violet (Thesis advisor) / Davulcu, Hasan (Committee member) / Yau, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis proposes a novel approach to establishing a trust model in a social network scenario based on users' emails. Email is one of the most important social connections today. By analyzing email exchange activities among users, a social network trust model can be established to judge the trust rate between each pair of users. The trust-checking process is divided into two steps: local checking and remote checking. Local checking directly contacts the email server to calculate the trust rate based on the user's own email communication history. Remote checking is a distributed computing process that enlists the user's social network friends to build the trust rate together. The email-based trust model is built upon a cloud computing framework called MobiCloud. Inside MobiCloud, each user occupies a virtual machine that can directly communicate with the others. Based on this feature, the distributed trust model is implemented as a combination of local analysis and remote analysis in the cloud. Experimental results show that the trust evaluation model gives accurate trust rates even in a small-scale social network without many social connections. With this trust model, security in both social network services and email communication can be improved.
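The abstract does not spell out the trust-rate formula. As a minimal sketch, assume the local-checking step scores a pair of users by how balanced and voluminous their email exchange history is, and the remote-checking step averages in friends' assessments; the functions, thresholds, and weights below are hypothetical, not MobiCloud's actual metric.

```python
def local_trust_rate(sent: int, received: int, min_exchanges: int = 5) -> float:
    """Hypothetical local checking: score one pair's email history."""
    total = sent + received
    if total < min_exchanges:
        return 0.0  # too little history to judge
    balance = min(sent, received) / max(sent, received)  # 1.0 = symmetric
    volume = min(total / 100.0, 1.0)  # saturates at 100 exchanges
    return balance * volume

def remote_trust_rate(local: float, friend_rates: list[float]) -> float:
    """Hypothetical remote checking: blend in friends' local scores."""
    if not friend_rates:
        return local
    return 0.5 * local + 0.5 * sum(friend_rates) / len(friend_rates)

# 40 sent / 30 received -> balance 0.75, volume 0.70, local rate 0.525
local = local_trust_rate(40, 30)
print(remote_trust_rate(local, [0.6, 0.8]))  # 0.6125
```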
Contributors: Zhong, Yunji (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The rapid growth of the internet and of connected devices, ranging from cloud systems to the Internet of Things, has raised critical concerns about securing these systems. In the recent past, security attacks on different kinds of devices have grown in complexity and diversity. One challenge is establishing secure communication among the various devices and systems in the network. Despite being protected with authentication and encryption, the network still needs to be defended against cyber-attacks; to that end, network traffic must be closely monitored to detect anomalies and intrusions. Intrusion detection can be cast as a network traffic classification problem in machine learning. Existing network traffic classification methods require extensive training and data preprocessing, a problem that is more serious when the dataset is huge. In addition, the machine learning and deep learning methods used so far were trained on datasets containing obsolete attacks. This thesis addresses these problems by applying ensemble methods to an up-to-date network attack dataset. Ensemble methods combine multiple learning algorithms to obtain better classification accuracy than any of the constituent algorithms applied alone. The dataset used for network traffic classification covers recent attack scenarios and contains over fifteen attacks. This approach shows that ensemble methods can classify network traffic and detect intrusions with shorter model training times and less preprocessing, without feature selection. The thesis also shows that using fewer than ten percent of the input dataset's features yields accuracy similar to that achieved on the whole dataset, which can heavily reduce training times and classification latency in real-time scenarios.
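The abstract names ensemble methods without specifying a library or model. As a sketch under the assumption of a scikit-learn random forest (itself a bagging ensemble of decision trees) on a labeled flow-feature CSV; the file name and the "Label" column are hypothetical placeholders, not the thesis's actual dataset layout.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical flow-feature dataset: numeric feature columns plus "Label"
df = pd.read_csv("flows.csv")
X, y = df.drop(columns=["Label"]), df["Label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("full-feature accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# The thesis's observation: a small fraction of features carries most of
# the signal. Keep the top ten percent by importance and retrain.
k = max(1, len(X.columns) // 10)
top = X.columns[clf.feature_importances_.argsort()[::-1][:k]]
clf_small = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf_small.fit(X_train[top], y_train)
print("top-10%-features accuracy:",
      accuracy_score(y_test, clf_small.predict(X_test[top])))
```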
Contributors: Ponneganti, Ramu (Author) / Yau, Stephen (Thesis advisor) / Richa, Andrea (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019