Matching Items (64)
Description
Multi-tenancy architecture (MTA) is often used in Software-as-a-Service (SaaS), and the central idea is that multiple tenant applications can be developed using components stored in the SaaS infrastructure. Recently, MTA has been extended so that a tenant application can have its own sub-tenants, with the tenant application acting like a SaaS infrastructure. In other words, MTA is extended to STA (Sub-Tenancy Architecture). In STA, each tenant application not only needs to develop its own functionalities, but also needs to prepare an infrastructure that allows its sub-tenants to develop customized applications. This dissertation formulates eight models for STA and proposes a Variant Point based customization model to help tenants and sub-tenants customize tenant and sub-tenant applications. In addition, this dissertation introduces crowdsourcing as the core of the STA component development life cycle. To discover suitable tenant developers or components to help build and compose new components, dynamic and static ranking models are proposed. Further, a rank computation architecture is presented to handle the case when the number of tenants and components becomes very large. Finally, an experiment is performed to show that the ranking models and the rank computation architecture work as designed.
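As an illustration only, the sketch below shows one way a static ranking over candidate components or tenant developers could be computed from a few normalized quality signals. The signal names, weights, and candidate data are hypothetical placeholders, not the dissertation's actual ranking models.

```python
# Illustrative static ranking sketch (hypothetical signals and weights,
# not the dissertation's actual model).

def static_rank(candidates, weights=None):
    """Score candidate components/developers by weighted quality signals."""
    weights = weights or {"reuse_count": 0.5, "rating": 0.3, "recency": 0.2}

    def score(c):
        return sum(weights[k] * c.get(k, 0.0) for k in weights)

    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "auth-component", "reuse_count": 0.9, "rating": 0.8, "recency": 0.4},
    {"name": "billing-component", "reuse_count": 0.5, "rating": 0.9, "recency": 0.9},
    {"name": "report-component", "reuse_count": 0.2, "rating": 0.6, "recency": 0.7},
]

for c in static_rank(candidates):
    print(c["name"])
```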
Contributors: Zhong, Peide (Author) / Davulcu, Hasan (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Huang, Dijiang (Committee member) / Tsai, Wei-Tek (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Scientific workflows allow scientists to easily model and express all the steps of a data processing pipeline, typically as a directed acyclic graph (DAG). These workflows are made of a collection of tasks that usually take a long time to compute and that produce a considerable amount of intermediate datasets. Because of the nature of scientific exploration, a scientific workflow can be modified and re-run multiple times, and new workflows may be created that make use of past intermediate datasets. Storing intermediate datasets therefore has the potential to save computation time. Since storage is limited, one main problem that needs a solution is determining which intermediate datasets to save at creation time in order to minimize the computational time of workflows run in the future. This thesis proposes the design and implementation of Pingo, a system that manages the computation of scientific workflows as well as the storage, provenance, and deletion of intermediate datasets. Pingo uses the history of workflows submitted to the system to predict the datasets most likely to be needed in the future, and drives dataset-deletion decisions by the goal of minimizing the computational time of future workflows.
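For illustration, the following sketch shows one simple policy for deciding which intermediate datasets to keep under a storage budget: a greedy heuristic that favors datasets with the highest expected recomputation time saved per gigabyte stored. The field names, numbers, and the heuristic itself are assumptions for this sketch, not Pingo's actual prediction or optimization algorithm.

```python
# Illustrative sketch of choosing which intermediate datasets to keep under a
# storage budget; the greedy value-density heuristic and field names are
# assumptions for illustration, not Pingo's actual algorithm.

def select_datasets(datasets, storage_budget_gb):
    """Keep datasets with the best expected compute time saved per GB stored."""
    ranked = sorted(
        datasets,
        key=lambda d: d["reuse_probability"] * d["compute_hours"] / d["size_gb"],
        reverse=True,
    )
    kept, used = [], 0.0
    for d in ranked:
        if used + d["size_gb"] <= storage_budget_gb:
            kept.append(d["name"])
            used += d["size_gb"]
    return kept

datasets = [
    {"name": "aligned_reads", "size_gb": 120, "compute_hours": 30, "reuse_probability": 0.7},
    {"name": "filtered_variants", "size_gb": 5, "compute_hours": 8, "reuse_probability": 0.9},
    {"name": "raw_copy", "size_gb": 200, "compute_hours": 1, "reuse_probability": 0.1},
]
print(select_datasets(datasets, storage_budget_gb=150))
```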
Contributors: de Armas, Jadiel (Author) / Bazzi, Rida (Thesis advisor) / Huang, Dijiang (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Wireless communication technologies play an important role in modern society. Due to their inherent mobility, wireless networks are more vulnerable to passive attacks than traditional wired networks. Anonymity, an important issue in mobile network environments, is the first topic that motivates the research presented in this manuscript. Specifically, the anonymity issue in Mobile Ad hoc Networks (MANETs) is discussed in detail in the first part of the research.

To study this topic thoroughly, the presented work approaches it from an attacker's perspective. In an ideal scenario, all the traffic in a targeted MANET reveals the communication relations to a passive attacker. In practice, however, localization errors significantly affect the accuracy of the derived communication patterns. To handle this issue, a new scheme is proposed that generates super nodes, which represent the activities of user groups in the target MANET. This scheme also reduces the scale of the monitoring work by grouping users based on their behaviors.

The first part of the work on anonymity in MANETs leads to a reflection on its major cause: the link-based communication pattern is a key contributor to the success of traffic analysis attacks. A natural way to circumvent this issue is to use link-less approaches, of which Information Centric Networking (ICN) is a typical instance. Its communication pattern can overcome the anonymity issue of MANETs, but it also comes with its own shortcomings, one of which is access control enforcement. To tackle this issue, a new naming scheme for content transmitted in ICN networks is presented. This scheme is based on a new Attribute-Based Encryption (ABE) algorithm and enforces access control in ICN with minimal requirements for additional network components.

Following the research on ABE, an important function, delegation, exhibits a potential security issue. In traditional ABE schemes such as Ciphertext-Policy ABE (CP-ABE), a user is able to generate a subset of authentic attribute key components for other users through the delegation function. This capability is not monitored or controlled by the trusted third party (TTP) in the cryptosystem. A direct threat arising from this issue is that any user may intentionally or unintentionally lower the standards for attribute assignment: unauthorized users or attackers may obtain their desired attributes from a delegating party instead of directly from the TTP. As the third part of the work presented in this manuscript, a three-level delegation restriction architecture is proposed, together with a delegation restriction scheme that follows this architecture. The scheme allows the TTP to retain full control over the delegation function of all its direct users.
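As a rough illustration of the super-node idea, the sketch below groups nodes whose observed traffic-behavior vectors are similar, so each group can be monitored as one aggregate. The cosine-similarity threshold clustering, feature vectors, and node names are assumptions made for this sketch, not the dissertation's exact scheme.

```python
# Illustrative sketch of grouping MANET nodes into "super nodes" by behavior
# similarity; the cosine-similarity threshold clustering is an assumption for
# illustration, not the dissertation's exact scheme.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def group_super_nodes(behavior, threshold=0.9):
    """behavior: {node_id: feature vector of observed traffic statistics}."""
    groups = []  # each group is a list of node ids with similar behavior
    for node, vec in behavior.items():
        for group in groups:
            rep = behavior[group[0]]          # compare with group representative
            if cosine(vec, rep) >= threshold:
                group.append(node)
                break
        else:
            groups.append([node])
    return groups

behavior = {
    "n1": [10, 2, 0.1], "n2": [11, 2, 0.1],   # similar traffic pattern
    "n3": [1, 9, 0.8],  "n4": [1, 10, 0.7],   # another pattern
}
print(group_super_nodes(behavior))            # -> [['n1', 'n2'], ['n3', 'n4']]
```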
Contributors: Li, Bing (Author) / Huang, Dijiang (Thesis advisor) / Xue, Guoliang (Committee member) / Ahn, Gail-Joon (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Cloud computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces similar in spirit to existing grid and HPC resource management and programming systems. These types of systems offer a new programming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are proprietary and rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this research, the XenServer Management API is employed to build a framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS): systems that give users the ability to run and control entire virtual machine instances deployed across a variety of physical resources. The goal of this research is to develop a cloud-based resource and service sharing platform for computer network security education, known as Virtual Lab.
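As a hedged illustration of the kind of calls such a framework builds on, the sketch below lists and starts virtual machines through the XenServer Python bindings (XenAPI). The host address, credentials, and VM name are placeholders, and the surrounding Virtual Lab logic (user accounts, lab topologies, resource sharing) is omitted.

```python
# Minimal sketch of driving XenServer through its Python XenAPI bindings, the
# kind of calls an IaaS-style virtual-lab layer builds on. Host, credentials,
# and the VM name are placeholders; error handling is omitted.
import XenAPI

session = XenAPI.Session("https://xenserver.example.edu")   # placeholder host
session.xenapi.login_with_password("root", "password")      # placeholder credentials
try:
    for vm in session.xenapi.VM.get_all():
        record = session.xenapi.VM.get_record(vm)
        if record["is_a_template"] or record["is_control_domain"]:
            continue                                         # skip templates and dom0
        print(record["name_label"], record["power_state"])
        if record["name_label"] == "student-lab-vm" and record["power_state"] == "Halted":
            session.xenapi.VM.start(vm, False, False)        # not paused, not forced
finally:
    session.xenapi.session.logout()
```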
Contributors: Kadne, Aniruddha (Author) / Huang, Dijiang (Thesis advisor) / Tsai, Wei-Tek (Committee member) / Ahn, Gail-Joon (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Today, many wireless networks are single-channel systems. However, as the interest in wireless services increases, the contention among nodes to occupy the medium becomes more intense and interference worsens. One direction with the potential to increase system throughput is multi-channel systems. Multi-channel systems have been shown to reduce collisions and increase concurrency, thus producing better bandwidth usage. However, the well-known hidden- and exposed-terminal problems inherited from single-channel systems remain, and a new channel selection problem is introduced. In this dissertation, multi-channel medium access control (MAC) protocols are proposed for mobile ad hoc networks (MANETs) in which nodes are equipped with a single half-duplex transceiver, using more sophisticated physical-layer technologies. These include code division multiple access (CDMA), orthogonal frequency division multiple access (OFDMA), and diversity. CDMA increases channel reuse, while OFDMA enables communication by multiple users in parallel. There is a challenge to using each technology in MANETs, where there is no fixed infrastructure or centralized control. CDMA suffers from the near-far problem, while OFDMA requires channel synchronization to decode the signal. As a result, CDMA and OFDMA are not yet widely used. Cooperative (diversity) mechanisms provide vital information to facilitate communication set-up between source-destination node pairs and help overcome limitations of physical-layer technologies in MANETs. In this dissertation, the Cooperative CDMA-based Multi-channel MAC (CCM-MAC) protocol uses CDMA to enable concurrent transmissions on each channel. The Power-controlled CDMA-based Multi-channel MAC (PCC-MAC) protocol uses transmission power control at each node and mitigates collisions of control packets on the control channel by using different sizes of the spreading factor to have different processing gains for the control signals. The Cooperative Dual-access Multi-channel MAC (CDM-MAC) protocol combines the use of OFDMA and CDMA and minimizes channel interference by a resolvable balanced incomplete block design (BIBD). In each protocol, cooperating nodes help reduce the incidence of the multi-channel hidden- and exposed-terminal problems and help address the near-far problem of CDMA by supplying information. Simulation results show that each of the proposed protocols achieves significantly better system performance when compared to IEEE 802.11, other multi-channel protocols, and another CDMA-based protocol.
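The sketch below is a textbook-style illustration of the CDMA property these protocols exploit: with orthogonal spreading codes, two concurrent transmissions on the same channel can be separated at the receiver. It is not a model of the CCM-MAC, PCC-MAC, or CDM-MAC protocols themselves; the code lengths and data bits are arbitrary.

```python
# Toy illustration of the CDMA property the proposed MACs rely on: with
# orthogonal spreading codes, two concurrent transmissions on one channel can
# be separated at the receiver. A textbook sketch, not the protocols themselves.
import numpy as np

def walsh_codes(n):
    """Return an n x n Hadamard matrix whose rows are orthogonal spreading codes."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(8)
bits_a = np.array([1, -1, 1])      # user A's data (+1/-1)
bits_b = np.array([-1, -1, 1])     # user B's data

# Spread each bit over its user's code and superimpose on the shared channel.
signal_a = np.concatenate([b * codes[1] for b in bits_a])
signal_b = np.concatenate([b * codes[2] for b in bits_b])
channel = signal_a + signal_b

def despread(channel, code):
    """Correlate each chip block with the target user's code."""
    blocks = channel.reshape(-1, len(code))
    return np.sign(blocks @ code)

print(despread(channel, codes[1]))   # recovers user A: [ 1. -1.  1.]
print(despread(channel, codes[2]))   # recovers user B: [-1. -1.  1.]
```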
Contributors: Moon, Yuhan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Huang, Dijiang (Committee member) / Reisslein, Martin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Personalized learning is gaining popularity in online computer science education due to its ability to pace the learning progress and adapt the instructional approach to each individual learner from diverse backgrounds. Among the various instructional methods in computer science education, hands-on labs have unique requirements for understanding learners' behavior and assessing learners' performance for personalization. Hands-on labs are a critical learning approach for cybersecurity education: they provide real-world, complex problem scenarios and help learners develop a deeper understanding of knowledge and concepts while solving real-world problems. However, there are unique challenges when using hands-on labs for cybersecurity education. Existing hands-on lab exercise materials are usually managed in a problem-centric fashion, which lacks a coherent way to organize existing labs and provide productive lab exercise plans for cybersecurity learners. To address these challenges, a personalized learning platform called ThoTh Lab, specifically designed for computer science hands-on labs in a cloud environment, is established. ThoTh Lab can identify the learning style from student activities and adapt learning material accordingly. With awareness of student learning styles, instructors are able to use techniques more suitable for the specific student and, hence, improve the speed and quality of the learning process. ThoTh Lab also provides student performance prediction, which allows instructors to adjust the learning progress and take other measures to help students in a timely manner. A knowledge graph in the cybersecurity domain is also constructed using natural language processing (NLP) technologies, including word embedding and hyperlink-based concept mining. This knowledge graph is then utilized during the regular learning process to build a personalized lab recommendation system that suggests relevant labs based on students' past learning history to maximize their learning outcomes. To evaluate ThoTh Lab, several in-class experiments were carried out in cybersecurity classes for both graduate and undergraduate students at Arizona State University, and data was collected over several semesters. The case studies show that, by leveraging the personalized lab platform, students tend to be more absorbed in a lab project, show more interest in the cybersecurity area, spend more effort on the project, and gain enhanced learning outcomes.
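As a simplified illustration of knowledge-graph-driven lab recommendation, the sketch below scores labs by how their concepts relate to what a student has already covered, favoring labs that mix familiar and new concepts. The concept sets, lab names, and scoring rule are assumptions for this sketch, not ThoTh Lab's actual recommender.

```python
# Illustrative sketch of recommending the next hands-on lab from knowledge-graph
# concepts and a student's history. Concept sets, lab names, and the scoring
# rule are assumptions for illustration, not ThoTh Lab's actual recommender.

lab_concepts = {
    "sql-injection-lab": {"web", "sql", "input-validation"},
    "buffer-overflow-lab": {"c", "memory", "stack"},
    "xss-lab": {"web", "javascript", "input-validation"},
}

def recommend(completed_labs, top_k=2):
    """Favor labs that mix concepts the student has seen with new ones."""
    seen = set().union(*(lab_concepts[l] for l in completed_labs)) if completed_labs else set()
    scores = {}
    for lab, concepts in lab_concepts.items():
        if lab in completed_labs:
            continue
        overlap = len(concepts & seen)       # reinforces prior concepts
        new = len(concepts - seen)           # introduces growth
        scores[lab] = overlap * new
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(["sql-injection-lab"]))      # xss-lab ranks first here
```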
Contributors: Deng, Yuli (Author) / Huang, Dijiang (Thesis advisor) / Li, Baoxin (Committee member) / Zhao, Ming (Committee member) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Demand for processing machine learning workloads has grown tremendously over the past few years. Kubernetes, an open-source container orchestrator, has been widely used by public and private cloud providers to build scalable systems for meeting this demand. The data used to train machine learning workloads can be sensitive in nature, and organizations may prefer to be responsible for their data security and governance by housing it on on-premises systems. Hybrid cloud gives organizations the flexibility to use both on-premises and cloud infrastructure together, leveraging the advantages of both. While there is a long list of benefits, Kubernetes has design limitations that restrict what a user can do in a hybrid cloud environment. The Kubernetes control plane does not allow for the management of worker nodes across cloud providers. This boundary puts new responsibilities on the end-user when deploying a hybrid cloud workload: the end-user must create their clusters and specify ahead of time which cluster the workload will be scheduled to, and the Kubernetes scheduler will not take the capacity of another cluster into account. To address these limitations, this thesis presents a new hybrid cloud Kubernetes scheduler that can create new clusters on demand and burst machine learning workloads to a public cloud when on-premises resources are insufficient. Workloads begin scheduling on an on-premises Kubernetes cluster. When the on-premises cluster's capacity is exhausted, a new Kubernetes cluster is created on demand in a public cloud provider, and machine learning tasks waiting in the Kubernetes scheduling queue are dynamically migrated to the public cloud provider's Kubernetes cluster. The public Kubernetes cluster is dynamically sized and autoscaled based on the pending tasks' demand. When migrating tasks, the data dependencies among tasks are considered, and a region is dynamically chosen to reduce migration time and cost. The scheduler is experimentally evaluated with real-world machine learning workloads, including predicting whether a subscriber will stay with a subscription service, predicting the discount needed to retain a subscription customer, and predicting whether a credit card transaction is fraudulent, under simulated real-world job-arrival behavior in a real hybrid cloud environment. Results show that the scheduler can substantially reduce workload execution time by dynamically migrating tasks from on-premises to the public cloud and minimize cost by dynamically sizing and scaling the public cluster.
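A minimal sketch of the bursting decision is shown below: tasks are placed on the on-premises cluster while capacity remains, and the rest are assigned to the cloud region with the cheapest estimated data-transfer cost. Cluster capacities, region prices, and task sizes are placeholders, not the thesis's actual policy or measurements.

```python
# Illustrative sketch of the bursting decision: fill the on-premises cluster
# first, then place the remaining queued tasks in the cloud region with the
# cheapest estimated data-transfer cost. All numbers are made-up placeholders.

REGIONS = {"us-west": 0.02, "us-east": 0.05, "eu-west": 0.09}   # $ per GB moved (placeholder)

def schedule(tasks, onprem_free_cpus):
    placements = {}
    for task in sorted(tasks, key=lambda t: t["cpus"]):
        if task["cpus"] <= onprem_free_cpus:
            placements[task["name"]] = "on-prem"
            onprem_free_cpus -= task["cpus"]
        else:
            # Burst: pick the region that minimizes the cost of moving this task's input data.
            region = min(REGIONS, key=lambda r: REGIONS[r] * task["input_gb"])
            placements[task["name"]] = f"cloud:{region}"
    return placements

tasks = [
    {"name": "churn-train", "cpus": 8, "input_gb": 40},
    {"name": "fraud-train", "cpus": 16, "input_gb": 120},
    {"name": "discount-train", "cpus": 4, "input_gb": 10},
]
print(schedule(tasks, onprem_free_cpus=12))
```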
Contributors: Kieley, James (Author) / Zhao, Ming (Thesis advisor) / Huang, Dijiang (Committee member) / Zou, Jia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The Internet of Things (IoT) is emerging as part of the infrastructure for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitations of the computing resources of IoT devices, it is common to offload tasks that require substantial computing resources to systems with sufficient capacity, such as servers, cloud systems, and data centers, for processing. However, this offloading method suffers from both high latency and network congestion in IoT infrastructures.

Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and alleviate network congestion. Yet edge computing has its drawbacks, such as the limited computing resources of some edge devices and the unbalanced loads among these devices. In order to effectively exploit the potential of edge computing to support IoT applications, efficient task management and load balancing in edge computing networks are necessary.

In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying their quality-of-service (QoS) requirements, which include task completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, taking tasks' priorities into consideration. The goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks accommodated in the edge computing network and improves the efficiency of resource utilization.
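The sketch below is a simplified stand-in for the described joint optimization: tasks are admitted in priority order onto edge nodes that have enough CPU, satisfy the task's security level, and can meet its deadline. The node and task parameters are illustrative placeholders, not the dissertation's formulation.

```python
# Simplified stand-in for the joint optimization: admit tasks in priority order
# onto edge nodes with enough CPU, an adequate security level, and a finish
# time within the deadline. All numbers are illustrative placeholders.

nodes = [
    {"id": "edge-1", "free_cpu": 4.0, "security_level": 2, "speed": 1.0},
    {"id": "edge-2", "free_cpu": 2.0, "security_level": 3, "speed": 1.5},
]

tasks = [
    {"id": "t1", "priority": 3, "cpu": 2.0, "work": 6.0, "deadline": 5.0, "min_security": 3},
    {"id": "t2", "priority": 2, "cpu": 1.0, "work": 3.0, "deadline": 4.0, "min_security": 1},
    {"id": "t3", "priority": 1, "cpu": 3.0, "work": 9.0, "deadline": 6.0, "min_security": 1},
]

def admit(tasks, nodes):
    placement = {}
    for task in sorted(tasks, key=lambda t: t["priority"], reverse=True):
        for node in nodes:
            finish_time = task["work"] / node["speed"]
            if (node["free_cpu"] >= task["cpu"]
                    and node["security_level"] >= task["min_security"]
                    and finish_time <= task["deadline"]):
                node["free_cpu"] -= task["cpu"]
                placement[task["id"]] = node["id"]
                break
        else:
            placement[task["id"]] = "rejected"
    return placement

print(admit(tasks, nodes))
```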
Contributors: Song, Yaozhong (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Smart cities are the next wave of rapid expansion of the Internet of Things (IoT). A smart city is a designation given to a city that incorporates information and communication technologies (ICT) to enhance the quality and performance of urban services, such as energy, transportation, healthcare, communications, entertainment, education, e-commerce, businesses, city management, and utilities, to reduce resource consumption, wastage, and overall costs. The overarching aim of a smart city is to enhance the quality of living for its residents and businesses through technology. In a large ecosystem like a smart city, many organizations and companies collaborate with the smart city government to improve the smart city. These entities may need to store and share critical data with each other. A smart city has several thousand smart devices and sensors deployed across the city. Storing critical data in a secure and scalable manner is an important issue in a smart city. While current cloud-based services, like Splunk and ELK (Elasticsearch-Logstash-Kibana), offer a centralized view and control over the IT operations of these smart devices, they are still prone to insider attacks, data tampering, and rogue administrator problems. In this thesis, we present an approach that uses blockchain to recover critical data from unauthorized modifications. We evaluate our approach using extensive simulations based on complex adaptive system theory. We also prove mathematically that the approach always detects an unauthorized modification of critical data.
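As a toy illustration of the underlying idea, the sketch below keeps critical records in a hash chain so that an unauthorized modification of a working copy is detected by re-verifying the chain and the tampered record is restored. It is only a sketch of the concept, not the thesis's design or its complex-adaptive-system simulation.

```python
# Toy sketch of the core idea: keep critical records in a hash chain so an
# unauthorized modification is detected by re-verification, and the tampered
# copy can be restored from the chained record. Not the thesis's actual design.
import hashlib, json

def block_hash(prev_hash, record):
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64
    for rec in records:
        h = block_hash(prev, rec)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_and_recover(working_copy, chain):
    """Return a repaired copy of working_copy, using the chain as ground truth."""
    repaired = list(working_copy)
    for i, block in enumerate(chain):
        expected_prev = "0" * 64 if i == 0 else chain[i - 1]["hash"]
        assert block_hash(expected_prev, block["record"]) == block["hash"], "chain corrupted"
        if repaired[i] != block["record"]:
            print(f"tampering detected at index {i}; restoring")
            repaired[i] = block["record"]
    return repaired

records = [{"meter": "m1", "kwh": 12}, {"meter": "m2", "kwh": 7}]
chain = build_chain(records)
working = [dict(records[0]), {"meter": "m2", "kwh": 700}]   # an insider tampers with a value
print(verify_and_recover(working, chain))
```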
Contributors: Mishra, Vineeta (Author) / Yau, Sik-Sang (Thesis advisor) / Goul, Michael K (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Realizing the applications of the Internet of Things (IoT), with the goal of achieving a more efficient and automated world, requires billions of connected smart devices and the minimization of hardware costs in these devices. As a result, many IoT devices do not have sufficient resources to support the various protocols required in many IoT applications. Because of this, new protocols have been introduced to support the integration of these devices. One of these protocols is the increasingly popular Routing Protocol for Low-Power and Lossy Networks (RPL). However, this protocol is well known to attract blackhole and sinkhole attacks, and the more computationally intensive techniques used to protect against these attacks, such as intrusion detection systems and rank authentication schemes, cause serious difficulties for such resource-limited devices. In this paper, an effective approach is presented to protect RPL networks against blackhole attacks. The approach does not address sinkhole attacks because they cause little damage, are often used along with blackhole attacks, and can be detected when blackhole attacks are detected. The approach uses the multiple-parents-per-node feature of RPL and a parent evaluation system that enables nodes to select more reliable routes. Simulations have been conducted; compared to existing approaches, this approach provides better protection against blackhole attacks with much lower overhead for small RPL networks.
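A simplified sketch of the parent-evaluation idea follows: each node tracks the observed delivery ratio of its candidate parents and avoids parents whose ratio suggests blackhole behavior. The counters, threshold, and fallback rule are assumptions for this sketch, not the thesis's exact evaluation system.

```python
# Simplified sketch of the parent-evaluation idea: keep several candidate RPL
# parents, score each by its observed delivery ratio, and avoid parents whose
# ratio suggests blackhole behavior. Threshold and counters are illustrative.

class ParentStats:
    def __init__(self):
        self.sent = 0
        self.delivered = 0          # packets later confirmed delivered end-to-end

    def record(self, delivered):
        self.sent += 1
        self.delivered += int(delivered)

    def delivery_ratio(self):
        return self.delivered / self.sent if self.sent else 1.0   # optimistic until observed

def choose_parent(parents, blackhole_threshold=0.5):
    """parents: {parent_id: ParentStats}. Prefer the best parent above the
    threshold; fall back to the best available if none qualifies."""
    trusted = {p: s for p, s in parents.items() if s.delivery_ratio() >= blackhole_threshold}
    pool = trusted or parents
    return max(pool, key=lambda p: pool[p].delivery_ratio())

parents = {"p1": ParentStats(), "p2": ParentStats()}
for ok in [True, True, False, True]:
    parents["p1"].record(ok)        # p1 mostly forwards traffic
for ok in [False, False, False, True]:
    parents["p2"].record(ok)        # p2 drops most packets (suspected blackhole)
print(choose_parent(parents))        # -> p1
```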
Contributors: Sanders, Kent (Author) / Yau, Stephen S (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2021