Matching Items (177)
Description
The adoption of the Service Oriented Architecture (SOA) as the foundation for developing a new generation of software systems, known as Service Based Software Systems (SBS), poses new challenges in system design. While simulation as a methodology serves a principal role in design, there is growing recognition that simulation of SBS requires modeling capabilities beyond those developed for traditional distributed software systems. In particular, while different component-based modeling approaches may lend themselves to simulating the logical process flows in Service Oriented Computing (SOC) systems, they are inadequate in terms of supporting SOA-compliant modeling. Furthermore, composite services must satisfy multiple QoS attributes under constrained service reconfigurations and hardware resources. A key desired capability, therefore, is to model and simulate not only the services consistent with SOA concepts and principles, but also the hardware and network components on which the services must execute. In this dissertation, SOC-DEVS, a novel co-design modeling methodology that enables simulation of the software and hardware aspects of SBS for early architectural design evaluation, is developed. A set of abstractions representing important service characteristics and service relationships are modeled. The proposed software/hardware co-design simulation capability is introduced into the DEVS-Suite simulator. Exemplar simulation models of a communication-intensive Voice Communication System and a computation-intensive Encryption System are developed and then validated using data from an existing real system. The applicability of the SOC-DEVS methodology is demonstrated in a simulation testbed aimed at facilitating the design and development of SBS. Furthermore, the simulation testbed is extended by integrating an existing prototype monitoring and adaptation system with the simulator to support basic experimentation towards the design and development of Adaptive SBS.
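For readers unfamiliar with the DEVS formalism that underlies SOC-DEVS, the sketch below shows a minimal atomic model interface in plain Python. It is illustrative only: the actual SOC-DEVS models are built on the Java-based DEVS-Suite simulator, and the state, ports, and timing here are hypothetical.

```python
# Minimal sketch of a DEVS atomic model interface (illustrative only; the real
# SOC-DEVS models run on the Java-based DEVS-Suite simulator).

class ServiceProcessor:
    """Toy atomic model: queues incoming service requests and emits replies."""

    def __init__(self, service_time=2.0):
        self.service_time = service_time   # hypothetical processing delay
        self.queue = []                    # pending requests (model state)

    def delta_ext(self, elapsed, request):
        """External transition: a request arrives on the input port."""
        self.queue.append(request)

    def delta_int(self):
        """Internal transition: finish serving the request at the queue head."""
        self.queue.pop(0)

    def output(self):
        """Output function: emit a reply just before the internal transition."""
        return {"reply_to": self.queue[0]}

    def time_advance(self):
        """Time advance: busy models schedule a completion, idle models wait."""
        return self.service_time if self.queue else float("inf")
```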
Contributors: Muqsith, Mohammed Abdul (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Tsai, Wei-Tek (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in SBS. Current SBSs pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there are risks of unauthorized use of the users' sensitive data by service providers. Although there are many techniques for protecting users' data from outside attackers, there is currently no effective way to protect users' sensitive data from service providers. In this dissertation, an approach is presented for protecting the confidentiality of users' data from service providers and ensuring that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding the information of the owners of data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Since the approach to protecting users' data confidentiality includes software module decomposition and distributed execution, it is very important to effectively allocate the resources of servers in SBS to each of the software modules in order to manage the overall performance of workflows in SBS. An approach to resource allocation for SBS is presented that adaptively allocates the system resources of servers to their software modules at runtime in order to satisfy the performance requirements of multiple workflows in SBS. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.
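As a toy illustration of the kind of adaptive allocation described above (not the dissertation's actual algorithm), the sketch below redistributes a server's capacity across decomposed software modules in proportion to how far each one is from its throughput target; the module names and numbers are hypothetical.

```python
# Toy proportional resource reallocation across software modules.
# An illustrative sketch only, not the dissertation's allocation algorithm.

def reallocate(modules, total_capacity=1.0):
    """modules: dict of module name -> (target_throughput, measured_throughput)."""
    # A module's deficit measures how far it is from its performance requirement.
    deficits = {m: max(target - measured, 0.0)
                for m, (target, measured) in modules.items()}
    total_deficit = sum(deficits.values())
    if total_deficit == 0:
        # All requirements are met: split the capacity evenly.
        return {m: total_capacity / len(modules) for m in modules}
    # Otherwise give each module capacity proportional to its deficit.
    return {m: total_capacity * d / total_deficit for m, d in deficits.items()}

# Example: the decomposed "obfuscate" module is lagging, so it gets more capacity.
shares = reallocate({"obfuscate": (100.0, 60.0), "store": (80.0, 70.0)})
print(shares)   # {'obfuscate': 0.8, 'store': 0.2}
```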
Contributors: An, Ho Geun (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Ahn, Gail-Joon (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The Open Services Gateway initiative (OSGi) framework is a module system and service platform standard that implements a complete and dynamic component model. Most current OSGi implementations are written in Java, a language closely related to the one used for Android development. With the emergence of the Android operating system, and because of these similarities, integrating the OSGi module system and service platform into Android has attracted increasing attention. Making OSGi run on Android is an active topic; finding a mechanism that enables communication between OSGi and the Android system is a more advanced problem than simply running OSGi on Android. This paper, which aims to realize SOA (Service Oriented Architecture) and CBA (Component Based Architecture), proposes a solution for integrating the Felix OSGi platform with the Android system in order to build a distributed OSGi framework between mobile phones on top of the XMPP protocol. It not only makes OSGi run on Android successfully, but also introduces a mechanism that enables seamless collaboration between the two platforms.
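To make the distributed-service idea concrete, the sketch below shows, in language-agnostic terms (Python here for brevity), how a remote service invocation might be serialized into a message body that an XMPP stanza could carry between phones. The actual system uses Apache Felix and Java on Android; the service name, payload format, and dispatch logic below are hypothetical.

```python
# Conceptual sketch: a remote service call is marshalled into a message body
# that could ride inside an XMPP stanza and be dispatched on the receiving phone.
# The real framework uses Apache Felix and Java; all names here are made up.
import json

def encode_invocation(service, method, args):
    """Marshal a remote service invocation into a JSON payload."""
    return json.dumps({"service": service, "method": method, "args": args})

def dispatch(payload, registry):
    """On the receiving side, look up the local service and invoke the method."""
    call = json.loads(payload)
    service = registry[call["service"]]
    return getattr(service, call["method"])(*call["args"])

class EchoService:
    def echo(self, text):
        return text.upper()

registry = {"org.example.Echo": EchoService()}   # hypothetical service name
msg = encode_invocation("org.example.Echo", "echo", ["hello from phone A"])
print(dispatch(msg, registry))   # HELLO FROM PHONE A
```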
Contributors: Dong, Xinyi (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Chen, Yinong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In modern healthcare environments, there is a strong need to create an infrastructure that reduces the time-consuming efforts and costly operations required to obtain a patient's complete medical record and that uniformly integrates this heterogeneous collection of medical data to deliver it to healthcare professionals. As a result, healthcare providers are more willing to shift their electronic medical record (EMR) systems to clouds, which can remove the geographical distance barriers among providers and patients. Even though cloud-based EMRs have received considerable attention, since they would help achieve lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an extremely important prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing of such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives, while satisfying access control and compliance requirements in healthcare cloud computing environments. We present a secure medical data sharing framework to support selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also ensures that privacy concerns are accommodated when processing access requests to patients' healthcare information. To realize our proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results to demonstrate the effectiveness and efficiency of our approach.
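The sketch below illustrates, in a deliberately simplified form, the selective-sharing idea: each contributing provider attaches an authorization policy to its portion of the aggregated EHR, and only the sections whose policies admit the requester's role are released. The record fields, roles, and policies are hypothetical and do not reflect the framework's actual policy model.

```python
# Toy sketch of selective sharing of a composite EHR: each provider's policy
# states which requester roles may see its section. Everything here is a
# simplified, hypothetical illustration of the idea.

composite_ehr = [
    {"provider": "ClinicA", "section": "lab_results"},
    {"provider": "HospitalB", "section": "psychiatric_notes"},
]

policies = {
    "ClinicA": {"physician", "nurse"},   # roles ClinicA authorizes
    "HospitalB": {"physician"},          # HospitalB is more restrictive
}

def selective_view(ehr, policies, requester_role):
    """Return only record sections whose owning provider authorizes the role."""
    return [rec for rec in ehr if requester_role in policies[rec["provider"]]]

print(selective_view(composite_ehr, policies, "nurse"))
# Only ClinicA's lab_results are released; HospitalB's notes are withheld.
```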
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cloud computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces similar in spirit to existing grid and HPC resource management and programming systems. These types of systems offer a new programming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are proprietary and rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this research, the Xen Server Management API is employed to build a framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS): systems that give users the ability to run and control entire virtual machine instances deployed across a variety of physical resources. The goal of this research is to develop a cloud-based resource and service sharing platform for computer network security education, also known as a Virtual Lab.
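A plausible provisioning flow for such a lab, sketched with the XenAPI Python bindings that expose the XenServer management interface, is shown below. The host address, credentials, and template name are placeholders, error handling is omitted, and this is only an assumed usage pattern, not the thesis's actual implementation.

```python
# Sketch of provisioning a per-student lab VM via the XenAPI Python bindings.
# Host, credentials, and template name are placeholders; error handling omitted.
import XenAPI

session = XenAPI.Session("https://xenserver.example.edu")   # placeholder host
session.xenapi.login_with_password("root", "password")      # placeholder creds
try:
    # Clone a pre-built course template into a per-student instance.
    template = session.xenapi.VM.get_by_name_label("netsec-lab-template")[0]
    vm = session.xenapi.VM.clone(template, "netsec-lab-student-01")
    session.xenapi.VM.provision(vm)            # materialize the clone's disks
    session.xenapi.VM.start(vm, False, False)  # start_paused=False, force=False
finally:
    session.xenapi.session.logout()
```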
Contributors: Kadne, Aniruddha (Author) / Huang, Dijiang (Thesis advisor) / Tsai, Wei-Tek (Committee member) / Ahn, Gail-Joon (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Today, many wireless networks are single-channel systems. However, as the interest in wireless services increases, the contention by nodes to occupy the medium becomes more intense and interference worsens. One direction with the potential to increase system throughput is multi-channel systems. Multi-channel systems have been shown to reduce collisions and increase concurrency, thus producing better bandwidth usage. However, the well-known hidden- and exposed-terminal problems inherited from single-channel systems remain, and a new channel selection problem is introduced. In this dissertation, multi-channel medium access control (MAC) protocols are proposed for mobile ad hoc networks (MANETs) for nodes equipped with a single half-duplex transceiver, using more sophisticated physical layer technologies. These include code division multiple access (CDMA), orthogonal frequency division multiple access (OFDMA), and diversity. CDMA increases channel reuse, while OFDMA enables communication by multiple users in parallel. There is a challenge to using each technology in MANETs, where there is no fixed infrastructure or centralized control: CDMA suffers from the near-far problem, while OFDMA requires channel synchronization to decode the signal. As a result, CDMA and OFDMA are not yet widely used. Cooperative (diversity) mechanisms provide vital information to facilitate communication set-up between source-destination node pairs and help overcome the limitations of physical layer technologies in MANETs. In this dissertation, the Cooperative CDMA-based Multi-channel MAC (CCM-MAC) protocol uses CDMA to enable concurrent transmissions on each channel. The Power-controlled CDMA-based Multi-channel MAC (PCC-MAC) protocol uses transmission power control at each node and mitigates collisions of control packets on the control channel by using different spreading factors, and hence different processing gains, for the control signals. The Cooperative Dual-access Multi-channel MAC (CDM-MAC) protocol combines the use of OFDMA and CDMA and minimizes channel interference through a resolvable balanced incomplete block design (BIBD). In each protocol, cooperating nodes supply information that helps reduce the incidence of the multi-channel hidden- and exposed-terminal problems and helps address the near-far problem of CDMA. Simulation results show that each of the proposed protocols achieves significantly better system performance when compared to IEEE 802.11, other multi-channel protocols, and another CDMA-based protocol.
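To make the block-design idea more concrete, the toy sketch below assigns subchannel sets to nodes from the small (7, 3, 1) balanced incomplete block design (the Fano plane), so any two distinct nodes share at most one subchannel. This is only an illustration of why a BIBD limits mutual interference, not the resolvable BIBD construction actually used in CDM-MAC.

```python
# Toy BIBD-based channel assignment: blocks of the (7, 3, 1) design serve as
# subchannel sets, so any two distinct nodes share at most one subchannel.
# Illustration only; not the resolvable BIBD construction used in CDM-MAC.
from itertools import combinations

FANO_BLOCKS = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

def assign_channels(node_ids):
    """Map each node to one block of subchannels (wrapping around if needed)."""
    return {n: FANO_BLOCKS[i % len(FANO_BLOCKS)] for i, n in enumerate(node_ids)}

nodes = assign_channels(["A", "B", "C", "D"])
for u, v in combinations(nodes, 2):
    overlap = nodes[u] & nodes[v]
    assert len(overlap) <= 1          # the lambda = 1 property of the design
    print(u, v, "shared subchannel:", overlap or "none")
```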
Contributors: Moon, Yuhan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Huang, Dijiang (Committee member) / Reisslein, Martin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
As automation research into penetration testing has developed, several methods have been proposed as suitable control mechanisms for use in pentesting frameworks. These include Markov Decision Processes (MDPs), partially observable Markov Decision Processes (POMDPs), and POMDPs utilizing reinforcement learning. Since much work has already been done automating other aspects of the pentesting process using exploit frameworks and scanning tools, attack planning is the next focal point in this field. This paper presents a fully integrated solution comprising a POMDP-based planning algorithm, the Nessus scanning utility, and MITRE's CALDERA pentesting platform. These are linked in order to create an autonomous AI attack platform with scanning, planning, and attack capabilities.
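At the core of any POMDP-based planner is a Bayesian belief update over the hidden state of the target. The sketch below shows that update on a made-up two-state toy (vulnerable vs. not vulnerable); the transition and observation probabilities are illustrative assumptions, not values from the thesis or from Nessus/CALDERA.

```python
# Sketch of the Bayesian belief update behind POMDP-based attack planning:
# after taking an action and receiving an observation, the belief over hidden
# host states is re-weighted. All probabilities here are made-up toy values.
import numpy as np

def belief_update(belief, T, O, action, obs):
    """b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    predicted = belief @ T[action]           # propagate through the transition model
    updated = predicted * O[action][:, obs]  # weight by the observation likelihood
    return updated / updated.sum()           # renormalize

# Two hidden states for one host: 0 = not vulnerable, 1 = vulnerable.
T = {"scan": np.array([[1.0, 0.0], [0.0, 1.0]])}   # scanning changes nothing
O = {"scan": np.array([[0.8, 0.2],                 # P(obs | state); obs 1 means
                       [0.3, 0.7]])}               # "service looks exploitable"
b0 = np.array([0.5, 0.5])
print(belief_update(b0, T, O, "scan", obs=1))      # belief shifts toward vulnerable
```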
Contributors: Dejarnett, Eric Andrew (Author) / Huang, Dijiang (Thesis director) / Chowdhary, Ankur (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Vulnerability testing/evaluation is a regular task for cyber-security groups. Conducting such tasks can take a great amount of time, and the results may not be perfect. Automating these tasks helps speed up the rate at which experts can test systems. However, script-based or static programs that run automatically often do not have the versatility required to properly replace human analysis. With the advances in Artificial Intelligence and Machine Learning, a utility can be developed that allows for the creation of penetration testing plans rather than relying on manual testing of vulnerabilities. A variety of existing cyber-security programs and utilities provide an API layer that commonly interacts with the Python environment. Given how common AI/ML tools are within the Python ecosystem, a plugin-like interface can be developed to feed any AI/ML program real-world data and receive a response/report in return. Using Python 2.7+, Python 3.6+, pymdptoolbox, and POMDPy, a program was developed that ingests real-world data from scanning tools and returns a suggested course of action to be used by analysts, providing a practical validation of the algorithms in a real-world setting. This program was able to successfully navigate a test network and produce the results that were expected to be found on the target machines without needing human analysis of the network. Using POMDP-based systems for more cyber-security tasks may be a valuable direction for future development and may help ease the burden faced in a fast-paced world.
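For orientation, the snippet below shows the basic way a policy is solved with pymdptoolbox, one of the toolboxes named above, using its bundled "forest" toy problem. This stands in for the thesis's actual security model, which is not reproduced here.

```python
# Minimal sketch of solving a toy MDP with pymdptoolbox (named above).
# The bundled "forest" example stands in for the thesis's security model.
import mdptoolbox.example

P, R = mdptoolbox.example.forest()               # toy transition and reward matrices
vi = mdptoolbox.mdp.ValueIteration(P, R, 0.9)    # 0.9 is the discount factor
vi.run()
print(vi.policy)    # optimal action per state, e.g. (0, 0, 0)
```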
Contributors: Belanger, Connor Lawrence (Author) / Huang, Dijiang (Thesis director) / Chowdhary, Ankur (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Fraud is defined as the utilization of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue worldwide due to financial crimes and fraud, which can significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction comes with a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants utilize various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Many proposed solutions focus mostly on fraud detection accuracy and ignore financial considerations. Also, the highly effective manual review process is overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints, and (b) is cost- and profit-sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction. I show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM is able to work with many supervised learners and obtain convincing results, utilizing probability outputs directly from the trained model itself can pose problems, especially in deep learning, as the softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has brought unintended consequences in many situations, such as the infamous case of Google Photos' racist image recognition algorithm, and thus necessitates the use of quantified uncertainty for each prediction. There have been recent efforts towards quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification, called MIPSC, that investigates and combines model uncertainty and the predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC) and apply it to the critical real-world problem of online fraud management, showing that my approach significantly outperforms industry-standard methods in real-world settings.
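The sketch below illustrates only the accept/reject intuition behind selective classification for fraud review: confident predictions are handled automatically, while transactions with high predictive uncertainty are routed to human analysts. The thresholds and scores are toy assumptions; this is not the MIPSC mixed-integer formulation.

```python
# Toy sketch of selective classification for fraud review: transactions whose
# predictive uncertainty exceeds a threshold are deferred to human analysts.
# This shows the accept/reject idea only, not the MIPSC MIP formulation.
import numpy as np

def route_transactions(fraud_prob, uncertainty, p_threshold=0.5, u_threshold=0.15):
    """Return 'approve', 'decline', or 'manual_review' for each transaction."""
    decisions = np.where(fraud_prob >= p_threshold, "decline", "approve")
    # Uncertain cases override the automatic decision and go to analysts.
    return np.where(uncertainty > u_threshold, "manual_review", decisions)

probs = np.array([0.05, 0.92, 0.55, 0.48])    # model's fraud probabilities (toy)
uncert = np.array([0.02, 0.04, 0.30, 0.25])   # e.g. MC-dropout std (toy)
print(route_transactions(probs, uncert))
# ['approve' 'decline' 'manual_review' 'manual_review']
```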
Contributors: Yildirim, Mehmet Yigit (Author) / Davulcu, Hasan (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Huang, Dijiang (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Realizing the applications of the Internet of Things (IoT), with the goal of achieving a more efficient and automated world, requires billions of connected smart devices and the minimization of hardware cost in these devices. As a result, many IoT devices do not have sufficient resources to support the various protocols required in many IoT applications. Because of this, new protocols have been introduced to support the integration of these devices. One of these protocols is the increasingly popular routing protocol for low-power and lossy networks (RPL). However, this protocol is well known to attract blackhole and sinkhole attacks, and it causes serious difficulties when more computationally intensive techniques, such as intrusion detection systems and rank authentication schemes, are used to protect against these attacks. In this paper, an effective approach is presented to protect RPL networks against blackhole attacks. The approach does not address sinkhole attacks because they cause little damage on their own, are often used along with blackhole attacks, and can be detected when blackhole attacks are detected. This approach uses RPL's support for multiple parents per node and a parent evaluation system that enables nodes to select more reliable routes. Simulations have been conducted; compared to existing approaches, this approach provides better protection against blackhole attacks with much lower overhead for small RPL networks.
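The sketch below conveys the parent-evaluation idea in a simplified form: among a node's multiple candidate parents, prefer the one whose observed forwarding behavior looks healthiest, so a blackhole node that drops traffic is avoided. The metrics, weights, and numbers are hypothetical, not the thesis's actual evaluation scheme.

```python
# Toy sketch of parent evaluation in RPL: among a node's candidate parents,
# prefer the one whose observed forwarding behavior is healthiest, so a
# blackhole is avoided. Metrics and weights are hypothetical illustrations.

def parent_score(acked, forwarded, rank):
    """Higher is better: reward forwarded traffic, mildly penalize high rank."""
    delivery_ratio = forwarded / max(acked, 1)
    return 0.8 * delivery_ratio - 0.2 * (rank / 256.0)

candidates = {
    # parent_id: (packets acked by parent, packets confirmed forwarded, RPL rank)
    "P1": (100, 97, 512),    # healthy parent
    "P2": (100, 5, 256),     # drops almost everything: likely blackhole
}

best = max(candidates, key=lambda p: parent_score(*candidates[p]))
print(best)   # P1 is preferred despite its higher rank
```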
Contributors: Sanders, Kent (Author) / Yau, Stephen S (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2021