This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
With the advent of technologies such as web services, service-oriented architecture, and cloud computing, modern organizations have to deal with policies such as firewall policies to secure their networks and XACML (eXtensible Access Control Markup Language) policies for controlling access to critical information and resources. Management of these policies is an extremely important task for avoiding unintended security leakages via illegal accesses while maintaining proper access to services for legitimate users. Managing and maintaining access control policies manually over a long period of time is an error-prone task due to their inherently complex nature. Existing tools and mechanisms for policy management use different approaches for different types of policies. This thesis presents a generic framework that provides a unified approach for the analysis and management of different types of policies. The generic approach captures the common semantics and structure of different access control policies with the notion of a policy ontology. This policy ontology representation is then utilized for effectively analyzing and managing the policies. The thesis also discusses a proof-of-concept implementation of the proposed generic framework and demonstrates how efficiently this unified approach can be used for the analysis and management of different types of access control policies.
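
To make the policy ontology idea concrete, the following minimal Python sketch shows one shared rule vocabulary that both a firewall rule and an XACML-style rule can be mapped onto, plus a naive cross-policy conflict check. The class shape, attribute sets, and conflict test are illustrative assumptions, not the thesis's actual ontology or analysis algorithms.

```python
from dataclasses import dataclass
from enum import Enum

class Effect(Enum):
    PERMIT = "permit"
    DENY = "deny"

@dataclass(frozen=True)
class Rule:
    """One rule in a shared vocabulary: a firewall rule (subject = source
    addresses, action = protocols, resource = destination:port) and an
    XACML rule (subject = roles, action = operations, resource = objects)
    both map onto this shape."""
    subject: frozenset
    action: frozenset
    resource: frozenset
    effect: Effect

def overlaps(a: Rule, b: Rule) -> bool:
    """Two rules overlap when some request could match both."""
    return bool(a.subject & b.subject and a.action & b.action
                and a.resource & b.resource)

def find_conflicts(rules):
    """Pairs of rules that overlap but disagree on effect."""
    return [(i, j)
            for i, r in enumerate(rules)
            for j, s in enumerate(rules[i + 1:], start=i + 1)
            if overlaps(r, s) and r.effect != s.effect]

deny_db = Rule(frozenset({"10.0.0.0/24"}), frozenset({"tcp"}),
               frozenset({"db:5432"}), Effect.DENY)
permit_db = Rule(frozenset({"10.0.0.0/24"}), frozenset({"tcp"}),
                 frozenset({"db:5432"}), Effect.PERMIT)
print(find_conflicts([deny_db, permit_db]))  # -> [(0, 1)]
```

Because both policy types share one representation, the same conflict check serves both, which is the payoff a unified ontology aims for.
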
Contributors: Kulkarni, Ketan (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This research describes software-based remote attestation schemes for obtaining the integrity of an executing user application and the Operating System (OS) text section of an untrusted client platform. A trusted external entity issues a challenge to the client platform. The challenge is executable code which the client must execute, and the code generates results which are sent to the external entity. These results provide the external entity with assurance as to whether the client application and the OS are in pristine condition. This work also presents a technique to verify that the application which was attested did not get replaced by a different application after completion of the attestation. The implementation of these three techniques was achieved entirely in software and is backward compatible with legacy machines on the Intel x86 architecture. This research also presents two approaches to incorporating a software-based "root of trust" using Virtual Machine Monitors (VMMs). The first approach determines the integrity of an executing Guest OS from the Host OS using the Linux Kernel-based Virtual Machine (KVM) and qemu emulation software. The second approach implements a small VMM called MIvmm that can be utilized as a trusted codebase to build security applications such as those implemented in this research. MIvmm was conceptualized and implemented without using any existing codebase; its minimal size allows it to be trustworthy. Both VMM approaches leverage processor support for virtualization in the Intel x86 architecture.
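
The challenge-response pattern behind such attestation can be sketched in a few lines of Python: the verifier issues a fresh nonce, the client hashes the measured code region mixed with that nonce, and the verifier recomputes over a known-good copy. This is a simplified illustration under assumed details (a plain SHA-256 measurement of a byte string standing in for a text section); the actual schemes ship executable challenge code and stronger anti-replay machinery.

```python
import hashlib
import os

def attest(code_bytes: bytes, nonce: bytes) -> bytes:
    """Client side: measure the code region mixed with the fresh nonce,
    so a cached digest cannot simply be replayed."""
    return hashlib.sha256(nonce + code_bytes).digest()

def verify(reported: bytes, pristine: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute over a known-good copy and compare."""
    return reported == hashlib.sha256(nonce + pristine).digest()

pristine = b"\x55\x48\x89\xe5\x5d\xc3"  # stand-in bytes for a code section
nonce = os.urandom(16)                  # fresh nonce per challenge

print(verify(attest(pristine, nonce), pristine, nonce))          # True
print(verify(attest(b"tampered code", nonce), pristine, nonce))  # False
```

A real deployment must also defend the measurement code itself (for example, via self-checksumming and timing checks), which is where the schemes described above go well beyond this sketch.
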
Contributors: Srinivasan, Raghunathan (Author) / Dasgupta, Partha (Thesis advisor) / Colbourn, Charles (Committee member) / Shrivastava, Aviral (Committee member) / Huang, Dijiang (Committee member) / Dewan, Prashant (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In order to catch the smartest criminals in the world, digital forensics examiners need a means of collaborating and sharing information with each other and with outside experts that is not prohibitively difficult. However, standard operating procedures and the rules of evidence generally disallow the use of the collaboration software and techniques that are currently available, because they do not fully adhere to the dictated procedures for the handling, analysis, and disclosure of items relating to cases. The aim of this work is to conceive and design a framework that provides a completely new architecture that 1) can perform fundamental functions that are common and necessary to forensic analyses, and 2) is structured such that it is possible to include collaboration-facilitating components without changing the way users interact with the system sans collaboration. This framework is called the Collaborative Forensic Framework (CUFF). CUFF is constructed from four main components: Cuff Link, Storage, Web Interface, and Analysis Block. With the Cuff Link acting as a mediator between components, CUFF is flexible in both the method of deployment and the technologies used in implementation. The details of a realization of CUFF are given, which uses a combination of Java, the Google Web Toolkit, Django with Apache for a RESTful web service, and an Ubuntu Enterprise Cloud using Eucalyptus. The functionality of CUFF's components is demonstrated by the integration of an acquisition script designed for Android OS-based mobile devices that use the YAFFS2 file system. While this work has obvious application to examination labs which work under the mandate of judicial or investigative bodies, security officers at any organization would benefit from the improved ability to cooperate in electronic discovery efforts and internal investigations.
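
The mediator role the abstract assigns to the Cuff Link can be illustrated with a toy Python sketch: components register with a central link and exchange messages only through it, keeping storage, interface, and analysis blocks decoupled. The component names and message shape here are invented for illustration; CUFF's real implementation is the Java/GWT/Django stack described above.

```python
class CuffLink:
    """Toy mediator: components never talk to each other directly,
    so any component can be swapped without touching the others."""

    def __init__(self):
        self._handlers = {}

    def register(self, component: str, handler):
        self._handlers[component] = handler

    def send(self, target: str, message: dict):
        if target not in self._handlers:
            raise KeyError(f"unknown component: {target}")
        return self._handlers[target](message)

link = CuffLink()
link.register("storage", lambda m: {"stored": m["evidence_id"]})
# The analysis block persists results by messaging storage via the link.
link.register("analysis", lambda m: link.send("storage", m))
print(link.send("analysis", {"evidence_id": "case42-img001"}))
```
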
Contributors: Mabey, Michael Kent (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The energy consumption of data centers is increasing steadily along with the associated power density. Approximately half of such energy consumption is attributed to cooling energy, as a result of which reducing cooling energy along with reducing servers' energy consumption in data centers is becoming imperative so as to achieve greening of the data centers. This thesis deals with cooling energy management in data centers running data-processing frameworks. In particular, we propose thermal-aware scheduling for the MapReduce framework and its Hadoop implementation to reduce cooling energy in data centers. Data-processing frameworks run many low-priority batch processing jobs, such as background log analysis, that do not have strict completion time requirements; they can be delayed by a bounded amount of time. Cooling energy savings are possible by temporally spreading the workload and assigning it to the computing equipment that reduces heat recirculation in the data center room and therefore the load on the cooling systems. We implement our scheme in Hadoop and perform experiments using both CPU-intensive and I/O-intensive workload benchmarks in order to evaluate its efficiency. The evaluation results highlight that our thermal-aware scheduling reduces hot spots and makes a uniform temperature distribution within the data center possible. Summarizing the contribution: we incorporate thermal awareness into the Hadoop MapReduce framework by enhancing the native scheduler to make it thermally aware, compare the Thermal Aware Scheduler (TAS) with the Hadoop scheduler (FCFS) by running PageRank and TeraSort benchmarks in the BlueTool data center of the Impact lab, and show that TAS yields a reduction in peak temperature and a decrease in cooling power relative to the FCFS scheduler.
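
The scheduling idea, placing deferrable work where it contributes least to heat recirculation, can be sketched as a greedy placement rule in Python. The recirculation matrix, supply temperature, and cost model below are assumed toy values, not the thesis's TAS implementation.

```python
def peak_inlet(loads, recirc, supply_temp=18.0):
    """Hottest server inlet: cooled supply air plus the heat each
    server's load recirculates into every inlet. recirc[i][j] is the
    (assumed) fraction of server i's heat reaching server j's inlet."""
    n = len(loads)
    return max(supply_temp + sum(recirc[i][j] * loads[i] for i in range(n))
               for j in range(n))

def place(job_heat, loads, recirc):
    """Try the job on each server; keep the placement whose hot spot
    stays coolest. Returns the chosen server index."""
    best, best_peak = None, float("inf")
    for k in range(len(loads)):
        trial = list(loads)
        trial[k] += job_heat
        p = peak_inlet(trial, recirc)
        if p < best_peak:
            best, best_peak = k, p
    loads[best] += job_heat
    return best

# Server 0 recirculates more of its heat into the room than server 1.
recirc = [[0.08, 0.06], [0.02, 0.03]]
loads = [0.0, 0.0]
print(place(100.0, loads, recirc))  # -> 1, the thermally cheaper server
```

A cooler peak inlet temperature lets the cooling system run at a higher setpoint, which is where the cooling-power savings come from.
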
Contributors: Kole, Sayan (Author) / Gupta, Sandeep (Thesis advisor) / Huang, Dijiang (Committee member) / Varsamopoulos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The adoption of the Service Oriented Architecture (SOA) as the foundation for developing a new generation of software systems, known as Service Based Software Systems (SBS), poses new challenges in system design. While simulation as a methodology serves a principal role in design, there is a growing recognition that simulation of SBS requires modeling capabilities beyond those developed for traditional distributed software systems. In particular, while different component-based modeling approaches may lend themselves to simulating the logical process flows in Service Oriented Computing (SOC) systems, they are inadequate in terms of supporting SOA-compliant modeling. Furthermore, composite services must satisfy multiple QoS attributes under constrained service reconfigurations and hardware resources. A key desired capability, therefore, is to model and simulate not only the services consistent with SOA concepts and principles, but also the hardware and network components on which the services must execute. In this dissertation, SOC-DEVS, a novel co-design modeling methodology that enables simulation of the software and hardware aspects of SBS for early architectural design evaluation, is developed. A set of abstractions representing important service characteristics and service relationships are modeled. The proposed software/hardware co-design simulation capability is introduced into the DEVS-Suite simulator. Exemplar simulation models of a communication-intensive Voice Communication System and a computation-intensive Encryption System are developed and then validated using data from an existing real system. The applicability of the SOC-DEVS methodology is demonstrated in a simulation testbed aimed at facilitating the design and development of SBS. Furthermore, the simulation testbed is extended by integrating an existing prototype monitoring and adaptation system with the simulator to support basic experimentation towards the design and development of adaptive SBS.
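
As rough intuition for why hardware must be co-simulated with services, the toy event-driven sketch below (plain Python, not DEVS or DEVS-Suite) shows the same service request stream meeting different latency QoS depending on the modeled CPU resources. The arrival times, service time, and server model are assumptions for illustration only.

```python
import heapq

def simulate(arrivals, service_time, cpus=1):
    """Minimal event-driven core: each request grabs the earliest-free
    CPU; returns per-request latencies (finish - arrival)."""
    free = [0.0] * cpus          # next-free time per modeled CPU
    heapq.heapify(free)
    latencies = []
    for t in arrivals:
        start = max(t, heapq.heappop(free))
        finish = start + service_time
        heapq.heappush(free, finish)
        latencies.append(finish - t)
    return latencies

reqs = [0.0, 0.1, 0.2, 0.3]                      # arrival times (s)
print(simulate(reqs, service_time=0.25, cpus=1))  # queueing inflates latency
print(simulate(reqs, service_time=0.25, cpus=2))  # more hardware, better QoS
```

The same service logic yields different QoS outcomes purely from the hardware model, which is the co-design question SOC-DEVS is built to answer early in design.
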
Contributors: Muqsith, Mohammed Abdul (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Tsai, Wei-Tek (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in SBS. Current SBS pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there are risks of unauthorized use of the users' sensitive data by service providers. Although there are many techniques for protecting users' data from outside attackers, currently there is no effective way to protect users' sensitive data from service providers. In this dissertation, an approach is presented to protect the confidentiality of users' data from service providers and to ensure that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding the information of the owners of data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Since the approach to protecting users' data confidentiality includes software module decomposition and distributed execution, it is very important to effectively allocate the resources of servers in SBS to each of the software modules to manage the overall performance of workflows in SBS. An approach to resource allocation for SBS is presented that adaptively allocates the system resources of servers to their software modules at runtime in order to satisfy the performance requirements of multiple workflows in SBS. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.
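
Of the four features, data obfuscation is the easiest to illustrate. The Python sketch below assumes a toy affine obfuscation that lets a provider compute a sum without seeing the real values; the dissertation's actual obfuscation techniques are not specified here and are certainly more involved.

```python
import random

class AffineObfuscator:
    """Client-held secret transform: the provider only ever sees
    a*x + b, yet sums computed remotely can still be recovered."""

    def __init__(self):
        self.a = random.randint(2, 10**6)   # secret scale
        self.b = random.randint(1, 10**6)   # secret offset

    def hide(self, x: int) -> int:
        return self.a * x + self.b

    def unhide_sum(self, hidden_sum: int, n: int) -> int:
        """Recover sum(xs) from the sum of n hidden values."""
        return (hidden_sum - n * self.b) // self.a

obf = AffineObfuscator()
data = [12, 7, 30]
hidden = [obf.hide(x) for x in data]     # what the provider receives
provider_sum = sum(hidden)               # computed remotely, in the blind
print(obf.unhide_sum(provider_sum, len(data)))  # -> 49
```
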
Contributors: An, Ho Geun (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Ahn, Gail-Joon (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In modern healthcare environments, there is a strong need to create an infrastructure that reduces the time-consuming efforts and costly operations needed to obtain a patient's complete medical record and that uniformly integrates this heterogeneous collection of medical data to deliver it to healthcare professionals. As a result, healthcare providers are more willing to shift their electronic medical record (EMR) systems to clouds that can remove the geographical distance barriers among providers and patients. Even though cloud-based EMRs have received considerable attention, since they would help achieve lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an extremely important prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives in a way that satisfies access control and compliance requirements in healthcare cloud computing environments. We present a secure medical data sharing framework to support selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also ensures that privacy concerns are accommodated when processing access requests to patients' healthcare information. To realize our proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results to demonstrate the effectiveness and efficiency of our approach.
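
The selective-sharing idea can be sketched as filtering a composite record through each originating provider's policy. The provider names, roles, and policy shape below are invented for illustration and are not the framework's actual model.

```python
# A composite EHR aggregates entries from several providers.
ehr = [
    {"provider": "ClinicA", "type": "lab",         "data": "CBC panel"},
    {"provider": "ClinicA", "type": "psychiatric", "data": "session notes"},
    {"provider": "HospB",   "type": "imaging",     "data": "MRI study"},
]

# provider -> role -> record types that provider permits to release
policies = {
    "ClinicA": {"physician": {"lab", "psychiatric"}, "billing": {"lab"}},
    "HospB":   {"physician": {"imaging"},            "billing": set()},
}

def share(records, role):
    """Disclose only entries every originating provider allows for role."""
    return [r for r in records
            if r["type"] in policies.get(r["provider"], {}).get(role, set())]

print([r["data"] for r in share(ehr, "billing")])    # -> ['CBC panel']
print([r["data"] for r in share(ehr, "physician")])  # all three entries
```
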
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Today, many wireless networks are single-channel systems. However, as the interest in wireless services increases, the contention by nodes to occupy the medium is more intense and interference worsens. One direction with the potential to increase system throughput is multi-channel systems. Multi-channel systems have been shown to reduce collisions and increase concurrency, thus producing better bandwidth usage. However, the well-known hidden- and exposed-terminal problems inherited from single-channel systems remain, and a new channel selection problem is introduced. In this dissertation, multi-channel medium access control (MAC) protocols are proposed for mobile ad hoc networks (MANETs) with nodes equipped with a single half-duplex transceiver, using more sophisticated physical layer technologies. These include code division multiple access (CDMA), orthogonal frequency division multiple access (OFDMA), and diversity. CDMA increases channel reuse, while OFDMA enables communication by multiple users in parallel. There is a challenge to using each technology in MANETs, where there is no fixed infrastructure or centralized control. CDMA suffers from the near-far problem, while OFDMA requires channel synchronization to decode the signal. As a result, CDMA and OFDMA are not yet widely used. Cooperative (diversity) mechanisms provide vital information to facilitate communication set-up between source-destination node pairs and help overcome the limitations of physical layer technologies in MANETs. In this dissertation, the Cooperative CDMA-based Multi-channel MAC (CCM-MAC) protocol uses CDMA to enable concurrent transmissions on each channel. The Power-controlled CDMA-based Multi-channel MAC (PCC-MAC) protocol uses transmission power control at each node and mitigates collisions of control packets on the control channel by using different sizes of the spreading factor to obtain different processing gains for the control signals. The Cooperative Dual-access Multi-channel MAC (CDM-MAC) protocol combines the use of OFDMA and CDMA and minimizes channel interference by a resolvable balanced incomplete block design (BIBD). In each protocol, cooperating nodes supply information that helps reduce the incidence of the multi-channel hidden- and exposed-terminal problems and helps address the near-far problem of CDMA. Simulation results show that each of the proposed protocols achieves significantly better system performance when compared to IEEE 802.11, other multi-channel protocols, and another CDMA-based protocol.
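
The concurrency benefit the abstract attributes to CDMA comes from code orthogonality, which a few lines of Python can demonstrate: two overlapping transmissions remain separable at the receivers. The Walsh codes and additive signal model are textbook simplifications, not these protocols' actual physical layer.

```python
# Orthogonal Walsh codes of length 4 (their dot product is zero).
WALSH = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}

def spread(bit: int, code):
    """Map bit {0,1} to chip {-1,+1} and spread it over the code."""
    chip = 1 if bit else -1
    return [chip * c for c in code]

def despread(signal, code) -> bool:
    """Correlate against one sender's code to recover its bit."""
    return sum(s * c for s, c in zip(signal, code)) > 0

# Both transmissions overlap on the channel: the signals simply add.
on_air = [a + b for a, b in zip(spread(1, WALSH["A"]), spread(0, WALSH["B"]))]
print(despread(on_air, WALSH["A"]))  # True  (A sent 1)
print(despread(on_air, WALSH["B"]))  # False (B sent 0)
```

The near-far problem mentioned above appears when one sender's signal arrives much stronger than the other's, swamping the correlation; that is what power control and cooperating nodes are there to mitigate.
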
Contributors: Moon, Yuhan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Huang, Dijiang (Committee member) / Reisslein, Martin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Fraud is defined as the utilization of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue worldwide to financial crimes and fraud, which can significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction comes with a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants utilize various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Many proposed solutions focus mostly on fraud detection accuracy and ignore financial considerations; the highly effective manual review process is also overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints and (b) is cost and profit sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction. I show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM is able to work with many supervised learners and obtain convincing results, utilizing probability outputs directly from the trained model itself can pose problems, especially in deep learning, as softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has brought unintended consequences in many situations, such as the infamous case of Google Photos' racist image recognition algorithm, and has necessitated the utilization of quantified uncertainty for each prediction. There have been recent efforts towards quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification, called MIPSC, that investigates and combines model uncertainty and predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC), focus on the critical real-world problem of online fraud management, and show that my approach significantly outperforms industry standard methods for online fraud management in real-world settings.
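
The core of cost- and profit-sensitive selective classification, deciding automatically only when that is expected to be cheaper than a human review, can be sketched in Python. The cost model below (chargeback multiplier, margin, review cost) is an assumed toy, not PONRM or MIPCSC.

```python
def decide(p_fraud: float, amount: float,
           review_cost: float = 5.0, chargeback_mult: float = 1.5,
           margin: float = 0.1) -> str:
    """Return 'approve', 'decline', or 'review'.
    Assumed costs: approving fraud loses amount * chargeback_mult;
    declining a good order loses the sale's margin; a human review
    costs a flat fee but decides perfectly."""
    cost_approve = p_fraud * amount * chargeback_mult
    cost_decline = (1 - p_fraud) * amount * margin
    if review_cost < min(cost_approve, cost_decline):
        return "review"          # both automatic options are riskier
    return "approve" if cost_approve <= cost_decline else "decline"

print(decide(p_fraud=0.02, amount=30))   # low stakes, low risk -> approve
print(decide(p_fraud=0.40, amount=500))  # costly either way    -> review
print(decide(p_fraud=0.95, amount=20))   # near-certain fraud   -> decline
```

Note that the decision hinges on the quality of p_fraud, which is exactly why the dissertation argues for calibrated, quantified uncertainty rather than raw softmax outputs.
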
Contributors: Yildirim, Mehmet Yigit (Author) / Davulcu, Hasan (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Huang, Dijiang (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Realizing the applications of the Internet of Things (IoT), with the goal of achieving a more efficient and automated world, requires billions of connected smart devices and the minimization of hardware cost in these devices. As a result, many IoT devices do not have sufficient resources to support the various protocols required in many IoT applications. Because of this, new protocols have been introduced to support the integration of these devices. One of these protocols is the increasingly popular routing protocol for low-power and lossy networks (RPL). However, this protocol is well known to attract blackhole and sinkhole attacks, and the limited resources of RPL devices cause serious difficulties when using more computationally intensive protection techniques, such as intrusion detection systems and rank authentication schemes. In this thesis, an effective approach is presented to protect RPL networks against blackhole attacks. The approach does not address sinkhole attacks because they cause little damage, are often used alongside blackhole attacks, and can be detected when blackhole attacks are detected. The approach uses RPL's feature of multiple parents per node together with a parent evaluation system, enabling nodes to select more reliable routes. Simulations have been conducted; compared to existing approaches, this approach provides better protection against blackhole attacks with much lower overhead for small RPL networks.
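
The multi-parent evaluation idea can be sketched as a per-parent reliability score: a blackhole silently drops traffic, so its observed delivery ratio collapses and honest parents win route selection. The scoring rule below is an illustrative assumption, not the thesis's exact evaluation system.

```python
class ParentTable:
    """Track delivery outcomes per candidate parent and prefer the
    parent with the best observed reliability."""

    def __init__(self, parents):
        self.stats = {p: {"sent": 0, "acked": 0} for p in parents}

    def record(self, parent: str, acked: bool):
        self.stats[parent]["sent"] += 1
        self.stats[parent]["acked"] += int(acked)

    def best(self) -> str:
        def score(p):
            s = self.stats[p]
            # Laplace-smoothed delivery ratio, so untried parents
            # start neutral instead of at zero.
            return (s["acked"] + 1) / (s["sent"] + 2)
        return max(self.stats, key=score)

node = ParentTable(["P1", "P2"])
for _ in range(10):
    node.record("P1", acked=True)    # honest parent forwards traffic
    node.record("P2", acked=False)   # blackhole drops everything
print(node.best())  # -> 'P1'
```

Because the node only observes outcomes of traffic it already sends, this kind of evaluation adds little overhead, which matches the low-overhead claim above.
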
Contributors: Sanders, Kent (Author) / Yau, Stephen S. (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2021