This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to being available in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description
In modern healthcare environments, there is a strong need to create an infrastructure that reduces the time-consuming efforts and costly operations required to obtain a patient's complete medical record, and that uniformly integrates this heterogeneous collection of medical data to deliver it to healthcare professionals. As a result, healthcare providers are more willing to shift their electronic medical record (EMR) systems to clouds, which can remove the geographical distance barriers among providers and patients. Even though cloud-based EMRs have received considerable attention, since they would help achieve lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an extremely important prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing of such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives, in a way that satisfies access control and compliance requirements in healthcare cloud computing environments. We present a secure medical data sharing framework to support selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also ensures that privacy concerns are accommodated when processing access requests to patients' healthcare information. To realize our proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results to demonstrate the effectiveness and efficiency of our approach.
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
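
To make the selective-sharing idea concrete, the following is a minimal Python sketch, not the dissertation's actual design: the attribute-based policy shape, the field names, and the rule that each provider's policy is checked independently are all illustrative assumptions.

    # A toy composite-EHR filter: each provider attaches an access policy to
    # the record fragment it contributes, and a request sees only the
    # fragments whose policies permit it.
    def permits(policy, request):
        # A policy maps an attribute to the set of values it allows.
        return all(request.get(attr) in allowed for attr, allowed in policy.items())

    def selective_share(fragments, request):
        # fragments: (provider, policy, data) triples from multiple providers.
        return [data for _provider, policy, data in fragments
                if permits(policy, request)]

    fragments = [
        ("hospital_a", {"role": {"physician"}, "purpose": {"treatment"}}, "lab results"),
        ("clinic_b", {"role": {"physician", "nurse"}}, "medication list"),
        ("lab_c", {"role": {"researcher"}}, "genomic data"),
    ]
    request = {"role": "physician", "purpose": "treatment"}
    print(selective_share(fragments, request))  # ['lab results', 'medication list']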
Description
Cognitive Radios (CRs) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize the bandwidth efficiently. With a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play a more important role because of the decentralized nature of the network; they have a direct impact on the throughput. Due to the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, ns-2, the network simulator, is used with an extension for CRAHNs. The CRAHN extension module implements the required Primary User (PU), Secondary User (SU), and other CR functionalities to simulate a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to the throughput, to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. Instantaneous throughput of the network is introduced in the new model, which helps determine how the parameters should adapt based on the current throughput. Along with instantaneous throughput, checks are made for interference with the PUs and their transmission power before modifying these CR parameters. Simulation results demonstrate that the throughput of a CRAHN with adaptive sensing and transmission times is significantly higher than that with non-adaptive parameters.
Contributors: Bapat, Namrata Arun (Author) / Syrotiuk, Violet R. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
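
The adaptive idea can be illustrated with a small control loop in Python. The step sizes, bounds, and the exact back-off rule below are illustrative assumptions, not the thesis's actual update mechanism:

    # Toy controller: shorten sensing and lengthen transmission while the
    # instantaneous throughput improves and no PU activity is detected;
    # back off in the opposite direction otherwise.
    def adapt(sensing, transmission, throughput, prev_throughput, pu_detected,
              step=0.005, t_min=0.01, t_max=0.5):
        if pu_detected or throughput < prev_throughput:
            sensing = min(t_max, sensing + step)            # sense more
            transmission = max(t_min, transmission - step)
        else:
            sensing = max(t_min, sensing - step)            # exploit the channel
            transmission = min(t_max, transmission + step)
        return sensing, transmission

    s, t, prev = 0.1, 0.2, 0.0
    for throughput, pu in [(1.2, False), (1.5, False), (0.9, True)]:
        s, t = adapt(s, t, throughput, prev, pu)
        prev = throughput
        print(f"sensing={s:.3f}s transmission={t:.3f}s")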
Description
Access control is one of the most fundamental security mechanisms used in the design and management of modern information systems. However, there still exists an open question of how formal access control models can be automatically analyzed and fully realized in secure system development. Furthermore, specifying and managing access control policies is often error-prone due to the lack of effective analysis mechanisms and tools. In this dissertation, I present an Assurance Management Framework (AMF) that is designed to cope with various assurance management requirements from both access control system development and policy-based computing. On one hand, the AMF framework facilitates comprehensive analysis and thorough realization of formal access control models in secure system development. I demonstrate how this method can be applied to build role-based access control systems by adopting the NIST/ANSI RBAC standard as the underlying security model. On the other hand, the AMF framework ensures the correctness of access control policies in policy-based computing through automated reasoning techniques and anomaly management mechanisms. A systematic method is presented to formulate XACML in Answer Set Programming (ASP), which allows users to leverage off-the-shelf ASP solvers for a variety of analysis services. In addition, I introduce a novel anomaly management mechanism, along with a grid-based visualization approach, which enables systematic and effective detection and resolution of policy anomalies. I further evaluate the AMF framework by modeling and analyzing multiparty access control in Online Social Networks (OSNs). A MultiParty Access Control (MPAC) model is formulated to capture the essence of multiparty authorization requirements in OSNs. In particular, I show how AMF can be applied to OSNs for identifying and resolving privacy conflicts, and for representing and reasoning about the MPAC model and policies. To demonstrate the feasibility of the proposed methodology, a suite of proof-of-concept prototype systems is implemented as well.
Contributors: Hu, Hongxin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Dasgupta, Partha (Committee member) / Ye, Nong (Committee member) / Arizona State University (Publisher)
Created: 2012
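
The anomaly-management idea can be sketched in a few lines of Python. The flat rule representation and the simple pairwise overlap test below are simplifications assumed for illustration; the dissertation's grid-based approach is more sophisticated:

    # Toy policy anomaly check: two rules are anomalous if their subject,
    # resource, and action sets all overlap; differing effects make the
    # anomaly a conflict, matching effects make it a redundancy.
    def overlaps(r1, r2):
        return all(r1[k] & r2[k] for k in ("subjects", "resources", "actions"))

    def find_anomalies(rules):
        anomalies = []
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                if overlaps(rules[i], rules[j]):
                    kind = ("conflict" if rules[i]["effect"] != rules[j]["effect"]
                            else "redundancy")
                    anomalies.append((rules[i]["id"], rules[j]["id"], kind))
        return anomalies

    rules = [
        {"id": "r1", "subjects": {"doctor"}, "resources": {"record"},
         "actions": {"read"}, "effect": "permit"},
        {"id": "r2", "subjects": {"doctor", "nurse"}, "resources": {"record"},
         "actions": {"read"}, "effect": "deny"},
    ]
    print(find_anomalies(rules))  # [('r1', 'r2', 'conflict')]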
Description
In order to catch the smartest criminals in the world, digital forensics examiners need a means of collaborating and sharing information with each other and outside experts that is not prohibitively difficult. However, standard operating procedures and the rules of evidence generally disallow the use of the collaboration software and techniques that are currently available because they do not fully adhere to the dictated procedures for the handling, analysis, and disclosure of items relating to cases. The aim of this work is to conceive and design a framework that provides a completely new architecture that 1) can perform fundamental functions that are common and necessary to forensic analyses, and 2) is structured such that it is possible to include collaboration-facilitating components without changing the way users interact with the system sans collaboration. This framework is called the Collaborative Forensic Framework (CUFF). CUFF is constructed from four main components: Cuff Link, Storage, Web Interface, and Analysis Block. With the Cuff Link acting as a mediator between components, CUFF is flexible in both the method of deployment and the technologies used in implementation. The details of a realization of CUFF are given, which uses a combination of Java, the Google Web Toolkit, Django with Apache for a RESTful web service, and an Ubuntu Enterprise Cloud using Eucalyptus. The functionality of CUFF's components is demonstrated by the integration of an acquisition script designed for Android OS-based mobile devices that use the YAFFS2 file system. While this work has obvious application to examination labs which work under the mandate of judicial or investigative bodies, security officers at any organization would benefit from the improved ability to cooperate in electronic discovery efforts and internal investigations.
Contributors: Mabey, Michael Kent (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
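
The mediator role of the Cuff Link can be suggested with a minimal Python sketch. The registration API and message shapes below are hypothetical; only the component names come from the abstract:

    # Toy mediator: components register handlers with the link, and all
    # traffic flows through one dispatch point, so a component such as
    # Storage can be swapped out without the others noticing.
    class CuffLink:
        def __init__(self):
            self.handlers = {}

        def register(self, component, handler):
            self.handlers[component] = handler

        def send(self, target, message):
            return self.handlers[target](message)

    link = CuffLink()
    store = {}
    link.register("storage", lambda m: store.update({m["id"]: m["data"]}) or "stored")
    link.register("analysis", lambda m: f"analyzed {len(store)} item(s)")

    print(link.send("storage", {"id": "evidence-1", "data": b"\x00\x01"}))
    print(link.send("analysis", {}))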
Description
Cloud computing is known as a new and powerful computing paradigm. This new generation of network computing delivers both software and hardware as on-demand resources and various services over the Internet. However, security concerns prevent users from adopting cloud-based solutions to fulfill IT requirements for much business-critical computing. Due to the resource-sharing and multi-tenant nature of cloud-based solutions, cloud security is the foremost concern in the Infrastructure as a Service (IaaS) model, and it has attracted considerable research and development effort in the past few years.

Virtualization is the main technology by which cloud computing enables multi-tenancy. Computing power, storage, and network are all virtualizable and can be shared in an IaaS system. This important technology makes abstract infrastructure and resources available to users as isolated virtual machines (VMs) and virtual networks (VNs). However, it also increases the vulnerabilities and possible attack surfaces in the system, since all users in a cloud share these resources with others, possibly including attackers. A robust protection mechanism is required to ensure strong isolation, mediated sharing, and secure communication between VMs. Technologies for detecting anomalous traffic and protecting normal traffic in VNs are also needed. Therefore, how to secure and protect the private traffic in VNs and how to prevent malicious traffic from shared resources are major security research challenges in a cloud system.

This dissertation proposes four novel frameworks to address the challenges mentioned above. The first work is a new multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism based on the attack-graph analytical model. The second work is a hybrid intrusion detection and prevention system that protects VNs and VMs using virtual machine introspection (VMI) and software-defined networking (SDN) technologies. The third work further improves on the previous works by introducing a VM profiler and a VM Security Index (VSI) to keep track of the security status of each VM and suggest the optimal countermeasure to mitigate potential threats. The final work is an SDN-based proactive defense mechanism for a cloud system that uses a reconfiguration model and moving target defense approaches to actively and dynamically change the virtual network configuration of a cloud system.
Contributors: Chung, Chun-Jen (Author) / Huang, Dijiang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Xue, Guoliang (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015
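
The VM Security Index idea can be illustrated with a short Python sketch. The weights, the score scale, and the assumption that a countermeasure zeroes out the signals it mitigates are all illustrative simplifications, not the dissertation's actual model:

    # Toy VSI: a weighted sum of per-VM risk signals in [0, 1]; then pick
    # the cheapest countermeasure that brings the index under a threshold.
    WEIGHTS = {"vulnerability": 0.5, "exposure": 0.3, "traffic_anomaly": 0.2}

    def vsi(signals):
        return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

    def select_countermeasure(signals, options, threshold=0.4):
        # options: (name, cost, mitigated_signals) tuples.
        viable = []
        for name, cost, mitigates in options:
            residual = {k: (0.0 if k in mitigates else v)
                        for k, v in signals.items()}
            if vsi(residual) < threshold:
                viable.append((cost, name))
        return min(viable)[1] if viable else None

    signals = {"vulnerability": 0.9, "exposure": 0.6, "traffic_anomaly": 0.2}
    options = [("quarantine_vm", 10, {"exposure", "traffic_anomaly"}),
               ("patch_vm", 3, {"vulnerability"}),
               ("rate_limit", 1, {"traffic_anomaly"})]
    print(round(vsi(signals), 2), select_countermeasure(signals, options))  # 0.67 patch_vm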
Description
Due to the shortcomings of modern Mobile Device Management solutions, businesses have begun to incorporate forensics to analyze their mobile devices and respond to any incidents of malicious activity in order to protect their sensitive data. Current forensic tools, however, can only look at a static image of the device being examined, making it difficult for a forensic analyst to produce conclusive results regarding the integrity of any sensitive data on the device. This research thesis expands on the use of forensics to secure data by implementing an agent on a mobile device that can continually collect information regarding the state of the device. This information is then sent to a separate server in the form of log files to be analyzed using a specialized tool. The analysis tool is able to look at the data collected from the device over time and perform specific calculations, according to the user's specifications, highlighting any correlations or anomalies among the data which might be considered suspicious to a forensic analyst. The contribution of this paper is both an in-depth explanation of the implementation of an iOS application to be used to improve the mobile forensics process as well as a proof-of-concept experiment showing how evidence collected over time can be used to improve the accuracy of a forensic analysis.
Contributors: Whitaker, Jeremy (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Yau, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2015
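
A minimal version of the over-time analysis might look like the Python sketch below. The metric name, the z-score test, and the thresholds are assumptions for illustration; they are not the thesis's actual calculations:

    # Flag log entries whose value sits far from that metric's running
    # mean, measured in standard deviations (a simple z-score test).
    from collections import defaultdict
    from statistics import mean, pstdev

    def flag_anomalies(entries, threshold=2.0):
        history = defaultdict(list)
        flagged = []
        for ts, metric, value in entries:        # entries are time-ordered
            past = history[metric]
            if len(past) >= 3 and pstdev(past) > 0:
                z = abs(value - mean(past)) / pstdev(past)
                if z > threshold:
                    flagged.append((ts, metric, value, round(z, 1)))
            past.append(value)
        return flagged

    log = [(ts, "outbound_bytes", v)
           for ts, v in enumerate([120, 130, 125, 128, 122, 5000])]
    print(flag_anomalies(log))  # the 5000-byte spike is flagged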
Description
The Web is one of the most exciting and dynamic areas of development in today’s technology. However, with such activity, innovation, and ubiquity have come a set of new challenges for digital forensic examiners, making their jobs even more difficult. For examiners to become as effective with evidence from the Web as they currently are with more traditional evidence, they need (1) methods that guide them to know how to approach this new type of evidence and (2) tools that accommodate web environments’ unique characteristics.

In this dissertation, I present my research to alleviate the difficulties forensic examiners currently face with respect to evidence originating from web environments. First, I introduce a framework for web environment forensics, which elaborates on and addresses the key challenges examiners face and outlines a method for how to approach web-based evidence. Next, I describe my work to identify extensions installed on encrypted web thin clients using only a sound understanding of these systems’ inner workings and the metadata of the encrypted files. Finally, I discuss my approach to reconstructing the timeline of events on encrypted web thin clients by using service provider APIs as a proxy for directly analyzing the device. In each of these research areas, I also introduce structured formats that I customized to accommodate the unique features of the evidence sources while also facilitating tool interoperability and information sharing.
Contributors: Mabey, Michael Kent (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
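
The metadata-only identification idea can be hinted at with a Python sketch. It assumes, purely for illustration, that encryption roughly preserves file sizes and that an extension is recognizable by the multiset of sizes it installs; the fingerprints below are made up:

    # Toy fingerprint match: identify an extension from the approximate
    # file sizes visible in the encrypted files' metadata.
    def match_extension(observed_sizes, fingerprints, tolerance=16):
        def close(a, b):
            return abs(a - b) <= tolerance       # padding blurs exact sizes
        best, best_hits = None, 0
        for name, sizes in fingerprints.items():
            pool = list(observed_sizes)
            hits = 0
            for size in sizes:
                found = next((p for p in pool if close(p, size)), None)
                if found is not None:
                    pool.remove(found)
                    hits += 1
            if hits == len(sizes) and hits > best_hits:
                best, best_hits = name, hits
        return best

    fingerprints = {"ext_alpha": [8432, 1290, 77312], "ext_beta": [8432, 512]}
    print(match_extension([77300, 1295, 8440, 2048], fingerprints))  # ext_alpha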
Description
Hardware-Assisted Security (HAS) is an emerging technology that addresses the shortcomings of software-based virtualized environments. There are two major weaknesses of software-based virtualization that HAS attempts to address: performance overhead and security issues. The performance overhead caused by software-based virtualization is due to the use of an additional software layer (i.e., the hypervisor). Since performance is highly related to the efficiency of processing data and providing services, reducing performance overhead is one of the major concerns in data centers and enterprise networks. Software-based virtualization also introduces additional security issues in the virtualized environments. To resolve those issues, HAS was developed to offload security functions from the application layer to dedicated hardware, thereby achieving almost bare-metal performance and enhanced security. As a result, HAS has gained popularity, and the number of studies regarding the efficiency of the technology is increasing.

However, to our knowledge, no existing work provides a generic test mechanism that is universally applicable to all HAS devices. Preparing such a testbed for each specific HAS device is a time-consuming and costly task for hardware manufacturers and network administrators. Therefore, we address the demands of hardware vendors and researchers for a generic testbed that can evaluate both the performance and the security functions of HAS-enabled systems.

In this thesis, the HAS device evaluation framework (HEF) is defined for hardware vendors, network administrators, and researchers to measure the performance of systems with HAS devices. HEF provides a generic test environment for a given HAS device by providing generic test metrics and evaluation mechanisms. HEF is also designed to take user-defined test metrics and test cases to support various hardware. The framework performs the entire process in an automated fashion, and thus requires no user intervention. Finally, the efficacy of HEF is demonstrated by performing a case study using the Intel QuickAssist Technology (QAT) adapter, a dedicated PCI Express device for cryptographic tasks.
Contributors: Kyung, Sukwha (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
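
The shape of such a harness can be suggested in Python. The callable-based test-case API and the stand-in workloads below are assumptions; they are not HEF's actual interface:

    # Toy metric runner: user-defined test cases are plain callables; the
    # harness times each one and reports the best-of-n wall time.
    import hashlib
    import time

    def run_suite(test_cases, repetitions=3):
        results = {}
        for name, fn in test_cases.items():
            samples = []
            for _ in range(repetitions):
                start = time.perf_counter()
                fn()
                samples.append(time.perf_counter() - start)
            results[name] = min(samples)
        return results

    payload = b"x" * 1_000_000
    cases = {
        "sha256_software": lambda: hashlib.sha256(payload).digest(),
        "sha256_offload_stub": lambda: payload[:32],  # placeholder for a HAS path
    }
    for name, seconds in run_suite(cases).items():
        print(f"{name}: {seconds * 1e3:.3f} ms")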
Description
Web applications remain the most popular method of interaction for businesses over the Internet. With their simplicity of use and management, they often function as the "front door" for many companies. As such, they are a critical component of the security ecosystem, as vulnerabilities present in these systems could potentially allow malicious users access to sensitive business and personal data.

The inherent nature of web applications enables anyone to access them anytime and anywhere; this includes any malicious actors looking to exploit vulnerabilities present in the web application. In addition, the static configuration of these web applications gives attackers the opportunity to perform reconnaissance at their leisure, increasing their success rate by allowing them time to discover information about the system. On the other hand, defenders are often at a disadvantage, as they do not have the same temporal opportunity that attackers possess to perform counter-reconnaissance. Lastly, the unchanging nature of web applications allows undiscovered vulnerabilities to remain open for exploitation, requiring developers either to adopt a reactive approach that is often delayed, or to anticipate and prepare for all possible attacks, which is often cost-prohibitive.

Moving Target Defense (MTD) seeks to remove the attackers' advantage by reducing the information asymmetry between the attacker and defender. This research explores the concept of MTD and the various methods of applying MTD to secure Web Applications. In particular, MTD concepts are applied to web applications by implementing an automated application diversifier that aims to mitigate specific classes of web application vulnerabilities and exploits. Evaluation is done using two open source web applications to determine the effectiveness of the MTD implementation. Though developed for the chosen applications, the automation process can be customized to fit a variety of applications.
Contributors: Taguinod, Marthony (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Thesis advisor) / Yau, Sik-Sang (Committee member) / Arizona State University (Publisher)
Created: 2018
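
One concrete flavor of such diversification can be sketched in Python. Renaming HTML form fields per deployment epoch is just one illustrative transform chosen for brevity; the secret, epoch, and naming scheme are hypothetical, not the thesis's actual implementation:

    # Toy moving-target transform: rotate form field names each deployment
    # epoch so a pre-scripted attack against last epoch's names misses.
    import hashlib
    import hmac
    import re

    def diversify(html, secret, epoch):
        def rename(match):
            original = match.group(1)
            tag = hmac.new(secret, f"{original}:{epoch}".encode(),
                           hashlib.sha256).hexdigest()[:8]
            return f'name="f_{tag}"'
        return re.sub(r'name="([^"]+)"', rename, html)

    form = '<input name="username"><input name="password">'
    print(diversify(form, b"server-secret", epoch=42))

The server, holding the same secret and epoch, recomputes the mapping for its known field names and translates submitted parameters back to the originals.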
Description
A Virtual Private Network (VPN) is the traditional approach for an end-to-end secure connection between two endpoints. Most existing VPN solutions are intended for wired networks with reliable connections. In a mobile environment, network connections are less reliable, and devices experience intermittent network disconnections due to either switching from one network to another or experiencing a gap in coverage while roaming. These disruptive events affect traditional VPN performance, resulting in possible termination of applications, data loss, and reduced productivity. Mobile VPNs bridge the gap between what users and applications expect from a wired network and the realities of mobile computing.

In this dissertation, MobiVPN was designed and developed by modifying the widely used OpenVPN to meet the requirements of a mobile VPN. The aim was for MobiVPN to be a reliable and efficient VPN for mobile environments. To achieve these objectives, MobiVPN introduces the following features: 1) Fast and lightweight VPN session resumption, where MobiVPN is able to decrease the time it takes to resume a VPN tunnel after a mobility event by an average of 97.19% compared to OpenVPN. 2) Persistence of the TCP sessions of tunneled applications, allowing them to survive VPN tunnel disruptions due to a gap in network coverage, no matter how long the gap is. MobiVPN also has mechanisms to suspend and resume TCP flows during and after a network disconnection, with a packet buffering option to maintain the TCP sending rate. MobiVPN was able to provide fast resumption of TCP flows after reconnection, with improved TCP performance when multiple disconnections occur: an average throughput increase of 30.08% in the experiments where buffering was used, and of 20.93% for flows that were not buffered. 3) A fine-grained, flow-based adaptive compression that allows MobiVPN to treat each tunneled flow independently, so that compression can be turned on for compressible flows and off for incompressible ones. The experiments showed that the flow-based adaptive compression outperformed OpenVPN's compression options in terms of effective throughput, data reduction, and number of compression operations.
Contributors: Alshalan, Abdullah O. (Author) / Huang, Dijiang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Doupe, Adam (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2017
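
The flow-based adaptive compression idea can be approximated with a Python sketch. Probing the first bytes of each flow and using a fixed ratio cutoff are illustrative assumptions, not MobiVPN's actual mechanism:

    # Toy per-flow compression gate: compress a sample of each new flow and
    # keep compression on only for flows whose sample actually shrinks.
    import os
    import zlib

    class FlowCompressor:
        def __init__(self, probe_bytes=512, min_ratio=0.9):
            self.probe_bytes = probe_bytes
            self.min_ratio = min_ratio
            self.decision = {}                   # flow_id -> compress?

        def send(self, flow_id, payload):
            if flow_id not in self.decision:
                sample = payload[:self.probe_bytes]
                ratio = len(zlib.compress(sample)) / max(1, len(sample))
                self.decision[flow_id] = ratio < self.min_ratio
            if self.decision[flow_id]:
                return b"C" + zlib.compress(payload)   # 1-byte marker + data
            return b"R" + payload                      # sent raw

    fc = FlowCompressor()
    text_flow = b"the same line again and again " * 40   # compressible
    random_flow = os.urandom(2048)                       # incompressible
    print(fc.send("flow-1", text_flow)[:1], fc.send("flow-2", random_flow)[:1])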