Matching Items (14)
Description
With the growth of IT products and sophisticated software across various operating systems, security risks in systems are increasing constantly. Consequently, security assessment is now considered one of the primary security mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent on developing security regulations that codify security best practices. Applying shared security standards to a system is critical for understanding its vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security. Hence, it is not straightforward to protect systems from being changed by unaware users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit risky software as well as commonly used software. In addition, checking the assurance of security configurations only periodically is disadvantageous in terms of time and cost because of zero-day attacks and timing attacks that can exploit the window between security checks. Therefore, an event-driven monitoring approach is critical for continuously assessing the security of a target system without ignoring the window between security checks, and for lessening the burden of the exhaustive task of inspecting the entire configuration of the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if that change relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires consistently applying standards to each environment. Such a uniform, consistent assessment is important because the assessment approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework to address the aforementioned issues. I also discuss the implementation details, which are based on commercial off-the-shelf technologies, and the testbed established to evaluate the approach. In addition, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
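The event-driven idea in this abstract can be pictured with a small sketch. The following Python snippet is a minimal illustration, not the thesis implementation: it reacts to a single configuration-change event by checking only the changed item against a table of hypothetical security-requirement checks, and records a vulnerability finding when the new value is non-compliant.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Finding:
    key: str
    value: str
    note: str

# Hypothetical security-requirement checks keyed by configuration item.
REQUIREMENTS: Dict[str, Callable[[str], bool]] = {
    "ssh.permit_root_login": lambda v: v == "no",
    "firewall.enabled": lambda v: v == "true",
}

def on_config_change(key: str, value: str, report: List[Finding]) -> None:
    """Handle one configuration-change event: check only the changed item."""
    check = REQUIREMENTS.get(key)
    if check is not None and not check(value):
        report.append(Finding(key, value, "new value violates the security requirement"))

report: List[Finding] = []
on_config_change("ssh.permit_root_login", "yes", report)  # a user-initiated change
print(report)  # one finding for the non-compliant change
```

Because only the changed item is evaluated as the change happens, there is no window between periodic scans in which a non-compliant configuration goes unnoticed.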
Contributors: Seo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The digital forensics community has neglected email forensics as a process, despite the fact that email remains an important tool in the commission of crime. Current forensic practices focus mostly on disk forensics, with email forensics left as an analysis task stemming from that practice. Because there is no well-defined process for email forensics, the comprehensiveness and extensibility of tools, the uniformity of evidence, the usefulness in collaborative or distributed environments, and the consistency of investigations are all hindered. At present, there exists little support for discovering, acquiring, and representing web-based email, despite its widespread use. To remedy this, a systematic email forensics process is presented that covers discovering, acquiring, and representing web-based email, that integrates into the normal forensic analysis workflow, and that accommodates the distinct characteristics of email evidence. This process focuses on detecting the presence of non-obvious artifacts related to email accounts, retrieving the data from the service provider, and representing email in a well-structured format based on existing standards. As a result, developers and organizations can collaboratively create and use analysis tools that analyze email evidence from any source in the same fashion, and examiners can access additional data relevant to their forensic cases. Finally, an extensible framework realizing this novel process-driven approach has been implemented in an attempt to address the problems of comprehensiveness, extensibility, uniformity, collaboration/distribution, and consistency within forensic investigations involving email evidence.
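As a rough illustration of representing acquired messages in a uniform, well-structured format, the following Python sketch (hypothetical field names, not the representation defined in the thesis) parses a raw RFC 5322 message with the standard email library and normalizes it into a single evidence record, including a hash of the raw message for integrity tracking.

```python
import hashlib
from dataclasses import dataclass
from email import message_from_string

@dataclass
class EmailEvidence:
    provider: str      # label of the acquiring source, e.g. a webmail provider
    message_id: str
    sender: str
    recipients: str
    date: str
    sha256: str        # hash of the raw message for integrity tracking

def to_evidence(raw_message: str, provider: str) -> EmailEvidence:
    """Normalize one acquired raw message into a uniform evidence record."""
    msg = message_from_string(raw_message)
    return EmailEvidence(
        provider=provider,
        message_id=msg.get("Message-ID", ""),
        sender=msg.get("From", ""),
        recipients=msg.get("To", ""),
        date=msg.get("Date", ""),
        sha256=hashlib.sha256(raw_message.encode()).hexdigest(),
    )
```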
Contributors: Paglierani, Justin W (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Santanam, Raghu T (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Neuroimaging has appeared in the courtroom as a type of `evidence' to support claims about whether or not criminals should be held accountable for their crimes. Yet the ability to abstract notions of culpability and criminal behavior with confidence from these images is unclear. As there remains much to be discovered in the relationship between personal responsibility, criminal behavior, and neurological abnormalities, questions have been raised about whether neuroimaging is an appropriate means of validating these claims.

This project explores the limits and legitimacy of neuroimaging as a means of understanding behavior and culpability in determining appropriate criminal sentencing. It highlights key philosophical issues surrounding the ability to use neuroimaging to support this process, and proposes a method of ensuring their proper use. By engaging case studies and a thought experiment, this project illustrates the circumstances in which neuroimaging may assist in identifying particular characteristics relevant for criminal sentencing.

I argue that it is not a question of whether or not neuroimaging itself holds validity in determining a criminal's guilt or motives; rather, the proper focus is on the way in which information regarding these images is communicated from the `expert' scientists to the `non-expert' decision-makers determining the sentence. Those who consider this information's relevance, a judge or jury, are typically not well versed in criminal neuroscience or in interpreting the significance of different images. I contend that the way in which this information is communicated from the scientist-informer to the decision-maker is as important as its actual meaning.

As a solution, I engage Roger Pielke's model of honest brokering to ensure the appropriate use of neuroimaging in determining criminal responsibility and sentencing. A thought experiment follows to highlight the limits of science, engage philosophical repercussions, and illustrate honest brokering as a means of resolution. To achieve this, a hypothetical dialogue reminiscent of Kenneth Schaffner's `tools for talking' with behavioral geneticists and courtroom professionals exemplifies these ideas.
Contributors: Taddeo, Sarah (Author) / Robert, Jason S (Thesis advisor) / Marchant, Gary (Committee member) / Hurlbut, James B (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With the advent of technologies such as web services, service-oriented architecture, and cloud computing, modern organizations have to deal with policies such as firewall policies for securing their networks and XACML (eXtensible Access Control Markup Language) policies for controlling access to critical information and resources. Managing these policies is an extremely important task in order to avoid unintended security leakages via illegal accesses while maintaining proper access to services for legitimate users. Managing and maintaining access control policies manually over long periods of time is an error-prone task due to their inherently complex nature. Existing tools and mechanisms for policy management use different approaches for different types of policies. This thesis presents a generic framework that provides a unified approach to the analysis and management of different types of policies. The generic approach captures the common semantics and structure of different access control policies with the notion of a policy ontology. The policy ontology representation is then utilized for effectively analyzing and managing the policies. This thesis also discusses a proof-of-concept implementation of the proposed generic framework and demonstrates how efficiently this unified approach can be used for the analysis and management of different types of access control policies.
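To make the idea of a common representation concrete, here is a minimal Python sketch, under assumed field names and not the ontology developed in the thesis, in which firewall-style and XACML-style rules are mapped onto one generic structure so that a single analysis routine (here, a naive shadowing check) applies to both policy types.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GenericRule:
    subject: str    # e.g. a source address or an XACML subject attribute
    action: str     # e.g. "tcp/443" or "read"
    resource: str   # e.g. a destination address or a document identifier
    effect: str     # "permit" or "deny"

def shadowed(rules: List[GenericRule]) -> List[int]:
    """Indices of rules fully covered by an earlier rule with the opposite effect."""
    hits = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if (earlier.subject == later.subject
                    and earlier.action == later.action
                    and earlier.resource == later.resource
                    and earlier.effect != later.effect):
                hits.append(i)
                break
    return hits
```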
Contributors: Kulkarni, Ketan (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In modern healthcare environments, there is a strong need for an infrastructure that reduces the time-consuming effort and costly operations required to obtain a patient's complete medical record, and that uniformly integrates this heterogeneous collection of medical data for delivery to healthcare professionals. As a result, healthcare providers are more willing to shift their electronic medical record (EMR) systems to clouds that can remove the geographical distance barriers among providers and patients. Even though cloud-based EMRs have received considerable attention, since they would help achieve lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an extremely important prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives, in a way that addresses access control and compliance issues in healthcare cloud computing environments. We present a secure medical data sharing framework that supports selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also ensures that privacy concerns are accommodated when processing access requests to patients' healthcare information. To realize our proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results to demonstrate the effectiveness and efficiency of our approach.
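A small sketch of the selective-sharing idea follows; the provider names, record categories, and policies are invented for illustration and are not the thesis framework. Each record in the composite EHR carries its originating provider, and a request is answered only with the portions that the corresponding provider's policy permits for the requester's role.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ClinicalRecord:
    provider: str      # originating healthcare provider
    category: str      # e.g. "medication", "mental-health"
    content: str

# Hypothetical per-provider policies: roles allowed to see each record category.
POLICIES: Dict[str, Dict[str, List[str]]] = {
    "hospital-a": {"medication": ["physician", "pharmacist"]},
    "clinic-b": {"mental-health": ["physician"]},
}

def selective_share(ehr: List[ClinicalRecord], requester_role: str) -> List[ClinicalRecord]:
    """Release only the portions of a composite EHR permitted for the requester's role."""
    return [rec for rec in ehr
            if requester_role in POLICIES.get(rec.provider, {}).get(rec.category, [])]
```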
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In order to catch the smartest criminals in the world, digital forensics examiners need a means of collaborating and sharing information with each other and outside experts that is not prohibitively difficult. However, standard operating procedures and the rules of evidence generally disallow the use of the collaboration software and techniques that are currently available because they do not fully adhere to the dictated procedures for the handling, analysis, and disclosure of items relating to cases. The aim of this work is to conceive and design a framework that provides a completely new architecture that 1) can perform fundamental functions that are common and necessary to forensic analyses, and 2) is structured such that it is possible to include collaboration-facilitating components without changing the way users interact with the system sans collaboration. This framework is called the Collaborative Forensic Framework (CUFF). CUFF is constructed from four main components: Cuff Link, Storage, Web Interface, and Analysis Block. With the Cuff Link acting as a mediator between components, CUFF is flexible in both the method of deployment and the technologies used in implementation. The details of a realization of CUFF are given, which uses a combination of Java, the Google Web Toolkit, Django with Apache for a RESTful web service, and an Ubuntu Enterprise Cloud using Eucalyptus. The functionality of CUFF's components is demonstrated by the integration of an acquisition script designed for Android OS-based mobile devices that use the YAFFS2 file system. While this work has obvious application to examination labs which work under the mandate of judicial or investigative bodies, security officers at any organization would benefit from the improved ability to cooperate in electronic discovery efforts and internal investigations.
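The mediator role of the Cuff Link can be pictured with a short Python sketch. The component names follow the abstract, but the behavior shown only illustrates the mediator pattern and is not CUFF's actual interface: the web interface submits requests to the Cuff Link, which routes them to the storage or analysis components so that components never depend on each other directly.

```python
class Storage:
    def handle(self, request: str) -> str:
        return f"stored:{request}"

class AnalysisBlock:
    def handle(self, request: str) -> str:
        return f"analyzed:{request}"

class CuffLink:
    """Mediator: routes requests so components never call each other directly."""
    def __init__(self) -> None:
        self.routes = {"store": Storage(), "analyze": AnalysisBlock()}

    def dispatch(self, kind: str, request: str) -> str:
        return self.routes[kind].handle(request)

# The web interface would submit requests through the link rather than to a component.
link = CuffLink()
print(link.dispatch("analyze", "android-yaffs2-image-001"))
```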
Contributors: Mabey, Michael Kent (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A novel clustered regularly interspaced short palindromic repeats/CRISPR-associated (CRISPR/Cas) tool for simultaneous gene editing and regulation was designed and tested. This study used the CRISPR-associated protein 9 (Cas9) endonuclease in complex with a 14-nucleotide (nt) guide RNA (gRNA) to repress a gene of interest using the Krüppel-associated box (KRAB) domain, while also performing a separate gene modification using a 20-nt gRNA targeted to a reporter vector. DNA Ligase IV (LIGIV) was chosen as the target for gene repression, given its role in nonhomologous end joining, a common DNA repair process that competes with the more precise homology-directed repair (HDR).

To test for gene editing, a 20-nt gRNA was designed to target a disrupted enhanced green fluorescent protein (EGFP) gene present in a reporter vector. After the gRNA introduced a double-stranded break, cells attempted to repair the cut site via HDR using a DNA template within the reporter vector. In the event of successful gene editing, the EGFP sequence was restored to a functional state and green fluorescence was detectable by flow cytometry. To achieve gene repression, a 14-nt gRNA was designed to target LIGIV. The gRNA included a com protein recruitment domain, which recruited a Com-KRAB fusion protein to facilitate gene repression via chromatin modification of LIGIV. Quantitative polymerase chain reaction was used to quantify repression.

This study expanded upon earlier advancements, offering a novel and versatile approach to genetic modification and transcriptional regulation using CRISPR/Cas9. The overall results show that both gene editing and repression were occurring, thereby providing support for a novel CRISPR/Cas system capable of simultaneous gene modification and regulation. Such a system may enhance the genome engineering capabilities of researchers, benefit disease research, and improve the precision with which gene editing is performed.
Contributors: Chapman, Jennifer E (Author) / Kiani, Samira (Thesis advisor) / Ugarova, Tatiana (Thesis advisor) / Marchant, Gary (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Detecting cyber-attacks in cyber systems is essential for protecting cyber infrastructures from such attacks, yet it is very difficult due to the high complexity of these systems. The accuracy of attack detection depends heavily on the completeness of the collected sensor information. In this thesis, two approaches are presented: one for detecting attacks in completely observable cyber systems, and the other for estimating the types of states in partially observable cyber systems for attack detection. These two approaches are illustrated using three large data sets of network traffic, because the packet-level information in network traffic data provides details about the cyber systems.

The approach to attack detection is based on a multimodal artificial neural network (MANN) that uses the network traffic data collected from completely observable cyber systems for training and testing. Since the training of the MANN is computationally intensive, an efficient feature selection algorithm based on a genetic algorithm is developed and incorporated into this approach to reduce the computational overhead.

To detect attacks in partially observable environments, an approach to estimating the types of states in partially observable cyber systems is presented; this estimation is the first phase of attack detection in such environments. The types of states of these cyber systems are useful for detecting cyber-attacks against them. This approach involves the use of a convolutional neural network (CNN) and unsupervised learning with the elbow method and the k-means clustering algorithm.
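A brief sketch of the unsupervised step is given below, using scikit-learn's KMeans and random placeholder feature vectors standing in for CNN-derived features; it is illustrative only, not the thesis pipeline. The elbow method chooses the number of state types by looking for the point at which the clustering inertia stops dropping sharply as k grows.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder feature vectors standing in for CNN-derived features of traffic events.
features = np.random.rand(200, 16)

inertias = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    inertias.append(km.inertia_)

# The "elbow" is the k after which inertia stops dropping sharply; clustering with
# that k assigns each observation an estimated state type.
print(list(zip(range(1, 10), inertias)))
```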
Contributors: Guha, Sayantan (Author) / Yau, Stephen S. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Security requirements are at the heart of developing secure, invulnerable software. Without embedding security principles in the software development life cycle, the likelihood of producing insecure software increases, putting the consumers of that software at great risk. For large-scale software development, this problem is complicated as there may be hundreds or thousands of security requirements that need to be met, and it only worsens if the software development project is developed by a distributed development team. In this thesis, an approach is provided for software security requirement traceability for large-scale and complex software development projects being developed by distributed development teams. The approach utilizes blockchain technology to improve the automation of security requirement satisfaction and create a more transparent and trustworthy development environment for distributed development teams. The approach also introduces immutability, auditability, and non-repudiation into the security requirement traceability process. The approach is evaluated against existing software security requirement solutions.
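One way to picture how a blockchain-style ledger can provide immutability, auditability, and non-repudiation for requirement traceability is the following hash-chained log in Python; the field names and structure are assumptions for illustration, not the design used in the thesis.

```python
import hashlib
import json
import time
from typing import Dict, List

def block_hash(block: Dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class TraceLedger:
    """Append-only, hash-chained log of requirement-satisfaction events."""
    def __init__(self) -> None:
        self.chain: List[Dict] = []

    def record(self, requirement_id: str, artifact: str, team: str) -> None:
        self.chain.append({
            "requirement_id": requirement_id,
            "artifact": artifact,          # e.g. a commit id or a test report
            "team": team,
            "timestamp": time.time(),
            "prev_hash": block_hash(self.chain[-1]) if self.chain else "0" * 64,
        })

    def verify(self) -> bool:
        """Any tampering with an earlier entry breaks the hash chain."""
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))
```

Because each entry embeds the hash of its predecessor, distributed teams can audit who satisfied which requirement without being able to silently rewrite the history.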
Contributors: Kulkarni, Adi Deepak (Author) / Yau, Stephen S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Wang, Ruoyu (Committee member) / Baek, Jaejong (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Large organizations have multiple networks that are subject to attacks, which can be detected by continuously monitoring and analyzing network traffic with Intrusion Detection Systems. Collaborative Intrusion Detection Systems (CIDS) are used for efficient detection of distributed attacks by providing a global view of traffic events in large networks. However, CIDS are vulnerable to internal attacks, which decrease the mutual trust among the nodes in a CIDS that is required for sharing critical and sensitive alert data. Without this data sharing, the nodes of a CIDS cannot collaborate efficiently to form a comprehensive view of events in the monitored networks and detect distributed attacks. Compromised nodes further decrease the accuracy of a CIDS by generating false positives and false negatives in the traffic event classifications. In this thesis, an approach based on a trust score system is presented to detect and suspend compromised nodes in a CIDS and thereby improve the trust among the nodes for efficient collaboration. This trust score-based approach is implemented as a consensus model on a private blockchain, because a private blockchain has the features needed to address the accountability, integrity, and privacy requirements of a CIDS. In this approach, the trust scores of malicious nodes are decreased with every reported false negative or false positive in the traffic event classifications. When the trust score of a node falls below a threshold, the node is identified as compromised and suspended. The approach is evaluated for its accuracy in identifying malicious nodes in a CIDS.
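The core of the trust score mechanism described above can be sketched in a few lines of Python; the penalty and threshold values here are placeholders, not those used in the thesis. Each confirmed false positive or false negative reported against a node lowers its score, and a node whose score falls below the threshold is marked compromised and suspended.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

THRESHOLD = 0.5   # suspension threshold (placeholder value)
PENALTY = 0.1     # per-misclassification penalty (placeholder value)

@dataclass
class TrustRegistry:
    scores: Dict[str, float] = field(default_factory=dict)
    suspended: Set[str] = field(default_factory=set)

    def report_misclassification(self, node_id: str) -> None:
        """Apply a penalty for a confirmed false positive or false negative."""
        score = self.scores.get(node_id, 1.0) - PENALTY
        self.scores[node_id] = score
        if score < THRESHOLD:
            self.suspended.add(node_id)

registry = TrustRegistry()
for _ in range(6):
    registry.report_misclassification("node-7")
print(registry.suspended)  # the node is suspended once its score drops below the threshold
```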
Contributors: Yenugunti, Chandralekha (Author) / Yau, Stephen S. (Thesis advisor) / Yang, Yezhou (Committee member) / Zou, Jia (Committee member) / Arizona State University (Publisher)
Created: 2020