Matching Items (79)
Description

The consumption of feedstocks from agriculture and forestry by current biofuel production has raised concerns about food security and land availability. In the meantime, intensive human activities have created a large amount of marginal land that requires management. This study investigated the viability of aligning land management with biofuel production on marginal lands. Biofuel crop production on two types of marginal lands, namely urban vacant lots and abandoned mine lands (AMLs), was assessed. The investigation of biofuel production on urban marginal land was carried out in Pittsburgh between 2008 and 2011, using the sunflower gardens developed by a Pittsburgh non-profit as an example. Results showed that the crops from urban marginal lands were safe for biofuel use. Crop yields were 20% of those on agricultural land, although low-input agriculture was used in cultivation. The energy balance analysis demonstrated that the sunflower gardens could produce a net energy return even at the current low yield. Biofuel production on AMLs was assessed through greenhouse experiments with sunflower, soybean, corn, canola, and camelina. The research successfully created an industrial symbiosis by using bauxite as a soil amendment to enable plant growth on very acidic mine refuse. Phytoremediation and soil amendments were found to effectively reduce contamination in the AML and its runoff. Results from this research support the conclusion that biofuel production on marginal lands could be a unique and feasible option for cultivating biofuel feedstocks.
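The net-energy claim can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only; every figure in it is a hypothetical placeholder, not data from the study.

```python
# Hypothetical net-energy-balance sketch for a low-input biofuel garden.
# All figures are illustrative placeholders, not values from the study.

energy_inputs_mj_per_ha = {
    "seed": 200.0,         # planting
    "labor_fuel": 900.0,   # tilling, harvest, transport
    "processing": 1500.0,  # oil extraction, conversion
}

# Hypothetical: yield at 20% of an assumed 800 kg/ha agricultural oil yield,
# with an assumed oil energy content of ~37 MJ/kg.
oil_yield_kg_per_ha = 0.20 * 800.0
energy_output_mj_per_ha = oil_yield_kg_per_ha * 37.0

total_input = sum(energy_inputs_mj_per_ha.values())
net_energy = energy_output_mj_per_ha - total_input
ner = energy_output_mj_per_ha / total_input

print(f"Net energy: {net_energy:.0f} MJ/ha, energy return ratio: {ner:.2f}")
# A ratio > 1 indicates a positive net energy return even at reduced yield.
```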
Contributors: Zhao, Xi (Author) / Landis, Amy (Thesis advisor) / Fox, Peter (Committee member) / Chester, Mikhail (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

With the growth of IT products and sophisticated software across operating systems, security risks in systems are constantly skyrocketing. Consequently, security assessment is now considered one of the primary mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical to understanding its vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security, so it is not straightforward to protect systems from being changed by unwitting users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit commonly used software as well as overtly risky software. In addition, checking the assurance of security configurations only periodically is disadvantageous in terms of time and cost, because zero-day attacks and timing attacks can leverage the window between security checks. An event-driven monitoring approach is therefore critical to continuously assess the security of a target system without ignoring that window, and to lessen the burden of exhaustively inspecting every configuration in the system. Furthermore, the system should be able to generate a vulnerability report for any user-initiated change that relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires consistently applying standards to each environment; such uniform assessment is important because approaches for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework that overcomes and accommodates the aforementioned issues. I also discuss the implementation details, which are based on commercial off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
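As a rough illustration of the event-driven idea (as opposed to periodic polling), the sketch below watches a configuration directory and re-assesses a file against a baseline whenever a change event fires. This is a minimal sketch, not the thesis framework: it assumes the third-party watchdog library, and the baseline rules and paths are hypothetical.

```python
# Minimal sketch of event-driven configuration assessment (not the thesis framework).
# Requires the third-party watchdog library; baseline rules below are hypothetical.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# Hypothetical security baseline: required key/value settings in a config file.
BASELINE = {"PermitRootLogin": "no", "PasswordAuthentication": "no"}

def assess(path):
    """Re-check the changed file against the baseline and report violations."""
    settings = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                settings[parts[0]] = parts[1]
    for key, required in BASELINE.items():
        if settings.get(key) != required:
            print(f"VULNERABLE: {key} should be {required!r}, found {settings.get(key)!r}")

class ConfigChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            assess(event.src_path)  # assess only when a change event fires

observer = Observer()
observer.schedule(ConfigChangeHandler(), "/etc/ssh", recursive=False)
observer.start()  # continuous and event-driven: no polling window to exploit
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```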
Contributors: Seo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and engineers identify vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. Creating tests that effectively examine vulnerabilities is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control; however, in practice its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, makes systematically testing RBAC systems crucial to ensuring security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems that considers the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policies, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework that takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
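To illustrate how a role hierarchy drives positive and negative test generation, here is a minimal sketch in plain Python. The roles, permissions, and hierarchy are hypothetical, and the MTBDD machinery the thesis actually uses is deliberately omitted.

```python
# Minimal sketch: deriving positive/negative RBAC test cases from a role
# hierarchy (hypothetical roles and permissions; no MTBDD representation).

HIERARCHY = {"admin": ["manager"], "manager": ["employee"], "employee": []}
DIRECT_PERMS = {
    "admin": {"delete_record"},
    "manager": {"approve_order"},
    "employee": {"view_record"},
}

def effective_perms(role):
    """Permissions a role holds directly or inherits from junior roles."""
    perms = set(DIRECT_PERMS.get(role, set()))
    for junior in HIERARCHY.get(role, []):
        perms |= effective_perms(junior)
    return perms

all_perms = set().union(*DIRECT_PERMS.values())
for role in HIERARCHY:
    granted = effective_perms(role)
    positive = [(role, p, "PERMIT") for p in granted]            # must be allowed
    negative = [(role, p, "DENY") for p in all_perms - granted]  # must be denied
    print(role, positive, negative)
```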
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The digital forensics community has neglected email forensics as a process, despite the fact that email remains an important tool in the commission of crime. Current forensic practice focuses mostly on disk forensics, with email forensics left as an analysis task stemming from that practice. Because there is no well-defined process for email forensics, the comprehensiveness of investigations, the extensibility of tools, the uniformity of evidence, usefulness in collaborative and distributed environments, and the consistency of investigations are all hindered. At present, there exists little support for discovering, acquiring, and representing web-based email, despite its widespread use. To remedy this, a systematic process for discovering, acquiring, and representing web-based email is presented, one that is integrated into the normal forensic analysis workflow and accommodates the distinct characteristics of email evidence. This process focuses on detecting the presence of non-obvious artifacts related to email accounts, retrieving the data from the service provider, and representing email in a well-structured format based on existing standards. As a result, developers and organizations can collaboratively create and use analysis tools that treat email evidence from any source in the same fashion, and examiners can access additional data relevant to their forensic cases. Finally, an extensible framework implementing this novel process-driven approach is presented in an attempt to address the problems of comprehensiveness, extensibility, uniformity, collaboration and distribution, and consistency within forensic investigations involving email evidence.
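As a small illustration of representing email in a well-structured format, the sketch below normalizes a raw RFC 822 message into a JSON record using Python's standard email library. This is not the thesis's representation, which defines its own standards-based schema; the field selection here is an assumption.

```python
# Minimal sketch: normalizing a raw RFC 822 message into a structured record
# (illustrative field selection; not the thesis's standards-based schema).
import json
from email import policy
from email.parser import BytesParser

def email_to_record(raw_bytes):
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    body_part = msg.get_body(preferencelist=("plain",))
    return {
        "from": str(msg["From"]) if msg["From"] else None,
        "to": str(msg["To"]) if msg["To"] else None,
        "date": str(msg["Date"]) if msg["Date"] else None,
        "subject": str(msg["Subject"]) if msg["Subject"] else None,
        "message_id": str(msg["Message-ID"]) if msg["Message-ID"] else None,
        "body": body_part.get_content() if body_part else None,
    }

raw = b"From: a@example.com\r\nTo: b@example.com\r\nSubject: hi\r\n\r\nhello\r\n"
print(json.dumps(email_to_record(raw), indent=2))
```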
Contributors: Paglierani, Justin W (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Santanam, Raghu T (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Attribute-Based Access Control (ABAC) mechanisms have been attracting considerable interest from the research community in recent times, especially because of the flexibility and extensibility they provide by using attributes assigned to subjects as the basis for access control. ABAC enables a server administrator to enforce access policies on data, services, and other resources fairly easily. It also accommodates new policies and changes to existing policies gracefully, making it a potentially good mechanism for implementing access control in large systems, particularly in today's age of cloud computing. However, management of attributes in an ABAC environment is an area that has received little attention. A mechanism that allows multiple ABAC-based systems to share data and resources can go a long way toward making ABAC scalable, while each system should still be able to specify its own attribute sets independently. The research presented in this document proposes a new mechanism that enables users to share resources and data in a cloud environment using ABAC techniques in a distributed manner. The focus is mainly on decentralizing the access policy specifications for shared data so that each data owner can specify access policies independently of others. The concepts of ontologies and the semantic web are introduced into the ABAC paradigm to help give a scalable structure to the attributes and to allow systems with different sets of attributes to communicate and share resources.
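A minimal sketch of an attribute-based access decision follows; the attribute names, policy structure, and cross-attribute condition are all hypothetical, and the ontology/semantic-web layer proposed in the thesis is not modeled.

```python
# Minimal sketch of an attribute-based access decision (hypothetical attributes
# and policy structure; no ontology layer).

def evaluate(policy, subject_attrs, resource_attrs, action):
    """Permit when the action matches and every attribute condition holds."""
    if action != policy["action"]:
        return "DENY"
    for attr, allowed in policy["subject_conditions"].items():
        if subject_attrs.get(attr) not in allowed:
            return "DENY"
    if resource_attrs.get("owner_dept") != subject_attrs.get("department"):
        return "DENY"  # example cross-attribute condition
    return "PERMIT"

policy = {
    "action": "read",
    "subject_conditions": {"role": {"analyst", "auditor"}, "clearance": {"secret"}},
}
subject = {"role": "analyst", "clearance": "secret", "department": "finance"}
resource = {"owner_dept": "finance"}
print(evaluate(policy, subject, resource, "read"))  # PERMIT
```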
Contributors: Prabhu Verleker, Ashwin Narayan (Author) / Huang, Dijiang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

This thesis addresses the ever-increasing threat of botnets in the smartphone domain, focusing on the Android platform and botnets that use Online Social Networks (OSNs) as their Command and Control (C&C) medium. With any botnet, C&C is one of the components on which the botnet's survival depends; individual bots use the C&C channel to receive commands and send data. This thesis develops an active host-based approach that identifies the presence of a bot from anomalies in the user's usage patterns before and after the bot is installed on the smartphone, and alerts the user to its presence. A profile is constructed for each user from their regular web usage patterns (obtained by intercepting http(s) traffic), and machine learning techniques continuously learn the user's behavior and changes in that behavior while watching for anomalies above a threshold; when one is detected, the user is notified of the anomalous traffic. A prototype bot that uses OSNs as its C&C channel was constructed and used for testing. Users were given smartphones (Nexus 4 and Galaxy Nexus) running an application proxy that intercepts http(s) traffic and relays it to a server, which uses the traffic to construct the model for a particular user and looks for any signs of anomalies. This approach lays the groundwork for future host-based countermeasures against smartphone botnets that use OSNs as a C&C channel.
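The profile-and-threshold idea can be illustrated with a toy anomaly check on per-domain request rates. This is a minimal sketch, not the thesis's machine learning pipeline; the baseline data, domain names, and z-score threshold are hypothetical.

```python
# Minimal sketch: flag traffic whose per-domain request rate deviates from a
# learned baseline (hypothetical data; not the thesis's ML pipeline).
import statistics

# Hypothetical learned baseline: daily request counts per domain for one user.
baseline = {
    "news.example.com": [40, 35, 42, 38],
    "mail.example.com": [60, 55, 58, 62],
}

def is_anomalous(domain, todays_count, z_threshold=3.0):
    history = baseline.get(domain)
    if history is None:
        return True  # never-seen domain: a potential C&C candidate
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1.0
    return abs(todays_count - mu) / sigma > z_threshold

print(is_anomalous("news.example.com", 41))         # False: normal browsing
print(is_anomalous("social-osn.example.com", 500))  # True: possible C&C beaconing
```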
Contributors: Kilari, Vishnu Teja (Author) / Xue, Guoliang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Characterization of petroleum spill site source zones directly influences the selection of corrective action plans and frequently affects the success of remediation efforts. For example, simply knowing whether or not nonaqueous phase liquid (NAPL) is present, or whether there is chemical storage in less hydraulically accessible regions, will influence corrective action planning. The overarching objective of this study was to assess whether macroscopic source zone features can be inferred from dissolved concentration vs. time data. Laboratory-scale physical model studies were conducted for idealized sources, defined as Type-1) NAPL-impacted high-permeability zones, Type-2) NAPL-impacted lower-permeability zones, and Type-3) dissolved chemical matrix storage in lower-permeability zones. Aquifer source release studies were conducted using two-dimensional stainless steel flow-through tanks outfitted with sampling ports for monitoring effluent concentrations and flow rates. An idealized NAPL mixture of key gasoline components was used to create the NAPL source zones, and dissolved sources were created using aqueous solutions with concentrations similar to water in equilibrium with the NAPL sources. The average linear velocity was controlled by pumping to about 2 ft/d, and dissolved effluent concentrations were monitored daily. The Type-1 experiment produced a source signature similar to that expected for a relatively well-mixed NAPL source, with dissolved concentrations dependent on chemical solubility and initial mass fraction. The Type-2 and Type-3 experiments were conducted for 320 d and 190 d, respectively. Unlike the Type-1 experiment, the concentration vs. time behavior was similar for all chemicals for both source types; the magnitudes of the effluent concentrations varied between the Type-2 and Type-3 experiments and were related to the hydrocarbon source mass. A fourth physical model experiment was performed to identify differences between ideal equilibrium behavior and the source concentration vs. time behavior observed in the tank experiments. Screening-level mathematical models predicted the general behavior observed in the experiments. The results of these studies suggest that dissolved concentration vs. time data can be used to distinguish Type-1 sources in transmissive zones from Type-2 and Type-3 sources in lower-permeability zones, provided that many years to decades of data are available. The results also suggest that concentration vs. time data alone will be insufficient to distinguish between NAPL and dissolved-phase storage sources in lower-permeability regions.
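The "well-mixed NAPL source" signature noted for the Type-1 experiment follows from Raoult's law: the effective aqueous solubility of each component is its mole fraction in the NAPL times its pure-phase solubility. The sketch below illustrates the calculation; the mole fractions are hypothetical, and the solubilities are approximate literature values, not values from the study.

```python
# Minimal sketch of the equilibrium (Raoult's law) dissolution estimate for a
# well-mixed NAPL source. Mole fractions are hypothetical; solubilities are
# approximate literature values, not values from the study.

pure_solubility_mg_L = {"benzene": 1780.0, "toluene": 515.0, "o-xylene": 175.0}
mole_fraction = {"benzene": 0.02, "toluene": 0.10, "o-xylene": 0.08}  # hypothetical mix

for chem, x in mole_fraction.items():
    c_eff = x * pure_solubility_mg_L[chem]  # Raoult's law effective solubility
    print(f"{chem}: ~{c_eff:.1f} mg/L expected in water contacting the NAPL")
```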
Contributors: Wilson, Sean Tomas (Author) / Johnson, Paul (Thesis advisor) / Kavazanjian, Edward (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

With increasing user demand for low latency, elastic provisioning of computing resources, and ubiquitous, on-demand access to real-time data, cloud computing has emerged as a popular computing paradigm. However, with the introduction and rising use of wearable technology and evolving uses of smartphones, the concept of the Internet of Things (IoT) has become a prevailing notion in the growing technology industry. Cisco Inc. has projected that approximately 403 zettabytes (ZB) of data will be created by 2018. Bringing these benign devices online and connecting them to the web has produced exploding service and data aggregation requirements, thus demanding a new and innovative computing platform. This platform should be capable of providing robust real-time data analytics and on-demand resource provisioning to clients such as IoT users. Such a computation model would need to function at the edge of the network, forming a bridge between large cloud data centers and distributed connected devices.

This research expands on the notion of bringing computational power to the edge of the network and integrating it with the cloud computing paradigm while providing services to diverse IoT-based applications. This expansion is achieved through a new computing model that serves as a platform for IoT-based devices to communicate with services in real time; we name this paradigm Gateway-Oriented Reconfigurable Ecosystem (GORE) computing. Finally, this thesis proposes and discusses the development of a policy management framework for the proposed computational paradigm. The policy framework is designed to serve both the hosted applications and the GORE paradigm by enabling them to function more efficiently, with the goal of ensuring uninterrupted communication and service delivery between users and their applications.
Contributors: Dsouza, Clinton (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The increasing number of continually connected mobile users has created an environment conducive to gathering real-time user data for many purposes, both public and private. Publicly, one can envision no longer requiring a census to determine the demographic composition of the country and its subregions: the information gathered is vastly more up to date than that of a census and allows civil authorities to be more agile and preemptive in planning. Privately, advertisers take advantage of a person's stated opinions, demographics, and contextual (where and when) information to formulate and present pertinent offers.

Regardless of its use, this information can be sensitive in nature and should therefore be under the control of the user. Currently, a user has little say in how their information is processed once it has been released. An ad-hoc approach is currently in use, in which each location-based service provider maintains its own policy over personal information usage.

To allow users more control over their personal information while still providing for targeted advertising, a systematic approach to the release of the information is needed. It is for that reason we propose a User-Centric Context-Aware Spatiotemporal Anonymization framework. At its core, the framework unifies current spatiotemporal anonymization with traditional anonymization so that user-specified anonymization requirements are met or exceeded while allowing more demographic information to be released.
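One common building block of spatiotemporal anonymization is spatial cloaking: rather than reporting an exact location, report a region grown until it covers at least k users. The sketch below illustrates the idea with hypothetical coordinates; it is not the proposed framework, which also accounts for temporal and demographic dimensions.

```python
# Minimal sketch of spatial cloaking for k-anonymity: grow the reported region
# around the user until it contains >= k users (hypothetical coordinates).

def cloak(user_xy, others_xy, k, step=0.5, max_r=50.0):
    """Return a half-width r such that the square around user_xy covers >= k users."""
    x, y = user_xy
    r = step
    while r <= max_r:
        inside = sum(1 for ox, oy in others_xy
                     if abs(ox - x) <= r and abs(oy - y) <= r)
        if inside + 1 >= k:  # +1 counts the user themselves
            return r         # report the region, not the exact point
        r += step
    return max_r             # k-anonymity unreachable: fall back to the largest region

others = [(1.0, 1.2), (1.4, 0.8), (3.0, 3.1), (0.6, 1.1)]
r = cloak((1.0, 1.0), others, k=4)
print(f"Report a region of half-width {r} instead of the exact location")
```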
Contributors: Sanchez, Michael Andrew (Author) / Ahn, Gail-Joon (Thesis advisor) / Doupe, Adam (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The rate at which new malicious software (malware) is created increases each year. These new malware samples are designed to bypass the anti-virus countermeasures currently employed to protect computer systems. Security analysts must understand the nature and intent of a malware sample in order to protect computer systems from such attacks, and the large number of new samples received daily by computer security companies requires analysts to quickly determine the type, threat, and countermeasure for newly identified samples. Our approach provides a visualization tool that assists the security analyst in these tasks by allowing the analyst to visually identify relationships between malware samples.

This approach consists of three steps. First, received samples are processed in a sandbox environment to perform dynamic behavior analysis. Second, the reports of the dynamic behavior analysis are parsed to extract identifying features, which are matched against other known and analyzed samples. Lastly, matches that are determined to express a relationship are visualized as edge-connected pairs of nodes in an undirected graph.
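A minimal sketch of the third step might look like the following: treat each sample's extracted behaviors as a feature set and connect samples whose overlap (here measured with Jaccard similarity) exceeds a threshold. The sample names, feature strings, and threshold are all hypothetical.

```python
# Minimal sketch of step three: connect samples whose extracted behavior
# features overlap beyond a threshold (hypothetical sandbox features).

samples = {
    "sample_a": {"CreateRemoteThread", "RegSetValue:Run", "connect:evil.example.com"},
    "sample_b": {"CreateRemoteThread", "RegSetValue:Run", "connect:other.example.org"},
    "sample_c": {"WriteFile:temp.dll"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

THRESHOLD = 0.4
edges = []  # undirected graph stored as an edge list
names = sorted(samples)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        if jaccard(samples[u], samples[v]) >= THRESHOLD:
            edges.append((u, v))  # related: share enough dynamic behavior

print(edges)  # [('sample_a', 'sample_b')]
```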
Contributors: Holmes, James Edward (Author) / Ahn, Gail-Joon (Thesis advisor) / Dasgupta, Partha (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2014