Description
Researchers across a variety of fields are often interested in determining whether data are random or exhibit patterning that may be the result of some alternative, and potentially more interesting, process. This dissertation explores a family of statistical methods, i.e., space-time interaction tests, designed to detect structure within three-dimensional event data. These tests, widely employed in spatial epidemiology, criminology, ecology, and beyond, are used to identify synergistic interaction across the spatial and temporal dimensions of a series of events. Exploration is needed to better understand these methods and determine how their results may be affected by data quality problems commonly encountered in their implementation; specifically, how inaccuracy and/or uncertainty in the input data may impact subsequent results. Additionally, known shortcomings of the methods must be ameliorated. The contributions of this dissertation are twofold: it develops a more complete understanding of how input data quality problems impact the results of a number of global and local tests of space-time interaction, and it formulates an improved version of one global test that accounts for the previously identified problem of population shift bias. A series of simulation experiments reveals the global tests of space-time interaction explored here to be dramatically affected by the aforementioned deficiencies in the quality of the input data. It is shown that, in some cases, a conservative degree of these common data problems can completely obscure evidence of space-time interaction and, in others, create it where it does not exist. Conversely, a local metric of space-time interaction examined here demonstrates a surprising robustness in the face of these same deficiencies, proving only minimally affected by the inaccuracies and incompleteness introduced in these experiments. Finally, enhancements to one of the global tests are presented which solve the problem of population shift bias associated with the test and better contextualize and visualize its results, thereby enhancing its utility for practitioners.
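
A canonical member of this family of global tests is the Knox test, which counts event pairs that are close in both space and time and evaluates that count against a permutation-based null. The sketch below is a minimal illustration of that idea, assuming simple planar coordinates; the specific tests, cutoffs, and data used in the dissertation are not given in this abstract, so everything here is a generic placeholder.

```python
import random

def knox_statistic(events, ds, dt):
    """Count event pairs within distance ds AND time dt of each other."""
    count = 0
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            (x1, y1, t1), (x2, y2, t2) = events[i], events[j]
            close_space = (x1 - x2) ** 2 + (y1 - y2) ** 2 <= ds ** 2
            close_time = abs(t1 - t2) <= dt
            if close_space and close_time:
                count += 1
    return count

def knox_test(events, ds, dt, permutations=999, seed=42):
    """Permutation test: shuffling timestamps breaks any space-time link."""
    rng = random.Random(seed)
    observed = knox_statistic(events, ds, dt)
    times = [t for _, _, t in events]
    exceed = 0
    for _ in range(permutations):
        rng.shuffle(times)
        permuted = [(x, y, t) for (x, y, _), t in zip(events, times)]
        if knox_statistic(permuted, ds, dt) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (permutations + 1)
    return observed, p_value
```

Because the statistic depends directly on the recorded coordinates and timestamps, inaccuracy in either dimension perturbs the pair counts, which is precisely the sensitivity the simulation experiments probe.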
Contributors: Malizia, Nicholas (Author) / Anselin, Luc (Thesis advisor) / Murray, Alan (Committee member) / Rey, Sergio (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This dissertation addresses the research challenge of developing efficient new methods for discovering useful patterns and knowledge in large volumes of electronically collected spatiotemporal activity data. I propose to analyze three types of such data in a methodological framework that integrates spatial analysis, data mining, machine learning, and geovisualization techniques. The three types of spatiotemporal activity data were collected through different approaches: (1) crowdsourced geo-tagged digital photos, representing people's travel activity, were retrieved from the website Panoramio.com through information retrieval techniques; (2) the same techniques were used to crawl crowdsourced GPS trajectory data, and related metadata on contributors' daily activities, from the website OpenStreetMap.org; and (3) preschool children's daily activities and interactions, tagged with time and geographic location, were collected with a novel Tablet PC-based behavioral coding system. The proposed methodology is applied to these data to (1) automatically recommend optimal multi-day and multi-stay travel itineraries for travelers based on attractions discovered from geo-tagged photos, (2) automatically detect the movement types of unknown moving objects from GPS trajectories, and (3) explore dynamic social and socio-spatial patterns of preschool children's behavior from both geographic and social perspectives.
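
For the second application, detecting movement types from GPS trajectories, a common starting point is to derive speed-based features per trajectory and classify on them. The sketch below illustrates that generic approach; the feature set, thresholds, and classifier used in the dissertation are not specified in this abstract, so the rules here are illustrative placeholders.

```python
import math

def speeds(track):
    """Per-segment speeds (m/s) from a trajectory of (x, y, t) points in
    meters/seconds; real GPS data would first need lon/lat-to-meter conversion."""
    out = []
    for (x1, y1, t1), (x2, y2, t2) in zip(track, track[1:]):
        dt = t2 - t1
        if dt > 0:
            out.append(math.hypot(x2 - x1, y2 - y1) / dt)
    return out

def classify_movement(track):
    """Rule-based movement-type guess from mean/max speed.
    Thresholds are hypothetical, not the dissertation's."""
    v = speeds(track)
    if not v:
        return "stationary"
    mean_v, max_v = sum(v) / len(v), max(v)
    if max_v < 2.5:
        return "walk"
    if mean_v < 5:
        return "bike"
    return "motorized"

# Illustrative: a short, slow track (~0.7 m/s) classifies as "walk".
print(classify_movement([(0, 0, 0), (1, 1, 2), (2, 2, 4)]))
```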
Contributors: Li, Xun (Author) / Anselin, Luc (Thesis advisor) / Koschinsky, Julia (Committee member) / Maciejewski, Ross (Committee member) / Rey, Sergio (Committee member) / Griffin, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
There exist many facets of error and uncertainty in digital spatial information. As error or uncertainty will likely never be completely eliminated, a better understanding of its impacts is necessary. Spatial analytical approaches, in particular, must somehow address data quality issues. Approaches range from evaluating the impacts of potential data uncertainty in planning processes that rely on such methods to devising methods that explicitly account for error and uncertainty. To date, little has been done to structure methods that account for error. This research focuses on developing methods to address geographic data uncertainty in spatial optimization. An integrated approach is developed that characterizes uncertainty impacts by constructing and solving a new multi-objective model that explicitly incorporates facets of data uncertainty. Empirical findings illustrate that the proposed approaches can evaluate the impacts of data uncertainty with statistical confidence, moving beyond the popular practice of simulating errors in data. Spatial uncertainty impacts are evaluated in two contexts: harvest scheduling and sex offender residency. Owing to the integration of spatial uncertainty, the detailed multi-objective models are more complex and computationally challenging to solve. As a result, a new multi-objective evolutionary algorithm is developed to address the computational challenges posed. The proposed algorithm incorporates problem-specific spatial knowledge to significantly enhance the capability of the evolutionary algorithm for solving the model.
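
For contrast with the integrated model proposed here, the popular error-simulation practice that the dissertation moves beyond looks roughly like the following sketch: repeatedly perturb the input data, re-solve a toy selection model, and summarize the spread of outcomes. The greedy model and all parameters are illustrative, not taken from the dissertation.

```python
import random
import statistics

def greedy_select(values, budget):
    """Toy stand-in for a spatial optimization model: pick the
    highest-value units until the unit budget is exhausted."""
    return sum(sorted(values, reverse=True)[:budget])

def uncertainty_envelope(values, budget, noise_sd, trials=1000, seed=1):
    """Re-solve under perturbed inputs and report an approximate 95%
    interval on the achievable objective value."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        noisy = [v + rng.gauss(0, noise_sd) for v in values]
        outcomes.append(greedy_select(noisy, budget))
    mean = statistics.mean(outcomes)
    sd = statistics.stdev(outcomes)
    return mean - 1.96 * sd, mean + 1.96 * sd

rng = random.Random(7)
values = [rng.uniform(10, 20) for _ in range(50)]  # hypothetical unit values
print(uncertainty_envelope(values, budget=10, noise_sd=2.0))
```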
Contributors: Wei, Ran (Author) / Murray, Alan T. (Thesis advisor) / Anselin, Luc (Committee member) / Rey, Sergio J. (Committee member) / Mack, Elizabeth A. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the growth of IT products and sophisticated software across operating systems, security risks in systems are constantly increasing. Consequently, security assessment is now considered one of the primary mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical to understanding its vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security, so it is not straightforward to protect systems from being changed by unwitting users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit commonly used software as well as risky software. In addition, checking the assurance of security configurations only periodically is disadvantageous in terms of time and cost, given zero-day attacks and timing attacks that can leverage the window between security checks. An event-driven monitoring approach is therefore critical: it continuously assesses the security of a target system without ignoring the window between checks and lessens the burden of the exhaustive task of inspecting all configurations in the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if the change relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires applying standards to each environment consistently. Such uniform, consistent assessment is important because the assessment approach for detecting security vulnerabilities may otherwise vary across applications and operating systems. In this thesis, I introduce an automated, event-driven security assessment framework to accommodate and overcome the aforementioned issues. I also discuss the implementation details, which are based on commercial-off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
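
The abstract does not name the commercial-off-the-shelf components used, but the event-driven idea can be illustrated with a minimal sketch using the Python watchdog library: re-assess a monitored configuration against a known-good baseline the moment it changes, rather than waiting for the next periodic scan. The paths and baseline digest below are hypothetical.

```python
import hashlib
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# Hypothetical baseline: path -> expected content digest.
BASELINE = {"/etc/myapp/app.conf": "d41d8cd98f00b204e9800998ecf8427e"}

def digest(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

class ConfigAuditHandler(FileSystemEventHandler):
    """Assess a configuration on each change event, closing the
    window left open between periodic checks."""
    def on_modified(self, event):
        if event.src_path in BASELINE:
            if digest(event.src_path) != BASELINE[event.src_path]:
                print(f"VULNERABILITY REPORT: {event.src_path} deviates from baseline")

observer = Observer()
observer.schedule(ConfigAuditHandler(), "/etc/myapp", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```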
Contributors: Seo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Choropleth maps are a common form of online cartographic visualization. They reveal patterns in the spatial distribution of a variable by associating colors with data values measured at areal units. Although this ability to reveal patterns has popularized the use of choropleth maps, existing methods for their online delivery are limited in supporting dynamic map generation from large areal data. This limitation has become increasingly problematic in online choropleth mapping as access to small-area statistics, such as high-resolution census data and real-time aggregates of geospatial data streams, has never been easier thanks to advances in geospatial web technologies. The current literature shows that the challenge of large areal data can be mitigated through tiled maps, where pre-processed map data are hierarchically partitioned into tiny rectangular images or map chunks for efficient data transmission. Various approaches have emerged lately to enable this tile-based choropleth mapping, yet little empirical evidence exists on their ability to handle spatial data with large numbers of areal units, complicating technical decision making in the development of online choropleth mapping applications. To fill this knowledge gap, this dissertation conducts a scalability evaluation of three tile-based methods discussed in the literature: raster, scalable vector graphics (SVG), and HTML5 Canvas. For the evaluation, the study develops two test applications, generates map tiles from five different boundaries of the United States, and measures the response times of the applications under multiple test operations. While specific to the experimental setups of the study, the evaluation results show that the raster method scales better across various types of user interaction than the other methods. Empirical evidence also points to the superior scalability of Canvas over SVG in dynamic rendering of vector tiles, but not necessarily for partial updates of the tiles. These findings indicate that the raster method is better suited for dynamic choropleth rendering from large areal data, while Canvas would be more suitable than SVG when such rendering frequently involves complete updates of vector shapes.
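
The raster method evaluated here pre-renders classified polygons into fixed-size tile images. A minimal sketch of that step using Pillow is shown below, assuming polygon coordinates have already been projected into a single tile's 256x256 pixel space; the class breaks, palette, and data are made up for illustration.

```python
from PIL import Image, ImageDraw

def color_for(value, breaks, palette):
    """Classify a value into a choropleth color (equal-interval style)."""
    for b, color in zip(breaks, palette):
        if value <= b:
            return color
    return palette[-1]

def render_tile(features, breaks, palette, size=256):
    """Rasterize (polygon, value) pairs into one map tile image.
    Polygons are lists of (x, y) pixel coordinates within this tile."""
    tile = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    draw = ImageDraw.Draw(tile)
    for polygon, value in features:
        draw.polygon(polygon, fill=color_for(value, breaks, palette),
                     outline=(80, 80, 80, 255))
    return tile

# Illustrative usage with made-up areal units and values:
features = [([(10, 10), (120, 20), (90, 140)], 3.2),
            ([(130, 60), (240, 40), (200, 200)], 8.7)]
tile = render_tile(features, breaks=[5, 10],
                   palette=[(254, 224, 210, 255), (222, 45, 38, 255)])
tile.save("tile_0_0.png")
```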

Contributors: Hwang, Myunghwa (Author) / Anselin, Luc (Thesis advisor) / Rey, Sergio J. (Committee member) / Wentz, Elizabeth (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The digital forensics community has neglected email forensics as a process, despite the fact that email remains an important tool in the commission of crime. Current forensic practices focus mostly on disk forensics, with email forensics left as an analysis task stemming from that practice. Because there is no well-defined process for email forensics, the comprehensiveness and consistency of investigations, the extensibility of tools, the uniformity of evidence, and usefulness in collaborative/distributed environments are all hindered. At present, there exists little support for discovering, acquiring, and representing web-based email, despite its widespread use. To remedy this, this thesis presents a systematic process for discovering, acquiring, and representing web-based email that is integrated into the normal forensic analysis workflow and accommodates the distinct characteristics of email evidence. This process focuses on detecting the presence of non-obvious artifacts related to email accounts, retrieving the data from the service provider, and representing email in a well-structured format based on existing standards. As a result, developers and organizations can collaboratively create and use analysis tools that handle email evidence from any source in the same fashion, and examiners can access additional data relevant to their forensic cases. Finally, an extensible framework implementing this process-driven approach is presented to address the problems of comprehensiveness, extensibility, uniformity, collaboration/distribution, and consistency within forensic investigations involving email evidence.
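
As one hedged illustration of the acquisition-and-representation steps, the sketch below pulls raw messages over IMAP (one common provider interface; the dissertation targets web-based email, and its actual retrieval mechanism is not detailed in this abstract) and normalizes each into a uniform record that downstream tools could share.

```python
import email
import imaplib

def acquire_messages(host, user, password, mailbox="INBOX"):
    """Fetch raw RFC 822 messages from a provider over IMAP."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select(mailbox, readonly=True)  # readonly: preserve evidence state
    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        _, parts = conn.fetch(num, "(RFC822)")
        yield parts[0][1]
    conn.logout()

def to_record(raw_bytes):
    """Normalize one message into a uniform, tool-agnostic representation."""
    msg = email.message_from_bytes(raw_bytes)
    return {
        "message_id": msg.get("Message-ID"),
        "from": msg.get("From"),
        "to": msg.get("To"),
        "date": msg.get("Date"),
        "subject": msg.get("Subject"),
    }

# Hypothetical usage (host and credentials are placeholders):
# for raw in acquire_messages("imap.example.com", "user", "secret"):
#     print(to_record(raw))
```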
Contributors: Paglierani, Justin W. (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Santanam, Raghu T. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Mobile application (App) markets with App stores have introduced a new approach to defining and selling software applications, with access to a large, heterogeneous consumer population. Several distinctive features of mobile App store markets, including (a) highly heterogeneous consumer preferences and values, (b) the high cognitive burden consumers face in searching a large selection of similar Apps, and (c) continuously updateable product features and prices, present a unique opportunity for IS researchers to investigate theoretically motivated research questions in this area. The aim of this dissertation is to investigate the key determinants of mobile App success in App store markets. The dissertation is organized into three distinct but related studies. First, using the key tenets of product portfolio management theory and the theory of economies of scope, it empirically investigates how sellers' App portfolio strategies are associated with sales performance over time. Second, the sales performance impacts of App product cues, drawn from App product descriptions and from market formats, are examined using theories of market signaling and cue utilization. Third, the role of App updates in stimulating consumer demand in the presence of strong ranking effects is appraised. The findings highlight the impacts on App market success of sellers' App assortment, strategic product description formulation, and long-term App management through price and feature updates. The dissertation makes key contributions to the IS literature through three managerially and theoretically important findings related to mobile Apps: (1) diversification across selling categories is a key driver of high survival probability in the top charts, (2) product cues strategically presented in descriptions complement market cues in influencing App sales, and (3) continuous quality improvements have long-term effects on App success in the presence of strong ranking effects.
Contributors: Lee, Gun Woong (Author) / Santanam, Raghu (Thesis advisor) / Gu, Bin (Committee member) / Park, Sungho (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Most existing security decisions, for both defending and attacking, are made with deterministic approaches that give only binary answers. Even though these approaches can achieve low false positive rates in decision making, they have high false negative rates because they cannot accommodate new attack methods and defense techniques. In this dissertation, I study how to discover and use patterns with uncertainty and randomness to counter security challenges. By extracting and modeling patterns in security events, I am able to handle previously unknown security events with quantified confidence, rather than simply making binary decisions. In particular, I address the following four real-world security challenges with pattern-based modeling and analysis: 1) How to detect and attribute previously unknown shellcode? I propose an instruction sequence abstraction that extracts coarse-grained patterns from an instruction sequence, and use a Markov chain-based model and support vector machines to detect and attribute shellcode; 2) How to safely mitigate routing attacks in mobile ad hoc networks? I identify routing table change patterns caused by attacks, propose an extended Dempster-Shafer theory to measure the risk of such changes, and use a risk-aware response mechanism to mitigate routing attacks; 3) How to model, understand, and guess human-chosen picture passwords? I analyze collected human-chosen picture passwords, propose a selection function that models patterns in password selection, and design two algorithms that optimize password guessing paths; and 4) How to identify influential figures and events in underground social networks? I analyze collected underground social network data, identify user interaction patterns, and propose a suite of measures for systematically discovering and mining adversarial evidence. By solving these four problems, I demonstrate that discovering and using patterns can help address challenges in computer security, network security, human-computer interaction security, and social network security.
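
For the first challenge, the abstract names a Markov chain over coarse-grained instruction abstractions. A minimal sketch of that modeling idea appears below: estimate transition probabilities over instruction categories from known shellcode, then score new sequences by average log-likelihood. The categories and training data are placeholders, and the SVM stage is omitted.

```python
import math
from collections import defaultdict

def train_markov(sequences, smoothing=1.0):
    """Estimate transition probabilities over instruction categories
    (e.g., 'mov', 'arith', 'branch') from known shellcode sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for seq in sequences:
        states.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a in states:
        total = sum(counts[a].values()) + smoothing * len(states)
        model[a] = {b: (counts[a][b] + smoothing) / total for b in states}
    return model

def log_likelihood(model, seq):
    """Average log-probability of a sequence under the trained chain;
    higher means 'more shellcode-like' relative to the training set."""
    lp = 0.0
    for a, b in zip(seq, seq[1:]):
        lp += math.log(model.get(a, {}).get(b, 1e-9))
    return lp / max(len(seq) - 1, 1)

# Hypothetical training data abstracted from disassembly:
shellcode_like = [["mov", "arith", "branch", "mov"], ["arith", "mov", "branch"]]
model = train_markov(shellcode_like)
print(log_likelihood(model, ["mov", "arith", "branch"]))
```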
Contributors: Zhao, Ziming (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With the advent of technologies such as web services, service-oriented architecture, and cloud computing, modern organizations must manage policies such as firewall policies to secure their networks and XACML (eXtensible Access Control Markup Language) policies to control access to critical information and resources. Management of these policies is an extremely important task for avoiding unintended security leakage through illegal access while maintaining proper access to services for legitimate users. Managing and maintaining access control policies manually over long periods of time is an error-prone task due to their inherently complex nature. Existing tools and mechanisms for policy management use different approaches for different types of policies. This thesis presents a generic framework that provides a unified approach to the analysis and management of different types of policies. The generic approach captures the common semantics and structure of different access control policies with the notion of a policy ontology. The policy ontology representation is then utilized for effectively analyzing and managing the policies. The thesis also discusses a proof-of-concept implementation of the proposed framework and demonstrates how efficiently this unified approach can be used for the analysis and management of different types of access control policies.
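
The unified view can be made concrete with a small sketch: reduce both a firewall rule and an XACML rule to a shared subject/resource/action/effect structure and run a simple conflict check over the result. The field names below are illustrative placeholders, not the ontology defined in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRule:
    """Common semantics shared by firewall and XACML rules."""
    subject: str   # e.g., source IP range or XACML subject attribute
    resource: str  # e.g., destination port or protected resource
    action: str    # e.g., "connect", "read"
    effect: str    # "permit" or "deny"

def find_conflicts(rules):
    """Flag rule pairs that match the same request but disagree on effect."""
    conflicts = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            same_target = (a.subject, a.resource, a.action) == \
                          (b.subject, b.resource, b.action)
            if same_target and a.effect != b.effect:
                conflicts.append((a, b))
    return conflicts

rules = [
    PolicyRule("10.0.0.0/8", "tcp:443", "connect", "permit"),  # firewall-derived
    PolicyRule("10.0.0.0/8", "tcp:443", "connect", "deny"),    # XACML-derived
]
print(find_conflicts(rules))
```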
Contributors: Kulkarni, Ketan (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Regional differences in inventive activity and economic growth are important in economic geography. These differences are generally explained by the theory of localized knowledge spillovers, which argues that geographical proximity among economic actors fosters invention and innovation. However, knowledge production involves an increasing number of actors connecting to non-local partners. The space of knowledge flows is not tightly bounded within a given territory, but functions as a network-based system in which knowledge flows circulate around alignments of actors in different and distant places. The purpose of this dissertation is to understand the dynamics of the network aspects of knowledge flows in American biotechnology. The first research task assesses both spatial and network-based dependencies of biotechnology co-invention across 150 large U.S. metropolitan areas at four points in time (1979, 1989, 1999, and 2009). An integrated methodology combining spatial and social network analyses is applied, and the two perspectives are explicitly compared. Results show that network-based proximity better defines the U.S. biotechnology co-invention urban system in recent years. Co-patenting relationships among major biotechnology centers have demonstrated national and regional association since the 1990s. Associations retain features of spatial proximity, especially in some Midwestern and Northeastern cities, but these are no longer the strongest features affecting co-inventive links. The second research task examines how biotechnology knowledge flows circulate over space by focusing on the structural properties of intermetropolitan co-invention networks. All analyses in this task are conducted using social network analysis. Evidence shows that the architecture of the U.S. co-invention networks reveals a trend toward more organized structure and less fragmentation over the four years of analysis. Metropolitan areas are increasingly interconnected into a large networked web, and knowledge flows are less likely to be controlled by a small number of intermediaries. San Francisco, New York, Boston, and San Diego, as major American biotechnology concentrations, monopolize the central positions of the intermetropolitan co-invention network. The overall network-based system comes close to a relational core/periphery structure in which core metropolitan areas are strongly connected to one another and to some peripheral areas, while peripheral metropolitan areas are loosely connected or even disconnected from each other. This dissertation provides empirical evidence supporting the argument that technological collaboration reveals a network-based system spanning different and even distant geographical places, which differs from the conventional theory of localized knowledge spillovers that once dominated understanding of the role of geography in technological advance.
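
The structural properties described, brokerage by intermediaries and a core/periphery split, correspond to standard social network measures. A minimal sketch with networkx over a hypothetical intermetropolitan co-patenting edge list is shown below; the tie counts are invented for illustration.

```python
import networkx as nx

# Hypothetical co-invention ties: (metro A, metro B, number of co-patents)
edges = [("San Francisco", "Boston", 42), ("San Francisco", "San Diego", 37),
         ("New York", "Boston", 29), ("San Francisco", "New York", 25),
         ("Boston", "San Diego", 12), ("New York", "Phoenix", 3)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Betweenness identifies metros that broker knowledge flows between others.
brokers = nx.betweenness_centrality(G)

# Weighted degree (total co-patents) separates a densely tied core
# from loosely connected peripheral metros.
strength = dict(G.degree(weight="weight"))

for metro in sorted(strength, key=strength.get, reverse=True):
    print(f"{metro:15s} strength={strength[metro]:4d} "
          f"betweenness={brokers[metro]:.2f}")
```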
Contributors: Lee, Der-Shiuan (Author) / Ó Huallacháin, Breandán (Thesis advisor) / Anselin, Luc (Committee member) / Kuby, Michael (Committee member) / Lobo, Jose (Committee member) / Arizona State University (Publisher)
Created: 2011