Matching Items (12)

A Geospatial Cyberinfrastructure for Urban Economic Analysis and Spatial Decision-Making

Description

Urban economic modeling and effective spatial planning are critical tools towards achieving urban sustainability. However, in practice, many technical obstacles, such as information islands, poor documentation of data and lack of software platforms to facilitate virtual collaboration, are challenging the effectiveness of decision-making processes. In this paper, we report on our efforts to design and develop a geospatial cyberinfrastructure (GCI) for urban economic analysis and simulation. This GCI provides an operational graphic user interface, built upon a service-oriented architecture to allow (1) widespread sharing and seamless integration of distributed geospatial data; (2) an effective way to address the uncertainty and positional errors encountered in fusing data from diverse sources; (3) the decomposition of complex planning questions into atomic spatial analysis tasks and the generation of a web service chain to tackle such complex problems; and (4) capturing and representing provenance of geospatial data to trace its flow in the modeling task. The Greater Los Angeles Region serves as the test bed. We expect this work to contribute to effective spatial policy analysis and decision-making through the adoption of advanced GCI and to broaden the application coverage of GCI to include urban economic simulations.

Date Created
  • 2013-05-21

Open Geospatial Analytics with PySAL

Description

This article reviews the range of delivery platforms that have been developed for the PySAL open source Python library for spatial analysis. This includes traditional desktop software (with a graphical user interface, command line or embedded in a computational notebook), open spatial analytics middleware, and web, cloud and distributed open geospatial analytics for decision support. A common thread throughout the discussion is the emphasis on openness, interoperability, and provenance management in a scientific workflow. The code base of the PySAL library provides the common computing framework underlying all delivery mechanisms.
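PySAL itself provides these statistics directly, but the core idea is compact enough to sketch in plain NumPy. The following illustrative example computes global Moran's I, one of the spatial autocorrelation measures PySAL implements, on a small lattice with rook-contiguity weights; the function names here are our own, not PySAL's API.

```python
import numpy as np

def rook_weights(rows, cols):
    """Binary rook-contiguity weight matrix for a rows x cols lattice."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if r + 1 < rows:           # neighbor below
                j = (r + 1) * cols + c
                W[i, j] = W[j, i] = 1
            if c + 1 < cols:           # neighbor to the right
                j = r * cols + (c + 1)
                W[i, j] = W[j, i] = 1
    return W

def morans_i(y, W):
    """Global Moran's I: I = (n / S0) * (z'Wz) / (z'z), z = y - mean(y)."""
    z = y - y.mean()
    return len(y) / W.sum() * (z @ W @ z) / (z @ z)

# A strongly clustered surface (left half low, right half high)
W = rook_weights(4, 4)
y = np.array([0, 0, 1, 1] * 4, dtype=float)
print(round(morans_i(y, W), 3))  # → 0.667, positive spatial autocorrelation
```

In PySAL the equivalent computation (with permutation-based inference on top) is available through the `esda` package's `Moran` class together with weights objects from `libpysal`.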

Date Created
  • 2015-06-01

Addressing geographic uncertainty in spatial optimization

Description

There exist many facets of error and uncertainty in digital spatial information. As error or uncertainty will likely never be completely eliminated, a better understanding of its impacts is necessary. Spatial analytical approaches, in particular, must somehow address data quality issues. This can range from evaluating the impacts of potential data uncertainty on planning processes that make use of such methods to devising methods that explicitly account for error and uncertainty. To date, little has been done to structure methods accounting for error. This research focuses on developing methods to address geographic data uncertainty in spatial optimization. An integrated approach is developed that characterizes uncertainty impacts by constructing and solving a new multi-objective model that explicitly incorporates facets of data uncertainty. Empirical findings illustrate that the proposed approaches can be applied to evaluate the impacts of data uncertainty with statistical confidence, which moves beyond the popular practice of simulating errors in data. Spatial uncertainty impacts are evaluated in two contexts: harvest scheduling and sex offender residency. Owing to the integration of spatial uncertainty, the detailed multi-objective models are more complex and computationally challenging to solve. As a result, a new multi-objective evolutionary algorithm is developed to address the computational challenges posed. The proposed algorithm incorporates problem-specific spatial knowledge to significantly enhance the capability of the evolutionary algorithm for solving the model.

Date Created
  • 2013

Intermetropolitan networks of co-invention in American biotechnology

Description

Regional differences of inventive activity and economic growth are important in economic geography. These differences are generally explained by the theory of localized knowledge spillovers, which argues that geographical proximity among economic actors fosters invention and innovation. However, knowledge production involves an increasing number of actors connecting to non-local partners. The space of knowledge flows is not tightly bounded in a given territory, but functions as a network-based system where knowledge flows circulate around alignments of actors in different and distant places. The purpose of this dissertation is to understand the dynamics of network aspects of knowledge flows in American biotechnology. The first research task assesses both spatial and network-based dependencies of biotechnology co-invention across 150 large U.S. metropolitan areas over four decades (1979, 1989, 1999, and 2009). An integrated methodology including both spatial and social network analyses is explicitly applied and compared. Results show that network-based proximity better defines the U.S. biotechnology co-invention urban system in recent years. Co-patenting relationships of major biotechnology centers have demonstrated national and regional association since the 1990s. Associations retain features of spatial proximity especially in some Midwestern and Northeastern cities, but these are no longer the strongest features affecting co-inventive links. The second research task examines how biotechnology knowledge flows circulate over space by focusing on the structural properties of intermetropolitan co-invention networks. All analyses in this task are conducted using social network analysis. Evidence shows that the architecture of the U.S. co-invention networks reveals a trend toward more organized structures and less fragmentation over the four years of analysis. Metropolitan areas are increasingly interconnected into a large networked web.
Knowledge flows are less likely to be controlled by a small number of intermediaries. San Francisco, New York, Boston, and San Diego monopolize the central positions of the intermetropolitan co-invention network as major American biotechnology concentrations. The overall network-based system comes close to a relational core/periphery structure where core metropolitan areas are strongly connected to one another and to some peripheral areas. Peripheral metropolitan areas are loosely connected or even disconnected with each other. This dissertation provides empirical evidence to support the argument that technological collaboration reveals a network-based system associated with different or even distant geographical places, which is somewhat different from the conventional theory of localized knowledge spillovers that once dominated understanding of the role of geography in technological advance.

Date Created
  • 2011

Essays on space-time interaction tests

Description

Researchers across a variety of fields are often interested in determining if data are of a random nature or if they exhibit patterning which may be the result of some alternative and potentially more interesting process. This dissertation explores a family of statistical methods, i.e. space-time interaction tests, designed to detect structure within three-dimensional event data. These tests, widely employed in the fields of spatial epidemiology, criminology, ecology and beyond, are used to identify synergistic interaction across the spatial and temporal dimensions of a series of events. Exploration is needed to better understand these methods and determine how their results may be affected by data quality problems commonly encountered in their implementation; specifically, how inaccuracy and/or uncertainty in the input data analyzed by the methods may impact subsequent results. Additionally, known shortcomings of the methods must be ameliorated. The contributions of this dissertation are twofold: it develops a more complete understanding of how input data quality problems impact the results of a number of global and local tests of space-time interaction and it formulates an improved version of one global test which accounts for the previously identified problem of population shift bias. A series of simulation experiments reveal the global tests of space-time interaction explored here to be dramatically affected by the aforementioned deficiencies in the quality of the input data. It is shown that in some cases, a conservative degree of these common data problems can completely obscure evidence of space-time interaction and in others create it where it does not exist. Conversely, a local metric of space-time interaction examined here demonstrates a surprising robustness in the face of these same deficiencies. This local metric is revealed to be only minimally affected by the inaccuracies and incompleteness introduced in these experiments. 
Finally, enhancements to one of the global tests are presented which solve the problem of population shift bias associated with the test and better contextualize and visualize its results, thereby enhancing its utility for practitioners.
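A global test of space-time interaction of the kind studied here can be illustrated with the classic Knox statistic, which counts event pairs that are close in both space and time and assesses significance by permuting timestamps. This is a generic sketch with made-up thresholds and data, not the dissertation's improved test.

```python
import numpy as np

def knox(coords, times, delta, tau):
    """Knox statistic: number of event pairs within distance delta
    of each other AND within time lag tau of each other."""
    n = len(times)
    stat = 0
    for i in range(n):
        for j in range(i + 1, n):
            near = np.hypot(*(coords[i] - coords[j])) <= delta
            soon = abs(times[i] - times[j]) <= tau
            if near and soon:
                stat += 1
    return stat

def knox_test(coords, times, delta, tau, permutations=999, seed=0):
    """Permutation p-value: shuffling times breaks any space-time link,
    so the observed statistic is compared against the shuffled ones."""
    rng = np.random.default_rng(seed)
    observed = knox(coords, times, delta, tau)
    exceed = sum(
        knox(coords, rng.permutation(times), delta, tau) >= observed
        for _ in range(permutations)
    )
    return observed, (exceed + 1) / (permutations + 1)

# Two space-time clusters: events 0-1 and events 2-3 are close in both dimensions
coords = np.array([[0, 0], [0, 1], [10, 0], [10, 1]], dtype=float)
times = np.array([0, 1, 10, 11], dtype=float)
print(knox(coords, times, delta=2, tau=2))  # → 2 interacting pairs
```

The population shift bias addressed in the dissertation arises precisely because this kind of permutation scheme assumes the at-risk population is stable over the study period.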

Date Created
  • 2013

Tile-based methods for online choropleth mapping: a scalability evaluation

Description

Choropleth maps are a common form of online cartographic visualization. They reveal patterns in spatial distributions of a variable by associating colors with data values measured at areal units. Although this capability of pattern revelation has popularized the use of choropleth maps, existing methods for their online delivery are limited in supporting dynamic map generation from large areal data. This limitation has become increasingly problematic in online choropleth mapping as access to small area statistics, such as high-resolution census data and real-time aggregates of geospatial data streams, has never been easier due to advances in geospatial web technologies. The current literature shows that the challenge of large areal data can be mitigated through tiled maps where pre-processed map data are hierarchically partitioned into tiny rectangular images or map chunks for efficient data transmission. Various approaches have emerged lately to enable this tile-based choropleth mapping, yet little empirical evidence exists on their ability to handle spatial data with large numbers of areal units, thus complicating technical decision making in the development of online choropleth mapping applications. To fill this knowledge gap, this dissertation study conducts a scalability evaluation of three tile-based methods discussed in the literature: raster, scalable vector graphics (SVG), and HTML5 Canvas. For the evaluation, the study develops two test applications, generates map tiles from five different boundaries of the United States, and measures the response times of the applications under multiple test operations. While specific to the experimental setups of the study, the evaluation results show that the raster method scales better across various types of user interaction than the other methods. Empirical evidence also points to the superior scalability of Canvas to SVG in dynamic rendering of vector tiles, but not necessarily for partial updates of the tiles. 
These findings indicate that the raster method is better suited for dynamic choropleth rendering from large areal data, while Canvas would be more suitable than SVG when such rendering frequently involves complete updates of vector shapes.
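The tiling schemes compared in the study all rest on the same hierarchical partitioning used by standard web maps: at zoom level z the world is split into a 2^z by 2^z grid, and each coordinate maps to a tile index. A minimal sketch of that standard XYZ ("slippy map") indexing, independent of whether the tile is ultimately rendered as raster, SVG, or Canvas:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a WGS84 lon/lat to its XYZ tile index at the given zoom level,
    using the Web Mercator tiling scheme (2**zoom tiles per axis)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 2))  # → (2, 2): the tile just southeast of (0, 0)
```

In practice latitudes are clamped to roughly ±85.05° (the Web Mercator limit) before applying the y formula; tile servers then address pre-generated map chunks by the resulting z/x/y key.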

Date Created
  • 2013

Spatiotemporal data mining, analysis, and visualization of human activity data

Description

This dissertation addresses the research challenge of developing efficient new methods for discovering useful patterns and knowledge in large volumes of electronically collected spatiotemporal activity data. I propose to analyze three types of such spatiotemporal activity data in a methodological framework that integrates spatial analysis, data mining, machine learning, and geovisualization techniques. Three different types of spatiotemporal activity data were collected through different data collection approaches: (1) crowdsourced geo-tagged digital photos, representing people's travel activity, were retrieved from the website Panoramio.com through information retrieval techniques; (2) the same techniques were used to crawl crowdsourced GPS trajectory data and related metadata of their daily activities from the website OpenStreetMap.org; and finally (3) preschool children's daily activities and interactions tagged with time and geographical location were collected with a novel TabletPC-based behavioral coding system. The proposed methodology is applied to these data to (1) automatically recommend optimal multi-day and multi-stay travel itineraries for travelers based on discovered attractions from geo-tagged photos, (2) automatically detect movement types of unknown moving objects from GPS trajectories, and (3) explore dynamic social and socio-spatial patterns of preschool children's behavior from both geographic and social perspectives.
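The dissertation's movement-type detection draws on machine learning over trajectory features; a much simpler, hypothetical baseline conveys the idea: derive a speed for each GPS segment and threshold it into movement classes. The thresholds and function names below are illustrative, not the dissertation's.

```python
import math

def classify_segments(points, walk_max=2.0, bike_max=7.0):
    """Label each GPS segment walk/bike/vehicle by its average speed (m/s).
    points: list of (x_m, y_m, t_s) tuples in a projected coordinate system."""
    labels = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-9)
        if speed <= walk_max:
            labels.append("walk")
        elif speed <= bike_max:
            labels.append("bike")
        else:
            labels.append("vehicle")
    return labels

# Segments at 1 m/s, 5 m/s, and 24 m/s
pts = [(0, 0, 0), (1, 0, 1), (6, 0, 2), (30, 0, 3)]
print(classify_segments(pts))  # → ['walk', 'bike', 'vehicle']
```

Real trajectory classifiers add features such as acceleration, heading change, and stop duration, and learn the decision boundaries from labeled data rather than fixing them by hand.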

Date Created
  • 2012

A taxonomy of parallel vector spatial analysis algorithms

Description

Nearly 25 years ago, parallel computing techniques were first applied to vector spatial analysis methods. This initial research was driven by the desire to reduce computing times in order to support scaling to larger problem sets. Since this initial work, rapid technological advancement has driven the availability of High Performance Computing (HPC) resources, in the form of multi-core desktop computers, distributed geographic information processing systems, e.g. computational grids, and single-site HPC clusters. In step with increases in computational resources, significant advancement in the capabilities to capture and store large quantities of spatially enabled data has been realized. A key component to utilizing vast data quantities in HPC environments, scalable algorithms, has failed to keep pace. The National Science Foundation has identified scalable algorithms, codified in frameworks, as an essential research product. Fulfillment of this goal is challenging given the lack of a codified theoretical framework mapping atomic numeric operations from the spatial analysis stack to parallel programming paradigms, the diversity in vernacular utilized by research groups, the propensity for implementations to tightly couple to underlying hardware, and the general difficulty in realizing scalable parallel algorithms. This dissertation develops a taxonomy of parallel vector spatial analysis algorithms with classification being defined by root mathematical operation and communication pattern, a computational dwarf. Six computational dwarfs are identified, three being drawn directly from an existing parallel computing taxonomy and three being created to capture characteristics unique to spatial analysis algorithms. The taxonomy provides a high-level classification decoupled from low-level implementation details such as hardware, communication protocols, implementation language, decomposition method, or file input and output.
By taking a high-level approach, implementation specifics are broadly proposed, breadth of coverage is achieved, and extensibility is ensured. The taxonomy both informs and is informed by five case studies implemented across multiple, divergent hardware environments. A major contribution of this dissertation is a theoretical framework to support the future development of concrete parallel vector spatial analysis frameworks through the identification of computational dwarfs and, by extension, successful implementation strategies.

Date Created
  • 2015

The centralization index as a measure of local spatial segregation

Description

Decades ago in the U.S., clear lines delineated which neighborhoods were acceptable for certain people and which were not. Techniques such as steering and biased mortgage practices continue to perpetuate a segregated outcome for many residents. In contrast, ethnic enclaves and age-restricted communities are viewed as voluntary segregation based on cultural and social amenities. This diversity surrounding the causes of segregation is not just a region-wide characteristic, but can vary within a region. Local segregation analysis aims to uncover this local variation, and hence open the door to policy solutions not visible at the global scale. The centralization index, originally introduced as a global measure of segregation focused on the spatial concentration of two population groups relative to a region's urban center, has lost relevancy in recent decades as regions have become polycentric, and the index's magnitude is sensitive to the particular point chosen as the center. These attributes, which make it a poor global measure, are leveraged here to repurpose the index as a local measure. The index's ability to differentiate minority from majority segregation, and its focus on a particular location within a region, make it an ideal local segregation index. Based on the local centralization index for two groups, a local multigroup variation is defined, and a local space-time redistribution index is presented capturing change in concentration of a single population group over two time periods. Permutation-based inference approaches are used to test the statistical significance of measured index values. Applications to the Phoenix, Arizona metropolitan area show persistent cores of black and white segregation over the years 1990, 2000 and 2010, and a trend of white segregated neighborhoods increasing at a faster rate than black.
An analysis of the Phoenix area's recently opened light rail system shows that its 28 stations are located in areas of significant white, black and Hispanic segregation, and there is a clear concentration of renters over owners around most stations. There is little indication of statistically significant change in segregation or population concentration around the stations, indicating a lack of near term impact of light rail on the region's overall demographics.
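The two-group centralization index underlying this work can be sketched directly. Ordering areal units by distance from the chosen center and accumulating each group's population share gives the classic form CE = Σ X_{i-1}Y_i − Σ X_iY_{i-1}. This minimal NumPy version (our own naming, with toy data) returns +1 when the minority group is fully concentrated at the center:

```python
import numpy as np

def centralization(minority, majority, dist):
    """Two-group centralization index around a chosen center point.
    Areal units are ordered by distance from the center; X and Y are the
    cumulative population shares of the two groups in that order.
    Ranges from -1 (minority decentralized) to +1 (minority centralized)."""
    order = np.argsort(dist)
    x = np.cumsum(np.asarray(minority, float)[order]) / np.sum(minority)
    y = np.cumsum(np.asarray(majority, float)[order]) / np.sum(majority)
    return float(np.sum(x[:-1] * y[1:]) - np.sum(x[1:] * y[:-1]))

# Minority entirely in the unit nearest the center, majority farther out
print(centralization([100, 0, 0], [0, 50, 50], dist=[0, 1, 2]))  # → 1.0
```

The local variant studied in the dissertation follows the same recipe with the center placed at the location of interest (e.g. a light rail station), and inference proceeds by permuting group counts across areal units, as described above.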

Date Created
  • 2012

Outsourcing of IT services: studies on diffusion and new theoretical perspectives

Description

Information technology (IT) outsourcing, including foreign or offshore outsourcing, has been steadily growing over the last two decades. This growth in IT outsourcing has led to the development of different hubs of services across nations, and has resulted in increased competition among service providers. Firms have been using IT outsourcing to not only leverage advanced technologies and services at lower costs, but also to maintain their competitive edge and grow. Furthermore, as prior studies have shown, there are systematic differences among industries in terms of the degree and impact of IT outsourcing. This dissertation uses a three-study approach to investigate issues related to IT outsourcing at the macro and micro levels, and provides different perspectives for understanding the issues associated with IT outsourcing at a firm and industry level. The first study evaluates the diffusion patterns of IT outsourcing across industries at an aggregate level and within industries at a firm level. In addition, it analyzes the factors that influence the diffusion of IT outsourcing and tests models that help us understand the rate and patterns of diffusion at the industry level. This study establishes the presence of hierarchical contagion effects in the diffusion of IT outsourcing. The second study explores the role of location and proximity of industries to understand the diffusion patterns of IT outsourcing within clusters using the spatial analysis technique of space-time clustering. It establishes the presence of simultaneous space and time interactions at the global level in the diffusion of IT outsourcing. The third study examines the development of specialized hubs for IT outsourcing services in four developing economies: Brazil, Russia, India, and China (BRIC). In this study, I adopt a theory-building approach involving the identification of explanatory anomalies, and propose a new hybrid theory called knowledge network theory.
The proposed theory suggests that the growth and development of the IT and related services sector is a result of close interactions among adaptive institutions. It is also based on new knowledge that is created, and which flows through a country's national diaspora of expatriate entrepreneurs, technologists and business leaders. In addition, relevant economic history and regional geography factors are important. This view diverges from the traditional view, wherein effective institutions are considered to be the key determinants of long-term economic growth.

Date Created
  • 2012