Matching Items (19)
Description
The humanities as a discipline have typically not used a rigid or technical method of assessment in the process of analysis. GIScience offers numerous benefits to this discipline by bringing rigorous spatial analysis to humanities questions. Photography studios developed in the mid-19th century as a highly popular business and emerging technology. This project was initiated by Dr. Jeremy Rowe with support from the ASU Emeritus College Research and Creative Activity and Undergraduate Research Initiative grants, and seeks to use GIS tools to understand the explosive growth of photography studios in the New York City area, specifically Manhattan and Brooklyn. Demonstrated in this project are several capabilities of the ESRI online GIS, including queries for year information, a tool showing growth over time, and a generated density map of photography studios.
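As a rough illustration of the density-map capability described above, a grid-based point density can be computed by counting features per cell. This is a minimal NumPy sketch with randomly generated placeholder coordinates, not the project's actual studio data or its ESRI workflow:

```python
import numpy as np

def point_density(lons, lats, bins=50):
    """Count points per grid cell -- a simple raster density surface
    of the kind a GIS density-map tool produces."""
    grid, xedges, yedges = np.histogram2d(lons, lats, bins=bins)
    return grid, xedges, yedges

# Hypothetical studio locations roughly spanning Manhattan/Brooklyn
rng = np.random.default_rng(0)
lons = rng.uniform(-74.05, -73.90, 500)
lats = rng.uniform(40.60, 40.80, 500)
grid, _, _ = point_density(lons, lats, bins=20)
```

Cells with high counts correspond to studio clusters; a real GIS tool would additionally smooth these counts with a kernel.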
ContributorsAbeln, Garrett James (Author) / Li, Wenwen (Thesis director) / Rowe, Jeremy (Committee member) / Barrett, The Honors College (Contributor) / School of Geographical Sciences and Urban Planning (Contributor) / School of Politics and Global Studies (Contributor)
Created2015-05
Description

This study seeks to determine the role of land architecture—the composition and configuration of land cover—as well as cadastral/demographic/economic factors on land surface temperature (LST) and the surface urban heat island effect of Phoenix, Arizona. It employs 1 m National Agricultural Imagery Program land-cover data; 120 m Landsat-derived land surface temperature, decomposed to 30 m; a new measure of configuration, the normalized moment of inertia; and U.S. Census data to address the question for two randomly selected samples comprising 523 and 545 residential neighborhoods (census blocks) in the city. The results indicate that, contrary to most other studies, land configuration has a stronger influence on LST than land composition. In addition, land configuration and architecture combined with cadastral, demographic, and economic variables capture a significant amount of the explained variance in LST. The results indicate that attention to land architecture in the development or reshaping of neighborhoods may ameliorate summer extremes in LST.
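The normalized moment of inertia (NMI) mentioned above can be sketched for a binary land-cover patch. This sketch assumes the common normalization NMI = A^2 / (2*pi*I), under which a circular patch scores 1 and elongated patches score lower; this is an assumed form for illustration, not necessarily the exact formulation used in the study:

```python
import numpy as np

def nmi(mask):
    """Normalized moment of inertia of a binary patch on a pixel grid.
    Assumes NMI = A^2 / (2*pi*I), where A is the patch area and I is
    its moment of inertia about the centroid; compact (circle-like)
    shapes approach 1, elongated shapes approach 0."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    inertia = ((xs - cx) ** 2 + (ys - cy) ** 2).sum()
    return area ** 2 / (2 * np.pi * inertia)

square = np.ones((10, 10), dtype=int)   # compact patch
strip = np.ones((2, 50), dtype=int)     # elongated patch, same area
```

Two patches of identical area thus receive different configuration scores, which is exactly what lets configuration carry information beyond composition.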

ContributorsLi, Xiaoxiao (Author) / Li, Wenwen (Author) / Middel, Ariane (Author) / Harlan, Sharon L. (Author) / Brazel, Anthony J. (Author) / Turner II, B. L. (Author)
Created2015-12-29
Description
This article reviews the range of delivery platforms that have been developed for the PySAL open source Python library for spatial analysis. This includes traditional desktop software (with a graphical user interface, command line or embedded in a computational notebook), open spatial analytics middleware, and web, cloud and distributed open geospatial analytics for decision support. A common thread throughout the discussion is the emphasis on openness, interoperability, and provenance management in a scientific workflow. The code base of the PySAL library provides the common computing framework underlying all delivery mechanisms.
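A representative example of the spatial analysis that the PySAL code base delivers across these platforms is the global Moran's I autocorrelation statistic. The following is a self-contained NumPy sketch of the statistic itself (in PySAL this is provided as a library routine, e.g. via its Moran class, together with proper spatial-weights handling; the dense-matrix form and toy data here are illustrative only):

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I: spatial autocorrelation of attribute y given
    a dense spatial-weights matrix w (n x n).
    I = (n / sum(w)) * (z' W z) / (z' z), with z = y - mean(y)."""
    z = y - y.mean()
    num = z @ w @ z
    den = (z ** 2).sum()
    return (len(y) / w.sum()) * (num / den)

# Hypothetical rook-contiguity weights on a chain of 5 locations
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0

trend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # smoothly varying values
```

A smooth spatial trend like `trend` yields a positive I (here 0.5), indicating that neighboring locations carry similar values.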
ContributorsRey, Sergio (Author) / Anselin, Luc (Author) / Li, Xun (Author) / Pahle, Robert (Author) / Laura, Jason (Author) / Li, Wenwen (Author) / Koschinsky, Julia (Author) / College of Liberal Arts and Sciences (Contributor) / School of Geographical Sciences and Urban Planning (Contributor) / Computational Spatial Science (Contributor)
Created2015-06-01
Description
Urban economic modeling and effective spatial planning are critical tools towards achieving urban sustainability. However, in practice, many technical obstacles, such as information islands, poor documentation of data and lack of software platforms to facilitate virtual collaboration, are challenging the effectiveness of decision-making processes. In this paper, we report on our efforts to design and develop a geospatial cyberinfrastructure (GCI) for urban economic analysis and simulation. This GCI provides an operational graphic user interface, built upon a service-oriented architecture to allow (1) widespread sharing and seamless integration of distributed geospatial data; (2) an effective way to address the uncertainty and positional errors encountered in fusing data from diverse sources; (3) the decomposition of complex planning questions into atomic spatial analysis tasks and the generation of a web service chain to tackle such complex problems; and (4) capturing and representing provenance of geospatial data to trace its flow in the modeling task. The Greater Los Angeles Region serves as the test bed. We expect this work to contribute to effective spatial policy analysis and decision-making through the adoption of advanced GCI and to broaden the application coverage of GCI to include urban economic simulations.
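The decomposition of a complex planning question into atomic spatial analysis tasks chained together, point (3) above, can be sketched as simple function composition. The task names and toy parcel records below are hypothetical illustrations of the pattern, not the GCI's actual web service interfaces:

```python
from functools import reduce

def chain(*steps):
    """Compose atomic analysis tasks into a service chain: the output
    of each step feeds the next, mirroring a web service chain."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Hypothetical atomic tasks in an urban-economic workflow
def clean(records):
    """Drop records with missing values (data-quality step)."""
    return [r for r in records if None not in r.values()]

def to_density(records):
    """Derive job density (jobs per unit area) for each parcel."""
    return [{**r, "density": r["jobs"] / r["area"]} for r in records]

def top_k(records, k=1):
    """Rank parcels by density and keep the top k."""
    return sorted(records, key=lambda r: r["density"], reverse=True)[:k]

workflow = chain(clean, to_density, top_k)

parcels = [
    {"id": "A", "jobs": 120, "area": 2.0},
    {"id": "B", "jobs": None, "area": 1.0},   # incomplete record
    {"id": "C", "jobs": 90, "area": 1.0},
]
```

In a real GCI each step would be a remote geoprocessing service and the chain would also record provenance, per point (4).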
Created2013-05-21
Description
This thesis encompasses a comprehensive research effort dedicated to overcoming the critical bottlenecks that hinder the current generation of neural networks, thereby significantly advancing their reliability and performance. Deep neural networks, with their millions of parameters, suffer from over-parameterization and lack of constraints, leading to limited generalization capabilities. In other words, the complex architecture and millions of parameters present challenges in finding the right balance between capturing useful patterns and avoiding noise in the data. To address these issues, this thesis explores novel solutions based on knowledge distillation, enabling the learning of robust representations. Leveraging the capabilities of large-scale networks, effective learning strategies are developed. Moreover, the limitations of dependency on external networks in the distillation process, which often require large-scale models, are effectively overcome by proposing a self-distillation strategy. The proposed approach empowers the model to generate high-level knowledge within a single network, pushing the boundaries of knowledge distillation. The effectiveness of the proposed method is demonstrated not only across diverse applications, including image classification, object detection, and semantic segmentation, but also in practical considerations such as handling data scarcity and assessing the transferability of the model to other learning tasks. Another major obstacle hindering the development of reliable and robust models lies in their black-box nature, impeding clear insights into the contributions toward the final predictions and yielding uninterpretable feature representations. To address this challenge, this thesis introduces techniques that incorporate simple yet powerful deep constraints rooted in Riemannian geometry.
These constraints confer geometric qualities upon the latent representation, thereby fostering a more interpretable and insightful representation. In addition to its primary focus on general tasks like image classification and activity recognition, this strategy offers significant benefits in real-world applications where data scarcity is prevalent. Moreover, its robustness in feature removal showcases its potential for edge applications. By successfully tackling these challenges, this research contributes to advancing the field of machine learning and provides a foundation for building more reliable and robust systems across various application domains.
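For background on the distillation objective underlying this line of work: the standard teacher-student loss (Hinton-style distillation) is a temperature-scaled Kullback-Leibler divergence between the teacher's and student's softened output distributions. This is a minimal NumPy sketch of that generic objective, not the thesis's own self-distillation implementation; function names and the temperature value are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable softmax over a 1-D logit vector at temperature T."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL divergence KL(teacher || student),
    multiplied by T^2 as is conventional so gradient magnitudes stay
    comparable across temperatures."""
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)   # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
```

In self-distillation, both sets of logits come from a single network (e.g. from different depths or training stages) rather than from a separate large teacher model.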
ContributorsChoi, Hongjun (Author) / Turaga, Pavan (Thesis advisor) / Jayasuriya, Suren (Committee member) / Li, Wenwen (Committee member) / Fazli, Pooyan (Committee member) / Arizona State University (Publisher)
Created2023
Description
Integrated water resources management for flood control, water distribution, conservation, and food security requires understanding hydrological spatial and temporal trends. The proliferation of monitoring and sensor data has boosted data-driven simulation and evaluation. Developing data-driven models for such physical process-related phenomena, and meaningful interpretability therein, necessitates an inventive methodology. In this dissertation, I developed time series and deep learning models that connect rainfall, runoff, and fish species abundances. I also investigated the underlying explainability of hydrological processes and their impacts on fish species. First, I created a streamflow simulation model using computer vision and natural language processing as an alternative to physics-based routing. I tested it on seven US river network sections and showed that it outperformed time series models, deep learning baselines, and novel variants. In addition, my model explained flow routing without physical parameter input or time-consuming calibration. Building on this model, I expanded it from accepting dispersed spatial inputs to adopting comprehensive 2D grid data, constructing a spatial-temporal deep learning model for rainfall-runoff simulation. I tested it against a semi-distributed hydrological model and found superior results. Furthermore, I investigated the potential interpretability of the rainfall-runoff process in both space and time. To understand the impacts of flow variation on fish species, I applied a frequency-based model framework for long-term time series simulation. First, I discovered that the timing of hydrological anomalies was as crucial as their size: flooding and drought, when properly timed, were both linked with excellent fish productivity. To identify the responses of various fish trait groups, I used this model to assess how hydrological variation is mediated by fish attributes. Longitudinal migratory fish species were more impacted by flow variance, whereas migratory-strategy species reacted in the same direction but to varying degrees. Finally, I investigated future fish population changes under alternative design-flow scenarios and showed that a protracted low flow with a powerful, well-timed flood pulse would benefit fish. In my dissertation, I constructed three data-driven models that link the hydrological cycle to the stream environment and give insight into the underlying physical processes, which is vital for quantitative, efficient, and integrated water resource management.
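To make the rainfall-runoff setting concrete, a classic conceptual baseline of the kind that data-driven models are compared against is the linear reservoir: storage gains precipitation and drains as runoff proportional to storage. This is a generic textbook sketch, not the dissertation's deep learning model; the recession constant `k` and the pulse input are illustrative:

```python
def linear_reservoir(precip, k=5.0, s0=0.0):
    """Minimal conceptual rainfall-runoff baseline. Each time step the
    storage S gains precipitation P and releases runoff Q = S / k,
    producing the characteristic exponential recession after a storm."""
    s, runoff = s0, []
    for p in precip:
        s += p
        q = s / k
        s -= q
        runoff.append(q)
    return runoff

# A single 10 mm rainfall pulse followed by dry steps
hydrograph = linear_reservoir([10, 0, 0, 0, 0])
```

The resulting hydrograph peaks at the pulse and then recedes monotonically, which is the behavior a learned routing model must reproduce without hand-set parameters like `k`.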
ContributorsDeng, Qi (Author) / Sabo, John (Thesis advisor) / Grimm, Nancy (Thesis advisor) / Ganguly, Auroop (Committee member) / Li, Wenwen (Committee member) / Mascaro, Giuseppe (Committee member) / Arizona State University (Publisher)
Created2022
Description
Big data that contain geo-referenced attributes have significantly reformed the way that I process and analyze geospatial data. Compared with the benefits expected in a data-rich environment, more data have not always contributed to more accurate analysis. "Big but valueless" has become a critical concern for the community of GIScience and data-driven geography. As highly utilized GeoAI techniques, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks across various dimensions of geography to effectively discover representations of data. However, limitations of these deep learning models have also been reported; for instance, practitioners may have to spend considerable time preparing training data before a deep learning model can be implemented. The objective of this dissertation research is to promote state-of-the-art deep learning models in discovering the representation, value, and hidden knowledge of GIS and remote sensing data, through three research approaches. The first methodological framework aims to unify multifarious shadow shapes into a limited number of representative shadow patterns, using convolutional neural network (CNN)-powered shape classification, for efficient shadow-based building height estimation. The second research focus integrates semantic analysis into a framework of various state-of-the-art CNNs to support human-level understanding of map content. The final research approach of this dissertation focuses on normalizing geospatial domain knowledge to promote the transferability of a CNN model to land-use/land-cover classification. This research reports a method designed to discover detailed land-use/land-cover types that might be challenging for a state-of-the-art CNN model that previously performed well on land-cover classification only.
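The geometric principle behind shadow-based building height estimation is simple solar trigonometry: a building of height h casts a shadow of length L = h / tan(solar elevation). The CNN framework above classifies shadow shapes; the final height conversion reduces to this relation. A minimal sketch (the function name is illustrative, and real pipelines must also correct for terrain and sensor geometry):

```python
import math

def building_height(shadow_length_m, solar_elevation_deg):
    """Estimate building height from measured shadow length and the
    sun's elevation angle at acquisition time: h = L * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(solar_elevation_deg))
```

For example, a 20 m shadow under a 45-degree sun implies a roughly 20 m building.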
ContributorsZhou, Xiran (Author) / Li, Wenwen (Thesis advisor) / Myint, Soe Win (Committee member) / Arundel, Samantha Thompson (Committee member) / Arizona State University (Publisher)
Created2019
Description
In the field of Geographic Information Science (GIScience), we have witnessed the unprecedented data deluge brought about by the rapid advancement of high-resolution data observing technologies. For example, with the advancement of Earth Observation (EO) technologies, a massive amount of EO data including remote sensing data and other sensor observation data about earthquake, climate, ocean, hydrology, volcano, glacier, etc., are being collected on a daily basis by a wide range of organizations. In addition to the observation data, human-generated data including microblogs, photos, consumption records, evaluations, unstructured webpages and other Volunteered Geographical Information (VGI) are incessantly generated and shared on the Internet.

Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible.

The Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making.

This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS. It comprises three parts. The first part focuses on developing methodologies to help public researchers efficiently and effectively find appropriate open geospatial datasets from millions of records provided by thousands of organizations scattered around the world; machine learning and semantic search methods are utilized in this research. The second part develops an interoperable and replicable geoprocessing service by synthesizing the high-performance computing (HPC) environment, the core spatial statistics/analysis algorithms from the widely adopted open source Python package, the Python Spatial Analysis Library (PySAL), and rich datasets acquired in the first part. The third part is dedicated to studying optimization strategies for feature data transmission and visualization, aiming to solve the performance issues in transmitting large feature data over the Internet and visualizing them on the client (browser) side.

Taken together, the three parts constitute an endeavor towards the methodological improvement and implementation practice of the data-driven, high-performance and intelligent CI to advance spatial sciences.
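The dataset-discovery part described above combines machine learning with semantic search. As a generic illustration of the retrieval idea (not the dissertation's actual method or ranking model), a catalog of dataset descriptions can be indexed with TF-IDF vectors and queried by cosine similarity; the catalog entries here are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a tiny corpus of dataset descriptions."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical open-data catalog entries
catalog = [
    "landsat surface temperature raster phoenix",
    "census population income tables",
    "streamflow gauge time series",
]
vecs, idf = tfidf_vectors(catalog)

def search(query):
    """Return the index of the best-matching catalog entry."""
    tf = Counter(query.lower().split())
    qv = {t: tf[t] * idf.get(t, 0.0) for t in tf}
    scores = [cosine(qv, v) for v in vecs]
    return max(range(len(scores)), key=scores.__getitem__)
```

Semantic search extends this keyword matching with ontologies or embeddings so that, for example, "LST" can match "surface temperature".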
ContributorsShao, Hu (Author) / Li, Wenwen (Thesis advisor) / Rey, Sergio (Thesis advisor) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2018
Description
Nearly 25 years ago, parallel computing techniques were first applied to vector spatial analysis methods. This initial research was driven by the desire to reduce computing times in order to support scaling to larger problem sets. Since this initial work, rapid technological advancement has driven the availability of High Performance Computing (HPC) resources in the form of multi-core desktop computers, distributed geographic information processing systems, e.g., computational grids, and single-site HPC clusters. In step with increases in computational resources, significant advancement in the capability to capture and store large quantities of spatially enabled data has been realized. A key component to utilizing vast data quantities in HPC environments, scalable algorithms, has failed to keep pace. The National Science Foundation has identified the lack of scalable algorithms in codified frameworks as an essential research product. Fulfillment of this goal is challenging given the lack of a codified theoretical framework mapping atomic numeric operations from the spatial analysis stack to parallel programming paradigms, the diversity in vernacular utilized by research groups, the propensity for implementations to tightly couple to underlying hardware, and the general difficulty in realizing scalable parallel algorithms. This dissertation develops a taxonomy of parallel vector spatial analysis algorithms, with classification defined by root mathematical operation and communication pattern, a computational dwarf. Six computational dwarfs are identified: three drawn directly from an existing parallel computing taxonomy and three created to capture characteristics unique to spatial analysis algorithms. The taxonomy provides a high-level classification decoupled from low-level implementation details such as hardware, communication protocols, implementation language, decomposition method, or file input and output.
By taking a high-level approach, implementation specifics are broadly proposed, breadth of coverage is achieved, and extensibility is ensured. The taxonomy is informed by five case studies implemented across multiple, divergent hardware environments. A major contribution of this dissertation is a theoretical framework to support the future development of concrete parallel vector spatial analysis frameworks through the identification of computational dwarfs and, by extension, successful implementation strategies.
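The simplest communication pattern in the parallel-computing taxonomy the dissertation builds on is the embarrassingly parallel "map": one atomic operation applied independently to each feature. As a generic sketch of that decoupling (the task and executor choices here are illustrative, not the dissertation's framework), the algorithm below never mentions the backend; only the executor class is hardware-facing:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def nearest_neighbor_dist(args):
    """Atomic task: distance from one point to its nearest neighbor."""
    pt, others = args
    return min(math.dist(pt, o) for o in others if o != pt)

def run_map_dwarf(points, executor_cls=ThreadPoolExecutor, workers=4):
    """A 'map'-style computation: the same atomic spatial operation is
    applied independently to each feature. Swapping executor_cls (e.g.
    for a process pool or a cluster scheduler adapter) changes the
    hardware mapping without touching the algorithm."""
    tasks = [(p, points) for p in points]
    with executor_cls(max_workers=workers) as ex:
        return list(ex.map(nearest_neighbor_dist, tasks))

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
```

Dwarfs with richer communication patterns (e.g. neighborhood exchanges in contiguity-based statistics) cannot be expressed as a pure map, which is precisely what motivates a taxonomy of patterns.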
ContributorsLaura, Jason (Author) / Rey, Sergio J. (Thesis advisor) / Anselin, Luc (Committee member) / Wang, Shaowen (Committee member) / Li, Wenwen (Committee member) / Arizona State University (Publisher)
Created2015
Description
Economic inequality describes how economic metrics vary amongst individuals in a group, amongst groups in a population, or amongst regions. It can substantially impact the social environment, socioeconomics, and human living standards. Because of this important role in our social environment, its study has attracted much attention from scholars in various research fields, such as development economics, sociology, and political science. Economic inequality can result from many factors, phenomena, and complex processes, including policy, ethnicity, education, and globalization. However, the spatial dimension of economic inequality research did not draw much attention from scholars until the early 2000s. Spatial dependence plays a key role in economic inequality analysis. Spatial econometric methods do not merely reflect the character of the data; more importantly, they respect and quantify the spatial effects in economic inequality. Although regional economic inequality has started to attract scholars' attention in both economics and regional science, the corresponding methodologies for examining such inequality remain preliminary and need substantial further exploration. My thesis aims to contribute to the body of knowledge in method development for economic inequality studies by exploring the feasibility of a set of new analytical methods for regional inequality analysis. These methods include Theil's T statistic, geographical rank Markov methods, and new methods applying graph theory.
The thesis also leverages these methods to compare inequality between China and the US, two large economies: the US for its long history of economic development and the corresponding evolution of inequality, and China for its rapid economic development and consequent high variation in economic inequality.
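Theil's T statistic named above has a compact closed form, T = (1/n) * sum((x_i/mu) * ln(x_i/mu)), which is 0 under perfect equality and at most ln(n) when one unit holds everything. A minimal sketch with hypothetical income vectors (the thesis applies the statistic to regional data, often with spatial decompositions not shown here):

```python
import math

def theil_t(incomes):
    """Theil's T inequality index:
    T = (1/n) * sum((x/mu) * ln(x/mu)), where mu is the mean income.
    Returns 0 for perfect equality; the maximum possible value is ln(n)."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((x / mu) * math.log(x / mu) for x in incomes) / n

equal = [10.0] * 5                      # perfectly equal distribution
skewed = [1.0, 1.0, 1.0, 1.0, 96.0]     # highly concentrated distribution
```

Because T decomposes additively into within-group and between-group components, it is well suited to the regional comparisons the thesis undertakes.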
ContributorsWang, Sizhe (Author) / Rey, Sergio J (Thesis advisor) / Li, Wenwen (Committee member) / Salon, Deborah (Committee member) / Arizona State University (Publisher)
Created2016