Matching Items (103)
150070-Thumbnail Image.png
Description
This dissertation creates models of past potential vegetation in the Southern Levant during most of the Holocene, from the beginnings of farming through the rise of urbanized civilization (12 to 2.5 ka BP). The time scale encompasses the rise and collapse of the earliest agrarian civilizations in this region. The archaeological record suggests that increases in social complexity were linked to climatic episodes (e.g., favorable climatic conditions coincide with intervals of prosperity or marked social development such as the Neolithic Revolution ca. 11.5 ka BP, the Secondary Products Revolution ca. 6 ka BP, and the Middle Bronze Age ca. 4 ka BP). The opposite can be said about periods of climatic deterioration, when settled villages were abandoned as the inhabitants returned to nomadic or semi-nomadic lifestyles (e.g., abandonment of the largest Neolithic farming towns after 8 ka BP and collapse of Bronze Age towns and cities after 3.5 ka BP during the Late Bronze Age). This study develops chronologically refined models of past vegetation from 12 to 2.5 ka BP, at 500-year intervals, using GIS, remote sensing, and statistical modeling tools (MAXENT) that derive from species distribution modeling. Plants are sensitive to alterations in their environment and respond accordingly; because of this, they are valuable indicators of landscape change. An extensive database of historical and field-gathered observations was created. Using this database, as well as environmental variables that include temperature and precipitation surfaces for the whole study period (also at 500-year intervals), the potential vegetation of the region was modeled. In this way, a continuous chronology of potential vegetation of the Southern Levant was built. The resulting paleo-vegetation models generally agree with the proxy records. They indicate a gradual decline of forests and expansion of steppe and desert throughout the Holocene, interrupted briefly during the Mid-Holocene (ca. 4 ka BP, Middle Bronze Age). They also suggest that during the Early Holocene, forest areas were extensive, spreading into the Northern Negev. The two remaining forested areas in the Northern and Southern Plateau Region in Jordan were also connected during this time. The models also show general agreement with the major cultural developments, with forested areas either expanding or remaining stable during prosperous periods (e.g., Pre-Pottery Neolithic and Middle Bronze Age), and significantly contracting during moments of instability (e.g., Late Bronze Age).
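MAXENT itself fits a maximum-entropy distribution over environmental covariates at observed presence sites; a loose stand-in for the same species-distribution-modeling idea is a presence-versus-background classifier. Everything below (the synthetic covariates, cluster locations, and the logistic substitute for MAXENT) is an illustrative assumption, not the dissertation's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic environmental covariates (think: temperature, precipitation).
# Presence sites cluster around favourable conditions; background points
# sample the covariate space at large.
presence = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
background = rng.uniform(low=-2.0, high=2.0, size=(1000, 2))

X = np.vstack([presence, background])
X = np.hstack([X, np.ones((len(X), 1))])          # intercept column
y = np.concatenate([np.ones(200), np.zeros(1000)])

# Presence-vs-background logistic regression fit by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def suitability(env):
    """Predicted habitat suitability for a covariate vector."""
    return float(1.0 / (1.0 + np.exp(-np.append(env, 1.0) @ w)))
```

Swapping the synthetic covariates for the paleo-temperature and paleo-precipitation surfaces at each 500-year slice would yield one suitability surface per interval, which is the shape of the chronology the dissertation builds.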
ContributorsSoto-Berelov, Mariela (Author) / Fall, Patricia L. (Thesis advisor) / Myint, Soe (Committee member) / Turner, Billie L (Committee member) / Falconer, Steven (Committee member) / Arizona State University (Publisher)
Created2011
152370-Thumbnail Image.png
Description
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores using concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models for characterizing the geometry of early visual areas in the human brain. The software framework includes a graphical user interface built on top of a selected core conformal flattening algorithm and various software tools compiled specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and how well they preserve the conformal metric. All three algorithms performed well in preserving the conformal metric but the speed and stability of the algorithms varied. The software framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for the early data set shows that selected regions of V1 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for analysis of early and higher visual cortex.
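The Beltrami coefficient mentioned above measures how far a map deviates from conformality (μ = 0 for a perfectly conformal map). A finite-difference sketch on a regular grid, assuming the map is given as two real component fields (the thesis works on triangulated cortical surfaces, not grids):

```python
import numpy as np

def beltrami(fx, fy, h):
    """Beltrami coefficient mu = f_zbar / f_z of the map f = fx + i*fy,
    sampled on a regular grid with spacing h (central differences)."""
    ux, uy = np.gradient(fx, h, axis=1), np.gradient(fx, h, axis=0)
    vx, vy = np.gradient(fy, h, axis=1), np.gradient(fy, h, axis=0)
    f_z    = 0.5 * ((ux + vy) + 1j * (vx - uy))
    f_zbar = 0.5 * ((ux - vy) + 1j * (vx + uy))
    return f_zbar / f_z

xs = np.linspace(1.0, 2.0, 41)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

# f(z) = z^2 is conformal away from z = 0: mu should vanish.
mu_conformal = beltrami(X**2 - Y**2, 2 * X * Y, h)
# An anisotropic stretch f(x, y) = (2x, y) is not: mu = 1/3 everywhere.
mu_stretch = beltrami(2 * X, Y, h)
```

A near-zero |μ| over a region is the quantitative content of the "significant local conformality" reported for smooth parts of V1.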
ContributorsTa, Duyan (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Wonka, Peter (Committee member) / Arizona State University (Publisher)
Created2013
152300-Thumbnail Image.png
Description
In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure, due to its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation can achieve sub-voxel accuracy. Thickness is estimated by tracing the streamlines in the harmonic field. We combine areal changes found using surface tensor-based morphometry and thickness information into a vector at each vertex to be used as a metric for the statistical analysis. Group differences are assessed on this combined measure through Hotelling's T2 test. The method is applied to statistically compare three groups: congenitally blind (CB), late blind (LB; onset > 8 years old), and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and to a lesser extent between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
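The harmonic-field thickness idea (solve Laplace's equation between two boundary surfaces, then trace streamlines of the gradient) can be sketched in 2D; the slab geometry, Jacobi solver, and step sizes below are illustrative choices, not the thesis's tetrahedral implementation.

```python
import numpy as np

# Solve Laplace's equation on a 2D slab: u = 0 on the "inferior" face,
# u = 1 on the "superior" face, no-flux side walls (Jacobi iteration).
ny, nx = 21, 21
u = np.zeros((ny, nx))
u[-1, :] = 1.0
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2])
    u[:, 0], u[:, -1] = u[:, 1], u[:, -2]      # Neumann side walls

def trace_thickness(u, start, step=0.1, max_steps=10000):
    """Length of the streamline of grad(u) from `start` to the u = 1 face."""
    gy, gx = np.gradient(u)
    pos, length = np.array(start, dtype=float), 0.0
    for _ in range(max_steps):
        i, j = int(round(pos[0])), int(round(pos[1]))
        if u[i, j] >= 0.99:
            break
        g = np.array([gy[i, j], gx[i, j]])
        g /= np.linalg.norm(g)                  # follow the unit gradient
        pos += step * g
        length += step
    return length
```

For a flat slab the harmonic field is linear and the streamline length recovers the slab height; on the callosal tetrahedral mesh the same two steps give a per-vertex thickness between the superior and inferior patches.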
ContributorsXu, Liang (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
151689-Thumbnail Image.png
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to annotate each embryo more finely, dividing embryos into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
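For non-overlapping groups, the group-lasso penalty has a closed-form proximal operator (block soft-thresholding). The overlapping case this thesis addresses needs more machinery, but the basic building block can be sketched as:

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 for disjoint index
    groups: each block is shrunk toward zero by lam in Euclidean norm."""
    out = np.asarray(v, dtype=float).copy()
    for g in groups:
        norm = np.linalg.norm(out[g])
        # A whole group is zeroed out together, or uniformly shrunk.
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * out[g]
    return out
```

Proximal-gradient solvers apply this operator once per iteration; when groups overlap, the closed form breaks down, which is precisely the difficulty the proposed method tackles.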
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
152183-Thumbnail Image.png
Description
Two critical limitations of hyperspatial imagery are high imagery variance and large data size. Object-based analysis with a multi-scale framework accommodating diverse object sizes is a potential solution, but it requires additional data sources and large amounts of costly testing. In this study, I used tree density segmentation as the key element of a three-level hierarchical vegetation framework for reducing those costs, and a three-step procedure was used to evaluate its effects. A two-step procedure, involving environmental stratification and the random walker algorithm, was used for tree density segmentation. I determined whether variation in tone and texture could be reduced within environmental strata, and whether tree density segments could be labeled by species association. At the final level, two tree density segmentations were partitioned into smaller subsets using eCognition in order to label individual species or tree stands in two test areas of differing tree density, and the Z values of Moran's I were used to evaluate whether image objects have mean values distinct from those of neighboring segments, as a measure of segmentation accuracy. The two-step procedure was able to delineate tree density segments and label species types robustly, compared to previous hierarchical frameworks. However, eCognition was not able to produce detailed, reasonable image objects with optimal scale parameters for species labeling. This hierarchical vegetation framework is applicable to fine-scale, time-series vegetation mapping for developing baseline data to evaluate climate change impacts on vegetation at low cost, using widely available data and a personal laptop.
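The Moran's I statistic used above quantifies spatial autocorrelation: values near +1 indicate clustering, values near -1 dispersion. A raster version with rook (4-neighbour) adjacency, an illustrative simplification of the segment-level adjacency used in the study:

```python
import numpy as np

def morans_i(grid):
    """Moran's I with rook (4-neighbour) adjacency on a 2D raster."""
    x = grid - grid.mean()
    num, W = 0.0, 0.0
    ny, nx = grid.shape
    for i in range(ny):
        for j in range(nx):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < ny and 0 <= b < nx:
                    num += x[i, j] * x[a, b]   # cross-product with neighbour
                    W += 1.0                   # running sum of weights
    return (grid.size / W) * num / (x ** 2).sum()

# Checkerboard = perfect dispersion; two homogeneous halves = clustering.
checker = (np.indices((6, 6)).sum(axis=0) % 2).astype(float)
halves = np.zeros((6, 6))
halves[:, 3:] = 1.0
```

Standardizing I into a Z value (against its null mean and variance) gives the significance measure the study uses to compare image objects with their neighbours.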
ContributorsLiau, Yan-ting (Author) / Franklin, Janet (Thesis advisor) / Turner, Billie (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created2013
151336-Thumbnail Image.png
Description
Over 2 billion people use online social network services such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post photos, share information, and chat with others on these sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating information missing from Facebook profiles. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and explore geodemographic properties by linking them to publicly available data such as US census data. Multiple predictors are implemented to link the data sets and extrapolate missing information from Facebook with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone record information for all people in the US with the same surname as the user to create a likelihood distribution of the user's location. This is combined with the user's IP-level information to narrow the probability estimate down to a local region.
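The kernel density estimation step above can be sketched with a Gaussian KDE over geocoded points; the synthetic cluster below stands in for the surname-matched telephone records (the locations and bandwidth are invented for illustration).

```python
import numpy as np

def kde_2d(points, query, bandwidth):
    """Gaussian kernel density estimate at a 2D query location."""
    sq = ((query[None, :] - points) ** 2).sum(axis=1)   # squared distances
    k = np.exp(-0.5 * sq / bandwidth ** 2)
    return k.sum() / (len(points) * 2.0 * np.pi * bandwidth ** 2)

rng = np.random.default_rng(2)
# Hypothetical geocoded records for one surname, clustered near (0, 0).
records = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
likely = kde_2d(records, np.array([0.0, 0.0]), bandwidth=0.5)
unlikely = kde_2d(records, np.array([8.0, 8.0]), bandwidth=0.5)
```

An IP-derived regional constraint would then zero the density outside the candidate region before renormalizing, narrowing the estimate as the thesis describes.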
ContributorsMao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
151928-Thumbnail Image.png
Description
Land transformation under conditions of rapid urbanization has significantly altered the structure and functioning of Earth's systems. Land fragmentation, a characteristic of land transformation, is recognized as a primary driving force in the loss of biological diversity worldwide. However, little is known about its implications in complex urban settings where interaction with social dynamics is intense. This research asks: How do patterns of land cover and land fragmentation vary over time and space, and what are the socio-ecological drivers and consequences of land transformation in a rapidly growing city? Using Metropolitan Phoenix as a case study, the research links pattern and process relationships between land cover, land fragmentation, and socio-ecological systems in the region. It examines population growth, water provision and institutions as major drivers of land transformation, and the changes in bird biodiversity that result from land transformation. How to manage socio-ecological systems is one of the biggest challenges of moving towards sustainability. This research project provides a deeper understanding of how land transformation affects socio-ecological dynamics in an urban setting. It uses a series of indices to evaluate land cover and fragmentation patterns over the past twenty years, including land patch numbers, contagion, shapes, and diversities. It then generates empirical evidence on the linkages between land cover patterns and ecosystem properties by exploring the drivers and impacts of land cover change. An interdisciplinary approach that integrates social, ecological, and spatial analysis is applied in this research. Findings of the research provide a documented dataset that can help researchers study the relationship between human activities and biotic processes in an urban setting, and contribute to sustainable urban development.
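Of the fragmentation indices listed (patch numbers, contagion, shape, diversity), the Shannon diversity of land-cover classes is the simplest to sketch; the toy rasters below are illustrative, not the study's land-cover data.

```python
import numpy as np

def shannon_diversity(landcover):
    """Shannon diversity H' = -sum(p_i * ln p_i) over land-cover classes."""
    _, counts = np.unique(landcover, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# A raster split evenly among 4 classes is maximally diverse (H' = ln 4);
# a single-class raster scores 0.
mixed = np.repeat([0, 1, 2, 3], 25).reshape(10, 10)
uniform = np.zeros((10, 10), dtype=int)
```

Tracking such indices per year gives exactly the kind of time series over which fragmentation trends can be compared against population growth and institutional drivers.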
ContributorsZhang, Sainan (Author) / Boone, Christopher G. (Thesis advisor) / York, Abigail M. (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created2013
151278-Thumbnail Image.png
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the extensively studied and researched problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; the solution proposed in this document instead uses Microsoft's general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the Smoothed Particle Hydrodynamics algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at 900 frames per second without surface reconstruction, and at 400 frames per second with the Marching Cubes surface reconstruction included.
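The SPH density step summarized above sums a smoothing kernel over neighbouring particles; a CPU sketch using the widely used poly6 kernel (whether this exact kernel is the document's choice is an assumption, and the real implementation runs this per-particle loop in DirectCompute shaders):

```python
import math

def poly6(r, h):
    """Poly6 smoothing kernel with 3D normalisation; zero beyond radius h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def sph_density(positions, masses, h):
    """Per-particle density: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    n = len(positions)
    rho = [0.0] * n
    for i in range(n):
        for j in range(n):
            dx = [a - b for a, b in zip(positions[i], positions[j])]
            r = math.sqrt(sum(d * d for d in dx))
            rho[i] += masses[j] * poly6(r, h)
    return rho
```

On the GPU the O(n^2) neighbour loop is what the compute shaders parallelize, typically with a spatial hash grid to cull particles beyond the kernel radius.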
ContributorsFigueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
151154-Thumbnail Image.png
Description
Alzheimer's Disease (AD) is the most common form of dementia observed in elderly patients and has significant socioeconomic impact. There are many initiatives which aim to capture the leading causes of AD. Several genetic, imaging, and biochemical markers are being explored to monitor the progression of AD and to explore treatment and detection options. The primary focus of this thesis is to identify key biomarkers to understand the pathogenesis and prognosis of Alzheimer's Disease. Feature selection is the process of finding a subset of relevant features with which to develop efficient and robust learning models. It is an active research topic in diverse areas such as computer vision, bioinformatics, information retrieval, chemical informatics, and computational finance. In this work, state-of-the-art feature selection algorithms, such as Student's t-test, Relief-F, Information Gain, Gini Index, Chi-Square, Fisher Kernel Score, Kruskal-Wallis, Minimum Redundancy Maximum Relevance, and Sparse Logistic Regression with Stability Selection, have been extensively applied to identify informative features for AD using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). An integrative approach which uses blood plasma protein, Magnetic Resonance Imaging, and psychometric assessment score biomarkers has been explored. This work also analyzes techniques to handle unbalanced data and evaluates the efficacy of sampling techniques. Performance of each feature selection algorithm is evaluated using the relevance of derived features and the predictive power of the algorithm using Random Forest and Support Vector Machine classifiers. Performance metrics such as Accuracy, Sensitivity, Specificity, and area under the Receiver Operating Characteristic curve (AUC) have been used for evaluation. The feature selection algorithms best suited to analyzing AD proteomics data have been proposed.
The key biomarkers distinguishing healthy and AD patients, Mild Cognitive Impairment (MCI) converters and non-converters, and healthy and MCI patients have been identified.
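The first selector in the list, the t-test filter, ranks features by between-group separation; a sketch on synthetic data (the signal feature, effect size, and Welch variant are illustrative assumptions, not the thesis's ADNI setup):

```python
import numpy as np

def t_scores(X, y):
    """Absolute Welch's t statistic per feature for binary labels y."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(axis=0) - b.mean(axis=0)
    den = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                  b.var(axis=0, ddof=1) / len(b))
    return np.abs(num / den)

rng = np.random.default_rng(1)
n, d = 100, 20
y = rng.integers(0, 2, n)               # two diagnostic groups
X = rng.normal(size=(n, d))             # noise features
X[:, 3] += 2.0 * y                      # feature 3 carries the class signal
ranking = np.argsort(-t_scores(X, y))   # features, most discriminative first
```

Filter scores like these give the ranked feature lists whose top entries are then validated with Random Forest or SVM classifiers, as the abstract describes.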
ContributorsDubey, Rashmi (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2012
141392-Thumbnail Image.png
Description

Problem: The prospect that urban heat island (UHI) effects and climate change may increase urban temperatures is a problem for cities that actively promote urban redevelopment and higher densities. One possible UHI mitigation strategy is to plant more trees and other irrigated vegetation to prevent daytime heat storage and facilitate nighttime cooling, but this requires water resources that are limited in a desert city like Phoenix.

Purpose: We investigated the tradeoffs between water use and nighttime cooling inherent in urban form and land use choices.

Methods: We used a Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) model to examine the variation in temperature and evaporation in 10 census tracts in Phoenix's urban core. After validating results with estimates of outdoor water use based on tract-level city water records and satellite imagery, we used the model to simulate the temperature and water use consequences of implementing three different scenarios.

Results and conclusions: We found that increasing irrigated landscaping lowers nighttime temperatures, but this relationship is not linear; the greatest reductions occur in the least vegetated neighborhoods. A ratio of the change in water use to temperature impact reached a threshold beyond which increased outdoor water use did little to ameliorate UHI effects.

Takeaway for practice: There is no one design and landscape plan capable of addressing increasing UHI and climate effects everywhere. Any one strategy will have inconsistent results if applied across all urban landscape features and may lead to an inefficient allocation of scarce water resources.

Research Support: This work was supported by the National Science Foundation (NSF) under Grant SES-0345945 (Decision Center for a Desert City) and by the City of Phoenix Water Services Department. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF.

ContributorsGober, Patricia (Author) / Brazel, Anthony J. (Author) / Quay, Ray (Author) / Myint, Soe (Author) / Grossman-Clarke, Susanne (Author) / Miller, Adam (Author) / Rossi, Steve (Author)
Created2010-01-04