Matching Items (10)
Description

The relationship between the characteristics of the urban land system and land surface temperature (LST) has received increasing attention in urban heat island and sustainability research, especially for desert cities. This research generally employs medium or coarser spatial resolution data and primarily focuses on the effects of a few classes of land-cover composition and pattern at the neighborhood or larger level using regression models. This study explores the effects of land system architecture—composition and configuration, both pattern and shape, of fine-grain land-cover classes—on LST of single family residential parcels in the Phoenix, Arizona (southwestern USA) metropolitan area. A 1 m resolution land-cover map is used to calculate land architecture metrics at the parcel level, and 6.8 m resolution MODIS/ASTER data are employed to retrieve LST. Linear mixed-effects models quantify the impacts of land configuration on LST at the parcel scale, controlling for the effects of land composition and neighborhood characteristics. Results indicate that parcel-level land-cover composition has the strongest association with daytime and nighttime LST, but the configuration of this cover, foremost compactness and concentration, also affects LST, with different associations between land architecture and LST at nighttime and daytime. Given information on land system architecture at the parcel level, additional information based on geographic and socioeconomic variables does not improve the generalization capability of the statistical models. The results point the way towards parcel-level land-cover design that helps to mitigate the urban heat island effect for warm desert cities, although tradeoffs with other sustainability indicators must be considered.

Contributors: Li, Xiaoxiao (Author) / Kamarianakis, Yiannis (Author) / Ouyang, Yun (Author) / Turner II, B. L. (Author) / Brazel, Anthony J. (Author)
Created: 2017-02-14
Description
The objective of this paper is to provide an educational diagnostic into the technology of blockchain and its application to the supply chain. Education on the topic is important to prevent misinformation about the capabilities of blockchain. As a new technology, blockchain can be confusing to grasp given the wide range of possibilities it offers, and defining it too broadly convolutes the topic. Instead, the focus will be maintained on explaining the technical details of how and why this technology works in improving the supply chain. The scope of explanation will not be limited to solutions, but will also detail current problems. Both public and private blockchain networks will be explained, along with the solutions they provide in supply chains. In addition, other non-blockchain systems will be described that provide important pieces of supply chain operations that blockchain cannot. When applied to the supply chain, blockchain provides improved consumer transparency, management of resources, logistics, trade finance, and liquidity.
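The core mechanism the abstract alludes to — blocks cryptographically chained so that tampering with any earlier record invalidates all later ones — can be sketched generically. This is a minimal illustration, not code from the paper; the block fields and the supply-chain event strings are hypothetical.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    # The block's hash covers its own contents plus the previous block's
    # hash; this linkage is what makes the ledger tamper-evident.
    header = json.dumps({"index": index, "data": data, "prev": prev_hash},
                        sort_keys=True).encode()
    return {"index": index, "data": data, "prev": prev_hash,
            "hash": hashlib.sha256(header).hexdigest()}

def chain_is_valid(chain):
    # Recompute each hash and check each link back to its predecessor.
    for i, block in enumerate(chain):
        header = json.dumps({"index": block["index"], "data": block["data"],
                             "prev": block["prev"]}, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(header).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# A tiny supply-chain ledger: each block records a custody hand-off.
genesis = make_block(0, "manufactured", "0" * 64)
chain = [genesis, make_block(1, "shipped to distributor", genesis["hash"])]
chain.append(make_block(2, "received by retailer", chain[-1]["hash"]))
```

Editing the "shipped to distributor" record after the fact breaks validation, which is the transparency property the abstract credits to supply-chain blockchains.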
Contributors: Krukar, Joel Michael (Author) / Oke, Adegoke (Thesis director) / Duarte, Brett (Committee member) / Hahn, Richard (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

As online media, including social media platforms, supplant traditional communication channels as the primary, go-to resource, news and information are more present and accessible to consumers than ever before. This research focuses on analyzing Twitter data on the ongoing Russian-Ukrainian War to understand the significance of social media during this period in comparison to previous conflicts. The significance of social media in political conflict will be examined through Twitter user analysis and sentiment analysis. This case study will conduct sentiment analysis on a random sample of tweets from a given dataset, followed by user analysis and classification methods. The data will be used to explore the implications for understanding public opinion on the conflict, the strengths and limitations of Twitter as a data source, and the next steps for future research. Highlighting the implications of the research findings will allow consumers and political stakeholders to make more informed decisions in the future.
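The simplest form of the sentiment analysis the abstract describes is lexicon-based scoring of each tweet. The word lists below are hypothetical stand-ins; the thesis likely used a trained model or an established lexicon rather than this toy, but the sketch shows the basic per-tweet scoring step.

```python
# Hypothetical tiny lexicon; a real study would use a trained classifier or
# an established resource rather than hand-picked word sets.
POSITIVE = {"support", "hope", "peace", "brave", "help"}
NEGATIVE = {"war", "attack", "crisis", "fear", "destroyed"}

def sentiment_score(tweet):
    # Score = (#positive - #negative) / #tokens, giving a value in [-1, 1];
    # 0 means neutral or no lexicon matches.
    tokens = [t.strip(".,!?#@").lower() for t in tweet.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)
```

Aggregating these scores over a random sample of tweets, as the case study does, yields a time series of public sentiment that can then be related to user-level classifications.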

Contributors: Blavatsky, Sofia (Author) / Hahn, Richard (Thesis director) / Sirugudi, Kumar (Committee member) / Inozemtseva, Julia (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor)
Created: 2023-05
Description
Gene expression models are key to understanding and predicting transcriptional dynamics. This thesis devises a computational method which can efficiently explore a large, highly correlated parameter space, ultimately allowing the author to accurately deduce the underlying gene network model using discrete, stochastic mRNA counts derived through the non-invasive imaging method of single-molecule fluorescence in situ hybridization (smFISH). An underlying gene network model consists of the number of gene states (distinguished by distinct production rates) and all associated kinetic rate parameters. In this thesis, the author constructs an algorithm based on Bayesian parametric and nonparametric theory, expanding the traditional single-gene network inference tools. First, the efficiency of classic Markov chain Monte Carlo (MCMC) sampling is increased by combining three schemes known in the Bayesian statistical computing community: 1) adaptive Metropolis-Hastings (AMH), 2) Hamiltonian Monte Carlo (HMC), and 3) parallel tempering (PT). The aggregation of these three methods decreases the autocorrelation between sequential MCMC samples, reducing the number of samples required to gain an accurate representation of the posterior probability distribution. Second, by employing Bayesian nonparametric methods, the author is able to simultaneously evaluate discrete and continuous parameters, enabling the method to infer the structure of the gene network and all kinetic parameters, respectively. Due to the nature of Bayesian theory, uncertainty is evaluated for the gene network model in combination with the kinetic parameters. Tools brought from Bayesian nonparametric theory equip the method with an ability to sample from the posterior distribution of all possible gene network models without pre-defining the gene network structure, i.e., the number of gene states.
The author verifies the method’s robustness through the use of synthetic snapshot data, designed to closely represent experimental smFISH data sets, across a range of gene network model structures, parameters and experimental settings (number of probed cells and timepoints).
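Of the three MCMC schemes the abstract combines, parallel tempering is the one that most directly targets correlated, multimodal posteriors, and it can be illustrated generically. This is not the author's code: the AMH and HMC components are omitted, the target is a toy one-dimensional bimodal posterior, and all names and tuning values are hypothetical.

```python
import math
import random

def parallel_tempering(log_post, x0, temps, n_iter=2000, step=0.5, seed=1):
    """Random-walk Metropolis chains at several temperatures, with swaps.

    Hotter chains (temperature > 1) flatten the posterior and cross between
    modes easily; swap moves let those states percolate down to the cold
    chain (temps[0] == 1), whose samples approximate the target."""
    rng = random.Random(seed)
    states = [x0] * len(temps)
    cold_samples = []
    for _ in range(n_iter):
        # Within-chain Metropolis updates, tempered by 1/T.
        for k, T in enumerate(temps):
            prop = states[k] + rng.gauss(0.0, step)
            if math.log(rng.random()) < (log_post(prop) - log_post(states[k])) / T:
                states[k] = prop
        # Propose swapping states between adjacent temperature levels.
        k = rng.randrange(len(temps) - 1)
        a = (1.0 / temps[k] - 1.0 / temps[k + 1]) * \
            (log_post(states[k + 1]) - log_post(states[k]))
        if math.log(rng.random()) < a:
            states[k], states[k + 1] = states[k + 1], states[k]
        cold_samples.append(states[0])
    return cold_samples

def log_post(x):
    # Toy bimodal target: unnormalized mixture of two narrow Gaussians,
    # evaluated via log-sum-exp for numerical stability.
    a = -(x + 3) ** 2 / 0.5
    b = -(x - 3) ** 2 / 0.5
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

samples = parallel_tempering(log_post, x0=-3.0, temps=[1.0, 3.0, 10.0])
```

A plain random-walk chain with this step size would rarely cross the valley between the two modes; the tempered ladder is what restores mixing, which is the autocorrelation reduction the abstract describes.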
Contributors: Moyer, Camille (Author) / Armbruster, Dieter (Thesis advisor) / Fricks, John (Committee member) / Hahn, Richard (Committee member) / Renaut, Rosemary (Committee member) / Crook, Sharon (Committee member) / Kilic, Zeliha (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
The primary objective in time series analysis is forecasting. Raw data often exhibit nonstationary behavior: trends, seasonal cycles, and heteroskedasticity. After the data are transformed to a weakly stationary process, autoregressive moving average (ARMA) models may capture the remaining temporal dynamics to improve forecasting. Estimation of ARMA models can be performed by regressing current values on previous realizations and proxy innovations. The classic paradigm fails when the dynamics are nonlinear; in this case, parametric regime-switching specifications model changes in level, ARMA dynamics, and volatility using a finite number of latent states. If the states can be identified using past endogenous or exogenous information, a threshold autoregressive (TAR) or logistic smooth transition autoregressive (LSTAR) model may simplify complex nonlinear associations to conditional weakly stationary processes. For ARMA, TAR, and STAR models, order parameters quantify the extent to which past information is associated with the future. Unfortunately, even if model orders are known a priori, the possibility of over-fitting can lead to sub-optimal forecasting performance. By intentionally overestimating these orders, a linear representation of the full model is exploited and Bayesian regularization can be used to achieve sparsity. Global-local shrinkage priors for AR, MA, and exogenous coefficients are adopted to pull posterior means toward 0 without over-shrinking relevant effects. This dissertation introduces, evaluates, and compares Bayesian techniques that automatically perform model selection and coefficient estimation for ARMA, TAR, and STAR models. Multiple Monte Carlo experiments illustrate the accuracy of these methods in finding the "true" data generating process. Practical applications demonstrate their efficacy in forecasting.
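The strategy of deliberately over-specifying the model order and letting shrinkage zero out superfluous lags can be sketched with a simple frequentist stand-in: a coordinate-descent lasso fit of an over-specified AR(p), rather than the dissertation's Bayesian global-local priors. The data-generating process and all tuning values below are hypothetical.

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_ar(y, p, lam, n_iter=200):
    """Fit an intentionally over-specified AR(p) by coordinate-descent lasso.

    The L1 penalty stands in for global-local shrinkage priors: coefficients
    of superfluous lags are pulled to (or near) zero, performing order
    selection and estimation in one pass."""
    n = len(y)
    # Lagged design matrix: row t holds (y[t-1], ..., y[t-p]).
    X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
    t = y[p:]
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding lag j, then soft-threshold update.
            r = t - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_ss[j]
    return beta

# Simulate an AR(1) with coefficient 0.8, then fit an over-specified AR(5).
rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for i in range(1, n):
    y[i] = 0.8 * y[i - 1] + rng.normal()
beta = lasso_ar(y, p=5, lam=20.0)
```

The fit recovers a large lag-1 coefficient while the four superfluous lags are shrunk toward zero, mirroring (in point-estimate form) what the posterior means do under the dissertation's priors.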
Contributors: Giacomazzo, Mario (Author) / Kamarianakis, Yiannis (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Hahn, Richard (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This dissertation investigates the classification of systemic lupus erythematosus (SLE) in the presence of non-SLE alternatives, while developing novel curve classification methodologies with wide-ranging applications. Functional data representations of plasma thermogram measurements and the corresponding derivative curves provide predictors yet to be investigated for SLE identification. Functional nonparametric classifiers form a methodological basis, which is used herein to develop a) the family of ESFuNC segment-wise curve classification algorithms and b) per-pixel ensembles based on logistic regression and fused-LASSO. The proposed methods achieve test-set accuracy rates as high as 94.3%, while returning information about regions of the temperature domain that are critical for population discrimination. The undertaken analyses suggest that derivative-based information contributes significantly to improved classification performance relative to recently published studies on SLE plasma thermograms.
Contributors: Buscaglia, Robert, Ph.D. (Author) / Kamarianakis, Yiannis (Thesis advisor) / Armbruster, Dieter (Committee member) / Lanchier, Nicholas (Committee member) / McCulloch, Robert (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Large-scale cultivation of perennial bioenergy crops (e.g., miscanthus and switchgrass) offers unique opportunities to mitigate climate change through avoided fossil fuel use and associated greenhouse gas reduction. Although conversion of existing agriculturally intensive lands (e.g., maize and soy) to perennial bioenergy cropping systems has been shown to reduce near-surface temperatures, unintended consequences for natural water resources via depletion of soil moisture may offset these benefits. In an effort to cross-fertilize the disciplines of physics-based modeling and spatio-temporal statistics, three topics are investigated in this dissertation, aiming to provide a novel quantification and robust justification of the hydroclimatic impacts associated with bioenergy crop expansion. Topic 1 quantifies the hydroclimatic impacts associated with perennial bioenergy crop expansion over the contiguous United States using the Weather Research and Forecasting Model (WRF) dynamically coupled to a land surface model (LSM). A suite of continuous (2000–09), medium-range resolution (20-km grid spacing), ensemble-based simulations is conducted. Hovmöller and Taylor diagrams are utilized to evaluate simulated temperature and precipitation. In addition, Mann-Kendall modified trend tests and sieve-bootstrap trend tests are performed to evaluate the statistical significance of trends in soil moisture differences. Finally, this research reveals potential hot spots of suitable deployment and regions to avoid. Topic 2 presents spatio-temporal Bayesian models which quantify the robustness of control simulation bias, as well as biofuel impacts, using three spatio-temporal correlation structures. A hierarchical model with spatially varying intercepts and slopes displays satisfactory performance in capturing spatio-temporal associations. Simulated temperature impacts due to perennial bioenergy crop expansion are robust to physics parameterization schemes.
Topic 3 further focuses on the accuracy and efficiency of spatio-temporal statistical modeling for large datasets. An ensemble of spatio-temporal eigenvector filtering algorithms (hereafter: STEF) is proposed to account for the spatio-temporal autocorrelation structure of the data while taking spatial confounding into account. Monte Carlo experiments are conducted. This method is then used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
Contributors: Wang, Meng, Ph.D. (Author) / Kamarianakis, Yiannis (Thesis advisor) / Georgescu, Matei (Thesis advisor) / Fotheringham, A. Stewart (Committee member) / Moustaoui, Mohamed (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Threshold regression is used to model regime-switching dynamics, where the effects of the explanatory variables in predicting the response variable depend on whether a certain threshold has been crossed. When regime-switching dynamics are present, new estimation problems arise related to estimating the value of the threshold. Conventional methods utilize an iterative search procedure, seeking to minimize the sum-of-squares criterion. However, when unnecessary variables are included in the model, or certain variables drop out of the model depending on the regime, this method may have high variability. This paper proposes Lasso-type methods as an alternative to ordinary least squares. By incorporating an L1 penalty term, Lasso methods perform variable selection, thus potentially reducing some of the variance in estimating the threshold parameter. This paper discusses the results of a study in which two different underlying model structures were simulated: the first is a regression model with correlated predictors, whereas the second is a self-exciting threshold autoregressive model. Finally, the proposed Lasso-type methods are compared to conventional methods in an application to urban traffic data.
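The conventional estimator the abstract describes — an iterative search over candidate thresholds minimizing the sum of squares — can be sketched generically. The regime fits below use plain OLS; the paper's point is that replacing this inner fit with a Lasso can reduce variance. The simulated data and all names are hypothetical.

```python
import numpy as np

def fit_threshold(x, q, y, trim=0.15):
    """Two-regime regression y = a_r + b_r * x, regime set by q <= c vs q > c.

    The threshold c is chosen by grid search over observed values of the
    threshold variable q (trimmed so each regime keeps enough points),
    minimizing the pooled sum of squared errors; each regime is fit by OLS."""
    cands = np.sort(q)[int(trim * len(q)): int((1 - trim) * len(q))]
    best_sse, best_c = np.inf, None
    for c in cands:
        sse = 0.0
        for mask in (q <= c, q > c):
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            coef = np.linalg.lstsq(X, y[mask], rcond=None)[0]
            sse += float(((y[mask] - X @ coef) ** 2).sum())
        if sse < best_sse:
            best_sse, best_c = sse, float(c)
    return best_c

# Simulate a sharp regime switch at q = 0: slope +1.5 below, -1.5 above.
rng = np.random.default_rng(42)
n = 400
x = rng.normal(size=n)
q = rng.uniform(-2, 2, size=n)
y = np.where(q <= 0.0, 1.5 * x, -1.5 * x) + 0.1 * rng.normal(size=n)
c_hat = fit_threshold(x, q, y)
```

With a strong slope change and low noise the grid search pins the threshold close to its true value; the paper's concern is that with superfluous or regime-dependent regressors this estimate becomes much more variable, which motivates the Lasso-based inner fit.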
Contributors: Van Schaijik, Maria (Author) / Kamarianakis, Yiannis (Committee member) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Heart transplantation is the final treatment option for end-stage heart failure. In the United States, 70 pediatric patients die annually on the waitlist while 800 well-functioning organs are discarded. Concern about potential size mismatch is one source of allograft waste and high waitlist mortality. Clinicians use the donor-recipient body weight (DRBW) ratio, a standalone metric, to evaluate allograft size match. However, this body weight metric is far removed from cardiac anatomy and neglects an individual's anatomical variations. This thesis developed a novel virtual heart transplant fit assessment tool and investigated the tool's clinical utility in helping clinicians safely expand patient donor pools.

The tool allowed surgeons to take an allograft reconstruction and fuse it to a patient's CT or MR medical image for virtual fit assessment. The allograft is either a reconstruction of the donor's actual heart (from CT or MR images) or an analogue from a healthy-heart library. The analogue allograft geometry is identified from gross donor parameters using a regression model built herein. The regression model is needed because donor images may not exist, or may not become available within the time window clinicians have to make a provisional acceptance of an offer.

The tool's assessment suggested that more than 20% of upper DRBW listings at Phoenix Children's Hospital (PCH) could have been increased. Upper DRBW listings in the UNOS national database were statistically smaller than those at PCH (p-value < 0.001). The delayed sternal closure and surgeon-perceived complication variables were associated (p-value: 0.000016); 9 of the 11 cases in which surgeons perceived fit-related complications had delayed closures (p-value: 0.034809).

A tool to assess allograft size-match has been developed. Findings warrant future preclinical and clinical prospective studies to further assess the tool’s clinical utility.
Contributors: Plasencia, Jonathan (Author) / Frakes, David H. (Thesis advisor) / Kodibagkar, Vikram (Thesis advisor) / Sadleir, Rosalind (Committee member) / Kamarianakis, Yiannis (Committee member) / Zangwill, Steven (Committee member) / Pophal, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

The impacts of land-cover composition on urban temperatures, including temperature extremes, are well documented. Much less attention has been devoted to the consequences of land-cover configuration, and most of the work that exists addresses land surface temperatures. This study explores the role of both composition and configuration—or land system architecture—of residential neighborhoods in the Phoenix metropolitan area on near-surface air temperature. It addresses two-dimensional, spatial attributes of buildings, impervious surfaces, bare soil/rock, vegetation and the "urbanscape" at large, from 50 m to 550 m at 100 m increments, for a representative 30-day high-sun period. Linear mixed-effects models evaluate the significance of land system architecture metrics at different spatial aggregation levels. The results indicate that, controlling for land-cover composition and geographical variables, land-cover configuration, specifically the fractal dimension of buildings, is significantly associated with near-surface temperatures. In addition, statistically significant predictors related to composition and configuration appear to depend on the adopted level of spatial aggregation.
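The fractal dimension metric highlighted in the results can be estimated in several ways; box counting on a rasterized footprint is one common approach (landscape-ecology studies often use a perimeter-area definition instead, and the paper's exact metric is not specified here). The sketch below is a generic, hypothetical illustration on toy grids.

```python
import math

def box_counting_dimension(cells, sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension of a binary footprint by box counting.

    `cells` is a set of occupied (row, col) grid cells, e.g. rasterized
    building footprints. For each box size s, count the boxes containing at
    least one occupied cell; the slope of log N(s) versus log(1/s) is the
    dimension estimate."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(r // s, c // s) for r, c in cells}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # Least-squares slope of ys on xs.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

# Sanity checks: a filled square is two-dimensional, a line one-dimensional.
square = {(r, c) for r in range(16) for c in range(16)}
line = {(0, c) for c in range(16)}
```

Compact, simple footprints give dimensions near the Euclidean value, while convoluted building outlines push the estimate higher, which is how such a metric captures the configuration effects the study reports.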

Contributors: Kamarianakis, Yiannis (Author) / Li, Xiaoxiao (Author) / Turner II, B. L. (Author) / Brazel, Anthony J. (Author)
Created: 2017-12-05