Matching Items (19)
Description
Numerical climate models have provided scientists, policy makers, and the general public with crucial information for climate projections since the mid-20th century. An international effort to compare and validate the simulations of all major climate models is organized by the Coupled Model Intercomparison Project (CMIP), which has gone through several phases since 1995, with CMIP5 being the state of the art. In parallel, an organized effort to consolidate all observational data of the past century has culminated in the creation of several "reanalysis" datasets that are considered the closest representation of the true observations. This study compared the climate variability and trends in climate model simulations and observations on timescales ranging from interannual to centennial. The analysis focused on the dynamical climate quantities of zonal-mean zonal wind and global atmospheric angular momentum (AAM), and incorporated multiple datasets from reanalyses and the most recent CMIP3 and CMIP5 archives. For the observations, validation of AAM against the length of day (LOD) and an intercomparison of AAM revealed good agreement among reanalyses on interannual and decadal-to-interdecadal timescales, respectively, but the most significant discrepancies among them lie in the long-term mean and long-term trend. For the simulations, the CMIP5 models produced a significantly smaller bias and a narrower ensemble spread of the 20th-century AAM climatology and trend than CMIP3, while CMIP3 and CMIP5 simulations consistently produced a positive trend for the 20th and 21st centuries. Both CMIP3 and CMIP5 models produced a wide range of magnitudes of decadal and interdecadal variability of the wind component of AAM (MR) compared to observations. The ensemble means of CMIP3 and CMIP5 are not statistically distinguishable for either the 20th- or 21st-century runs. In-house atmospheric general circulation model (AGCM) simulations were carried out, forced at the lower boundary by sea surface temperature (SST) taken from the CMIP5 simulations. The zonal wind and MR of the CMIP5 simulations are well reproduced by the AGCM, confirming SST as an important mediator in regulating global atmospheric changes due to the greenhouse gas (GHG) effect.
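For reference, the wind ("MR") term of global AAM discussed above is conventionally obtained by integrating the zonal wind over the mass of the atmosphere. The sketch below is a minimal numpy illustration of that standard formula, M_R = (a^3/g) ∫∫∫ u cos²φ dλ dφ dp, not the dissertation's own code; the grid layout and variable names are assumptions.

```python
import numpy as np

A_EARTH = 6.371e6   # Earth radius [m]
G = 9.81            # gravitational acceleration [m s^-2]

def aam_wind_term(u_zm, lat_deg, p_levels):
    """Wind (relative) term of global atmospheric angular momentum:
        M_R = (2*pi*a^3/g) * double integral of [u] cos^2(phi) dphi dp,
    where [u] is the zonal-mean zonal wind. Assumed shapes:
    u_zm (n_lev, n_lat) [m/s], lat_deg (n_lat,) [deg],
    p_levels (n_lev,) [Pa], both coordinate arrays ascending."""
    phi = np.deg2rad(lat_deg)
    integrand = u_zm * np.cos(phi) ** 2            # [u] cos^2(phi)
    merid = np.trapz(integrand, phi, axis=1)       # integrate over latitude
    return (2.0 * np.pi * A_EARTH**3 / G) * np.trapz(merid, p_levels)
```

An anomaly time series of this quantity is what can be validated against LOD, since changes in AAM are mirrored by changes in the solid Earth's rotation rate.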
Contributors: Paek, Houk (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Wang, Zhihua (Committee member) / Anderson, James (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The partitioning of available solar energy into different fluxes at the Earth's surface is important in determining physical processes such as turbulent transport, subsurface hydrology, and land-atmosphere interactions. Direct measurements of these turbulent fluxes are carried out using eddy-covariance (EC) towers. However, the distribution of EC towers is sparse due to their relatively high cost and practical difficulties in logistics and deployment. As a result, the data are temporally and spatially limited and inadequate for research at large scales, such as regional and global climate modeling. Besides field measurements, an alternative is to estimate turbulent fluxes from the intrinsic relations between surface energy budget components, largely through thermodynamic equilibrium. These relations, referred to as relative efficiencies, have been included in several models to estimate the magnitude of turbulent fluxes in the surface energy budget, such as latent and sensible heat. In this study, three theoretical models, based on a lumped heat transfer model, linear stability analysis, and the maximum entropy principle, respectively, were investigated. Model predictions of relative efficiencies were compared with turbulent flux data over different land covers, viz. lake, grassland, and suburban surfaces. Similar results were observed over the lake and suburban surfaces, but significant deviations were found over the vegetated surface. The relative efficiency of outgoing longwave radiation was found to deviate from theoretical predictions by orders of magnitude. Meanwhile, the results show that the energy partitioning process is strongly influenced by surface water availability. The study provides insight into which properties determine the energy partitioning process over different land covers and offers suggestions for future models.
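As background for the energy-partitioning language above: the surface energy budget is commonly written Rn = H + LE + G, and simple partitioning diagnostics can be computed directly from EC flux data. The sketch below is a generic illustration under that assumption; the fraction-based "efficiency" shown here is a simplification for orientation, not the specific relative-efficiency definitions of the three models studied, and the example numbers are hypothetical.

```python
def energy_partitioning(rn, g0, h, le):
    """Diagnose surface energy partitioning from flux observations,
    assuming the budget Rn = H + LE + G (all fluxes in W m^-2).
    Returns the Bowen ratio and the fraction of available energy
    (Rn - G) going into each turbulent flux."""
    avail = rn - g0                 # available energy at the surface
    bowen = h / le                  # Bowen ratio H/LE
    f_h = h / avail                 # sensible-heat fraction
    f_le = le / avail               # latent-heat fraction
    closure = (h + le) / avail      # energy-balance closure ratio
    return bowen, f_h, f_le, closure

# Example: a midday grassland observation (hypothetical numbers)
print(energy_partitioning(rn=450.0, g0=50.0, h=120.0, le=260.0))
```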
Contributors: Yang, Jiachuan (Author) / Wang, Zhihua (Thesis advisor) / Huang, Huei-Ping (Committee member) / Vivoni, Enrique (Committee member) / Mays, Larry (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler lidar (light detection and ranging). A detailed analysis of the optimal interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique yields a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and the analysis increment calculation. The modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. To study the error of representativeness and the vector retrieval error, a lidar simulator was constructed, and with it a thorough sensitivity analysis of the lidar measurement process and vector retrieval was carried out. The error of representativeness as a function of scales of motion, and the sensitivity of the vector retrieval to look angle, were quantified. Using the modified OI technique, a study of the nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in-situ instruments are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind-energy development site, and estimates of turbulence and shear from this technique are compared to tower measurements. Finally, a formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine the wind energy content in the presence of turbulence.
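For context, the OI analysis step that such retrievals build on has the standard form x_a = x_b + K(y − H x_b), with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. Below is a minimal numpy sketch of that textbook update, with the lidar look-angle geometry entering through the observation operator H. The specific covariance partitioning and binning modifications described above are not reproduced, and all values in the usage example are hypothetical.

```python
import numpy as np

def oi_update(xb, B, H, R, y):
    """Textbook Optimal Interpolation analysis:
        xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^{-1}."""
    innovation = y - H @ xb                 # observation-minus-background
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # gain matrix
    return xb + K @ innovation, K

# Hypothetical 2-D wind retrieval from two lidar radial velocities:
theta = np.deg2rad([30.0, 120.0])                     # look angles
H = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # project wind onto beams
xb = np.array([5.0, 0.0])                             # background wind [m/s]
B = 4.0 * np.eye(2)                                   # background error cov.
R = 0.25 * np.eye(2)                                  # radial-velocity error cov.
y = np.array([6.2, -1.1])                             # observed radial velocities
xa, K = oi_update(xb, B, H, R, y)
```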
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The implications of a changing climate have a profound impact on human life, society, and policy making. The need for accurate climate prediction becomes increasingly important as we better understand these implications. Currently, the most widely used climate predictions rely on the synthesis of climate model simulations organized by the Coupled Model Intercomparison Project (CMIP); these simulations are ensemble-averaged to construct projections of the 21st-century climate. However, a significant degree of bias and variability in the model simulations of the 20th-century climate is well known at both global and regional scales. Based on that insight, this study provides an alternative approach for constructing climate projections that incorporates knowledge of model bias. This approach is demonstrated to be a viable alternative that can be easily implemented by water resource managers for potentially more accurate projections. The new approach is tested on a global scale, with an emphasis on semiarid regions for their particular vulnerability to changes in water resources, using both the former CMIP Phase 3 (CMIP3) and current Phase 5 (CMIP5) model archives. This investigation is accompanied by a detailed analysis of the dynamical processes and the water budget to understand the behaviors and sources of model biases. Sensitivity studies of selected CMIP5 models are also performed with an atmospheric component model by testing the relationship between climate change forcings and the model-simulated response. The information derived from each study is used to assess the progressive quality of coupled climate models in simulating the global water cycle by rigorously investigating sources of model bias related to the moisture budget. As such, the conclusions of this project are highly relevant to model development and may be used to further improve climate projections.
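One common way to "incorporate knowledge of model bias" when constructing projections is the delta-change method, in which the model-simulated change is added to an observed baseline so that each model's mean historical bias cancels; another is to down-weight models with large historical bias. The sketch below illustrates both under those assumptions. It is not necessarily the dissertation's specific scheme, and the function names are hypothetical.

```python
import numpy as np

def delta_projection(obs_base, model_hist, model_future):
    """Delta-change construction: add the ensemble-mean model-simulated
    change to the observed baseline so each model's mean historical
    bias cancels. model_hist/model_future: (n_models, ...) arrays;
    obs_base: (...) observed baseline field."""
    change = model_future - model_hist           # per-model change signal
    return obs_base + change.mean(axis=0)

def bias_weighted_mean(obs_base, model_hist, model_future, eps=1e-6):
    """Alternative: weight each model's projection inversely by the
    magnitude of its historical bias against observations."""
    bias = np.abs(model_hist - obs_base)         # per-model historical bias
    w = 1.0 / (bias + eps)
    w = w / w.sum(axis=0)                        # normalize over models
    return (w * model_future).sum(axis=0)
```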
Contributors: Baker, Noel C. (Author) / Huang, Huei-Ping (Thesis advisor) / Trimble, Steve (Committee member) / Anderson, James (Committee member) / Clarke, Amanda (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study considered the impact of grid resolution on wind velocity simulated by the Weather Research and Forecasting (WRF) model. The period simulated spanned November 2009 through January 2010, for which multi-resolution nested domains were examined. A basic analysis utilizing the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis determined that the ideal location to examine in the simulation was the Pacific Northwest portion of the United States, specifically the border between California and Oregon. The simulated multi-resolution nested domains in this region indicated an increase in apparent wind speed as the resolution of the domain was increased. These findings were confirmed by statistical analysis, which identified a positive bias in wind speed with respect to increased resolution, as well as a correlation coefficient indicating a positive change in wind speed with increased resolution. An analysis of temperature change was performed to test the validity of the WRF simulation findings. The statistical analysis of temperature across the increased grid resolutions did not indicate any change; in fact, the correlation coefficients between the domains were in the 0.90 range, indicating the insensitivity of temperature to the increased resolutions. These results validate the finding of the WRF simulations that increased wind velocity is observed at higher grid resolution. The study then considered the difference between the wind velocity observed over the entire domains and that observed solely at offshore locations. Wind velocity was found to be significantly higher (an increase of 68.4%) at the offshore locations. The findings of this study suggest that simulation tools should be used to examine domains at higher resolution in order to identify potential locations for wind farms, and further suggest that the ideal locations for these potential wind farms will be offshore.
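The resolution-sensitivity statistics described above reduce to a mean bias and a correlation coefficient between collocated fields from two nested domains. A minimal sketch, assuming the two domains have already been regridded to common points (function name and layout are illustrative):

```python
import numpy as np

def resolution_sensitivity(coarse, fine):
    """Compare a field simulated on two nested WRF domains after
    regridding to common points. A positive mean difference indicates
    higher values at the finer resolution; the correlation coefficient
    measures how well the pattern is preserved across resolutions.
    coarse, fine: 1-D arrays of collocated samples."""
    bias = np.mean(fine - coarse)                 # resolution bias
    r = np.corrcoef(coarse, fine)[0, 1]           # correlation coefficient
    pct = 100.0 * bias / np.mean(coarse)          # bias as % of coarse mean
    return bias, r, pct
```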
Contributors: Bouey, Michael (Author) / Huang, Huei-Ping (Thesis advisor) / Trimble, Steve (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Generative models are deep neural network-based models trained to learn the underlying distribution of a dataset. Once trained, these models can be used to sample novel data points from this distribution. Their impressive capabilities have been demonstrated in various generative tasks, encompassing areas like image-to-image translation, style transfer, image editing, and more. One notable application of generative models is data augmentation, aimed at expanding and diversifying the training dataset to improve the performance of deep learning models on a downstream task. Generative models can be used to create new samples similar to the original data but with variations and properties that are difficult to capture with traditional data augmentation techniques. However, the quality, diversity, and controllability of the shape and structure of the generated samples are often directly tied to the size and diversity of the training dataset. A larger and more diverse training dataset allows the generative model to capture the overall structures present in the data and to generate more diverse and realistic-looking samples. In this dissertation, I present innovative methods designed to enhance the robustness and controllability of generative models, drawing upon physics-based, probabilistic, and geometric techniques. These methods improve the generalization and controllability of the generative model without necessarily relying on large training datasets. I enhance the robustness of generative models by integrating classical geometric moments for shape awareness and by minimizing trainable parameters. Additionally, I employ non-parametric priors for the generative model's latent space, through basic probability and optimization methods, to improve the fidelity of interpolated images. I adopt a hybrid approach to address domain-specific challenges with limited data and controllability, combining physics-based rendering with generative models for more realistic results. These approaches are particularly relevant in industrial settings, where training datasets are small and class imbalance is common. Through extensive experiments on various datasets, I demonstrate the effectiveness of the proposed methods over conventional approaches.
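As an illustration of the "classical geometric moments" mentioned above, raw moments m_pq = Σ_x Σ_y x^p y^q I(x,y) of a grayscale image encode mass, centroid, and spread, and can serve as coarse shape descriptors. The sketch below computes them with numpy; how such moments are injected into a generative model (e.g., as a loss term or conditioning signal) is model-specific and not shown here.

```python
import numpy as np

def geometric_moments(img, order=2):
    """Raw geometric moments m_pq = sum_{x,y} x^p y^q I(x, y) of a 2-D
    grayscale image, up to the given total order. Low-order moments
    encode mass (m00), centroid (m10/m00, m01/m00), and spread.
    Assumes a non-empty (not all-zero) image."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    m = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            m[(p, q)] = float((xs**p * ys**q * img).sum())
    cx, cy = m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)]  # centroid
    return m, (cx, cy)
```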
Contributors: Singh, Rajhans (Author) / Turaga, Pavan (Thesis advisor) / Jayasuriya, Suren (Committee member) / Berisha, Visar (Committee member) / Fazli, Pooyan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation presents novel solutions for improving the generalization capabilities of deep learning-based computer vision models. Neural networks are known to suffer a large drop in performance when tested on samples from a distribution different from the one on which they were trained. The proposed solutions, based on latent space geometry and meta-learning, address this issue by improving the robustness of these models to distribution shifts. Through the use of geometrical alignment, state-of-the-art domain adaptation and source-free test-time adaptation strategies are developed. Geometrical alignment also allows classifiers to be progressively adapted to new, unseen test domains without retraining the feature extractors. The dissertation further presents algorithms for enabling in-the-wild generalization without access to any samples from the target domain. Other causes of poor generalization, such as data scarcity in critical applications and training data with high levels of noise and variance, are also explored. To address data scarcity in fine-grained computer vision tasks such as object detection, novel context-aware augmentations are suggested. While the first four chapters focus on general-purpose computer vision models, strategies are also developed to improve robustness in specific applications. The efficiency of training autonomous agents for visual navigation is improved by incorporating semantic knowledge, and the integration of domain experts' knowledge enables a low-cost, minimally invasive, generalizable automated rehabilitation system. Lastly, new tools for explainability and model introspection are presented, using counterfactual explainers trained through interval-based uncertainty calibration objectives.
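"Geometrical alignment" of latent spaces can take many forms; one classical, easily stated instance is PCA-based subspace alignment, which maps source features into the target feature subspace. The sketch below shows that baseline for concreteness only; it is not the specific algorithm developed in this dissertation, and it assumes d does not exceed the feature dimensionality or sample counts.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions of a feature matrix X of shape (n, D)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                                  # (D, d) basis

def subspace_alignment(Xs, Xt, d=32):
    """PCA-based subspace alignment (a classical baseline): map centered
    source features into the target subspace via Xs_c @ Ps @ (Ps^T Pt);
    target features are projected onto their own basis."""
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                                    # (d, d) alignment matrix
    Xs_aligned = (Xs - Xs.mean(axis=0)) @ Ps @ M
    Xt_proj = (Xt - Xt.mean(axis=0)) @ Pt
    return Xs_aligned, Xt_proj
```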
Contributors: Thopalli, Kowshik (Author) / Turaga, Pavan (Thesis advisor) / Thiagarajan, Jayaraman J (Committee member) / Li, Baoxin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Millimeter-wave (mmWave) and sub-terahertz (sub-THz) systems aim to utilize the large bandwidth available at these frequencies. This has the potential to enable several future applications that require high data rates, such as autonomous vehicles and digital twins. These systems, however, face several challenges that must be addressed before their gains can be realized in practice. First, they need to deploy large antenna arrays and use narrow beams to guarantee sufficient receive power, and adjusting the narrow beams of these large arrays incurs massive beam-training overhead. Second, sensitivity to blockages is a key challenge for mmWave and THz networks. Since these networks rely mainly on line-of-sight (LOS) links, sudden link blockages severely threaten their reliability. Further, when the LOS link is blocked, the network typically needs to hand the user off to another LOS base station, which may incur critical latency, especially if a search over a large codebook of narrow beams is needed. A promising way to tackle both challenges lies in leveraging additional side information such as visual, LiDAR, radar, and position data. These sensors provide rich information about the wireless environment that can be utilized for fast beam and blockage prediction. This dissertation presents a machine-learning framework for sensing-aided beam and blockage prediction. In particular, for beam prediction, this work proposes to utilize visual and positional data to predict the optimal beam indices. For the first time, this work investigates the sensing-aided beam prediction task in real-world vehicle-to-infrastructure and drone communication scenarios. Similarly, for blockage prediction, this dissertation proposes a multi-modal wireless communication solution that utilizes bimodal machine learning to perform proactive blockage prediction and user hand-off. Evaluations on both real-world and synthetic datasets illustrate the promising performance of the proposed solutions and highlight their potential for next-generation communication and sensing systems.
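As a concrete (if deliberately simple) baseline for position-aided beam prediction, the task can be treated as classification over a fixed beam codebook, predicting by looking up nearby previously recorded positions. The sketch below shows that nearest-neighbor baseline; the learned multi-modal models in the dissertation replace this with deep networks fusing vision, LiDAR, and position, and all names here are illustrative.

```python
import numpy as np

def predict_beam(user_pos, train_pos, train_beams, k=5):
    """Nearest-neighbor baseline for position-aided beam prediction:
    find the k recorded positions closest to the user and take a
    majority vote over their optimal beam indices (indices into a
    fixed beam codebook).
    user_pos: (2,), train_pos: (n, 2), train_beams: (n,) ints."""
    dists = np.linalg.norm(train_pos - user_pos, axis=1)
    nearest = np.argsort(dists)[:k]           # k closest recorded positions
    votes = np.bincount(train_beams[nearest]) # vote over their beam indices
    return int(np.argmax(votes))              # predicted optimal beam index
```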
Contributors: Charan, Gouranga (Author) / Alkhateeb, Ahmed (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Turaga, Pavan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Mixture of experts is a machine learning ensemble approach that consists of individual models that are trained to be "experts" on subsets of the data, and a gating network that provides weights to output a combination of the expert predictions. Mixture of experts models do not currently see wide use due to difficulty in training diverse experts and high computational requirements. This work presents modifications of the mixture of experts formulation that use domain knowledge to improve training, and incorporate parameter sharing among experts to reduce computational requirements.
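Concretely, mixture-of-experts inference computes y(x) = Σ_i g_i(x) f_i(x), with the gating network producing the weights g(x), typically via a softmax. A minimal numpy sketch of that combination rule follows; the toy experts and gate are illustrative only.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_experts(x, experts, gate):
    """y(x) = sum_i g_i(x) * f_i(x), with gating weights
    g(x) = softmax(gate(x)); experts are callables f_i."""
    g = softmax(gate(x))                          # (n_experts,) weights
    outputs = np.stack([f(x) for f in experts])   # (n_experts, out_dim)
    return g @ outputs                            # gated combination

# Toy usage: two linear "experts" and a trivial gate (illustrative only)
experts = [lambda x: 2.0 * x, lambda x: x + 1.0]
gate = lambda x: np.array([x.sum(), -x.sum()])
y = mixture_of_experts(np.array([0.3, -0.1]), experts, gate)
```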

First, this work presents an application of mixture-of-experts models for quality-robust visual recognition. It is shown that human subjects outperform deep neural networks on the classification of distorted images, and a model, MixQualNet, that is more robust to distortions is then proposed. The proposed model consists of "experts" that are each trained on a particular type of image distortion. The final output of the model is a weighted sum of the expert models, where the weights are determined by a separate gating network. The proposed model also incorporates weight sharing to reduce the number of parameters, as well as to increase performance.

Second, an application of mixture of experts to predict visual saliency is presented. A computational saliency model attempts to predict where humans will look in an image. In the proposed model, each expert network is trained to predict saliency for a set of closely related images. The final saliency map is computed as a weighted mixture of the expert networks' outputs, with weights determined by a separate gating network. The proposed model achieves better performance than several other visual saliency models and a baseline non-mixture model.

Finally, this work introduces a saliency model that is a weighted mixture of models trained for different levels of saliency. Levels of saliency include high saliency, which corresponds to regions where almost all subjects look, and low saliency, which corresponds to regions where some, but not all, subjects look. The weighted mixture shows improved performance compared with baseline models because of the diversity of the individual model predictions.
Contributors: Dodge, Samuel Fuller (Author) / Karam, Lina (Thesis advisor) / Jayasuriya, Suren (Committee member) / Li, Baoxin (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This study uses the Weather Research and Forecasting (WRF) model to simulate and predict the changes in local climate attributable to urbanization for five desert cities. The simulations are performed in the fashion of climate downscaling, constrained by surface boundary conditions generated from high-resolution land-use maps. For each city, land-use maps for 1985 and 2010 from Landsat satellite observations, and a projected land-use map for 2030, are used to represent the past, present, and future. An additional set of simulations for Las Vegas, the largest of the five cities, uses the NLCD 1992 and 2006 land-use maps and an idealized historical land-use map with no urban coverage for 1900.

The study finds that urbanization in Las Vegas produces a classic urban heat island (UHI) at night but a minor cooling during the day. A further analysis of the surface energy balance shows that the decrease in surface albedo and the increase in effective emissivity play important roles in shaping local climate change over urban areas. The emerging urban structures slow down the diurnal wind circulation over the city due to increased effective surface roughness, leading to a secondary modification of temperature through the interaction between the mechanical and thermodynamic effects of urbanization.
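The albedo and emissivity effects described above act through the surface net-radiation balance, Rn = (1 − α)·SW↓ + ε·(LW↓ − σT_s⁴). A minimal sketch with hypothetical nighttime numbers (SW↓ = 0), just to make the sign of each term concrete; the full UHI signal also involves heat storage and roughness effects not modeled here.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def net_radiation(sw_down, lw_down, albedo, emissivity, t_sfc):
    """Surface net radiation:
    Rn = (1 - albedo) * SW_down + emissivity * (LW_down - SIGMA * T_sfc**4).
    A lower albedo increases absorbed shortwave (a daytime effect); a
    higher effective emissivity alters the longwave exchange that
    matters most at night."""
    return (1.0 - albedo) * sw_down + emissivity * (lw_down - SIGMA * t_sfc**4)

# Hypothetical nighttime comparison (SW_down = 0; all numbers illustrative)
urban = net_radiation(0.0, 350.0, albedo=0.12, emissivity=0.95, t_sfc=300.0)
desert = net_radiation(0.0, 350.0, albedo=0.30, emissivity=0.90, t_sfc=295.0)
```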

The simulations for the five desert cities for 1985 and 2010 further confirm a common pattern of the climatic effect of urbanization with significant nighttime warming and moderate daytime cooling. This effect is confined to the urban area and is not sensitive to the size of the city or the detail of land cover in the surrounding areas. The pattern of nighttime warming and daytime cooling remains robust in the simulations for the future climate of the five cities using the projected 2030 land-use maps. Inter-city differences among the five urban areas are discussed.
Contributors: Kamal, Samy (Author) / Huang, Huei-Ping (Thesis advisor) / Anderson, James (Thesis advisor) / Herrmann, Marcus (Committee member) / Calhoun, Ronald (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created: 2015