Matching Items (6)
Description
Isentropic analysis is based on the concept of potential temperature, the temperature an air parcel would have if brought adiabatically to 1000 hPa. In the 1930s and 1940s this type of analysis proved valuable in indicating areas of increased moisture content and locations experiencing flow up or down adiabatic surfaces. However, in the early 1950s it faded out of use, and not until the twenty-first century did researchers begin once again to examine its usefulness. Prior research suggests that isentropic analysis could be practical in severe weather situations because of its ability to show adiabatic motion and moisture clearly. As a result, I analyzed monthly climatological isentropic surfaces to identify distinct patterns associated with tornado occurrences for specific regions and months across the contiguous United States. I collected tornado reports from 1974 through 2009 to create tornado regions for each month across the contiguous United States, along with corresponding upper-air data for the same period. I then separated these upper-air data into tornado and non-tornado days for specific regions and conducted synoptic and statistical analyses to establish differences between the two. Finally, I compared those results with analyses of individual case studies for each defined region using independent data from 2009 through 2010. On tornado days, distinct patterns can be identified on the isentropic surface: (1) the average isentropic surface lowered on tornado days, indicating a trough across the region; (2) a corresponding increase in moisture content occurred across the tornado region; and (3) the wind shifted in such a manner as to produce flow up the isentropic trough, indicating uplift. When the climatological results are compared with the case studies, the isentropic pattern for the case studies is generally more pronounced than the climatological pattern; however, this is expected, since averaging smooths the pattern and conditions. These findings begin to bridge a large gap in the literature, demonstrate the usefulness of isentropic analysis for monthly and daily use, and serve as a catalyst for creating a finer-resolution database in isentropic coordinates.
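For readers unfamiliar with the quantity this abstract builds on, the following is a minimal sketch of how potential temperature is computed from a pressure-level temperature (Poisson's equation); the function name and constant values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def potential_temperature(T_kelvin, p_hpa, p0_hpa=1000.0):
    """Poisson's equation: the temperature a parcel would have if brought
    dry-adiabatically to the reference pressure p0 (here 1000 hPa)."""
    R_over_cp = 287.05 / 1004.0  # dry-air gas constant over specific heat
    return np.asarray(T_kelvin) * (p0_hpa / np.asarray(p_hpa)) ** R_over_cp

# Example: a 263 K parcel at 700 hPa maps to roughly 291 K potential temperature,
# so it lies near the 290 K isentropic surface.
print(potential_temperature(263.0, 700.0))
```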
Contributors: Pace, Matthew Brandon (Author) / Cerveny, Randall S. (Thesis advisor) / Selover, Nancy J. (Committee member) / Brazel, Anthony J. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Energy use within urban building stocks is continuing to increase globally as populations expand and access to electricity improves. This projected increase in demand could require deployment of new generation capacity, but there is potential to offset some of this demand through modification of the buildings themselves. Building stocks are quasi-permanent infrastructures that have an enduring influence on urban energy consumption, and research is needed to understand: 1) how development patterns constrain energy use decisions and 2) how cities can achieve energy and environmental goals given the constraints of the stock. This requires a thorough evaluation of both the growth of the stock and the spatial distribution of use throughout the city. In this dissertation, a case study in Los Angeles County, California (LAC) is used to quantify urban growth, forecast future energy use under climate change, and make recommendations for mitigating increases in energy consumption. A reproducible methodological framework is included for application to other urban areas.

In LAC, residential electricity demand could increase by as much as 55-68% between 2020 and 2060, and building technology lock-in has constrained the options for mitigating energy demand: major changes to the building stock itself are not possible because only a small portion of the stock turns over each year. Aggressive and timely efficiency upgrades to residential appliances and building thermal shells can significantly offset the projected increases, potentially avoiding installation of new generation capacity, but regulations on new construction will likely be ineffectual due to the long residence time of the stock (60+ years and increasing). These findings can be extrapolated to other U.S. cities where the majority of urban expansion has already occurred, such as the older cities of the East Coast. The U.S. population is projected to increase by 40% by 2060, with growth occurring in the warmer southern and western regions. In these growing cities, improving new construction can help offset electricity demand increases before the city reaches the lock-in phase.
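The lock-in argument can be illustrated with a back-of-the-envelope turnover calculation; the annual turnover rate below is a hypothetical assumption chosen only to match the quoted 60+ year residence time, not a figure from the dissertation.

```python
# Hypothetical illustration of building-stock lock-in: if only a small share of
# the stock is replaced each year, standards that apply to new construction
# take decades to reach most of the stock.
turnover_rate = 1.0 / 60.0          # assumed ~60-year average residence time
share_built_to_new_code = 0.0
for year in range(2020, 2061):
    share_built_to_new_code += turnover_rate * (1.0 - share_built_to_new_code)
print(f"Share of 2060 stock covered by post-2020 codes: {share_built_to_new_code:.0%}")
# Roughly half after 40 years: most of the stock still predates the regulation.
```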
Contributors: Reyna, Janet Lorel (Author) / Chester, Mikhail V. (Thesis advisor) / Gurney, Kevin (Committee member) / Reddy, T. Agami (Committee member) / Rey, Sergio (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Cancer is a disease involving abnormal growth of cells, and its growth dynamics are perplexing. Mathematical modeling is a way to shed light on this process and on its medical treatments. This dissertation studies cancer invasion in time and space using a mathematical approach. Chapter 1 presents a detailed review of the literature on cancer modeling.

Chapter 2 focuses solely on time, where the escape of a generic cancer from immune control is described by stochastic delayed differential equations (SDDEs). Without time delay and noise, this system demonstrates bistability. The effects of the response time of the immune system and of stochasticity in the tumor proliferation rate are studied by including delay and noise in the model. Stability, persistence, and extinction of the tumor are analyzed. The results show that both time delay and noise can induce the transition from the low tumor burden equilibrium to the high tumor burden equilibrium. The aforementioned work has been published (Han et al., 2019b).
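A minimal sketch of the kind of simulation such a model requires (Euler-Maruyama integration of a delayed stochastic equation) is given below; the specific drift terms, parameter values, and delay are illustrative assumptions, not the model of Han et al.

```python
import numpy as np

# Illustrative SDDE: delayed-logistic tumor growth with a delayed suppression
# term standing in for the immune response, plus multiplicative noise.
rng = np.random.default_rng(0)
dt, T, tau = 0.01, 50.0, 1.0           # time step, horizon, response delay
r, K, a, sigma = 0.5, 1.0, 0.45, 0.05  # growth, capacity, suppression, noise
n, lag = int(T / dt), int(tau / dt)
x = np.full(n + 1, 0.1)                # constant history on [-tau, 0]
for i in range(n):
    delayed = x[i - lag] if i >= lag else x[0]
    drift = r * x[i] * (1 - x[i] / K) - a * x[i] * delayed
    noise = sigma * x[i] * np.sqrt(dt) * rng.standard_normal()
    x[i + 1] = max(x[i] + drift * dt + noise, 0.0)
print(f"tumor burden at t={T}: {x[-1]:.3f}")
```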

In Chapter 3, glioblastoma multiforme (GBM) is studied using a partial differential equation (PDE) model. GBM is an aggressive brain cancer with a grim prognosis. A mathematical model of GBM growth with explicit motility, birth, and death processes is proposed. A novel method is developed to approximate key characteristics of the wave profile, which can be compared with MRI data. Several test cases of MRI data from GBM patients are used to yield personalized parameterizations of the model. The aforementioned work has been published (Han et al., 2019a).
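The following is a minimal 1-D sketch of a reaction-diffusion tumor model with explicit motility, birth, and death terms, solved by forward-Euler finite differences; the functional form and parameter values are illustrative assumptions rather than the dissertation's fitted model.

```python
import numpy as np

# Illustrative Fisher-KPP-type model: u_t = D u_xx + rho u (1 - u) - delta u
D, rho, delta = 0.1, 0.25, 0.05          # mm^2/day, 1/day, 1/day (hypothetical)
dx, dt, L, T = 0.5, 0.1, 100.0, 200.0    # grid spacing (mm), time step (day)
x = np.arange(0.0, L + dx, dx)
u = np.exp(-x**2)                        # small initial lesion, normalized density
for _ in range(int(T / dt)):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude no-flux boundaries
    u = u + dt * (D * lap + rho * u * (1 - u) - delta * u)
front = x[np.argmax(u < 0.5)] if (u < 0.5).any() else L
print(f"position of the u = 0.5 front after {T:.0f} days: {front:.1f} mm")
```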

Chapter 4 presents an innovative way of forecasting spatial cancer invasion. Most mathematical models, including the ones described in the previous chapters, are formulated on strong assumptions, which are hard, if not impossible, to verify due to the complexity of biological processes and a lack of quality data. Instead, a nonparametric forecasting method using Gaussian processes is proposed. By exploiting the local nature of the spatio-temporal process, data that are sparse in time are sufficient for forecasting. Desirable properties of Gaussian processes facilitate selection of the size of the local neighborhood and computationally efficient propagation of uncertainty. The method is tested on synthetic data and demonstrates promising results.
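A minimal sketch of the local Gaussian-process forecasting idea on synthetic data follows; the kernel, neighborhood size, and library choice (scikit-learn) are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to a small local neighborhood of recent observations and forecast
# ahead with uncertainty; the data here are synthetic.
rng = np.random.default_rng(1)
t = np.arange(30, dtype=float)
y = np.exp(0.08 * t) + rng.normal(0.0, 0.1, t.size)  # noisy synthetic growth signal
window = 10                                           # local neighborhood in time
t_loc, y_loc = t[-window:, None], y[-window:]
gp = GaussianProcessRegressor(RBF(length_scale=5.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(t_loc, y_loc)
t_next = np.array([[30.0], [31.0], [32.0]])
mean, std = gp.predict(t_next, return_std=True)       # forecast with uncertainty
print(np.round(mean, 2), np.round(std, 2))
```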
Contributors: Han, Lifeng (Author) / Kuang, Yang (Thesis advisor) / Fricks, John (Thesis advisor) / Kostelich, Eric (Committee member) / Baer, Steve (Committee member) / Gumel, Abba (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Accurate forecasting of electricity prices has been a key factor for bidding strategies in electricity markets. The increase in renewable generation due to large-scale PV and wind deployment in California has led to an increase in day-ahead and real-time price volatility. It has also led to prices going negative due to the supply-demand imbalance caused by excess renewable generation during instances of low demand. This research focuses on applying machine learning models to analyze the impact of renewable generation on the hourly locational marginal prices (LMPs) for the California Independent System Operator (CAISO). Historical data on load, solar and wind generation, fuel prices, and aggregated generation outages are collected into a dataset and used as features to train different machine learning models. Tree-based machine learning models such as Extra Trees, Gradient Boost, and Extreme Gradient Boost (XGBoost), as well as models based on neural networks such as long short-term memory (LSTM) networks, are implemented for price forecasting. The focus is to capture the best relation between the features and the target LMP variable and to determine the weight of each feature in determining the price.
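A minimal sketch of this tree-based workflow on synthetic data is shown below; the column names mirror the features listed in the abstract, but the data, model settings, and importance measure are illustrative assumptions rather than the CAISO study itself.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

# Synthetic hourly features and LMP target, then a tree ensemble whose
# feature_importances_ rank the drivers of the price.
rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "load_MW": rng.normal(25000, 4000, n),
    "solar_MW": rng.normal(6000, 2500, n).clip(0),
    "wind_MW": rng.normal(2500, 1200, n).clip(0),
    "gas_price": rng.normal(3.0, 0.5, n),
    "outages_MW": rng.normal(4000, 1500, n).clip(0),
})
lmp = (0.002 * X["load_MW"] - 0.003 * X["solar_MW"] - 0.002 * X["wind_MW"]
       + 8 * X["gas_price"] + rng.normal(0, 3, n))   # synthetic LMP ($/MWh)
X_tr, X_te, y_tr, y_te = train_test_split(X, lmp, test_size=0.2, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(dict(zip(X.columns, np.round(model.feature_importances_, 3))))
print("test R^2:", round(model.score(X_te, y_te), 3))
```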

The impact of renewable generation on LMP forecasting is determined for several different days in 2018. Prices are impacted significantly by solar and wind generation, which ranks second in impact after the electric load. This research proposes a method to evaluate the impact of several parameters on the day-ahead price forecast; it would help grid operators identify which parameters significantly affect the day-ahead price prediction and which low-impact parameters could be ignored without introducing error into the forecast.
Contributors: Vad, Chinmay (Author) / Honsberg, C. (Christiana B.) (Thesis advisor) / King, Richard R. (Committee member) / Kurtz, Sarah (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This dissertation studies how forecasting performance can be improved with big data. The first chapter, coauthored with Seung C. Ahn, considers Partial Least Squares (PLS) estimation of a time-series forecasting model with data containing a large number of time-series observations of many predictors. In the model, a subset or the whole set of the latent common factors in the predictors determines a target variable. First, the optimal number of PLS factors for forecasting could be smaller than the number of common factors relevant for the target variable. Second, as more than the optimal number of PLS factors is used, the out-of-sample explanatory power of the factors could decrease while their in-sample power may increase. Monte Carlo simulations confirm these asymptotic results. In addition, the simulations indicate that the out-of-sample forecasting power of the PLS factors is often higher when fewer than the asymptotically optimal number of factors are used. Finally, the out-of-sample forecasting power of the PLS factors often decreases as the second, third, and later factors are added, even if the asymptotically optimal number of factors is greater than one.

The second chapter studies the predictive performance of various factor estimation methods comprehensively. A big data set consisting of major U.S. macroeconomic and financial variables is constructed, and 148 target variables are forecast using 7 factor estimation methods with 11 information criteria. First, the number of factors used in forecasting is important, and incorporating more factors does not always provide better forecasting performance. Second, using a consistently estimated number of factors does not necessarily improve predictive performance; the first PLS factor, which is not theoretically consistent, very often shows strong forecasting performance. Third, there is a large difference in forecasting performance across different information criteria, even when the same factor estimation method is used. Therefore, the choice of factor estimation method, as well as the information criterion, is crucial in forecasting practice. Finally, the first PLS factor yields forecasting performance very close to the best result from all combinations of the 7 factor estimation methods and 11 information criteria.
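A minimal sketch of the PLS-factor forecasting exercise on simulated factor data follows; the data-generating process, sample sizes, and use of scikit-learn's PLSRegression are illustrative assumptions, not the chapter's actual setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# Many predictors driven by a few latent factors, only one of which moves the
# target; compare out-of-sample fit as the number of PLS factors grows.
rng = np.random.default_rng(3)
T, N = 240, 100                               # time periods, predictors
factors = rng.normal(size=(T, 3))             # three latent common factors
loadings = rng.normal(size=(3, N))
X = factors @ loadings + rng.normal(scale=1.0, size=(T, N))
y = factors[:, 0] + rng.normal(scale=0.5, size=T)   # target loads on one factor
X_tr, X_te, y_tr, y_te = X[:180], X[180:], y[:180], y[180:]
for k in (1, 2, 3, 5):
    pls = PLSRegression(n_components=k).fit(X_tr, y_tr)
    print(f"{k} PLS factor(s): out-of-sample R^2 = "
          f"{r2_score(y_te, pls.predict(X_te).ravel()):.3f}")
# Typically the first PLS factor does most of the work; adding more can lower
# out-of-sample fit even as in-sample fit improves.
```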
Contributors: Bae, Juhui (Author) / Ahn, Seung (Thesis advisor) / Pruitt, Seth (Committee member) / Kuminoff, Nicolai (Committee member) / Ferraro, Domenico (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The need for effective forecasting models for multi-variate time series has been underlined by the integration of sensory technologies into essential applications such as building energy optimization, flight monitoring, and health monitoring. To meet this requirement, time-series prediction techniques have been extended from uni-variate to multi-variate. However, because the extended models capture the intrinsic relationships among variates poorly, naïve extensions of prediction approaches result in an unwanted rise in the cost of model learning and, more critically, a significant loss in model performance. While recurrent models like long short-term memory (LSTM) networks and recurrent neural networks (RNNs) are designed to capture the temporal intricacies in data, their performance can quickly deteriorate.

In this thesis, I claim that (a) by exploiting temporal alignments of variates to quantify the importance of the recorded variates in relation to a target variate, one can build a more accurate forecasting model. I also argue that (b) traditional time-series similarity/distance functions, such as Dynamic Time Warping (DTW), which require that variates have similar absolute patterns, are fundamentally ill-suited for this purpose, and that temporal correlation should instead be quantified in terms of temporal alignments of key "events" impacting these series, rather than series similarity. Further, I propose that (c) while learning a temporal model with recurrence-based techniques (such as RNNs and LSTMs, even when leveraging attention strategies) is challenging and expensive, better results can be obtained by coupling simpler convolutional neural networks (CNNs) with an adaptive variate-selection strategy.

Putting these together, I introduce a novel Selego framework for variate selection based on these arguments, and I experimentally evaluate the performance of the proposed approach with various forecasting models, such as LSTM, RNN, and CNN, for different top-X% of variates and different forecast lead times, on multiple real-world data sets. Experiments demonstrate that the proposed framework can reduce the number of recorded variates required to train predictive models by 90-98% while also increasing accuracy. Finally, I present a fault onset detection technique that leverages the precise baseline forecasting models trained using the Selego framework. The proposed Selego-enabled Fault Detection Framework (FDF-Selego) has been experimentally evaluated within the context of detecting the onset of faults in a building's Heating, Ventilation, and Air Conditioning (HVAC) system.
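A minimal sketch of the variate-selection idea, scoring candidate variates by how well their "events" align in time with the target's events and keeping the top-X%, is given below; the event detector, scoring rule, and thresholds are hypothetical stand-ins, not the Selego algorithm itself.

```python
import numpy as np

# Score each candidate variate by the fraction of the target's "events" (here,
# simply large jumps) that it matches within a small time tolerance, then keep
# the top-X% of variates for training a forecasting model.
rng = np.random.default_rng(4)
T, V = 500, 50
data = rng.normal(size=(T, V)).cumsum(axis=0)        # synthetic multi-variate series
target = data[:, 0] + rng.normal(scale=0.5, size=T)

def event_times(series, q=0.95):
    jumps = np.abs(np.diff(series))
    return np.flatnonzero(jumps > np.quantile(jumps, q))

def alignment_score(ev_target, ev_candidate, tolerance=5):
    if len(ev_target) == 0 or len(ev_candidate) == 0:
        return 0.0
    hits = sum(np.any(np.abs(ev_candidate - t) <= tolerance) for t in ev_target)
    return hits / len(ev_target)                      # fraction of target events matched

target_events = event_times(target)
scores = [alignment_score(target_events, event_times(data[:, v])) for v in range(V)]
top_x = 0.10                                          # keep top 10% of variates
keep = np.argsort(scores)[::-1][: max(1, int(top_x * V))]
print("selected variates:", keep)
```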
Contributors: Tiwaskar, Manoj (Author) / Candan, K. Selcuk (Thesis advisor) / Sapino, Maria Luisa (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2021