Matching Items (11)

Description
Isentropic analysis is a type of analysis based on the concept of potential temperature, the temperature an air parcel would attain if brought adiabatically to 1000 hPa. In the 1930s and 1940s this type of analysis proved valuable in indicating areas of increased moisture content and locations experiencing flow up or down adiabatic surfaces. However, in the early 1950s it faded out of use, and only in the twenty-first century have researchers once again begun to examine its usefulness. One setting in which isentropic analysis could be practical, based on prior research, is severe weather, owing to its ability to show adiabatic motion and moisture directly. As a result, I analyzed monthly climatological isentropic surfaces to identify distinct patterns associated with tornado occurrences for specific regions and months across the contiguous United States. I collected tornado reports from 1974 through 2009 to define tornado regions for each month across the contiguous United States, along with corresponding upper-air data for the same period. I then separated these upper-air data into tornado and non-tornado days for specific regions and conducted synoptic and statistical analyses to establish differences between the two. Finally, I compared those results with analyses of individual case studies for each defined region using independent data from 2009 through 2010. On tornado days distinct patterns can be identified on the isentropic surface: (1) the average isentropic surface lowered, indicating a trough across the region; (2) a corresponding increase in moisture content occurred across the tornado region; and (3) the wind shifted so as to produce flow up the isentropic trough, indicating uplift. When the climatological results are compared with the case studies, the isentropic pattern in the case studies was generally more pronounced than the climatological pattern; this is expected, since averaging smooths the pattern. These findings begin to bridge a large gap in the literature, show the usefulness of isentropic analysis in monthly and daily application, and serve as a catalyst for creating a finer-resolution database in isentropic coordinates.
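
For reference, the potential temperature defined above is computed with Poisson's equation; the dry-air constants shown are standard values rather than figures taken from the thesis.

```latex
% Potential temperature: the temperature a parcel at pressure p and
% temperature T would attain if brought adiabatically to p_0 = 1000 hPa.
\theta = T \left( \frac{p_0}{p} \right)^{R_d / c_p},
\qquad \frac{R_d}{c_p} \approx 0.286 \quad (\text{dry air})
```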
ContributorsPace, Matthew Brandon (Author) / Cerveny, Randall S. (Thesis advisor) / Selover, Nancy J (Committee member) / Brazel, Anthony J. (Committee member) / Arizona State University (Publisher)
Created2012

Description
The purpose of this project is to provide our client with a tool to mitigate Company X's franchise-wide inventory control problem. The problem stems from the franchises' initial strategy of buying all inventory as customers brought it in, without a quantitative way for buyers to evaluate the store's inventory needs. The Excel solution created by our team provides that evaluation, using deseasonalized linear regression to forecast inventory needs for clothing of different sizes and seasons by month. The provided sales data from 2014-2016 showed a clear seasonal trend, so the appropriate forecasting model was determined by testing three models: Triple Exponential Smoothing, Deseasonalized Simple Linear Regression, and Multiple Linear Regression. The model calculates monthly optimal inventory levels (the current period plus the next two periods of inventory). The models were evaluated on mean absolute error (lower error meaning better fit with the data), and the best-fitting model was Deseasonalized Simple Linear Regression, which was then used to build the Excel tool. Buyers can use the tool to evaluate whether or not to buy a given item of any size or season. To do this, the model uses the previous year's sales data to forecast the optimal inventory level and compares it to the store's current inventory level. If the current level is less than the optimal level, the cell housing the current value turns green (buy). If the current level is greater than or equal to the optimal level but less than 1.05 times the optimal level, the cell turns yellow (buy only if good quality). If the current level is greater than 1.05 times the optimal level, the cell turns red (don't buy). We recommend both stores implement a way of tracking how many clothing items are held in each bin to keep a more accurate inventory count. In addition, the model's utility will be limited until both stores' inventories are at a level where they can afford to buy; it is therefore in the client's best interest to liquidate stale inventory into store credit or cash. In the future, the team would also like to develop a pricing model to better meet the needs of the client's two locations.
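
As a sketch of the traffic-light buy rule described above (function and variable names are illustrative, not the team's actual workbook formulas):

```python
def buy_signal(current: float, optimal: float, tolerance: float = 1.05) -> str:
    """Classify inventory against the forecast optimal level.

    Mirrors the color rule in the abstract: green = buy, yellow = buy
    only if good quality, red = don't buy.
    """
    if current < optimal:
        return "green"   # under-stocked: buy
    elif current < optimal * tolerance:
        return "yellow"  # near optimal: buy only if good quality
    else:
        return "red"     # over-stocked: don't buy

# Hypothetical item whose deseasonalized forecast calls for 40 units on hand
print(buy_signal(current=35, optimal=40))  # green
print(buy_signal(current=41, optimal=40))  # yellow
print(buy_signal(current=45, optimal=40))  # red
```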
ContributorsUribes-Yanez, Diego (Co-author) / Liu, Jessica (Co-author) / Taylor, Todd (Thesis director) / Gentile, Erica (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / School of International Letters and Cultures (Contributor) / School of Life Sciences (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05

Description
The listing price of residential rental real estate depends on property-specific attributes. These attributes involve data that can be tabulated as categorical and continuous predictors. The forecasting model presented in this paper is developed using publicly available, property-specific information sourced from the Zillow and Trulia online real estate databases. The following fifteen predictors were tracked for forty-eight rental listings in the 85281 ZIP code: housing type, square footage, number of baths, number of bedrooms, distance to Arizona State University's Tempe campus, crime level of the neighborhood, median age range of the neighborhood population, percentage of the neighborhood population that is married, median year of construction of the neighborhood, percentage of the population commuting longer than thirty minutes, percentage of neighborhood homes occupied by renters, percentage of the population commuting by transit, and the number of restaurants, grocery stores, and nightlife venues within a one-mile radius of the property. Through regression analysis, the significant predictors of the listing price of a rental property in the 85281 ZIP code were discerned. These predictors were used to form a forecasting model, which explains 75.5% of the variation in listing prices of residential rental real estate in the 85281 ZIP code.
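
A minimal sketch of this kind of hedonic regression in Python (column names are hypothetical; the thesis tracked the fifteen predictors listed above):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the Zillow/Trulia listing data.
listings = pd.read_csv("rentals_85281.csv")

# Categorical predictors enter via C(); the rest are continuous.
model = smf.ols(
    "list_price ~ C(housing_type) + sqft + baths + bedrooms"
    " + dist_to_campus + crime_level + pct_renters + restaurants_1mi",
    data=listings,
).fit()

print(model.summary())                      # t-tests flag significant predictors
print(f"R-squared: {model.rsquared:.3f}")   # the thesis model reached 0.755
```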
ContributorsSchuchter, Grant (Author) / Clough, Michael (Thesis director) / Escobedo, Adolfo (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05

Description
Only an Executive Summary of the project is included.
The goal of this project is to develop a deeper understanding of how machine learning pertains to the business world and how business professionals can capitalize on its capabilities. It explores the end-to-end process of integrating a machine learning model and the tradeoffs and obstacles to consider. This topic is extremely pertinent today as big data grows and the use of machine learning and artificial intelligence expands across industries and functional roles. My approach was to expand on a project I championed as a Microsoft intern, where I facilitated the integration of a forecasting machine learning model firsthand into the business. I supplement my findings from that experience with research on machine learning as a disruptive technology. This paper does not delve into the technical aspects of coding a machine learning model, but rather provides a holistic overview of developing the model from a business perspective. My findings show that, while the advantages of machine learning are large and widespread, a lack of visibility and transparency into the algorithms behind machine learning, the necessity for large amounts of data, and the overall complexity of creating accurate models are all tradeoffs to consider when deciding whether machine learning is suitable for a given objective. The results of this paper are important for increasing any business professional's understanding of the capabilities and obstacles of integrating machine learning into business operations.
ContributorsVerma, Ria (Author) / Goegan, Brian (Thesis director) / Moore, James (Committee member) / Department of Information Systems (Contributor) / Department of Supply Chain Management (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05

Description
During the summer of 2016 I had an internship in the Fab Materials Planning group (FMP) at Intel Corporation. FMP generates long-range (6-24 month) forecasts for chemical and gas materials used in the chip fabrication process. These forecasts are sent to Commodity Managers (CMs) in a separate department, who communicate the forecast and any constraints to Intel suppliers. The intern manager of the group, Scott Keithley, created a prototype model to redefine how FMP determines which materials require a forecast update (the forecasting cadence). However, the prototype was complex to use, not intuitive, and did not receive positive feedback from the rest of the team or external stakeholders. This thesis details the steps I took in identifying the main problem the model was intended to address, how I approached the problem, and the major iterations through which I modified the model. It also covers the final model dashboard and the results of the model's use and integration, along with an improvement analysis and the intended and unintended consequences of the model. The results demonstrate that statistical process control, traditionally an operational analysis, can be used to generate a forecasting cadence. They also verify that an intuitive user interface is vital to end-user adoption and integration of an analytics-based model into an established process flow. This model will generate an estimated time savings of 900 hours per year, as well as give FMP the ability to be more proactive in its forecasting approach.
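
The abstract does not reproduce the model's formulas; the sketch below only illustrates the general idea of using Shewhart-style control limits to flag materials whose recent usage drifts out of bounds and therefore need a forecast refresh (the 3-sigma rule and all names are assumptions):

```python
import numpy as np

def needs_forecast_update(history: np.ndarray, recent: float, k: float = 3.0) -> bool:
    """Flag a material for a forecast refresh when its most recent usage
    falls outside k-sigma control limits built from its usage history."""
    center = history.mean()
    sigma = history.std(ddof=1)
    return abs(recent - center) > k * sigma

# Monthly usage of a hypothetical fab gas; the spike triggers an update.
usage = np.array([102.0, 98.5, 101.2, 99.8, 100.4, 97.9])
print(needs_forecast_update(usage, recent=118.0))  # True  -> re-forecast
print(needs_forecast_update(usage, recent=101.5))  # False -> keep cadence
```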
ContributorsMatson, Rilee Nicole (Author) / Kellso, James (Thesis director) / Keithley, Scott (Committee member) / Department of Supply Chain Management (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12

Description
Energy use within urban building stocks is continuing to increase globally as populations expand and access to electricity improves. This projected increase in demand could require deployment of new generation capacity, but there is potential to offset some of this demand through modification of the buildings themselves. Building stocks are quasi-permanent infrastructures which have enduring influence on urban energy consumption, and research is needed to understand: 1) how development patterns constrain energy use decisions and 2) how cities can achieve energy and environmental goals given the constraints of the stock. This requires a thorough evaluation of both the growth of the stock and the spatial distribution of use throughout the city. In this dissertation, a case study in Los Angeles County, California (LAC) is used to quantify urban growth, forecast future energy use under climate change, and make recommendations for mitigating increases in energy consumption. A reproducible methodological framework is included for application to other urban areas.

In LAC, residential electricity demand could increase by as much as 55-68% between 2020 and 2060, and building technology lock-in has constricted the options for mitigating energy demand: major changes to the building stock itself are not possible because only a small portion of the stock turns over every year. Aggressive and timely efficiency upgrades to residential appliances and building thermal shells can significantly offset the projected increases, potentially avoiding installation of new generation capacity, but regulations on new construction will likely be ineffectual due to the long residence time of the stock (60+ years and increasing). These findings can be extrapolated to other U.S. cities where the majority of urban expansion has already occurred, such as the older cities of the East Coast. The U.S. population is projected to increase 40% by 2060, with growth occurring in the warmer southern and western regions. In these growing cities, improving new-construction buildings can help offset electricity demand increases before the city reaches the lock-in phase.
ContributorsReyna, Janet Lorel (Author) / Chester, Mikhail V (Thesis advisor) / Gurney, Kevin (Committee member) / Reddy, T. Agami (Committee member) / Rey, Sergio (Committee member) / Arizona State University (Publisher)
Created2016

Description
The widespread use of statistical analysis in sports, particularly baseball, has made it increasingly necessary for small and mid-market teams to find ways to maintain their analytical advantages over large-market clubs. In baseball, an opportunity exists for teams with limited financial resources to sign players under team control to long-term contracts before other teams can bid for their services in free agency. If small and mid-market clubs can successfully identify talented players early, they can save money, achieve cost certainty, and remain competitive for longer periods of time. These deals are also advantageous to players, since they receive job security and greater financial dividends earlier in their careers. The objective of this paper is to develop a regression-based predictive model that teams can use to forecast the performance of young baseball players with limited Major League experience. Several tasks were conducted to achieve this goal: (1) Data were obtained from Major League Baseball and Lahman's Baseball Database and sorted using Excel macros for easier analysis. (2) Players were separated into three positional groups with similar fielding requirements and offensive profiles: Group I comprises first and third basemen; Group II contains second basemen, shortstops, and center fielders; and Group III contains left and right fielders. (3) Based on the context of baseball and the nature of offensive performance metrics, only players who achieved more than 200 plate appearances within the first two years of their major league debut are included in this analysis. (4) The statistical software package JMP was used to create regression models for each group and analyze the residuals for any irregularities or normality violations. Once the models were developed, slight adjustments were made to improve the accuracy of the forecasts and identify opportunities for future work. Group I and Group III proved the easiest player groupings to forecast, while Group II required several attempts to improve the model.
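
A sketch of the filtering and grouping steps (2) and (3) in Python (column names are placeholders; the thesis performed this sorting with Excel macros and fit the regressions in JMP):

```python
import pandas as pd

batting = pd.read_csv("lahman_batting.csv")  # hypothetical extract

# (3) Keep players with >200 plate appearances in their first two seasons.
first_two = batting[batting["season"] <= batting["debut_season"] + 1]
pa = first_two.groupby("player_id")["PA"].sum()
eligible = pa[pa > 200].index

# (2) Assign positional groups with similar fielding/offensive profiles.
groups = {"1B": "I", "3B": "I",
          "2B": "II", "SS": "II", "CF": "II",
          "LF": "III", "RF": "III"}
players = batting[batting["player_id"].isin(eligible)].copy()
players["group"] = players["position"].map(groups)

# (4) One regression per group is then fit (in JMP in the thesis).
print(players.groupby("group")["player_id"].nunique())
```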
ContributorsJack, Nathan Scott (Author) / Shunk, Dan (Thesis director) / Montgomery, Douglas (Committee member) / Borror, Connie (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created2013-05

Description
Cancer is a disease involving abnormal growth of cells, and its growth dynamics are perplexing. Mathematical modeling is a way to shed light on this process and its medical treatments. This dissertation studies cancer invasion in time and space using a mathematical approach. Chapter 1 presents a detailed review of the literature on cancer modeling.

Chapter 2 focuses solely on time, where the escape of a generic cancer from immune control is described by stochastic delayed differential equations (SDDEs). Without time delay and noise, this system demonstrates bistability. The effects of the immune system's response time and of stochasticity in the tumor proliferation rate are studied by including delay and noise in the model. Stability, persistence, and extinction of the tumor are analyzed. The results show that both time delay and noise can induce the transition from the low-tumor-burden equilibrium to the high-tumor-burden equilibrium. This work has been published (Han et al., 2019b).
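
The abstract does not reproduce the equations; a generic tumor-immune SDDE of the kind described, with a delayed immune response and multiplicative noise on the proliferation term, might take the form below (an illustration only, not the dissertation's exact model):

```latex
% T: tumor burden, E: immune effector level, tau: immune response delay,
% W: Wiener process modeling stochastic proliferation.
dT(t) = \Big[\, r\,T(t)\big(1 - T(t)/K\big) - a\,T(t)\,E(t-\tau) \,\Big]\,dt
        + \sigma\,T(t)\,dW(t)
```

Bistability means a system of this type admits two stable states, low and high tumor burden; the delay \(\tau\) and noise amplitude \(\sigma\) govern transitions between them.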

In Chapter 3, glioblastoma multiforme (GBM) is studied using a partial differential equation (PDE) model. GBM is an aggressive brain cancer with a grim prognosis. A mathematical model of GBM growth with explicit motility, birth, and death processes is proposed. A novel method is developed to approximate key characteristics of the wave profile, which can be compared with MRI data. Several test cases of MRI data from GBM patients are used to yield personalized parameterizations of the model. This work has been published (Han et al., 2019a).
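
A reaction-diffusion model with explicit motility, birth, and death terms is commonly written in Fisher-KPP form; a representative formulation (not necessarily the dissertation's exact equation) is:

```latex
% u(x,t): tumor cell density, D: motility (diffusion) coefficient,
% b, d: birth and death rates, K: carrying capacity.
\frac{\partial u}{\partial t}
  = \nabla \cdot \big( D \,\nabla u \big)
  + b\,u\Big(1 - \frac{u}{K}\Big) - d\,u
```

Equations of this type admit traveling-wave solutions, whose speed and steepness are the kind of wave-profile characteristics that can be matched against MRI data.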

Chapter 4 presents an innovative way of forecasting spatial cancer invasion. Most mathematical models, including the ones described in previous chapters, are formulated on the basis of strong assumptions, which are hard, if not impossible, to verify due to the complexity of biological processes and the lack of quality data. Instead, a nonparametric forecasting method using Gaussian processes is proposed. By exploiting the local nature of the spatio-temporal process, data that are sparse in time are sufficient for forecasting. Desirable properties of Gaussian processes facilitate selection of the size of the local neighborhood and computationally efficient propagation of uncertainty. The method is tested on synthetic data and demonstrates promising results.
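
A minimal sketch of local Gaussian-process forecasting with scikit-learn (the neighborhood radius and kernel here are assumptions; the dissertation selects the neighborhood size using properties of the GP itself):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic observations: density at scattered sites and times (x, y, t).
X = rng.uniform(0, 10, size=(40, 3))
y = np.exp(-((X[:, 0] - 5) ** 2) / 8) * (1 + 0.1 * X[:, 2])

# Exploit locality: fit only on points near the target location.
target = np.array([[5.0, 5.0, 3.0]])          # forecast at (5, 5), t = 3
near = np.linalg.norm(X[:, :2] - target[0, :2], axis=1) < 3.0

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X[near], y[near])

mean, std = gp.predict(target, return_std=True)
print(f"forecast {mean[0]:.3f} +/- {std[0]:.3f}")  # uncertainty comes for free
```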
ContributorsHan, Lifeng (Author) / Kuang, Yang (Thesis advisor) / Fricks, John (Thesis advisor) / Kostelich, Eric (Committee member) / Baer, Steve (Committee member) / Gumel, Abba (Committee member) / Arizona State University (Publisher)
Created2020

Description
Accurate forecasting of electricity prices has been a key factor in bidding strategies in electricity markets. The increase in renewable generation due to large-scale PV and wind deployment in California has led to an increase in day-ahead and real-time price volatility. It has also led to negative prices, caused by the supply-demand imbalance when excess renewable generation coincides with low demand. This research focuses on applying machine learning models to analyze the impact of renewable generation on hourly locational marginal prices (LMPs) for the California Independent System Operator (CAISO). Historical data on load, renewable generation from solar and wind, fuel prices, and aggregated generation outages are extracted, collected into a dataset, and used as features to train different machine learning models. Tree-based machine learning models such as Extra Trees, Gradient Boosting, and Extreme Gradient Boosting (XGBoost), as well as models based on neural networks such as long short-term memory (LSTM) networks, are implemented for price forecasting. The focus is to capture the best relation between the features and the target LMP variable and to determine the weight of each feature in determining the price.
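
A sketch of the tree-model workflow with scikit-learn (feature names are illustrative; the thesis also tested Extra Trees, XGBoost, and LSTMs):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

data = pd.read_csv("caiso_hourly.csv")  # hypothetical extract of the dataset
features = ["load", "solar_gen", "wind_gen", "gas_price", "outages_mw", "hour"]

# Keep time order: train on the earlier 80%, test on the most recent 20%.
split = int(len(data) * 0.8)
X_train, y_train = data[features][:split], data["lmp"][:split]
X_test, y_test = data[features][split:], data["lmp"][split:]

model = GradientBoostingRegressor(n_estimators=500, max_depth=4)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# Importances rank the price drivers; the thesis found load first,
# solar and wind generation second.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>12s}: {imp:.3f}")
```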

The impact of renewable generation on LMP forecasting is determined for several different days in 2018. Prices are impacted significantly by solar and wind generation, which ranks second in impact after the electric load. This research proposes a method to evaluate the impact of several parameters on the day-ahead price forecast; it would be useful for grid operators in evaluating which parameters significantly impact the day-ahead price prediction and which low-impact parameters can be ignored without introducing error into the forecast.
ContributorsVad, Chinmay (Author) / Honsberg, C. (Christiana B.) (Thesis advisor) / King, Richard R. (Committee member) / Kurtz, Sarah (Committee member) / Arizona State University (Publisher)
Created2019

Description
This dissertation studies how forecasting performance can be improved with big data. The first chapter, coauthored with Seung C. Ahn, considers Partial Least Squares (PLS) estimation of a time-series forecasting model with data containing a large number of time-series observations of many predictors. In the model, a subset or the whole set of the latent common factors in the predictors determines a target variable. First, the optimal number of PLS factors for forecasting can be smaller than the number of common factors relevant to the target variable. Second, as more than the optimal number of PLS factors is used, the out-of-sample explanatory power of the factors can decrease even while their in-sample power increases. Monte Carlo simulations confirm these asymptotic results. In addition, the simulations indicate that the out-of-sample forecasting power of the PLS factors is often higher when fewer than the asymptotically optimal number of factors are used. Finally, the out-of-sample forecasting power of the PLS factors often decreases as the second, third, and subsequent factors are added, even when the asymptotically optimal number of factors is greater than one. The second chapter comprehensively studies the predictive performance of various factor estimation methods. A big dataset consisting of major U.S. macroeconomic and financial variables is constructed, and 148 target variables are forecast using 7 factor estimation methods with 11 information criteria. First, the number of factors used in forecasting matters: incorporating more factors does not always provide better forecasting performance. Second, using a consistently estimated number of factors does not necessarily improve predictive performance; the first PLS factor, which is not theoretically consistent, very often shows strong forecasting performance. Third, there is a large difference in forecasting performance across information criteria, even when the same factor estimation method is used. The choice of factor estimation method, as well as the information criterion, is therefore crucial in forecasting practice. Finally, the first PLS factor yields forecasting performance very close to the best result among all combinations of the 7 factor estimation methods and 11 information criteria.
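
A minimal sketch of one-factor PLS forecasting with scikit-learn, on simulated data where a single latent factor drives the target (dimensions and noise levels are illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Many predictors driven by a few latent common factors.
T_obs, N = 200, 100
f = rng.standard_normal((T_obs, 3))                  # latent factors
X = f @ rng.standard_normal((3, N)) + 0.5 * rng.standard_normal((T_obs, N))
y = f[:, 0] + 0.3 * rng.standard_normal(T_obs)       # target loads on factor 1

# Out-of-sample check: more PLS factors is not always better.
split = 150
for k in (1, 2, 3):
    pls = PLSRegression(n_components=k).fit(X[:split], y[:split])
    resid = y[split:] - pls.predict(X[split:]).ravel()
    print(f"{k} PLS factor(s): out-of-sample MSE = {np.mean(resid ** 2):.3f}")
```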
ContributorsBae, Juhui (Author) / Ahn, Seung (Thesis advisor) / Pruitt, Seth (Committee member) / Kuminoff, Nicolai (Committee member) / Ferraro, Domenico (Committee member) / Arizona State University (Publisher)
Created2021