Matching Items (40)
Description
Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, but most of this work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient, and high-fidelity building energy modeling framework is proposed, with a concentration on developing a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on building energy systems, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out noise and renders a more accurate energy forecast. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes six widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties: data fusion is applied to the off-line model based on system identification and Kalman filtering methods. The developed data-driven modeling framework is validated on various genres of buildings, and the experimental results demonstrate the desired performance on building energy forecasting in terms of accuracy and computational efficiency. The framework could be easily implemented in building energy model predictive control (MPC), demand response (DR) analysis, and real-time operation decision support systems.
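To make the on-line calibration step concrete, here is a minimal sketch of Kalman filter-based fusion of an off-line model's forecasts with noisy meter readings. The scalar state model, the noise variances q and r, and the data are illustrative assumptions, not the dissertation's actual building model:

```python
import numpy as np

def kalman_calibrate(offline_forecast, measurements, q=0.05, r=4.0):
    """Blend an off-line model's forecasts with noisy meter readings
    using a scalar Kalman filter (illustrative sketch)."""
    x = offline_forecast[0]   # state: calibrated energy estimate
    p = 1.0                   # state error variance
    out = np.empty(len(measurements))
    for k in range(len(measurements)):
        if k > 0:
            # Predict: follow the off-line model's step-to-step change.
            x += offline_forecast[k] - offline_forecast[k - 1]
            p += q            # process noise inflates uncertainty
        # Update: pull toward the metered value via the Kalman gain.
        gain = p / (p + r)
        x += gain * (measurements[k] - x)
        p *= 1.0 - gain
        out[k] = x
    return out

# Example: correct a constantly biased forecast with noisy meter data.
rng = np.random.default_rng(0)
truth = 100 + 10 * np.sin(np.linspace(0, 2 * np.pi, 24))
forecast = truth + 5.0                  # off-line model with a +5 kWh bias
meter = truth + rng.normal(0, 2, 24)    # noisy measurements
print(np.round(kalman_calibrate(forecast, meter), 1))
```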
Contributors: Cui, Can (Author) / Wu, Teresa (Thesis advisor) / Weir, Jeffery D. (Thesis advisor) / Li, Jing (Committee member) / Fowler, John (Committee member) / Hu, Mengqi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In this dissertation research, I expand the definition of the supply network to include the buying firm’s competitors. Just as one buyer-supplier relationship impacts all other relationships within the network, the presence of competitor-supplier relationships must also impact the focal buying firm. Therefore, the concept of a “competitive network” made up of a focal firm, its competitors, and all of their combined suppliers is introduced. Utilizing a unique longitudinal dataset, this research explores how organic structural changes within this new, many-to-many supply network impact firm performance. The investigation begins by studying the change in the number of suppliers used by global auto manufacturers between 2004 and 2013. Following the Great Recession of 2008-09, firms grew their supplier counts at more than twice the rate at which they had been reducing them just a few years prior. The second phase of research explores the structural changes to the network resulting from this explosive growth in the number of suppliers. The final investigation explores a different flow, the financial flow, and evaluates its association with firm performance. Overall, this dissertation research demonstrates the value of aggregating individual supply networks into a macro-network defined as the competitive network. From this view, no one firm is able to control the structure of the network, and changes in structure directly impact firm performance. A new metric is introduced which captures subtle changes in buyer-supplier relationships and relates significantly to firm performance. The analyses expand the body of knowledge through the use of longitudinal datasets and uncover otherwise overlooked dynamics existing within supply networks over the past decade.
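As an illustration of the competitive-network view, the sketch below builds a toy buyer-supplier graph with networkx and tracks one simple structural quantity, the supplier overlap between two competing buyers, across two snapshot years. The firms, suppliers, and the overlap measure are all hypothetical stand-ins, not the dissertation's dataset or its new metric:

```python
import networkx as nx

# Hypothetical buyer-supplier ties in a "competitive network": a focal
# firm, a competitor, and their combined suppliers, in two snapshot years.
ties_2004 = [("FirmA", "S1"), ("FirmA", "S2"), ("FirmB", "S2"), ("FirmB", "S3")]
ties_2013 = [("FirmA", "S1"), ("FirmA", "S2"), ("FirmA", "S4"),
             ("FirmB", "S2"), ("FirmB", "S3"), ("FirmB", "S4")]

def supplier_overlap(ties, buyer_a, buyer_b):
    """Share of buyer_a's suppliers also used by buyer_b, one simple
    way to quantify how competing buyers' supply bases intertwine."""
    g = nx.Graph(ties)
    shared = set(g[buyer_a]) & set(g[buyer_b])
    return len(shared) / g.degree(buyer_a)

for year, ties in [("2004", ties_2004), ("2013", ties_2013)]:
    print(year, supplier_overlap(ties, "FirmA", "FirmB"))
```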
Contributors: Huff, Jerry (Author) / Fowler, John (Thesis advisor) / Rogers, Dale (Committee member) / Carter, Craig (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Mobile healthy food retailers are a novel alleviation technique for addressing disparities in access to urban produce stores in food desert communities. Such retailers, which tend to exclusively stock produce items, have become significantly more popular in the past decade, but many are unable to achieve economic sustainability. Therefore, when local and federal grants and scholarships are no longer available to a mobile food retailer, it must stop operating, which poses serious health risks to the consumers who rely on its services.

To address these issues, a framework was established in this dissertation to aid mobile food retailers with reaching economic sustainability by addressing two key operational decisions. The first decision was the stocked product mix of the mobile retailer. In this problem, it was assumed that mobile retailers want to balance the health, consumer cost, and retailer profitability of their product mix. The second investigated decision was the scheduling and routing plan of the mobile retailer. In this problem, it was assumed that mobile retailers operate similarly to traditional distribution vehicles with the exception that their customers are willing to travel between service locations so long as they are in close proximity.

For each of these problems, multiple formulations were developed which address many of the nuances faced by most existing mobile food retailers. For each problem, a combination of exact and heuristic solution procedures was developed, many utilizing software-independent methodologies, as it was assumed that mobile retailers would not have access to advanced computational software. Extensive computational tests were performed on these algorithms, with the findings demonstrating the advantages of the developed procedures over other algorithms and commercial software.

The applicability of these techniques to mobile food retailers was demonstrated through a case study on a local Phoenix, AZ mobile retailer. Both the product mix and routing of the retailer were evaluated using the developed tools under a variety of conditions and assumptions. The results from this study clearly demonstrate that improved decision making can result in improved profits and longitudinal sustainability for the Phoenix mobile food retailer and similar entities.
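For a flavor of the product-mix decision, here is a minimal linear-programming sketch using scipy.optimize.linprog that maximizes retailer profit subject to a vehicle capacity limit, a floor on average item healthfulness, and a cap on average consumer cost. The items, scores, and thresholds are invented for illustration and do not reproduce the dissertation's formulations:

```python
from scipy.optimize import linprog

# Hypothetical per-unit data for four produce items: profit ($), a 0-10
# health score, consumer cost ($), and shelf space used by one unit.
profit = [0.40, 0.25, 0.60, 0.30]
health = [8, 6, 4, 9]
cost   = [1.2, 0.8, 2.0, 1.0]
space  = [1, 1, 2, 1]
capacity = 300                       # total shelf space on the vehicle

# Maximize profit (linprog minimizes, hence the negated objective) subject
# to capacity, average health >= 6.5, and average consumer cost <= 1.3:
#   sum((6.5 - h_i) x_i) <= 0  and  sum((c_i - 1.3) x_i) <= 0.
res = linprog(
    c=[-p for p in profit],
    A_ub=[space,
          [6.5 - h for h in health],
          [c - 1.3 for c in cost]],
    b_ub=[capacity, 0, 0],
    bounds=[(0, None)] * 4,
)
print(res.x.round(1), "profit:", round(-res.fun, 2))
```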
Contributors: Wishon, Christopher John (Author) / Villalobos, Rene (Thesis advisor) / Fowler, John (Committee member) / Mirchandani, Pitu (Committee member) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The following is a case study composed of three workflow investigations at the Apache Software Foundation (Apache), an organization based on open source software development (OSSD). I start with an examination of workload inequality within Apache, particularly with regard to requirements writing. I established that the stronger a participant's experience indicators are, the more likely they are to propose a requirement that is not a defect and the more likely the requirement is eventually implemented. Requirements at Apache are divided into work tickets (tickets). In the second investigation, I reported many insights into the distribution patterns of these tickets. The participants who create the tickets often had the best track records for determining who should participate in a given ticket. Tickets that were at one point volunteered for (self-assigned) had a lower incidence of neglect but in some cases were also associated with severe delay. When a participant claims a ticket but postpones the work involved, such tickets exist without a solution for five to ten times as long, depending on the circumstances. I make recommendations that may reduce the incidence of tickets that are claimed but not implemented in a timely manner. After giving an in-depth explanation of how I obtained this dataset through web crawlers, I describe the pattern mining platform I developed to make my data mining efforts highly scalable and repeatable. Lastly, I used process mining techniques to show that workflow patterns vary greatly within teams at Apache. I investigated a variety of process choices and how they might be influencing the outcomes of OSSD projects. I report a moderately negative association between how often a team updates the specifics of a requirement and how often requirements are completed. I also verified that the prevalence of volunteerism indicators is positively associated with work completion; surprisingly, this correlation is stronger when the very largest projects are excluded. I suggest the largest projects at Apache may benefit from some level of traditional delegation in addition to the volunteerism that OSSD is normally associated with.
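A minimal sketch of the kind of ticket analysis described above, using pandas on a hypothetical ticket log (the projects, flags, and delays are invented, not the crawled Apache data). It compares resolution delay for self-assigned tickets, split by whether the claimer postponed the work:

```python
import pandas as pd

# Hypothetical ticket log in the shape a crawler might produce: one row
# per ticket, with assignment mode, postponement, and resolution delay.
tickets = pd.DataFrame({
    "project":       ["P1", "P1", "P2", "P2", "P3", "P3", "P3"],
    "self_assigned": [True, False, True, False, True, True, False],
    "postponed":     [False, False, True, False, True, False, False],
    "days_open":     [3, 40, 210, 35, 180, 6, 55],
})

# Median resolution delay by assignment mode and postponement: tickets
# claimed but then postponed should stand out with much longer delays.
summary = tickets.groupby(["self_assigned", "postponed"])["days_open"].median()
print(summary)
```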
Contributors: Panos, Ryan (Author) / Collofello, James (Thesis advisor) / Fowler, John (Thesis advisor) / Pan, Rong (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The emergence of new technologies, as well as a fresh look at analyzing existing processes, has given rise to a new type of response characteristic known as a profile. Profiles are useful when a quality variable is functionally dependent on one or more explanatory, or independent, variables. So, instead of observing a single measurement on each unit or product, a set of values is obtained over a range which, when plotted, takes the shape of a curve. Traditional multivariate monitoring schemes are inadequate for monitoring profiles due to high dimensionality and poor use of the information stored in functional form, leading to very large variance-covariance matrices. Profile monitoring has become an important area of study in statistical process control and is being actively addressed by researchers across the globe. This research explores the area in three parts. A comparative analysis is conducted of two linear profile-monitoring techniques based on the probability of a false alarm and the average run length (ARL) under shifts in the model parameters. The two techniques studied are a control chart based on the classical calibration statistic and a control chart based on the parameters of a linear model. The research demonstrates that monitoring a profile characterized by a parametric model is a more efficient scheme than monitoring only the individual features of the profile. A likelihood ratio based changepoint control chart is proposed for detecting a sustained step shift in low-order polynomial profiles. The test statistic is plotted on a Shewhart-like chart with control limits derived from asymptotic distribution theory. The statistic is factored to reflect the variation due to the individual parameters, to aid in interpreting an out-of-control signal. The research also looks at the robust parameter design of profiles, also referred to as signal-response systems. Such experiments are often necessary for understanding and reducing common cause variation in systems. A split-plot approach is proposed to analyze the profiles. It is demonstrated that explicitly modeling variance components using a generalized linear mixed models approach yields more precise point estimates and tighter confidence intervals.
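To illustrate the parametric approach, the sketch below fits a linear model to each simulated profile and charts the fitted intercept and slope jointly with a T²-type statistic. The in-control model, noise level, and the empirical control limit are assumptions for illustration; the dissertation's charts use limits from asymptotic distribution theory:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)                   # explanatory variable grid
X = np.column_stack([np.ones_like(x), x])   # design matrix, linear profile

# Simulate m in-control profiles y = 2 + 3x + noise and fit each one.
m, beta_true = 50, np.array([2.0, 3.0])
betas = np.empty((m, 2))
for i in range(m):
    y = X @ beta_true + rng.normal(0, 0.1, x.size)
    betas[i] = np.linalg.lstsq(X, y, rcond=None)[0]

# Chart the fitted (intercept, slope) pairs with a T^2-type statistic.
center = betas.mean(axis=0)
S_inv = np.linalg.inv(np.cov(betas, rowvar=False))
t2 = np.array([(b - center) @ S_inv @ (b - center) for b in betas])
ucl = np.quantile(t2, 0.99)     # empirical limit, for this sketch only
print(f"{(t2 > ucl).sum()} signal(s) among {m} in-control profiles")
```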
Contributors: Gupta, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Fowler, John (Committee member) / Prewitt, Kathy (Committee member) / Kulahci, Murat (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Yield is a key process performance characteristic in the capital-intensive semiconductor fabrication process. In an industry where machines cost millions of dollars and cycle times span several months, predicting and optimizing yield are critical to process improvement, customer satisfaction, and financial success. Semiconductor yield modeling is essential to identifying processing issues, improving quality, and meeting customer demand in the industry. However, the complicated fabrication process, the massive amount of data collected, and the number of models available make yield modeling a complex and challenging task. This work presents modeling strategies to forecast yield using generalized linear models (GLMs) based on defect metrology data. The research is divided into three main parts. First, the data integration and aggregation necessary for model building are described, and GLMs are constructed for yield forecasting. This technique yields results at both the die and the wafer levels, outperforms existing models found in the literature in terms of prediction error, and identifies significant factors that can drive process improvement. The method also allows the nested structure of the process to be considered in the model, improving predictive capabilities while violating fewer assumptions. To account for the random sampling typically used in fabrication, the work is extended by using generalized linear mixed models (GLMMs) and a larger dataset to show the differences between batch-specific and population-averaged models in this application and how they compare to GLMs. These results show additional improvements in forecasting ability under certain conditions and highlight the differences between the significant effects identified by the GLM and GLMM models. The effects of link functions and sample size are also examined at the die and wafer levels. The third part of this research describes a methodology for integrating classification and regression trees (CART) with GLMs. This technique uses the terminal nodes identified in the classification tree to add predictors to a GLM. It enables the model to consider important interaction terms in a simpler way than with the GLM alone, and provides valuable insight into the fabrication process through the combination of the tree structure and the statistical analysis of the GLM.
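A minimal sketch of a defect-based yield GLM, assuming hypothetical wafer-level data: good dice out of 100 per wafer are modeled with a binomial GLM (logit link) on a simulated defect count and one invented process factor, via statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200                                  # wafers
defects = rng.poisson(3, n)              # defect metrology count per wafer
factor = rng.integers(0, 2, n)           # an invented binary process factor
X = sm.add_constant(np.column_stack([defects, factor]).astype(float))

# Simulate die-level yield: good dice out of 100 per wafer, with the
# probability of a good die degrading as the defect count rises.
p = 1 / (1 + np.exp(-(2.0 - 0.4 * defects + 0.3 * factor)))
good = rng.binomial(100, p)

# Binomial GLM with a logit link on (successes, failures) counts.
fit = sm.GLM(np.column_stack([good, 100 - good]), X,
             family=sm.families.Binomial()).fit()
print(fit.params)
```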
Contributors: Krueger, Dana Cheree (Author) / Montgomery, Douglas C. (Thesis advisor) / Fowler, John (Committee member) / Pan, Rong (Committee member) / Pfund, Michele (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Within humanitarian logistics, there has been a growing trend of adopting information systems to enhance the responsiveness of aid delivery. By utilizing such technology, organizations are able to take advantage of information sharing and its benefits, including improved coordination and reduced uncertainty. This paper seeks to explore this phenomenon using organizational information processing theory. Drawing from complexity literature, we argue that demand complexity should have a positive relationship with information sharing. Moreover, higher levels of information sharing should generate higher responsiveness. Lastly, we examine the effects of organizational structure on the relationship between information sharing and responsiveness. We posit that the degree of centralization will have a positive moderation effect on the aforementioned relationship. The paper then describes the methodology planned to test these hypotheses. We will design a case-based simulation that will incorporate current disaster situations and parameters experienced by Community Preparedness Exercise and Fair (COMPEF), which acts as a broker for the City of Tempe and various humanitarian groups. With the case-based simulation data, we will draw theoretical and managerial implications for the field of humanitarian logistics.
Contributors: Yoo, Eunae (Author) / Maltz, Arnold (Thesis director) / Pfund, Michele (Committee member) / Fowler, John (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Department of Supply Chain Management (Contributor) / W. P. Carey School of Business (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor)
Created: 2013-05
Description
There has been much research involving the simultaneous monitoring of several correlated quality characteristics that relies on the assumptions of multivariate normality and independence. In real-world applications, these assumptions are not always met, particularly when small counts are of interest. In general, the normal approximation to the Poisson distribution is justified only when the Poisson means are large enough. A new two-sided Multivariate Poisson Exponentially Weighted Moving Average (MPEWMA) control chart is proposed, with control limits derived directly from the multivariate Poisson distribution. The MPEWMA and the conventional Multivariate Exponentially Weighted Moving Average (MEWMA) charts are evaluated within the multivariate Poisson framework. The MPEWMA chart outperforms the MEWMA with normal-theory limits in terms of in-control average run length. An extension of the two-sided MPEWMA to a one-sided version is performed; this is useful for detecting an increase in the count means. The results of comparison with the one-sided MEWMA chart are quite similar to the two-sided case. The implementation of the MPEWMA scheme for multiple count data is illustrated with step-by-step guidelines and several examples. In addition, the method is compared to other model-based control charts that monitor residual values, such as regression adjustment. The MPEWMA scheme shows better performance in detecting mean shifts in count data when positive correlation exists among all variables.
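For intuition, here is a sketch of an MEWMA-type statistic computed on bivariate Poisson counts, with phase I estimates applied to a phase II series containing a simulated upward shift. The empirical covariance stands in for the MPEWMA's exact limits from the multivariate Poisson distribution, which are not reproduced here:

```python
import numpy as np

def mewma_t2(counts, mu, sigma, lam=0.2):
    """EWMA-smooth multivariate count vectors and return a T^2-type
    chart statistic per period (illustrative sketch only)."""
    sigma_z = sigma * lam / (2 - lam)    # asymptotic EWMA covariance
    s_inv = np.linalg.inv(sigma_z)
    z = np.zeros(counts.shape[1])
    t2 = np.empty(len(counts))
    for i, xvec in enumerate(counts):
        z = lam * (xvec - mu) + (1 - lam) * z
        t2[i] = z @ s_inv @ z
    return t2

rng = np.random.default_rng(3)
in_control = rng.poisson(4.0, size=(80, 2))       # two count streams
shifted = rng.poisson([4.0, 7.0], size=(20, 2))   # shift in one mean
mu = in_control.mean(axis=0)                      # phase I estimates
sigma = np.cov(in_control, rowvar=False)
t2 = mewma_t2(np.vstack([in_control, shifted]), mu, sigma)
print("max T2 in control:", t2[:80].max().round(1),
      "after shift:", t2[80:].max().round(1))
```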
Contributors: Laungrungrong, Busaba (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Fowler, John (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand, and the economic environment. A critical issue for companies facing increasing competition is to monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework is proposed using simulation and online calibration methods to enable the adaptive management of large-scale complex supply chain systems. The design, implementation, and verification of the integrated approach are studied in this dissertation. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework of simulation and advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state/parameters by considering errors in both the simulation models and in measurements of the real-world system. Data fusion methods (Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering) are examined and discussed under varied conditions of system chaos level, data quality, and data availability. Second, the proposed framework is developed, validated, and demonstrated in "proof-of-concept" case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data and emulation data, effectively improving the accuracy of abnormality detection. In the case study of the "beer game" supply chain model, the system's chaos level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found to be more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement in simulation accuracy is up to 80% in a chaotic system and 60% in a stable system. In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is useful not only in supply chain management, but also for modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
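As a small illustration of the ensemble approach, the sketch below performs one Ensemble Kalman Filter update on a scalar state (a simulated inventory level), using perturbed observations and a sample-covariance Kalman gain. The state, observation model, and variances are toy assumptions, not the dissertation's supply chain models:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """One Ensemble Kalman Filter update for a scalar state with a direct
    observation h(x) = x: perturb the observation per member and nudge
    each member toward it using the sample-covariance Kalman gain."""
    hx = ensemble                         # direct observation operator
    gain = np.cov(ensemble, hx)[0, 1] / (hx.var(ddof=1) + obs_var)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed - hx)

# Toy use: a 50-member ensemble tracking a simulated inventory level.
rng = np.random.default_rng(4)
ensemble = rng.normal(100.0, 10.0, 50)    # prior spread from the simulator
updated = enkf_update(ensemble, obs=112.0, obs_var=4.0, rng=rng)
print(round(ensemble.mean(), 1), "->", round(updated.mean(), 1))
```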
Contributors: Wang, Shanshan (Author) / Wu, Teresa (Thesis advisor) / Fowler, John (Thesis advisor) / Pfund, Michele (Committee member) / Li, Jing (Committee member) / Pavlicek, William (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The purpose of this honors thesis is to discover ways for a large humanitarian organization to more cost-effectively manage its fleet of vehicles. The first phase of work involved cleaning the large dataset provided by the organization. Next, we used Stata to run a Seemingly Unrelated Regression (SUR) to see which variables have the largest effect on the percentage of price decline and the total mileage of each vehicle. The SUR model indicated that price decline is most influenced by cumulative minor repairs, total accessories, age, percentage of paved roads, and number of accidents. In addition, total mileage was most affected by percentage of paved roads, cumulative minor repairs, all-wheel drive, and age. The final step of the project involved providing recommendations to the humanitarian organization based on the above results. We recommend several changes to their fleet management, including driver training programs, increasing the amount of preventative maintenance performed on vehicles, and increasing the number of accessories purchased for each vehicle. Implementing these changes could potentially save the organization millions of dollars given the scope of its operation.
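For readers unfamiliar with SUR, here is a compact two-step feasible-GLS sketch of the estimator in numpy, run on invented fleet-style data (the variable names echo the thesis but the numbers are simulated): equation-by-equation OLS supplies residuals, and their cross-equation covariance weights a joint GLS fit:

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(X_list, y_list):
    """Two-step feasible GLS for a Seemingly Unrelated Regression system:
    per-equation OLS for residuals, then joint GLS weighted by the
    estimated cross-equation error covariance."""
    n = len(y_list[0])
    resid = [y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
             for X, y in zip(X_list, y_list)]
    sigma = np.cov(np.vstack(resid))               # cross-equation covariance
    W = np.kron(np.linalg.inv(sigma), np.eye(n))   # weight: Sigma^-1 kron I
    Xb, yb = block_diag(*X_list), np.concatenate(y_list)
    return np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ yb)

# Invented fleet-style data with correlated errors across two equations.
rng = np.random.default_rng(5)
n = 200
repairs = rng.normal(5, 2, n); age = rng.normal(6, 2, n)
accidents = rng.poisson(1.0, n); paved = rng.uniform(0, 1, n)
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
decline = 2 + 0.5 * repairs + 0.8 * age + 1.5 * accidents + e[:, 0]
mileage = 10 + 1.2 * repairs + 2.0 * age + 30 * paved + e[:, 1]
X1 = np.column_stack([np.ones(n), repairs, age, accidents])
X2 = np.column_stack([np.ones(n), repairs, age, paved])
print(sur_fgls([X1, X2], [decline, mileage]).round(2))
```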
Contributors: Pisauro, Jeffrey (Co-author) / Miller, Michael (Co-author) / Eftekhar, Mahyar (Thesis director) / Maltz, Arnold (Committee member) / Fowler, John (Committee member) / Department of Supply Chain Management (Contributor) / W. P. Carey School of Business (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12