Matching Items (9)

Description
Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rule for selecting the covariate and its split value in a Classification Tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could potentially produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to the logistic regression estimation model using the true propensity score model, Random Forests had the additional advantage of producing unbiased standard error estimates and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between simulation conditions. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias of the average treatment effect estimate for covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
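The adjustment named above, propensity score weighting by the odds, can be sketched in a few lines. Below is a minimal NumPy illustration on synthetic data with a known propensity model (the data-generating numbers are invented); in the dissertation the scores would instead be estimated, for example with Random Forests.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                     # a single confounder
e = 1 / (1 + np.exp(-x))                   # true propensity score P(T=1 | x)
t = rng.binomial(1, e)                     # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)       # outcome with treatment effect 2.0

# Weighting by the odds targets the average treatment effect on the treated:
# treated units keep weight 1, controls are weighted by e / (1 - e).
w = np.where(t == 1, 1.0, e / (1 - e))
att = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
```

With the true propensity scores the weighted contrast recovers the simulated effect of 2.0 up to sampling error; plugging Random Forests predictions in for `e` gives the kind of estimator studied in the dissertation.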
Contributors: Cham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / Enders, Stephen G (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the rapid development of mobile sensing technologies such as GPS, RFID, and the sensors in smartphones, capturing position data in the form of trajectories has become easy. Moving object trajectory analysis is a growing area of interest owing to its applications in domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about traffic in the city, a change in their trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in HMMs, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the change detection method. Secondly, sequential pattern mining is used to develop a model for detecting changes in the frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time interval distribution in the pattern changed? Two approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. One challenge in clustering semantic trajectories is that both numeric and categorical attributes are present; another is that trajectories can be of different lengths and may have missing values. A tree-based ensemble is used to address these problems, and the approach is extended to outlier detection in semantic trajectories.
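To make the likelihood-based change detection idea concrete, here is a small self-contained sketch: the forward algorithm of a discrete-emission HMM scores how well new observation sequences fit a reference model, and a drop in log-likelihood signals a change. The two-state parameters and observation sequences are invented for illustration; the dissertation's m-BaumWelch algorithm and vehicle data are not reproduced here.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete-emission HMM.
    pi: initial state probabilities, A: transition matrix, B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# Reference model assumed fitted on historical trajectories (invented here):
# two sticky hidden states, each preferring one of two observed symbols.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])

typical  = [0, 0, 0, 1, 1, 1, 0, 0]   # consistent with the sticky model
atypical = [0, 1, 0, 1, 0, 1, 0, 1]   # rapid switching: poorly explained
flag_change = log_likelihood(atypical, pi, A, B) < log_likelihood(typical, pi, A, B)
```

In practice one would compare likelihoods of recent trajectory windows against a threshold calibrated on historical data rather than against a single sequence.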
Contributors: Kondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A good production schedule in a semiconductor back-end facility is critical for on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process remains very difficult due to the wide product mix, the large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system.
Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling; a robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model, and computational results are provided to compare the performance of different solution methods.
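The scheduler layer described above sequences lots with dispatching rules on top of the optimizer's plan. As a hedged illustration (the data, rule, and names below are invented; the DSS's actual factory rules are far richer), an earliest-completion-time rule for unrelated parallel machines with family setups and machine-product qualification might look like:

```python
def schedule(lots, machines, proc_time, setup, qualified):
    """Assign lots to machines by earliest completion time.
    lots: list of (lot_id, family); proc_time[machine][family]: hours;
    setup: hours incurred when a machine switches product family;
    qualified[machine]: set of families the machine may run (each family is
    assumed to have at least one qualified machine)."""
    finish = {m: 0.0 for m in machines}       # running completion time per machine
    last_family = {m: None for m in machines}
    assignment = {}
    for lot, fam in lots:
        best = None
        for m in machines:
            if fam not in qualified[m]:
                continue
            s = setup if last_family[m] not in (None, fam) else 0.0
            done = finish[m] + s + proc_time[m][fam]
            if best is None or done < best[1]:
                best = (m, done)
        m, done = best
        finish[m] = done
        last_family[m] = fam
        assignment[lot] = m
    return assignment, finish

# Invented toy instance: two machines, two product families, one-hour setups.
machines = ["M1", "M2"]
proc_time = {"M1": {"A": 2, "B": 3}, "M2": {"A": 4, "B": 2}}
qualified = {"M1": {"A", "B"}, "M2": {"B"}}   # M2 is not qualified for family A
lots = [("L1", "A"), ("L2", "B"), ("L3", "A")]
assignment, finish = schedule(lots, machines, proc_time, 1.0, qualified)
```

Here lot L2 avoids the family setup on M1 by moving to the qualified machine M2, which is the kind of trade-off such rules resolve greedily.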
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Functional or dynamic responses are prevalent in experiments in engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs.

First, we demonstrated how the traditional exchange methods could be improved to generate a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of possible candidates for the point exchange algorithm (PEA), and then improves the G-efficiency using the coordinate exchange algorithm (CEA).

The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix was improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm with respect to both computational efficiency and D-efficiency.

The third contribution is a study of optimal experimental designs for more general functional response models. First, the B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of the spectrum variable. Second, we proposed a two-step algorithm for finding the optimal design over both sampling points and experimental settings. In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach outperform other designs found in the literature.
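As a generic illustration of the exchange idea behind such algorithms (this is a plain coordinate exchange for a first-order model, not the cCEA or the fMRI algorithm from the dissertation), an exact D-optimal search over a two-level candidate set might be sketched as:

```python
import numpy as np

def d_optimal_cex(n_runs, n_factors, levels=(-1.0, 1.0), n_starts=5, seed=0):
    """Coordinate exchange for an exact D-optimal design under a first-order
    model with intercept. A sketch: real implementations use rank-one
    determinant updates instead of recomputing the determinant each time."""
    rng = np.random.default_rng(seed)

    def logdet(D):
        X = np.hstack([np.ones((len(D), 1)), D])   # model matrix [1, x1, ..., xk]
        sign, ld = np.linalg.slogdet(X.T @ X)
        return ld if sign > 0 else -np.inf

    best, best_ld = None, -np.inf
    for _ in range(n_starts):                      # random restarts
        D = rng.choice(levels, size=(n_runs, n_factors))
        improved = True
        while improved:                            # sweep until no coordinate helps
            improved = False
            for i in range(n_runs):
                for j in range(n_factors):
                    cur = logdet(D)
                    for lv in levels:
                        old = D[i, j]
                        D[i, j] = lv
                        if logdet(D) > cur + 1e-12:
                            cur = logdet(D)
                            improved = True
                        else:
                            D[i, j] = old
        if logdet(D) > best_ld:
            best, best_ld = D.copy(), logdet(D)
    return best, best_ld

D, ld = d_optimal_cex(n_runs=4, n_factors=2)
```

For four runs and two two-level factors this search typically recovers the full factorial, whose information determinant det(X'X) = 64 is the maximum for this model.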
Contributors: Saleh, Moein (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Runger, George C. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis presents a successful application of operations research techniques in a nonprofit distribution system to improve distribution efficiency and increase customer service quality. It focuses on truck routing problems faced by St. Mary’s Food Bank Distribution Center. The problem is modeled as a capacitated vehicle routing problem to improve distribution efficiency, and is extended to a capacitated vehicle routing problem with time windows to increase customer service quality. Several heuristics are applied to solve these vehicle routing problems and are tested on well-known benchmark problems. The algorithms are also evaluated against the plan currently used by St. Mary’s Food Bank Distribution Center. The results suggest the heuristics are quite competitive: on average, the heuristic solutions use 17% fewer trucks and 28.52% less travel time.
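The thesis does not list its heuristics here, but a classic constructive heuristic for the capacitated vehicle routing problem, Clarke-Wright savings, conveys the flavor of such methods (the depot location, coordinates, demands, and capacity below are invented for the example):

```python
import math

def clarke_wright(dist, demand, capacity):
    """Clarke-Wright savings heuristic for the CVRP.
    dist[i][j]: symmetric distance matrix with node 0 as the depot;
    demand: {customer: demand}; returns routes as customer sequences."""
    route_of = {c: [c] for c in demand}           # start: one route per customer
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in demand for j in demand if i < j), reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri is rj or s <= 0:
            continue
        if sum(demand[c] for c in ri + rj) > capacity:
            continue
        # merge only when i and j are endpoints of their current routes
        if ri[-1] == i and rj[0] == j:
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri
        elif ri[0] == i and rj[0] == j:
            merged = ri[::-1] + rj
        elif ri[-1] == i and rj[-1] == j:
            merged = ri + rj[::-1]
        else:
            continue
        for c in merged:
            route_of[c] = merged
    seen, routes = set(), []
    for r in route_of.values():
        if id(r) not in seen:
            seen.add(id(r))
            routes.append(r)
    return routes

# Invented instance: depot at the origin, four customers, truck capacity 2.
coords = [(0, 0), (0, 1), (0, 2), (3, 0), (4, 0)]
dist = [[math.hypot(xa - xb, ya - yb) for xb, yb in coords] for xa, ya in coords]
routes = clarke_wright(dist, demand={1: 1, 2: 1, 3: 1, 4: 1}, capacity=2)
```

On this instance the heuristic merges the two nearby customer pairs into two capacity-feasible routes rather than dispatching four separate trucks.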
Contributors: Li, Xiaoyan (Author) / Askin, Ronald (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
One of the greatest 21st-century challenges is meeting the needs of a growing world population, expected to increase 35% by 2050, given projected trends in diets, consumption, and income. This in turn requires a 70-100% improvement on current production capability, even as the world undergoes systemic changes in climate patterns. This growth not only translates to higher demand for staple products such as rice, wheat, and beans, but also creates demand for high-value products such as fresh fruits and vegetables (FVs), fueled by better economic conditions and a more health-conscious consumer. These trends would seem to present opportunities for the economic development of environmentally well-suited regions that can produce high-value products. Interestingly, many regions with production potential still exhibit a considerable gap between their current and ‘true’ maximum capability, especially in places where poverty is more common. Paradoxically, high-value horticultural products could often be produced in these regions if relatively small capital investments were made and proper marketing and distribution channels were created. The hypothesis is that small farmers within local agricultural systems are well positioned to take advantage of existing sustainable and profitable opportunities, specifically in high-value agricultural production. Unearthing these opportunities can entice investment in small-farm development and help these farmers enter the horticultural industry, thus expanding the volume, variety, and/or quality of products available for global consumption.
In this dissertation, the objective is three-fold: (1) to demonstrate the hidden production potential that exists within local agricultural communities, (2) to highlight the importance of supply chain modeling tools in the strategic design of local agricultural systems, and (3) to demonstrate the application of optimization and machine learning techniques to strategize the implementation of protective agricultural technologies.

As part of this dissertation, a yield approximation method is developed and integrated with a mixed-integer program to estimate a region’s potential to produce non-perennial vegetable items. This integration offers practical approximations that help decision-makers identify the technologies needed to protect agricultural production and alter harvesting patterns to better match market behavior, and it provides an analytical framework through which external investment entities can assess different production options.
Contributors: Flores, Hector M. (Author) / Villalobos, Rene (Thesis advisor) / Pan, Rong (Committee member) / Wu, Teresa (Committee member) / Parker, Nathan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Project portfolio selection (PPS) is a significant problem faced by most organizations. How best to select, from the many innovative ideas a company has developed, those to deploy in a proper and sustained manner with a balanced allocation of resources over multiple time periods is of vital importance to a company's goals. This dissertation details the steps involved in deploying a more intuitive portfolio selection framework that helps bring analysts and management to a consensus on ongoing company efforts and to buy-in on final decisions. A binary integer programming selection model is discussed that constructs an efficient frontier, allowing portfolios to be evaluated on many different criteria and allowing decision makers (DMs) to bring their experience and insight to the table when making a decision. A binary fractional integer program that provides additional choices by optimizing portfolios on cost-benefit ratios over multiple time periods is also presented. By combining this framework with an `elimination by aspects' model of decision making, DMs evaluate portfolios on various objectives and ensure the selection of a portfolio most in line with their goals. By presenting a modeling framework that easily models a large number of project inter-dependencies, together with an evolutionary algorithm that is intelligently guided in the search for attractive portfolios by a beam search heuristic, practitioners are given a ready recipe for solving big problem instances and generating attractive project portfolios for their organizations. Finally, this dissertation addresses the problem of risk and uncertainty in project portfolio selection. After exploring the selection of portfolios based on trade-offs between a primary benefit and a primary cost, the third important dimension, namely the uncertainty of outcome and the risk a decision maker is willing to take on in the quest to select the best portfolio for the organization, is examined.
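The efficient-frontier construction described above can be shown in miniature: for a handful of projects, enumerate all feasible binary portfolios and keep those that are non-dominated in cost and benefit. The project data and budget below are invented, and the dissertation uses binary integer programming rather than brute-force enumeration.

```python
from itertools import product

# Invented project data: (cost, benefit) per project, single budget for brevity.
projects = {"P1": (3, 7), "P2": (4, 6), "P3": (2, 5), "P4": (5, 8)}
budget = 9

portfolios = []
for picks in product([0, 1], repeat=len(projects)):     # all binary selections
    chosen = [p for p, take in zip(projects, picks) if take]
    cost = sum(projects[p][0] for p in chosen)
    benefit = sum(projects[p][1] for p in chosen)
    if cost <= budget:
        portfolios.append((cost, benefit, chosen))

# A portfolio is on the efficient frontier if no other feasible portfolio
# costs no more while delivering at least as much benefit (and is strictly
# better in at least one dimension).
frontier = [p for p in portfolios
            if not any(q[0] <= p[0] and q[1] >= p[1]
                       and (q[0], q[1]) != (p[0], p[1])
                       for q in portfolios)]
```

Presenting the frontier rather than a single "optimal" portfolio is what lets DMs weigh the remaining criteria themselves, which is the spirit of the framework above.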
Contributors: Sampath, Siddhartha (Author) / Gel, Esma (Thesis advisor) / Fowler, John W (Thesis advisor) / Kempf, Karl G. (Committee member) / Pan, Rong (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This research addresses the design optimization of systems for a specified reliability level, considering the dynamic nature of component failure rates. When designing a mechanical system (especially a load-sharing system), the failure of one component increases the probability of failure of the remaining components. Many engineering systems, such as aircraft, automobiles, and bridges, experience this phenomenon.

To design such systems, a Reliability-Based Design Optimization (RBDO) framework using the Sequential Optimization and Reliability Assessment (SORA) method is developed. The dynamic nature of the component failure probability is considered in the system reliability model. Stress-Strength Interference (SSI) theory is used to build the limit state functions of components, and the First Order Reliability Method (FORM) lies at the heart of the reliability assessment. In situations where the user needs to determine the optimum number of components and reduce component redundancy, the method can also be used to optimally allocate the required number of components to carry the system load. Its main advantages are high computational efficiency and the ability to incorporate any optimization or reliability assessment technique. Several numerical example cases are provided to validate the methodology.
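Under SSI theory with independent, normally distributed stress and strength, component reliability has the closed form R = Phi(beta) with beta = (mu_strength - mu_stress) / sqrt(sigma_strength^2 + sigma_stress^2). A small sketch (the distribution parameters are invented) also hints at the load-sharing effect: when a companion component fails and the survivor carries a larger share of the load, its stress distribution shifts and its reliability drops.

```python
from math import erf, sqrt

def ssi_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normal stress and strength."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1 + erf(beta / sqrt(2)))   # standard normal CDF at beta

# Invented example: strength ~ N(500, 40^2), shared-load stress ~ N(350, 30^2).
r_shared = ssi_reliability(500, 40, 350, 30)   # beta = 150 / 50 = 3.0

# After its partner fails, the surviving component carries more load:
# stress shifts to N(700, 60^2), and reliability collapses.
r_alone = ssi_reliability(500, 40, 700, 60)
```

This jump in failure probability after a partner failure is exactly the dynamic failure-rate behavior the optimization framework accounts for.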
Contributors: Bala Subramaniyan, Arun (Author) / Pan, Rong (Thesis advisor) / Askin, Ronald (Committee member) / Ju, Feng (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
For this thesis a Monte Carlo simulation was conducted to investigate the robustness of three latent interaction modeling approaches (constrained product indicator, generalized appended product indicator (GAPI), and latent moderated structural equations (LMS)) under high degrees of nonnormality of the exogenous indicators, which have not been investigated in previous literature. Results showed that the constrained product indicator and LMS approaches yielded biased estimates of the interaction effect when the exogenous indicators were highly nonnormal. When the violation of nonnormality was not severe (symmetric with excess kurtosis < 1), the LMS approach with ML estimation yielded the most precise latent interaction effect estimates. The LMS approach with ML estimation also had the highest statistical power among the three approaches, given that the actual Type-I error rates of the Wald and likelihood ratio test of interaction effect were acceptable. In highly nonnormal conditions, only the GAPI approach with ML estimation yielded unbiased latent interaction effect estimates, with an acceptable actual Type-I error rate of both the Wald test and likelihood ratio test of interaction effect. No support for the use of the Satorra-Bentler or Yuan-Bentler ML corrections was found across all three methods.
Contributors: Cham, Hei Ning (Author) / West, Stephen G. (Thesis advisor) / Aiken, Leona S. (Committee member) / Enders, Craig K. (Committee member) / Arizona State University (Publisher)
Created: 2010