Matching Items (20)
Description
In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of Distribution Centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs, selection among multiple truck sizes, and routed delivery to retailers. Optimally solving such an integrated problem is in general difficult due to its combinatorial nature, especially when transshipments and routing are involved. To find a good solution effectively, a two-phase solution methodology is derived: Phase I solves an integer programming model that includes all the constraints of the original model except that routings are simplified to direct shipments using estimated routing cost parameters. Phase II then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing costs and the effectiveness of the two-phase solution methodology are evaluated, and the computational performance is found to be promising. The problem can be solved heuristically within a reasonable time frame for a broad range of problem sizes (one hour for an instance with 200 retailers). In addition, a model is formulated for a similar network design problem that considers direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios.
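
A minimal Python sketch of the two-phase idea, using toy coordinates; the routing-cost factor, the greedy Phase I assignment, and the nearest-neighbor Phase II routing are illustrative stand-ins for the dissertation's integer program and inventory routing model, not its actual method.

```python
import math

# Hypothetical data: candidate DCs ((x, y), fixed cost) and retailer locations.
dcs = {"DC1": ((0, 0), 100.0), "DC2": ((10, 10), 120.0)}
retailers = {"R1": (1, 2), "R2": (2, 1), "R3": (9, 9), "R4": (8, 11), "R5": (5, 5)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Phase I stand-in: assign each retailer to a DC using direct-shipment costs
# scaled by an estimated routing-cost factor (the real Phase I is an integer
# program over DC opening, truck sizes, and transshipments).
ROUTING_FACTOR = 0.7  # assumption: routed delivery ~70% of direct-shipment cost
assignment = {r: min(dcs, key=lambda d: ROUTING_FACTOR * dist(p, dcs[d][0]))
              for r, p in retailers.items()}

# Phase II stand-in: build a delivery route per opened DC; a nearest-neighbor
# tour replaces the real inventory-routing subproblem.
def nearest_neighbor_tour(depot, stops):
    tour, current, remaining = [], depot, dict(stops)
    while remaining:
        nxt = min(remaining, key=lambda r: dist(current, remaining[r]))
        tour.append(nxt)
        current = remaining.pop(nxt)
    return tour

for dc, (loc, _fixed) in dcs.items():
    assigned = {r: retailers[r] for r, d in assignment.items() if d == dc}
    if assigned:
        print(dc, "route:", " -> ".join(nearest_neighbor_tour(loc, assigned)))
```
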
ContributorsXia, Mingjun (Author) / Askin, Ronald (Thesis advisor) / Mirchandani, Pitu (Committee member) / Zhang, Muhong (Committee member) / Kierstead, Henry (Committee member) / Arizona State University (Publisher)
Created2013
Description
Multicore processors have proliferated in nearly all forms of computing, from servers and desktops to smartphones. The primary reason for this wide adoption of multicore processors is their ability to overcome the power wall by providing higher performance at a lower power consumption rate. With multi-cores, there is an increased need for dynamic energy management (DEM), much more than for single-core processors, as DEM for multi-cores is no longer just a mechanism to keep a processor under specified temperature limits, but a set of techniques that manage various processor controls, such as dynamic voltage and frequency scaling (DVFS), task migration, and fan speed, to achieve a stated objective. The objectives span a wide range, from maximizing throughput, minimizing power consumption, and reducing peak temperature to maximizing energy efficiency and processor reliability, all subject to constraints on temperature, power, timing, and reliability. Thus DEM can be very complex and challenging to achieve. Since many DEM techniques often operate together on a single processor, there is a need to unify them. This dissertation addresses such a need. In this work, a framework for DEM is proposed that provides a unifying processor model including processor power, thermal, timing, and reliability models, and that supports various DEM control mechanisms and many different objective functions along with equally diverse constraint specifications. Using the framework, a range of novel solutions is derived for instances of DEM problems, including maximizing processor performance or energy efficiency, and minimizing power consumption or peak temperature, under constraints of maximum temperature, memory reliability, and task deadlines. Finally, a robust closed-loop controller is proposed to implement the above solutions on a real processor platform with very low operational overhead. Along with the controller design, a model identification methodology for obtaining the required power and thermal models for the controller is also discussed. The controller is architecture independent and hence easily portable across many platforms. The controller has been successfully deployed on an Intel Sandy Bridge processor, and its use has increased the energy efficiency of the processor by over 30%.
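
As a toy illustration of the closed-loop idea only (not the dissertation's controller), the sketch below uses a proportional-integral loop to select DVFS levels against a temperature setpoint; the frequency table, gains, and first-order thermal model are all assumed values.

```python
# Toy PI thermal controller: picks a DVFS level to track a temperature
# setpoint. All parameters below are invented for demonstration.
FREQ_LEVELS = [1.2, 1.6, 2.0, 2.4, 2.8]  # GHz, hypothetical P-states

def simulate(setpoint_c=70.0, steps=20, kp=0.08, ki=0.01):
    temp, integral, level = 45.0, 0.0, len(FREQ_LEVELS) - 1
    for t in range(steps):
        error = setpoint_c - temp
        integral += error
        # Map the PI signal to a change in the discrete frequency level.
        adjust = kp * error + ki * integral
        level = max(0, min(len(FREQ_LEVELS) - 1, level + round(adjust)))
        freq = FREQ_LEVELS[level]
        # Assumed first-order thermal model: heating grows with f^2, plus
        # Newtonian cooling toward a 25 C ambient.
        temp += 0.5 * freq**2 - 0.15 * (temp - 25.0)
        print(f"t={t:2d} freq={freq:.1f} GHz temp={temp:5.1f} C")

simulate()
```
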
ContributorsHanumaiah, Vinay (Author) / Vrudhula, Sarma (Thesis advisor) / Chatha, Karamvir (Committee member) / Chakrabarti, Chaitali (Committee member) / Rodriguez, Armando (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created2013
Description
Rapid advances in sensor and information technology have resulted in a spatially and temporally data-rich environment, which creates a pressing need to develop novel statistical methods and the associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges posed by these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity, and data uncertainty. Besides the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness, and numerical stability in practice. On the other hand, some recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as some traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four kinds of general complex datasets, including "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets" and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery, and manufacturing. We describe the development of novel statistical models to analyze complex datasets that fall under these four categories, and we show how these models can be applied to real-world applications, such as Alzheimer's disease research, nursing care processes, and manufacturing.
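
As one concrete flavor of the sparse learning mentioned above, here is a self-contained lasso regression for high-dimensional data, solved by iterative soft-thresholding (ISTA); the synthetic dataset and the lasso choice are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                      # fewer samples than features
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 features matter
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_ista(X, y, lam=0.1, iters=500):
    """Minimize 0.5*||X b - y||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of grad
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return beta

beta_hat = lasso_ista(X, y)
print("nonzero coefficients found:", np.flatnonzero(np.abs(beta_hat) > 1e-3))
```
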
ContributorsHuang, Shuai (Author) / Li, Jing (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created2012
Description
Cloud computing has received significant attention recently as a new computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamic, scalable, and virtualized manner. SaaS (Software-as-a-Service) provides a new paradigm in cloud computing, whose goal is to provide an effective and intelligent way to support end users' on-demand requirements for computing resources, including the maturity levels of customization, multi-tenancy, and scalability. To meet these on-demand requirements, this thesis discusses several critical research problems and proposes solutions using real application scenarios. Service providers receive multiple requests from customers, and how to prioritize those service requests to maximize business value is one of the most important issues in the cloud. An innovative prioritization model is proposed, which uses different types of information, including customer, service, environment, and workflow information, to optimize the performance of the system. To provide "on-demand" services, accurate demand prediction and provisioning become critical for the success of cloud computing. An effective demand prediction model is proposed and applied to a real mortgage application. To support SaaS customization and fulfill the various functional and quality requirements of individual tenants, a unified and innovative multi-layered customization framework is proposed to support and manage the variability of SaaS applications. To support scalable SaaS, a hybrid database design with two-layer database partitioning is proposed to support SaaS customization. To support secure SaaS, O-RBAC, an ontology-based RBAC (Role-based Access Control) model, is used for multi-tenancy architecture in clouds. To support a significant number of tenants, an easy-to-use SaaS construction framework is proposed. In summary, this thesis discusses the most important research problems in cloud computing on the way toward effective and intelligent SaaS. The research in this thesis is critical to the development of cloud computing and provides fundamental solutions to those problems.
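
A minimal sketch of how such a prioritization might score and order service requests; the weights and feature names are invented for illustration, and the thesis's actual model is considerably richer.

```python
import heapq
from dataclasses import dataclass, field

# Assumed weights over three information types (customer, service, workflow).
WEIGHTS = {"customer_tier": 0.5, "urgency": 0.3, "workflow_impact": 0.2}

@dataclass(order=True)
class Request:
    priority: float
    name: str = field(compare=False)

def score(customer_tier, urgency, workflow_impact):
    feats = {"customer_tier": customer_tier, "urgency": urgency,
             "workflow_impact": workflow_impact}
    return sum(WEIGHTS[k] * v for k, v in feats.items())

queue = []
for name, feats in [("req-A", (0.9, 0.4, 0.7)), ("req-B", (0.3, 0.9, 0.2)),
                    ("req-C", (0.8, 0.8, 0.9))]:
    heapq.heappush(queue, Request(-score(*feats), name))  # max-heap via negation

while queue:
    req = heapq.heappop(queue)
    print(f"serve {req.name} (score={-req.priority:.2f})")
```
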
ContributorsShao, Qihong (Author) / Tsai, Wei-Tek (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Naphade, Milind (Committee member) / Arizona State University (Publisher)
Created2011
Description
Every year, millions of guests visit theme parks internationally. Within that massive population, accidents and emergencies are bound to occur. Choosing the correct location for emergency responders inside of the park could mean the difference between life and death. In an effort to provide the utmost safety for the guests of a park, it is important to make the best decision when selecting the location for emergency response crews. A theme park is different from a regular residential or commercial area because the crowds and shows block certain routes, and they change throughout the day. We propose an optimization model that selects staging locations for emergency medical responders in a theme park to maximize the number of responses that can occur within a pre-specified time. The staging areas are selected from a candidate set of restricted access locations where the responders can store their equipment. Our solution approach considers all routes to access any park location, including areas that are unavailable to a regular guest. Theme parks are a highly dynamic environment. Because special events occurring in the park at certain hours (e.g., parades) might impact the responders' travel times, our model's decisions also include the time dimension in the location and re-location of the responders. Our solution provides the optimal location of the responders for each time partition, including backup responders. When an optimal solution is found, the model is also designed to consider alternate optimal solutions that provide a more balanced workload for the crews.
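
A small illustration of the underlying coverage idea, with invented staging candidates, incident demand weights, and a response-time limit; the thesis's model is a time-indexed optimization with relocation and backups, not this enumeration.

```python
import math
from itertools import combinations

candidates = {"backstage-1": (0, 0), "backstage-2": (6, 0), "first-aid": (3, 5)}
incidents = {(1, 1): 10, (5, 1): 7, (3, 4): 12, (6, 5): 4}  # location: expected calls
TIME_LIMIT = 3.5  # max travel distance reachable within the response-time target

def covered(staging, incident):
    return math.dist(candidates[staging], incident) <= TIME_LIMIT

def coverage(chosen):
    # Demand weight of incidents reachable in time from at least one staging area.
    return sum(w for loc, w in incidents.items()
               if any(covered(s, loc) for s in chosen))

# Pick the best pair of staging areas by exhaustive search (fine at toy scale).
best = max(combinations(candidates, 2), key=coverage)
print("staging areas:", best, "covered demand:", coverage(best))
```
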
ContributorsLivingston, Noah Russell (Author) / Sefair, Jorge (Thesis director) / Askin, Ronald (Committee member) / Industrial, Systems and Operations Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-12
Description
A production system is commonly restricted by time windows. For example, perishability is a major concern in food processing and requires products such as yogurt, beer, and meat not to stay in a buffer for long. Semiconductor manufacturing faces oxidation and moisture absorption issues if a product in a buffer is exposed to air for too long. Machine reliability is a major source of uncertainty in production systems that causes residence time constraints to be violated, leading to potential product quality issues. Rapid advances in sensor technology and automation provide the potential to manage production in real time, but the system complexity brought by residence time constraints makes it difficult to optimize system performance while providing a guaranteed product quality. To contribute to this end, this dissertation is dedicated to the modeling, analysis, and control of production systems with constrained time windows. The study starts with a small-scale serial production line with two machines and one buffer. Even the simplest serial lines can have too large a state space once residence time constraints are considered. A Markov chain model is developed to approximate the line's transient behavior with high accuracy, and an iterative learning algorithm is proposed to perform real-time control. The analysis of the two-machine serial line supports the further analysis of more general and complex serial lines with multiple machines, where residence time constraints may be imposed at multiple stages. To deal with this, a two-machine-one-buffer subsystem isolated from a multi-stage serial production line is first analyzed and then used as a building block in an aggregation method for evaluating overall system performance. The proposed aggregation method substantially reduces the complexity of the problem while maintaining high accuracy. A decomposition-based approach is then proposed to control a multi-stage serial production line: the system is decomposed into small-scale subsystems, and an iterative aggregation procedure generates a production control policy. The decomposition-based control approach outperforms a general-purpose reinforcement learning method, delivering significant system performance improvement with a substantial reduction in computational overhead. Finally, the semiconductor assembly line, a typical production system in which products are restricted by time windows and production can be disrupted by machine failures, is studied, and a production control problem is formulated to minimize total lot delay time and residence time constraint violations.
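
A toy transient analysis of a two-machine-one-buffer line, echoing the Markov chain approach described above: states are buffer levels, and the state distribution is propagated step by step. The transition probabilities are assumed illustrative parameters, not the dissertation's model (which also tracks residence times).

```python
import numpy as np

B = 3                       # buffer capacity
p_up = 0.6                  # prob. machine 1 feeds a part into the buffer
p_down = 0.5                # prob. machine 2 removes a part from the buffer

P = np.zeros((B + 1, B + 1))
for b in range(B + 1):
    up = p_up if b < B else 0.0        # cannot add to a full buffer
    down = p_down if b > 0 else 0.0    # cannot take from an empty buffer
    P[b, min(b + 1, B)] += up * (1 - down)
    P[b, max(b - 1, 0)] += down * (1 - up)
    P[b, b] += 1 - up * (1 - down) - down * (1 - up)  # rows sum to 1

# Transient behavior: propagate the buffer-level distribution a few steps.
dist = np.zeros(B + 1)
dist[0] = 1.0               # start with an empty buffer
for t in range(1, 6):
    dist = dist @ P
    print(f"t={t} buffer-level distribution:", np.round(dist, 3))
```
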
ContributorsWang, Feifan (Author) / Ju, Feng (Thesis advisor) / Askin, Ronald (Committee member) / Mirchandani, Pitu (Committee member) / Patel, Nital (Committee member) / Arizona State University (Publisher)
Created2021
Description
Conservation planning is fundamental to guarantee the survival of endangered species and to preserve the ecological values of some ecosystems. Planning land acquisitions increasingly requires a landscape approach to mitigate the negative impacts of spatial threats such as urbanization, agricultural development, and climate change. In this context, landscape connectivity and compactness are vital characteristics for the effective functionality of conservation reserves. Connectivity allows species to travel across landscapes, facilitating the flow of genes across populations from different protected areas. Compactness measures the spatial dispersion of protected sites, which can be used to mitigate risk factors associated with species leaving and re-entering the reserve. This research proposes an optimization model to identify areas to protect while enforcing connectivity and compactness. Building upon existing methods, it develops an alternative metric of compactness that penalizes the selection of patches of land with few protected neighbors. The new metric is referred to as leaf because it aims to minimize the number of selected areas with only one neighboring protected area. The model includes budget and minimum selected area constraints to reflect realistic financial and ecological requirements. Using a lexicographic approach, the model can improve the compactness of conservation reserves obtained by other methods. The use of the model is illustrated by solving instances of up to 1,100 patches.
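
A minimal sketch of the leaf idea on a toy grid of patches, with assumed 4-neighbor adjacency; the thesis embeds this count inside an optimization model rather than computing it after the fact.

```python
# Protected patches given as (row, col) grid cells (assumed example selection).
selected = {(0, 0), (0, 1), (0, 2), (1, 1), (3, 3)}

def neighbors(cell):
    r, c = cell
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def leaf_count(selection):
    """Number of selected patches with at most one selected neighbor ("leaves")."""
    return sum(1 for cell in selection
               if sum(n in selection for n in neighbors(cell)) <= 1)

print("leaves:", leaf_count(selected))  # 4 of the 5 patches are leaves here
```

A compactness-seeking model would penalize (or lexicographically minimize) this count so that selected patches cluster together instead of forming strings and isolated cells.
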
ContributorsRavishankar, Shreyas (Author) / Sefair, Jorge A (Thesis advisor) / Askin, Ronald (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2019
Description
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally effective and accurate algorithms to process the images (e.g., to measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, glomeruli in kidney MR images, etc. This problem is particularly challenging because small blobs often have an inhomogeneous intensity distribution and an indistinct boundary against the background.

This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction, and (4) unsupervised clustering for post-pruning. First, detecting blobs from 2D images is studied, and a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. Using scale space theory as the foundation, the image is smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from pre-segmented blob candidates and fed to Variational Bayesian Gaussian Mixture Models (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescence images are tested to demonstrate the performance of HLoG. Next, as an extension, Hessian-based Difference of Gaussians (HDoG) is proposed, which is capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to automatically detect glomeruli, enabling new measurements of renal microstructures and pathology in preclinical and clinical studies. Realizing that computation time is a key factor impacting clinical adoption, the last phase of this research investigates data reduction techniques for the VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, it is observed that the four-phase system with coreset-VBGMM performs similarly to using the full dataset but is about 20 times faster.
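
A bare-bones 2D LoG pre-segmentation in the spirit of the pipeline's first phases, using a fixed scale and a simple threshold where the actual HLoG method performs Hessian-based scale selection across a scale space; the synthetic image and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.05, (64, 64))            # noisy background
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(16, 16), (40, 45), (50, 20)]:    # plant 3 bright Gaussian blobs
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

sigma = 3.0
log_img = -gaussian_laplace(img, sigma)          # bright blobs -> positive response
candidates = log_img > 0.5 * log_img.max()       # threshold stands in for Hessian test

labeled, n_blobs = label(candidates)             # connected components = candidates
print("blob candidates found:", n_blobs)
```

In the full system, the pre-segmented candidates would then feed regional feature extraction and VBGMM clustering for post-pruning.
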
ContributorsZhang, Min (Author) / Wu, Teresa (Thesis advisor) / Li, Jing (Committee member) / Pavlicek, William (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created2015
Description
Revenue management is at the core of airline operations today; proprietary algorithms and heuristics are used to determine prices and availability of tickets on an almost-continuous basis. While initial developments in revenue management were motivated by industry practice, later developments that overcome fundamental omissions of the earlier models show significant improvement; however, they focus on relatively esoteric aspects of the problem and have limited potential for practical use due to their computational requirements. This dissertation addresses various modeling and computational issues by introducing realistic choice-based demand revenue management models. In particular, this work introduces two optimization formulations alongside a choice-based demand modeling framework, improving on the methods that the choice-based revenue management literature has created to date by providing sensible models for airline implementation.

The first model offers an alternative formulation of the traditional choice-based revenue management problem presented in the literature, and provides substantial gains in expected revenue while limiting the problem's computational complexity. Under assumptions on passenger demand, the Choice-based Mixed Integer Program (CMIP) provides a significantly more compact formulation than other choice-based revenue management models and consistently outperforms previous models.

Despite the prevalence of choice-based revenue management models in the literature, the assumptions made on purchasing behavior prevent researchers from creating models that properly reflect passenger sensitivities to various ticket attributes, such as price, number of stops, and flexibility options. This dissertation introduces a general framework for airline choice-based demand modeling that takes into account various ticket attributes in addition to price, allowing revenue management models to relate airline companies' product design strategies to the practice of revenue management through decisions on ticket availability and price.

Finally, this dissertation introduces a mixed integer non-linear programming formulation for airline revenue management that accommodates setting prices and availabilities simultaneously on a network. Traditional revenue management models focus primarily on availability alone, forcing secondary models to optimize prices. The Price-dynamic Choice-based Mixed Integer Program (PCMIP) eliminates this two-step process, aligning passenger purchase behavior with revenue management policies, and is shown to outperform previously developed models, opening a new frontier of research in airline revenue management.
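
For intuition on the choice-based demand side, here is a small multinomial logit (MNL) example that computes the expected revenue of each possible offer set; MNL is the standard demand model behind such formulations, but the fares and utilities below are toy assumptions, not the dissertation's data.

```python
import math
from itertools import chain, combinations

fares = {"economy-restricted": 150, "economy-flex": 260, "business": 600}
utils = {"economy-restricted": 1.0, "economy-flex": 0.4, "business": -0.5}
U_NO_PURCHASE = 0.8   # utility of walking away without buying

def expected_revenue(offer_set):
    # MNL purchase probabilities: exp(u_p) normalized over offered products
    # plus the no-purchase alternative.
    denom = math.exp(U_NO_PURCHASE) + sum(math.exp(utils[p]) for p in offer_set)
    return sum(fares[p] * math.exp(utils[p]) / denom for p in offer_set)

# Enumerate all non-empty offer sets and pick the revenue-maximizing one.
all_sets = chain.from_iterable(combinations(fares, k)
                               for k in range(1, len(fares) + 1))
best = max(all_sets, key=expected_revenue)
print("best offer set:", best,
      "expected revenue per arrival: %.2f" % expected_revenue(best))
```

The optimization formulations described above make such offer-set (and, in PCMIP, price) decisions across a network of flights subject to capacity, rather than by enumeration.
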
ContributorsClough, Michael C (Author) / Gel, Esma (Thesis advisor) / Jacobs, Timothy (Thesis advisor) / Askin, Ronald (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created2016
Description
This thesis presents a successful application of operations research techniques in a nonprofit distribution system to improve distribution efficiency and increase customer service quality. It focuses on truck routing problems faced by the St. Mary's Food Bank Distribution Center. The problem is modeled as a capacitated vehicle routing problem to improve distribution efficiency and is extended to a capacitated vehicle routing problem with time windows to increase customer service quality. Several heuristics are applied to solve these vehicle routing problems and are tested on well-known benchmark problems. The algorithms are evaluated by comparing their results with the plan currently used by the St. Mary's Food Bank Distribution Center. The results suggest the heuristics are quite competitive: on average, the heuristics' solutions use 17% fewer trucks and 28.52% less travel time.
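
As an example of this class of heuristics, here is a compact Clarke-Wright savings construction for the capacitated vehicle routing problem; the depot, customer coordinates, demands, and truck capacity are toy assumptions, and this simplified version merges routes in one orientation only.

```python
import math

DEPOT = (0, 0)
customers = {1: ((2, 3), 4), 2: ((5, 1), 3), 3: ((6, 4), 5), 4: ((1, 6), 2)}
CAPACITY = 8

def coord(i): return customers[i][0]
def demand(i): return customers[i][1]

route_of = {i: [i] for i in customers}   # each customer starts on its own route

# Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), processed in decreasing order.
savings = sorted(
    ((math.dist(DEPOT, coord(i)) + math.dist(DEPOT, coord(j))
      - math.dist(coord(i), coord(j)), i, j)
     for i in customers for j in customers if i < j),
    reverse=True)

for s, i, j in savings:
    ri, rj = route_of[i], route_of[j]
    # Merge only if i ends one route, j starts a different one, capacity holds.
    if ri is not rj and ri[-1] == i and rj[0] == j:
        if sum(map(demand, ri + rj)) <= CAPACITY:
            ri.extend(rj)
            for c in rj:
                route_of[c] = ri

for r in {id(r): r for r in route_of.values()}.values():   # unique routes
    print("truck route: depot ->", " -> ".join(map(str, r)), "-> depot")
```
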
ContributorsLi, Xiaoyan (Author) / Askin, Ronald (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2015