Matching Items (17)

An optimization model for emergency response crew location within a theme park

Description

Every year, millions of guests visit theme parks internationally. Within that massive population, accidents and emergencies are bound to occur. Choosing the correct location for emergency responders inside the park could mean the difference between life and death. In an effort to provide the utmost safety for the guests of a park, it is important to make the best decision when selecting the location for emergency response crews. A theme park is different from a regular residential or commercial area because crowds and shows block certain routes, and these change throughout the day. We propose an optimization model that selects staging locations for emergency medical responders in a theme park to maximize the number of responses that can occur within a pre-specified time. The staging areas are selected from a candidate set of restricted-access locations where the responders can store their equipment. Our solution approach considers all routes to access any park location, including areas that are unavailable to a regular guest. Theme parks are highly dynamic environments: because special events occurring in the park at certain hours (e.g., parades) might impact the responders' travel times, our model's decisions also include the time dimension in the location and relocation of the responders. Our solution provides the optimal location of the responders for each time partition, including backup responders. When an optimal solution is found, the model is also designed to consider alternate optimal solutions that provide a more balanced workload for the crews.
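
The core of such a model is a time-indexed maximal-coverage formulation: choose staging areas per time partition so that as many demand points as possible are reachable within the response-time limit. A minimal sketch of that idea in PuLP follows; all data (candidate sites, demand points, travel times, crew budget) is hypothetical, and the dissertation's actual formulation with relocation and backup crews is richer.

```python
# Minimal time-indexed maximal-coverage sketch (hypothetical data),
# loosely following the staging-location idea described above.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

sites = ["backstage_A", "backstage_B", "first_aid"]   # candidate staging areas
demand = ["gate", "coaster", "parade_route"]          # park locations needing coverage
periods = ["morning", "parade", "evening"]            # time partitions
crews = 2                                             # crews available per period
limit = 5.0                                           # response-time threshold (minutes)

# travel[t][s][d]: travel time from site s to demand point d in period t;
# parades block some routes, so times differ by period (illustrative numbers).
travel = {t: {s: {d: 4.0 for d in demand} for s in sites} for t in periods}
travel["parade"]["backstage_A"]["parade_route"] = 9.0  # route blocked by the parade

m = LpProblem("staging", LpMaximize)
x = LpVariable.dicts("open", (periods, sites), cat=LpBinary)      # site staffed in t
y = LpVariable.dicts("covered", (periods, demand), cat=LpBinary)  # point covered in t

m += lpSum(y[t][d] for t in periods for d in demand)  # maximize timely coverage
for t in periods:
    m += lpSum(x[t][s] for s in sites) <= crews       # crew budget per period
    for d in demand:
        # d counts as covered only if some staffed site reaches it in time
        m += y[t][d] <= lpSum(x[t][s] for s in sites if travel[t][s][d] <= limit)

m.solve(PULP_CBC_CMD(msg=False))
for t in periods:
    print(t, [s for s in sites if x[t][s].value() > 0.5])
```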

Date Created
  • 2017-12

Stochastic optimization of product-machine qualification in a semiconductor back-end facility

Description

In order to process a product in a semiconductor back-end facility, a machine needs to be qualified, first by having product-specific software installed and then by running test wafers through it to verify that the machine is capable of performing the process correctly. In general, not all machines are qualified to process all products, due to the high machine qualification cost and tool set availability. The machine qualification decision affects future capacity allocation in the facility and subsequently affects daily production schedules. To balance the tradeoff between current machine qualification costs and potential future backorder costs incurred when too few machines are qualified under uncertain demand, a stochastic product–machine qualification optimization model is proposed in this article. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to show the necessity of the stochastic model and the performance of the different solution methods.
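
The two-stage structure can be made concrete in its extensive form: a binary first stage picks qualifications, and a second stage allocates capacity per demand scenario, with backorders absorbing shortfalls. The sketch below uses PuLP and invented data; the article solves this same structure by L-shaped (Benders) decomposition rather than as one monolithic program.

```python
# Extensive-form sketch of the two-stage qualification model (illustrative data).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

products, machines = ["P1", "P2"], ["M1", "M2", "M3"]
scenarios = {"low": 0.5, "high": 0.5}                      # scenario probabilities
demand = {("P1", "low"): 40, ("P1", "high"): 90,
          ("P2", "low"): 60, ("P2", "high"): 80}
cap, qual_cost, backorder_cost = 70, 100, 5                # capacity and unit costs

m = LpProblem("qualification", LpMinimize)
q = LpVariable.dicts("qualify", (products, machines), cat=LpBinary)
x = {(p, mc, s): LpVariable(f"x_{p}_{mc}_{s}", lowBound=0)   # allocated capacity
     for p in products for mc in machines for s in scenarios}
b = {(p, s): LpVariable(f"back_{p}_{s}", lowBound=0)         # backorders
     for p in products for s in scenarios}

# qualification cost now + expected backorder cost later
m += (lpSum(qual_cost * q[p][mc] for p in products for mc in machines)
      + lpSum(scenarios[s] * backorder_cost * b[p, s]
              for p in products for s in scenarios))
for s in scenarios:
    for mc in machines:
        m += lpSum(x[p, mc, s] for p in products) <= cap     # machine capacity
    for p in products:
        m += lpSum(x[p, mc, s] for mc in machines) + b[p, s] >= demand[p, s]
        for mc in machines:
            m += x[p, mc, s] <= cap * q[p][mc]               # only qualified machines

m.solve(PULP_CBC_CMD(msg=False))
print([(p, mc) for p in products for mc in machines if q[p][mc].value() > 0.5])
```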

Date Created
  • 2015-07-03

Design, analytics and quality assurance for emerging personalized clinical diagnostics based on next-gen sequencing

Description

Major advancements in biology and medicine have been realized during recent decades, including massively parallel sequencing, which allows researchers to collect millions or billions of short reads from a DNA or RNA sample. This capability opens the door to a renaissance in personalized medicine, if effectively deployed. Three projects that address major and necessary advancements in massively parallel sequencing are included in this dissertation. The first study involves a pair of algorithms to verify patient identity based on single nucleotide polymorphisms (SNPs). In brief, we developed a method that allows de novo construction of sample relationships, e.g., which samples are from the same individuals and which are from different individuals. We also developed a method to confirm the hypothesis that a tumor came from a known individual. The second study derives an algorithm to multiplex multiple Polymerase Chain Reaction (PCR) reactions while minimizing the interference between reactions that would compromise results. PCR is a powerful technique that amplifies pre-determined regions of DNA and is often used to selectively amplify DNA and RNA targets that are destined for sequencing. It is highly desirable to multiplex reactions to save on reagent and assay setup costs, as well as to equalize the effect of minor handling issues across gene targets. Our solution involves a binary integer program that minimizes events that are likely to cause interference between PCR reactions. The third study involves design and analysis methods required to analyze gene expression and copy number results against a reference range in a clinical setting, for guiding patient treatments. Our goal is to determine which events are present in a given tumor specimen; these events may be mutations, DNA copy number changes, or RNA expression changes. All three techniques are being used in major research and diagnostic projects for their intended purposes at the time of writing this manuscript. The SNP matching solution has been selected by The Cancer Genome Atlas to determine sample identity. Paradigm Diagnostics, Viomics and International Genomics Consortium utilize the PCR multiplexing technique to multiplex various types of PCR reactions on multi-million dollar projects. The reference range-based normalization method is used by Paradigm Diagnostics to analyze results from every patient.
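
At its core, the SNP identity check reduces to genotype concordance: samples from the same individual agree at nearly all SNP loci, while unrelated samples agree only at a background rate. The toy sketch below illustrates that comparison; the data and the idea of thresholding concordance are illustrative only, and the dissertation's algorithms additionally handle tumor-specific effects and noise.

```python
# Toy genotype-concordance check for sample identity (illustrative data).
# Genotypes are coded 0/1/2 = copies of the alternate allele; None = no-call.

def concordance(g1, g2):
    """Fraction of co-called SNP loci where two samples agree."""
    called = [(a, b) for a, b in zip(g1, g2) if a is not None and b is not None]
    if not called:
        return 0.0
    return sum(a == b for a, b in called) / len(called)

sample_a = [0, 1, 2, 1, 0, 2, 1, 1]      # blood draw
sample_b = [0, 1, 2, 1, None, 2, 1, 1]   # tumor, putatively the same patient
sample_c = [2, 0, 1, 1, 2, 0, 0, 1]      # unrelated individual

# Same-individual pairs cluster near 1.0; unrelated pairs near random agreement.
for name, other in [("a-b", sample_b), ("a-c", sample_c)]:
    print(name, round(concordance(sample_a, other), 2))
```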

Date Created
  • 2014

Capacity Planning, Production and Distribution Scheduling for a Multi-Facility and Multi-Product Supply Chain Network

Description

In today’s rapidly changing world and competitive business environment, firms are challenged to build their production and distribution systems to provide the desired customer service at the lowest possible cost. Designing an optimal supply chain by optimizing supply chain operations and decisions is key to achieving these goals.

In this research, a capacity planning and production scheduling mathematical model for a multi-facility, multi-product supply chain network with significant capital and labor costs is first proposed. This model considers the key levers of capacity configuration at production plants, namely shifts, run rate, down periods, finished-goods inventory management, and overtime. It suggests a minimum-cost plan for meeting medium-range demand forecasts that indicates production and inventory levels at plants by time period, the associated manpower plan, and outbound shipments over the planning horizon. This dissertation then investigates two model extensions: production flexibility and pricing. In the first extension, the costs and benefits of investing in production flexibility are studied. In the second extension, product pricing decisions are added to the model for demand shaping, taking into account the price elasticity of demand.

The research develops methodologies to optimize supply chain operations by determining the optimal capacity plan and optimal flows of products among facilities, based on a nonlinear mixed integer programming formulation. For large, real-life instances the problem is intractable, so an alternate formulation and an iterative heuristic algorithm are proposed and tested, and the performance and bounds of the heuristic are evaluated. A real-life case study in the automotive industry is considered for the implementation of the proposed models. The implementation results illustrate that the proposed method provides valuable insights for assisting the decision-making process in the supply chain and delivers significant improvement over current practice.
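
The production/inventory core of such a model is a balance equation linking each period's production, overtime, and carried inventory to demand. A single-plant sketch of that core in PuLP follows; the data, the single product, and the omission of shifts, run rate, and shipments are all simplifications for illustration.

```python
# Single-plant sketch of the production/inventory core (illustrative data;
# the full model spans plants, shifts, run rates, and outbound shipments).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

T = range(4)                                # planning periods
dem = [120, 150, 180, 140]                  # demand forecast per period
reg_cap, ot_cap = 140, 30                   # regular and overtime capacity
c_reg, c_ot, c_inv = 10, 15, 2              # unit production, overtime, holding costs

m = LpProblem("capacity_plan", LpMinimize)
prod = [LpVariable(f"prod_{t}", 0, reg_cap) for t in T]
ot = [LpVariable(f"ot_{t}", 0, ot_cap) for t in T]
inv = [LpVariable(f"inv_{t}", 0) for t in T]

m += lpSum(c_reg * prod[t] + c_ot * ot[t] + c_inv * inv[t] for t in T)
for t in T:
    prev = inv[t - 1] if t > 0 else 0       # starting inventory assumed zero
    m += prev + prod[t] + ot[t] - dem[t] == inv[t]   # inventory balance

m.solve(PULP_CBC_CMD(msg=False))
print([(prod[t].value(), ot[t].value(), inv[t].value()) for t in T])
```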

Date Created
  • 2020

Optimization Model and Algorithm for the Design of Connected and Compact Conservation Reserves

Description

Conservation planning is fundamental to guarantee the survival of endangered species and to preserve the ecological values of some ecosystems. Planning land acquisitions increasingly requires a landscape approach to mitigate the negative impacts of spatial threats such as urbanization, agricultural development, and climate change. In this context, landscape connectivity and compactness are vital characteristics for the effective functionality of conservation reserves. Connectivity allows species to travel across landscapes, facilitating the flow of genes across populations from different protected areas. Compactness measures the spatial dispersion of protected sites, which can be used to mitigate risk factors associated with species leaving and re-entering the reserve. This research proposes an optimization model to identify areas to protect while enforcing connectivity and compactness. Building upon existing methods, it develops an alternative metric of compactness that penalizes the selection of patches of land with few protected neighbors. The new metric is referred to as leaf because it aims to minimize the number of selected areas with only one neighboring protected area. The model includes budget and minimum selected area constraints to reflect realistic financial and ecological requirements. Using a lexicographic approach, the model can improve the compactness of conservation reserves obtained by other methods. The use of the model is illustrated by solving instances of up to 1100 patches.
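
The leaf idea is easy to make concrete on a grid: a leaf is a selected patch with exactly one selected neighbor, so compact reserves have few leaves while stringy ones have many. The sketch below just counts leaves under 4-neighbor adjacency (an assumption; the model itself penalizes these counts inside the optimization rather than merely reporting them).

```python
# Count "leaf" patches: selected cells with exactly one selected 4-neighbor.
# A compact reserve has few leaves; a stringy one has many (toy grids).

def leaf_count(selected):
    """selected: set of (row, col) protected patches on a grid."""
    leaves = 0
    for r, c in selected:
        nbrs = sum((r + dr, c + dc) in selected
                   for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)])
        if nbrs == 1:
            leaves += 1
    return leaves

compact = {(0, 0), (0, 1), (1, 0), (1, 1)}          # 2x2 block: no leaves
stringy = {(0, 0), (0, 1), (0, 2), (0, 3)}          # line: two leaf endpoints
print(leaf_count(compact), leaf_count(stringy))     # -> 0 2
```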

Date Created
  • 2019

Efficient formulations for next-generation choice-based network revenue management for airline implementation

Description

Revenue management is at the core of airline operations today; proprietary algorithms and heuristics are used to determine prices and availability of tickets on an almost-continuous basis. While initial developments in revenue management were motivated by industry practice, later developments that overcome fundamental omissions of the earlier models show significant improvement, yet they focus on relatively esoteric aspects of the problem and have limited potential for practical use due to their computational requirements. This dissertation attempts to address these modeling and computational issues by introducing realistic choice-based demand revenue management models. In particular, this work introduces two optimization formulations alongside a choice-based demand modeling framework, improving on the methods that the choice-based revenue management literature has created to date by providing sensible models for airline implementation.

The first model offers an alternative formulation to the traditional choice-based revenue management problem presented in the literature, and provides substantial gains in expected revenue while limiting the problem’s computational complexity. Making assumptions on passenger demand, the Choice-based Mixed Integer Program (CMIP) provides a significantly more compact formulation when compared to other choice-based revenue management models, and consistently outperforms previous models.

Despite the prevalence of choice-based revenue management models in the literature, the assumptions made on purchasing behavior prevent researchers from creating models that properly reflect passenger sensitivities to various ticket attributes, such as price, number of stops, and flexibility options. This dissertation introduces a general framework for airline choice-based demand modeling that takes into account various ticket attributes in addition to price, providing a framework for revenue management models to relate airline companies' product design strategies to the practice of revenue management through decisions on ticket availability and price.

Finally, this dissertation introduces a mixed integer non-linear programming formulation for airline revenue management that accommodates the possibility of simultaneously setting prices and availabilities on a network. Traditional revenue management models focus primarily on availability, forcing secondary models to optimize prices. The Price-dynamic Choice-based Mixed Integer Program (PCMIP) eliminates this two-step process, aligning passenger purchase behavior with revenue management policies, and is shown to outperform previously developed models, providing a new frontier of research in airline revenue management.
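
Underlying all such formulations is a discrete-choice demand model that maps an offered set of fares to purchase probabilities. A minimal multinomial-logit sketch is below; the utility coefficients, attributes, and fare products are invented for illustration, and the dissertation's framework considers richer attribute sets than this.

```python
# Multinomial-logit purchase probabilities over an offered set of fares
# (illustrative utility coefficients; real models are estimated from data).
import math

def mnl_probabilities(offers, beta_price=-0.01, beta_stops=-0.5, u_nobuy=0.0):
    """offers: list of (name, price, n_stops); returns purchase probabilities."""
    utils = {name: beta_price * price + beta_stops * stops
             for name, price, stops in offers}
    denom = math.exp(u_nobuy) + sum(math.exp(u) for u in utils.values())
    probs = {name: math.exp(u) / denom for name, u in utils.items()}
    probs["no_purchase"] = math.exp(u_nobuy) / denom
    return probs

# Offering a cheap one-stop fare shifts share away from the nonstop product.
offers = [("nonstop_full", 400, 0), ("onestop_disc", 250, 1)]
print(mnl_probabilities(offers))
```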

Date Created
  • 2016

Small blob detection in medical images

Description

Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, and glomeruli in kidney MR images. This problem is particularly challenging because small blobs often have inhomogeneous intensity distributions and indistinct boundaries against the background.

This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction, and (4) unsupervised clustering for post-pruning. First, detecting blobs in 2D images is studied, where a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. Using scale-space theory as a foundation, the image is smoothed via LoG; Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from pre-segmented blob candidates and fed to Variational Bayesian Gaussian Mixture Models (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescent images are tested to demonstrate the performance of HLoG. Next, as an extension, a Hessian-based Difference of Gaussians (HDoG) detector is proposed, which is capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to automatically detect glomeruli, enabling new measurements of renal microstructures and pathology in preclinical and clinical studies. Realizing that computation time is a key factor impacting clinical adoption, the last phase of this research investigates data reduction techniques for VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, it is observed that the four-phase system with coreset-VBGMM achieves similar performance to using the full dataset while being about 20 times faster.
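
A minimal 2D sketch of the LoG-plus-Hessian idea follows: smooth with a Laplacian of Gaussian, then keep pixels where both Hessian eigenvalues are negative (a local bright dome). The scale, threshold, and synthetic image are assumptions for illustration; the actual HLoG method adds optimal scale selection, regional features, and VBGMM pruning.

```python
# LoG smoothing + Hessian eigenvalue test for bright-blob candidates,
# a 2D sketch of the HLoG idea at a single hand-picked scale.
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (64, 64))                 # noisy background
yy, xx = np.mgrid[:64, :64]
img += np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 3.0 ** 2))  # one blob

sigma = 3.0
log = -gaussian_laplace(img, sigma=sigma)      # negate: bright blobs become peaks
Hrr, Hrc, Hcc = hessian_matrix(log, sigma=1.0, order="rc")
ev1, ev2 = hessian_matrix_eigvals([Hrr, Hrc, Hcc])

# Candidates: both Hessian eigenvalues negative (local dome) and strong response.
mask = (ev1 < 0) & (ev2 < 0) & (log > log.mean() + 2 * log.std())
print("candidate pixels:", int(mask.sum()))
```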

Date Created
  • 2015

Novel statistical models for complex data structures

Description

Rapid advances in sensor and information technology have resulted in an environment that is rich in both spatial and temporal data, which creates a pressing need for us to develop novel statistical methods and the associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges posed by these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity, and data uncertainty. Beyond the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness, and numerical stability in practice. On the other hand, some recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as some traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four kinds of general complex datasets, including "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets" and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery, and manufacturing. We depict the development of novel statistical models to analyze complex datasets which fall under these four categories, and we show how these models can be applied to some real-world applications, such as Alzheimer's disease research, nursing care processes, and manufacturing.
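
Of the methodologies mentioned, sparse learning is the simplest to illustrate: an l1 penalty recovers a handful of active predictors from a high-dimensional design. The sketch below uses simulated data and scikit-learn's Lasso purely as an example of the idea; it is not taken from the dissertation's models.

```python
# Toy sparse-learning illustration: lasso recovers few active predictors
# from a high-dimensional design (simulated data, illustrative alpha).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 500                               # many more predictors than samples
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]   # only 5 truly active features
y = X @ true_coef + rng.normal(0.0, 0.5, n)

model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
print("first five estimates:", np.round(model.coef_[:5], 2))
```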

Date Created
  • 2012

Queueing Network Models for Performance Evaluation of Dynamic Multi-Product Manufacturing Systems

Description

Modern manufacturing systems are part of a complex supply chain where customer preferences are constantly evolving. The rapidly evolving market demands that manufacturing organizations be increasingly agile and flexible. Medium-term capacity planning for manufacturing systems employs queueing network models based on stationary demand assumptions. However, these stationary demand assumptions are not very practical for rapidly evolving supply chains. Nonstationary demand processes provide a reasonable framework to capture the time-varying nature of modern markets. The analysis of queues and queueing networks with time-varying parameters is mathematically intractable. In this dissertation, heuristics which draw upon existing steady-state queueing results are proposed to provide computationally efficient approximations for dynamic multi-product manufacturing systems modeled as time-varying queueing networks with multiple customer classes (product types). This dissertation addresses the problem of performance evaluation of such manufacturing systems.

This dissertation considers two key aspects of dynamic multi-product manufacturing systems, namely performance evaluation and optimal server resource allocation. First, the performance evaluation of systems with infinite queueing room and a first-come, first-served service paradigm is considered. Second, systems with finite queueing room and priorities between product types are considered. Finally, the optimal server allocation problem is addressed in the context of dynamic multi-product manufacturing systems. The performance estimates developed in the earlier part of the dissertation are leveraged in a simulated annealing algorithm framework to obtain server resource allocations.
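
One classical way to reuse steady-state results for time-varying systems, in the spirit described above, is the pointwise stationary approximation: evaluate a stationary M/M/s formula at each period's arrival rate. The sketch below applies the Erlang-C delay probability to a nonstationary rate profile; the rates and server count are invented, and the dissertation's heuristics for multi-class networks are considerably more elaborate than this single-station example.

```python
# Pointwise stationary approximation (PSA) sketch: evaluate the stationary
# M/M/s Erlang-C delay probability at each period's arrival rate.
import math

def erlang_c(lam, mu, s):
    """Stationary probability that an arrival waits in an M/M/s queue."""
    a = lam / mu                       # offered load; requires lam < s * mu
    rho = a / s
    p0_inv = sum(a**k / math.factorial(k) for k in range(s))
    p0_inv += a**s / (math.factorial(s) * (1 - rho))
    return (a**s / (math.factorial(s) * (1 - rho))) / p0_inv

mu, servers = 1.0, 4                   # service rate, machines at the station
lam_t = [1.5, 2.5, 3.5, 2.0]           # nonstationary arrival rates by period
for t, lam in enumerate(lam_t):
    print(f"period {t}: P(wait) ~ {erlang_c(lam, mu, servers):.3f}")
```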

Date Created
  • 2020

Reliability based design optimization of systems with dynamic failure probabilities of components

Description

This research addresses the design optimization of systems for a specified reliability level, considering the dynamic nature of component failure rates. When designing a mechanical system (especially a load-sharing system), the failure of one component leads to an increase in the failure probability of the remaining components. Many engineering systems, such as aircraft, automobiles, and bridges, experience this phenomenon.

In order to design these systems, a Reliability-Based Design Optimization framework using the Sequential Optimization and Reliability Assessment (SORA) method is developed. The dynamic nature of component failure probability is considered in the system reliability model. Stress-Strength Interference (SSI) theory is used to build the limit state functions of the components, and the First Order Reliability Method (FORM) lies at the heart of the reliability assessment. In situations where the user needs to determine the optimum number of components and reduce component redundancy, this method can also be used to optimally allocate the required number of components to carry the system load. The main advantages of this method are its high computational efficiency and the fact that any optimization and reliability assessment technique can be incorporated. Different cases of numerical examples are provided to validate the methodology.
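
For normally distributed stress and strength, the SSI reliability has a closed form via the reliability index, and FORM is exact for this linear limit state. The sketch below evaluates it before and after a load-sharing failure raises the mean load on the survivors; all parameter values are illustrative, not from the thesis.

```python
# Stress-Strength Interference for normal variables: the limit state
# g = strength - stress is normal, so reliability has a closed form
# (FORM is exact here; illustrative parameters).
import math

def ssi_reliability(mu_s, sig_s, mu_l, sig_l):
    """P(strength > stress) for independent normal strength and stress."""
    beta = (mu_s - mu_l) / math.sqrt(sig_s**2 + sig_l**2)  # reliability index
    phi = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))    # standard normal CDF
    return beta, phi

# Load-sharing effect: one component fails, survivors carry a higher mean load.
beta0, r0 = ssi_reliability(mu_s=500, sig_s=40, mu_l=350, sig_l=30)
beta1, r1 = ssi_reliability(mu_s=500, sig_s=40, mu_l=420, sig_l=30)
print(f"all components up:  beta={beta0:.2f}, R={r0:.4f}")
print(f"after one failure:  beta={beta1:.2f}, R={r1:.4f}")
```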

Date Created
  • 2016