Matching Items (15)

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool for analyzing current production systems and proposing improvements. The problem that motivated this project is that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology to automate the data collection process for time studies. Working in an Agile environment, the team completed over 120 classification sets, created 8 strategy documents, and applied Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's scope by applying further engineering analysis to the collected data to drive factory improvements.

Date Created
  • 2021-05

A Data Mining Approach to Modeling Customer Preference: A Case Study of Intel Corporation

Description

Understanding customer preference is crucial for new product planning and marketing decisions. This thesis explores how historical data can be leveraged to understand and predict customer preference. It presents a decision support framework that provides a holistic view of customer preference through a two-phase procedure. Phase 1 uses cluster analysis to create product profiles, from which customer profiles are derived. Phase 2 then delves into each of the customer profiles and investigates the causality behind their preferences using Bayesian networks. The thesis illustrates the working of the framework using the case of Intel Corporation, the world's largest semiconductor manufacturing company.
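The clustering step of Phase 1 can be sketched with a plain k-means pass over product feature vectors. The feature names, toy data, and two-cluster choice below are hypothetical illustrations, not details taken from the thesis:

```python
import math

def kmeans(points, centroids, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as its cluster mean. Returns (labels, centroids)."""
    labels = []
    for _ in range(iters):
        labels = [min(range(len(centroids)),
                      key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        for c in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids

# Hypothetical product feature vectors: (normalized price, normalized clock speed).
products = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
labels, centers = kmeans(products, centroids=[[0.0, 0.0], [1.0, 1.0]])
# Products 0-1 fall into one "value" profile, products 2-3 into a "performance" profile.
```

Customer profiles would then be derived from which of these product profiles each customer historically purchased.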

Date Created
  • 2017

Reliability-Based Design Optimization of Systems with Dynamic Failure Probabilities of Components

Description

This research addresses the design optimization of systems to a specified reliability level, considering the dynamic nature of component failure rates. When designing a mechanical system (especially a load-sharing system), the failure of one component increases the probability of failure of the remaining components. Many engineering systems, such as aircraft, automobiles, and bridges, experience this phenomenon.

To design such systems, a Reliability-Based Design Optimization framework using the Sequential Optimization and Reliability Assessment (SORA) method is developed. The dynamic nature of component failure probability is considered in the system reliability model. Stress-Strength Interference (SSI) theory is used to build the limit state functions of the components, and the First Order Reliability Method (FORM) lies at the heart of the reliability assessment. In situations where the user needs to determine the optimum number of components and reduce component redundancy, the method can also be used to optimally allocate the number of components required to carry the system load. The main advantages of this method are its high computational efficiency and the fact that any optimization and reliability assessment technique can be incorporated. Several numerical examples are provided to validate the methodology.
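For the textbook case of independent, normally distributed strength and load, the SSI limit state g = S - L is itself normal, and the FORM reliability index has a closed form. A minimal sketch under those assumptions (the distribution parameters are illustrative, not from the thesis):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ssi_reliability(mu_s, sd_s, mu_l, sd_l):
    """Stress-Strength Interference for independent normal strength S and
    load L: the limit state g = S - L is normal, so the FORM reliability
    index is exact here: beta = (mu_S - mu_L) / sqrt(sd_S^2 + sd_L^2)."""
    beta = (mu_s - mu_l) / math.hypot(sd_s, sd_l)
    return beta, norm_cdf(-beta)  # (reliability index, failure probability)

# Illustrative component: strength ~ N(500, 40), load ~ N(350, 30), in MPa.
beta, pf = ssi_reliability(500, 40, 350, 30)
```

For nonlinear limit states, FORM instead searches for the most probable failure point in standard normal space; this closed form is the degenerate linear case.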

Date Created
  • 2016

Performance Analysis of a Double Crane with Finite Interoperational Buffer Capacity with Multiple Fidelity Simulations

Description

With globalization on the rise, most trade happens by sea, and experts have predicted an increase in trade volumes over the next few years. To meet this demand, container ships are being upsized. The problem with upsizing container ships is that sea port terminals must be adequately equipped to improve turnaround time; otherwise, the upsizing will not yield the anticipated benefits. This thesis focuses on a special type of double automated crane set-up with a finite interoperational buffer capacity. The buffer is placed between the cranes, and the idea behind this research is to analyze the performance of the crane operations when this technology is adopted.

This thesis proposes an approximation of this complex system, thereby addressing the computational time issue and allowing the performance of the system to be analyzed efficiently. The system is modeled in two phases. The first phase develops a discrete event simulation model to evolve the system over time. The challenge of this model is its high processing time, since it requires a large number of experimental runs; this motivates the development of an analytical model of the system, for which a continuous-time Markov process approach is adopted. Further, to improve the efficiency of the analytical model, a state aggregation approach is proposed. This thesis thus gives insight into the outcomes of the two approaches and the behavior of the error space, and the performance of the models for varying buffer capacities reflects the scope for improvement in these kinds of operational set-ups.
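The continuous-time Markov chain idea can be illustrated on a heavily simplified, saturated version of the line: with exponential crane times, buffer occupancy forms a birth-death chain whose steady state is geometric. A sketch under those assumptions (the rates are invented; the thesis's actual model is far richer):

```python
def crane_buffer_throughput(mu1, mu2, buffer_size):
    """Birth-death CTMC for a saturated two-crane line: crane 1 (rate mu1)
    fills a buffer of size B and blocks when it is full; crane 2 (rate mu2)
    empties it and starves when it is empty. State n = buffer occupancy,
    with steady-state probabilities proportional to (mu1/mu2)**n."""
    rho = mu1 / mu2
    weights = [rho ** n for n in range(buffer_size + 1)]
    total = sum(weights)
    p = [w / total for w in weights]
    # Crane 2 works whenever the buffer is nonempty.
    return mu2 * (1.0 - p[0])

# Illustrative rates (containers/hour): throughput grows with buffer size
# but stays capped by the slower crane.
tp_small = crane_buffer_throughput(30.0, 25.0, buffer_size=2)
tp_large = crane_buffer_throughput(30.0, 25.0, buffer_size=8)
```

Diminishing returns from extra buffer slots, visible even in this toy chain, are the kind of behavior the full analytical and simulation models quantify.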

Date Created
  • 2018

Queueing Network Models for Performance Evaluation of Dynamic Multi-Product Manufacturing Systems

Description

Modern manufacturing systems are part of a complex supply chain where customer preferences are constantly evolving. The rapidly evolving market demands that manufacturing organizations be increasingly agile and flexible. Medium-term capacity planning for manufacturing systems employs queueing network models based on stationary demand assumptions. However, these stationary demand assumptions are not very practical for rapidly evolving supply chains. Nonstationary demand processes provide a reasonable framework for capturing the time-varying nature of modern markets. The analysis of queues and queueing networks with time-varying parameters is mathematically intractable. In this dissertation, heuristics that draw upon existing steady-state queueing results are proposed to provide computationally efficient approximations for dynamic multi-product manufacturing systems modeled as time-varying queueing networks with multiple customer classes (product types). This dissertation addresses the problem of performance evaluation of such manufacturing systems.
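One classical heuristic of this kind is the pointwise stationary approximation (PSA), which evaluates a stationary queueing formula at the instantaneous arrival rate. A sketch for a single M(t)/M/1 station (the sinusoidal demand profile is illustrative; the dissertation's heuristics and networks are more elaborate):

```python
import math

def psa_queue_length(lam_t, mu, t):
    """Pointwise Stationary Approximation for an M(t)/M/1 queue:
    evaluate the stationary mean number in system, L = rho / (1 - rho),
    at the instantaneous utilization rho(t) = lambda(t) / mu."""
    rho = lam_t(t) / mu
    if rho >= 1.0:
        return float("inf")  # instantaneously overloaded
    return rho / (1.0 - rho)

# Illustrative nonstationary demand: a daily sinusoid around 6 jobs/hour.
lam = lambda t: 6.0 + 2.0 * math.sin(2 * math.pi * t / 24.0)
peak = psa_queue_length(lam, mu=10.0, t=6.0)      # lambda(6) = 8, rho = 0.8
offpeak = psa_queue_length(lam, mu=10.0, t=18.0)  # lambda(18) = 4, rho = 0.4
```

PSA is known to work well when demand varies slowly relative to service times; handling faster variation and networks of stations is where more refined heuristics are needed.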

This dissertation considers two key aspects of dynamic multi-product manufacturing systems: performance evaluation and optimal server resource allocation. First, the performance evaluation of systems with infinite queueing room and a first-come, first-served service paradigm is considered. Second, systems with finite queueing room and priorities between product types are considered. Finally, the optimal server allocation problem is addressed in the context of dynamic multi-product manufacturing systems. The performance estimates developed in the earlier parts of the dissertation are leveraged within a simulated annealing framework to obtain server resource allocations.
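The simulated annealing step can be sketched on a toy version of the server allocation problem: fix the total number of servers, move one server between stations per iteration, and score each allocation with a stationary congestion proxy. The loads, budget, and cooling schedule below are invented for illustration and are not the dissertation's algorithm:

```python
import math, random

def congestion(alloc, loads):
    """Sum of rho/(c - rho) congestion proxies; infinite if any station
    has too few servers to be stable."""
    cost = 0.0
    for c, rho in zip(alloc, loads):
        if c <= rho:
            return math.inf
        cost += rho / (c - rho)
    return cost

def anneal_servers(loads, alloc, steps=5000, temp=1.0, cool=0.999, seed=1):
    """Toy simulated annealing over integer server allocations: a neighbor
    move shifts one server between two stations, keeping the total fixed."""
    rng = random.Random(seed)
    cur, cur_cost = list(alloc), congestion(alloc, loads)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        if cur[i] <= 1:
            continue
        cand = list(cur)
        cand[i] -= 1
        cand[j] += 1
        cand_cost = congestion(cand, loads)
        if (cand_cost < cur_cost
                or rng.random() < math.exp(-(cand_cost - cur_cost) / temp)):
            cur, cur_cost = cand, cand_cost
        if cur_cost < best_cost:
            best, best_cost = list(cur), cur_cost
        temp *= cool
    return best, best_cost

# Offered loads (Erlangs) at three stations, 12 servers to split among them.
best, cost = anneal_servers([2.0, 4.0, 1.5], alloc=[4, 6, 2])
```

In the dissertation's setting, the congestion score would come from the time-varying performance estimates rather than a stationary formula.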

Date Created
  • 2020

Multivariate Statistical Modeling and Analysis of Accelerated Degradation Testing Data for Reliability Prediction

Description

Degradation, as a process of progressive deterioration, commonly exists in many engineering systems. Since most failure mechanisms of these systems can be traced to an underlying degradation process, utilizing degradation data for reliability prediction is much needed. In industry, accelerated degradation tests (ADTs) are widely used to obtain timely reliability information about the system under test. This dissertation develops methodologies for ADT data modeling and analysis.

In the first part of this dissertation, ADT is introduced along with three major challenges in ADT data analysis: the modeling framework, the inference method, and the need to analyze multi-dimensional processes. To overcome these challenges, the second part develops a hierarchical approach, leading to a nonlinear mixed-effects regression model, for modeling a univariate degradation process. Within this modeling framework, the issues of ignoring uncertainties in both data analysis and lifetime prediction, as presented by an International Organization for Standardization (ISO) standard, are resolved. The third part addresses an approach to modeling a bivariate degradation process. It is developed using copula theory, which brings the benefits of both model flexibility and inference convenience, and is provided with an efficient Bayesian method for reliability evaluation. The last part develops an extension to a multivariate modeling framework. Three fundamental copula classes are applied to model the complex dependence structure among correlated degradation processes. The advantages of the proposed modeling framework and the effect of ignoring tail dependence are demonstrated through simulation studies. Applications of the copula-based multivariate degradation models to both system reliability evaluation and remaining useful life prediction are provided.
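The copula idea in the bivariate case can be sketched by sampling dependent uniforms from a Gaussian copula and pushing them through arbitrary marginal inverse CDFs. The correlation value and exponential margins below are illustrative; the dissertation's actual models and Bayesian inference are not reproduced here:

```python
import math, random

def gaussian_copula_pairs(n, rho, seed=0):
    """Sample n dependent uniform pairs (u, v) from a bivariate Gaussian
    copula with correlation rho, via a 2-D Cholesky factor of the
    correlation matrix."""
    rng = random.Random(seed)
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs

# Couple two hypothetical degradation increments: the copula fixes the
# dependence, while each margin is chosen freely (exponential here).
pairs = gaussian_copula_pairs(2000, rho=0.8)
incr = [(-math.log(1 - u), -math.log(1 - v)) for u, v in pairs]
```

This separation of dependence from margins is exactly the flexibility the copula approach offers; a Gaussian copula, notably, has no tail dependence, which is why the choice among copula classes matters.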

In summary, this dissertation studies and explores the use of statistical methods in analyzing ADT data. All proposed methodologies are demonstrated by case studies.

Date Created
  • 2020

Data Driven Personalized Management of Hospital Inventory of Perishable and Substitutable Blood Units

Description

The use of Red Blood Cells (RBCs) is a pillar of modern health care. Annually, the lives of hundreds of thousands of patients are saved through ready access to safe, fresh, blood-type compatible RBCs. Worldwide, hospitals have the common goal to better utilize available blood units by maximizing patients served and reducing blood wastage. Managing blood is challenging because blood is perishable, its supply is stochastic and its demand pattern is highly uncertain. Additionally, RBCs are typed and patient compatibility is required.

This research focuses on improving blood inventory management at the hospital level. It explores the importance of hospital characteristics, such as demand rate and blood-type distribution in supply and demand, for improving RBC inventory management. Available inventory models make simplifying assumptions; they tend to be general and do not utilize available data that could improve blood delivery. This dissertation develops useful and realistic models that incorporate data characterizing the hospital inventory position, distribution of blood types of donors and the population being served.

The dissertation contributions can be grouped into three areas. First, simulations are used to characterize the benefits of demand forecasting. In addition to forecast accuracy, it shows that characteristics such as forecast horizon, the age of replenishment units, and the percentage of demand that is forecastable influence the benefits resulting from demand variability reduction.

Second, it develops Markov decision models for improved allocation policies under emergency conditions, where only the units on the shelf are available for dispensing. In this situation, RBC perishability has no impact because of the short timeline for decision making. Improved location-specific policies are demonstrated via simulation models for two emergency event types: mass casualty events and pandemic influenza.

Third, improved allocation policies under normal conditions are found using Markov decision models that incorporate temporal dynamics. In this case, hospitals receive replenishment and units age and outdate. The models are solved using Approximate Dynamic Programming with model-free approximate policy iteration, using machine learning algorithms to approximate value or policy functions. These are the first stock- and age-dependent allocation policies that engage substitution between blood type groups to improve inventory performance.
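The flavor of a stock- and age-dependent rule with ABO substitution can be sketched with a greedy baseline. This is an illustrative stand-in, not the dissertation's ADP-derived policy; Rh factor and clinical constraints are ignored:

```python
# Which RBC types each recipient can receive (ABO only, simplified).
COMPATIBLE = {
    "O":  ["O"],
    "A":  ["A", "O"],
    "B":  ["B", "O"],
    "AB": ["AB", "A", "B", "O"],
}

def allocate(inventory, patient_type):
    """Greedy stock- and age-aware rule: among compatible types in stock,
    issue the oldest unit to limit outdating, preferring the exact type on
    ties so universal-donor O stock is preserved. `inventory` maps blood
    type -> list of unit ages in days."""
    best = None
    for btype in COMPATIBLE[patient_type]:
        units = inventory.get(btype, [])
        if units and (best is None or max(units) > max(inventory[best])):
            best = btype
    if best is None:
        return None  # shortage: no compatible unit on the shelf
    age = max(inventory[best])
    inventory[best].remove(age)
    return (best, age)

stock = {"O": [2, 10], "A": [35], "B": []}
first = allocate(stock, "A")   # oldest compatible unit: the 35-day A unit
second = allocate(stock, "B")  # substitution: B is empty, issue oldest O
```

An ADP policy would instead score each candidate issue against an approximated value function of the full inventory state, trading off today's match against future shortages and outdates.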

Date Created
  • 2020