Matching Items (18)

148263-Thumbnail Image.png

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool for analyzing current production systems and proposing improvements. The problem that motivated this project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data-collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's work scope, applying further engineering skills to the collected data to drive factory improvements.

Contributors

Agent

Created

Date Created
2021-05

147540-Thumbnail Image.png

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool for analyzing current production systems and proposing improvements. The problem that motivated this project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data-collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's work scope, applying further engineering skills to the collected data to drive factory improvements.

Contributors

Agent

Created

Date Created
2021-05

148215-Thumbnail Image.png

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool for analyzing current production systems and proposing improvements. The problem that motivated this project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data-collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's work scope, applying further engineering skills to the collected data to drive factory improvements.

Contributors

Agent

Created

Date Created
2021-05

148216-Thumbnail Image.png

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool for analyzing current production systems and proposing improvements. The problem that motivated this project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data-collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's work scope, applying further engineering skills to the collected data to drive factory improvements.

Contributors

Agent

Created

Date Created
2021-05

161785-Thumbnail Image.png

Disaster Analytics for Critical Infrastructures : Methods and Algorithms for Modeling Disasters and Proactive Recovery Preparedness

Description

Natural disasters are occurring increasingly often around the world, causing significant economic losses. To alleviate their adverse effects, it is crucial to plan responses in a proactive manner. This research aims at developing proactive, real-time recovery algorithms for large-scale power networks exposed to weather events under uncertainty. These algorithms support recovery decisions that mitigate the disaster's impact, resulting in faster recovery of the network. The challenges associated with developing these algorithms are summarized below:
1. Even ignoring uncertainty, when the operating cost of the network is considered, the problem becomes a bi-level optimization, which is NP-hard.
2. To meet the requirement for real-time decision making under uncertainty, the problem could be formulated as a Stochastic Dynamic Program with the aim of minimizing total cost. However, considering the operating cost of the network violates the underlying assumptions of this approach.
3. The Stochastic Dynamic Programming approach is also not applicable to realistic problem sizes, due to the curse of dimensionality.
4. Uncertainty-based approaches to failure modeling rely on point-generation of failures and ignore the network structure.
To address the first challenge, Chapter 2 proposes a heuristic solution framework and evaluates its performance through numerical experiments. To address the second challenge, Chapter 3 formulates the problem as a Stochastic Dynamic Program and proposes an approximate dynamic programming heuristic to solve it; numerical experiments on synthetic and realistic test beds show the satisfactory performance of the proposed approach. To address the third challenge, Chapter 4 proposes an efficient base heuristic policy and an aggregation scheme in the action space; numerical experiments on a realistic test bed verify the ability of the proposed method to recover the network more efficiently. Finally, to address the fourth challenge, Chapter 5 proposes a simulation-based model that, using historical data and accounting for the interaction between network components, allows the impact of adverse events on regional service level to be analyzed. A realistic case study showcases the applicability of the approach.
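As a deliberately simplified illustration of the kind of base heuristic policy the abstract mentions, the sketch below greedily schedules repairs by restored load per repair-hour (a WSPT-style rule for a single crew). The component names, loads, and repair times are invented for the example and are not taken from the dissertation.

```python
def greedy_repair_order(components):
    """Order repairs by restored-load per repair-hour (a WSPT-style rule).

    components: list of (name, restored_load_mw, repair_hours) tuples.
    Returns the repair sequence and the cumulative unserved energy
    (load multiplied by the time it remains down) under that sequence.
    """
    # Repair quick, high-load components first: smallest hours/load ratio.
    order = sorted(components, key=lambda c: c[2] / c[1])
    t, unserved = 0.0, 0.0
    for name, load, hours in order:
        t += hours            # the single crew finishes this repair at time t
        unserved += load * t  # this load was unserved until time t
    return [c[0] for c in order], unserved

# Hypothetical components: (name, restored load in MW, repair hours)
seq, cost = greedy_repair_order([
    ("line_A", 10.0, 2.0),
    ("line_B", 2.0, 1.0),
    ("sub_C", 20.0, 8.0),
])
```

Under these made-up numbers the rule repairs line_A, then sub_C, then line_B; the dissertation's policies additionally handle uncertainty and network structure, which this fixed rule ignores.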

Contributors

Agent

Created

Date Created
2021

161762-Thumbnail Image.png

A Disease Progression Modeling Framework for Nonalcoholic Steatohepatitis Using Multiparametric Serial Magnetic Resonance Imaging and Elastography

Description

Nonalcoholic Steatohepatitis (NASH) is a severe form of nonalcoholic fatty liver disease that is caused by excessive calorie intake and a sedentary lifestyle in the absence of heavy alcohol consumption. It is widely prevalent in the United States and in many other developed countries, affecting up to 25 percent of the population. Because it is asymptomatic, it usually goes unnoticed and may lead to liver failure if not treated in time.
Currently, liver biopsy is the gold standard for diagnosing NASH, but being an invasive procedure, it comes with its own complications along with the inconvenience of sampling repeated measurements over a period of time. Hence, noninvasive procedures to assess NASH are urgently required. Magnetic Resonance Elastography (MRE)-based shear stiffness and loss modulus, along with Magnetic Resonance Imaging-based proton density fat fraction, have been successfully combined to predict NASH stages. However, their role in the prediction of disease progression remains to be investigated.
This thesis thus looks into combining features from serial MRE observations to develop statistical models that predict NASH progression. It utilizes data from an experiment conducted on male mice to develop progressive and regressive NASH, and trains ordinal models, ordered probit regression and ordinal forest, on labels generated from a logistic regression model. The models are assessed on histological data collected at the end point of the experiment. The models developed provide a framework for using a noninvasive tool to predict NASH disease progression.
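For readers unfamiliar with the ordered probit model named above, the following sketch shows its link function mapping a latent score to stage probabilities. The coefficients, features, and cutpoints are made up for illustration and are not fitted values from the thesis.

```python
import math

def std_normal_cdf(z):
    """CDF of the standard normal, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(score, cutpoints):
    """P(stage = k) for k = 0..len(cutpoints), given a latent score.

    cutpoints must be increasing; the stages are the intervals they
    carve out of the latent scale.
    """
    cdfs = [std_normal_cdf(c - score) for c in cutpoints]
    bounds = [0.0] + cdfs + [1.0]
    return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

# Hypothetical latent score from two imaging features (coefficients invented):
score = 0.8 * 1.2 + 0.5 * 0.4   # beta . x for, say, stiffness and fat fraction
probs = ordered_probit_probs(score, cutpoints=[0.0, 1.5])   # three stages
```

The probabilities sum to one by construction; fitting the coefficients and cutpoints to serial MRE data is what the thesis's models do.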

Contributors

Agent

Created

Date Created
2021

156625-Thumbnail Image.png

Performance Analysis of a Double Crane with Finite Interoperational Buffer Capacity with Multiple Fidelity Simulations

Description

With globalization on the rise, most trade happens by sea, and experts have predicted an increase in trade volumes over the next few years. With increasing trade volumes, container ships are being upsized to meet demand. The problem with upsizing container ships is that seaport terminals must be adequately equipped to improve turnaround time; otherwise the upsizing will not yield the anticipated benefits. This thesis focuses on a special type of double automated crane setup with a finite interoperational buffer capacity. The buffer is placed between the cranes, and the idea behind this research is to analyze the performance of the crane operations when this technology is adopted. This thesis proposes an approximation of this complex system, thereby addressing the computational-time issue and allowing the performance of the system to be analyzed efficiently. The approach to modeling this system has been carried out in two phases. The first phase consists of developing a discrete-event simulation model to make the system evolve over time. The challenge of this model is its high processing time, which comes from performing a large number of experimental runs; this lays the foundation for the development of an analytical model of the system, for which a continuous-time Markov process approach has been adopted. Further, to improve the efficiency of the analytical model, a state-aggregation approach is proposed. This thesis thus gives insight into the outcomes of the two approaches and the behavior of the error space, and the performance of the models for varying buffer capacities reflects the scope for improvement in these kinds of operational setups.
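One minimal way to picture the continuous-time Markov process approach mentioned above is to model the inter-crane buffer as a birth-death chain, i.e. an M/M/1/K queue with its well-known geometric steady state. The rates and capacity below are illustrative assumptions, not calibrated terminal data, and the real system is far richer than this single-queue caricature.

```python
def buffer_distribution(lam, mu, K):
    """Steady state of an M/M/1/K buffer: pi_n proportional to rho**n."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Crane 1 deposits containers at lam/hour, crane 2 removes them at mu/hour,
# and the buffer between them holds at most K containers (invented numbers).
pi = buffer_distribution(lam=8.0, mu=10.0, K=4)
blocking = pi[-1]                    # crane 1 is blocked when the buffer is full
throughput = 8.0 * (1 - blocking)    # accepted deposits per hour
```

Sweeping K in such a model hints at how buffer capacity trades off against blocking, which is the kind of question the thesis studies with much more detailed models.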

Contributors

Agent

Created

Date Created
2018

158682-Thumbnail Image.png

Real-time Analysis and Control for Smart Manufacturing Systems

Description

Recent advances in manufacturing systems, such as advanced embedded sensing, big data analytics, IoT, and robotics, promise a paradigm shift in the manufacturing industry towards smart manufacturing systems. Real-time data are typically available in many industries, such as automotive, semiconductor, and food production, and can reflect machine conditions and the production system's operational performance. However, a major research gap still exists in how to utilize this real-time information to evaluate and predict production system performance and to further facilitate timely decision making and production control on the factory floor. To tackle these challenges, this dissertation takes an integrated analytical approach, hybridizing data analytics, stochastic modeling, and decision making under uncertainty to solve practical manufacturing problems.

Specifically, this research considers the machine degradation process. It has been shown that machines working at different operating states may break down in different probabilistic manners. In addition, machines working in worse operating states are more likely to fail, causing more frequent down periods and reducing system throughput. However, there is still a lack of analytical methods to quantify the potential impact of machine condition degradation on overall system performance to facilitate operational decision making on the factory floor. To address these issues, this dissertation considers a serial production line with finite buffers and multiple machines following a Markovian degradation process. An integrated model based on the aggregation method is built to quantify overall system performance and its interactions with the machine condition process. Moreover, system properties are investigated to analyze the influence of system parameters on system performance. In addition, three types of bottlenecks are defined and their corresponding indicators derived to provide guidelines for improving system performance. These methods provide quantitative tools for modeling, analyzing, and improving manufacturing systems with coupling between machine condition degradation and productivity, given real-time signals.
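A toy version of the Markovian degradation process described above can be sketched as a three-state chain for a single machine. The per-shift transition probabilities are invented for illustration, and the coupling with finite buffers and multiple machines that the dissertation analyzes is omitted entirely.

```python
# Per-shift transition probabilities (each row sums to 1); in this toy
# model a "down" machine is always repaired within one shift.
P = {
    "good": {"good": 0.90, "worn": 0.09, "down": 0.01},
    "worn": {"good": 0.00, "worn": 0.85, "down": 0.15},
    "down": {"good": 1.00, "worn": 0.00, "down": 0.00},
}

def stationary(P, iters=2000):
    """Long-run state distribution by repeatedly applying P to a uniform start."""
    states = list(P)
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        pi = {t: sum(pi[s] * P[s].get(t, 0.0) for s in states) for t in states}
    return pi

pi = stationary(P)
availability = pi["good"] + pi["worn"]   # the machine is up in either state
```

Quantities like this availability feed into line-level throughput once buffers and neighboring machines are added, which is where the dissertation's aggregation method comes in.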

Contributors

Agent

Created

Date Created
2020

158154-Thumbnail Image.png

Multivariate Statistical Modeling and Analysis of Accelerated Degradation Testing Data for Reliability Prediction

Description

A degradation process, as a course of progressive deterioration, commonly exists in many engineering systems. Since most failure mechanisms of these systems can be traced to an underlying degradation process, utilizing degradation data for reliability prediction is much needed. In industry, accelerated degradation tests (ADTs) are widely used to obtain timely reliability information about the system under test. This dissertation develops methodologies for ADT data modeling and analysis.

In the first part of this dissertation, ADT is introduced along with three major challenges in ADT data analysis: the modeling framework, the inference method, and the need to analyze multi-dimensional processes. To overcome these challenges, the second part develops a hierarchical approach to modeling a univariate degradation process that leads to a nonlinear mixed-effects regression model. With this modeling framework, the issues of ignoring uncertainties in both data analysis and lifetime prediction, as present in an International Organization for Standardization (ISO) standard, are resolved. The third part develops an approach to modeling a bivariate degradation process. It is built on copula theory, which brings the benefits of both model flexibility and inference convenience, and is provided with an efficient Bayesian method for reliability evaluation. The last part develops an extension to a multivariate modeling framework. Three fundamental copula classes are applied to model the complex dependence structure among correlated degradation processes. The advantages of the proposed modeling framework and the effect of ignoring tail dependence are demonstrated through simulation studies. Applications of the copula-based multivariate degradation models to both system reliability evaluation and remaining-useful-life prediction are provided.
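To make the copula idea concrete, the sketch below draws dependent uniform marginal scores from a Gaussian copula, one of the standard copula families (the dissertation works with several classes and with fitted marginals). The correlation parameter and sample size are arbitrary choices for demonstration.

```python
import math
import random

def std_normal_cdf(z):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, rng):
    """One dependent pair of uniform marginal scores from a Gaussian copula."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return std_normal_cdf(z1), std_normal_cdf(z2)

rng = random.Random(7)                        # fixed seed for reproducibility
pairs = [gaussian_copula_sample(0.8, rng) for _ in range(20000)]

# Empirical covariance of the two uniform scores: positive whenever rho > 0.
n = len(pairs)
mean_u = sum(u for u, _ in pairs) / n
mean_v = sum(v for _, v in pairs) / n
cov = sum((u - mean_u) * (v - mean_v) for u, v in pairs) / n
```

Feeding such uniform scores through the inverse CDFs of two degradation-increment distributions yields correlated degradation paths, which is the mechanism the copula-based models exploit.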

In summary, this dissertation studies and explores the use of statistical methods in analyzing ADT data. All proposed methodologies are demonstrated by case studies.

Contributors

Agent

Created

Date Created
2020

158661-Thumbnail Image.png

Data Driven Personalized Management of Hospital Inventory of Perishable and Substitutable Blood Units

Description

The use of Red Blood Cells (RBCs) is a pillar of modern health care. Annually, the lives of hundreds of thousands of patients are saved through ready access to safe, fresh, blood-type compatible RBCs. Worldwide, hospitals have the common goal to better utilize available blood units by maximizing patients served and reducing blood wastage. Managing blood is challenging because blood is perishable, its supply is stochastic and its demand pattern is highly uncertain. Additionally, RBCs are typed and patient compatibility is required.

This research focuses on blood inventory management at the hospital level. It explores the importance of hospital characteristics, such as demand rate and the blood-type distribution in supply and demand, for improving RBC inventory management. Available inventory models make simplifying assumptions; they tend to be general and do not utilize available data that could improve blood delivery. This dissertation develops useful and realistic models that incorporate data characterizing the hospital inventory position and the distributions of blood types among donors and the population being served.

The dissertation contributions can be grouped into three areas. First, simulations are used to characterize the benefits of demand forecasting. In addition to forecast accuracy, it shows that characteristics such as forecast horizon, the age of replenishment units, and the percentage of demand that is forecastable influence the benefits resulting from demand variability reduction.

Second, it develops Markov decision models for improved allocation policies under emergency conditions, where only the units on the shelf are available for dispensing. In this situation, RBC perishability has no impact because of the short timeline for decision making. Improved location-specific policies are demonstrated via simulation models for two emergency event types: mass casualty events and pandemic influenza.

Third, improved allocation policies under normal conditions are found using Markov decision models that incorporate temporal dynamics. In this case, hospitals receive replenishment and units age and outdate. The models are solved using Approximate Dynamic Programming with model-free approximate policy iteration, using machine learning algorithms to approximate value or policy functions. These are the first stock- and age-dependent allocation policies that engage substitution between blood type groups to improve inventory performance.
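As a highly simplified illustration of substitution between blood-type groups, the sketch below issues the oldest compatible unit on the shelf, preferring an exact type match. The compatibility table follows standard ABO/Rh transfusion practice (truncated to the types used here), while the inventory contents are invented; the dissertation's stock- and age-dependent policies are learned with approximate dynamic programming rather than fixed rules like this one.

```python
# Donor types a patient of each type can receive (standard ABO/Rh rules;
# the table is truncated to the patient types used in this example).
COMPATIBLE = {
    "O-": ["O-"],
    "O+": ["O+", "O-"],
    "A+": ["A+", "A-", "O+", "O-"],
    "AB+": ["AB+", "AB-", "A+", "A-", "B+", "B-", "O+", "O-"],
}

def issue(inventory, patient_type):
    """Issue the oldest unit of the first compatible type with stock.

    inventory: {donor_type: [unit ages in days]}.
    Returns (donor_type, age), or None if no compatible unit exists.
    """
    for donor in COMPATIBLE[patient_type]:
        if inventory.get(donor):
            age = max(inventory[donor])   # oldest-first reduces outdating
            inventory[donor].remove(age)
            return donor, age
    return None

inv = {"A+": [3, 10], "O-": [20]}
first = issue(inv, "A+")    # an exact-type match is on the shelf
second = issue(inv, "O+")   # no O+ in stock, so the O- unit substitutes
```

A learned policy would additionally weigh current stock levels and unit ages before substituting scarce universal-donor units, which is precisely the trade-off the Markov decision models capture.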

Contributors

Agent

Created

Date Created
2020