A Simulation Model of the Effect of Workplace Structure on Productivity

Description

Workplace productivity is a result of many factors, and among them are the layout of the office and its resulting noise level. The conversations and interruptions that come with converting an office to an open plan can foster innovation and creativity, or they can be distracting and harm employee performance. Through simulation, the impact of different types of office noise was studied along with other changing conditions, such as the number of people in the office. When productivity per person, defined in terms of mood and focus, was measured, the effect of noise was positive in some scenarios and negative in others. In simulations where employees were performing very similar tasks, noise (and its correlates, such as the number of employees) was beneficial. On the other hand, when employees were engaged in a variety of different types of tasks, noise had a negative overall effect. This indicates that workplaces that group employees by common job function may be more productive than workplaces where the problems and products employees are working on vary throughout the workspace.
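
The simulation itself is not reproduced here, but the agent-level logic the abstract describes can be sketched briefly. The Python fragment below is a minimal, hypothetical illustration: productivity per person is taken as the average of mood and focus, noise scales with headcount, and task similarity decides whether noise lifts mood or only erodes focus. All coefficients are invented for the example and are not the thesis's parameters.

    import random

    def simulate(num_employees, similar_tasks, steps=500, seed=1):
        """Toy sketch: productivity per person driven by mood and focus, with
        office noise scaling with headcount.  Every coefficient and functional
        form here is an invented assumption, not the model used in the thesis."""
        rng = random.Random(seed)
        mood = [0.7] * num_employees
        focus = [0.7] * num_employees
        total = 0.0
        for _ in range(steps):
            noise = min(1.0, 0.05 * num_employees + rng.uniform(0.0, 0.1))
            for i in range(num_employees):
                if similar_tasks:
                    mood[i] += 0.03 * noise     # overheard talk is relevant, lifts mood
                else:
                    mood[i] -= 0.01 * noise     # unrelated chatter grates
                focus[i] -= (0.005 if similar_tasks else 0.02) * noise  # interruptions
                # drift back toward baseline between interruptions, keep in [0, 1]
                mood[i] = min(1.0, max(0.0, mood[i] + 0.01 * (0.7 - mood[i])))
                focus[i] = min(1.0, max(0.0, focus[i] + 0.01 * (0.7 - focus[i])))
                total += 0.5 * (mood[i] + focus[i])   # productivity per person per step
        return total / (steps * num_employees)

    for similar in (True, False):
        print("similar tasks:" if similar else "varied tasks:",
              round(simulate(num_employees=8, similar_tasks=similar), 3))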

Date Created
2017-05

An optimization model for emergency response crew location within a theme park

Description

Every year, millions of guests visit theme parks internationally. Within that massive population, accidents and emergencies are bound to occur. Choosing the correct locations for emergency responders inside the park can mean the difference between life and death. To provide the utmost safety for a park's guests, it is important to make the best decision when selecting locations for emergency response crews. A theme park differs from a regular residential or commercial area because crowds and shows block certain routes, and these obstructions change throughout the day. We propose an optimization model that selects staging locations for emergency medical responders in a theme park to maximize the number of responses that can occur within a pre-specified time. The staging areas are selected from a candidate set of restricted-access locations where the responders can store their equipment. Our solution approach considers all routes to any park location, including areas that are unavailable to a regular guest. Theme parks are highly dynamic environments: because special events occurring at certain hours (e.g., parades) can affect the responders' travel times, the model's decisions also include the time dimension in the location and re-location of the responders. Our solution provides the optimal location of the responders for each time partition, including backup responders. When an optimal solution is found, the model also considers alternate optimal solutions that provide a more balanced workload for the crews.
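
At its core, the model described above is a covering-style location problem solved per time partition. The brute-force Python sketch below illustrates that idea on invented data (three candidate staging sites, three park zones, two time partitions); the actual thesis model is an optimization formulation with backup responders and workload balancing, which this toy omits.

    from itertools import combinations

    # Toy maximal-covering sketch on illustrative data, not the thesis's model.
    # travel[t][site][zone] = response time from a candidate staging site to a
    # park zone during time partition t (parades and crowds change these).
    travel = {
        "morning": {"A": {"z1": 3, "z2": 6, "z3": 9}, "B": {"z1": 8, "z2": 4, "z3": 5},
                    "C": {"z1": 10, "z2": 7, "z3": 3}},
        "parade":  {"A": {"z1": 4, "z2": 9, "z3": 12}, "B": {"z1": 12, "z2": 5, "z3": 6},
                    "C": {"z1": 11, "z2": 8, "z3": 3}},
    }
    calls = {"morning": {"z1": 5, "z2": 3, "z3": 2},   # expected incidents per zone
             "parade":  {"z1": 2, "z2": 6, "z3": 4}}
    MAX_RESPONSE = 5   # pre-specified response-time limit (minutes)
    CREWS = 2          # staging sites that can be staffed simultaneously

    def best_staging(period):
        """Enumerate crew placements for one time partition and return the one
        covering the most expected calls within MAX_RESPONSE minutes."""
        sites = travel[period].keys()
        best = (None, -1)
        for chosen in combinations(sites, CREWS):
            covered = sum(demand for zone, demand in calls[period].items()
                          if any(travel[period][s][zone] <= MAX_RESPONSE for s in chosen))
            if covered > best[1]:
                best = (chosen, covered)
        return best

    for period in travel:
        print(period, best_staging(period))   # re-location decision per partition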

Date Created
2017-12

A Strategy for Improved Traffic Flow

Description

Commuting is a significant cost in time and travel expenses for working individuals and a major contributor to emissions in the United States. This project focuses on increasing the efficiency of an intersection through the use of "light metering." Light metering uses a series of lights leading up to an intersection to force cars to stop farther away from the final intersection in smaller queues, instead of congregating in one large queue before it. The simulation software package AnyLogic was used to model a simple two-lane intersection with and without light metering. It was found that light metering nearly eliminates start-up delay by preventing a long queue from forming in front of the modeled intersection. Shorter queues and reduced start-up delays prevent cycle failure and significantly reduce the overall delay for the intersection. However, some of the cars must repeatedly decelerate and accelerate before each light meter. This solution significantly reduces the traffic density before the intersection and the overall delay, but it does not appear to be a better alternative for emissions because of the increase in acceleration. Further research would need to quantify the difference in emissions between this model and a standard intersection.
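
The AnyLogic model is not reproduced here, but the start-up-delay argument can be made concrete with a back-of-the-envelope calculation. In the hypothetical Python sketch below, start-up delay grows with a car's position in the queue it restarts from, so one long queue costs more than several short metered queues, at the price of an extra stop per car; the constants are illustrative, not measured values.

    # Toy comparison of start-up delay with and without upstream "light metering".
    # All numbers are invented assumptions, not outputs of the AnyLogic model.
    PER_POSITION_LOSS = 0.8   # seconds of start-up lag added per car ahead of you
    EXTRA_STOP_COST = 2.0     # seconds lost decelerating/accelerating at a meter

    def startup_delay(queue_length):
        # car i (0-based) loses PER_POSITION_LOSS * i seconds waiting to get moving
        return sum(PER_POSITION_LOSS * i for i in range(queue_length))

    def total_delay_single_queue(cars):
        return startup_delay(cars)

    def total_delay_metered(cars, batch):
        # cars wait in small queues of size `batch` at the meters, then arrive at
        # the final light in short platoons that barely queue at all
        batches = [batch] * (cars // batch) + ([cars % batch] if cars % batch else [])
        return sum(startup_delay(b) + EXTRA_STOP_COST * b for b in batches)

    print(total_delay_single_queue(24))      # one long queue at the intersection
    print(total_delay_metered(24, batch=4))  # several short metered queues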

Date Created
2018-05

Balancing the Present and Future: Making Valuable Predictions for Continuous Improvement in Management

Description

In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose instituting technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. This knowledge is combined with the work of quality experts W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi to build a framework for making valuable predictions for continuous improvement. After synthesizing this information, the paper concludes that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
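
The bias-variance point can be illustrated numerically. The short Python experiment below (not taken from the paper) compares two unbiased predictors of the next observation from a stable process: one reacts only to the latest data point, the other uses the full history. The second has far lower variance, echoing the argument for fully utilizing the available information about the system.

    import random

    # A small numerical illustration: both predictors are unbiased here, but
    # their variance, and hence their mean squared error, differs sharply.
    random.seed(1)
    TRUE_MEAN, NOISE_SD, HISTORY, TRIALS = 50.0, 5.0, 30, 2000

    errors_last, errors_avg = [], []
    for _ in range(TRIALS):
        history = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(HISTORY)]
        next_value = random.gauss(TRUE_MEAN, NOISE_SD)
        errors_last.append(history[-1] - next_value)            # react to newest point only
        errors_avg.append(sum(history) / HISTORY - next_value)  # use the whole history

    def mse(errs):
        return sum(e * e for e in errs) / len(errs)

    print("MSE, last-point predictor:", round(mse(errors_last), 2))
    print("MSE, running-average predictor:", round(mse(errors_avg), 2))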

Date Created
2015-05

A Stochastic Airline Staff Scheduling Model with Risk Considerations that Minimizes Costs

Description

Most staff planning in the airline industry is done using point estimates, which do not account for the probabilistic nature of employees not showing up to work; as a result, the airline risks being under- or overstaffed at different times, which increases costs and deteriorates customer service. We propose a stochastic method for American Airlines to schedule its ground crew staff. We developed a stochastic scheduling model that incorporates the risk of absent employees as well as a reliability level, so that stakeholders can determine, based on cost, the level of reliability they want to maintain in their system. We also incorporated a preference component into the model to increase staff satisfaction with the schedules they are assigned, based on their predetermined preferences. Because this is a general staffing model, it can be used for an airline crew or virtually any other workforce, so long as certain parameters about the population can be determined.
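
As a simplified illustration of the reliability idea, the Python sketch below computes, for a single shift, the smallest number of workers to schedule so that enough show up with a chosen probability, treating attendance as independent with a known show-up rate. The figures are hypothetical, and the full thesis model additionally handles costs, multiple shifts, and preferences.

    from math import comb

    def staff_needed(required, show_up_prob, reliability):
        """Smallest number of workers to schedule so that the probability of at
        least `required` showing up is >= `reliability`.  A toy, single-shift
        version of the idea, with attendance modeled as independent Bernoulli."""
        n = required
        while True:
            p_enough = sum(comb(n, k) * show_up_prob**k * (1 - show_up_prob)**(n - k)
                           for k in range(required, n + 1))
            if p_enough >= reliability:
                return n
            n += 1

    # e.g. 20 ground-crew workers needed, 92% attendance, 95% target reliability
    print(staff_needed(required=20, show_up_prob=0.92, reliability=0.95))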

Date Created
2016-05

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team's work scope to apply more engineering analysis to the collected data to drive factory improvements.
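
As a hypothetical illustration of how such automated data collection could feed a time study, the Python sketch below turns timestamped detections of parts at stations into station-to-station times. The event format and values are assumptions for the example, not the team's actual model output.

    from collections import defaultdict

    # Hypothetical post-processing sketch: once a vision model emits timestamped
    # detections of a component at each station, cycle times can be derived
    # automatically instead of by manual observation.
    detections = [
        ("part-17", "station-A", 12.0), ("part-17", "station-B", 95.5),
        ("part-17", "station-C", 160.0), ("part-18", "station-A", 20.0),
        ("part-18", "station-B", 110.0), ("part-18", "station-C", 181.5),
    ]

    first_seen = defaultdict(dict)
    for part, station, t in detections:
        first_seen[part].setdefault(station, t)   # keep earliest sighting per station

    route = ["station-A", "station-B", "station-C"]
    for part, times in first_seen.items():
        cycle = [round(times[b] - times[a], 1) for a, b in zip(route, route[1:])]
        print(part, "station-to-station times:", cycle)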

Date Created
2021-05

System complexity reduction via feature selection

Description

This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods in the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time ordering of the data to extract features and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem. One uses an out-of-bag sampling method called OOBForest, and the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show that the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
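
As a rough illustration of the interval features behind a time series forest, the Python sketch below computes the mean, standard deviation, and slope over randomly chosen sub-intervals of a series; the sampling scheme and interval count are simplified choices for the example, not the dissertation's exact procedure.

    import random
    import statistics

    def interval_features(series, n_intervals=5, seed=0):
        """Simplified sketch of interval features: mean, standard deviation and
        least-squares slope over randomly chosen sub-intervals of the series."""
        rng = random.Random(seed)
        n = len(series)
        feats = []
        for _ in range(n_intervals):
            start = rng.randrange(0, n - 1)
            end = rng.randrange(start + 2, n + 1)        # at least two points
            window = series[start:end]
            xs = range(len(window))
            mean_x, mean_y = (len(window) - 1) / 2, statistics.fmean(window)
            slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, window))
                     / sum((x - mean_x) ** 2 for x in xs))
            feats += [mean_y, statistics.stdev(window), slope]
        return feats   # fixed-length feature vector, usable by any tree ensemble

    print([round(f, 2) for f in interval_features([1, 2, 4, 8, 7, 5, 6, 9, 12, 11])])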

Date Created
2011

Production scheduling and system configuration for capacitated flow lines with application in the semiconductor backend process

Description

A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult because of the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. The scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and to identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of the different solution methods.
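
A drastically reduced version of the assignment decisions in such a formulation can be sketched as follows. The toy model below, written with the open-source PuLP package on invented data, assigns lots to qualified parallel machines at a single stage and minimizes the maximum machine load, including one sequence-independent family setup per family used on a machine; the full formulation in the dissertation covers multiple stages, time, and many more constraints.

    import pulp

    # Toy single-stage MILP: lot -> (product family, processing time); all data invented.
    lots = {"L1": ("famA", 4), "L2": ("famA", 3), "L3": ("famB", 5), "L4": ("famB", 2)}
    machines = ["M1", "M2"]
    qualified = {("L1", "M1"), ("L1", "M2"), ("L2", "M1"),
                 ("L3", "M2"), ("L4", "M1"), ("L4", "M2")}
    SETUP = 1.5   # sequence-independent, family-related setup time
    families = {v[0] for v in lots.values()}

    prob = pulp.LpProblem("backend_toy", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("assign", [(l, m) for l in lots for m in machines], cat="Binary")
    y = pulp.LpVariable.dicts("family_setup", [(f, m) for f in families for m in machines],
                              cat="Binary")
    cmax = pulp.LpVariable("max_load", lowBound=0)
    prob += cmax                                      # minimize the maximum machine load

    for l in lots:                                    # every lot scheduled exactly once
        prob += pulp.lpSum(x[(l, m)] for m in machines) == 1
    for (l, m) in x:
        if (l, m) not in qualified:                   # qualification: forbid unqualified pairs
            prob += x[(l, m)] == 0
        prob += x[(l, m)] <= y[(lots[l][0], m)]       # using a family on m triggers its setup
    for m in machines:                                # each machine's load is below cmax
        prob += (pulp.lpSum(lots[l][1] * x[(l, m)] for l in lots)
                 + pulp.lpSum(SETUP * y[(f, m)] for f in families)) <= cmax

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[prob.status], pulp.value(cmax))
    print([(l, m) for (l, m) in x if x[(l, m)].value() > 0.5])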

Date Created
2011

Design, analytics and quality assurance for emerging personalized clinical diagnostics based on next-gen sequencing

Description

Major advancements in biology and medicine have been realized during recent decades, including massively parallel sequencing, which allows researchers to collect millions or billions of short reads from a DNA or RNA sample. This capability opens the door to a renaissance in personalized medicine if effectively deployed. Three projects that address major and necessary advancements in massively parallel sequencing are included in this dissertation. The first study involves a pair of algorithms to verify patient identity based on single nucleotide polymorphisms (SNPs). In brief, we developed a method that allows de novo construction of sample relationships, e.g., which samples are from the same individual and which are from different individuals. We also developed a method to confirm the hypothesis that a tumor came from a known individual. The second study derives an algorithm to multiplex multiple Polymerase Chain Reaction (PCR) reactions while minimizing the interference between reactions that would compromise results. PCR is a powerful technique that amplifies pre-determined regions of DNA and is often used to selectively amplify DNA and RNA targets that are destined for sequencing. It is highly desirable to multiplex reactions to save on reagent and assay setup costs as well as to equalize the effect of minor handling issues across gene targets. Our solution involves a binary integer program that minimizes events likely to cause interference between PCR reactions. The third study involves the design and analysis methods required to analyze gene expression and copy number results against a reference range in a clinical setting for guiding patient treatments. Our goal is to determine which events are present in a given tumor specimen; these events may be mutations, DNA copy number changes, or RNA expression changes. At the time of writing, all three techniques are being used for their intended purposes in major research and diagnostic projects. The SNP matching solution has been selected by The Cancer Genome Atlas to determine sample identity. Paradigm Diagnostics, Viomics, and International Genomics Consortium utilize the PCR multiplexing technique to multiplex various types of PCR reactions on multi-million-dollar projects. The reference-range-based normalization method is used by Paradigm Diagnostics to analyze results from every patient.
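
As a hypothetical illustration of the SNP-based identity check in the first study, the Python sketch below compares genotype calls between two samples and flags them as coming from the same individual when concordance exceeds a threshold; the panel, encoding, and threshold are invented for the example and are not the published algorithm.

    # Illustrative sketch only: a high concordance rate across a SNP panel
    # suggests two samples come from the same individual.
    def concordance(genotypes_a, genotypes_b):
        """Fraction of SNP positions with calls in both samples that agree."""
        shared = [(a, b) for a, b in zip(genotypes_a, genotypes_b)
                  if a != "NA" and b != "NA"]
        if not shared:
            return 0.0
        return sum(a == b for a, b in shared) / len(shared)

    SAME_INDIVIDUAL_THRESHOLD = 0.9   # tumors can drift slightly from normal tissue

    tumor  = ["AA", "AG", "GG", "AA", "NA", "CT", "CC", "TT"]
    normal = ["AA", "AG", "GG", "AA", "GG", "CT", "CC", "TT"]
    other  = ["AG", "GG", "AA", "AG", "GG", "TT", "CC", "CT"]

    for name, sample in [("matched normal", normal), ("unrelated sample", other)]:
        c = concordance(tumor, sample)
        verdict = "same individual" if c >= SAME_INDIVIDUAL_THRESHOLD else "different"
        print(f"{name}: concordance={c:.2f} -> {verdict}")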

Date Created
2014