Matching Items (13)
Description
Workplace productivity is a result of many factors, and among them is the setup of the office and its resultant noise level. The conversations and interruptions that come along with converting an office to an open plan can foster innovation and creativity, or they can be distracting and harm the performance of employees. Through simulation, the impact of different types of office noise was studied along with other changing conditions, such as the number of people in the office. When productivity per person, defined in terms of mood and focus, was measured, the effect of noise was found to be positive in some scenarios and negative in others. In simulations where employees were performing very similar tasks, noise (and its correlates, such as the number of employees) was beneficial. On the other hand, when employees were engaged in a variety of different types of tasks, noise had a negative overall effect. This indicates that workplaces that group their employees by common job functions may be more productive than workplaces where the problems and products that employees are working on are varied throughout the workspace.
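
To illustrate the kind of simulation described, here is a minimal, hypothetical sketch (not the author's model): per-person productivity is a toy function of noise and task similarity, with noise scaling with headcount. All parameters and functional forms are illustrative assumptions.

```python
import random

def productivity(noise, task_similarity):
    # Toy productivity function (assumed form): noise helps when tasks are
    # similar (similarity > 0.5) and hurts when they are varied.
    return 1.0 + noise * (task_similarity - 0.5)

def mean_productivity(n_people, task_similarity, trials=10_000):
    total = 0.0
    for _ in range(trials):
        noise = random.random() * n_people / 10  # noise grows with headcount
        total += productivity(noise, task_similarity)
    return total / trials

random.seed(0)
print(mean_productivity(8, task_similarity=0.9))  # similar tasks: > 1.0
print(mean_productivity(8, task_similarity=0.1))  # varied tasks:  < 1.0
```
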
Contributors: Hall, Mikaela Starrantino (Author) / Pavlic, Theodore P. (Thesis director) / Cooke, Nancy (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

The first step in process improvement is to scope the problem; the next is to measure the current process. If data is not readily available and cannot be manually collected, then a measurement system must be implemented. General Dynamics Mission Systems (GDMS) is a lean company that is always seeking to improve. One of its current bottlenecks is the incoming inspection department. This department is responsible for finding defects on purchased parts and is critical to the high-reliability products produced by GDMS. To stay competitive and hold their market share, a decision was made to optimize incoming inspection. This proved difficult because no data was being collected. Early steps in many process improvement methodologies, such as Define, Measure, Analyze, Improve and Control (DMAIC), include data collection; however, no measurement system was in place, so no data was available for improvement. The solution to this problem was to design and implement a Management Information System (MIS) that tracks a variety of data, providing the company with data to be used for analysis and improvement. The first stage of the MIS was developed in Microsoft Excel with Visual Basic for Applications because of the low cost and overall effectiveness of the software. Excel allows updates to be made quickly and allows GDMS to collect data immediately. Stage two would move the MIS to a more robust platform, such as Access or MySQL. This thesis focuses only on stage one of the MIS; GDMS will proceed with stage two.
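
As a concrete illustration of the stage-two direction, the sketch below logs inspection records to a small SQL database; the table and field names are hypothetical assumptions, not GDMS's actual schema.

```python
import sqlite3
from datetime import datetime

# Minimal sketch of a stage-two inspection log; schema is illustrative only.
conn = sqlite3.connect("incoming_inspection.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS inspections (
        part_number   TEXT,
        lot_size      INTEGER,
        defects_found INTEGER,
        inspector     TEXT,
        inspected_at  TEXT
    )
""")

def record_inspection(part_number, lot_size, defects_found, inspector):
    """Append one inspection record so throughput and defect rates can be analyzed later."""
    conn.execute(
        "INSERT INTO inspections VALUES (?, ?, ?, ?, ?)",
        (part_number, lot_size, defects_found, inspector, datetime.now().isoformat()),
    )
    conn.commit()

record_inspection("PN-1042", 50, 2, "inspector_a")  # hypothetical record
```
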

Contributors: Diaz, Angel (Author) / McCarville, Daniel R. (Thesis director) / Pavlic, Theodore (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The overall energy consumption around the United States has not been reduced even with the advancement of technology over the past decades. Deficiencies exist between design and actual energy performance. Energy Infrastructure Systems (EIS) are impacted when the amount of energy production cannot be accurately and efficiently forecasted. Inaccurate engineering assumptions can result from a lack of understanding of how energy systems operate in real-world applications. Energy systems are complex, and without a structural system model their behavior remains unknown. Data mining techniques for reverse engineering, needed to develop efficient structural system models, are currently lacking. In this project, a new type of reverse engineering algorithm has been applied to a year's worth of energy data collected from an ASU research building called MacroTechnology Works to identify the structural system model. Developing and understanding structural system models is the first step in creating accurate predictive analytics for energy production. The associative network of the building's data will be highlighted to accurately depict the structural model. This structural model will enhance energy infrastructure systems' energy efficiency, reduce energy waste, and narrow the gaps between energy infrastructure design, planning, operation, and management (DPOM).
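
One simple way to extract an associative network from such data is to link variables whose readings are strongly correlated. The sketch below uses that correlation-based stand-in (the thesis's actual algorithm is not reproduced here), with synthetic data in place of the building's meters.

```python
import numpy as np

# Sketch: build an associative network from a year of hourly readings by
# linking variables whose pairwise correlation exceeds a threshold.
rng = np.random.default_rng(0)
readings = rng.normal(size=(8760, 5))          # 5 hypothetical meters
readings[:, 1] += 0.8 * readings[:, 0]         # inject one real dependency

corr = np.corrcoef(readings, rowvar=False)     # pairwise correlation matrix
adjacency = (np.abs(corr) > 0.5) & ~np.eye(5, dtype=bool)

for i, j in zip(*np.nonzero(np.triu(adjacency))):
    print(f"meter_{i} -- meter_{j} (r = {corr[i, j]:.2f})")
```
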
Contributors: Camarena, Raquel Jimenez (Author) / Chong, Oswald (Thesis director) / Ye, Nong (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Monitoring a system for deviations from standard or reference behavior is essential for many data-driven tasks. Whether it is monitoring sensor data or the interactions between system elements, such as edges in a path or transactions in a network, the goal is to detect significant changes from a reference. As technological advancements allow for more data to be collected from systems, monitoring approaches should evolve to accommodate the greater collection of high-dimensional data and complex system settings. This dissertation introduces system-level models for monitoring tasks characterized by changes in a subset of system components, utilizing component-level information and relationships. A change may affect only a portion of the data or system (a partial change). The first three parts of this dissertation present applications and methods for detecting partial changes. The first part introduces a methodology for partial change detection in a simple, univariate setting. Changes are detected with posterior probabilities and statistical mixture models that allow only a fraction of the data to change. The second and third parts of this dissertation center around monitoring more complex multivariate systems modeled through networks. The goal is to detect partial changes in the underlying network attributes and topology. The contributions of the second and third parts are two non-parametric system-level monitoring techniques that consider relationships between network elements. The first algorithm, Supervised Network Monitoring (SNetM), leverages Graph Neural Networks and transforms the problem into supervised learning. The second, Supervised Network Monitoring for Partial Temporal Inhomogeneity (SNetMP), generates a network embedding and then transforms the problem into supervised learning. Finally, both SNetM and SNetMP construct measures and transform them into pseudo-probabilities to be monitored for changes. The last topic addresses predicting and monitoring system-level delays on paths in a transportation/delivery system. For each item, the risk of delay is quantified. Machine learning is used to build a system-level model for delay risk, given the information available (such as environmental conditions) on the edges of a path, which integrates edge models. The outputs can then be used in a system-wide monitoring framework, and the items most at risk are identified for potential corrective actions.
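
To make the univariate mixture idea concrete, here is a minimal sketch (not the dissertation's code) of scoring each observation's posterior probability of belonging to a shifted component. The changed fraction and shifted mean are assumed known for illustration; in practice they would be estimated, e.g., via EM.

```python
import numpy as np
from scipy.stats import norm

# Two-component mixture: a fraction of observations shift to a new mean, and
# each point gets a posterior probability of belonging to the changed part.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 80), rng.normal(3, 1, 20)])  # 20% changed

pi = 0.2             # assumed fraction of changed observations
mu0, mu1 = 0.0, 3.0  # reference and shifted means (assumed known here)

lik0 = (1 - pi) * norm.pdf(data, mu0, 1)
lik1 = pi * norm.pdf(data, mu1, 1)
posterior_changed = lik1 / (lik0 + lik1)

flagged = np.nonzero(posterior_changed > 0.9)[0]
print(f"{len(flagged)} observations flagged as changed")
```
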
Contributors: Kasaei Roodsari, Maziar (Author) / Runger, George (Thesis advisor) / Escobedo, Adolfo (Committee member) / Pan, Rong (Committee member) / Shinde, Amit (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Lean philosophy is a set of practices aimed at reducing waste in an industry or enterprise. By eliminating the aspects of a system that do not add value, the process can work in a continuous flow and, as a result, have a shorter cycle time. With a shorter cycle time, fewer resources are consumed, and efforts can be properly distributed to achieve maximum efficiency. Relatedly, Six Sigma is a methodology that aims to reduce the variability of a system and, in turn, reduce the number of defects and improve the overall quality of a product or process. For this reason, Lean and Six Sigma go hand-in-hand: cutting out non-value-adding steps in a process increases efficiency, and perfecting the steps still in place improves quality. Both aspects are important to the success of a business practice. The DNASU Plasmid Repository would benefit greatly from the Lean Six Sigma process. The process of cloning DNA requires great attention to detail and time in order to avoid defects. For instance, any mistake made in the bacteria growth process, such as contamination, results in a significant amount of time being wasted. In addition, the DNA purification steps also necessitate vigilant observation, since the procedure is highly susceptible to small mistakes that can have big impacts. The goal of this project is to integrate Lean Six Sigma methodology into the DNASU laboratory. By applying numerous aspects of Lean Six Sigma, the DNA repository will be able to improve the efficiency and quality of its processes and obtain its highest rate of success.
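
For context, a Six Sigma effort like this one would typically track defects per million opportunities (DPMO); the sketch below computes that standard metric, with made-up counts for illustration.

```python
# Standard Six Sigma defect metric; the counts below are invented examples.
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g., 7 contaminated cultures out of 500 preps, 4 failure opportunities each
print(dpmo(defects=7, units=500, opportunities_per_unit=4))  # -> 3500.0
```
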

Contributors: Morton, Haley (Author) / McCarville, Daniel (Thesis director) / Eyerly, Ann (Committee member) / Taylor, Clayton (Committee member) / Barrett, The Honors College (Contributor) / Industrial, Systems & Operations Engineering Prgm (Contributor)
Created: 2023-05
Description

This paper analyzes the impact of the December 2022 winter storm on Southwest Airlines (SWA). The storm caused delays and cancellations for all airlines, but SWA was the only major airline that was unable to recover fully. The disruption was unique due to the higher volume of people traveling during the holiday season and the lack of good alternative transportation for stranded passengers. The paper explains SWA's point-to-point (PTP) model, which allows them to offer competitive ticket prices, and the organizational factors that have helped them hold a significant market share. The paper also discusses previous failures of SWA's IT and aircraft maintenance management systems and the outdated crewing system, which were not addressed until after the storm. The paper uses AnyLogic agent-based modeling to investigate why SWA was so affected and why it took them so long to recover.
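
The core mechanic such a model captures is delay propagation along an aircraft's chain of legs in a PTP network. The toy sketch below (a plain-Python stand-in, not the AnyLogic model) shows how a delay early in the day lingers when schedule slack per turn is small; routes and numbers are invented.

```python
# Each aircraft flies a chain of legs, so a delay on one leg pushes every
# later departure; only a little slack is recovered at each stop.
legs = [("PHX", "DEN"), ("DEN", "MDW"), ("MDW", "BWI")]
slack_per_turn = 10  # minutes of schedule buffer absorbed at each stop

def propagate(initial_delay):
    delay = initial_delay
    for origin, dest in legs:
        print(f"{origin}->{dest}: departs {delay} min late")
        delay = max(0, delay - slack_per_turn)
    return delay

propagate(90)  # a storm delay early in the chain lingers across the day
```
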

Contributors: Bray, Mariana (Author) / McCarville, Daniel (Thesis director) / Kucukozyigit, Ali (Committee member) / Barrett, The Honors College (Contributor) / Industrial, Systems & Operations Engineering Prgm (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05
Description
The rank aggregation problem has ubiquitous applications in operations research, artificial intelligence, computational social choice, and various other fields. Generally, rank aggregation is utilized whenever a set of judges (human or non-human) express their preferences over a set of items, and it is necessary to find a consensus ranking that best represents these preferences collectively. Many real-world instances of this problem involve a very large number of items, include ties, and/or contain partial information, which poses a challenge to decision-makers. This work makes several contributions to overcoming these challenges. Most attention on this problem has focused on an NP-hard distance-based variant known as Kemeny aggregation, for which solution approaches with provable guarantees that can handle difficult large-scale instances remain elusive. Firstly, this work introduces exact and approximate methodologies inspired by the social choice foundations of the problem, namely the Condorcet criterion, to decompose the problem. To deal with instances where exact partitioning does not yield many subsets, it proposes Approximate Condorcet Partitioning, a scalable solution technique capable of handling large-scale instances while providing provable guarantees. Secondly, this work delves into the rank aggregation problem under the generalized Kendall-tau distance, which contains Kemeny aggregation as a special case. This new problem provides a robust and highly flexible framework for handling ties. First, it derives exact and heuristic solution methods for the generalized problem. Second, it introduces a novel social choice property that encloses existing variations of the Condorcet criterion as special cases. Thirdly, this work focuses on top-k list aggregation. Top-k lists are a special form of item orderings wherein, out of n total items, only a small number of them, k, are explicitly ordered. Top-k lists are being increasingly utilized in various fields including recommendation systems, information retrieval, and machine learning. This work introduces exact and inexact methods for consolidating a collection of heterogeneous top-k lists. Furthermore, the strength of the proposed exact formulations is analyzed from a polyhedral point of view. Finally, this work identifies the top-100 U.S. universities by consolidating four prominent university rankings to assess the computational implications of this problem.
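
For intuition on why exact Kemeny aggregation is hard, the sketch below solves a tiny instance by brute force: it enumerates all rankings and keeps the one minimizing total Kendall-tau distance. This factorial enumeration is exactly what the decomposition and approximation methods above are designed to avoid.

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Count pairwise disagreements between two rankings of the same items."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(r1)
    return sum(
        1
        for i in range(len(items))
        for j in range(i + 1, len(items))
        if (pos1[items[i]] - pos1[items[j]]) * (pos2[items[i]] - pos2[items[j]]) < 0
    )

def kemeny(rankings):
    """Brute-force Kemeny consensus; feasible only for a handful of items."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_tau(cand, r) for r in rankings),
    )

judges = [("a", "b", "c", "d"), ("a", "c", "b", "d"), ("b", "a", "c", "d")]
print(kemeny(judges))  # -> ('a', 'b', 'c', 'd')
```
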
Contributors: Akbari, Sina (Author) / Escobedo, Adolfo (Thesis advisor) / Byeon, Geunyeong (Committee member) / Sefair, Jorge (Committee member) / Wu, Shin-Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
In conventional supervised learning tasks, information retrieval from extensive collections of data happens automatically at low cost, whereas in many real-world problems obtaining labeled data can be hard, time-consuming, and expensive. Consider healthcare systems, for example, where unlabeled medical images are abundant while labeling requires a considerable amount of knowledge from experienced physicians. Active learning addresses this challenge with an iterative process to select instances from the unlabeled data to annotate and improve the supervised learner. At each step, the query of examples to be labeled can be considered as a dilemma between exploitation of the supervised learner's current knowledge and exploration of the unlabeled input features.

Motivated by the need for efficient active learning strategies, this dissertation proposes new algorithms for batch-mode, pool-based active learning. The research considers the following questions: how can unsupervised knowledge of the input features (exploration) improve learning when incorporated with supervised learning (exploitation)? How to characterize exploration in active learning when data is high-dimensional? Finally, how to adaptively make a balance between exploration and exploitation?

The first contribution proposes a new active learning algorithm, Cluster-based Stochastic Query-by-Forest (CSQBF), which provides a batch-mode strategy that accelerates learning with added value from exploration and improved exploitation scores. CSQBF balances exploration and exploitation using a probabilistic scoring criterion based on classification probabilities from a tree-based ensemble model within each data cluster.
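
CSQBF's code is not reproduced here, but the generic sketch below captures the described ingredients under synthetic data: tree-ensemble classification probabilities provide the exploitation score, and selecting the most uncertain point per cluster spreads the batch for exploration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Generic cluster-based batch querying in the spirit of CSQBF (not the
# author's implementation); data and parameters are synthetic assumptions.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 4))                        # unlabeled pool
X_lab, y_lab = rng.normal(size=(30, 4)), rng.integers(0, 2, 30)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
proba = model.predict_proba(X_pool)
uncertainty = 1 - proba.max(axis=1)                       # near 0.5/0.5 -> most uncertain

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_pool)
batch = [
    int(np.flatnonzero(clusters == c)[np.argmax(uncertainty[clusters == c])])
    for c in range(5)
]                                                         # most uncertain point per cluster
print("query indices:", batch)
```
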

The second contribution introduces two more query strategies, Double Margin Active Learning (DMAL) and Cluster Agnostic Active Learning (CAAL), that combine consistent exploration and exploitation modules into a coherent and unified measure for label query. Instead of assuming a fixed clustering structure, CAAL and DMAL adopt a soft-clustering strategy which provides a new approach to formalize exploration in active learning.

The third contribution addresses the challenge of dynamically making a balance between exploration and exploitation criteria throughout the active learning process. Two adaptive algorithms are proposed based on feedback-driven bandit optimization frameworks that elegantly handle this issue by learning the relationship between exploration-exploitation trade-off and an active learner's performance.
Contributors: Shams, Ghazal (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Escobedo, Adolfo (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020