Matching Items (20)
Description
In a healthcare setting, the Sterile Processing Department (SPD) provides ancillary services to the Operating Room (OR), Emergency Room, Labor & Delivery, and off-site clinics. SPD's function is to reprocess reusable surgical instruments and return them to their home departments. The management of surgical instruments and medical devices can impact patient safety and hospital revenue: any time instrumentation or devices are not available or are not fit for use, both can be negatively affected. One step of the instrument reprocessing cycle is sterilization. Steam sterilization is the method used for the majority of surgical instruments and is preferred to immediate use steam sterilization (IUSS) because terminally sterilized items can be stored until needed; IUSS items must be used promptly and cannot be stored for later use. IUSS is intended for emergency situations, not as a regular course of action. Unfortunately, IUSS is used to compensate for inadequate inventory levels, scheduling conflicts, and miscommunications. If IUSS is viewed as an adverse event, then monitoring IUSS incidences can help healthcare organizations meet patient safety and financial goals while aiding process improvement efforts. This work recommends applying statistical process control methods to IUSS incidents and illustrates the use of control charts for IUSS occurrences through a case study and analysis of control charts for data from a health care provider. Furthermore, this work considers the application of data mining methods to IUSS occurrences and presents a representative example. This extends the application of statistical process control and data mining in healthcare.
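As a purely illustrative sketch of the control-chart approach described above (the weekly counts and the choice of a c-chart are hypothetical, not taken from the case study), control limits for IUSS event counts could be computed as:

```python
# Hypothetical sketch: a c-chart for weekly IUSS event counts.
from statistics import mean

def c_chart_limits(counts):
    """Center line and 3-sigma limits for a c-chart of event counts."""
    c_bar = mean(counts)
    sigma = c_bar ** 0.5          # for Poisson-like counts, sd = sqrt(mean)
    lcl = max(0.0, c_bar - 3 * sigma)
    ucl = c_bar + 3 * sigma
    return c_bar, lcl, ucl

def out_of_control(counts):
    """Indices of weeks whose IUSS count falls outside the control limits."""
    _, lcl, ucl = c_chart_limits(counts)
    return [i for i, c in enumerate(counts) if c < lcl or c > ucl]

weekly_iuss = [4, 6, 5, 3, 7, 5, 4, 6, 18, 5]   # invented data; week 8 is a spike
print(out_of_control(weekly_iuss))              # [8]
```

Weeks flagged by such a chart would then be investigated as potential adverse events.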
ContributorsWeart, Gail (Author) / Runger, George C. (Thesis advisor) / Li, Jing (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created2014
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes has been a strategy many organizations have adopted to reduce cost and to increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked towards Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. Firstly, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a Lean Six Sigma deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Secondly, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with the decision of selecting a portfolio of Lean Six Sigma projects that meet multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment.
Finally, a case study is presented that demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing. The research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
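As a hedged illustration of the portfolio-selection step, a single-period simplification of the 0-1 knapsack model described above (the project savings, costs, and budget are invented) could be solved by dynamic programming:

```python
# Hypothetical sketch: single-period 0-1 knapsack for selecting Lean Six Sigma
# projects that maximize expected net savings under a budget constraint.

def select_portfolio(savings, costs, budget):
    """Classic 0-1 knapsack DP; returns (best savings, chosen project indices)."""
    n = len(savings)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                      # skip project i-1
            if costs[i - 1] <= b:
                cand = best[i - 1][b - costs[i - 1]] + savings[i - 1]
                if cand > best[i][b]:                        # or fund it
                    best[i][b] = cand
    # backtrack to recover the chosen projects
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

value, picks = select_portfolio(savings=[120.0, 90.0, 65.0],
                                costs=[60, 50, 30], budget=100)
print(value, picks)   # 185.0 [0, 2]
```

The full multi-period model in the dissertation adds a time index and life-cycle discounting on top of this basic structure.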
ContributorsDuarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
Buildings (approximately half commercial and half residential) consume over 70% of the electricity among all consumption units in the United States. Buildings are also responsible for approximately 40% of CO2 emissions, more than any other industry sector. As a result, the smart building initiative, which aims not only to manage electrical consumption efficiently but also to reduce the damaging effect of greenhouse gases on the environment, has been launched. Another important technology being promoted by government agencies is the smart grid, which manages energy usage across a wide range of buildings in an effort to reduce cost and increase reliability and transparency. While a great amount of effort has been devoted to these two initiatives, by either exploring smart grid designs or developing technologies for smart buildings, research studying how smart buildings and the smart grid coordinate to use energy more efficiently is currently lacking. In this dissertation, a "system-of-systems" approach is employed to develop an integrated building model which consists of a number of buildings (a building cluster) interacting with the smart grid. The buildings can function as both energy consumption units and energy generation/storage units. Memetic Algorithm (MA) and Particle Swarm Optimization (PSO) based decision frameworks are developed for building operation decisions. In addition, Particle Filter (PF) is explored as a means of fusing online sensor and meter data so that adaptive decisions can be made in response to a dynamic environment. The dissertation is divided into three inter-connected research components. First, an integrated building energy model including building consumption, storage, and generation sub-systems for the building cluster is developed. Then a bi-level Memetic Algorithm (MA) based decentralized decision framework is developed to identify the Pareto optimal operation strategies for the building cluster.
The Pareto solutions not only enable multi-dimensional tradeoff analysis, but also provide valuable insight for determining pricing mechanisms and power grid capacity. Secondly, a multi-objective PSO based decision framework is developed to reduce the computational effort of the MA based decision framework without sacrificing accuracy. With the improved performance, the decision time scale can be refined to support hourly operation decisions. Finally, by integrating the multi-objective PSO based decision framework with PF, an adaptive framework is developed for adaptive operation decisions for the smart building cluster. The adaptive framework not only enables me to develop a high-fidelity decision model but also enables the building cluster to respond to the dynamics and uncertainties inherent in the system.
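One building block shared by both the MA and PSO frameworks above is identifying the Pareto optimal set. As a minimal sketch (the candidate strategies and the two minimized objectives, e.g. cost and emissions, are invented), a non-dominated filter could look like:

```python
# Hypothetical sketch: Pareto filtering of candidate operation strategies,
# each scored on several objectives that are all to be minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (the Pareto optimal set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (cost, emissions) for five invented strategies
strategies = [(10.0, 5.0), (8.0, 7.0), (9.0, 6.0), (8.0, 8.0), (12.0, 4.0)]
print(pareto_front(strategies))
```

Here (8.0, 8.0) is dropped because (8.0, 7.0) is at least as good on cost and strictly better on emissions; the remaining four points form the tradeoff curve a decision maker would analyze.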
ContributorsHu, Mengqi (Author) / Wu, Teresa (Thesis advisor) / Weir, Jeffery (Thesis advisor) / Wen, Jin (Committee member) / Fowler, John (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created2012
Description
This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and the evaluation of dry eye severity in clinical trials. Dry eye is a highly prevalent disease affecting vast numbers (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique of calculating ocular surface protection during natural blink function through the use of video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects; the results are compared with those of TFBUT. In the second part of this research, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and comparisons are made between the OPI 2.0 System and other currently recognized dry eye diagnostic techniques (e.g. TFBUT). In the third part of this research, the OPI 2.0 System is deployed for use in the evaluation of subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE); once again the results are compared and contrasted against commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
ContributorsAbelson, Richard (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Shunk, Dan (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2012
Description
Mathematical modeling and decision-making within the healthcare industry have given means to quantitatively evaluate the impact of decisions on the diagnosis, screening, and treatment of diseases. In this work, we look into a specific yet very important disease: Alzheimer's Disease (AD), the 6th leading cause of death in the United States. Diagnosis of AD cannot be confidently confirmed until after death. This has prompted the importance of early diagnosis of AD based upon symptoms of cognitive decline. A symptom of early cognitive decline and indicator of AD is Mild Cognitive Impairment (MCI). In addition to this qualitative test, biomarker tests have been proposed in the medical field, including p-Tau, FDG-PET, and hippocampal volume. These tests can be administered to patients as early detectors of AD, thus improving patients' quality of life and potentially reducing healthcare costs. Preliminary work has been conducted in the development of a Sequential Tree Based Classifier (STC), which helps medical providers predict whether a patient will contract AD by sequentially administering these biomarker tests. The STC model, however, has its limitations, and a more complex, robust model is needed. In fact, the STC assumes a general linear model for the status of the patient based upon the test results. We take a simulation perspective and define a more complex model that represents the patient's evolution over time.

Specifically, this thesis focuses on the formulation of a Markov Chain model that is complex and robust. This Markov Chain model emulates the evolution of MCI patients based upon doctor visits and the sequential administration of biomarker tests. Data used to create this Markov Chain model were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data lacked detailed information on the sequential administration of the biomarker tests; therefore, different analytical approaches were conducted in order to calibrate the model. The resulting Markov Chain model provided the capability to conduct experiments on different parameters of the Markov Chain, yielding different results for patients that contracted AD and those that did not, and leading to important insights into the effect of thresholds and test sequence on patient prediction capability as well as health cost reduction.
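As a hedged sketch of the kind of Markov Chain described above (the states, transition probabilities, and visit horizon are invented for illustration, not taken from the ADNI-calibrated model), a per-visit transition model could be simulated as:

```python
# Hypothetical sketch: simulating MCI patient evolution as a Markov Chain.
import random

STATES = ["MCI", "AD", "Stable"]
# P[i][j]: probability of moving from STATES[i] to STATES[j] at each visit
P = [[0.80, 0.15, 0.05],   # MCI mostly persists, sometimes converts to AD
     [0.00, 1.00, 0.00],   # AD is absorbing
     [0.00, 0.00, 1.00]]   # reversion to stable is absorbing here

def simulate(start="MCI", visits=10, rng=None):
    """Final state of one simulated patient after a fixed number of visits."""
    rng = rng or random.Random(0)
    i = STATES.index(start)
    for _ in range(visits):
        i = rng.choices(range(len(STATES)), weights=P[i])[0]
    return STATES[i]

def conversion_rate(n=10_000, visits=10, seed=1):
    """Monte Carlo estimate of the fraction of MCI patients reaching AD."""
    rng = random.Random(seed)
    return sum(simulate(visits=visits, rng=rng) == "AD" for _ in range(n)) / n

print(conversion_rate())
```

With these invented numbers the analytic AD-absorption probability after 10 visits is 0.75 * (1 - 0.8**10) ≈ 0.67, which the Monte Carlo estimate approaches; experiments on thresholds and test sequences amount to re-running such simulations with different transition matrices.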

The data in this thesis was provided from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). ADNI investigators did not contribute to any analysis or writing of this thesis. A list of the ADNI investigators can be found at: http://adni.loni.usc.edu/about/governance/principal-investigators/.
ContributorsCamarena, Raquel (Author) / Pedrielli, Giulia (Thesis advisor) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2018
Description
With globalization on the rise, the predominant share of trade happens by sea, and experts have predicted an increase in trade volumes over the next few years. With increasing trade volumes, container ships are being upsized to meet demand. The problem with upsizing container ships is that sea port terminals must be adequately equipped to improve turnaround time; otherwise, upsizing will not yield the anticipated benefits. This thesis focuses on a special type of double automated crane set-up with a finite inter-operational buffer capacity. The buffer is placed between the cranes, and the idea behind this research is to analyze the performance of the crane operations when this technology is adopted. This thesis proposes an approximation of this complex system, thereby addressing the computational time issue and allowing efficient analysis of the system's performance. The system has been modeled in two phases. The first phase is the development of a discrete event simulation model to make the system evolve over time. The challenge of this model is its high processing time, since it requires a large number of experimental runs; this lays the foundation for the development of an analytical model of the system, for which a continuous time Markov process approach has been adopted. Further, to improve the efficiency of the analytical model, a state aggregation approach is proposed. This thesis thus gives insight into the outcomes of the two approaches and the behavior of the error space, and the performance of the models for varying buffer capacities reflects the scope of improvement in these kinds of operational set-ups.
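As a minimal sketch of the continuous-time Markov chain perspective (here the two-crane system is reduced to a single M/M/1/K queue standing in for the finite inter-crane buffer; the upstream rate lam, downstream rate mu, and capacity K are invented), the steady-state buffer occupancy could be computed in closed form:

```python
# Hypothetical sketch: M/M/1/K steady state as a stand-in for the finite
# inter-crane buffer. lam = upstream crane rate, mu = downstream crane rate.

def mm1k_distribution(lam, mu, K):
    """Steady-state probabilities p0..pK of the buffer occupancy."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blocking_probability(lam, mu, K):
    """Fraction of time the buffer is full, i.e. the upstream crane is blocked."""
    return mm1k_distribution(lam, mu, K)[K]

print(blocking_probability(lam=4.0, mu=5.0, K=5))
```

Sweeping K in such a model shows how blocking falls as buffer capacity grows, which is the kind of question the full (much larger) aggregated CTMC addresses without the cost of repeated simulation runs.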
ContributorsRengarajan, Sundaravaradhan (Author) / Pedrielli, Giulia (Thesis advisor) / Ju, Feng (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2018
Description
The current Enterprise Requirements and Acquisition Model (ERAM), a discrete event simulation of the major tasks and decisions within the DoD acquisition system, identifies several what-if intervention strategies to improve program completion time. However, processes that contribute to the program acquisition completion time were not explicitly identified in the simulation study. This research seeks to determine the acquisition processes that contribute significantly to total simulated program time in the acquisition system for all programs reaching Milestone C. Specifically, this research examines the effect of increased scope management, technology maturity, and decreased variation and mean process times in post-Design Readiness Review contractor activities by performing additional simulation analyses. Potential policies are formulated from the results to further improve program acquisition completion time.
ContributorsWorger, Danielle Marie (Author) / Wu, Teresa (Thesis director) / Shunk, Dan (Committee member) / Wirthlin, J. Robert (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2013-05
Description
Efforts to enhance the quality of life and promote better health have led to improved water quality standards. Adequate daily fluid intake, primarily from tap water, is crucial for human health. By improving drinking water quality, negative health effects associated with consuming inadequate water can be mitigated. Although the United States Environmental Protection Agency (EPA) sets and enforces federal water quality limits at water treatment plants, the quality of water reaching end users degrades during the delivery process, emphasizing the need for proactive control systems in buildings to ensure safe drinking water.

Future commercial and institutional buildings are anticipated to feature real-time water quality sensors, automated flushing and filtration systems, temperature control devices, and chemical boosters. Integrating these technologies with a reliable water quality control system that optimizes the use of chemical additives, filtration, flushing, and temperature adjustments ensures users consistently have access to water of adequate quality. Additionally, existing buildings can be retrofitted with these technologies at a reasonable cost, guaranteeing user safety. In the absence of smart buildings with the required technology, Chapter 2 describes the development of an EPANET-MSX (a multi-species extension of the EPA's water simulation tool) model of a typical 5-story building. Chapter 3 involves creating accurate nonlinear approximation models of EPANET-MSX's complex fluid dynamics and chemical reactions, and developing an open-loop water quality control system that can regulate water quality based on the approximated state. To address potential sudden changes in water quality, improve predictions, and reduce the gap between the approximated and true state of water quality, a feedback control loop is developed in Chapter 4.
Lastly, this dissertation includes the development of a reinforcement learning (RL) based water quality control system for cases where the approximation models prove inadequate and cause instability during implementation with a real building water network. The RL-based control system can be implemented in various buildings without the need to develop new hydraulic models and can handle the stochastic nature of water demand, ensuring the proactive control system’s effectiveness in maintaining water quality within safe limits for consumption.
ContributorsGhasemzadeh, Kiarash (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created2023
Description
Photolithography is among the key phases in chip manufacturing. It is also among the most expensive, with manufacturing equipment valued in the hundreds of millions of dollars. It is paramount that the process is run efficiently, guaranteeing high resource utilization and low product cycle times. A key element in the operation of a photolithography system is the effective management of the reticles that are responsible for imprinting the circuit path on the wafers. Managing reticles means determining which are appropriate to mount on the very expensive scanners as a function of the product types being released to the system. Given the importance of the problem, several heuristic policies have been developed in industry practice in an attempt to guarantee that the expensive tools are never idle. However, such policies have difficulty reacting to unforeseen events (e.g., unplanned failures, unavailability of reticles). On the other hand, the technological advances of the semiconductor industry in sensing at the system and process level should be harnessed to improve on these "expert policies". In this thesis, a system for real-time reticle management is developed that is not only able to retrieve information from the real system, but can also embed commonly used policies and improve upon them. A new digital twin for the photolithography process is developed that efficiently and accurately predicts system performance, thus enabling predictions of future behavior as a function of possible decisions. The results demonstrate the validity of the developed model and the feasibility of the overall approach, demonstrating a statistically significant improvement in performance as compared to the current policy.
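As a purely illustrative sketch of the kind of policy comparison a digital twin enables (the lot queue, the swap-counting logic, and the "campaigning" rule are all invented; the real twin models far more, e.g. timing and failures), one could compare reticle swaps on a single scanner under two dispatch rules:

```python
# Hypothetical sketch: comparing reticle swap counts under two dispatch rules.

def count_swaps(lot_reticles):
    """Number of reticle changes when lots run in the given order."""
    swaps, mounted = 0, None
    for r in lot_reticles:
        if r != mounted:
            swaps += 1
            mounted = r
    return swaps

def group_by_reticle(lot_reticles):
    """A 'campaigning' policy: batch lots sharing a reticle together,
    preserving the order of first appearance."""
    order = sorted(set(lot_reticles), key=lot_reticles.index)
    return [r for key in order for r in lot_reticles if r == key]

queue = ["A", "B", "A", "C", "B", "A", "C"]   # reticle needed by each queued lot
print(count_swaps(queue), count_swaps(group_by_reticle(queue)))   # 7 3
```

A digital twin plays the same role at scale: it evaluates candidate policies against a faithful model of the fab before any decision touches the expensive scanners.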
ContributorsSivasubramanian, Chandrasekhar (Author) / Pedrielli, Giulia (Thesis advisor) / Jevtic, Petar (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2023
Description
With the increased demand for genetically modified T-cells in treating hematological malignancies, the need for an optimized measurement policy within current good manufacturing practices for better quality control has grown greatly. There are several steps involved in manufacturing gene therapy. For autologous-type gene therapy, these steps are, in chronological order: harvesting T-cells from the patient; activation of the cells (thawing the cryogenically frozen cells after transport to the manufacturing center); viral vector transduction; Chimeric Antigen Receptor (CAR) attachment during T-cell expansion; and infusion into the patient. The need for improved measurement heuristics within the transduction and expansion portions of the manufacturing process has reached an all-time high because of the costly nature of manufacturing the product, the high cycle time (approximately 14-28 days from activation to infusion), and the risk of external contamination during manufacturing, which negatively impacts patients post-infusion (causing illness or death).

The main objective of this work is to investigate and improve measurement policies, on the basis of quality control, in the transduction/expansion bio-manufacturing processes. More specifically, this study addresses the issue of measuring yield within the transduction/expansion phases of gene therapy. To do so, the process is modeled as a Markov Decision Process in which the decisions are chosen to create an overall optimal measurement policy for a set of predefined parameters.
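As a hedged, heavily simplified sketch of how such a measurement MDP might be solved (the phases, measurement cost, miss penalty, and failure risks are invented, and measuring is assumed to fully avert the miss penalty), backward induction over expansion phases could look like:

```python
# Hypothetical sketch: finite-horizon backward induction for a measure-or-skip
# decision at each expansion phase.

MEASURE_COST = 1.0            # cost of taking a yield measurement
MISS_PENALTY = 20.0           # expected cost of missing a failing batch
P_FAIL = [0.03, 0.10, 0.20]   # invented failure risk by expansion phase

def optimal_policy():
    """Backward induction over phases; True = measure at that phase."""
    value, policy = 0.0, []
    for p in reversed(P_FAIL):
        cost_measure = MEASURE_COST + value    # measuring averts the penalty
        cost_skip = p * MISS_PENALTY + value   # skipping risks the penalty
        policy.append(cost_measure <= cost_skip)
        value = min(cost_measure, cost_skip)
    return list(reversed(policy))

print(optimal_policy())   # [False, True, True]
```

With these invented numbers it is only worth paying for measurements in the later, riskier phases; the actual MDP in the thesis has a richer state (cell population, contamination status) but the same measure-versus-skip structure.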
ContributorsStarkey, Michaela (Author) / Pedrielli, Giulia (Thesis advisor) / Li, Jing (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2020