Matching Items (50)
Description
Ionizing radiation used in patient diagnosis or therapy has negative short-term and long-term effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information is available to the patient or to Quality Assurance about the amount of organ dose received. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we develop a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed integer program. We perform polyhedral analysis and derive several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize the angle and table location while minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a mixed integer program to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motion within a radiology procedure. We minimize the absorbed dose for the critical organs across all input-data scenarios, which correspond to the positioning and size of the organs. The computational results indicate an increase of up to 26% in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
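
A minimal illustration of the kind of constrained dose-minimization search described above, written as a toy Python sketch: the angle, field-size, and energy grids, together with the dose and image-quality surrogates, are invented placeholders and not the dissertation's mixed integer program.

```python
# Illustrative sketch only: a toy search over discretized C-arm angles, field
# sizes, and tube energies that minimizes a hypothetical critical-organ dose
# subject to a minimum image-quality score. All data and functions are
# invented placeholders, not the dissertation's actual MIP model.
import itertools

angles = range(0, 91, 15)          # gantry angle (degrees), discretized
field_sizes = [10, 15, 20]          # radiation field size (cm), discretized
energies = [60, 70, 80, 90]         # tube potential (kVp), discretized

def organ_dose(angle, field, kvp):
    # placeholder dose surrogate: larger fields and higher energy -> more dose
    return 0.02 * field * kvp * (1 + 0.01 * angle)

def image_quality(angle, field, kvp):
    # placeholder quality surrogate: higher energy and larger field help
    return 0.05 * kvp + 0.3 * field - 0.02 * angle

QUALITY_MIN = 8.0                   # required diagnostic quality threshold

best = min(
    (s for s in itertools.product(angles, field_sizes, energies)
     if image_quality(*s) >= QUALITY_MIN),
    key=lambda s: organ_dose(*s),
)
print("lowest-dose feasible setting (angle, field, kVp):", best)
```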
ContributorsKhodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2013
Description
During the initial stages of experimentation, there are usually a large number of factors to be investigated. Fractional factorial (2^(k-p)) designs are particularly useful during this initial phase of experimental work. These experiments, often referred to as screening experiments, help reduce the large number of factors to a smaller set. The 16-run regular fractional factorial designs for six, seven, and eight factors are in common use. These designs allow clear estimation of all main effects when the three-factor and higher-order interactions are negligible, but all two-factor interactions are aliased with each other, making estimation of these effects problematic without additional runs. Alternatively, certain nonregular designs, called no-confounding (NC) designs by Jones and Montgomery (Jones & Montgomery, Alternatives to resolution IV screening designs in 16 runs, 2010), partially confound the main effects with the two-factor interactions but do not completely confound any two-factor interactions with each other. The NC designs are useful for independently estimating main effects and two-factor interactions without additional runs. While several methods have been suggested for the analysis of data from nonregular designs, stepwise regression is familiar to practitioners, available in commercial software, and widely used in practice. Given that an NC design has been run, the performance of stepwise regression for model selection is unknown. In this dissertation I present a comprehensive simulation study evaluating stepwise regression for analyzing both regular fractional factorial and NC designs. Next, the projection properties of the six-, seven-, and eight-factor NC designs are studied. Studying the projection properties of these designs allows the development of methods to analyze them. Lastly, the designs and projection properties of 9- to 14-factor NC designs projected onto three and four factors are presented, along with recommendations on analysis methods for these designs.
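
As a rough companion to the simulation study described above, the following Python sketch runs forward stepwise selection by AIC on a simulated 16-run, six-factor two-level design; the design, true model, and noise level are assumptions for illustration, not the NC designs or settings evaluated in the dissertation.

```python
# Hedged sketch: forward stepwise selection by AIC on simulated responses from
# a hypothetical 16-run, 6-factor two-level design with all two-factor
# interactions as candidate terms.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k = 16, 6
X = rng.choice([-1.0, 1.0], size=(n, k))            # stand-in +/-1 design
terms = {f"x{i+1}": X[:, i] for i in range(k)}
terms.update({f"x{i+1}x{j+1}": X[:, i] * X[:, j]     # two-factor interactions
              for i, j in combinations(range(k), 2)})
y = 3 * terms["x1"] - 2 * terms["x2x3"] + rng.normal(0, 1, n)  # assumed true model

def aic(cols):
    A = np.column_stack([np.ones(n)] + [terms[c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (len(cols) + 1)

selected, remaining = [], set(terms)
current = aic(selected)
while remaining and len(selected) < n - 2:           # guard against saturation
    best_term = min(remaining, key=lambda t: aic(selected + [t]))
    if aic(selected + [best_term]) >= current:
        break
    selected.append(best_term)
    remaining.remove(best_term)
    current = aic(selected)
print("terms entered by forward selection:", selected)
```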
ContributorsShinde, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Fowler, John (Committee member) / Jones, Bradley (Committee member) / Arizona State University (Publisher)
Created2012
Description
The upstream transmission of bulk data files in Ethernet passive optical networks (EPONs) arises from a number of applications, such as data back-up and multimedia file upload. Existing upstream transmission approaches lead to severe delays for conventional packet traffic when best-effort file and packet traffic are mixed. I propose and evaluate an exclusive interval for bulk transfer (EIBT) transmission strategy that reserves an EIBT for file traffic in an EPON polling cycle. I optimize the duration of the EIBT to minimize a weighted sum of packet and file delays. Through mathematical delay analysis and verifying simulation, it is demonstrated that the EIBT approach preserves small delays for packet traffic while efficiently serving bulk data file transfers. Dynamic circuits are well suited for applications that require predictable service with a constant bit rate for a prescribed period of time, such as demanding e-science applications. Past research on upstream transmission in passive optical networks (PONs) has mainly considered packet-switched traffic and has focused on optimizing packet-level performance metrics, such as reducing mean delay. This study proposes and evaluates a dynamic circuit and packet PON (DyCaPPON) that provides dynamic circuits along with packet-switched service. DyCaPPON provides (i) flexible packet-switched service through dynamic bandwidth allocation in periodic polling cycles, and (ii) consistent circuit service by allocating each active circuit a fixed-duration upstream transmission window during each fixed-duration polling cycle. I analyze circuit-level performance metrics, including the blocking probability of dynamic circuit requests in DyCaPPON, through a stochastic knapsack-based analysis. Through this analysis I also determine the bandwidth occupied by admitted circuits; the remaining bandwidth is available for packet traffic, and I analyze the resulting mean delay of packet traffic. Through extensive numerical evaluations and verifying simulations, the circuit blocking and packet delay trade-offs in DyCaPPON are demonstrated. An extended version of DyCaPPON designed for light-traffic situations is introduced as well.
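
The stochastic knapsack analysis mentioned above can be illustrated with the classic Kaufman-Roberts recursion; the Python sketch below uses invented capacity, class bandwidths, and offered loads, and is not the DyCaPPON derivation itself.

```python
# Illustrative sketch (not the dissertation's derivation): the Kaufman-Roberts
# recursion for a stochastic knapsack, giving blocking probabilities for
# circuit classes sharing a fixed upstream capacity. All parameters are made up.
C = 100                       # upstream capacity in bandwidth units
b = [2, 5, 10]                # bandwidth demand of each circuit class
rho = [8.0, 3.0, 1.5]         # offered load (erlangs) of each class

q = [0.0] * (C + 1)
q[0] = 1.0
for j in range(1, C + 1):     # un-normalized occupancy distribution
    q[j] = sum(b[k] * rho[k] * q[j - b[k]] for k in range(len(b)) if j >= b[k]) / j
norm = sum(q)
p = [x / norm for x in q]     # P(j bandwidth units occupied)

for k, bk in enumerate(b):    # a class-k request is blocked if < bk units free
    blocking = sum(p[j] for j in range(C - bk + 1, C + 1))
    print(f"class {k} (b={bk}): blocking probability {blocking:.4f}")
```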
ContributorsWei, Xing (Author) / Reisslein, Martin (Thesis advisor) / Fowler, John (Committee member) / Palais, Joseph (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created2014
Description
In a healthcare setting, the Sterile Processing Department (SPD) provides ancillary services to the Operating Room (OR), Emergency Room, Labor & Delivery, and off-site clinics. SPD's function is to reprocess reusable surgical instruments and return them to their home departments. The management of surgical instruments and medical devices can impact patient safety and hospital revenue. Any time instrumentation or devices are not available or are not fit for use, patient safety and revenue can be negatively impacted. One step of the instrument reprocessing cycle is sterilization. Steam sterilization is the sterilization method used for the majority of surgical instruments and is preferred to immediate-use steam sterilization (IUSS) because terminally sterilized items can be stored until needed. IUSS items must be used promptly and cannot be stored for later use. IUSS is intended for emergency situations, not as a regular course of action. Unfortunately, IUSS is used to compensate for inadequate inventory levels, scheduling conflicts, and miscommunications. If IUSS is viewed as an adverse event, then monitoring IUSS incidences can help healthcare organizations meet patient safety and financial goals while aiding process improvement efforts. This work recommends applying statistical process control methods to IUSS incidents and illustrates the use of control charts for IUSS occurrences through a case study and analysis of the control charts for data from a health care provider. Furthermore, this work considers the application of data mining methods to IUSS occurrences and presents a representative example of data mining applied to IUSS occurrences. This extends the application of statistical process control and data mining in healthcare.
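
As a hedged illustration of the control-chart idea, the Python sketch below computes an individuals and moving-range (I-MR) chart for a made-up series of days between IUSS events; the data and the choice of chart are assumptions, not the provider's case-study analysis.

```python
# Hedged sketch: an I-MR chart with standard SPC constants for an invented
# series of "days between IUSS events". The thesis's case-study data are not
# reproduced here.
import numpy as np

days_between = np.array([3, 7, 2, 9, 4, 1, 12, 5, 6, 2, 8, 3], dtype=float)

mr = np.abs(np.diff(days_between))        # moving ranges of consecutive points
x_bar, mr_bar = days_between.mean(), mr.mean()

# Standard I-MR constants for moving ranges of size 2: 2.66 = 3/d2, D4 = 3.267
ucl_x, lcl_x = x_bar + 2.66 * mr_bar, max(x_bar - 2.66 * mr_bar, 0.0)
ucl_mr = 3.267 * mr_bar

print(f"I chart:  CL={x_bar:.2f}  LCL={lcl_x:.2f}  UCL={ucl_x:.2f}")
print(f"MR chart: CL={mr_bar:.2f}  UCL={ucl_mr:.2f}")
print("points beyond I-chart limits:",
      np.where((days_between > ucl_x) | (days_between < lcl_x))[0])
```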
ContributorsWeart, Gail (Author) / Runger, George C. (Thesis advisor) / Li, Jing (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Economic and environmental concerns necessitate the preference for retrofits over new construction in manufacturing facilities when incorporating modern technology, expanding production, becoming more energy-efficient, and improving operational efficiency. Despite the technical and functional challenges of retrofits, the project team is expected to reduce costs, ensure time to market, and maintain a high standard for quality and safety. Thus, the construction supply chain faces increasing pressure to improve performance by ensuring better labor productivity, among other factors, for efficiency gains. Building Information Modeling (BIM) and off-site prefabrication are regarded as effective management and production methods to meet these goals. However, there are limited studies assessing their impact on labor productivity within the constraints of a retrofit environment. This study fills the gap by exploring the impact of BIM on labor productivity (metric) in retrofits (context).

BIM use for process tool installation at a semiconductor manufacturing facility serves as an ideal environment for practical observations. Direct site observations indicate a positive correlation between disruptions in the workflow attributed to immature use of BIM, waste due to rework, and high non-value-added time at the labor work face. Root-cause analysis traces the origins of these disruptions to decision factors that are critical for the planning, management, and implementation of BIM. The analysis shows that stakeholders involved in decision-making during BIM planning, management, and implementation identify BIM value based on its immediate utility for their own BIM use rather than its utility for the customers of the process. This differing value system manifests as unreliable and inaccurate information at the labor work face.

Grounding the analysis in theory and observations, the author hypothesizes that stakeholders of a construction project value BIM and BIM aspects (i.e., geometrical information, descriptive information, and workflows) differently, and that the accuracy of geometrical information is critical for improving labor productivity when using prefabrication in retrofit construction. In conclusion, this research presents a BIM-value framework associating stakeholders with their relative value for BIM, the decision factors for the planning, management, and implementation of BIM, and the potential impact of those decisions on labor productivity.
ContributorsGhosh, Arundhati (Author) / Chasey, Allan D (Thesis advisor) / Laroche, Dominique-Claude (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created2015
Description
This thesis presents research on innovative AC transmission design concepts and focused mathematics for electric power transmission design. The focus relates to compact designs, high temperature low sag conductors, and high phase order design. The motivation of the research is to increase transmission capacity with limited right of way.

Regarding compact phase spacing, the primary focus is insight into the possibility of increasing the security rating of transmission lines through increased mutual coupling and decreased positive-sequence reactance. Compact design can reduce the required corridor width to as little as 31% of traditional designs, especially with the use of inter-phase spacers. Transmission lines are typically built with conservative clearances; with the difficulty of obtaining right of way, more compact phase spacing may be needed. With appropriate design consideration, significant compaction can increase the transmission line security (steady-state stability) rating by 5-25%. In addition, other advantages and disadvantages of compact phase design are analyzed. The next two topics, high temperature low sag conductors and high phase order designs, also make use of compact designs.
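
The security-rating argument can be sketched with the textbook steady-state transfer relation P = (Vs·Vr/X)·sin(δ); the Python snippet below uses hypothetical reactance values (assumptions, not values from the thesis) to show how reducing the positive-sequence reactance raises the steady-state limit.

```python
# Hedged sketch: the textbook steady-state power transfer relation, used only
# to illustrate how lower positive-sequence reactance from phase compaction
# raises the security (steady-state stability) rating. Reactances are made up.
import math

Vs = Vr = 1.0                     # sending/receiving voltages, per unit
x_conventional = 0.60             # hypothetical positive-sequence reactance (pu)
x_compact = 0.51                  # hypothetical 15% reduction from compaction

def p_max(x):
    # maximum transfer occurs at delta = 90 degrees
    return Vs * Vr / x * math.sin(math.radians(90))

gain = (p_max(x_compact) / p_max(x_conventional) - 1) * 100
print(f"steady-state limit increase from compaction: {gain:.1f}%")
```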

High temperature low sag (HTLS) conductors are used to increase the thermal capacity of a transmission line up to two times the capacity of traditional conductors. HTLS conductors can operate continuously at 150-210°C and in emergencies at 180-250°C (depending on the HTLS conductor). ACSR conductors operate continuously at 50-110°C and in emergency conditions at 110-150°C, depending on the utility, line, and location. HTLS conductors have sag reduced by up to 33% compared to traditional ACSR conductors at 100°C and by up to 22% at 180°C. In addition to the thermal rating improvement HTLS offers, the possibility of using HTLS conductors to indirectly reduce tower height and compact the phases to increase the security limit is investigated. Utilizing HTLS conductors to increase span length and decrease the number of transmission towers is also investigated. The phase compaction or increased span length is accomplished by exploiting the improved physical sag characteristics of HTLS conductors.
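
A hedged sketch of the sag comparison, using the standard parabolic level-span approximation sag = wS²/(8H) with invented conductor parameters (not values from the dissertation):

```python
# Hedged sketch: parabolic (level-span) sag approximation, used only to show
# how a conductor with lower effective weight-to-tension behaviour sags less
# over the same span. Parameters are invented placeholders.
def sag_m(weight_n_per_m, span_m, horizontal_tension_n):
    return weight_n_per_m * span_m ** 2 / (8.0 * horizontal_tension_n)

span = 300.0                                   # metres
acsr_sag = sag_m(weight_n_per_m=15.0, span_m=span, horizontal_tension_n=25_000)
htls_sag = sag_m(weight_n_per_m=13.0, span_m=span, horizontal_tension_n=28_000)

reduction = (1 - htls_sag / acsr_sag) * 100
print(f"ACSR sag {acsr_sag:.2f} m, HTLS sag {htls_sag:.2f} m "
      f"({reduction:.0f}% less for these assumed parameters)")
```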

High phase order (HPO) design focuses on the ability to increase the power capacity for a given right of way. For example, a six-phase line would have a thermal rating of approximately 173%, a security rating of approximately 289%, and a surge impedance loading (SIL) of approximately 300% of a double-circuit three-phase line with equal right of way and equal line-to-line voltage. In addition, this research focuses on algorithm and model development for HPO systems. A study of the impedance of HPO lines is presented. The line impedance matrices for some high phase order configurations are circulant Toeplitz matrices. Properties of circulant matrices are developed for the generalized sequence impedances of HPO lines. A method to calculate the sequence impedances utilizing unique distance-parameter algorithms is presented, along with a novel method to design the sequence impedances to specifications. Utilizing impedance matrices in circulant form, a generalized form of the sequence components transformation matrix is presented, and a generalized voltage unbalance factor is discussed for HPO transmission lines. Algorithms to calculate the number of fault types and the number of significant fault types for an n-phase system are presented. Finally, transposition of HPO transmission lines is discussed, and a generalized fault analysis of a high phase order circuit is presented along with an HPO analysis program.
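
The circulant-matrix property can be demonstrated numerically: a circulant line-impedance matrix is diagonalized by the n-phase symmetrical-components (DFT) transformation, so its eigenvalues are the sequence impedances. The Python sketch below uses invented six-phase impedance entries purely to illustrate this property, not the dissertation's line data.

```python
# Hedged sketch: a circulant impedance matrix is diagonalized by the n-phase
# symmetrical-components (DFT) transformation; the diagonal entries are the
# sequence impedances. The six-phase entries below are invented placeholders.
import numpy as np

n = 6
first_row = np.array([0.30 + 1.00j, 0.05 + 0.40j, 0.04 + 0.30j,
                      0.03 + 0.25j, 0.04 + 0.30j, 0.05 + 0.40j])  # symmetric, circulant
Z = np.array([np.roll(first_row, k) for k in range(n)])           # Z[i, j] = r[(j - i) % n]

a = np.exp(2j * np.pi / n)
A = np.array([[a ** (i * k) for k in range(n)] for i in range(n)]) / np.sqrt(n)

Z_seq = A.conj().T @ Z @ A                 # should be (numerically) diagonal
print("max off-diagonal magnitude:",
      np.max(np.abs(Z_seq - np.diag(np.diag(Z_seq)))))
print("sequence impedances:", np.round(np.diag(Z_seq), 4))
```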

The work presented has the objective of increasing the use of rights of way for bulk power transmission through the use of innovative transmission technologies. The purpose of this dissertation is to lay down some of the building blocks and to help make the three technologies discussed practical applications in the future.
ContributorsPierre, Brian J (Author) / Heydt, Gerald (Thesis advisor) / Karady, George G. (Committee member) / Shunk, Dan (Committee member) / Vittal, Vijay (Committee member) / Arizona State University (Publisher)
Created2015
Description
A Fiber-Wireless (FiWi) network is a future network configuration that uses optical fiber as the backbone transmission medium and provides wireless access to the end user. Our study focuses on dynamic bandwidth allocation (DBA) algorithms for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in research, a comprehensive study of DBA is conducted in this thesis, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. By conducting a series of simulations of DBAs using different components, we found that grant sizing has the strongest impact on average packet delay, and grant scheduling also has a significant impact on the average packet delay; grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, whereas grant sizing has only a modest impact on the stability limit. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
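
A hedged sketch of a generic "limited grant with excess distribution" sizing rule is shown below; the function name, byte values, and sharing rule are simplifying assumptions, not the exact policy evaluated in the thesis.

```python
# Hedged sketch: each ONU is granted at most a fixed limit; unused allowance
# from lightly loaded ONUs is pooled and shared among ONUs whose requests
# exceed the limit. A simplified illustration only.
def limited_with_excess(requests_bytes, limit_bytes):
    grants = [min(r, limit_bytes) for r in requests_bytes]
    excess_pool = sum(limit_bytes - g for g in grants)            # unused credits
    needy = [i for i, r in enumerate(requests_bytes) if r > limit_bytes]
    for i in needy:                                               # share the pool equally
        share = excess_pool // len(needy)
        extra = min(requests_bytes[i] - grants[i], share)
        grants[i] += extra
    return grants

print(limited_with_excess([2000, 15000, 500, 20000], limit_bytes=8000))
```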
ContributorsZhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
This thesis pursues a method to deregulate the electric distribution system and provide support to distributed renewable generation. A locational marginal price is used to determine prices across a distribution network in real time. Real-time pricing may provide benefits such as a reduced electricity bill, decreased peak demand, and lower emissions. This distribution locational marginal price (D-LMP) determines the cost of electricity at each node in the electrical network. The D-LMP comprises the cost of energy, the cost of losses, and a renewable energy premium. The renewable premium is an adjustable function to compensate 'green' distributed generation. A D-LMP is derived and formulated from the PJM model, along with several alternative formulations, and the logistics and infrastructure of an implementation are briefly discussed. This study also takes advantage of D-LMP real-time pricing to implement distributed storage technology. A storage schedule optimization is developed using linear programming; day-ahead LMPs and historical load data are used for a predictive optimization. A test bed is created to represent a practical electric distribution system, using historical load, solar, and LMP data to create a realistic environment. A power flow and tabulation of the D-LMPs were conducted for twelve test cases. The test cases included various penetrations of solar photovoltaics (PV), system networking, and the inclusion of storage technology. Tables of the D-LMPs and network voltages are presented in this work. The final costs are summed and the basic economics are examined. The use of a D-LMP can lower costs across a system when advanced technologies are used. Storage improves system costs, decreases losses, improves the system load factor, and bolsters voltage. Solar energy provides many of these same benefits at lower penetrations, but high penetrations have a detrimental effect on the system. System networking also increases these positive effects. The D-LMP has a positive impact on residential customer cost while greatly increasing costs for the industrial sector. The D-LMP appears to have many positive impacts on the distribution system, but proper cost allocation needs further development.
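
The storage-schedule optimization can be illustrated as a small linear program; the Python sketch below schedules a lossless battery against made-up hourly prices using scipy's linprog, and is not the thesis's exact formulation.

```python
# Hedged sketch: a day-ahead battery schedule as a small LP that minimizes
# energy cost against hourly prices. Prices and battery parameters are invented;
# losses, efficiency, and network constraints are ignored for simplicity.
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 - 20 * np.cos(np.linspace(0, 2 * np.pi, T, endpoint=False))  # $/MWh, invented
p_max, e_max = 2.0, 6.0          # MW charge/discharge limit, MWh energy capacity

# variables x = [charge_0..23, discharge_0..23]; minimize price*(charge - discharge)
c = np.concatenate([price, -price])
L = np.tril(np.ones((T, T)))     # cumulative-sum operator -> state of charge
A_ub = np.vstack([np.hstack([L, -L]),    # soc_t <= e_max
                  np.hstack([-L, L])])   # soc_t >= 0
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T), method="highs")
net = res.x[:T] - res.x[T:]      # net charging per hour (negative = discharging)
print("arbitrage value over the day: $%.2f" % (-price @ net))
```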
ContributorsKiefer, Brian Daniel (Author) / Heydt, Gerald T (Thesis advisor) / Shunk, Dan (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created2011
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes have been strategies many organizations adopt to reduce cost and increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked towards Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a Lean Six Sigma deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment. Finally, a case study demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing; the research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges of deploying the methodology in both spaces.
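
As a hedged, single-period simplification of the portfolio model described above (the dissertation's formulation is multi-period), the Python sketch below solves a 0-1 knapsack by dynamic programming with invented project savings and costs.

```python
# Hedged sketch: selecting a Lean Six Sigma project portfolio as a 0-1 knapsack
# maximizing expected net savings under one budget. Values are invented.
projects = [                      # (name, expected net savings $k, resource cost $k)
    ("A", 120, 40), ("B", 200, 70), ("C", 90, 30), ("D", 150, 60), ("E", 60, 20),
]
budget = 120

# dp[b] = (best savings achievable with budget b, chosen project names)
dp = [(0, [])] * (budget + 1)
for name, savings, cost in projects:
    for b in range(budget, cost - 1, -1):   # iterate budget downward (0-1 knapsack)
        cand = (dp[b - cost][0] + savings, dp[b - cost][1] + [name])
        if cand[0] > dp[b][0]:
            dp[b] = cand

best_savings, chosen = dp[budget]
print(f"portfolio {chosen}: expected net savings ${best_savings}k under a ${budget}k budget")
```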
ContributorsDuarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
For more than twenty years, clinical researchers have been publishing data regarding the incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The specifics of AEs (rates, SOPP failures, timing, and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability over the three years of the study, including cumulative distribution and hazard functions of time between AEs and moving-range statistical process control (SPC) charts for days between failures of each type. This is the first study, to our knowledge, to consider the reliability of SOPP from both the patient's and the hospital administration's perspective. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
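
A hedged illustration of the time-to-event analysis: the Python sketch below computes a Kaplan-Meier estimate of time to first AE with invented durations and censoring flags; it is not the study's data, nor the full Cox proportional hazards model.

```python
# Hedged sketch: Kaplan-Meier estimate of "time to first adverse event", with
# discharge before any AE treated as censoring. Durations and event flags are
# invented placeholders, not the study's data.
import numpy as np

days = np.array([2, 5, 3, 8, 4, 6, 10, 7, 3, 9], dtype=float)  # days in hospital
event = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])               # 1 = AE observed, 0 = censored

order = np.argsort(days)
days, event = days[order], event[order]

survival = 1.0
print("time  at-risk  S(t)")
for t in np.unique(days):
    d = int(((days == t) & (event == 1)).sum())   # AEs occurring at time t
    n = int((days >= t).sum())                    # patients still at risk at time t
    if d:
        survival *= (1 - d / n)
    print(f"{t:4.0f}  {n:7d}  {survival:.3f}")
# the cumulative probability of an AE by time t is 1 - S(t)
```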
ContributorsHuddleston, Jeanne (Author) / Fowler, John (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Gel, Esma (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created2012