Matching Items (68)
Description
This thesis pursues a method to deregulate the electric distribution system and provide support to distributed renewable generation. A locational marginal price is used to determine prices across a distribution network in real time. The real-time pricing may provide benefits such as a reduced electricity bill, decreased peak demand, and lower emissions. This distribution locational marginal price (D-LMP) determines the cost of electricity at each node in the electrical network. The D-LMP comprises the cost of energy, the cost of losses, and a renewable energy premium. The renewable premium is an adjustable function to compensate 'green' distributed generation. A D-LMP is derived and formulated from the PJM model, and several alternative formulations are presented. The logistics and infrastructure of an implementation are briefly discussed. This study also takes advantage of D-LMP real-time pricing to implement distributed storage technology. A storage schedule optimization is developed using linear programming. Day-ahead LMPs and historical load data are used to form a predictive optimization. A test bed is created to represent a practical electric distribution system. Historical load, solar, and LMP data are used in the test bed to create a realistic environment. A power flow study and tabulation of the D-LMPs were conducted for twelve test cases. The test cases included various penetrations of solar photovoltaics (PV), system networking, and the inclusion of storage technology. Tables of the D-LMPs and network voltages are presented in this work. The final costs are summed and the basic economics are examined. The use of a D-LMP can lower costs across a system when advanced technologies are used. Storage improves system costs, decreases losses, improves system load factor, and bolsters voltage. Solar energy provides many of these same attributes at lower penetrations, but high penetrations have a detrimental effect on the system. System networking also increases these positive effects. The D-LMP has a positive impact on residential customer cost, while greatly increasing costs for the industrial sector. The D-LMP appears to have many positive impacts on the distribution system, but proper cost allocation needs further development.
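As an illustration of the storage-scheduling step, the sketch below poses a day-ahead battery arbitrage problem as a linear program, in the spirit of the optimization described above. All prices, battery parameters, and variable names are hypothetical stand-ins, not the thesis's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly day-ahead prices ($/MWh) and battery parameters
prices = np.array([22, 20, 19, 21, 25, 30, 38, 45, 40, 35, 33, 34,
                   36, 38, 42, 50, 55, 48, 40, 34, 30, 27, 24, 23], float)
T = len(prices)
s0, s_max = 2.0, 8.0        # initial / maximum stored energy (MWh)
c_max = d_max = 2.0         # charge / discharge power limits (MW)
eta = 0.9                   # charging efficiency

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
c_obj = np.concatenate([prices, -prices])   # pay to charge, earn to discharge

# State of charge s_t = s0 + sum_{k<=t} (eta*charge_k - discharge_k)
L = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * L, -L])
A_ub = np.vstack([A_soc, -A_soc])           # enforce 0 <= s_t <= s_max
b_ub = np.concatenate([np.full(T, s_max - s0), np.full(T, s0)])

bounds = [(0, c_max)] * T + [(0, d_max)] * T
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
schedule = res.x[:T] - res.x[T:]            # net charging (MW) per hour
print(f"optimal arbitrage cost: ${res.fun:.2f}")
```

Because the objective and the state-of-charge constraints are linear in the charge and discharge variables, the problem stays an LP and scales easily to day-ahead horizons.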
Contributors: Kiefer, Brian Daniel (Author) / Heydt, Gerald T (Thesis advisor) / Shunk, Dan (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more suitable for scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast for a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. The scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of different solution methods.
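To make the flavor of such a formulation concrete, here is a deliberately tiny MILP sketch in Python with PuLP: lots are assigned to qualified parallel machines to minimize makespan, ignoring the setups, batching, and bottleneck stages of the full model. All lot names, processing times, and qualification pairs are invented for illustration.

```python
import pulp

# Toy data: lots with processing times (hours), machines, and a
# hypothetical product-machine qualification matrix (allowed pairs)
lots = {"L1": 4, "L2": 3, "L3": 5, "L4": 2}
machines = ["M1", "M2"]
qualified = {("L1", "M1"), ("L2", "M1"), ("L2", "M2"),
             ("L3", "M2"), ("L4", "M1"), ("L4", "M2")}

prob = pulp.LpProblem("backend_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts(
    "x", [(l, m) for l in lots for m in machines], cat="Binary")
cmax = pulp.LpVariable("makespan", lowBound=0)
prob += cmax                                  # minimize the makespan

for l in lots:                                # each lot runs on one machine
    prob += pulp.lpSum(x[l, m] for m in machines) == 1
for l in lots:                                # respect qualification
    for m in machines:
        if (l, m) not in qualified:
            prob += x[l, m] == 0
for m in machines:                            # machine load bounds makespan
    prob += pulp.lpSum(lots[l] * x[l, m] for l in lots) <= cmax

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(cmax))
print({(l, m) for l in lots for m in machines if x[l, m].value() > 0.5})
```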
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This thesis was conducted to study and analyze the fund allocation processes adopted by different states in the United States to reduce the impact of the COVID-19 virus. Seven states and their funding methodologies were compared against the case counts within each state. The study also focused on the development of a physical distancing index based on three significant attributes. This index was then compared to the expenditure and case counts to support decision making.
A regression model was developed to analyze how each state's case counts played out against the model's predictions and the risk index.
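As a sketch of what such a regression might look like, the snippet below fits a linear model of case rates on per-capita relief expenditure and a distancing index, then inspects the residuals. Every number here is a made-up placeholder; the thesis's actual variables, states, and data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-state inputs: [relief spending per capita ($),
# physical distancing index (0-1)]; output: cases per 10k residents.
X = np.array([[1200, 0.62], [950, 0.48], [1430, 0.71], [800, 0.40],
              [1100, 0.55], [1310, 0.66], [1010, 0.52]], dtype=float)
y = np.array([480, 610, 390, 720, 540, 430, 580], dtype=float)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)        # effect of spending and index
print("R^2:", round(model.score(X, y), 3))
# Residuals flag states whose case counts deviate from the fitted trend
print("residuals:", np.round(y - model.predict(X), 1))
```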

Contributors: Jaisinghani, Shaurya (Author) / Mirchandani, Pitu (Thesis director) / Clough, Michael (Committee member) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear, Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time ordering of the data to extract features, and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem. One uses an out-of-bag sampling method, called OOBForest, and the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
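A rough sketch of the TSF idea, interval-based feature extraction (mean, standard deviation, slope) followed by a tree ensemble, is given below. The fixed intervals, toy data, and use of scikit-learn's RandomForestClassifier are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def interval_features(X, intervals):
    """Mean, standard deviation, and slope over each interval (TSF-style)."""
    cols = []
    for s, e in intervals:
        seg = X[:, s:e]
        t = np.arange(e - s) - (e - s - 1) / 2.0   # centered time index
        cols += [seg.mean(axis=1), seg.std(axis=1),
                 seg @ t / (t @ t)]                # least-squares slope
    return np.column_stack(cols)

# Toy data: class 1 carries a bump in the middle of each series
X = rng.normal(size=(300, 60))
y = rng.integers(0, 2, size=300)
X[y == 1, 25:35] += 1.0

intervals = [(0, 15), (15, 30), (25, 40), (30, 45), (45, 60)]
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(interval_features(Xtr, intervals), ytr)
print("test accuracy:", clf.score(interval_features(Xte, intervals), yte))
```

Each interval contributes three named, human-readable features, which is what makes the resulting forest easier to interpret than distance-based classifiers.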
Contributors: Deng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Hydropower generation is a clean, renewable energy source that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation for their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years. Consequently, the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the reasonable timing of rehabilitation is missed, an abrupt long-term outage could occur and additional unnecessary repairs and inefficiencies would follow. On the contrary, too-early replacement might waste revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, within which only forced-outage faults are considered as raw data for the reliability analyses. The mean cumulative repair functions (MCF) of each facility are determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also obtained to represent representative equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant. Two case studies are presented to enhance the understanding of the availability of each power plant and to present economic evaluations for modernization. Also, equipment in a hydropower plant is categorized into two groups based on reliability for determining modernization timing, and suitable replacement periods are obtained using simulation.
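The nonparametric MCF estimate behind Nelson's graph method is simple to compute: at each failure age, the MCF steps up by one over the number of units still under observation at that age. A minimal sketch with invented outage ages follows; the K-water fault data are not reproduced here.

```python
# Hypothetical forced-outage ages (years) for four generators, each
# observed up to a censoring age (the end of its record).
failures = {"G1": [3.1, 7.4, 12.0, 14.2],
            "G2": [5.6, 11.8],
            "G3": [2.2, 6.9, 9.5, 13.7, 15.8],
            "G4": [8.3]}
censor_age = {"G1": 16.0, "G2": 14.0, "G3": 17.5, "G4": 12.5}

# Nelson's estimator: at each outage age, the MCF increases by
# 1 / (number of units still under observation at that age).
events = sorted(t for ts in failures.values() for t in ts)
mcf, value = [], 0.0
for t in events:
    at_risk = sum(1 for c in censor_age.values() if c >= t)
    value += 1.0 / at_risk
    mcf.append((t, value))

for t, m in mcf:
    print(f"age {t:5.1f}  MCF {m:.3f}")
```

A plot of these points against age is the MCF curve; an upward-bending curve indicates a worsening repair rate, which is what motivates fitting the power law model.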
Contributors: Kwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The ASU COVID-19 testing lab process was developed to operate as the primary testing site for all ASU staff, students, and specified external individuals. Tests are collected at various collection sites, including a walk-in site at the SDFC and various drive-up sites on campus; analysis is conducted on the ASU campus, and results are distributed virtually to all patients via the Health Services patient portal. The following is a literature review of past implementations of various process improvement techniques and how they can be applied to the ABCTL testing process to achieve laboratory goals.

Contributors: Krell, Abby Elizabeth (Co-author) / Bruner, Ashley (Co-author) / Ramesh, Frankincense (Co-author) / Lewis, Gabriel (Co-author) / Barwey, Ishna (Co-author) / Myers, Jack (Co-author) / Hymer, William (Co-author) / Reagan, Sage (Co-author) / Compton, Carolyn (Thesis director) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Ionizing radiation used in patient diagnosis or therapy has short-term and long-term negative effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, there is no patient-centric information available to the patient or to Quality Assurance on the amount of organ dose received. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model that determines a set of geometry settings for the equipment and an energy level for a patient exam. The goal is to minimize the amount of absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed-integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the amount of absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize angle and table location while minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motions within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
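The robust idea, minimizing the worst-case organ dose over a finite set of organ-position scenarios while enforcing an image-quality requirement, can be written as a small min-max linear program. The sketch below is a toy continuous relaxation with invented dose and quality coefficients; the thesis's MIP adds discrete geometry and energy choices.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 candidate beam settings, 3 organ-position scenarios.
# dose[s, j] = dose to the critical organ per unit exposure of setting j
# under scenario s; quality[j] = image-quality contribution per unit.
dose = np.array([[0.8, 0.3, 0.5, 0.2],
                 [0.6, 0.4, 0.7, 0.3],
                 [0.9, 0.2, 0.4, 0.4]])
quality = np.array([1.0, 0.9, 1.1, 0.8])
q_min = 2.0                                  # required total image quality

n_s, n = dose.shape
c = np.zeros(n + 1); c[-1] = 1.0             # minimize z = worst-case dose
A_ub = np.vstack([np.hstack([dose, -np.ones((n_s, 1))]),  # dose_s.x <= z
                  np.hstack([-quality, [0.0]])])          # quality.x >= q_min
b_ub = np.concatenate([np.zeros(n_s), [-q_min]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + 1), method="highs")
print("exposures:", np.round(res.x[:n], 3))
print("worst-case organ dose:", round(res.fun, 3))
```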
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis presents an overview of the calculation and application of locational marginal prices in electric power systems, particularly pertaining to the distribution system. The terminology proposed is a distribution locational marginal price, or DLMP. The calculation of locational prices in distribution engineering is conjectured and discussed. The use of quadratic programming for this calculation is proposed and illustrated. A small four-bus test bed exemplifies the concept, and then the concept is expanded to the IEEE 34-bus distribution system. Alternatives for the calculation are presented, and approximations are reviewed. Active power losses in the system are modeled and incorporated by two different methods. These calculation methods are also applied to the 34-bus system. The results from each method are compared to results found using the PowerWorld simulator. The application of energy management using the DLMP to control load is analyzed as well. This analysis entails the use of the DLMP to cause certain controllable loads to decrease when the DLMP is high, and vice versa. Tests are done to illustrate the impact of energy management using DLMPs for residential, commercial, and industrial controllable loads. Results showing the dynamics of the loads are shown. The use and characteristics of the MATLAB function FMINCON are presented in an appendix.
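The connection between quadratic programming and locational prices is that, with quadratic generation costs, the dual variable of each bus's power balance constraint is that bus's marginal price. A minimal two-bus sketch using CVXPY follows; the network, costs, and loads are invented, and the thesis's 34-bus formulation (with losses) is considerably richer.

```python
import cvxpy as cp

# Minimal two-bus DC dispatch sketch (hypothetical data). Quadratic
# generation costs give a QP; the dual of each nodal balance constraint
# is that bus's locational marginal price ($/MWh).
g1, g2, f = cp.Variable(), cp.Variable(), cp.Variable()
load1, load2, f_max = 20.0, 80.0, 40.0   # MW

cost = 0.02 * cp.square(g1) + 18 * g1 + 0.05 * cp.square(g2) + 25 * g2
bal1 = g1 - f >= load1                   # bus 1 balance; f flows 1 -> 2
bal2 = g2 + f >= load2                   # bus 2 balance
prob = cp.Problem(cp.Minimize(cost),
                  [bal1, bal2, g1 >= 0, g2 >= 0, cp.abs(f) <= f_max])
prob.solve()
print("LMP bus 1:", round(float(bal1.dual_value), 2))
print("LMP bus 2:", round(float(bal2.dual_value), 2))
```

With the 40 MW line limit binding in this toy case, the two duals separate (about 20.4 vs. 29 $/MWh), which is exactly the congestion-driven price separation a DLMP is meant to expose.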
Contributors: Steffan, Nick (Author) / Heydt, Gerald T (Thesis advisor) / Hedman, Kory (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of distribution centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs, with multiple truck size selection and routing delivery to retailers. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially when transshipments and routing are involved. To find a good solution effectively, a two-phase solution methodology is derived: Phase I solves an integer programming model that includes all the constraints in the original model, except that the routings are simplified to direct shipments using estimated routing cost parameters. The Phase II model then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing cost and the effectiveness of the two-phase solution methodology are evaluated; the computational performance is found to be promising. The problem can be heuristically solved within a reasonable time frame for a broad range of problem sizes (one hour for an instance with 200 retailers). In addition, a model is generated for a similar network design problem considering direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios.
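The Phase I idea, a location-allocation integer program in which estimated routing costs stand in for the true routing costs, is sketched below in PuLP. The DC names, fixed costs, and estimated costs are all placeholders; the dissertation's model additionally covers multiple products, transshipments, and truck sizes, and Phase II would route within each opened DC's cluster.

```python
import pulp

# Hypothetical Phase-I data: DC fixed opening costs and estimated
# per-retailer routing costs (the direct-shipment approximation).
dcs = {"D1": 500, "D2": 650, "D3": 420}
retailers = ["R1", "R2", "R3", "R4", "R5"]
est_cost = {("D1", "R1"): 40, ("D1", "R2"): 55, ("D1", "R3"): 90,
            ("D1", "R4"): 70, ("D1", "R5"): 65,
            ("D2", "R1"): 80, ("D2", "R2"): 45, ("D2", "R3"): 35,
            ("D2", "R4"): 50, ("D2", "R5"): 60,
            ("D3", "R1"): 60, ("D3", "R2"): 70, ("D3", "R3"): 55,
            ("D3", "R4"): 30, ("D3", "R5"): 45}

m = pulp.LpProblem("phase1", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", list(dcs), cat="Binary")
x = pulp.LpVariable.dicts("assign", list(est_cost), cat="Binary")
m += pulp.lpSum(dcs[d] * y[d] for d in dcs) + \
     pulp.lpSum(est_cost[d, r] * x[d, r] for d, r in est_cost)
for r in retailers:                           # serve every retailer once
    m += pulp.lpSum(x[d, r] for d in dcs) == 1
for d, r in est_cost:                         # assign only to open DCs
    m += x[d, r] <= y[d]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("open DCs:", [d for d in dcs if y[d].value() > 0.5])
print("estimated total cost:", pulp.value(m.objective))
```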
Contributors: Xia, Mingjun (Author) / Askin, Ronald (Thesis advisor) / Mirchandani, Pitu (Committee member) / Zhang, Muhong (Committee member) / Kierstead, Henry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Transmission expansion planning (TEP) is a complex decision-making process that requires comprehensive analysis to determine the time, location, and number of electric power transmission facilities needed in the future power grid. This dissertation investigates the topic of solving TEP problems for large power systems. The dissertation can be divided into two parts. The first part focuses on developing a more accurate network model for TEP studies. First, a mixed-integer linear programming (MILP) based TEP model is proposed for solving multi-stage TEP problems. Compared with previous work, the proposed approach reduces the number of variables and constraints needed and improves the computational efficiency significantly. Second, the AC power flow model is applied to TEP models. Relaxations and reformulations are proposed to make the AC-model-based TEP problem solvable. Third, a convexified AC network model is proposed for TEP studies, with reactive power and off-nominal bus voltage magnitudes included in the model. A MILP-based loss model and its relaxations are also investigated. The second part of this dissertation investigates uncertainty modeling issues in the TEP problem. A two-stage stochastic TEP model is proposed, and decomposition algorithms based on the L-shaped method and progressive hedging (PH) are developed to solve the stochastic model. Results indicate that the stochastic TEP model can give a more accurate estimate of the annual operating cost than the deterministic TEP model, which focuses only on the peak load.
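A two-stage stochastic TEP can be written in extensive form: first-stage build decisions are shared across scenarios, and each scenario carries its own dispatch variables weighted by its probability. The toy PuLP sketch below (one candidate line, three invented load scenarios) shows the structure that the L-shaped method and PH decompose; all numbers are hypothetical.

```python
import pulp

# Scenario name -> (load in MW, probability); invented data
scenarios = {"low": (60, 0.3), "mid": (90, 0.5), "high": (120, 0.2)}
build_cost, line_cap = 900.0, 50.0        # annualized cost, line capacity
c_remote, c_local = 15.0, 40.0            # cheap remote vs. costly local gen

m = pulp.LpProblem("stochastic_tep", pulp.LpMinimize)
build = pulp.LpVariable("build", cat="Binary")          # stage-1 decision
g_r = pulp.LpVariable.dicts("g_remote", list(scenarios), lowBound=0)
g_l = pulp.LpVariable.dicts("g_local", list(scenarios), lowBound=0)

# Build cost plus probability-weighted (expected) operating cost
m += build_cost * build + pulp.lpSum(
    p * (c_remote * g_r[s] + c_local * g_l[s])
    for s, (_, p) in scenarios.items())
for s, (load, _) in scenarios.items():
    m += g_r[s] + g_l[s] == load          # meet each scenario's load
    m += g_r[s] <= line_cap * build       # imports need the new line

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("build line:", int(build.value()),
      "| expected total cost:", round(pulp.value(m.objective), 1))
```

Because the build variable couples all scenarios, the deterministic peak-load model and the stochastic model can disagree; here the expected-cost view is what justifies (or rejects) the candidate line.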
Contributors: Zhang, Hui (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald T (Thesis advisor) / Mittelmann, Hans D (Committee member) / Hedman, Kory W (Committee member) / Arizona State University (Publisher)
Created: 2013