Matching Items (6)
Description
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered around wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Building on these spatial and temporal characterizations, finite-state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised to handle various factors (including variations of operating conditions, forced system topology changes, and loss of critical synchrophasor measurements) that can have a significant impact on the performance of conventional data-mining-based on-line DSA schemes. Then, in the context of post-disturbance analysis, the detection and localization of line outages are investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and that line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations in the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
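As an aside for readers unfamiliar with finite-state Markov chain forecasting, the sketch below illustrates the general idea in Python. It is not the dissertation's code: the four-state quantization, the toy power series, and all names are hypothetical, and the dissertation's models additionally exploit spatial information across the wind farm, which this temporal-only sketch omits.

```python
# Illustrative sketch (not the dissertation's code): a finite-state Markov chain
# point forecast of wind power, assuming the power series has been quantized
# into discrete states from historical data.
import numpy as np

def fit_transition_matrix(states, n_states):
    """Estimate the Markov transition matrix from a sequence of state indices."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # avoid division by zero for unvisited states
    return counts / row_sums

def point_forecast(current_state, transition, state_centers):
    """One-step-ahead forecast: expected power under the transition distribution."""
    return transition[current_state] @ state_centers

# Hypothetical usage with a short quantized wind power record (MW)
power = np.array([12.0, 15.2, 14.8, 20.1, 22.3, 19.7, 13.5])
edges = np.linspace(power.min(), power.max(), 5)          # 4 states
states = np.clip(np.digitize(power, edges) - 1, 0, 3)
centers = 0.5 * (edges[:-1] + edges[1:])
P = fit_transition_matrix(states, 4)
print(point_forecast(states[-1], P, centers))
```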
Contributors: He, Miao (Author) / Zhang, Junshan (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Hedman, Kory (Committee member) / Si, Jennie (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Contemporary methods for dynamic security assessment (DSA) mainly rely on time domain simulations to explore the influence of large disturbances in a power system. These methods are computationally intensive, especially when the system operating point changes continually. The trajectory sensitivity method, when implemented and utilized as a complement to the existing DSA time domain simulation routine, can provide valuable insights into the system variation in response to system parameter changes. The implementation of the trajectory sensitivity analysis is based on an open source power system analysis toolbox called PSAT. Eight categories of sensitivity elements have been implemented and tested. The accuracy assessment of the implementation demonstrates the validity of both the theory and the implementation. The computational burden introduced by the additional sensitivity equations is relieved by two innovative methods: one employs a cluster to perform the sensitivity calculations in parallel; the other develops a modified very dishonest Newton method in conjunction with the latest sparse matrix processing technology. The relation between the linear approximation accuracy and the perturbation size is also studied numerically. It is found that there is a fixed connection between the linear approximation accuracy and the perturbation size, so this finding can serve as a general application guide to evaluate the accuracy of the linear approximation. The applicability of the trajectory sensitivity approach to a large realistic network has been demonstrated in detail. This research work applies the trajectory sensitivity analysis method to the Western Electricity Coordinating Council (WECC) system. Several typical power system dynamic security problems have been addressed, including the transient angle stability problem, the voltage stability problem considering load modeling uncertainty, and the calculation of transient-stability-constrained interface real power flow limits. In addition, a method based on the trajectory sensitivity approach and model predictive control has been developed to determine an under-frequency load shedding strategy for real-time stability assessment. These applications demonstrate the efficacy and accuracy of the trajectory sensitivity method in handling these traditional power system stability problems.
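To make the trajectory sensitivity idea concrete, here is a minimal sketch under simplifying assumptions: a toy one-state system rather than a power system model, and a finite-difference sensitivity rather than PSAT's analytical sensitivity equations. It only illustrates how a sensitivity turns a nominal trajectory into a first-order prediction of a perturbed trajectory, which is the linear approximation whose accuracy versus perturbation size the dissertation studies.

```python
# Illustrative sketch (not the PSAT-based implementation): approximate a trajectory
# sensitivity dx/dp by finite differences and use it in the first-order prediction
# x(t; p + dp) ~= x(t; p) + (dx/dp) * dp.  The dynamics are a hypothetical toy system.
import numpy as np

def simulate(p, x0=1.0, dt=0.01, steps=500):
    """Forward-Euler integration of a toy system dx/dt = -p * x."""
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        x[k + 1] = x[k] + dt * (-p * x[k])
    return x

p0, dp = 2.0, 0.05
nominal = simulate(p0)
sensitivity = (simulate(p0 + 1e-6) - nominal) / 1e-6      # finite-difference dx/dp
linear_prediction = nominal + sensitivity * dp             # first-order approximation
exact = simulate(p0 + dp)
print("max linear-approximation error:", np.abs(exact - linear_prediction).max())
```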
Contributors: Hou, Guanji (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald (Committee member) / Tylavsky, Daniel (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
One necessary condition for the two-pass risk premium estimator to be consistent and asymptotically normal is that the beta matrix in a proposed linear asset pricing model has full column rank. I first investigate the asymptotic properties of the risk premium estimators and the related t-test and Wald test statistics when the full rank condition fails. I show that the beta risk of useless factors, or of multiple proxy factors for a true factor, is priced more often than it should be at the nominal size in asset pricing models that omit some true factors, while under the null hypothesis that the risk premiums of the true factors are equal to zero, the beta risk of the true factors is priced less often than the nominal size. The simulation results are consistent with the theoretical findings. Hence, the selection of factors for a proposed factor model should not be based solely on their estimated risk premiums. In response to this problem, I propose an alternative estimation of the underlying factor structure. Specifically, I propose to use the linear combination of factors weighted by the eigenvectors of the inner product of the estimated beta matrix. I further propose a new method to estimate the rank of the beta matrix in a factor model. For this method, the idiosyncratic components of asset returns are allowed to be correlated both over different cross-sectional units and over different time periods. The estimator I propose is easy to use because it is computed from the eigenvalues of the inner product of an estimated beta matrix. Simulation results show that the proposed method works well even in small samples. The analysis of US individual stock returns suggests that there are six common risk factors among the thirteen factor candidates used. The analysis of portfolio returns reveals that the estimated number of common factors changes depending on how the portfolios are constructed, and the number of risk sources found in portfolio returns is generally smaller than the number found in individual stock returns.
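As a rough illustration of the rank idea (hypothetical data, not the estimator proposed in the dissertation), the sketch below runs a first-pass time-series regression of returns on candidate factors and inspects the eigenvalues of the inner product of the estimated beta matrix; a sharp drop in the eigenvalues suggests the kind of rank deficiency that undermines the two-pass estimator.

```python
# Illustrative sketch: OLS estimation of a beta matrix and an eigenvalue check of
# its inner product as a rough gauge of the number of distinct priced factors.
# The simulated data, factor counts, and cutoff reasoning are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 50, 3                       # periods, assets, candidate factors
factors = rng.standard_normal((T, K))
true_beta = np.column_stack([rng.standard_normal((N, 2)),   # only 2 factors matter
                             np.zeros(N)])
returns = factors @ true_beta.T + 0.5 * rng.standard_normal((T, N))

# First-pass time-series regression: returns on factors (with intercept)
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
beta_hat = coef[1:].T                      # N x K matrix of estimated betas

eigvals = np.linalg.eigvalsh(beta_hat.T @ beta_hat)[::-1]
print("eigenvalues of beta'beta:", np.round(eigvals, 2))
# A sharp drop after the second eigenvalue suggests rank ~ 2 in this toy setup.
```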
Contributors: Wang, Na (Author) / Ahn, Seung C. (Thesis advisor) / Kallberg, Jarl G. (Committee member) / Liu, Crocker H. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Photovoltaic (PV) power generation has the potential to cause a significant impact on power system reliability since its total installed capacity is projected to increase at a significant rate. PV generation can be described as an intermittent and variable resource because its production is influenced by ever-changing environmental conditions. The study in this dissertation focuses on the influence of PV generation on transmission system reliability. This is a concern because PV generation output is integrated into present power systems at various voltage levels and may significantly affect the power flow patterns. This dissertation applies a probabilistic power flow (PPF) algorithm to evaluate the influence of PV generation uncertainty on transmission system performance. A cumulant-based PPF algorithm suitable for large systems is used. Correlation among adjacent PV resources is considered. Three types of approximation expansions based on cumulants, namely the Gram-Charlier expansion, the Edgeworth expansion and the Cornish-Fisher expansion, are compared, and their properties, advantages and deficiencies are discussed. Additionally, a novel probabilistic model of PV generation is developed to obtain the probability density function (PDF) of PV generation production based on environmental conditions. This dissertation also proposes a novel PPF algorithm that considers the conventional generation dispatching operation used to balance PV generation uncertainties. It is prudent to include generation dispatch in the PPF algorithm since the dispatching strategy compensates for PV generation injections and influences the uncertainty results. Furthermore, this dissertation proposes a probabilistic optimal power dispatching strategy that considers uncertainty in the economic dispatch and optimizes the expected value of the total cost with the overload probability as a constraint. The proposed PPF algorithm with the three expansions is compared with Monte Carlo simulations (MCS) using results for a 2497-bus representation of the Arizona area of the Western Electricity Coordinating Council (WECC) system. The PDFs of the bus voltages, line flows and slack bus production are computed and used to identify the confidence interval, the over-limit probability and the expected over-limit time of the objective variables. The proposed algorithm is of significant relevance to operating and planning studies of transmission systems with PV generation installed.
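For readers unfamiliar with cumulant-based expansions, the sketch below shows a fourth-order Gram-Charlier (Type A) approximation of a PDF from its cumulants, the kind of expansion used to recover densities of quantities such as bus voltages or line flows. It is a stand-alone illustrative function, not the dissertation's PPF implementation, and the test values are hypothetical.

```python
# Illustrative sketch: fourth-order Gram-Charlier (Type A) approximation of a PDF
# from its first four cumulants k1..k4 (mean, variance, third and fourth cumulant).
import numpy as np

def gram_charlier_pdf(x, k1, k2, k3, k4):
    """Approximate the PDF at x using a Gaussian base density and Hermite corrections."""
    sigma = np.sqrt(k2)
    z = (x - k1) / sigma
    phi = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
    he3 = z**3 - 3 * z                      # probabilists' Hermite polynomials
    he4 = z**4 - 6 * z**2 + 3
    skew = k3 / sigma**3
    exkurt = k4 / sigma**4
    return phi * (1 + skew / 6 * he3 + exkurt / 24 * he4)

# Hypothetical check: with zero higher cumulants the expansion reduces to a normal PDF
x = np.linspace(-3, 3, 7)
print(np.round(gram_charlier_pdf(x, 0.0, 1.0, 0.0, 0.0), 4))
```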
Contributors: Fan, Miao (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald Thomas (Committee member) / Ayyanar, Raja (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In electric power systems, phasor measurement units (PMUs) are capable of providing synchronized voltage and current phasor measurements which are superior to conventional measurements collected by the supervisory control and data acquisition (SCADA) system in terms of resolution and accuracy. These measurements are known as synchrophasor measurements. Considerable research work has been done on the applications of PMU measurements based on the assumption that a high level of accuracy is obtained in the field. The study in this dissertation is conducted to address the basic issue concerning the accuracy of actual PMU measurements in the field. Synchronization is one of the important features of PMU measurements. However, the study presented in this dissertation reveals that the problem of faulty synchronization between measurements with the same time stamps from different PMUs exists. A Kalman filter model is proposed to analyze and calculate the time skew error caused by faulty synchronization. In order to achieve a high level of accuracy of PMU measurements, innovative methods are proposed to detect and identify system state changes or bad data which are reflected by changes in the measurements. This procedure is applied as a key step in adaptive Kalman filtering of PMU measurements to overcome the insensitivity of a conventional Kalman filter. Calibration of PMU measurements is implemented in specific PMU installation scenarios using transmission line (TL) parameters from operation planning data. The voltage and current correction factors calculated from the calibration procedure indicate the possible errors in PMU measurements. Correction factors can be applied in on-line calibration of PMU measurements. A study is conducted to address an important issue when integrating PMU measurements into state estimation. The reporting rate of PMU measurements is much higher than that of the measurements collected by the SCADA. The question of how to buffer PMU measurements is raised. The impact of PMU measurement buffer length on state estimation is discussed. A method based on hypothesis testing is proposed to determine the optimal buffer length of PMU measurements considering the two conflicting features of PMU measurements, i.e., uncertainty and variability. Results are presented for actual PMU synchrophasor measurements.
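As a simplified illustration of Kalman filtering applied to synchrophasor data, the sketch below tracks a noisy phase-angle stream with a scalar random-walk Kalman filter and flags large innovations as possible state changes or bad data. The model, noise values, and gate are hypothetical and far simpler than the time-skew and adaptive-filtering formulations in the dissertation.

```python
# Illustrative sketch (not the dissertation's model): a scalar Kalman filter with a
# random-walk state model, x_k = x_{k-1} + w, z_k = x_k + v, plus an innovation test
# that flags samples whose innovation is improbably large for the assumed noise.
import numpy as np

def kalman_track(z, q=1e-6, r=1e-4, gate=4.0):
    """Filter the sequence z; return estimates and flags for suspect samples."""
    x, p = z[0], 1.0
    estimates, flags = [x], [False]
    for zk in z[1:]:
        p_pred = p + q                                    # predict
        innov = zk - x                                    # innovation
        s = p_pred + r                                    # innovation variance
        flags.append(abs(innov) > gate * np.sqrt(s))      # possible state change / bad datum
        k_gain = p_pred / s                               # update
        x = x + k_gain * innov
        p = (1 - k_gain) * p_pred
        estimates.append(x)
    return np.array(estimates), np.array(flags)

# Hypothetical phase angles (rad) with a step change halfway through
rng = np.random.default_rng(1)
angles = np.concatenate([np.full(50, 0.30), np.full(50, 0.38)])
est, flags = kalman_track(angles + 0.01 * rng.standard_normal(100))
print("first flagged sample index:", int(np.argmax(flags)))
```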
Contributors: Zhang, Qing (Author) / Heydt, Gerald (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Ayyanar, Raja (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A systematic top-down approach to minimize risk and maximize the profits of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), Consumer Price Index (CPI), Outstanding Consumer Credit, Industrial Production Index, Money Supply (MS), Unemployment Rate, and Ten-Year Treasury are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance on stock performance, sector-based Exchange Traded Funds (ETFs) for the S&P and Dow Jones are considered and wealth is allocated among them. Mean-variance optimization with risk and return constraints is used to distribute the wealth in individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that will ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting the downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is profitable.
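As a minimal sketch of the mean-variance core of such an approach (not the thesis's receding-horizon formulation), the example below computes a one-period allocation in closed form with a budget constraint, using a risk-aversion penalty in place of an explicit risk constraint; the expected returns, covariance matrix, and risk-aversion value are hypothetical.

```python
# Illustrative sketch: closed-form one-period mean-variance allocation,
# maximizing w'mu - (gamma/2) w'cov w subject to sum(w) = 1 (shorting allowed).
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion):
    """Return the budget-constrained mean-variance optimal weights."""
    ones = np.ones(len(mu))
    cov_inv = np.linalg.inv(cov)
    # Lagrange multiplier enforcing the budget constraint 1'w = 1
    eta = (ones @ cov_inv @ mu - risk_aversion) / (ones @ cov_inv @ ones)
    return cov_inv @ (mu - eta * ones) / risk_aversion

# Hypothetical expected returns and covariance for three sector ETFs
mu = np.array([0.08, 0.05, 0.03])
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.020, 0.001],
                [0.002, 0.001, 0.010]])
w = mean_variance_weights(mu, cov, risk_aversion=5.0)
print("weights:", np.round(w, 3),
      " sum:", round(float(w.sum()), 6),
      " portfolio stdev:", round(float(np.sqrt(w @ cov @ w)), 4))
```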
Contributors: Chitturi, Divakar (Author) / Rodriguez, Armando (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2010