Matching Items (12)

A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

Description

High-quality public power system test cases benefit the power systems research community by expanding the resources available for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, in which a key challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics capture the structure, proportions, and parameters of key power system elements and can be used to assess and validate the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics that capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
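
As a minimal illustration of what such a metric-based check might look like, the sketch below validates one structural property, average nodal degree, against a plausible empirical range. The bounds and the networkx-based representation are illustrative assumptions, not the paper's published metrics.

```python
# Sketch: one metric-based check in the spirit of the paper's process,
# validating a synthetic grid's average nodal degree against a plausible
# empirical range. The bounds here are illustrative, not the paper's values.
import networkx as nx

def check_degree_metric(grid: nx.Graph, lo=2.0, hi=3.5) -> bool:
    """Accept the case if its average nodal degree lies in [lo, hi]."""
    avg_degree = 2 * grid.number_of_edges() / grid.number_of_nodes()
    return lo <= avg_degree <= hi

# Toy usage: a 10-bus ring with two extra ties (average degree 2.4).
g = nx.cycle_graph(10)
g.add_edges_from([(0, 5), (2, 7)])
print(check_degree_metric(g))  # True
```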

Date Created
  • 2017-08-19

Generation bidding game with potentially false attestation of flexible demand

Description

With the onset of large numbers of energy-flexible appliances, in particular plug-in electric and hybrid-electric vehicles, a significant portion of electricity demand will be somewhat flexible and accordingly may be responsive to changes in electricity prices. In the future, this increased degree of demand flexibility (together with the onset of intermittent renewable supply that is predictable only in the short term) will considerably exceed the present level of uncertainty in day-ahead prediction of assumed inelastic demand. For an idealized model of such responsive demand, we consider a deregulated wholesale day-ahead electricity marketplace wherein bids by generators (or energy traders) are determined through a Nash equilibrium via a common clearing price (i.e., no locational marginality). This model assumes the independent system operator (ISO) helps the generators understand how to change their bids to improve their net revenue based on a model of demand response. The model of demand response (equivalently, demand-side bidding day ahead) is based on information from load-serving entities regarding their price-flexible demand. We numerically explore how collusion between generators and loads can manipulate this market, resulting in higher energy prices that benefit certain generators. The objective is to learn how to deter such collusion, e.g., how to set penalties for significant differences between stated and actual demand.
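
A minimal sketch of how equilibrium bids under a common clearing price can be explored numerically is given below, using a Cournot-style quantity game with linear inverse demand as a simplified stand-in for the paper's market model; the demand curve, costs, and best-response iteration are illustrative assumptions.

```python
# Sketch: iterated best response in a Cournot-style market with a common
# clearing price p = a - b * Q and linear costs, a simplified stand-in for
# the paper's bidding game. All parameter values are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 100.0, 1.0           # inverse demand p = a - b * total_quantity
costs = [10.0, 20.0]        # marginal costs of two generators

def profit(q_i, q_other, c):
    p = a - b * (q_i + q_other)
    return (p - c) * q_i

q = np.array([10.0, 10.0])  # initial quantity bids
for _ in range(100):        # best-response iteration toward Nash
    for i in range(2):
        res = minimize_scalar(lambda x: -profit(x, q[1 - i], costs[i]),
                              bounds=(0.0, a / b), method="bounded")
        q[i] = res.x

print(q, a - b * q.sum())   # -> ~[33.3, 23.3] at clearing price ~43.3
```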

Date Created
  • 2015-03-24

Critical coupling and synchronized clusters in arbitrary networks of Kuramoto oscillators

Description

The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where each oscillator is imbued with its own natural frequency and coupled to other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization, where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation into desynchronized clusters.

In this dissertation the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form in which the oscillators within a cluster share, on average, the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which in turn leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
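
A minimal sketch of how the critical coupling can be estimated numerically is shown below: integrate the standard Kuramoto dynamics for increasing coupling strengths and find where the spread of long-term frequencies collapses. The all-to-all network, random frequencies, and integration parameters are illustrative assumptions, not the dissertation's order-parameter approach.

```python
# Sketch: estimate the critical coupling of a small Kuramoto system by
# sweeping the coupling strength K and checking when the long-term
# frequencies collapse to a common value. All parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10
omega = rng.normal(0.0, 1.0, n)        # natural frequencies
A = np.ones((n, n)) - np.eye(n)        # all-to-all coupling network

def long_term_freqs(K, T_relax=200.0, T_avg=50.0, dt=0.01):
    """Euler-integrate dtheta_i/dt = omega_i + (K/n) sum_j A_ij sin(theta_j - theta_i)."""
    theta = rng.uniform(0, 2 * np.pi, n)
    def step(th):
        return th + dt * (omega + (K / n) *
                          (A * np.sin(th[None, :] - th[:, None])).sum(axis=1))
    for _ in range(int(T_relax / dt)):      # discard the transient
        theta = step(theta)
    start = theta.copy()
    for _ in range(int(T_avg / dt)):        # then average the frequencies
        theta = step(theta)
    return (theta - start) / T_avg

for K in [0.5, 1.0, 2.0, 3.0, 4.0]:
    f = long_term_freqs(K)
    print(f"K={K:.1f}  frequency spread={f.max() - f.min():.3f}")
# The smallest K at which the spread collapses toward zero approximates
# the critical coupling K_c; below it, oscillators split into clusters.
```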

Date Created
  • 2018

Wireless Sensor Data Transport, Aggregation and Security

Description

Wireless sensor networks (WSN) and the communication and security therein have been gaining prominence in the tech industry recently with the emergence of the so-called Internet of Things (IoT). The chain of steps from acquiring data to making a reactive decision based on the acquired sensor measurements is complex and requires careful execution. In many of these steps, technological gaps remain because several primitives that are desirable in a sensor network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities at the core of IoT architectures, we have developed solutions that are analyzed and discussed in the following chapters.

The chain of steps from the acquisition of sensor samples until those samples reach a control center or the cloud, where data analytics are performed, starts with acquiring the sensor measurements at the correct time and, importantly, synchronously among all deployed sensors. This synchronization has to be network-wide, including both the wired core network and the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is likewise a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the volume of sensors it must rely on an automated key-distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to decide which data to accept and consider for the aforementioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic investigated in this thesis.
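
As a minimal sketch of the kind of statistical test mentioned above, the code below flags sensors whose reports deviate persistently from the consensus of co-located sensors using a robust (median-based) outlier score; the thresholds and the modified z-score choice are illustrative assumptions, not the thesis's actual detector.

```python
# Sketch: flag sensors whose reports deviate persistently from the
# consensus of co-located sensors, using a robust modified z-score around
# the per-sample median. Thresholds and data are illustrative.
import numpy as np

def flag_suspect_sensors(readings, z_thresh=3.5, frac=0.5):
    """readings: (num_samples, num_sensors) of one shared physical quantity."""
    median = np.median(readings, axis=1, keepdims=True)
    mad = np.median(np.abs(readings - median), axis=1, keepdims=True) + 1e-9
    z = 0.6745 * (readings - median) / mad        # modified z-score
    outlier_frac = (np.abs(z) > z_thresh).mean(axis=0)
    return np.where(outlier_frac > frac)[0]       # persistent outliers only

rng = np.random.default_rng(1)
data = rng.normal(20.0, 0.1, size=(100, 8))       # 8 sensors, same quantity
data[:, 3] += 2.0                                 # sensor 3 injects a bias
print(flag_suspect_sensors(data))                 # -> [3]
```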

Date Created
  • 2017

Pricing schemes in electric energy markets

Description

Two-thirds of U.S. power systems are operated under market structures. A good market design should maximize social welfare and give market participants proper incentives to follow market solutions. Pricing schemes play a central role in market design.

The locational marginal pricing scheme is the core pricing scheme in energy markets. Locational marginal prices are good signals of marginal dispatch costs. However, locational marginal prices alone are not incentive-compatible, since energy markets are non-convex. Locational marginal prices capture dispatch costs but fail to capture commitment costs such as startup cost, no-load cost, and shutdown cost. As a result, uplift payments are made to generators to provide incentives for them to follow market solutions. These uplift payments distort pricing signals.

In this thesis, pricing schemes in electric energy markets are studied. In the first part, the convex hull pricing scheme is studied and the pricing model is extended with network constraints. A subgradient algorithm is applied to solve the pricing model. In the second part, a stochastic dispatchable pricing model is proposed to better address the non-convexity and uncertainty issues in day-ahead energy markets. In the third part, an energy storage arbitrage model under the current locational marginal pricing scheme is studied. Numerical test cases are presented to support the arguments in this thesis.
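
A minimal sketch of the first part's machinery is given below: convex hull prices arise as optimal multipliers of the Lagrangian dual of unit commitment, which can be approached by a subgradient method. The one-period toy market and step-size rule are illustrative assumptions, not the thesis's network-constrained model.

```python
# Sketch: subgradient ascent on the Lagrangian dual of a one-period unit
# commitment; the dual maximizer is the convex hull price. The generator
# data, demand, and step-size rule are illustrative.
# (min output, max output, marginal cost, lump commitment cost)
gens = [(0.0, 100.0, 20.0, 0.0), (50.0, 200.0, 30.0, 2000.0)]
demand = 180.0

def best_response(lmbda, gen):
    """Profit-maximizing output of one unit facing price lmbda (commitment in {0,1})."""
    pmin, pmax, c, fixed = gen
    p = pmax if lmbda >= c else pmin        # dispatch if committed
    return p if (lmbda - c) * p - fixed > 0 else 0.0

lmbda = 0.0
for k in range(1, 5000):
    supply = sum(best_response(lmbda, g) for g in gens)
    lmbda += (demand - supply) / k          # diminishing subgradient step
print(round(lmbda, 2))  # ~40.0: the convex hull price for this toy data
```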

The overall design of markets and pricing schemes is a very complex problem. This thesis gives a thorough overview of pricing schemes in day-ahead energy markets and addresses several key issues in these markets. New pricing schemes are proposed to improve market efficiency.

Date Created
  • 2016

Model-Based Machine Learning for the Power Grid

Description

The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve that without sacrificing the interpretability of the results, the dissertation leverages the physics behind power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system's operating conditions as the backbone for modeling grid data.

The first part of the dissertation introduces a new framework of graph signal processing (GSP) for the power grid, Grid-GSP, and applies it to the voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications such as identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression are explored, wherein the Grid-GSP-based generative models are used.
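
As a minimal sketch of one such application, the code below interpolates missing nodal values by assuming the signal is smooth on the grid graph (small Laplacian quadratic form); the 4-bus ring and measurements are illustrative, and this is a generic GSP-style reconstruction rather than the Grid-GSP model itself.

```python
# Sketch: interpolate missing nodal values by assuming the grid signal is
# smooth on the graph (small Laplacian quadratic form x^T L x). The 4-bus
# ring and the observed voltages are illustrative.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy 4-bus ring network
n = 4
L = np.zeros((n, n))                        # graph Laplacian L = D - A
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

known = {0: 1.02, 2: 0.98}                  # observed voltages (p.u.)
unknown = [i for i in range(n) if i not in known]

# Minimizing x^T L x with the observed entries fixed reduces to solving
# the Laplacian system restricted to the unknown nodes.
x = np.zeros(n)
for i, v in known.items():
    x[i] = v
L_uu = L[np.ix_(unknown, unknown)]
L_uk = L[np.ix_(unknown, list(known))]
x[unknown] = np.linalg.solve(L_uu, -L_uk @ np.array(list(known.values())))
print(x)  # buses 1 and 3 interpolated to 1.00, between their neighbors
```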

The second part of the dissertation develops a joint statistical description of solar photovoltaic (PV) power and outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning, and supply from solar power remain matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is exploited for forecasting and for detecting partial-shading faults in solar panels.
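
A minimal sketch of the low-rank idea is shown below: daily PV profiles are stacked into a matrix, a truncated SVD reconstructs the typical clear-sky pattern, and a large reconstruction residual flags an atypical (e.g., partially shaded) day. The synthetic profiles and rank choice are illustrative assumptions.

```python
# Sketch: stack daily PV generation profiles into a matrix, use a rank-1
# SVD reconstruction to capture the common clear-sky shape, and flag the
# day with the largest reconstruction residual. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(24)
clear_sky = np.clip(np.sin(np.pi * (hours - 6) / 12), 0.0, None)
days = np.array([c * clear_sky + 0.02 * rng.normal(size=24)
                 for c in rng.uniform(0.7, 1.0, 30)])
days[17, 10:14] *= 0.4                    # partial shading on day 17, midday

U, s, Vt = np.linalg.svd(days, full_matrices=False)
rank = 1
recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]
residual = np.linalg.norm(days - recon, axis=1)
print(int(residual.argmax()))             # -> 17: the shaded day stands out
```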

Date Created
  • 2020

Unobservable False Data Injection Attacks on Power Systems

Description

Reliable operation of modern power systems is ensured by an intelligent cyber layer that monitors and controls the physical system. Data collection and transmission are achieved by the supervisory control and data acquisition (SCADA) system, and data processing is performed by the energy management system (EMS). In recent decades, the development of phasor measurement units (PMUs) has enabled wide-area real-time monitoring and control. However, both SCADA-based and PMU-based cyber layers are prone to cyber attacks that can impact system operation and lead to severe physical consequences.

This dissertation studies false data injection (FDI) attacks that are unobservable to bad data detectors (BDDs). Prior work has shown that an attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. However, the results were only demonstrated on small systems, under the assumption that they are operated with DC optimal power flow (OPF). This dissertation is divided into four parts that thoroughly characterize the consequences of these attacks and develop countermeasures.
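
The classical construction that makes an FDI attack unobservable under the DC measurement model z = Hx + noise is a = Hc for any state perturbation c: the attack shifts the state estimate by c while leaving the residual, and hence the BDD, unchanged. A minimal numerical sketch with an illustrative random H follows.

```python
# Sketch: under the DC model z = H x + noise, an attack a = H c changes
# the state estimate by c but leaves the least-squares residual unchanged,
# so a residual-based BDD cannot see it. H, x, and c are illustrative.
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 3))            # measurement matrix: 6 meas., 3 states
x = rng.normal(size=3)                 # true system state
z = H @ x + 0.01 * rng.normal(size=6)  # measurements with small noise

def residual_norm(z, H):
    """Least-squares state estimate and the norm of its residual."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.5, -0.2, 0.1])         # attacker's chosen state shift
a = H @ c                              # unobservable attack vector
print(residual_norm(z, H))             # baseline residual
print(residual_norm(z + a, H))         # identical residual under attack
```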

The first part focuses on evaluating the vulnerability of large-scale power systems to FDI attacks. The solution technique introduced in prior work to solve the ADBLP is intractable on large-scale systems due to the large number of binary variables. Four new computationally efficient algorithms are presented to solve this problem.

The second part studies the vulnerability of N-1 reliable power systems operated by the state-of-the-art EMS functions commonly used in practice, specifically real-time contingency analysis (RTCA) and security-constrained economic dispatch (SCED). An ADBLP is formulated with detailed assumptions on the attacker's knowledge and on system operations.

The third part considers FDI attacks on PMU measurements, which have strong temporal correlations due to their high data rate. It is shown that predictive filters can detect suddenly injected attacks, but not gradually ramping attacks.

The last part proposes a machine learning-based attack detection framework consisting of a support vector regression (SVR) load predictor, which predicts loads by exploiting both spatial and temporal correlations, and a subsequent support vector machine (SVM) attack detector that determines the existence of attacks.
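
A minimal sketch of such a two-stage pipeline is given below, with an SVR forecasting load from a temporal window (the dissertation's predictor also exploits spatial correlations) and an SVM classifying the forecast residuals; the synthetic load, simulated ramp attack, and hyperparameters are all illustrative assumptions.

```python
# Sketch: a two-stage detector with an SVR load forecaster followed by an
# SVM on the forecast residuals. The synthetic daily load, the simulated
# ramp attack, and the hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVR, SVC

rng = np.random.default_rng(3)
t = np.arange(500)
load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, 500)

# Stage 1: SVR predicts the next load value from a window of past values.
window = 24
X = np.stack([load[i:i + window] for i in range(len(load) - window)])
y = load[window:]
svr = SVR(C=10.0).fit(X[:400], y[:400])
residuals = y[400:] - svr.predict(X[400:])

# Stage 2: SVM labels residuals as normal (0) or attacked (1); here the
# attack is simulated as a gradual ramp added to the measurements.
attacked = residuals + np.linspace(0.0, 5.0, residuals.size)
feats = np.concatenate([residuals, attacked]).reshape(-1, 1)
labels = np.concatenate([np.zeros(residuals.size), np.ones(residuals.size)])
svm = SVC().fit(feats, labels)
print(svm.score(feats, labels))   # in-sample separability on the toy data
```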

Date Created
  • 2020

Fundamental limits in data privacy: from privacy measures to economic foundations

Description

Data privacy is emerging as one of the most serious concerns of big data analytics, particularly with the growing use of personal data and the ever-improving capability of data analysis. This dissertation first investigates the relations among different privacy notions and then focuses on developing economic foundations for a market model of trading private data.

The first part characterizes differential privacy, identifiability, and mutual-information privacy by their privacy–distortion functions, i.e., the optimal achievable privacy level as a function of the maximum allowable distortion. The results show that these notions are fundamentally related and exhibit a certain consistency: (1) the gap between the privacy–distortion functions of identifiability and differential privacy is upper bounded by a constant determined by the prior; (2) identifiability and mutual-information privacy share the same optimal mechanism; (3) the mutual-information optimal mechanism satisfies differential privacy with a level at most a constant away from the optimal level.

The second part studies a market model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The value of epsilon units of privacy is measured by the minimum payment such that an individual's equilibrium strategy is to report data in an epsilon-differentially private manner. For the setting with binary private data that represents individuals' knowledge about a common underlying state, asymptotically tight lower and upper bounds on the value of privacy are established as the number of individuals becomes large, and the payment–accuracy tradeoff for learning the state is obtained. The lower bound establishes the impossibility of buying epsilon units of privacy with a lower payment, and the upper bound is given by a designed reward mechanism. When the individuals' valuations of privacy are unknown to the data collector, mechanisms with possibly negative payments (aiming to penalize individuals with "unacceptably" high privacy valuations) are designed to fulfill the accuracy goal and drive the total payment to zero. For the setting with binary private data following a general joint probability distribution with some symmetry, asymptotically optimal mechanisms are designed in the high-data-quality regime.
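
As a concrete instance of epsilon-differentially private reporting of binary data, the sketch below implements randomized response and its expected distortion (flip probability); this is the canonical mechanism shown for intuition, not the optimal mechanisms derived in the dissertation.

```python
# Sketch: randomized response, the canonical epsilon-differentially
# private mechanism for one bit, and its distortion (flip probability).
# It illustrates the privacy-distortion tradeoff in its simplest form.
import numpy as np

def randomized_response(bit, eps, rng):
    """Report truthfully with probability e^eps / (1 + e^eps), else flip;
    the likelihood ratio between the two reports is exactly e^eps."""
    p_true = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < p_true else 1 - bit

rng = np.random.default_rng(4)
eps = 1.0
bits = rng.integers(0, 2, 10000)
reports = np.array([randomized_response(b, eps, rng) for b in bits])
print((reports != bits).mean(), 1.0 / (1.0 + np.exp(eps)))
# empirical flip rate vs. the expected distortion 1 / (1 + e^eps)
```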

Date Created
  • 2016

Techniques for Decentralized and Dynamic Resource Allocation

Description

This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work is the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to respond harmoniously and holistically to fluctuations in demand at the eNodeB layer.

The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and inspired by biological networks. The results on convergence and accuracy for locally connected networks presented in this thesis constitute the theoretical foundation for the protocol in terms of performance guarantees. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.
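
A minimal sketch of pulse-coupled synchronization in the spirit of such event-driven protocols is given below: nodes fire when their clock phase wraps, and neighbors that hear a pulse nudge their own phase. The ring topology, coupling constant, and update rule are illustrative assumptions, not the PulseSS specification.

```python
# Sketch: event-driven pulse-coupled clock synchronization on a ring.
# Nodes fire when their phase wraps at 1; neighbors that hear the pulse
# nudge their own phase forward. Topology and coupling are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n = 6
phase = rng.uniform(0.0, 1.0, n)          # each node's clock phase in [0, 1)
eps_c = 0.2                               # pulse-coupling strength
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for _ in range(300):
    phase += 1.0 - phase.max()            # jump to the next firing event
    firing = int(phase.argmax())
    phase[firing] = 0.0                   # the firing node resets
    for j in neighbors[firing]:           # its neighbors advance their phase
        phase[j] = min(phase[j] * (1.0 + eps_c), 1.0)

print(np.round(np.sort(phase), 3))        # phases bunch up as nodes align
```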

The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from different sub-bands that are folded, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized. The sensing-phase design has been conducted via a utility maximization approach; the scheme derived has thus been called Cognitive Utility Maximization Multiple Access (CUMMA).

The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics. While convergence of the proposed approach remains an open problem, the numerical results presented here suggest the capability of the algorithm to handle traffic fluctuations across operators while respecting different time and economic constraints. The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).

Date Created
  • 2017

Creating, Validating, and Using Synthetic Power Flow Cases: A Statistical Approach to Power System Analysis

Description

Synthetic power system test cases offer a wealth of new data for research and development purposes, as well as an avenue through which new kinds of analyses and questions can be examined. This work provides both a methodology for creating and validating synthetic test cases and a few use cases showing how access to synthetic data enables otherwise impossible analysis.

First, the question of how synthetic cases may be generated in an automatic manner, and how synthetic samples should be validated to assess whether they are sufficiently "real," is considered. Transmission and distribution levels are treated separately, due to the different nature of the two systems. Distribution systems are constructed by sampling distributions observed in a dataset from the Netherlands. For transmission systems, only first-order statistics, such as generator limits or line ratings, are sampled statistically. The task of constructing an optimal power flow case from the sample sets is left to an optimization problem built on top of the optimal power flow formulation.

Secondly, attention is turned to some examples where synthetic models are used to inform analysis and modeling tasks. Co-simulation of transmission and multiple distribution systems is considered, where distribution feeders are allowed to couple to transmission substations. Next, a distribution power flow method is parametrized to better account for losses. Numerical values for the parametrization can be statistically supported thanks to the ability to generate thousands of feeders on command.
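
A minimal sketch of such a loss-aware parametrization is shown below: a LinDistFlow-style linearized power flow for a radial feeder in which a single scalar multiplier inflates section flows to approximate losses. The feeder data and the multiplier value are illustrative assumptions, not the thesis's fitted parametrization.

```python
# Sketch: a linearized radial-feeder power flow in which one scalar loss
# multiplier alpha inflates section flows to approximate losses. The
# feeder data and alpha are illustrative, not fitted values.
import numpy as np

r = np.array([0.010, 0.020, 0.015])    # per-section resistance (p.u.)
p_load = np.array([0.10, 0.20, 0.15])  # per-bus active demand (p.u.)
alpha = 1.03                           # hypothetical fitted loss multiplier

def voltage_profile(r, p_load, alpha, v0=1.0):
    """Voltage drop along a radial feeder, LinDistFlow-style; reactive
    power omitted for brevity, losses folded into the alpha scaling."""
    # flow through each section = cumulative demand downstream of it
    P = alpha * np.cumsum(p_load[::-1])[::-1]
    v = [v0]
    for rk, Pk in zip(r, P):
        v.append(v[-1] - rk * Pk)      # linearized per-section drop
    return np.array(v)

print(voltage_profile(r, p_load, alpha))  # monotone drop from the feeder head
```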

Date Created
  • 2019