Matching Items (25)

Context-Aware Generative Adversarial Privacy

Description

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
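For the binary data model, the minimax mechanism can be illustrated numerically. The sketch below is a hedged toy, not the paper's derivation: the private variable is a uniform bit, the privatizer flips the released bit with probability p under an assumed distortion budget D, and the adversary plays the best of three simple inference strategies. A grid search recovers the intuitive optimum p* = min(D, 1/2).

```python
import numpy as np

D = 0.3  # assumed distortion budget (illustrative, not from the paper)
flip_probs = np.linspace(0.0, 0.5, 51)

def adversary_accuracy(p):
    # The adversary sees the (possibly flipped) bit and plays the best of
    # three simple strategies: copy it, invert it, or guess a constant.
    copy = 1.0 - p
    invert = p
    constant = 0.5  # the private bit is uniform, so a constant guess gets 1/2
    return max(copy, invert, constant)

# Privatizer: minimize the adversary's accuracy subject to distortion <= D.
feasible = flip_probs[flip_probs <= D]
best_p = feasible[np.argmin([adversary_accuracy(p) for p in feasible])]
minimax_acc = adversary_accuracy(best_p)
```

The result matches the game-theoretic intuition: the privatizer spends its whole budget (p* ≈ D) and the adversary's accuracy drops to max(1 − D, 1/2).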

Date Created
  • 2017-12-01

Privacy-guaranteed Data Collection: The Case for Efficient Resource Management of Nonprofit Organizations

Description

Through the personal experience of volunteering at ASU Project Humanities, an organization that provides resources such as clothing and toiletries to the homeless population in Downtown Phoenix, I noticed that efficiently serving the needs of the homeless population is an important endeavor, but that the current processes by which Phoenix nonprofits collect data are manual, ad hoc, and inefficient. This leads to the research question: is it possible to use technology to improve the process of collecting statistics on client needs, tracking donations, and managing resources? Background research includes an interview with ASU Project Humanities, articles by analysts, and related work, including case studies of current technologies in the nonprofit community. Major findings include i) a lack of centralized communication among nonprofits in collecting needs, tracking surplus donations, and sharing resources, ii) the importance of privacy assurance to homeless individuals, and iii) evidence from pre-existing databases and technological solutions that technology can make an impact in the nonprofit community. To improve the process, standardization, efficiency, and automation need to increase. As a result of my analysis, the thesis proposes a prototype solution with two parts: an inventory database and a web application with forms for user input and tables for the user to view. This solution addresses standardization by providing a consistent way of collecting data on need requests and surplus donations while guaranteeing the privacy of homeless individuals. This centralized solution also increases efficiency by connecting the different agencies that serve these clients. Lastly, the solution demonstrates how resources can be made available to each organization, which can increase automation. In conclusion, this database and web application have the potential to improve nonprofit organizations’ networking capabilities, resource management, and resource distribution. The percentage of homeless individuals connected to these resources is expected to increase substantially with future live testing and large-scale implementation.
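The two-part prototype described above can be sketched with a minimal inventory schema. All table and column names below are hypothetical, chosen only to illustrate the idea, not the thesis's actual design; note that need requests store no client identity, reflecting the privacy guarantee discussed above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organization (
    org_id   INTEGER PRIMARY KEY,
    name     TEXT NOT NULL
);
CREATE TABLE inventory (
    item_id  INTEGER PRIMARY KEY,
    org_id   INTEGER NOT NULL REFERENCES organization(org_id),
    category TEXT NOT NULL,                  -- e.g. 'clothing', 'toiletries'
    quantity INTEGER NOT NULL CHECK (quantity >= 0)
);
-- Need requests carry no client identity: privacy by design.
CREATE TABLE need_request (
    request_id INTEGER PRIMARY KEY,
    category   TEXT NOT NULL,
    requested  INTEGER NOT NULL,
    fulfilled  INTEGER NOT NULL DEFAULT 0
);
""")
conn.execute("INSERT INTO organization VALUES (1, 'Agency A')")
conn.execute("INSERT INTO inventory VALUES (1, 1, 'clothing', 40)")
conn.execute("INSERT INTO need_request (category, requested) VALUES ('clothing', 25)")

# Surplus report shared across agencies: stock on hand minus open requests.
surplus = conn.execute("""
    SELECT i.category,
           SUM(i.quantity) - COALESCE(SUM(n.requested - n.fulfilled), 0) AS surplus
    FROM inventory i LEFT JOIN need_request n ON n.category = i.category
    GROUP BY i.category
""").fetchall()
```

A shared query like the surplus report is what lets one agency's extra donations answer another agency's open needs.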

Date Created
  • 2019-05

Detection of cyber attacks in power distribution energy management systems

Description

The objective of this thesis is to detect certain cyber attacks in a power distribution energy management system in a Smart Grid infrastructure. In the Smart Grid, signals are sent between the distribution operator and the customer on a real-time basis. Signals are used for automated energy management, protection and energy metering. This thesis aims at making use of various signals in the system to detect cyber attacks. The focus of the thesis is on a cyber attack that changes the parameters of the energy management system. The attacks considered change the set points, thresholds for energy management decisions, signal multipliers, and other digitally stored parameters that ultimately determine the transfer functions of the components. Since the distribution energy management system is assumed to be in a Smart Grid infrastructure, customer demand is elastic to the price of energy. The energy pricing is represented by a distribution locational marginal price. A closed-loop control system is utilized as representative of the energy management system. Each element of the system is represented by a linear transfer function. Studies are done via simulations, and these simulations are performed in Matlab Simulink. The analytical calculations are done using Matlab.

Signals from the system are used to obtain the frequency response of the component transfer functions. The magnitude and phase angle of the transfer functions are obtained using the fast Fourier transform. The transfer function phase angles of base cases (no attack) are stored and are compared with the phase angles calculated at regular time intervals. If the difference in the phase characteristics is greater than a set threshold, an alarm is issued indicating the detection of a cyber attack.
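The phase-comparison step can be sketched as follows. This is a hedged stand-in: the component below is a simple first-order discrete-time lag, not the thesis's EMS transfer functions, and the probe signal, frequency bin, and threshold are all illustrative. An attack that tampers with the stored pole and gain shifts the phase at the probe frequency well past the threshold, while a healthy re-measurement does not.

```python
import numpy as np

def phase_response(u, y, k):
    # Empirical transfer-function estimate H = FFT(y)/FFT(u); return the
    # phase angle at frequency bin k.
    U, Y = np.fft.fft(u), np.fft.fft(y)
    return np.angle(Y[k] / U[k])

n = 1024
t = np.arange(n)
u = np.sin(2 * np.pi * 50 * t / n)      # probe signal centered on bin 50

def component(u, pole, gain):
    # First-order lag y[i] = pole*y[i-1] + gain*u[i-1], a stand-in for one
    # element of the closed-loop energy management system.
    y = np.zeros_like(u)
    for i in range(1, len(u)):
        y[i] = pole * y[i - 1] + gain * u[i - 1]
    return y

THRESHOLD = 0.05                         # radians; tuning is system-specific
base_phase = phase_response(u, component(u, 0.9, 1.0), 50)   # stored base case

# Attack: digitally stored parameters are altered (here the pole and gain).
atk_phase = phase_response(u, component(u, 0.5, 2.0), 50)
alarm = abs(atk_phase - base_phase) > THRESHOLD

# Healthy re-measurement: phase matches the stored base case, no alarm.
healthy_phase = phase_response(u, component(u, 0.9, 1.0), 50)
alarm_healthy = abs(healthy_phase - base_phase) > THRESHOLD
```

Comparing phase rather than magnitude is what catches this attack: a pure gain change would leave the magnitude test ambiguous under load variation, but a tampered time constant moves the phase at every probe frequency.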

The developed algorithm is designed for use in the envisioned Future Renewable Electric Energy Delivery and Management (FREEDM) system. Examples are shown for the noise free and noisy cases.

Date Created
  • 2014

Solving for the low-voltage/large-angle power-flow solutions by using the holomorphic embedding method

Description

For an (N+1)-bus power system, up to 2^N power-flow solutions may exist. One of these solutions is known as the high-voltage (HV), or operable, solution. The rest of the solutions are the low-voltage (LV), or large-angle, solutions.

In this report, a recently developed non-iterative algorithm for solving the power-flow (PF) problem using the holomorphic embedding (HE) method is shown to be capable of finding the HV solution while avoiding convergence to nearby LV solutions, a drawback of all iterative methods. The HE method provides a novel non-iterative procedure to solve PF problems, eliminating the non-convergence and initial-estimate-dependency issues that appear in traditional iterative methods. The detailed implementation of the HE method is discussed in the report.

While published work focuses mainly on finding the HV PF solution, modified holomorphically embedded formulations are proposed in this report to find the LV/large-angle solutions of the PF problem. It is theoretically proven that the proposed method is guaranteed to find all 2^N solutions to the PF problem and, if no solution exists, the algorithm is guaranteed to indicate so through oscillations in the maximal analytic continuation of the coefficients of the voltage power series obtained.

After presenting the derivation of the LV/large-angle formulations for both PQ and PV buses, numerical tests on the five-, seven- and 14-bus systems are conducted to find all the solutions of the system of nonlinear PF equations for those systems using the proposed HE method.
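The flavor of the HE recursion can be shown on a toy two-bus case. This is a sketch under simplifying assumptions (per-unit line admittance of 1, slack voltage of 1, and an assumed load parameter), not the report's formulation for general PQ/PV buses: the embedded load-bus equation reduces to v(s)^2 = v(s) + s·σ, whose germ at s = 0 is v = 1, and matching powers of s yields a simple recursion for the voltage power-series coefficients.

```python
import numpy as np

sigma = -0.2          # assumed load parameter for illustration
N = 100               # number of series terms
c = np.zeros(N)
c[0] = 1.0            # germ: v(0) = 1 (no-load solution)
c[1] = sigma          # from matching the s^1 terms
for n in range(2, N):
    # Matching s^n in v(s)^2 = v(s) + s*sigma gives
    # c_n = -sum_{k=1}^{n-1} c_k c_{n-k}.
    c[n] = -np.dot(c[1:n], c[n - 1:0:-1])

v_series = c.sum()                           # evaluate v(s) at s = 1
v_exact = (1 + np.sqrt(1 + 4 * sigma)) / 2   # HV root of the quadratic
```

Summing the series at s = 1 reproduces the HV root; the LV solution is the other root of the quadratic, which is what the modified formulations target. For σ < −1/4 no real solution exists, and the series coefficients betray this through the oscillatory behavior of the analytic continuation.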

After completing the derivation to find all the PF solutions using the HE method, it is shown that, with a properly designed algorithm, the proposed HE method can be used to find only the PF solutions of interest (i.e., type-1 PF solutions, whose Jacobian matrix has exactly one eigenvalue with positive real part). The closest unstable equilibrium point (UEP), one of the type-1 UEPs, can be obtained by the proposed HE method with limited dynamic models included. The numerical performance and robustness of the proposed HE method are investigated and presented by implementing the algorithm on problematic cases and a large-scale power system.

Date Created
  • 2015

Data-Driven and Game-Theoretic Approaches for Privacy

Description

In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect large amounts of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy measures and for privacy mechanisms that can be learned directly from data.

Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy-sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) the consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer for privacy sensitive consumers with alternative energy sources; (iii) the market viability of offering privacy guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy sensitive consumers.
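The flavor of these consumer-retailer models can be illustrated with a minimal Stackelberg game. Everything here is hypothetical (uniform privacy costs, a single per-record data value, one posted discount) and far simpler than the models in the dissertation: the retailer leads by posting a discount, each consumer shares data only if the discount exceeds their private privacy cost, and the retailer grid-searches the profit-maximizing discount.

```python
import numpy as np

rng = np.random.default_rng(1)

privacy_costs = rng.uniform(0.0, 10.0, size=10_000)  # heterogeneous consumers
data_value = 6.0        # retailer's value per shared record (assumed)

def profit(discount):
    # Follower best response: a consumer shares iff discount > privacy cost.
    share_rate = np.mean(privacy_costs < discount)
    return share_rate * (data_value - discount)

# Leader: pick the discount that maximizes expected profit.
discounts = np.linspace(0.0, data_value, 601)
best_discount = discounts[np.argmax([profit(d) for d in discounts])]
```

With costs uniform on [0, 10], theory puts the optimum near data_value/2 = 3: a larger discount attracts more privacy-sensitive consumers but erodes the margin on every shared record.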

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data.

Date Created
  • 2018

Designing a Software Platform for Evaluating Cyber-Attacks on the Electric Power Grid

Description

The energy management system (EMS) is at the heart of the operation and control of a modern electrical grid. Because of economic, safety, and security reasons, access to industrial-grade EMS and real-world power system data is extremely limited. Therefore, the ability to simulate an EMS is invaluable in researching the EMS under normal and anomalous operating conditions.

I first lay the groundwork for a basic EMS loop simulation in modern power grids and review a class of cybersecurity threats called false data injection (FDI) attacks. Then I propose a software architecture as the basis of a software simulation of the EMS loop and describe an actual software platform built using the proposed architecture. I also explain in detail the power analysis libraries used for building the platform, with examples and illustrations from the implemented application. Finally, I use the platform to simulate FDI attacks on two synthetic power system test cases and analyze and visualize the consequences using the capabilities built into the platform.
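A classic FDI result gives a compact illustration of why these attacks matter. The sketch below uses a toy DC state estimator (a hypothetical 4-measurement, 2-state system, not the platform's test cases): a false-data vector crafted in the column space of the measurement matrix, a = Hc, shifts the state estimate by exactly c while leaving the bad-data residual untouched.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy DC state estimation: z = H x + noise, least squares, residual-based
# bad-data check.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])            # hypothetical measurement matrix
x_true = np.array([0.1, -0.05])
z = H @ x_true + 0.01 * rng.standard_normal(4)

def estimate(z):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return x_hat, np.linalg.norm(z - H @ x_hat)   # estimate and residual

x_hat, r_clean = estimate(z)

c = np.array([0.05, 0.02])           # attacker's intended state shift
z_attacked = z + H @ c               # stealthy false data injection a = H c
x_atk, r_attacked = estimate(z_attacked)
```

Because the injected vector lies in the column space of H, it is absorbed entirely into the estimate and the residual test sees nothing, which is exactly the class of stealthy behavior a simulation platform needs to reproduce.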

Date Created
  • 2019

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

Description

Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge.

This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely α-loss. The choice of α determines specific adversarial actions, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to have a composition property and to be robust to side information.
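The two endpoints can be checked numerically on a binary symmetric channel with uniform input, a standard textbook example (the crossover probability below is arbitrary): α = 1 gives the mutual information, while α = ∞ gives the maximal leakage log Σ_y max_x P(y|x), and the former never exceeds the latter.

```python
import numpy as np

p = 0.1                                  # crossover probability (illustrative)
P = np.array([[1 - p, p],
              [p, 1 - p]])               # P[x, y] = P(Y=y | X=x)
px = np.array([0.5, 0.5])                # uniform input

py = px @ P
mi = sum(px[x] * P[x, y] * np.log2(P[x, y] / py[y])
         for x in range(2) for y in range(2))   # alpha = 1: mutual information (bits)

max_leakage = np.log2(P.max(axis=0).sum())      # alpha = inf: maximal leakage (bits)
```

For this channel the mutual information is 1 − h(0.1) ≈ 0.531 bits while the maximal leakage is log2(1.8) ≈ 0.848 bits; the gap reflects that maximal leakage guards against adversaries inferring arbitrary functions of the data, not just the data itself.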

There is a fundamental disconnect between theoretical measures of information leakage and their applications in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.

Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models.
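The basic accounting step being refined can be sketched for a single Gaussian-mechanism release. This uses only the standard, well-known bounds (Gaussian-mechanism RDP of order α equal to α/(2σ²) at sensitivity 1, plus the usual RDP-to-(ε, δ) conversion), not the dissertation's sharper tools; the noise level and δ are illustrative.

```python
import numpy as np

sigma = 2.0          # noise multiplier (assumed)
delta = 1e-5

alphas = np.arange(2, 256)                                 # candidate Renyi orders
rdp = alphas / (2.0 * sigma**2)                            # Gaussian mechanism RDP
eps_at_alpha = rdp + np.log(1.0 / delta) / (alphas - 1)    # standard conversion
eps = float(np.min(eps_at_alpha))                          # optimize over the order
best_alpha = int(alphas[np.argmin(eps_at_alpha)])
```

Under composition over T SGD rounds the per-round RDP values simply add (rdp becomes T times larger) before the same conversion is applied, which is what makes this style of accounting convenient for iterative training.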

Date Created
  • 2020

Pricing schemes in electric energy markets

Description

Two-thirds of the U.S. power systems are operated under market structures. A good market design should maximize social welfare and give market participants proper incentives to follow market solutions. Pricing schemes play a very important role in market design.

The locational marginal pricing scheme is the core pricing scheme in energy markets. Locational marginal prices are good pricing signals for marginal dispatch costs. However, locational marginal prices alone are not incentive compatible, since energy markets are non-convex. Locational marginal prices capture dispatch costs but fail to capture commitment costs such as startup cost, no-load cost, and shutdown cost. As a result, uplift payments are made to generators in order to give them incentives to follow market solutions. These uplift payments distort pricing signals.
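A one-hour toy example (numbers purely illustrative) shows how uplift arises: the LMP set at the unit's marginal cost recovers its dispatch cost but not its startup cost, so a make-whole payment fills the gap.

```python
marginal_cost = 30.0   # $/MWh
startup_cost = 600.0   # $ (commitment cost the LMP cannot see)
dispatch_mw = 50.0     # MW delivered over the hour
lmp = marginal_cost    # price signal captures dispatch cost only

energy_revenue = lmp * dispatch_mw                         # market revenue
as_run_cost = marginal_cost * dispatch_mw + startup_cost   # actual cost incurred
uplift = max(0.0, as_run_cost - energy_revenue)            # make-whole payment
```

The uplift exactly equals the startup cost left uncovered by the energy price, and because it is paid outside the price, other participants never see it in the signal, which is the distortion the text describes.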

In this thesis, pricing schemes in electric energy markets are studied. In the first part, convex hull pricing scheme is studied and the pricing model is extended with network constraints. The subgradient algorithm is applied to solve the pricing model. In the second part, a stochastic dispatchable pricing model is proposed to better address the non-convexity and uncertainty issues in day-ahead energy markets. In the third part, an energy storage arbitrage model with the current locational marginal price scheme is studied. Numerical test cases are studied to show the arguments in this thesis.

The overall design of markets and pricing schemes is a very complex problem. This thesis gives a thorough overview of pricing schemes in day-ahead energy markets and addresses several key issues in those markets. New pricing schemes are proposed to improve market efficiency.

Date Created
  • 2016

Harnessing flexibility of the transmission grid to enhance reliability of the power system

Description

The standard optimal power flow (OPF) problem is an economic dispatch (ED) problem combined with transmission constraints, which are based on a static topology. However, topology control (TC) has been proposed in the past as a corrective mechanism to relieve overloads and voltage violations. Even though the benefits of TC have been demonstrated in several past research works, the computational complexity associated with TC has been a major deterrent to its implementation. The proposed work develops heuristics for TC and investigates their potential to improve the computational time of TC for various applications. The objective is to develop computationally light methods that harness the flexibility of the grid to derive maximum reliability benefits for the system. One of the goals of this research is to develop a tool capable of providing TC actions in a minimal time frame, which can be readily adopted by the industry for real-time corrective applications.

A DC-based heuristic, i.e., a greedy algorithm, is developed and applied to improve the computational time of the TC problem while still maintaining the ability to find quality solutions. In the greedy algorithm, an expression is derived that indicates the impact on the objective of a marginal change in the state of a transmission line. This expression is used to generate a priority list of candidate lines for switching that may provide large improvements to the system. The advantage of this method is that it is a fast heuristic compared to a mixed-integer programming (MIP) approach.
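The priority-list idea can be sketched on a hypothetical 3-bus DC network. The marginal-impact expression itself is the thesis's contribution and is not reproduced here; for brevity this toy simply re-solves the DC power flow with each line opened and ranks candidates by the remaining overload, which is the ranking the cheaper expression is meant to approximate.

```python
import numpy as np

lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 10.0)]  # (from, to, susceptance)
limits = np.array([1.2, 0.3, 0.8])                  # per-unit flow limits
P = np.array([0.0, -1.0, 0.5])                      # injections, bus 0 = slack

def dc_flows(active):
    # Assemble the susceptance matrix for the active lines and solve the
    # DC power flow with the slack angle fixed at zero.
    B = np.zeros((3, 3))
    for idx in active:
        i, j, b = lines[idx]
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])
    return {idx: lines[idx][2] * (theta[lines[idx][0]] - theta[lines[idx][1]])
            for idx in active}

def total_violation(flows):
    return sum(max(0.0, abs(f) - limits[idx]) for idx, f in flows.items())

base = total_violation(dc_flows({0, 1, 2}))

# Greedy priority list: open one line at a time (connectivity is preserved
# in this triangle) and rank by the violation that remains.
ranking = sorted((total_violation(dc_flows({0, 1, 2} - {k})), k) for k in range(3))
best_violation, best_line = ranking[0]
```

In this instance the base case overloads line 1 (between buses 1 and 2) via loop flow, and the greedy list correctly puts opening that line first: the loop flow disappears and all remaining flows return within limits.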

Alternatively, AC-based heuristics are developed for the TC problem and tested on actual data from PJM, ERCOT and TVA. AC-based N-1 contingency analysis is performed to identify the contingencies that cause network violations. Simple proximity-based heuristics are developed, and the fast decoupled power flow is solved iteratively to identify the top five TC actions that reduce violations. Time-domain simulations are performed to ensure that the TC actions do not cause system instability. Simulation results show significant reductions in system violations from the application of the TC heuristics.

Date Created
  • 2016

Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing

Description

A signal compressed using classical compression methods can be acquired by brute force (i.e., searching for non-zero entries component-wise); however, such combinatorial searches are computationally expensive. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is therefore a different (i.e., misspecified) model. To estimate the posterior distribution for the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal in a Bayesian framework is one class of algorithms for solving the sparse problem; all such classes aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse optimization problem and its applications, such as magnetic resonance imaging (MRI), radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution.

In the simulation study, a general lower Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated using the mean-square-error (MSE) performance of the aforementioned algorithms. Also, a quantification of the performance in terms of gains versus losses is introduced as one main finding of this report.
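The BG recovery setting can be reproduced in a few lines. As a hedged stand-in, the sketch below recovers a Bernoulli-Gaussian vector with ISTA (iterative soft thresholding), a much simpler algorithm than the GAMP and SBL methods analyzed in the thesis; all problem sizes and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bernoulli-Gaussian sparse vector: k active entries with Gaussian amplitudes.
n, m, k = 200, 80, 8
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # underdetermined measurements
y = A @ x + 0.01 * rng.standard_normal(m)        # noisy observations

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x_hat - y)                 # gradient of 0.5*||A x - y||^2
    z = x_hat - step * grad
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

ISTA converges to the lasso solution, so the estimate carries a small shrinkage bias of order λ on each active coefficient; Bayesian methods such as GAMP and SBL sidestep this by exploiting the prior directly, which is part of the motivation for the thesis's approach.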

Date Created
  • 2019