Description
The fundamental limits of fixed-to-variable (F-V) and variable-to-fixed (V-F) length universal source coding at short blocklengths are characterized. For F-V length coding, the Type Size (TS) code has previously been shown to be optimal up to the third-order rate for universal compression of all memoryless sources over finite alphabets. The TS code assigns sequences, ordered by their type class sizes, to binary strings ordered lexicographically.

The universal F-V coding problem is first considered for the class of first-order stationary, irreducible, and aperiodic Markov sources. The third-order coding rate of the TS code for this Markov class is derived, and a converse on the third-order coding rate for the general class of F-V codes shows the optimality of the TS code for such Markov sources.

This type class approach is then generalized to compression of parametric sources. A natural scheme is to define two sequences to be in the same type class if and only if they are equiprobable under every model in the parametric class. This natural approach, however, is shown to be suboptimal. A variation of the Type Size code is therefore introduced, with type classes defined by neighborhoods of minimal sufficient statistics. The asymptotics of the overflow rate of this variation are derived, and a converse result establishes its optimality up to the third-order term. These results are derived for parametric families of i.i.d. sources as well as Markov sources.

Finally, universal V-F length coding of the class of parametric sources is considered in the short-blocklength regime. The proposed dictionary, which is used to parse the source output stream, consists of sequences at the boundary of the transition from low to high quantized type complexity, hence the name Type Complexity (TC) code. For a large enough dictionary, the $\epsilon$-coding rate of the TC code is derived, and a converse result shows its optimality up to the third-order term.
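The ordering that defines the TS code can be illustrated with a small sketch (binary alphabet, small n; the function and variable names are illustrative, not from the dissertation): sort length-n sequences by type class size, then pair them with binary strings enumerated shortest first.

```python
from itertools import product
from math import comb

def type_class_size(seq):
    # Over a binary alphabet, a type class collects all sequences with the
    # same empirical distribution, i.e., the same number of ones; its size
    # is the binomial coefficient C(n, k).
    n, k = len(seq), sum(seq)
    return comb(n, k)

def ts_codebook(n):
    # Order length-n binary sequences by increasing type class size
    # (breaking ties lexicographically), and assign them binary codewords
    # enumerated shortest first, lexicographically within each length.
    sequences = sorted(product((0, 1), repeat=n),
                       key=lambda s: (type_class_size(s), s))
    def binary_strings():
        length = 0
        while True:
            for bits in product('01', repeat=length):
                yield ''.join(bits)
            length += 1
    return dict(zip(sequences, binary_strings()))
```

Sequences in small type classes receive the shortest codewords; for n = 3, for instance, the all-zeros sequence (type class size 1) is assigned the empty string.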
Contributors: Iri, Nematollah (Author) / Kosut, Oliver (Thesis advisor) / Bliss, Daniel (Committee member) / Sankar, Lalitha (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The energy management system (EMS) is at the heart of the operation and control of a modern electrical grid. For economic, safety, and security reasons, access to industrial-grade EMS software and real-world power system data is extremely limited. Therefore, the ability to simulate an EMS is invaluable for studying the EMS under normal and anomalous operating conditions.

I first lay the groundwork for a basic EMS loop simulation in modern power grids and review a class of cybersecurity threats called false data injection (FDI) attacks. I then propose a software architecture as the basis for software simulation of the EMS loop and describe an actual software platform built on the proposed architecture. I also explain in detail the power analysis libraries used to build the platform, with examples and illustrations from the implemented application. Finally, I use the platform to simulate FDI attacks on two synthetic power system test cases and analyze and visualize the consequences using the capabilities built into the platform.
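The FDI threat can be made concrete with a minimal numeric sketch (a single state variable and hypothetical measurement values, not the platform's actual models) of the classic result: an attack vector lying in the column space of the measurement matrix shifts the state estimate while leaving the bad-data residual unchanged.

```python
# Unobservable FDI attack on DC state estimation, simplified to one state
# variable x with a linear measurement model z = H*x + noise. An attack
# a = H*c shifts the least-squares estimate by exactly c but leaves the
# measurement residual -- the usual bad-data detector statistic -- intact.

def estimate(H, z):
    # Least-squares estimate for a single state: x = (H^T z) / (H^T H).
    return sum(h * m for h, m in zip(H, z)) / sum(h * h for h in H)

def residual_norm(H, z):
    x = estimate(H, z)
    return sum((m - h * x) ** 2 for h, m in zip(H, z)) ** 0.5

H = [1.0, 0.5, 2.0]                        # measurement sensitivities
z = [1.02, 0.49, 2.05]                     # measurements of true state x ~ 1
c = 0.3                                    # attacker's desired estimate shift
z_attacked = [m + h * c for h, m in zip(H, z)]
```

Running the detector on `z_attacked` yields the same residual norm as on `z`, so the biased estimate passes undetected.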
Contributors: Khodadadeh, Roozbeh (Author) / Sankar, Lalitha (Thesis advisor) / Xue, Guoliang (Thesis advisor) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect large amounts of data. However, this "data collection" process can put the privacy of users at risk and lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive interactions between consumers and retailers/service providers under different scenarios, and then focuses on a unified framework for various information-theoretic privacy notions and on privacy mechanisms that can be learned directly from data.

Existing approaches such as differential privacy or information-theoretic privacy quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models of consumer-retailer interaction to better understand how retailers/service providers can balance their revenue objectives while remaining sensitive to user privacy concerns. Three scenarios are considered: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electric utility providers need to offer to privacy-sensitive consumers with alternative energy sources; and (iii) the market viability of offering privacy-guaranteed free online services. Game-theoretic models capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers.

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Experiments on both synthetic and real-world datasets show that GAP can greatly reduce the adversary's ability to infer private information at a small cost in data distortion.
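The minimax structure can be conveyed with a toy numeric example (purely illustrative; GAP itself learns the mechanism from data with neural networks): a binary secret S is correlated with data X, the privatizer releases X flipped with probability p under a distortion budget, and a strong adversary guesses S by the maximum a posteriori rule.

```python
# Toy privacy-distortion trade-off: the privatizer picks the flip
# probability p (its "mechanism") to minimize the accuracy of a MAP
# adversary, subject to a cap on p (the distortion budget).

def adversary_accuracy(corr, p):
    # X agrees with S with probability `corr`; the release flips X w.p. p.
    agree = corr * (1 - p) + (1 - corr) * p   # P(released X == S)
    return max(agree, 1 - agree)              # MAP adversary's accuracy

def best_privatizer(corr, max_flip):
    # Grid-search the flip probability that most confuses the adversary.
    grid = [i / 1000 for i in range(round(max_flip * 1000) + 1)]
    return min(grid, key=lambda p: adversary_accuracy(corr, p))

p_star = best_privatizer(corr=0.9, max_flip=0.3)
```

With a correlation of 0.9 and a budget of 0.3, the privatizer spends the whole budget (p* = 0.3), driving the adversary's accuracy down from 0.9 to 0.66; in GAP this hand-picked grid search is replaced by adversarial training.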
Contributors: Huang, Chong (Author) / Sankar, Lalitha (Thesis advisor) / Kosut, Oliver (Committee member) / Nedich, Angelia (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Synthetic power system test cases offer a wealth of new data for research and development purposes, as well as an avenue through which new kinds of analyses and questions can be examined. This work provides a methodology for creating and validating synthetic test cases, as well as several use cases in which access to synthetic data enables otherwise impossible analysis.

First, the questions of how synthetic cases may be generated automatically, and how synthetic samples should be validated to assess whether they are sufficiently "real," are considered. The transmission and distribution levels are treated separately, owing to the different nature of the two systems. Distribution systems are constructed by sampling from distributions observed in a dataset from the Netherlands. For transmission systems, only first-order statistics, such as generator limits or line ratings, are sampled statistically. The task of constructing an optimal power flow case from the sample sets is left to an optimization problem built on top of the optimal power flow formulation.

Second, attention is turned to examples where synthetic models inform analysis and modeling tasks. Co-simulation of transmission and multiple distribution systems is considered, in which distribution feeders are allowed to couple transmission substations. Next, a distribution power flow method is parametrized to better account for losses. Numerical values for the parametrization can be statistically supported thanks to the ability to generate thousands of feeders on command.
Contributors: Schweitzer, Eran (Author) / Scaglione, Anna (Thesis advisor) / Hedman, Kory W (Committee member) / Overbye, Thomas J (Committee member) / Monti, Antonello (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
For an (N+1)-bus power system, as many as 2^N power-flow solutions may exist. One of these solutions is known as the high-voltage (HV), or operable, solution. The rest of the solutions are the low-voltage (LV), or large-angle, solutions.

In this report, a recently developed non-iterative algorithm for solving the power-flow (PF) problem using the holomorphic embedding (HE) method is shown to be capable of finding the HV solution while avoiding convergence to nearby LV solutions, a drawback of all iterative methods. The HE method provides a novel non-iterative procedure to solve PF problems, eliminating the non-convergence and initial-estimate dependency issues that appear in traditional iterative methods. The detailed implementation of the HE method is discussed in the report.

While published work focuses mainly on finding the HV PF solution, modified holomorphically embedded formulations are proposed in this report to find the LV/large-angle solutions of the PF problem. It is theoretically proven that the proposed method is guaranteed to find all 2^N solutions to the PF problem and, if no solution exists, to indicate as much through oscillations in the maximal analytic continuation of the coefficients of the voltage power series obtained.

After presenting the derivation of the LV/large-angle formulations for both PQ and PV buses, numerical tests on the five-, seven-, and 14-bus systems are conducted to find all solutions of the system of nonlinear PF equations for those systems using the proposed HE method.

After completing the derivation to find all PF solutions using the HE method, it is shown that, with a proper algorithm, the proposed HE method can be used to find only the PF solutions of interest (i.e., type-1 PF solutions, with one positive-real-part eigenvalue of the Jacobian matrix). The closest unstable equilibrium point (UEP), one of the type-1 UEPs, can be obtained by the proposed HE method with limited dynamic models included.

The numerical performance as well as the robustness of the proposed HE method are investigated by implementing the algorithm on problematic cases and a large-scale power system.
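The flavor of the HE recursion can be conveyed with a scalar analog (a two-bus toy with the real equation v - v^2 = u, not the thesis's complex-valued formulation): embed the loading as v(s) - v(s)^2 = s*u, solve for the power-series coefficients recursively from the germ v(0) = 1, and evaluate the series at s = 1, where it converges to the HV root.

```python
from math import sqrt

def he_series_coeffs(u, n_terms):
    # Power-series coefficients c of v(s) satisfying v(s) - v(s)^2 = s*u
    # with germ c[0] = 1 (the no-load, high-voltage operating point).
    # Matching coefficients of s^n for n >= 1 gives the recursion
    #   c[n] = -( [n == 1]*u + sum_{k=1}^{n-1} c[k]*c[n-k] ).
    c = [1.0]
    for n in range(1, n_terms):
        conv = sum(c[k] * c[n - k] for k in range(1, n))
        c.append(-((u if n == 1 else 0.0) + conv))
    return c

def he_solution(u, n_terms=60):
    # Evaluate the embedded voltage series at s = 1.
    return sum(he_series_coeffs(u, n_terms))

u = 0.1                                 # hypothetical normalized loading
v_hv = he_solution(u)
v_closed = (1 + sqrt(1 - 4 * u)) / 2    # closed-form HV root for comparison
```

The series picks out the HV root of the quadratic without any initial estimate; choosing a different germ (or formulation, as in the report) steers the continuation toward the LV solution instead.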
Contributors: Mine, Yō (Author) / Tylavsky, Daniel (Thesis advisor) / Armbruster, Dieter (Committee member) / Holbert, Keith E. (Committee member) / Sankar, Lalitha (Committee member) / Vittal, Vijay (Committee member) / Undrill, John (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The standard optimal power flow (OPF) problem is an economic dispatch (ED) problem combined with transmission constraints, which are based on a static topology. However, topology control (TC) has been proposed in the past as a corrective mechanism to relieve overloads and voltage violations. Even though the benefits of TC have been demonstrated by several past research works, the computational complexity associated with TC has been a major deterrent to its implementation. The proposed work develops heuristics for TC and investigates their potential to improve the computational time of TC for various applications. The objective is to develop computationally light methods that harness the flexibility of the grid to derive maximum reliability benefits for the system. One goal of this research is to develop a tool capable of providing TC actions in a minimal time frame, which can be readily adopted by industry for real-time corrective applications.

A DC-based heuristic, i.e., a greedy algorithm, is developed and applied to improve the computational time of the TC problem while maintaining the ability to find quality solutions. In the greedy algorithm, an expression is derived that indicates the impact on the objective of a marginal change in the state of a transmission line. This expression is used to generate a priority list of candidate lines for switching that may provide large improvements to the system. The advantage of this method is that it is a fast heuristic compared to a mixed integer programming (MIP) approach.
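The priority-list idea can be sketched as follows (hypothetical line names and impact values, and a simplified scoring step in place of the dissertation's derived sensitivity expression): estimate the marginal objective impact of toggling each line, then rank candidates instead of solving a mixed-integer program over all switching combinations.

```python
def priority_list(impact_estimates, top_k=3):
    # impact_estimates: {line_name: estimated objective change if the
    # line's state is switched}. Negative values are cost reductions, so
    # rank ascending and keep only beneficial candidates.
    ranked = sorted(impact_estimates.items(), key=lambda kv: kv[1])
    return [line for line, impact in ranked[:top_k] if impact < 0]

# Hypothetical per-line impact estimates from a DC sensitivity expression.
estimates = {'line_A': -120.0, 'line_B': 35.0,
             'line_C': -410.5, 'line_D': -2.3}
candidates = priority_list(estimates)
```

Only the handful of lines on the resulting list need be checked with a full (AC or DC) evaluation, which is where the speedup over an exhaustive MIP search comes from.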

Alternatively, AC-based heuristics are developed for the TC problem and tested on actual data from PJM, ERCOT, and TVA. AC-based N-1 contingency analysis is performed to identify the contingencies that cause network violations. Simple proximity-based heuristics are developed, and the fast decoupled power flow is solved iteratively to identify the top five TC actions that reduce violations. Time-domain simulations are performed to ensure that the TC actions do not cause system instability. Simulation results show significant reductions in system violations from the application of the TC heuristics.
Contributors: Balasubramanian, Pranavamoorthy (Author) / Hedman, Kory W (Thesis advisor) / Vittal, Vijay (Committee member) / Ayyanar, Raja (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
With growing concern regarding environmental issues and the need for a more sustainable grid, power systems have seen a rapid expansion of renewable resources in the last decade. The uncertainty and variability of renewable resources have posed new challenges for system operators. Due to its energy-shifting and fast-ramping capabilities, energy storage (ES) has been considered an attractive solution to alleviate the increased renewable uncertainty and variability.

In this dissertation, stochastic optimization is utilized to evaluate the benefit of bulk energy storage in facilitating the integration of high levels of renewable resources into transmission systems. A cost-benefit analysis studies the cost-effectiveness of energy storage, and a two-step approach analyzes the effectiveness of using energy storage to provide ancillary services. Results show that as renewable penetration increases, energy storage can effectively compensate for the variability and uncertainty of renewable energy and offers increasing benefits to the system.

With increased renewable penetration, enhanced dispatch models are needed to operate energy storage efficiently. Because existing approaches do not fully utilize the flexibility of energy storage, two approaches are developed in this dissertation to improve its operational strategy. The first uses stochastic programming techniques: a stochastic unit commitment (UC) is solved to obtain energy storage schedules under different renewable scenarios, and operating policies are then constructed from the stochastic UC solutions to operate energy storage efficiently across multiple time periods. The second is a policy-function approach: by incorporating an offline analysis stage prior to actual operation, the patterns between system operating conditions and the optimal energy storage actions are identified using a data mining model. The resulting model is then used in real time to enhance a deterministic economic dispatch model and improve the utilization of energy storage. Results show that the policy-function approach outperforms a traditional approach in which a schedule determined and fixed at a prior look-ahead stage is used, while adding minimal computational difficulty to the real-time market.
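A minimal sketch of the offline-policy idea (illustrative features, actions, and a nearest-neighbor lookup standing in for the dissertation's actual data mining model): offline, pair observed operating conditions with the storage actions an optimal stochastic solution chose; in real time, look up the closest stored condition and reuse its action to guide the deterministic dispatch.

```python
def nearest_action(policy_samples, condition):
    # policy_samples: list of (feature_vector, storage_action_mw) pairs
    # gathered offline; condition: the current feature vector.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(policy_samples, key=lambda s: dist(s[0], condition))
    return action

# Hypothetical offline samples: (wind level, load level) -> storage MW,
# where negative values discharge and positive values charge.
samples = [((0.2, 0.9), -20.0),   # low wind, high load -> discharge
           ((0.8, 0.4), 15.0),    # high wind, low load -> charge
           ((0.5, 0.6), 0.0)]
act = nearest_action(samples, (0.75, 0.45))
```

The lookup itself is trivially cheap, which mirrors why the policy-function approach adds minimal computational burden to the real-time market.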
Contributors: Li, Nan (Author) / Hedman, Kory W (Thesis advisor) / Tylavsky, Daniel J (Committee member) / Heydt, Gerald T (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In order to meet the world's growing energy needs, it is necessary to create a reliable, robust, and resilient electric power grid. One way to ensure the creation of such a grid is through the extensive use of synchrophasor technology, which is based on devices called phasor measurement units (PMUs) and their derivatives, such as μPMUs. Global positioning system (GPS) time-synchronized wide-area monitoring, protection, and control enabled by PMUs has opened new ways for the power grid to tackle the problems it faces today. However, the implementation of new technologies brings new challenges, and one of those challenges for PMUs is the misuse of GPS as a method to obtain a time reference. The use of GPS in PMUs is intuitive, as it is a convenient method to time-stamp electrical signals, which in turn helps provide an accurate snapshot of the performance of the PMU-monitored section of the grid. However, GPS is susceptible to different types of signal interruption due to natural (e.g., weather) or unnatural (jamming, spoofing) causes. The focus of this thesis is on demonstrating the practical feasibility of GPS spoofing attacks on PMUs, as well as developing novel countermeasures against them. Prior research has demonstrated that GPS spoofing attacks on PMUs can cripple power system operation. The research conducted here first provides experimental evidence of the feasibility of such an attack using commonly available digital radios known as software-defined radios (SDRs). Next, it introduces a new countermeasure against such attacks using GPS signal redundancy and the low-power long-range (LoRa) spread spectrum modulation technique. The proposed approach checks the integrity of the GPS signal at remote locations and compares the data with the PMU's current output.
This countermeasure is a stepping stone toward a ready-to-deploy system that can provide an instant solution to the GPS spoofing detection problem for PMUs already deployed in the power grid.
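The redundancy idea can be sketched as a simple consistency check (function names, thresholds, and values are illustrative, not from the thesis): compare the PMU's reported GPS time against references relayed from remote receivers, e.g., over a LoRa link, and flag a likely spoof when the PMU deviates from the remote consensus.

```python
from statistics import median

def spoof_suspected(pmu_time, remote_times, tol_s=1e-3):
    # Compare the PMU's reported GPS time against a robust consensus of
    # remote reference receivers; a spoofed PMU drifts away from the
    # consensus while the remote receivers still agree with one another.
    return abs(pmu_time - median(remote_times)) > tol_s

true_t = 1_700_000_000.0                 # hypothetical GPS epoch time (s)
remotes = [true_t + 2e-6, true_t - 5e-6, true_t + 1e-6]
```

Using the median makes the consensus robust to one faulty remote receiver; the tolerance would be set well above GPS receiver jitter but below the timing error an attack must induce to matter for synchrophasor applications.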
Contributors: Saadedeen, Fakhri G (Author) / Pal, Anamitra (Thesis advisor) / Sankar, Lalitha (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Generative Adversarial Networks (GANs) have emerged as a powerful framework for generating realistic and high-quality data. In the original "vanilla" GAN formulation, two models, the generator and the discriminator, are engaged in a min-max game and optimize the same value function. Despite offering an intuitive approach, vanilla GANs often face stability challenges such as vanishing gradients and mode collapse. Addressing these common failures, recent work has proposed the use of tunable classification losses in place of traditional value functions. Although parameterized robust loss families, e.g., $\alpha$-loss, have shown promising characteristics as value functions, this thesis argues that the generator and discriminator require separate objective functions to achieve their different goals. As a result, this thesis introduces the $(\alpha_{D}, \alpha_{G})$-GAN, a parameterized class of dual-objective GANs, as an alternative to the standard vanilla GAN. The $(\alpha_{D}, \alpha_{G})$-GAN formulation, inspired by $\alpha$-loss, allows practitioners to tune the parameters $(\alpha_{D}, \alpha_{G}) \in [0,\infty)^{2}$ to provide a more stable training process. The objectives for the generator and discriminator in the $(\alpha_{D}, \alpha_{G})$-GAN are derived, and the advantages of using these objectives are investigated. In particular, the optimization trajectory of the generator is found to be influenced by the choice of $\alpha_{D}$ and $\alpha_{G}$. Empirical evidence is presented through experiments conducted on various datasets, including the 2D Gaussian Mixture Ring, the Celeb-A image dataset, and the LSUN Classroom image dataset. Performance metrics such as mode coverage and Fréchet Inception Distance (FID) are used to evaluate the effectiveness of the $(\alpha_{D}, \alpha_{G})$-GAN compared to the vanilla GAN and the state-of-the-art Least Squares GAN (LSGAN). The experimental results demonstrate that tuning $\alpha_{D} < 1$ leads to improved stability, robustness to hyperparameter choice, and competitive performance compared to LSGAN.
Contributors: Otstot, Kyle (Author) / Sankar, Lalitha (Thesis advisor) / Kosut, Oliver (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In the era of big data, more and more decisions and recommendations are being made by machine learning (ML) systems and algorithms. Despite their many successes, there have been notable deficiencies in the robustness, rigor, and reliability of these ML systems, which have had detrimental societal impacts. In the next generation of ML, these significant challenges must be addressed through careful algorithmic design, and it is crucial that practitioners and meta-algorithms have the necessary tools to construct ML models that align with human values and interests. To help address these problems, this dissertation studies a tunable loss function called α-loss for the ML setting of classification. The α-loss is a hyperparameterized loss function originating in information theory that continuously interpolates between the exponential (α = 1/2), log (α = 1), and 0-1 (α = ∞) losses, hence providing a holistic perspective on several classical loss functions in ML. Furthermore, the α-loss exhibits unique operating characteristics depending on the value (and regime) of α; notably, for α > 1, the α-loss robustly trains models in the presence of noisy training data. Thus, the α-loss can provide robustness to ML systems for classification tasks, with bearing on many applications, e.g., social media, finance, academia, and medicine; indeed, results are presented in which the α-loss produces more robust logistic regression models for COVID-19 survey data, with gains over state-of-the-art algorithmic approaches.
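The interpolation described above can be written down directly; the sketch below follows the expression commonly used for the α-loss as a function of the probability the classifier assigns to the true label, and should be read as an illustration rather than the dissertation's exact formulation.

```python
from math import log

def alpha_loss(p, alpha):
    # α-loss of the probability p in (0, 1] assigned to the true label:
    #   (α / (α - 1)) * (1 - p**((α - 1) / α))   for α != 1,
    # with the α = 1 case defined by continuity as the log loss -log(p).
    if alpha == 1:
        return -log(p)
    return (alpha / (alpha - 1)) * (1 - p ** ((alpha - 1) / alpha))
```

At α = 1/2 this reduces to 1/p - 1 (an exponential-type loss), at α = 1 to the log loss, and as α → ∞ it approaches the soft 0-1 loss 1 - p, matching the three anchor points of the interpolation.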
Contributors: Sypherd, Tyler (Author) / Sankar, Lalitha (Thesis advisor) / Berisha, Visar (Committee member) / Dasarathy, Gautam (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2022