Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers and service providers can balance their revenue objectives against user privacy concerns. Three scenarios are considered: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer to privacy-sensitive consumers with alternative energy sources; and (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers.
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost in data distortion.
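The minimax training loop at the heart of GAP can be sketched on a toy problem. The following is a minimal illustration, not the dissertation's actual architecture: the data model (a private bit plus one correlated measurement), the additive-noise privatizer, the logistic adversary, and all the hyperparameters are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: private bit S and a correlated public measurement X (assumed model).
n = 4000
S = rng.integers(0, 2, n).astype(float)
X = S + 0.3 * rng.standard_normal(n)

# Privatizer: Y = X + sigma * Z with a learned noise level (hypothetical form).
# Adversary: logistic model p(S=1 | Y) = sigmoid(a * Y + c).
sigma, a, c = 0.5, 0.0, 0.0
lam, lr = 0.05, 0.2            # distortion weight, step size

for _ in range(4000):
    Z = rng.standard_normal(n)
    Y = X + sigma * Z
    p = sigmoid(a * Y + c)
    g = p - S                  # d(cross-entropy)/d(logit), per sample
    # Adversary: gradient DESCENT on its inference loss.
    a -= lr * np.mean(g * Y)
    c -= lr * np.mean(g)
    # Privatizer: gradient ASCENT on the adversary loss, minus a
    # distortion penalty lam * sigma^2 (so privacy is not free).
    sigma += lr * (np.mean(g * a * Z) - 2 * lam * sigma)

def adversary_accuracy(feat):
    """Train a fresh logistic adversary on feat and report its accuracy."""
    aa, cc = 0.0, 0.0
    for _ in range(2000):
        pp = sigmoid(aa * feat + cc)
        aa -= 0.5 * np.mean((pp - S) * feat)
        cc -= 0.5 * np.mean(pp - S)
    return np.mean((sigmoid(aa * feat + cc) > 0.5) == (S > 0.5))

acc_raw = adversary_accuracy(X)                                   # no privacy
acc_gap = adversary_accuracy(X + sigma * rng.standard_normal(n))  # GAP output
```

Even a fresh adversary trained on the privatized output infers the private bit markedly less reliably than one trained on the raw data, at the cost of the learned distortion.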
This thesis considers a full-duplex MIMO relay which amplifies and forwards the received signals between a source and a destination that do not have a line of sight. Full-duplex operation raises the problem of self-interference. Though all the links in the system undergo frequency-flat fading, the end-to-end effective channel is frequency selective. This is due to the imperfect cancellation of the self-interference at the relay: the residual self-interference acts as intersymbol interference at the destination, which is treated by equalization. It also leads to complications in the form of recursive equations to determine the input-output relationship of the system.
To overcome this, a signal flow graph approach using Mason's gain formula is proposed, where the effective channel is analyzed with careful attention to every loop and path the signal traverses. This gives a clear understanding of the orders of the polynomials involved in the transfer function, from which the desired conclusions can be drawn. However, the complexity of Mason's gain formula increases with the number of antennas at the relay, which can be overcome by the proposed linear-algebraic method. The input-output relationship derived using simple concepts of linear algebra can be generalized to any number of antennas, and its computational complexity is comparatively very low.
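The linear-algebraic idea can be sketched numerically. Assuming a simplified model with a one-symbol relay processing delay (the matrices, dimensions, and gain values below are illustrative, not the thesis's system parameters), unrolling the relay recursion shows why the end-to-end channel is frequency selective and yields its taps directly as matrix powers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model (one-symbol relay processing delay):
#   relay transmit  t[n] = G (H1 x[n-1] + F t[n-1])
#   destination     y[n] = H2 t[n]
# where F is the residual self-interference channel at the relay.
Nr, Nt = 3, 3                              # relay receive/transmit antennas
H1 = rng.standard_normal((Nr, 1))          # source -> relay (1-antenna source)
H2 = rng.standard_normal((1, Nt))          # relay -> destination
F  = 0.1 * rng.standard_normal((Nr, Nt))   # residual self-interference
G  = rng.standard_normal((Nt, Nr))         # relay amplification matrix

# Unrolling the recursion: y[n] = sum_k H2 (G F)^(k-1) G H1 x[n-k],
# i.e. an effective frequency-selective channel with taps:
def effective_taps(K):
    taps, M = [], G @ H1
    for _ in range(K):
        taps.append((H2 @ M).item())
        M = G @ F @ M                      # each loop traversal adds one delay
    return np.array(taps)

# Brute-force simulation of the recursion for comparison.
L = 30
x = rng.standard_normal(L)
t = np.zeros((Nt, 1))
y = np.zeros(L)
for n in range(L):
    y[n] = (H2 @ t).item()
    t = G @ (H1 * x[n] + F @ t)            # x[n] reaches y[n+1] (one delay)

h = effective_taps(L)
y_conv = np.array([sum(h[k] * x[n - 1 - k] for k in range(n)) for n in range(L)])
```

The convolution with the closed-form taps reproduces the recursive simulation exactly, and the construction generalizes to any antenna count without re-deriving loop gains by hand.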
For a full-duplex amplify-and-forward MIMO relay system, assuming equalization at the destination, new mechanisms that can compensate for the effect of residual self-interference have been implemented at the relay, namely equal-gain transmission and antenna selection. Though equal-gain transmission does not perform better than maximal ratio transmission, it offers a trade-off between performance and implementation complexity. Using the proposed antenna selection strategy, one pair of transmit-receive antennas at the relay is selected based on one of four selection criteria discussed. Outage probability analysis is performed for all the strategies presented, and a detailed comparison is established. Considering a minimum mean-squared error decision feedback equalizer at the destination, a bound on the outage probability is obtained for the antenna selection case and is used for comparisons. A cross-over point is observed when comparing the outage probabilities of equal-gain transmission and antenna selection as the signal-to-noise ratio increases: beyond that point antenna selection outperforms equal-gain transmission, which is explained by the reduced residual self-interference under antenna selection.
In underlay CR systems, where secondary users (SUs) transmit simultaneously with primary users (PUs), reliable communication must be guaranteed for the PUs, which can degrade the SUs' performance. To overcome this issue, cooperation exclusively among SUs is achieved through multi-user diversity (MUD), where each SU is subject to an instantaneous interference constraint at the primary receiver. The number of active SUs satisfying this constraint is therefore random. Under different user distributions with the same mean number of SUs, stochastic ordering of SU performance metrics, including bit error rate (BER), outage probability, and ergodic capacity, is established even without closed-form expressions. Furthermore, cooperation is assumed between the primary and secondary networks, where the SUs exceeding the interference constraint facilitate the PU's transmission by relaying its signal. A fundamental performance trade-off between the primary and secondary networks is observed, and it is shown that the proposed scheme outperforms non-cooperative underlay CR systems in terms of overall system BER and sum achievable rate.
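To see why the distribution of the random number of active SUs matters even at a fixed mean, consider a small numerical illustration. The Rayleigh fading model, binomial user distribution, and parameter values are illustrative assumptions, not the dissertation's exact setup:

```python
import math

# Rayleigh fading: per-SU SNR ~ Exp(mean=gbar); single-SU outage probability:
gbar, gth = 10.0, 3.0
q = 1.0 - math.exp(-gth / gbar)          # P(one SU below threshold)

# MUD selects the best of the N active SUs, so the system is in outage iff
# ALL N are below threshold (with N = 0, i.e. no SU meeting the interference
# constraint, counted as outage).
def outage_fixed(n):
    return q ** n                        # N = n deterministically

def outage_binomial(n_max, p):
    # E[q^N] for N ~ Binomial(n_max, p), via the probability generating
    # function: E[q^N] = (1 - p + p*q)^n_max.
    return (1.0 - p + p * q) ** n_max

# Same mean number of active SUs (4), different distributions:
po_det = outage_fixed(4)                 # N fixed at 4
po_bin = outage_binomial(8, 0.5)         # N ~ Binomial(8, 0.5), mean 4
```

Because q^n is convex in n, Jensen's inequality gives E[q^N] >= q^{E[N]}: a random number of active SUs is strictly worse for outage than a deterministic one with the same mean, which is the kind of ordering the stochastic-ordering framework formalizes.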
Similar to conventional cellular networks, CR systems suffer from an overloaded receiver having to manage signals from a large number of users. To address this issue, device-to-device (D2D) communication has been proposed, where direct transmission links are established between users in close proximity to offload the system traffic. Several new cooperative spectrum access policies are proposed that allow multiple D2D pairs to coexist in order to improve spectral efficiency. Despite the additional interference, it is shown that both the cellular user's (CU) and the individual D2D user's achievable rates can be improved simultaneously when the number of D2D pairs is below a certain threshold, resulting in a significant multiplexing gain in terms of D2D sum rate. This threshold is quantified for different policies using second-order approximations of the average achievable rates for both the CU and the individual D2D user.
power, gas, and communication networks. Ensuring the security of these
infrastructures is of utmost importance. This task becomes ever more challenging as
the inter-dependence among these infrastructures grows and a security breach in one
infrastructure can spill over to the others. The implication is that the security practices
and analyses recommended for these infrastructures should be carried out in coordination.
This thesis, focusing on the power grid, explores strategies to secure the system that
look into the coupling of the power grid to the cyber infrastructure, used to manage
and control it, and to the gas grid, which supplies an increasing amount of reserves to
overcome contingencies.
The first part (Part I) of the thesis, including chapters 2 through 4, focuses on
the coupling of the power and the cyber infrastructure that is used for its control and
operations. The goal is to detect malicious actors who gather information about the
operation of the power grid in order to later attack the system. In chapter 2, we propose a
hierarchical architecture that correlates the analysis of high-resolution Micro-Phasor
Measurement Unit (microPMU) data and traffic analysis on the Supervisory Control
and Data Acquisition (SCADA) packets, to infer the security status of the grid and
detect the presence of possible intruders. An essential part of this architecture is
tied to the analysis of the microPMU data. In chapter 3 we establish a set of anomaly
detection rules on microPMU data that flag "abnormal behavior". A placement strategy
for microPMU sensors is also proposed to maximize the sensitivity in detecting anomalies.
In chapter 4, we focus on developing rules that can localize the source of an event
using microPMU data, to further check whether a cyber attack is causing the anomaly, by
correlating SCADA traffic with the microPMU data analysis results. The thread that
unifies the data analysis in this chapter is the fact that decisions are made without
fully estimating the state of the system; instead, decisions are made using
a set of physical measurements that falls short, by orders of magnitude, of the
number needed for observability. More specifically, in the first part of this chapter (sections 4.1-
4.2), using microPMU data in the substation, methodologies for online identification of
the source Thevenin parameters are presented. This methodology is used to identify
reconnaissance activity on the normally-open switches in the substation, initiated
by attackers to gauge their controllability over the cyber network. The applications
of this methodology in monitoring the voltage stability of the grid are also discussed.
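As a rough sketch of this kind of online identification (the circuit values, window length, and noise level below are hypothetical, not the thesis's algorithm), the source Thevenin parameters can be estimated from a window of voltage and current phasors by linear least squares, since V = E - Z*I is linear in the unknowns E and Z:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth Thevenin equivalent seen from the measurement point
# (hypothetical per-unit values for illustration).
E_true = 1.02 * np.exp(1j * 0.05)        # source EMF phasor
Z_true = 0.01 + 0.08j                    # Thevenin impedance

# Synthetic microPMU window: varying load draws different currents, and
# V = E - Z * I holds at every sample, plus small measurement noise.
m = 50
I = (0.5 + rng.random(m)) * np.exp(1j * rng.uniform(-0.3, 0.3, m))
V = E_true - Z_true * I \
    + 1e-4 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Least-squares fit of the two complex unknowns:
# V_k = E - Z * I_k  ->  A @ [E, Z]^T = V  with rows A_k = [1, -I_k].
A = np.column_stack([np.ones(m, complex), -I])
(E_hat, Z_hat), *_ = np.linalg.lstsq(A, V, rcond=None)
```

Provided the current varies enough over the window to condition the fit, a few dozen phasor samples recover E and Z closely, which is what makes online tracking of the Thevenin equivalent feasible.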
In the second part of this chapter (sections 4.3-4.5), we investigate the localization
of faults. Since the number of PMU sensors available to carry out the inference
is insufficient to ensure observability, the problem can be viewed as that of under-sampling
a "graph signal"; the analysis leads to a PMU placement strategy that can
achieve the highest resolution in localizing the fault, for a given number of sensors.
In both cases, the results of the analysis are leveraged in the detection of cyber-physical
attacks, where microPMU data and relevant SCADA network traffic information
are compared to determine if a network breach has affected the integrity of the system
information and/or operations.
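The undersampled graph-signal view can be illustrated numerically. Assuming a small hypothetical topology and a K-bandlimited fault signature (both illustrative, not the thesis's grid model), a standard greedy sampling heuristic places sensors so that the sampled graph Fourier basis stays well conditioned:

```python
import numpy as np

# A small example topology (hypothetical 8-bus ring with one chord).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0), (2, 6)]
n = 8
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# Graph Fourier basis: Laplacian eigenvectors, ordered by frequency.
_, U = np.linalg.eigh(L)
K = 3                        # assume the signal of interest is K-bandlimited
VK = U[:, :K]                # low-frequency basis vectors

# Greedy placement: repeatedly add the node that maximizes the smallest
# singular value of VK restricted to the chosen rows (a standard
# graph-signal sampling heuristic).
chosen = []
for _ in range(K):
    best, best_node = -1.0, None
    for v in range(n):
        if v in chosen:
            continue
        s = np.linalg.svd(VK[chosen + [v], :], compute_uv=False)
        if s[-1] > best:
            best, best_node = s[-1], v
    chosen.append(best_node)

# Sanity check: a K-bandlimited signal is exactly recoverable from the
# K chosen samples.
a_true = np.array([1.0, -0.5, 0.2])
x = VK @ a_true                                   # full graph signal
a_hat = np.linalg.solve(VK[chosen, :], x[chosen])  # from K samples only
x_hat = VK @ a_hat
```

The same conditioning criterion that makes the signal recoverable is what drives the resolution of the placement: with only K sensors, localization is only as sharp as the bandlimited model allows.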
In the second part of this thesis (Part II), the security analysis considers the adequacy
and reliability of schedules for the gas and power networks. Joint scheduling of
supply in gas and power networks is motivated by the increasing
reliance of power grids on natural gas generators (and, indirectly, on gas pipelines)
as providers of critical reserves. Chapter 5 focuses on unveiling the challenges of
this problem and providing solutions to it.
A common aspect of the two frameworks is the packet service time. Thus, the effect of multiple channels on the service time is studied first. The problem is formulated as an optimal stopping problem in which the secondary user (SU) must decide at which channel to stop sensing and begin transmission. I provide a closed-form expression for this optimal stopping rule and for the SU's optimal transmission power.
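The flavor of such a stopping rule can be illustrated by backward induction on a toy model. The discrete rate distribution, sensing cost, and channel budget below are illustrative assumptions, not the thesis's channel model or its closed-form rule:

```python
import numpy as np

# Toy model: the SU may sense up to N channels in sequence; sensing costs c
# and reveals the channel's achievable rate, drawn i.i.d. from a discrete
# distribution. After each observation the SU transmits there or moves on.
rates = np.array([0.0, 1.0, 2.0, 3.0])   # possible rates (bits/s/Hz)
probs = np.array([0.1, 0.4, 0.3, 0.2])   # their probabilities
c, N = 0.1, 5                            # sensing cost, channel budget

# Backward induction: V[k] = expected value when about to sense channel k.
# With no channels left the SU gets nothing more, so V[N] = 0 (it transmits
# on the last sensed channel whatever its rate).
V = np.zeros(N + 1)
thresholds = np.zeros(N)
for k in range(N - 1, -1, -1):
    cont = V[k + 1]
    thresholds[k] = cont                 # stop iff observed rate >= cont
    V[k] = np.dot(probs, np.maximum(rates, cont)) - c
```

The optimal policy is a threshold rule, and the thresholds decrease as channels run out, reflecting the growing pressure to stop; V[0] is the SU's optimal expected reward.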
The average-delay framework is then presented for a single-channel CR system with a base station (BS) that schedules the SUs to minimize the average delay while protecting the primary users (PUs) from harmful interference. One contribution of the proposed algorithm is its suitability for heterogeneous-channel systems, in which users with statistically poor channel quality would otherwise suffer worse delay performance. The proposed algorithm guarantees the prespecified delay performance to each SU without violating the PU's interference constraint.
Finally, in the hard-deadline framework, I propose three algorithms that maximize the system's throughput while guaranteeing that the required percentage of packets is transmitted by its deadline. The proposed algorithms work in heterogeneous systems where the BS serves different types of users having real-time (RT) and non-real-time (NRT) data. I show that two of the proposed algorithms have low complexity: the power policies of both the RT and NRT users are given in closed form, and the scheduler itself is of low complexity.
Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is the opposite of lossless compression, in which the signal is not degraded at all. While lossless compression may seem like the best option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more a waveform is compressed, the more degradation there is, and once a file has been lossy-compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy compression that eliminates singular values from a signal's matrix.
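The compression studied here can be sketched in a few lines: reshape the signal into a matrix, truncate its singular value decomposition, and measure the degradation. The synthetic test signal, sample rate, and matrix shape below are illustrative assumptions, not the project's actual audio data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "audio": a sum of sinusoids plus light noise, reshaped into a
# square matrix so that SVD can act on it (a common way to apply matrix
# methods to a 1-D signal).
t = np.arange(65536) / 44100.0
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 880 * t)
          + 0.01 * rng.standard_normal(t.size))
M = signal.reshape(256, 256)

U, s, Vt = np.linalg.svd(M, full_matrices=False)

def compress(rank):
    """Keep only the largest `rank` singular values (lossy, not reversible)."""
    approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    err = np.linalg.norm(M - approx) / np.linalg.norm(M)  # relative error
    return approx, err

_, err_low  = compress(5)    # aggressive compression, more degradation
_, err_high = compress(50)   # milder compression, less degradation
```

By the Eckart-Young theorem the truncated SVD is the best low-rank approximation in the Frobenius norm, so the reconstruction error shrinks monotonically as more singular values are kept, which is exactly the compression-versus-degradation trade-off the project measures.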