Matching Items (3)

Description

Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture, supported by statistical tests, regarding the topology of the incomplete network graphs.
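
For reference, in one standard form the GC estimate for $M$ channels is one minus the determinant of the Gram matrix of normalized pairwise inner products. The NumPy sketch below computes that quantity for fully observed channels; the variable names and toy data are illustrative, and it does not include the maximum-entropy completion step developed in the thesis.

import numpy as np

def gc_estimate(X):
    """Generalized coherence estimate for channels stored as the rows of X (M x N).

    Forms the Gram matrix of normalized pairwise inner products and returns
    1 - det(G); the value lies in [0, 1] and grows as the channels become
    more mutually correlated.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm channels
    G = Xn @ Xn.conj().T                                 # normalized Gram matrix
    return 1.0 - np.linalg.det(G).real

# Example: three noisy sensors observing a common signal.
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
X = np.vstack([s + 0.5 * rng.standard_normal(1024) for _ in range(3)])
print(gc_estimate(X))
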
Contributors: Crider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is in contrast to lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and yields much smaller file sizes. However, it can noticeably affect quality if taken too far: the more a waveform is compressed, the more it degrades, and once a file has been lossy compressed the process cannot be reversed. This project observes the degradation of an audio signal after applying Singular Value Decomposition (SVD) compression, a lossy method that eliminates singular values from a signal's matrix.
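
As an illustrative sketch of SVD-based lossy audio compression (the thesis's exact matrix construction, frame length, and rank-selection rule are not specified here, so those choices below are assumptions), the signal is reshaped into frames, the smallest singular values are zeroed, and the frames are reconstructed:

import numpy as np

def svd_compress(signal, frame_len=512, keep=32):
    """Lossy-compress a 1-D audio signal by truncating singular values.

    The signal is reshaped into a (num_frames x frame_len) matrix, its SVD is
    taken, and only the `keep` largest singular values are retained before
    reconstruction. Information in the discarded components is lost for good.
    """
    n = len(signal) - len(signal) % frame_len           # trim to whole frames
    A = signal[:n].reshape(-1, frame_len)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[keep:] = 0.0                                      # discard smallest singular values
    return (U * s) @ Vt                                 # reconstructed (degraded) frames

# Example: compress a synthetic tone plus noise and measure the distortion.
t = np.arange(48000) / 48000.0
x = np.sin(2 * np.pi * 440.0 * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
x_hat = svd_compress(x).ravel()
print("relative error:", np.linalg.norm(x[:x_hat.size] - x_hat) / np.linalg.norm(x[:x_hat.size]))
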

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge.

This work introduces a tunable leakage measure called maximal $\alpha$-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely $\alpha$-loss. The choice of $\alpha$ determines the adversarial action, ranging from refining a belief for $\alpha = 1$ to guessing the most likely outcome under the posterior for $\alpha = \infty$; for these two values, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. Maximal $\alpha$-leakage is proved to have a composition property and to be robust to side information.
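
For reference, a common parameterization of $\alpha$-loss in the related literature (the dissertation may use an equivalent form) is, for a label $y$ and a predicted distribution $\hat{P}$,
\[
\ell_\alpha\big(y,\hat{P}\big) \;=\; \frac{\alpha}{\alpha-1}\left(1-\hat{P}(y)^{\,1-\frac{1}{\alpha}}\right), \qquad \alpha\in(1,\infty),
\]
which recovers the log-loss $-\log\hat{P}(y)$ as $\alpha \to 1$ (belief refinement) and the probability of error $1-\hat{P}(y)$ at $\alpha=\infty$ (hard posterior guessing), consistent with the two endpoints described above.
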

There is a fundamental disconnect between theoretical measures of information leakage and their use in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. The framework is formulated as a constrained minimax optimization of the expected $\alpha$-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with $\alpha=1$ is evaluated on publicly available data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while preserving accuracy for several a priori unknown downstream tasks.
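
As a rough, hedged sketch of the encoder-versus-adversary minimax training that a CFUR-style framework entails (here with $\alpha = 1$, i.e. log-loss), the toy PyTorch loop below censors a binary sensitive attribute while a reconstruction term stands in for the utility constraint. The data, network sizes, penalty weight, and variable names are all illustrative assumptions, and the constraint is imposed only as a soft penalty rather than an explicit constraint.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: X holds observed features, S is a binary sensitive attribute correlated with X.
n, d, r = 2000, 8, 4
S = torch.randint(0, 2, (n,))
X = torch.randn(n, d) + 1.5 * S.float().unsqueeze(1)

encoder = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, r))    # representation g(X)
decoder = nn.Linear(r, d)               # reconstruction used as a stand-in utility constraint
adversary = nn.Sequential(nn.Linear(r, 16), nn.ReLU(), nn.Linear(16, 2))  # tries to infer S

opt_enc = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 1.0                               # penalty weight standing in for the utility constraint

for step in range(2000):
    # Adversary step: learn to predict S from the (detached) representation.
    adv_loss = ce(adversary(encoder(X).detach()), S)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Encoder/decoder step: degrade the adversary's inference while keeping X reconstructable.
    Z = encoder(X)
    enc_loss = -ce(adversary(Z), S) + lam * mse(decoder(Z), X)
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
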

Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, $(\epsilon,\delta)$-DP and Rényi-DP. Applying these tools within the moments accountant framework makes it possible to track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds when training deep learning models.
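
For context, the standard baseline relationship between the two measures (the kind of bound that results like these refine) is the usual conversion from the Rényi-DP literature: a mechanism satisfying $(\alpha,\epsilon)$-Rényi-DP, where $\alpha > 1$ is the Rényi order, also satisfies, for every $\delta \in (0,1)$,
\[
\left(\epsilon + \frac{\log(1/\delta)}{\alpha - 1},\; \delta\right)\text{-DP}.
\]
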
Contributors: Liao, Jiachun (Author) / Sankar, Lalitha (Thesis advisor) / Kosut, Oliver (Committee member) / Zhang, Junshan (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2020