Matching Items (26)

Maximum Entropy Surrogation in Multiple Channel Signal Detection

Description

Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
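
For context on why the GC estimate requires data from every sensor pair, here is a minimal numerical sketch of the statistic, assuming the standard form γ̂² = 1 − det(Ĉ), where Ĉ collects the normalized pairwise inner products between channels; the maximum-entropy completion of missing entries developed in the thesis is not shown.

```python
import numpy as np

def generalized_coherence(X):
    """Generalized coherence (GC) estimate for multichannel detection.

    X : complex array of shape (M, N) -- M sensor channels, N samples each.
    Returns gamma^2 = 1 - det(C_hat), where C_hat is the matrix of
    normalized pairwise inner products (sample coherences). Every entry
    of C_hat needs a pairwise measurement, hence the completely
    connected graph (or fusion center) requirement.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C_hat = Xn @ Xn.conj().T                    # c_ij = <x_i, x_j> / (||x_i|| ||x_j||)
    return 1.0 - np.real(np.linalg.det(C_hat))  # det is real for a Hermitian PSD matrix

# Example: 4 channels, 128 snapshots, noise only (H0) -- GC should be small.
rng = np.random.default_rng(0)
M, N = 4, 128
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
print(generalized_coherence(noise))
```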

Date Created
  • 2014-05

Visual Surround Sound and its Applications

Description

The world of a hearing-impaired person is much different from that of somebody capable of discerning the different frequencies and magnitudes of sound waves with their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through a digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether it is a threat to their well-being within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they are facing. The purpose of this project was to level the playing field, so that a person who is hard of hearing could receive the same sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created: a system that takes a surround-sound input and illuminates LEDs around the periphery of a pair of glasses based on the direction, frequency, and amplitude of the audio. This provides the user with crucial information about the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
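
As a rough illustration of the mapping described above, the sketch below converts per-channel loudness of one multichannel audio frame into LED brightness values. The channel layout, angles, and RMS-to-brightness rule are illustrative assumptions, and the frequency encoding used by the real system is omitted.

```python
import numpy as np

# Hypothetical 5-channel surround layout: angle (degrees) of each LED group
# around the glasses, relative to the direction the user faces.
CHANNEL_ANGLES = {"L": -30, "R": 30, "C": 0, "Ls": -110, "Rs": 110}

def led_levels(frame, channels=("L", "R", "C", "Ls", "Rs"), full_scale=1.0):
    """Map one frame of multichannel audio (n_channels x n_samples)
    to per-direction LED brightness in [0, 1] using RMS amplitude."""
    rms = np.sqrt(np.mean(frame ** 2, axis=1))          # loudness per channel
    brightness = np.clip(rms / full_scale, 0.0, 1.0)    # normalize to LED PWM range
    return {ch: (CHANNEL_ANGLES[ch], float(b)) for ch, b in zip(channels, brightness)}

# Example: a loud event in the right-surround channel lights the LED group at +110 degrees.
rng = np.random.default_rng(1)
frame = 0.05 * rng.standard_normal((5, 1024))
frame[4] += 0.8 * np.sin(2 * np.pi * 440 * np.arange(1024) / 48000)
print(led_levels(frame))
```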

Date Created
  • 2015-05

Audio Waveform Sample SVD Compression and Impact on Performance

Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is in contrast to lossless compression, in which the signal is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more compression is applied to a waveform, the more degradation there is, and once a file has been lossy compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy method that eliminates singular values from a signal's matrix.
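
The sketch below illustrates one plausible way to apply SVD compression to a waveform: frame the 1-D signal into a matrix, truncate its singular values, and reconstruct. The framing length and rank are illustrative choices and may differ from the thesis's setup.

```python
import numpy as np

def svd_compress(audio, frame_len=512, k=32):
    """Lossy-compress a mono signal by truncating singular values.

    The 1-D signal is reshaped into a (frame_len x n_frames) matrix,
    decomposed as U @ diag(s) @ Vt, and only the k largest singular
    values (and their vectors) are kept.
    """
    n_frames = len(audio) // frame_len
    X = audio[: n_frames * frame_len].reshape(n_frames, frame_len).T
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]            # store only the rank-k factors

def svd_decompress(U, s, Vt):
    X_hat = (U * s) @ Vt                         # rank-k reconstruction
    return X_hat.T.reshape(-1)

# Example on a synthetic tone plus noise; a larger k means less degradation.
t = np.arange(48000) / 48000.0
audio = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.default_rng(0).standard_normal(48000)
rec = svd_decompress(*svd_compress(audio, k=16))
err = np.linalg.norm(audio[: len(rec)] - rec) / np.linalg.norm(audio[: len(rec)])
print(f"relative reconstruction error: {err:.3f}")
```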

Date Created
  • 2021-05

Outage Probability Analysis of Full-Duplex Amplify-and-Forward MIMO Relay Systems

Description

Multiple-input multiple-output (MIMO) systems have gained attention in the last decade due to the benefits they provide in enhancing the quality of communications. At the same time, full-duplex communication has attracted remarkable attention due to its ability to improve spectral efficiency compared to existing half-duplex systems. Using full-duplex communication in MIMO cooperative networks can provide solutions that outperform existing systems by allowing simultaneous transmission and reception at high data rates.

This thesis considers a full-duplex MIMO relay which amplifies and forwards the received signals between a source and a destination that do not have a line of sight. Full-duplex operation raises the problem of self-interference. Although all the links in the system undergo frequency-flat fading, the end-to-end effective channel is frequency selective. This is due to imperfect cancellation of the self-interference at the relay: the residual self-interference acts as intersymbol interference at the destination, which is treated by equalization. It also leads to complications in the form of recursive equations when determining the input-output relationship of the system.
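
As a rough illustration (a scalar, single-antenna simplification rather than the MIMO formulation studied here), the sketch below shows how the residual self-interference loop turns flat links into a frequency-selective end-to-end channel with a geometric tail of echoes.

```python
import numpy as np

# Scalar sketch of the recursion described above: the relay amplifies what it
# receives, but a residual self-interference (RSI) path feeds its own delayed
# transmission back into its input.
h_sr, h_rd, h_si, g = 0.9, 0.8, 0.15, 1.2   # flat link gains (source->relay, relay->dest, RSI) and relay gain

def effective_impulse_response(n_taps=8):
    """End-to-end response seen at the destination for a unit source symbol."""
    taps = np.zeros(n_taps)
    loop = g * h_si                          # gain of one trip around the RSI loop
    for k in range(n_taps):
        taps[k] = h_rd * g * h_sr * loop ** k   # k loop trips add k symbol periods of delay
    return taps

h_eff = effective_impulse_response()
print(np.round(h_eff, 4))                    # geometric tail -> intersymbol interference
print(np.abs(np.fft.fft(h_eff, 64))[:8])     # non-flat spectrum: frequency selective
```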

To overcome this, a signal flow graph approach using Mason's gain formula is proposed, in which the effective channel is analyzed with careful attention to every loop and path the signal traverses. This gives a clear understanding of the orders of the polynomials involved in the transfer function, from which the desired conclusions can be drawn. However, the complexity of Mason's gain formula grows with the number of antennas at the relay, which is overcome by the proposed linear-algebraic method. The input-output relationship derived using basic linear algebra generalizes to any number of antennas, and its computational complexity is comparatively low.

For a full-duplex amplify-and-forward MIMO relay system, assuming equalization at the destination, new mechanisms are implemented at the relay to compensate for the effect of residual self-interference, namely equal-gain transmission and antenna selection. Although equal-gain transmission does not perform better than maximal ratio transmission, it offers a trade-off between performance and implementation complexity. Using the proposed antenna selection strategy, one pair of transmit-receive antennas at the relay is selected based on four selection criteria. Outage probability analysis is performed for all the strategies presented, and a detailed comparison is established. Considering a minimum mean-squared error decision feedback equalizer at the destination, a bound on the outage probability is obtained for the antenna selection case and is used for comparisons. As the signal-to-noise ratio increases, a cross-over point is observed in the outage probabilities of equal-gain transmission and antenna selection: beyond that point antenna selection outperforms equal-gain transmission, which is explained by the reduced residual self-interference in the antenna selection method.

Date Created
  • 2018

Channel Estimation in Half and Full Duplex Relays

Description

Both two-way relays (TWR) and full-duplex (FD) radios are spectrally efficient, and their integration shows great potential to further improve spectral efficiency, offering a solution for fifth-generation wireless systems. High-quality channel state information (CSI) is a key component of the implementation and performance of an FD TWR system, making channel estimation in FD TWRs crucial.

The impact of channel estimation on spectral efficiency in half-duplex multiple-input-multiple-output (MIMO) TWR systems is investigated, and the trade-off between training energy and data energy is analyzed. In the case where the two sources are symmetric in power and number of antennas, a closed-form expression for the optimal ratio of data energy to total energy is derived. It can be shown that the achievable rate is a monotonically increasing function of the data length. The asymmetric case is discussed as well.

Efficient and accurate training schemes for FD TWRs are essential to profit from the inherently spectrally efficient structures of both FD and TWR. A novel one-block training scheme with a maximum likelihood (ML) estimator is proposed to estimate the channels between the nodes and the residual self-interference (RSI) channel simultaneously. Baseline training schemes are also considered for comparison with the one-block scheme. The Cramér-Rao bounds (CRBs) of the training schemes are derived and analyzed using the asymptotic properties of Toeplitz matrices. The benefit of estimating the RSI channel is shown analytically in terms of Fisher information.
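
For orientation only, the sketch below shows a plain least-squares baseline for training-based MIMO channel estimation under the linear model Y = HS + N; it is not the proposed one-block ML scheme and does not estimate the RSI channel.

```python
import numpy as np

def ls_channel_estimate(Y, S):
    """Least-squares MIMO channel estimate from a known training block.

    Model: Y = H @ S + N, with training matrix S (n_tx x n_train) and
    received block Y (n_rx x n_train). Returns H_hat = Y S^H (S S^H)^{-1}.
    """
    return Y @ S.conj().T @ np.linalg.inv(S @ S.conj().T)

rng = np.random.default_rng(0)
n_tx, n_rx, n_train, snr_db = 2, 2, 64, 20
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
S = (rng.standard_normal((n_tx, n_train)) + 1j * rng.standard_normal((n_tx, n_train))) / np.sqrt(2)
noise_std = 10 ** (-snr_db / 20)
N = noise_std * (rng.standard_normal((n_rx, n_train)) + 1j * rng.standard_normal((n_rx, n_train))) / np.sqrt(2)
Y = H @ S + N
H_hat = ls_channel_estimate(Y, S)
print(np.linalg.norm(H - H_hat) / np.linalg.norm(H))   # small estimation error at 20 dB SNR
```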

To obtain fundamental and analytic results of how the RSI affects the spectral efficiency, one-way FD relay systems are studied. Optimal training design and ML channel estimation are proposed to estimate the RSI channel. The CRBs are derived and analyzed in closed-form so that the optimal training sequence can be found via minimizing the CRB. Extensions of the training scheme to frequency-selective channels and multiple relays are also presented.

Simultaneous sensing and transmission in an FD cognitive radio system with MIMO is considered. The trade-off between the transmission rate and the detection accuracy is characterized by the sum-rate of the primary and secondary users. Different beamforming and combining schemes are proposed and compared.

Date Created
  • 2018

Data-Driven and Game-Theoretic Approaches for Privacy

Description

In the past few decades, there has been a remarkable shift in the boundary between public and private information. Information technology and electronic communications allow service providers (businesses) to collect large amounts of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates interactions between privacy-sensitive consumers and retailers/service providers under different scenarios, and then focuses on a unified framework for information-theoretic privacy and on privacy mechanisms that can be learned directly from data.

Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer to privacy-sensitive consumers with alternative energy sources; (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers.

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data.
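
The toy sketch below conveys the constrained-minimax idea in a deliberately reduced form: the privatizer's action space is a single additive-noise scale chosen under a distortion budget, and the adversary is the Bayes-optimal threshold rule for this Gaussian toy model. In the actual GAP framework, both players are neural networks trained adversarially from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: public feature x is correlated with a private bit s.
n = 5000
s = rng.integers(0, 2, n)                      # private attribute
x = s + 0.5 * rng.standard_normal(n)           # public data leaks s

def adversary_accuracy(x_priv, s):
    """Bayes-optimal threshold adversary for this symmetric Gaussian toy model
    (stands in for the learned adversary network in GAP)."""
    return np.mean((x_priv > 0.5) == s)

def distortion(x_priv, x):
    return np.mean((x_priv - x) ** 2)

# "Privatizer" action space: additive Gaussian noise with scale sigma.
# Constrained minimax: pick the mechanism that minimizes the adversary's
# best-response accuracy subject to a distortion budget.
budget = 1.0
best = None
for sigma in np.linspace(0.0, 3.0, 31):
    x_priv = x + sigma * rng.standard_normal(n)
    d = distortion(x_priv, x)
    if d <= budget:
        acc = adversary_accuracy(x_priv, s)
        if best is None or acc < best[1]:
            best = (sigma, acc, d)

print(f"chosen sigma={best[0]:.2f}, adversary accuracy={best[1]:.3f}, distortion={best[2]:.3f}")
```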

Date Created
  • 2018

Designing a Software Platform for Evaluating Cyber-Attacks on the Electric Power Grid

Description

The energy management system (EMS) is at the heart of the operation and control of a modern electrical grid. For economic, safety, and security reasons, access to an industrial-grade EMS and real-world power system data is extremely limited. Therefore, the ability to simulate an EMS is invaluable for researching the EMS under normal and anomalous operating conditions.

I first lay the groundwork for a basic EMS loop simulation in modern power grids and review a class of cybersecurity threats called false data injection (FDI) attacks. I then propose a software architecture as the basis of a software simulation of the EMS loop and describe an actual software platform built using the proposed architecture. I also explain in detail the power analysis libraries used for building the platform, with examples and illustrations from the implemented application. Finally, I use the platform to simulate FDI attacks on two synthetic power system test cases and analyze and visualize the consequences using the capabilities built into the platform.
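
As background on the attack class, the sketch below reproduces the well-known stealthy FDI construction against DC state estimation, where any attack of the form a = Hc shifts the state estimate without changing the bad-data residual. Whether the platform's test cases use exactly this measurement model is not stated here, so this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DC state estimation: z = H x + e, with m measurements of n states.
m, n = 8, 3
H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
z = H @ x_true + 0.01 * rng.standard_normal(m)

def wls_estimate(z, H):
    """Least-squares state estimate (unit measurement weights)."""
    return np.linalg.lstsq(H, z, rcond=None)[0]

def residual_norm(z, H):
    x_hat = wls_estimate(z, H)
    return np.linalg.norm(z - H @ x_hat)

# Stealthy false data injection: an attack a = H c shifts the estimate by c
# but leaves the residual (the usual bad-data detection statistic) unchanged.
c = np.array([0.5, -0.2, 0.0])
a = H @ c
print("residual without attack:", residual_norm(z, H))
print("residual with a = Hc   :", residual_norm(z + a, H))
print("estimate shift         :", wls_estimate(z + a, H) - wls_estimate(z, H))
```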

Date Created
  • 2019

Anticipating Postoperative Delirium During Cardiac Surgeries Involving Deep Hypothermia Circulatory Arrest

Description

Aortic aneurysms and dissections are life-threatening conditions addressed by replacing damaged sections of the aorta. Blood circulation must be halted to facilitate repairs. Ischemia places the body, especially the brain, at risk of damage. Deep hypothermia circulatory arrest (DHCA) is employed to protect patients and provide time for surgeons to complete repairs, on the basis that reducing body temperature suppresses the metabolic rate. Supplementary surgical techniques can be employed to reinforce the brain's protection and increase the duration for which circulation can be suspended. Even then, protection is not completely guaranteed. A medical condition that can arise early in recovery is postoperative delirium, which is correlated with poor long-term outcomes. This study develops a methodology to intraoperatively monitor neurophysiology through electroencephalography (EEG) and anticipate postoperative delirium. The earliest opportunity to detect complications through EEG is immediately following DHCA, during warming. The first observable electrophysiological activity after complete suppression is a phenomenon known as burst suppression, which is related to the brain's metabolic state and the recovery of nominal neurological function. A metric termed the burst suppression duty cycle (BSDC) is developed to characterize these changing electrophysiological dynamics. Predictions of postoperative delirium incidence are made by identifying deviations in the way these dynamics evolve. Sixteen cases are examined in this study. Accurate predictions can be made: on average, 89.74% of cases are correctly classified when burst suppression concludes and 78.10% when burst suppression begins. The best-case receiver operating characteristic curve has an area under its convex hull of 0.8988, whereas the worst-case area under the hull is 0.7889. These results demonstrate the feasibility of monitoring BSDC to anticipate postoperative delirium during burst suppression. They also motivate further analysis to identify footprints of the causal mechanisms of neural injury within BSDC. Being able to raise warning signs of postoperative delirium early provides an opportunity to intervene and potentially avert neurological complications. Doing so would improve the success rate of surgery and the quality of life afterward.
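
A rough sketch of how a burst suppression duty cycle could be computed from EEG is given below, assuming BSDC is the fraction of each window spent in suppression as judged by a simple amplitude threshold; the thesis's precise definition and segmentation method may differ.

```python
import numpy as np

def burst_suppression_duty_cycle(eeg, fs, amp_thresh=5.0, win_sec=60.0):
    """Rough burst suppression duty cycle (BSDC) estimate.

    1. Label each sample as 'suppressed' when the smoothed signal envelope
       falls below amp_thresh (microvolts).
    2. BSDC over each window = fraction of suppressed samples.
    The thresholding here is deliberately simple; clinical segmentation
    of bursts versus suppression is more involved.
    """
    envelope = np.abs(eeg)
    k = max(1, int(0.25 * fs))                   # ~0.25 s smoothing of the envelope
    envelope = np.convolve(envelope, np.ones(k) / k, mode="same")
    suppressed = envelope < amp_thresh

    win = int(win_sec * fs)
    n_win = len(eeg) // win
    return np.array([suppressed[i * win:(i + 1) * win].mean() for i in range(n_win)])

# Example: synthetic EEG alternating 5 s bursts (~20 uV) and 10 s suppression (~2 uV).
fs, dur = 128, 600
t = np.arange(fs * dur) / fs
burst_mask = (t % 15) < 5
eeg = np.where(burst_mask, 20.0, 2.0) * np.random.default_rng(0).standard_normal(len(t))
print(burst_suppression_duty_cycle(eeg, fs))    # roughly 2/3 suppressed in every window
```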

Date Created
  • 2020

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

Description

Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge.

This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely α-loss. The choice of α determines specific adversarial actions, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two specific values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to have a composition property and to be robust to side information.
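
For reference, the α-loss family is usually written as follows in this line of work (stated from the broader literature rather than quoted from the dissertation):

```latex
% alpha-loss of a soft prediction P_hat on the realized outcome y:
% it recovers log-loss as alpha -> 1 and a probability-of-error-type loss as alpha -> infinity.
\[
  \ell_\alpha\!\left(y, \hat{P}\right) \;=\;
  \frac{\alpha}{\alpha - 1}\left[\,1 - \hat{P}(y)^{\frac{\alpha-1}{\alpha}}\right],
  \qquad \alpha \in (0,1) \cup (1,\infty),
\]
\[
  \ell_1\!\left(y, \hat{P}\right) \;=\; \log\frac{1}{\hat{P}(y)},
  \qquad
  \ell_\infty\!\left(y, \hat{P}\right) \;=\; 1 - \hat{P}(y).
\]
```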

There is a fundamental disconnect between theoretical measures of information leakage and their application in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. The framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.

Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models.
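
For context, the sketch below shows the standard noisy-SGD step whose per-step privacy cost such an accountant tracks (per-example gradient clipping plus Gaussian noise); the refined (ε, δ)-DP/Rényi-DP conversion itself is not implemented here.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private SGD step (Gaussian mechanism).

    Each per-example gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added before the update.
    The privacy spent per step is then tracked by a moments / Renyi-DP
    accountant, which is not implemented here.
    """
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    batch_size = per_example_grads.shape[0]
    noisy_mean = clipped.mean(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm / batch_size, size=w.shape
    )
    return w - lr * noisy_mean

# Example: one step on random per-example gradients for a 10-dimensional model.
rng = np.random.default_rng(0)
w = np.zeros(10)
grads = rng.standard_normal((32, 10))
print(dp_sgd_step(w, grads, rng=rng))
```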

Date Created
  • 2020

Numerical computation of Wishart eigenvalue distributions for multistatic radar detection

Description

Eigenvalues of the Gram matrix formed from received data frequently appear in sufficient detection statistics for multi-channel detection with Generalized Likelihood Ratio (GLRT) and Bayesian tests. In a frequently presented model for passive radar, in which the null hypothesis is that the channels are independent and contain only complex white Gaussian noise and the alternative hypothesis is that the channels contain a common rank-one signal in the mean, the GLRT statistic is the largest eigenvalue λ₁ of the Gram matrix formed from data. This Gram matrix has a Wishart distribution. Although exact expressions for the distribution of λ₁ are known under both hypotheses, numerically calculating values of these distribution functions presents difficulties in cases where the dimension of the data vectors is large. This dissertation presents tractable methods for computing the distribution of λ₁ under both the null and alternative hypotheses through a technique of expanding known expressions for the distribution of λ₁ as inner products of orthogonal polynomials. These newly presented expressions for the distribution allow for computation of detection thresholds and receiver operating characteristic curves to arbitrary precision in floating point arithmetic. This represents a significant advancement over the state of the art in a problem that could previously only be addressed by Monte Carlo methods.
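
For contrast with the exact orthogonal-polynomial expansions developed here, the sketch below estimates a null-hypothesis detection threshold on λ₁ by the Monte Carlo route the abstract describes as the previous state of the art.

```python
import numpy as np

def largest_eig_samples(M, N, n_trials=20000, seed=0):
    """Monte Carlo samples of the largest eigenvalue of a complex Wishart
    matrix under the null hypothesis (independent CN(0,1) noise in M
    channels, N snapshots each)."""
    rng = np.random.default_rng(seed)
    lam1 = np.empty(n_trials)
    for t in range(n_trials):
        X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        G = X @ X.conj().T                      # M x M Gram (Wishart) matrix
        lam1[t] = np.linalg.eigvalsh(G)[-1]     # largest eigenvalue
    return lam1

# Detection threshold for a 1% false-alarm rate, estimated by simulation;
# the precision of this estimate is limited by the number of trials, unlike
# the dissertation's arbitrary-precision expressions.
lam1 = largest_eig_samples(M=4, N=32)
print(np.quantile(lam1, 0.99))
```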

Date Created
  • 2019