Matching Items (48)

The Constant Information Radar

Description

The constant information radar, or CIR, is a tracking radar that modulates target revisit time by maintaining a fixed mutual information measure. For highly dynamic targets that deviate significantly from the path predicted by the tracking motion model, the CIR adjusts by illuminating the target more frequently than it would for well-modeled targets. If the signal-to-noise ratio (SNR) is low, the radar delays its revisit to the target until the state entropy overcomes the noise uncertainty. We show that the information measure depends strongly on the target entropy and the target measurement covariance. A constant information measure maintains a fixed spectral efficiency to support the RF convergence of radar and communications. The result is a radar that implements a novel target scheduling algorithm based on information rather than on heuristic or ad hoc methods, and the CIR mathematically ensures that its spectral use is justified.
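
The revisit rule can be illustrated with a minimal sketch, assuming a linear-Gaussian tracking model with illustrative parameters that are not taken from the thesis: between illuminations the predicted state covariance grows, the expected mutual information between the state and the next measurement grows with it, and the radar revisits the target once that information reaches the fixed value. A more maneuverable target (larger process noise) or a cleaner measurement (smaller noise covariance) therefore shortens the revisit interval, as described above.

import numpy as np

# Illustrative constant-velocity model; all parameters are assumptions, not from the thesis.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (position, velocity)
Q = 0.05 * np.array([[dt**3 / 3, dt**2 / 2],
                     [dt**2 / 2, dt]])         # process noise (target maneuverability)
H = np.array([[1.0, 0.0]])                     # radar measures position only
R = np.array([[1.0]])                          # measurement noise (depends on SNR)

def mutual_information(P):
    """Gaussian MI between state and measurement: 0.5 * log(det(H P H' + R) / det(R))."""
    S = H @ P @ H.T + R
    return 0.5 * np.log(np.linalg.det(S) / np.linalg.det(R))

def revisit_interval(P0, info_threshold=1.0, max_steps=1000):
    """Number of prediction steps until the expected information gain hits the threshold."""
    P = P0.copy()
    for k in range(1, max_steps + 1):
        P = F @ P @ F.T + Q                    # covariance grows between illuminations
        if mutual_information(P) >= info_threshold:
            return k
    return max_steps

P0 = np.diag([0.5, 0.5])
print("revisit after", revisit_interval(P0), "steps")   # larger Q or smaller R -> earlier revisit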

Date Created
2016-09-19

Visual Surround Sound and its Applications

Description

The world of a hearing impaired person is much different from that of somebody capable of discerning the different frequencies and magnitudes of sound waves with their ears. This is especially true when hearing impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they are facing. The purpose of this project was to level the playing field, so that a person who is hard of hearing could receive the same sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created: a system that takes a surround sound input and illuminates LEDs around the periphery of a pair of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability in regard to a deaf person's ability to learn the technology and decipher the visual cues.
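
As a rough sketch of the kind of mapping described above (the channel layout, LED assignment, and scaling below are hypothetical, not the project's actual design): each surround channel is tied to an LED position on the glasses, and that LED's brightness tracks the short-time amplitude of its channel; the frequency-to-color dimension described above is omitted for brevity.

import numpy as np

# Hypothetical 5-channel surround layout and LED assignment on the glasses frame.
CHANNELS = ["front_left", "front_right", "rear_left", "rear_right", "center"]

def led_levels(frame, full_scale=1.0):
    """Map one block of multichannel audio (samples x channels) to LED brightness 0-255."""
    rms = np.sqrt(np.mean(frame**2, axis=0))            # per-channel short-time loudness
    levels = np.clip(rms / full_scale, 0.0, 1.0)
    return {ch: int(255 * v) for ch, v in zip(CHANNELS, levels)}

# Example: a loud tone in the rear-left channel lights the rear-left LED brightest.
block = np.zeros((1024, 5))
block[:, 2] = 0.8 * np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)
print(led_levels(block))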

Date Created
2015-05

Electroencephalography Feature Extraction of Neural Stimuli

Description

Many mysteries still surround brain function, and yet greater understanding of it is vital to advancing scientific research. Studies of the brain in particular play a huge role in the medical field, as analysis can lead to proper diagnosis of patients and to anticipatory treatments. The objective of this research was to apply signal processing techniques to electroencephalogram (EEG) data in order to extract features with which to quantify an activity performed or a response to stimuli. The brain's responses were shown in eigenspectrum plots in combination with time-frequency plots for each of the sensors, providing both spatial and temporal frequency analysis. Through this method, it was revealed how the brain responds to various stimuli not typically used in current research. Future applications might include testing similar stimuli on patients with neurological diseases to gain further insight into their conditions.
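
A minimal sketch of the two views mentioned above, using synthetic data and an assumed sampling rate in place of the actual recordings: the spatial view is the eigenspectrum of the sensor covariance matrix, and the temporal view is a per-sensor spectrogram.

import numpy as np
from scipy.signal import spectrogram

fs = 256                                              # assumed sampling rate, Hz
n_sensors, n_samples = 8, 10 * fs
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_sensors, n_samples))     # stand-in for recorded EEG

# Spatial view: eigenspectrum of the sensor covariance matrix.
cov = np.cov(eeg)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
print("eigenspectrum:", np.round(eigvals, 2))

# Temporal view: time-frequency content of one sensor.
f, t, Sxx = spectrogram(eeg[0], fs=fs, nperseg=128)
print("spectrogram shape (freq bins x time bins):", Sxx.shape)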

Date Created
2016-05

Improved Finite Sample Estimate of A Nonparametric Divergence Measure

Description

This work details the bootstrap estimation of a nonparametric information divergence measure, the Dp divergence measure, using a power law model. To address the challenge posed by computing accurate divergence estimates from finite-size data, the bootstrap approach is used in conjunction with a power law curve to calculate an asymptotic value of the divergence estimator. Monte Carlo estimates of Dp are found for increasing sample sizes, and a power law fit is used to relate the divergence estimates to the sample size. The fit is also used to generate a confidence interval that characterizes the quality of the estimate. We compare the performance of this method with other estimation methods. The calculated divergence is then applied to the binary classification problem: using the inherent relation between divergence measures and classification error rate, an analysis of the Bayes error rate of several data sets is conducted using the asymptotic divergence estimate.
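
The extrapolation step can be sketched as follows, with placeholder divergence values standing in for actual Monte Carlo estimates: the estimates at increasing sample sizes are fit to a power law D(N) = D_inf + c * N^(-alpha), and the fitted asymptote D_inf serves as the finite-sample-corrected estimate, with a confidence interval taken from the fit covariance.

import numpy as np
from scipy.optimize import curve_fit

def power_law(N, d_inf, c, alpha):
    """D(N) = D_inf + c * N**(-alpha): the estimate approaches D_inf as N grows."""
    return d_inf + c * N**(-alpha)

# Placeholder Monte Carlo divergence estimates at increasing sample sizes.
N = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
Dp_hat = np.array([0.42, 0.37, 0.33, 0.31, 0.295, 0.288])

params, cov = curve_fit(power_law, N, Dp_hat, p0=[0.28, 1.0, 0.5], maxfev=10000)
d_inf, c, alpha = params
stderr = np.sqrt(np.diag(cov))[0]
print(f"asymptotic Dp estimate: {d_inf:.3f} +/- {1.96 * stderr:.3f} (95% CI)")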

Date Created
2016-05

Multi-Static Space-Time-Frequency Channel Modeling

Description

Radio communication has become the dominant form of correspondence in modern society. As the demand for high speed communication grows, the problems associated with an expanding consumer base and limited spectral access become more difficult to address. One communications system in which people commonly find themselves is the multiple access cellular network. Users operate within the same geographical area and bandwidth, so providing access to every user requires advanced processing techniques and careful subdivision of spectral access. This is known as the multiple access problem. This paper addresses this challenge in the context of airborne transceivers operating at high altitudes and long ranges. These operators communicate by transmitting a signal through a target scattering field on the ground without a direct line of sight to the receiver. The objective of this investigation is to develop a model for this communications channel, identify and quantify the relevant characteristics, and evaluate the feasibility of using it to effectively communicate.
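
A toy version of such a non-line-of-sight scattering channel is sketched below; the geometry, scatterer count, and carrier frequency are invented for illustration and are not the channel model developed in the paper. Each ground scatterer contributes a delayed, attenuated copy of the transmitted signal, and the spread of those delays is one of the characteristics the model must capture.

import numpy as np

rng = np.random.default_rng(1)
c = 3e8                                     # speed of light (m/s)
fc = 1e9                                    # assumed carrier frequency (Hz)

tx = np.array([0.0, 0.0, 10e3])             # transmitter aircraft, 10 km altitude
rx = np.array([80e3, 0.0, 10e3])            # receiver aircraft, no direct line of sight assumed

# Random scatterers on the ground between the two platforms.
n_paths = 50
scatterers = np.column_stack([rng.uniform(20e3, 60e3, n_paths),
                              rng.uniform(-5e3, 5e3, n_paths),
                              np.zeros(n_paths)])

path_len = (np.linalg.norm(scatterers - tx, axis=1) +
            np.linalg.norm(rx - scatterers, axis=1))         # bounce path lengths
delays = path_len / c
gains = rng.rayleigh(0.1, n_paths) * np.exp(-1j * 2 * np.pi * fc * delays)

spread = delays.max() - delays.min()
print(f"{n_paths} paths, delay spread {1e6 * spread:.2f} microseconds")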

Date Created
2015-12

Sensitivity Analysis of a Spatiotemporal Correlation Based Seizure Prediction Algorithm

Description

Epilepsy affects numerous people around the world and is characterized by recurring seizures, motivating efforts to predict seizures so that precautionary measures may be employed. One promising algorithm extracts spatiotemporal correlation based features from intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis. Doing so also provides insight into how to construct more effective feature vectors.
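
A compact sketch of that pipeline, using synthetic windows and an arbitrary window length rather than the paper's actual feature definition: pairwise correlation coefficients between electrode channels within a window form the feature vector, and a support vector machine is trained on labeled preictal and interictal windows.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_channels, win = 6, 512

def correlation_features(window):
    """Upper-triangular pairwise correlations of one iEEG window (channels x samples)."""
    corr = np.corrcoef(window)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Synthetic stand-ins for interictal (label 0) and preictal (label 1) windows.
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        base = rng.standard_normal((n_channels, win))
        if label == 1:
            base += 0.8 * rng.standard_normal((1, win))   # shared component -> higher correlation
        X.append(correlation_features(base))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))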

Date Created
2015-05

Audio Waveform Sample SVD Compression and Impact on Performance

Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is in contrast to lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more a waveform is compressed, the more it is degraded, and once a file has been lossy compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy compression that eliminates singular values from a signal's matrix.
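
The compression step itself can be sketched in a few lines; the frame length and retained rank below are arbitrary choices, not values from the thesis. The waveform is reshaped into a matrix of frames, the smallest singular values are zeroed out, and the matrix is reconstructed, which is exactly where the irreversible loss occurs.

import numpy as np

def svd_compress(audio, frame_len=512, keep=32):
    """Lossy-compress a mono waveform by truncating the SVD of its frame matrix."""
    n_frames = len(audio) // frame_len
    X = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[keep:] = 0.0                            # drop the smallest singular values
    return (U * s) @ Vt                       # reconstructed frames

# Example on a synthetic tone plus noise.
t = np.arange(5 * 44100) / 44100
audio = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
rec = svd_compress(audio).ravel()             # flatten frames back into a waveform
err = np.linalg.norm(audio[:rec.size] - rec) / np.linalg.norm(audio[:rec.size])
print(f"relative reconstruction error with rank 32: {err:.4f}")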

Date Created
2021-05

Adaptive learning and unsupervised clustering of immune responses using microarray random sequence peptides

Description

Immunosignaturing is a medical test for assessing the health status of a patient by applying microarrays of random sequence peptides to determine the patient's immune fingerprint, associating antibodies from a biological sample with immune responses. The immunosignature measurements can potentially provide pre-symptomatic diagnosis for infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering integrated with immunosignature feature extraction approaches are proposed. The techniques are based on extracting novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. In order to extend the clustering methods for diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows for further clustering if additional patient data is received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
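
The adaptive clustering component can be approximated with an off-the-shelf Dirichlet process mixture; the random features below are placeholders for immunosignature feature vectors, and scikit-learn's variational BayesianGaussianMixture stands in for the inference schemes developed in this work. The point of the sketch is that the model infers how many immune-response clusters the data support instead of fixing that number in advance.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Placeholder immunosignature features: three hidden immune-response states in 10-D.
X = np.vstack([rng.normal(loc, 0.5, size=(60, 10)) for loc in (-2.0, 0.0, 2.5)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                               # upper bound; unused components are pruned
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(X)

active = np.sum(dpgmm.weights_ > 0.05)
print("clusters discovered:", active)              # should recover about three response states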

Date Created
2013

On the ordering of communication channels

Description

This dissertation introduces stochastic ordering of instantaneous channel powers of fading channels as a general method to compare the performance of a communication system over two different channels, even when a closed-form expression for the metric may not be available. Such a comparison is with respect to a variety of performance metrics such as error rates, outage probability and ergodic capacity, which share common mathematical properties such as monotonicity, convexity or complete monotonicity. Complete monotonicity of a metric, such as the symbol error rate, in conjunction with the stochastic Laplace transform order between two fading channels implies the ordering of the two channels with respect to the metric. While it has been established previously that certain modulation schemes have convex symbol error rates, there has been no study of their complete monotonicity, which helps in establishing stronger channel ordering results. Toward this goal, the current research proves for the first time that all 1-dimensional and 2-dimensional modulations have completely monotone symbol error rates. Furthermore, it is shown that the parametric fading distributions frequently used for modeling line of sight exhibit a monotonicity in the line of sight parameter with respect to the Laplace transform order. While the Laplace transform order can also be used to order fading distributions based on the ergodic capacity, there exist several distributions which are not Laplace transform ordered although they have ordered ergodic capacities. To address this gap, a new stochastic order called the ergodic capacity order is proposed herein, which can be used to compare channels based on the ergodic capacity. Using stochastic orders, the average performance of systems involving multiple random variables is compared over two different channels. These systems include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise. This research also addresses the problem of unifying fading distributions. This unification is based on infinite divisibility, which subsumes almost all known fading distributions and provides simplified expressions for performance metrics, in addition to enabling stochastic ordering.
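
For reference, the Laplace transform order and the role of complete monotonicity can be stated concisely (standard definitions, written here in LaTeX). For nonnegative instantaneous channel powers $X$ and $Y$,
\[
X \le_{\mathrm{Lt}} Y \iff \mathbb{E}\!\left[e^{-sX}\right] \ge \mathbb{E}\!\left[e^{-sY}\right] \quad \text{for all } s > 0,
\]
and if a metric $g$ is completely monotone, i.e. $(-1)^n g^{(n)}(x) \ge 0$ for all $n \ge 0$ and $x > 0$, then
\[
X \le_{\mathrm{Lt}} Y \implies \mathbb{E}[g(X)] \ge \mathbb{E}[g(Y)],
\]
which is why proving complete monotonicity of the symbol error rate yields ordering of the average error rates over the two channels.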

Date Created
2014

Leveraging Machine Learning and Wireless Sensing for Robot Localization - Location Variance Analysis

Description

Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new method of localization using networks with several wireless transceivers, implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks heavily depend upon an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to the receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and present it to this algorithm to obtain its location.
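
A bare-bones version of that idea, using synthetic channel vectors and a generic regressor; the project's actual transceiver count, channel features, and model are not specified here. Channel estimates collected at known positions train a model that maps a new channel estimate back to an (x, y) location.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def channel_estimate(pos, n_taps=16):
    """Synthetic stand-in: a channel vector that varies smoothly with position, plus noise."""
    k = np.arange(n_taps)
    return np.cos(0.3 * pos[0] * k) + np.sin(0.2 * pos[1] * k) + 0.05 * rng.standard_normal(n_taps)

# Training set: channel estimates labeled with the positions where they were measured.
positions = rng.uniform(0, 10, size=(500, 2))
channels = np.array([channel_estimate(p) for p in positions])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(channels, positions)

test_pos = np.array([3.0, 7.0])
print("true:", test_pos, "predicted:", np.round(model.predict([channel_estimate(test_pos)])[0], 2))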

As an additional step, the effect of location noise is investigated in this report. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise on its location labels. In doing so, the team demonstrates that this system could be implemented in a continued-learning environment, in which some user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed in an environment without extensive data collection prior to release.

Date Created
2020-05