Matching Items (6)
Description
The world of a hearing-impaired person is much different from that of somebody capable of discerning different frequencies and magnitudes of sound waves via their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they're facing. The purpose of this project was to find a way to even the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created: a system that takes a surround sound input and illuminates LEDs around the periphery of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
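A minimal sketch of the kind of audio-to-LED mapping such a system might use. This is illustrative only; the function name, frame layout, and dynamic-range constants are assumptions, not the thesis's actual implementation:

```python
import numpy as np

def frame_to_led_levels(frame, eps=1e-12):
    """Map one frame of multichannel surround audio to LED brightness.

    frame: array of shape (n_samples, n_channels), one channel per
    surround direction (e.g. front-left, front-right, rear-left, ...).
    Returns a brightness level in [0, 1] per channel/LED group.
    """
    rms = np.sqrt(np.mean(frame ** 2, axis=0))   # amplitude per direction
    db = 20 * np.log10(rms + eps)                # roughly perceptual scale
    lo, hi = -60.0, 0.0                          # assumed dynamic range (dB)
    return np.clip((db - lo) / (hi - lo), 0.0, 1.0)
```

Each output value would drive the LED group facing the corresponding surround direction; frequency could additionally be mapped to LED color in a fuller design.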
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal-source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research culminates in a conjecture, supported by statistical tests, regarding the topology of the incomplete network graphs.
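For context, a common form of the GC statistic builds a normalized coherence matrix from all pairwise inner products and takes one minus its determinant. The sketch below illustrates that baseline statistic for a fully connected case; it is a generic textbook formulation, not the thesis's maximum-entropy extension:

```python
import numpy as np

def generalized_coherence(X):
    """Generalized Coherence (GC) estimate for multichannel data.

    X: array of shape (n_channels, n_snapshots). The statistic is
    1 - det(C), where C is the sample coherence matrix formed from
    all pairwise inner products; it lies in [0, 1] and approaches 1
    when a common signal is present across the channels.
    """
    G = X @ X.conj().T                  # Gram matrix of pairwise products
    d = np.sqrt(np.real(np.diag(G)))    # per-channel norms
    C = G / np.outer(d, d)              # normalized coherence matrix
    return 1.0 - np.real(np.linalg.det(C))
```

The thesis's contribution is precisely what this sketch cannot do: when some off-diagonal entries of C are unobservable (missing graph edges), maximum-entropy completion supplies estimates for them.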
Contributors: Crider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
A distributed sensor network (DSN) is a set of spatially scattered intelligent sensors designed to obtain data across an environment. DSNs are becoming a standard architecture for collecting data over a large area. To properly exploit having multiple sensors, nodal data must be registered across the network. One major problem worth investigating is ensuring the integrity of the data received, for example through time synchronization. Consider a group of matched-filter sensors, each collecting the same data and comparing what it collects to a known signal. In an ideal world, each sensor would collect the data without offsets or noise. Two models follow from this. First, each sensor could make a decision on its own and send it to a "fusion center," which then decides whether the signal is present based on the number of true-or-false decisions it receives. Alternatively, each sensor could relay the raw data it collects to the fusion center, which then makes a decision based on all of the data it receives. Since the fusion center has more information to base its decision on in the latter case, as opposed to the former, where it only receives a true or false from each sensor, one would expect the latter model to perform better. In fact, this would be the gold standard for detection across a DSN. However, random noise in the world corrupts data collection, especially among sensors in a DSN: each sensor does not collect the data in exactly the same way or with the same precision. We classify these imperfections as offsets, specifically the offset present in the data collected by one sensor with respect to the rest of the sensors in the network. Therefore, reconsider the two models for a DSN described above.
We can naively implement either of these models for data collection, or we can attempt to estimate the offsets between the sensors and compensate for them. One would expect that estimating the offsets within the DSN provides better overall results than not finding estimators. This thesis is structured as follows. First, there is an extensive investigation into detection theory and the impact that different types of offsets have on sensor networks. Following the theory, an algorithm for estimating the data offsets and correcting for them is proposed. Next, Monte Carlo simulation results show the impact of data offsets on sensor performance in comparison to a sensor network without offsets present. The algorithm is then implemented, and further experiments demonstrate sensor performance with offset detection.
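The hard-decision versus data-level fusion comparison can be sketched with a small Monte Carlo experiment. The signal model, thresholds, and parameters below are illustrative assumptions chosen to show the qualitative gap, not the thesis's actual simulation setup:

```python
import numpy as np

def fusion_comparison(n_sensors=5, n_samples=32, amp=0.3,
                      n_trials=4000, seed=0):
    """Monte Carlo sketch: hard-decision vs data-level fusion of
    matched-filter sensors (illustrative thresholds, not optimized)."""
    rng = np.random.default_rng(seed)
    s = np.ones(n_samples)                 # known signal template
    e = s @ s                              # template energy
    hard_hits = soft_hits = 0
    for _ in range(n_trials):
        # Each sensor sees the signal in unit-variance Gaussian noise.
        x = amp * s + rng.standard_normal((n_sensors, n_samples))
        t = x @ s                          # matched-filter statistic per sensor
        # Hard fusion: each sensor decides alone; center majority-votes.
        votes = t > 2.0 * np.sqrt(e)
        hard_hits += votes.sum() > n_sensors // 2
        # Soft fusion: center thresholds the sum of raw statistics.
        soft_hits += t.sum() > 2.0 * np.sqrt(n_sensors * e)
    return hard_hits / n_trials, soft_hits / n_trials
```

With these parameters the data-level (soft) fusion detects far more often than the majority vote at comparable per-trial thresholds, matching the "gold standard" intuition in the abstract; introducing per-sensor offsets into `x` would degrade both curves, which is the effect the thesis studies.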
Contributors: Monardo, Vincent James (Author) / Cochran, Douglas (Thesis director) / Kierstead, Hal (Committee member) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This contrasts with lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the best option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more compression applied to a waveform, the more degradation there is, and once a file is lossy-compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy compression that eliminates singular values from a signal's matrix.

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Power spectral analysis is a fundamental aspect of signal processing used in the detection and estimation of various signal features. Signals spaced closely in frequency are problematic and can lead analysts to miss crucial details in the data. The Capon and Bartlett methods are non-parametric filterbank approaches to power spectrum estimation (PSE). The Capon algorithm is known as the "adaptive" approach because its filter impulse responses are adapted to fit the characteristics of the data. The Bartlett method is known as the "conventional" approach and has a fixed, deterministic filter. Both techniques rely on the Sample Covariance Matrix (SCM). The first objective of this project is to analyze the origins and characteristics of the Capon and Bartlett methods to understand their abilities to resolve signals closely spaced in frequency. Given both methods' reliance on the SCM, there is novelty in combining the two algorithms through their cross-coherence. The second objective is to analyze the performance of the Capon-Bartlett cross spectra. The study involves Matlab simulations of known test cases and comparisons with approximate theoretical predictions.
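The two estimators can be sketched side by side from the same SCM. The abstract's simulations are in Matlab; the Python sketch below uses standard textbook forms of the Bartlett and Capon spectra, with snapshot length, grid size, and diagonal loading as assumptions:

```python
import numpy as np

def bartlett_capon(x, M=32, n_freqs=256):
    """Bartlett and Capon spectral estimates from the SCM (sketch).

    x: time series (real or complex). Consecutive length-M snapshots
    are stacked to form the sample covariance matrix R. Bartlett uses
    the fixed filter a(w); Capon adapts through R^{-1}.
    """
    n_snap = len(x) // M
    X = x[:n_snap * M].reshape(n_snap, M).T        # M x n_snap snapshots
    R = X @ X.conj().T / n_snap                    # sample covariance matrix
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(M))     # light diagonal loading
    w = 2 * np.pi * np.arange(n_freqs) / n_freqs
    A = np.exp(1j * np.outer(np.arange(M), w))     # steering vectors a(w)
    bartlett = np.real(np.sum(A.conj() * (R @ A), axis=0)) / M**2
    capon = np.real(1.0 / np.sum(A.conj() * (Rinv @ A), axis=0))
    return w, bartlett, capon
```

Both spectra peak at a strong tone's frequency; the difference the thesis probes is that Capon's adapted filter yields much narrower peaks, resolving tones that Bartlett's fixed filter smears together.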
Contributors: Yoshiyama, Cassidy (Author) / Richmond, Christ (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Alzheimer's disease is the 6th leading cause of death in the United States and affects millions across the world each year. Currently, there are no medications or treatments available to slow or stop the progression of Alzheimer's disease. The GENUS therapy out of the Massachusetts Institute of Technology has shown positive results in slowing the progression of the disease in animal trials. This thesis is a continuation of that study: to develop and build a testing apparatus for human clinical trials. Included is a complete outline of the design, development, testing measures, and instructional aid for the final apparatus.
Contributors: Scheller, Rachel D (Author) / Bliss, Daniel (Thesis director) / Corman, Steven (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12