The thesis begins with a careful review of existing signal processing techniques and state-of-the-art methods for vital signs monitoring using UWB impulse systems. An in-depth analysis of the various approaches is then presented.
Robust heart-rate monitoring methods are proposed based on a novel result: spectrally, the fundamental heartbeat frequency is respiration-interference-limited, while its higher-order harmonics are noise-limited. The higher-order statistics related to the heartbeat can serve as a robust indicator when the fundamental heartbeat frequency is masked by strong lower-order harmonics of respiration, or when phase calibration is inaccurate in phase-based methods. Analytical spectral analysis is performed to validate that the higher-order harmonics of the heartbeat are almost free of respiration interference. Extensive experiments have been conducted to justify an adaptive heart-rate monitoring algorithm. The scenarios of interest are 1) a single subject, 2) multiple subjects at different ranges, 3) multiple subjects at the same range, and 4) through-wall monitoring.
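A rough numerical illustration of the spectral argument (a sketch, not the thesis's analysis; the breathing rate, heart rate, amplitudes, and harmonic roll-off below are all assumed) shows how a respiration harmonic can collide with the heartbeat fundamental while the heartbeat's second harmonic sits in a clean part of the spectrum:

```python
import numpy as np

fs = 20.0                      # slow-time sampling rate in Hz (assumed)
t = np.arange(0.0, 60.0, 1.0/fs)
f_r, f_h = 0.3, 1.2            # respiration and heartbeat rates in Hz (illustrative)

# Respiration is strong and non-sinusoidal, so it carries decaying harmonics;
# the heartbeat is weak, with a small second harmonic.
resp = (8.0*np.sin(2*np.pi*f_r*t) + 2.0*np.sin(2*np.pi*2*f_r*t)
        + 0.8*np.sin(2*np.pi*3*f_r*t) + 0.3*np.sin(2*np.pi*4*f_r*t))
heart = 0.5*np.sin(2*np.pi*f_h*t) + 0.2*np.sin(2*np.pi*2*f_h*t)
x = resp + heart

# Windowed amplitude spectrum of the simulated chest motion.
X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f = np.fft.rfftfreq(len(x), 1.0/fs)

def amp(freq):
    """Spectral amplitude at the bin nearest `freq`."""
    return X[np.argmin(np.abs(f - freq))]
```

In this toy spectrum the fourth respiration harmonic (4 x 0.3 Hz) lands exactly on the 1.2 Hz heartbeat fundamental, whereas no respiration harmonic falls near 2.4 Hz, so a detector that searches around the higher heartbeat harmonic is far less respiration-interference-limited.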
A remote sensing radar system implementing the proposed adaptive heart-rate estimation algorithm is compared to a competing remote sensing technology, a remote imaging photoplethysmography system, with promising results.
State-of-the-art methods for vital signs monitoring fundamentally process the phase variation caused by vital-sign motion, and their performance is determined by a phase calibration procedure. Existing methods fail to consider the time-varying nature of phase noise, and there is no prior knowledge about which of the corrupted complex-signal components, in-phase (I) or quadrature (Q), needs to be corrected. A precise phase calibration routine is proposed based on the respiration pattern. The I/Q samples from every breath are likely to experience similar motion noise and should therefore be corrected independently. A high slow-time sampling rate is used to ensure phase calibration accuracy. Occasionally, a 180-degree phase-shift error occurs after the initial calibration step and must also be corrected; all phase trajectories in the I/Q plot are allowed only in certain angular spaces. This precise phase calibration routine is validated through computer simulations incorporating a time-varying phase noise model, a controlled mechanical system, and human-subject experiments.
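The role of I/Q calibration can be illustrated with a common stand-in (this is not the thesis's breath-wise routine): in a quadrature radar model, DC offsets shift the circular I/Q trajectory, and a least-squares (Kasa) circle fit recovers the center so that arctangent demodulation yields the phase motion. The rates, offsets, and amplitudes below are assumptions for illustration:

```python
import numpy as np

fs = 100.0                     # slow-time sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0/fs)

# Phase motion from respiration (0.25 Hz) plus a weaker heartbeat (1.1 Hz).
phase_true = 0.8*np.sin(2*np.pi*0.25*t) + 0.1*np.sin(2*np.pi*1.1*t)

dc_i, dc_q = 0.7, -0.4         # hypothetical DC offsets corrupting I and Q
I = dc_i + np.cos(phase_true)
Q = dc_q + np.sin(phase_true)

# Kasa least-squares circle fit: the clean I/Q trajectory lies on a circle,
# so the fitted center estimates the DC offsets.
A = np.column_stack([2*I, 2*Q, np.ones_like(I)])
b = I**2 + Q**2
(cx, cy, _), *_ = np.linalg.lstsq(A, b, rcond=None)

# Arctangent demodulation after recentering recovers the phase motion.
phase_rec = np.unwrap(np.arctan2(Q - cy, I - cx))
```

With the center recovered, the phase trajectory is confined to the expected angular arc; a miscalibrated center distorts the arc and corrupts the extracted heartbeat component, which is why calibration accuracy dominates phase-based methods.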
a phonatory source. Identification of this phonatory source and of the articulatory geometry are individually challenging and ill-posed problems, called speech separation and articulatory inversion, respectively. There exists a trade-off between the decomposition and the recovered articulatory geometry, because multiple articulatory configurations can map to the same produced speech. Moreover, if measurements are obtained only from a microphone sensor, they lack any invasive insight, adding a further challenge to an already difficult problem. A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation, and the unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under some specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method.
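The glottal/vocal-tract state space itself is not specified here, but the sequential Monte Carlo machinery such an estimator relies on can be sketched with a bootstrap particle filter on a deliberately simple scalar state-space model (every model constant below is an assumption for illustration, not part of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (stand-in for the speech model):
#   x_t = 0.9 * x_{t-1} + process noise,   y_t = x_t + observation noise.
T, N = 300, 2000               # time steps and number of particles
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9*x[t-1] + rng.normal(0.0, 0.3)
    y[t] = x[t] + rng.normal(0.0, 0.5)

particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    # Propagate particles through the state transition (bootstrap proposal).
    particles = 0.9*particles + rng.normal(0.0, 0.3, N)
    # Weight each particle by the observation likelihood, then normalize.
    w = np.exp(-0.5*((y[t] - particles)/0.5)**2)
    w /= w.sum()
    # Posterior-mean estimate of the hidden state at time t.
    estimates[t] = np.sum(w*particles)
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(N, N, p=w)]
```

In a speech setting the state would carry time-varying glottal source and vocal tract parameters instead of a scalar, but the propagate-weight-resample loop is the same; the filtered estimate tracks the hidden state more closely than the raw observations do.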
The thesis mainly examines the different components of the work done to support the development of the protocol development engine. It discusses channel modeling and the system integration of receiver and channel noise. It also proposes a Carrier-Sense Multiple Access (CSMA) Media Access Control (MAC) layer protocol implementation for the Wireless Fidelity (Wi-Fi) protocol. This work also describes the Graphical User Interface (GUI), which is part of the Protocol Development Kit (PDK), a combination of the Protocol Recommendation Engine (PRE) and a simulation package that aids the development of protocols. It also sheds light on the Automatic Dependent Surveillance-Broadcast (ADS-B) radio protocol, which will eventually replace radar as Air Traffic Control's (ATC) primary tool for separating aircraft.
All the algorithms used in this thesis to define radio operation were, in principle, defined by mathematical descriptions; however, to test and implement these algorithms, they had to be converted to a computer language. This conversion had multiple phases. In the first phase, the algorithms were implemented in Matrix Laboratory (MATLAB). To aid this development, basic radio finite state machines and radio algorithmic tools were provided.
A common aspect of the two frameworks is the packet service time, so the effect of multiple channels on the service time is studied first. The problem is formulated as an optimal stopping rule problem in which it must be decided at which channel the secondary user (SU) should stop sensing and begin transmission. I provide closed-form expressions for this optimal stopping rule and for the SU's optimal transmission power.
The average-delay framework is then presented for a single CR channel system with a base station (BS) that schedules the SUs to minimize the average delay while protecting the primary users (PUs) from harmful interference. One contribution of the proposed algorithm is its suitability for heterogeneous-channel systems, where users with statistically low channel quality would otherwise suffer worse delay performance. The proposed algorithm guarantees the prespecified delay performance for each SU without violating the PUs' interference constraint.
Finally, in the hard-deadline framework, I propose three algorithms that maximize the system's throughput while guaranteeing that the required percentage of packets is transmitted by their deadlines. The proposed algorithms work in heterogeneous systems where the BS serves different types of users having real-time (RT) and non-real-time (NRT) data. I show that two of the proposed algorithms have low complexity: the power policies of both the RT and NRT users are given by closed-form expressions, and the scheduler itself is low-complexity.
Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is the opposite of lossless compression, in which the signal is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and yields much smaller files. However, the compression can harm quality if it goes too far: the more a waveform is compressed, the more it degrades, and once a file has been lossy-compressed, the process is not reversible. This project observes the degradation of an audio signal after applying Singular Value Decomposition (SVD) compression, a lossy method that eliminates singular values from a signal's matrix.
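A minimal sketch of the idea (the frame size, sample rate, and synthetic "audio" below are assumptions, not the project's actual data): reshape the samples into a matrix, keep only the k largest singular values, and measure how much energy survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic stand-in for audio: a 440 Hz tone plus a little noise.
fs = 8000                                  # sample rate in Hz (assumed)
t = np.arange(fs) / fs                     # one second of samples
audio = np.sin(2*np.pi*440*t) + 0.05*rng.normal(size=fs)

# Reshape the 1-D signal into a matrix of short frames.
frames = audio.reshape(100, 80)            # 100 frames of 80 samples each

# Truncated SVD: drop all but the k largest singular values.
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
k = 8
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Fraction of signal energy retained, and the relative reconstruction error.
retained = np.sum(s[:k]**2) / np.sum(s**2)
rel_err = np.linalg.norm(frames - approx) / np.linalg.norm(frames)
```

At k = 8 the stored data shrinks from 8000 values to 100*k + k + k*80 = 1448, and because a pure tone makes the frame matrix nearly rank-2, most of the discarded singular values carry only noise; pushing k lower starts discarding the signal itself, which is exactly the degradation-versus-size trade-off the project measures.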