Matching Items (2)
Description
This thesis addresses two problems in digital baseband design of wireless communication systems, namely, those in Internet of Things (IoT) terminals that support long range communications and those in full-duplex systems that are designed for high spectral efficiency.

IoT terminals for long range communications are typically based on Orthogonal Frequency-Division Multiple Access (OFDMA) and spread spectrum technologies. In order to design an efficient baseband architecture for such terminals, the workload profiles of both systems are analyzed. Since the frame detection unit has by far the highest computational load, a simple architecture that uses only a scalar datapath is proposed. To optimize for low energy consumption, application-specific instructions that minimize register accesses and address generation units for streamlined memory access are introduced. Two parameters, namely, the correlation window size and the threshold value, affect the detection probability and the false alarm probability, and hence the energy consumption. Next, energy-optimal settings of the correlation window size and threshold value are derived for different channel conditions. For both good and bad channel conditions, if the target signal detection probability is greater than 0.9, the baseband processor consumes the least energy when the frame detection algorithm uses the longest correlation window and the highest threshold value.
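The frame detection described above can be illustrated with a minimal sketch of a sliding-window autocorrelation detector parameterized by the same two quantities, a correlation window size and a threshold. This is an illustrative stand-in (Schmidl-Cox-style repeated-preamble detection), not the thesis's exact algorithm; the function name and parameters are assumptions.

```python
import numpy as np

def detect_frame(rx, window, threshold):
    """Sliding-window autocorrelation frame detector (illustrative sketch).

    Correlates the received samples with a copy of themselves delayed by
    `window` samples, as done for repeated-preamble detection, and declares
    a frame when the normalized correlation metric exceeds `threshold`.
    Larger windows and higher thresholds lower the false alarm probability,
    which is what drives the energy-optimal settings discussed above.
    """
    n = len(rx) - 2 * window
    for k in range(n):
        a = rx[k : k + window]                  # first half of candidate preamble
        b = rx[k + window : k + 2 * window]     # delayed copy
        corr = np.abs(np.vdot(a, b))            # cross-correlation magnitude
        energy = np.vdot(b, b).real             # normalization energy
        if energy > 0 and corr / energy > threshold:
            return k                            # frame detected at sample k
    return -1                                   # no frame found
```

A higher threshold suppresses noise-only triggers (whose normalized metric stays near 1/sqrt(window)), at the cost of a slightly later detection point on the true preamble.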

A full-duplex system has high spectral efficiency but suffers from self-interference. Part of the interference can be cancelled digitally using equalization techniques. The cancellation performance and computational complexity of the competing equalization algorithms, namely, Least Mean Square (LMS), Normalized LMS (NLMS), Recursive Least Squares (RLS), and decision feedback equalizers based on LMS, NLMS and RLS, are analyzed, and a trade-off between performance and complexity is established. The NLMS linear equalizer is found to be suitable for resource-constrained mobile devices, while the NLMS decision feedback equalizer is more appropriate for base stations, which are not energy constrained.
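The NLMS adaptation singled out above can be sketched as follows: the filter learns the self-interference channel from the known transmitted signal and subtracts its estimate from the received signal. This is a generic textbook NLMS loop, not the thesis's implementation; the function name, tap count and step size are assumptions.

```python
import numpy as np

def nlms_cancel(x, d, num_taps=8, mu=0.5, eps=1e-8):
    """NLMS adaptive filter sketch for digital self-interference cancellation.

    `x` is the known transmitted (interfering) signal and `d` is the received
    signal containing the self-interference. Returns the residual after
    cancellation and the learned tap weights.
    """
    w = np.zeros(num_taps, dtype=complex)
    e = np.zeros(len(d), dtype=complex)
    for n in range(len(d)):
        # most recent num_taps input samples, newest first (zero-padded at start)
        u = x[max(0, n - num_taps + 1) : n + 1][::-1]
        u = np.pad(u, (0, num_taps - len(u)))
        y = np.vdot(w, u)                        # interference estimate w^H u
        e[n] = d[n] - y                          # residual after cancellation
        # step normalized by input power keeps convergence robust -- the
        # property that makes NLMS preferable to plain LMS here
        w += (mu / (eps + np.vdot(u, u).real)) * np.conj(e[n]) * u
    return e, w
```

The per-sample cost is O(num_taps), versus O(num_taps^2) for RLS, which is the complexity side of the performance/complexity trade-off mentioned above.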
Contributors: Wu, Shunyao (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Lee, Hyunseok (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

This work addresses the following four problems: (i) Will a blockage occur in the near future? (ii) When will this blockage occur? (iii) What is the type of the blockage? And (iv) what is the direction of the moving blockage? The proposed solution utilizes deep neural networks (DNNs) as well as non-machine-learning (ML) algorithms. At the heart of the proposed method is the identification of special patterns in the received signal and sensory data before the blockage occurs (pre-blockage signatures), which are then used to infer future blockages. To evaluate the proposed approach, real-world datasets are first built for both in-band mmWave systems and LiDAR-aided mmWave systems based on the DeepSense 6G structure. In particular, for the in-band mmWave system, two real-world datasets are constructed, one for an indoor scenario and the other for an outdoor scenario. Then DNN models are developed to proactively predict the incoming blockages for both scenarios. For LiDAR-aided blockage prediction, a large-scale real-world dataset that includes co-existing LiDAR and mmWave communication measurements is constructed for outdoor scenarios. Then, an efficient LiDAR data denoising (static cluster removal) algorithm is designed to remove noise from the dataset. Finally, a non-ML method and a DNN model that proactively predict dynamic link blockages are developed. Experiments using the in-band mmWave datasets show that the proposed approach can successfully predict the occurrence of future dynamic blockages (up to 5 s ahead) with more than 80% accuracy in the indoor scenario. Further, for the outdoor scenario with highly mobile vehicular blockages, the proposed model can predict the exact time of the future blockage with less than 100 ms error for blockages happening within the next 600 ms. The proposed method can also predict the size and moving direction of the blockages.
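The static cluster removal step mentioned above can be illustrated with a simplified sketch: points whose grid cell is occupied in most frames are treated as static background (walls, parked objects) and dropped, so that only dynamic returns remain. This is a hypothetical stand-in for the thesis's denoising algorithm; the grid size, presence ratio and function name are assumptions.

```python
import numpy as np

def remove_static_points(frames, grid=0.5, min_presence=0.8):
    """Illustrative static-cluster removal for LiDAR point-cloud frames.

    Each frame is an (N, 3) array of (x, y, z) points. Points are voxelized
    on an (x, y) grid; a cell occupied in at least `min_presence` of the
    frames is declared static, and its points are removed from every frame.
    """
    counts = {}
    for pts in frames:
        # unique occupied cells in this frame
        keys = {tuple(k) for k in np.floor(pts[:, :2] / grid).astype(int)}
        for k in keys:
            counts[k] = counts.get(k, 0) + 1
    # cells present in most frames are static background
    static = {k for k, c in counts.items() if c >= min_presence * len(frames)}
    out = []
    for pts in frames:
        cells = [tuple(k) for k in np.floor(pts[:, :2] / grid).astype(int)]
        mask = np.array([k not in static for k in cells], dtype=bool)
        out.append(pts[mask] if len(pts) else pts)
    return out
```

Removing the persistent background this way leaves only moving clusters, which is what the downstream blockage predictor needs to track.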
For the co-existing LiDAR and mmWave real-world dataset, our LiDAR-aided approach is shown to achieve above 95% accuracy in predicting blockages occurring within 100 ms and more than 80% accuracy for blockages occurring within one second. Further, for the outdoor scenario with highly mobile vehicular blockages, the proposed model can predict the exact time of the future blockage with less than 150 ms error for blockages happening within one second. In addition, our method achieves above 92% accuracy in classifying the type of blockage and above 90% accuracy in predicting the blockage moving direction. The proposed solutions can potentially provide an order-of-magnitude saving in network latency, highlighting a promising approach for addressing the blockage challenges in mmWave/sub-THz networks.
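A non-ML detector of pre-blockage signatures, of the kind contrasted with the DNN models above, could be sketched as a slope test on recent received power: a sustained power decay faster than a threshold flags an imminent blockage. This is purely an illustrative assumption about what such a signature test might look like; the window, threshold and function name are not from the thesis.

```python
import numpy as np

def predict_blockage(power_db, window=10, slope_thresh=-0.5):
    """Hypothetical non-ML pre-blockage signature test.

    Fits a line to the most recent `window` received-power samples (in dB)
    and flags an imminent blockage when power is decaying faster than
    `slope_thresh` dB per sample, i.e. when a moving object is starting
    to shadow the link.
    """
    if len(power_db) < window:
        return False
    recent = np.asarray(power_db[-window:], dtype=float)
    slope = np.polyfit(np.arange(window), recent, 1)[0]  # dB per sample
    return bool(slope < slope_thresh)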

Contributors: Wu, Shunyao (Author) / Chakrabarti, Chaitali (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Bliss, Daniel (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2022