Matching Items (17)
156976-Thumbnail Image.png
Description
In the past half century, low-power wireless signals from portable radar sensors, initially continuous-wave (CW) radars and more recently ultra-wideband (UWB) radar systems, have been successfully used to detect physiological movements of stationary human beings.

The thesis starts with a careful review of existing signal processing techniques and state-of-the-art methods for vital signs monitoring using UWB impulse systems. An in-depth analysis of various approaches is then presented.

Robust heart-rate monitoring methods are proposed based on a novel result: spectrally, the fundamental heartbeat frequency is respiration-interference-limited, while its higher-order harmonics are noise-limited. The higher-order harmonics related to the heartbeat can therefore serve as a robust indicator when the fundamental heartbeat is masked by strong lower-order harmonics of respiration, or when phase calibration is inaccurate in phase-based methods. Analytical spectral analysis is performed to validate that the higher-order harmonics of the heartbeat are almost free of respiration interference. Extensive experiments have been conducted to validate an adaptive heart-rate monitoring algorithm. The scenarios of interest are: 1) a single subject, 2) multiple subjects at different ranges, 3) multiple subjects at the same range, and 4) through-wall monitoring.
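The spectral claim — respiration harmonics crowd the heartbeat fundamental but die out near its higher-order harmonics — can be illustrated with a small simulation. The rates, amplitudes, and harmonic roll-off below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

fs, T = 50.0, 120.0                      # sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1.0 / fs)
fr, fh = 0.25, 1.2                       # assumed respiration / heartbeat rates (Hz)

# Respiration is non-sinusoidal, so it carries strong low-order harmonics
# (here up to 6*fr = 1.5 Hz) that crowd the heartbeat fundamental near 1.2 Hz.
resp = sum((1.0 / k**2) * np.sin(2 * np.pi * k * fr * t) for k in range(1, 7))
heart = 0.05 * np.sin(2 * np.pi * fh * t) + 0.02 * np.sin(2 * np.pi * 2 * fh * t)
sig = resp + heart + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

f = np.fft.rfftfreq(t.size, 1.0 / fs)
mag = np.abs(np.fft.rfft(sig))

def dominant(lo, hi):
    """Frequency of the strongest spectral line inside [lo, hi] Hz."""
    band = (f >= lo) & (f <= hi)
    return float(f[band][np.argmax(mag[band])])

# Around the second heartbeat harmonic the respiration harmonics have decayed,
# so the peak there is a clean, interference-free heartbeat signature.
print(dominant(2.0, 2.8))   # -> 2.4
```

This is the intuition behind estimating heart rate from the noise-limited harmonic band rather than the interference-limited fundamental.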

A remote sensing radar system implementing the proposed adaptive heart-rate estimation algorithm is compared to a competing remote sensing technology, a remote imaging photoplethysmography system, with promising results.

State-of-the-art methods for vital signs monitoring fundamentally rely on processing the phase variation caused by vital-sign motion, and their performance is determined by a phase calibration procedure. Existing methods fail to consider the time-varying nature of phase noise, and there is no prior knowledge of which of the corrupted complex-signal components, in-phase (I) or quadrature (Q), needs to be corrected. A precise phase calibration routine is proposed based on the respiration pattern: the I/Q samples within each breath are more likely to experience similar motion noise, so each breath should be corrected independently. A high slow-time sampling rate is used to ensure phase calibration accuracy. Occasionally, a 180-degree phase-shift error occurs after the initial calibration step and must also be corrected; all phase trajectories in the I/Q plot are allowed only in certain angular spaces. This precise phase calibration routine is validated through computer simulations incorporating a time-varying phase noise model, a controlled mechanical system, and human-subject experiments.
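As a hedged sketch of one standard calibration step (not the thesis's exact per-breath routine), the snippet below removes unknown I/Q DC offsets with a least-squares circle fit and then recovers the motion by arctangent demodulation; the offsets, motion amplitude, and noise level are all assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
phase = 1.5 * np.sin(2 * np.pi * 0.25 * t)      # respiration-induced phase (rad)
dc_i, dc_q = 0.8, -0.5                          # unknown DC offsets (assumed)
i = dc_i + np.cos(phase) + 0.01 * rng.standard_normal(t.size)
q = dc_q + np.sin(phase) + 0.01 * rng.standard_normal(t.size)

# Calibration: a least-squares circle fit recovers the I/Q DC offsets
# from (i - ci)^2 + (q - cq)^2 = r^2 rewritten as a linear system.
A = np.column_stack([2 * i, 2 * q, np.ones_like(i)])
b = i**2 + q**2
ci, cq, _ = np.linalg.lstsq(A, b, rcond=None)[0]

# Arctangent demodulation on the centered samples recovers the chest motion.
est = np.unwrap(np.arctan2(q - cq, i - ci))
print(round(float(ci), 1), round(float(cq), 1))   # -> 0.8 -0.5
```

Errors in the recovered offsets translate directly into phase distortion, which is why the thesis treats calibration accuracy as the performance bottleneck.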
ContributorsRong, Yu (Author) / Bliss, Daniel W (Thesis advisor) / Richmond, Christ D (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created2018
131527-Thumbnail Image.png
Description
Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new localization method using networks with several wireless transceivers, implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks heavily depend upon an estimate of the communication channel, which represents the distortions that a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly with each different location. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and pose it to this algorithm to obtain its location.
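A nearest-fingerprint lookup makes the channel-as-signature idea concrete; it stands in for the trained machine learning model, and the grid size, subcarrier count, and synthetic channels below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, n_sub = 25, 64        # grid locations and subcarriers (assumed)

# One synthetic frequency-selective channel "fingerprint" per location;
# a deployed system would use measured channel estimates instead.
fingerprints = (rng.standard_normal((n_loc, n_sub))
                + 1j * rng.standard_normal((n_loc, n_sub)))

def locate(csi):
    """Nearest-fingerprint lookup: return the best-matching location index."""
    return int(np.argmin(np.linalg.norm(fingerprints - csi, axis=1)))

# A device at location 7 poses its noisy channel estimate to the model.
noisy = fingerprints[7] + 0.3 * (rng.standard_normal(n_sub)
                                 + 1j * rng.standard_normal(n_sub))
print(locate(noisy))   # -> 7
```

Because nearby channels stay distinguishable even under estimation noise, the lookup remains correct — the same robustness the project later tests with noisy location labels.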

As an additional step, the effect of location noise is investigated in this report. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise in its location labels. In doing so, the team demonstrates that the system could be implemented in a continued-learning environment, in which some user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed without extensive data collection prior to release.
ContributorsChang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
148033-Thumbnail Image.png
Description

Every communication system, wired or wireless, has a transmitter and a receiver. The future of wireless communication involves massive numbers of transmitters and receivers, which raises the question: can computer vision help wireless communication? To satisfy high data-rate requirements, large antenna arrays are needed, and the devices that employ such arrays often carry other sensors such as RGB cameras, depth cameras, or LiDAR. These vision sensors can help overcome non-trivial wireless communication challenges, such as beam blockage prediction and hand-over prediction. This is further motivated by recent advances in deep learning and computer vision that can extract high-level semantics from complex visual scenes, and by the increasing interest in leveraging machine/deep learning tools in wireless communication problems [1].

The research focused on technologies such as 3D cameras, object detection, object tracking using computer vision, and compression techniques. The main objective of using computer vision was to make millimeter-wave communication more robust and to collect more data for the machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used in the research. An algorithm was developed that uses 3D cameras and machine learning models such as YOLOv3 to track moving objects with servo motors on low-powered computers like the Raspberry Pi or the Jetson Nano. In other words, the receiver could track a highly mobile transmitter in one dimension using a 3D camera. During the research, the transmitter was also mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to track the highly mobile drone. To build this machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step.
GPS coordinates from the DJI M600 were also pulled and successfully plotted on Google Earth, which proved useful during drone-based data collection and for future applications of drone position estimation using machine learning.

Initially, images were taken from the transmitter camera every second, and the frames were converted to text files containing hexadecimal values. Each text file was transmitted to the receiver, where a Python script converted the hexadecimal back to JPEG, giving the effect of real-time video transmission. Toward the end of the research, however, industry-standard real-time video was streamed using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Pi Camera. More details are discussed later in this report.
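The hexadecimal frame pipeline described above reduces to a simple encode/decode pair; a minimal sketch with toy JPEG-like bytes rather than an actual camera capture:

```python
def frame_to_hex(jpeg_bytes: bytes) -> str:
    """Encode a captured JPEG frame as hexadecimal text for transmission."""
    return jpeg_bytes.hex()

def hex_to_frame(hex_text: str) -> bytes:
    """Decode received hexadecimal text back into the JPEG byte stream."""
    return bytes.fromhex(hex_text)

# JPEG files start with the SOI marker 0xFFD8 and end with EOI 0xFFD9.
frame = b"\xff\xd8\xff\xe0" + b"\x00" * 16 + b"\xff\xd9"
print(frame_to_hex(frame)[:8])   # -> ffd8ffe0
```

Streaming with FFmpeg over GNU Radio replaces this per-frame text hop with a properly compressed video pipeline, which is why the project moved to it.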

ContributorsSeth, Madhav (Author) / Alkhateeb, Ahmed (Thesis director) / Alrabeiah, Muhammad (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
171482-Thumbnail Image.png
Description
The recent trends in wireless communication, fueled by the demand for lower latency and higher bandwidth, have caused the migration of users from lower frequencies to higher frequencies, i.e., from 2.5 GHz toward millimeter wave. However, the migration to higher frequencies has its challenges: sensitivity to blockages is a key challenge for millimeter-wave and terahertz networks in 5G and beyond. Since these networks mainly rely on line-of-sight (LOS) links, sudden link blockages highly threaten their reliability. Further, when the LOS link is blocked, the network typically needs to hand off the user to another LOS basestation, which may incur critical latency, especially if a search over a large codebook of narrow beams is needed. A promising way to tackle the reliability and latency challenges lies in enabling proaction in wireless networks. Proaction allows the network to anticipate future blockages, especially dynamic blockages, and initiate user hand-off beforehand. This thesis presents a complete machine learning framework for enabling proaction in wireless networks, relying on multi-modal 3D LiDAR (Light Detection and Ranging) point-cloud and position data. In particular, it proposes a sensing-aided wireless communication solution that utilizes bimodal machine learning to predict the user link status. This is achieved via a deep learning algorithm that learns from LiDAR point-cloud and position data to distinguish between LOS and NLOS (non-line-of-sight) links. The algorithm is evaluated on the multi-modal DeepSense 6G dataset, a time-synchronized collection of data from various sensors such as millimeter-wave power, position, camera, radar, and LiDAR. Experimental results indicate that the algorithm can predict link status with 87% accuracy. This highlights a promising direction for enabling high reliability and low latency in future wireless networks.
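As a hedged illustration of the link-status task, the sketch below replaces the deep bimodal model with a purely geometric check on the LiDAR point cloud; the coordinates and clearance threshold are assumptions:

```python
import numpy as np

def link_status(points, bs, user, clearance=0.5):
    """Predict LOS (1) vs. NLOS (0): is any LiDAR point within `clearance`
    meters of the basestation-user line segment? A geometric stand-in for
    the learned bimodal (LiDAR + position) classifier."""
    d = user - bs
    t = np.clip((points - bs) @ d / (d @ d), 0.0, 1.0)   # project onto segment
    nearest = bs + t[:, None] * d
    gaps = np.linalg.norm(points - nearest, axis=1)
    return int(not np.any(gaps < clearance))

bs = np.array([0.0, 0.0, 5.0])               # basestation position (assumed, m)
user = np.array([20.0, 0.0, 1.5])            # user position (assumed, m)
clear_scene = np.array([[10.0, 8.0, 1.0]])   # point far off the link
blocker = np.array([[10.0, 0.0, 3.2]])       # point sitting on the link

print(link_status(clear_scene, bs, user), link_status(blocker, bs, user))  # -> 1 0
```

The learned model goes further than this geometric rule: it anticipates *future* blockages from the scene dynamics, enabling proactive hand-off.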
ContributorsSrinivas, Tirumalai Vinjamoor Nikhil (Author) / Alkhateeb, Ahmed (Thesis advisor) / Trichopoulos, Georgios (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created2022
189258-Thumbnail Image.png
Description
Predicting nonlinear dynamical systems has been a long-standing challenge in science. This field is currently witnessing a revolution with the advent of machine learning methods. Concurrently, the analysis of dynamics in various nonlinear complex systems continues to be crucial. Guided by these directions, I conduct the following studies.

Predicting critical transitions and transient states in nonlinear dynamics is a complex problem. I developed a solution called parameter-aware reservoir computing, which uses machine learning to track how system dynamics change with a driving parameter. I show that the transition point can be accurately predicted even when the model is trained only in a sustained functioning regime before the transition. Notably, the approach can also predict whether the system will enter a transient state, the distribution of transient lifetimes, and their average before a final collapse, all of which are crucial for management. I then introduce a machine-learning-based digital twin, built on reservoir computing, for monitoring and predicting the evolution of externally driven nonlinear dynamical systems. Extensive tests on various models, encompassing optics, ecology, and climate, verify the approach's effectiveness. The digital twins can extrapolate unknown system dynamics, continually forecast and monitor under non-stationary external driving, infer hidden variables, adapt to different driving waveforms, and extrapolate bifurcation behaviors across varying system sizes.

Integrating engineered gene circuits into host cells poses a significant challenge in synthetic biology due to circuit-host interactions, such as growth feedback. I conducted systematic studies on hundreds of circuit structures exhibiting various functionalities and identified a comprehensive categorization of growth-induced failures, discerning three dynamical mechanisms behind them. Moreover, my comprehensive computations reveal a scaling law between circuit robustness and the intensity of growth feedback, and a class of circuits with optimal robustness is identified.

Chimera states, a symmetry-breaking phenomenon in oscillator networks, traditionally have transient lifetimes that grow exponentially with system size. However, my research on high-dimensional oscillators leads to the discovery of 'short-lived' chimera states, whose lifetime increases logarithmically with system size and decreases logarithmically with random perturbations, indicating a unique fragility. To understand these states, I use a transverse stability analysis supported by simulations.
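A toy version of parameter-aware reservoir computing can be sketched on the logistic map: the reservoir is driven by the system state together with the bifurcation parameter, trained at a few parameter values, and asked to predict at a value it never saw. The reservoir size, scalings, and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200                                            # reservoir size (assumed)
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # set spectral radius to 0.9
W_in = 0.5 * rng.standard_normal((N, 2))           # input: [state x, parameter r]

def run(r_values, steps=500, washout=50):
    """Drive the reservoir with logistic-map trajectories at the given r values."""
    states, targets = [], []
    for r in r_values:
        x, h = 0.4, np.zeros(N)
        for n in range(steps):
            h = np.tanh(W @ h + W_in @ np.array([x, r]))
            x_next = r * x * (1.0 - x)
            if n >= washout:                       # discard initial transient
                states.append(h.copy())
                targets.append(x_next)
            x = x_next
    return np.array(states), np.array(targets)

# Train a ridge-regression readout at three parameter values...
H, y = run([3.5, 3.6, 3.7])
W_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(N), H.T @ y)

# ...and predict one step ahead at r = 3.65, never seen during training.
H_test, y_test = run([3.65])
err = float(np.mean(np.abs(H_test @ W_out - y_test)))
print(round(err, 4))
```

The one-step error stays small at the unseen parameter value; it is this ability to interpolate and extrapolate in the parameter direction that the full method exploits to anticipate transitions.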
ContributorsKong, Lingwei (Author) / Lai, Ying-Cheng (Thesis advisor) / Tian, Xiaojun (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created2023
158413-Thumbnail Image.png
Description
Within the near future, a vast demand for autonomous vehicular techniques can be forecast on both aviation and ground platforms, including autonomous driving, automatic landing, and air traffic management. These techniques usually rely on positioning and communication systems independently, which potentially causes spectrum congestion. Inspired by spectrum sharing techniques, the Communications and High-Precision Positioning (CHP2) system was developed to provide a high-precision positioning service (precision ~1 cm) while simultaneously performing the communication task in the same spectrum. The CHP2 system is implemented on a consumer-off-the-shelf (COTS) software-defined radio (SDR) platform with customized hardware. Taking advantage of the SDR platform, the complete baseband processing chain, including time-of-arrival (ToA) and time-of-flight (ToF) estimation, is mathematically modeled and then implemented on a system-on-chip (SoC). Thanks to its compact size and low cost, the CHP2 system can be installed on different aerial or ground platforms, enabling a highly mobile and reconfigurable network.

In this dissertation report, the implementation procedure of the CHP2 system is discussed in detail, focusing on the system construction on the Xilinx UltraScale+ SoC platform. The CHP2 waveform design, ToA solution, and timing-exchange algorithms are also introduced. Finally, several in-lab tests and over-the-air demonstrations are conducted. The best demonstrated ranging performance achieves a ~1 cm standard deviation at a 10 Hz estimation refresh rate using a 10 MHz narrowband signal on a 915 MHz (US ISM) or 783 MHz (EU licensed) carrier frequency.
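The essence of the timing-exchange algorithm — clock offsets between the two nodes cancel in a two-way exchange, leaving only the time of flight — can be checked in a few lines; the timestamps below are toy values, not CHP2 measurements:

```python
C = 299_792_458.0   # speed of light (m/s)

def two_way_range(t_tx_a, t_rx_b, t_tx_b, t_rx_a):
    """Range from a two-way timing exchange: the unknown clock offset between
    nodes A and B cancels in the round-trip difference."""
    tof = ((t_rx_a - t_tx_a) - (t_tx_b - t_rx_b)) / 2.0
    return C * tof

tof = 15.0 / C                     # one-way flight time for a 15 m link (~50 ns)
offset, turnaround = 1e-3, 2e-3    # B's clock bias and reply delay (assumed)
d = two_way_range(0.0,                          # A transmits (A's clock)
                  tof + offset,                 # B receives (B's clock)
                  tof + offset + turnaround,    # B replies  (B's clock)
                  2 * tof + turnaround)         # A receives (A's clock)
print(round(d, 6))   # -> 15.0
```

In the real system the timestamps come from sub-sample ToA estimation on the SoC, which is what pushes the ranging precision to the centimeter level despite the narrow 10 MHz bandwidth.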
ContributorsYu, Hanguang (Author) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Alkhateeb, Ahmed (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created2020
187375-Thumbnail Image.png
Description
With the rapid development of reflect-arrays and software-defined meta-surfaces, reconfigurable intelligent surfaces (RISs) have been envisioned as promising technologies for next-generation wireless communication and sensing systems. These surfaces comprise massive numbers of nearly-passive elements that interact with incident signals in a smart way to improve the performance of such systems. In RIS-aided communication systems, however, designing this smart interaction requires acquiring large-dimensional channel knowledge between the RIS and the transmitter/receiver. Acquiring this knowledge is one of the most crucial challenges for RISs, as it is associated with large computational and hardware complexity. For RIS-aided sensing systems, it is interesting to first investigate scene depth perception based on millimeter wave (mmWave) multiple-input multiple-output (MIMO) sensing. While mmWave MIMO sensing systems address some critical limitations of optical sensors, realizing them poses several key challenges: communication-constrained sensing framework design, beam codebook design, and scene depth estimation. Given the high spatial resolution provided by RISs, RIS-aided mmWave sensing systems have the potential to improve scene depth perception, while introducing key challenges of their own. In this dissertation, for RIS-aided communication systems, efficient RIS interaction design solutions are proposed by leveraging tools from compressive sensing and deep learning. The achievable rates of these solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead. For RIS-aided sensing systems, a mmWave MIMO based sensing framework is first developed for building accurate depth maps under the constraints imposed by the communication transceivers. Then, a scene depth estimation framework based on RIS-aided sensing is developed for building high-resolution, accurate depth maps.
Numerical simulations illustrate the promising performance of the proposed solutions, highlighting their potential for next-generation communication and sensing systems.
ContributorsTaha, Abdelrahman (Author) / Alkhateeb, Ahmed (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created2023
187540-Thumbnail Image.png
Description
In this dissertation, I implement and demonstrate a distributed coherent mesh beamforming system for wireless communications that provides increased range, data rate, and robustness to interference. By using one or multiple distributed, locally coherent meshes as antenna arrays, I develop an approach that realizes a signal-to-noise-ratio improvement, proportional to the number of mesh elements, over a traditional single-antenna to single-antenna link without interference. I further demonstrate that in the presence of interference, the signal-to-interference-plus-noise-ratio improvement is significantly greater for a wide range of environments. I also discuss key performance bounds that drive system design decisions, as well as techniques for robust distributed adaptive beamformer construction. I develop and implement an over-the-air distributed time and frequency synchronization algorithm to enable distributed coherence on software-defined radios. Finally, I implement the distributed coherent mesh beamforming system over the air on a network of software-defined radios and demonstrate, in both simulation and experiment, with and without interference, performance approaching the theoretical bounds.
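The headline SNR gain of coherent combining follows from simple power accounting: aligned signal voltages add across elements while independent noises add in power. An 8-element mesh is an assumed example size:

```python
import numpy as np

n_elem = 8                                 # assumed mesh size
# Coherently combined signal voltages add: signal power grows as N^2.
signal_power_mesh = (n_elem * 1.0) ** 2
# Independent receiver noises add in power: noise power grows as N.
noise_power_mesh = n_elem * 1.0

# Net SNR gain over a single-antenna link (signal and noise power both 1.0).
snr_gain = (signal_power_mesh / noise_power_mesh) / (1.0 / 1.0)
print(round(10 * np.log10(snr_gain), 2))   # -> 9.03 (dB, for N = 8)
```

This N-fold gain is the interference-free baseline; the dissertation's adaptive beamformer then trades some of the array's degrees of freedom to null interferers, which is why the SINR improvement can be far larger.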
ContributorsHoltom, Jacob (Author) / Bliss, Daniel W (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Herschfelt, Andrew (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created2023
171374-Thumbnail Image.png
Description
Terahertz (THz) waves (300 GHz to 10 THz) constitute the least studied part of the electromagnetic (EM) spectrum, with unique propagation properties that make them attractive for emerging sensing and imaging applications. As opposed to optical signals, THz waves can penetrate several non-metallic materials (e.g., plastic, wood, and thin tissues), thus enabling several applications in security monitoring, non-destructive evaluation, and biometrics. Additionally, THz waves scatter on most surfaces distinctively compared with lower/higher frequencies (e.g., microwave/optical bands). Therefore, based on these two interesting THz wave propagation properties, namely penetration and scattering, I worked on THz imaging methods that explore non-line-of-sight (NLoS) information. First, I use a THz microscopy method to probe fingertips as a new technique for fingerprint scanning. Due to wave penetration in the THz range, I can exploit sub-skin traits not visible with current approaches to obtain a more robust and secure fingerprint scanning method. I also fabricated fingerprint spoofs using latex to compare the imaging results between real and fake fingers. Next, I focus on THz imaging hardware topologies and algorithms for longer-distance imaging applications. As such, I compare the imaging performance of dense and sparse antenna arrays through simulations and measurements. I show that sparse arrays with nonuniform amplitudes can provide lower side lobes in the images. Moreover, although sparse arrays feature a much smaller total number of elements, dense arrays have advantages when imaging scenes with multiple objects. Afterward, I propose a THz imaging method to see around obstacles/corners, where THz waves' unique scattering properties are helpful. I carried out both simulations and measurements in various scenarios to validate the proposed method.
The results indicate that THz waves can reveal the hidden scene with centimeter-scale resolution using proper rough surfaces and moderately sized apertures. Moreover, I demonstrate that this imaging technique can benefit simultaneous localization and mapping (SLAM) in future communication systems. NLoS images enable accurate localization of blocked users, hence increasing the link robustness. I present both simulation and measurement results to validate this SLAM method. I also show that better localization accuracy is achieved when the user's antenna is omnidirectional rather than directional.
ContributorsCui, Yiran (Author) / Trichopoulos, Georgios (Thesis advisor) / Balanis, Constantine (Committee member) / Aberle, James (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created2022
191748-Thumbnail Image.png
Description
Millimeter-wave (mmWave) and sub-terahertz (sub-THz) systems aim to utilize the large bandwidth available at these frequencies. This has the potential to enable several future applications that require high data rates, such as autonomous vehicles and digital twins. These systems, however, have several challenges that need to be addressed to realize their gains in practice. First, they need to deploy large antenna arrays and use narrow beams to guarantee sufficient receive power. Adjusting the narrow beams of the large antenna arrays incurs massive beam training overhead. Second, the sensitivity to blockages is a key challenge for mmWave and THz networks. Since these networks mainly rely on line-of-sight (LOS) links, sudden link blockages highly threaten the reliability of the networks. Further, when the LOS link is blocked, the network typically needs to hand off the user to another LOS basestation, which may incur critical time latency, especially if a search over a large codebook of narrow beams is needed. A promising way to tackle both these challenges lies in leveraging additional side information such as visual, LiDAR, radar, and position data. These sensors provide rich information about the wireless environment, which can be utilized for fast beam and blockage prediction. This dissertation presents a machine-learning framework for sensing-aided beam and blockage prediction. In particular, for beam prediction, this work proposes to utilize visual and positional data to predict the optimal beam indices. For the first time, this work investigates the sensing-aided beam prediction task in a real-world vehicle-to-infrastructure and drone communication scenario. Similarly, for blockage prediction, this dissertation proposes a multi-modal wireless communication solution that utilizes bimodal machine learning to perform proactive blockage prediction and user hand-off. 
Evaluations on both real-world and synthetic datasets illustrate the promising performance of the proposed solutions and highlight their potential for next-generation communication and sensing systems.
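As a hedged geometric stand-in for the learned position-aided beam predictor, the sketch below maps a reported user position straight to the nearest codebook steering angle, skipping the exhaustive beam sweep; the codebook size and boresight convention are assumptions:

```python
import numpy as np

n_beams = 64                                        # codebook size (assumed)
beam_angles = np.linspace(-np.pi / 2, np.pi / 2, n_beams)

def predict_beam(user_xy, bs_xy=np.zeros(2)):
    """Pick the codebook beam whose steering angle best matches the
    basestation-user direction (boresight along +y, an assumed convention).
    A geometric stand-in for the learned sensing-aided predictor."""
    dx, dy = user_xy - bs_xy
    angle = np.arctan2(dx, dy)                      # bearing from boresight
    return int(np.argmin(np.abs(beam_angles - angle)))

print(predict_beam(np.array([30.0, 0.0])))          # user far to the +x side -> 63
```

The learned models in the dissertation handle what this rule cannot: calibration errors, multipath, mobile scatterers, and the vision/LiDAR modalities that reveal impending blockages.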
ContributorsCharan, Gouranga (Author) / Alkhateeb, Ahmed (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Turaga, Pavan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created2024