Matching Items (24)
Description

Every communication system has a transmitter and a receiver, whether it is wired or wireless. The future of wireless communication will involve massive numbers of transmitters and receivers, which raises the question: can computer vision help wireless communication? Satisfying high data-rate requirements demands large numbers of antennas, and the devices that employ large antenna arrays often carry other sensors such as RGB cameras, depth cameras, or LiDAR. These vision sensors can help overcome non-trivial wireless communication challenges, such as beam blockage prediction and hand-over prediction. This direction is further motivated by recent advances in deep learning and computer vision, which can extract high-level semantics from complex visual scenes, and by the growing interest in applying machine/deep learning tools to wireless communication problems. [1]

The research focused on technologies such as 3D cameras, object detection, object tracking with computer vision, and compression techniques. The main objectives of using computer vision were to make millimeter-wave communication more robust and to collect more data for the machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used in the research. An algorithm was developed that uses 3D cameras and machine learning models such as YOLOv3 to track moving objects with servo motors and low-powered computers like the Raspberry Pi or the Jetson Nano; in other words, the receiver could track a highly mobile transmitter in one dimension using a 3D camera. In addition, the transmitter was mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to follow the highly mobile drone. To build this machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step. GPS coordinates were also pulled from the DJI M600 and successfully plotted on Google Earth, which proved very useful for data collection with a drone and for future applications of drone position estimation using machine learning.

Initially, images were taken from the transmitter camera every second, and those frames were converted to text files containing hexadecimal values. Each text file was transmitted from the transmitter to the receiver, where a Python program converted the hexadecimal data back to JPEG, giving the effect of real-time video transmission. Toward the end of the research, however, industry-standard, real-time video was streamed using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Raspberry Pi camera. More details are discussed throughout this research report.
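A hedged sketch of the frame-to-hex pipeline described above: the transmitter serializes a captured JPEG frame into a hexadecimal text file, and the receiver reverses the conversion. The file names and the absence of chunking or error handling are illustrative assumptions, not the thesis code.

```python
# Transmitter side: serialize a captured JPEG frame into a hex text file.
def frame_to_hex(jpg_path: str, txt_path: str) -> None:
    with open(jpg_path, "rb") as f:
        data = f.read()
    with open(txt_path, "w") as f:
        f.write(data.hex())  # hexadecimal string, two characters per byte

# Receiver side: reconstruct the JPEG from the received hex text file.
def hex_to_frame(txt_path: str, jpg_path: str) -> None:
    with open(txt_path, "r") as f:
        hex_str = f.read().strip()
    with open(jpg_path, "wb") as f:
        f.write(bytes.fromhex(hex_str))

frame_to_hex("frame_0001.jpg", "frame_0001.txt")      # before transmission
hex_to_frame("frame_0001.txt", "frame_0001_rx.jpg")   # after reception
```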

Contributors: Seth, Madhav (Author) / Alkhateeb, Ahmed (Thesis director) / Alrabeiah, Muhammad (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Leveraging recent improvements in machine learning, this project proposes a new localization method that uses networks of several wireless transceivers and can be implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily on an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and present it to the algorithm to obtain its location.

As an additional step, the effect of location noise is investigated in this report. After the localization system described above shows promising results, the team demonstrates that the system is robust to noise on its location labels. In doing so, the team shows that the system could be deployed in a continued-learning environment, in which user agents report their estimated (noisy) locations over a wireless communication network, so that the model can operate in an environment without extensive data collection prior to release.
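A minimal, hedged toy version of this pipeline, using scikit-learn's MLPRegressor as a stand-in for the project's model: position-dependent synthetic "channel" features are mapped to (x, y) labels, and training uses noisy labels to mimic the continued-learning setting. The anchor sites, feature construction, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
anchors = rng.uniform(0, 10, size=(8, 2))            # assumed transceiver sites

def channel_features(loc):
    # Toy multipath signature: per-anchor phase and attenuation vs. distance.
    # A placeholder for real channel estimates, not a channel model.
    d = np.linalg.norm(loc[:, None, :] - anchors[None, :, :], axis=2)
    return np.hstack([np.cos(2 * np.pi * d / 0.12), 1.0 / (1.0 + d)])

loc = rng.uniform(0, 10, size=(2000, 2))             # true positions (meters)
H = channel_features(loc)
noisy = loc + rng.normal(scale=0.5, size=loc.shape)  # noisy location labels

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=800, random_state=0)
model.fit(H[:1500], noisy[:1500])                    # train on noisy labels

err = np.linalg.norm(model.predict(H[1500:]) - loc[1500:], axis=1)
print(f"mean localization error: {err.mean():.2f} m")
```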
Contributors: Chang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

The recent trends in wireless communication, fueled by the demand for lower latency and higher bandwidth, have caused the migration of users from lower frequencies to higher frequencies, i.e., from 2.5 GHz to millimeter wave. However, the migration to higher frequencies has its challenges. Sensitivity to blockages is a key challenge for millimeter-wave and terahertz networks in 5G and beyond. Since these networks rely mainly on line-of-sight (LOS) links, sudden link blockages severely threaten their reliability. Further, when the LOS link is blocked, the network typically needs to hand the user off to another LOS basestation, which may incur critical latency, especially if a search over a large codebook of narrow beams is needed. A promising way to tackle the reliability and latency challenges lies in enabling proaction in wireless networks. Proaction allows the network to anticipate future blockages, especially dynamic blockages, and initiate user hand-off beforehand. This thesis presents a complete machine learning framework for enabling proaction in wireless networks based on multi-modal 3D LiDAR (Light Detection and Ranging) point cloud and position data. In particular, it proposes a sensing-aided wireless communication solution that utilizes bimodal machine learning to predict the user link status. This is achieved via a deep learning algorithm that learns from LiDAR point-cloud and position data to distinguish between LOS and NLOS (non-line-of-sight) links. The algorithm is evaluated on DeepSense 6G, a multi-modal wireless communication dataset comprising time-synchronized data from sensors such as millimeter-wave power, position, camera, radar, and LiDAR. Experimental results indicate that the algorithm can predict link status with 87% accuracy. This highlights a promising direction for enabling high reliability and low latency in future wireless networks.
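One plausible shape for the bimodal classifier described above, sketched in PyTorch under stated assumptions: a PointNet-style branch embeds the LiDAR point cloud, a small MLP embeds the user position, and a fused head outputs LOS/NLOS logits. The layer sizes and fusion scheme are illustrative, not the evaluated architecture.

```python
import torch
import torch.nn as nn

class LinkStatusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lidar_branch = nn.Sequential(   # shared per-point features
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.pos_branch = nn.Sequential(nn.Linear(2, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(128 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2),                # logits: [NLOS, LOS]
        )

    def forward(self, points, pos):
        # points: (B, 3, N) LiDAR xyz; pos: (B, 2) user position
        f_lidar = self.lidar_branch(points).max(dim=2).values  # global pooling
        return self.head(torch.cat([f_lidar, self.pos_branch(pos)], dim=1))

logits = LinkStatusNet()(torch.randn(4, 3, 1024), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 2])
```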
Contributors: Srinivas, Tirumalai Vinjamoor Nikhil (Author) / Alkhateeb, Ahmed (Thesis advisor) / Trichopoulos, Georgios (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Reconfigurable metasurfaces (RMSs) are promising solutions for beamforming and sensing applications including 5G and beyond wireless communications, satellite and radar systems, and biomarker sensing. In this work, three distinct RMS architectures – reconfigurable intelligent surfaces (RISs), meta-transmission lines (meta-TLs), and substrate integrated waveguide leaky-wave antennas (SIW-LWAs) – are developed and characterized. The ever-increasing demand for higher data rates and lower latencies has propelled the telecommunications industry to adopt higher frequencies for 5G and beyond wireless communications. However, this transition to higher frequencies introduces challenges in terms of signal coverage and path loss. Many base stations would be necessary to ensure signal fidelity in such a setting, making bulky phased-array-based solutions impractical. Consequently, to meet the unique needs of 5G and beyond wireless networks, this work proposes the use of RISs, which combine low profiles, low RF losses, low power consumption, and high gain, making them excellent candidates for future wireless communication applications. Specifically, RISs at sub-6 GHz, mmWave, and sub-THz frequencies are analyzed to demonstrate their ability to improve signal strength and coverage. Further, a linear meta-TL wave space is designed to miniaturize true-time-delay beamforming structures such as Rotman lenses, which are traditionally bulky. To address this challenge, a modified lumped-element TL model is proposed: a meta-TL is created by including mutual coupling effects and can be used to slow down the electromagnetic signal and realize miniaturized lenses. A proof-of-concept 1D meta-TL demonstrates about 90% size reduction and 40% bandwidth improvement. Furthermore, a conformable antenna design for radio-frequency-based tracking of hand gestures is also detailed. An SIW-LWA is employed as the radiating element to couple RF signals into the human hand. The antenna is envisaged to be integrated in a wristband topology and to capture the changes in the electric field caused by various hand movements, with the scattering parameters used to track changes in the wrist anatomy. Sensor characterization showed significant sensitivity suppression due to the lossy, multi-dielectric nature of the tissues in the wrist. However, the sensor demonstrates good coupling of electromagnetic energy, making it suitable for on-body wireless communications and magnetic resonance imaging applications.
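As a hedged aside, the phase profile that lets a RIS redirect an incident beam follows from the generalized law of reflection. The sketch below assumes a uniform linear surface at an illustrative 28 GHz carrier with half-wavelength spacing; it is not one of the fabricated designs in this work.

```python
import numpy as np

# Per-element phase profile for anomalous reflection on a uniform linear RIS.
# Hardware would quantize these continuous phases to the available states.
f = 28e9                          # assumed mmWave carrier
lam = 3e8 / f
d = lam / 2                       # element spacing
n = np.arange(64)                 # 64-element surface

theta_in, theta_out = np.deg2rad(20), np.deg2rad(-35)

# Linear phase gradient that redirects the incident beam to theta_out.
phase = np.mod(-2 * np.pi * d * n / lam * (np.sin(theta_out) - np.sin(theta_in)),
               2 * np.pi)
print(np.rad2deg(phase[:5]).round(1))   # first few element phases in degrees
```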
Contributors: Kashyap, Bharath Gundappa (Author) / Trichopoulos, Georgios C (Thesis advisor) / Balanis, Constantine A (Committee member) / Aberle, James T (Committee member) / Alkhateeb, Ahmed (Committee member) / Imani, Seyedmohammedreza F (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Terahertz (THz) waves (300 GHz to 10 THz) constitute the least studied part of the electromagnetic (EM) spectrum, with unique propagation properties that make them attractive for emerging sensing and imaging applications. As opposed to optical signals, THz waves can penetrate several non-metallic materials (e.g., plastic, wood, and thin tissues), thus enabling applications in security monitoring, non-destructive evaluation, and biometrics. Additionally, THz waves scatter on most surfaces distinctively compared with lower and higher frequencies (e.g., microwave and optical bands). Based on these two interesting THz wave propagation properties, namely penetration and scattering, I worked on THz imaging methods that exploit non-line-of-sight (NLoS) information. First, I use a THz microscopy method to probe fingertips as a new technique for fingerprint scanning. Owing to wave penetration in the THz range, I can exploit sub-skin traits not visible with current approaches to obtain a more robust and secure fingerprint scanning method. I also fabricated fingerprint spoofs using latex to compare the imaging results between real and fake fingers. Next, I focus on THz imaging hardware topologies and algorithms for longer-distance imaging applications. As such, I compare the imaging performance of dense and sparse antenna arrays through simulations and measurements. I show that sparse arrays with nonuniform amplitudes can provide lower sidelobes in the images. Moreover, although sparse arrays feature a much smaller total number of elements, dense arrays have advantages when imaging scenes with multiple objects. Afterward, I propose a THz imaging method to see around obstacles and corners, where the unique scattering properties of THz waves prove helpful. I carried out both simulations and measurements in various scenarios to validate the proposed method. The results indicate that THz waves can reveal a hidden scene with centimeter-scale resolution using proper rough surfaces and moderately sized apertures. Moreover, I demonstrate that this imaging technique can benefit simultaneous localization and mapping (SLAM) in future communication systems: NLoS images enable accurate localization of blocked users, increasing link robustness. I present both simulation and measurement results to validate this SLAM method, and show that better localization accuracy is achieved when the user's antenna is omnidirectional rather than directional.
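A small sketch of the array-factor computation underlying such dense-versus-sparse comparisons, assuming a 1-D half-wavelength aperture and a Hamming taper as the nonuniform amplitude profile; the carrier, element count, and taper are illustrative, not the measured arrays in this work.

```python
import numpy as np

# Array factor of a 1-D aperture: nonuniform (tapered) element amplitudes
# lower the peak sidelobe at the cost of a wider mainlobe.
lam = 3e8 / 300e9                       # ~1 mm wavelength at 300 GHz
n = 64
pos = np.arange(n) * lam / 2            # half-wavelength spaced elements
theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
k = 2 * np.pi / lam

def af_db(amp):
    af = np.abs(amp @ np.exp(1j * k * np.outer(pos, np.sin(theta))))
    return 20 * np.log10(np.maximum(af, 1e-12) / af.max())

main = np.abs(theta) < 0.08             # mask out the broadside mainlobe
for name, amp in [("uniform", np.ones(n)), ("tapered", np.hamming(n))]:
    print(f"{name:8s} peak sidelobe: {af_db(amp)[~main].max():6.1f} dB")
```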
Contributors: Cui, Yiran (Author) / Trichopoulos, Georgios (Thesis advisor) / Balanis, Constantine (Committee member) / Aberle, James (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

The fifth generation (5G) of cellular communication is migrating towards higher frequencies to cater to the demand for higher-data-rate applications. However, in higher frequency ranges, like mmWave and terahertz, physical blockage poses a significant challenge to the large-scale deployment of this new technology. Reconfigurable Intelligent Surfaces (RISs) have shown promising potential in extending signal coverage and overcoming signal blockages in wireless communications. However, RIS integration requires tight coordination between network nodes, creating barriers to the wide adoption of RISs and similar IoT devices. To this end, this work presents a practical study of integrating a remotely controlled RIS into an Open RAN (ORAN) compliant 5G private network with minimal software-stack modifications. This thesis proposes using cloud technologies and ORAN features, such as the RAN Intelligent Controller (RIC) and eXternal Applications (xApps), to coordinate the RIS transparently with 5G base station operation. The proposed framework has been integrated into a proof-of-concept hardware prototype with a 5.8 GHz RIS. Experimental results demonstrate that the framework can accurately control beam steering at the RIS within the network. The proposed framework shows promising potential for near real-time RIS beamforming control with minimal power consumption overhead.
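As a loose illustration of such a control path, the sketch below shows an xApp-style agent selecting an RIS beam from reported measurements and pushing the codebook index to a controller over UDP. The endpoint, message schema, and beam-selection rule are hypothetical assumptions for illustration; they are not the ORAN RIC interfaces or the thesis implementation.

```python
import json
import socket

RIS_CTRL_ADDR = ("192.168.1.50", 5005)   # assumed RIS controller endpoint

def select_beam(power_per_beam: dict) -> int:
    # Choose the codebook entry with the highest reported receive power.
    return max(power_per_beam, key=power_per_beam.get)

def push_beam(beam_index: int) -> None:
    # Hypothetical message format; a real deployment would use the RIC's APIs.
    msg = json.dumps({"cmd": "set_beam", "index": beam_index}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, RIS_CTRL_ADDR)

# Example: powers (dBm) reported for a 64-entry beam codebook.
push_beam(select_beam({i: -90.0 - abs(i - 17) for i in range(64)}))
```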
Contributors: Roy, Abhradeep (Author) / Alkhateeb, Ahmed (Thesis advisor) / Syrotiuk, Violet (Thesis advisor) / Trichopoulos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2024
Description

Predicting nonlinear dynamical systems has been a long-standing challenge in science, and the field is currently witnessing a revolution with the advent of machine learning methods. Concurrently, the analysis of dynamics in various nonlinear complex systems remains crucial. Guided by these directions, I conduct the following studies. Predicting critical transitions and transient states in nonlinear dynamics is a complex problem. I developed a solution called parameter-aware reservoir computing, which uses machine learning to track how system dynamics change with a driving parameter. I show that the transition point can be accurately predicted from training data gathered in a sustained functioning regime before the transition. Notably, the approach can also predict whether the system will enter a transient state, the distribution of transient lifetimes, and their average before a final collapse, all of which are crucial for management. I introduce a machine-learning-based digital twin, built on reservoir computing, for monitoring and predicting the evolution of externally driven nonlinear dynamical systems. Extensive tests on various models, encompassing optics, ecology, and climate, verify the approach's effectiveness. The digital twins can extrapolate unknown system dynamics, continually forecast and monitor under non-stationary external driving, infer hidden variables, adapt to different driving waveforms, and extrapolate bifurcation behaviors across varying system sizes. Integrating engineered gene circuits into host cells poses a significant challenge in synthetic biology due to circuit-host interactions, such as growth feedback. I conducted systematic studies on hundreds of circuit structures exhibiting various functionalities and identified a comprehensive categorization of growth-induced failures, discerning three dynamical mechanisms behind them. Moreover, my comprehensive computations reveal a scaling law between circuit robustness and the intensity of growth feedback, and identify a class of circuits with optimal robustness. Chimera states, a symmetry-breaking phenomenon in oscillator networks, traditionally have transient lifetimes that grow exponentially with system size. However, my research on high-dimensional oscillators leads to the discovery of 'short-lived' chimera states, whose lifetime increases logarithmically with system size and decreases logarithmically with random perturbations, indicating a unique fragility. To understand these states, I use a transverse stability analysis supported by simulations.
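A minimal sketch of the parameter-aware reservoir computing idea, using the logistic map as a toy driven system: the bifurcation parameter p enters the echo state network as a second input channel, so one trained reservoir can be "dialed" to parameter values it never saw. The reservoir sizes, training regime, and testbed are illustrative, not the dissertation's configurations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
Win = rng.uniform(-0.5, 0.5, (N, 2))               # input weights for [x, p]
W = rng.normal(size=(N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.8

def reservoir_states(x_series, p):
    r, out = np.zeros(N), []
    for x in x_series:
        r = np.tanh(W @ r + Win @ np.array([x, p]))
        out.append(r.copy())
    return np.array(out)

# Teacher-forced training data at several parameter values.
R_list, T_list = [], []
for p in (3.5, 3.6, 3.7, 3.8):
    x = np.empty(600); x[0] = 0.4
    for t in range(599):
        x[t + 1] = p * x[t] * (1 - x[t])           # logistic map
    S = reservoir_states(x[:-1], p)
    R_list.append(S[100:]); T_list.append(x[101:])  # discard transient
R, T = np.vstack(R_list), np.concatenate(T_list)
Wout = np.linalg.solve(R.T @ R + 1e-6 * np.eye(N), R.T @ T)  # ridge readout

# Closed-loop prediction at an unseen parameter value.
p_new, r, x = 3.75, np.zeros(N), 0.4
for _ in range(200):
    r = np.tanh(W @ r + Win @ np.array([x, p_new]))
    x = Wout @ r
print("sample of predicted attractor at p=3.75:", round(float(x), 3))
```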
Contributors: Kong, Lingwei (Author) / Lai, Ying-Cheng (Thesis advisor) / Tian, Xiaojun (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

In this dissertation, I implement and demonstrate a distributed coherent mesh beamforming system for wireless communications that provides increased range, data rate, and robustness to interference. By using one or more distributed, locally coherent meshes as antenna arrays, I develop an approach that realizes a signal-to-noise-ratio improvement, scaling with the number of mesh elements, over a traditional single-antenna to single-antenna link without interference. I further demonstrate that in the presence of interference, the signal-to-interference-plus-noise-ratio improvement is significantly greater across a wide range of environments. I also discuss key performance bounds that drive system design decisions, as well as techniques for robust distributed adaptive beamformer construction. I develop and implement an over-the-air distributed time and frequency synchronization algorithm to enable distributed coherence on software-defined radios. Finally, I implement the distributed coherent mesh beamforming system over the air on a network of software-defined radios and present simulated and experimental results, with and without interference, that approach the theoretical bounds.
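The sketch below illustrates the coherent-gain argument with an N-element MVDR (minimum-variance distortionless-response) beamformer: the output SNR approaches N times the single-antenna SNR while a strong interferer is nulled. The half-wavelength line-array geometry and interference model are illustrative assumptions, not the mesh system itself.

```python
import numpy as np

N = 8
def steer(theta):
    # Steering vector of a half-wavelength-spaced linear array.
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

v_s = steer(np.deg2rad(10))                      # desired signal direction
v_i = steer(np.deg2rad(40))                      # interferer direction
R = np.eye(N) + 100 * np.outer(v_i, v_i.conj())  # interference-plus-noise cov

w = np.linalg.solve(R, v_s)
w = w / (w.conj() @ v_s)                         # distortionless toward signal

snr_gain = 1.0 / (w.conj() @ w).real             # vs. a single antenna
print(f"SNR gain: {snr_gain:.1f}x (N = {N})")
print(f"interference response: {abs(w.conj() @ v_i)**2:.2e}")
```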
Contributors: Holtom, Jacob (Author) / Bliss, Daniel W (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Herschfelt, Andrew (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

A distributed framework is proposed for addressing resource sharing problems in communications, micro-economics, and various other network systems. The approach uses a hierarchical multi-layer decomposition for network utility maximization. The methodology combines central management with distributed computations to allocate resources and, in dynamic environments, aims to respond efficiently to network changes. The main contributions include a comprehensive description of a unifying optimization framework for sharing resources across different operators and platforms, and a detailed analysis of the generalized methods under the assumption that network changes occur on the same time scale as the convergence time of the algorithms employed for local computations. Assuming strong concavity and smoothness of the objective functions, and under stability conditions for each layer, convergence rates and optimality bounds are presented. The effectiveness of the framework is demonstrated through numerical examples. Furthermore, a novel Federated Edge Network Utility Maximization (FEdg-NUM) architecture is proposed for solving large-scale distributed network utility maximization problems in a fully decentralized way. In FEdg-NUM, clients with private utilities communicate with a peer-to-peer network of edge servers. Convergence properties are examined both analytically and through numerical simulations, and potential applications are highlighted. Finally, problems in complex stochastic dynamic environments, motivated specifically by resource sharing during disasters occurring in multiple areas, are studied. For a hierarchical management scenario, a method is presented that applies a primal-dual algorithm at the higher layer along with deep reinforcement learning algorithms in the localities. Analytical details as well as case studies, such as pandemic and wildfire response, are provided.
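A compact primal-dual sketch of the layered resource-sharing idea on a toy two-link, three-user network: a coordinator performs dual (price) ascent while each user computes a closed-form local best response under log utility. The topology, step size, and small price floor are illustrative assumptions, not the dissertation's framework.

```python
import numpy as np

# Dual decomposition for network utility maximization: each user solves
# max log(x) - price*x locally, which gives the best response x = 1/price.
A = np.array([[1.0, 1.0, 0.0],    # link-by-user routing matrix
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])          # link capacities
lam = np.ones(2)                  # link prices (dual variables)
step = 0.05

for _ in range(5000):
    price = A.T @ lam             # total price seen along each user's path
    x = 1.0 / price               # closed-form local best response
    # Projected dual ascent; a small floor keeps prices strictly positive.
    lam = np.maximum(1e-3, lam + step * (A @ x - c))

print("rates:", x.round(3), "link loads:", (A @ x).round(3))
```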
Contributors: Karakoc, Nurullah (Author) / Scaglione, Anna (Thesis advisor) / Reisslein, Martin (Thesis advisor) / Nedich, Angelia (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

With the rapid development of reflect-arrays and software-defined metasurfaces, reconfigurable intelligent surfaces (RISs) have been envisioned as promising technologies for next-generation wireless communication and sensing systems. These surfaces comprise massive numbers of nearly passive elements that interact with incident signals in a smart way to improve system performance. In RIS-aided communication systems, designing this smart interaction requires acquiring large-dimensional channel knowledge between the RIS and the transmitter/receiver. Acquiring this knowledge is one of the most crucial challenges for RISs, as it entails large computational and hardware complexity. For RIS-aided sensing systems, it is interesting to first investigate scene depth perception based on millimeter-wave (mmWave) multiple-input multiple-output (MIMO) sensing. While mmWave MIMO sensing addresses some critical limitations of optical sensors, realizing these systems poses several key challenges: communication-constrained sensing framework design, beam codebook design, and scene depth estimation. Given the high spatial resolution provided by RISs, RIS-aided mmWave sensing systems have the potential to improve scene depth perception, while introducing some key challenges of their own. In this dissertation, efficient RIS interaction designs for communication systems are proposed by leveraging tools from compressive sensing and deep learning. The achievable rates of these solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead. For RIS-aided sensing systems, an mmWave MIMO sensing framework is first developed for building accurate depth maps under the constraints imposed by the communication transceivers. Then, a scene depth estimation framework based on RIS-aided sensing is developed for building high-resolution, accurate depth maps. Numerical simulations illustrate the promising performance of the proposed solutions, highlighting their potential for next-generation communication and sensing systems.
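As a hedged example of the compressive-sensing flavor of RIS channel estimation mentioned above, the sketch below recovers a channel that is sparse in an angular (DFT) dictionary from a few random RIS probing patterns via orthogonal matching pursuit. All dimensions, the sparsity level, and the ±1 phase patterns are toy assumptions, not the dissertation's solutions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 64, 20, 3                        # elements, pilots, sparse paths
D = np.fft.fft(np.eye(N)) / np.sqrt(N)     # unitary angular dictionary
g = np.zeros(N, dtype=complex)             # K-sparse angular coefficients
g[rng.choice(N, K, replace=False)] = rng.normal(size=K) + 1j * rng.normal(size=K)
h = D @ g                                  # true channel

Phi = rng.choice([-1.0, 1.0], size=(M, N))  # random +/-1 probing patterns
y = Phi @ h + 0.01 * rng.normal(size=M)     # noisy pilot measurements

A, res, supp = Phi @ D, y.copy(), []
for _ in range(K):                          # orthogonal matching pursuit
    supp.append(int(np.argmax(np.abs(A.conj().T @ res))))
    x, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
    res = y - A[:, supp] @ x

g_hat = np.zeros(N, dtype=complex)
g_hat[supp] = x
print("relative error:",
      round(float(np.linalg.norm(g_hat - g) / np.linalg.norm(g)), 3))
```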
Contributors: Taha, Abdelrahman (Author) / Alkhateeb, Ahmed (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023