Matching Items (6)
Description
Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new localization method that uses networks with several wireless transceivers and can be implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily on an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and pose it to this algorithm to obtain its location.
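As a rough illustration of this channel-to-location idea (not the project's exact model), the sketch below trains a small regressor on flattened channel estimates labeled with positions; all names, shapes, and values are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: each row is a flattened channel estimate
# (e.g., real and imaginary parts of the frequency response at each
# transceiver); each label is the (x, y) position where it was measured.
rng = np.random.default_rng(0)
n_samples, n_features = 2000, 128              # assumed dataset dimensions
X = rng.normal(size=(n_samples, n_features))   # stand-in channel estimates
y = rng.uniform(0, 10, size=(n_samples, 2))    # stand-in (x, y) locations in meters

# Train a small network to associate channels with locations.
model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500)
model.fit(X, y)

# A device needing localization computes its channel estimate and
# queries the trained model for its position.
new_channel = rng.normal(size=(1, n_features))
estimated_xy = model.predict(new_channel)
```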

As an additional step, the effect of location noise is investigated in this report. Once the localization system described above shows promising results, the team demonstrates that it is robust to noise on its location labels. This suggests the system could be implemented in a continued-learning environment, in which some user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed without extensive data collection prior to release.
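A minimal sketch of such a label-noise robustness check, reusing the same placeholder data as above: corrupt the training labels with Gaussian noise (emulating users reporting imperfect self-estimated positions) and compare accuracy against a model trained on clean labels. The noise level is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data, as in the previous sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 128))           # stand-in channel estimates
y = rng.uniform(0, 10, size=(2000, 2))     # ground-truth (x, y) labels in meters

label_noise_std = 0.5                      # meters; assumed reporting-noise level
y_noisy = y + rng.normal(scale=label_noise_std, size=y.shape)

clean_model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500).fit(X, y)
noisy_model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500).fit(X, y_noisy)

# Evaluate both models against the clean ground truth.
clean_err = np.linalg.norm(clean_model.predict(X) - y, axis=1).mean()
noisy_err = np.linalg.norm(noisy_model.predict(X) - y, axis=1).mean()
print(f"mean error - clean labels: {clean_err:.2f} m, noisy labels: {noisy_err:.2f} m")
```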
Contributors: Chang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Predicting nonlinear dynamical systems has been a long-standing challenge in science. This field is currently witnessing a revolution with the advent of machine learning methods. Concurrently, the analysis of dynamics in various nonlinear complex systems continues to be crucial. Guided by these directions, I conduct the following studies.

Predicting critical transitions and transient states in nonlinear dynamics is a complex problem. I developed a solution called parameter-aware reservoir computing, which uses machine learning to track how system dynamics change with a driving parameter. I show that the transition point can be accurately predicted by a model trained only in a sustained functioning regime before the transition. Notably, the model can also predict whether the system will enter a transient state, the distribution of transient lifetimes, and their average before a final collapse, all of which are crucial for management.

I introduce a machine-learning-based digital twin, built on reservoir computing, for monitoring and predicting the evolution of externally driven nonlinear dynamical systems. Extensive tests on various models, encompassing optics, ecology, and climate, verify the approach's effectiveness. The digital twins can extrapolate unknown system dynamics, continually forecast and monitor under non-stationary external driving, infer hidden variables, adapt to different driving waveforms, and extrapolate bifurcation behaviors across varying system sizes.

Integrating engineered gene circuits into host cells poses a significant challenge in synthetic biology due to circuit-host interactions, such as growth feedback. I conducted systematic studies on hundreds of circuit structures exhibiting various functionalities and identified a comprehensive categorization of growth-induced failures, discerning three dynamical mechanisms behind these circuit failures. Moreover, my comprehensive computations reveal a scaling law between circuit robustness and the intensity of growth feedback, and identify a class of circuits with optimal robustness.

Chimera states, a symmetry-breaking phenomenon in oscillator networks, traditionally have transient lifetimes that grow exponentially with system size. However, my research on high-dimensional oscillators leads to the discovery of "short-lived" chimera states: their lifetime increases only logarithmically with system size and decreases logarithmically with the strength of random perturbations, indicating a unique fragility. To understand these states, I use a transverse stability analysis supported by simulations.
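For intuition only, a bare-bones sketch of the parameter-aware idea (not the dissertation's exact architecture) appends the driving parameter p to the input of an echo state network, so a readout trained at several pre-transition parameter values can be queried at new ones; all sizes and values below are placeholders.

```python
import numpy as np

# Minimal echo-state-network sketch of parameter-aware reservoir computing.
rng = np.random.default_rng(1)
n_res = 300
W_in = rng.uniform(-0.5, 0.5, (n_res, 2))        # input weights for (x_t, p)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # set spectral radius below 1

def reservoir_states(x_series, p):
    """Drive the reservoir with a scalar signal x at parameter value p."""
    r, states = np.zeros(n_res), []
    for x in x_series:
        r = np.tanh(W @ r + W_in @ np.array([x, p]))
        states.append(r.copy())
    return np.array(states)

# A linear readout would be ridge-regressed on states collected at training
# parameter values, then iterated in closed loop at a test value of p to
# probe for the predicted transition.
```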
Contributors: Kong, Lingwei (Author) / Lai, Ying-Cheng (Thesis advisor) / Tian, Xiaojun (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Readout Integrated Circuits (ROICs) are important components of infrared (IR) imaging systems. The performance of ROICs affects the quality of images obtained from IR imaging systems. Contemporary infrared imaging applications demand ROICs that can support large dynamic range, high frame rate, and high output data rate at low cost, size, and power. Some of these applications are military surveillance, remote sensing in space and earth science missions, and medical diagnosis. This work focuses on developing a ROIC unit cell prototype for the National Aeronautics and Space Administration's (NASA's) Jet Propulsion Laboratory (JPL) space applications. These space applications also demand high sensitivity, long integration times (large well capacity), wide operating temperature range, wide input current range, and immunity to radiation events such as single event latch-up (SEL).

This work proposes a digital ROIC (DROIC) unit cell prototype of 30 µm × 30 µm size, to be used mainly with NASA JPL's High Operating Temperature Barrier Infrared Detectors (HOT BIRDs). Current state-of-the-art DROICs achieve a dynamic range of 16 bits using advanced 65-90 nm CMOS processes, which adds significant cost overhead. The DROIC pixel proposed in this work uses a low-cost 180 nm CMOS process and supports a dynamic range of 20 bits operating at a low frame rate of 100 frames per second (fps), and a dynamic range of 12 bits operating at a high frame rate of 5 kfps. The total electron well capacity of this DROIC pixel is 1.27 billion electrons, enabling integration times as long as 10 ms to achieve better dynamic range. The DROIC unit cell uses an in-pixel 12-bit coarse ADC and an external 8-bit DAC-based fine ADC. The proposed DROIC uses layout techniques that make it immune to radiation up to 300 krad(Si) of total ionizing dose (TID) and to single event latch-up (SEL). It also has a wide input current range from 10 pA to 1 µA and supports detectors operating from the short-wave infrared (SWIR) to long-wave infrared (LWIR) regions.
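A quick back-of-the-envelope check of the quoted figures, assuming the coarse and fine conversions simply concatenate into one 20-bit word (the abstract does not spell out the combination scheme):

```python
# Values taken from the abstract above.
well_capacity = 1.27e9          # electrons (full well)
coarse_bits, fine_bits = 12, 8  # in-pixel coarse ADC + external fine ADC

total_bits = coarse_bits + fine_bits        # 20-bit dynamic range
levels = 2 ** total_bits                    # 1,048,576 quantization levels
electrons_per_lsb = well_capacity / levels  # ~1,211 electrons per LSB

print(f"{total_bits}-bit range -> {levels} levels, "
      f"{electrons_per_lsb:.0f} e-/LSB at full well")
```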
Contributors: Praveen, Subramanya Chilukuri (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Long, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The objective of this work is to design a novel method for imaging targets and scenes that are not directly visible to the observer. The unique scattering properties of terahertz (THz) waves can turn most building surfaces into mirrors, thus allowing someone to see around corners and various occlusions. In the visible regime, most surfaces are very rough compared to the wavelength; as a result, the spatial coherency of reflected signals is lost, and the geometry of the objects that the light bounced off cannot be retrieved. At lower frequencies (100 GHz – 10 THz), however, the roughness of most surfaces is small compared to the wavelength, so surfaces reflect signals without significantly disturbing the wavefront and behave approximately as mirrors. Additionally, this electrically small roughness is beneficial because the THz imaging system can use it to locate the pose (location and orientation) of the mirror surfaces, thus enabling the reconstruction of both line-of-sight (LoS) and non-line-of-sight (NLoS) objects.

Back-propagation imaging methods are modified to reconstruct the image of the 2-D scene (range, cross-range). The reflected signal from the target is collected using a synthetic aperture radar (SAR) setup in a lab environment. The imaging technique is verified using both full-wave 3-D numerical analysis models and lab experiments.
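For intuition, a bare-bones delay-and-sum back-projection over a 2-D (range, cross-range) grid might look like the sketch below. The aperture positions, frequency sweep, and measurement array are placeholders, and the thesis' modified method additionally traces NLoS bounces off mirror-like surfaces.

```python
import numpy as np

c = 3e8
aperture_x = np.linspace(-0.065, 0.065, 64)   # synthetic aperture positions (m)
freqs = np.linspace(100e9, 110e9, 201)        # stepped-frequency samples (Hz)
S = np.zeros((64, 201), dtype=complex)        # measured reflections (placeholder)

xs = np.linspace(-0.2, 0.2, 61)               # cross-range grid (m)
ys = np.linspace(0.1, 0.5, 61)                # range grid (m)
image = np.zeros((61, 61))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        # Round-trip distance from each aperture element to this pixel.
        d = 2.0 * np.hypot(aperture_x - x, y)
        # Coherently sum measurements after removing the propagation phase.
        phase = np.exp(1j * 2 * np.pi * freqs[None, :] * d[:, None] / c)
        image[j, i] = np.abs(np.sum(S * phase))
```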

This non-line-of-sight imaging approach could enable new applications in rescue and surveillance missions and highly accurate localization methods, and could improve channel estimation in mmWave and sub-mmWave wireless communication systems.
Contributors: Doddalla, Sai Kiran (Author) / Trichopoulos, George (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Zeinolabedinzadeh, Saeed (Committee member) / Aberle, James T., 1961- (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In this thesis, the synergy between millimeter-wave (mmWave) imaging and wireless communications is used to achieve high-accuracy simultaneous localization and mapping (SLAM) of mobile users in an uncharted environment. Such capability is enabled by taking advantage of the high-resolution images of both line-of-sight (LoS) and non-line-of-sight (NLoS) objects that mmWave imaging provides, and by utilizing angle of arrival (AoA) and time of arrival (ToA) estimators from communications. The motivations of this work are as follows: first, to enable accurate SLAM from a single viewpoint, i.e., using only one antenna array at the base station without any prior knowledge of the environment; second, to localize in NLoS-only scenarios, where the user signal may experience more than one reflection before it reaches the base station. As such, the proposed work makes no assumptions about which region the user is in, and uses mmWave imaging techniques that work in both the near- and far-field regions of the base station and account for the scattering properties of mmWave signals. Similarly, a near-field signal model is developed to correctly estimate the AoA regardless of the user location.

This SLAM approach is enabled by reconstructing the mmWave image of the environment as seen by the base station. Then, an uplink pilot signal from the user is used to estimate both the AoA and the ToA of the dominant channel paths. Finally, the AoA/ToA information is projected into the mmWave image to fully localize the user. Simulations using full-wave electromagnetic solvers are carried out to emulate environments in both the near and far field. For validation, an experiment was carried out in the laboratory, creating a simple two-dimensional scenario in the 220-300 GHz range using a synthesized 13-cm linear antenna array, formed with vector network analyzer extenders and a one-dimensional motorized linear stage that replicates the base station. With these measurements, the method successfully reconstructs the image of the environment and localizes the user with centimeter accuracy.
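A minimal sketch of the final projection step in the simplest LoS case: the user lies along the estimated angle of arrival, at the range implied by the time of arrival. The angle, delay, and base-station position below are hypothetical, and the thesis additionally traces NLoS paths through mirror surfaces recovered from the mmWave image.

```python
import numpy as np

c = 3e8
bs = np.array([0.0, 0.0])     # base-station position (assumed at the origin)
aoa = np.deg2rad(25.0)        # estimated angle of arrival (hypothetical)
toa = 2.1e-9                  # estimated one-way time of arrival (hypothetical)

path_length = c * toa                           # ~0.63 m of propagation
direction = np.array([np.cos(aoa), np.sin(aoa)])
user_xy = bs + path_length * direction          # LoS user position estimate
print(user_xy)
```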
Contributors: Aladsani, Mohammad A M S A (Author) / Trichopoulos, Georgios (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Balanis, Constantine (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The recent increase in users of cellular networks necessitates the use of new technologies to meet this demand. Massive multiple-input multiple-output (MIMO) communication systems have great potential for increasing the network capacity of emerging 5G+ cellular networks. However, leveraging the multiplexing and beamforming gains of these large-scale MIMO systems requires knowledge of the channel between each antenna and each user. Obtaining channel information on such a massive scale is not feasible with current technology due to the complexity of such large systems. Recent research shows that deep learning methods can lead to interesting gains for massive MIMO systems by mapping the channel information from the uplink frequency band to the channel information for the downlink frequency band, as well as between antennas at nearby locations. This thesis presents the research conducted to develop a deep-learning-based channel mapping proof-of-concept prototype.

Because deep neural networks need large training sets for accurate performance, this thesis also outlines the design and implementation of an autonomous channel measurement system to analyze the performance of the proposed deep-learning-based channel mapping concept. This system autonomously obtains channel magnitude measurements from eight antennas using a mobile robot that carries a transmitter and receives wireless commands from a central computer connected to the static receiver system. The developed autonomous channel measurement system is capable of obtaining accurate and repeatable channel magnitude measurements. It is shown that the proposed deep-learning-based channel mapping system accurately predicts channel information in scenarios with few multipath effects.
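As a hedged sketch of the channel-mapping concept described above (not the thesis' exact network), one can learn a function from uplink channel magnitudes to downlink channel magnitudes for the same user position; the dataset shapes and values below are placeholders, not measured data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_positions, n_antennas = 5000, 8                  # assumed dataset dimensions
uplink = rng.random((n_positions, n_antennas))     # stand-in |h_uplink| per antenna
downlink = rng.random((n_positions, n_antennas))   # stand-in |h_downlink| per antenna

# Learn the uplink-to-downlink channel mapping.
mapper = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=400)
mapper.fit(uplink, downlink)

# At deployment, only the uplink channel is measured; the downlink channel
# is predicted instead of being fed back by the user.
predicted_downlink = mapper.predict(uplink[:1])
```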
Contributors: Booth, Jayden Charles (Author) / Spanias, Andreas (Thesis advisor) / Alkhateeb, Ahmed (Thesis advisor) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2020