Matching Items (4)
Description
Readout integrated circuits (ROICs) are important components of infrared (IR) imaging systems, and their performance affects the quality of the images those systems produce. Contemporary infrared imaging applications demand ROICs that support a large dynamic range, high frame rate, and high output data rate at low cost, size, and power. Examples of such applications include military surveillance, remote sensing in space and Earth science missions, and medical diagnosis. This work focuses on developing a ROIC unit-cell prototype for space applications at the National Aeronautics and Space Administration's (NASA's) Jet Propulsion Laboratory (JPL). These space applications also demand high sensitivity, long integration times (large well capacity), a wide operating temperature range, a wide input current range, and immunity to radiation events such as single-event latch-up (SEL).

This work proposes a digital ROIC (DROIC) unit-cell prototype of 30 µm × 30 µm size, to be used mainly with NASA JPL's High Operating Temperature Barrier Infrared Detectors (HOT BIRDs). Current state-of-the-art DROICs achieve a dynamic range of 16 bits using advanced 65-90 nm CMOS processes, which adds considerable cost overhead. The DROIC pixel proposed in this work uses a low-cost 180 nm CMOS process and supports a dynamic range of 20 bits at a low frame rate of 100 frames per second (fps), or a dynamic range of 12 bits at a high frame rate of 5 kfps. The total electron well capacity of the pixel is 1.27 billion electrons, enabling integration times as long as 10 ms to achieve better dynamic range. The DROIC unit cell uses an in-pixel 12-bit coarse ADC and an external 8-bit DAC-based fine ADC. Layout techniques make the design immune to a total ionizing dose (TID) of up to 300 krad(Si) and to single-event latch-up (SEL). It also has a wide input current range, from 10 pA to 1 µA, and supports detectors operating from the short-wave infrared (SWIR) to the long-wave infrared (LWIR) regions.
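As a rough consistency check on these figures, the short Python sketch below works through the arithmetic, assuming (our interpretation, not a detail quoted from the abstract) that the 12-bit coarse and 8-bit fine converters combine into one 20-bit result in the usual extended-counting fashion:

    import math

    Q_E = 1.602e-19             # electron charge, coulombs
    WELL = 1.27e9               # total well capacity in electrons (from the abstract)
    COARSE_BITS, FINE_BITS = 12, 8
    T_INT = 10e-3               # 10 ms integration time (from the abstract)

    # Smallest resolvable charge packet if coarse + fine form one 20-bit word
    # (an assumption about the architecture):
    lsb = WELL / 2 ** (COARSE_BITS + FINE_BITS)
    print(f"LSB ~ {lsb:.0f} electrons")              # ~1211 electrons

    dr_bits = math.log2(WELL / lsb)
    print(f"dynamic range ~ {dr_bits:.0f} bits")     # 20 bits, as reported

    # Average photocurrent that just fills the well over one 10 ms integration:
    i_full = WELL * Q_E / T_INT
    print(f"full-well current ~ {i_full * 1e9:.1f} nA")   # ~20.3 nA

The full-well current of roughly 20 nA sits comfortably inside the stated 10 pA to 1 µA input range; at currents approaching 1 µA the well would fill in about 0.2 ms, consistent with operating at the higher 5 kfps frame rate with reduced resolution.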
Contributors: Praveen, Subramanya Chilukuri (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Long, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Millimeter-wave (mmWave) and sub-terahertz (sub-THz) systems aim to utilize the large bandwidth available at these frequencies, which has the potential to enable several future applications that require high data rates, such as autonomous vehicles and digital twins. These systems, however, face several challenges that must be addressed before their gains can be realized in practice. First, they need to deploy large antenna arrays and use narrow beams to guarantee sufficient receive power, and adjusting the narrow beams of these large arrays incurs massive beam-training overhead. Second, sensitivity to blockages is a key challenge for mmWave and THz networks: since these networks rely mainly on line-of-sight (LOS) links, sudden link blockages severely threaten their reliability. Further, when the LOS link is blocked, the network typically needs to hand the user off to another LOS base station, which may incur critical latency, especially if a search over a large codebook of narrow beams is needed.

A promising way to tackle both challenges lies in leveraging additional side information such as visual, LiDAR, radar, and position data. These sensors provide rich information about the wireless environment that can be utilized for fast beam and blockage prediction. This dissertation presents a machine-learning framework for sensing-aided beam and blockage prediction. In particular, for beam prediction, this work proposes to utilize visual and positional data to predict the optimal beam indices and, for the first time, investigates the sensing-aided beam prediction task in real-world vehicle-to-infrastructure and drone communication scenarios. Similarly, for blockage prediction, this dissertation proposes a multi-modal wireless communication solution that utilizes bimodal machine learning to perform proactive blockage prediction and user hand-off. Evaluations on both real-world and synthetic datasets illustrate the promising performance of the proposed solutions and highlight their potential for next-generation communication and sensing systems.
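To make the bimodal beam-prediction idea concrete, here is a minimal sketch in PyTorch; the layer sizes, the 64-beam codebook, and the choice of RGB image plus 2-D position as inputs are illustrative assumptions, not the dissertation's actual architecture:

    import torch
    import torch.nn as nn

    N_BEAMS = 64  # hypothetical codebook of narrow beams

    class BimodalBeamPredictor(nn.Module):
        """Fuse an image branch and a position branch to classify the best beam."""
        def __init__(self, n_beams=N_BEAMS):
            super().__init__()
            # Visual branch: a tiny CNN standing in for any image backbone.
            self.vision = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
            )
            # Position branch: e.g. normalized user coordinates.
            self.position = nn.Sequential(nn.Linear(2, 32), nn.ReLU())
            # Fusion head: logits over the beam codebook.
            self.head = nn.Linear(32 + 32, n_beams)

        def forward(self, image, pos):
            feats = torch.cat([self.vision(image), self.position(pos)], dim=1)
            return self.head(feats)

    model = BimodalBeamPredictor()
    logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 2))
    print(logits.argmax(dim=1))  # predicted beam index for each sample

Trained with a standard cross-entropy loss against the measured optimal beam, such a classifier replaces an exhaustive sweep over all the codebook beams with a single forward pass, which is where the beam-training overhead savings come from.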
Contributors: Charan, Gouranga (Author) / Alkhateeb, Ahmed (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Turaga, Pavan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
The objective of this work is to design a novel method for imaging targets and scenes that are not directly visible to the observer. The unique scattering properties of terahertz (THz) waves can turn most building surfaces into mirrors, thus allowing someone to see around corners and other occlusions. In the visible regime, most surfaces are very rough compared to the wavelength; as a result, the spatial coherence of reflected signals is lost and the geometry of the objects off which the light bounced cannot be retrieved. At lower frequencies (100 GHz to 10 THz), however, the roughness of most surfaces is comparable to or smaller than the wavelength, so surfaces reflect signals without significantly disturbing the wavefront and behave approximately as mirrors. Additionally, this electrically small roughness is beneficial because the THz imaging system can use it to locate the pose (location and orientation) of the mirror surfaces, thus enabling the reconstruction of both line-of-sight (LoS) and non-line-of-sight (NLoS) objects.
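For intuition, a standard rule of thumb (our addition, not a formula quoted from the thesis) is the Rayleigh smoothness criterion: a surface with RMS roughness h_rms reflects approximately specularly when

    h_{\mathrm{rms}} < \frac{\lambda}{8\cos\theta_i}

where λ is the wavelength and θ_i is the incidence angle. At 300 GHz (λ ≈ 1 mm) and normal incidence this tolerates roughness up to about 125 µm, which many common building surfaces satisfy; in the visible regime (λ ≈ 0.5 µm) almost none do.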

Back-propagation imaging methods are modified to reconstruct an image of the 2-D scene (range, cross-range). The reflected signal from the target is collected using a synthetic aperture radar (SAR) setup in a lab environment. The imaging technique is verified using both full-wave 3-D numerical models and lab experiments.
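To make the back-projection step concrete, here is a minimal sketch using a simplified stepped-frequency model and geometry of our own choosing (not the thesis' exact processing chain): it simulates echoes from a single point scatterer over a linear SAR aperture and coherently back-projects them onto a 2-D range/cross-range grid.

    import numpy as np

    c = 3e8
    freqs = np.linspace(220e9, 330e9, 256)       # hypothetical THz frequency sweep
    xa = np.linspace(-0.1, 0.1, 41)              # linear aperture positions (m)
    target = np.array([0.02, 0.5])               # point scatterer location (m)

    # Simulated monostatic echoes: round-trip phase delay at each frequency.
    d = np.hypot(xa - target[0], target[1])      # antenna-to-target ranges, (41,)
    echoes = np.exp(-1j * 2 * np.pi * freqs[None, :] * 2 * d[:, None] / c)

    # Back-projection: re-align each echo for every candidate pixel and sum
    # coherently; energy focuses where the range hypotheses match reality.
    xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 81), np.linspace(0.3, 0.7, 81))
    image = np.zeros(xs.shape, dtype=complex)
    for n in range(len(xa)):
        dpix = np.hypot(xs - xa[n], ys)          # pixel-to-antenna ranges
        image += np.einsum('f,xyf->xy', echoes[n],
                           np.exp(1j * 2 * np.pi * freqs * 2 * dpix[..., None] / c))

    iy, ix = np.unravel_index(np.abs(image).argmax(), image.shape)
    print(f"peak at x={xs[iy, ix]:.3f} m, y={ys[iy, ix]:.3f} m")  # near (0.02, 0.50)

In the NLoS setting, the same coherent summation would run over mirrored propagation paths once the pose of a reflecting surface has been located.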

This non-line-of-sight imaging approach could enable new applications in rescue and surveillance missions, provide highly accurate localization, and improve channel estimation in mmWave and sub-mmWave wireless communication systems.
Contributors: Doddalla, Sai Kiran (Author) / Trichopoulos, George (Thesis advisor) / Alkhateeb, Ahmed (Committee member) / Zeinolabedinzadeh, Saeed (Committee member) / Aberle, James T., 1961- (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The recent increase in users of cellular networks necessitates new technologies to meet this demand. Massive multiple-input multiple-output (MIMO) communication systems have great potential for increasing the network capacity of the emerging 5G+ cellular networks. However, leveraging the multiplexing and beamforming gains of these large-scale MIMO systems requires channel knowledge between each antenna and each user. Obtaining channel information on such a massive scale is not feasible with current technology due to the complexity of such large systems. Recent research shows that deep learning methods can yield interesting gains for massive MIMO systems by mapping the channel information from the uplink frequency band to the downlink frequency band, as well as between antennas at nearby locations. This thesis presents the research conducted to develop a deep-learning-based channel-mapping proof-of-concept prototype.
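The sketch below gives a deliberately simplified picture of channel mapping as supervised regression; the network shape, the input and output sizes, and the magnitude-only formulation are illustrative assumptions rather than the thesis' design:

    import torch
    import torch.nn as nn

    N_UL = 64   # uplink channel samples observed (assumed size)
    N_DL = 8    # downlink channel magnitudes to predict, one per antenna (assumed)

    # A small fully connected network mapping uplink observations to
    # downlink channel magnitudes at nearby antennas / another band.
    mapper = nn.Sequential(
        nn.Linear(N_UL, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, N_DL),
    )

    uplink = torch.rand(32, N_UL)     # a batch of measured uplink magnitudes
    downlink = torch.rand(32, N_DL)   # the matching downlink measurements (targets)

    loss = nn.functional.mse_loss(mapper(uplink), downlink)
    loss.backward()   # gradients for one supervised step; optimizer.step() follows
    print(float(loss))

The need for many such (uplink, downlink) training pairs is exactly what motivates the autonomous measurement system described next.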

Because deep neural networks need large training sets to perform accurately, this thesis outlines the design and implementation of an autonomous channel measurement system for evaluating the proposed deep-learning-based channel-mapping concept. The system autonomously obtains channel magnitude measurements from eight antennas using a mobile robot that carries a transmitter and receives wireless commands from a central computer connected to the static receiver system. The developed system produces accurate and repeatable channel magnitude measurements, and the proposed channel-mapping approach is shown to accurately predict channel information when few multipath effects are present.
Contributors: Booth, Jayden Charles (Author) / Spanias, Andreas (Thesis advisor) / Alkhateeb, Ahmed (Thesis advisor) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2020