Every communication system has a transmitter and a receiver, whether it is wired or wireless. The future of wireless communication will involve a massive number of transmitters and receivers, and the question arises: can computer vision help wireless communication? To satisfy the high data-rate requirement, a large number of antennas is required, and the devices that employ large antenna arrays often carry other sensors such as RGB cameras, depth cameras, or LiDAR sensors. These vision sensors can help overcome non-trivial wireless communication challenges, such as beam blockage prediction and handover prediction. This is further motivated by recent advances in deep learning and computer vision, which can extract high-level semantics from complex visual scenes, and by the increasing interest in leveraging machine/deep learning tools for wireless communication problems. [1]

The research focused on technologies such as 3D cameras, object detection, and object tracking using computer vision, together with compression techniques. The main objective of using computer vision was to make millimeter-wave communication more robust and to collect more data for the machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used in the research. An algorithm was developed that could use 3D cameras and machine learning models such as YOLOv3 to track moving objects using servo motors and low-powered computers such as the Raspberry Pi or the Jetson Nano. In other words, the receiver could track the highly mobile transmitter in one dimension using a 3D camera. In addition, the transmitter was mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to track the highly mobile drone. To build this machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step. GPS coordinates from the DJI M600 were also pulled and successfully plotted on Google Earth, which proved very useful during data collection with the drone and for future applications of drone position estimation using machine learning.

Initially, images were taken from the transmitter camera every second, and those frames were converted to a text file containing hexadecimal values. Each text file was then transmitted from the transmitter to the receiver, and on the receiver side a Python script converted the hexadecimal data back to JPG, giving the effect of real-time video transmission. Toward the end of the research, however, an industry-standard, real-time video stream was achieved using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Pi Camera. More details are discussed further in this research report.
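As an illustration of the frame-by-frame transfer described above, the following is a minimal sketch (not the thesis code) of converting a captured JPG frame to a text file of hexadecimal values on the transmitter side and reconstructing the JPG on the receiver side; file names are hypothetical and the radio transport between the two sides is omitted.

```python
# Minimal sketch of the hex-based frame transfer described above.
# File names ("frame.jpg", "frame.txt", "received.jpg") are hypothetical;
# the actual transport between transmitter and receiver (e.g. over the
# USRP link) is not shown here.

def frame_to_hex(jpg_path: str, txt_path: str) -> None:
    """Transmitter side: read a JPG frame and write it as hexadecimal text."""
    with open(jpg_path, "rb") as f:
        data = f.read()
    with open(txt_path, "w") as f:
        f.write(data.hex())

def hex_to_frame(txt_path: str, jpg_path: str) -> None:
    """Receiver side: read hexadecimal text and reconstruct the JPG frame."""
    with open(txt_path, "r") as f:
        hex_str = f.read().strip()
    with open(jpg_path, "wb") as f:
        f.write(bytes.fromhex(hex_str))

if __name__ == "__main__":
    frame_to_hex("frame.jpg", "frame.txt")     # run on the transmitter
    hex_to_frame("frame.txt", "received.jpg")  # run on the receiver
```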
This thesis investigates how to design a radar using a field-programmable gate array board to generate the radar signal and process the returned signal to determine the distance and concentration of objects (in this case, ash). The purpose of using such a board lies in its reconfigurability: a design can (relatively easily) be adjusted, recompiled, and reuploaded to the hardware with none of the cost or time overhead required of a standard weather radar.
The design operates on the principle of frequency-modulated continuous waves (FMCW), in which the output signal frequency changes as a function of time. The difference in transmit and echo frequencies determines the distance of an object, while the magnitude of a particular difference frequency corresponds to concentration. Thus, by viewing a spectrum of frequency differences, one is able to see both the concentration and distances of ash from the radar.
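For reference, the standard textbook FMCW relation between beat frequency and range (not taken from this thesis; the chirp parameters below are purely illustrative) can be sketched as follows:

```python
# Standard FMCW range relation: for a linear chirp of bandwidth B swept over
# time T, a target at range R produces a beat frequency f_b = 2*B*R / (c*T),
# so R = c*T*f_b / (2*B).  The parameter values here are illustrative only.

C = 3.0e8  # speed of light, m/s

def range_from_beat(f_beat_hz: float, bandwidth_hz: float, sweep_time_s: float) -> float:
    """Convert a measured beat frequency to target range for a linear FMCW chirp."""
    return C * sweep_time_s * f_beat_hz / (2.0 * bandwidth_hz)

# Example: a 200 MHz chirp swept in 1 ms with a 40 kHz beat frequency -> 30 m range.
print(range_from_beat(40e3, 200e6, 1e-3))
```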
The transmit signal data was created in MATLAB®, while the radar was designed with MATLAB® Simulink® using hardware IP blocks and implemented on the ROACH2 signal processing hardware, which utilizes a Xilinx® Virtex®-6 chip. The output is read from a computer linked to the hardware through Ethernet, using a Python™ script. Testing revealed minor flaws due to the usage of lower-grade components in the prototype. However, the functionality of the proposed radar design was proven, making this approach to radar a promising path for modern vulcanology.
Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. Because edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and alleviate network congestion. Yet edge computing has its own drawbacks, such as the limited computing resources of some edge devices and the unbalanced loads among these devices. To effectively explore the potential of edge computing to support IoT applications, efficient task management and load balancing in edge computing networks are necessary.
In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of tasks. The QoS requirements include the task completion deadline and the security requirement. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, taking tasks' priorities into consideration. This goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks accommodated in the edge computing network and improves the efficiency of resource utilization.
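As a rough illustration of the kind of joint resource and bandwidth admission problem described above (not the dissertation's actual formulation; the task attributes and the greedy policy below are assumptions made for this sketch):

```python
# Illustrative sketch: admit as many tasks as possible onto edge nodes,
# respecting each node's CPU capacity and link bandwidth, and favouring
# higher-priority tasks first.  The data model and the greedy policy are
# assumptions for this example, not the dissertation's algorithm.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu: float        # computing resource demand
    bandwidth: float  # network bandwidth demand
    priority: int     # larger value = more important

@dataclass
class EdgeNode:
    name: str
    cpu_free: float
    bw_free: float

def admit_tasks(tasks: list[Task], nodes: list[EdgeNode]) -> dict[str, str]:
    """Greedy admission: highest-priority tasks first, placed on the first node that fits."""
    placement: dict[str, str] = {}
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        for node in nodes:
            if node.cpu_free >= task.cpu and node.bw_free >= task.bandwidth:
                node.cpu_free -= task.cpu
                node.bw_free -= task.bandwidth
                placement[task.name] = node.name
                break
    return placement

if __name__ == "__main__":
    tasks = [Task("t1", 2, 10, 3), Task("t2", 4, 5, 1), Task("t3", 1, 8, 2)]
    nodes = [EdgeNode("edge-1", 5, 15), EdgeNode("edge-2", 3, 10)]
    print(admit_tasks(tasks, nodes))  # tasks that could not be placed are omitted
```

The dissertation's approach is an optimization over both resource allocation and bandwidth provisioning; the greedy heuristic above only illustrates the admission trade-off among deadlines, priorities, and limited edge capacity.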
I believe that the resource issue in IoT will persist in the near future due to technological, economic, and environmental factors. In this dissertation, I seek to address this issue by means of smart resource allocation. I propose mathematical models to formally describe various resource constraints and application scenarios in IoT. Based on these, I design smart resource allocation algorithms and protocols to maximize system performance in the face of resource restrictions. Different aspects are tackled, including networking, security, and the economics of the entire IoT ecosystem. For different problems, different algorithmic solutions are devised, including optimal algorithms, provable approximation algorithms, and distributed protocols. The solutions are validated with rigorous theoretical analysis and/or extensive simulation experiments.