Matching Items (24)
Description

Every communication system has a transmitter and a receiver, whether it is wired or wireless. The future of wireless communication will involve a massive number of transmitters and receivers, which raises the question: can computer vision help wireless communication? Satisfying high data-rate requirements demands large numbers of antennas, and the devices that employ large antenna arrays often carry other sensors such as RGB cameras, depth cameras, or LiDAR. These vision sensors can help overcome non-trivial wireless communication challenges such as beam blockage prediction and hand-over prediction. This direction is further motivated by recent advances in deep learning and computer vision, which can extract high-level semantics from complex visual scenes, and by the growing interest in leveraging machine/deep learning tools for wireless communication problems. [1]

The research focused on technologies such as 3D cameras, object detection and tracking with computer vision, and compression techniques. The main objective of using computer vision was to make millimeter-wave communication more robust and to collect more data for the machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used in the research. An algorithm was developed that uses 3D cameras and machine learning models such as YOLOv3 to track moving objects with servo motors and low-powered computers like the Raspberry Pi or the Jetson Nano. In other words, the receiver could track the highly mobile transmitter in one dimension using a 3D camera. In addition, the transmitter was mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to follow the highly mobile drone. To build this machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step. GPS coordinates from the DJI M600 were also pulled and successfully plotted on Google Earth, which proved very useful during data collection with the drone and for future applications of drone position estimation using machine learning.

Initially, images were captured from the transmitter camera every second, and each frame was converted to a text file of hexadecimal values. Each text file was then transmitted from the transmitter to the receiver, where a Python script converted the hexadecimal data back to JPEG, giving the effect of real-time video transmission. Toward the end of the research, however, industry-standard real-time video was streamed using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Pi Camera. More details are discussed as the report dives deeper into this research.
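As an illustration of the frame-to-hexadecimal pipeline described above, here is a minimal Python sketch under assumed file names; the actual transmission over the radio link and the exact scripts used in the research are not shown.

```python
import binascii
from pathlib import Path

def frame_to_hex(image_path: str, hex_path: str) -> None:
    """Encode a captured JPEG frame as a text file of hexadecimal values."""
    raw = Path(image_path).read_bytes()
    Path(hex_path).write_text(binascii.hexlify(raw).decode("ascii"))

def hex_to_frame(hex_path: str, image_path: str) -> None:
    """Reconstruct the JPEG frame from a received hexadecimal text file."""
    hex_text = Path(hex_path).read_text().strip()
    Path(image_path).write_bytes(binascii.unhexlify(hex_text))

# Hypothetical usage on either side of the link (file names are placeholders):
#   transmitter: frame_to_hex("frame_0001.jpg", "frame_0001.txt")
#   receiver:    hex_to_frame("frame_0001.txt", "received_0001.jpg")
```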

Contributors: Seth, Madhav (Author) / Alkhateeb, Ahmed (Thesis director) / Alrabeiah, Muhammad (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

We present in this paper a method to compare the scene classification accuracy of C-band synthetic aperture radar (SAR) and optical images using both classical and quantum computing algorithms. This REU study uses data from the Sentinel satellites. The dataset contains (i) synthetic aperture radar images collected by the Sentinel-1 satellite and (ii) optical images of the same area collected by the Sentinel-2 satellite. We use classical neural networks to classify four classes of images. We then use quantum convolutional neural networks and deep learning techniques to train, learn, and identify the classes at a higher classification accuracy. A hybrid quantum-classical model trained on the Sentinel-1/2 dataset is proposed, and its performance is compared against the classical model in terms of classification accuracy.
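For context, the sketch below shows a minimal classical four-class CNN baseline of the kind mentioned above; the input shape, layer sizes, and single-channel assumption are illustrative, not the study's actual architecture, and the quantum layers of the hybrid model are omitted.

```python
from tensorflow.keras import layers, models

def build_baseline_cnn(input_shape=(64, 64, 1), num_classes=4):
    """Small classical CNN for four-class scene classification (illustrative sizes)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage (array names are placeholders for the Sentinel patches):
#   model = build_baseline_cnn()
#   model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels))
```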

Contributors: Miller, Leslie (Author) / Spanias, Andreas (Thesis director) / Uehara, Glen (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2023-05
Description

To reduce the cost of silicon solar cells and improve their efficiency, it is crucial to identify and understand the defects limiting the electrical performance of silicon wafers. Bulk defects in semiconductors produce discrete energy levels within the bandgap and may act as recombination centers. This project investigates the viability of using machine learning to characterize bulk defects in silicon by using a Random Forest Regressor to extract the defect energy level and capture cross-section ratio for a simulated molybdenum defect and an experimental silicon vacancy defect. Additionally, a dual convolutional neural network is used to classify the defect energy level as lying in the upper or lower half of the bandgap.
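A minimal sketch of the random-forest regression step described above, assuming the model is trained on simulated lifetime curves; the feature set, targets, and random data here are placeholders rather than the project's actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder data standing in for simulated defect responses:
# X: e.g., lifetime sampled at 50 injection levels per defect
# y: e.g., [defect energy level, capture cross-section ratio]
rng = np.random.default_rng(0)
X = rng.random((1000, 50))
y = rng.random((1000, 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out defects:", model.score(X_test, y_test))
```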

Contributors: Woo, Vanessa (Author) / Bertoni, Mariana (Thesis director) / Rolston, Nicholas (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2023-05
Description

Machine learning has been increasingly integrated into new areas, particularly vision processing and language learning models. Implementing these processes in new products has demanded increasingly expensive memory usage and computation. Microcontrollers can lower this growing cost; however, implementing such a system on a microcontroller is difficult, and the design must be pared down carefully to balance optimization of the system against the limited resources available. A proof of concept that these algorithms can run on such a system is attempted in order to identify the points of contention in building it on limited hardware, as well as the steps required to enable machine learning on a constrained system such as the general-purpose MSP430 from Texas Instruments.
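One common step when fitting a trained network onto a constrained microcontroller like the MSP430 is quantizing its weights to fixed-point integers. The sketch below shows symmetric 8-bit quantization as an illustration; it is not necessarily the approach taken in this thesis.

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float weights to int8 with a single per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for verification on the host."""
    return q.astype(np.float32) * scale

# Example: a small dense layer's weight matrix (random placeholder values).
w = np.random.randn(16, 8).astype(np.float32)
q, s = quantize_symmetric_int8(w)
print("max abs quantization error:", np.max(np.abs(w - dequantize(q, s))))
```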
Contributors: Malcolm, Ian (Author) / Allee, David (Thesis director) / Spanias, Andreas (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2024-05
Description

Classification in machine learning is crucial to solving many of the problems the world faces today. It is therefore key to understand one's problem and develop an efficient model to solve it. One technique for improving model selection, and thus easing problem solving, is estimation of the Bayes Error Rate. This paper develops and analyzes two methods for estimating the Bayes Error Rate of a given dataset and evaluates their performance. The first method takes a "global" approach, looking at the data as a whole, while the second is more "local," partitioning the data at the outset and then building up to a Bayes Error estimate for the whole. One method is found to provide an accurate estimate of the true Bayes Error Rate when the dataset is high-dimensional, while the other provides an accurate estimate at large sample sizes. The second conclusion, in particular, can have significant ramifications for "big data" problems, as one would be able to characterize the distribution with an accurate estimate of the Bayes Error Rate using this method.
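The thesis's global and local estimators are not reproduced here; as a point of reference, the sketch below shows a classic way to bound the Bayes Error Rate from the 1-nearest-neighbor error (Cover & Hart, 1967), using synthetic two-class Gaussian data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# A classical reference bound, not the global/local estimators developed in the paper.
# Synthetic two-class Gaussian data stands in for a real dataset.
rng = np.random.default_rng(0)
n = 2000
X = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(1.0, 1.0, (n, 5))])
y = np.repeat([0, 1], n)

# Cross-validated 1-NN error approximates the asymptotic nearest-neighbor risk R_NN.
nn_err = 1.0 - cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=5).mean()

# Cover & Hart: BER <= R_NN <= 2 * BER * (1 - BER), so R_NN is an upper bound directly
# and gives a lower bound after inverting the quadratic (valid when R_NN <= 0.5).
ber_lower = 0.5 * (1.0 - np.sqrt(max(0.0, 1.0 - 2.0 * nn_err)))
print(f"1-NN error: {nn_err:.3f}  ->  {ber_lower:.3f} <= Bayes error <= {nn_err:.3f}")
```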

Contributors: Lattus, Robert (Author) / Dasarathy, Gautam (Thesis director) / Berisha, Visar (Committee member) / Turaga, Pavan (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2021-12
Description

Machine learning is a powerful tool for processing and understanding the vast amounts of data produced by sensors every day. It has found use in a wide variety of fields, from making medical predictions through correlations invisible to the human eye to classifying images in computer vision applications. A wide range of machine learning algorithms have been developed to attempt to solve these problems, each with different trade-offs in accuracy, throughput, and energy efficiency. However, even after they are trained, these algorithms require substantial computation to make a prediction. General-purpose CPUs are not well optimized for this task, so other hardware solutions have developed over time, including GPUs, FPGAs, and ASICs.

This project considers FPGA implementations of feedforward inference for multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). While FPGAs provide significant performance improvements, they come at a substantial financial cost, so we explore options for implementing these algorithms on a smaller budget. We successfully implement a multilayer perceptron that identifies handwritten digits from the MNIST dataset on a student-level DE10-Lite FPGA with a test accuracy of 91.99%. We also apply our trained network to external image data loaded through a webcam and a Raspberry Pi, but we observe lower test accuracy on these images. Finally, we consider the requirements necessary to implement a more elaborate convolutional neural network on the same FPGA. The study deems the CNN implementation feasible with respect to memory requirements and basic architecture, and we suggest that a CNN implementation on the same FPGA is worthy of further exploration.
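As a rough illustration of the software side of such a flow, the sketch below trains a small MNIST multilayer perceptron in Keras; the hidden-layer size and the final quantization step are assumptions, not the exact network or toolchain described in the thesis.

```python
import tensorflow as tf

# Illustrative MNIST MLP; the hidden size and the fixed-point export step for the
# DE10-Lite are assumptions, not the exact design described in the thesis.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])

# The trained weights would then be quantized to fixed point and written to
# memory initialization files consumed by the FPGA design.
```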
Contributors: Lythgoe, Zachary James (Author) / Allee, David (Thesis director) / Hartin, Olin (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description

In the world we live in today, nothing seems impossible. Thanks to advances in technology, people around the globe carry computers that fit in their pockets. These computers can do marvelous things, but they run on batteries, and those batteries need to be charged. Until recently there was only one option available: wired chargers. Advances in technology, however, have produced a way to transfer power via magnetic fields. The concept dates back to the days of Nikola Tesla, but only recently has it been applied to charging the computers in our pockets. Unfortunately, current wireless chargers come with a drawback: they are less efficient than wired chargers. This is the question our group set out to answer: is there any way to improve the efficiency of wireless chargers so that they are as efficient as, or even more efficient than, wired chargers? This paper explores how to improve the efficiency of wireless chargers. Through research, simulation, and testing, the group identified areas where efficiency can be improved and makes recommendations for changing the wireless chargers on the market today. The paper also explores future applications of wireless charging that could not only make life easier but, in some cases, save lives. These applications could affect hospitality, the medical field, and the supply chain and logistics of America.
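One standard relation that governs how much room there is to improve a wireless charger is the maximum efficiency of a resonant inductive link, which depends on the coil coupling coefficient k and the coil quality factors Q1 and Q2. The sketch below evaluates this textbook formula for illustrative values; it is not the group's specific analysis or measured data.

```python
import math

def max_link_efficiency(k: float, q1: float, q2: float) -> float:
    """Textbook maximum efficiency of a resonant inductive link:
    eta_max = U^2 / (1 + sqrt(1 + U^2))^2, with figure of merit U = k * sqrt(Q1 * Q2)."""
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

# Illustrative values only: loosely coupled coils vs. tighter coil alignment, Q1 = Q2 = 100.
for k in (0.1, 0.3, 0.6):
    print(f"k = {k:.1f}: eta_max = {max_link_efficiency(k, 100, 100):.2%}")
```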
Contributors: McCulley, Matthew Alan (Co-author) / Cole, Kennedy (Co-author) / Chickamenahalli, Shamala (Thesis director) / Chakrabarti, Chaitali (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Most machine learning algorithms, and specifically neural networks, rely on vector-matrix multiplication (VMM) to process information, but these calculations are CPU-intensive and can have long run times. This issue is fundamentally outlined by the von Neumann bottleneck. Because of the undesirable expense of performing VMM in software, the exploration of new ways to perform the same calculations in hardware has grown more popular. When performed with hardware specialized for these calculations, VMM becomes far more power-efficient and less time-consuming. This project expands upon those principles and seeks to validate the use of RRAM in this hardware. The tunable conductance of RRAM makes these devices a strong contender for hardware-driven VMM in neural network computing. The conductance of these devices is set by the pulse width of a voltage signal sent across the device at each node. This pulse is produced on-chip and can be modified by user inputs. The design of this pulse-producing circuit, as well as the simulated and physical functionality of the design, is discussed in this Honors Thesis. Simulation and physical testing of the pulse-producing design on the ASIC have verified correct operation of the design. This operation is imperative to the future ability of the ASIC to perform accurate VMM.
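For intuition, the sketch below models an idealized RRAM crossbar performing VMM: each weight is mapped to a device conductance and the output is read as column currents via Ohm's law. The conductance range and linear weight mapping are illustrative assumptions, not the chip's actual programming scheme.

```python
import numpy as np

# Idealized RRAM crossbar model: each weight is stored as a device conductance,
# and the analog VMM result is the vector of column currents I = G^T · V.
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(4, 3))      # logical weight matrix (placeholder)
g_min, g_max = 1e-6, 1e-4                          # assumed device conductance range (S)
conductance = g_min + (weights - weights.min()) / (weights.max() - weights.min()) * (g_max - g_min)

v_in = np.array([0.2, 0.5, 0.1, 0.3])              # input voltages applied to the rows (V)
i_out = conductance.T @ v_in                       # column currents (A): the analog VMM output
print(i_out)
```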
Contributors: Pearson, Katherine (Author) / Barnaby, Hugh (Thesis director) / Wilson, Donald (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2022-05
Description

In wireless communication systems, the process of data transmission includes the estimation of channels. Applying machine learning to this process can reduce the time it takes to estimate channels, thereby increasing the system's transmission throughput and improving the performance of applications such as device-to-device communications and 5G systems. However, applying machine learning algorithms to multi-base-station systems is not well understood in the literature, and that gap is the focus of this thesis.
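As a baseline for what a learned estimator would replace or refine, the sketch below performs classical least-squares channel estimation from known pilot symbols for a single flat-fading link; the pilot count, noise level, and single-antenna setup are illustrative assumptions.

```python
import numpy as np

# Toy pilot-based channel estimation for a single flat-fading link; a learning-based
# estimator of the kind studied in the thesis would replace or refine this step.
rng = np.random.default_rng(0)
n_pilots = 16
h_true = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)        # channel coefficient
pilots = np.exp(1j * 2 * np.pi * rng.random(n_pilots))          # known unit-modulus pilots
noise = 0.1 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
y = h_true * pilots + noise                                     # received pilot observations

h_ls = np.vdot(pilots, y) / np.vdot(pilots, pilots)             # least-squares estimate
print("true h:", np.round(h_true, 3), " LS estimate:", np.round(h_ls, 3))
```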

Contributors: Cosio, Karla (Author) / Ewaisha, Ahmed (Thesis director) / Spanias, Andreas (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05
Description

The stability of cheerleading stunts is crucial to athlete safety and team success. Consistency in stunt technique contributes to successful stunting skills, giving a team the tools to win competitions. Improved stunt technique reduces both the likelihood of falls and the severity of those falls. Proper technique also prevents injuries caused by improper positions that place pressure on the lower back and shoulders. Bases must maintain strong technique with proper lines of support in order to maximize stunt stability. Through exploration of the EmbeddedML system, involving a neural network implemented on a SensorTile, cheerleading motions can be successfully classified. Using this system, it is possible to identify motions that result in weak or injurious positions almost instantly. By alerting athletes to these incorrect motions, improper stunt technique can be corrected quickly and without the involvement of a coach. This automated technique correction would be incredibly beneficial to the sport of competitive cheerleading.
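To make the classification step concrete, the sketch below trains a small 1-D convolutional classifier on windows of three-axis accelerometer data; the window length, number of motion classes, architecture, and random placeholder data are assumptions rather than the SensorTile model described in the thesis.

```python
import numpy as np
import tensorflow as tf

# Illustrative motion classifier on accelerometer windows; the window length,
# number of classes, and architecture are assumptions, not the deployed model.
window_len, n_axes, n_classes = 128, 3, 4
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_axes)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for labeled accelerometer recordings.
x = np.random.randn(256, window_len, n_axes).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(x, y, epochs=2, verbose=0)
```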

Contributors: Ospina, Lauren (Author) / Wang, Chao (Thesis director) / Chakrabarti, Chaitali (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05