Matching Items (3)
Description
With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality need to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced in order to fit on edge devices, but they may lose their capability and may not generalize and perform as well as large models. Recent works have used knowledge transfer techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is still lacking.

The purpose of this work is to provide an extensive study of the performance (in terms of both accuracy and convergence speed) of knowledge transfer, considering different student-teacher architectures, different datasets, and different techniques for transferring knowledge from teacher to student.

A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. Other architectures and transfer techniques do not fare as well, however, and some even have a negative impact on performance. For example, a smaller and shorter network trained with knowledge transfer on Caltech 101 achieved a significant accuracy improvement of 7.36% and converged 16 times faster than the same network trained without knowledge transfer. On the other hand, a smaller network that is thinner than the teacher performed worse, with an accuracy drop of 9.48% on Caltech 101, even with knowledge transfer.
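
As a concrete illustration of the kind of knowledge transfer described above, here is a minimal PyTorch sketch that combines soft-label distillation on the teacher's last layer with a hint loss on an intermediate layer. The function name, the layer pairing, and the hyperparameters T, alpha, and beta are illustrative assumptions, not values taken from the thesis.

```python
# A minimal sketch of teacher-student knowledge transfer, assuming a PyTorch
# setup. The loss combines soft-label distillation on the last layer with a
# hint loss on an intermediate layer; hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_feat, teacher_feat,
                      T=4.0, alpha=0.7, beta=0.1):
    # Hard-label cross-entropy on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the teacher, softened by temperature T.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Hint loss: match an intermediate student feature to the teacher's
    # (assumes both features have been projected to the same shape).
    hint = F.mse_loss(student_feat, teacher_feat)
    return (1 - alpha) * ce + alpha * kd + beta * hint
```

In this sketch the teacher is run in evaluation mode with gradients disabled, and only the student's parameters are updated by the combined loss.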
Contributors: Sistla, Ragini (Author) / Zhao, Ming (Thesis advisor, Committee member) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
As threats emerge and change, the life of a police officer continues to intensify. To better support police training curricula and police cadets through this critical career juncture, this thesis proposes a state-of-the-art framework for stress detection using real-world data and deep neural networks. As an integral step of a larger study, this thesis investigates data processing techniques to handle the ambiguity of data collected in naturalistic contexts and leverages data structuring approaches to train deep neural networks. The analysis used data collected from 37 police training cadets in five different training cohorts at the Phoenix Police Regional Training Academy. The data was collected at different intervals during the cadets’ rigorous six-month training course; in total, data were collected over 11 months across all cohorts combined. All cadets were equipped with a Fitbit wearable device running a custom-built application that collected biometric data, including heart rate and self-reported stress levels. Throughout the data collection period, the cadets were asked to wear the Fitbit device and respond to stress level prompts to capture real-time responses. To manage this naturalistic data, this thesis leveraged heart rate filtering algorithms, including Hampel, median, Savitzky-Golay, and Wiener filters, to remove potentially noisy data. After processing and noise removal, the heart rate data and corresponding stress level labels were organized into two different dataset sizes. The data was then fed into a Deep ECGNet (created by Prajod et al.), a simple feed-forward network (created by Sim et al.), and a multilayer perceptron (MLP) network for binary classification. Experimental results show that the feed-forward network achieves the highest accuracy (90.66%) for data from a single cohort, while the MLP model performs best on data across cohorts, achieving an accuracy of 85.92%. These findings suggest that stress detection is feasible on a varied set of real-world data using deep neural networks.
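
For reference, a minimal sketch of the denoising step described above, assuming NumPy and SciPy. SciPy provides median, Savitzky-Golay, and Wiener filters directly, while the Hampel filter is written out by hand; the window sizes, outlier threshold, and sample series are illustrative, not the study's values.

```python
# A sketch of heart-rate denoising with the four filter families named above.
import numpy as np
from scipy.signal import medfilt, savgol_filter, wiener

def hampel(x, window=5, n_sigmas=3.0):
    # Replace points that deviate from the local median by more than
    # n_sigmas * (scaled MAD) with that local median.
    x = x.astype(float).copy()
    k = 1.4826  # scale factor relating MAD to standard deviation
    for i in range(window, len(x) - window):
        seg = x[i - window:i + window + 1]
        med = np.median(seg)
        mad = k * np.median(np.abs(seg - med))
        if np.abs(x[i] - med) > n_sigmas * mad:
            x[i] = med
    return x

# Apply each candidate filter to a raw heart-rate series (beats per minute).
hr = np.array([72, 74, 73, 180, 75, 74, 76, 40, 75, 73, 74, 72, 75], float)
cleaned = {
    "hampel": hampel(hr),
    "median": medfilt(hr, kernel_size=5),
    "savgol": savgol_filter(hr, window_length=5, polyorder=2),
    "wiener": wiener(hr, mysize=5),
}
```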
Contributors: Paranjpe, Tara Anand (Author) / Zhao, Ming (Thesis advisor) / Roberts, Nicole (Thesis advisor) / Duran, Nicholas (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Quantum computers promise a future in which computationally difficult problems can be solved exponentially faster than on the classical computers in use today. While there is tremendous research and development in the creation of quantum computers, a fundamental challenge remains: the quantum world is extremely fragile, and error correction methods have been developed since 1995 to tackle this problem. Since the birth of the idea that these powerful machines could process information beyond the limits of current computers, several mathematical error-correcting codes have been proposed that could potentially provide the stability required for fault-tolerant computation in the fragile quantum world. While there has been a multitude of possible solutions, no single error-correcting code has proven to be the key to solving the problem. Almost every solution presented carries a limiting factor or an issue that prevents it from becoming the breakthrough that is desperately needed.

This paper gives an introduction to the quantum world and explains why error-correcting topologies are needed. Finally, it introduces one recent topology that could be added to the list of possible solutions to this central problem. Rather than focusing on the mathematical frameworks, the paper presents the main concepts so that most readers, even those outside the field of computer science, can understand the problem and how this topology attempts to solve it.
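
To make the idea of quantum error correction concrete, here is a minimal NumPy sketch of the classic three-qubit bit-flip code, which detects and corrects a single flipped qubit by measuring parities. This illustrates syndrome-based correction in general; it is not the specific topology the paper introduces.

```python
# Classical simulation of the three-qubit bit-flip code with NumPy.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def on_qubit(op, i):
    # Lift a single-qubit operator to act on qubit i of a 3-qubit register.
    ops = [I, I, I]
    ops[i] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# Encode a logical qubit a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

# Introduce a single bit-flip error on qubit 1.
state = on_qubit(X, 1) @ state

# Syndrome: parities of (qubit0, qubit1) and (qubit1, qubit2), read off from
# any basis state in the support (both parities are definite for this state).
idx = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
bits = [(idx >> (2 - q)) & 1 for q in range(3)]
s01, s12 = bits[0] ^ bits[1], bits[1] ^ bits[2]

# Decode: the syndrome pinpoints which qubit (if any) was flipped.
flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
if flipped is not None:
    state = on_qubit(X, flipped) @ state  # apply the correction

assert np.allclose(state[[0b000, 0b111]], [a, b])  # logical state recovered
```

The same pattern, measuring parities without disturbing the encoded information and then applying a targeted correction, underlies the more sophisticated topological codes the paper surveys.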
Contributors: Ahmed, Umer (Author) / Colbourn, Charles (Thesis director) / Zhao, Ming (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05