COVID-19 Hotspot Estimation Using Consensus Methods, SEIR Models and ML Algorithms

Description

The primary objective of this thesis is to identify locations or regions where COVID-19 transmission is more prevalent, termed “hotspots,” assess the likelihood of contracting the virus after visiting crowded areas or potential hotspots, and make predictions on confirmed COVID-19 cases and recoveries. A consensus algorithm is used to identify such hotspots; the SEIR epidemiological model tracks COVID-19 cases, allowing for a better understanding of the disease dynamics and enabling informed decision-making in public health strategies. Consensus-based distributed methodologies have been developed to estimate the magnitude, density, and locations of COVID-19 hotspots to provide well-informed alerts based on continuous data risk assessments. Assuming each agent carries a mobile device, hotspot estimation uses information gathered from user devices over Bluetooth and WiFi. In a consensus-based distributed clustering algorithm, users are divided into smaller groups, and the number of users in each group is then estimated. This process allows for the determination of the population of an outdoor site and the distances between individuals. The proposed algorithm demonstrates versatility by being applicable not only in outdoor environments but also in indoor settings. To adapt to indoor environments, considerations are made for signal attenuation caused by walls and other barriers, and a wall detection algorithm is employed for this purpose. The clustering mechanism is designed to dynamically choose the appropriate clustering technique based on data-dependent patterns, ensuring that every node undergoes proper clustering. After networks have been established and clustered, the output of the consensus algorithm is fed as one of many inputs into the SEIR model. SEIR, representing Susceptible, Exposed, Infectious, and Removed, forms the basis of a model designed to assess the probability of infection at a Point of Interest (POI). The SEIR model uses calculated parameters such as β (contact), σ (latency), γ (recovery), and ω (loss of immunity), along with current COVID-19 case data, to predict infection spread in a specific area. The SEIR model is implemented with diverse methodologies for transitioning populations between compartments. Hence, the model identifies optimal parameter values under different conditions and scenarios and forecasts the number of infected and recovered cases for the upcoming days.
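
As a rough illustration of the compartmental dynamics described above, the following is a minimal sketch of an SEIR model with loss of immunity, assuming the standard ODE formulation with the four named parameters. The parameter values and transition methodology here are illustrative assumptions, not the thesis's fitted results.

```python
import numpy as np
from scipy.integrate import odeint

def seir_waning(y, t, beta, sigma, gamma, omega, N):
    """SEIR with loss of immunity (omega feeds Removed back to Susceptible)."""
    S, E, I, R = y
    dS = -beta * S * I / N + omega * R   # new exposures, waning immunity
    dE = beta * S * I / N - sigma * E    # latency: Exposed -> Infectious
    dI = sigma * E - gamma * I           # recovery: Infectious -> Removed
    dR = gamma * I - omega * R
    return dS, dE, dI, dR

# Illustrative values only, not the thesis's calibrated parameters.
N = 10_000
y0 = (N - 10, 0, 10, 0)
t = np.linspace(0, 180, 181)  # days
sol = odeint(seir_waning, y0, t, args=(0.3, 1 / 5.2, 1 / 10, 1 / 90, N))
S, E, I, R = sol.T
print(f"Peak infections: {I.max():.0f} on day {I.argmax()}")
```
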
Date Created
2024
Agent

Implementation of Machine Learning on Low Power Microcontrollers

Description

Machine learning has been increasingly integrated into several new areas, namely those related to vision processing and language models. Implementing these processes in new products has demanded increasingly expensive memory usage and computational requirements. Microcontrollers can lower this rising cost. However, implementing such a system on a microcontroller is difficult, and the design must be pared down appropriately to strike the right balance between optimizing the system and allocating its limited resources. This work attempts a proof of concept that these algorithms can be implemented on such a system, in order to identify the points of contention in constructing it on such limited hardware, as well as the steps required to enable machine learning on a constrained platform such as the general-purpose MSP430 from Texas Instruments.
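
As one illustration of the kind of trade-off involved, the sketch below shows post-training 8-bit weight quantization, a common step for fitting a model into the few kilobytes of flash and RAM on a part like the MSP430. It is a generic technique assumed for illustration, not the actual porting procedure used in this work.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# A float32 layer of 1,000 weights occupies 4 KB; int8 brings it to 1 KB,
# which matters on an MSP430 with only a few KB of RAM.
w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
print(f"max reconstruction error: {np.abs(w - q * scale).max():.4f}")
print(f"size: {w.nbytes} B -> {q.nbytes} B")
```
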
Date Created
2024-05
Agent

Learning Robust and Repeatable Speech Features for Clinical Applications

Description

Speech analysis for clinical applications has emerged as a burgeoning field, providing valuable insights into an individual's physical and physiological state. Researchers have explored speech features for clinical applications, such as diagnosing, predicting, and monitoring various pathologies. Before presenting the new deep learning frameworks, this thesis introduces a study on conventional acoustic feature changes in subjects with post-traumatic headache (PTH) attributed to mild traumatic brain injury (mTBI). This work demonstrates the effectiveness of using speech signals to assess the pathological status of individuals. At the same time, it highlights some of the limitations of conventional acoustic and linguistic features, such as low repeatability and generalizability. Two critical characteristics of speech features are (1) good robustness, as speech features need to generalize across different corpora, and (2) high repeatability, as speech features need to be invariant to all confounding factors except the pathological state of targets. This thesis presents two research thrusts in the context of speech signals in clinical applications that focus on improving the robustness and repeatability of speech features, respectively. The first thrust introduces a deep learning framework to generate acoustic feature embeddings sensitive to vocal quality and robust across different corpora. A contrastive loss combined with a classification loss is used to train the model jointly, and data-warping techniques are employed to improve the robustness of embeddings. Empirical results demonstrate that the proposed method achieves high in-corpus and cross-corpus classification accuracy and generates good embeddings sensitive to voice quality and robust across different corpora. The second thrust introduces using the intra-class correlation coefficient (ICC) to evaluate the repeatability of embeddings. A novel regularizer, the ICC regularizer, is proposed to regularize deep neural networks to produce embeddings with higher repeatability. This ICC regularizer is implemented and applied to three speech applications: a clinical application, speaker verification, and voice style conversion. The experimental results reveal that the ICC regularizer improves the repeatability of learned embeddings compared to the contrastive loss, leading to enhanced performance in downstream tasks.
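
For reference, the intra-class correlation coefficient used here as a repeatability measure can be computed as in the sketch below, assuming the one-way random-effects form ICC(1,1) applied to repeated measurements of a scalar feature per subject; the thesis's exact ICC regularizer formulation is not reproduced.

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1).
    x: (n_subjects, k_repeats) array of a scalar embedding feature."""
    n, k = x.shape
    grand = x.mean()
    group_means = x.mean(axis=1)
    ms_between = k * ((group_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - group_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Repeated measurements per subject: high ICC means the feature is
# stable within a subject relative to variation across subjects.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 2.0, size=(20, 1))   # between-subject spread
noise = rng.normal(0, 0.5, size=(20, 5))            # within-subject noise
print(f"ICC = {icc_1_1(subject_effect + noise):.3f}")  # close to 1
```
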
Date Created
2023
Agent

Software-Defined Imaging for Embedded Computer Vision: Adaptive Subsampling and Event-based Visual Navigation

Description

Huge advancements have been made over the years in terms of modern image-sensing hardware and visual computing algorithms (e.g., computer vision, image processing, computational photography). However, to this day, there still exists a gap between hardware and software design in an imaging system, which silos one research domain from the other. Bridging this gap is the key to unlocking new visual computing capabilities for end applications in commercial photography, industrial inspection, and robotics. This thesis explores avenues where hardware-software co-design of image sensors can be leveraged to replace conventional hardware components in an imaging system with software for enhanced reconfigurability. As a result, the user can program the image sensor in a way best suited to the end application. This is referred to as software-defined imaging (SDI), where image sensor behavior can be altered by the system software depending on the user's needs. The scope of this thesis covers the development and deployment of SDI algorithms for low-power computer vision. Strategies for sparse spatial sampling have been developed in this thesis for power optimization of the vision sensor. This dissertation shows how a hardware-compatible state-of-the-art object tracker can be coupled with a Kalman filter for energy gains at the sensor level. Extensive experiments reveal how adaptive spatial sampling of image frames with this hardware-friendly framework offers attractive energy-accuracy tradeoffs. Another thrust of this thesis is to demonstrate the benefits of reinforcement learning in this research avenue. A major finding reported in this dissertation shows how neural-network-based reinforcement learning can be exploited for the adaptive subsampling framework to achieve improved sampling performance, thereby optimizing the energy efficiency of the image sensor. The last thrust of this thesis is to leverage emerging event-based SDI technology for building a low-power navigation system. A homography estimation pipeline is proposed in this thesis which couples the right data representation with a differential scale-invariant feature transform (SIFT) module to extract rich visual cues from event streams. Positional encoding is leveraged with a multilayer perceptron (MLP) network to obtain robust homography estimation from event data.
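
A toy sketch of the sensor-level idea: a constant-velocity Kalman filter predicts where the tracked object will be, and only that region of interest (ROI) is read out at full resolution, so the rest of the frame can be subsampled or skipped. The motion model, noise values, and detections below are illustrative assumptions, not the tracker used in the dissertation.

```python
import numpy as np

# Constant-velocity Kalman filter over (x, y, vx, vy); the predicted
# position selects the ROI to read out, so pixels outside it stay off.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.eye(2, 4)                     # we only observe position
Q, R = np.eye(4) * 1e-2, np.eye(2) * 1.0

x, P = np.zeros(4), np.eye(4) * 10.0
for z in [np.array([5.0, 5.0]), np.array([6.1, 5.9]), np.array([7.0, 7.2])]:
    # predict: this is where the sensor ROI is placed for the next frame
    x, P = F @ x, F @ P @ F.T + Q
    roi_center = H @ x
    # update with the tracker's detection found inside the ROI
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    print(f"ROI center: ({roi_center[0]:.1f}, {roi_center[1]:.1f})")
```
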
Date Created
2023
Agent

Development of Signal Analysis Synthesis Methods: Quantum Fourier Transforms and Quantum Linear Prediction Algorithms

Description

Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing in signal analysis synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks and presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis synthesis using quantum algorithms. The two approaches are explained as follows. The QFT demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis synthesis and signal compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise ratio (SNR) results. The effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students. User-friendly simulation programs for QFT-based signal analysis synthesis using peak picking and perceptual selection based on psychoacoustics are developed in J-DSP. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to compute the solutions of the linear equations using quantum computing. The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis synthesis, and comparisons are performed between QLP and classical linear prediction (CLP) results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis synthesis, evaluation of performance with quantum noise, design of accurate quantum autocorrelation, and development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to the research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis synthesis.
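
For context, a textbook n-qubit QFT circuit can be built as in the sketch below (shown here with Qiskit as an assumed framework); the thesis's signal-encoding schemes and J-DSP implementations are not reproduced.

```python
import numpy as np
from qiskit import QuantumCircuit

def qft(n: int) -> QuantumCircuit:
    """Textbook quantum Fourier transform on n qubits."""
    qc = QuantumCircuit(n)
    for j in range(n - 1, -1, -1):
        qc.h(j)
        for k in range(j - 1, -1, -1):
            qc.cp(np.pi / 2 ** (j - k), k, j)  # controlled phase rotation
    for i in range(n // 2):                     # reverse qubit order
        qc.swap(i, n - 1 - i)
    return qc

print(qft(3).draw())
```
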
Date Created
2023
Agent

Distributed Learning and Data Collection with Strategic Agents

Description

The presence of strategic agents can pose unique challenges to data collection and distributed learning. This dissertation first explores the social network dimension of data collection markets, and then focuses on how strategic agents can be efficiently and effectively incentivized to cooperate in distributed machine learning frameworks. The first problem explores the impact of social learning in collecting and trading unverifiable information, where a data collector purchases data from users through a payment mechanism. Each user starts with a personal signal which represents the knowledge about the underlying state the data collector desires to learn. Through social interactions, each user also acquires additional information from his neighbors in the social network. It is revealed that both the data collector and the users can benefit from social learning, which drives down the privacy costs and helps to improve the state estimation for a given total payment budget. In the second half, a federated learning scheme to train a global learning model with strategic agents, who are not bound to contribute their resources unconditionally, is considered. Since the agents are not obliged to provide their true stochastic gradient updates and the server is not capable of directly validating the authenticity of reported updates, the learning process may reach a noncooperative equilibrium. First, the actions of the agents are assumed to be binary: cooperative or defective. If the cooperative action is taken, the agent sends a privacy-preserved version of its stochastic gradient signal. If the defective action is taken, the agent sends an arbitrary uninformative noise signal. Furthermore, this setup is extended to scenarios with more general action spaces, where the quality of the stochastic gradient updates has a range of discrete levels. The proposed methodology evaluates each agent's stochastic gradient according to a reference gradient estimate which is constructed from the gradients provided by other agents, and rewards the agent based on that evaluation.
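
A minimal sketch of the evaluation step described above: each agent's reported gradient is scored against a leave-one-out reference built from the other agents' reports. Cosine similarity is an assumed scoring rule for illustration; the dissertation's actual reward mechanism and privacy model are not reproduced.

```python
import numpy as np

def score_agents(grads: np.ndarray) -> np.ndarray:
    """Score each agent's reported gradient against a leave-one-out
    reference built from the other agents' reports (cosine similarity)."""
    n = len(grads)
    scores = np.empty(n)
    total = grads.sum(axis=0)
    for i in range(n):
        ref = (total - grads[i]) / (n - 1)        # reference excludes agent i
        num = grads[i] @ ref
        den = np.linalg.norm(grads[i]) * np.linalg.norm(ref)
        scores[i] = num / den if den > 0 else 0.0
    return scores

rng = np.random.default_rng(1)
true_grad = rng.normal(size=50)
honest = true_grad + rng.normal(0, 0.1, size=(4, 50))  # cooperative agents
noise = rng.normal(0, 1.0, size=(1, 50))               # defecting agent
print(np.round(score_agents(np.vstack([honest, noise])), 2))
```
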
Date Created
2023
Agent

Investigating Quantum Approaches to Algorithm Privacy and Speech Processing

Description

Quantum computing is becoming more accessible through modern noisy intermediate-scale quantum (NISQ) devices. These devices require substantial error correction and scaling before they become capable of fulfilling many of the promises that quantum computing algorithms make. This work investigates the current state of NISQ devices by implementing multiple classical computing scenarios with a quantum analog to observe how current quantum technology can be leveraged to achieve different tasks. First, quantum homomorphic encryption (QHE) is applied to the quantum teleportation protocol to show that this form of algorithm security is possible to implement with modern quantum computing simulators. QHE is capable of completely obscuring a teleported state with a linear increase, O(n), in the number of qubit gates. Additionally, the circuit depth increases by only a constant factor, O(c), when using only stabilizer circuits. Quantum machine learning (QML) is another potential application of NISQ technology that can be used to modify classical AI. QML is investigated using quantum hybrid neural networks for the classification of spoken commands on live audio data. Additionally, an edge computing scenario is examined to profile the interactions between a quantum simulator acting as a cloud server and an embedded processor board at the network edge. It is not practical to embed NISQ processors at a network edge, so this paradigm is important to study for practical quantum computing systems. The quantum hybrid neural network (QNN) learned to classify audio with accuracy (~94%) equivalent to that of a classical recurrent neural network. Introducing quantum simulation slows the system's responsiveness because it takes significantly longer to process quantum simulations than a classical neural network. This work shows that it is viable to implement classical computing techniques with quantum algorithms, but that current NISQ processing is sub-optimal when compared to classical methods.
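
For reference, the underlying teleportation circuit that QHE wraps can be written in its deferred-measurement form as in the sketch below, using Qiskit as an assumed framework; the homomorphic-encryption layer itself is not shown.

```python
from qiskit import QuantumCircuit

# Teleport qubit 0's state to qubit 2, using the deferred-measurement
# form (corrections as controlled gates instead of classical feedback).
qc = QuantumCircuit(3)
qc.ry(0.8, 0)        # some state to teleport (illustrative angle)
qc.h(1)              # Bell pair between qubits 1 and 2
qc.cx(1, 2)
qc.cx(0, 1)          # Bell-basis interaction on qubits 0 and 1
qc.h(0)
qc.cx(1, 2)          # X correction, controlled on qubit 1
qc.cz(0, 2)          # Z correction, controlled on qubit 0
print(qc.draw())
```
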
Date Created
2023
Agent

Methodologies to Improve Fidelity and Reliability of Deep Learning Models for Real-World Deployment

Description

The past decade witnessed the success of deep learning models in various applications of computer vision and natural language processing. This success can be predominantly attributed to (i) the availability of large amounts of training data; (ii) access to domain-aware knowledge; (iii) the i.i.d. assumption between the training and target distributions; and (iv) trust in existing metrics as reliable indicators of performance. When any of these assumptions are violated, the models exhibit brittleness, producing adversely varied behavior. This dissertation focuses on methods for accurate model design and characterization that enhance process reliability when certain assumptions are not met. With the need to safely adopt artificial intelligence tools in practice, it is vital to build reliable failure detectors that indicate regimes where the model must not be invoked. To that end, an error predictor trained with a self-calibration objective is developed to estimate loss consistent with the underlying model. The properties of the error predictor are described, and their utility in supporting introspection via feature importances and counterfactual explanations is elucidated. While such an approach can signal data regime changes, it is critical to calibrate models using regimes of inlier (training) and outlier data to prevent under- and over-generalization in models, i.e., incorrectly identifying inliers as outliers and vice versa. By identifying the space for specifying inliers and outliers, an anomaly detector that can effectively flag data of varying semantic complexities in medical imaging is next developed. Uncertainty quantification in deep learning models involves identifying sources of failure and characterizing model confidence to enable actionability. A training strategy is developed that allows the accurate estimation of model uncertainties, and its benefits are demonstrated for active learning and generalization gap prediction. This helps identify insufficiently sampled regimes and representation insufficiency in models. In addition, the task of deep inversion under data-scarce scenarios is considered, which in practice requires a prior to control the optimization. By identifying limitations in existing work, data priors powered by generative models and deep model priors are designed for audio restoration. With relevant empirical studies on a variety of benchmarks, the need for such design strategies is demonstrated.
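
A toy sketch of the failure-detector idea: an auxiliary error predictor is trained to regress the task model's per-sample loss from the same inputs, and samples with high predicted loss are flagged for abstention. The models, data, and abstention threshold are illustrative assumptions; the dissertation's self-calibration objective is not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy task: the model errs mostly near the class boundary, and the
# error predictor learns to flag those regions from the inputs alone.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

task = LogisticRegression().fit(X_tr, y_tr)
p_true = task.predict_proba(X_tr)[np.arange(len(y_tr)), y_tr]
loss = -np.log(np.clip(p_true, 1e-6, 1.0))      # per-sample cross-entropy

err_pred = GradientBoostingRegressor().fit(X_tr, loss)  # error predictor
est = err_pred.predict(X_te)
keep = est <= np.quantile(est, 0.9)             # abstain on the riskiest 10%
print(f"accuracy on all:  {(task.predict(X_te) == y_te).mean():.3f}")
print(f"accuracy on kept: {(task.predict(X_te[keep]) == y_te[keep]).mean():.3f}")
```
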
Date Created
2023
Agent

Evaluating the Efficiency of Quantum Simulators using Practical Application Benchmarks

Description

Quantum computing holds the potential to revolutionize various industries by solving problems that classical computers cannot solve efficiently. However, building quantum computers is still in its infancy, and simulators are currently the best available option to explore the potential of quantum computing. Therefore, developing comprehensive benchmarking suites for quantum computing simulators is essential to evaluate their performance and guide the development of future quantum algorithms and hardware. This study presents a systematic evaluation of quantum computing simulators’ performance using a benchmarking suite. The benchmarking suite is designed to meet the industry-standard performance benchmarks established by the Defense Advanced Research Projects Agency (DARPA) and includes standardized test data and comparison metrics that encompass a wide range of applications, deep neural network models, and optimization techniques. The thesis is divided into two parts to cover basic quantum algorithms and variational quantum algorithms for practical machine-learning tasks. In the first part, the run time and memory performance of quantum computing simulators are analyzed using basic quantum algorithms. The performance is evaluated using standardized test data and comparison metrics that cover fundamental quantum algorithms, including Quantum Fourier Transform (QFT), Inverse Quantum Fourier Transform (IQFT), Quantum Adder, and Variational Quantum Eigensolver (VQE). The analysis provides valuable insights into the simulators’ strengths and weaknesses and highlights the need for further development to enhance their performance. In the second part, benchmarks are developed using variational quantum algorithms for practical machine learning tasks such as image classification, natural language processing, and recommendation. The benchmarks address several unique challenges posed by benchmarking quantum machine learning (QML), including the effect of optimizations on time-to-solution, the stochastic nature of training, the inclusion of hybrid quantum-classical layers, and the diversity of software and hardware systems. The findings offer valuable insights into the simulators’ ability to solve practical machine-learning tasks and pinpoint areas for future research and enhancement. In conclusion, this study provides a rigorous evaluation of quantum computing simulators’ performance using a benchmarking suite that meets industry-standard performance benchmarks.
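
As a flavor of the run-time and memory measurements such a suite collects, the sketch below times a simple layered circuit on Qiskit's Aer statevector simulator and reports the statevector footprint. The circuit, qubit counts, and metrics are illustrative assumptions (with qiskit-aer as an assumed dependency), not the benchmarking suite developed in this thesis.

```python
import time
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

sim = AerSimulator(method="statevector")
for n in range(10, 22, 4):
    qc = QuantumCircuit(n)
    qc.h(range(n))                      # simple layered test circuit
    for i in range(n - 1):
        qc.cx(i, i + 1)
    qc.measure_all()
    t0 = time.perf_counter()
    sim.run(transpile(qc, sim), shots=256).result()
    dt = time.perf_counter() - t0
    mem_mb = (2 ** n) * 16 / 1e6        # statevector: 2^n complex128 amplitudes
    print(f"{n:2d} qubits: {dt:.3f} s, ~{mem_mb:.1f} MB statevector")
```
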
Date Created
2023
Agent

Evaluation of Machine Learning Techniques for Pneumonia Detection

Description

Although a relatively new technology, machine learning has rapidly demonstrated its many uses. One potential application of machine learning is the diagnosis of ailments in medical imaging. Ideally, through classification methods, a computer program would be able to identify different medical conditions when provided with an X-ray or other such scan. This would be very beneficial for overworked doctors and could act as an aid in giving accurate diagnoses. For this thesis project, five different machine-learning algorithms were tested on two datasets containing 5,856 lung X-ray scans labeled as either “Pneumonia” or “Normal”. The goal was to determine which algorithm achieved the highest accuracy, as well as how preprocessing the data affected the accuracy of the models. The following supervised-learning methods were tested: support vector machines, logistic regression, decision trees, random forest, and a convolutional neural network. Each model was tuned independently to achieve maximum performance before accuracy metrics were generated to pit the models against each other. Additionally, the effect of resizing images on model performance was investigated. Overall, the convolutional neural network proved to be the superior model for pneumonia detection, with 91% accuracy. After resizing to 28x28, CNN accuracy decreased to 85%. The random forest model performed second best. The 28x28 PneumoniaMNIST dataset achieved higher accuracy with traditional machine learning models than the HD Chest X-Ray dataset. Resizing the Chest X-ray images to 28x28 or larger had minimal effect on traditional model performance.
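
A minimal sketch of a small CNN of the kind compared in this project, assuming 28x28 grayscale inputs as in PneumoniaMNIST; the architecture and hyperparameters are illustrative assumptions, not the author's tuned model.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Binary pneumonia-vs-normal classifier for 28x28 grayscale scans.
model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(pneumonia)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10)
```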

Date Created
2023-05
Agent