Matching Items (32)
Description
Parkinson's disease is a neurodegenerative condition diagnosed in patients based on clinical history and the motor signs of tremor, rigidity, and bradykinesia; an estimated seven to ten million people worldwide live with the disease. Deep brain stimulation (DBS) provides substantial relief of the motor signs of Parkinson's disease. It is an advanced surgical technique used when drug therapy is no longer sufficient. DBS alleviates the motor symptoms of Parkinson's disease by targeting the subthalamic nucleus with high-frequency electrical stimulation.

This work proposes a behavior recognition model for patients with Parkinson's disease. In particular, an adaptive learning method is proposed to classify behavioral tasks of Parkinson's disease patients using local field potential and electrocorticography signals collected during DBS implantation surgeries. Distinct patterns exhibited by these signals in a matched feature space enable discrimination between motor and language behavioral tasks. Unique features are first extracted from the deep brain signals in the time-frequency space using the matching pursuit decomposition algorithm. The Dirichlet process Gaussian mixture model then uses the extracted features to cluster the different behavioral signal patterns, without training or any prior information. The performance of the method is compared with other machine learning methods, and the advantages of each method are discussed under different conditions.
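
To make the clustering stage concrete, the sketch below fits a Dirichlet process Gaussian mixture to feature vectors using scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior. The synthetic four-dimensional features stand in for matching pursuit decomposition coefficients and are an illustrative assumption, not data from the dissertation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Stand-in time-frequency features: two behavioral "tasks" producing
# different clusters in a 4-D matched feature space (hypothetical data).
motor = rng.normal(loc=[2.0, 0.5, -1.0, 0.0], scale=0.3, size=(200, 4))
language = rng.normal(loc=[-1.5, 1.0, 0.5, 2.0], scale=0.3, size=(200, 4))
features = np.vstack([motor, language])

# Dirichlet process Gaussian mixture: n_components is only a truncation
# level; the DP prior prunes components it does not need, so no labels
# or fixed cluster count are supplied.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
)
labels = dpgmm.fit_predict(features)

print("effective clusters:", np.unique(labels).size)
```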
Contributors: Dutta, Arindam (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Holbert, Keith E. (Committee member) / Bliss, Daniel W. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis presents the simulation of Bluetooth and Wi-Fi radios in real-life interference environments. When information is transmitted over communication channels, data may get corrupted by noise and other channel impairments. To receive the information safely and correctly, error correction coding schemes are generally employed during the design of communication systems. Simulations of wireless communication systems are usually done in a way that focuses on some aspect of communications and neglects the others: currently available simulators perform either network-layer simulations or physical-layer simulations. In many situations, simulations are required that capture inter-layer aspects of communication systems. For all such scenarios, WiscaComm, a simulation environment based on time-domain samples, is built. WiscaComm allows the study of network and physical layer interactions in detail. The advantage of time-domain sampling is that it allows different radios to be simulated together, which is better than the complex baseband representation of symbols. The environment also supports the study of multiple protocols operating simultaneously, which is of increasing importance in today's environment.
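
The appeal of a shared time-domain sample stream is that heterogeneous radios can be superimposed directly. The sketch below, with waveforms, offsets, and rates that are illustrative assumptions rather than WiscaComm internals, sums a crude FSK-like burst and an OFDM-like burst on one sampling clock to model mutual interference:

```python
import numpy as np

fs = 80e6                      # shared simulation sample rate (illustrative)
t = np.arange(0, 1e-4, 1 / fs)
rng = np.random.default_rng(1)

# Crude stand-in for a Bluetooth-like narrowband FSK radio: one bit per
# sample, +/-250 kHz deviation around a 2 MHz offset.
bits = rng.integers(0, 2, t.size)
f_inst = 2e6 + (2 * bits - 1) * 250e3
fsk = np.exp(1j * 2 * np.pi * np.cumsum(f_inst) / fs)

# Crude stand-in for a Wi-Fi-like multicarrier radio: 16 BPSK subcarriers
# at an OFDM-like 312.5 kHz spacing.
symbols = 2 * rng.integers(0, 2, (16, 1)) - 1
freqs = (np.arange(16)[:, None] - 8) * 312.5e3
ofdm = (symbols * np.exp(1j * 2 * np.pi * freqs * t)).sum(axis=0)

# Because both radios live in the same sampled time domain, interference
# is literally addition, plus thermal noise.
rx = fsk + 0.5 * ofdm + 0.05 * (rng.standard_normal(t.size)
                                + 1j * rng.standard_normal(t.size))
print("received power:", np.mean(np.abs(rx) ** 2))
```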
Contributors: Nolastname, Ujjwala (Author) / Bliss, Daniel W. (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / McGiffen, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Digital imaging and image processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, through different processing stages (e.g., acquisition, compression, and transmission), images are subjected to different types of distortions that degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions.

The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection.

A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed. The proposed method requires a very small number of reduced-reference (RR) features. Extensive experiments performed on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images.

In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied. Moreover, two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of the synthesized textures.

Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Evaluations of the proposed approach on a diverse set of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods both qualitatively and quantitatively.
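
As a toy illustration of the reduced-reference idea, the sketch below transmits only a handful of summary statistics of the reference image and scores a distorted image by the distance between signatures. The specific features and distance are assumptions for illustration, not the proposed RRIQA method:

```python
import numpy as np

def rr_features(img):
    """A tiny reduced-reference signature: a few global statistics of the
    gradient magnitude. Only these numbers, not the reference image,
    would be sent alongside the distorted image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std(),
                     np.percentile(mag, 90), np.percentile(mag, 99)])

def rr_quality_score(ref_feats, dist_img):
    """Smaller signature distance = smaller predicted quality degradation
    (illustrative distance, not a validated metric)."""
    return np.linalg.norm(ref_feats - rr_features(dist_img))

rng = np.random.default_rng(2)
reference = rng.random((64, 64))
blurred = (reference + np.roll(reference, 1, 0) + np.roll(reference, 1, 1)) / 3

feats = rr_features(reference)        # transmitted side information
print("degradation score:", rr_quality_score(feats, blurred))
```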
Contributors: Golestaneh, Seyedalireza (Author) / Karam, Lina (Thesis advisor) / Bliss, Daniel W. (Committee member) / Li, Baoxin (Committee member) / Turaga, Pavan K. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent NLOS reconstruction advances have been achieved using time-resolved measurements, but acquiring these measurements requires expensive and specialized detectors and laser sources. This work proposes a data-driven approach for NLOS 3D localization requiring only a conventional camera and projector. Localization is performed by posing both a voxelized classification problem and a regression problem. Accuracy of greater than 90% is achieved in localizing an NLOS object to a 5 cm × 5 cm × 5 cm volume in real data. By adopting the regression approach, an object of width 10 cm is localized to approximately 1.5 cm. To generalize to line-of-sight (LOS) scenes with non-planar surfaces, an adaptive lighting algorithm is adopted. This algorithm, based on radiosity, identifies and illuminates scene patches in the LOS that most contribute to the NLOS light paths, and can factor in system power constraints. Improvements of 6%–15% in accuracy with a non-planar LOS wall using adaptive lighting are reported, demonstrating the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
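
A minimal sketch of the adaptive-lighting selection step: given per-patch scores for how much each LOS patch contributes to the NLOS light paths (random stand-ins here for radiosity-derived values) and per-patch power costs, greedily illuminate the most informative patches within a power budget. The scoring and budget model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_patches = 40

# Hypothetical radiosity-derived contribution of each LOS patch to the
# NLOS light paths, and the projector power each patch would consume.
contribution = rng.random(n_patches)
power_cost = 0.5 + rng.random(n_patches)
power_budget = 5.0

# Greedy selection: best contribution-per-watt first, respecting budget.
order = np.argsort(contribution / power_cost)[::-1]
selected, spent = [], 0.0
for p in order:
    if spent + power_cost[p] <= power_budget:
        selected.append(int(p))
        spent += power_cost[p]

print(f"illuminating patches {sorted(selected)} "
      f"using {spent:.2f} of {power_budget} power units")
```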
Contributors: Chandran, Sreenithy (Author) / Jayasuriya, Suren (Thesis advisor) / Turaga, Pavan (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
RF convergence of radar and communications users is rapidly becoming an issue for a multitude of stakeholders. To hedge against growing spectral congestion, research into cooperative radar and communications systems has been identified as a critical necessity for the United States and other countries. Further, the joint sensing-communicating paradigm appears imminent in several technological domains. In the pursuit of co-designing radar and communications systems that work cooperatively and benefit from each other's existence, joint radar-communications metrics are defined and bounded as a measure of performance. Estimation rate is introduced: a novel measure of radar estimation information as a function of time. Complementary to the communications data rate, the two systems can now be compared on the same scale. An information-centric approach has a number of advantages: it defines precisely what is gained through radar illumination, and it serves as a measure of spectral efficiency. By jointly bounding the radar estimation rate and the communications data rate, systems can be designed as a joint optimization problem.
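
As a rough numerical illustration, both users can be expressed in bits per second. The Gaussian tracking form of the estimation rate used below, R_est = (1/(2*T_pri)) * log2(1 + sigma_proc^2 / sigma_est^2), is an assumed simplification for demonstration, not a restatement of the dissertation's bounds:

```python
import numpy as np

# --- Communications user: Shannon data rate ---
bandwidth_hz = 5e6
comm_snr = 10.0                       # linear SNR
data_rate = bandwidth_hz * np.log2(1 + comm_snr)

# --- Radar user: estimation rate (assumed Gaussian tracking form) ---
# Process variance sigma_proc^2 accumulates between looks; each radar
# measurement resolves the parameter down to variance sigma_est^2.
t_pri = 1e-3                          # pulse repetition interval (s)
sigma_proc2 = 4.0                     # tracking process variance
sigma_est2 = 0.25                     # measurement (CRLB-like) variance
est_rate = (1 / (2 * t_pri)) * np.log2(1 + sigma_proc2 / sigma_est2)

# Both quantities now live on the same bits-per-second scale.
print(f"data rate:       {data_rate / 1e6:.2f} Mbit/s")
print(f"estimation rate: {est_rate:.1f} bit/s")
```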
Contributors: Paul, Bryan (Author) / Bliss, Daniel W. (Thesis advisor) / Berisha, Visar (Committee member) / Kosut, Oliver (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Computed tomography (CT) and synthetic aperture sonar (SAS) are tomographic imaging techniques that are fundamental to applications in medical and remote sensing. Despite their successes, a number of factors constrain their image quality. For example, a time-varying scene during measurement acquisition yields image artifacts, and factors such as bandlimited or sparse measurements limit image resolution. This thesis presents novel algorithms and techniques that account for these factors during image formation and outperform traditional reconstruction methods. In particular, this thesis formulates analysis-by-synthesis optimizations that leverage neural fields to predict the scene and differentiable physics models that incorporate prior knowledge about image formation. The specific contributions are: (1) a method for reconstructing CT measurements of time-varying (non-stationary) scenes; (2) a method for deconvolving SAS images, which improves image quality; and (3) a method that couples neural fields with a differentiable acoustic model for 3D SAS reconstructions.
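
The analysis-by-synthesis loop can be sketched in a few lines of PyTorch: a coordinate MLP (neural field) predicts the scene, a differentiable forward model synthesizes measurements, and the measurement error is backpropagated into the field. The linear operator standing in for CT/SAS physics, the network size, and the data below are all illustrative assumptions:

```python
import torch

torch.manual_seed(0)

# Neural field: maps 2-D coordinates to a scalar scene value.
field = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

# Sample grid over the scene, and a stand-in differentiable forward
# model: a fixed linear operator A in the role that CT projection or
# SAS acoustics would play.
n = 16
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n),
                        torch.linspace(-1, 1, n), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
A = torch.randn(64, n * n) / n
measurements = torch.randn(64)        # observed data (synthetic here)

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(500):
    scene = field(coords).squeeze(-1)  # predict the scene at grid points
    synthesized = A @ scene            # differentiable "physics"
    loss = torch.mean((synthesized - measurements) ** 2)
    opt.zero_grad()
    loss.backward()                    # analysis-by-synthesis gradient
    opt.step()

print("final measurement error:", loss.item())
```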
Contributors: Reed, Albert William (Author) / Jayasuriya, Suren (Thesis advisor) / Brown, Daniel C (Committee member) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation explores the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics. The aim of the research is to enable prosthetics to predict future states and control biomechanical properties in both linear and nonlinear fashions, with a particular focus on ergonomics. The research is motivated by the need to provide amputees with prosthetic devices that not only replicate the functionality of the missing limb but also offer a high level of comfort and usability. Traditional prosthetic devices lack the sophistication to adjust to a user's movement patterns and can cause discomfort and pain over time. The proposed solution involves the development of machine learning-based controllers that can learn from user movements and adjust the prosthetic device's movements accordingly.

The research involves a combination of simulation and real-world testing to evaluate the effectiveness of the proposed approach. The simulation involves creating a model of the prosthetic device and using machine learning algorithms to train controllers that predict future states and control biomechanical properties. The real-world testing involves human subjects wearing the prosthetic device to evaluate its performance and usability.

The research focuses on two main areas: the prediction of future states and the control of biomechanical properties. The prediction of future states involves developing machine learning algorithms that analyze a user's movements and predict the next movements with a high degree of accuracy. The control of biomechanical properties involves developing algorithms that adjust the prosthetic device's movements to ensure maximum comfort and usability for the user.

The results of the research show that the use of artificial intelligence and machine learning techniques can significantly improve the performance and usability of prosthetic devices. The machine learning-based controllers developed in this research are capable of predicting future states and adjusting the prosthetic device's movements in real time, leading to a significant improvement in ergonomics and usability. Overall, this dissertation provides a comprehensive analysis of the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics.
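
As a toy illustration of the future-state prediction component, a least-squares autoregressive model can predict the next joint angle from recent history. The gait signal, model class, and horizon below are assumptions for demonstration, not the dissertation's controller:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical joint-angle trace: periodic gait plus sensor noise.
t = np.arange(1000)
angle = 20 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5, t.size)

# Build an autoregressive dataset: predict angle[k] from the previous
# `order` samples, then fit the predictor by least squares.
order = 10
X = np.stack([angle[i:i + order] for i in range(t.size - order)])
y = angle[order:]
coeffs, *_ = np.linalg.lstsq(X[:800], y[:800], rcond=None)

# One-step-ahead prediction on held-out samples.
pred = X[800:] @ coeffs
rmse = np.sqrt(np.mean((pred - y[800:]) ** 2))
print(f"held-out one-step RMSE: {rmse:.3f} degrees")
```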
Contributors: CLARK, GEOFFEY M (Author) / Ben Amor, Heni (Thesis advisor) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Ward, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Quantum computing is becoming more accessible through modern noisy intermediate-scale quantum (NISQ) devices. These devices require substantial error correction and scaling before they become capable of fulfilling many of the promises that quantum computing algorithms make. This work investigates the current state of NISQ devices by implementing quantum analogs of multiple classical computing scenarios to observe how current quantum technology can be leveraged to achieve different tasks.

First, quantum homomorphic encryption (QHE) is applied to the quantum teleportation protocol to show that this form of algorithm security can be implemented with modern quantum computing simulators. QHE is capable of completely obscuring a teleported state with a linear increase O(n) in the number of qubit gates. Additionally, the circuit depth increases by only a constant factor O(c) when using only stabilizer circuits.

Quantum machine learning (QML) is another potential application of NISQ technology that can be used to modify classical AI. QML is investigated using quantum hybrid neural networks for the classification of spoken commands on live audio data. Additionally, an edge computing scenario is examined to profile the interactions between a quantum simulator acting as a cloud server and an embedded processor board at the network edge. It is not practical to embed NISQ processors at a network edge, so this paradigm is important to study for practical quantum computing systems. The quantum hybrid neural network (QNN) learned to classify audio with accuracy (~94%) equivalent to a classical recurrent neural network. Introducing quantum simulation slows the system's responsiveness because quantum simulations take significantly longer to process than a classical neural network. This work shows that it is viable to implement classical computing techniques with quantum algorithms, but that current NISQ processing is sub-optimal compared to classical methods.
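
For reference, the teleportation protocol underlying the QHE experiment (without the encryption layer) can be simulated directly on a statevector. The numpy implementation below is a generic textbook sketch, not the thesis's circuit:

```python
import numpy as np

# One-qubit gates and projectors.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0., 1], [1, 0]])
Z = np.array([[1., 0], [0, -1]])
I2, P0, P1 = np.eye(2), np.diag([1., 0]), np.diag([0., 1])

def op(gates):
    """Kronecker product of per-qubit operators (qubit 0 = leftmost)."""
    out = gates[0]
    for g in gates[1:]:
        out = np.kron(out, g)
    return out

def cnot(control, target, n=3):
    a = [P0 if q == control else I2 for q in range(n)]
    b = [P1 if q == control else (X if q == target else I2) for q in range(n)]
    return op(a) + op(b)

# Random state to teleport on qubit 0; qubits 1 and 2 start in |00>.
rng = np.random.default_rng(5)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
state = np.kron(psi, [1, 0, 0, 0])

# Entangle qubits 1,2 into a Bell pair, then run the protocol.
state = cnot(1, 2) @ op([I2, H, I2]) @ state
state = op([H, I2, I2]) @ cnot(0, 1) @ state

# Measure qubits 0 and 1 (sample one outcome and collapse).
probs = np.abs(state.reshape(2, 2, 2)) ** 2
p01 = probs.sum(axis=2).ravel()
m = rng.choice(4, p=p01)
m0, m1 = m >> 1, m & 1
collapsed = state.reshape(2, 2, 2)[m0, m1] / np.sqrt(p01[m])

# Classical corrections on qubit 2: X if m1, then Z if m0.
if m1: collapsed = X @ collapsed
if m0: collapsed = Z @ collapsed

# Up to global phase, qubit 2 now holds psi.
print("fidelity:", abs(np.vdot(psi, collapsed)) ** 2)
```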
Contributors: Yarter, Maxwell (Author) / Spanias, Andreas (Thesis advisor) / Arenz, Christian (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Due to their effectiveness in capturing similarities between different entities, graphical models are widely used to represent datasets that reside on irregular and complex manifolds. Graph signal processing offers support for handling such complex datasets. By extending the conceptual frame of digital signal processing from the time and frequency domains to the graph domain, operators such as the graph shift, graph filter, and graph Fourier transform are defined.

In this dissertation, two novel graph filter design methods are proposed. First, a graph filter with multiple shift matrices is applied to semi-supervised classification; it can handle features of uneven quality through an embedded feature importance evaluation process. Three optimization solutions are provided: an alternating minimization method that is simple to implement, a convex relaxation method that provides a theoretical performance benchmark, and a genetic algorithm that is computationally efficient and better at controlling overfitting. Second, a graph filter with a splitting-and-merging scheme is proposed, which splits the graph into multiple subgraphs. The corresponding subgraph filters are trained in parallel and, finally, the final graph filter is obtained by merging all the subgraph filters. Due to the splitting process, redundant edges in the original graph are dropped, which saves computational cost in semi-supervised classification. At the same time, this scheme also enables the filter to represent unevenly sampled data in manifold learning.

To evaluate the performance of the proposed graph filter design approaches, simulation experiments with synthetic and real datasets are conducted. The Monte Carlo cross-validation method is employed to demonstrate the need for the proposed graph filter design approaches in various application scenarios. Criteria such as accuracy, Gini score, F1-score, and learning curves are provided to analyze the performance of the proposed methods and their competitors.
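
For context, a basic single-shift graph filter is a polynomial in the graph shift matrix S applied to a graph signal x, i.e., y = h_0 x + h_1 S x + h_2 S^2 x + ... The sketch below builds one on a small random graph; the graph, taps, and normalization are illustrative, not the dissertation's multi-shift or splitting-and-merging designs:

```python
import numpy as np

rng = np.random.default_rng(6)

# Small undirected graph: symmetric adjacency matrix as the shift S.
n = 8
A = np.triu(rng.integers(0, 2, (n, n)), 1)
S = (A + A.T).astype(float)
S /= np.abs(np.linalg.eigvals(S)).max()   # normalize spectral radius

x = rng.normal(size=n)                    # graph signal (one value per node)
h = [1.0, 0.5, 0.25]                      # filter taps h_0, h_1, h_2

# y = h_0*x + h_1*Sx + h_2*S^2 x : each tap mixes information from
# neighborhoods one hop farther away.
y = np.zeros(n)
Skx = x.copy()
for hk in h:
    y += hk * Skx
    Skx = S @ Skx

print("filtered signal:", np.round(y, 3))
```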
Contributors: Fan, Jie (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Deep neural networks (DNNs) have had tremendous success in a variety of statistical learning applications due to their vast expressive power. Most applications run DNNs on the cloud on parallelized architectures, but there is a need for efficient DNN inference at the edge with low-precision hardware and analog accelerators. To make trained models more robust for this setting, quantization and analog compute noise are modeled as weight-space perturbations to DNNs, and an information-theoretic regularization scheme is used to penalize the KL-divergence between perturbed and unperturbed models. This regularizer has similarities to both natural gradient descent and knowledge distillation, but has the advantage of explicitly promoting the network toward a broader minimum that is robust to weight-space perturbations. In addition to the proposed regularization, KL-divergence is directly minimized using knowledge distillation. Initial validation on FashionMNIST and CIFAR10 shows that the information-theoretic regularizer and knowledge distillation outperform existing quantization schemes based on the straight-through estimator or L2-constrained quantization.
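
A minimal PyTorch sketch of the regularizer's shape, assuming additive Gaussian weight noise as the perturbation model; the noise scale, network, and data are illustrative stand-ins:

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10))

def kl_perturbation_penalty(model, x, noise_std=0.05):
    """KL(clean || perturbed) under additive Gaussian weight noise, a
    stand-in for quantization / analog-compute perturbations."""
    clean_logp = F.log_softmax(model(x), dim=1)
    # Evaluate the same architecture with perturbed weights, without
    # mutating the model, via a functional (stateless) call.
    noisy_params = {name: p + noise_std * torch.randn_like(p)
                    for name, p in model.named_parameters()}
    noisy_logp = F.log_softmax(functional_call(model, noisy_params, (x,)),
                               dim=1)
    return F.kl_div(noisy_logp, clean_logp, log_target=True,
                    reduction="batchmean")

x = torch.randn(32, 784)                   # stand-in image batch
y = torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y) + 0.1 * kl_perturbation_penalty(model, x)
loss.backward()                            # pushes toward a broad minimum
print("total loss:", float(loss))
```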
Contributors: Kadambi, Pradyumna (Author) / Berisha, Visar (Thesis advisor) / Dasarathy, Gautam (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019