Matching Items (25)
Description
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent NLOS reconstruction advances have been achieved using time-resolved measurements, but acquiring these measurements requires expensive and specialized detectors and laser sources. This work proposes a data-driven approach for NLOS 3D localization requiring only a conventional camera and projector. The localization is formulated both as a voxelization and as a regression problem. Accuracy of greater than 90% is achieved in localizing an NLOS object to a 5 cm × 5 cm × 5 cm volume in real data. By adopting the regression approach, an object of width 10 cm is localized to within approximately 1.5 cm. To generalize to line-of-sight (LOS) scenes with non-planar surfaces, an adaptive lighting algorithm is adopted. This algorithm, based on radiosity, identifies and illuminates the scene patches in the LOS that contribute most to the NLOS light paths, and can factor in system power constraints. Improvements of 6%-15% in accuracy with a non-planar LOS wall using adaptive lighting are reported, demonstrating the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
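The regression formulation described above can be illustrated with a minimal sketch: a CNN maps a conventional camera image of the LOS wall to the 3D coordinates of the hidden object. The architecture, image size, and names below are illustrative assumptions, not the network from the thesis.

```python
import torch
import torch.nn as nn

class NLOSLocalizer(nn.Module):
    """Toy CNN regressor: image of the LOS wall -> (x, y, z) of the hidden object."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 3)  # regress continuous 3D coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = NLOSLocalizer()
wall_image = torch.randn(1, 3, 128, 128)  # indirect reflections captured by the camera
print(model(wall_image))                  # predicted NLOS object location
```

The voxelization formulation would instead discretize the hidden volume into 5 cm cells and attach a classification head over those cells.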
Contributors: Chandran, Sreenithy (Author) / Jayasuriya, Suren (Thesis advisor) / Turaga, Pavan (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Computed tomography (CT) and synthetic aperture sonar (SAS) are tomographic imaging techniques that are fundamental for applications in medical imaging and remote sensing. Despite their successes, a number of factors constrain their image quality. For example, a time-varying scene during measurement acquisition yields image artifacts. Additionally, factors such as bandlimited or sparse measurements limit image resolution. This thesis presents novel algorithms and techniques to account for these factors during image formation and outperform traditional reconstruction methods. In particular, this thesis formulates analysis-by-synthesis optimizations that leverage neural fields to predict the scene and differentiable physics models that incorporate prior knowledge about image formation. The specific contributions include: (1) a method for reconstructing CT measurements from time-varying (non-stationary) scenes; (2) a method for deconvolving SAS images, which improves image quality; (3) a method that couples neural fields and a differentiable acoustic model for 3D SAS reconstructions.
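A minimal sketch of the analysis-by-synthesis idea follows, assuming a coordinate MLP as the neural field and a crude line-integral forward model standing in for the CT/SAS physics; all names, dimensions, and data here are illustrative, not the thesis's models.

```python
import torch
import torch.nn as nn

# Neural field: 2D coordinate -> predicted attenuation at that point.
field = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def render(field, rays):
    """Differentiable forward model: integrate the field along each ray.
    rays: (R, S, 2) sample coordinates, S points per ray."""
    vals = field(rays.reshape(-1, 2)).reshape(rays.shape[0], rays.shape[1])
    return vals.mean(dim=1)  # crude quadrature standing in for the true physics

rays = torch.rand(32, 64, 2)    # hypothetical ray samples through the scene
measurements = torch.rand(32)   # stand-in for observed sinogram values

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(200):            # fit the field so rendered data match measurements
    opt.zero_grad()
    loss = ((render(field, rays) - measurements) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the forward model is differentiable, gradients flow from the measurement residual back into the field's weights, which is what lets prior knowledge about image formation shape the reconstruction.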
Contributors: Reed, Albert William (Author) / Jayasuriya, Suren (Thesis advisor) / Brown, Daniel C (Committee member) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation explores the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics. The aim of the research is to enable prosthetics to predict future states and control biomechanical properties in both linear and nonlinear fashions, with a particular focus on ergonomics. The research is motivated by the need to provide amputees with prosthetic devices that not only replicate the functionality of the missing limb, but also offer a high level of comfort and usability. Traditional prosthetic devices lack the sophistication to adjust to a user's movement patterns and can cause discomfort and pain over time. The proposed solution involves the development of machine learning-based controllers that can learn from user movements and adjust the prosthetic device's movements accordingly. The research involves a combination of simulation and real-world testing to evaluate the effectiveness of the proposed approach. The simulation involves the creation of a model of the prosthetic device and the use of machine learning algorithms to train controllers that predict future states and control biomechanical properties. The real-world testing involves the use of human subjects wearing the prosthetic device to evaluate its performance and usability. The research focuses on two main areas: the prediction of future states and the control of biomechanical properties. The prediction of future states involves the development of machine learning algorithms that can analyze a user's movements and predict the next movements with a high degree of accuracy. The control of biomechanical properties involves the development of algorithms that can adjust the prosthetic device's movements to ensure maximum comfort and usability for the user. The results of the research show that the use of artificial intelligence and machine learning techniques can significantly improve the performance and usability of prosthetic devices. The machine learning-based controllers developed in this research are capable of predicting future states and adjusting the prosthetic device's movements in real-time, leading to a significant improvement in ergonomics and usability. Overall, this dissertation provides a comprehensive analysis of the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics.
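As a rough sketch of the future-state prediction component described above, a small recurrent network can map a window of past joint states to the next state; the dimensions, horizon, and class names are illustrative assumptions rather than the dissertation's models.

```python
import torch
import torch.nn as nn

class GaitPredictor(nn.Module):
    """Toy next-state predictor: window of past joint states -> state at t+1."""
    def __init__(self, state_dim=6, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(state_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, state_dim)

    def forward(self, history):          # history: (batch, T, state_dim)
        h, _ = self.rnn(history)
        return self.out(h[:, -1])        # prediction fed to the low-level controller

predictor = GaitPredictor()
window = torch.randn(1, 50, 6)           # 50 past samples of joint angles/velocities
print(predictor(window).shape)           # torch.Size([1, 6])
```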
Contributors: CLARK, GEOFFEY M (Author) / Ben Amor, Heni (Thesis advisor) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Ward, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Quantum computing is becoming more accessible through modern noisy intermediate scale quantum (NISQ) devices. These devices require substantial error correction and scaling before they become capable of fulfilling many of the promises that quantum computing algorithms make. This work investigates the current state of NISQ devices by implementing multiple classical computing scenarios with a quantum analog to observe how current quantum technology can be leveraged to achieve different tasks. First, quantum homomorphic encryption (QHE) is applied to the quantum teleportation protocol to show that this form of algorithm security is possible to implement with modern quantum computing simulators. QHE is capable of completely obscuring a teleported state with only a linear increase, O(n), in the number of qubit gates. Additionally, the circuit depth increases by only a constant factor, O(c), when using only stabilizer circuits. Quantum machine learning (QML) is another potential application of NISQ technology that can be used to modify classical AI. QML is investigated using quantum hybrid neural networks for the classification of spoken commands on live audio data. Additionally, an edge computing scenario is examined to profile the interactions between a quantum simulator acting as a cloud server and an embedded processor board at the network edge. It is not practical to embed NISQ processors at a network edge, so this paradigm is important to study for practical quantum computing systems. The quantum hybrid neural network (QNN) learned to classify audio with accuracy (~94%) equivalent to a classical recurrent neural network. Introducing quantum simulation slows the system's responsiveness because quantum simulations take significantly longer to process than a classical neural network. This work shows that it is viable to implement classical computing techniques with quantum algorithms, but that current NISQ processing is sub-optimal when compared to classical methods.
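For reference, the teleportation protocol that QHE is applied to can be written as a short stabilizer circuit; the Qiskit sketch below uses the deferred-measurement form and omits the QHE encryption layer itself.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 1)
qc.ry(0.8, 0)            # arbitrary state on qubit 0 to be teleported
qc.h(1)                  # create the shared Bell pair on qubits 1 and 2
qc.cx(1, 2)
qc.cx(0, 1)              # rotate qubits 0 and 1 into the Bell basis
qc.h(0)
qc.cx(1, 2)              # deferred X correction (replaces classical feed-forward)
qc.cz(0, 2)              # deferred Z correction
qc.measure(2, 0)         # qubit 2 now carries the teleported state
print(qc.draw())
```

Because every gate here is a Clifford operation apart from the initial state preparation, the circuit stays within the stabilizer framework that the abstract's constant-factor depth claim, O(c), relies on.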
Contributors: Yarter, Maxwell (Author) / Spanias, Andreas (Thesis advisor) / Arenz, Christian (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In the era of big data, more and more decisions and recommendations are being made by machine learning (ML) systems and algorithms. Despite their many successes, there have been notable deficiencies in the robustness, rigor, and reliability of these ML systems, which have had detrimental societal impacts. In the next generation of ML, these significant challenges must be addressed through careful algorithmic design, and it is crucial that practitioners and meta-algorithms have the necessary tools to construct ML models that align with human values and interests. In an effort to help address these problems, this dissertation studies a tunable loss function called α-loss for the ML setting of classification. The α-loss is a hyperparameterized loss function originating from information theory that continuously interpolates between the exponential (α = 1/2), log (α = 1), and 0-1 (α = ∞) losses, hence providing a holistic perspective of several classical loss functions in ML. Furthermore, the α-loss exhibits unique operating characteristics depending on the value (and different regimes) of α; notably, for α > 1, the α-loss robustly trains models when noisy training data is present. Thus, the α-loss can provide robustness to ML systems for classification tasks, and this has bearing in many applications, e.g., social media, finance, academia, and medicine; indeed, results are presented where the α-loss produces more robust logistic regression models for COVID-19 survey data, with gains over state-of-the-art algorithmic approaches.
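For concreteness, a common parameterization of the α-loss from this line of work, written in terms of the probability P̂(y) that the model assigns to the true label y, is sketched below; the dissertation's exact normalization may differ.

```latex
\ell_\alpha\bigl(y,\hat{P}\bigr)
  = \frac{\alpha}{\alpha-1}\Bigl(1 - \hat{P}(y)^{\frac{\alpha-1}{\alpha}}\Bigr),
  \qquad \alpha \in (0,\infty].
```

Taking limits recovers the interpolation stated above: α → 1 yields the log loss -log P̂(y), α = 1/2 yields the exponential-type loss P̂(y)^{-1} - 1, and α → ∞ yields 1 - P̂(y), a soft version of the 0-1 loss.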
Contributors: Sypherd, Tyler (Author) / Sankar, Lalitha (Thesis advisor) / Berisha, Visar (Committee member) / Dasarathy, Gautam (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
According to a Centers for Disease Control and Prevention report, around 29,668 United States residents aged over 65 years died as a result of a fall in 2016. Other injuries, like wrist fractures, hip fractures, and head injuries, also occur as a result of falls. Certain groups of people are more prone to experience falls than others, one of which is individuals with stroke. The two most common issues for individuals with stroke are ankle weakness and foot drop, both of which contribute to falls. To mitigate this issue, the most popular clinical remedy given to these users is a thermoplastic ankle-foot orthosis (AFO). These AFOs help improve gait velocity, stride length, and cadence. However, studies have shown that a continuous restraint on the ankle harms the compensatory stepping response and forward propulsion. It has been shown in previous studies that compensatory stepping and forward propulsion are crucial for the user's ability to recover from postural perturbations. Hence, there is a need for active devices that can supply plantarflexion during push-off and dorsiflexion during the swing phase of gait. Although advancements in orthotic research have shown major improvements in supporting the ankle joint for rehabilitation, there is a lack of available active devices that can help impaired users in daily activities. In this study, our primary focus is to build an unobtrusive, cost-effective, and easy-to-wear active device for gait rehabilitation and fall prevention in individuals who are at risk. The device will use a double-acting cylinder that can be easily incorporated into the user's footwear using a novel custom-designed powered ankle brace. The device will use inertial measurement units (IMUs) to measure kinematic parameters of the lower body and a custom control algorithm to actuate the device based on the measurements. The study can be used to advance the field of gait assistance, rehabilitation, and potentially fall prevention of individuals with lower-limb impairments through the use of an active AFO.
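As a loose illustration of how such a control algorithm might gate the actuation, the rule below maps shank IMU readings to valve commands for the double-acting cylinder; the signal names and thresholds are hypothetical placeholders, not the study's algorithm.

```python
def valve_command(shank_pitch_deg: float, shank_gyro_dps: float) -> str:
    """Map shank IMU readings to a command for the double-acting cylinder.
    Thresholds are illustrative placeholders, not tuned values."""
    if shank_gyro_dps > 50 and shank_pitch_deg > 10:
        return "plantarflex"   # push-off: pressurize the extension chamber
    if shank_gyro_dps < -50:
        return "dorsiflex"     # swing: pressurize retraction to avoid foot drop
    return "neutral"           # mid-stance: leave the ankle free

print(valve_command(15.0, 80.0))   # -> "plantarflex"
```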
Contributors: Ray, Sambarta (Author) / Honeycutt, Claire (Thesis advisor) / Dasarathy, Gautam (Thesis advisor) / Redkar, Sangram (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In this thesis, six computer-simulation experiments were conducted to replicate the negative association between sample size and accuracy that is repeatedly found in the ML literature, by accounting for data leakage and publication bias. Understanding why this negative association occurs is critical because published studies have repeatedly reported that the accuracies of ML models are overoptimistic, leading to cases where results are irreproducible despite multiple trials and experiments. Additionally, after replicating the negative association between sample size and accuracy, parametric curves were fitted to the empirical learning curves in order to evaluate performance. It was found that there is significant variance in accuracies when the sample size is small, but little to no variation when the sample size is large. In other words, the empirical learning curves with data leakage and publication bias were able to achieve the same accuracy as the learning curve without data leakage at a large sample size.
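The parametric fitting step can be sketched with an inverse power law, a common choice for learning curves; the functional form and data points below are illustrative assumptions, not the thesis's simulation outputs.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Learning curve acc(n) = a - b * n^(-c); a is the asymptotic accuracy."""
    return a - b * np.power(n, -c)

n = np.array([25, 50, 100, 200, 400, 800, 1600])
acc = np.array([0.62, 0.70, 0.76, 0.80, 0.83, 0.85, 0.86])  # illustrative data

params, _ = curve_fit(power_law, n, acc, p0=[0.9, 1.0, 0.5])
print("fitted (a, b, c):", np.round(params, 3))
```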

Contributors: Kottooru, Rishab (Author) / Berisha, Visar (Thesis director) / Dasarathy, Gautam (Committee member) / Saidi, Pouria (Committee member) / Barrett, The Honors College (Contributor) / Chemical Engineering Program (Contributor)
Created: 2023-05
Description
Quantum computing holds the potential to revolutionize various industries by solving problems that classical computers cannot solve efficiently. However, building quantum computers is still in its infancy, and simulators are currently the best available option to explore the potential of quantum computing. Therefore, developing comprehensive benchmarking suites for quantum computing simulators is essential to evaluate their performance and guide the development of future quantum algorithms and hardware. This study presents a systematic evaluation of quantum computing simulators' performance using a benchmarking suite. The benchmarking suite is designed to meet the industry-standard performance benchmarks established by the Defense Advanced Research Projects Agency (DARPA) and includes standardized test data and comparison metrics that encompass a wide range of applications, deep neural network models, and optimization techniques. The thesis is divided into two parts to cover basic quantum algorithms and variational quantum algorithms for practical machine learning tasks. In the first part, the run time and memory performance of quantum computing simulators are analyzed using basic quantum algorithms. The performance is evaluated using standardized test data and comparison metrics that cover fundamental quantum algorithms, including Quantum Fourier Transform (QFT), Inverse Quantum Fourier Transform (IQFT), Quantum Adder, and Variational Quantum Eigensolver (VQE). The analysis provides valuable insights into the simulators' strengths and weaknesses and highlights the need for further development to enhance their performance. In the second part, benchmarks are developed using variational quantum algorithms for practical machine learning tasks such as image classification, natural language processing, and recommendation. The benchmarks address several unique challenges posed by benchmarking quantum machine learning (QML), including the effect of optimizations on time-to-solution, the stochastic nature of training, the inclusion of hybrid quantum-classical layers, and the diversity of software and hardware systems. The findings offer valuable insights into the simulators' ability to solve practical machine learning tasks and pinpoint areas for future research and enhancement. In conclusion, this study provides a rigorous evaluation of quantum computing simulators' performance using a benchmarking suite that meets industry-standard performance benchmarks.
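A minimal run-time benchmark in this spirit times a QFT circuit of increasing width on a simulator; this sketch assumes Qiskit with the Aer backend installed and measures wall-clock time only, not the full DARPA-style metric suite.

```python
import time
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit_aer import AerSimulator

sim = AerSimulator()
for n in (4, 8, 12, 16):
    circ = QuantumCircuit(n)
    circ.append(QFT(n), range(n))   # the circuit under test
    circ.measure_all()
    tqc = transpile(circ, sim)
    t0 = time.perf_counter()
    sim.run(tqc, shots=1024).result()
    print(f"QFT on {n} qubits: {time.perf_counter() - t0:.3f} s")
```

A memory benchmark would track peak simulator allocation over the same sweep, since statevector cost grows as 2^n.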
Contributors: Sathyakumar, Rajesh (Author) / Spanias, Andreas (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Modern physical systems are experiencing tremendous evolution, with growing size, increasingly complex structures, and the incorporation of new devices. This calls for better planning, monitoring, and control. However, achieving these goals is challenging since the system knowledge (e.g., system structures and edge parameters) may be unavailable even for a normally operating system, let alone under dynamic changes like maintenance, reconfigurations, and events. Therefore, extracting system knowledge becomes a central topic. Luckily, advanced metering techniques bring abundant data, leading to the emergence of Machine Learning (ML) methods with efficient learning and fast inference. This work proposes a systematic framework of ML-based methods to learn system knowledge under three what-if scenarios: (i) What if the system is normally operated? (ii) What if the system suffers dynamic interventions? (iii) What if the system is new, with limited data? For each case, this thesis proposes principled solutions with extensive experiments. Chapter 2 tackles scenario (i), and the golden rule is to learn an ML model that maintains physical consistency, bringing high extrapolation capacity for changing operational conditions. The key finding is that physical consistency can be linked to convexity, a central concept in optimization. Therefore, convexified ML designs are proposed, and their global optimality implies faithfulness to the underlying physics. Chapter 3 handles scenario (ii), and the goal is to identify the event time, type, and locations. The problem is formalized as multi-class classification with special attention to accuracy and speed. Subsequently, Chapter 3 builds an ensemble learning framework to aggregate different ML models for better prediction. Next, to tackle high-volume data quickly, a tensor (a multi-dimensional array) is used to store and process data, yielding compact and informative vectors for fast inference. Finally, if no labels exist, Chapter 3 uses physical properties to generate labels for learning. Chapter 4 deals with scenario (iii), where a feasible approach is to transfer knowledge from similar systems under the framework of Transfer Learning (TL). Chapter 4 proposes cutting-edge system-level TL by considering the network structure, complex spatial-temporal correlations, and different physical information.
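One standard way to realize the convexity idea from Chapter 2 is an input-convex network, where nonnegative weights on the hidden path and convex nondecreasing activations keep the output convex in the input; this sketch is an assumption about the general construction, not the thesis's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConvexNet(nn.Module):
    """f(x) convex in x: ReLU activations plus nonnegative hidden-path weights."""
    def __init__(self, d_in=4, hidden=32):
        super().__init__()
        self.Wx0 = nn.Linear(d_in, hidden)
        self.Wz1 = nn.Linear(hidden, hidden, bias=False)  # clamped >= 0 below
        self.Wx1 = nn.Linear(d_in, hidden)
        self.Wz2 = nn.Linear(hidden, 1, bias=False)       # clamped >= 0 below
        self.Wx2 = nn.Linear(d_in, 1)

    def forward(self, x):
        z = F.relu(self.Wx0(x))
        z = F.relu(F.linear(z, self.Wz1.weight.clamp(min=0)) + self.Wx1(x))
        return F.linear(z, self.Wz2.weight.clamp(min=0)) + self.Wx2(x)

net = InputConvexNet()
print(net(torch.randn(5, 4)).shape)   # torch.Size([5, 1])
```

Convexity of the learned map is what lets global optimality of a downstream optimization stand in for faithfulness to the physics.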
Contributors: Li, Haoran (Author) / Weng, Yang (Thesis advisor) / Tong, Hanghang (Committee member) / Dasarathy, Gautam (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Linear-regression estimators have become widely accepted as a reliable statistical tool for predicting outcomes. Because linear regression is a long-established procedure, the properties of linear-regression estimators are well understood, and the estimators can be trained very quickly. Many estimators exist for modeling linear relationships, each having ideal conditions for optimal performance. The differences stem from the introduction of a bias into the parameter estimation through the use of various regularization strategies. One of the more popular ones is ridge regression, which uses ℓ2-penalization of the parameter vector. In this work, the proposed graph-regularized linear estimator is pitted against popular ridge regression when the parameter vector is known to be dense. When additional knowledge that the parameters are smooth with respect to a graph is available, it can be used to improve the parameter estimates. To achieve this goal, an additional smoothing penalty is introduced into the traditional loss function of ridge regression. The mean squared error (MSE) is used as a performance metric, and the analysis is presented for fixed design matrices having a unit covariance matrix. The specific problem setup enables studying the theoretical conditions under which the graph-regularized estimator outperforms the ridge estimator. The eigenvectors of the Laplacian matrix, indicating the graph of connections between the various dimensions of the parameter vector, form an integral part of the analysis. Experiments have been conducted on simulated data to compare the performance of the two estimators for Laplacian matrices of several types of graphs: complete, star, line, and 4-regular. The experimental results indicate that the theory can possibly be extended to more general settings taking smoothness, a concept defined in this work, into consideration.
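The estimator described here admits a closed form: adding a smoothing penalty lam_g * b^T L b to the ridge objective ||y - Xb||^2 + lam2 ||b||^2 gives b_hat = (X^T X + lam2 I + lam_g L)^{-1} X^T y. The sketch below compares it against ridge for a parameter vector that is smooth over a line (path) graph; the penalty weights and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))

# Laplacian of the line (path) graph over the 5 coordinates.
A = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)
L = np.diag(A.sum(axis=1)) - A

beta = np.array([1.0, 1.1, 1.2, 1.1, 1.0])     # smooth w.r.t. the path graph
y = X @ beta + 0.5 * rng.standard_normal(n)

lam2, lamg = 1.0, 5.0
ridge = np.linalg.solve(X.T @ X + lam2 * np.eye(d), X.T @ y)
graph = np.linalg.solve(X.T @ X + lam2 * np.eye(d) + lamg * L, X.T @ y)
for name, b in (("ridge", ridge), ("graph", graph)):
    print(f"{name}: parameter mse = {np.mean((b - beta) ** 2):.5f}")
```

When the true parameters vary smoothly along the graph, the extra b^T L b term shrinks differences between connected coordinates rather than shrinking everything toward zero, which is the source of the improvement studied in the thesis.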
Contributors: Sajja, Akarshan (Author) / Dasarathy, Gautam (Thesis advisor) / Berisha, Visar (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created: 2022