Matching Items (110)
Description

Deep metric learning has recently shown extremely promising results in the classical data domain, creating well-separated feature spaces. This idea was also adapted to quantum computers via Quantum Metric Learning (QMeL). QMeL consists of a two-step process: a classical model compresses the data to fit into the limited number of qubits, then a Parameterized Quantum Circuit (PQC) is trained to create better separation in Hilbert space. However, on Noisy Intermediate-Scale Quantum (NISQ) devices, QMeL solutions result in high circuit width and depth, both of which limit scalability. The proposed Quantum Polar Metric Learning (QPMeL) uses a classical model to learn the parameters of the polar form of a qubit. A shallow PQC with Ry and Rz gates is then utilized to create the state, with a trainable layer of ZZ(θ) gates to learn entanglement. The circuit also computes fidelity via a SWAP test for the proposed Fidelity Triplet Loss function, used to train both the classical and quantum components. When compared to QMeL approaches, QPMeL achieves 3x better multi-class separation while using only half the number of gates and depth. QPMeL is shown to outperform classical networks with similar configurations, presenting a promising avenue for future research on fully classical models with quantum loss functions.
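A minimal sketch of the circuit structure and loss described above, written with PennyLane (a framework assumption; the abstract does not name one). The qubit count, gate layout, and margin are illustrative, and fidelity is computed here as a direct statevector overlap rather than the in-circuit SWAP test the thesis uses:

```python
import numpy as np
import pennylane as qml

n_qubits = 3  # illustrative; the classical head would emit one (theta, phi) per qubit

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def embed(thetas, phis, gammas):
    # Polar-form state preparation: Ry sets the polar angle, Rz the phase.
    for i in range(n_qubits):
        qml.RY(thetas[i], wires=i)
        qml.RZ(phis[i], wires=i)
    # Shallow trainable ZZ(theta) layer learns entanglement between neighbors.
    for i in range(n_qubits - 1):
        qml.IsingZZ(gammas[i], wires=[i, i + 1])
    return qml.state()

def fidelity(psi_a, psi_b):
    # |<psi_a|psi_b>|^2; the thesis measures this in-circuit via a SWAP test.
    return np.abs(np.vdot(psi_a, psi_b)) ** 2

def fidelity_triplet_loss(anchor, positive, negative, margin=0.5):
    # Pull same-class states together (fidelity -> 1) and push
    # different-class states apart (fidelity -> 0).
    return max(0.0, margin - fidelity(anchor, positive) + fidelity(anchor, negative))
```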
Contributors: Sharma, Vinayak (Author) / Shrivastava, Aviral (Thesis advisor) / Jiang, Zilin (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created: 2024
Description

SLAM (Simultaneous Localization and Mapping) is a problem that has existed for a long time in robotics and autonomous navigation. The objective of SLAM is for a robot to simultaneously figure out its position in space and map its environment. SLAM is especially useful, and in fact mandatory, for robots that want to navigate autonomously. The description might make it seem like a chicken-and-egg problem, but numerous methods have been proposed to tackle SLAM. Before the rise in popularity of deep learning and AI (Artificial Intelligence), most existing solutions involved traditional hand-coded algorithms that would receive and process sensor information and convert it into some solvable, sensor-agnostic problem. The challenge for these sorts of methods is having to tackle dynamic environments: the more variety in the environment, the poorer the results. Moreover, due to the increase in computational power and the capability of deep learning-based image processing, visual SLAM has become extremely viable, and perhaps even preferable to traditional SLAM algorithms. In this research, a deep learning-based solution to the SLAM problem is proposed, specifically monocular visual SLAM, which solves the SLAM problem purely with a single camera as the input. The model is tested on the KITTI (Karlsruhe Institute of Technology & Toyota Technological Institute) odometry dataset.
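As a rough illustration of how a learned monocular odometry component of such a system is commonly structured (this is not the thesis's actual network; the layer sizes and pose parameterization are assumptions), a pair of consecutive frames can be regressed to a relative 6-DoF pose:

```python
import torch
import torch.nn as nn

class MonoVO(nn.Module):
    """Regresses the relative 6-DoF pose between two consecutive frames."""
    def __init__(self):
        super().__init__()
        # The two RGB frames are stacked along the channel axis (6 channels in).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 3 translation + 3 rotation (e.g., Euler angle) components.
        self.pose = nn.Linear(128, 6)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)
        return self.pose(self.encoder(x).flatten(1))

# Training would compare predicted relative poses against KITTI ground truth,
# typically with an MSE loss weighted between translation and rotation terms.
```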
Contributors: Rupaakula, Krishna Sandeep (Author) / Bansal, Ajay (Thesis advisor) / Baron, Tyler (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Frontend development often involves the repetitive and time-consuming task of transforming a Graphical User Interface (GUI) design into frontend code. The GUI design could either be an image or a design created in tools like Figma, Sketch, etc. This process can be particularly challenging when the website designs are experimental and undergo multiple iterations before the final version gets deployed. In such cases, developers work with the designers to make continuous changes and improve the look and feel of the website. This can lead to a lot of rework and a poorly managed codebase that requires significant developer resources. To tackle this problem, researchers are exploring ways to automate the process of transforming image designs into functional websites instantly. This thesis explores the use of machine learning, specifically Recurrent Neural Networks (RNNs), to generate an intermediate code from an image design and then compile it into React web frontend code. By utilizing this approach, designers can essentially transform an image design into a functional website, granting them creative freedom and the ability to present working prototypes to stakeholders in real time. To overcome the limitations of existing publicly available datasets, the thesis places significant emphasis on generating synthetic datasets. As part of this effort, the research proposes a novel method to double the size of the pix2code [2] dataset by incorporating additional complex HTML elements such as login forms, carousels, and cards. This approach has the potential to enhance the quality and diversity of training data available for machine learning models. Overall, the proposed approach offers a promising solution to the repetitive and time-consuming task of transforming GUI designs into frontend code.
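A hedged sketch of the image-to-intermediate-code idea: a CNN encodes the GUI screenshot and an RNN decodes intermediate-DSL tokens, which a separate compiler would turn into React components. The class name, vocabulary handling, and layer sizes are illustrative, not the thesis's implementation:

```python
import torch
import torch.nn as nn

class Image2DSL(nn.Module):
    """CNN encoder + RNN decoder that emits intermediate-DSL tokens."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden + 64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, image, token_ids):
        ctx = self.cnn(image).flatten(1)                     # (B, 64) image context
        ctx = ctx.unsqueeze(1).expand(-1, token_ids.size(1), -1)
        x = torch.cat([self.embed(token_ids), ctx], dim=-1)  # teacher forcing
        h, _ = self.rnn(x)
        return self.out(h)  # next-token logits; decoded DSL compiles to React
```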
Contributors: Singh, Ajitesh Janardan (Author) / Bansal, Ajay (Thesis advisor) / Mehlhase, Alexandra (Committee member) / Baron, Tyler (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Since the early 2000s the Rubik's Cube has seen growing usage at speedsolving competitions and as an effective tool to teach Science, Technology, Engineering, and Mathematics (STEM) topics at hundreds of schools and universities across the world. Recently, cube manufacturers have begun embedding sensors to enable digital face tracking. The live feedback from these so-called "smartcubes" enables a new wave of immersive solution tutorials and interactive educational games using the cube as a controller. Existing smartcube software has several limitations. Manufacturers' applications support only a narrow set of puzzle form factors and application platforms, fragmenting the ecosystem. Most apps require an active internet connection for key features, limiting where users can practice with a smartcube. Finally, existing applications focus on a single 3x3x3 connection, losing opportunities afforded by new form factors. This research demonstrates an open-source smartcube application which mitigates these limitations. Particular attention is given to creating an Application Programming Interface (API) for smartcube communication and building representative solve-analysis tools. These innovations have included successful negotiations to re-license existing open-source Rubik's Cube software projects to support deployment on multiple platforms, particularly iOS. The resulting application supports smartcubes from three manufacturers, runs on two platforms (Android and iOS), functions entirely offline after an initial download of remote assets, demonstrates concurrent connections with up to six smartcubes, and supports all current and anticipated smartcube form factors. These foundational elements can accelerate future efforts to build smartcube applications, including automated performance feedback systems and personalized gamification of learning experiences. Such advances will hopefully enhance the Rubik's Cube's value both as a competitive toy and as a pedagogical tool in educational institutions worldwide.
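The abstract does not publish the API itself; the following is a hypothetical sketch of what a manufacturer-agnostic smartcube interface could look like (all names here are assumptions, not the project's actual API):

```python
from abc import ABC, abstractmethod
from typing import Callable

class SmartCube(ABC):
    """Hypothetical manufacturer-agnostic smartcube connection interface."""

    @abstractmethod
    def connect(self) -> None:
        """Open a Bluetooth session with the physical cube."""

    @abstractmethod
    def on_move(self, callback: Callable[[str], None]) -> None:
        """Register a callback invoked with each face turn, e.g. "R" or "U'"."""

    @abstractmethod
    def state(self) -> str:
        """Return the current facelet state of the puzzle."""

# Each supported manufacturer supplies a concrete subclass, so solve-analysis
# tools depend only on this interface and can work offline, concurrently, and
# across form factors.
```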
Contributors: Hale, Joseph (Author) / Bansal, Ajay (Thesis advisor) / Heinrichs, Robert (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Recommendation systems provide recommendations based on user behavior and content data. User behavior and content data are fed to machine learning algorithms to train them and give recommendations to the users. These algorithms need a large amount of data for a reasonable conversion rate, but for small applications the available amount of data is minimal, leading to high recommendation aberrations. Also, when an existing large-scale application with a high amount of available data adopts a new recommendation system, it requires some time and testing to decide which recommendation algorithm is best suited to achieve higher conversion rates. This learning curve is costly when the user base and data size are significantly high. In this thesis, A/B testing is used with manual intervention in the decision-making of recommendation systems. To understand the effectiveness of the recommendations, user interaction data from each variant is compared across user experiences. Based on these comparisons, the experiments conclude on the effectiveness of A/B testing for recommendation systems.
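A minimal sketch of the A/B mechanics implied above, assuming deterministic hash-based bucketing and click-through conversion events (both assumptions; the thesis may define variants and metrics differently):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into recommendation algorithm A or B."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(events: list[dict], variant: str) -> float:
    """Fraction of served recommendations in a variant that converted."""
    served = [e for e in events if e["variant"] == variant]
    if not served:
        return 0.0
    return sum(e["converted"] for e in served) / len(served)

# The better-performing algorithm can then be promoted manually, matching
# the human-in-the-loop decision step the thesis describes.
```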
Contributors: Vaidya, Yogesh Vinayak (Author) / Bansal, Ajay (Thesis advisor) / Findler, Michael (Committee member) / Chakravarthi, Bharatesh (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

This paper introduces Zenith, a statically typed, functional programming language that compiles to Lua modules. The goal of Zenith is to be used in tandem with Lua as a secondary language, into which Lua developers can transition potentially unsound programs. Developers are thereby ensured a set of guarantees at compile time, which are provided through Zenith's language design and type system. This paper formulates the reasoning behind the design choices in Zenith, based on prior work. It also provides a basic understanding of, and intuitions about, the Hindley-Milner type system used in Zenith, and the functional programming data types used to encode unsound functions. With these ideas combined, the paper concludes with how Zenith can provide soundness and runtime safety as a language, and how Zenith may be used with Lua to create safe systems.
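Zenith's own syntax is not reproduced in the abstract; as a Python analogue of the functional data types it mentions for encoding unsound functions, a Result type turns a runtime failure into a value that callers must inspect (all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    message: str

Result = Union[Ok[T], Err]

def safe_div(a: float, b: float) -> Result[float]:
    # An operation that would raise at runtime in plain Lua becomes a value
    # the type system forces callers to handle before use.
    return Err("division by zero") if b == 0 else Ok(a / b)

match safe_div(10, 0):
    case Ok(v):
        print(v)
    case Err(msg):
        print("handled:", msg)
```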
Contributors: Shrestha, Abhash (Author) / De Luca, Gennaro (Thesis advisor) / Bansal, Ajay (Thesis advisor) / Chen, Yinong (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Traditional sports coaching involves face-to-face instruction with athletes or playing back 2D videos of athletes' training. However, if the coach is not in the same area as the athlete, then the coach will not be able to see the athlete's full body and thus cannot give precise guidance, limiting the athlete's improvement. To address these challenges, this paper proposes Augmented Coach, an augmented reality platform where coaches can view, manipulate, and comment on athletes' movement volumetric video data remotely via the network. In particular, this work includes: a) capturing the athlete's movement video data with Kinects and converting it into point cloud format; b) transmitting the point cloud data to the coach's Oculus headset via a 5G or wireless network; and c) the coach's commenting on the athlete's joints. In addition, the evaluation of Augmented Coach includes not only an assessment of its performance on five metrics under both wireless and 5G network environments, but also the coaches' and athletes' experience of using it. The results show that Augmented Coach enables coaches to instruct athletes from a distance and provide effective feedback for correcting athletes' motions over the network.
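A minimal sketch of step b), streaming point-cloud frames over a network socket. The length-prefixed float32 framing and the (N, 6) position-plus-color layout are assumptions, not the thesis's actual wire format:

```python
import socket
import struct
import numpy as np

def send_frame(sock: socket.socket, points: np.ndarray) -> None:
    """Stream one point-cloud frame as a length-prefixed float32 buffer.

    `points` is an (N, 6) array: x, y, z position plus r, g, b color,
    as might be produced from a fused Kinect capture.
    """
    payload = points.astype(np.float32).tobytes()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_frame(sock: socket.socket) -> np.ndarray:
    """Receive one frame on the headset side and restore its shape."""
    (size,) = struct.unpack("!I", _recv_exact(sock, 4))
    return np.frombuffer(_recv_exact(sock, size), dtype=np.float32).reshape(-1, 6)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf
```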
Contributors: Qiao, Yunhan (Author) / LiKamWa, Robert (Thesis advisor) / Bansal, Ajay (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Astronomy has a data de-noising problem. The quantity of data produced by astronomical instruments is immense, and a wide variety of noise is present in this data, including artifacts. Many types of this noise are not easily filtered using traditional handwritten algorithms. Deep learning techniques present a potential solution to the identification and filtering of these more difficult types of noise. In this thesis, deep learning approaches to two astronomical data de-noising steps are attempted and evaluated. Pre-existing simulation tools are utilized to generate a high-quality training dataset for deep neural network models. These models are then tested on real-world data. One set of models masks diffraction spikes from bright stars in James Webb Space Telescope data. A second set of models identifies and masks regions of the sky that would interfere with sky surface brightness measurements. The results obtained indicate that many such astronomical data de-noising and analysis problems can be addressed by this approach of simulating a high-quality training dataset and then utilizing a deep learning model trained on that dataset.
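A toy sketch of the simulate-then-train idea for the diffraction-spike case: inject synthetic spikes into clean frames and keep the injection mask as the training label. The 4-arm cross here is a simplification of JWST's six-pointed spike pattern, and all parameters are illustrative:

```python
import numpy as np

def add_diffraction_spikes(image: np.ndarray, n_stars: int = 3, rng=None):
    """Inject cross-shaped spike artifacts into a float image.

    Returns the modified image and a boolean mask marking spike pixels,
    forming one (input, label) training pair for a masking model.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    for _ in range(n_stars):
        y, x = rng.integers(0, h), rng.integers(0, w)
        length, brightness = rng.integers(10, h // 2), rng.uniform(5, 50)
        ys = slice(max(0, y - length), min(h, y + length))
        xs = slice(max(0, x - length), min(w, x + length))
        image[ys, x] += brightness  # vertical spike arm
        image[y, xs] += brightness  # horizontal spike arm
        mask[ys, x] = mask[y, xs] = True
    return image, mask

# Pairs like these would train a segmentation network whose predicted masks
# flag spike pixels to exclude from downstream measurements.
```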
Contributors: Jeffries, Charles George (Author) / Bansal, Ajay (Thesis advisor) / Windhorst, Rogier (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created: 2024
Description

In today's world, robotic technology has become increasingly prevalent across various fields such as manufacturing, warehousing, delivery, and household applications. Planning is crucial for robots to solve various tasks in such difficult domains. However, most robots rely heavily on humans for the world models that enable planning. Consequently, not only is it expensive to create such world models, since doing so requires human experts who understand both the domain and the robot's limitations, but these models may also be biased by human embodiment, which can be limiting for robots whose kinematics are not human-like. This thesis answers the fundamental question: can we learn such world models automatically? This research shows that we can learn complex world models directly from unannotated and unlabeled demonstrations containing only the configurations of the robot and the objects in the environment. The core contributions of this thesis are the first known approaches for i) task and motion planning that explicitly handles stochasticity, ii) automatically inventing neuro-symbolic state and action abstractions for deterministic and stochastic motion planning, and iii) automatically inventing relational and interpretable world models in the form of symbolic predicates and actions. This thesis also presents thorough and rigorous empirical experimentation. With experiments in both simulated and real-world settings, this thesis demonstrates the efficacy and robustness of automatically learned world models in overcoming challenges and generalizing beyond situations encountered during training.
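As a toy illustration of the flavor of inventing a symbolic predicate from unlabeled demonstrations (not the thesis's neuro-symbolic method; the distance-threshold heuristic and names are assumptions):

```python
import numpy as np

def invent_contact_predicate(demos: list, threshold_percentile: float = 10.0):
    """Invent a binary predicate over object pairs from raw demonstrations.

    Each demo is a sequence of states; each state maps object names to
    (x, y, z) positions. A distance threshold is fit from the data, and the
    returned classifier acts as a learned symbol such as near(a, b).
    """
    distances = [
        np.linalg.norm(np.array(state[a]) - np.array(state[b]))
        for demo in demos
        for state in demo
        for a in state
        for b in state
        if a < b
    ]
    threshold = np.percentile(distances, threshold_percentile)

    def near(state, a, b) -> bool:
        return np.linalg.norm(np.array(state[a]) - np.array(state[b])) <= threshold

    return near
```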
Contributors: Shah, Naman (Author) / Srivastava, Siddharth (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Konidaris, George (Committee member) / Speranzon, Alberto (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2024
Description

A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e., the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
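A highly simplified, set-based caricature of the model-reconciliation idea behind (2), where an explanation is the part of the robot's model that the human is missing but the plan's validity depends on (the fact encoding is an assumption):

```python
def model_reconciliation_explanation(robot_model: set, human_model: set,
                                     plan_preconditions: set) -> set:
    """Return robot-model facts, absent from the human's mental model,
    that the plan's validity depends on -- the content to communicate."""
    missing = robot_model - human_model
    return missing & plan_preconditions

# Example: the human does not know the door is locked, so the robot's
# detour plan seems inexplicable until that fact is communicated.
robot = {"door_locked", "has_key", "hallway_clear"}
human = {"hallway_clear"}
needed = {"door_locked"}
print(model_reconciliation_explanation(robot, human, needed))  # {'door_locked'}
```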
Contributors: Chakraborti, Tathagata (Author) / Kambhampati, Subbarao (Thesis advisor) / Talamadupula, Kartik (Committee member) / Scheutz, Matthias (Committee member) / Ben Amor, Hani (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018