Matching Items (4)
Description

While the growing prevalence of robots in industry and daily life necessitates knowing how to operate them safely and effectively, the steep learning curve of programming languages and formal AI education is a barrier for most beginner users. This thesis presents JEDAI-Ed, an interactive platform that combines a block-based programming interface with natural language instructions to teach robotics programming to novice users. An integrated robot simulator allows users to view the execution of their high-level plan, while the hierarchical low-level planning is abstracted away from them. Users receive human-understandable explanations of their planning failures, along with LLM-generated hints, to enhance the learning process. The results of a user study conducted with students having minimal programming experience show that JEDAI-Ed is successful in teaching robotic planning to users and in increasing their curiosity about AI in general.
Contributors: Dobhal, Daksh (Author) / Srivastava, Siddharth (Thesis advisor) / Gopalan, Nakul (Committee member) / Seifi, Hasti (Committee member) / Arizona State University (Publisher)
Created: 2024
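
The abstract above describes a pipeline in which a learner's block-based program is executed as a high-level plan and failures are explained in plain language. The sketch below illustrates that general idea only; the action names, precondition sets, and explanation step are hypothetical and are not the JEDAI-Ed implementation, which additionally abstracts away hierarchical low-level planning and uses an LLM to phrase the hints.

```python
# Minimal sketch (not JEDAI-Ed code): run a block-based high-level plan
# against a set of facts and return a learner-facing explanation when a
# step's precondition fails.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    preconditions: set   # facts that must hold before this block runs
    effects: set         # facts that become true afterwards (simplified: nothing is deleted)


def execute_plan(plan, facts):
    """Run each block in order; on failure, return an explanation string
    (a real system might ask an LLM to phrase this more helpfully)."""
    facts = set(facts)
    for step, action in enumerate(plan, start=1):
        missing = action.preconditions - facts
        if missing:
            return (f"Step {step} ({action.name}) failed: "
                    f"{', '.join(sorted(missing))} must be true first.")
        facts |= action.effects
    return "Plan executed successfully."


plan = [
    Action("pick_up(block_a)", {"hand_empty", "clear(block_a)"}, {"holding(block_a)"}),
    Action("place_on(block_a, block_b)", {"holding(block_a)", "clear(block_b)"},
           {"on(block_a, block_b)"}),
]
print(execute_plan(plan, {"hand_empty", "clear(block_a)"}))
# -> Step 2 (place_on(block_a, block_b)) failed: clear(block_b) must be true first.
```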
Description

This research project seeks to develop an innovative data visualization tool tailored for beginners, enhancing their ability to interpret and present data effectively. Central to the approach is an intuitive, user-friendly interface that simplifies the data visualization process, making it accessible even to those with no prior background in the field. The tool introduces users to standard visualization formats and exposes them to various alternative chart types, fostering a deeper understanding and a broader skill set in data representation. I plan to leverage innovative visualization techniques to ensure the tool is compelling and engaging. An essential aspect of my research involves conducting comprehensive user studies and surveys to assess the tool's impact on data visualization competencies among the target audience, gathering insights into its usability and effectiveness and enabling further refinements. The intended outcome is a powerful and versatile tool that serves as a valuable asset for students, researchers, and professionals who regularly engage with data. By democratizing data visualization skills, I aim to empower a broader audience to comprehend and creatively present complex data in a more meaningful and impactful manner.
Contributors: Narula, Jai (Author) / Bryan, Chris (Thesis advisor) / Seifi, Hasti (Committee member) / Bansal, Srividya (Committee member) / Arizona State University (Publisher)
Created: 2024
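
As a rough illustration of exposing beginners to alternative chart types, the sketch below maps a dataset's column types to one standard chart and a few alternatives. The rules and names are invented for illustration and are not taken from the tool described above.

```python
# Illustrative sketch only: a rule-based helper that suggests standard and
# alternative chart types from the shape of a small tabular dataset.
def suggest_charts(columns):
    """columns: dict of column_name -> 'numeric' | 'categorical' | 'temporal'."""
    kinds = sorted(columns.values())
    if kinds == ["numeric", "temporal"]:
        return {"standard": "line chart", "alternatives": ["area chart", "horizon chart"]}
    if kinds == ["categorical", "numeric"]:
        return {"standard": "bar chart", "alternatives": ["lollipop chart", "dot plot"]}
    if kinds == ["numeric", "numeric"]:
        return {"standard": "scatter plot", "alternatives": ["hexbin plot", "contour plot"]}
    return {"standard": "table", "alternatives": []}


print(suggest_charts({"month": "temporal", "sales": "numeric"}))
# -> {'standard': 'line chart', 'alternatives': ['area chart', 'horizon chart']}
```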
Description

Socially assistive robots (SARs) can act as assistants and caregivers, interacting and communicating with people through touch gestures. There has been ongoing research on using them as companion robots for children with autism, serving as therapy assistants and playmates. Building touch-perception systems for social robots has been a challenge: the sensor must be designed to ensure comfortable and natural user interaction while recording high-quality data and reliably detecting touch gestures. Accurate touch gesture classification is challenging because different users perform the same touch gesture in their own unique way. This study builds and evaluates a skin-like sensor by replicating a recent paper introducing a novel silicone-based sensor design, and performs touch gesture classification using deep-learning models. The study focuses on 8 gestures: Fistbump, Hitting, Holding, Poking, Squeezing, Stroking, Tapping, and Tickling. They were chosen based on previous research in which specialists determined which gestures were essential to detect while interacting with children with autism. In this work, a data-collection user study was conducted with 20 adult subjects, using the skin-like sensor to record gesture data and a load cell underneath to record the force. Three types of input were used for touch gesture classification: skin-like sensor & load cell data, only skin-like sensor data, and only load cell data. A Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) architecture was developed for inputs containing skin-like sensor data, and an LSTM network for load cell data alone. This work achieved average accuracies of 94% with skin-like sensor & load cell data, 95% with only skin-like sensor data, and 45% with only load cell data after a stratified 10-fold validation. With subject-dependent splitting, it achieved accuracies of 69% for skin-like sensor & load cell data, 66% for only skin-like sensor data, and 31% for only load cell data.
Contributors: Umesh, Tejas (Author) / Seifi, Hasti (Thesis advisor) / Fazli, Pooyan (Committee member) / Gopalan, Nakul (Committee member) / Arizona State University (Publisher)
Created: 2024
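
A hedged sketch of what a CNN-LSTM touch gesture classifier of the kind described above might look like in PyTorch. The channel count, window length, and layer sizes are assumptions, not the architecture used in the thesis.

```python
# Sketch of a CNN-LSTM classifier for 8 touch gestures; shapes are illustrative.
import torch
import torch.nn as nn


class CNNLSTM(nn.Module):
    def __init__(self, n_channels=16, n_classes=8, hidden=64):
        super().__init__()
        # 1D convolutions over time extract local touch features per window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models how those features evolve over the gesture.
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)   # logits over the 8 gestures

    def forward(self, x):                 # x: (batch, time, channels)
        x = x.transpose(1, 2)             # -> (batch, channels, time)
        x = self.cnn(x)                   # -> (batch, 64, time // 4)
        x = x.transpose(1, 2)             # -> (batch, time // 4, 64)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])


model = CNNLSTM()
logits = model(torch.randn(4, 128, 16))  # 4 windows, 128 timesteps, 16 sensor channels
print(logits.shape)                      # torch.Size([4, 8])
```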
Description

Mid-air ultrasound haptic technology can enhance user interaction and immersion in extended reality (XR) applications through contactless touch feedback. However, existing design tools for mid-air haptics primarily support the creation of static tactile sensations (tactons), which lack adaptability at runtime. These tactons do not offer the expressiveness required in interactive scenarios where a continuous, closed-loop response to user movement or environmental states is desirable. This thesis proposes AdapTics, a toolkit featuring a graphical interface for the rapid prototyping of adaptive tactons: dynamic sensations that can adjust at runtime based on user interactions, environmental changes, or other inputs. A software library and a Unity package accompany the graphical interface to enable integration of adaptive tactons into existing applications. The thesis presents the design space AdapTics offers for creating adaptive mid-air ultrasound tactons, along with evidence from a user study with 12 XR and haptic designers that the design tool improves Creativity Support Index ratings for Exploration and Expressiveness.
Contributors: John, Kevin (Author) / Seifi, Hasti (Thesis advisor) / Bryan, Chris (Committee member) / Schneider, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2024
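
To make the notion of an adaptive tacton concrete, the sketch below re-evaluates tacton parameters every frame from tracked hand state instead of replaying a fixed pattern. The parameter names and mappings are hypothetical and do not reflect the AdapTics library or Unity package API.

```python
# Conceptual sketch of a closed-loop "adaptive tacton" update; not AdapTics code.
import math
import time


def adaptive_tacton(hand_speed, hand_height):
    """Map the current tracking state to tacton parameters for this frame."""
    return {
        "intensity": min(1.0, 0.3 + 0.7 * hand_speed),   # faster motion -> stronger feedback
        "am_frequency_hz": 50 + 150 * hand_height,        # higher hand -> faster modulation
        "path_radius_m": 0.01 + 0.02 * hand_height,       # focal-point circle grows with height
    }


def render_loop(get_tracking, emit, frames=3, rate_hz=60):
    """Re-evaluate the tacton every frame instead of playing back a static clip."""
    for frame in range(frames):
        speed, height = get_tracking(frame)
        emit(adaptive_tacton(speed, height))
        time.sleep(1.0 / rate_hz)


# Stand-in tracking source and device output so the sketch runs on its own.
render_loop(lambda f: (abs(math.sin(0.1 * f)), 0.5), print)
```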