Matching Items (2)
Filtering by
- All Subjects: Haptics
- Creators: Venkateswara, Hemanth
Description
Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As a sensory organ, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, for a non-visual traveler, someone who is blind or has low vision, this visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although the area of assistive technology in terms of electronic travel aids (ETAs) has received considerable attention over the last two decades, surprisingly little work has focused on augmenting rather than replacing current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy over a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns, which represent object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel.
Lastly, it is important to point out that the guiding principle of this work was to provide the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thereby giving the navigator the information necessary to make informed navigation decisions independently, at a distance.
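The static-pattern encoding described above (object location at a single time instance, rendered as touch) can be illustrated with a minimal sketch. Everything here is hypothetical: the actuator count, sensing range, and intensity falloff are invented for illustration and do not reflect the actual Haptic Chair, HaptWrap, or HapBack designs.

```python
import math

NUM_ACTUATORS = 8     # hypothetical ring of 8 vibrotactile motors around the torso
MAX_RANGE_M = 5.0     # hypothetical sensing range in meters

def static_pattern(obj_x, obj_y):
    """Return (actuator_index, intensity) for an object at (obj_x, obj_y)
    meters relative to the traveler (x = right, y = forward).

    Actuator 0 faces straight ahead; indices increase clockwise.
    Intensity falls off linearly with distance and is silent past range.
    """
    # Bearing of the object, 0 = straight ahead, measured clockwise.
    bearing = math.atan2(obj_x, obj_y) % (2 * math.pi)
    sector = 2 * math.pi / NUM_ACTUATORS
    # Snap the bearing to the nearest actuator's sector.
    index = int((bearing + sector / 2) // sector) % NUM_ACTUATORS
    distance = math.hypot(obj_x, obj_y)
    # Closer objects vibrate harder; beyond MAX_RANGE_M the cue is silent.
    intensity = max(0.0, 1.0 - distance / MAX_RANGE_M)
    return index, round(intensity, 2)
```

For example, an object 2.5 m directly ahead maps to the front actuator at half intensity, while an object 2 m to the right activates the right-side actuator more strongly. A dynamic pattern, in this framing, would be a timed sequence of such static cues sweeping across actuators.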
Contributors: Duarte, Bryan Joiner (Author) / McDaniel, Troy (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Societal infrastructure is built with vision at the forefront of daily life. For those with
severe visual impairments, this creates countless barriers to the participation and
enjoyment of life’s opportunities. Technological progress has been both a blessing and
a curse in this regard. Digital text together with screen readers and refreshable Braille
displays have made whole libraries readily accessible and rideshare tech has made
independent mobility more attainable. Simultaneously, screen-based interactions and
experiences have only grown in pervasiveness and importance, precluding many of
those with visual impairments.
Sensory Substitution, the process of substituting an unavailable modality with
another one, has shown promise as an alternative to accommodation, but in recent
years meaningful strides in Sensory Substitution for vision have declined in frequency.
Given recent advances in Computer Vision, this stagnation is especially disconcerting.
Designing Sensory Substitution Devices (SSDs) for vision for use in interactive settings
that leverage modern Computer Vision techniques presents a variety of challenges
including perceptual bandwidth, human-computer-interaction, and person-centered
machine learning considerations. To surmount these barriers, an approach called
Personal Foveated Haptic Gaze (PFHG) is introduced. PFHG consists of two primary
components: a human visual system inspired interaction paradigm that is intuitive
and flexible enough to generalize to a variety of applications called Foveated Haptic
Gaze (FHG), and a person-centered learning component to address the expressivity
limitations of most SSDs. This component is called One-Shot Object Detection by
Data Augmentation (1SODDA), a one-shot object detection approach that allows a
user to specify the objects they are interested in locating visually and with minimal
effort realizing an object detection model that does so effectively.
The Personal Foveated Haptic Gaze framework was realized in a virtual and real-
world application: playing a 3D, interactive, first person video game (DOOM) and
finding user-specified real-world objects. User study results found Foveated Haptic
Gaze to be an effective and intuitive interface for interacting with a dynamic visual
world using solely haptics. Additionally, 1SODDA achieves competitive performance
among few-shot object detection methods and high-framerate many-shot object
detectors. This combination paves the way for modern Sensory Substitution
Devices for vision.
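The foveation idea behind FHG, that targets near the user's gaze point should produce strong, fine-grained haptic feedback while peripheral targets produce weaker cues, can be sketched minimally. This is not the dissertation's implementation; the foveal radius and exponential falloff constant are invented for illustration.

```python
import math

FOVEA_RADIUS = 0.1   # hypothetical: normalized screen distance of full acuity

def foveated_intensity(gaze, target):
    """Haptic intensity in [0, 1] for a target, given the current gaze point.

    Both points are (x, y) in normalized [0, 1] screen coordinates.
    Inside the "fovea" the cue is at full strength; in the periphery it
    decays exponentially, loosely echoing the decline of visual acuity
    with eccentricity.
    """
    d = math.dist(gaze, target)
    if d <= FOVEA_RADIUS:
        return 1.0
    return math.exp(-4.0 * (d - FOVEA_RADIUS))
```

Under this weighting, a user sweeping their gaze across a scene receives a sharp haptic "spotlight" around the point of attention, which is one way an interaction paradigm inspired by the human visual system could be realized with a limited perceptual bandwidth.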
Contributors: Fakhri, Bijan (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy L (Committee member) / Venkateswara, Hemanth (Committee member) / Amor, Heni (Committee member) / Arizona State University (Publisher)
Created: 2020