Matching Items (2)
Filtering by
- All Subjects: Sensory Substitution
- Creators: Olson, Markey Cierra
- Creators: Venkateswara, Hemanth
Description
Humans rely on a complex interplay of visual, tactile, and proprioceptive feedback to accomplish even the simplest of daily tasks. These senses work together to provide information about the size, weight, shape, density, and texture of the objects being interacted with. While vision is heavily relied upon for many tasks, especially those involving accurate reaches, people can typically accomplish common daily skills without constant visual feedback, relying instead on tactile and proprioceptive cues. Amputees using prosthetic hands, however, do not currently have access to such cues, making these tasks impossible. This experiment was designed to test whether vibratory haptic cues could be used in place of tactile feedback to signal contact in a size discrimination task. Two experiments were run in which subjects were asked to identify changes in block size between consecutive trials using either physical or virtual blocks, testing the accuracy of size discrimination with tactile and haptic feedback, respectively. In both experiments, blocks randomly increased or decreased in size in increments of 2 to 12 mm between trials. The results showed that subjects were significantly better at determining size changes using tactile feedback than vibratory haptic cues. This suggests that, while haptic feedback can technically be used to grasp and discriminate between objects of different sizes, it does not provide the same quality of input as tactile cues.
Contributors: Olson, Markey Cierra (Author) / Helms-Tilley, Stephen (Thesis director) / Buneo, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2015-05
Description
Societal infrastructure is built with vision at the forefront of daily life. For those with severe visual impairments, this creates countless barriers to the participation in and enjoyment of life's opportunities. Technological progress has been both a blessing and a curse in this regard. Digital text together with screen readers and refreshable Braille displays has made whole libraries readily accessible, and rideshare technology has made independent mobility more attainable. Simultaneously, screen-based interactions and experiences have only grown in pervasiveness and importance, excluding many of those with visual impairments.
Sensory Substitution, the process of substituting an unavailable modality with another one, has shown promise as an alternative to accommodation, but in recent years meaningful strides in Sensory Substitution for vision have declined in frequency. Given recent advances in Computer Vision, this stagnation is especially disconcerting. Designing Sensory Substitution Devices (SSDs) for vision for use in interactive settings that leverage modern Computer Vision techniques presents a variety of challenges, including perceptual bandwidth, human-computer interaction, and person-centered machine learning considerations. To surmount these barriers, an approach called Personal Foveated Haptic Gaze (PFHG) is introduced. PFHG consists of two primary components: a human-visual-system-inspired interaction paradigm, called Foveated Haptic Gaze (FHG), that is intuitive and flexible enough to generalize to a variety of applications, and a person-centered learning component to address the expressivity limitations of most SSDs. This component is called One-Shot Object Detection by Data Augmentation (1SODDA), a one-shot object detection approach that allows a user to specify the objects they are interested in locating visually and, with minimal effort, realize an object detection model that does so effectively.
The Personal Foveated Haptic Gaze framework was realized in virtual and real-world applications: playing a 3D, interactive, first-person video game (DOOM) and finding user-specified real-world objects. User study results found Foveated Haptic Gaze to be an effective and intuitive interface for interacting with a dynamic visual world using solely haptics. Additionally, 1SODDA achieves performance competitive with few-shot object detection methods and high-framerate many-shot object detectors. Together, these results pave the way for modern Sensory Substitution Devices for vision.
Contributors: Fakhri, Bijan (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy L (Committee member) / Venkateswara, Hemanth (Committee member) / Amor, Heni (Committee member) / Arizona State University (Publisher)
Created: 2020