Description
Dexterous manipulation is a representative task that involves sensorimotor integration underlying fine control of movement. Over the past 30 years, research has provided significant insight into the control mechanisms of force coordination during manipulation tasks. Successful dexterous manipulation is thought to rely on the ability to integrate the sense of digit position with the motor commands responsible for generating digit forces and placement. However, the mechanisms underlying digit position-force coordination are not well understood. This dissertation addresses this question through three experiments based on psychophysics and object lifting tasks. The psychophysics tasks showed that sensed relative digit position was accurately reproduced when sensorimotor transformations occurred with larger vertical fingertip separations, within the same hand, and at the same hand posture. A follow-up experiment using the same digit position-matching task while generating forces in different directions revealed a bias in relative digit position toward the direction of force production. Specifically, subjects reproduced the thumb center of pressure (CoP) higher than the index finger CoP when the thumb and index forces were directed upward and downward, respectively, and vice versa. The lifting tasks further showed that the ability to discriminate relative digit position prior to lifting an object, and to modulate digit forces as a function of digit position to minimize object roll, is robust regardless of whether motor commands for positioning the digits on the object are involved. These results indicate that the erroneous sensorimotor transformations of relative digit position reported here must be compensated during dexterous manipulation by other mechanisms, e.g., visual feedback of fingertip position. Furthermore, predicted sensory consequences derived from the efference copy of voluntary motor commands to generate vertical digit forces may override haptic sensory feedback for the estimation of relative digit position. Lastly, the sensorimotor transformations from haptic feedback to the modulation of digit forces as a function of position appear to be facilitated by motor commands for active digit placement in manipulation.
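As a rough illustration of the digit force-to-position modulation this abstract describes, the following sketch splits an object's weight between thumb and index load forces so that the net torque (and hence object roll) is zero when the two digits' centers of pressure are vertically offset. The two-digit grip geometry, sign conventions, and all names are simplifying assumptions for illustration, not the dissertation's model.

```python
def split_load_forces(grip_force_n, cop_offset_m, grip_width_m, weight_n):
    """Split an object's weight between thumb and index load forces so the
    net torque about the grasp midpoint is zero.

    Simplified two-digit grip: the equal-and-opposite normal (grip) forces
    act at centers of pressure vertically offset by cop_offset_m, producing
    a couple of magnitude grip_force_n * cop_offset_m. The vertical load
    forces act at a horizontal separation grip_width_m, so their difference
    contributes a moment delta_f * grip_width_m / 2. Zeroing the sum gives
    delta_f. Positive cop_offset_m means the thumb CoP is above the index CoP.
    """
    delta_f = -2.0 * grip_force_n * cop_offset_m / grip_width_m
    thumb_load = weight_n / 2.0 + delta_f / 2.0
    index_load = weight_n / 2.0 - delta_f / 2.0
    return thumb_load, index_load

# 10 N grip, thumb CoP 2 cm above the index CoP, 6 cm wide object, 4 N weight:
print(split_load_forces(10.0, 0.02, 0.06, 4.0))  # thumb pushes down, index lifts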
Contributors: Shibata, Daisuke (Author) / Santello, Marco (Thesis advisor) / Dounskaia, Natalia (Committee member) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / McBeath, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Learning and transfer were investigated for a categorical structure in which relevant stimulus information could be mapped without loss from one modality to another. The category space was composed of three non-overlapping, linearly-separable categories. Each stimulus was composed of a sequence of on-off events that varied in duration and number of sub-events (complexity). Categories were learned visually, haptically, or auditorily, and transferred to the same or an alternate modality. The transfer set contained old, new, and prototype stimuli, and subjects made both classification and recognition judgments. The results showed an early learning advantage in the visual modality, with transfer performance varying among the conditions in both classification and recognition. In general, classification accuracy was highest for the category prototype, with false recognition of the category prototype higher in the cross-modality conditions. The results are discussed in terms of current theories of modality transfer, and shed preliminary light on categorical transfer of temporal stimuli.
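Since the stimuli are described only as timed sequences of on-off events, a small sketch may help make the design concrete: category members can be generated by jittering a prototype's event durations. The jitter scheme, durations, and names below are illustrative assumptions, not the actual category construction, which additionally ensured the three categories were non-overlapping and linearly separable.

```python
import random

def make_stimulus(prototype, jitter_ms=40):
    """Create one category member by jittering a prototype's on/off durations.

    A stimulus is a list of (on_ms, off_ms) sub-events, so the same timing
    pattern can be rendered visually (flashes), haptically (taps), or
    auditorily (tones) without information loss.
    """
    return [(max(20, on + random.randint(-jitter_ms, jitter_ms)),
             max(20, off + random.randint(-jitter_ms, jitter_ms)))
            for on, off in prototype]

prototype_a = [(200, 100), (350, 150), (120, 80)]  # hypothetical 3-event prototype
print(make_stimulus(prototype_a))
```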
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Typically, the complete loss or severe impairment of a sense such as vision and/or hearing is compensated through sensory substitution, i.e., the use of an alternative sense for receiving the same information. For individuals who are blind or visually impaired, the alternative senses have predominantly been hearing and touch. For movies, visual content has been made accessible to visually impaired viewers through audio descriptions -- an additional narration that describes scenes, the characters involved and other pertinent details. However, as audio descriptions should not overlap with dialogue, sound effects and musical scores, there is limited time to convey information, often resulting in stunted and abridged descriptions that leave out many important visual cues and concepts. This work proposes a promising multimodal approach to sensory substitution for movies by providing complementary information through haptics, pertaining to the positions and movements of actors, in addition to a film's audio description and audio content. In a ten-minute presentation of five movie clips to ten individuals who were visually impaired or blind, the novel methodology was found to provide an almost twofold increase in the perception of actors' movements in scenes. Moreover, participants appreciated the overall concept of providing a visual perspective on film through haptics and found it useful.
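One plausible way to realize the described mapping from actor position to haptics is to spread vibration intensity across a row of motors centered on the actor's screen location. The Gaussian falloff, motor count, and 0-1 duty-cycle output below are illustrative assumptions, not the thesis's actual hardware mapping.

```python
import math

def actor_to_motor_intensities(x_norm, num_motors=5, spread=0.15):
    """Map an actor's normalized horizontal screen position (0.0 = left edge,
    1.0 = right edge) to duty cycles for a horizontal row of vibration motors.

    A Gaussian falloff around the actor's position gives a smooth sense of
    location; nearby motors vibrate strongly, distant ones barely at all.
    """
    centers = [i / (num_motors - 1) for i in range(num_motors)]
    return [math.exp(-((x_norm - c) ** 2) / (2 * spread ** 2)) for c in centers]

# Actor slightly left of center: the center-left motors vibrate hardest.
print([round(d, 2) for d in actor_to_motor_intensities(0.35)])
```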
Contributors: Viswanathan, Lakshmie Narayan (Author) / Panchanathan, Sethuraman (Thesis advisor) / Hedgpeth, Terri (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Situations arise in which someone needs to navigate inside a building, for example, to find the exit or to retrieve an object. Sometimes vision is not a reliable sense for spatial awareness, perhaps because of a smoky environment, a dark environment, or distractions. I propose a wearable haptic device, a belt or vest, that provides haptic feedback to help people navigate inside a building without relying on the user's vision. The proposed device has an obstacle avoidance component and a navigation component. This paper discusses the challenges of designing and implementing this kind of technology in the context of indoor navigation, where GPS signal is poor. The project explored analyzing accelerometer data for indoor positioning and then using haptic cues from the wearable device to guide navigation, and the device shows promise.
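As a sketch of the accelerometer analysis such a device might rely on, a basic pedestrian step counter can be built from threshold crossings of the acceleration magnitude. The threshold and debounce values below are illustrative and would need per-user calibration; this is not the project's actual algorithm.

```python
def count_steps(accel_samples, threshold=1.2, min_gap=10):
    """Count steps in a stream of (ax, ay, az) accelerometer samples (units of g).

    A step is registered on each rising edge of the acceleration magnitude
    above `threshold` (idle magnitude is ~1 g from gravity), with at least
    `min_gap` samples between detections to debounce bouncy peaks.
    """
    steps, last_step = 0, -min_gap
    was_below = True
    for i, (ax, ay, az) in enumerate(accel_samples):
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        if magnitude > threshold and was_below and i - last_step >= min_gap:
            steps += 1
            last_step = i
        was_below = magnitude <= threshold
    return steps
```

Step counts combined with a heading estimate give a crude dead-reckoned position indoors, which the belt's motors could then translate into directional cues.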
Contributors: Berk, Emily Marie (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Humans rely on a complex interplay of visual, tactile, and proprioceptive feedback to accomplish even the simplest of daily tasks. These senses work together to provide information about the size, weight, shape, density, and texture of the objects being interacted with. While vision is heavily relied upon for many tasks, especially those involving accurate reaches, people can typically accomplish common daily skills without constant visual feedback, instead relying on tactile and proprioceptive cues. Amputees using prosthetic hands, however, do not currently have access to such cues, making these tasks impossible. This experiment was designed to test whether vibratory haptic cues could be used as a replacement for tactile feedback to signal contact in a size discrimination task. Two experiments were run in which subjects were asked to identify changes in block size between consecutive trials using either physical or virtual blocks, testing the accuracy of size discrimination with tactile and haptic feedback, respectively. Blocks randomly increased or decreased in size in increments of 2 to 12 mm between trials in both experiments. Subjects were significantly better at determining size changes using tactile feedback than vibratory haptic cues. This suggests that, while haptic feedback can technically be used to grasp and discriminate between objects of different sizes, it does not provide the same quality of input as tactile cues.
Contributors: Olson, Markey Cierra (Author) / Helms Tillery, Stephen (Thesis director) / Buneo, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2015-05
Description
The goal of this project was to investigate the tactile cues available during multidigit rotational manipulation of objects. A robotic arm and hand equipped with three multimodal tactile sensors were used to gather data about skin deformation during rotation of a haptic knob. Three different rotation speeds and two levels of rotation resistance were tested. In the future, this multidigit task can be generalized to similar rotational tasks, such as opening a bottle or turning a doorknob.
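A minimal sketch of the condition sweep implied by this design (three speeds by two resistance levels) might look as follows; rotate_knob, read_tactile, and log are hypothetical callbacks into the robot arm, the tactile sensors, and the data logger, and the speed values and trial count are placeholders.

```python
from itertools import product

SPEEDS_DEG_S = (30, 60, 90)    # hypothetical rotation speeds (deg/s)
RESISTANCES = ("low", "high")  # the two resistance levels
TRIALS_PER_CONDITION = 5       # hypothetical repetition count

def run_experiment(rotate_knob, read_tactile, log):
    """Sweep the full 3 x 2 condition grid, collecting tactile sensor frames
    for each knob rotation. rotate_knob drives the hand through one rotation
    while sampling the sensors via read_tactile; log stores the labeled data."""
    for speed, resistance, trial in product(SPEEDS_DEG_S, RESISTANCES,
                                            range(TRIALS_PER_CONDITION)):
        frames = rotate_knob(speed_deg_s=speed, resistance=resistance,
                             sample=read_tactile)
        log(speed=speed, resistance=resistance, trial=trial, frames=frames)
```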
Contributors: Challa, Santhi Priya (Author) / Santos, Veronica (Thesis director) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Earth and Space Exploration (Contributor)
Created: 2014-05
Description
Humans constantly rely on a complex interaction of a variety of sensory modalities in order to complete even the simplest of daily tasks. For reaching and grasping to interact with objects, the visual, tactile, and proprioceptive senses provide the majority of the information used. While vision is often relied on for many tasks, most people are able to accomplish common daily rituals without constant visual attention, instead relying mainly on tactile and proprioceptive cues. However, amputees using prosthetic arms do not have access to these cues, making tasks impossible without vision. Even tasks with vision can be incredibly difficult as prosthesis users are unable to modify grip force using touch, and thus tend to grip objects excessively hard to make sure they don’t slip.

Methods such as vibratory sensory substitution have shown promise for providing prosthesis users with a sense of contact and have proved helpful in completing motor tasks. In this thesis, two experiments were conducted to determine whether vibratory cues could be useful in discriminating between sizes. In the first experiment, subjects were asked to grasp a series of hidden virtual blocks of varying sizes, with vibrations on the fingertips as the indication of contact, and to compare the sizes of consecutive blocks. Vibratory haptic feedback significantly increased the accuracy of size discrimination over objects with only visual indication of contact, though accuracy was not as great as in typical grasping tasks with physical blocks. In the second experiment, subjects were asked to adjust their virtual finger position around a series of virtual blocks, with vibratory feedback on the fingertips, using either finger movement or EMG. EMG control yielded significantly lower accuracy in size discrimination, implying that, while proprioceptive feedback alone is not enough to determine size, direct kinesthetic information about finger position is still needed.
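A minimal sketch of the contact-to-vibration loop in such a virtual task, reduced to one dimension: a fingertip motor is driven whenever the virtual finger touches a face of the virtual block. The geometry, contact tolerance, and set_motor callback are assumptions for illustration, not the thesis's actual implementation.

```python
def fingertip_contact(finger_x, box_center, box_width, tolerance=0.002):
    """True when a fingertip (1-D position, meters) is within `tolerance`
    of either face of a block centered at box_center. One-dimensional
    geometry is a simplification for illustration."""
    return abs(abs(finger_x - box_center) - box_width / 2.0) <= tolerance

def update_vibration(fingers, box_center, box_width, set_motor):
    """Drive one fingertip motor per finger: full duty during contact,
    off otherwise. `set_motor(name, duty)` is a hypothetical hardware call."""
    for name, x in fingers.items():
        on = fingertip_contact(x, box_center, box_width)
        set_motor(name, 1.0 if on else 0.0)

# Example: thumb resting on the near face of a 60 mm block centered at 0.
update_vibration({"thumb": -0.030, "index": 0.045}, 0.0, 0.060,
                 lambda name, duty: print(name, duty))
```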
Contributors: Olson, Markey (Author) / Helms-Tillery, Stephen (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In this experiment, a haptic glove with vibratory motors on the fingertips was tested against the standard HTC Vive controller to see if the additional vibrations provided by the glove increased immersion in common gaming scenarios where haptic feedback is provided. Specifically, two scenarios were developed: an explosion scene containing a small and a large explosion, and a box interaction scene that allowed participants to touch a box virtually with their hand. At the start of this project, it was hypothesized that the haptic glove would have a significant positive impact in at least one of these scenarios. Nine participants took part in the study, and immersion was measured through a post-experiment questionnaire. Statistical analysis showed that the haptic glove had a significant impact on immersion in the box interaction scene, but not in the explosion scene. In the end, I conclude that since this haptic glove does not significantly increase immersion across all scenarios when compared to the standard Vive controller, it should not be used as a replacement in its current state.
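The abstract does not specify the statistical test; with nine participants and paired Likert-style ratings, a nonparametric paired comparison such as the Wilcoxon signed-rank test would be one standard choice. The ratings below are hypothetical placeholders purely to show the mechanics, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical per-participant immersion ratings (1-7 Likert) for the box
# interaction scene, glove vs. controller; the real data and test may differ.
glove      = [6, 5, 7, 6, 5, 6, 7, 5, 6]
controller = [4, 5, 5, 4, 5, 5, 6, 4, 5]

stat, p = wilcoxon(glove, controller)  # ties (zero differences) are dropped
print(f"Wilcoxon signed-rank: W={stat}, p={p:.3f}")  # p < 0.05 -> significant
```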

Contributors: Griffieth, Alan P (Author) / McDaniel, Troy (Thesis director) / Selgrad, Justin (Committee member) / Computing and Informatics Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As sensory organs, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, for a non-visual traveler, someone who is blind or has low vision, visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although assistive technology in the form of electronic travel aids (ETAs) has received considerable attention over the last two decades, surprisingly little work has focused on augmenting rather than replacing current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy for a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel. Lastly, it is important to point out that the guiding principle of this work centered on providing the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thus providing the navigator with the information necessary to make informed navigation decisions independently, at a distance.
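To make the static/dynamic distinction concrete, a sketch: a static pattern maps an object's bearing to one motor in a ring of vibrotactile motors, and a dynamic pattern sweeps across motors as the bearing changes over the time interval. The ring layout and motor count are illustrative stand-ins for the actual interface geometry.

```python
def static_pattern(bearing_deg, num_motors=8):
    """Map an object's bearing (0 = straight ahead, clockwise degrees) to the
    index of the nearest motor in a ring of `num_motors` around the torso."""
    sector = 360.0 / num_motors
    return int(((bearing_deg % 360) + sector / 2) // sector) % num_motors

def dynamic_pattern(start_deg, end_deg, steps=4):
    """Represent object movement over a time interval as a sequence of motor
    indices swept from the start bearing to the end bearing."""
    return [static_pattern(start_deg + (end_deg - start_deg) * i / (steps - 1))
            for i in range(steps)]

print(static_pattern(95))      # object to the right -> motor 2 of 8
print(dynamic_pattern(0, 90))  # object moving front-to-right -> [0, 1, 1, 2]
```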
Contributors: Duarte, Bryan Joiner (Author) / McDaniel, Troy (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Societal infrastructure is built with vision at the forefront of daily life. For those with severe visual impairments, this creates countless barriers to the participation in and enjoyment of life's opportunities. Technological progress has been both a blessing and a curse in this regard. Digital text together with screen readers and refreshable Braille displays has made whole libraries readily accessible, and rideshare technology has made independent mobility more attainable. Simultaneously, screen-based interactions and experiences have only grown in pervasiveness and importance, excluding many of those with visual impairments.

Sensory Substitution, the process of substituting an unavailable modality with another one, has shown promise as an alternative to accommodation, but in recent years meaningful strides in Sensory Substitution for vision have declined in frequency. Given recent advances in Computer Vision, this stagnation is especially disconcerting. Designing Sensory Substitution Devices (SSDs) for vision for use in interactive settings that leverage modern Computer Vision techniques presents a variety of challenges, including perceptual bandwidth, human-computer interaction, and person-centered machine learning considerations. To surmount these barriers, an approach called Personal Foveated Haptic Gaze (PFHG) is introduced. PFHG consists of two primary components: a human visual system inspired interaction paradigm, intuitive and flexible enough to generalize to a variety of applications, called Foveated Haptic Gaze (FHG), and a person-centered learning component to address the expressivity limitations of most SSDs. This component is called One-Shot Object Detection by Data Augmentation (1SODDA), a one-shot object detection approach that allows a user to specify the objects they are interested in locating visually and, with minimal effort, realize an object detection model that does so effectively.

The Personal Foveated Haptic Gaze framework was realized in a virtual and a real-world application: playing a 3D, interactive, first-person video game (DOOM) and finding user-specified real-world objects. User study results found Foveated Haptic Gaze to be an effective and intuitive interface for interacting with a dynamic visual world using solely haptics. Additionally, 1SODDA achieves competitive performance among few-shot object detection methods and high-framerate many-shot object detectors. The combination of the two paves the way for modern Sensory Substitution Devices for vision.
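To make the data-augmentation idea behind 1SODDA concrete, here is a minimal sketch that synthesizes a small training set from a single user-provided exemplar image using Pillow. The specific transforms, parameter ranges, and the background-compositing step left as a comment are illustrative assumptions, not the dissertation's actual pipeline.

```python
import random
from PIL import Image, ImageEnhance

def augment_exemplar(exemplar: Image.Image, n: int = 200):
    """Synthesize n training crops from one exemplar image, in the spirit of
    one-shot object detection by data augmentation: each copy gets a random
    mirror, small rotation, brightness change, and size jitter."""
    samples = []
    for _ in range(n):
        img = exemplar.copy()
        if random.random() < 0.5:
            img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # mirror
        img = img.rotate(random.uniform(-20, 20), expand=True)    # small tilt
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))
        scale = random.uniform(0.7, 1.3)                          # size jitter
        img = img.resize((max(1, int(img.width * scale)),
                          max(1, int(img.height * scale))))
        samples.append(img)
    return samples  # then paste onto background scenes and train a detector
```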
Contributors: Fakhri, Bijan (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy L (Committee member) / Venkateswara, Hemanth (Committee member) / Amor, Heni (Committee member) / Arizona State University (Publisher)
Created: 2020