Lack of proprioceptive feedback is one cause of the high upper-limb prosthesis abandonment rate. Without reliable proprioception, prosthesis users cannot interact with their environment in a natural way, which is a source of dissatisfaction. The purpose of this experiment is to investigate the effects of square breathing on learning to navigate without reliable proprioception. Square breathing is thought to influence the vagus nerve, which is linked to increased learning rates. In this experiment, participants were instructed to reach toward targets in a semi-immersive virtual reality environment. Directional error, peak velocity, and peak acceleration of the reaching hand were measured before and after participants underwent square breathing training. As the results of this experiment are inconclusive, further investigation with larger sample sizes and an examination of unperturbed data is needed to fully understand the effects of square breathing on learning new motor strategies under unreliable proprioceptive conditions.
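For context on how such kinematic measures are typically computed, the sketch below derives directional error, peak velocity, and peak acceleration from sampled hand positions. It is a minimal illustration only: it assumes uniformly sampled 3D positions, defines directional error at the moment of peak velocity, and uses a hypothetical function name (reach_kinematics) rather than the study's actual analysis code.

    import numpy as np

    def reach_kinematics(positions, dt, target_dir):
        """Compute simple reach metrics from hand position samples.

        positions : (N, 3) array of hand positions (m), sampled every dt seconds
        target_dir: unit vector from the start position toward the target
        Returns directional error (deg) at peak velocity, peak speed (m/s),
        and peak acceleration magnitude (m/s^2).
        """
        vel = np.gradient(positions, dt, axis=0)   # finite-difference velocity
        acc = np.gradient(vel, dt, axis=0)         # finite-difference acceleration
        speed = np.linalg.norm(vel, axis=1)
        i_pv = int(np.argmax(speed))               # index of peak velocity

        # Directional error: angle between the movement direction at peak
        # velocity and the straight line to the target.
        move_dir = vel[i_pv] / speed[i_pv]
        dir_err = np.degrees(np.arccos(np.clip(move_dir @ target_dir, -1.0, 1.0)))

        return dir_err, speed[i_pv], np.linalg.norm(acc, axis=1).max()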
Following a study conducted in 1991 showing that kinesthetic information affects the visual processing of information when moving an arm in extrapersonal space (Helms Tillery et al.) [1], this research aims to show that virtual-reality (VR) technology can lead to faster and more accurate data acquisition. Previous methods for this kind of research used ultrasonic systems, in which ultrasound emitters and microphones track distance using the speed of sound. That approach made experiments lengthy and the resulting spatial data difficult to synthesize. The purpose of this paper is to report the progress I have made toward capturing spatial data with VR technology in order to extend earlier neuroscience research. The experimental setup used the Oculus Quest 2 VR headset and its hand controllers. The experiment simulation was built in the Unity game engine as a 3D VR world that can be explored interactively with the Oculus. The resulting simulation allows users to interact with a ball in the VR environment without seeing their own body. The simulation can be used together with real-time motion-capture cameras to record live spatial data from the user during trials, although spatial data from within the VR environment itself has not yet been collected.
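Once spatial data can be exported from the VR environment, it will need to be synthesized with the motion-capture stream. The sketch below shows one generic way to pair the two recordings by nearest timestamp. It is purely illustrative: it assumes both streams carry timestamps on a shared clock, and the function name merge_by_timestamp and its arguments are hypothetical, not part of any Oculus or Unity API.

    import numpy as np

    def merge_by_timestamp(vr_times, vr_xyz, mocap_times, mocap_xyz):
        """Pair each VR sample with the motion-capture sample nearest in time.

        vr_times, mocap_times : 1-D arrays of timestamps (seconds, shared clock)
        vr_xyz, mocap_xyz     : (N, 3) and (M, 3) arrays of 3-D positions
        Returns an (N, 7) array: [t, vr_x, vr_y, vr_z, mocap_x, mocap_y, mocap_z].
        """
        idx = np.searchsorted(mocap_times, vr_times)        # insertion points
        idx = np.clip(idx, 1, len(mocap_times) - 1)
        # Choose whichever of the two neighboring mocap samples is closer in time.
        left_closer = (vr_times - mocap_times[idx - 1]) < (mocap_times[idx] - vr_times)
        idx = np.where(left_closer, idx - 1, idx)
        return np.column_stack([vr_times, vr_xyz, mocap_xyz[idx]])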
Many upper limb amputees experience an incessant, post-amputation “phantom limb pain” and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech “rubber hand” illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, like the “BairClaw” presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger–object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while measurements of temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.
Previous research has observed that neural summation can be detected in reaction time tasks. This is examined through race models, as proposed by J.O. Miller. These models are referred to as “race models” because different stimuli “race” to elicit a response during a task. The race model is augmented by the race model inequality, which states that the probability of having responded to two simultaneous signals by a given time can be no greater than the sum of the probabilities of having responded to each signal presented alone. When this inequality is violated, it indicates that neural summation is occurring. In another study, researchers examined how the location of visual stimuli influences neural summation with tactile information, presenting the visual stimuli at different distances and in a mirrored-reflection condition. The results of the mirror condition, however, did not follow those of the other visual conditions, suggesting it has unique properties. The mirrored case is examined more closely in this project, which asks whether the presence of a mirrored representation of the hand affects reaction time during timed tasks; such an effect would suggest the occurrence of neural summation and indicate that a mirrored reflection of the self is interpreted as an independent channel of information. This was measured by evaluating participants’ response times while manipulating the presence of a reflection and checking whether the responses violated the race model. However, the results of this study indicated that the presence of a mirror does not affect reaction time and therefore provided no evidence of neural summation.
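For reference, Miller’s race model inequality can be written as

    F_AB(t) ≤ F_A(t) + F_B(t)    for all times t,

where F_AB is the cumulative distribution function (CDF) of reaction times to the redundant (simultaneous) stimuli and F_A and F_B are the CDFs for each stimulus presented alone; any time t at which the observed F_AB(t) exceeds this bound is a violation and is taken as evidence of neural summation. The sketch below shows one simple way such a check could be implemented from raw reaction times using empirical CDFs. It is an illustrative simplification rather than the analysis used in this study, and the function names are assumptions.

    import numpy as np

    def ecdf(rts, t):
        """Empirical CDF of reaction times, evaluated at times t."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t, side="right") / rts.size

    def race_model_violations(rt_a, rt_b, rt_ab, n_points=50):
        """Return the time points at which F_AB(t) > F_A(t) + F_B(t).

        rt_a, rt_b : reaction times for each single-stimulus condition
        rt_ab      : reaction times for the redundant (simultaneous) condition
        """
        all_rts = np.concatenate([rt_a, rt_b, rt_ab])
        t = np.linspace(all_rts.min(), all_rts.max(), n_points)
        bound = ecdf(rt_a, t) + ecdf(rt_b, t)    # Miller's upper bound
        observed = ecdf(rt_ab, t)
        return t[observed > bound]               # violations, if any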