Matching Items (38)

The Dyadic Interaction Assistant for Individuals with Visual Impairments

Description

This paper presents an overview of The Dyadic Interaction Assistant for Individuals with Visual Impairments, with a focus on the software component. The system is designed to communicate facial information (facial action units, facial expressions, and facial features) to an individual with visual impairments in a dyadic interaction between two people sitting across from each other. Comprising (1) a webcam, (2) software, and (3) a haptic device, the system can also be described as a series of input, processing, and output stages, respectively. The processing stage builds on the open-source FaceTracker software and the Computer Expression Recognition Toolbox (CERT) application. While these two sources provide the facial data, a program developed in the Qt Creator IDE and several AppleScripts adapt the information to a graphical user interface (GUI) and output the data to a comma-separated values (CSV) file. It is the first software to convey all three types of facial information at once in real time. Future work includes testing and evaluating the quality of the software with human subjects (both sighted and blind/low vision), integrating the haptic device to complete the system, and evaluating the entire system with human subjects (sighted and blind/low vision).
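The CSV output stage described above can be sketched as follows. This is a minimal illustration, not the project's actual program: the column names, action unit, and values are hypothetical stand-ins for the per-frame facial data the system records.

```python
import csv
import io

# Hypothetical schema: one row per video frame, combining the three kinds of
# facial information the system conveys (an action unit intensity, an
# expression label, and a tracked facial feature point).
FIELDS = ["frame", "AU12_intensity", "expression", "mouth_corner_x", "mouth_corner_y"]

def write_frames(rows, stream):
    """Write per-frame facial data as comma-separated values."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

# Write a single illustrative frame to an in-memory buffer.
buf = io.StringIO()
write_frames(
    [{"frame": 0, "AU12_intensity": 0.8, "expression": "happy",
      "mouth_corner_x": 112, "mouth_corner_y": 204}],
    buf,
)
```

In the real system each row would come from FaceTracker and CERT rather than being hand-written.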

Date Created
  • 2013-05

Exploring the Design of Vibrotactile Cues for Visio-Haptic Sensory Substitution

Description

This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System (FACS). A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
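A spatiotemporal vibration pattern of the kind explored above can be sketched as a timed sequence of active motors. The 4x4 grid size, the motor indexing, and the example pattern here are illustrative assumptions, not the actual layout or patterns from the study.

```python
# Assumed layout: a 4x4 grid of vibration motors on the chair back,
# addressed as (row, column) with row 0 at the top.
GRID_ROWS, GRID_COLS = 4, 4

# A pattern is a sequence of frames: (set of active motors, duration in ms).
# This example sweeps a full row of motors upward, the sort of shape that
# might accompany an upward facial movement such as a brow raise.
upward_sweep = [
    ({(3, c) for c in range(GRID_COLS)}, 150),
    ({(2, c) for c in range(GRID_COLS)}, 150),
    ({(1, c) for c in range(GRID_COLS)}, 150),
    ({(0, c) for c in range(GRID_COLS)}, 150),
]

def total_duration(pattern):
    """Total playback time of a pattern in milliseconds."""
    return sum(ms for _, ms in pattern)

def validate(pattern):
    """Check that every referenced motor exists on the grid."""
    return all(
        0 <= r < GRID_ROWS and 0 <= c < GRID_COLS
        for motors, _ in pattern for (r, c) in motors
    )
```

Representing patterns as plain data like this keeps the design space easy to enumerate and test in a behavioral study.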

Date Created
  • 2014-05

Utilizing Neural Networks to Predict Freezing of Gait in Parkinson's Patients

Description

The artificial neural network is a form of machine learning that is highly effective at recognizing patterns in large, noise-filled datasets. Possessing these attributes uniquely qualifies the neural network as a mathematical basis for adaptability in personal biomedical devices. The purpose of this study was to determine the viability of neural networks in predicting Freezing of Gait (FoG), a symptom of Parkinson's disease in which the patient's legs are suddenly rendered unable to move. More specifically, a class of neural networks known as layered recurrent networks (LRNs) was applied to an open-source FoG experimental dataset donated to the Machine Learning Repository of the University of California at Irvine. The independent variables in this experiment (the subject being tested, the neural network architecture, and the sampling of the majority classes) were each varied and compared against the performance of the neural network in predicting future FoG events. It was determined that single-layered recurrent networks are a viable method of predicting FoG events given the volume of the training data available, though results varied significantly between different patients. For the three patients tested, shank acceleration data was used to train networks with peak precision/recall values of 41.88%/47.12%, 89.05%/29.60%, and 57.19%/27.39% respectively. These values were obtained for networks optimized using detection theory rather than optimized for desired values of precision and recall. Furthermore, due to the nature of the experiments performed in this study, these values are representative of the lower-bound performance of layered recurrent networks trained to detect gait freezing. As such, these values may be improved through a variety of measures.
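The precision/recall figures reported above are computed from the network's binary predictions against the labeled FoG events. As a reminder of what those two numbers measure, here is the standard computation on a toy prediction sequence (the data is made up for illustration):

```python
def precision_recall(predicted, actual):
    """Precision and recall for binary FoG predictions (1 = freeze).

    Precision: fraction of predicted freezes that were real.
    Recall: fraction of real freezes that were predicted.
    """
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: 3 true freeze windows; the model flags 4 windows and
# catches 2 of the real ones.
p, r = precision_recall([1, 1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1, 0])
```

The trade-off between these two quantities is exactly what the detection-theory optimization mentioned above navigates.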

Date Created
  • 2016-05

Convolutional Neural Networks for Facial Expression Recognition

Description

This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. In the case of facial expression recognition, no datasets of this size are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN originally trained for another computer vision task is used. Work for this research involved creating a system that can run a CNN, can extract feature vectors from the CNN, and can classify these extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization used on input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts on facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
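The pipeline described above, a frozen pre-trained CNN used only as a feature extractor, with a light classifier trained on the extracted vectors, can be sketched as below. The "CNN" here is a deliberate stub that returns deterministic pseudo-random vectors, and the nearest-centroid classifier is a stand-in for whatever classifier the system actually used; both are illustrative assumptions.

```python
import math
import random

def extract_features(image_id, dim=8):
    """Stub for reading a feature vector from an intermediate CNN layer.

    In the real system this would be an activation vector from a network
    pre-trained on another vision task; here it is a seeded random vector.
    """
    rng = random.Random(image_id)
    return [rng.gauss(0, 1) for _ in range(dim)]

class NearestCentroid:
    """Minimal classifier trained on extracted feature vectors."""

    def fit(self, vectors, labels):
        sums, counts = {}, {}
        for v, y in zip(vectors, labels):
            acc = sums.setdefault(y, [0.0] * len(v))
            for i, x in enumerate(v):
                acc[i] += x
            counts[y] = counts.get(y, 0) + 1
        # One mean vector per expression class.
        self.centroids = {y: [x / counts[y] for x in s] for y, s in sums.items()}
        return self

    def predict(self, v):
        return min(self.centroids, key=lambda y: math.dist(v, self.centroids[y]))

# "Train" the classifier on features extracted for four labeled images.
train_ids = [1, 2, 3, 4]
labels = ["happy", "happy", "sad", "sad"]
clf = NearestCentroid().fit([extract_features(i) for i in train_ids], labels)
```

The key point of naïve domain adaptation is that only the small classifier is trained on FER data; the expensive feature detectors are reused as-is.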

Date Created
  • 2016-05

EMG-Interfaced Device for the Detection and Alleviation of Freezing of Gait in Individuals with Parkinson's Disease

Description

Parkinson's disease is a neurodegenerative disorder of the central nervous system that affects a host of daily activities and involves a variety of symptoms, including tremors, slurred speech, and rigid muscles. It is the second most common movement disorder globally. In Stage 3 of Parkinson's, afflicted individuals begin to develop an abnormal gait pattern known as freezing of gait (FoG), which is characterized by decreased step length, shuffling, and eventually a complete, temporary inability to move, which often results in a fall. Surface electromyography (sEMG) is a diagnostic tool that measures electrical activity in the muscles to assess overall muscle function. Most conventional EMG systems, however, are bulky, tethered to a single location, expensive, and primarily used in a lab or clinical setting. This project explores an affordable, open-source, and portable platform called the Open Brain-Computer Interface (OpenBCI). The purpose of the proposed device is to detect gait patterns by leveraging sEMG signals from the OpenBCI and to help a patient overcome a freezing episode using haptic feedback mechanisms. Previously designed devices with similar intended purposes have used accelerometry as the method of detection, as well as audio and visual feedback mechanisms, in their design.
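A detector in the spirit of the device described above can be sketched as a windowed RMS measure of one sEMG channel, flagging a freeze when muscle activity collapses. The window size, threshold, and signal values are illustrative assumptions, not the project's actual parameters.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of sEMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_freeze(samples, window_size=4, threshold=0.2):
    """Return start indices of windows whose RMS falls below the threshold."""
    freezes = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[start:start + window_size]) < threshold:
            # Here the device would trigger its haptic feedback cue.
            freezes.append(start)
    return freezes

signal = [0.5, -0.6, 0.4, -0.5,    # normal gait activity
          0.01, -0.02, 0.01, 0.0]  # sudden drop: candidate freeze
```

A real detector would need per-patient calibration and more robust features, but the window-then-threshold structure is the common starting point.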

Date Created
  • 2016-05

MisophoniAPP: A Website for Treating Misophonia

Description

This paper introduces MisophoniAPP, a new website for managing misophonia. It will briefly discuss the nature of this chronic syndrome, which is the experience of reacting strongly to certain everyday sounds, or “triggers”. Various forms of Cognitive Behavioral Therapy and the Neural Repatterning Technique are currently used to treat misophonia, but they are not guaranteed to work for every patient. Few apps exist to help patients with their therapy, so this paper describes the design and creation of a new website that combines exposure therapy, relaxation, and gamification to help patients alleviate their misophonic reflexes.

Date Created
  • 2019-05

Fresh15

Description

Fresh15 is an iOS application geared towards helping college students eat healthier, based on a user's preferences for price range, food restrictions, and favorite ingredients. Our application also considers that students may have to order their ingredients online if they lack access to transportation.

Date Created
  • 2018-05

Using Goodness of Pronunciation Features for Spoken Nasality Detection

Description

Speech nasality disorders are characterized by abnormal resonance in the nasal cavity. Hypernasal speech is of particular interest; it is characterized by an inability to prevent improper nasalization of vowels and by poor articulation of plosive and fricative consonants, and it can lead to negative communicative and social consequences. It can be associated with a range of conditions, including cleft lip or palate, velopharyngeal dysfunction (a physical or neurological defective closure of the soft palate that regulates resonance between the oral and nasal cavities), dysarthria, or hearing impairment, and can also be an early indicator of developing neurological disorders such as ALS. Hypernasality is typically scored perceptually by a Speech-Language Pathologist (SLP). Misdiagnosis can lead to inadequate treatment plans and poor treatment outcomes for a patient. Also, for some applications, particularly screening for early neurological disorders, the use of an SLP is not practical. Hence, this work demonstrates a data-driven approach to objective assessment of hypernasality through the use of Goodness of Pronunciation (GOP) features. These features capture the overall precision of articulation of a speaker on a phoneme-by-phoneme basis, allowing the demonstrated models to achieve a Pearson correlation coefficient of 0.88 on low-nasality speakers, the population of most interest for this sort of technique. These results are comparable to milestone methods in this domain.
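A Goodness of Pronunciation style score is commonly computed as the average log-posterior of the intended phoneme over its aligned frames, relative to the best-scoring competitor. The sketch below illustrates that shape with made-up frame posteriors; a real system would obtain the posteriors from an acoustic model, and the exact GOP variant used in this work may differ.

```python
import math

def gop(frame_posteriors, phoneme):
    """Average log-odds of the intended phoneme vs. the best competitor.

    frame_posteriors: list of {phoneme: P(phoneme | frame)} dicts, one per
    frame aligned to the intended phoneme. A well-articulated phoneme
    scores near 0; a poorly articulated one scores strongly negative.
    """
    total = 0.0
    for post in frame_posteriors:
        best = max(post.values())
        total += math.log(post[phoneme]) - math.log(best)
    return total / len(frame_posteriors)

# Well-articulated /t/: the intended phoneme dominates every frame.
good = [{"t": 0.9, "d": 0.1}, {"t": 0.8, "d": 0.2}]
# Poorly articulated /t/: a competitor wins on every frame.
poor = [{"t": 0.2, "d": 0.8}, {"t": 0.3, "d": 0.7}]
```

Per-phoneme scores like this are what let the approach localize imprecise articulation rather than only giving an utterance-level rating.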

Date Created
  • 2018-05

Noninvasive and Accurate Fine Motor Rehabilitation Through a Rhythm Based Game Using a Leap Motion Controller: Usability Evaluation of Leap Motion Game

Description

This paper presents a system to deliver automated, noninvasive, and effective fine motor rehabilitation through a rhythm-based game using a Leap Motion Controller. The system is a rhythm game in which hand gestures are used as input and must match the rhythm and gestures shown on screen, thus allowing a physical therapist to represent an exercise session involving the user's hand and finger joints as a series of patterns. Fine motor rehabilitation plays an important role in recovery from and improvement of the effects of stroke, Parkinson's disease, multiple sclerosis, and more. Individuals with these conditions possess a wide range of impairment in terms of fine motor movement. The serious game developed takes this into account and is designed to work with individuals with different levels of impairment. In a pilot study, in partnership with South West Advanced Neurological Rehabilitation (SWAN Rehab) in Phoenix, Arizona, we compared the performance of individuals with fine motor impairment to individuals without this impairment to determine whether a human-centered approach and adapting to a user's range of motion can allow an individual with fine motor impairment to perform at a similar level as a non-impaired user.
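The adaptation to a user's range of motion described above can be sketched as a per-user calibration step followed by normalization of raw sensor readings. The calibration values and units here are hypothetical; the actual game's calibration procedure is not specified in this summary.

```python
def calibrate(samples):
    """Record a user's personal min/max extension from a calibration phase."""
    return min(samples), max(samples)

def normalize(raw, lo, hi):
    """Map a raw sensor reading onto [0, 1] within the user's own range.

    A user with limited extension can still reach the full in-game range,
    because the mapping is relative to their calibrated limits.
    """
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (raw - lo) / (hi - lo)))

# An impaired user whose full extension spans only 10..40 (arbitrary units).
lo, hi = calibrate([12, 10, 38, 40, 25])
```

With this mapping, the same gesture pattern chart can be played by users with very different physical ranges of motion.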

Date Created
  • 2018-05

HapBack - Providing Spatial Awareness at a Distance Using Haptic Stimulation

Description

This paper presents a study conducted to gain knowledge of how an object's relative 3-dimensional position can be communicated to individuals who are visually impaired or blind. The HapBack, a continuation of the HaptWrap V1.0 (Duarte et al., 2018), focuses on the perception of objects and their distances in 3-dimensional space using haptic communication. The HapBack is a device consisting of two elastic bands secured horizontally around the user's torso and two backpack straps secured along the user's back. The backpack straps are embedded with 10 vibrotactile motors evenly positioned along the spine. The device is designed to provide a wearable interface for blind and visually impaired individuals in order to understand how the positions of objects in 3-dimensional space are perceived through haptic communication. We analyzed the accuracy of the HapBack device along three vectors: (1) two different modes of vibration, absolute and relative; (2) the location of the vibrotactile motors in absolute mode; and (3) the location of the vibrotactile motors in relative mode. The results support that the HapBack's vibrotactile patterns were intuitively mapped to the distances represented in the study. By analyzing the intuitiveness of the vibrotactile patterns and the accuracy of the users' responses, we gained a better understanding of how distance can be perceived through haptic communication by individuals who are blind.
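The two vibration modes compared in the study can be sketched as below, for a strap of 10 motors indexed 0 (bottom of the spine) to 9 (top). The distance range, band boundaries, and exact mappings are illustrative assumptions, not the study's actual parameters.

```python
NUM_MOTORS = 10
MAX_DISTANCE_M = 3.0  # assumed maximum represented distance, in meters

def absolute_motor(distance_m):
    """Absolute mode: each distance band activates one dedicated motor."""
    band = int(distance_m / MAX_DISTANCE_M * NUM_MOTORS)
    return min(NUM_MOTORS - 1, max(0, band))

def relative_sequence(old_m, new_m):
    """Relative mode: sweep the motors between the old and new distance
    bands, conveying the direction of change rather than a fixed position."""
    a, b = absolute_motor(old_m), absolute_motor(new_m)
    step = 1 if b >= a else -1
    return list(range(a, b + step, step))
```

Contrasting a positional code (absolute) with a motion code (relative) is one natural way to probe which mapping users find more intuitive.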

Date Created
  • 2019-12