Matching Items (35)

Effects of Virtual Reality on Memory Rehabilitation

Description

In this project, I investigated the impact of virtual reality on memory retention. To do so, I applied the memorization technique known as the memory palace within a virtual reality environment. Due to Covid-19, I was the only subject in the experiment. To collect meaningful data, I tested myself in randomly generated environments, each with a completely unique set of objects, both outside of a virtual reality environment and within one. First, I conducted a set of 10 tests by walking through a virtual environment on my laptop and recalling as many objects as I could from that environment, recording both the accuracy of my recollection and how long each walkthrough took. Next, I conducted a set of 10 tests in the same virtual environment, but this time with an immersive virtual reality (VR) headset and a completely new set of objects. At the start of the project, it was hypothesized that the immersive VR condition would yield a higher memory retention rate than the non-immersive condition. In the end, the results, albeit from a small number of trials, leaned toward supporting the hypothesis.
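
As a rough illustration of how such trials could be scored, the sketch below computes per-trial recall accuracy and the mean accuracy and completion time for each condition; the trial counts and values shown are hypothetical placeholders, not the study's data.

```python
# Hypothetical scoring sketch: each trial records how many of the placed
# objects were recalled correctly and how long the walkthrough took.
from statistics import mean

# (objects_recalled, objects_placed, seconds_taken) -- placeholder values
laptop_trials = [(7, 10, 95), (6, 10, 102), (8, 10, 88)]
vr_trials     = [(8, 10, 90), (9, 10, 84), (8, 10, 87)]

def summarize(trials):
    accuracy = mean(recalled / placed for recalled, placed, _ in trials)
    seconds  = mean(t for _, _, t in trials)
    return accuracy, seconds

for label, trials in [("laptop", laptop_trials), ("VR headset", vr_trials)]:
    acc, sec = summarize(trials)
    print(f"{label}: mean recall accuracy {acc:.0%}, mean time {sec:.0f}s")
```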

Date Created
  • 2020-05

Facial Expression Recognition Using Machine Learning

Description

In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms to determine whether the resulting systems would prove successful. Even today, many research initiatives continue to develop new models in the hope of discovering solutions to problems such as autonomous driving or determining the emotional value of a single sentence. One currently popular research topic in Machine Learning is the development of Facial Expression Recognition systems: models that classify images of human faces expressing different emotions. To build effective Facial Expression Recognition models, researchers have turned to Deep Learning models, a more advanced class of Machine Learning models known as Neural Networks. More specifically, Convolutional Neural Networks have proven to be among the most effective models for achieving highly accurate results when classifying images of various facial expressions. Convolutional Neural Networks are Deep Learning models capable of processing visual data, such as images and videos, and can be used to identify various facial expressions. In this project, I focused on learning the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network previously developed in a recommended research paper.
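
To make the architecture discussion concrete, here is a minimal sketch of a small Convolutional Neural Network for facial expression classification written with Keras; the 48x48 grayscale inputs and seven emotion classes are assumptions borrowed from common FER datasets, not details of the network implemented from the referenced paper.

```python
# Minimal CNN sketch for facial expression classification (assumed
# 48x48 grayscale inputs and 7 emotion classes, as in common FER datasets).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),   # one output per emotion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```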

Date Created
  • 2020-05

EMG-Interfaced Device for the Detection and Alleviation of Freezing of Gait in Individuals with Parkinson's Disease

Description

Parkinson's disease is a neurodegenerative disorder of the central nervous system that affects a host of daily activities and involves a variety of symptoms, including tremors, slurred speech, and rigid muscles. It is the second most common movement disorder globally. In Stage 3 of Parkinson's, afflicted individuals begin to develop an abnormal gait pattern known as freezing of gait (FoG), which is characterized by decreased step length, shuffling, and eventually a complete inability to move, often resulting in a fall. Surface electromyography (sEMG) is a diagnostic tool that measures electrical activity in the muscles to assess overall muscle function. Most conventional EMG systems, however, are bulky, tethered to a single location, expensive, and primarily used in lab or clinical settings. This project explores an affordable, open-source, and portable platform called the Open Brain-Computer Interface (OpenBCI). The purpose of the proposed device is to detect gait patterns by leveraging sEMG signals acquired through the OpenBCI and to help a patient overcome an episode using haptic feedback mechanisms. Previously designed devices with similar purposes have used accelerometry for detection and audio and visual feedback mechanisms in their designs.
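
As an illustration of the detection idea only (not the project's actual algorithm), the sketch below watches a stream of sEMG samples, computes a short-window RMS amplitude, and fires a haptic cue when muscle activity stays low for several consecutive windows; the sampling rate, threshold, and the trigger_haptic_cue stub are all hypothetical.

```python
# Hypothetical FoG detection sketch: windowed RMS of an sEMG channel with a
# simple low-activity threshold; acquisition and haptic output are stubbed.
import numpy as np
from collections import deque

FS = 250                   # assumed sampling rate (Hz)
WINDOW = FS // 2           # 0.5 s analysis window
RMS_THRESHOLD = 5.0        # placeholder amplitude threshold (uV)
QUIET_WINDOWS_FOR_FOG = 4  # consecutive quiet windows before alerting

def trigger_haptic_cue():
    print("FoG suspected: pulsing haptic feedback")  # stub for the actuator

def monitor(sample_stream):
    buffer, quiet = deque(maxlen=WINDOW), 0
    for sample in sample_stream:
        buffer.append(sample)
        if len(buffer) == WINDOW:
            rms = float(np.sqrt(np.mean(np.square(buffer))))
            quiet = quiet + 1 if rms < RMS_THRESHOLD else 0
            if quiet >= QUIET_WINDOWS_FOR_FOG:
                trigger_haptic_cue()
                quiet = 0
            buffer.clear()

# Example with synthetic data: a quiet signal eventually triggers the cue.
monitor(iter(np.random.normal(0, 1.0, FS * 5)))
```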

Date Created
  • 2016-05

Noninvasive and Accurate Fine Motor Rehabilitation Through a Rhythm Based Game Using a Leap Motion Controller: Usability Evaluation of Leap Motion Game

Description

This paper presents a system to deliver automated, noninvasive, and effective fine motor rehabilitation through a rhythm-based game using a Leap Motion Controller. In the game, hand gestures serve as input and must match the rhythm and gestures shown on screen, allowing a physical therapist to represent an exercise session involving the user's hand and finger joints as a series of patterns. Fine motor rehabilitation plays an important role in recovery from and management of the effects of stroke, Parkinson's disease, multiple sclerosis, and other conditions. Individuals with these conditions exhibit a wide range of impairment in fine motor movement, and the serious game developed here is designed to work with individuals at different levels of impairment. In a pilot study conducted in partnership with South West Advanced Neurological Rehabilitation (SWAN Rehab) in Phoenix, Arizona, we compared the performance of individuals with fine motor impairment to that of individuals without such impairment to determine whether a human-centered approach, adapting the game to a user's range of motion, can allow an individual with fine motor impairment to perform at a level similar to a non-impaired user.
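
As a sketch of the core game loop only (with a hypothetical event format and timing tolerance, not the actual implementation), the snippet below scores whether the player performed each prompted gesture close enough in time to its beat:

```python
# Hypothetical scoring sketch: each beat prompts a gesture, and the player's
# detected gestures (e.g., from a hand-tracking callback) are matched against
# the prompt within a timing tolerance.
TOLERANCE_S = 0.25  # how far from the beat a gesture may land and still count

def score_session(prompts, detections):
    """prompts: [(beat_time_s, gesture_name)]; detections: same format."""
    hits = 0
    for beat_time, wanted in prompts:
        if any(name == wanted and abs(t - beat_time) <= TOLERANCE_S
               for t, name in detections):
            hits += 1
    return hits / len(prompts) if prompts else 0.0

prompts = [(1.0, "pinch"), (2.0, "fist"), (3.0, "open_hand")]
detections = [(1.1, "pinch"), (2.4, "fist"), (3.05, "open_hand")]
print(f"accuracy: {score_session(prompts, detections):.0%}")  # 2 of 3 hit
```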

Date Created
  • 2018-05

Using Goodness of Pronunciation Features for Spoken Nasality Detection

Description

Speech nasality disorders are characterized by abnormal resonance in the nasal cavity. Hypernasal speech is of particular interest; it is characterized by an inability to prevent improper nasalization of vowels and poor articulation of plosive and fricative consonants, and can lead to negative communicative and social consequences. It can be associated with a range of conditions, including cleft lip or palate, velopharyngeal dysfunction (a physically or neurologically defective closure of the soft palate, which regulates resonance between the oral and nasal cavities), dysarthria, or hearing impairment, and can also be an early indicator of developing neurological disorders such as ALS. Hypernasality is typically scored perceptually by a Speech-Language Pathologist (SLP). Misdiagnosis could lead to inadequate treatment plans and poor treatment outcomes for a patient. Moreover, for some applications, particularly screening for early neurological disorders, the use of an SLP is not practical. Hence this work demonstrates a data-driven approach to objective assessment of hypernasality through the use of Goodness of Pronunciation features. These features capture the overall precision of articulation of a speaker on a phoneme-by-phoneme basis, allowing the demonstrated models to achieve a Pearson correlation coefficient of 0.88 on low-nasality speakers, the population of most interest for this sort of technique. These results are comparable to milestone methods in this domain.
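
For context, a classical formulation of a Goodness of Pronunciation score is the average log posterior of the canonical phoneme over the frames aligned to it. The sketch below illustrates that computation on placeholder posteriors and may differ from the exact feature pipeline used in this work.

```python
# Sketch of a classical Goodness of Pronunciation (GOP) score: the average
# log posterior of the canonical phoneme over the frames aligned to it.
# The posterior matrix is a hypothetical stand-in for an acoustic model's output.
import numpy as np

def gop_score(frame_posteriors, canonical_index):
    """frame_posteriors: (num_frames, num_phonemes) posterior probabilities
    for the frames aligned to one phoneme; canonical_index: expected phoneme."""
    eps = 1e-10
    return float(np.mean(np.log(frame_posteriors[:, canonical_index] + eps)))

# Toy example: 4 frames, 3 phoneme classes, expected phoneme at index 1.
posteriors = np.array([[0.1, 0.8, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.3, 0.5, 0.2],
                       [0.1, 0.6, 0.3]])
print(f"GOP: {gop_score(posteriors, 1):.3f}")  # closer to 0 = better match
```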

Date Created
  • 2018-05

Web Application for Sorority Involvement Tracking

Description

Most collegiate organizations aim to unite students with common interests and engage them in a like-minded community of peers. A significant sub-group of these organizations are classified as sororities and fraternities, commonly known as Greek Life. Member involvement is a crucial element of Greek Life, as participation in philanthropic events, chapter meetings, rituals, recruitment events, and similar activities often reflects the state of the organization. The purpose of this project is to create a web application that allows members of an Arizona State University sorority to view their involvement activity as outlined by the chapter. Maintaining the balance between academics, sleep, a social life, and extracurricular activities can be difficult for college students. With this website, members can view their attendances, absences, and study/volunteer hours to track their progress toward the involvement requirements set by the chapter. This knowledge makes it easier to plan schedules and alleviates some of the stress associated with time management of sorority events, homework, and studying. The site is also designed for the sorority leadership to analyze and track member participation. Members can submit their participation in events, eliminating the need for manual counting and calculation. The website administrator(s) can view and approve data from any and all members. The website was developed using HTML, CSS, and JavaScript in conjunction with Firebase for the back-end database. Human-Computer Interaction (HCI) tools and techniques were used throughout the development process to aid in prototyping, visual design, and evaluation. The front-end appearance of the website was designed to mimic the data manipulation used in the current involvement tracking system while presenting it in a more personalized and aesthetically pleasing manner.

Date Created
  • 2018-12

Exploring the Design of Vibrotactile Cues for Visio-Haptic Sensory Substitution

Description

This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors mounted on the back of a chair provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can convey the basic building blocks of facial movements according to the Facial Action Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
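
As one way to picture such spatiotemporal patterns, the sketch below maps a few facial Action Units to hypothetical sequences of motor activations on a back-mounted array; the AU-to-motor assignments and timings are illustrative, not the mapping evaluated in the study.

```python
# Hypothetical mapping from facial Action Units to vibrotactile patterns on a
# back-mounted motor array: each pattern is a sequence of (motor_ids, seconds).
import time

ACTION_UNIT_PATTERNS = {
    "AU12_lip_corner_puller": [([4, 5], 0.2), ([3, 6], 0.2)],   # "smile" sweep
    "AU4_brow_lowerer":       [([0, 1], 0.3)],                  # upper-back pulse
    "AU26_jaw_drop":          [([7], 0.15), ([8], 0.15), ([9], 0.15)],
}

def activate(motor_ids, on):
    # Stub for the motor driver; a real system would command the vibration
    # motors through its haptic controller here.
    print(("on  " if on else "off ") + str(motor_ids))

def play_pattern(action_unit):
    for motor_ids, duration in ACTION_UNIT_PATTERNS[action_unit]:
        activate(motor_ids, True)
        time.sleep(duration)
        activate(motor_ids, False)

play_pattern("AU12_lip_corner_puller")
```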

Date Created
  • 2014-05

Exploring the Impact of a Haptic Glove on Immersion

Description

In this experiment, a haptic glove with vibratory motors on the fingertips was tested against the standard HTC Vive controller to see whether the additional vibrations provided by the glove increased immersion in common gaming scenarios where haptic feedback is provided. Specifically, two scenarios were developed: an explosion scene containing a small and a large explosion, and a box interaction scene that allowed participants to touch a virtual box with their hand. At the start of this project, it was hypothesized that the haptic glove would have a significant positive impact in at least one of these scenarios. Nine participants took part in the study, and immersion was measured through a post-experiment questionnaire. Statistical analysis of the results showed that the haptic glove had a significant impact on immersion in the box interaction scene, but not in the explosion scene. In the end, I conclude that since this haptic glove does not significantly increase immersion across all scenarios when compared to the standard Vive controller, it should not be used as a replacement in its current state.
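
The abstract does not state which statistical test was used; purely as an illustration, the sketch below runs a paired t-test on hypothetical per-participant immersion scores for the glove and controller conditions.

```python
# Illustrative paired comparison of questionnaire immersion scores for the two
# conditions; the nine scores below are hypothetical placeholders, and the
# actual study's test and data may differ.
from scipy import stats

controller_scores = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0, 3.7, 4.3, 4.1]
glove_scores      = [4.6, 4.0, 4.9, 4.4, 4.5, 4.3, 4.1, 4.7, 4.4]

t_stat, p_value = stats.ttest_rel(glove_scores, controller_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in reported immersion is statistically significant.")
```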

Date Created
  • 2021-05

The Dyadic Interaction Assistant for Individuals with Visual Impairments

Description

This paper presents an overview of The Dyadic Interaction Assistant for Individuals with Visual Impairments, with a focus on the software component. The system is designed to communicate facial information (facial Action Units, facial expressions, and facial features) to an individual with visual impairments during a dyadic interaction between two people sitting across from each other. Composed of (1) a webcam, (2) software, and (3) a haptic device, the system can also be described as a series of input, processing, and output stages, respectively. The processing stage builds on the open-source FaceTracker software and the Computer Expression Recognition Toolbox (CERT). While these two sources provide the facial data, a program developed in the Qt Creator IDE and several AppleScripts adapt the information to a Graphical User Interface (GUI) and output the data to a comma-separated values (CSV) file. It is the first software to convey all three types of facial information at once in real time. Future work includes testing and evaluating the quality of the software with human subjects (both sighted and blind/low vision), integrating the haptic device to complete the system, and evaluating the entire system with human subjects (sighted and blind/low vision).
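
To illustrate the kind of per-frame record the output stage could write, the sketch below logs hypothetical facial data to a CSV file; the column names and values are illustrative, not those produced by the actual Qt/CERT pipeline.

```python
# Illustrative CSV logging of per-frame facial data (Action Unit intensities,
# an expression label, and a feature coordinate pair); field names are
# hypothetical, not those emitted by the actual system.
import csv
import time

FIELDS = ["timestamp", "AU12", "AU4", "expression",
          "mouth_corner_x", "mouth_corner_y"]

with open("facial_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # One hypothetical frame of output; a real run would append continuously.
    writer.writerow({
        "timestamp": time.time(),
        "AU12": 0.72, "AU4": 0.05,
        "expression": "happy",
        "mouth_corner_x": 312, "mouth_corner_y": 205,
    })
```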

Date Created
  • 2013-05

Utilizing Neural Networks to Predict Freezing of Gait in Parkinson's Patients

Description

The artificial neural network is a form of machine learning that is highly effective at recognizing patterns in large, noise-filled datasets. Possessing these attributes uniquely qualifies the neural network as a mathematical basis for adaptability in personal biomedical devices. The purpose of this study was to determine the viability of neural networks in predicting Freezing of Gait (FoG), a symptom of Parkinson's disease in which the patient's legs are suddenly rendered unable to move. More specifically, a class of neural networks known as layered recurrent networks (LRNs) was applied to an open-source FoG experimental dataset donated to the Machine Learning Repository of the University of California, Irvine. The independent variables in this experiment (the subject being tested, the neural network architecture, and the sampling of the majority classes) were each varied and compared against the performance of the neural network in predicting future FoG events. It was determined that single-layered recurrent networks are a viable method of predicting FoG events given the volume of training data available, though results varied significantly between patients. For the three patients tested, shank acceleration data were used to train networks with peak precision/recall values of 41.88%/47.12%, 89.05%/29.60%, and 57.19%/27.39%, respectively. These values were obtained for networks optimized using detection theory rather than for desired values of precision and recall. Furthermore, due to the nature of the experiments performed in this study, these values represent the lower-bound performance of layered recurrent networks trained to detect gait freezing; as such, they may be improved through a variety of measures.
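
To make the evaluation pipeline concrete, the sketch below trains a small recurrent network on windows of synthetic shank-acceleration data and reports precision and recall for the freeze class; the window length, layer size, and data are illustrative stand-ins, not the exact layered recurrent network configuration or dataset used in the thesis.

```python
# Illustrative recurrent-network sketch for FoG prediction from windowed
# shank acceleration; all data here are synthetic placeholders.
import numpy as np
import tensorflow as tf
from sklearn.metrics import precision_score, recall_score

WINDOW, CHANNELS = 128, 3          # assumed window length and 3-axis signal
X = np.random.randn(400, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=400)          # 1 = upcoming freeze episode

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.SimpleRNN(32),              # single recurrent layer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:300], y[:300], epochs=3, batch_size=32, verbose=0)

pred = (model.predict(X[300:], verbose=0).ravel() > 0.5).astype(int)
print("precision:", precision_score(y[300:], pred, zero_division=0))
print("recall:   ", recall_score(y[300:], pred, zero_division=0))
```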

Date Created
  • 2016-05