Matching Items (11)
Description
This thesis discusses the quantitative evaluation of movement quality during stroke rehabilitation. Previous research has shown hospital-based stroke rehabilitation to be effective; here, we study the issues that arise when creating a system that can be deployed in a patient's home. The limited motion capture available from a reduced number of sensors complicates the design of kinematic features for quantitative evaluation, and the hierarchical, three-level structure of the rehabilitation tasks requires a new feature design. This thesis presents the design of kinematic features for a home-based stroke rehabilitation system. Results of the most challenging classifier are shown and demonstrate the effectiveness of the design. A comparison between modern classification techniques and low-computational-cost, threshold-based classification using the same features is also presented.
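
The comparison described above, a low-cost threshold rule versus a modern classifier on the same kinematic features, can be sketched as follows. The feature names, threshold value, and synthetic data below are illustrative assumptions, not details taken from the thesis.

```python
# Sketch: threshold-based vs. model-based classification of movement quality
# from kinematic features. The features, labels, and threshold below are
# illustrative placeholders, not values from the thesis.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic kinematic features: [peak hand speed, trajectory smoothness]
# for "unimpaired" (label 0) and "impaired" (label 1) reaching movements.
unimpaired = rng.normal([1.0, 0.9], 0.10, size=(100, 2))
impaired = rng.normal([0.6, 0.5], 0.15, size=(100, 2))
X = np.vstack([unimpaired, impaired])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Low-computational-cost rule: threshold a single kinematic feature.
threshold = 0.8                                   # assumed cut-off on peak speed
y_thresh = (X_te[:, 0] < threshold).astype(int)

# A "modern" classifier trained on the same features.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
y_svm = svm.predict(X_te)

print("threshold rule accuracy:", accuracy_score(y_te, y_thresh))
print("SVM accuracy:           ", accuracy_score(y_te, y_svm))
```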
ContributorsCheng, Long (Author) / Turaga, Pavan (Thesis advisor) / Arizona State University (Publisher)
Created2012
Description
Motion capture using cost-effective sensing technology is challenging, and the huge success of the Microsoft Kinect has attracted researchers to uncover the potential of this technology for computer vision applications. This thesis presents an upper-body motion analysis for a home-based stroke rehabilitation system using a novel RGB-D camera, the Kinect. We address this problem by first conducting a systematic analysis of the usability of the Kinect for motion analysis in stroke rehabilitation. A hybrid upper-body tracking approach is then proposed that combines off-the-shelf skeleton tracking with a novel depth-fused mean shift tracking method. We propose several kinematic features that can be reliably extracted from this inexpensive and portable motion capture system, along with classifiers that correlate torso movement with clinical measures of unimpaired and impaired movement. Experimental results show that the proposed sensing and analysis reliably measures torso movement quality and is promising for end-point tracking. The system is currently being deployed for large-scale evaluations.
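
As a rough illustration of the depth-based half of the hybrid tracker, the sketch below runs a simplified mean-shift step on a depth image, moving a window toward pixels whose depth lies near the tracked target. It is only a schematic stand-in for the thesis's depth-fused mean shift method, using a synthetic depth frame and assumed parameters.

```python
# Simplified mean-shift step on a depth image: the tracker window is moved to
# the weighted centroid of pixels whose depth lies near the target depth.
# Schematic stand-in for the depth-fused mean shift tracker described above,
# using synthetic data and assumed parameters.
import numpy as np

def mean_shift_depth(depth, center, target_depth, win=20, band=80.0, iters=10):
    """Track a blob in a depth map (mm) starting from `center` (row, col)."""
    r, c = center
    for _ in range(iters):
        r0, r1 = max(r - win, 0), min(r + win, depth.shape[0])
        c0, c1 = max(c - win, 0), min(c + win, depth.shape[1])
        patch = depth[r0:r1, c0:c1]
        # Weight pixels by how close their depth is to the target depth.
        w = np.exp(-((patch - target_depth) ** 2) / (2 * band ** 2))
        if w.sum() < 1e-6:
            break
        rows, cols = np.mgrid[r0:r1, c0:c1]
        r_new = int(round((w * rows).sum() / w.sum()))
        c_new = int(round((w * cols).sum() / w.sum()))
        if (r_new, c_new) == (r, c):
            break
        r, c = r_new, c_new
    return r, c

# Toy depth frame: background at 2000 mm, a "hand" blob at ~900 mm.
depth = np.full((240, 320), 2000.0)
depth[100:130, 180:210] = 900.0
print(mean_shift_depth(depth, center=(105, 175), target_depth=900.0))
```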
ContributorsDu, Tingfang (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Rikakis, Thanassis (Committee member) / Arizona State University (Publisher)
Created2012
Description
Head movement is known to improve the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory function. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys.

Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of marmoset monkeys was studied under various auditory and visual stimulation conditions. In order of increasing complexity, these conditions were (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal produced by opening and closing the chamber door. All conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme: images were processed using the Computer Vision Toolbox in Matlab, and the markers and their positions were detected using blob detection techniques. Based on the frame-by-frame marker positions, the angular position, velocity, and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering, and bounds on marker properties were used to overcome several challenges encountered during this analysis, such as finding the image segmentation threshold, continuously tracking markers during large head movements, and rejecting false detections.

The results show that the blob detection method together with Kalman filtering yielded better performance than other image-based techniques such as optical flow and SURF features. The median maximal head turn in the horizontal plane was in the range of 20 to 70 degrees, and the median maximal velocity in the horizontal plane was in the range of a few hundred degrees per second. The natural alert signal, door opening and closing, evoked faster head turns than the other stimulus conditions. These results suggest that behaviorally relevant stimuli such as alert signals evoke faster head-turn responses in marmoset monkeys.
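
The marker-tracking pipeline described above can be sketched roughly as follows: threshold an infrared frame, find the two markers as blobs, compute the head angle from the line joining them, and smooth the angle with a constant-velocity Kalman filter. A fixed threshold stands in for adaptive Otsu thresholding, and the frame, noise parameters, and marker layout are illustrative assumptions.

```python
# Sketch of the marker-based pipeline: threshold an infrared frame, detect the
# two head markers as blobs, compute the head angle from the line joining them,
# and smooth the angle with a constant-velocity Kalman filter.
import numpy as np
from scipy import ndimage

def marker_angle(frame, thresh=0.5):
    """Return head angle (deg) from the two largest blobs in `frame`, or None."""
    mask = frame > thresh                        # fixed threshold stands in for Otsu
    labels, n = ndimage.label(mask)
    if n < 2:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1            # two largest blobs = head markers
    (r1, c1), (r2, c2) = ndimage.center_of_mass(mask, labels, keep)
    return np.degrees(np.arctan2(r2 - r1, c2 - c1))

# Constant-velocity Kalman filter on the angle (state = [angle, angular rate]).
dt = 1.0 / 90.0                                  # 90 Hz camera, per the abstract
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-2])                        # assumed process noise
R = np.array([[0.5]])                            # assumed measurement noise (deg^2)
x, P = np.zeros(2), np.eye(2)

def kalman_step(z):
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                # predict
    if z is not None:                            # update only when markers were found
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x[0]

# Toy frame with two bright markers.
frame = np.zeros((120, 160))
frame[60:64, 70:74] = 1.0
frame[55:59, 100:104] = 1.0
print("smoothed angle:", kalman_step(marker_angle(frame)))
```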
ContributorsSimhadri, Sravanthi (Author) / Zhou, Yi (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2014
Description
As the applications of interactive media systems expand to address broader problems in health, education, and creative practice, they fall within a higher-dimensional space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field. Using an experiential media-based approach to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed, in terms of the extent to which the application of the compositional framework is supported and which aspects require further investigation. The potential broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications and the demonstration of the potential benefit of generalizing integrative approaches, merging arts- and science-based knowledge, to other complex problems related to embodied learning.
ContributorsLehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Stroke is a leading cause of disability, with effects that vary across stroke survivors, necessitating comprehensive approaches to rehabilitation. Interactive neurorehabilitation (INR) systems represent promising technological solutions that can provide an array of sensing, feedback, and analysis tools which hold the potential to maximize clinical therapy as well as extend therapy to the home. Currently, there are a variety of approaches to INR design, which, coupled with minimal large-scale clinical data, have led to a lack of cohesion in INR design. INR design presents an inherently complex space, as these systems have multiple users, including stroke survivors, therapists, and designers, each with their own user experience needs. This dissertation proposes that comprehensive INR design, which can address this complex user space, requires and benefits from the application of interdisciplinary research that spans motor learning and interactive learning. A methodology for integrated and iterative design of INR task experience, assessment, hardware, software, and interactive training protocols is proposed within the comprehensive example of the design and implementation of a mixed reality rehabilitation system for minimally supervised environments. This system was tested with eight stroke survivors, who showed promising results in both functional and movement quality improvement. The results of testing the system with stroke survivors, as well as observations of user experiences, are presented along with suggested improvements to the proposed design methodology. This integrative design methodology is proposed to have benefit not only for comprehensive INR design but also for complex interactive system design in general.
ContributorsBaran, Michael (Author) / Rikakis, Thanassis (Thesis advisor) / Olson, Loren (Thesis advisor) / Wolf, Steven L. (Committee member) / Ingalls, Todd (Committee member) / Arizona State University (Publisher)
Created2014
Description
Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision, and solutions that make use of modified optical flow methods, are presented.

The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate Proton Magnetic Resonance Spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies.

In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses this additional flow velocity information to guide the interpolation process toward reduced divergence in the interpolated data.
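
To make the divergence criterion concrete, the sketch below computes the finite-difference divergence of a 2D velocity field, the quantity the interpolation is guided to reduce. The fields are synthetic; the dissertation's actual interpolation scheme is not reproduced here.

```python
# Finite-difference divergence of a 2D velocity field (u, v) on a regular grid:
# div V = du/dx + dv/dy. The fields below are synthetic examples, not
# velocimetry data from the dissertation.
import numpy as np

def divergence(u, v, dx=1.0, dy=1.0):
    """Central-difference approximation of the divergence of (u, v)."""
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dy, axis=0)
    return du_dx + dv_dy

y, x = np.mgrid[0:64, 0:64]
u, v = -(y - 32.0), (x - 32.0)        # solenoidal (rotational) field: div = 0
print("mean |div|, rotational field:", np.abs(divergence(u, v)).mean())

u2, v2 = (x - 32.0), (y - 32.0)       # radial field: div = 2 everywhere
print("mean |div|, radial field:    ", np.abs(divergence(u2, v2)).mean())
```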

In the second application, a framework consisting mainly of optical flow methods, together with other image processing and computer vision techniques, is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and motion detected due to misregistration in SAR image sets, which can lead to more accurate and meaningful change detection and improve object extraction from SAR datasets.
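
A schematic of the misregistration idea, under the assumption that the global shift can be summarized by the median of a dense optical flow field: flow is estimated between two SAR-like images, the median flow is treated as misregistration, and only the residual flow is kept as candidate actual change. The images are synthetic, and OpenCV's Farneback flow stands in for the framework's own methods.

```python
# Schematic: estimate dense optical flow between two SAR-like intensity images,
# treat the global (median) flow as misregistration, and keep only the residual
# flow as candidate actual change.
import numpy as np
import cv2

rng = np.random.default_rng(1)
base = (rng.random((128, 128)) * 255).astype(np.uint8)
base = cv2.GaussianBlur(base, (7, 7), 0)          # give the noise some texture

# Second acquisition: whole scene shifted by 2 px (misregistration), plus one
# small patch displaced independently (actual change in the scene).
second = np.roll(base, shift=(2, 2), axis=(0, 1)).copy()
second[60:70, 60:70] = np.roll(base, shift=(6, 0), axis=(0, 1))[60:70, 60:70]

flow = cv2.calcOpticalFlowFarneback(base, second, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

global_shift = np.median(flow.reshape(-1, 2), axis=0)   # misregistration estimate
residual = np.linalg.norm(flow - global_shift, axis=2)  # left-over motion per pixel

print("estimated misregistration (dx, dy):", np.round(global_shift, 2))
print("max residual motion (px):", round(float(residual.max()), 2))
```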

In the third application, a set of new methods is proposed that aims to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.
ContributorsKanberoglu, Berkay (Author) / Frakes, David (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2018
Description
The increased risk of falling and the reduced ability to perform daily physical activities in the elderly motivate monitoring and correcting basic everyday movement. In this thesis, a Kinect-based system was designed to assess one of the most important factors in balance control of the human body during the Sit-to-Stand (STS) movement: postural symmetry in the mediolateral direction. A symmetry score, calculated from data obtained with a Kinect RGB-D camera, was proposed to reflect the degree of mediolateral postural symmetry and was used to drive real-time audio feedback, designed in MAX/MSP, that helps users adjust themselves to perform the movement more symmetrically during STS. The symmetry score was verified by calculating the Spearman correlation coefficient against data obtained from an Inertial Measurement Unit (IMU) sensor, yielding an average value of 0.732. Five healthy adults, four males and one female, with normal balance abilities and no musculoskeletal disorders, participated in the experiment, and the results showed that the low-cost Kinect-based system has the potential to train users to perform a more symmetrical movement in the mediolateral direction during STS.
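
The abstract does not give the symmetry score's definition, so the sketch below uses one plausible, hypothetical form: the trunk's mediolateral offset from the midline between the ankles, mapped to [0, 1]. The Kinect and IMU traces are synthetic; the Spearman check only mirrors the style of validation described above.

```python
# Hypothetical mediolateral symmetry score (NOT the thesis's definition): the
# trunk's sideways offset from the midline between the two ankles, normalized
# and mapped to [0, 1]. Kinect and IMU traces below are synthetic.
import numpy as np
from scipy.stats import spearmanr

def symmetry_score(spine_x, left_ankle_x, right_ankle_x, scale=0.15):
    """1.0 = trunk centered over the ankles; lower = more mediolateral offset (m)."""
    midline = 0.5 * (left_ankle_x + right_ankle_x)
    offset = np.abs(spine_x - midline)
    return np.clip(1.0 - offset / scale, 0.0, 1.0)

# Synthetic sit-to-stand trial (~3 s at 30 Hz): trunk leans right, then recenters.
t = np.linspace(0, 3, 90)
spine_x = 0.08 * np.sin(np.pi * t / 3.0)          # mediolateral sway in meters
kinect_score = symmetry_score(spine_x, -0.10, 0.10)

# "IMU-derived" score: same underlying sway plus sensor noise.
noisy_spine_x = spine_x + np.random.default_rng(2).normal(0, 0.01, t.size)
imu_score = symmetry_score(noisy_spine_x, -0.10, 0.10)

rho, _ = spearmanr(kinect_score, imu_score)
print("Spearman correlation between Kinect- and IMU-based scores:", round(rho, 3))
```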
ContributorsZhou, Henghao (Author) / Turaga, Pavan (Thesis advisor) / Ingalls, Todd (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2016
Description
This thesis presents a multi-modal motion tracking system for stroke patient rehabilitation. The system deploys two sensing modalities: a marker-based motion capture system and an inertial measurement unit (IMU). The integrated system provides real-time measurement of right arm and trunk movement, even in the presence of marker occlusion. The information from the two sensors is fused through quaternion-based recursive filters to provide robust detection of torso compensation (undesired body motion). Since the algorithm allows flexible sensor configurations, it provides a framework for fusing IMU and vision data that can adapt to various sensor selection scenarios. The proposed system consequently has the potential to improve both the robustness and the flexibility of the sensing process. Through comparison of the complementary filter, the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the particle filter (PF), the experimental section evaluates the performance of the quaternion-based complementary filter for ten sensor combination scenarios. Experimental results demonstrate the favorable performance of the proposed system in cases of occlusion. The investigation also provides valuable information for selecting filtering algorithms and strategies in specific sensor applications.
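
A minimal sketch of a quaternion-based complementary filter in the spirit of the fusion described above: gyroscope rates are integrated to predict orientation, and the estimate is pulled toward the marker-based (vision) quaternion whenever the markers are visible. The gains, rates, and dropout pattern are illustrative assumptions, not the thesis's actual filter parameters.

```python
# Quaternion complementary filter sketch: gyro prediction plus a small blend
# toward the vision (marker-based) orientation when markers are visible.
import numpy as np

def quat_mult(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def normalize(q):
    return q / np.linalg.norm(q)

def complementary_step(q, gyro, q_vision, dt, alpha=0.02):
    """One filter step: gyro prediction, then blend toward the vision estimate."""
    omega = np.array([0.0, *gyro])                  # pure quaternion of rates (rad/s)
    q_pred = normalize(q + 0.5 * dt * quat_mult(q, omega))
    if q_vision is None:                            # marker occlusion: gyro only
        return q_pred
    if np.dot(q_pred, q_vision) < 0:                # keep quaternions in same hemisphere
        q_vision = -q_vision
    return normalize((1 - alpha) * q_pred + alpha * q_vision)

# Toy run: constant rotation about z, with vision updates dropping out mid-way.
q = np.array([1.0, 0.0, 0.0, 0.0])
q_vis = np.array([1.0, 0.0, 0.0, 0.0])
for k in range(200):
    gyro = np.array([0.0, 0.0, 0.5])                # rad/s about z
    q_vis = normalize(q_vis + 0.5 * 0.01 * quat_mult(q_vis, np.array([0, *gyro])))
    occluded = 80 <= k < 120                        # simulated marker dropout
    q = complementary_step(q, gyro, None if occluded else q_vis, dt=0.01)
print("final orientation estimate:", np.round(q, 3))
```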
ContributorsLiu, Yangzi (Author) / Qian, Gang (Thesis advisor) / Olson, Loren (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created2010
Description
Nearly one percent of the population over 65 years of age is living with Parkinson's disease (PD), and this population worldwide is projected to reach approximately nine million by 2030. PD is a progressive neurological disease characterized by both motor and cognitive impairments. One of the most serious challenges for an individual as the disease progresses is the increasing severity of gait and posture impairments, since they result in debilitating conditions such as freezing of gait, increased likelihood of falls, and poor quality of life. Although dopaminergic therapy and deep brain stimulation are generally effective, they often fail to improve gait and posture deficits. Several recent studies have employed real-time feedback (RTF) of gait parameters to improve walking patterns in PD. In earlier work, investigation of the effects of RTF of step length and back angle during treadmill walking demonstrated that people with PD could follow the feedback and use it to modulate their movements favorably, in a manner that transferred, at least acutely, to overground walking. In this work, recent advances in wearable technologies were leveraged to develop a wearable real-time feedback (WRTF) system that can monitor and evaluate movements and provide feedback during daily activities that involve overground walking. Specifically, this work addressed the challenges of obtaining accurate gait and posture measures from wearable sensors in real time and providing auditory feedback on the calculated measures for rehabilitation. An algorithm was developed to calculate gait and posture variables from wearable sensor measurements, which were then validated against gold-standard measurements. The WRTF system calculates these measures and provides auditory feedback in real time. The system was evaluated as a potential rehabilitation tool for use by people with mild to moderate PD. Results from the study indicated that the system can accurately measure step length and back angle, and that subjects could respond to real-time auditory feedback in a manner that improved their step length and uprightness. These improvements were exhibited while using the system and were sustained in subsequent trials immediately thereafter, in which subjects walked without receiving feedback.
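
One small piece of such a pipeline, sketched under stated assumptions: the back (trunk) angle is estimated from the gravity direction of a trunk-worn accelerometer, and a feedback event is triggered when the wearer is not upright. The low-pass constant, threshold, axis convention, and samples are placeholders; the dissertation's actual WRTF algorithms and auditory cues are not reproduced here.

```python
# Sketch: back (trunk) angle from a trunk-worn accelerometer and a simple
# feedback trigger. Filter constant, threshold, axis convention, and samples
# are assumptions rather than values from the dissertation.
import numpy as np

def back_angles_deg(acc_samples, alpha=0.5):
    """Tilt of the sensor z-axis from gravity (deg) for a stream of samples (in g).

    An exponential low-pass isolates the gravity direction from movement
    accelerations; the sensor z-axis is assumed to point up the spine when the
    wearer stands upright. alpha is set high only to suit this short toy stream.
    """
    g = np.asarray(acc_samples[0], dtype=float)
    angles = []
    for a in acc_samples:
        g = (1 - alpha) * g + alpha * np.asarray(a, dtype=float)
        gz = g[2] / np.linalg.norm(g)
        angles.append(float(np.degrees(np.arccos(np.clip(gz, -1.0, 1.0)))))
    return angles

UPRIGHT_THRESHOLD_DEG = 15.0                      # assumed cue threshold

samples = [[0.02, 0.01, 0.99], [0.05, 0.00, 0.98],
           [0.30, 0.02, 0.95], [0.45, 0.05, 0.89], [0.50, 0.05, 0.86]]
for k, angle in enumerate(back_angles_deg(samples)):
    cue = "play 'stand tall' cue" if angle > UPRIGHT_THRESHOLD_DEG else "ok"
    print(f"sample {k}: back angle {angle:5.1f} deg -> {cue}")
```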
ContributorsMuthukrishnan, Niveditha (Author) / Abbas, James (Thesis advisor) / Krishnamurthi, Narayanan (Thesis advisor) / Shill, Holly A (Committee member) / Honeycutt, Claire (Committee member) / Turaga, Pavan (Committee member) / Ingalls, Todd (Committee member) / Arizona State University (Publisher)
Created2022
Description
Advancements in mobile technologies have significantly enhanced the capabilities of mobile devices to serve as powerful platforms for sensing, processing, and visualization. Surges in sensing technology and the abundance of data have enabled the use of these portable devices for real-time data analysis and decision-making in digital signal processing (DSP) applications. Most current efforts in DSP education focus on building tools to facilitate understanding of the mathematical principles. However, there is a disconnect between real-world data processing problems and the material presented in a DSP course. Sophisticated mobile interfaces and apps can potentially play a crucial role in providing hands-on experience with modern DSP applications to students. In this work, a new paradigm of DSP learning is explored by building an interactive, easy-to-use health monitoring application for use in DSP courses. This is motivated by the increasing commercial interest in employing mobile phones for real-time health monitoring tasks. The idea is to exploit the computational abilities of the Android platform to build m-Health modules with sensor interfaces. In particular, appropriate sensing modalities have been identified, and a suite of software functionalities has been developed. Within the existing framework of the AJDSP app, a graphical programming environment, interfaces to on-board and external sensor hardware have also been developed to acquire and process physiological data. The set of sensor signals that can be monitored includes the electrocardiogram (ECG), photoplethysmogram (PPG), accelerometer signal, and galvanic skin response (GSR). The proposed m-Health modules can be used to estimate parameters such as heart rate, oxygen saturation, step count, and heart rate variability. A set of laboratory exercises has been designed to demonstrate the use of these modules in DSP courses. The app was evaluated through several workshops involving graduate and undergraduate students in signal processing majors at Arizona State University, and the usefulness of the software modules in enhancing student understanding of signals, sensors, and DSP systems was analyzed. Student opinions about the app and the proposed m-Health modules evidenced the merits of integrating tools for mobile sensing and processing in a DSP curriculum and of familiarizing students with challenges in modern data-driven applications.
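
As an example of the kind of processing such an m-Health module performs, the sketch below estimates heart rate and a basic heart-rate-variability measure from a PPG waveform via peak detection. The signal is synthetic, and the exact algorithms used in the AJDSP modules are not specified in the abstract.

```python
# Sketch: heart rate and a simple HRV measure from a PPG waveform by peak
# detection. The signal is synthetic and the parameters are assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                        # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)

# Synthetic PPG: ~72 bpm pulse train with mild rate variability and noise.
rng = np.random.default_rng(3)
beat_times = np.cumsum(rng.normal(60 / 72, 0.03, size=40))
ppg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.05 ** 2)) for bt in beat_times)
ppg += 0.05 * rng.standard_normal(t.size)

# Detect systolic peaks, enforcing a refractory period of ~0.4 s between beats.
peaks, _ = find_peaks(ppg, height=0.5, distance=int(0.4 * fs))
ibi = np.diff(peaks) / fs                         # inter-beat intervals (s)

heart_rate_bpm = 60.0 / ibi.mean()
rmssd_ms = np.sqrt(np.mean(np.diff(ibi) ** 2)) * 1000.0   # simple HRV metric

print(f"estimated heart rate: {heart_rate_bpm:.1f} bpm")
print(f"RMSSD: {rmssd_ms:.1f} ms")
```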
ContributorsRajan, Deepta (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013