Matching Items (15)
Description
In recent years, machine learning and data mining technologies have received growing attention in several areas such as recommendation systems, natural language processing, speech and handwriting recognition, image processing, and the biomedical domain. Many of these applications, which deal with physiological and biomedical data, require person-specific or person-adaptive systems. The greatest challenge in developing such systems is subject-based variability in physiological and biomedical data, which leads to differences in data distributions and makes modeling these data with traditional machine learning algorithms complex and challenging. As a result, despite the wide application of machine learning, efficient deployment of its principles to model real-world data remains a challenge. This dissertation addresses the problem of subject-based variability in physiological and biomedical data and proposes person-adaptive prediction models based on novel transfer and active learning algorithms, an emerging field in machine learning. One significant contribution of this dissertation is a person-adaptive method for early detection of muscle fatigue from surface electromyogram (EMG) signals, based on a new multi-source transfer learning algorithm. The dissertation also proposes a subject-independent algorithm for grading the progression of muscle fatigue on a 0-to-1 scale in a test subject, during isometric or dynamic contractions, in real time. Besides subject-based variability, biomedical image data also vary with the imaging technique used, leading to distribution differences between image databases; hence a classifier learned on one database may perform poorly on another. Another significant contribution of this dissertation is the design and development of an efficient biomedical image data annotation framework, based on a novel combination of transfer learning and a new batch-mode active learning method, capable of addressing these distribution differences across databases. The methodologies developed in this dissertation are relevant and applicable to a large set of computing problems with high variation of data between subjects or sources, such as face detection, pose detection, and speech recognition. From a broader perspective, these frameworks can be viewed as a first step toward the design of automated adaptive systems for real-world data.
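
The abstract describes the approach only at a high level. As a purely illustrative sketch of the general idea behind multi-source transfer learning (not the dissertation's algorithm), one can weight a model trained on each source subject by how well it transfers to a small labeled sample from the target subject, then combine the weighted predictions; the data, weighting rule, and scikit-learn components below are assumptions for illustration.

```python
# Illustrative sketch only: weight source-subject models by their accuracy on a
# small labeled sample from the target subject, then combine their predictions.
# This is a generic multi-source transfer heuristic, not the dissertation's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_subject(shift, n=200):
    """Synthetic 2-class data whose distribution shifts per subject."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 4))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

sources = [make_subject(s) for s in (0.0, 0.5, 2.0)]   # source subjects
X_tgt, y_tgt = make_subject(0.3)                       # target subject
X_cal, y_cal = X_tgt[:20], y_tgt[:20]                  # small labeled target sample
X_test, y_test = X_tgt[20:], y_tgt[20:]

models, weights = [], []
for X_src, y_src in sources:
    clf = LogisticRegression().fit(X_src, y_src)
    models.append(clf)
    weights.append(clf.score(X_cal, y_cal))            # crude transferability proxy

weights = np.array(weights) / np.sum(weights)
proba = sum(w * m.predict_proba(X_test) for w, m in zip(weights, models))
pred = proba.argmax(axis=1)
print("weighted multi-source accuracy on target:", np.mean(pred == y_test))
```
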
Contributors: Chattopadhyay, Rita (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Approximately 1.7 million people in the United States are living with limb loss and are in need of more sophisticated devices that better mimic human function. In the Human Machine Integration Laboratory, a powered, transtibial prosthetic ankle was designed and built that allows a person to regain ankle function with improved ankle kinematics and kinetics. The ankle allows a person to walk normally and to go up and down stairs, but volitional control is still an issue. This research tackled the problem of giving the user more control over the prosthetic ankle using a force/torque circuit. When the user presses against a force/torque sensor located inside the socket, the prosthetic foot plantar flexes, i.e., moves downward. This helps the user add push-off force when walking up slopes or stairs, and it also gives the user a sense of control over the device.
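
The abstract does not detail the controller, but one plausible minimal sketch of a force-to-plantar-flexion mapping (dead-band threshold, proportional gain, saturation) is shown below; the function name, gains, and limits are hypothetical, not the laboratory's actual control law.

```python
# Minimal illustrative sketch: map a socket force/torque reading to an extra
# plantar-flexion command. Threshold, gain, and limits are hypothetical values,
# not the parameters of the actual prosthetic ankle controller.
def plantarflexion_command(force_n: float,
                           threshold_n: float = 5.0,
                           gain_deg_per_n: float = 0.8,
                           max_extra_deg: float = 15.0) -> float:
    """Return additional plantar-flexion angle (degrees) for a given force (N)."""
    if force_n <= threshold_n:          # ignore incidental socket contact
        return 0.0
    extra = gain_deg_per_n * (force_n - threshold_n)
    return min(extra, max_extra_deg)    # saturate to protect the joint

# Example: the user presses harder against the sensor when climbing stairs.
for f in (2.0, 10.0, 40.0):
    print(f"force {f:5.1f} N -> extra plantar flexion {plantarflexion_command(f):4.1f} deg")
```
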
Contributors: Fronczyk, Adam (Author) / Sugar, Thomas G. (Thesis advisor) / Helms-Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay, and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate, given that the skin is our largest sensory organ and has impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability, and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory, specifically one that addresses the design of touch-based building blocks for expandable, efficient, rich, and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed: Somatic ABC's, for Articulating (designing), Building (developing), and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation, and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development, and evaluation of rich somatic languages with distinct and natural communication units.
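
Somatic ABC's is described here only conceptually. As a toy sketch of the general idea of composing a touch language from reusable building blocks (the unit names, actuators, and timings below are invented for illustration and are not the framework's actual units), tactile units can be represented as small records and concatenated into phrases.

```python
# Purely illustrative sketch of building a touch language from reusable tactile
# units. Unit names, durations, and patterns are invented for illustration; they
# are not the Somatic ABC's framework's actual building blocks.
from dataclasses import dataclass

@dataclass(frozen=True)
class TactileUnit:
    name: str
    actuator: str        # which vibrotactile motor fires
    duration_ms: int     # pulse length
    intensity: float     # 0.0-1.0 drive level

# A toy "alphabet" of tactile building blocks.
ALPHABET = {
    "short-left":  TactileUnit("short-left",  "left_wrist",  80, 0.6),
    "short-right": TactileUnit("short-right", "right_wrist", 80, 0.6),
    "long-back":   TactileUnit("long-back",   "upper_back", 300, 0.9),
}

def compose(message: list, gap_ms: int = 50) -> list:
    """Expand a sequence of unit names into an actuator schedule with gaps."""
    schedule = []
    for name in message:
        u = ALPHABET[name]
        schedule.append((u.actuator, u.duration_ms, u.intensity))
        schedule.append(("pause", gap_ms, 0.0))
    return schedule[:-1]    # drop the trailing pause

# Example "word" built from the toy alphabet.
print(compose(["short-left", "short-right", "long-back"]))
```
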
Contributors: McDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The current work investigated the emergence of leader-follower roles during social motor coordination. Previous research has presumed that a leader during coordination assumes a spatiotemporally advanced position (e.g., a relative phase lead). While intuitive, this definition discounts what role-taking implies. Here, leading and following are defined as one person (or limb) having a larger influence on the motor state changes of the other; that is, the coupling is asymmetric. Three experiments demonstrated that asymmetric coupling effects emerge when task or biomechanical asymmetries are imposed between actors. Participants coordinated in-phase (Φ = 0°) swinging of handheld pendulums that differed in their uncoupled eigenfrequencies (frequency detuning). Coupling effects were recovered through phase-amplitude modeling. Experiment 1 examined leader-follower coupling during a bidirectional task. Experiment 2 employed an additional coupling asymmetry by assigning an explicit leader and follower. Both experiments demonstrated asymmetric coupling effects with increased detuning. In experiment 2, though, the explicit follower exhibited a phase lead in nearly all conditions. These results confirm that coupling direction was not determined strictly by relative phasing. A third experiment examined the question raised by the previous two: how can someone follow from ahead (i.e., show a phase lead, as in experiment 2)? This was tested using a combination of frequency detuning and amplitude asymmetry requirements (e.g., 1:1, or 1:2 and 2:1). Results demonstrated that larger-amplitude movements drove the coupling toward the person making the smaller-amplitude movements, who exhibited a phase lead despite being the follower in coupling terms. These results suggest that leader-follower coupling is a general property of social motor coordination, and that the stability-reducing effects of coordinating asymmetric components indicate when such coupling effects will occur. Generally, the implication is that role-taking is an emergent strategy for dividing coordination-stabilizing effort unequally between actors (or limbs).
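
The dissertation's phase-amplitude model is not reproduced in the abstract; the following generic sketch of two coupled phase oscillators with frequency detuning and unequal coupling strengths only illustrates what asymmetric coupling means in this context. The equations, frequencies, and coupling values are assumptions, not the fitted model.

```python
# Generic illustrative sketch (not the dissertation's phase-amplitude model):
# two coupled phase oscillators with frequency detuning and unequal coupling
# strengths. When k1 != k2 the influence is asymmetric, i.e., one oscillator
# "leads" in coupling terms regardless of which one shows a phase lead.
import numpy as np

def relative_phase(omega1, omega2, k1, k2, dt=0.001, t_end=30.0):
    phi1, phi2 = 0.0, 0.5                           # initial phases (rad)
    for _ in range(int(t_end / dt)):                # simple Euler integration
        dphi1 = omega1 + k1 * np.sin(phi2 - phi1)   # k1: pull exerted on oscillator 1
        dphi2 = omega2 + k2 * np.sin(phi1 - phi2)   # k2: pull exerted on oscillator 2
        phi1 += dphi1 * dt
        phi2 += dphi2 * dt
    return np.degrees(np.angle(np.exp(1j * (phi1 - phi2))))   # wrapped to (-180, 180]

# Detuned pair (different uncoupled eigenfrequencies) with asymmetric coupling:
# the more strongly coupled oscillator (larger k) is pulled toward the other.
print("steady-state relative phase (deg):",
      round(relative_phase(omega1=2 * np.pi * 1.0, omega2=2 * np.pi * 1.2,
                           k1=2.0, k2=6.0), 1))
```
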
Contributors: Fine, Justin (Author) / Amazeen, Eric L. (Thesis advisor) / Amazeen, Polemnia G. (Committee member) / Brewer, Gene (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Most daily living tasks consist of a series of sequential movements, e.g., reaching for a cup, grasping it, lifting it, and bringing it to your mouth. The process by which we control and mediate the smooth progression of these tasks is not well understood. One method we can use to evaluate these motions is known as startle-evoked movement (SEM). SEM is an established technique for probing motor learning and planning processes by detecting activation of the sternocleidomastoid (SCM) muscles of the neck within 120 ms after a startling stimulus is presented. If activation of these muscles is detected within that 120 ms window, the trial is classified as Startle+; if no sternocleidomastoid activation is detected in the allotted time, the trial is classified as Startle-. For a movement to be considered SEM, movement onset in Startle+ trials must be faster than in Startle- trials. The objective of this study was to evaluate the effect that expertise has on sequential movements and to determine whether startle can detect when the consolidation of actions, known as chunking, has occurred. We hypothesized that SEM could distinguish words that were solidified, or chunked; specifically, that SEM would be present when expert typists were asked to type a common word but not an uncommon letter combination. The results indicated that the only word susceptible to SEM, i.e., where Startle+ trials were initiated faster than Startle- trials, was the uncommon letter combination "HET"; the common words "AND" and "THE" were not. Additionally, evaluation of the differences between successive keystrokes for common and uncommon words showed that startle was unable to distinguish differences in motor chunking between Startle+ and Startle- trials. One explanation for these results could be hand dominance in expert typists. Little research has evaluated the susceptibility of the non-dominant hand's fingers to SEM, and the results of future studies on this question, together with the results of this study, can impact our understanding of sequential movements.
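
The Startle+/Startle- rule described above can be sketched as a simple onset-detection routine on sternocleidomastoid EMG; the sampling rate, baseline window, and mean-plus-3-SD threshold below are common conventions assumed for illustration, not the study's exact processing pipeline.

```python
# Illustrative sketch of the Startle+/Startle- rule described above: a trial is
# Startle+ if sternocleidomastoid (SCM) EMG activity crosses a baseline-derived
# threshold within 120 ms of the startling stimulus. Sampling rate, baseline
# window, and the "mean + 3 SD" threshold are assumed conventions.
import numpy as np

FS = 2000  # Hz, assumed EMG sampling rate

def classify_trial(scm_emg: np.ndarray, stim_index: int, window_ms: float = 120.0) -> str:
    """Label a trial Startle+ or Startle- from rectified SCM EMG."""
    rectified = np.abs(scm_emg - np.mean(scm_emg[:stim_index]))
    baseline = rectified[max(0, stim_index - FS // 2):stim_index]   # 500 ms pre-stimulus
    threshold = baseline.mean() + 3 * baseline.std()
    window = rectified[stim_index:stim_index + int(FS * window_ms / 1000)]
    return "Startle+" if np.any(window > threshold) else "Startle-"

# Synthetic demo: quiet baseline, then an SCM burst 60 ms after the stimulus.
rng = np.random.default_rng(1)
trial = rng.normal(0.0, 0.01, 3 * FS)
stim = 2 * FS
trial[stim + int(0.06 * FS): stim + int(0.09 * FS)] += 0.5   # simulated SCM burst
print(classify_trial(trial, stim))   # expected: Startle+
```
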
Contributors: Mieth, Justin Richard (Author) / Honeycutt, Claire (Thesis director) / Santello, Marco (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been suggested to occur through an internal forward model that processes an efference copy of the motor command and creates a prediction used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals at the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by applying light electrical stimulation below the lower lip and comparing perception across mixed speaking and silent reading conditions. Participants judged whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual modulation was strongest during speech production, and some attenuation was already present just prior to speech production, during the planning period. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
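
A minimal sketch of the kind of analysis implied above, computing the proportion of detected stimuli by condition and time point relative to the cue, is given below; the column names and toy data are invented for illustration and do not reproduce the study's design or statistics.

```python
# Minimal sketch: detection rate of a near-threshold stimulus, broken down by
# condition (speaking vs. silent reading) and time point relative to the cue.
# Column names and the toy trial data are invented for illustration only.
import pandas as pd

trials = pd.DataFrame({
    "condition":  ["speak", "speak", "speak", "read", "read", "read"] * 2,
    "time_point": ["planning", "production", "post", "planning", "production", "post"] * 2,
    "detected":   [1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0],
})

detection_rate = (trials
                  .groupby(["condition", "time_point"])["detected"]
                  .mean()
                  .rename("detection_rate"))
print(detection_rate)
```
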
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Startle-evoked movement (SEM), the involuntary release of a planned movement by a startling stimulus, has gained significant attention recently for its ability to probe motor planning as well as to enhance movement of the upper extremity following stroke. We recently showed that hand movements are susceptible to SEM; interestingly, only coordinated movements of the hand (grasp), but not individuated movements of the fingers (finger abduction), were susceptible. It was suggested that this resulted from different neural mechanisms involved in each task; however, it is possible that this was the result of task familiarity. The objective of this study was to evaluate a more familiar individuated finger movement, typing, to determine whether this task is susceptible to SEM. We hypothesized that typing movements would be susceptible to SEM in all fingers. Our results indicate that individuated finger movements are susceptible to SEM when the task is more familiar, since electromyogram (EMG) latency was faster in SCM+ trials than in SCM- trials. However, the middle finger did not show a difference in the keystroke voltage signal, suggesting that it is less susceptible to SEM. Given that SEM is thought to be mediated by the brainstem, specifically the reticulospinal tract, this suggests that the brainstem may play a role in movements of the distal limb when those movements are very familiar, and that the independence of each finger might also have a significant effect on SEM. Further research includes understanding SEM in the fingers of stroke survivors. The implications of this research can impact the way upper-extremity rehabilitation is delivered.
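
The SEM criterion referenced above, shorter EMG onset latency in SCM+ than in SCM- trials, can be sketched as a per-finger paired comparison; the latency values and the use of a paired t-test are illustrative assumptions, not the study's data or analysis.

```python
# Illustrative sketch of the SEM criterion mentioned above: compare keystroke
# EMG onset latencies between SCM+ and SCM- trials for each finger. The latency
# values and the paired t-test are assumptions, not the study's data or analysis.
import numpy as np
from scipy import stats

latencies_ms = {                       # hypothetical per-subject mean latencies
    "index":  {"SCM+": [112, 118, 109, 121], "SCM-": [140, 151, 138, 149]},
    "middle": {"SCM+": [135, 142, 138, 140], "SCM-": [139, 145, 141, 143]},
}

for finger, data in latencies_ms.items():
    t, p = stats.ttest_rel(data["SCM+"], data["SCM-"])
    faster = np.mean(data["SCM+"]) < np.mean(data["SCM-"])
    print(f"{finger}: SCM+ faster = {faster}, paired t = {t:.2f}, p = {p:.3f}")
```
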
Contributors: Quezada Valladares, Maria Jose (Author) / Honeycutt, Claire (Thesis director) / Santello, Marco (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in stroke survivors, where results have shown that it enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies of the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-joint movements is understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-joint movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit accurate multi-joint reaching movements in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues, "Get Ready" and "Go"; a loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-joint reaching tasks in a functional workspace and is therefore independent of movement direction. Our results show that SEM is possible in all five target directions, and that the probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-joint movement is indeed possible.
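
The kinematic measures named above can be computed from a sampled 2D reach trajectory with a short routine; the sampling rate and the definition of linear deviation as the maximum perpendicular distance from the straight start-to-end line are assumptions made for this sketch, not necessarily the study's definitions.

```python
# Illustrative computation of the kinematic measures named above from a sampled
# 2D reach trajectory: total movement time, linear deviation (here, the maximum
# perpendicular distance from the straight start-to-end line), and average
# velocity. Sampling rate and metric definitions are assumptions.
import numpy as np

def reach_kinematics(xy: np.ndarray, fs: float = 100.0) -> dict:
    """xy: (n_samples, 2) positions in meters; fs: sampling rate in Hz."""
    movement_time = (len(xy) - 1) / fs
    start, end = xy[0], xy[-1]
    line = end - start
    line_len = np.linalg.norm(line)
    # Perpendicular distance of each sample from the start-to-end line.
    rel = xy - start
    perp = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / line_len
    path_len = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    return {
        "movement_time_s": movement_time,
        "linear_deviation_m": float(perp.max()),
        "average_velocity_m_s": path_len / movement_time,
    }

# Synthetic curved reach from (0, 0) to (0.30, 0.10) m sampled over 0.5 s.
t = np.linspace(0, 1, 51)
xy = np.column_stack([0.30 * t, 0.10 * t + 0.02 * np.sin(np.pi * t)])
print(reach_kinematics(xy))
```
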
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper-limb kinematics, and (iii) a prosthetic arm actuated by the control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectory) form the core of the neural decoding system; the choice of algorithm is thus determined mainly by the neural signal of interest and the output parameter being decoded. The main stages of a neural decoding system are neural data acquisition, feature extraction, feature selection, and the machine learning algorithm itself. There have been significant advances in neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment, and these factors must be addressed to achieve a fully functional prosthetic device with maximum user compliance and acceptance. This dissertation addresses three challenges in developing robust neural decoding systems: neural variability in the peripheral nervous system during dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel ensemble-based method for decoding dexterous finger movements.
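
A minimal sketch of the pipeline structure described above, with feature extraction, a feature-selection step, and an ensemble decoder, is shown below using synthetic data and generic scikit-learn components; it is not the dissertation's decoder or its clinically motivated feature-selection metrics.

```python
# Minimal sketch of the decoding pipeline structure described above: features
# extracted from windowed neural signals, a feature-selection step, and an
# ensemble classifier predicting which finger moved. Synthetic data and generic
# scikit-learn components only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 300, 16, 200
signals = rng.normal(size=(n_trials, n_channels, n_samples))
fingers = rng.integers(0, 5, size=n_trials)                 # which finger moved
signals[np.arange(n_trials), fingers % n_channels] += 0.8   # class-dependent channel boost

# Simple per-channel features: mean absolute value and variance.
features = np.concatenate([np.abs(signals).mean(axis=2), signals.var(axis=2)], axis=1)

decoder = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),               # keep the 10 most informative features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print("cross-validated decoding accuracy:",
      cross_val_score(decoder, features, fingers, cv=5).mean().round(2))
```
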
Contributors: Padmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Crook, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The interaction between visual fixations during planning and performance in a dexterous task was analyzed. An eye-tracking device was affixed to subjects during sequences of null (salient center of mass) and weighted (non-salient center of mass) trials with unconstrained precision grasp. Subjects experienced both expected and unexpected perturbations, with the task of minimizing object roll. Unexpected perturbations were controlled by switching weights between trials; expected perturbations were controlled by asking subjects to rotate the object themselves. In all cases, subjects were able to minimize the roll of the object within three trials. Eye fixations were correlated with object weight for the initial context and for known shifts in center of mass. In subsequent trials with unexpected weight shifts, subjects appeared to scan areas of interest from both contexts even after learning the present orientation.
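
The fixation-weight relationship described above can be illustrated with a toy correlation between horizontal fixation position and the side of the object's added weight; the data and the choice of a Pearson correlation are assumptions for illustration, not the study's analysis.

```python
# Toy illustration of the fixation-weight relationship described above:
# correlate horizontal fixation position with the side of the object's added
# weight. Data and the choice of a Pearson correlation are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
weight_side = rng.choice([-1, 1], size=40)                    # -1 = left, +1 = right weight
fixation_x = 5.0 * weight_side + rng.normal(0, 3.0, size=40)  # mm from object midline

r, p = stats.pearsonr(fixation_x, weight_side)
print(f"fixation x vs. weight side: r = {r:.2f}, p = {p:.3g}")
```
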
Contributors: Smith, Michael David (Author) / Santello, Marco (Thesis advisor) / Buneo, Christopher (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created: 2017