This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
The processing power and storage capacity of portable devices have improved considerably over the past decade. This has motivated the implementation of sophisticated audio and other signal processing algorithms on such mobile devices. Of particular interest in this thesis is audio/speech processing based on perceptual criteria. Specifically, estimating parameters of human auditory models, such as auditory patterns and loudness, involves computationally intensive operations that can strain device resources. Hence, strategies for implementing computationally efficient human auditory models for loudness estimation are studied in this thesis. Existing algorithms for reducing computation in auditory pattern and loudness estimation have been examined, and improved algorithms have been proposed to overcome their limitations. In addition, real-time applications such as perceptual loudness estimation and loudness equalization using auditory models have been implemented. A software implementation of loudness estimation on iOS devices is also reported. Beyond the loudness estimation algorithms and software, this thesis project also created new illustrations of speech and audio processing concepts for research and education. As a result, a new suite of speech/audio DSP functions was developed and integrated into the award-winning educational iOS app 'iJDSP'. These functions are described in detail in this thesis. Several enhancements to the architecture of the application have also been introduced to provide the supporting framework for speech/audio processing. Frame-by-frame processing and visualization functionalities have been developed to facilitate speech/audio processing, and facilities for easy sound recording, processing, and audio rendering provide students, practitioners, and researchers with an enriched DSP simulation tool.
Simulations and assessments have also been developed for use in classes and in the training of practitioners and students.
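As an illustration of the frame-by-frame processing structure described in this abstract, the following minimal Python sketch maps per-frame signal power through a Stevens-style compressive nonlinearity as a crude loudness proxy. It is not the auditory-model-based loudness estimator of the thesis, which integrates specific loudness over excitation patterns; the function name, frame length, hop size, and exponent here are illustrative assumptions only.

```python
import numpy as np

def framewise_loudness(x, fs, frame_ms=25, hop_ms=10):
    """Crude per-frame loudness proxy: frame power passed through a
    compressive power law (exponent ~0.3, after Stevens). Full auditory
    models compute specific loudness over an excitation pattern; this
    sketch only illustrates frame-by-frame processing."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    out = np.empty(n)
    for i in range(n):
        seg = x[i * hop: i * hop + frame]
        power = np.mean(seg ** 2) + 1e-12   # guard against silent frames
        out[i] = power ** 0.3               # compressive loudness growth
    return out
```

A loudness-equalization application of the kind described would then scale each frame toward a target value of this per-frame measure.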
ContributorsKalyanasundaram, Girish (Author) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2013
Description
When surgical resection becomes necessary to alleviate a patient's epileptiform activity, that patient is monitored by video synchronized with electrocorticography (ECoG) to determine the type and location of the seizure focus. This provides a unique opportunity for researchers to gather neurophysiological data with high temporal and spatial resolution; these data are assessed prior to surgical resection to ensure the preservation of the patient's quality of life, e.g., to avoid the removal of brain tissue required for speech processing. Currently considered the "gold standard" for the mapping of cortex, electrical cortical stimulation (ECS) involves the systematic activation of pairs of electrodes to localize functionally specific brain regions. This method has distinct limitations, which often include pain experienced by the patient. Even in the best cases, the technique suffers from subjective assessments on the parts of both patients and physicians, and from high inter- and intra-observer variability. Recent advances have been made as researchers have reported the localization of language areas through several signal processing methodologies, all necessitating patient participation in a controlled experiment. A quantification tool that localizes speech areas while the patient is engaged in unconstrained interpersonal conversation would eliminate dependence on biased patient and reviewer input, as well as unnecessary discomfort to the patient. Post-hoc ECoG data were gathered from five patients with intractable epilepsy while each was engaged in a conversation with family members or clinicians. After the data were separated into different speech conditions, the power of each was compared to baseline to determine statistically significant activated electrodes. The results of several analytical methods are presented here.
The algorithms did not yield language-specific areas exclusively, as broad activation of statistically significant electrodes was apparent across cortical areas. For one patient, 15 adjacent contacts along the superior temporal gyrus (STG) and the posterior part of the temporal lobe were determined to be language-significant through a controlled experiment. The task involved the patient lying in bed listening to repeated words, and it yielded statistically significant activations that aligned with those of clinical evaluation. The results of this study do not support the hypothesis that unconstrained conversation may be used to localize areas required for receptive and productive speech, yet they suggest that a simple listening task may be an adequate alternative to direct cortical stimulation.
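The per-electrode comparison of condition power against baseline described above can be sketched in generic form as follows. This is an illustration, not the statistical pipeline of the thesis: the Welch t-test, Bonferroni correction, function name, and array shapes are all assumptions for the example.

```python
import numpy as np
from scipy import stats

def significant_electrodes(cond_power, baseline_power, alpha=0.05):
    """cond_power, baseline_power: (n_trials, n_electrodes) arrays of
    band power per trial. Returns a boolean mask of electrodes whose
    power in the speech condition differs from baseline (Welch t-test,
    Bonferroni-corrected across electrodes)."""
    n_elec = cond_power.shape[1]
    _, p = stats.ttest_ind(cond_power, baseline_power,
                           axis=0, equal_var=False)
    return p < alpha / n_elec   # Bonferroni correction
```

In practice, separate masks would be computed per speech condition and compared against the clinically mapped contacts.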
ContributorsLingo VanGilder, Jennapher (Author) / Helms Tillery, Stephen I (Thesis advisor) / Wahnoun, Remy (Thesis advisor) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created2013
Description
Intracortical microstimulation (ICMS) within somatosensory cortex can produce artificial sensations including touch, pressure, and vibration. There is significant interest in using ICMS to provide sensory feedback for a prosthetic limb. In such a system, information recorded from sensors on the prosthetic would be translated into electrical stimulation and delivered directly to the brain, providing feedback about features of objects in contact with the prosthetic. To achieve this goal, multiple simultaneous streams of information will need to be encoded by ICMS in a manner that produces robust, reliable, and discriminable sensations. The first segment of this work focuses on the discriminability of sensations elicited by ICMS within somatosensory cortex. Stimulation on multiple single electrodes and near-simultaneous stimulation across multiple electrodes, driven by a multimodal tactile sensor, were both used in these experiments. A SynTouch BioTac sensor was moved across a flat surface in several directions, and a subset of the sensor's electrode impedance channels were used to drive multichannel ICMS in the somatosensory cortex of a non-human primate. The animal performed a behavioral task during this stimulation to indicate the discriminability of sensations evoked by the electrical stimulation. The animal's responses to ICMS were somewhat inconsistent across experimental sessions but indicated that discriminable sensations were evoked by both single and multichannel ICMS. The factors that affect the discriminability of stimulation-induced sensations are not well understood, in part because the relationship between ICMS and the neural activity it induces is poorly defined. The second component of this work was to develop computational models that describe the populations of neurons likely to be activated by ICMS. Models of several neurons were constructed, and their responses to ICMS were calculated. 
A three-dimensional cortical model was constructed using these cell models and used to identify the populations of neurons likely to be recruited by ICMS. Stimulation activated neurons in a sparse and discontinuous fashion; additionally, the type, number, and location of neurons likely to be activated by stimulation varied with electrode depth.
ContributorsOverstreet, Cynthia K (Author) / Helms Tillery, Stephen I (Thesis advisor) / Santos, Veronica (Committee member) / Buneo, Christopher (Committee member) / Otto, Kevin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments based on novel grip devices and experimental protocols. It was found that high-level representations of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions, or when using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit positions and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that required different motor outputs, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying the learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
ContributorsFu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2013
Description
Advances in implantable MEMS technology have made possible adaptive micro-robotic implants that can track and record from single neurons in the brain. The development of autonomous neural interfaces opens up exciting possibilities of micro-robots performing standard electrophysiological techniques that would previously take researchers several hundred hours of training to achieve the desired skill level. It would result in more reliable and adaptive neural interfaces that could record optimal neural activity 24/7 with high-fidelity signals, high yield, and increased throughput. The main contribution here is validating adaptive strategies to overcome challenges in the autonomous navigation of microelectrodes inside the brain. The following issues pose significant challenges, as brain tissue is both functionally and structurally dynamic: a) time-varying mechanical properties of the brain tissue-microelectrode interface due to the hyperelastic, viscoelastic nature of brain tissue; b) non-stationarities in the neural signal caused by mechanical and physiological events at the interface; and c) the lack of visual feedback on microelectrode position in brain tissue. A closed-loop control algorithm is proposed here for the autonomous navigation of microelectrodes in brain tissue while optimizing the signal-to-noise ratio (SNR) of multi-unit neural recordings. The algorithm incorporates a quantitative understanding of the constitutive mechanical properties of soft viscoelastic tissue like the brain and is guided by models that predict the stresses developed in brain tissue during movement of the microelectrode. An optimal movement strategy is developed that achieves precise positioning of microelectrodes in the brain by minimizing the stresses developed in the surrounding tissue during navigation while maximizing the speed of movement.
Results of testing the closed-loop control paradigm in short-term rodent experiments validated that it was possible to achieve consistently high SNR throughout the duration of the experiment. At the systems level, a new generation of MEMS actuators for movable microelectrode arrays is characterized, and the MEMS device operation parameters are optimized for improved performance and reliability. Further, recommendations for packaging to minimize the form factor of the implant, and for device mounting and implantation techniques that enhance the longevity of the MEMS microelectrode array, are included as part of a top-down approach to achieving a reliable brain interface.
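The closed-loop idea described in this abstract can be caricatured as: advance the microelectrode in small steps and stop once recording quality is adequate. The toy sketch below is purely illustrative and omits the tissue-stress models and movement-speed optimization that are central to the thesis; the function name, step size, and SNR threshold are all assumptions.

```python
def navigate_electrode(read_snr, step_um=5, max_depth_um=2000,
                       target_snr=4.0):
    """Toy closed-loop navigation: advance in small steps, evaluating
    multi-unit SNR after each step, and stop at the first depth (in
    microns) where the target SNR is reached. Returns None if the
    target is never reached within max_depth_um."""
    depth = 0
    while depth < max_depth_um:
        depth += step_um                 # one small advance
        if read_snr(depth) >= target_snr:
            return depth                 # adequate recording quality
    return None
```

A real controller of this kind would also bound per-step tissue stress using a viscoelastic model before permitting each advance.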
ContributorsAnand, Sindhu (Author) / Muthuswamy, Jitendran (Thesis advisor) / Tillery, Stephen H (Committee member) / Buneo, Christopher (Committee member) / Abbas, James (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2013
Description
The recent spotlight on concussion has illuminated deficits in the current standard of care with regard to addressing acute and persistent cognitive signs and symptoms of mild brain injury. This stems, in part, from the diffuse nature of the injury, which tends not to produce focal cognitive or behavioral deficits that are easily identified or tracked. Indeed, it has been shown that patients with enduring symptoms have difficulty describing their problems; therefore, there is an urgent need for a sensitive measure of brain activity that corresponds with higher-order cognitive processing. The development of a neurophysiological metric that maps to clinical resolution would inform decisions about diagnosis and prognosis, including the need for clinical intervention to address cognitive deficits. The literature suggests the need for assessment of concussion under cognitively demanding tasks. Here, a joint behavioral and high-density electroencephalography (EEG) paradigm was employed. This allows for the examination of cortical activity patterns during speech comprehension at various levels of degradation in a sentence verification task, imposing the need for higher-order cognitive processes. Eight participants with concussion listened to true-false sentences produced with either moderately or highly intelligible noise-vocoders. Behavioral data were simultaneously collected. The analysis of cortical activation patterns included 1) the examination of event-related potentials, including latency and source localization, and 2) measures of frequency spectra and associated power. Individual performance patterns were assessed during acute injury and at a return visit several months following injury. Results demonstrate that a combination of task-related electrophysiology measures corresponds to changes in task performance during the course of recovery.
Further, a discriminant function analysis suggests EEG measures are more sensitive than behavioral measures in distinguishing between individuals with concussion and healthy controls at both injury and recovery, underscoring the sensitivity of neurophysiological measures obtained during a cognitively demanding task to both acute injury and persisting pathophysiology.
ContributorsUtianski, Rene (Author) / Liss, Julie M (Thesis advisor) / Berisha, Visar (Committee member) / Caviness, John N (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created2014
Description
Gait and balance disorders are the second leading cause of falls in the elderly. Investigating the changes in static and dynamic balance due to aging may provide a better understanding of the effects of aging on the postural control system. Static and dynamic balance were evaluated in a total of 21 young (21-35 years) and 22 elderly (50-75 years) healthy subjects while they performed three different tasks: quiet standing, dynamic weight shifts, and overground walking. During the quiet standing task, the subjects stood with their eyes open and eyes closed. When performing the dynamic weight-shifts task, subjects shifted their Center of Pressure (CoP) from a center target to outward targets and vice versa while following real-time feedback of their CoP. For the overground walking tasks, subjects performed the Timed Up and Go test, tandem walking, and regular walking at their self-selected speed. Various quantitative balance and gait measures were obtained to evaluate the respective balance and walking tasks. Total excursion, sway area, and mean frequency of the CoP during quiet standing were found to be the most reliable measures and showed significant increases with age and in the absence of visual input. During dynamic weight shifts, elderly subjects exhibited higher initiation time, initiation path length, movement time, movement path length, and inaccuracy, indicating deterioration in performance. Furthermore, the elderly walked with a shorter stride length, increased stride variability, and greater turn and turn-to-sit durations. Significant correlations were also observed between measures derived from the different balance and gait tasks. Thus, it can be concluded that aging deteriorates the postural control system, affecting static and dynamic balance, and some of the alterations in CoP and gait measures may be considered protective mechanisms to prevent loss of balance.
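The quiet-standing CoP measures named in this abstract (total excursion, sway area, and mean frequency) can be computed from a CoP trajectory roughly as follows. This is a generic sketch, not the thesis's analysis code: the function name and units are assumptions, sway area is taken as the convex-hull area, and mean frequency follows one common definition (total excursion divided by 2*pi times mean sway amplitude times duration).

```python
import numpy as np
from scipy.spatial import ConvexHull

def cop_measures(cop_xy, fs):
    """Common quiet-standing CoP measures from an (N, 2) array of
    CoP samples (e.g. anterior-posterior and medio-lateral, in cm)
    sampled at fs Hz. Returns (total excursion, sway area, mean
    frequency)."""
    cop = np.asarray(cop_xy, dtype=float)
    steps = np.diff(cop, axis=0)                  # per-sample displacement
    total_excursion = np.linalg.norm(steps, axis=1).sum()
    sway_area = ConvexHull(cop).volume            # 2-D hull "volume" = area
    duration = (len(cop) - 1) / fs
    mean_amp = np.linalg.norm(cop - cop.mean(axis=0), axis=1).mean()
    mean_freq = total_excursion / (2 * np.pi * mean_amp * duration)
    return total_excursion, sway_area, mean_freq
```

For a CoP trace circling once around a unit circle in 10 s, this yields a total excursion near 2*pi, a sway area near pi, and a mean frequency near 0.1 Hz, which is a useful sanity check on the definitions.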
ContributorsBalasubramanian, Shruthi (Author) / Krishnamurthi, Narayanan (Thesis advisor) / Abbas, James (Thesis advisor) / Buneo, Christopher (Committee member) / Arizona State University (Publisher)
Created2014
Description
Learning by trial and error requires retrospective information about whether a past action resulted in a rewarded outcome. The previous outcome may, in turn, provide information to guide future behavioral adjustment. But the specific contribution of this information to learning a task, and the neural representations formed during the trial-and-error learning process, are not well understood. In this dissertation, such learning is analyzed by means of single-unit neural recordings in the rats' agranular medial (AGm) and agranular lateral (AGl) motor cortices while the rats learned to perform a directional choice task. Multichannel chronic recordings using microelectrodes implanted in the rat's brain were essential to this study. For fundamental scientific investigations in general, and for applications such as brain-machine interfaces, the recorded neural waveforms must first be analyzed to identify neural action potentials as basic computing units. Prior to analyzing and modeling the recorded neural signals, this dissertation proposes an advanced spike sorting system, the M-Sorter, to extract action potentials from raw neural waveforms. The M-Sorter shows performance better than or comparable to two other popular spike sorters in automatic mode. With the sorted action potentials in place, neuronal activity in the AGm and AGl areas of rats during learning of the directional choice task is examined. Systematic analyses suggest that neural activity in AGm and AGl was modulated by previous trial outcomes during learning. Single-unit neural dynamics during task learning are described in detail in the dissertation. Furthermore, differences in neural modulation between fast- and slow-learning rats were compared. The results show that the level of neural modulation by previous trial outcome differs between fast- and slow-learning rats, which in turn suggests an important role for previous-trial-outcome encoding in learning.
ContributorsYuan, Yu'an (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Santello, Marco (Committee member) / Chae, Junseok (Committee member) / Arizona State University (Publisher)
Created2014
Description
Animals learn to choose a proper action among alternatives according to circumstance. Through trial and error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there is an extensive literature on the theory of learning, it is still unclear how individual neurons and neural networks adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right side lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rats' movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each comprising a similar number of recording sessions (days). Single-unit firing rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the averaged mean rate between left and right trials in the neural ensemble each day did not change significantly across the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. Moreover, for both left and right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural patterns of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between the neural patterns of left and right choices as learning progressed.
When a restricted Boltzmann machine (RBM) model was used to extract features from the neural activity patterns, the results further supported the idea that neural firing patterns adapted during the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in the rat AGl and AGm neural ensemble that may be responsible for, and contribute to, learning the directional choice task.
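The SVM-based decoding of choice direction described in this abstract can be sketched on synthetic data as follows. This is an illustration of the general technique, not the dissertation's analysis: the trial counts, neuron count, rate statistics, and tuning model are fabricated, and cross-validated accuracy stands in for single-day decoding performance.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic ensemble: 40 trials x 12 neurons of trial firing rates.
# Left and right trials differ by a small per-neuron directional
# rate modulation, mimicking the modulation reported in the study.
n_trials, n_neurons = 40, 12
labels = np.repeat([0, 1], n_trials // 2)       # 0 = left, 1 = right
tuning = rng.normal(0.0, 1.0, n_neurons)        # per-neuron preference
rates = rng.normal(10.0, 2.0, (n_trials, n_neurons))
rates += np.outer(2 * labels - 1, tuning)       # add directional signal

# A linear SVM decodes choice direction from single-trial rate
# vectors; cross-validated accuracy serves as the discriminability
# index that improves as learning progresses.
acc = cross_val_score(SVC(kernel="linear"), rates, labels, cv=5).mean()
```

Tracking `acc` across recording sessions, as the study did day by day, turns decoding accuracy into a learning curve for the ensemble code.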
ContributorsMao, Hongwei (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Cao, Yu (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2014
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences the recognition of dysarthric speech. It also explores whether the influence of visual information depends upon age. Forty adults participated in a study that measured the intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences the understanding of dysarthric speech.
ContributorsFall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created2014