Matching Items (438)
Description
The purpose of this longitudinal study was to predict /r/ acquisition using acoustic signal processing. Nineteen children aged 5-7 with inaccurate /r/ were followed until they turned 8 or acquired /r/, whichever came first. Acoustic and descriptive data from 14 participants were analyzed; the remaining 5 children continued to be followed. The study analyzed differences in spectral energy in the baseline acoustic signals of participants who eventually acquired /r/ compared with those of participants who did not. Results indicated significant differences between groups in the baseline signals for vocalic and postvocalic /r/, suggesting that the acquisition of certain allophones may be predictable. Participants’ articulatory changes during the progression of acquisition were also analyzed spectrally. A retrospective analysis described the pattern in which /r/ allophones were acquired, proposing that vocalic /r/ and the postvocalic variant of consonantal /r/ may be acquired prior to prevocalic /r/, and that /r/ followed by low vowels may be acquired before /r/ followed by high vowels, although individual variation exists.
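
As a rough illustration of the kind of spectral-energy comparison described in this abstract, the sketch below computes band-limited energy for two groups of recordings and tests for a group difference. The sampling rate, analysis band, group sizes, and synthetic signals are illustrative assumptions, not data or results from the study.

```python
# Hypothetical sketch of a band-energy comparison between two groups of
# baseline /r/ recordings. All parameters and signals here are stand-ins.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

FS = 16_000          # assumed sampling rate (Hz)
BAND = (2000, 3000)  # assumed analysis band (Hz)

def band_energy_fraction(signal, fs=FS, band=BAND):
    """Fraction of total spectral power falling inside `band`."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Stand-in recordings for the two groups (acquired vs. did not acquire /r/).
rng = np.random.default_rng(0)
acquired = [rng.standard_normal(FS) for _ in range(7)]
not_acquired = [rng.standard_normal(FS) for _ in range(7)]

e1 = [band_energy_fraction(x) for x in acquired]
e2 = [band_energy_fraction(x) for x in not_acquired]
t_stat, p_val = ttest_ind(e1, e2, equal_var=False)
print(f"band-energy difference: t = {t_stat:.2f}, p = {p_val:.3f}")
```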

Contributors: Conger, Sarah Grace (Author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / Bruce, Laurel (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In the late 1960s, Granger published a seminal study on causality in time series, using linear interdependencies and information transfer. Recent developments in the field of information theory have introduced new methods to investigate the transfer of information in dynamical systems. Using concepts from chaos and Markov theory, many of these methods have evolved to capture non-linear relations and information flow between coupled dynamical systems, with applications to fields like biomedical signal processing. This thesis deals with the application of information theory to non-linear multivariate time series and develops measures of information flow to identify significant driver and response (driven) components in networks of coupled sub-systems whose connections vary in coupling strength and direction (uni- or bi-directional). Transfer Entropy (TE) is used to quantify pairwise directional information. Four TE-based measures of information flow are proposed, namely TE Outflow (TEO), TE Inflow (TEI), TE Net flow (TEN), and Average TE flow (ATE). First, the reliability of the information flow measures is evaluated on models, with and without noise, and the driver and response sub-systems in these models are identified. Second, these measures are applied to electroencephalographic (EEG) data from two patients with focal epilepsy. The analysis showed dominant directions of information flow between brain sites and identified the epileptogenic focus as the system component that typically attained the highest value of the proposed measures (for example, ATE). Statistical tests between pre-seizure (preictal) and post-seizure (postictal) information flow also showed a breakdown of the driving of the brain by the focus after seizure onset. These findings shed light on the function of the epileptogenic focus and on the mechanisms of ictogenesis. They are expected to contribute to the diagnosis of epilepsy, for example through accurate identification of the epileptogenic focus from interictal periods, as well as to the development of better seizure detection, prediction, and control methods, for example by isolating pathologic areas of excessive information flow through electrical stimulation.
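
The sketch below illustrates TE-based flow measures on a small multichannel series, using a coarse binned transfer-entropy estimator with a one-sample lag. The estimator and the exact definitions of TEO, TEI, TEN, and ATE used here are simplifying assumptions rather than the thesis's formulations.

```python
# Illustrative sketch of TE-based flow measures (TEO, TEI, TEN, ATE).
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Binned estimate of TE(x -> y) with history and prediction lag of 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    yf, yp, xp = yd[1:], yd[:-1], xd[:-1]          # y future, y past, x past
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(yf, yp, xp):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0)                    # p(y_t, x_t)
    p_yf_yp = joint.sum(axis=2)                    # p(y_t+1, y_t)
    p_yp = joint.sum(axis=(0, 2))                  # p(y_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log(p * p_yp[b] / (p_yf_yp[a, b] * p_yp_xp[b, c]))
    return te

def flow_measures(data):
    """data: (channels, samples). Returns TE outflow, inflow, net flow, average flow."""
    n = data.shape[0]
    te = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                te[i, j] = transfer_entropy(data[i], data[j])
    teo = te.sum(axis=1)                 # total TE sent by each channel
    tei = te.sum(axis=0)                 # total TE received by each channel
    ten = teo - tei                      # net flow (driver if positive)
    ate = (teo + tei) / (2 * (n - 1))    # average TE per connection (assumed definition)
    return teo, tei, ten, ate

# Toy network: channel 0 drives channel 1 with a one-sample delay; channel 2 is noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(2000)
teo, tei, ten, ate = flow_measures(np.vstack([x, y, rng.standard_normal(2000)]))
print("net flow per channel:", np.round(ten, 3))
```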
Contributors: Prasanna, Shashank (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods for the analysis of signals arising from measurement of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks, represented as directed acyclic graphs, to specify prospective models of conditional dependence between social psychological factors and conflict variables, with significant interactions modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we fit Gaussian mixture models (GMMs) to the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may suffer from over-fitting due to the high dimensionality and limited number of observations in this data set, so the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To aid intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which color intensities are proportional to the estimated conditional probabilities.
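
A minimal sketch of the GMM order-selection step is shown below: candidate model orders are fit with scikit-learn, compared by BIC (with AIC alongside), and an illustrative conditional probability is read off the fitted joint density. The synthetic data, variable names, and the slice-and-normalize conditioning step are assumptions for illustration only, not the thesis's procedure.

```python
# Illustrative sketch: GMM order selection by BIC/AIC on sparse two-variable data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in for (social factor, conflict measure) observations.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(60, 2)),
    rng.normal([2.0, 1.5], 0.4, size=(40, 2)),
])

orders = range(1, 6)
models = [GaussianMixture(n_components=k, random_state=0).fit(data) for k in orders]
bic = [m.bic(data) for m in models]
aic = [m.aic(data) for m in models]
best = models[int(np.argmin(bic))]
print("orders:", list(orders))
print("BIC:", np.round(bic, 1), "AIC:", np.round(aic, 1))

# Approximate P(conflict > c0 | factor = f0) by evaluating the fitted joint
# density along a slice at factor = f0 and normalizing over the conflict axis.
f0, c0 = 1.0, 1.0
grid = np.linspace(-3.0, 5.0, 400)
slice_pts = np.column_stack([np.full_like(grid, f0), grid])
dens = np.exp(best.score_samples(slice_pts))     # joint density along the slice
cond = dens / np.trapz(dens, grid)               # normalized conditional pdf
p_high = np.trapz(cond[grid > c0], grid[grid > c0])
print(f"estimated P(conflict > {c0} | factor = {f0}) ~ {p_high:.2f}")
```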
Contributors: Liu, Hui (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Advances in miniaturized sensors and wireless technologies have enabled mobile health systems for efficient healthcare. A mobile health system assists the physician in monitoring the patient's progress remotely and providing quick feedback and suggestions in case of emergencies, which reduces the cost of healthcare by avoiding the expense of hospitalization. This work involves the development of an innovative mobile health system with an adaptive biofeedback mechanism and demonstrates the importance of biofeedback in accurate measurement of physiological parameters to facilitate diagnosis in mobile health systems. Resting Metabolic Rate (RMR) assessment, a key aspect in the treatment of diet-related health problems, is considered as a model to demonstrate the importance of adaptive biofeedback in mobile health. A breathing biofeedback mechanism has been implemented with digital signal processing techniques for real-time visual and musical guidance to accurately measure the RMR. The effects of adaptive biofeedback with musical and visual guidance were assessed on 22 healthy subjects (12 men, 10 women). Eight RMR measurements were taken for each subject on different days under the same conditions. It was observed that the subjects unconsciously followed the breathing biofeedback, yielding consistent and accurate measurements for diagnosis. The coefficient of variation of the measured metabolic parameters decreased significantly (p < 0.05) for 20 of the 22 subjects.
Contributors: Krishnan, Ranganath (Author) / Tao, Nongjian (Thesis advisor) / Forzani, Erica (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The world of a hearing-impaired person is much different from that of somebody capable of discerning different frequencies and magnitudes of sound waves via their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they're facing. The purpose of this project was to find a way to even the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created. This is a system that takes a surround sound input and illuminates LEDs around the periphery of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
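
One way such a mapping could look is sketched below: each surround channel is assigned a direction around the glasses, per-channel loudness drives LED brightness, and a coarse low/high frequency split could drive color. The channel layout, band split, and scaling are hypothetical choices, not the design actually built for this project.

```python
# Illustrative sketch: map one block of multichannel surround audio to LED
# brightness levels. All layout and scaling choices here are stand-ins.
import numpy as np

# Assumed channel layout: angles in degrees relative to the facing direction.
CHANNEL_ANGLES = {"FL": -30, "FR": 30, "C": 0, "SL": -110, "SR": 110}

def block_to_leds(block, fs=48_000, split_hz=500):
    """block: dict of channel name -> 1-D samples.
    Returns (channel, angle, brightness 0-255, dominant band) per channel."""
    leds = []
    for name, x in block.items():
        rms = np.sqrt(np.mean(x ** 2))                  # loudness -> brightness
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        band = "low" if spec[freqs < split_hz].sum() > spec[freqs >= split_hz].sum() else "high"
        brightness = int(np.clip(rms * 4000, 0, 255))   # arbitrary scaling
        leds.append((name, CHANNEL_ANGLES[name], brightness, band))
    return leds

# Example block: the front-left channel carries a loud 200 Hz tone over noise.
t = np.arange(1024) / 48_000
block = {name: 0.01 * np.random.randn(1024) for name in CHANNEL_ANGLES}
block["FL"] = 0.5 * np.sin(2 * np.pi * 200 * t)
for name, angle, level, band in block_to_leds(block):
    print(f"{name:>2} at {angle:>4} deg: brightness {level:3d} ({band})")
```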
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
Deconvolution of noisy data is an ill-posed problem, and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
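
As an illustration of this approach, the sketch below chooses the Tikhonov parameter by generalized cross-validation (GCV), one common parameter-choice method, evaluated on a downsampled copy of the data before solving the full problem. The blur kernel, noise level, and downsampling factor are assumptions made only so the example runs; the thesis's specific procedure is not reproduced.

```python
# Illustrative sketch: Tikhonov deconvolution with lambda from downsampled GCV.
import numpy as np
from scipy.linalg import toeplitz

def gcv_lambda(A, b, lams):
    """Return the value in `lams` minimizing the GCV functional for (A, b)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    best, best_g = lams[0], np.inf
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                       # Tikhonov filter factors
        g = np.sum(((1 - f) * Utb) ** 2) / (len(b) - np.sum(f)) ** 2
        if g < best_g:
            best, best_g = lam, g
    return best

def tikhonov(A, b, lam):
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

# Forward problem: Gaussian blur of a boxcar signal plus noise.
n = 200
t = np.linspace(-1, 1, n)
kernel = np.exp(-0.5 * (t / 0.05) ** 2)
kernel /= kernel.sum()
A = toeplitz(np.roll(kernel, n // 2))
x_true = ((t > -0.3) & (t < 0.3)).astype(float)
rng = np.random.default_rng(2)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Estimate lambda from every 4th sample, then solve on the full data.
lam = gcv_lambda(A[::4], b[::4], np.logspace(-4, 1, 60))
x_hat = tikhonov(A, b, lam)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"lambda from downsampled GCV: {lam:.3g}, relative error: {rel_err:.2f}")
```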
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Power spectral analysis is a fundamental aspect of signal processing used in the detection and estimation of various signal features. Signals spaced closely in frequency are difficult to resolve and can lead analysts to miss crucial details in the data. The Capon and Bartlett methods are non-parametric filterbank approaches to power spectrum estimation (PSE). The Capon algorithm is known as the "adaptive" approach to PSE because its filter impulse responses are adapted to fit the characteristics of the data, whereas the Bartlett method is known as the "conventional" approach and has a fixed, deterministic filter. Both techniques rely on the Sample Covariance Matrix (SCM). The first objective of this project is to analyze the origins and characteristics of the Capon and Bartlett methods to understand their abilities to resolve signals closely spaced in frequency. Given that both methods rely on the SCM, there is novelty in combining the two algorithms through their cross-coherence. The second objective of this project is to analyze the performance of the Capon-Bartlett Cross Spectra. This study involves MATLAB simulations of known test cases and comparisons with approximate theoretical predictions.
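
The two estimators can be sketched directly from their standard definitions: both are built on the sample covariance matrix of snapshot vectors, with Bartlett using a fixed Fourier steering vector and Capon using the inverse SCM. The snapshot length, diagonal loading, and two-tone test signal below are assumptions, and the Capon-Bartlett cross-spectrum studied in the thesis is not reproduced here.

```python
# Illustrative sketch of the Bartlett (conventional) and Capon (adaptive)
# spectral estimates computed from the sample covariance matrix (SCM).
import numpy as np

def bartlett_capon(x, m=32, nfreq=512, loading=1e-3):
    X = np.column_stack([x[i:i + m] for i in range(len(x) - m + 1)])  # snapshots
    R = X @ X.conj().T / X.shape[1]                                   # SCM
    Rinv = np.linalg.inv(R + loading * np.trace(R).real / m * np.eye(m))
    freqs = np.linspace(0.0, 0.5, nfreq)                              # cycles/sample
    bart = np.empty(nfreq)
    capon = np.empty(nfreq)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m))                     # Fourier steering vector
        bart[k] = np.real(a.conj() @ R @ a) / m**2                    # fixed filter
        capon[k] = np.real(m / (a.conj() @ Rinv @ a))                 # data-adaptive filter
    return freqs, bart, capon

# Two closely spaced sinusoids in white noise.
rng = np.random.default_rng(3)
t = np.arange(1024)
x = (np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.22 * t)
     + 0.5 * rng.standard_normal(len(t)))
freqs, bart, capon = bartlett_capon(x)
print("Bartlett peak near", round(freqs[np.argmax(bart)], 3),
      "| Capon peak near", round(freqs[np.argmax(capon)], 3))
```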
Contributors: Yoshiyama, Cassidy (Author) / Richmond, Christ (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the field of electronic music, haptic feedback is a crucial feature of digital musical instruments (DMIs) because it gives the musician a more immersive experience. This feedback might come in the form of a wearable haptic device that vibrates in response to music. Such advancements in the electronic music field are applicable to the field of speech and hearing. More specifically, wearable haptic feedback devices can enhance the musical listening experience for people who use cochlear implant (CI) devices.
This Honors Thesis is a continuation of Prof. Lauren Hayes’s and Dr. Xin Luo’s research initiative, Haptic Electronic Audio Research into Musical Experience (HEAR-ME), which investigates how to enhance the musical listening experience for CI users using a wearable haptic system. The goals of this Honors Thesis are to adapt Prof. Hayes’s system code from the Max visual programming language to the C++ object-oriented programming language and to evaluate the results produced by the developed C++ code. This adaptation allows the system to operate in real time and independently of a computer.
Towards these goals, two signal processing algorithms were developed and programmed in C++. The first algorithm is a thresholding method, which outputs a pulse of a predefined width when the input signal falls below some threshold in amplitude. The second algorithm is a root-mean-square (RMS) method, which outputs a pulse-width modulation signal with a fixed period and with a duty cycle dependent on the RMS of the input signal. The thresholding method was found to work best with speech, and the RMS method was found to work best with music. Future work entails the design of adaptive signal processing algorithms to allow the system to work more effectively on speech in a noisy environment and to emphasize a variety of elements in music.
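
A Python sketch of the two algorithms as described is given below (the thesis implements them in C++ for real-time, computer-independent operation). The frame size, threshold, pulse width, PWM period, and RMS-to-duty-cycle scaling are illustrative values only.

```python
# Sketch of the thresholding and RMS-based PWM mappings described above.
import numpy as np

def threshold_pulses(x, fs, threshold=0.05, pulse_ms=30, frame=256):
    """Emit a fixed-width pulse whenever a frame's amplitude falls below the threshold."""
    out = np.zeros_like(x)
    width = int(fs * pulse_ms / 1000)
    for start in range(0, len(x) - frame, frame):
        if np.max(np.abs(x[start:start + frame])) < threshold:
            out[start:start + width] = 1.0
    return out

def rms_pwm(x, fs, period_ms=50):
    """Fixed-period PWM output whose duty cycle tracks the RMS of each period."""
    out = np.zeros_like(x)
    period = int(fs * period_ms / 1000)
    for start in range(0, len(x), period):
        seg = x[start:start + period]
        duty = float(np.clip(np.sqrt(np.mean(seg ** 2)) * 5.0, 0.0, 1.0))  # arbitrary scaling
        out[start:start + int(duty * len(seg))] = 1.0
    return out

# Test signal: an amplitude-modulated tone with a brief quiet passage.
fs = 16_000
t = np.arange(fs) / fs
audio = 0.3 * np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))
audio[6000:8000] *= 0.01                     # quiet passage triggers threshold pulses
print("pulse samples:", int(threshold_pulses(audio, fs).sum()),
      "| PWM high samples:", int(rms_pwm(audio, fs).sum()))
```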
Contributors: Bonelli, Dominic Berlage (Author) / Papandreou-Suppappola, Antonia (Thesis director) / Hayes, Lauren (Thesis director, Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
Detecting early signs of neurodegeneration is vital for measuring the efficacy of pharmaceuticals and planning treatments for neurological diseases. This is especially true for Amyotrophic Lateral Sclerosis (ALS), where differences in symptom onset can be indicative of the prognosis. Because speech can be measured noninvasively, changes in speech production have been proposed as a promising indicator of neurological decline. However, speech changes are typically measured subjectively by a clinician. These perceptual ratings can vary widely between clinicians and within the same clinician on different patient visits, making clinical ratings less sensitive to subtle early indicators. In this paper, we propose an algorithm for the objective measurement of flutter, a quasi-sinusoidal modulation of fundamental frequency that manifests in the speech of some ALS patients. The algorithm detailed in this paper employs long-term average spectral analysis on the residual F0 track of a sustained phonation to detect the presence of flutter and is robust to longitudinal drifts in F0. The algorithm is evaluated on a longitudinal speech dataset of ALS patients at varying stages in their prognosis. Benchmarking against two stages of perceptual ratings provided by an expert speech pathologist indicates that the algorithm follows perceptual ratings with moderate accuracy and can objectively detect flutter in instances where the variability of the perceptual rating causes uncertainty.
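
The sketch below illustrates the general idea of such a flutter measure: remove slow drift from the F0 track of a sustained phonation, then look for quasi-sinusoidal modulation energy in the spectrum of the residual. The F0 frame rate, detrending order, modulation band, and synthetic example are assumptions rather than the paper's parameters.

```python
# Illustrative sketch of a flutter measure on a residual F0 track.
import numpy as np
from scipy.signal import welch

def flutter_score(f0_track, frame_rate=100.0, band=(6.0, 12.0)):
    """Fraction of residual-F0 modulation power inside the assumed flutter band."""
    t = np.arange(len(f0_track)) / frame_rate
    drift = np.polyval(np.polyfit(t, f0_track, 2), t)   # slow drift (quadratic fit)
    residual = f0_track - drift
    freqs, psd = welch(residual, fs=frame_rate, nperseg=min(256, len(residual)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Synthetic example: a 120 Hz phonation drifting slowly upward, with and
# without an added 9 Hz flutter component.
rng = np.random.default_rng(4)
frame_rate = 100.0
t = np.arange(0.0, 5.0, 1.0 / frame_rate)
f0_clean = 120.0 + 2.0 * t + 0.3 * rng.standard_normal(len(t))
f0_flutter = f0_clean + 3.0 * np.sin(2 * np.pi * 9.0 * t)
print(f"score without flutter: {flutter_score(f0_clean):.2f}, "
      f"with flutter: {flutter_score(f0_flutter):.2f}")
```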
Contributors: Peplinski, Jacob Scott (Author) / Berisha, Visar (Thesis director) / Liss, Julie (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Research on /r/ production has previously used formant analysis as the primary acoustic analysis, with particular focus on the low third formant in the speech signal. Prior imaging of speech used X-ray, MRI, and electromagnetic midsagittal articulometer systems. More recently, the signal processing technique of Mel-log spectral plots has been used to study /r/ production in children and female adults. Ultrasound also has been used to image the tongue during speech production in both clinical and research settings. The current study attempts to describe /r/ production in three different allophonic contexts: vocalic, prevocalic, and postvocalic positions. Ultrasound analysis, formant analysis, Mel-log spectral plots, and /r/ duration were measured for /r/ production in 29 adult speakers (10 male, 19 female). A possible relationship between these variables was also explored. Results showed that the amount of superior constriction in the postvocalic /r/ allophone was significantly lower than in the other /r/ allophones. Formant two was significantly lower and the distance between formants two and three was significantly higher for the prevocalic /r/ allophone. Vocalic /r/ had the longest average duration, while prevocalic /r/ had the shortest. Signal processing results revealed candidate Mel-bin values for accurate /r/ production for each allophone of /r/. The results indicate that allophones of /r/ can be distinguished based on the different analyses; however, relationships between these analyses are still unclear. Future research is needed to gather more data on /r/ acoustics and articulation and to identify possible relationships between the analyses of /r/ production.
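
For reference, the sketch below computes a Mel-log spectrum of a single frame, the kind of representation behind the Mel-bin values reported above, and reports the bin with peak energy. The filterbank size, FFT length, and synthetic /r/-like frame are assumptions; the study's analysis settings are not reproduced.

```python
# Illustrative sketch of a Mel-log spectrum for one analysis frame.
import numpy as np

def mel_log_spectrum(x, fs, n_mels=26, nfft=512):
    """Log energy in triangular mel-spaced filters applied to one frame."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    spec = np.abs(np.fft.rfft(x * np.hamming(len(x)), nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2))
    energies = np.empty(n_mels)
    for k in range(n_mels):
        lo, mid, hi = edges[k], edges[k + 1], edges[k + 2]
        up = np.clip((freqs - lo) / (mid - lo), 0.0, 1.0)
        down = np.clip((hi - freqs) / (hi - mid), 0.0, 1.0)
        energies[k] = np.sum(spec * np.minimum(up, down))   # triangular weighting
    return np.log(energies + 1e-12)

# Synthetic /r/-like frame with a lowered third resonance near 1700 Hz.
rng = np.random.default_rng(5)
fs = 10_000
t = np.arange(int(0.03 * fs)) / fs
x = (np.sin(2 * np.pi * 500 * t) + 0.8 * np.sin(2 * np.pi * 1200 * t)
     + 0.6 * np.sin(2 * np.pi * 1700 * t) + 0.05 * rng.standard_normal(len(t)))
print("Mel bin with peak log energy:", int(np.argmax(mel_log_spectrum(x, fs))))
```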
Contributors: Hirsch, Megan Elizabeth (Author) / Weinhold, Juliet (Thesis director) / Gardner, Joshua (Committee member) / Department of Speech and Hearing Science (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05