Matching Items (83)
Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This is in contrast to lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used in most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more compression is applied to a waveform, the more degradation there is, and once a file has been lossy compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy compression method that eliminates singular values from a signal's matrix.
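For readers unfamiliar with the technique, the sketch below is a minimal illustration of SVD-based lossy compression, not the thesis code: a 1-D audio signal is reshaped into a frame matrix, decomposed with NumPy's SVD, and reconstructed after zeroing all but the largest singular values. The frame length and retained rank are assumptions chosen for illustration.

```python
import numpy as np

def svd_compress(signal, frame_len=256, keep=16):
    """Lossy-compress a 1-D signal by zeroing all but `keep` singular values.

    The signal is reshaped into a (frames x frame_len) matrix, decomposed with
    the SVD, and rebuilt from a rank-`keep` approximation. Smaller `keep`
    means more compression and more degradation, and the step is not reversible.
    """
    n = (len(signal) // frame_len) * frame_len        # drop the ragged tail
    X = signal[:n].reshape(-1, frame_len)             # frames as matrix rows
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[keep:] = 0.0                                    # eliminate small singular values
    return ((U * s) @ Vt).reshape(-1)                 # low-rank reconstruction

# Example: compress a noisy tone and measure the reconstruction error.
rng = np.random.default_rng(0)
t = np.arange(32768) / 44100.0
x = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
x_hat = svd_compress(x)
print("MSE after rank-16 reconstruction:", np.mean((x[:x_hat.size] - x_hat) ** 2))
```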

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Speech motor learning is important for learning to speak during childhood and for maintaining the speech system throughout adulthood. Motor and auditory cortical regions play crucial roles in speech motor learning. This experiment aimed to use transcranial alternating current stimulation, a neurostimulation technique, to influence auditory and motor cortical activity. In this study, we used an auditory-motor adaptation task as an experimental model of speech motor learning. Subjects repeated words while receiving formant shifts, which made their speech sound different from what they actually produced. During the adaptation task, subjects received beta (20 Hz), alpha (10 Hz), or sham stimulation, applied to the ventral motor cortex, which is involved in planning speech movements. We found that the stimulation did not influence the magnitude of adaptation, and we suggest that some limitations of the study may have contributed to these negative results.

Contributors: Mannan, Arhum (Author) / Daliri, Ayoub (Thesis director) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems.

First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field-programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with high RMSE tracking performance.

We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking.

For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location, and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential.

Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
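The thesis builds on the standard particle filter; as a point of reference only, here is a minimal bootstrap particle filter for a toy 1-D random-walk model (plain NumPy, not the parallel PPF-IMH, MPF, or PF-PHDF variants described above; the noise levels and particle count are arbitrary assumptions).

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=0.5, obs_std=1.0, seed=0):
    """Generic bootstrap particle filter for a 1-D random-walk state model.

    State model:       x[t] = x[t-1] + process noise
    Observation model: y[t] = x[t] + measurement noise
    Returns the posterior-mean state estimate at each time step.
    """
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)          # initial particle cloud
    estimates = []
    for y in observations:
        # Propagate particles through the state model.
        particles = particles + process_std * rng.standard_normal(n_particles)
        # Weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(np.dot(weights, particles))
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Example: track a slowly drifting state from noisy observations.
rng = np.random.default_rng(1)
true_state = np.cumsum(0.5 * rng.standard_normal(200))
obs = true_state + rng.standard_normal(200)
est = bootstrap_particle_filter(obs)
print("RMSE:", np.sqrt(np.mean((est - true_state) ** 2)))
```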
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Doppler radar can be used to measure respiration and heart rate without contact and through obstacles. In this work, a Doppler radar architecture at 2.4 GHz and a new signal processing algorithm to estimate the respiration and heart rate are presented. The received signal is dominated by transceiver noise, LO phase noise, and clutter, which reduce the signal-to-noise ratio of the desired signal. The proposed architecture and algorithm mitigate these issues and obtain an accurate estimate of the heart and respiration rate. A quadrature low-IF transceiver architecture is adopted to resolve the null-point problem as well as to avoid 1/f noise and DC offset due to mixer-LO coupling. An adaptive clutter cancellation algorithm is used to enhance receiver sensitivity, coupled with a novel Pattern Search in Noise Subspace (PSNS) algorithm to estimate respiration and heart rate. PSNS is a modified MUSIC algorithm which uses the phase noise to enhance Doppler shift detection. A prototype system was implemented using off-the-shelf TI and RFMD transceivers, and tests were conducted with eight individuals. The measured results show accurate estimates of the cardiopulmonary signals in low-SNR conditions, and the system has been tested up to a distance of 6 meters.
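PSNS itself is described only at a high level above; as a hedged sketch of the underlying idea, the code below runs a standard MUSIC pseudospectrum search over a simulated demodulated (I/Q) chest-motion signal to pick out respiration- and heartbeat-rate tones. The sampling rate, covariance size, and model order are assumptions, and the phase-noise exploitation that distinguishes PSNS is not modeled.

```python
import numpy as np

def music_spectrum(x, fs, freqs, m=40, n_sources=2):
    """Standard MUSIC pseudospectrum over candidate rates (Hz) for a complex
    baseband (I/Q) signal x sampled at fs."""
    # Sample covariance matrix from overlapping length-m snapshots of the signal.
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snapshots.T @ snapshots.conj() / snapshots.shape[0]
    # Noise subspace: eigenvectors of the m - n_sources smallest eigenvalues.
    _, eigvecs = np.linalg.eigh(R)                       # eigenvalues ascending
    En = eigvecs[:, : m - n_sources]
    t = np.arange(m) / fs
    spectrum = [1.0 / np.linalg.norm(En.conj().T @ np.exp(2j * np.pi * f * t)) ** 2
                for f in freqs]
    return np.array(spectrum)

# Example: 0.25 Hz respiration plus a weaker 1.2 Hz heartbeat tone in I/Q noise.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
x = np.exp(2j * np.pi * 0.25 * t) + 0.2 * np.exp(2j * np.pi * 1.2 * t)
x += 0.3 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
freqs = np.linspace(0.05, 2.0, 400)
p = music_spectrum(x, fs, freqs)
print("Dominant rate estimate (Hz):", freqs[np.argmax(p)])
```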
Contributors: Khunti, Hitesh Devshi (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Bliss, Daniel (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Immunosignaturing is a medical test for assessing the health status of a patient by applying microarrays of random-sequence peptides to determine the patient's immune fingerprint, associating antibodies from a biological sample with immune responses. The immunosignature measurements can potentially provide pre-symptomatic diagnosis for infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering integrated with immunosignature feature extraction approaches are proposed. The techniques are based on extracting novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. In order to extend the clustering methods for diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows for further clustering if additional patient data are received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
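As a hedged illustration of the clustering machinery only (not the thesis's stochastic features or beta-process models), the sketch below fits a Dirichlet-process Gaussian mixture to synthetic stand-in feature vectors using scikit-learn's BayesianGaussianMixture, letting the number of occupied clusters be inferred from the data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for immunosignature feature vectors: rows are patients,
# columns are features extracted from peptide-microarray binding intensities.
rng = np.random.default_rng(3)
state_a = rng.normal(0.0, 1.0, size=(40, 10))             # e.g. one immune state
state_b = rng.normal(3.0, 1.0, size=(40, 10))             # e.g. a second immune state
features = np.vstack([state_a, state_b])

# A Dirichlet-process Gaussian mixture infers how many clusters (candidate
# immune states) are actually occupied, rather than fixing the number up front.
dpgmm = BayesianGaussianMixture(
    n_components=10,                                      # upper bound on clusters
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
print("Occupied clusters:", np.unique(labels))
```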
Contributors: Malin, Anna (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The world of a hearing-impaired person is much different from that of somebody capable of discerning different frequencies and magnitudes of sound waves via their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they're facing. The purpose of this project was to find a way to even the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created. This is a system that takes a surround sound input and illuminates LEDs around the periphery of a pair of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed along with its viability in regards to a deaf person's ability to learn the technology and decipher the visual cues.
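The thesis does not publish its mapping, but a minimal sketch of the core idea might look like the following: each surround channel's short-time loudness drives the brightness of one LED on the glasses frame. The channel names, frame size, and RMS-only mapping (ignoring the frequency dimension mentioned above) are assumptions.

```python
import numpy as np

def led_levels(frame, channels=("FL", "FR", "SL", "SR", "RL", "RR")):
    """Map one short frame of multi-channel audio (samples x channels) to
    per-LED brightness values in [0, 255], based on each channel's RMS level."""
    rms = np.sqrt(np.mean(frame ** 2, axis=0))            # loudness of each channel
    rms = rms / (rms.max() + 1e-12)                       # normalize to the loudest channel
    return {ch: int(255 * level) for ch, level in zip(channels, rms)}

# Example: a tone on the rear-left channel should light the rear-left LED.
rng = np.random.default_rng(4)
frame = 0.05 * rng.standard_normal((1024, 6))             # quiet noise on all 6 channels
frame[:, 4] += np.sin(2 * np.pi * 440 * np.arange(1024) / 48000)   # RL channel
print(led_levels(frame))
```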
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description

Epilepsy affects numerous people around the world and is characterized by recurring seizures, prompting efforts to predict them so that precautionary measures may be employed. One promising algorithm extracts spatiotemporal correlation-based features from intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis. Doing so also provides insight into how to construct more effective feature vectors.
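As a hedged sketch of the general approach (synthetic data, an arbitrary window size, and a stock RBF-kernel SVM rather than the thesis's exact pipeline), the code below turns each iEEG window into a vector of pairwise channel correlations and trains a support vector machine on the result.

```python
import numpy as np
from sklearn.svm import SVC

def correlation_features(window):
    """Flatten the upper triangle of the channel-by-channel correlation matrix
    of one iEEG window (samples x channels) into a feature vector."""
    corr = np.corrcoef(window.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Synthetic stand-in data: "preictal" windows get a shared component that
# raises the inter-channel correlation.
rng = np.random.default_rng(5)
def make_window(correlated):
    base = rng.standard_normal((256, 8))
    if correlated:
        base += 2.0 * rng.standard_normal((256, 1))
    return base

X = np.array([correlation_features(make_window(c)) for c in [0] * 50 + [1] * 50])
y = np.array([0] * 50 + [1] * 50)                         # 0 = interictal, 1 = preictal

clf = SVC(kernel="rbf").fit(X[::2], y[::2])               # train on half, test on the rest
print("Held-out accuracy:", clf.score(X[1::2], y[1::2]))
```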
Contributors: Ma, Owen (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing may produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences with a happy or sad emotion. The results showed that there was no significant group effect for the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
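The abstract describes the pitch cues only at a high level; a minimal sketch of how such between-emotion ratios could be computed is shown below, using librosa's pYIN pitch tracker. The file names are hypothetical and the pitch range is an assumption.

```python
import numpy as np
import librosa

def f0_stats(path, fmin=65.0, fmax=500.0):
    """Mean and standard deviation of F0 (Hz) for one recorded sentence."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    return np.nanmean(f0), np.nanstd(f0)                  # ignore unvoiced (NaN) frames

# Hypothetical file names for one speaker's happy and sad productions of a sentence.
happy_mean, happy_sd = f0_stats("speaker01_sentence01_happy.wav")
sad_mean, sad_sd = f0_stats("speaker01_sentence01_sad.wav")

# Cues of the kind analyzed in the study: ratios of mean F0 and of F0 variability
# between the happy and sad productions (values near 1 indicate little contrast).
print("Mean-F0 ratio (happy/sad):", happy_mean / sad_mean)
print("F0-SD ratio   (happy/sad):", happy_sd / sad_sd)
```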

Contributors: Macdonald, Andrina Elizabeth (Author) / Luo, Xin (Thesis director) / Pittman, Andrea (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Head turning is a common sound localization strategy in primates. A novel system that can track head movement and acoustic signals received at the entrance to the ear canal was tested to obtain binaural sound localization information during fast head movements of the marmoset monkey. Analysis of the binaural information was conducted with a focus on the inter-aural level difference (ILD) and inter-aural time difference (ITD) at various head positions over time. The results showed that during fast head turns, the ITDs exhibited significant and clear changes in trajectory in response to low-frequency stimuli; however, significant phase ambiguity occurred at frequencies greater than 2 kHz. Analysis of ITD and ILD information with animal vocalizations as the stimulus was also conducted. The results indicated that ILDs may provide more information for understanding the dynamics of head movement in response to animal vocalizations in the environment. The primary significance of this experimentation is the successful implementation of a system capable of simultaneously recording head movement and acoustic signals at the ear canals. The collected data provide insight into the usefulness of ITD and ILD as binaural cues during head movement.
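As a hedged illustration of the two binaural cues (not the recording system described above), the sketch below estimates ITD from the cross-correlation peak and ILD from the RMS level ratio of the two ear-canal signals; a 500 Hz tone is used since, as noted above, ITD becomes ambiguous at higher frequencies.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate the inter-aural time difference (ITD, seconds) from the peak of
    the cross-correlation and the inter-aural level difference (ILD, dB) from
    the RMS ratio, for one short frame of the two ear-canal signals."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    itd = lags[np.argmax(corr)] / fs                      # positive when the left ear leads
    ild = 20 * np.log10(np.sqrt(np.mean(left ** 2)) /
                        np.sqrt(np.mean(right ** 2)))     # positive when the left ear is louder
    return itd, ild

# Example: a 500 Hz tone arriving ~0.3 ms earlier and ~3 dB louder at the left ear.
fs = 48000
t = np.arange(0, 0.05, 1 / fs)                            # 2400 samples = 25 full cycles
src = np.sin(2 * np.pi * 500 * t)
left = 1.4 * src
right = np.roll(src, int(0.0003 * fs))                    # delayed copy at the right ear
itd, ild = itd_ild(left, right, fs)
print(f"ITD = {itd * 1e3:.2f} ms, ILD = {ild:.1f} dB")
```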
Contributors: Labban, Kyle John (Author) / Zhou, Yi (Thesis director) / Buneo, Christopher (Committee member) / Dorman, Michael (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Alzheimer's disease is the 6th leading cause of death in the United States and affects millions of people across the world each year. Currently, there are no medications or treatments available to slow or stop the progression of Alzheimer's disease. The GENUS therapy developed at the Massachusetts Institute of Technology has shown positive results in slowing the progression of the disease in animal trials. This thesis is a continuation of that study, developing and building a testing apparatus for human clinical trials. Included is a complete outline of the design, development, testing measures, and instructional aid for the final apparatus.
Contributors: Scheller, Rachel D (Author) / Bliss, Daniel (Thesis director) / Corman, Steven (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12