Matching Items (677)

Contributors: Wasbotten, Leia (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-30
Description

Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with low RMSE. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing-based methods achieve comparable tracking performance with a significantly reduced number of sensors.
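
As a point of reference for the particle filtering at the core of this thesis, the sketch below implements a generic bootstrap particle filter for a hypothetical one-dimensional state-space model (the 0.9 autoregression, noise levels, and Gaussian likelihood are all assumptions chosen for illustration). It shows only the predict-weight-resample cycle; the PPF-IMH algorithm's independent Metropolis-Hastings stage and its parallel decomposition are not reproduced here.

```python
# Minimal bootstrap particle filter for a 1-D state-space model.
# Generic sequential Monte Carlo sketch only; the thesis's PPF-IMH
# adds an IMH resampling stage and a parallel decomposition not shown.
import numpy as np

def particle_filter(observations, n_particles=1000, process_std=1.0, obs_std=1.0):
    rng = np.random.default_rng(0)
    # Assumed toy model: x_t = 0.9 * x_{t-1} + process noise, y_t = x_t + obs noise.
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) state dynamics.
        particles = 0.9 * particles + rng.normal(0.0, process_std, n_particles)
        # Weight by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2) + 1e-300
        weights /= weights.sum()
        # Posterior mean estimate, then multinomial resampling.
        estimates.append(np.sum(weights * particles))
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Hypothetical usage on a simulated observation sequence.
obs = np.cumsum(np.random.default_rng(1).normal(0.0, 1.0, 100))
print(particle_filter(obs)[-5:])
```

The resampling step is the usual communication bottleneck when such filters are parallelized, which is the kind of bottleneck the thesis analyzes.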
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Libby Larsen is one of the most performed and acclaimed composers today. She is a spirited, compelling, and sensitive composer whose music enhances the poetry of America's most prominent authors. Notable among her works are song cycles for soprano based on the poetry of female writers, among them novelist and poet Willa Cather (1873-1947). Larsen has produced two song cycles on works from Cather's substantial output of fiction: one based on Cather's short story "Eric Hermannson's Soul," titled Margaret Songs: Three Songs from Willa Cather (1996); and later My Antonia (2000), based on Cather's novel of the same title. In Margaret Songs, Cather's poetry and short stories (specifically the character of Margaret Elliot) combine with Larsen's unique compositional style to create a surprising collaboration. This study explores how, in these songs, Larsen delves into the emotional and psychological depths of Margaret's character, which Cather left not fully formed. It is only through Larsen's music and Cather's poetry that Margaret's journey through self-discovery and love becomes fully realized. This song cycle offers a glimpse, through the eyes of two prominent female artists, of the societal pressures placed upon Margaret's character, many of which still resonate with women in today's culture. This study examines Margaret Songs by discussing Willa Cather, her musical influences, and the conditions surrounding the writing of "Eric Hermannson's Soul." It also looks into Cather's influence on Libby Larsen and the commission leading to Margaret Songs. Finally, a description of the musical, dramatic, and textual content of the songs completes this interpretation of the interactions of Willa Cather, Libby Larsen, and the character of Margaret Elliot.
Contributors: McLain, Christi Marie (Author) / FitzPatrick, Carole (Thesis advisor) / Dreyfoos, Dale (Committee member) / Holbrook, Amy (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Puerto Rico has produced many important composers who have contributed to the musical culture of the nation during the last 200 years. However, a considerable amount of their music has proven difficult to access and may contain numerous errors. This research project intends to contribute to the accessibility of such music and to encourage similar studies of Puerto Rican music. This study focuses on the music of Héctor Campos Parsi (1922-1998), one of the most prominent composers of the 20th century in Puerto Rico. After an overview of the historical background of music on the island and the biography of the composer, four works from his art song repertoire are examined in detail. A product of this study is the first corrected edition of his cycles Canciones de Cielo y Agua, Tres Poemas de Corretjer, and Los Paréntesis, and the song Majestad Negra. These compositions date from 1947 to 1959 and reflect both the European and nationalistic writing styles of the composer during this time. Data for these corrections have been obtained from the composer's manuscripts, published and unpublished editions, and published recordings. The corrected scores are ready for publication, and a compact disc of this repertoire, performed by soprano Melliangee Pérez and the author, has been recorded to bring these revisions to life. Despite the best intentions of the author, the various copyright issues have yet to be resolved. It is hoped that this document will provide the foundation for a resolution and that these important works will be available for public performance and study in the near future.
Contributors: Rodríguez Morales, Luis F., 1980- (Author) / Campbell, Andrew (Thesis advisor) / Buck, Elizabeth (Committee member) / Holbrook, Amy (Committee member) / Kopta, Anne (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Contributors: Yi, Joyce (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-22
Description

Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both of these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the lack of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases to cluster the ECG data, a number which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach places no restriction on the number of disease classes, nor does it require any training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
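
For readers unfamiliar with Dirichlet process mixtures, the fragment below clusters hypothetical two-dimensional "beat feature" vectors with scikit-learn's truncated variational Dirichlet process Gaussian mixture, which likewise avoids fixing the number of classes in advance. The synthetic features, component cap, and variational inference scheme are illustrative stand-ins, not the method developed in the dissertation.

```python
# Sketch: clustering feature vectors with a Dirichlet process Gaussian
# mixture, in the spirit of the unsupervised ECG classifier described
# above. scikit-learn's truncated variational DP-GMM is a stand-in; the
# dissertation's inference scheme and ECG features are not reproduced.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 2-D "beat feature" vectors from three unknown morphologies.
features = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 2.0, 4.0)])

dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, not an assumed class count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)
labels = dpgmm.predict(features)
print("clusters actually used:", np.unique(labels).size)
```

Because the stick-breaking prior shrinks unneeded components toward zero weight, the model effectively discovers the number of morphologies present in the data.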
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by proportions of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and, to a lesser extent, Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) were powerless across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
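
A PPP-value can be illustrated in a few lines: draw from the posterior, generate a replicated data set per draw, and record how often the replicated discrepancy meets or exceeds the realized one. The toy below uses a Gaussian mean with known variance and a chi-square-type discrepancy (both assumed for illustration); the study's BN models and fit functions such as SGDDM or Q3 are far more involved.

```python
# Illustration of a posterior predictive p-value (PPP-value) for a
# simple discrepancy measure; a Gaussian-mean toy, not the BN setting.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=50)                  # observed data (toy)

# Posterior draws for the mean under a flat prior with known sigma = 1:
# mu | y ~ Normal(ybar, 1/n).
post_mu = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)

def discrepancy(data, mu):
    return np.sum((data - mu) ** 2)                # chi-square-type measure

extreme = 0
for mu in post_mu:
    y_rep = rng.normal(mu, 1.0, size=len(y))       # replicated data set
    extreme += discrepancy(y_rep, mu) >= discrepancy(y, mu)
ppp = extreme / len(post_mu)                       # near 0.5 signals good fit
print("PPP-value:", ppp)
```

PPP-values piled near 0 or 1 flag misfit, while the tendency of values to cluster near 0.5 even under misfit is the conservativeness the study documents.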
Contributors: Crawford, Aaron (Author) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing the capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since the fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate the probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion, and observations are obtained using the predicted optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, so an approximation method is proposed. With this approach, the real-time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
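
The mode-probability bookkeeping of a multiple-model filter can be sketched compactly. The snippet below tracks two hypothetical fatigue-loading modes with assumed growth rates, transition matrix, noise level, and a crude fixed-gain measurement fusion, reweighting mode probabilities as noisy crack-length readings arrive; the per-mode state filters, active-sensing waveform selection, and time-frequency features of the actual approach are omitted.

```python
# Simplified multiple-model Bayesian update over fatigue-loading modes,
# sketching the model-probability bookkeeping of an IMM-style filter.
# The full approach runs a proper state filter per mode, omitted here.
import numpy as np

modes = np.array([0.01, 0.05])        # assumed crack-growth rates per block
trans = np.array([[0.95, 0.05],       # assumed mode transition probabilities
                  [0.05, 0.95]])
probs = np.array([0.5, 0.5])          # prior mode probabilities
crack = np.full(2, 1.0)               # per-mode crack-length estimates (mm)
obs_std = 0.05                        # assumed measurement noise

for z in [1.012, 1.021, 1.033, 1.082, 1.131]:     # noisy crack-length readings
    probs = trans.T @ probs                        # mode mixing step
    crack = crack + modes                          # per-mode growth prediction
    lik = np.exp(-0.5 * ((z - crack) / obs_std) ** 2)
    probs = probs * lik / np.sum(probs * lik)      # Bayesian mode update
    crack = crack + 0.5 * (z - crack)              # crude fixed-gain fusion
    print(f"z={z:.3f}  P(mode)={np.round(probs, 3)}  est={probs @ crack:.3f}")
```

As the readings accelerate, the posterior mass shifts from the slow-growth mode to the fast-growth mode, which is the behavior the loading-mode estimation exploits.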
Contributors: Huff, Daniel W. (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This dissertation applies the Bayesian approach as a method to improve the estimation efficiency of existing econometric tools. The first chapter suggests the Continuous Choice Bayesian (CCB) estimator, which combines the Bayesian approach with the Continuous Choice (CC) estimator suggested by Imai and Keane (2004). Using a simulation study, I provide two important findings. First, the CC estimator clearly has better finite sample properties than a frequently used Discrete Choice (DC) estimator. Second, the CCB estimator has better estimation efficiency when the data size is relatively small, and it still retains the advantage of the CC estimator over the DC estimator. The second chapter estimates baseball's managerial efficiency using a stochastic frontier function with the Bayesian approach. When applying a stochastic frontier model to baseball panel data, the difficulty is that the dataset often has a small number of periods, which results in large estimation variance. To overcome this problem, I apply the Bayesian approach to the stochastic frontier analysis. I compare the credibility intervals of efficiencies from the Bayesian estimator with the classical frequentist confidence intervals. Simulation results show that when I use the Bayesian approach, I achieve smaller estimation variance while not losing any reliability in point estimation. I then apply the Bayesian stochastic frontier analysis to answer some interesting questions in baseball.
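
The Bayesian route to a stochastic frontier can be illustrated with the standard normal/half-normal composed-error model and a short random-walk Metropolis sampler. Everything below (the data-generating values, flat priors, and fixed variance components) is an assumption chosen for brevity, not the panel specification used in the chapter.

```python
# Toy Bayesian stochastic frontier: y_i = b0 + b1*x_i + v_i - u_i with
# v ~ N(0, sigma_v) noise and u ~ |N(0, sigma_u)| inefficiency. A short
# random-walk Metropolis run sketches the Bayesian route; the chapter's
# panel model and priors are more elaborate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 60
x = rng.uniform(0, 2, n)
u = np.abs(rng.normal(0, 0.4, n))                  # inefficiency draws
y = 1.0 + 0.8 * x + rng.normal(0, 0.2, n) - u

def log_post(b0, b1, sv=0.2, su=0.4):
    # Normal/half-normal composed-error log-likelihood (flat priors assumed).
    e = y - b0 - b1 * x
    s = np.hypot(sv, su)
    lam = su / sv
    return np.sum(np.log(2) + norm.logpdf(e, scale=s) + norm.logcdf(-lam * e / s))

theta, draws = np.array([0.0, 0.0]), []
lp = log_post(*theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)          # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    draws.append(theta)
print("posterior means (b0, b1):", np.mean(draws[1000:], axis=0))
```

Quantiles of such posterior draws give the credibility intervals whose width the chapter compares against classical frequentist confidence intervals.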
Contributors: Choi, Kwang-shin (Author) / Ahn, Seung (Thesis advisor) / Mehra, Rajnish (Committee member) / Park, Sungho (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Research methods based on the frequentist philosophy use prior information in a priori power calculations and when determining the necessary sample size for the detection of an effect, but not in statistical analyses. Bayesian methods incorporate prior knowledge into the statistical analysis in the form of a prior distribution. When prior information about a relationship is available, the estimates obtained can differ drastically depending on the choice of Bayesian or frequentist method. Study 1 in this project compared the performance of five methods for obtaining interval estimates of the mediated effect in terms of coverage, Type I error rate, empirical power, interval imbalance, and interval width at N = 20, 40, 60, 100, and 500. In Study 1, Bayesian methods with informative prior distributions performed almost identically to Bayesian methods with diffuse prior distributions, and had more power than normal theory confidence limits, lower Type I error rates than the percentile bootstrap, and coverage, interval width, and imbalance comparable to normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 2 evaluated whether a Bayesian method with true parameter values as prior information outperforms the other methods. The findings indicate that with the true parameter values as prior information, Bayesian credibility intervals with informative prior distributions have more power, less imbalance, and narrower intervals than Bayesian credibility intervals with diffuse prior distributions, normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 3 examined how much power increases when the precision of the prior distribution for either the action or the conceptual path in mediation analysis is increased by a factor of ten. Power generally increases with increases in precision, but there are many sample size and parameter value combinations for which a tenfold increase in precision does not lead to a substantial increase in power.
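
The effect of prior precision on the credibility interval for a mediated effect a*b can be sketched with a conjugate normal-normal update per path. The path estimates, standard errors, and prior settings below are hypothetical; the studies use full Bayesian model fitting rather than this normal-approximation shortcut, but the narrowing of the interval under an informative prior centered at the true values mirrors the Study 2 design.

```python
# Sketch: Bayesian credibility interval for the mediated effect a*b,
# combining a normal prior with a normal likelihood approximation per
# path (precision weighting), then sampling. Illustrative shortcut only.
import numpy as np

rng = np.random.default_rng(0)

def posterior_draws(est, se, prior_mean, prior_var, size=10_000):
    # Conjugate normal-normal update: precision-weighted mean and variance.
    post_prec = 1 / prior_var + 1 / se**2
    post_mean = (prior_mean / prior_var + est / se**2) / post_prec
    return rng.normal(post_mean, np.sqrt(1 / post_prec), size)

# Hypothetical path estimates (a: X->M, b: M->Y) and standard errors,
# with priors centered at the assumed true values 0.39 and 0.30.
a_hat, se_a, b_hat, se_b = 0.39, 0.14, 0.30, 0.13

for label, pv in [("diffuse prior (var=100)", 100.0),
                  ("informative prior (var=0.04)", 0.04)]:
    ab = posterior_draws(a_hat, se_a, 0.39, pv) * posterior_draws(b_hat, se_b, 0.30, pv)
    lo, hi = np.percentile(ab, [2.5, 97.5])
    print(f"{label}: 95% credibility interval for a*b = ({lo:.3f}, {hi:.3f})")
```

An interval for a*b that excludes zero corresponds to detecting the mediated effect, so the narrower informative-prior interval translates into the higher power reported in Study 2.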
Contributors: Miocevic, Milica (Author) / Mackinnon, David P. (Thesis advisor) / Levy, Roy (Committee member) / West, Stephen G. (Committee member) / Enders, Craig (Committee member) / Arizona State University (Publisher)
Created: 2014