Matching Items (304)

Description

This document is intended to show the various kinds of stylistically appropriate melodic and rhythmic ornamentation that can be used in the improvisation of the Sarabandes by J.S. Bach. Traditional editions of Bach's and other Baroque-era keyboard works have reflected evolving historical trends. The historical performance movement and other attempts to "clean up" pre-1950s romanticized performances have greatly limited the freedom and experimentation that was the original intention of these dances. Prior to this study, few ornamented editions of these works have been published. Although traditional practices do not necessarily encourage classical improvisation in performance, I argue that manipulation of the melodic and rhythmic layers over the established harmonic progressions will not only provide diversity within the individual dance movements, but also further engage the ears of the performer and listener, which encourages further creative exploration. I will focus this study on the ornamentation of all six Sarabandes from J.S. Bach's French Suites and show how various types of melodic and rhythmic variation can provide aurally pleasing alternatives to the composed score without disrupting the harmonic fluency. The author intends this document to be used as a pedagogical tool, and the fully ornamented Sarabandes from J.S. Bach's French Suites are included with this document.
Contributors: Oakley, Ashley (Author) / Meir, Baruch (Thesis advisor) / Campbell, Andrew (Committee member) / Norton, Kay (Committee member) / Pagano, Caio (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts towards a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question, (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions, (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality, and (iv) an active matrix completion algorithm and its application to solve several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
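For readers unfamiliar with the batch-mode setting described above, the sketch below shows a minimal pool-based loop in Python that scores unlabeled instances by classifier uncertainty and queries a batch of them for annotation. It is only an illustration of the general idea; the logistic-regression classifier, the random placeholder data, and the margin-based criterion are assumptions of this sketch, not the dissertation's adaptive or IQP-based formulations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_batch(model, X_unlabeled, batch_size=10):
    """Pick the unlabeled instances the classifier is least certain about.

    Plain margin-based uncertainty sampling; it only illustrates the
    batch-mode loop, not the dissertation's adaptive or IQP formulations.
    """
    proba = np.sort(model.predict_proba(X_unlabeled), axis=1)
    margin = proba[:, -1] - proba[:, -2]          # best minus second-best class probability
    return np.argsort(margin)[:batch_size]        # smallest margin = most uncertain

# Illustrative run on random placeholder data (stand-in for real image/video features).
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 16))
y_labeled = rng.integers(0, 2, size=50)
X_unlabeled = rng.normal(size=(500, 16))

model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
query_indices = select_batch(model, X_unlabeled, batch_size=10)
print("Instances to send to the annotators:", query_indices)
```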
Description

The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem that consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method, RAProp, on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches proposed by me, as well as an external evaluation in comparison to the current state-of-the-art method.
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
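The propagation step in the abstract above can be pictured with a small, hedged sketch: build an agreement graph from TF-IDF cosine similarity between tweets and repeatedly mix each tweet's initial score with the scores of the tweets that agree with it. The example tweets, the 0.2 similarity threshold, the 0.5 mixing weight, and the initial scores are placeholders invented here; this is not the actual RAProp model or its three-layer graph.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "earthquake reported downtown this morning",
    "downtown earthquake this morning, stay safe",
    "buy cheap followers now, best deal",            # spam-like outlier
    "minor earthquake felt in the downtown area",
]
initial_score = np.array([0.6, 0.5, 0.9, 0.4])       # placeholder per-tweet reputation

# Agreement graph: edge weight = content similarity above a threshold.
sim = cosine_similarity(TfidfVectorizer().fit_transform(tweets))
np.fill_diagonal(sim, 0.0)
agreement = np.where(sim > 0.2, sim, 0.0)

# Propagate scores: tweets that agree with well-scored tweets gain support;
# isolated (e.g. spammy) tweets receive nothing from their neighbors.
score = initial_score.copy()
for _ in range(10):
    row_sums = agreement.sum(axis=1, keepdims=True)
    neighbor_avg = np.divide(agreement @ score[:, None], row_sums,
                             out=np.zeros_like(row_sums), where=row_sums > 0)
    score = 0.5 * initial_score + 0.5 * neighbor_avg.ravel()

print(np.round(score, 3))
```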
Description

Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
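As a hedged illustration of the rich feature set approach mentioned in the abstract above, the Python snippet below extracts a handful of orthographic and contextual features per token, in the dictionary form commonly fed to linear-chain CRF toolkits. BANNER's actual feature templates are far more extensive; the feature names and the example sentence here are illustrative assumptions only.

```python
def token_features(tokens, i):
    """Orthographic and contextual features for one token.

    A small subset of the kind of "rich feature set" used in CRF-based
    biomedical NER; real systems such as BANNER use many more templates.
    """
    w = tokens[i]
    feats = {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),
        "word.istitle": w.istitle(),
        "word.hasdigit": any(c.isdigit() for c in w),
        "prefix3": w[:3],
        "suffix3": w[-3:],
    }
    if i > 0:
        feats["prev.lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True                      # beginning of sentence
    if i < len(tokens) - 1:
        feats["next.lower"] = tokens[i + 1].lower()
    else:
        feats["EOS"] = True                      # end of sentence
    return feats

sentence = "Mutations in BRCA1 are associated with breast cancer".split()
for i, tok in enumerate(sentence):
    print(tok, token_features(sentence, i))
```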
Description

In recent years, machine learning and data mining technologies have received growing attention in several areas such as recommendation systems, natural language processing, speech and handwriting recognition, image processing and the biomedical domain. Many of these applications which deal with physiological and biomedical data require person specific or person adaptive systems. The greatest challenge in developing such systems is the subject-dependent data variation, or subject-based variability, in physiological and biomedical data, which leads to differences in data distributions, making the task of modeling these data with traditional machine learning algorithms complex and challenging. As a result, despite the wide application of machine learning, efficient deployment of its principles to model real-world data is still a challenge. This dissertation addresses the problem of subject-based variability in physiological and biomedical data and proposes person adaptive prediction models based on novel transfer and active learning algorithms, an emerging field in machine learning. One of the significant contributions of this dissertation is a person adaptive method, for early detection of muscle fatigue using Surface Electromyogram signals, based on a new multi-source transfer learning algorithm. This dissertation also proposes a subject-independent algorithm for grading the progression of muscle fatigue on a scale from 0 to 1 in a test subject, during isometric or dynamic contractions, in real time. Besides subject-based variability, biomedical image data also varies due to variations in imaging techniques, leading to distribution differences between image databases. Hence a classifier learned on one database may perform poorly on another database. Another significant contribution of this dissertation has been the design and development of an efficient biomedical image data annotation framework, based on a novel combination of transfer learning and a new batch-mode active learning method, capable of addressing the distribution differences across databases. The methodologies developed in this dissertation are relevant and applicable to a large set of computing problems where there is high variation of data between subjects or sources, such as face detection, pose detection and speech recognition. From a broader perspective, these frameworks can be viewed as a first step towards the design of automated adaptive systems for real world data.
Contributors: Chattopadhyay, Rita (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
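One simple way to picture multi-source transfer for a new subject is to combine classifiers trained on other subjects, weighted by how well each fits a few labeled samples from the target subject. The sketch below does exactly that on synthetic data; the accuracy-based weighting rule, the logistic-regression models, and the data are assumptions for illustration and do not reproduce the dissertation's algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Placeholder "source subjects": each has labeled data from a shifted distribution.
sources = []
for shift in (0.0, 0.5, 2.0):
    X = rng.normal(loc=shift, size=(200, 8))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    sources.append(LogisticRegression(max_iter=1000).fit(X, y))

# A few labeled samples from the new (target) subject.
X_t = rng.normal(loc=0.4, size=(20, 8))
y_t = (X_t[:, 0] + X_t[:, 1] > 0.8).astype(int)

# Weight each source model by its accuracy on the target samples
# (a crude stand-in for a multi-source transfer weighting scheme).
weights = np.array([m.score(X_t, y_t) for m in sources])
weights /= weights.sum()

def predict_target(X):
    probs = sum(w * m.predict_proba(X)[:, 1] for w, m in zip(weights, sources))
    return (probs > 0.5).astype(int)

print("source weights:", np.round(weights, 2))
print("target predictions:", predict_target(X_t[:5]))
```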
Description

Currently, to interact with computer based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we will need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's inbuilt parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added some new features, such as various system configurations and a memory cache, to the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing computation time by a factor of 8, and improved the usability of the system. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system, which helps in managing large sets of computer files using policies. This system provides a Rule-Oriented interface language whose syntactic structure is like that of any procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We develop a corpus of 100 policy statements and manually translate them to IPDL. This corpus is then used for the evaluation of the NL2KR system. We performed 10-fold cross validation on the system. Furthermore, using this corpus, we illustrate how different components of our NL2KR system work.
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
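To make the lambda-calculus representation in the abstract above concrete, the toy Python sketch below writes word meanings as small functions and composes them by application, mirroring how a semantic parser combines learned meanings along a parse tree. The lexicon entries and the output formula are invented for illustration; this sketch does not implement NL2KR's Inverse-lambda or Generalization modules.

```python
# Toy illustration of composing word meanings written as lambda terms, the kind of
# representation NL2KR learns; the entries below are hypothetical examples.

# Lexicon: each word maps to a small function (a stand-in for a lambda-calculus formula).
lexicon = {
    "every":     lambda restr: lambda scope: f"forall X: {restr('X')} -> {scope('X')}",
    "file":      lambda x: f"file({x})",
    "is":        lambda pred: pred,
    "backed_up": lambda x: f"backed_up({x})",
}

# Compose "every file is backed_up" by function application, mirroring a parse tree.
np_meaning = lexicon["every"](lexicon["file"])          # quantified noun phrase
vp_meaning = lexicon["is"](lexicon["backed_up"])        # verb phrase
sentence_meaning = np_meaning(vp_meaning)

print(sentence_meaning)
# forall X: file(X) -> backed_up(X)
```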
Description

Musicians endure injuries at an alarming rate, largely due to the misuse of their bodies. Musicians move their bodies for a living and therefore should understand how to move them in a healthy way. This paper presents Body Mapping as an injury prevention technique specifically directed toward collaborative pianists. A body map is the self-representation in one's brain that includes information on the structure, function, and size of one's body; Body Mapping is the process of refining one's body map to produce coordinated movement. In addition to preventing injury, Body Mapping provides a means to achieve greater musical artistry through the training of movement, attention, and the senses. With the main function of collaborating with one or more musical partners, a collaborative pianist will have the opportunity to share the knowledge of Body Mapping with many fellow musicians. This study presents the author's credentials as a Body Mapping instructor, the current status of the field of collaborative piano, and a recommendation for increased body awareness. Information on the nature and abundance of injuries and Body Mapping concepts are also analyzed. The study culminates in a course syllabus entitled An Introduction to Collaborative Piano and Body Mapping with the objective of imparting fundamental collaborative piano skills integrated with proper body use. The author hopes to inform educators of the benefits of prioritizing health among their students and to provide a Body Mapping foundation upon which their students can build technique.
Contributors: Bindel, Jennifer (Author) / Campbell, Andrew (Thesis advisor) / Doan, Jerry (Committee member) / Rogers, Rodney (Committee member) / Ryan, Russell (Committee member) / Schuring, Martin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This study compares the Hummel Concertos in A Minor, Op. 85 and B Minor, Op. 89 and the Chopin Concertos in E Minor, Op. 11 and F Minor, Op. 21. On initial hearing of Hummel's rarely played concertos, one immediately detects similarities with Chopin's concerto style. Upon closer examination, one discovers a substantial number of interesting and significant parallels with Chopin's concertos, many of which are highlighted in this research project. Hummel belongs to a generation of composers who made a shift away from the Classical style, and Chopin, as an early Romantic, absorbed much from his immediate predecessors in establishing his highly individual style. I have chosen to focus on Chopin's concertos to demonstrate this association. The essay begins with a discussion of the historical background of Chopin's formative years as it pertains to the formation of his compositional style, Hummel's role and influence in the contemporary musical arena, as well as interactions between the two composers. It then provides the historical background of the aforementioned concertos leading to a comparative analysis, which includes structural, melodic, harmonic, and motivic parallels. With a better understanding of his stylistic influences, and of how Chopin assimilated them in the creation of his masterful works, the performer can adopt a more informed approach to the interpretation of these two concertos, which are among the most beloved masterpieces in piano literature.
Contributors: Yam, Jessica (Author) / Hamilton, Robert (Thesis advisor) / Levy, Benjamin (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

We solve the problem of activity verification in the context of sustainability. Activity verification is the process of proving the user assertions pertaining to a certain activity performed by the user. Our motivation lies in incentivizing the user for engaging in sustainable activities like taking public transport or recycling. Such incentivization schemes require the system to verify the claim made by the user. The system verifies these claims by analyzing the supporting evidence captured by the user while performing the activity. The proliferation of portable smart-phones in the past few years has provided us with a ubiquitous and relatively cheap platform, having multiple sensors such as an accelerometer, gyroscope, and microphone, to capture this evidence data in-situ. In this research, we investigate supervised and semi-supervised learning techniques for activity verification. Both techniques make use of the data set constructed from the evidence submitted by the user. Supervised learning makes use of annotated evidence data to build a function to predict the class labels of the unlabeled data points. The evidence data captured can be either unimodal or multimodal in nature. We use accelerometer data as evidence for transportation mode verification and image data as evidence for recycling verification. After training the system, we achieve a maximum accuracy of 94% when classifying the transport mode and 81% when detecting recycling activity. In the case of recycling verification, we could improve the classification accuracy by asking the user for more evidence. We present some techniques to ask the user for the next best piece of evidence that maximizes the probability of classification. Using these techniques for detecting recycling activity, the accuracy increases to 93%. The major disadvantage of using supervised models is that they require extensive annotated training data, which is expensive to collect. Due to the limited training data, we look at graph-based inductive semi-supervised learning methods to propagate labels among the unlabeled samples. In the semi-supervised approach, we represent each instance in the data set as a node in a graph. Since it is a complete graph, edges interconnect these nodes, with each edge having some weight representing the similarity between the points. We propagate the labels in this graph, based on the proximity of the data points to the labeled nodes. We estimate the performance of these algorithms by measuring how close the probability distribution of the data after label propagation is to the probability distribution of the ground truth data. Since labeling has a cost associated with it, in this thesis we propose two algorithms that help us in selecting the minimum number of labeled points needed to propagate the labels accurately. Our proposed algorithm achieves a maximum of 73% increase in performance when compared to the baseline algorithm.
Contributors: Desai, Vaishnav (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
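The label-propagation idea summarized above can be sketched in a few lines of Python: build a complete similarity graph with Gaussian edge weights, then repeatedly average each node's label distribution over its neighbors while clamping the labeled nodes. The synthetic two-cluster data, the kernel width, and the iteration count below are placeholders; the sketch does not reproduce the thesis's evidence features or its algorithms for choosing which points to label.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder feature vectors (e.g., accelerometer-derived features); two loose clusters.
X = np.vstack([rng.normal(0.0, 0.5, size=(30, 4)),
               rng.normal(3.0, 0.5, size=(30, 4))])
labels = -1 * np.ones(len(X), dtype=int)   # -1 marks unlabeled points
labels[0], labels[30] = 0, 1               # one labeled point per class

# Complete similarity graph with Gaussian (RBF) edge weights.
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
W = np.exp(-dists**2 / 2.0)
np.fill_diagonal(W, 0.0)

# Iterative label propagation: each node takes the weighted average of its
# neighbors' label distributions; labeled nodes are re-clamped every round.
F = np.zeros((len(X), 2))
F[labels >= 0, labels[labels >= 0]] = 1.0
clamp = F.copy()
for _ in range(50):
    F = W @ F
    F /= F.sum(axis=1, keepdims=True)
    F[labels >= 0] = clamp[labels >= 0]

predicted = F.argmax(axis=1)
print("predicted labels, first and last five points:", predicted[:5], predicted[-5:])
```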
Description

Libby Larsen is one of the most performed and acclaimed composers today. She is a spirited, compelling, and sensitive composer whose music enhances the poetry of America's most prominent authors. Notable among her works are song cycles for soprano based on the poetry of female writers, among them novelist and poet Willa Cather (1873-1947). Larsen has produced two song cycles on works from Cather's substantial output of fiction: one based on Cather's short story, "Eric Hermannson's Soul," titled Margaret Songs: Three Songs from Willa Cather (1996); and later, My Antonia (2000), based on Cather's novel of the same title. In Margaret Songs, Cather's poetry and short stories--specifically the character of Margaret Elliot--combine with Larsen's unique compositional style to create a surprising collaboration. This study explores how Larsen in these songs delves into the emotional and psychological depths of Margaret's character, not fully formed by Cather. It is only through Larsen's music and Cather's poetry that Margaret's journey through self-discovery and love becomes fully realized. This song cycle is a glimpse through the eyes of two prominent female artists on the societal pressures placed upon Margaret's character, many of which still resonate with women in today's culture. This study examines the work Margaret Songs by discussing Willa Cather, her musical influences, and the conditions surrounding the writing of "Eric Hermannson's Soul." It looks also into Cather's influence on Libby Larsen and the commission leading to Margaret Songs. Finally, a description of the musical, dramatic, and textual content of the songs completes this interpretation of the interactions of Willa Cather, Libby Larsen, and the character of Margaret Elliot.
Contributors: McLain, Christi Marie (Author) / FitzPatrick, Carole (Thesis advisor) / Dreyfoos, Dale (Committee member) / Holbrook, Amy (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013