This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses available in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 70
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts towards a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question, (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions, (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality, and (iv) an active matrix completion algorithm and its application to solve several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
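
The batch selection idea described in the abstract above can be illustrated with a brief, hedged sketch. The dissertation's own formulations (adaptive batch sizing, fuzzy labels, convex relaxations of the IQP) are not reproduced here; instead, a generic uncertainty-based batch query in Python stands in for the overall workflow, with all data, names, and parameters invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_batch(model, X_unlabeled, batch_size):
    """Select the batch_size unlabeled instances the current model is least
    certain about (highest predictive entropy), a simple stand-in for the
    dissertation's optimization-based batch selection."""
    proba = model.predict_proba(X_unlabeled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 5))
y_labeled = rng.integers(0, 2, size=20)
X_unlabeled = rng.normal(size=(200, 5))

model = LogisticRegression().fit(X_labeled, y_labeled)
query_indices = select_batch(model, X_unlabeled, batch_size=10)
print(query_indices)  # indices that would be sent to human annotators
```

In the dissertation's methods, this simple entropy ranking would be replaced by criteria that choose the batch size and the queried instances jointly.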
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
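
As a hedged illustration of the "rich feature set" approach mentioned in the abstract above, the sketch below builds per-token feature dictionaries of the kind a linear-chain CRF toolkit would consume. The feature names, helper function, and example sentence are invented for illustration and are far simpler than BANNER's actual templates.

```python
def token_features(tokens, i):
    """Illustrative 'rich feature set' style features for token i of a sentence.
    Real systems such as BANNER use far richer feature templates."""
    w = tokens[i]
    feats = {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),
        "word.istitle": w.istitle(),
        "word.isdigit": w.isdigit(),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "has_hyphen": "-" in w,
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }
    return feats

sentence = "BRCA1 mutations are associated with breast cancer".split()
X = [token_features(sentence, i) for i in range(len(sentence))]
# Paired with BIO labels, one feature dict per token is the kind of input a
# linear-chain CRF toolkit expects when training an NER tagger.
print(X[0])
```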
Description
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered around wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Built on these spatial and temporal characterizations, finite state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line dynamic security assessment schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have significant impact on the performance of conventional data-mining-based on-line DSA schemes. Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations of power grids. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
Contributors: He, Miao (Author) / Zhang, Junshan (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Hedman, Kory (Committee member) / Si, Jennie (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
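
A minimal sketch of the finite-state Markov chain forecasting step described above, assuming a single discretized wind-generation series: it estimates a transition matrix by counting state transitions and issues a one-step point forecast as the conditional mean. The data, bin count, and function names are illustrative; the dissertation's models additionally exploit spatial structure and level-crossing statistics.

```python
import numpy as np

def fit_markov_chain(series, n_states):
    """Discretize a generation series into n_states equal-width bins and
    estimate the state transition matrix by counting observed transitions."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.digitize(series, edges[1:-1])          # state index 0..n_states-1
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums,
                  out=np.full_like(counts, 1.0 / n_states),
                  where=row_sums > 0)                   # row-stochastic matrix
    centers = 0.5 * (edges[:-1] + edges[1:])            # representative MW level per state
    return P, centers, states

def point_forecast(P, centers, current_state):
    """One-step point forecast: expected generation level of the next state."""
    return P[current_state] @ centers

# Synthetic generation series in MW; real inputs would be wind farm measurements.
rng = np.random.default_rng(1)
gen = np.abs(np.cumsum(rng.normal(0, 2, size=500))) % 100
P, centers, states = fit_markov_chain(gen, n_states=10)
print(point_forecast(P, centers, current_state=states[-1]))
```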
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we will need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the inbuilt parser of the system in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features, such as various system configurations and a memory cache, in the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing the computation time by a factor of 8, and improved the usability of the system. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system, which helps in managing large sets of computer files using policies. This system provides a Rule-Oriented interface language whose syntactic structure is like that of any procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We develop a corpus of 100 policy statements and manually translate them to the IPDL language. This corpus is then used for the evaluation of the NL2KR system. We performed 10-fold cross validation on the system. Furthermore, using this corpus, we illustrate how different components of our NL2KR system work.
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
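
To illustrate the lambda-calculus meaning representation that NL2KR learns, the following sketch composes hand-written word meanings using Python lambdas as stand-ins for lambda-calculus terms. The lexicon entries and the sentence are invented for illustration, and nothing here reproduces the system's Inverse-lambda, Generalization, or parsing components.

```python
# Word meanings written directly as curried functions, mimicking lambda-calculus
# lexicon entries such as  "in" -> λx.λy.in(y, x).  These entries and the example
# sentence are invented; NL2KR learns such meanings automatically through its
# Inverse-lambda and Generalization modules.
lexicon = {
    "paris": "paris",
    "texas": "texas",
    "in":    lambda x: lambda y: f"in({y},{x})",
    "is":    lambda p: p,   # a common simple treatment of the copula
}

# Composing "paris is in texas" according to a hand-chosen application order.
meaning = lexicon["is"](lexicon["in"](lexicon["texas"])(lexicon["paris"]))
print(meaning)  # -> in(paris,texas)
```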
Description
This project features three new pieces for clarinet commissioned from three different composers. Two are for unaccompanied clarinet and one is for clarinet, bass clarinet, and laptop. These pieces are Storm's a Comin' by Chris Burton, Light and Shadows by Theresa Martin, and My Own Agenda by Robbie McCarthy. These three solos challenge the performer in various ways, including complex rhythms, the use of extended techniques such as growling, glissando, and multiphonics, and the incorporation of technology into a live performance. In addition to background information, a performance practice guide has been included for each of the pieces. This guide provides recommendations and suggestions for future performers wishing to study and perform these works. Also included are transcripts of interviews conducted with each of the composers as well as full scores for each of the pieces. Accompanying this document are recordings of each of the three pieces, performed by the author.
Contributors: Vaughan, Melissa Lynn (Author) / Spring, Robert (Thesis advisor) / Micklich, Albie (Committee member) / Gardner, Joshua (Committee member) / Hill, Gary (Committee member) / Feisst, Sabine (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A common concern among musical performers in today's musical market pertains to their capacity to adapt to the constantly changing climate of the music business. This document focuses on one aspect of the development of a sustainable entrepreneurship skill set: the production of a recording. While producing the recording Chocolates, the author examined and documented the multiplicity of skills encompassed within a recording project. The first part of the document includes a discussion of various aspects of the recording project, Chocolates, through an entrepreneurial lens, and an evaluation of the skill sets acquired through the recording process. Additionally, the inspiration and relevance behind the recording project and the process of collaboration between myself and the two composers from whom I commissioned new compositions, Noah Taylor and James Grant, are considered. Finally, I describe the recording and editing processes, including the planning involved within each process, how I achieved the final product, and the entrepreneurial skills involved. The second portion of this document examines a broad range of applications of entrepreneurship, marketing, and career management skills not only within the confines of this particular project, but also in relation to the overall sustainability of a twenty-first-century music-performing career.
Contributors: Stuckemeyer, Mary (Author) / Micklich, Albie (Thesis advisor) / Carpenter, Ellon (Committee member) / Hill, Gary (Committee member) / Schuring, Martin (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
There are a significant number of musical compositions for violin by composers who used folk songs and dances of various cultures in their music, including works by George Enescu, Béla Bartók and György Ligeti. Less known are pieces that draw on the plethora of melodies and rhythms from Turkey. The purpose of this paper is to help performers become more familiar with two such compositions: Fazil Say's Sonata for Violin and Piano and Cleopatra for Solo Violin. Fazil Say (b. 1970) is considered to be a significant contemporary Turkish composer. Both of the works discussed in this document simulate traditional "Eastern" instruments, such as the kemençe, the bağlama, the kanun and the ud. Additionally, both pieces use themes from folk melodies of Turkey, Turkish dance rhythms and Arabian scales, all framed within traditional structural techniques, such as ostinato bass and the fughetta. Both the Sonata for Violin and Piano and Cleopatra are enormously expressive and musically interesting works, demanding virtuosity and a wide technical range. Although this document does not purport to be a full theoretical analysis, by providing biographical information, analytical descriptions, notes regarding interpretation, and suggestions to assist performers in overcoming technical obstacles, the writer hopes to inspire other violinists to consider learning and performing these works.
Contributors: Kalantzi, Panagiota (Author) / Jiang, Danwen (Thesis advisor) / Hill, Gary (Committee member) / Rogers, Rodney (Committee member) / Rotaru, Catalin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Despite significant advances in digital pathology and automation sciences, current diagnostic practice for cancer detection primarily relies on a qualitative manual inspection of tissue architecture and cell and nuclear morphology in stained biopsies using low-magnification, two-dimensional (2D) brightfield microscopy. The efficacy of this process is limited by inter-operator variations in sample preparation and imaging, and by inter-observer variability in assessment. Over the past few decades, the predictive value of quantitative morphology measurements derived from computerized analysis of micrographs has been compromised by the inability of 2D microscopy to capture information in the third dimension, and by the anisotropic spatial resolution inherent to conventional microscopy techniques that generate volumetric images by stacking 2D optical sections to approximate 3D. To gain insight into the 3D nature of cells, this dissertation explores the application of a new technology for single-cell optical computed tomography (optical cell CT), a promising 3D tomographic imaging technique that uses visible light absorption to image stained cells individually with sub-micron, isotropic spatial resolution. This dissertation provides a scalable analytical framework to perform fully-automated 3D morphological analysis from transmission-mode optical cell CT images of hematoxylin-stained cells. The developed framework performs rapid and accurate quantification of 3D cell and nuclear morphology, facilitates assessment of morphological heterogeneity, and generates shape- and texture-based biosignatures predictive of the cell state. Custom 3D image segmentation methods were developed to precisely delineate volumes of interest (VOIs) from reconstructed cell images. Comparison with user-defined ground truth assessments yielded an average agreement (Dice coefficient) of 94% for the cell and its nucleus. Seventy-nine biologically relevant morphological descriptors (features) were computed from the segmented VOIs, and statistical classification methods were implemented to determine the subset of features that best predicted cell health. The efficacy of our proposed framework was demonstrated on an in vitro model of multistep carcinogenesis in human Barrett's esophagus (BE), and classifier performance using our 3D morphometric analysis was compared against computerized analysis of 2D image slices that reflected conventional cytological observation. Our results enable sensitive and specific nuclear grade classification for early cancer diagnosis and underline the value of the approach as an objective adjunctive tool to better understand morphological changes associated with malignant transformation.
Contributors: Nandakumar, Vivek (Author) / Meldrum, Deirdre R (Thesis advisor) / Nelson, Alan C. (Committee member) / Karam, Lina J (Committee member) / Ye, Jieping (Committee member) / Johnson, Roger H (Committee member) / Bussey, Kimberly J (Committee member) / Arizona State University (Publisher)
Created: 2013
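
The 94% average agreement reported above is a Dice overlap between automated and user-defined segmentations. As a hedged sketch, the snippet below shows how such a Dice coefficient can be computed for 3D binary volumes of interest with NumPy; the masks are synthetic and the function is illustrative, not the dissertation's actual segmentation pipeline.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary 3D volumes of interest:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic example: a cubic "ground truth" VOI and an automated mask shifted
# by two voxels, standing in for real segmented cell or nuclear volumes.
truth = np.zeros((64, 64, 64), dtype=bool)
truth[16:48, 16:48, 16:48] = True
auto = np.roll(truth, shift=2, axis=0)
print(f"Dice = {dice_coefficient(truth, auto):.3f}")  # ~0.94 for this toy case
```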
Description
The purpose of the paper is to outline the process that was used to write a reduction for Henry Brant's Concerto for Alto Saxophone and Orchestra, to describe the improvements in saxophone playing since the premiere of the piece, and to demonstrate the necessity of having a reduction in the process of learning a concerto. The Concerto was inspired by the internationally known saxophonist Sigurd Rascher, who demonstrated for Brant the extent of his abilities on the saxophone. These abilities included the use of a four-octave range and two types of extended techniques: slap-tonguing and flutter-tonguing. Brant incorporated all three elements in his Concerto, and believed that only Rascher had the command over the saxophone needed to perform the piece. To prevent the possibility of an unsuccessful performance, Brant chose to make the piece unavailable to saxophonists by leaving the Concerto without a reduction. Subsequently, there were no performances of this piece between 1953 and 2001. In 2011, the two directors of Brant's Estate decided to allow a reduction to be written for the piece so that it would become more widely available to saxophonists.
Contributors: Ames, Elizabeth (Pianist) (Author) / Ryan, Russell (Thesis advisor) / Levy, Benjamin (Committee member) / Hill, Gary (Committee member) / Campbell, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The integration of yoga into the music curriculum has the potential to offer many immediate and life-long benefits to musicians. Yoga can help address issues such as performance anxiety and musculoskeletal problems, and enhance focus and awareness during musical practice and performance. Although the philosophy of yoga has many similarities to the process of learning a musical instrument, the benefits of yoga for musicians have gained attention only recently. This document explores several ways in which the practice and philosophy of yoga can be fused with saxophone pedagogy as one way to prepare students for a healthy and successful musical career. A six-week study at Arizona State University was conducted to observe the effects of regular yoga practice on collegiate saxophone students. Nine participants attended a sixty-minute "yoga for musicians" class twice a week. Measures included pre- and post-study questionnaires as well as personal journals kept throughout the duration of the study. These self-reported results showed that yoga had positive effects on saxophone playing: it significantly increased physical comfort and positive thinking, and improved awareness of habitual patterns and breath control. Student participants responded positively to the idea of integrating such a course into the music curriculum. The integration of yoga and saxophone by qualified professionals could also be a natural part of studio class and individual instruction. Carrie Koffman, professor of saxophone at The Hartt School, University of Hartford, has established one strong model for the combination of these disciplines. Her methods and philosophy, together with the basics of Western-style hatha yoga, clinical reports on performance injuries, and qualitative data from the ASU study, are explored. These inquiries form the foundation of a new model for integrating yoga practice regularly into the saxophone studio.
Contributors: Adams, Allison Dromgold (Author) / Norton, Kay (Thesis advisor) / Hill, Gary (Committee member) / McAllister, Timothy (Committee member) / Micklich, Albie (Committee member) / Standley, Eileen (Committee member) / Arizona State University (Publisher)
Created: 2012