Description
In order to cope with the decreasing availability of symphony jobs and collegiate faculty positions, many musicians are starting to pursue less traditional career paths. Also, to combat declining audiences, musicians are exploring ways to cultivate new and enthusiastic listeners through relevant and engaging performances. Due to these challenges, many community-based chamber music ensembles have been formed throughout the United States. These groups not only focus on performing classical music, but serve the needs of their communities as well. The problem, however, is that many musicians have not learned the business skills necessary to create these career opportunities. In this document I discuss the steps ensembles must take to develop sustainable careers. I first analyze how groups build a strong foundation through getting to know their communities and creating core values. I then discuss branding and marketing so ensembles can develop a public image and learn how to publicize themselves. This is followed by an investigation of how ensembles make and organize their money. I then examine the ways groups ensure long-lasting relationships with their communities and within the ensemble. I end by presenting three case studies of professional ensembles to show how groups create and maintain successful careers. Ensembles must develop entrepreneurship skills in addition to cultivating their artistry. These business concepts are crucial to the longevity of chamber groups. Through interviews of successful ensemble members and my own personal experiences in the Tetra String Quartet, I provide a guide for musicians to use when creating a community-based ensemble.
Contributors: Dalbey, Jenna (Author) / Landschoot, Thomas (Thesis advisor) / McLin, Katherine (Committee member) / Ryan, Russell (Committee member) / Solis, Theodore (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
American Primitive is a composition written for wind ensemble with an instrumentation of flute, oboe, clarinet, bass clarinet, alto, tenor, and baritone saxophones, trumpet, horn, trombone, euphonium, tuba, piano, and percussion. The piece is approximately twelve minutes in duration and was written between September and December 2013. American Primitive is absolute music (i.e., it does not follow a specific narrative), comprising blocks of distinct, contrasting gestures which bookend a central region of delicate textural layering and minimal gestural contrast. Though three gestures (a descending interval followed by a smaller ascending interval, a dynamic swell, and a chordal "chop") were consciously employed throughout, it is the first of the three that gives the work a sense of unification and overall coherence. Additionally, the work challenges listeners' expectations of traditional wind ensemble music by featuring the trumpet as a quasi-soloist whose material is predominantly inspired by transcriptions of jazz solos. This jazz-inspired material is at times mimicked and further developed by the ensemble, often in a similarly soloistic manner, while the trumpet maintains its role throughout. This dialogue between the "soloists" and the "ensemble" further plays with listeners' conceptions of traditional wind ensemble music by featuring almost every instrument in the ensemble. Though the term "American Primitive" is usually associated with the "naïve art" movement, it bears no relation to the music presented in this work. Instead, the term refers to the author's own compositional attitudes, education, and aesthetic interests.
Contributors: Jandreau, Joshua (Composer) / Rockmaker, Jody D (Thesis advisor) / Rogers, Rodney I (Committee member) / Demars, James R (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Many learning models have been proposed for various tasks in visual computing. Popular examples include hidden Markov models and support vector machines. Recently, sparse-representation-based learning methods have attracted a lot of attention in the computer vision field, largely because of their impressive performance in many applications. In the literature, many such sparse learning methods focus on designing or applying a learning technique in a certain feature space without much explicit consideration of possible interactions between the underlying semantics of the visual data and the employed learning technique. The rich semantic information in most visual data, if properly incorporated into algorithm design, should help achieve improved performance while delivering an intuitive interpretation of the algorithmic outcomes. My study addresses the problem of how to explicitly consider the semantic information of the visual data in sparse learning algorithms. In this work, we identify four problems which are of great importance and broad interest to the community. Specifically, a novel approach is proposed to incorporate label information to learn a dictionary which is not only reconstructive but also discriminative; considering the formation process of face images, a novel image decomposition approach for an ensemble of correlated images is proposed, where a subspace is built from the decomposition and applied to face recognition; based on the observation that the foreground (or salient) objects are sparse in the input domain while the background is sparse in the frequency domain, a novel and efficient spatio-temporal saliency detection algorithm is proposed to identify the salient regions in video; and a novel hidden Markov model learning approach is proposed that utilizes a sparse set of pairwise comparisons among the data, which in many scenarios (e.g., evaluating motion skills in surgical simulations) are easier to obtain and more meaningful and consistent than traditional labels. In these four problems, different types of semantic information are modeled and incorporated in designing sparse learning algorithms for the corresponding visual computing tasks. Several real-world applications are selected to demonstrate the effectiveness of the proposed methods, including face recognition, spatio-temporal saliency detection, abnormality detection, spatio-temporal interest point detection, motion analysis, and emotion recognition. These applications involve data of different modalities, ranging from audio signals and images to video. Experiments on large-scale real-world data, with comparisons to state-of-the-art methods, confirm that the proposed approaches deliver significant advantages, showing that incorporating such semantic information dramatically improves the performance of general sparse learning methods.
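As a rough illustration of the saliency intuition stated above (salient foreground sparse in the input domain, background sparse in the frequency domain), the sketch below models a frame's background with only its strongest frequency components and treats whatever that sparse model cannot explain as saliency. This is a toy stand-in under that assumption, not the dissertation's algorithm; `k_background` is a hypothetical knob.

```python
import numpy as np

def saliency_residual(frame, k_background=200):
    """Toy saliency map: background = k strongest frequency components
    (sparse in the frequency domain); saliency = spatial residual.
    `frame` is a 2-D grayscale array; `k_background` is illustrative."""
    F = np.fft.fft2(frame)
    # Keep only the k largest-magnitude coefficients -> crude background model.
    mags = np.abs(F).ravel()
    thresh = np.partition(mags, -k_background)[-k_background]
    F_bg = np.where(np.abs(F) >= thresh, F, 0)
    background = np.real(np.fft.ifft2(F_bg))
    # Foreground saliency = what the sparse frequency model cannot explain.
    saliency = np.abs(frame - background)
    return saliency / (saliency.max() + 1e-8)
```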
Contributors: Zhang, Qiang (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This project is a practical annotated bibliography of original works for oboe trio with the specific instrumentation of two oboes and English horn. Presenting descriptions of 116 readily available oboe trios, this project is intended to promote awareness, accessibility, and performance of compositions within this genre.

The annotated bibliography focuses exclusively on original, published works for two oboes and English horn. Unpublished works, arrangements, works that are out of print and not available through interlibrary loan, or works that feature slightly altered instrumentation are not included.

Entries in this annotated bibliography are listed alphabetically by the last name of the composer. Each entry includes the dates of the composer and a brief biography, followed by the title of the work, composition date, commission, and dedication of the piece. Also included are the names of publishers, the length of the entire piece in minutes and seconds, and an incipit of the first one to eight measures for each movement of the work.

In addition to providing a comprehensive and detailed bibliography of oboe trios, this document traces the history of the oboe trio and includes biographical sketches of each composer cited, allowing readers to place the genre of oboe trios and each individual composition into its historical context. Four appendices at the end list the trios alphabetically by composer's last name, chronologically by date of composition, and by country of origin, and provide a list of publications of Ludwig van Beethoven's oboe trios from the 1940s and earlier.
Contributors: Sassaman, Melissa Ann (Author) / Schuring, Martin (Thesis advisor) / Buck, Elizabeth (Committee member) / Holbrook, Amy (Committee member) / Hill, Gary (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction, used to find a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features and only a subset of the features may be responsible for the result. The features might also carry structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported as part of the package: features may be grouped together, there may exist hierarchies and overlapping groups among them, and there may be requirements for selecting the most relevant groups. Although the resulting solutions are sparse, they are not guaranteed to be robust. For the selection to be robust, certain techniques provide theoretical justification of why particular features are selected. Stability selection is one such method: it allows the use of existing sparse learning methods to select a stable set of features for a given training sample. This is done by assigning a selection probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the probability is taken as the fraction of runs in which a feature is selected. Cross-validation is then used to determine the best parameter value over a range of values by selecting the value that gives the maximum accuracy score. With such a combination of algorithms, with good convergence guarantees, stable feature selection properties, and support for various structural dependencies among features, the sparse learning package will be a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable capabilities of the SLEP package, and these features can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
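As a concrete sketch of the stability selection procedure described above, the snippet below repeatedly fits an L1-regularized model on random subsamples and counts how often each feature receives a nonzero weight. It uses scikit-learn's Lasso as a stand-in for a SLEP solver, and all parameter values are illustrative, not taken from the dissertation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_runs=200, subsample=0.5, seed=0):
    """Estimate per-feature selection probabilities by sub-sampling the
    training data, fitting a sparse (L1) model each time, and counting
    how often each feature gets a nonzero coefficient."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        model = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx])
        counts += model.coef_ != 0
    return counts / n_runs  # selection probability for each feature

# Features whose probability exceeds a chosen threshold form the "stable" set;
# the regularization parameter alpha can in turn be picked by cross-validation.
```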
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Rapid advances in sensor and information technology have resulted in an environment that is rich in both spatial and temporal data, which creates a pressing need to develop novel statistical methods and the associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges in addressing these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity, and data uncertainty. Beyond the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness, and numerical stability in practice. On the other hand, recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four kinds of general complex datasets, including "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets" and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery, and manufacturing. We describe the development of novel statistical models to analyze complex datasets that fall under these four categories, and we show how these models can be applied to real-world applications, such as Alzheimer's disease research, the nursing care process, and manufacturing.
Contributors: Huang, Shuai (Author) / Li, Jing (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Contributors: Pagano, Caio, 1940- (Performer) / Mechetti, Fabio (Conductor) / Buck, Elizabeth (Performer) / Schuring, Martin (Performer) / Spring, Robert (Performer) / Rodrigues, Christiano (Performer) / Landschoot, Thomas (Performer) / Rotaru, Catalin (Performer) / Avanti Festival Orchestra (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-02
Description
Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using a convolutional neural network for blood cancer grading.

To detect and classify objects in video, the objects have to be separated from the background, and then discriminant features are extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific, and they pose major challenges for object detection and classification tasks. In this dissertation, we present an effective optical-flow-based ROI generation algorithm for segmenting moving objects in video data, which can be applied in surveillance and self-driving vehicles. Optical flow can also be used as a feature for human action recognition, and we show how feeding optical-flow features into a pre-trained convolutional neural network improves the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time.
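A minimal sketch of the flow-based ROI idea, assuming OpenCV's dense Farneback flow as the motion estimator (the dissertation's actual method and thresholds are not reproduced here): moving regions are found by thresholding the flow magnitude and taking bounding boxes of the resulting blobs.

```python
import cv2
import numpy as np

def flow_rois(prev_gray, curr_gray, mag_thresh=2.0, min_area=400):
    """Rough flow-based ROI generation: threshold dense optical-flow
    magnitude and return bounding boxes of the moving blobs.
    Parameter values are illustrative only."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)            # per-pixel motion magnitude
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```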

Medical images and videos pose unique challenges for image understanding, mainly because the tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is difficult, so an automated feature selection method is desired. Sparse learning is a technique for extracting the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it also selects the type of features, entirely removing less important or noisy feature types from the feature set. We apply this algorithm to endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function in sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key frame extraction from gastroscopic video data. With recent advancements in multi-layer neural networks, automatic end-to-end feature learning has become feasible. A convolutional neural network mimics the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists.
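To make the feature-type selection concrete, the sketch below uses a generic group-sparse (L2,1-style) penalty with a simple proximal-gradient loop, so that entire groups of features ("types") are driven to zero together. This is a textbook group-lasso illustration under assumed parameter values, not the improved algorithm from the dissertation.

```python
import numpy as np

def group_lasso_select(X, y, groups, lam=0.1, lr=1e-3, n_iter=2000):
    """Proximal-gradient sketch of group-sparse selection: features are
    partitioned into "types" (groups), and the L2,1 penalty zeroes out
    entire groups rather than individual coefficients.
    `groups` is an integer array mapping each feature to a group id."""
    groups = np.asarray(groups)
    n, p = X.shape
    w = np.zeros(p)
    group_ids = np.unique(groups)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n              # gradient of squared loss
        w = w - lr * grad
        for g in group_ids:                       # group soft-thresholding
            idx = groups == g
            norm = np.linalg.norm(w[idx])
            w[idx] = max(0.0, 1 - lr * lam / (norm + 1e-12)) * w[idx]
    kept = [g for g in group_ids if np.linalg.norm(w[groups == g]) > 0]
    return w, kept  # weights and the surviving feature types
```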

Developing real-world computer vision applications is more than developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The processing pipelines and system architectures of computer-vision-based applications share many design principles. We developed common processing components and a generic framework for computer vision applications, as well as a versatile scale-adaptive template matching algorithm for object detection. We demonstrate these design principles and best practices by developing and deploying a complete computer vision application in real life, a multi-channel water level monitoring system, whose techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research.
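As an illustration of scale-adaptive template matching in the spirit described above (not the dissertation's implementation), the sketch below rescales the template over a range of factors, runs normalized cross-correlation at each scale with OpenCV, and keeps the best-scoring location and scale.

```python
import cv2
import numpy as np

def multiscale_match(image_gray, template_gray,
                     scales=np.linspace(0.5, 1.5, 11)):
    """Multi-scale template matching: rescale the template, run normalized
    cross-correlation at each scale, and keep the best hit.
    The scale range is an illustrative assumption."""
    best = (-1.0, None, None)  # (score, top-left corner, scale)
    for s in scales:
        t = cv2.resize(template_gray, None, fx=s, fy=s)
        if t.shape[0] > image_gray.shape[0] or t.shape[1] > image_gray.shape[1]:
            continue  # template larger than image at this scale
        res = cv2.matchTemplate(image_gray, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best
```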
Contributors: Cao, Jun (Author) / Li, Baoxin (Thesis advisor) / Liu, Huan (Committee member) / Zhang, Yu (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2018
Contributors: De La Cruz, Nathaniel (Performer) / LoGiudice, Rosa (Contributor) / Tallino, Michael (Performer) / McKinch, Riley (Performer) / Li, Yuhui (Performer) / Armenta, Tyler (Contributor) / Gonzalez, David (Performer) / Jones, Tarin (Performer) / Ryall, Blake (Performer) / Senseman, Stephen (Performer)
Created: 2018-10-10