Matching Items (1,390)

Description
To cope with the decreasing availability of symphony jobs and collegiate faculty positions, many musicians are pursuing less traditional career paths, and to combat declining audiences, they are exploring ways to cultivate new and enthusiastic listeners through relevant and engaging performances. In response to these challenges, many community-based chamber music ensembles have been formed throughout the United States. These groups not only focus on performing classical music but serve the needs of their communities as well. The problem, however, is that many musicians have not learned the business skills necessary to create these career opportunities. In this document I discuss the steps ensembles must take to develop sustainable careers. I first analyze how groups build a strong foundation by getting to know their communities and creating core values. I then discuss branding and marketing so ensembles can develop a public image and learn how to publicize themselves. This is followed by an investigation of how ensembles make and organize their money. I then examine the ways groups ensure long-lasting relationships with their communities and within the ensemble. I end by presenting three case studies of professional ensembles to show how groups create and maintain successful careers. Ensembles must develop entrepreneurship skills in addition to cultivating their artistry; these business concepts are crucial to the longevity of chamber groups. Through interviews with successful ensemble members and my own experiences in the Tetra String Quartet, I provide a guide for musicians to use when creating a community-based ensemble.
Contributors: Dalbey, Jenna (Author) / Landschoot, Thomas (Thesis advisor) / McLin, Katherine (Committee member) / Ryan, Russell (Committee member) / Solis, Theodore (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Mobile apps have improved many aspects of daily life, from instant messaging to tele-health. In the current development paradigm, however, apps are built individually and agnostic of each other. The goal of this thesis is to enable a new world in which multiple apps communicate with each other to achieve synergistic benefits. Today, integrating apps requires manual coordination between developers, which is problematic on many levels, so a systematic approach to data sharing between multiple apps is essential. Current approaches to app integration require large code modifications to reap the benefits of shared data, such as requiring developers to provide APIs or to adopt large, invasive middleware. In this thesis, a data sharing framework was developed that provides a non-invasive interface between mobile apps for data sharing and integration. A separate app acts as a registry, allowing apps to register database tables to be shared and to query this information. Two health monitoring apps were developed to evaluate the sharing framework and different methods of data integration between apps to promote synergistic feedback. The health monitoring apps show that non-invasive solutions can provide data sharing functionality without large code modifications or manual communication between developers.
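As a rough sketch of the registry idea described above (my own illustration, not the thesis's implementation; the names init_registry, register_table, and discover are hypothetical, and SQLite stands in for the apps' storage):

    import sqlite3

    def init_registry(conn: sqlite3.Connection) -> None:
        # Registry of shared tables: which app shares which table, and where.
        conn.execute("""CREATE TABLE IF NOT EXISTS shared_tables (
                            app_id TEXT, table_name TEXT, uri TEXT,
                            PRIMARY KEY (app_id, table_name))""")

    def register_table(conn, app_id: str, table_name: str, uri: str) -> None:
        # An app advertises one of its database tables for sharing.
        conn.execute("INSERT OR REPLACE INTO shared_tables VALUES (?, ?, ?)",
                     (app_id, table_name, uri))
        conn.commit()

    def discover(conn, table_name: str) -> list:
        # Another app queries the registry for providers of a given table,
        # with no developer-to-developer coordination required.
        return conn.execute("SELECT app_id, uri FROM shared_tables "
                            "WHERE table_name = ?", (table_name,)).fetchall()

    conn = sqlite3.connect(":memory:")
    init_registry(conn)
    register_table(conn, "step_counter", "daily_steps", "content://steps/daily")
    register_table(conn, "heart_monitor", "heart_rate", "content://hr/samples")
    print(discover(conn, "daily_steps"))  # [('step_counter', 'content://steps/daily')]

The point is that sharing is opt-in and non-invasive: an app only issues registry calls, and consumers discover shared tables at runtime instead of compiling against another app's API.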
Contributors: Milazzo, Joseph (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Varsamopoulos, Georgios (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty tuple. This is not possible in many cases, where the most a cleaning system can do is generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information: if a dirty tuple has three equally likely clean candidates, picking one discards the other two even though each is just as likely to be correct. An appealing alternative that avoids this information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. First, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe that can produce multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), where the most likely clean candidate for each tuple is chosen and the rest of the alternatives discarded.
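As a minimal sketch of what a tuple-disjoint probabilistic reconstruction looks like (my illustration, not BayesWipe or Mystiq code; the data is invented), each dirty tuple becomes a block of mutually exclusive clean alternatives, and a selection query returns each block's total probability of satisfying the predicate:

    from typing import Callable

    # One block per dirty input tuple: mutually exclusive (alternative, probability)
    # pairs, e.g. as produced by a cleaner that emits multiple clean candidates.
    blocks = [
        [({"make": "Honda", "model": "Civic"}, 0.6),
         ({"make": "Honda", "model": "Accord"}, 0.3),
         ({"make": "Hyundai", "model": "Accent"}, 0.1)],
        [({"make": "Toyota", "model": "Camry"}, 1.0)],
    ]

    def query(pred: Callable[[dict], bool], threshold: float = 0.0):
        # A block answers the query with probability equal to the sum of the
        # probabilities of its alternatives that satisfy the predicate.
        results = []
        for i, block in enumerate(blocks):
            p = sum(prob for alt, prob in block if pred(alt))
            if p > threshold:
                results.append((i, p))
        return results

    # Which tuples are Hondas with probability above 0.5?
    print(query(lambda t: t["make"] == "Honda", threshold=0.5))  # [(0, 0.9)]

Keeping more alternatives per block raises recall (block 0 still qualifies even though no single candidate is certain) at the cost of a larger database and, potentially, lower precision.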
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
American Primitive is a composition written for wind ensemble, scored for flute, oboe, clarinet, bass clarinet, alto, tenor, and baritone saxophones, trumpet, horn, trombone, euphonium, tuba, piano, and percussion. The piece is approximately twelve minutes in duration and was written between September and December 2013. American Primitive is absolute music (i.e., it does not follow a specific narrative) comprising blocks of distinct, contrasting gestures that bookend a central region of delicate textural layering and minimal gestural contrast. Though three gestures (a descending interval followed by a smaller ascending interval, a dynamic swell, and a chordal "chop") were consciously employed throughout, it is the first of the three that creates a sense of unification and overall coherence in the work. Additionally, the work challenges listeners' expectations of traditional wind ensemble music by featuring the trumpet as a quasi-soloist whose material is predominantly inspired by transcriptions of jazz solos. This jazz-inspired material is at times mimicked and further developed by the ensemble, often in a similarly soloistic manner, while the trumpet maintains its role throughout. This interplay between the "soloists" and the "ensemble" further skews listeners' conceptions of traditional wind ensemble music by featuring almost every instrument in the ensemble. Though the term "American Primitive" is usually associated with the naïve art movement, it bears no relation to the music presented in this work; instead, it refers to the author's own compositional attitudes, education, and aesthetic interests.
Contributors: Jandreau, Joshua (Composer) / Rockmaker, Jody D (Thesis advisor) / Rogers, Rodney I (Committee member) / Demars, James R (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Recent efforts in data cleaning have focused mostly on problems like data deduplication, record matching, and data standardization; few focus on fixing incorrect attribute values in tuples. Correcting values in tuples is typically performed as a minimum-cost repair of tuples that violate static constraints like conditional functional dependencies (CFDs), which have to be provided by domain experts or learned from a clean sample of the database. In this thesis, I provide a method for correcting individual attribute values in a structured database using a Bayesian generative model and a statistical error model learned directly from the noisy database, thus avoiding the need for a domain expert or master data. I also show how to efficiently perform consistent query answering with this model over a dirty database when write permissions to the database are unavailable, and I present a MapReduce architecture for performing this computation in a distributed manner. I evaluate these methods over both synthetic and real data.
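A toy version of this scoring (my illustration, not the thesis's implementation) picks, for an observed dirty value d, the candidate c maximizing P(c) * P(d | c), with the prior estimated from the noisy data itself and a simple edit-distance error model:

    import math
    from collections import Counter

    def edit_distance(a: str, b: str) -> int:
        # Standard Levenshtein distance, single-row dynamic programming.
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                         prev + (ca != cb))
        return dp[len(b)]

    def correct(dirty: str, observed: list, noise: float = 0.3) -> str:
        prior = Counter(observed)          # value frequencies as a rough prior
        total = sum(prior.values())
        def score(c: str) -> float:
            return (prior[c] / total) * math.exp(-edit_distance(dirty, c) / noise)
        return max(prior, key=score)

    values = ["Honda"] * 40 + ["Hyundai"] * 10 + ["Hondo"]
    print(correct("Hondo", values))  # 'Honda': the prior outweighs the exact match

The generative model supplies P(c), the error model supplies P(d | c), and both are learned from the dirty database itself, which is what removes the need for an expert-supplied constraint set.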
Contributors: De, Sushovan (Author) / Kambhampati, Subbarao (Thesis advisor) / Chen, Yi (Committee member) / Candan, K. Selcuk (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This project is a practical annotated bibliography of original works for oboe trio with the specific instrumentation of two oboes and English horn. Presenting descriptions of 116 readily available oboe trios, this project is intended to promote awareness, accessibility, and performance of compositions within this genre.

The annotated bibliography focuses exclusively on original, published works for two oboes and English horn. Unpublished works, arrangements, works that are out of print and not available through interlibrary loan, or works that feature slightly altered instrumentation are not included.

Entries in this annotated bibliography are listed alphabetically by the composer's last name. Each entry includes the composer's dates and a brief biography, followed by the title of the work and its composition date, commission, and dedication. Also included are the names of publishers, the length of the entire piece in minutes and seconds, and an incipit of the first one to eight measures of each movement.

In addition to providing a comprehensive and detailed bibliography of oboe trios, this document traces the history of the oboe trio and includes biographical sketches of each composer cited, allowing readers to place the genre, and each individual composition, in its historical context. Four appendices list the trios alphabetically by composer's last name, chronologically by date of composition, and by country of origin, and catalog publications of Ludwig van Beethoven's oboe trios from the 1940s and earlier.
Contributors: Sassaman, Melissa Ann (Author) / Schuring, Martin (Thesis advisor) / Buck, Elizabeth (Committee member) / Holbrook, Amy (Committee member) / Hill, Gary (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB in size, but with a memory buffer limited to 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old schema to the new one. It shows that the overhead introduced is minimal for medium-size applications and that the update can be achieved with no more than one minute of downtime.
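A hedged sketch of the batched copy step (illustrative only; the thesis generates the update client automatically, and the schema here is invented): rows move from the old schema to the new one in short, primary-key-ordered transactions, so the regular workload keeps running between batches:

    import sqlite3

    def split_name(full_name: str):
        # Vertical-split example: 'full_name' becomes ('first', 'last').
        first, _, last = full_name.partition(" ")
        return first, last

    def migrate_in_batches(conn: sqlite3.Connection, batch_size: int = 1000) -> None:
        last_id = 0
        while True:
            rows = conn.execute(
                "SELECT id, full_name FROM person_old WHERE id > ? "
                "ORDER BY id LIMIT ?", (last_id, batch_size)).fetchall()
            if not rows:
                break
            with conn:  # one short transaction per batch limits lock time
                conn.executemany(
                    "INSERT INTO person_new (id, first, last) VALUES (?, ?, ?)",
                    [(rid, *split_name(name)) for rid, name in rows])
            last_id = rows[-1][0]

The batch size is the knob studied above: bigger batches finish the copy sooner but hold locks longer and compete with concurrent transactions for the small buffer.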
Contributors: Tyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Determining the provenance of a statement that appears in social media is a significant challenge. Provenance describes the origin, custody, and ownership of something, and most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale for online social media, and this gap in provenance methodologies and technologies provides exciting research opportunities. The guiding vision is to use social media information itself to recover a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information: the contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information not readily available to the average social media user. This research introduces an approach, and builds a foundation, aimed at giving social media users a provenance data capability that is not accessible today.
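The abstract stops short of a concrete algorithm, but as one hedged illustration of mining the medium itself for provenance, a first step could be walking backward over observed repost edges to find candidate origins of a statement (graph and names invented):

    from collections import deque

    # repost_edges[u] = users that u was observed sharing the statement from
    repost_edges = {
        "alice": ["bob"],
        "bob": ["carol", "dave"],
        "carol": [],            # no known upstream source: a candidate origin
        "dave": ["carol"],
    }

    def candidate_origins(user: str) -> set:
        # Breadth-first search backward from the observing user to nodes
        # with no known upstream edge.
        origins, seen, queue = set(), {user}, deque([user])
        while queue:
            u = queue.popleft()
            sources = repost_edges.get(u, [])
            if not sources:
                origins.add(u)
            for s in sources:
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
        return origins

    print(candidate_origins("alice"))  # {'carol'}

Real posts rarely carry such edges explicitly; recovering them from the medium itself is exactly the gap this research aims to close.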
Contributors: Barbier, Geoffrey P (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in H. V. Jagadish's keynote speech at SIGMOD '07, and as commonly agreed in the database community, the usability of structured data for casual users is as important as a data management system's functionality. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users must also know the data schema. Keyword search provides an opportunity to access structured data conveniently and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
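To make the structural-ambiguity problem concrete, here is a minimal sketch (my illustration, not the dissertation's system) of the raw material a keyword-search engine starts from: which tuples, in which tables, contain each keyword, with no schema knowledge assumed on the user's part:

    import sqlite3

    def keyword_matches(conn: sqlite3.Connection, keywords: list) -> dict:
        # Map each keyword to the (table, rowid) pairs whose columns contain it.
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        hits = {kw: [] for kw in keywords}
        for table in tables:
            cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
            for kw in keywords:
                where = " OR ".join(f"{c} LIKE ?" for c in cols)
                rows = conn.execute(f"SELECT rowid FROM {table} WHERE {where}",
                                    [f"%{kw}%"] * len(cols)).fetchall()
                hits[kw] += [(table, r[0]) for r in rows]
        return hits

Everything the dissertation addresses sits on top of this step: deciding how matches in different tables join into one meaningful result, what information to return, and how to keep the enumeration tractable over a large search space.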
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As the information available to lay users through autonomous data sources continues to increase, mediators become important for ensuring that the wealth of available information is tapped effectively. A key challenge these information mediators must handle is the varying level of incompleteness in the underlying databases in terms of missing attribute values. Existing approaches such as Query Processing over Incomplete Autonomous Databases (QPIAD) aim to mine and use approximate functional dependencies (AFDs) to predict and retrieve relevant incomplete tuples. These approaches make independence assumptions about missing values, which critically hobbles their performance when tuples contain missing values for multiple correlated attributes. In this thesis, I present a principled probabilistic alternative that views an incomplete tuple as defining a distribution over the complete tuples it stands for. I learn this distribution in the form of Bayes networks. My approach involves mining/learning Bayes networks from a sample of the database, and using them to do both imputation (predicting a missing value) and query rewriting (retrieving relevant results despite incompleteness on the query-constrained attributes, when the data sources are autonomous). I present empirical studies demonstrating that (i) at higher levels of incompleteness, when multiple attribute values are missing, Bayes networks provide significantly higher classification accuracy, and (ii) the relevant possible answers retrieved by queries reformulated using Bayes networks have higher precision and recall than with AFDs, while keeping query processing costs manageable.
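A small sketch of the imputation half of this idea (illustrative; the thesis learns full Bayes networks, whereas this toy uses a single conditional distribution): estimate P(Model | Make) from the complete tuples in a sample, then impute each missing model by its most probable completion:

    from collections import Counter, defaultdict

    complete = [
        ("Honda", "Civic"), ("Honda", "Accord"), ("Honda", "Civic"),
        ("Toyota", "Camry"), ("Toyota", "Corolla"), ("Toyota", "Camry"),
    ]

    # counts[make][model], estimated from the sampled complete tuples
    counts = defaultdict(Counter)
    for make, model in complete:
        counts[make][model] += 1

    def impute_model(make: str) -> str:
        # argmax over P(model | make): the single-edge Bayes-net case.
        return counts[make].most_common(1)[0][0]

    print(impute_model("Honda"))   # 'Civic'  (2 of 3 Hondas are Civics)
    print(impute_model("Toyota"))  # 'Camry'

A full Bayes network generalizes this to condition on several correlated attributes at once, which is what matters when multiple values are missing; the same learned distribution also supports query rewriting, by retrieving tuples whose likely completions satisfy the query.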
Contributors: Raghunathan, Rohit (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2011