Matching Items (666)

Contributors: Wasbotten, Leia (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-30
Description
Libby Larsen is one of the most performed and acclaimed composers today. She is a spirited, compelling, and sensitive composer whose music enhances the poetry of America's most prominent authors. Notable among her works are song cycles for soprano based on the poetry of female writers, among them novelist and poet Willa Cather (1873-1947). Larsen has produced two song cycles on works from Cather's substantial output of fiction: one based on Cather's short story "Eric Hermannson's Soul," titled Margaret Songs: Three Songs from Willa Cather (1996); and later My Antonia (2000), based on Cather's novel of the same title. In Margaret Songs, Cather's poetry and short stories--specifically the character of Margaret Elliot--combine with Larsen's unique compositional style to create a surprising collaboration. This study explores how Larsen in these songs delves into the emotional and psychological depths of Margaret's character, which Cather left not fully formed. It is only through Larsen's music and Cather's poetry that Margaret's journey through self-discovery and love becomes fully realized. This song cycle offers a glimpse, through the eyes of two prominent female artists, of the societal pressures placed upon Margaret's character, many of which still resonate with women in today's culture. This study examines Margaret Songs by discussing Willa Cather, her musical influences, and the conditions surrounding the writing of "Eric Hermannson's Soul." It also looks into Cather's influence on Libby Larsen and the commission leading to Margaret Songs. Finally, a description of the musical, dramatic, and textual content of the songs completes this interpretation of the interactions of Willa Cather, Libby Larsen, and the character of Margaret Elliot.
Contributors: McLain, Christi Marie (Author) / FitzPatrick, Carole (Thesis advisor) / Dreyfoos, Dale (Committee member) / Holbrook, Amy (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Puerto Rico has produced many important composers who have contributed to the musical culture of the nation during the last 200 years. However, a considerable amount of their music has proven difficult to access and may contain numerous errors. This research project intends to contribute to the accessibility of such music and to encourage similar studies of Puerto Rican music. This study focuses on the music of Héctor Campos Parsi (1922-1998), one of the most prominent composers of the 20th century in Puerto Rico. After an overview of the historical background of music on the island and the biography of the composer, four works from his art song repertoire are examined in detail. A product of this study is the first corrected edition of his cycles Canciones de Cielo y Agua, Tres Poemas de Corretjer, Los Paréntesis, and the song Majestad Negra. These compositions date from 1947 to 1959 and reflect both the European and nationalistic writing styles of the composer during this time. Data for these corrections have been obtained from the composer's manuscripts, published and unpublished editions, and published recordings. The corrected scores are ready for publication, and a compact disc of this repertoire, performed by soprano Melliangee Pérez and the author, has been recorded to bring these revisions to life. Despite the best intentions of the author, various copyright issues have yet to be resolved. It is hoped that this document will provide the foundation for a resolution and that these important works will be available for public performance and study in the near future.
Contributors: Rodríguez Morales, Luis F., 1980- (Author) / Campbell, Andrew (Thesis advisor) / Buck, Elizabeth (Committee member) / Holbrook, Amy (Committee member) / Kopta, Anne (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Contributors: Yi, Joyce (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-22
Description
Mobile apps have improved many aspects of daily life, ranging from instant messaging to tele-health. In the current app development paradigm, apps are developed individually, agnostic of each other. The goal of this thesis is to enable a new world in which multiple apps communicate with each other to achieve synergistic benefits. Today, integration between apps requires manual communication between developers, which can be problematic on many levels. In order to promote app integration, a systematic approach to data sharing between multiple apps is essential. However, current approaches to app integration require large code modifications to reap the benefits of shared data, such as requiring developers to provide APIs or to adopt large, invasive middleware. In this thesis, a data sharing framework was developed that provides a non-invasive interface between mobile apps for data sharing and integration. A separate app acts as a registry, allowing apps to register database tables to be shared and to query this information. Two health monitoring apps were developed to evaluate the sharing framework and different methods of data integration between apps to promote synergistic feedback. The health monitoring apps have shown that non-invasive solutions can provide data sharing functionality without large code modifications or manual communication between developers.
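
The registry idea described above lends itself to a small illustration. The following is a minimal sketch in Python; all names (SharedTableRegistry, TableRecord, register_table, lookup) and the example data are invented here for illustration and are not taken from the thesis.

```python
# Minimal sketch of a shared-table registry between apps. All names are
# illustrative, not the actual API from the thesis.
from dataclasses import dataclass

@dataclass
class TableRecord:
    app_id: str      # app that owns and shares the table
    table_name: str  # database table being exposed
    schema: dict     # column name -> SQL type, e.g. {"steps": "INTEGER"}

class SharedTableRegistry:
    """Central registry app: apps register tables; other apps discover them."""

    def __init__(self):
        self._tables = {}

    def register_table(self, record: TableRecord) -> None:
        # An app publishes one of its tables without modifying its own code
        # beyond this single registration call.
        self._tables[(record.app_id, record.table_name)] = record

    def lookup(self, table_name: str) -> list:
        # A consumer app asks which apps share a table with this name.
        return [r for r in self._tables.values() if r.table_name == table_name]

# Usage: a step-counter app shares daily totals; a diet app discovers them.
registry = SharedTableRegistry()
registry.register_table(TableRecord("step_counter", "daily_steps",
                                    {"date": "TEXT", "steps": "INTEGER"}))
print(registry.lookup("daily_steps"))
```
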
Contributors: Milazzo, Joseph (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Varsamopoulos, Georgios (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, when a dirty tuple has three equally likely clean candidates, picking any single one discards the other two, even though each is just as likely to be correct. An appealing alternative that avoids such an information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of the query results? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe, which can produce multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), in which the most likely clean candidate for each tuple is chosen and the rest of the alternatives discarded.
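
The deterministic-versus-probabilistic tradeoff can be made concrete with a toy Python sketch. The tuples, fields, and probabilities below are invented for illustration; the thesis itself works with BayesWipe output and the Mystiq query processor, not this code.

```python
# Toy illustration of a tuple-disjoint probabilistic reconstruction: each
# dirty tuple maps to mutually exclusive clean candidates, each carrying
# the probability that it is the correct cleaned version. Data is invented.
candidates = [  # three roughly equally likely clean candidates for one tuple
    ({"city": "Phoenix",     "state": "AZ"}, 0.34),
    ({"city": "Phenix City", "state": "AL"}, 0.33),
    ({"city": "Phoenix",     "state": "OR"}, 0.33),
]

# Deterministic reconstruction (BayesWipe-DET style): keep only the most
# likely candidate; roughly two-thirds of the probability mass is discarded.
best_tuple, best_p = max(candidates, key=lambda c: c[1])

# Probabilistic reconstruction (BayesWipe-PDB style): keep every candidate;
# an answer's probability sums over the candidates satisfying the query.
def answer_probability(cands, predicate):
    return sum(p for tup, p in cands if predicate(tup))

p_phoenix = answer_probability(candidates, lambda t: t["city"] == "Phoenix")
print(best_tuple, best_p)  # ({'city': 'Phoenix', 'state': 'AZ'}, 0.34)
print(p_phoenix)           # ~0.67, mass the DET reconstruction would lose
```
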
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recent efforts in data cleaning have focused mostly on problems like data deduplication, record matching, and data standardization; few of them focus on fixing incorrect attribute values in tuples. Correcting values in tuples is typically performed as a minimum-cost repair of tuples that violate static constraints such as conditional functional dependencies (CFDs), which have to be provided by domain experts or learned from a clean sample of the database. In this thesis, I provide a method for correcting individual attribute values in a structured database using a Bayesian generative model and a statistical error model learned directly from the noisy database. I thus avoid the need for a domain expert or master data. I also show how to efficiently perform consistent query answering over a dirty database using this model, in case write permissions to the database are unavailable. A Map-Reduce architecture to perform this computation in a distributed manner is also shown. I evaluate these methods on both synthetic and real data.
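
The scoring rule at the heart of such an approach can be illustrated with a toy: score each candidate clean value c for an observed dirty value d by P(c) * P(d | c), where P(c) comes from the generative model and P(d | c) from the error model. The sketch below uses invented priors and a naive edit-distance error model; it is a simplification, not the model from the thesis.

```python
# Toy Bayesian attribute repair: pick the clean candidate c maximizing
# P(c) * P(d | c). Priors and the error model are invented for illustration.
prior = {"Phoenix": 0.6, "Phenix": 0.1, "Peoria": 0.3}  # generative model P(c)

def edit_distance(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def error_model(observed: str, clean: str) -> float:
    # Noisy-channel assumption: probability decays with edit distance.
    return 0.5 ** edit_distance(observed, clean)

def best_repair(observed: str) -> str:
    return max(prior, key=lambda c: prior[c] * error_model(observed, c))

print(best_repair("Phoeniks"))  # -> "Phoenix"
```
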
Contributors: De, Sushovan (Author) / Kambhampati, Subbarao (Thesis advisor) / Chen, Yi (Committee member) / Candan, K. Selcuk (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB in size, but with a limited memory buffer of 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old to the new schema. It shows that the overhead introduced is minimal for medium-size applications and that the update can be achieved with no more than one minute of downtime.
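
The batched copy that such a client performs can be pictured with a short sketch. The following uses sqlite3 for brevity, and the table and column names are invented here; the thesis targets ordinary relational DBMSs and a richer set of update types.

```python
# Sketch of a client-driven vertical split (normalization): rows are copied
# from the old schema to the new one in batches, committing per batch so
# concurrent transactions can interleave. Names and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old_person (id INTEGER PRIMARY KEY, "
             "name TEXT, city TEXT)")
conn.execute("CREATE TABLE new_person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE new_address (person_id INTEGER, city TEXT)")
conn.executemany("INSERT INTO old_person VALUES (?, ?, ?)",
                 [(i, f"name{i}", f"city{i}") for i in range(1000)])

BATCH_SIZE = 100  # larger batches copy faster but hold locks longer
last_id = -1
while True:
    rows = conn.execute(
        "SELECT id, name, city FROM old_person WHERE id > ? "
        "ORDER BY id LIMIT ?", (last_id, BATCH_SIZE)).fetchall()
    if not rows:
        break
    # Vertical split: one old table becomes two new ones.
    conn.executemany("INSERT INTO new_person VALUES (?, ?)",
                     [(r[0], r[1]) for r in rows])
    conn.executemany("INSERT INTO new_address VALUES (?, ?)",
                     [(r[0], r[2]) for r in rows])
    conn.commit()  # commit per batch so other work can proceed
    last_id = rows[-1][0]

print(conn.execute("SELECT COUNT(*) FROM new_person").fetchone())  # (1000,)
```
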
Contributors: Tyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Determining the provenance of a statement appearing in social media is a very significant challenge. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale to online social media; the resulting gap in provenance methodologies and technologies provides exciting research opportunities. The guiding vision is the use of social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information that is not readily made available to the average social media user. This research introduces an approach and builds a foundation aimed at realizing a provenance data capability for social media users that is not accessible today.
Contributors: Barbier, Geoffrey P (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD '07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is retrieving information from it easily, given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search offers a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguities, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions toward building a full-fledged search engine for structured data.
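
The structural-ambiguity point (a keyword query says nothing about which tables or joins it concerns) can be made concrete with a toy sketch. The schema, data, and function names below are invented for illustration and are not from the dissertation.

```python
# Toy keyword search over structured data: find which tuples match which
# keywords, then compose joined results in which every keyword is matched.
papers = [{"id": 1, "title": "Keyword Search on XML"},
          {"id": 2, "title": "Materialized Views"}]
authors = [{"paper_id": 1, "name": "Liu"},
           {"paper_id": 2, "name": "Chen"}]

def matches(row: dict, keyword: str) -> bool:
    # A keyword is a relevant match if it occurs in any attribute value.
    return any(keyword.lower() in str(v).lower() for v in row.values())

def keyword_query(keywords: list) -> list:
    # Compose results by joining author rows to paper rows and keeping
    # only combinations in which every keyword is matched somewhere.
    results = []
    for a in authors:
        for p in papers:
            if p["id"] != a["paper_id"]:
                continue
            joined = {**p, **a}
            if all(matches(joined, k) for k in keywords):
                results.append(joined)
    return results

# One keyword hits an author attribute, the other a paper attribute; the
# system, not the user, discovers the join that connects them.
print(keyword_query(["liu", "xml"]))
```
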
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011