Matching Items (407)
Description
Java is currently making its way into embedded systems and mobile devices such as those running Android. Programs written in Java are compiled into machine-independent binary class bytecodes, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems because it provides a mechanism to interact with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API for accessing an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and we observed a 5-10% performance gain with the new Java Native Interface.
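The copy-versus-reference distinction at the heart of this abstract can be sketched outside JNI. The Python toy below (class and field names are invented for illustration) contrasts serialization-style access, which duplicates the object on every boundary crossing, with reference-style access, which is what pinning an object's memory location makes possible:

```python
import pickle

class SensorReading:
    """Toy object that would cross the managed/native boundary."""
    def __init__(self, values):
        self.values = values

reading = SensorReading([1.0, 2.0, 3.0])

# Serialization-style access: the object is copied, so the callee
# works on a distinct copy (extra time and memory per crossing).
copied = pickle.loads(pickle.dumps(reading))
assert copied is not reading
assert copied.values == reading.values

# Reference-style access (what pinning enables): the callee sees the
# same object, so mutations are visible without any copying.
aliased = reading
aliased.values.append(4.0)
assert reading.values == [1.0, 2.0, 3.0, 4.0]
assert copied.values == [1.0, 2.0, 3.0]   # the earlier copy is unaffected
```

This is only an analogy for the overhead trade-off, not the thesis's JNI API.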
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the increasing focus on developing environmentally benign electronic packages, lead-free solder alloys have received a great deal of attention. Mishandling of packages during manufacture, assembly, or use may cause failure of the solder joint, and a fundamental understanding of the behavior of lead-free solders under mechanical shock conditions is lacking. Reliable experimental and numerical analyses of lead-free solder joints in the intermediate strain-rate regime need to be developed. This dissertation focuses on exploring the mechanical shock behavior of lead-free tin-rich solder alloys via multiscale modeling and numerical simulations. First, the macroscopic stress/strain behaviors of three bulk lead-free tin-rich solders were tested over a range of strain rates from 0.001/s to 30/s. Finite element analysis was conducted to determine an appropriate specimen geometry that could reach a homogeneous stress/strain field and a relatively high strain rate. A novel self-consistent true-stress correction method is developed to compensate for the inaccuracy caused by the triaxial stress state at the post-necking stage. Then the material properties of the micron-scale intermetallics were examined by micro-compression testing. The accuracy of this measurement is systematically validated by finite element analysis, and empirical adjustments are provided. Moreover, the interfacial properties of the solder/intermetallic interface are investigated, and a continuum traction-separation law for this interface is developed from an atomistic-based cohesive element method. The macroscopic stress/strain relation and the microstructural properties are combined to form a multiscale material model via a stochastic approach for both solder and intermetallic: solder is modeled by porous plasticity with random voids, and intermetallic is characterized as a brittle material with random vulnerable regions.
Thereafter, the porous-plasticity fracture of the solders and the brittle fracture of the intermetallics are coupled in one finite element model. Finally, this study yields a multiscale model to understand and predict the mechanical shock behavior of lead-free tin-rich solder joints. Different fracture patterns are observed for various strain rates and/or intermetallic thicknesses. The predictions are in good agreement with theory and experiments.
Contributors: Fei, Huiyang (Author) / Jiang, Hanqing (Thesis advisor) / Chawla, Nikhilesh (Thesis advisor) / Tasooji, Amaneh (Committee member) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD '07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results from the relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views.
These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
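A deliberately naive sketch may help fix ideas for issue (1): given rows of a structured source and a keyword query, find the tuples in which every keyword matches somewhere, and record which columns matched (a crude stand-in for "identifying return information"). The Python below is illustrative only; the function name and row data are invented, and the dissertation's contribution lies precisely in resolving the ambiguities this version ignores:

```python
def keyword_search(rows, keywords):
    """Return (row, matched-columns) pairs for rows in which every
    keyword appears in at least one field (case-insensitive).

    rows: list of dicts mapping column name -> value, standing in
    for tuples of a structured data source.
    """
    keywords = [k.lower() for k in keywords]
    results = []
    for row in rows:
        text = {col: str(val).lower() for col, val in row.items()}
        # a row is relevant only if each keyword matches some field
        if all(any(k in v for v in text.values()) for k in keywords):
            # record which columns each keyword matched -- a crude
            # form of identifying the "return information"
            matched = {k: [c for c, v in text.items() if k in v]
                       for k in keywords}
            results.append((row, matched))
    return results

rows = [
    {"title": "XQuery basics", "author": "Chen"},
    {"title": "SQL tuning", "author": "Liu"},
]
hits = keyword_search(rows, ["sql", "liu"])
assert len(hits) == 1
assert hits[0][0]["author"] == "Liu"
```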
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A new arrangement of the Concerto for Two Horns in E-flat Major, Hob. VIId/6, attributed by some to Franz Joseph Haydn, is presented here. The arrangement reduces the orchestral portion to ten wind instruments, specifically a double wind quintet, to facilitate performance of the work. A full score and a complete set of parts are included. In support of this new arrangement, a discussion of the early treatment of horns in pairs and the subsequent development of the double horn concerto in the eighteenth century provides historical context for the Concerto for Two Horns in E-flat major. A summary of the controversy concerning the identity of the composer of this concerto is followed by a description of the content and structure of each of its three movements. Some comments on the procedures of the arrangement complete the background information.
Contributors: Yeh, Guan-Lin (Author) / Ericson, John (Thesis advisor) / Holbrook, Amy (Committee member) / Micklich, Albie (Committee member) / Pilafian, J. Samuel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The purpose of this project was to commission, perform, and discuss a new work for an instrument pairing not often utilized, oboe and percussion. The composer, Alyssa Morris, was selected in June 2009. Her work, titled Forecast, was completed in October of 2009 and premiered in February of 2010, as part of a program showcasing music for oboe and percussion. Included in this document is a detailed biography of the composer, a description of the four movements of Forecast, performance notes for each movement, a diagram for stage set-up, the full score, the program from the premiere performance with biographies of all the performers involved, and both a live recording and MIDI sound file. The performance notes discuss issues that arose during preparation for the premiere and should help avoid potential pitfalls. TrevCo Music, publisher of the work, graciously allowed inclusion of the full score. This score is solely for use in this document; please visit the publisher's website for purchasing information. The commission and documentation of this composition are intended to add to the repertoire for oboe in an unusual instrument pairing and to encourage further exploration of such combinations.
Contributors: Creamer, Caryn (Author) / Schuring, Martin (Thesis advisor) / Hill, Gary (Committee member) / Holbrook, Amy (Committee member) / Micklich, Albie (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinence to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis for determining the significance of other candidate genes, which are then ranked by the association they exhibit with the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which, from a computational perspective, can impede the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy.
We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results remain to be validated empirically, but computational validation using known targets is very positive.
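A common way to rank candidates against a seed set on a weighted association network is random-walk-with-restart score propagation. The Python sketch below is a generic illustration under that assumption, not the author's exact algorithm; the gene names and edge weights are invented:

```python
def prioritize(adj, seeds, alpha=0.85, iters=100):
    """Rank genes by propagating score from known disease genes.

    adj: gene -> {neighbor: edge weight}; in the thesis's setting the
    weights would encode the reliability/relevance of the evidence
    linking two genes. Random-walk-with-restart, iterated to
    (near) convergence.
    """
    genes = list(adj)
    restart = {g: 1.0 / len(seeds) if g in seeds else 0.0 for g in genes}
    score = dict(restart)
    for _ in range(iters):
        new = {}
        for g in genes:
            inflow = 0.0
            for n in genes:
                w = adj[n].get(g, 0.0)
                out = sum(adj[n].values())
                if w and out:
                    inflow += score[n] * w / out   # walk mass n -> g
            new[g] = (1 - alpha) * restart[g] + alpha * inflow
        score = new
    return sorted(genes, key=lambda g: score[g], reverse=True)

# Invented toy network: BRCA1 is the known disease gene (seed).
adj = {
    "BRCA1": {"TP53": 1.0, "GENE_X": 1.0},
    "TP53": {"BRCA1": 1.0},
    "GENE_X": {"BRCA1": 1.0},
    "GENE_Y": {},          # unconnected candidate
}
ranking = prioritize(adj, seeds={"BRCA1"})
assert ranking[0] == "BRCA1"        # seed retains the restart mass
assert ranking[-1] == "GENE_Y"      # no associations, lowest rank
```

The sparsity problem the abstract mentions is visible here: GENE_Y receives no score at all, which is why additional association sources (such as transcription factor binding sites) are valuable.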
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, however, such as stock market analysis and the monitoring of users' credit card purchase patterns, the matches to the user queries are in fact plentiful, and the system must sift through these many matches efficiently to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify the top-k matching results satisfying both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing the top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that can adapt to changing conditions (e.g., sorted and random access costs, join rates) without having to assume the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
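The maintain-rather-than-recompute idea can be sketched in a few lines. The Python toy below (class name, scores, and timestamps are all invented) keeps a sliding window of events, expiring old ones as new ones arrive, and reports the current top-k on demand instead of recomputing each window from scratch:

```python
import heapq
from collections import deque

class StreamingTopK:
    """Maintain the top-k scoring matches over a sliding time window.

    Events are appended as they arrive and dropped as they expire,
    so the window is maintained incrementally rather than rebuilt
    per query. The scoring scheme is illustrative only.
    """
    def __init__(self, k, window):
        self.k, self.window = k, window
        self.events = deque()          # (timestamp, score, payload)

    def insert(self, ts, score, payload):
        self.events.append((ts, score, payload))
        # expire events that have fallen out of the time window
        while self.events and self.events[0][0] <= ts - self.window:
            self.events.popleft()

    def topk(self):
        return heapq.nlargest(self.k, self.events, key=lambda e: e[1])

tk = StreamingTopK(k=2, window=10)
for ts, score in [(1, 5.0), (3, 9.0), (4, 2.0), (12, 7.0)]:
    tk.insert(ts, score, payload=None)
# the event at ts=1 expired when ts=12 arrived, so the current
# top-2 scores are 9.0 and 7.0
assert [e[1] for e in tk.topk()] == [9.0, 7.0]
```

A production version would also maintain the top-k set itself incrementally (the abstract's point); here `topk()` still scans the window, which keeps the sketch short.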
Contributors: Wang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Études written for violin ensemble, which include violin duets, trios, and quartets, are less numerous than solo études. These works rarely go by the title "étude," and have not been the focus of much scholarly research. Ensemble études have much to offer students, teachers, and composers, however, because they add an extra dimension to the learning, teaching, and composing processes. This document establishes the value of ensemble études in pedagogy and explores applications of the repertoire currently available. Rather than focus on violin duets, the most common form of ensemble étude, it mainly considers works for three and four violins without accompaniment. Concentrating on the pedagogical possibilities of studying études in a group, this document introduces creative ways that works for violin ensemble can be used as both études and performance pieces. The first two chapters explore the history and philosophy of the violin étude and multiple-violin works, the practice of arranging solo études for multiple instruments, and the benefits of group learning and cooperative learning that distinguish ensemble étude study from solo étude study. The third chapter is an annotated survey of works for three and four violins without accompaniment, and serves as a pedagogical guide to some of the available repertoire. Representing a wide variety of styles, techniques, and levels, it illuminates an historical association between violin ensemble works and pedagogy. The fourth chapter presents an original composition by the author, titled Variations on a Scottish Folk Song: Étude for Four Violins, with an explanation of the process and techniques used to create this ensemble étude. This work is an example of the musical and technical integration essential to étude study, and demonstrates various compositional traits that promote cooperative learning.
Ensemble études are valuable pedagogical tools that deserve wider exposure. It is my hope that the information and ideas about ensemble études in this paper, and the individual descriptions of the works presented, will increase interest in and application of violin trios and quartets at the university level.
Contributors: Lundell, Eva Rachel (Contributor) / Swartz, Jonathan (Thesis advisor) / Rockmaker, Jody (Committee member) / Buck, Nancy (Committee member) / Koonce, Frank (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The name of Geechie Wiley has surfaced only rarely since 1931, when she recorded her second session with the Paramount Company in Grafton, WI. A few scholars including Paul Oliver and Greil Marcus unearthed and promoted her music and called for further research on this enigmatic figure. In other publications, Wiley is frequently given only passing mention in long lists of talented female blues singer-guitarists, or briefly discussed in descriptions of songsters. Her music is lauded in the liner notes of the myriad compilation albums that have re-released her recordings. However, prior to this study, Marcus's three-page profile is the longest work written about Wiley; other contributions range between one sentence and two paragraphs in length. None really answers the question: who was Geechie Wiley? This thesis begins by documenting my attempt to piece together all information presently available on Geechie Wiley. A biographical chapter, supplemented with a discussion of the blues songster, follows. I then discuss my methodology and philosophy for transcription. This is followed by a critical and comparative analysis of the recordings, using the transcriptions as supplements. Finally, my fifth chapter presents conclusions about Wiley's life, career, and disappearance. My transcriptions of Wiley's six songs are found in the first appendix. Reproductions of Paramount Records advertisements are located in the final appendix. In these ways, this thesis argues that Wiley's work traces the transformation of African-American music from the general secular music of the songsters to the iconic blues genre.
Contributors: Cordeiro, AnneMarie Youell (Author) / Norton, Kay (Thesis advisor) / Mook, Richard (Committee member) / Sunkett, Mark (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Data-driven applications are becoming increasingly complex with support for processing events and data streams in a loosely-coupled distributed environment, providing integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store the results of the computed view so that subsequent access to the view retrieves the materialized results, avoiding the cost of recomputing the entire view from base data sources. Using a service-based metadata repository that provides metadata level access to the various language components in the system, a heuristics-based algorithm detects the common subexpressions from the queries represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML or a hybrid join over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format, instead of converting the data to a common model. LINQ serves as the materialized view definition language for creating the view definitions. An algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to base data sources used to materialize views are captured and mapped to a delta structure. The deltas are then streamed within the framework for use in the incremental update of the materialized view. 
Algorithms are presented that use the magic sets query optimization approach to both efficiently materialize the views and to propagate the relevant changes to the views for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining the LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions and incrementally maintaining the views by using magic sets enhances the efficiency of the distributed event stream processing environment.
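Delta propagation for a materialized join view can be sketched in a few lines. The Python below is a minimal illustration of the core idea (joining only the delta against the other source rather than recomputing the view), not the LINQ-based, magic-sets implementation the abstract describes; the relation contents are invented:

```python
def join(r, s):
    """Natural join on the shared key (first element of each tuple)."""
    index = {}
    for key, a in r:
        index.setdefault(key, []).append(a)
    return {(key, a, b) for key, b in s for a in index.get(key, [])}

def apply_delta(view, delta_r, s):
    """Incrementally maintain the materialized view of R |><| S when
    inserted tuples (a delta) arrive on R: only the delta is joined
    against S, avoiding a full recomputation of the view.
    """
    view |= join(delta_r, s)
    return view

r = [(1, "a"), (2, "b")]
s = [(1, "x"), (3, "y")]
view = join(r, s)                      # initial materialization
assert view == {(1, "a", "x")}

# a delta insert on R with key 3 joins only against matching S tuples
view = apply_delta(view, [(3, "c")], s)
assert view == {(1, "a", "x"), (3, "c", "y")}
```

Deletions and updates require a subtractive delta as well, which is where maintenance becomes subtle and where optimizations like magic sets pay off.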
Contributors: Chaudhari, Mahesh Balkrishna (Author) / Dietrich, Suzanne W (Thesis advisor) / Urban, Susan D (Committee member) / Davulcu, Hasan (Committee member) / Chen, Yi (Committee member) / Arizona State University (Publisher)
Created: 2011