This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description

Currently, Java is making its way into embedded systems and mobile devices such as Android devices. Programs written in Java are compiled into machine-independent binary class files (bytecode). A Java Virtual Machine (JVM) executes these classes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems as it provides a mechanism to interact with libraries specific to the platform. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and we observed a 5-10% performance gain in the new Java Native Interface.
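
The reflection cost that the thesis targets can be illustrated from the Java side alone. The sketch below is a minimal, hypothetical benchmark (the class and field names are invented, and it does not reproduce the thesis's native-side pinning API): it times repeated reflective field lookups against reuse of a cached java.lang.reflect.Field handle, the same kind of per-access lookup overhead that caching or pinning an object reference is meant to avoid.

```java
import java.lang.reflect.Field;

// Minimal sketch (not code from the thesis): compares repeated reflective lookup
// of a field against reuse of a cached Field handle, illustrating the kind of
// per-access overhead that reference caching/pinning aims to remove.
public class ReflectionOverheadDemo {
    static class Sensor {          // hypothetical object crossing the JNI boundary
        int reading = 42;
    }

    public static void main(String[] args) throws Exception {
        Sensor s = new Sensor();
        final int N = 1_000_000;

        // Uncached: resolve the Field on every access (lookup cost + access cost).
        long t0 = System.nanoTime();
        long sum1 = 0;
        for (int i = 0; i < N; i++) {
            Field f = Sensor.class.getDeclaredField("reading");
            sum1 += f.getInt(s);
        }
        long uncachedNs = System.nanoTime() - t0;

        // Cached: resolve once, reuse the handle (access cost only).
        Field cached = Sensor.class.getDeclaredField("reading");
        long t1 = System.nanoTime();
        long sum2 = 0;
        for (int i = 0; i < N; i++) {
            sum2 += cached.getInt(s);
        }
        long cachedNs = System.nanoTime() - t1;

        System.out.printf("uncached: %d ms, cached: %d ms (sums %d/%d)%n",
                uncachedNs / 1_000_000, cachedNs / 1_000_000, sum1, sum2);
    }
}
```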
ContributorsChandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description

As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the data management systems' functionalities. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users, as not only are these languages sophisticated, but the users also need to know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguities, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as the efficiency challenges posed by a large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
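
As a much-simplified illustration of the first step above, identifying relevant keyword matches, the sketch below (toy data, not the dissertation's algorithms) scores rows of a small relation by how many query keywords they contain and returns them in ranked order; the dissertation's contributions lie in what follows this step: composing, disambiguating, and efficiently maintaining results.

```java
import java.util.*;

// Simplified sketch (not the dissertation's algorithms): rank rows of a toy
// relation by how many distinct query keywords they contain, i.e. the bare-bones
// "identify relevant keyword matches" step of keyword search over structured data.
public class KeywordSearchSketch {
    public static void main(String[] args) {
        // Hypothetical relation: row id -> concatenated attribute text.
        Map<String, String> rows = Map.of(
                "p1", "XML keyword search snippet generation",
                "p2", "materialized views for relational databases",
                "p3", "keyword query ambiguity and result clustering");

        List<String> query = List.of("keyword", "result", "clustering");

        // Score each row by the number of distinct query keywords it contains.
        Map<String, Long> scores = new HashMap<>();
        for (var e : rows.entrySet()) {
            Set<String> terms = new HashSet<>(Arrays.asList(e.getValue().toLowerCase().split("\\s+")));
            long hits = query.stream().filter(terms::contains).count();
            if (hits > 0) scores.put(e.getKey(), hits);
        }

        // Emit matches ordered by descending score.
        scores.entrySet().stream()
              .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
              .forEach(e -> System.out.println(e.getKey() + " score=" + e.getValue()));
    }
}
```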
ContributorsLiu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created2011
Description

A dual-channel directional digital hearing aid (DHA) front end using a fully differential difference amplifier (FDDA) based microphone interface circuit (MIC) for capacitive Micro Electro Mechanical Systems (MEMS) microphones and an adaptive-power analog front end (AFE) is presented. The FDDA-based microphone interface circuit converts capacitance variations into a voltage signal, achieves a noise level of 32 dB SPL (sound pressure level) and an SNR of 72 dB, and additionally performs single-ended-to-differential conversion, allowing for a fully differential analog signal chain. The analog front end consists of a 40 dB VGA and a power-scalable continuous-time sigma-delta ADC with 68 dB SNR, dissipating 67 µW from a 1.2 V supply. The ADC implements a self-calibrating feedback DAC for calibrating the second-order non-linearity. The VGA and power-scalable ADC are fabricated in a 0.25 µm TSMC CMOS process. The dual channels of the DHA are precisely matched and achieve about 0.5 dB gain mismatch, resulting in a directivity index greater than 5 dB. This will enable a highly integrated and low-power DHA.
ContributorsNaqvi, Syed Roomi (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Chae, Junseok (Committee member) / Barnby, Hugh (Committee member) / Aberle, James T., 1961- (Committee member) / Arizona State University (Publisher)
Created2011
Description

This composition was commissioned by the Orgelpark to be performed in Amsterdam in September 2011 during Gaudeamus Muziekweek. It will be performed by the vocal group VocaalLab Nederland. It is scored for four vocalists, organ, tanpura, and electronic sound. The work is a culmination of my studies in South Indian Carnatic rhythm, North Indian classical singing, and American minimalism. It is a meditation on the idea that the drone and pulse are micro/macro aspects of the same phenomenon of vibration. Cycles are created on the macroscale through a mathematically defined scale of harmonic/pitch relationships. Cycles are created on the microscale through the subdivision and addition of rhythmic pulses.
ContributorsAdler, Jacob (Composer) / Rockmaker, Jody (Thesis advisor) / Feisst, Sabine (Committee member) / Etezady, Roshanne, 1973- (Committee member) / Arizona State University (Publisher)
Created2011
Description

Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease-significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance as compared to two well-known gene prioritization algorithms. Essentially, no bias in performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
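
The association-based ranking idea can be sketched with a toy propagation over a weighted gene network (hypothetical genes and edge weights, not the dissertation's integrated multi-source model): genes known to be disease-related seed a score that spreads to their neighbors through weighted edges, and candidate genes are ranked by the score they accumulate.

```java
import java.util.*;

// Toy sketch of association-based gene prioritization over a weighted network
// (hypothetical genes/weights, not the dissertation's integrated model):
// known disease genes seed a score that is propagated to neighbors, and
// candidate genes are ranked by the score they accumulate.
public class GenePrioritizationSketch {
    public static void main(String[] args) {
        // Undirected weighted network: gene -> (neighbor -> reliability weight).
        Map<String, Map<String, Double>> net = new HashMap<>();
        addEdge(net, "BRCA1", "TP53", 0.9);
        addEdge(net, "TP53", "MDM2", 0.8);
        addEdge(net, "BRCA1", "RAD51", 0.7);
        addEdge(net, "MDM2", "CDKN1A", 0.5);

        Set<String> seeds = Set.of("BRCA1", "TP53");   // known disease genes
        double retain = 0.8;                            // fraction of score kept at each hop

        // Two rounds of propagation from the seed set.
        Map<String, Double> score = new HashMap<>();
        seeds.forEach(g -> score.put(g, 1.0));
        for (int round = 0; round < 2; round++) {
            Map<String, Double> next = new HashMap<>(score);
            for (var e : score.entrySet()) {
                for (var nb : net.getOrDefault(e.getKey(), Map.of()).entrySet()) {
                    next.merge(nb.getKey(), e.getValue() * nb.getValue() * (1 - retain), Double::sum);
                }
            }
            score.clear();
            score.putAll(next);
        }

        // Rank non-seed candidates by accumulated score.
        score.entrySet().stream()
             .filter(e -> !seeds.contains(e.getKey()))
             .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
             .forEach(e -> System.out.printf("%s %.3f%n", e.getKey(), e.getValue()));
    }

    static void addEdge(Map<String, Map<String, Double>> net, String a, String b, double w) {
        net.computeIfAbsent(a, k -> new HashMap<>()).put(b, w);
        net.computeIfAbsent(b, k -> new HashMap<>()).put(a, w);
    }
}
```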
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description

A systematic approach to composition has been used by a variety of composers to control an assortment of musical elements in their pieces. This paper begins with a brief survey of some of the important systematic approaches that composers have employed in their compositions, devoting particular attention to Pierre Boulez's Structures Ia. The purpose of this survey is to examine several systematic approaches to composition by prominent composers and their philosophy in adopting this type of approach. The next section of the paper introduces my own systematic approach to composition: the Take-Away System. The third section provides several musical applications of the system, citing my work Octulus for two pianos as an example. The appendix details theorems and observations within the system for further study.
ContributorsHarbin, Doug (Author) / Hackbarth, Glenn (Thesis advisor) / DeMars, James (Committee member) / Etezady, Roshanne, 1973- (Committee member) / Rockmaker, Jody (Committee member) / Rogers, Rodney (Committee member) / Arizona State University (Publisher)
Created2011
Description

Sensing and controlling current flow is a fundamental requirement for many electronic systems, including power management (DC-DC converters and LDOs), battery chargers, electric vehicles, solenoid positioning, motor control, and power monitoring. Current Shunt Monitor (CSM) systems have various applications for precise current monitoring in the aforementioned areas. CSMs enable current measurement across an external sense resistor (RS) in series with the current flow. Two different types of CSMs are designed and characterized in this paper: the first uses a direct current-reading method and the other uses an indirect current-reading method. The proposed CSM systems can sense power-supply currents ranging from 1 mA to 200 mA for the direct-reading topology and from 1 mA to 500 mA for the indirect-reading topology across a typical board Cu-trace resistance of 1 ohm, with less than 10 µV input-referred offset, 0.3 µV/°C offset drift, and 0.1% accuracy for both topologies. The proposed systems avoid the costly zero-temperature-coefficient (TC) sense resistor normally used in typical CSM systems; instead, both designs use the existing Cu trace on the printed circuit board (PCB) in place of the costly resistor. The systems use chopper stabilization in the front-end amplifier signal path to suppress input-referred offset down to less than 10 µV. A switching current-mode (SI) FIR filtering technique is used at the instrumentation amplifier output to filter out the chopping ripple caused by input offset and flicker noise, by averaging half of the phase 1 signal and half of the phase 2 signal. In addition, residual offset, mainly caused by clock feed-through and charge injection of the chopper switches, is notched out at the chopping frequency and its multiples by the sinc response of the SI-FIR filter. A frequency-domain sigma-delta ADC, used in the indirect-reading design, enables a digital interface to processor applications with minimal added circuitry to build a simple first-order sigma-delta ADC. The CSMs are fabricated in a 0.7 µm CMOS process with three levels of metal and a maximum Vds tolerance of 8 V, and operate across a common-mode range of 0 to 26 V for the direct-reading type and 0 to 30 V for the indirect-reading type, achieving less than 10 nV/√Hz of flicker noise at 100 Hz for both approaches. By using a semi-digital SI-FIR filter, residual chopper offset is suppressed down to 0.5 mVpp from a baseline of 8 mVpp, which is equivalent to 25 dB suppression.
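
The notching behavior of the averaging filter can be checked numerically. The sketch below uses generic values (the sampling rate and tap count are assumptions, not the fabricated design's parameters) to evaluate the magnitude response of an N-tap moving-average FIR; when one chopping period spans exactly N taps, the sinc nulls fall at the chopping frequency and its harmonics, which is where the residual chopper ripple is suppressed.

```java
// Numeric check of the averaging FIR's sinc notches (generic values, not the
// fabricated design): an N-tap average sampled at fs has nulls at k*fs/N, so
// choosing N so that fs/N equals the chopping frequency notches the chopper
// ripple at f_chop and its harmonics.
public class SincNotchSketch {
    // Magnitude response of an N-tap moving-average (boxcar) FIR at frequency f.
    static double movingAverageMag(double f, double fs, int n) {
        double x = Math.PI * f / fs;
        if (Math.abs(Math.sin(x)) < 1e-12) return 1.0;       // DC (and multiples of fs)
        return Math.abs(Math.sin(n * x) / (n * Math.sin(x)));
    }

    public static void main(String[] args) {
        double fs = 1_000_000.0;   // assumed sampling rate of the SI-FIR, Hz
        int n = 8;                 // taps per chopping period (assumption)
        double fChop = fs / n;     // chopping frequency aligned with the first null

        for (int k = 1; k <= 3; k++) {
            double f = k * fChop;
            System.out.printf("|H| at %.0f*f_chop = %.3e%n", (double) k, movingAverageMag(f, fs, n));
        }
        System.out.printf("|H| at 0.5*f_chop = %.3f%n", movingAverageMag(0.5 * fChop, fs, n));
    }
}
```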
ContributorsYeom, Hyunsoo (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kiaei, Sayfe (Committee member) / Ozev, Sule (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created2011
Description

Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, however, such as stock market analysis and monitoring of user credit card purchase patterns, the matches to the user queries are in fact plentiful, and the system has to efficiently sift through these many matches to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify top-k matching results satisfying both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that are able to adapt to changing conditions (e.g., sorted and random access costs, join rates) without assuming the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
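
A minimal sketch of the dynamic-maintenance idea follows (invented match objects and scores, not the CPR framework's algorithms): matches carry a score and an expiry time, a score-ordered structure is purged of expired entries as time advances, and the current top-k is read off without recomputing results from scratch for each window.

```java
import java.util.*;

// Minimal sketch of dynamically maintaining top-k matches over a stream
// (not the CPR framework's algorithms): each match has a score and an expiry
// time; expired matches are purged as time advances and the current top-k is
// read from a score-ordered set instead of being recomputed from scratch.
public class TopKStreamSketch {
    record Match(String id, double score, long expiresAt) {}

    // Matches ordered by descending score (ties broken by id for determinism).
    private final TreeSet<Match> byScore = new TreeSet<>(
            Comparator.<Match>comparingDouble(Match::score).reversed()
                      .thenComparing(Match::id));

    void insert(Match m)   { byScore.add(m); }

    void expire(long now)  { byScore.removeIf(m -> m.expiresAt() <= now); }

    List<Match> topK(int k, long now) {
        expire(now);
        return byScore.stream().limit(k).toList();
    }

    public static void main(String[] args) {
        TopKStreamSketch window = new TopKStreamSketch();
        window.insert(new Match("m1", 0.9, 115));
        window.insert(new Match("m2", 0.7, 120));
        window.insert(new Match("m3", 0.8, 101));

        System.out.println(window.topK(2, 100));  // all valid: m1, m3
        System.out.println(window.topK(2, 110));  // m3 expired: m1, m2
    }
}
```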
ContributorsWang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2011
Description

Études written for violin ensemble, which include violin duets, trios, and quartets, are less numerous than solo études. These works rarely go by the title "étude," and have not been the focus of much scholarly research. Ensemble études have much to offer students, teachers, and composers, however, because they add an extra dimension to the learning, teaching, and composing processes. This document establishes the value of ensemble études in pedagogy and explores applications of the repertoire currently available. Rather than focus on violin duets, the most common form of ensemble étude, it mainly considers works for three and four violins without accompaniment. Concentrating on the pedagogical possibilities of studying études in a group, this document introduces creative ways that works for violin ensemble can be used as both études and performance pieces. The first two chapters explore the history and philosophy of the violin étude and multiple-violin works, the practice of arranging solo études for multiple instruments, and the benefits of group learning and cooperative learning that distinguish ensemble étude study from solo étude study. The third chapter is an annotated survey of works for three and four violins without accompaniment, and serves as a pedagogical guide to some of the available repertoire. Representing a wide variety of styles, techniques, and levels, it illuminates an historical association between violin ensemble works and pedagogy. The fourth chapter presents an original composition by the author, titled Variations on a Scottish Folk Song: Étude for Four Violins, with an explanation of the process and techniques used to create this ensemble étude. This work is an example of the musical and technical integration essential to étude study, and demonstrates various compositional traits that promote cooperative learning. Ensemble études are valuable pedagogical tools that deserve wider exposure. It is my hope that the information and ideas about ensemble études in this paper and the individual descriptions of the works presented will increase interest in and application of violin trios and quartets at the university level.
ContributorsLundell, Eva Rachel (Contributor) / Swartz, Jonathan (Thesis advisor) / Rockmaker, Jody (Committee member) / Buck, Nancy (Committee member) / Koonce, Frank (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created2011
Description

Data-driven applications are becoming increasingly complex with support for processing events and data streams in a loosely-coupled distributed environment, providing integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store the results of the computed view so that subsequent access to the view retrieves the materialized results, avoiding the cost of recomputing the entire view from base data sources. Using a service-based metadata repository that provides metadata level access to the various language components in the system, a heuristics-based algorithm detects the common subexpressions from the queries represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML or a hybrid join over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format, instead of converting the data to a common model. LINQ serves as the materialized view definition language for creating the view definitions. An algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to base data sources used to materialize views are captured and mapped to a delta structure. The deltas are then streamed within the framework for use in the incremental update of the materialized view. Algorithms are presented that use the magic sets query optimization approach to both efficiently materialize the views and to propagate the relevant changes to the views for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining the LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions and incrementally maintaining the views by using magic sets enhances the efficiency of the distributed event stream processing environment.
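
A compact sketch of the incremental-maintenance idea is given below (hypothetical relations and keys; the dissertation itself defines views in LINQ over hybrid relational/XML sources and uses magic sets for optimization): a join view is materialized once, and an insert delta on one base source is joined only against the other source and appended to the stored view instead of recomputing the view from the base data.

```java
import java.util.*;

// Compact sketch of incremental view maintenance with deltas (hypothetical
// relations; the dissertation uses LINQ over hybrid relational/XML sources):
// a join view over R and S on a key is materialized once, and an insert delta
// on R is joined only against S and appended to the stored view, instead of
// recomputing the whole view from the base data.
public class IncrementalViewSketch {
    record Row(String key, String value) {}

    public static void main(String[] args) {
        List<Row> r = new ArrayList<>(List.of(new Row("k1", "r-a"), new Row("k2", "r-b")));
        Map<String, String> s = Map.of("k1", "s-x", "k2", "s-y", "k3", "s-z");

        // Initial materialization of the view: (join key, R value, S value).
        List<String[]> view = new ArrayList<>();
        for (Row row : r) {
            String sv = s.get(row.key());
            if (sv != null) view.add(new String[]{row.key(), row.value(), sv});
        }
        System.out.println("materialized rows: " + view.size());

        // Insert delta on R: join only the delta against S and append to the view.
        List<Row> deltaR = List.of(new Row("k3", "r-c"));
        for (Row row : deltaR) {
            String sv = s.get(row.key());
            if (sv != null) view.add(new String[]{row.key(), row.value(), sv});
        }
        view.forEach(t -> System.out.println(String.join(" | ", t)));
    }
}
```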
ContributorsChaudhari, Mahesh Balkrishna (Author) / Dietrich, Suzanne W (Thesis advisor) / Urban, Susan D (Committee member) / Davulcu, Hasan (Committee member) / Chen, Yi (Committee member) / Arizona State University (Publisher)
Created2011