Matching Items (351)
Description
Java is currently making its way into embedded systems and mobile devices such as Android. Programs written in Java are compiled into machine-independent binary class files (bytecode), which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI), which allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems, as it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and provides techniques to reduce this overhead. It also provides an API for accessing an object directly through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
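To make the interoperation concrete, the sketch below shows the Java side of a hypothetical JNI binding; the class, method, and library names are illustrative and not taken from the thesis. On the native (C) side, an implementation could use JNI functions such as GetPrimitiveArrayCritical, which pins the array in memory so the native code works on the Java heap data directly rather than on a copy made by functions such as GetIntArrayElements.

```java
// Hypothetical sketch of the Java side of a JNI binding; names are
// illustrative, not from the thesis. Running it requires first building
// the native library (libnativedemo.so on Android/Linux).
public class NativeSum {
    // Declared in Java, implemented in a native library compiled for the
    // host ISA. A C implementation could pin `data` with
    // GetPrimitiveArrayCritical to avoid copying.
    private static native long sum(int[] data);

    static {
        System.loadLibrary("nativedemo"); // resolves the native implementation
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println("sum = " + sum(data));
    }
}
```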
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges due to the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying the information to return, and composing query results from both; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions toward building a full-fledged search engine for structured data.
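As a toy illustration of structural ambiguity (not an algorithm from the dissertation), the sketch below treats a keyword query as a bag of terms and scans every attribute of every tuple, because the query itself names no tables or columns; real systems must instead infer the intended structure efficiently.

```java
import java.util.*;

// Toy illustration only: a keyword query carries no schema, so every
// attribute of every tuple is a potential match.
public class KeywordMatch {
    // Returns tuples in which every (lowercase) query keyword appears
    // in some attribute value.
    static List<Map<String, String>> match(List<Map<String, String>> tuples,
                                           Set<String> keywords) {
        List<Map<String, String>> results = new ArrayList<>();
        for (Map<String, String> tuple : tuples) {
            Set<String> remaining = new HashSet<>(keywords);
            for (String value : tuple.values()) {
                remaining.removeIf(k -> value.toLowerCase().contains(k));
            }
            if (remaining.isEmpty()) results.add(tuple); // all keywords found
        }
        return results;
    }

    public static void main(String[] args) {
        List<Map<String, String>> db = List.of(
            Map.of("title", "XML Query Processing", "author", "Smith"),
            Map.of("title", "Stream Joins", "author", "Xu"));
        System.out.println(match(db, Set.of("xml", "smith")));
    }
}
```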
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The magnetoplasmadynamic (MPD) thruster is an electromagnetic thruster that produces a higher specific impulse than conventional chemical rockets and greater thrust densities than electrostatic thrusters, but a well-known operational limit, referred to as "onset," imposes a severe limitation on its efficiency and lifetime. This phenomenon is associated with large fluctuations in operating voltage, high rates of electrode erosion, and three-dimensional instabilities in the plasma flow field that cannot be adequately represented by two-dimensional, axisymmetric models. Simulations of the Princeton Benchmark Thruster (PBT) were conducted using the three-dimensional version of the magnetohydrodynamic (MHD) code MACH. Validation of the numerical model is partially achieved by comparison to equivalent simulations conducted using the well-established two-dimensional, axisymmetric version of MACH. Comparisons with available experimental data were subsequently performed to further validate the model and gain insight into the physical processes of MPD acceleration. Thrust, plasma voltage, and plasma flow-field predictions were calculated for the PBT operating with applied currents in the range 6.5 kA < J < 23.25 kA and mass flow rates of 1 g/s, 3 g/s, and 6 g/s. Comparisons of performance characteristics between the two versions of the code show excellent agreement, indicating that MACH3 can be expected to be as predictive as MACH2 has proven over multiple applications to MPD thrusters. Predicted thrust for operating conditions that exhibited no symptoms of the onset phenomenon experimentally also agreed with experiment, well within the experimental uncertainty. At operating conditions beyond such values, however, there is a discrepancy of up to roughly 20%, which implies that certain significant physical processes associated with onset are not currently being modeled. Such processes also appear in the experimental total voltage data, as evidenced by the characteristic "voltage hash," but are not present in the predicted plasma voltage. Additionally, analysis of the predicted plasma flow field shows no breakdown in azimuthal symmetry, which is expected to accompany onset. This implies that certain physical processes may be modeled by neither MACH2 nor MACH3; the latter observation suggests that the phenomenon may not be inherently three-dimensional and plasma-related, as proposed by other efforts, but rather a consequence of electrode material processes that have not been incorporated into the current models.
Contributors: Parma, Brian (Author) / Mikellides, Pavlos G (Thesis advisor) / Squires, Kyle (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Microchannel heat sinks can possess heat transfer characteristics unavailable in conventional heat exchangers; such sinks offer compact solutions to otherwise intractable thermal management problems, notably in small-scale electronics cooling. Flow boiling in microchannels allows a very high heat transfer rate, but it is bounded by the critical heat flux (CHF). This thesis presents a theoretical-numerical study of a method to improve the heat rejection capability of a microchannel heat sink by expanding the channel cross-section along the flow direction. The thermodynamic quality of the refrigerant increases during flow boiling, decreasing the density of the bulk coolant as it flows. This can give rise to pressure fluctuations in the channels, leading to nonuniform heat transfer and local dryout in regions exceeding CHF. This undesirable phenomenon is counteracted by permitting the cross-section of the microchannel to increase along the direction of flow, allowing more volume for the vapor. Governing equations are derived from a control-volume analysis of a single heated rectangular microchannel; the cross-section is allowed to expand in width and height. The resulting differential equations are solved numerically for a variety of channel expansion profiles and numbers of channels. The refrigerant is R-134a, and the channel parameters are based on a physical test bed in a related experiment. Significant improvement in CHF is possible with moderate area expansion, so minimal additional manufacturing costs could yield major gains in the utility of microchannel heat sinks. An optimum expansion rate was observed in certain cases, and alterations in the channel width are, in general, more effective at improving CHF than alterations in the channel height. Modest expansion in height enables small width expansions to be very effective.
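The sketch below gives a heavily simplified flavor of this kind of calculation: a one-dimensional flow-boiling energy balance, m_dot * h_fg * dx = q'' * P(z) * dz, marched along a channel whose width grows linearly. All parameter values are illustrative placeholders, and the model omits the pressure, CHF, and two-phase flow physics treated in the thesis.

```java
// Minimal 1-D energy-balance sketch (not the thesis's control-volume
// model): march the thermodynamic quality x along a heated channel whose
// width expands linearly. All numbers are illustrative placeholders.
public class ChannelMarch {
    public static void main(String[] args) {
        double L = 0.02;      // channel length [m]
        double w0 = 100e-6;   // inlet width [m]
        double h = 100e-6;    // channel height [m], held constant here
        double expand = 10.0; // fractional width increase per meter
        double q = 1.0e6;     // wall heat flux [W/m^2]
        double mdot = 1.0e-4; // mass flow rate per channel [kg/s]
        double hfg = 2.17e5;  // approx. latent heat of R-134a [J/kg]
        int n = 1000;
        double dz = L / n, x = 0.0;
        for (int i = 0; i < n; i++) {
            double z = i * dz;
            double w = w0 * (1.0 + expand * z);     // expanding width
            double perimeter = 2.0 * (w + h);       // heated perimeter
            x += q * perimeter * dz / (mdot * hfg); // dx from energy balance
        }
        System.out.printf("outlet quality x = %.3f%n", x);
    }
}
```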
Contributors: Miner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
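For flavor, the sketch below implements random walk with restart (RWR), a standard association-based network prioritization scheme; it is an assumption for illustration, not the dissertation's integrated-network model, which additionally weights sources by relevance and reliability.

```java
import java.util.Arrays;

// Illustrative only: random walk with restart (RWR), a common
// association-based prioritization scheme. Known disease genes seed the
// restart vector; steady-state visit probabilities rank the candidates.
public class RwrPrioritizer {
    public static void main(String[] args) {
        // Column-normalized adjacency of a toy 4-gene network
        // (edges 0-1, 0-2, 1-2, 1-3).
        double third = 1.0 / 3.0;
        double[][] W = {
            {0.0, third, 0.5, 0.0},
            {0.5, 0.0,   0.5, 1.0},
            {0.5, third, 0.0, 0.0},
            {0.0, third, 0.0, 0.0}
        };
        double[] p0 = {1.0, 0.0, 0.0, 0.0}; // gene 0 is the known disease gene
        double[] p = p0.clone();
        double restart = 0.3; // probability of jumping back to the seed set
        for (int iter = 0; iter < 100; iter++) {
            double[] next = new double[p.length];
            for (int i = 0; i < p.length; i++) {
                double walk = 0.0;
                for (int j = 0; j < p.length; j++) walk += W[i][j] * p[j];
                next[i] = (1 - restart) * walk + restart * p0[i];
            }
            p = next;
        }
        System.out.println("prioritization scores: " + Arrays.toString(p));
    }
}
```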
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A numerical study of incremental spin-up and spin-up from rest of a thermally stratified fluid enclosed within a right circular cylinder with rigid bottom and side walls and a stress-free upper surface is presented. Thermally stratified spin-up is a typical example of baroclinity: the flow is initiated by a sudden increase in rotation rate, and the tilting of isotherms gives rise to a baroclinic source of vorticity. Research by Smirnov et al. [2010a] showed differences in the evolution of instabilities when Dirichlet and Neumann thermal boundary conditions were applied at the top and bottom walls. The parametric study carried out in this dissertation confirmed the instability patterns they observed for the given aspect ratio and for Rossby numbers greater than 0.5. The results also reveal that the flow remained axisymmetric and stable in short-aspect-ratio containers, independent of the magnitude of the rotational increment imparted. Investigation of the vorticity components provides a framework for a baroclinic vorticity feedback mechanism, which plays an important role in the delayed rise of instabilities when Dirichlet thermal boundary conditions are applied.
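For reference, the baroclinic source term mentioned above is the standard term in the vorticity equation (a textbook form, not an equation quoted from the dissertation): vorticity is generated wherever surfaces of constant density are tilted relative to surfaces of constant pressure, which is exactly what a sudden change in rotation rate does to the isotherms of a stratified fluid.

```latex
% Vorticity equation for a fluid of variable density (textbook form).
% The cross product vanishes when density and pressure gradients align;
% tilted isotherms (hence tilted isopycnals) make it nonzero.
\[
\frac{D\boldsymbol{\omega}}{Dt}
  = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u}
  + \underbrace{\frac{1}{\rho^{2}}\,\nabla\rho\times\nabla p}_{\text{baroclinic source}}
  + \nu\nabla^{2}\boldsymbol{\omega}
\]
```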
Contributors: Kher, Aditya Deepak (Author) / Chen, Kangping (Thesis advisor) / Huang, Huei-Ping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, such as stock market analysis and the monitoring of user credit card purchase patterns, however, matches to the user queries are in fact plentiful, and the system has to sift efficiently through these many matches to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify top-k matching results that satisfy both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing the top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that are able to adapt to changing conditions (e.g., sorted and random access costs, join rates) without having to assume the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
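The toy sketch below conveys only the incremental-maintenance idea, not the CPR framework's actual algorithms: candidate matches are kept ordered by score, expired entries are evicted as the window slides, and the current top-k is read off the front rather than recomputed from scratch.

```java
import java.util.*;

// Toy sketch of incremental top-k maintenance over a sliding window
// (illustrative; not the CPR framework's algorithms).
public class TopKWindow {
    record Match(long timestamp, double score) {}

    private final int k;
    private final long windowSize;
    // Highest score first; ties broken by timestamp.
    private final TreeSet<Match> live = new TreeSet<>(
        Comparator.comparingDouble(Match::score).reversed()
                  .thenComparingLong(Match::timestamp));

    TopKWindow(int k, long windowSize) { this.k = k; this.windowSize = windowSize; }

    List<Match> onEvent(Match m, long now) {
        live.removeIf(old -> old.timestamp() <= now - windowSize); // expire
        live.add(m);                                               // insert
        return live.stream().limit(k).toList();                    // current top-k
    }

    public static void main(String[] args) {
        TopKWindow w = new TopKWindow(2, 10);
        System.out.println(w.onEvent(new Match(1, 0.7), 1));
        System.out.println(w.onEvent(new Match(5, 0.9), 5));
        System.out.println(w.onEvent(new Match(14, 0.4), 14)); // t=1 expires
    }
}
```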
Contributors: Wang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Data-driven applications are becoming increasingly complex, with support for processing events and data streams in a loosely coupled distributed environment and integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications, involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store computed query results so that subsequent accesses retrieve the materialized results, avoiding the cost of recomputing the entire view from the base data sources. Using a service-based metadata repository that provides metadata-level access to the various language components in the system, a heuristics-based algorithm detects common subexpressions among the queries, represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML, or a hybrid join over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format instead of being converted to a common model. LINQ serves as the materialized view definition language for creating the view definitions. An algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to base data sources used to materialize views are captured and mapped to a delta structure. The deltas are then streamed within the framework for use in the incremental update of the materialized view. Algorithms are presented that use the magic sets query optimization approach both to materialize the views efficiently and to propagate the relevant changes to the views for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining the LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions and incrementally maintaining the views with magic sets enhances the efficiency of the distributed event stream processing environment.
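The dissertation defines its views in LINQ; the Java sketch below (names and schema hypothetical) illustrates only the underlying incremental-maintenance idea, in which captured changes to a base source arrive as deltas and are applied to the materialized result instead of triggering a full recomputation.

```java
import java.util.*;

// Illustrative only: applying deltas to a materialized aggregate view
// (here, order count per customer) instead of recomputing it from base data.
public class IncrementalView {
    // The materialized view.
    private final Map<String, Integer> ordersPerCustomer = new HashMap<>();

    // Delta from the base "orders" source: +1 for an insert, -1 for a delete.
    void applyDelta(String customer, int change) {
        ordersPerCustomer.merge(customer, change, Integer::sum);
        if (ordersPerCustomer.getOrDefault(customer, 0) <= 0)
            ordersPerCustomer.remove(customer); // drop empty groups
    }

    public static void main(String[] args) {
        IncrementalView view = new IncrementalView();
        view.applyDelta("alice", 1);  // insert order
        view.applyDelta("alice", 1);  // insert order
        view.applyDelta("alice", -1); // delete order
        System.out.println(view.ordersPerCustomer); // {alice=1}
    }
}
```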
Contributors: Chaudhari, Mahesh Balkrishna (Author) / Dietrich, Suzanne W (Thesis advisor) / Urban, Susan D (Committee member) / Davulcu, Hasan (Committee member) / Chen, Yi (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB in size, but with a memory buffer limited to 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old schema to the new one. It shows that the overhead introduced is minimal for medium-sized applications and that the update can be achieved with no more than one minute of downtime.
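The sketch below shows what the batched copy step of such a client-driven update might look like over JDBC; the connection string, table names, and single-column mapping are hypothetical stand-ins, whereas the thesis generates the update client automatically from the old-to-new schema mapping. Small, committed batches keep each transaction short, so concurrent application traffic is never blocked for long.

```java
import java.sql.*;

// Hypothetical sketch of a client-driven, batched schema-update copy.
// Table/column names and the JDBC URL are illustrative placeholders.
public class BatchCopy {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "user", "pass")) {
            conn.setAutoCommit(false);
            long maxId;
            try (Statement s = conn.createStatement();
                 ResultSet rs = s.executeQuery(
                     "SELECT COALESCE(MAX(id), 0) FROM person_old")) {
                rs.next();
                maxId = rs.getLong(1);
            }
            int batchSize = 1000; // the tuning knob studied in the thesis
            for (long lo = 0; lo <= maxId; lo += batchSize) {
                // Copy one key range from the old table to the new one.
                try (PreparedStatement copy = conn.prepareStatement(
                        "INSERT INTO person_new (id, name) " +
                        "SELECT id, name FROM person_old " +
                        "WHERE id > ? AND id <= ?")) {
                    copy.setLong(1, lo);
                    copy.setLong(2, lo + batchSize);
                    copy.executeUpdate();
                }
                conn.commit(); // short transactions preserve availability
            }
        }
    }
}
```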
Contributors: Tyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Multiphase flows are an important part of many natural and technological phenomena, such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching-tetrahedron-style algorithm to compute the source terms near the interface. Guidelines for the use of MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high-shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level-set interface-tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational architectures. In the first system, the code's ability to handle surface tension and large temperature gradients is established. In the second system, the code's ability to simulate simple interface geometries with strong shear is demonstrated. In the third system, the ability to handle extremely complex geometries and topology changes with strong shear is shown.
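The single-phase sketch below illustrates the basic MMS recipe on the 1-D heat equation; the dissertation's multiphase extension, with discontinuous material properties at the interface, is far more involved. We choose u(x,t) = sin(pi x) e^{-t}, substitute it into u_t = alpha u_xx + S to manufacture the source S = (alpha pi^2 - 1) sin(pi x) e^{-t}, and confirm that the discrete error shrinks at the scheme's formal order as the grid is refined.

```java
// Minimal MMS illustration on the 1-D heat equation (single-phase only;
// the dissertation's multiphase MMS is much more involved).
public class MmsHeat1D {
    static double exact(double x, double t) {
        return Math.sin(Math.PI * x) * Math.exp(-t);
    }

    static double maxError(int n) {
        double alpha = 0.1, L = 1.0, T = 0.1;
        double dx = L / n, dt = 0.25 * dx * dx / alpha; // stable explicit step
        int steps = (int) Math.ceil(T / dt);
        dt = T / steps;
        double[] u = new double[n + 1];
        for (int i = 0; i <= n; i++) u[i] = exact(i * dx, 0.0);
        double t = 0.0;
        for (int s = 0; s < steps; s++) {
            double[] next = u.clone();
            for (int i = 1; i < n; i++) {
                double lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx);
                // Manufactured source so that exact() solves the discrete PDE.
                double src = (alpha * Math.PI * Math.PI - 1) * exact(i * dx, t);
                next[i] = u[i] + dt * (alpha * lap + src);
            }
            next[0] = 0.0; next[n] = 0.0; // manufactured solution's BCs
            u = next;
            t += dt;
        }
        double err = 0.0;
        for (int i = 0; i <= n; i++)
            err = Math.max(err, Math.abs(u[i] - exact(i * dx, T)));
        return err;
    }

    public static void main(String[] args) {
        double e1 = maxError(32), e2 = maxError(64);
        // Halving dx should cut the error ~4x for this second-order scheme.
        System.out.printf("errors: %.2e %.2e, ratio %.2f%n", e1, e2, e1 / e2);
    }
}
```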
Contributors: Brady, Peter, Ph.D. (Author) / Herrmann, Marcus (Thesis advisor) / Lopez, Juan (Thesis advisor) / Adrian, Ronald (Committee member) / Calhoun, Ronald (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011