Matching Items (353)
Item 150019
Description
Java is currently making its way into embedded systems and mobile devices such as those running Android. Programs written in Java are compiled into machine-independent binary class files (bytecode), which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI), which allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems because it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
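Editor's note: the pinning API above is specific to the thesis and to JNI, and is not reproduced here. As a rough, language-neutral analogy of copy-based marshalling versus pinned (zero-copy) access to a managed buffer, the minimal Python ctypes sketch below contrasts the two; it assumes a Unix-like libc, and nothing in it is the thesis API.

```python
import ctypes
import ctypes.util

# Illustrative analogy only: contrasts copy-based marshalling with zero-copy
# ("pinned") access to a managed buffer, similar in spirit to JNI's
# GetByteArrayElements (may copy) versus GetPrimitiveArrayCritical (pins).
libc = ctypes.CDLL(ctypes.util.find_library("c"))  # assumes a Unix-like libc

data = bytearray(b"hello from managed memory")

# Copy-based: native code sees a separate buffer; edits do not reach `data`.
copied = ctypes.create_string_buffer(bytes(data))

# Pin-like: a ctypes array aliasing the bytearray's memory, no copy made.
# While `pinned` is alive, `data` cannot be resized (it is effectively locked).
pinned = (ctypes.c_char * len(data)).from_buffer(data)

libc.memset(copied, ord("X"), 5)   # original bytearray unchanged
libc.memset(pinned, ord("X"), 5)   # write is visible through `data`
print(data[:5])                    # bytearray(b'XXXXX')
```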
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150025
Description
With the increasing focus on developing environmentally benign electronic packages, lead-free solder alloys have received a great deal of attention. Mishandling of packages during manufacture, assembly, or by the user may cause failure of solder joints. A fundamental understanding of the behavior of lead-free solders under mechanical shock conditions is lacking, and reliable experimental and numerical analyses of lead-free solder joints in the intermediate strain rate regime are needed. This dissertation focuses on exploring the mechanical shock behavior of lead-free tin-rich solder alloys via multiscale modeling and numerical simulations. First, the macroscopic stress/strain behavior of three bulk lead-free tin-rich solders was tested over a range of strain rates from 0.001/s to 30/s. Finite element analysis was conducted to determine a specimen geometry that could reach a homogeneous stress/strain field at a relatively high strain rate, and a novel self-consistent true stress correction method was developed to compensate for the inaccuracy caused by the triaxial stress state in the post-necking stage. The material properties of the micron-scale intermetallics were then examined by micro-compression testing; the accuracy of this measurement is systematically validated by finite element analysis, and empirical adjustments are provided. Moreover, the interfacial property of the solder/intermetallic interface is investigated, and a continuum traction-separation law for this interface is developed from an atomistic-based cohesive element method. The macroscopic stress/strain relation and microstructural properties are combined into a multiscale material model via a stochastic approach for both solder and intermetallic: the solder is modeled by porous plasticity with random voids, and the intermetallic is characterized as a brittle material with random vulnerable regions. The porous plastic fracture of the solder and the brittle fracture of the intermetallic are then coupled in one finite element model. The result is a multiscale model for understanding and predicting the mechanical shock behavior of lead-free tin-rich solder joints. Different fracture patterns are observed for various strain rates and/or intermetallic thicknesses, and the predictions are in good agreement with theory and experiments.
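Editor's note: the dissertation's self-consistent post-necking correction is not reproduced here. As background for why such a correction is needed, the sketch below (an illustrative assumption, not the thesis code, with made-up data) applies the standard uniform-deformation conversion from engineering to true stress/strain, which is only valid up to the onset of necking.

```python
import numpy as np

def true_stress_strain(eng_strain, eng_stress):
    """Standard conversion assuming uniform deformation and volume constancy.

    sigma_true = sigma_eng * (1 + eps_eng),  eps_true = ln(1 + eps_eng)
    Valid only before necking; beyond that point the triaxial stress state in
    the neck makes this estimate inaccurate, which is what motivates a
    post-necking (e.g. self-consistent) correction.
    """
    eng_strain = np.asarray(eng_strain, dtype=float)
    eng_stress = np.asarray(eng_stress, dtype=float)
    return np.log1p(eng_strain), eng_stress * (1.0 + eng_strain)

# Hypothetical engineering data for a tin-rich solder coupon (placeholder values, MPa):
eps_e = np.array([0.00, 0.01, 0.02, 0.05, 0.10])
sig_e = np.array([0.0, 25.0, 32.0, 38.0, 41.0])
eps_t, sig_t = true_stress_strain(eps_e, sig_e)
print(np.round(sig_t, 2))   # true stress rises above engineering stress
```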
Contributors: Fei, Huiyang (Author) / Jiang, Hanqing (Thesis advisor) / Chawla, Nikhilesh (Thesis advisor) / Tasooji, Amaneh (Committee member) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150026
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is the problem of easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but the user must also know the data schema. Keyword search offers a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences not indicated in the query), as well as the efficiency challenges posed by a large search space. This dissertation presents an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results from the relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) addressing the efficiency challenge of processing keyword search on structured data by utilizing and efficiently maintaining materialized views. Together, these contributions move toward a full-fledged search engine for structured data.
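Editor's note: as a toy illustration of only the very first step mentioned above (identifying relevant keyword matches in structured data), the sketch below builds an inverted index over rows of two hypothetical tables and intersects the posting lists of the query keywords. Everything beyond this step (composing results, resolving ambiguities, maintaining views) is where the dissertation's contributions lie and is not shown.

```python
from collections import defaultdict

# Hypothetical tables: (table_name, row_id) -> tokenized row values
rows = {
    ("paper", 1): ["keyword", "search", "xml"],
    ("paper", 2): ["materialized", "views", "sql"],
    ("author", 7): ["keyword", "ranking"],
}

# Build an inverted index: keyword -> set of (table, row_id) locations.
index = defaultdict(set)
for loc, values in rows.items():
    for v in values:
        index[v.lower()].add(loc)

def keyword_matches(query):
    """Return rows containing *all* query keywords (AND semantics)."""
    postings = [index.get(kw.lower(), set()) for kw in query]
    return set.intersection(*postings) if postings else set()

print(keyword_matches(["keyword", "search"]))   # {('paper', 1)}
```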
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H. V. (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150035
Description
Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure can lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns: in a time-sensitive fire situation, delaying the failure of critical load-bearing structures increases the time available for evacuating occupants, recovering property, and accessing the fire. Much work has been done to improve the structural performance of concrete, including reducing column sizes while still providing a safe structure, and high-strength (HS) concrete was developed to fulfill the needs of such improvements. HS concrete differs from normal-strength (NS) concrete in that it has higher stiffness, lower permeability, and greater durability. This, unfortunately, results in poor performance under fire: the lower permeability allows water vapor to build up, causing HS concrete to suffer explosive spalling under rapid heating, and the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. Including crumb rubber concrete greatly increases the thermal resistivity of the overall column, reducing the core temperature as well as the rate at which the column heats. Different cases were analyzed, varying the position of the crumb rubber region, to characterize the effect of position on the improvement in fire resistance. Finite element analysis was used to compute the temperature and strain distribution over time across the column's cross-sectional area, with specific interest in the steel-concrete interface region. Across the cases investigated, the improvement in time before failure ranged from 32 to 45 minutes.
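Editor's note: the study itself used finite element analysis of the full column cross-section. As a highly simplified illustration of the effect being modeled, the sketch below runs a 1-D explicit finite-difference heat-conduction model in which a band of lower thermal diffusivity, standing in for the crumb rubber concrete region, slows the temperature rise at the core. All geometry and material values are placeholders, not the study's inputs.

```python
import numpy as np

# 1-D slab from the heated face (x=0) to the column core (x=L), explicit FTCS scheme.
L, nx = 0.30, 61                       # m, grid points (placeholder geometry)
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]

alpha = np.full(nx, 8e-7)              # concrete thermal diffusivity, m^2/s (placeholder)
alpha[(x > 0.10) & (x < 0.15)] = 3e-7  # crumb rubber band: lower diffusivity (placeholder)

T = np.full(nx, 20.0)                  # initial temperature, deg C
T_fire = 800.0                         # fixed hot-face temperature (placeholder)

dt = 0.4 * dx**2 / alpha.max()         # satisfy the explicit stability limit
steps = int(3600 / dt)                 # simulate one hour of heating
for _ in range(steps):
    T[0] = T_fire                      # Dirichlet condition at the heated face
    # node-wise diffusivity update; adequate for a sketch, not a conservative scheme
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                      # insulated (symmetry) condition at the core

print(f"core temperature after 1 h: {T[-1]:.1f} C")
```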
Contributors: Ziadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149794
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked based on the association they exhibit with the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to the disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer, and the method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are unknown for many transcription factors, and even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests of the improvement method using synthetic patterns under various conditions showed that it is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results await empirical validation, but computational validation using known targets is very positive.
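Editor's note: the dissertation's specific integration and weighting scheme is not reproduced here. As a generic sketch of association-based prioritization on a network (a common baseline, assuming a simple unweighted gene-gene graph), the code below runs a random walk with restart from the known disease genes and ranks candidates by their steady-state visiting probability.

```python
import numpy as np

def rwr_prioritize(adj, seed_idx, restart=0.3, tol=1e-8, max_iter=1000):
    """Random walk with restart on a gene-gene network.

    adj      : symmetric adjacency matrix (n x n numpy array)
    seed_idx : indices of genes already known to be disease-related
    Returns one score per gene; higher means more strongly associated with
    the seed set. This is a generic baseline, not the thesis method.
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=0)
    deg[deg == 0] = 1.0                       # avoid division by zero for isolated genes
    W = adj / deg                             # column-normalized transition matrix
    p0 = np.zeros(adj.shape[0])
    p0[list(seed_idx)] = 1.0 / len(seed_idx)  # restart distribution over the seeds
    p = p0.copy()
    for _ in range(max_iter):
        p_new = (1 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# Tiny hypothetical 5-gene network; genes 0 and 1 are the known disease genes.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = rwr_prioritize(A, seed_idx=[0, 1])
print(np.argsort(-scores))   # genes ranked by association with the seeds
```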
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150419
Description
Pb-free solders are used as interconnects at various levels of microelectronic packaging, and the reliability of these interconnects is critical for the performance of the package. One of the main factors affecting the reliability of solder joints is the porosity introduced during processing of the joints. In this thesis, the effect of such porosity on the deformation behavior and eventual failure of the joints is studied using finite element (FE) modeling. A 3D model obtained by reconstructing x-ray tomographic image data is used as input for FE analysis to simulate shear deformation and eventual failure of the joint using a ductile damage model; the modeling was done in ABAQUS (v6.10). The FE model predictions are validated against experimental results by comparing the deformation of the pores and the crack path predicted by the model with the experimentally observed deformation and failure pattern. To understand the influence of the size, shape, and distribution of pores on the mechanical behavior of the joint, four solder joints with varying degrees of porosity are modeled using the validated FE model. The validation technique mentioned above enables comparison of the simulated and actual deformation only; a more robust way of validating the FE model would be to compare the strain distribution in the joint predicted by the model with that observed experimentally. In this study, to enable visualization of the experimental strain for the 3D microstructure obtained from tomography, a three-dimensional digital image correlation (3D DIC) code was implemented in MATLAB (MathWorks, Inc.). The developed 3D DIC code can be used as another tool to verify the numerical model predictions; its capability to measure local displacement and strain is demonstrated on a test case.
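Editor's note: the MATLAB implementation itself is not given in the abstract. The stripped-down sketch below shows the core idea behind 3D DIC at its simplest (the integer-voxel displacement of one subvolume, found by maximizing zero-normalized cross-correlation over a small search window), written here in Python/NumPy with synthetic data rather than tomography volumes.

```python
import numpy as np

def znc(a, b):
    """Zero-normalized cross-correlation of two equally sized subvolumes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subvolume(ref, def_vol, corner, size, search=3):
    """Find the integer-voxel offset of a reference subvolume in the deformed volume."""
    z, y, x = corner
    sub = ref[z:z+size, y:y+size, x:x+size]
    best, best_off = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                zz, yy, xx = z + dz, y + dy, x + dx
                if zz < 0 or yy < 0 or xx < 0:
                    continue                   # search window fell off the volume
                cand = def_vol[zz:zz+size, yy:yy+size, xx:xx+size]
                if cand.shape != sub.shape:
                    continue
                c = znc(sub, cand)
                if c > best:
                    best, best_off = c, (dz, dy, dx)
    return best_off, best

# Synthetic test: a random volume rigidly shifted by (1, 2, 0) voxels.
rng = np.random.default_rng(0)
ref = rng.random((40, 40, 40))
deformed = np.roll(ref, shift=(1, 2, 0), axis=(0, 1, 2))
print(match_subvolume(ref, deformed, corner=(15, 15, 15), size=9))
```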
Contributors: Jakkali, Vaidehi (Author) / Chawla, Nikhilesh K. (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149945
Description
Increasing demand for high-strength powder metallurgy (PM) steels has resulted in the development of dual phase PM steels. In this work, the effects of thermal aging on the microstructure and mechanical behavior of dual phase precipitation-hardened PM stainless steels of varying ferrite-martensite content were examined. Quantitative analyses of the inherent porosity and phase fractions were conducted on the steels, and no significant differences were noted with respect to aging temperature. Tensile strength, yield strength, and elongation to fracture all increased with increasing aging temperature, reaching maxima at 538 °C in most cases. Increased strength and decreased ductility were observed in steels of higher martensite content. Nanoindentation of the individual microconstituents was employed to obtain a fundamental understanding of the strengthening contributions; both the ferrite and martensite hardness values increased with aging temperature and exhibited maxima similar to the bulk tensile properties. Because of the complex, non-uniform stresses and strains associated with conventional nanoindentation, micropillar compression has become an attractive method for probing local mechanical behavior while limiting strain gradients and contributions from surrounding features. In this study, micropillars of ferrite and martensite were fabricated by focused ion beam (FIB) milling of the dual phase precipitation-hardened PM stainless steels, and compression testing was conducted using a nanoindenter equipped with a flat punch indenter. The stress-strain curves of the individual microconstituents were calculated from the load-displacement curves after subtracting the extraneous displacements of the system. Using a rule-of-mixtures approach in conjunction with porosity corrections, the mechanical properties of ferrite and martensite were combined for comparison with tensile tests of the bulk material, and reasonable agreement was found for the ultimate tensile strength. Micropillar compression experiments on both as-sintered and thermally aged material allowed investigation of the effect of thermal aging.
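Editor's note: as a back-of-the-envelope illustration of the rule-of-mixtures step described above. The abstract does not state the porosity correction used in the study, so the power-law knockdown below is an assumption, as are all numerical values.

```python
def bulk_strength_estimate(sigma_ferrite, sigma_martensite, f_martensite,
                           porosity, exponent=3.0):
    """Rule-of-mixtures estimate of bulk strength from microconstituent strengths.

    A simple power-law porosity knockdown (1 - p)^n is assumed here; the actual
    correction used in the study may differ.
    """
    fully_dense = (1.0 - f_martensite) * sigma_ferrite + f_martensite * sigma_martensite
    return fully_dense * (1.0 - porosity) ** exponent

# Hypothetical micropillar-derived strengths (MPa) and phase/porosity fractions:
print(round(bulk_strength_estimate(sigma_ferrite=600.0,
                                   sigma_martensite=1400.0,
                                   f_martensite=0.5,
                                   porosity=0.05), 1))
```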
Contributors: Stewart, Jennifer (Author) / Chawla, Nikhilesh (Thesis advisor) / Jiang, Hanqing (Committee member) / Krause, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149912
Description
This study treats in some depth a contemporary solo piano work, "Arirang Variations" (2006) by Edward "Teddy" Niedermaier (b. 1983). Though Niedermaier is an American composer and pianist, he derives his inspiration for this work from four types of Korean arirang: "Arirang," "Raengsanmopan Older Babe Arirang," "Gangwondo Arirang," and "Kin Arirang." The analysis of "Arirang Variations" focuses primarily on how the composer adapts the arirang in each variation and develops them into his own musical language. A salient feature of Niedermaier's composition is his combination of seemingly contradictory elements: traditional and contemporary styles, and Western and Eastern musical styles. In order to discuss in detail the musical elements of arirang used in "Arirang Variations," scores of all the arirang Niedermaier references are included with the discussion of each. Unfortunately, sources concerning three of these were limited to a single book by Yon-gap Kim, Pukhan Arirang Yongu (A Study of North Korean Arirang), because "Raengsanmopan Older Babe Arirang," "Gangwondo Arirang," and "Kin Arirang" are North Korean versions of arirang. Since arirang constitute the most important Korean folk song genre, basic information concerning such features of traditional Korean music as scales, vocal techniques, rhythms, and types of folk songs is provided, along with an overview of the history and origins of arirang. Given that each arirang has distinctive characteristics that vary by region, the four best-known types of arirang are introduced to demonstrate these differences.
Contributors: Park, Hyunjin (Author) / Meir, Baruch (Thesis advisor) / Campbell, Andrew (Committee member) / Levy, Benjamin (Committee member) / Thompson, Janice (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149913
Description
One necessary condition for the two-pass risk premium estimator to be consistent and asymptotically normal is that the beta matrix in the proposed linear asset pricing model has full column rank. I first investigate the asymptotic properties of the risk premium estimators and the related t-test and Wald test statistics when the full rank condition fails. I show that the beta risk of useless factors, or of multiple proxy factors for a single true factor, is priced more often than it should be at the nominal size in asset pricing models that omit some true factors, while under the null hypothesis that the risk premiums of the true factors equal zero, the beta risk of the true factors is priced less often than the nominal size. The simulation results are consistent with these theoretical findings. Hence, factor selection in a proposed factor model should not be made solely on the basis of estimated risk premiums. In response to this problem, I propose an alternative estimation of the underlying factor structure: specifically, to use linear combinations of factors weighted by the eigenvectors of the inner product of the estimated beta matrix. I further propose a new method to estimate the rank of the beta matrix in a factor model. For this method, the idiosyncratic components of asset returns are allowed to be correlated both across cross-sectional units and over time. The proposed estimator is easy to use because it is computed from the eigenvalues of the inner product of an estimated beta matrix. Simulation results show that the proposed method works well even in small samples. The analysis of US individual stock returns suggests that there are six common risk factors among the thirteen factor candidates used. The analysis of portfolio returns reveals that the estimated number of common factors changes depending on how the portfolios are constructed; the number of risk sources found in portfolio returns is generally smaller than the number found in individual stock returns.
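Editor's note: a compressed numerical sketch of the idea (estimate betas by time-series regressions, then inspect the eigenvalues of the inner product of the estimated beta matrix). The simple relative-eigenvalue threshold below is an illustrative stand-in for the dissertation's formal rank criterion, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
T, N, k_true, k_prop = 600, 50, 2, 4      # periods, assets, true factors, proposed factors

# Simulate returns driven by k_true factors; the proposed model includes
# k_prop factor candidates, two of which are useless (pure noise).
F_true = rng.standard_normal((T, k_true))
B_true = rng.standard_normal((N, k_true))
R = F_true @ B_true.T + 0.5 * rng.standard_normal((T, N))
F_prop = np.column_stack([F_true, rng.standard_normal((T, k_prop - k_true))])

# First pass: time-series OLS of each asset's returns on the proposed factors.
X = np.column_stack([np.ones(T), F_prop])
B_hat = np.linalg.lstsq(X, R, rcond=None)[0][1:].T    # N x k_prop estimated betas

# Rank estimate from the eigenvalues of the inner product B_hat' B_hat.
eigvals = np.sort(np.linalg.eigvalsh(B_hat.T @ B_hat))[::-1]
rank_est = int(np.sum(eigvals / eigvals[0] > 0.05))   # ad hoc threshold, not the paper's
print(eigvals.round(2), "estimated rank:", rank_est)
```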
Contributors: Wang, Na (Author) / Ahn, Seung C. (Thesis advisor) / Kallberg, Jarl G. (Committee member) / Liu, Crocker H. (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149907
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, however, such as stock market analysis and monitoring of user credit card purchase patterns, matches to the user queries are in fact plentiful, and the system has to sift efficiently through these many matches to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify top-k matching results that satisfy both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that adapt to changing conditions (e.g., sorted and random access costs, join rates) without assuming the a priori availability of data statistics. Experiments show significant improvements over existing approaches.
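Editor's note: a minimal sketch of the incremental-maintenance idea behind such systems (not the CPR framework itself): keep the matches that fall inside the current time window, drop expired ones as time advances, and read off the top-k on demand rather than recomputing over the whole stream. The event schema, scores, and window size are all made up.

```python
import heapq
from collections import deque

class TopKWindow:
    """Keep matches from the last `window` time units; report the k best on demand.

    Matches are assumed to arrive in timestamp order, so expiry is a cheap pop
    from the left; top-k is computed over the surviving matches only, never
    over the whole stream again.
    """
    def __init__(self, k, window):
        self.k, self.window = k, window
        self._matches = deque()              # (timestamp, score, payload), time-ordered

    def add(self, timestamp, score, payload):
        self._matches.append((timestamp, score, payload))
        self._expire(timestamp)

    def _expire(self, now):
        while self._matches and now - self._matches[0][0] > self.window:
            self._matches.popleft()

    def top_k(self, now):
        self._expire(now)
        return heapq.nlargest(self.k, self._matches, key=lambda m: m[1])

# Hypothetical scored pattern matches arriving over time:
w = TopKWindow(k=2, window=10.0)
for ts, score, tag in [(0.0, 5.0, "a"), (4.0, 9.0, "b"), (8.0, 7.0, "c"), (12.0, 8.0, "d")]:
    w.add(ts, score, tag)
print(w.top_k(now=12.0))   # [(4.0, 9.0, 'b'), (12.0, 8.0, 'd')]
```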
Contributors: Wang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011