Matching Items (215)

Description
Commercially pure (CP) and extra-low-interstitial (ELI) grade Ti alloys offer excellent corrosion resistance, low weight, and formability, making them attractive materials for expanded use in transportation and medical applications. However, the strength and toughness of CP titanium are strongly affected by relatively small variations in impurity/solute content (IC), e.g., O, Al, and V. Solutes increase strength either by raising the critical stress required for the prismatic slip systems ({10-10}<1-210>) or by activating other slip systems ((0001)<11-20>, {10-11}<11-20>). In particular, solute additions such as O can effectively strengthen the alloy, but with an attendant loss in ductility, by changing the slip behavior from wavy (cross slip) to planar. To understand how solutes strengthen the alloy, the atomic-scale mechanism must be understood. This dissertation aims to address this knowledge gap through a synergistic combination of density functional theory (DFT) and molecular dynamics. Because of the long-range strain fields of dislocations and the periodicity of DFT simulation cells, it is difficult to apply ab initio simulations directly to the dislocation core structure. To alleviate this issue, we developed a multiscale quantum mechanics/molecular mechanics (QM/MM) approach to study the dislocation core, and we use it to study pipe diffusion along a prismatic edge dislocation core. Complementary to the atomistic simulations, the semidiscrete variational Peierls-Nabarro (SVPN) model was used to analyze the dislocation core structure and mobility. The chemical interaction between the solute/impurity and the dislocation core is captured by the generalized stacking fault energy (GSFE) surface, which was determined from DFT (VASP) calculations.
By taking this chemical interaction into consideration, the SVPN model can predict the dislocation core structure and mobility in the presence and absence of the solute/impurity, and thus reveal the effect of the impurity/solute on softening/hardening behavior in alpha-Ti. Finally, to study the interaction of the dislocation core with other planar defects such as grain boundaries (GBs), we developed an automated method to generate GBs in HCP-type materials.
ContributorsBhatia, Mehul Anoopkumar (Author) / Solanki, Kiran N (Thesis advisor) / Peralta, Pedro (Committee member) / Jiang, Hanqing (Committee member) / Neithalath, Narayanan (Committee member) / Rajagopalan, Jagannathan (Committee member) / Arizona State University (Publisher)
Created2014
Description
With the rise of social media, hundreds of millions of people all over the globe spend countless hours on social media sites to connect, interact, share, and create user-generated data. This rich environment provides tremendous opportunities for many different players to easily and effectively reach out to people, interact with them, influence them, or get their opinions. Two kinds of information attract the most attention on social media sites: user preferences and user interactions. Businesses and organizations use this information to better understand social media users and therefore provide them customized services. The data can serve different purposes, such as targeted advertisement, product recommendation, or opinion mining. Social media sites themselves use this information to better serve their users.

Despite the importance of personal information, in many cases people do not reveal it to the public. Predicting the hidden or missing information is a common response to this challenge. In this thesis, we address the problem of predicting user attributes and future or missing links using an egocentric approach. The contribution of this research is twofold: understanding social media users through a) their attributes, preferences, and interests, and b) their future or missing connections and interactions. More specifically, the contributions of this dissertation are (1) a framework to study social media users through their attributes and link information, (2) a scalable algorithm to predict user preferences, and (3) a novel approach to predict attributes and links with limited information. The proposed algorithms use an egocentric approach to improve on state-of-the-art algorithms in two directions: first, by improving prediction accuracy, and second, by increasing scalability.
ContributorsAbbasi, Mohammad Ali, 1975- (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Agarwal, Nitin (Committee member) / Arizona State University (Publisher)
Created2014
Description
Corporations invest considerable resources to create, preserve, and analyze their data; yet while organizations are interested in protecting against unauthorized data transfer, there is no comprehensive metric to discriminate which data are at risk of leaking.

This thesis motivates the need for a quantitative leakage risk metric and provides a risk assessment system, called Whispers, for computing it. Using unsupervised machine learning techniques, Whispers uncovers themes in an organization's document corpus, including previously unknown or unclassified data. Then, by correlating each document with its authors, Whispers can identify which data are easier to contain and, conversely, which are at risk.

Using the Enron email database, Whispers constructs a social network segmented by topic themes. This graph uncovers communication channels within the organization. Using this social network, Whispers determines the risk of each topic by measuring the rate at which simulated leaks go undetected. For the Enron set, Whispers identified 18 separate topic themes between January 1999 and December 2000. The highest-risk data emanated from the legal department, with a leakage risk as high as 60%.
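The risk measure described above, the fraction of simulated leaks that go undetected, can be sketched as follows. The channel list, monitoring set, and detection rule here are hypothetical stand-ins for illustration, not Whispers' actual implementation.

```python
import random

def leakage_risk(channels, monitored, trials=1000, seed=7):
    """Estimate a topic's leakage risk as the fraction of simulated
    leaks that travel only over unmonitored communication channels.

    channels:  list of (sender, receiver) edges carrying this topic
    monitored: set of edges the detector can observe
    """
    rng = random.Random(seed)
    undetected = 0
    for _ in range(trials):
        edge = rng.choice(channels)   # simulate a leak on a random channel
        if edge not in monitored:     # leak slips past the detector
            undetected += 1
    return undetected / trials

# Toy example: four channels for one topic, two of them monitored.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
risk = leakage_risk(edges, monitored={("a", "b"), ("b", "c")})
```

Full monitoring drives the estimated risk to zero and no monitoring drives it to one; partial coverage lands in between, which is the quantity the abstract reports per topic.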
ContributorsWright, Jeremy (Author) / Syrotiuk, Violet (Thesis advisor) / Davulcu, Hasan (Committee member) / Yau, Stephen (Committee member) / Arizona State University (Publisher)
Created2014
Description
In this work, a highly sensitive strain sensing technique is developed to realize in-plane strain mapping for microelectronic packages and emerging flexible or foldable devices, where mechanical or thermal strain is a major concern that can degrade the performance of working devices or even lead to their failure. Strain sensing techniques that can map the strain distribution are therefore desired.

The developed highly sensitive micro-strain sensing technique differs in working mechanism from existing strain mapping techniques, such as digital image correlation (DIC) and micro-Moiré techniques, filling a technology gap that requires high spatial resolution while simultaneously maintaining a large field of view. The sensing mechanism relies on scanning a tightly focused laser beam across a grating on the sample surface and detecting the change in the diffracted beam angle caused by the strain. Gratings are fabricated on the target substrates to serve as strain sensors, which carry the strain information in the form of variations in the grating period. The geometric structure of the optical system inherently ensures high strain sensitivity: a nanoscale change in the grating period is amplified by almost six orders of magnitude into a diffraction peak shift on the order of several hundred micrometers. This significantly amplifies the small signal so that the desired sensitivity and accuracy can be achieved.
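The amplification described here follows from the first-order grating equation, sin(theta) = wavelength / period: a small change in period shifts the diffraction angle, which the working distance to the detector converts into a large lateral displacement. The sketch below illustrates this with assumed values for wavelength, nominal period, and working distance; these are illustrative, not the parameters used in this work.

```python
import math

def peak_shift(period_nm, strain, wavelength_nm=532.0, distance_mm=300.0):
    """First-order diffraction peak shift on a screen caused by a uniform
    strain stretching the grating period (grating equation:
    sin(theta) = wavelength / period).  Wavelength and working distance
    are illustrative assumptions, not the dissertation's values.
    """
    theta0 = math.asin(wavelength_nm / period_nm)
    theta1 = math.asin(wavelength_nm / (period_nm * (1.0 + strain)))
    return distance_mm * (math.tan(theta0) - math.tan(theta1))  # mm

# A sub-nanometre period change (10 micro-strain on a 1 um period)
# becomes a micrometre-scale displacement of the diffraction peak.
shift_mm = peak_shift(period_nm=1000.0, strain=10e-6)
```

Because the angle-to-displacement conversion scales with the working distance, a longer optical path directly increases the amplification factor, which is the geometric leverage the text refers to.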

The key characteristics of the technique, strain sensitivity and spatial resolution, are investigated to evaluate it. The strain sensitivity has been validated by measurements on homogeneous materials with well-known reference values of the coefficient of thermal expansion (CTE); 10 micro-strain has been successfully resolved in silicon CTE extraction measurements. Furthermore, the spatial resolution has been studied on predefined grating patterns assembled to mimic an uneven strain distribution across the sample surface. A resolvable feature size of 10 µm has been achieved with an incident laser spot 50 µm in diameter.

In addition, the strain sensing technique has been applied to a composite sample made of SU8 and silicon, as well as the microelectronic packages for thermal strain mappings.
ContributorsLiang, Hanshuang (Author) / Yu, Hongbin (Thesis advisor) / Poon, Poh Chieh Benny (Committee member) / Jiang, Hanqing (Committee member) / Zhang, Yong-Hang (Committee member) / Arizona State University (Publisher)
Created2014
Description
As one of the most promising materials for high-capacity electrodes in the next generation of lithium-ion batteries, silicon has attracted a great deal of attention in recent years. Advanced characterization techniques and atomistic simulations have shown that lithiation/delithiation of a silicon electrode involves large volume change (anisotropic for the initial lithiation of crystalline silicon), composition-dependent plastic flow or softening of the material, electrochemically driven solid-state phase transformation, anisotropic or isotropic migration of an atomically sharp interface, and mass diffusion of lithium atoms. Motivated by the promising prospects of the application and the interesting underlying physics, the multi-physics-coupled mechanics of silicon electrodes in lithium-ion batteries is studied in this dissertation. For large silicon electrodes, diffusion-controlled kinetics is assumed, and the coupled large deformation and mass transport is studied. For small crystalline silicon electrodes, interface-controlled kinetics is assumed, and anisotropic interface reaction is studied, with a geometry design principle proposed. As a preliminary experimental validation, enhanced lithiation and fracture behavior of silicon pillars via atomic layer coatings and geometry design is studied, with results supporting the geometry design principle proposed on the basis of our simulations. Through the work documented here, a consistent description and understanding of the behavior of silicon electrodes is given at the continuum level, and some insights for the future development of silicon electrodes are provided.
ContributorsAn, Yonghao (Author) / Jiang, Hanqing (Thesis advisor) / Chawla, Nikhilesh (Committee member) / Phelan, Patrick (Committee member) / Wang, Yinming (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created2014
Description
Advanced composites are widely used in aerospace applications due to their high stiffness, strength, and energy absorption capabilities. However, assuring structural reliability is a critical issue, because a damage event will compromise the integrity of composite structures and lead to ultimate failure. In this dissertation, a novel homogenization-based multiscale modeling framework using semi-analytical micromechanics is presented to simulate the response of textile composites. The novelty of this approach lies in the three-scale homogenization/localization framework bridging the constituent (micro), fiber tow (meso), and weave (macro) scales to the global response. The multiscale framework, named Multiscale Generalized Method of Cells (MSGMC), bridges continuously from the micro to the global scale, as opposed to approaches that are purely top-down or bottom-up. The framework is fully generalized and capable of modeling several different weaves and braids without reformulation. Particular emphasis is placed on modeling the nonlinearity and failure of both polymer matrix and ceramic matrix composites.
ContributorsLiu, Guang (Author) / Chattopadhyay, Aditi (Thesis advisor) / Mignolet, Marc (Committee member) / Jiang, Hanqing (Committee member) / Li, Jian (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and new schemas. The solution has been validated on a relatively small database of around 300,000 records per table and less than 1 GB in size, but with a memory buffer limited to 24 MB. This thesis studies the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old schema to the new one. It shows that the overhead introduced is minimal for medium-sized applications and that the update can be achieved with no more than one minute of downtime.
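The batch-wise copy from the old schema to the new one can be sketched as follows. This is a minimal illustration of a vertical split using SQLite; the table names, columns, and batching policy are hypothetical, not taken from the thesis.

```python
import sqlite3

def batched_copy(conn, batch_size=1000):
    """Copy rows from an old-schema table into two new-schema tables
    (a vertical split) in batches, committing between batches so
    concurrent transactions are only briefly blocked.
    Table and column names are illustrative, not from the thesis.
    """
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name, address FROM customers_old "
            "WHERE id > ? ORDER BY id LIMIT ?", (last_id, batch_size)
        ).fetchall()
        if not rows:
            break
        # Vertical split: names and addresses go to separate tables.
        conn.executemany("INSERT INTO customers_new VALUES (?, ?)",
                         [(r[0], r[1]) for r in rows])
        conn.executemany("INSERT INTO addresses_new VALUES (?, ?)",
                         [(r[0], r[2]) for r in rows])
        conn.commit()              # release locks between batches
        last_id = rows[-1][0]
```

Smaller batches keep each transaction short (lower interference with live traffic) at the cost of more round trips, which is exactly the batch-size/overhead trade-off the thesis measures.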
ContributorsTyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2011
Description
This thesis investigated two thermal flow sensors for intravascular shear stress analysis. Both operate on a heat transfer principle, in which heat convection from a resistively heated element to the flowing fluid is measured as a function of the change in voltage. For both sensors, the resistively heated elements were Ti/Pt strips with thicknesses of 0.12 µm and 0.02 µm. The resistance of the sensing element was approximately 1.6 to 1.7 kilohms. A linear relation between resistance and temperature was established over the range 22 to 80 degrees Celsius, with a temperature coefficient of resistance (TCR) of approximately 0.12%/degree Celsius. The first sensor was a one-dimensional (1-D) flexible shear stress sensor, whose sensing element was sandwiched between layers of the biocompatible polymer poly-para-xylylene, also known as Parylene, which provided both electrode insulation and sensor flexibility. A constant-temperature (CT) readout circuit was designed based on a 0.6 µm CMOS (complementary metal-oxide-semiconductor) process. The 1-D shear stress sensor suffered from a large measurement error because, once inserted into a blood vessel, it could not be mounted to the wall as it was during calibration in microfluidic channels; according to previous simulation work, the shear stress varies across the vessel, and the sensor itself perturbs the shear stress distribution. We therefore proposed a three-dimensional (3-D) thermal flow sensor with sensing elements along three axes integrated in one device. It is shaped like a hexagonal prism with a diagonal of 1000 µm. On top of the sensor, over the 500 µm thick silicon substrate, are five bond pads for external wires, and each of three nonadjacent side surfaces carries a bent Parylene branch with one sensing element. This unique 3-D structure allows the sensor to obtain data along three axes.
Combined with a computational fluid dynamics (CFD) model, it is possible to locate the sensor within the blood vessel, better understand the shear stress distribution in the presence of the time-varying component of blood flow, and achieve a more accurate assessment of intravascular convective heat transfer.
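The linear resistance-temperature relation reported above, R(T) = R0 * (1 + a * (T - T0)) with a TCR of about 0.12%/degree Celsius, can be sketched as a simple two-way conversion. The nominal resistance R0 = 1650 ohms is a representative value from the reported 1.6 to 1.7 kilohm range, not a measured figure.

```python
def resistance(temp_c, r0_ohm=1650.0, tcr=0.0012, t0_c=22.0):
    """Linear resistance-temperature relation R(T) = R0*(1 + a*(T - T0))
    with TCR a = 0.12 %/degree C, as reported for the Ti/Pt element.
    R0 is a representative value within the reported 1.6-1.7 kOhm range.
    """
    return r0_ohm * (1.0 + tcr * (temp_c - t0_c))

def temperature(r_ohm, r0_ohm=1650.0, tcr=0.0012, t0_c=22.0):
    """Invert the relation to read temperature back from resistance,
    which is how a calibrated thermal sensor infers fluid conditions."""
    return t0_c + (r_ohm / r0_ohm - 1.0) / tcr
```

This calibration is what lets the constant-temperature readout translate a measured voltage (hence resistance) change into the convective heat loss driven by the flow.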
ContributorsTang, Rui (Author) / Yu, Hongyu (Thesis advisor) / Jiang, Hanqing (Committee member) / Pan, George (Committee member) / Arizona State University (Publisher)
Created2011
Description
Templates are widely used in Web site development. Finding the template for a given set of Web pages is important and useful for many applications, such as Web page classification and monitoring of content and structure changes of Web pages. In this thesis, two novel sequence-based Web page template detection algorithms are presented. Unlike tree mapping algorithms, which are based on tree edit distance, the sequence-based algorithms operate on the Prüfer/Consolidated Prüfer sequences of trees. Since there is a one-to-one correspondence between Prüfer/Consolidated Prüfer sequences and trees, the sequence-based algorithms identify the template by finding a common subsequence between two Prüfer/Consolidated Prüfer sequences; this subsequence is a sequential representation of a common subtree of the input trees. Experiments on real-world Web pages showed that our approaches detect templates effectively and efficiently.
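The sequence-based idea can be illustrated with the classic Prüfer encoding and a standard longest-common-subsequence step. This toy sketch works on small labeled trees, whereas the thesis operates on Consolidated Prüfer sequences of DOM trees, so it is an intuition aid rather than the actual algorithm.

```python
def prufer_sequence(adj):
    """Classic Prufer sequence of a labeled tree given as an adjacency
    dict {node: set(neighbours)}: repeatedly remove the smallest leaf
    and record its neighbour.  (The thesis uses Consolidated Prufer
    sequences of DOM trees; this is the textbook variant.)
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    seq = []
    for _ in range(len(adj) - 2):
        leaf = min(v for v, ns in adj.items() if len(ns) == 1)
        parent = next(iter(adj[leaf]))
        seq.append(parent)
        adj[parent].discard(leaf)
        del adj[leaf]
    return seq

def lcs(a, b):
    """Longest common subsequence via dynamic programming; a stand-in
    for the common-subsequence step that extracts the shared template."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack to recover one common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]
```

Encoding each page's tree as a sequence and intersecting the sequences avoids the quadratic tree-edit-distance computation that tree mapping approaches require.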
ContributorsHuang, Wei (Author) / Candan, Kasim Selcuk (Thesis advisor) / Sundaram, Hari (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked by the association they exhibit with the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the varying levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are unknown for many transcription factors, and even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy.
We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011