Matching Items (152)

Item 152030
Description
Recently, the use of zinc oxide (ZnO) nanowires as an interphase in composite materials has been demonstrated to increase the interfacial shear strength between carbon fiber and an epoxy matrix. In this research work, the strong adhesion between ZnO and carbon fiber is investigated to elucidate the interactions at the interface that result in high interfacial strength. First, molecular dynamics (MD) simulations are performed to calculate the adhesive energy between bare carbon and ZnO. Since the carbon fiber surface has oxygen functional groups, these were modeled as well; MD simulations showed that ketones interact strongly with ZnO, whereas this was not observed for hydroxyls or carboxylic acids. It was also found that the ketone molecules' ability to change orientation facilitated their interactions with the ZnO surface. Experimentally, atomic force microscopy (AFM) was used to measure the adhesive energy between ZnO and carbon through a liftoff test employing a highly oriented pyrolytic graphite (HOPG) substrate and a ZnO-covered AFM tip. Oxygen functionalization of the HOPG surface was shown to increase the adhesive energy. Additionally, the surface of ZnO was modified to hold a negative charge, which also increased the adhesive energy. This increase in adhesion resulted from increased induction forces, given the relatively high polarizability of HOPG and the preservation of the charge on the ZnO surface. The additional negative charge can be preserved on the ZnO surface because carbon and ZnO form a Schottky contact, which creates an energy barrier to charge transfer. Other materials with the same ionic properties as ZnO but with higher polarizability also demonstrated good adhesion to carbon. This result substantiates that the induced interaction can be facilitated not only by the polarizability of carbon but by that of either material at the interface. The versatility to modify the magnitude of the induced interaction between carbon and an ionic material provides a new route to create interfaces with controlled interfacial strength.
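The adhesive energy referred to above is commonly obtained from MD as the difference between the energy of the relaxed carbon/ZnO interface and the energies of the two isolated surfaces, normalized by the contact area; a minimal sketch of that bookkeeping, with illustrative symbols not taken from the thesis, is

E_adh = (E_(C+ZnO) - E_C - E_ZnO) / A,

where E_(C+ZnO) is the total energy of the combined system, E_C and E_ZnO are the energies of the isolated carbon and ZnO slabs, and A is the interfacial contact area.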
Contributors: Galan Vera, Magdian Ulises (Author) / Sodano, Henry A (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Oswald, Jay (Committee member) / Speyer, Gil (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 152112
Description
With the advent of social media (e.g., Twitter, Facebook), people are sharing their opinions and sentiments, and advancing their ideologies to others, like never before. Even people who are otherwise socially inactive share their thoughts on current affairs by tweeting and sharing news feeds with their friends and acquaintances. In this thesis study, we chose Twitter as our main data platform to analyze shifts and movements of 27 political organizations in Indonesia. So far, we have collected over 30 million tweets and 150,000 news articles from RSS feeds of the corresponding organizations for our analysis. For Twitter data extraction, we developed a multi-threaded application that seamlessly extracts, cleans, and stores millions of tweets matching our keywords from the Twitter Streaming API. For keyword extraction, we used topics and perspectives that were extracted using n-gram techniques and later approved by our social scientists. After the data is extracted, we aggregate the tweet contents of every user on a weekly basis. Finally, we applied linear and logistic regression using SLEP, an open-source sparse learning package, to compute weekly scores for users and map them to one of the 27 organizations on a radical or counter-radical scale. Since users are mapped to organizations on a weekly basis, we are able to track users' behavior and important new events that triggered shifts of users between organizations. This thesis study can further be extended to identify topic- and organization-specific influential users, and new users from other social media platforms such as Facebook and YouTube can be mapped to existing organizations on a radical or counter-radical scale.
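As a rough illustration of the weekly scoring step described above, the sketch below aggregates tweets per user per week, builds n-gram features, and fits a sparse (L1-regularized) logistic regression to score users on a radical/counter-radical axis. scikit-learn and pandas are used here as stand-ins for the SLEP package, and the column names, labels, and data are hypothetical.

```python
# Sketch of weekly user scoring with sparse logistic regression.
# scikit-learn/pandas stand in for SLEP; data and labels are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# One row per tweet: user, ISO week, text, and a radical/counter-radical label.
tweets = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u2"],
    "week": ["2013-W01", "2013-W01", "2013-W01", "2013-W01"],
    "text": ["jihad khilafah ...", "sharia state ...",
             "pluralism tolerance ...", "democracy dialogue ..."],
    "radical": [1, 1, 0, 0],          # 1 = radical-leaning, 0 = counter-radical
})

# Aggregate each user's tweets within a week into a single document.
weekly = (tweets.groupby(["user", "week"])
                .agg(text=("text", " ".join), radical=("radical", "max"))
                .reset_index())

# Uni- and bi-gram features, then an L1 (sparse) logistic regression.
vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(weekly["text"])
clf = LogisticRegression(penalty="l1", solver="liblinear")
clf.fit(X, weekly["radical"])

# Weekly score in [0, 1]; a separate mapping step would assign each user
# to one of the 27 organizations on the radical/counter-radical scale.
weekly["score"] = clf.predict_proba(X)[:, 1]
print(weekly[["user", "week", "score"]])
```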
Contributors: Poornachandran, Sathishkumar (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Woodward, Mark (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 152040
Description
"Sensor Decade" has been labeled on the first decade of the 21st century. Similar to the revolution of micro-computer in 1980s, sensor R&D; developed rapidly during the past 20 years. Hard workings were mainly made to minimize the size of devices with optimal the performance. Efforts to develop the small

"Sensor Decade" has been labeled on the first decade of the 21st century. Similar to the revolution of micro-computer in 1980s, sensor R&D; developed rapidly during the past 20 years. Hard workings were mainly made to minimize the size of devices with optimal the performance. Efforts to develop the small size devices are mainly concentrated around Micro-electro-mechanical-system (MEMS) technology. MEMS accelerometers are widely published and used in consumer electronics, such as smart phones, gaming consoles, anti-shake camera and vibration detectors. This study represents liquid-state low frequency micro-accelerometer based on molecular electronic transducer (MET), in which inertial mass is not the only but also the conversion of mechanical movement to electric current signal is the main utilization of the ionic liquid. With silicon-based planar micro-fabrication, the device uses a sub-micron liter electrolyte droplet sealed in oil as the sensing body and a MET electrode arrangement which is the anode-cathode-cathode-anode (ACCA) in parallel as the read-out sensing part. In order to sensing the movement of ionic liquid, an imposed electric potential was applied between the anode and the cathode. The electrode reaction, I_3^-+2e^___3I^-, occurs around the cathode which is reverse at the anodes. Obviously, the current magnitude varies with the concentration of ionic liquid, which will be effected by the movement of liquid droplet as the inertial mass. With such structure, the promising performance of the MET device design is to achieve 10.8 V/G (G=9.81 m/s^2) sensitivity at 20 Hz with the bandwidth from 1 Hz to 50 Hz, and a low noise floor of 100 ug/sqrt(Hz) at 20 Hz.
Contributors: Liang, Mengbing (Author) / Yu, Hongyu (Thesis advisor) / Jiang, Hanqing (Committee member) / Kozicki, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151345
Description
Woven fabric composite materials are widely used in the construction of aircraft engine fan containment systems, mostly due to their high strength-to-weight ratios and ease of implementation. A predictive model for fan blade containment would greatly benefit engine manufacturers through shortened development cycle times, reduced certification risk, and fewer dollars lost to redesign/recertification cycles. A mechanistic user-defined material model subroutine has been developed at Arizona State University (ASU) that captures the behavioral response of these fabrics, namely Kevlar® 49, under ballistic loading. Previously developed finite element models used to validate the consistency of this material model neglected the effects of the physical constraints imposed on the test setup during ballistic testing performed at NASA Glenn Research Center (NASA GRC). Part of this research was to explore the effects of these boundary conditions on the results of the numerical simulations; these effects were found to be negligible in most instances. Other material models for woven fabrics are available in the LS-DYNA finite element code. One of these models, MAT234: MAT_VISCOELASTIC_LOOSE_FABRIC (Ivanov & Tabiei, 2004), was studied and implemented in the finite element simulations of ballistic testing associated with the FAA-ASU research, and its results are compared to those obtained from the ASU UMAT as part of this research. The results indicate an underestimation of the energy absorption characteristics of the Kevlar 49 fabric containment systems, and more investigation is needed into the implementation of MAT234 for Kevlar 49 fabric. Static penetrator testing of Kevlar® 49 fabric was also performed at ASU in conjunction with this research. These experiments are designed to mimic the type of loading experienced during fan blade-out events. The resulting strains were measured using a non-contact optical strain measurement system (ARAMIS).
Contributors: Fein, Jonathan (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151351
Description
Dealloying-induced stress corrosion cracking is particularly relevant in energy conversion systems (both nuclear and fossil fuel), as many failures in alloys such as austenitic stainless steels and nickel-based systems result directly from dealloying. This study provides evidence of the role of unstable dynamic fracture processes in dealloying-induced stress corrosion cracking of face-centered cubic alloys. Corrosion of such alloys often results in the formation of a brittle nanoporous layer which, we hypothesize, serves to nucleate a crack that, owing to dynamic effects, penetrates into the un-dealloyed parent-phase alloy. Thus, since there is essentially a purely mechanical component of cracking, stress corrosion crack propagation rates can be significantly larger than predicted from electrochemical parameters. The main objective of this work is to examine and test this hypothesis under conditions relevant to stress corrosion cracking. Silver-gold alloys serve as a model system for this study since hydrogen effects can be neglected on a thermodynamic basis, which allows us to focus on a single cracking mechanism. In order to study various aspects of this problem, the dynamic fracture properties of monolithic nanoporous gold (NPG) were examined in air and under electrochemical conditions relevant to stress corrosion cracking. The detailed processes associated with the crack-injection phenomenon were also examined by forming dealloyed nanoporous layers of prescribed properties on un-dealloyed parent-phase structures and measuring crack penetration distances. Dynamic fracture in monolithic NPG and in crack-injection experiments was examined using high-speed (10^6 frames s^-1) digital photography. The tunable set of experimental parameters included the NPG length scale (20-40 nm), the thickness of the dealloyed layer (10-3000 nm), and the electrochemical potential (0.5-1.5 V). The results of crack-injection experiments were characterized using dual-beam focused ion beam/scanning electron microscopy. Together these tools allow us to very accurately examine the detailed structure and composition of dealloyed grain boundaries and to compare crack-injection distances to the depth of dealloying. The results of this work should provide a basis for new mathematical modeling of dealloying-induced stress corrosion cracking while providing a sound physical basis for the design of new alloys that may not be susceptible to this form of cracking. Additionally, the results should be of broad interest to researchers interested in the fracture properties of nano-structured materials, and the findings will open up new avenues of research apart from any implications the study may have for stress corrosion cracking.
Contributors: Sun, Shaofeng (Author) / Sieradzki, Karl (Thesis advisor) / Jiang, Hanqing (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151275
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst-Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control-flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to the cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
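A toy sketch of the path-cost idea described above: treat the application as a control-flow graph whose nodes carry worst-case monetary costs (e.g., priced API calls or storage charges), enumerate acyclic entry-to-exit paths, and report those exceeding a budget threshold. The graph, per-node costs, and threshold below are hypothetical; the thesis's actual algorithm and cost model are not reproduced here.

```python
# Toy worst-case monetary cost analysis over a control-flow graph (CFG).
# The graph, per-node dollar costs, and budget threshold are hypothetical.
cfg = {
    "entry": ["read_blob", "cache_hit"],
    "read_blob": ["transform"],
    "cache_hit": ["transform"],
    "transform": ["write_db", "exit"],
    "write_db": ["exit"],
    "exit": [],
}
cost = {"entry": 0.0, "read_blob": 0.004, "cache_hit": 0.0005,
        "transform": 0.002, "write_db": 0.01, "exit": 0.0}

def paths_over_budget(node, threshold, path=(), spent=0.0):
    """Yield (path, cost) for acyclic entry-to-exit paths costing more than threshold."""
    path = path + (node,)
    spent += cost[node]
    if not cfg[node]:                       # reached the exit node
        if spent > threshold:
            yield path, spent
        return
    for nxt in cfg[node]:
        if nxt not in path:                 # keep paths acyclic
            yield from paths_over_budget(nxt, threshold, path, spent)

for p, c in paths_over_budget("entry", threshold=0.01):
    print(" -> ".join(p), f"${c:.4f}")
```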
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151458
Description
The focus of this investigation is the optimum placement of a limited number of dampers, fewer than the number of blades, on a bladed disk to induce the smallest amplitude of blade response. The optimization process considers the presence of random mistuning, i.e., small involuntary variations in blade stiffness properties resulting, say, from manufacturing variability. Designed variations of these properties, known as intentional mistuning, are considered as an option to reduce blade response, and the pattern of two blade types (A and B blades) is then part of the optimization in addition to the location of dampers on the disk. First, this study focuses on the formulation and validation of dedicated algorithms for the selection of the damper locations and the intentional mistuning pattern. Failure of one or several of the dampers could lead to a sharp rise in blade response; this issue is addressed by including the possibility of damper failure in the optimization to yield a fail-safe solution. The high efficiency and accuracy of the optimization algorithms are assessed by comparison with computationally very demanding exhaustive search results. Second, the developed optimization algorithms are applied to nonlinear dampers (underplatform friction dampers), as well as to blade-blade dampers, both linear and nonlinear. Further, the optimization of blade-only and blade-blade linear dampers is extended to include uncertainty or variability in the damper properties induced by manufacturing or wear; it is found that the optimum achieved without considering such uncertainty is robust with respect to it. Finally, the potential benefits of using two different types of friction dampers differing in their masses (A and B types) on a bladed disk are considered. Both the A/B pattern and the damper masses are optimized to obtain the largest benefit compared to using identical dampers of optimized mass on every blade. Four situations are considered: tuned disks, disks with random mistuning of blade stiffness, and disks with random mistuning of both blade stiffness and damper normal forces, with and without damper variability induced by manufacturing and wear. In all cases, the benefit of intentional mistuning of friction dampers is small, on the order of a few percent.
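To make the combinatorial aspect concrete, the sketch below exhaustively searches the placement of a small number of dampers on a bladed disk, scoring each placement with a stand-in response function. The blade count, damper count, and surrogate response are hypothetical; the thesis's forced-response model, mistuning statistics, and dedicated optimization algorithms are not reproduced here.

```python
# Brute-force search over damper placements on an N-blade disk.
# The response function is a placeholder, not the thesis's forced-response model.
from itertools import combinations

N_BLADES = 12
N_DAMPERS = 3

def peak_response(placement):
    # Hypothetical surrogate: response is assumed lower when the dampers
    # are spread evenly around the disk (larger minimum angular spacing).
    p = sorted(placement)
    gaps = [(p[(i + 1) % len(p)] - b) % N_BLADES for i, b in enumerate(p)]
    return 1.0 / (1.0 + min(gaps))

# Exhaustive search over all C(12, 3) = 220 damper placements.
best = min(combinations(range(N_BLADES), N_DAMPERS), key=peak_response)
print("best placement:", best, "surrogate response:", round(peak_response(best), 3))
```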
Contributors: Murthy, Raghavendra Narasimha (Author) / Mignolet, Marc P (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Lentz, Jeff (Committee member) / Chattopadhyay, Aditi (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151500
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient network design. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS) have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
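As an illustration of the region-based metrics named above, the sketch below removes every node of a geometric (wireless-style) graph that falls inside a circular fault region and reports the size of the largest surviving component, in the spirit of RBLCS. The use of networkx, the graph, coordinates, and fault radius are all assumptions for illustration, not the dissertation's algorithm.

```python
# Region-based fault illustration: knock out all nodes inside a circular
# region and measure the largest remaining connected component (RBLCS-like).
# Graph, coordinates, and fault radius are hypothetical.
import math
import random
import networkx as nx

random.seed(0)
n, comm_range, fault_radius = 60, 0.25, 0.2
pos = {i: (random.random(), random.random()) for i in range(n)}

# Build a unit-disk (wireless-style) graph: link nodes within comm_range.
G = nx.Graph()
G.add_nodes_from(pos)
for u in pos:
    for v in pos:
        if u < v and math.dist(pos[u], pos[v]) <= comm_range:
            G.add_edge(u, v)

def largest_component_after_fault(G, center):
    """Remove all nodes within fault_radius of center; return largest component size."""
    failed = [v for v in G if math.dist(pos[v], center) <= fault_radius]
    H = G.copy()
    H.remove_nodes_from(failed)
    return max((len(c) for c in nx.connected_components(H)), default=0)

print("RBLCS-like size for a fault at (0.5, 0.5):",
      largest_component_after_fault(G, (0.5, 0.5)))
```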
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151513
Description
Ball Grid Arrays (BGAs) using lead-free or lead-rich solder materials are widely used as Second Level Interconnects (SLI) in mounting packaged components to the printed circuit board (PCB). The reliability of these solder joints is of significant importance to the performance of microelectronic components and systems. Product design/form factor, solder material, manufacturing process, and use conditions, as well as the inherent variabilities present in the system, greatly influence product reliability. Accurate reliability analysis requires an integrated approach that concurrently accounts for all these factors and their synergistic effects. Such an integrated and robust methodology can be used in the design and development of new and advanced microelectronic systems and can provide significant improvements in cycle time, cost, and reliability. The IMPRPK approach is based on a probabilistic methodology focusing on three major tasks: (1) characterization of BGA solder joints to identify failure mechanisms and obtain statistical data, (2) finite element analysis (FEM) to predict the system response needed for life prediction, and (3) development of a probabilistic methodology to predict the reliability, as well as the sensitivity of the system to various parameters and variabilities. These tasks and the predictive capabilities of IMPRPK in microelectronic reliability analysis are discussed.
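As a cartoon of the probabilistic step (task 3), the sketch below propagates variability in a solder-joint strain range through a generic power-law fatigue relation by Monte Carlo sampling to obtain a life distribution. The fatigue law, coefficients, and distributions are generic stand-ins, not the IMPRPK characterization or FEM results.

```python
# Monte Carlo cartoon of probabilistic solder-joint life prediction.
# The power-law fatigue relation and all numbers are generic stand-ins,
# not the IMPRPK characterization or FEM results.
import math
import random
import statistics

random.seed(1)
C, n = 0.3, 2.0                 # hypothetical fatigue coefficient and exponent

def cycles_to_failure(strain_range):
    # Coffin-Manson-style power law: N_f = C * (delta_eps)^(-n)
    return C * strain_range ** (-n)

# Variability in the FEM-predicted plastic strain range per thermal cycle.
strains = [random.lognormvariate(math.log(0.01), 0.15) for _ in range(10_000)]
lives = sorted(cycles_to_failure(s) for s in strains)

print("median life (cycles):", round(statistics.median(lives)))
print("1st-percentile life :", round(lives[len(lives) // 100]))
```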
Contributors: Fallah-Adl, Ali (Author) / Tasooji, Amaneh (Thesis advisor) / Krause, Stephen (Committee member) / Alford, Terry (Committee member) / Jiang, Hanqing (Committee member) / Mahajan, Ravi (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151517
Description
Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as data with relevant consumption information but stored in different formats, and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing of the data is completed, data mining techniques such as clustering are applied to find projects that involve resources of similar skill sets and that involve similar complexities and sizes. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Then the project characteristics that generate this diversity in headcounts and skill sets are identified. These characteristics are not currently contained in the database and are elicited from the managers of historical projects; this represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation of the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast the resource demand of some future projects.
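A compact sketch of the pipeline described above: cluster historical projects into "resource utilization templates" and fit a cross-validated linear regression from project characteristics to resource consumption. scikit-learn is assumed as the toolkit, and the feature names, data, and units are hypothetical.

```python
# Sketch of the clustering + cross-validated regression forecasting step.
# Feature names, data, and the use of scikit-learn are assumptions here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical historical projects: [num_features, process_node, ip_reuse_pct]
X = rng.uniform([5, 14, 0.1], [60, 45, 0.9], size=(30, 3))
# Hypothetical total engineer-weeks consumed by each project.
y = 4.0 * X[:, 0] - 1.5 * X[:, 2] * X[:, 0] + rng.normal(0, 5, size=30)

# Group projects into "resource utilization templates".
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Cross-validated linear regression, since past projects are few in number.
model = LinearRegression()
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("template sizes:", np.bincount(templates))
print("CV mean abs. error (engineer-weeks):", round(-scores.mean(), 1))

# Forecast a hypothetical future project.
model.fit(X, y)
print("forecast:", round(model.predict([[40, 22, 0.5]])[0], 1), "engineer-weeks")
```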
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013