Matching Items (232)
Description
Woven fabric composite materials are widely used in the construction of aircraft engine fan containment systems, mostly due to their high strength-to-weight ratios and ease of implementation. The development of a predictive model for fan blade containment would provide great benefit to engine manufacturers: shortened development cycle time, less risk in certification, and fewer dollars lost to redesign/recertification cycles. A mechanistic user-defined material model subroutine has been developed at Arizona State University (ASU) that captures the behavioral response of these fabrics, namely Kevlar® 49, under ballistic loading. Previously developed finite element models used to validate the consistency of this material model neglected the effects of the physical constraints imposed on the test setup during ballistic testing performed at NASA Glenn Research Center (NASA GRC). Part of this research was to explore the effects of these boundary conditions on the results of the numerical simulations; these effects were found to be negligible in most instances. Other material models for woven fabrics are available in the LS-DYNA finite element code. One of these models, MAT234: MAT_VISCOELASTIC_LOOSE_FABRIC (Ivanov & Tabiei, 2004), was studied and implemented in the finite element simulations of ballistic testing associated with the FAA-ASU research. The results from these models are compared to results obtained from the ASU UMAT as part of this research. The results indicate an underestimation of the energy absorption characteristics of the Kevlar 49 fabric containment systems; further investigation of the MAT234 implementation for Kevlar 49 fabric is needed. Static penetrator testing of Kevlar® 49 fabric was performed at ASU in conjunction with this research. These experiments are designed to mimic the type of loading experienced during fan blade-out events. The resulting experimental strains were measured using a non-contact optical strain measurement system (ARAMIS).
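
As a point of reference for the energy-absorption comparison above, the energy absorbed by a fabric containment system in a ballistic test is commonly computed from the projectile velocities before and after impact; a minimal sketch follows, with symbols (projectile mass m_p, impact velocity V_i, residual velocity V_r) that are generic rather than taken from the thesis.

    % Energy absorbed by the fabric, from pre- and post-impact projectile velocities
    % (m_p: projectile mass, V_i: impact velocity, V_r: residual velocity).
    E_{\mathrm{abs}} = \tfrac{1}{2}\, m_p \left( V_i^{2} - V_r^{2} \right)
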
Contributors: Fein, Jonathan (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The design and development of analog/mixed-signal (AMS) integrated circuits (ICs) is becoming increasingly expensive, complex, and lengthy. Rapid prototyping and emulation of analog ICs will be significant in the design and testing of complex analog systems. A new approach, the Programmable ANalog Device Array (PANDA), which maps any AMS design problem to transistor-level programmable hardware, is proposed. This approach enables fast system-level validation and a reduction in post-silicon bugs, minimizing design risk and cost. The unique features of the approach include 1) transistor-level programmability that emulates each transistor's behavior in an analog design, achieving very fine granularity of reconfiguration; 2) programmable switches that are treated as design components during analog transistor emulation and optimized with the reconfiguration matrix; 3) compensation of AC performance degradation by boosting the bias current. Based on these principles, a digitally controlled PANDA platform is designed at the 45nm node that can map AMS modules across 22nm to 90nm technology nodes. A systematic emulation approach to map any analog transistor to a PANDA cell is proposed, which achieves transistor-level matching accuracy of less than 5% for ID and less than 10% for Rout and Gm. Circuit-level analog metrics of a voltage-controlled oscillator (VCO) emulated by PANDA match those of the original designs in the 90nm node with less than 5% error. Voltage-controlled delay lines at 65nm and 90nm are emulated by 32nm PANDA, successfully matching important analog metrics, and at-speed emulation is achieved as well. Several other 90nm analog blocks are successfully emulated by the 45nm PANDA platform, including a folded-cascode operational amplifier and a sample-and-hold module (S/H).
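
To illustrate the matching criterion quoted above (less than 5% error for ID and less than 10% for Rout and Gm), a minimal Python sketch of the tolerance check follows. The function names, tolerance arguments, and sample numbers are hypothetical illustrations, not part of the PANDA tool flow described in the thesis.

    # Hypothetical sketch: check whether an emulated transistor's metrics fall
    # within the matching tolerances quoted in the abstract.

    def relative_error(emulated, reference):
        """Relative mismatch of an emulated metric against the target design."""
        return abs(emulated - reference) / abs(reference)

    def matches(target, emulated, tol_id=0.05, tol_rout=0.10, tol_gm=0.10):
        """True if ID, Rout, and Gm all fall within their assumed tolerances."""
        return (relative_error(emulated["ID"], target["ID"]) <= tol_id
                and relative_error(emulated["Rout"], target["Rout"]) <= tol_rout
                and relative_error(emulated["Gm"], target["Gm"]) <= tol_gm)

    # Illustrative numbers only (not measured PANDA data).
    target   = {"ID": 100e-6, "Rout": 50e3, "Gm": 1.2e-3}
    emulated = {"ID": 103e-6, "Rout": 46e3, "Gm": 1.11e-3}
    print(matches(target, emulated))  # True under the assumed tolerances
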
Contributors: Xu, Cheng (Author) / Cao, Yu (Thesis advisor) / Blain Christen, Jennifer (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The majority of sensor networks consist of low-cost, autonomously powered devices, and are used to collect data from the physical world. Today's sensor network deployments are mostly application-specific and owned by a particular entity. Because of this application-specific nature and the ownership boundaries, this modus operandi hinders large-scale sensing and overall network operational capacity. The main goal of this research work is to create a mechanism to dynamically form personal area networks based on mote-class devices spanning ownership boundaries. When coupled with an overlay-based control system, this architecture can be conveniently used by a remote client to dynamically create sensor networks (personal area network based) even when the client does not own a network. The nodes here are "borrowed" from existing host networks, and the application related to the newly formed network coexists with the native applications thanks to concurrency. The result allows users to embed a single collection tree onto spatially distant networks as if they were within communication range. This implementation consists of a core operating system and various other external components that support injection, maintenance, and dissolution of sensor network applications at the client's request. A large-object data dissemination protocol was designed for reliable application injection. The ability of this system to remotely reconfigure a network is useful given the high failure rate of real-world sensor network deployments. Collaborative sensing and the monitoring of various physical phenomena can also be considered applications of this architecture.
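
The large-object dissemination protocol mentioned above is only named here, so the sketch below shows the generic page-and-request pattern that reliable-injection protocols commonly follow: split the application image into fixed-size pages and let the receiver keep requesting the pages it is missing. The page size, function names, and toy lossy link are assumptions for illustration, not the thesis implementation.

    # Illustrative sketch of page-based reliable dissemination of a large binary.
    # A generic request-and-retransmit pattern, not the protocol from the thesis.
    import random

    PAGE_SIZE = 64  # bytes per page (assumed)

    def paginate(image):
        """Split an application image into fixed-size pages."""
        return [image[i:i + PAGE_SIZE] for i in range(0, len(image), PAGE_SIZE)]

    def lossy_send(index, payload, loss_probability=0.3):
        """Toy link that randomly 'drops' a packet; returns (delivered?, payload)."""
        return random.random() > loss_probability, payload

    def disseminate(image, send):
        """Keep offering pages until the receiver has recovered every one of them
        (receiver-driven recovery of missing pages)."""
        pages = paginate(image)
        received = {}
        while len(received) < len(pages):
            for i in [j for j in range(len(pages)) if j not in received]:
                delivered, data = send(i, pages[i])
                if delivered:
                    received[i] = data
        return b"".join(received[i] for i in range(len(pages)))

    image = bytes(range(200))
    assert disseminate(image, lossy_send) == image
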
Contributors: Fernando, M. S. R (Author) / Dasgupta, Partha (Thesis advisor) / Bhattacharya, Amiya (Thesis advisor) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Process migration is a heavily studied research area and has a number of applications in distributed systems. Process migration means transferring a process running on one machine to another such that it resumes execution from the point at which it was suspended. The conventional approach to implementing process migration is to move the entire state information of the process (including hardware context, virtual memory, files, etc.) from one machine to another. Copying all the state information is costly. This thesis proposes and demonstrates a new approach to migrating a process between two cores of the Intel Single-chip Cloud Computer (SCC), an experimental 48-core processor, with each core running a separate instance of the operating system. In this method, the amount of process state to be transferred from one core's memory to another is reduced by making use of special registers called lookup tables (LUTs) present on each core of the SCC. This new approach is therefore faster than the conventional method.
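
The central idea above is that re-pointing a core's lookup-table entries can replace copying the process's memory. The toy Python model below contrasts the bytes that would move under a full copy with those moved when only LUT entries and a small residual state are transferred; the segment size, entry size, and function names are assumptions, and this is not the actual SCC LUT programming interface.

    # Toy model: migrating a process by copying pages vs. remapping LUT entries.
    # Purely illustrative; not the SCC programming interface.

    SEGMENT_SIZE = 16 * 1024 * 1024   # assumed bytes per LUT-mapped segment
    LUT_ENTRY_SIZE = 4                # assumed bytes to rewrite one LUT entry

    def cost_full_copy(num_segments):
        """Bytes moved if the whole address space is copied to the target core."""
        return num_segments * SEGMENT_SIZE

    def cost_lut_remap(num_segments, residual_state):
        """Bytes moved if the target core's LUT entries are re-pointed at the
        source core's memory and only a small residual state is transferred."""
        return num_segments * LUT_ENTRY_SIZE + residual_state

    segments = 8
    print(cost_full_copy(segments))        # 134217728 bytes
    print(cost_lut_remap(segments, 4096))  # 4128 bytes
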
Contributors: Jain, Vaibhav (Author) / Dasgupta, Partha (Thesis advisor) / Shrivastava, Aviral (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Memories play an integral role in today's advanced ICs. Technology scaling has enabled high-density designs, at the price of a greater impact from variability and reliability degradation. It is imperative to have accurate methods to measure and extract the variability in the SRAM cell in order to produce accurate reliability projections for future technologies. This work presents a novel test measurement and extraction technique that is non-invasive to the actual operation of the SRAM memory array. The salient features of this work include: i) a single-ended SRAM test structure with no disturbance to SRAM operations; ii) a convenient test procedure that only requires quasi-static control of external voltages; iii) a non-iterative method that extracts the VTH variation of each transistor from eight independent switch-point measurements. With present-day technology scaling, in addition to process variability, the impact of aging mechanisms becomes dominant. Aging mechanisms such as Negative Bias Temperature Instability (NBTI), Channel Hot Carrier (CHC), and Time Dependent Dielectric Breakdown (TDDB) are critical in present-day nano-scale technology nodes. In this work, we focus on the impact of NBTI aging in the SRAM cell and use a trapping/de-trapping theory based log(t) model to explain the shift in threshold voltage VTH. The aging study focuses on the following: i) statistical aging in the PMOS device due to NBTI dominates the temporal shift of the SRAM cell; ii) besides static variations, the shift in VTH demands increased guard-banding margins at the design stage; iii) aging statistics remain constant during the shift, presenting a secondary effect in aging prediction; and iv) whether the aging mechanism can be used as a compensation technique to reduce mismatch due to process variations. Finally, the entire test setup has been simulated in SPICE and validated with silicon, and the results are presented. The method also facilitates the study of design metrics such as static, read, and write noise margins as well as the data retention voltage, and thus helps designers improve the cell stability of SRAM.
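
A hedged sketch of a trapping/de-trapping based log(t) dependence, of the general kind referred to above, is given below; the coefficient A(V, T) and constant t_0 are placeholders, and the exact functional form used in the thesis may differ.

    % Illustrative trapping/de-trapping log(t) form for the NBTI-induced shift;
    % A(V, T) lumps the stress-voltage and temperature dependence, t_0 is a fitting constant.
    \Delta V_{TH}(t) \;\approx\; A(V, T)\, \log\!\left( 1 + \frac{t}{t_{0}} \right)
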
Contributors: Ravi, Venkatesa (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Clark, Lawrence (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Dwindling energy resources and associated environmental costs have resulted in a serious need to design and construct energy-efficient buildings. One of the strategies for developing energy-efficient structural materials is the incorporation of phase change materials (PCM) in the host matrix. This research work presents details of a finite element-based framework that is used to study the thermal performance of structural precast concrete wall elements with and without a layer of phase change material. The simulation platform developed can be implemented for a wide variety of input parameters. In this study, two different locations in the continental United States, representing different ambient temperature conditions (corresponding to hot, cold, and typical days of the year), are studied. Two different types of concrete (normal weight and lightweight), different PCM types, gypsum wallboards with varying PCM percentages, and different PCM layer thicknesses are also considered with the aim of understanding the energy flow across the wall member. The effects of changing the PCM location and of prolonged thermal loading are also studied. The temperature of the inside face of the wall and the energy flow through the inside face of the wall, which determine the indoor HVAC energy consumption, are used as the defining parameters. An ad hoc optimization scheme is also implemented in which the PCM thickness is fixed but its location and properties are varied. Numerical results show that energy savings are possible with small changes in baseline values, facilitating appropriate material design for desired characteristics.
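
As a hedged illustration of the kind of simulation described above, the Python sketch below steps an explicit one-dimensional finite-difference model of transient conduction through a wall, using an apparent-heat-capacity treatment to mimic the latent heat of a PCM near its transition temperature. All material values, the layer thickness, time step, and boundary temperatures are assumed for illustration and are not the parameters studied in the thesis.

    # Hedged sketch: explicit 1-D finite-difference conduction through a wall with
    # an apparent-heat-capacity treatment of a PCM. All numbers are assumptions.
    import numpy as np

    L_wall, nx = 0.20, 21                    # wall thickness [m], grid points (assumed)
    dx = L_wall / (nx - 1)
    k, rho, cp_base = 1.5, 2300.0, 900.0     # concrete-like properties (assumed)
    T_melt, half_window = 25.0, 1.0          # PCM transition range [deg C] (assumed)
    latent = 20000.0                         # effective latent heat per kg of wall (assumed)

    def cp_apparent(T):
        """Boost the heat capacity inside the transition window to mimic latent heat."""
        cp = np.full_like(T, cp_base)
        cp[np.abs(T - T_melt) < half_window] += latent / (2.0 * half_window)
        return cp

    T = np.full(nx, 24.0)                    # initial wall temperature [deg C]
    T_out, T_in = 40.0, 24.0                 # outdoor / indoor face temperatures (assumed)
    dt, steps = 30.0, int(4 * 3600 / 30)     # explicit time stepping, 4 simulated hours

    for _ in range(steps):
        T[0], T[-1] = T_out, T_in            # Dirichlet boundary conditions
        alpha = k / (rho * cp_apparent(T))   # thermal diffusivity, node by node
        T[1:-1] += dt * alpha[1:-1] * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2

    q_inside = -k * (T[-1] - T[-2]) / dx     # heat flow into the indoor face [W/m^2]
    print(f"Indoor-face heat flux after 4 h: {q_inside:.1f} W/m^2")
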
Contributors: Hembade, Lavannya Babanrao (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), where software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven, spatio-temporal, aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties.
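
For context on the formal model named above, the sketch below gives the standard textbook tuple for a hybrid automaton, which a spatio-temporal extension such as STHA builds upon; this is the conventional form, not the specific STHA definition from the thesis.

    % Standard hybrid automaton tuple (textbook form):
    %   Q: discrete modes, X: continuous states, Init: initial states,
    %   f: flow map (xdot = f(q, x)), Dom: mode invariants, E: discrete transitions,
    %   G: guards, R: reset maps.
    H = \left( Q, X, \mathrm{Init}, f, \mathrm{Dom}, E, G, R \right)
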
Contributors: Banerjee, Ayan (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Poovendran, Radha (Committee member) / Fainekos, Georgios (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Alkali-activated aluminosilicates, commonly known as "geopolymers," are being increasingly studied as a potential replacement for Portland cement. These binders use an alkaline activator, typically alkali silicates, alkali hydroxides, or a combination of both, along with a silica- and alumina-rich material, such as fly ash or slag, to form a final product with properties comparable to or better than those of ordinary Portland cement. The kinetics of alkali activation is highly dependent on the chemical composition of the binder material and the activator concentration. The influence of binder composition (slag, fly ash, or both) and of different levels of alkalinity, expressed using the ratio of Na2O to binder (n) and the activator SiO2-to-Na2O ratio (Ms), on the early-age behavior of sodium silicate solution (waterglass) activated fly ash-slag blended systems is discussed in this thesis. The optimal binder composition and n values are selected based on the setting times. Higher activator alkalinity (n value) is required when the amount of slag in the fly ash-slag blended mixtures is reduced. Isothermal calorimetry is performed to evaluate the early-age hydration process and to understand the reaction kinetics of the alkali-activated systems. The differences in the calorimetric signatures between waterglass-activated slag and fly ash-slag blends facilitate an understanding of the impact of the binder composition on the reaction rates. Kinetic modeling is used to quantify the differences in reaction kinetics using the exponential as well as the Knudsen method. The influence of temperature on the reaction kinetics of activated slag and fly ash-slag blends, based on the hydration parameters, is discussed. Very high compressive strengths (more than 70 MPa) can be obtained at both early and later ages with waterglass-activated slag mortars. Compressive strength decreases with an increase in the fly ash content. Qualitative evidence of leaching is presented through the electrical conductivity changes in the saturating solution. The impact of leaching and the strength loss is found to be generally higher for mixtures made using a higher activator Ms and a higher n value. Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy (ATR-FTIR) is used to obtain information about the reaction products.
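
The exponential kinetic model referred to above is commonly written as a three-parameter fit to the degree of reaction inferred from the calorimetric heat release; the sketch below shows that generic literature form, with parameter names that are not necessarily the notation used in the thesis.

    % Three-parameter exponential model for the degree of reaction;
    % alpha_u (ultimate degree of reaction), tau (time parameter), and beta
    % (shape parameter) are fitted constants.
    \alpha(t) \;=\; \alpha_{u} \exp\!\left[ -\left( \frac{\tau}{t} \right)^{\beta} \right]
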
Contributors: Chithiraputhiran, Sundara Raman (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Test cost has become a significant portion of device cost and a bottleneck in high-volume manufacturing. Increasing integration density and shrinking feature sizes increase test time and cost and reduce observability. Test engineers have to put in a tremendous effort in order to keep test cost within an acceptable budget. Unfortunately, there is no single straightforward solution to the problem. The products being tested span several application domains and distinct customer profiles. Some products are required to operate for long periods of time, while others are required to be low cost. The multitude of constraints and goals makes it impossible to find a single solution that works for all cases. Hence, test development and optimization are typically design/circuit dependent and even process specific. Therefore, test optimization cannot be performed using a single test approach, but necessitates a diversity of approaches. This work aims at addressing test cost minimization and test quality improvement at various levels. In the first chapter of the work, we investigate pre-silicon strategies, such as design for test and pre-silicon statistical simulation optimization. In the second chapter, we investigate efficient post-silicon test strategies, such as adaptive test, adaptive multi-site test, outlier analysis, and process shift detection/tracking.
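
One of the post-silicon strategies listed above, process shift detection/tracking, is often built on a running statistic over a parametric test measurement. The Python sketch below shows a generic exponentially weighted moving average (EWMA) monitor of that kind; the smoothing factor, control limit, and readings are made-up illustrations, not the method developed in the thesis.

    # Hedged sketch: flag a process shift when an EWMA of a parametric test
    # measurement drifts beyond an assumed control limit. Illustrative only.

    def ewma_shift_monitor(measurements, baseline, limit, lam=0.2):
        """Yield (index, ewma, shifted?) for each incoming measurement."""
        ewma = baseline
        for i, x in enumerate(measurements):
            ewma = lam * x + (1.0 - lam) * ewma
            yield i, ewma, abs(ewma - baseline) > limit

    # Toy data: nominal readings followed by a drifted lot (values are made up).
    readings = [1.00, 1.02, 0.99, 1.01, 1.00, 1.08, 1.10, 1.12, 1.11, 1.13]
    for i, ewma, shifted in ewma_shift_monitor(readings, baseline=1.0, limit=0.05):
        if shifted:
            print(f"possible process shift detected at part {i}, EWMA = {ewma:.3f}")
            break
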
Contributors: Yilmaz, Ender (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The properties of a random porous material such as pervious concrete are strongly dependent on its pore structure features. This research deals with developing an understanding of the relationship between the material structure and the mechanical and functional properties of pervious concretes. The fracture response of pervious concrete specimens proportioned for different porosities, as a function of the pore structure features and fiber volume fraction, is studied. Stereological and morphological methods are used to extract the relevant pore structure features of pervious concretes from planar images. A two-parameter fracture model is used to obtain the fracture toughness (KIC) and critical crack tip opening displacement (CTODc) from load-crack mouth opening displacement (CMOD) data of notched beams under three-point bending. The experimental results show that KIC is primarily dependent on the porosity of pervious concretes. For a similar porosity, an increase in pore size results in a reduction in KIC. At similar pore sizes, the effect of fibers on the post-peak response is more prominent in mixtures with a higher porosity, as shown by the residual load capacity, stress-crack extension relationships, and GR curves. These effects are explained using the mean free spacing of pores and the pore-to-pore tortuosity in these systems. A sensitivity analysis is employed to quantify the influence of material design parameters on KIC. This research has also focused on studying the relationship between permeability and tortuosity as it pertains to the porosity and pore size of pervious concretes. Ideal geometric shapes with varying pore sizes and porosities were also constructed, and the pervious concretes likewise had differing pore sizes and porosities. The permeabilities were determined using three different methods: a Stokes solver, the lattice Boltzmann method, and the Katz-Thompson equation. These values were then compared to the tortuosity values determined using a Matlab code that uses a pore connectivity algorithm. The tortuosity was also determined from the inverse of the conductivity obtained from the numerical analysis that was necessary for using the Katz-Thompson equation. These tortuosity values were then compared to the permeabilities. The pervious concretes and ideal geometric shapes showed consistent similarities between their tortuosities and permeabilities.
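
For reference, the Katz-Thompson estimate named above relates permeability to a critical pore diameter and the relative conductivity of the pore network; the commonly quoted literature form is sketched below, and the exact expression or constant used in the thesis may differ.

    % Katz-Thompson permeability estimate (commonly quoted constant 1/226);
    % l_c: critical pore diameter, sigma/sigma_0: conductivity of the pore network
    % relative to that of the pore fluid (often written as phi/tau, porosity over
    % tortuosity, in one common convention).
    k \;\approx\; \frac{l_c^{\,2}}{226}\, \frac{\sigma}{\sigma_{0}}
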
Contributors: Rehder, Benjamin (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2013