Matching Items (124)
151406-Thumbnail Image.png
Description
Alkali-activated aluminosilicates, commonly known as "geopolymers", are being increasingly studied as a potential replacement for Portland cement. These binders use an alkaline activator, typically alkali silicates, alkali hydroxides, or a combination of both, along with a silica- and alumina-rich material such as fly ash or slag, to form a final product with properties comparable to or better than those of ordinary Portland cement. The kinetics of alkali activation is highly dependent on the chemical composition of the binder material and the activator concentration. This thesis discusses the influence of binder composition (slag, fly ash, or both) and of different levels of alkalinity, expressed through the Na2O-to-binder ratio (n) and the activator SiO2-to-Na2O ratio (Ms), on the early-age behavior of sodium silicate solution (waterglass) activated fly ash-slag blended systems. Optimal binder compositions and n values are selected based on the setting times. Higher activator alkalinity (n value) is required when the amount of slag in the fly ash-slag blended mixtures is reduced. Isothermal calorimetry is performed to evaluate the early-age hydration process and to understand the reaction kinetics of the alkali-activated systems. The differences in the calorimetric signatures between waterglass-activated slag and fly ash-slag blends facilitate an understanding of the impact of the binder composition on the reaction rates. Kinetic modeling is used to quantify the differences in reaction kinetics using both the exponential and the Knudsen methods. The influence of temperature on the reaction kinetics of activated slag and fly ash-slag blends, based on the hydration parameters, is also discussed. Very high compressive strengths (more than 70 MPa) can be obtained at both early and later ages with waterglass-activated slag mortars. Compressive strength decreases with increasing fly ash content.
Qualitative evidence of leaching is presented through the electrical conductivity changes in the saturating solution. The impact of leaching and the resulting strength loss are found to be generally higher for mixtures made using a higher activator Ms and a higher n value. Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy (ATR-FTIR) is used to obtain information about the reaction products.
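The exponential and Knudsen kinetic models mentioned in the abstract can be sketched in a few lines. This is a minimal illustration under common textbook forms of these models (the exponential form follows the Freiesleben Hansen expression); the function names and parameter values are hypothetical, not taken from the thesis.

```python
import math

def exponential_model(t, alpha_u, tau, beta):
    """Exponential degree-of-reaction model:
    alpha(t) = alpha_u * exp(-(tau / t) ** beta),
    where alpha_u is the ultimate degree of reaction,
    tau a characteristic time, and beta a shape parameter."""
    return alpha_u * math.exp(-((tau / t) ** beta))

def knudsen_model(t, alpha_u, t50, t0=0.0):
    """Knudsen linear-dispersion model: the degree of reaction
    approaches alpha_u hyperbolically, reaching alpha_u / 2 at
    t = t0 + t50 (t0 is the induction time)."""
    if t <= t0:
        return 0.0
    return alpha_u * (t - t0) / (t50 + (t - t0))
```

Fitting either model to isothermal calorimetry data at several temperatures yields the hydration parameters from which apparent activation energies, and hence the temperature dependence discussed above, can be extracted.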
ContributorsChithiraputhiran, Sundara Raman (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniyam D (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2012
151987-Thumbnail Image.png
Description
Properties of random porous materials such as pervious concrete are strongly dependent on their pore structure features. This research develops an understanding of the relationship between the material structure and the mechanical and functional properties of pervious concretes. The fracture response of pervious concrete specimens proportioned for different porosities is studied as a function of the pore structure features and fiber volume fraction. Stereological and morphological methods are used to extract the relevant pore structure features of pervious concretes from planar images. A two-parameter fracture model is used to obtain the fracture toughness (KIC) and critical crack tip opening displacement (CTODc) from load-crack mouth opening displacement (CMOD) data of notched beams under three-point bending. The experimental results show that KIC is primarily dependent on the porosity of pervious concretes. For a similar porosity, an increase in pore size results in a reduction in KIC. At similar pore sizes, the effect of fibers on the post-peak response is more prominent in mixtures with a higher porosity, as shown by the residual load capacity, stress-crack extension relationships, and GR curves. These effects are explained using the mean free spacing of pores and the pore-to-pore tortuosity in these systems. A sensitivity analysis is employed to quantify the influence of material design parameters on KIC. This research also studies the relationship between permeability and tortuosity as it pertains to the porosity and pore size of pervious concretes. Ideal geometric shapes with varying pore sizes and porosities were constructed, alongside pervious concretes with differing pore sizes and porosities. The permeabilities were determined using three different methods: a Stokes solver, the Lattice Boltzmann method, and the Katz-Thompson equation.
These values were then compared with tortuosity values determined using a Matlab code implementing a pore connectivity algorithm. The tortuosity was also obtained from the inverse of the conductivity computed in the numerical analysis required for the Katz-Thompson equation, and these tortuosity values were likewise compared with the permeabilities. The pervious concretes and ideal geometric shapes showed consistent similarities between their tortuosities and permeabilities.
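The Katz-Thompson estimate and the conductivity-based tortuosity referred to above can be sketched as follows. This is a minimal sketch using one common form of each relation; the prefactor 1/226, the tortuosity definition, and the function names are assumptions for illustration, not values reported in the thesis.

```python
def katz_thompson_permeability(l_c, sigma_eff, sigma_0, c=1.0 / 226.0):
    """Katz-Thompson estimate: k = c * l_c^2 * (sigma_eff / sigma_0),
    where l_c is the critical pore diameter (m) and sigma_eff / sigma_0
    is the normalized electrical conductivity of the saturated material
    (the inverse of the formation factor)."""
    return c * l_c ** 2 * (sigma_eff / sigma_0)

def tortuosity_from_conductivity(porosity, sigma_eff, sigma_0):
    """One common definition: sigma_eff / sigma_0 = porosity / tau,
    so tau = porosity * sigma_0 / sigma_eff."""
    return porosity * sigma_0 / sigma_eff

# Illustrative values: l_c = 1 mm, normalized conductivity 0.02, porosity 0.25
k = katz_thompson_permeability(1e-3, 0.02, 1.0)
tau = tortuosity_from_conductivity(0.25, 0.02, 1.0)
```

Because both quantities derive from the same normalized conductivity, a correlation between permeability and tortuosity of the kind observed above is built into this formulation.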
ContributorsRehder, Benjamin (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created2013
151960-Thumbnail Image.png
Description
Buildings consume a large portion of the world's energy, but with the integration of phase change materials (PCMs) in building elements this energy cost can be greatly reduced. The addition of PCMs into building elements, however, makes it challenging to model and analyze how the material actually affects the energy flow and temperatures in the system. This research work presents a comprehensive computer program used to model and analyze PCM-embedded wall systems. The finite element method (FEM) provides the tool to analyze the energy flow of these systems: finite element analysis (FEA) can model the transient response over a typical climate cycle along with the nonlinear behavior that the addition of PCM introduces. Phase change materials are also a costly material expense, but the initial expense of using PCMs can be compensated by the reduction in energy costs they provide. Optimization is the tool used to determine the optimal trade-off between the amount of PCM added to a wall and the energy savings that layer will provide. The integration of these two tools into a computer program allows models to be efficiently created, analyzed, and optimized. The program was then used to compare two different wall models: a wall with a single PCM layer and a wall with two different PCM layers. The effect of the PCMs on the inside wall temperature, along with the energy flow across the wall, is computed. The numerical results show that the multi-layer PCM wall is more energy efficient and cost effective than the single-layer PCM wall. A structural analysis was then performed on the optimized designs using ABAQUS v. 6.10 to ensure the structural integrity of the wall was not affected by adding the PCM layer(s).
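The nonlinearity introduced by a PCM is often handled with an apparent-heat-capacity formulation, where the latent heat is smeared over a small melting range. The thesis uses FEM, but the same transient energy balance can be illustrated with a 1-D explicit finite-difference sketch; all material values, the melting range, and the function names here are hypothetical placeholders.

```python
import numpy as np

def apparent_heat_capacity(T, cp_solid=900.0, cp_liquid=900.0,
                           latent=180e3, T_melt=25.0, dT=1.0):
    """Apparent-heat-capacity treatment of a PCM (J/kg-K): the latent
    heat is spread over the range [T_melt - dT, T_melt + dT]."""
    return np.where((T > T_melt - dT) & (T < T_melt + dT),
                    0.5 * (cp_solid + cp_liquid) + latent / (2.0 * dT),
                    np.where(T <= T_melt - dT, cp_solid, cp_liquid))

def step_wall(T, dx, dt, k=0.2, rho=800.0):
    """One explicit time step of 1-D transient conduction through a
    wall layer, with temperature-dependent apparent capacity; the
    first and last nodes are held as fixed-temperature boundaries."""
    cp = apparent_heat_capacity(T)
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + dt * k / (rho * cp[1:-1] * dx ** 2) * (
        T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn
```

Stepping such a model through a climate cycle yields the inside-wall temperature history and energy flux that the optimization then trades off against PCM cost.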
ContributorsStockwell, Amie (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2013
152088-Thumbnail Image.png
Description
The alkali activation of aluminosilicate materials derived from industrial byproducts as binder systems has been extensively studied due to the advantages it offers in terms of enhanced material properties, while increasing sustainability through the reuse of industrial waste and byproducts and reducing the adverse impacts of OPC production. Fly ash and ground granulated blast furnace slag are commonly used for their content of soluble silica and aluminate species that can undergo dissolution, polymerization with the alkali, condensation on particle surfaces, and solidification. The following topics are the focus of this thesis: (i) the use of microwave-assisted thermal processing, in addition to heat-curing, as a means of alkali activation, and (ii) the relative effects of alkali cations (K or Na) in the activator (powder activators) on the mechanical properties and chemical structure of these systems. Unsuitable curing conditions instigate carbonation, which in turn lowers the pH of the system, causing significant reductions in the rate of fly ash activation and mechanical strength development. This study explores the effects of sealing the samples during the curing process, which effectively traps the free water in the system and allows for increased aluminosilicate activation. The use of microwave-curing in lieu of thermal-curing is also studied in order to reduce energy consumption and for its ability to provide fast volumetric heating. Potassium-based powder activators dry-blended into the slag binder system are shown to be effective in obtaining very high compressive strengths under moist curing conditions (greater than 70 MPa), whereas sodium-based powder activation yields much lower strengths (around 25 MPa). Compressive strength decreases when fly ash is introduced into the system. Isothermal calorimetry is used to evaluate the early hydration process and to understand the reaction kinetics of the alkali powder activated systems.
Qualitative evidence of the alkali-hydroxide concentration of the paste pore solution, obtained through electrical conductivity measurements, is also presented, with the results indicating that alkali ion concentrations are higher in the pore solution of potassium-based systems. The use of advanced spectroscopic and thermal analysis techniques to distinguish the influence of the studied parameters is also discussed.
ContributorsChowdhury, Ussala (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2013
151945-Thumbnail Image.png
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well suited to tackle the complex application demands desired by the end customer. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs): general purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than support cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase, and designers have turned to MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that meets the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thereby reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)).
We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
ContributorsLeary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created2013
152415-Thumbnail Image.png
Description
We are expecting hundreds of cores per chip in the near future. However, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides a single image of memory at any time in execution to all the cores, yet coherent cache architectures are not believed to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of the die area. Therefore, a more scalable architecture is needed for manycore designs. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in the hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after its use. Doing this, however, adds a lot of complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed a Complete Circular Stack Management scheme. It manages stack frames between the local memory and the main memory, and addresses the stack pointer problem as well. Though it works, we found we could further optimize the management for most cases; thus a Smart Stack Data Management (SSDM) scheme is provided.
In this work, we formulate the stack data management problem and propose a greedy algorithm for it. Later on, we propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage. We provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partition methodology.
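The circular stack management idea described above can be illustrated with a toy model: when pushing a frame would overflow the scratchpad, the oldest resident frames are evicted to main memory (a DMA-out in a real SMM runtime) and brought back when room frees up. This is a simplified sketch for intuition only; class and method names are hypothetical and do not reflect the dissertation's actual implementation.

```python
class CircularStackManager:
    """Toy circular stack management for a size-limited scratchpad."""

    def __init__(self, local_size):
        self.local_size = local_size
        self.local = []    # frames resident in scratchpad: (name, size)
        self.evicted = []  # frames spilled to main memory, oldest first

    def used(self):
        return sum(size for _, size in self.local)

    def push(self, name, size):
        # Evict oldest frames until the new frame fits (DMA-out).
        while self.local and self.used() + size > self.local_size:
            self.evicted.append(self.local.pop(0))
        self.local.append((name, size))

    def pop(self):
        name, _ = self.local.pop()
        # Fetch back evicted frames while there is room (DMA-in),
        # so a returning function finds its caller's frame resident.
        while self.evicted and self.used() + self.evicted[-1][1] <= self.local_size:
            self.local.insert(0, self.evicted.pop())
        return name
```

A 100-byte scratchpad holding two 60-byte frames, for example, must spill the older frame on the second push and restore it after the newer frame is popped.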
ContributorsBai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2014
151367-Thumbnail Image.png
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it can become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability analyses, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation and sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
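The simulation technique for obtaining the probability of failure can be sketched with a crude Monte Carlo estimator for a simple limit state g = R - S (resistance minus load). The limit state, distributions, and numbers below are hypothetical illustrations, not quantities from the study.

```python
import random

def probability_of_failure(limit_state, sample, n=100_000, seed=42):
    """Crude Monte Carlo estimate of P[g(X) <= 0]: draw n realizations
    of the random inputs and count limit-state violations."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0.0)
    return failures / n

def sample_rs(rng):
    # Hypothetical normal resistance and load.
    R = rng.gauss(10.0, 1.0)   # resistance: mean 10, std 1
    S = rng.gauss(6.0, 1.5)    # load: mean 6, std 1.5
    return R, S

pf = probability_of_failure(lambda x: x[0] - x[1], sample_rs)
```

For independent normals, g here is N(4, sqrt(1 + 1.5^2)), so the analytical probability of failure is about 1.3%, which the estimator should approach; RBDO imposes such probabilities as constraints alongside the structural performance constraints.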
ContributorsDeivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2012
150743-Thumbnail Image.png
Description
Thanks to continuous technology scaling, intelligent, fast and smaller digital systems are now available at affordable costs. As a result, digital systems have found use in a wide range of application areas that were not even imagined before, including medical (e.g., MRI, remote or post-operative monitoring devices, etc.), automotive (e.g., adaptive cruise control, anti-lock brakes, etc.), security systems (e.g., residential security gateways, surveillance devices, etc.), and in- and out-of-body sensing (e.g., capsules swallowed by patients measuring digestive system pH, heart monitors, etc.). Such computing systems, which are completely embedded within the application, are called embedded systems, as opposed to general purpose computing systems. In the design of such embedded systems, power consumption and reliability are indispensable system requirements. In battery operated portable devices, the battery is the single largest factor contributing to device cost, weight, recharging time and frequency, and ultimately usability. For example, in the Apple iPhone 4 smart-phone, the battery is 40% of the device weight, occupies 36% of its volume, and allows only 7 hours (over 3G) of talk time. As embedded systems find use in a range of sensitive applications, from bio-medical applications to safety and security systems, the reliability of the computations performed becomes a crucial factor. At the current technology node, portable embedded systems are expected to experience soft-error failures at a rate of about once per year; with aggressive technology scaling, the rate is predicted to increase exponentially to once per hour. Over the years, researchers have been successful in developing techniques, implemented at different layers of the design spectrum, to improve system power efficiency and reliability.
Among the layers of design abstraction, I observe that the interface between the compiler and the processor micro-architecture possesses a unique potential for efficient design optimizations. A compiler designer is able to observe and analyze the application software at a fine granularity, while the processor architect analyzes the system output (power, performance, etc.) for each executed instruction. If the system knowledge at these two design layers can be integrated, design optimizations at both layers can be modified to efficiently utilize available resources and thereby achieve appreciable system-level benefits. To this effect, the thesis statement is that "by merging system design information at the compiler and micro-architecture design layers, smart compilers can be developed that achieve reliable and power-efficient embedded computing through: i) pure compiler techniques, ii) hybrid compiler-micro-architecture techniques, and iii) compiler-aware architectures". This dissertation demonstrates, through contributions in each of the three compiler-based techniques, the effectiveness of smart compilers in achieving power efficiency and reliability in embedded systems.
ContributorsJeyapaul, Reiley (Author) / Shrivastava, Aviral (Thesis advisor) / Vrudhula, Sarma (Committee member) / Clark, Lawrence (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created2012
150901-Thumbnail Image.png
Description
Threshold logic has been studied by at least two independent groups of researchers. One group studied threshold logic with the intention of building threshold logic circuits. The earliest research to this end was done in the 1960s; the major work at that time focused on studying mathematical properties of threshold logic, as no efficient circuit implementations were available. Recently, many post-CMOS (Complementary Metal Oxide Semiconductor) technologies that implement threshold logic have been proposed, along with efficient CMOS implementations. This has renewed the effort to develop efficient threshold logic design automation techniques, and this work contributes to that ongoing effort. The other group studied threshold logic because the building block of neural networks, the Perceptron, is identical to the threshold element implementing a threshold function. Neural networks are used for various purposes, such as data classification. This work contributes tangentially to this field by proposing new methods and techniques to study and analyze functions implemented by a Perceptron. After completion of the Human Genome Project, it has become evident that most biological phenomena are not caused by the action of single genes, but by complex interactions involving a system of genes. In recent times, the 'systems approach' to the study of gene systems has been gaining popularity, and many different theories from mathematics and computer science have been used for this purpose. Among the systems approaches, the Boolean logic gene model has emerged as the currently most popular discrete gene model. This work proposes a new gene model based on threshold logic functions (which are a subset of Boolean logic functions). The biological relevance and utility of this model are argued and illustrated by using it to model different in-vivo as well as in-silico gene systems.
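The identity between a threshold element and a Perceptron noted above is easy to make concrete: a threshold function outputs 1 exactly when a weighted sum of its binary inputs meets a threshold, which is a Perceptron with a hard-limit activation. A minimal sketch (function names are illustrative):

```python
def threshold_function(weights, threshold, inputs):
    """A threshold (linearly separable Boolean) function: output 1 iff
    the weighted sum of the binary inputs meets the threshold -
    identical in form to a Perceptron with a hard-limit activation."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Example: 3-input majority is the threshold function with
# weights (1, 1, 1) and threshold 2.
def majority(a, b, c):
    return threshold_function((1, 1, 1), 2, (a, b, c))
```

Not every Boolean function is realizable this way (XOR famously is not), which is why threshold functions form a strict subset of Boolean functions, as the gene model discussion above relies on.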
ContributorsLinge Gowda, Tejaswi (Author) / Vrudhula, Sarma (Thesis advisor) / Shrivastava, Aviral (Committee member) / Chatha, Karamvir (Committee member) / Kim, Seungchan (Committee member) / Arizona State University (Publisher)
Created2012
150448-Thumbnail Image.png
Description
Concrete design has recently seen a shift in focus from prescriptive specifications to performance-based specifications with increasing demands for sustainable products. Fiber reinforced composites (FRC) provide unique properties to a material that is very weak under tensile loads. The addition of fibers to a concrete mix provides additional ductility and reduces the propagation of cracks in the concrete structure: the fibers bridge the crack and dissipate the incurred strain energy through a fiber-pullout mechanism. The addition of fibers plays an important role in tunnel lining systems and in reducing shrinkage cracking in high performance concretes. The interest in most design situations is the load at which cracking first takes place. Typically the post-crack response will exhibit either a load bearing increase or a load bearing decrease as deflection continues; these behaviors are referred to as strain hardening and strain softening, respectively. A strain softening or hardening response is used to model the behavior of different types of fiber reinforced concrete and to simulate the experimental flexural response. Closed-form equations for the moment-curvature response of rectangular beams under four- and three-point loading, in conjunction with crack localization rules, are utilized. As a result, a stress distribution that accounts for a shifting neutral axis can be simulated, which provides a more accurate representation of the residual strength of the fiber cement composites. The typical residual strength parameters used by the standards organizations ASTM, JCI and RILEM are shown to be incorrect in their linear elastic assumption of FRC behavior. Finite element models were implemented to study these effects and simulate the load-deflection response of fiber reinforced shotcrete round determinate panels (RDPs) tested in accordance with ASTM C-1550.
The back-calculated material properties from the flexural tests were used as a basis for the FEM material models. Further FEM beam models were also used to provide additional comparisons of the residual strengths of early-age samples. A correlation between the RDP and flexural beam tests was generated based on a relationship between toughness normalized with respect to the newly generated crack surfaces. A set of design equations is proposed using a residual strength correction factor generated by the model to produce the design moment based on the specified concrete slab geometry.
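The linear-elastic assumption criticized above is the familiar flexure formula applied to post-crack loads: for a three-point bend beam, sigma = 3PL / (2 b d^2). A minimal sketch (function and variable names are illustrative, not the thesis's notation):

```python
def flexural_stress_3pb(P, L, b, d):
    """Linear-elastic flexural stress for a three-point bend beam:
    sigma = 3 P L / (2 b d^2), with load P (N), span L (m),
    width b (m), and depth d (m)."""
    return 3.0 * P * L / (2.0 * b * d ** 2)

# A "residual strength" parameter in the standards' sense simply feeds
# a post-crack load level into the same elastic formula - valid only
# before cracking, which is the inconsistency the work points out.
sigma_peak = flexural_stress_3pb(1000.0, 0.3, 0.1, 0.1)      # Pa
sigma_residual = flexural_stress_3pb(400.0, 0.3, 0.1, 0.1)   # Pa
```

Once a crack forms, the neutral axis shifts and the stress distribution is no longer triangular, so this formula overstates the true residual stress; the moment-curvature approach with crack localization described above avoids that assumption.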
ContributorsBarsby, Christopher (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2011