Matching Items (278)
Description
The trend towards using recycled materials on new construction projects is growing as the cost of construction materials is ever increasing and awareness of our responsibility to be good stewards of the environment is heightened. While recycled asphalt is sometimes used in pavements, its use as structural fill has been hindered by concern that it is susceptible to large long-term deformations (creep), preventing its use for a great many geotechnical applications. While asphalt/soil blends are often proposed as an alternative to 100% recycled asphalt fill, little data is available characterizing the geotechnical properties of recycled asphalt soil blends. In this dissertation, the geotechnical properties of five different recycled asphalt soil blends are characterized. Data include the particle size distribution, plasticity index, creep, and shear strength for each blend. Blends with 0%, 25%, 50%, 75%, and 100% recycled asphalt were tested. As the recycled asphalt material used for testing had particle sizes up to 1.5 inches, a large 18-inch-diameter direct shear apparatus was used to determine the shear strength and creep characteristics of the material. The results of the testing program confirm that the creep potential of recycled asphalt is a geotechnical concern when the material is subjected to loads greater than 1500 pounds per square foot (psf). In addition, the test results demonstrate that the amount of soil blended with the recycled asphalt can greatly influence the creep and shear strength behavior of the composite material. Furthermore, there appears to be an optimal blend ratio at which the composite material exhibits greater shear strength than either the recycled asphalt or the virgin soil alone.
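Direct shear results like these are typically reduced to a Mohr-Coulomb failure envelope, tau = c + sigma * tan(phi), by fitting a line through the measured normal stress/peak shear stress pairs. A minimal sketch of that reduction in Python, with hypothetical stresses standing in for the dissertation's actual large-scale test data:

```python
import numpy as np

# Hypothetical direct shear results: normal stress and peak shear stress (psf).
# Values are illustrative only; the actual test data are in the dissertation.
normal_stress = np.array([500.0, 1000.0, 2000.0, 4000.0])
shear_stress = np.array([450.0, 820.0, 1560.0, 3010.0])

# Fit the Mohr-Coulomb envelope tau = c + sigma * tan(phi).
tan_phi, cohesion = np.polyfit(normal_stress, shear_stress, 1)
phi_deg = np.degrees(np.arctan(tan_phi))

print(f"cohesion c ~ {cohesion:.0f} psf, friction angle phi ~ {phi_deg:.1f} deg")
```

The fitted cohesion and friction angle are the values on which blends with different asphalt contents can be compared for shear strength.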
Contributors: Schaper, Jeffery M (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra L. (Committee member) / Zapata, Claudia E (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In geotechnical engineering, measuring the unsaturated hydraulic conductivity of fine-grained soils can be time consuming and tedious. Many applications require knowledge of the unsaturated hydraulic conductivity function; in geotechnical engineering, they range from modeling seepage through landfill covers to determining the infiltration of water under a building slab. The unsaturated hydraulic conductivity function can be measured using various direct and indirect techniques. The instantaneous profile method has been found to be the most promising unsteady-state method for measuring the unsaturated hydraulic conductivity function of fine-grained soils over a wide range of suction values. The instantaneous profile method can be modified by using different techniques to measure suction and water content, and also through the way water is introduced to or removed from the soil profile. In this study, the instantaneous profile method was modified by creating duplicate soil samples compacted into cylindrical tubes at two different water contents. The techniques used in the duplicate method to measure the water content and matric suction included volumetric moisture probes, manual water content measurements, and filter paper tests. The experimental testing conducted in this study provided insight into determining the unsaturated hydraulic conductivity of a sandy clay soil using the instantaneous profile method, and recommendations are provided for further evaluation. Overall, this study demonstrated that the presence of cracks has no significant impact on the hydraulic behavior of the soil in high suction ranges. This study did not, however, examine the unsaturated hydraulic conductivity of cracked soil at low suction or at moisture contents near saturation.
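For readers unfamiliar with the instantaneous profile method, its core reduction is the Darcy-Buckingham law q = -k dH/dz: the flux through a depth is obtained from the rate of change of water stored above that depth between two measurement times, and the hydraulic gradient comes from the measured total-head (suction) profile. A minimal sketch with hypothetical profile data, not values from this study:

```python
import numpy as np

# Hypothetical instantaneous-profile data; units are illustrative (cm,
# cm^3/cm^3, hours). The real profiles come from the moisture probes and
# filter-paper suction tests described in the abstract.
z = np.array([5.0, 10.0, 15.0, 20.0])               # depth along the tube (cm)
theta_t1 = np.array([0.32, 0.30, 0.28, 0.27])       # water content at time t1
theta_t2 = np.array([0.30, 0.29, 0.27, 0.26])       # water content at time t2
head = np.array([-150.0, -180.0, -210.0, -250.0])   # total head profile (cm)
dt = 24.0                                           # elapsed time (h)

# Water lost above z[2] between the two times gives the flux through z[2].
storage_change = np.trapz(theta_t1[:3] - theta_t2[:3], z[:3])  # cm of water
q = storage_change / dt                                        # cm/h

# Hydraulic gradient at z[2], by central difference on the head profile.
dH_dz = (head[3] - head[1]) / (z[3] - z[1])

# Darcy-Buckingham: q = -k * dH/dz. Magnitudes are used here to sidestep
# sign conventions, which depend on the chosen depth axis.
k = abs(q) / abs(dH_dz)
print(f"k ~ {k:.1e} cm/h at the suction prevailing near z = {z[2]} cm")
```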
Contributors: Jacquemin, Sean Christopher (Author) / Zapata, Claudia (Thesis advisor) / Houston, Sandra (Committee member) / Kavazanjian, Edward (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Residue number systems have gained significant importance in the field of high-speed digital signal processing due to their carry-free nature and the speed-up provided by parallelism. The critical aspects in the application of RNS are the selection of the moduli set and the design of the conversion units. Several RNS moduli sets have been proposed for the implementation of digital filters; however, some are unbalanced and some do not provide the required dynamic range. This thesis addresses the drawbacks of existing RNS moduli sets and proposes a new moduli set for efficient implementation of FIR filters. An efficient VLSI implementation model has been derived for the design of a reverse converter from RNS to the conventional two's complement representation. This model facilitates the realization of a reverse converter with better performance and less hardware complexity than the reverse converter designs of the existing balanced four-moduli sets. Experimental results comparing multiply-and-accumulate units using RNS implemented with the proposed four-moduli set against the state-of-the-art balanced four-moduli sets show large reductions in area (46%) and power (43%) for various dynamic ranges. RNS FIR filters using the proposed moduli set and an existing balanced four-moduli set were implemented in RTL and compared for chip area and power, showing a 20% improvement. This thesis also presents a threshold logic implementation of the reverse converter.
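The abstract does not state the proposed four-moduli set, so as a generic illustration of the forward and reverse (CRT-based) conversions an RNS filter relies on, here is a small Python sketch using a textbook pairwise-coprime set:

```python
from math import prod

# Illustrative RNS round trip. The thesis's proposed moduli set is not given
# in the abstract, so a textbook pairwise-coprime set is used instead.
MODULI = (7, 9, 11, 13)          # dynamic range M = 7*9*11*13 = 9009

def to_rns(x, moduli=MODULI):
    """Forward conversion: the residue of x in each parallel channel."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

# Carry-free multiply: each channel works independently, which is where
# the parallel speed-up in RNS arithmetic comes from.
a, b = 37, 85                     # a*b = 3145 < 9009, so no wraparound
product = tuple((ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(product) == a * b
print(from_rns(product))          # -> 3145
```

The reverse converter is the hardware realization of exactly this CRT step, which is why its area and delay dominate comparisons between moduli sets.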
Contributors: Chalivendra, Gayathri (Author) / Vrudhula, Sarma (Thesis advisor) / Shrivastava, Aviral (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm to magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
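The "simple functions with closed-form solutions" that such splittings produce are typified by the soft-thresholding operator, the exact proximal map of the $\ell_1$ term. A minimal sketch below uses plain iterative soft-thresholding on a synthetic CS problem; it is a stand-in for the flavor of the approach, not the author's augmented Lagrangian algorithm or its BTTB preconditioner:

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t*||.||_1, the 'simple function' solved exactly."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))          # under-determined sensing matrix
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [3.0, -2.0, 1.5]   # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # approximately recovers the support
```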
Contributors: Shen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
It is estimated that wind-induced soil erosion transports more than 500 × 10⁶ metric tons of fugitive dust annually. Soil erosion has negative effects on human health, the productivity of farms, and the quality of surface waters. A variety of polymer stabilizers are available on the market for fugitive dust control. Most of these are expensive synthetic polymer products, and their adverse effects and expense usually limit their use. Biopolymers provide a potential alternative to synthetic polymers: they can provide dust abatement by encapsulating soil particles and creating a binding network throughout the treated area. This research into the effectiveness of biopolymers for fugitive dust control involved three phases. Phase I included proof-of-concept tests. Phase II consisted of tests in a wind tunnel. Phase III consisted of experiments in the field. The proof-of-concept tests showed that biopolymers have the potential to reduce soil erosion and fugitive dust transport. Wind tunnel tests on two candidate biopolymers, xanthan gum and chitosan, showed a proportional relationship between biopolymer application rate and threshold wind velocity. The wind tunnel tests also indicated that xanthan gum would be more successful in the field than chitosan. The field tests showed that xanthan gum was effective at controlling soil erosion; however, the chitosan field data were inconsistent with both the xanthan data and the field data on bare soil.
Contributors: Alsanad, Abdullah (Author) / Kavazanjian, Edward (Thesis advisor) / Edwards, David (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This research describes software-based remote attestation schemes for obtaining the integrity of an executing user application and the Operating System (OS) text section of an untrusted client platform. A trusted external entity issues a challenge to the client platform. The challenge is executable code which the client must execute, and the code generates results which are sent to the external entity. These results provide the external entity an assurance as to whether the client application and the OS are in pristine condition. This work also presents a technique to verify that the application which was attested is not replaced by a different application after the attestation completes. The implementation of these three techniques was achieved entirely in software and is backward compatible with legacy machines on the Intel x86 architecture. This research also presents two approaches to incorporating a software-based "root of trust" using Virtual Machine Monitors (VMMs). The first approach determines the integrity of an executing guest OS from the host OS using the Linux Kernel-based Virtual Machine (KVM) and the qemu emulation software. The second approach implements a small VMM called MIvmm that can be utilized as a trusted codebase on which to build security applications such as those implemented in this research. MIvmm was conceptualized and implemented without using any existing codebase; its minimal size allows it to be trustworthy. Both VMM approaches leverage processor support for virtualization in the Intel x86 architecture.
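The challenge-response shape of such schemes can be sketched in a few lines. In the actual work the challenge is verifier-supplied executable code running over the live text section on x86; in this hedged Python stand-in, a nonce-bound hash over a code image plays that role, and all names are illustrative:

```python
import hashlib, hmac, os

# Hedged sketch of the challenge-response shape of software attestation.
# In the real scheme the challenge is executable code that checksums the
# live text section; here a nonce-bound hash over a code image stands in.

def issue_challenge() -> bytes:
    """Verifier side: a fresh random nonce prevents replaying old results."""
    return os.urandom(16)

def attest(code_image: bytes, nonce: bytes) -> bytes:
    """Client side: digest bound to both the code bytes and the nonce."""
    return hashlib.sha256(nonce + code_image).digest()

def verify(reported: bytes, pristine_image: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute over the known-good image and compare."""
    expected = hashlib.sha256(nonce + pristine_image).digest()
    return hmac.compare_digest(reported, expected)

pristine = bytes(range(64))            # stand-in for the pristine text section
nonce = issue_challenge()
assert verify(attest(pristine, nonce), pristine, nonce)
tampered = pristine[:-1] + b"\x90"     # one patched byte (a NOP, say)
assert not verify(attest(tampered, nonce), pristine, nonce)
print("attestation round trip OK")
```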
Contributors: Srinivasan, Raghunathan (Author) / Dasgupta, Partha (Thesis advisor) / Colbourn, Charles (Committee member) / Shrivastava, Aviral (Committee member) / Huang, Dijiang (Committee member) / Dewan, Prashant (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A method for evaluating the integrity of the geosynthetic elements of a waste containment system subject to seismic loading is developed using a large-strain finite difference numerical computer program. The method accounts for the effect of interaction between the geosynthetic elements and the overlying waste on seismic response and allows for explicit calculation of forces and strains in the geosynthetic elements. Based upon comparison of numerical results to experimental data, an elastic-perfectly plastic interface model is demonstrated to adequately reproduce the cyclic behavior of typical geomembrane-geotextile and geomembrane-geomembrane interfaces, provided the appropriate interface properties are used. New constitutive models are developed for the in-plane cyclic shear behavior of textured geomembrane/geosynthetic clay liner (GMX/GCL) interfaces and of GCLs. The GMX/GCL model is an empirical model, and the GCL model is a kinematic hardening, isotropic softening, multi-yield-surface plasticity model. Both new models allow for degradation in the cyclic shear resistance from a peak to a large-displacement shear strength. The ability of the finite difference model to predict forces and strains in a geosynthetic element, modeled as a beam element with zero moment of inertia sandwiched between two interface elements, is demonstrated using hypothetical models of a heap leach pad and two typical landfill configurations. The numerical model is then used to conduct back analyses of the performance of two lined municipal solid waste (MSW) landfills subjected to strong ground motions in the Northridge earthquake. The modulus reduction "backbone curve," employed with the Masing criterion and 2% Rayleigh damping to model the cyclic behavior of MSW, was established by back-analysis of the response of the Operating Industries Inc. landfill to five different earthquakes: three small-magnitude nearby events and two larger-magnitude distant events. The numerical back analysis was able to predict the tears observed in the Chiquita Canyon Landfill liner system after the earthquake when strain concentrations due to seams and scratches in the geomembrane were taken into account. The apparently good performance of the Lopez Canyon landfill geomembrane, and the tension observed in the overlying geotextile after the Northridge event, were also successfully predicted by the numerical model.
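The elastic-perfectly plastic interface idealization validated here is simple enough to state in a few lines: shear stress grows with relative displacement at an elastic stiffness until it reaches the Mohr-Coulomb strength, after which the interface slides at constant stress. A 1-D illustration in Python, with purely illustrative parameter values rather than the calibrated interface properties from the dissertation:

```python
import numpy as np

# Minimal 1-D sketch of an elastic-perfectly plastic interface under cyclic
# shear: elastic stiffness k up to the Mohr-Coulomb cap, sliding afterwards.
k = 1.0e4          # elastic shear stiffness (kPa/m), illustrative
sigma_n = 100.0    # normal stress on the interface (kPa), illustrative
tan_delta = 0.45   # interface friction coefficient, illustrative
tau_max = sigma_n * tan_delta

def interface_response(displacements):
    """Return shear stress history; u_p tracks accumulated plastic slip."""
    u_p, history = 0.0, []
    for u in displacements:
        tau = k * (u - u_p)                 # elastic trial stress
        if abs(tau) > tau_max:              # yield: slide at constant strength
            u_p += (abs(tau) - tau_max) / k * np.sign(tau)
            tau = tau_max * np.sign(tau)
        history.append(tau)
    return history

cycle = np.concatenate([np.linspace(0, 0.02, 50),
                        np.linspace(0.02, -0.02, 100)])   # one shear cycle (m)
print(max(interface_response(cycle)))       # capped at tau_max = 45.0 kPa
```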
Contributors: Arab, Mohamed G (Author) / Kavazanjian, Edward (Thesis advisor) / Zapata, Claudia (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The importance of unsaturated soil behavior stems from the fact that the vast majority of infrastructure is founded on unsaturated soils, and research has recently been concentrated on unsaturated soil properties. In the evaluation of unsaturated soils, researchers agree that soil water retention, characterized by the soil water characteristic curve (SWCC), is among the most important factors when assessing fluid flow, volume change, and shear strength for these soils. The influence of temperature on soil moisture flow is a major concern in the design of important engineering systems such as barriers in underground repositories for radioactive waste disposal, ground-source heat pump (GSHP) systems, evapotranspirative (ET) covers, and pavement systems. Accurate modeling of the temperature effect on the SWCC may lead to reduced design costs, simpler constructability, and hence more sustainable structures. The study made use of two approaches to assess the temperature effect on the SWCC. In the first approach, soils from a large soil database were sorted into families of similar properties located on sites with different mean annual air temperature (MAAT), and the SWCCs were plotted for each family. Most families of soils showed a clear trend indicating the influence of temperature on the soil water retention curve at low degrees of saturation. The second approach made use of statistical analysis, which demonstrated that suction increases as the MAAT decreases. The statistical analysis also showed that even though the plasticity index proved to have the greatest influence on suction, the effect of mean annual air temperature is not negligible. In both approaches, a strong relationship between temperature, suction, and soil properties was observed. Finally, the model based on the mean annual air temperature environmental factor was compared to another model that makes use of the Thornthwaite Moisture Index (TMI) to estimate the environmental effects on the suction of unsaturated soils. Results suggested that the MAAT can be a better indicator than the TMI, but they were inconclusive due to the limited TMI data available.
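The SWCC itself is usually represented by a closed-form retention model; the van Genuchten (1980) form below is one standard choice, shown here only to make the suction-to-water-content mapping concrete. The parameter values are illustrative, not the dissertation's fits, and the MAAT adjustment studied here is not reproduced:

```python
import numpy as np

# van Genuchten (1980) retention model, a standard SWCC form.
def vg_water_content(psi, alpha=0.05, n=1.6, theta_r=0.05, theta_s=0.40):
    """Volumetric water content at matric suction psi (kPa).

    alpha, n are fitting parameters; theta_r and theta_s are the residual
    and saturated water contents. All values here are illustrative.
    """
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

for psi in (1.0, 10.0, 100.0, 1000.0):
    print(f"suction {psi:7.1f} kPa -> theta = {vg_water_content(psi):.3f}")
```

A temperature effect such as the one studied here would enter by shifting the fitted parameters of a curve like this with an environmental factor (MAAT or TMI).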
Contributors: Elkeshky, Maie Mohamed (Author) / Zapata, Claudia E (Thesis advisor) / Houston, Sandra (Committee member) / Kavazanjian, Edward (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction, used to find a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features might also carry structural requirements, which introduces additional complexity into feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: features may be grouped together, hierarchies and overlapping groups may exist among them, and the most relevant groups may need to be selected. Even with sparse solutions, however, the solutions are not guaranteed to be robust; for the selection to be robust, techniques are needed that provide theoretical justification for why certain features are selected. Stability selection is one such method: it allows existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning a probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the selection probability is estimated as the fraction of runs in which a feature is selected. Cross-validation, which evaluates a range of parameter values, is further used to select the parameter value that gives the maximum accuracy score. With such a combination of algorithms, good convergence guarantees, stable feature selection properties, and the inclusion of various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable functionalities of the SLEP package, and these features can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease which gradually leads to dementia. The SLEP package is used for feature selection to extract the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
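The stability selection loop described above is easy to sketch. The following Python stand-in uses scikit-learn's Lasso in place of the SLEP solvers, on synthetic data; the subsampling ratio, penalty, and 0.8 selection threshold are illustrative choices, not values from the thesis:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch of stability selection: subsample, fit a sparse learner, and count
# how often each feature is selected across repetitions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(200)

n_rounds = 100
counts = np.zeros(X.shape[1])
for _ in range(n_rounds):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)   # subsample
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    counts += np.abs(coef) > 1e-8                               # selected?

# Selection probability per feature; keep those above a stability threshold.
stable = np.flatnonzero(counts / n_rounds >= 0.8)
print(stable)   # the three informative features should dominate
```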
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Ever-shrinking time to market, along with short product lifetimes, has created a need to shorten microprocessor design time. Verification of the design and its analysis are two major components of this design cycle. Design validation techniques can be broadly classified into two major categories: simulation-based approaches and formal techniques. Simulation-based microprocessor validation involves running millions of cycles using random or pseudo-random tests and allows verification of the register transfer level (RTL) model against an architectural model, i.e., that the processor executes instructions as required. The validation effort involves model checking against a high-level description, or simulation of the design against the RTL implementation. Formal techniques exhaustively analyze parts of the design but do not verify the RTL against the architecture specification. The focus of this work is to implement a fully automated validation environment for a MIPS-based radiation-hardened microprocessor using simulation-based approaches. The basic framework uses the classical validation approach, in which the design to be validated is described in a hardware description language (HDL) such as VHDL or Verilog. To implement a simulation-based approach, a number of random or pseudo-random tests are generated. The output of the HDL-based design is compared against that obtained from a "perfect" model implementing similar functionality; a mismatch in the results would thus indicate a bug in the HDL-based design. The environment is designed so that it can support validation during different stages of the design cycle, and it includes appropriate changes to support the architecture changes introduced by radiation hardening. The manner in which the validation environment is built is highly dependent on the specifications of the perfect model used for comparison. This work implements the validation environment with two MIPS simulators as the reference models. Two bugs have been discovered in the RTL model using this simulation-based validation environment.
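The core of such an environment is the compare loop between the RTL simulation and the reference model. A hedged Python skeleton of that loop follows, with toy traces and placeholder `step` hooks standing in for the actual simulator interfaces, which the abstract does not specify:

```python
# Skeleton of the lockstep comparison at the heart of simulation-based
# validation: run the RTL simulation and the reference ISA model on the
# same program and diff architectural state after every retired instruction.
# The two step callables are placeholders for the real simulator hooks.

def lockstep_compare(rtl_step, ref_step, n_instructions):
    """Yield mismatches as (instruction index, register, rtl value, ref value)."""
    for i in range(n_instructions):
        rtl_state = rtl_step()   # dict: register name -> value after retire
        ref_state = ref_step()
        for reg, ref_val in ref_state.items():
            if rtl_state.get(reg) != ref_val:
                yield i, reg, rtl_state.get(reg), ref_val

# Toy traces showing the mechanics: the RTL trace diverges at instruction 2.
rtl_trace = iter([{"r1": 5}, {"r1": 5, "r2": 7}, {"r1": 5, "r2": 9}])
ref_trace = iter([{"r1": 5}, {"r1": 5, "r2": 7}, {"r1": 5, "r2": 8}])
for bug in lockstep_compare(lambda: next(rtl_trace), lambda: next(ref_trace), 3):
    print("mismatch:", bug)   # -> (2, 'r2', 9, 8)
```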
Contributors: Sharma, Abhishek (Author) / Clark, Lawrence (Thesis advisor) / Holbert, Keith E. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2011