This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 71
Description
The effect of earthquake-induced liquefaction on the local void ratio distribution of cohesionless soil is evaluated using x-ray computed tomography (CT) and an advanced image processing software package. Intact, relatively undisturbed specimens of cohesionless soil were recovered before and after liquefaction by freezing and coring soil deposits created by pluviation and by sedimentation through water. Pluviated soil deposits were liquefied in the small geotechnical centrifuge at the University of California at Davis shared-use National Science Foundation (NSF)-supported Network for Earthquake Engineering Simulation (NEES) facility. A soil deposit created by sedimentation through water was liquefied on a small shake table in the Arizona State University geotechnical laboratory. Initial centrifuge tests employed Ottawa 20-30 sand, but this material proved to be too coarse to liquefy in the centrifuge; therefore, subsequent centrifuge tests employed Ottawa F60 sand. The shake table test employed Ottawa 20-30 sand. Recovered cores were stabilized by impregnation with optical grade epoxy and sent to the NSF-supported facility for high-resolution CT scanning of geologic media at the University of Texas at Austin. The local void ratio distribution of a CT-scanned core of Ottawa 20-30 sand evaluated using Avizo® Fire, a commercially available advanced program for image analysis, was compared to the local void ratio distribution established on the same core by analysis of optical images to demonstrate that analysis of the CT scans gave results similar to optical methods. CT scans were subsequently conducted on liquefied and non-liquefied specimens of Ottawa 20-30 sand and Ottawa F60 sand. The resolution of the F60 specimens was inadequate to establish the local void ratio distribution.
Results of the analysis of the Ottawa 20-30 specimens recovered from the model built for the shake table test showed that liquefaction can substantially influence the variability in local void ratio, increasing the degree of non-homogeneity in the specimen.
ContributorsGutierrez, Angel (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description
Heating of asphalt during production and construction causes the volatilization and oxidation of binders used in mixes. Volatilization and oxidation cause degradation of asphalt pavements by increasing the stiffness of the binders, increasing susceptibility to cracking and negatively affecting the functional and structural performance of the pavements. Degradation of asphalt binders by volatilization and oxidation due to high production temperatures occurs during the early stages of pavement life and is known as Short Term Aging (STA). Elevated temperatures and increased exposure time to elevated temperatures cause increased STA of asphalt. The objective of this research was to investigate how elevated mixing temperatures and exposure time to elevated temperatures affect aging and stiffening of binders, thus influencing properties of the asphalt mixtures. The study was conducted in two stages. The first stage evaluated the STA effect on asphalt binders. It involved aging two Performance Graded (PG) virgin asphalt binders, PG 76-16 and PG 64-22, at two different temperatures and durations, then measuring their viscosities. The second stage evaluated the effects of elevated STA temperature and time on properties of the asphalt mixtures. It involved STA of asphalt mixtures produced in the laboratory with the PG 64-22 binder at mixing temperatures elevated 25°F above standard practice and STA times 2 and 4 hours longer than standard practice; the mixtures were then compacted in a gyratory compactor. Dynamic modulus (E*) and Indirect Tensile Strength (IDT) were measured for the aged mixtures at each temperature and duration to determine the effect of different aging times and temperatures on the stiffness and fatigue properties of the aged asphalt mixtures. The binder test results showed increased viscosity in all cases, with the highest increase resulting from increased aging time.
The results also indicated that the PG 64-22 binder was more susceptible to elevated STA temperature and extended time than the PG 76-16 binder. The asphalt mixture test results confirmed the expected outcome that increasing the STA and mixing temperature by 25°F alters the stiffness of mixtures. Significant change in the dynamic modulus mostly occurred at the four-hour increase in STA time, regardless of temperature.
ContributorsLolly, Rubben (Author) / Kaloush, Kamil (Thesis advisor) / Bearup, Wylie (Committee member) / Zapata, Claudia (Committee member) / Mamlouk, Michael (Committee member) / Arizona State University (Publisher)
Created2013
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well-suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs) (general purpose PEs and application-specific integrated circuits (ASICs)). A typical MPSoC will be composed of an application processor, such as an ARM Cortex-A9 with a cache coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPM) rather than support cache coherency. The overall architecture is an integration of the various sub-systems through a high bandwidth system-level interconnect (such as a Network-on-Chip (NoC)). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase. Designers have turned to using MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)).
We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique are demonstrated through extensive experimentation.
ContributorsLeary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created2013
Description
Unsaturated soil mechanics is becoming a part of geotechnical engineering practice, particularly in applications to moisture sensitive soils such as expansive and collapsible soils and in geoenvironmental applications. The soil water characteristic curve, which describes the amount of water in a soil versus soil suction, is perhaps the most important soil property function for application of unsaturated soil mechanics. The soil water characteristic curve has been used extensively for estimating unsaturated soil properties, and a number of fitting equations for development of soil water characteristic curves from laboratory data have been proposed by researchers. Although not always mentioned, the underlying assumption of soil water characteristic curve fitting equations is that the soil is sufficiently stiff so that there is no change in total volume of the soil while measuring the soil water characteristic curve in the laboratory, and researchers rarely take volume change of soils into account when generating or using the soil water characteristic curve. Further, there has been little attention to the applied net normal stress during laboratory soil water characteristic curve measurement, and often zero to only token net normal stress is applied. The applied net normal stress also affects the volume change of the specimen during soil suction change. When a soil changes volume in response to suction change, failure to consider the volume change of the soil leads to errors in the estimated air-entry value and the slope of the soil water characteristic curve between the air-entry value and the residual moisture state. Inaccuracies in the soil water characteristic curve may lead to inaccuracies in estimated soil property functions such as unsaturated hydraulic conductivity. A number of researchers have recently recognized the importance of considering soil volume change in soil water characteristic curves. 
The primary focus of this study was the correct methods of soil water characteristic curve measurement and determination considering soil volume change, and the impacts on the unsaturated hydraulic conductivity function. Emphasis was placed upon study of the effect of volume change consideration on soil water characteristic curves for expansive clays and other high volume change soils. The research involved extensive literature review and laboratory soil water characteristic curve testing on expansive soils. The effect of the initial state of the specimen (i.e., slurry versus compacted) on soil water characteristic curves, with regard to volume change effects, and the effect of net normal stress on volume change for determination of these curves, was studied for expansive clays. Hysteresis effects were included in laboratory measurements of soil water characteristic curves, as both wetting and drying paths were used. Impacts of soil water characteristic curve volume change considerations on fluid flow computations and associated suction-change induced soil deformations were studied through numerical simulations. The study includes both coupled and uncoupled flow and stress-deformation analyses, demonstrating that the impact of volume change consideration on the soil water characteristic curve and the estimated unsaturated hydraulic conductivity function can be quite substantial for high volume change soils.
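For readers unfamiliar with the fitting equations this abstract refers to, a minimal sketch of one commonly used form, the van Genuchten equation, is shown below. The equation form is standard; the parameter values in the example are illustrative only and are not drawn from this thesis.

```python
def van_genuchten(suction, theta_r, theta_s, alpha, n):
    """Volumetric water content at a given matric suction.

    theta_r / theta_s: residual and saturated volumetric water contents
    alpha (1/kPa) and n: fitting parameters controlling the air-entry
    value and the slope of the curve, respectively.
    """
    m = 1.0 - 1.0 / n  # common Mualem-type constraint on m
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * suction) ** n) ** m

# Illustrative parameters (hypothetical, not from this study):
theta = van_genuchten(suction=100.0, theta_r=0.05, theta_s=0.45, alpha=0.01, n=1.8)
```

Because theta_s and the curve's slope are computed from the specimen's total volume, ignoring volume change during suction change distorts the fitted curve, which is the error this study examines for high volume change soils.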
ContributorsBani Hashem, Elham (Author) / Houston, Sandra L. (Thesis advisor) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description
This thesis presents a probabilistic evaluation of multiple laterally loaded drilled pier foundation design approaches using extensive data from a geotechnical investigation for a high voltage electric transmission line. A series of Monte Carlo simulations provides insight into the computed level of reliability considering site standard penetration test blow count variability alone (i.e., assuming all other aspects of the design problem do not contribute error or bias). Evaluated methods include the Eurocode 7 geotechnical design procedures, the Federal Highway Administration drilled shaft LRFD design method, the Electric Power Research Institute transmission foundation design procedure, and a site-specific variability based approach previously suggested by the author of this thesis and others. The analysis method is defined by three phases: a) evaluate the spatial variability of an existing subsurface database; b) derive theoretical foundation designs from the database in accordance with the various design methods identified; c) conduct Monte Carlo simulations to compute the reliability of the theoretical foundation designs. Over several decades, reliability-based foundation design (RBD) methods have been developed and implemented to varying degrees for buildings, bridges, electric systems and other structures. In recent years, an effort has been made by researchers, professional societies and other standard-developing organizations to publish design guidelines, manuals and standards concerning RBD for foundations. Most of these approaches rely on statistical methods for quantifying load and resistance probability distribution functions with defined reliability levels. However, each varies with regard to the influence of site-specific variability on resistance. An examination of the influence of site-specific variability is required to provide direction for incorporating the concept into practical RBD design methods.
Recent surveys of transmission line engineers by the Electric Power Research Institute (EPRI) demonstrate that RBD methods for the design of transmission line foundations have not been widely adopted. In the absence of a unifying design document with established reliability goals, transmission line foundations have historically performed very well, with relatively few failures. However, such a track record with no set reliability goals suggests that, at least in some cases, a financial premium has likely been paid.
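The Monte Carlo phase this abstract describes can be illustrated with a minimal sketch in which the blow count is the only random variable, mirroring the thesis's stated assumption. The lognormal distribution, the linear capacity model, and every numeric value below are hypothetical placeholders, not results or inputs from the study.

```python
import math
import random

def reliability_from_spt(mean_n, cov, capacity_fn, demand, trials=100_000, seed=1):
    """Monte Carlo estimate of P(capacity >= demand) when the SPT blow
    count N is the only source of uncertainty (all other design inputs
    deterministic). N is sampled from a lognormal fit to (mean, COV)."""
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = math.log(mean_n) - 0.5 * sigma_ln ** 2
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        n = rng.lognormvariate(mu_ln, sigma_ln)
        if capacity_fn(n) >= demand:
            ok += 1
    return ok / trials

# Hypothetical example: lateral capacity assumed linear in blow count.
rel = reliability_from_spt(mean_n=30.0, cov=0.3,
                           capacity_fn=lambda n: 12.0 * n, demand=300.0)
```

The computed `rel` is the reliability of one theoretical design; repeating the calculation for each design method allows the kind of comparison the thesis performs.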
ContributorsHeim, Zackary (Author) / Houston, Sandra (Thesis advisor) / Witczak, Matthew (Committee member) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2014
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas with respect to a finite trace of a time-stamped state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robustness computation engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow models. DP-TALIRO is expected to have near linear running time and constant memory requirements, depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, DP-T-TALIRO, is also addressed in this thesis; it applies the dynamic programming algorithm to time robustness computation. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
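The flavor of the dynamic programming computation can be conveyed with a toy sketch that handles only the "eventually" and "always" operators over a predicate x > c on a finite trace, each via a single backward pass. This is a drastically simplified illustration; the actual DP-TALIRO algorithm handles arbitrary MTL formulas with timing constraints.

```python
def rob_predicate(trace, c):
    """Signed distance of each sample to the predicate x > c:
    positive means satisfied, negative violated, magnitude = margin."""
    return [x - c for x in trace]

def rob_eventually(r):
    """Backward DP pass: out[i] = max of r[i..end], i.e. robustness of F(phi) at i."""
    out = list(r)
    for i in range(len(out) - 2, -1, -1):
        out[i] = max(out[i], out[i + 1])
    return out

def rob_always(r):
    """Backward DP pass: out[i] = min of r[i..end], i.e. robustness of G(phi) at i."""
    out = list(r)
    for i in range(len(out) - 2, -1, -1):
        out[i] = min(out[i], out[i + 1])
    return out

# Scalar trace; spec "eventually x > 1" is satisfied at time 0 with margin 1.0.
r = rob_predicate([0.5, 2.0, -1.0], c=1.0)  # [-0.5, 1.0, -2.0]
rob_f = rob_eventually(r)[0]
```

Each pass is linear in the trace length, which is consistent with the near linear running time the abstract mentions for formulas of this kind.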
ContributorsYang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2013
Description
Enzyme-Induced Carbonate Precipitation (EICP) using a plant-derived form of the urease enzyme to induce the precipitation of calcium carbonate (CaCO3) shows promise as a method of stabilizing soil for the mitigation of fugitive dust. Fugitive dust is a significant problem in Arizona, particularly in Maricopa County. Maricopa County is an EPA air quality non-attainment zone, due primarily to fugitive dust, which presents a significant health risk to local residents. Conventional methods for fugitive dust control, including the application of water, are either ineffective in arid climates, very expensive, or limited to short term stabilization. Due to these limitations, engineers are searching for new and more effective ways to stabilize the soil and reduce wind erosion. EICP employs urea hydrolysis, a process in which carbonate precipitation is catalyzed by the urease enzyme, a widely occurring protein found in many plants and microorganisms. Wind tunnel experiments were conducted in the ASU/NASA Planetary Wind Tunnel to evaluate the use of EICP as a means to stabilize soil against fugitive dust emission. Three different soils were tested: a native Arizona silty sand, a uniform fine to medium grained silica sand, and mine tailings from a mine in southern Arizona. The test soil was loosely placed in a specimen container and the surface was sprayed with an aqueous solution containing urea, calcium chloride, and urease enzyme. After a short period of time to allow for CaCO3 precipitation, the specimens were tested in the wind tunnel. The completed tests show that EICP can increase the detachment velocity compared to bare or wetted soil and thus holds promise as a means of mitigating fugitive dust emissions.
ContributorsKnorr, Brian (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2014
Description
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores methods by which software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform.
We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of grouping size, so it falls as grouping size goes up. For features that depend on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
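A minimal sketch of the windowed log-likelihood ratio idea used here for transition detection: two activities are modeled as Gaussians over a scalar feature, and a window is flagged once the new activity's model explains it better. The Gaussian models, window length, threshold, and synthetic signal are illustrative assumptions, not the thesis's actual sensor models or data.

```python
import math

def gauss_loglik(x, mu, sigma):
    """Log density of x under a normal(mu, sigma) model."""
    return -0.5 * math.log(2.0 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2.0 * sigma ** 2)

def llr_scores(signal, model_a, model_b, window=5):
    """Sliding-window log-likelihood ratio; positive values favor model_b."""
    scores = []
    for i in range(len(signal) - window + 1):
        seg = signal[i:i + window]
        score_a = sum(gauss_loglik(x, *model_a) for x in seg)
        score_b = sum(gauss_loglik(x, *model_b) for x in seg)
        scores.append(score_b - score_a)
    return scores

def first_transition(scores, threshold=0.0):
    """Index of the first window where the ratio crosses the threshold."""
    for i, s in enumerate(scores):
        if s > threshold:
            return i
    return None

# Synthetic scalar feature stream: activity A (mean 0) then activity B (mean 3).
signal = [0.0] * 10 + [3.0] * 10
idx = first_transition(llr_scores(signal, model_a=(0.0, 1.0), model_b=(3.0, 1.0)))
```

Between detected transitions the system can sleep or transmit only a class label rather than raw samples, which is the source of the energy savings described above.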
ContributorsBoyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Ground coupled heat pumps (GCHPs) have been used successfully in many environments to improve the heating and cooling efficiency of both small and large scale buildings. In arid climate regions, such as the Phoenix, Arizona metropolitan area, where the air conditioning load is dominated by cooling in the summer, GCHPs are difficult to install and operate. This is because soils in arid climate regions are both dry and hot, which renders them particularly ineffective at dissipating heat.

The first part of this thesis addresses applying the SVHeat finite element modeling software to create a model of a GCHP system. Using real-world data from a prototype solar-water heating system coupled with a ground-source heat exchanger installed in Menlo Park, California, a relatively accurate model was created to represent a novel GCHP panel system installed in a shallow vertical trench. A sensitivity analysis was performed to evaluate the accuracy of the calibrated model.

The second part of the thesis involved adapting the calibrated model to represent an approximation of soil conditions in arid climate regions, using a range of thermal properties for dry soils. The effectiveness of the GCHP in the arid climate region model was then evaluated by comparing the thermal flux from the panel into the subsurface profile to that of the prototype GCHP. It was shown that soils in arid climate regions are particularly inefficient at heat dissipation, though the result is highly dependent on the thermal conductivity input to the model. This demonstrates the importance of proper site characterization in arid climate regions. Finally, several soil improvement methods were researched to evaluate their potential for use in improving the effectiveness of shallow horizontal GCHP systems in arid climate regions.
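The sensitivity to thermal conductivity noted above follows directly from Fourier's law of conduction. The sketch below uses a one-dimensional steady-state approximation with textbook-range conductivities for dry versus moist soil; the actual analysis in this work was a transient finite element simulation in SVHeat, and none of these numbers are site values from the study.

```python
def steady_heat_flux(k, t_panel, t_soil, distance):
    """1-D steady-state Fourier conduction, q = k * dT / dx (W/m^2).

    k: soil thermal conductivity (W/m-K)
    t_panel, t_soil: panel and far-field soil temperatures (deg C)
    distance: separation over which the gradient acts (m)
    """
    return k * (t_panel - t_soil) / distance

# Illustrative conductivities (typical ranges, hypothetical here):
q_dry = steady_heat_flux(k=0.3, t_panel=40.0, t_soil=25.0, distance=1.0)
q_moist = steady_heat_flux(k=1.5, t_panel=40.0, t_soil=25.0, distance=1.0)
```

With the same temperature gradient, the moist soil in this toy comparison dissipates five times the heat of the dry soil, illustrating why characterizing conductivity dominates GCHP performance predictions in arid regions.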
ContributorsNorth, Timothy James (Author) / Kavazanjian, Ed (Thesis advisor) / Redy, T. Agami (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2014
Description
Stream processing has emerged as an important model of computation especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movements.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE), which is responsible for iteratively executing a stream kernel. The SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in an SE is locked to a Reservation Station (RS) and revitalizes itself after execution, never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to the functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution model that efficiently exploits the Instruction-Level Parallelism (ILP) and Data-Level Parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC) and communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the Task-Level Parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted for any multi-core architecture that uses software-managed local memories.

The performance and energy efficiency of the StreamWorks architecture are evaluated for stream kernel and application benchmarks by implementing the architecture in a 45 nm TSMC process and comparing it with a low-power RISC core and a contemporary accelerator.
ContributorsPanda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2014