This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
Recently, the use of zinc oxide (ZnO) nanowires as an interphase in composite materials has been demonstrated to increase the interfacial shear strength between carbon fiber and an epoxy matrix. In this research work, the strong adhesion between ZnO and carbon fiber is investigated to elucidate the interactions at the interface that result in high interfacial strength. First, molecular dynamics (MD) simulations are performed to calculate the adhesive energy between bare carbon and ZnO. Since the carbon fiber surface carries oxygen functional groups, these were modeled as well; the MD simulations showed that ketones interact strongly with ZnO, whereas hydroxyls and carboxylic acids do not. It was also found that the ketone molecules' ability to change orientation facilitated their interactions with the ZnO surface. Experimentally, an atomic force microscope (AFM) was used to measure the adhesive energy between ZnO and carbon through a liftoff test employing a highly oriented pyrolytic graphite (HOPG) substrate and a ZnO-covered AFM tip. Oxygen functionalization of the HOPG surface was shown to increase the adhesive energy. Additionally, the surface of ZnO was modified to hold a negative charge, which also increased the adhesive energy. This increase in adhesion resulted from increased induction forces, given the relatively high polarizability of HOPG and the preservation of the charge on the ZnO surface. The additional negative charge can be preserved on the ZnO surface because carbon and ZnO form a Schottky contact, which creates an energy barrier. Other materials with the same ionic properties as ZnO but with higher polarizability also demonstrated good adhesion to carbon. This result substantiates that the induced interaction can be facilitated not only by the polarizability of carbon but by that of either material at the interface. The ability to modify the magnitude of the induced interaction between carbon and an ionic material provides a new route to creating interfaces with controlled interfacial strength.
ContributorsGalan Vera, Magdian Ulises (Author) / Sodano, Henry A (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Oswald, Jay (Committee member) / Speyer, Gil (Committee member) / Arizona State University (Publisher)
Created2013
Description
In recent years we have witnessed a shift towards multi-processor systems-on-chip (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs): general-purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than supporting cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: the demand for high performance, the use of component libraries, and short design turnaround times. As customers continue to desire ever more complex applications on their embedded devices, the performance demands on these devices continue to increase, and designers have turned to MPSoCs to address them. By using pre-made IP libraries, designers can quickly piece together an MPSoC that meets the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)). We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
ContributorsLeary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created2013
Description
Shock loading is a complex phenomenon that can lead to failure mechanisms such as strain localization, void nucleation and growth, and eventually spall fracture. Studying the incipient stages of spall damage is of paramount importance to accurately determine the sites in the material microstructure where damage will nucleate and grow, and to formulate continuum models that account for the variability of the damage process due to microstructural heterogeneity. The length scale of damage with respect to that of the surrounding microstructure has proven to be a key aspect in determining sites of failure initiation. Correlations have been found between damage sites and the surrounding microstructure to determine the preferred sites of spall damage, since damage tends to localize at and around regions of intrinsic defects such as grain boundaries and triple points. However, a considerable amount of work remains to be done to determine the physics driving the damage at these intrinsically weak sites in the microstructure. The main focus of this research is to understand the physical mechanisms behind damage localization at these preferred sites. A crystal plasticity constitutive model is implemented with different damage criteria to study the effects of stress concentration and strain localization at the grain boundaries. A cohesive zone modeling technique is used to include the intrinsic strength of the grain boundaries in the simulations. The constitutive model is verified using single-element tests, calibrated using single-crystal impact experiments, and validated using bicrystal and multicrystal impact experiments. The results indicate that strain localization is the predominant driving force for damage initiation and evolution. The microstructural effects on these damage sites are studied to attribute the extent of damage to features such as grain orientation, misorientation, Taylor factor, and the grain boundary planes. The finite element simulations show good correlation with the experimental results and can be used as a preliminary step in developing accurate probabilistic models for damage nucleation.
ContributorsKrishnan, Kapil (Author) / Peralta, Pedro (Thesis advisor) / Mignolet, Marc (Committee member) / Sieradzki, Karl (Committee member) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created2013
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas over a finite timed state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The algorithm is implemented in the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robustness computation engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow models. DP-TALIRO is expected to have near-linear running time and constant memory requirements depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, DP-T-TALIRO, is also addressed in this thesis; it applies the dynamic programming algorithm to time robustness computation. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
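The notion of quantitative satisfaction can be illustrated with a small sketch. The code below is a naive recursive evaluator for a few untimed operators over a finite trace, written for exposition only; it is not DP-TALIRO's implementation, which uses dynamic programming to achieve its near-linear running time, and the tuple-based formula encoding is a hypothetical convention chosen for this sketch.

```python
# Illustrative sketch of discrete-time robustness over a finite trace.
# NOT DP-TALIRO's implementation: naive recursion instead of dynamic
# programming, and only untimed operators are covered.

def robustness(formula, trace, t=0):
    """Quantitative satisfaction of `formula` by `trace` at time t.

    formula: nested tuples, e.g. ('always', ('pred', lambda x: x - 1.0)),
             where a predicate f with f(x) >= 0 means "satisfied with margin f(x)".
    Positive result => satisfied; negative => violated; |value| = margin.
    """
    op = formula[0]
    if op == 'pred':          # atomic predicate: its margin at time t
        return formula[1](trace[t])
    if op == 'not':
        return -robustness(formula[1], trace, t)
    if op == 'and':
        return min(robustness(f, trace, t) for f in formula[1:])
    if op == 'or':
        return max(robustness(f, trace, t) for f in formula[1:])
    if op == 'always':        # worst margin over the remaining suffix
        return min(robustness(formula[1], trace, k)
                   for k in range(t, len(trace)))
    if op == 'eventually':    # best margin over the remaining suffix
        return max(robustness(formula[1], trace, k)
                   for k in range(t, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [0.5, 1.5, 3.0, 2.0]
always_pos = ('always', ('pred', lambda x: x))                # G(x >= 0)
eventually_big = ('eventually', ('pred', lambda x: x - 2.5))  # F(x >= 2.5)
print(robustness(always_pos, trace))      # 0.5: satisfied, worst margin 0.5
print(robustness(eventually_big, trace))  # 0.5: x reaches 3.0 > 2.5
```

The returned value is exactly the "to what extent" quantity: perturbing the trace by less than the robustness value cannot change the Boolean satisfaction verdict.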
ContributorsYang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2013
Description
Software has a great impact on the energy efficiency of any computing system: it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and the usefulness of the system. This thesis explores ways in which software can influence the trade-off between energy consumption and system accuracy; in general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the log-likelihood ratio test as a method to detect transitions, and explore how choices of sensor, feature calculations, and time-segmentation parameters affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that performs activity recognition, and we discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can yield great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the ``Great Compromise''; we found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature: for scalar features, energy consumption is inversely proportional to grouping size, so it decreases as grouping size increases; for features whose size depends on the grouping size, such as the FFT, energy increases with the logarithm of grouping size, so energy consumption grows slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed by the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
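The grouping-size scaling trends described above can be captured in a toy per-sample energy model. The constants and function below are illustrative assumptions made for this sketch, not measured values or actual models from the thesis.

```python
import math

# Toy per-sample transmission/compute energy model for the scaling trends
# described above. Constants are illustrative placeholders, not measurements.
E_TX = 1.0        # energy to store/transmit one value (arbitrary units)
E_OVERHEAD = 4.0  # fixed per-group overhead (radio wakeup, packet headers)

def energy_per_sample(feature, g):
    """Approximate energy per raw sample for grouping size g."""
    if feature == 'scalar':
        # one feature value per group of g samples: cost amortizes as 1/g
        return (E_TX + E_OVERHEAD) / g
    if feature == 'fft':
        # a g-point FFT costs ~g*log2(g) work, i.e. ~log2(g) per sample,
        # plus the per-group overhead amortized over g samples
        return E_TX * math.log2(g) + E_OVERHEAD / g
    if feature == 'raw':
        # no compression: every sample is transmitted
        return E_TX + E_OVERHEAD / g
    raise ValueError(feature)

for g in (8, 64, 512):
    print(g, energy_per_sample('scalar', g), energy_per_sample('fft', g))
```

With this model, scalar-feature energy falls as 1/g while FFT-like features grow logarithmically, matching the qualitative behavior reported above.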
ContributorsBoyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Recent studies of the occurrence of post-flutter limit cycle oscillations (LCO) of the F-16 have provided good support to the long-standing hypothesis that this phenomenon involves a nonlinear structural damping. A potential mechanism for the appearance of nonlinearity in the damping lies in the nonlinear geometric effects that arise when the deformations become large enough to exceed the linear regime. In this light, the focus of this investigation is first on extending nonlinear reduced order modeling (ROM) methods to include viscoelasticity, which is introduced here through a linear Kelvin-Voigt model in the undeformed configuration. Proceeding with a Galerkin approach, the ROM governing equations of motion are obtained and are found to be of a generalized van der Pol-Duffing form with parameters depending on the structure and the chosen basis functions. An identification approach for the nonlinear damping parameters is next proposed that is applicable to structures modeled within commercial finite element software.

The effects of this nonlinear damping mechanism on the post-flutter response are next analyzed on the Goland wing through time-marching of the aeroelastic equations, which comprise a rational fraction approximation of the linear aerodynamic forces. It is indeed found that the nonlinearity in the damping can stabilize the unstable aerodynamics and lead to finite-amplitude limit cycle oscillations even when the stiffness-related nonlinear geometric effects are neglected. The incorporation of these latter effects in the model is found to further decrease the amplitude of LCO, even though the dominant bending motions do not seem to stiffen as the level of displacements is increased in static analyses.
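The qualitative behavior can be reproduced with a single-degree-of-freedom oscillator in generalized van der Pol-Duffing form: a destabilizing linear damping term (standing in for the unstable aerodynamics) is saturated by nonlinear damping, producing a finite-amplitude limit cycle. The equation and parameter values below are arbitrary illustrations, not the ROM coefficients identified in the thesis.

```python
# Single-DOF illustration of a generalized van der Pol-Duffing oscillator:
#   x'' - mu*(1 - x^2)*x' + x + beta*x^3 = 0
# Negative linear damping (the unstable aerodynamics) feeds energy in;
# the amplitude-dependent x^2*x' term drains it, yielding a limit cycle.
# Parameters are arbitrary, not identified ROM coefficients.

def simulate(mu=0.5, beta=0.05, x0=0.1, v0=0.0, dt=0.01, steps=20000):
    """Integrate with classical 4th-order Runge-Kutta; return displacements."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x - beta * x ** 3
    xs = []
    x, v = x0, v0
    for _ in range(steps):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        xs.append(x)
    return xs

xs = simulate()
# A tiny initial disturbance grows, then saturates near amplitude ~2.
print(max(abs(x) for x in xs[-5000:]))
```

The same mechanism operates in the aeroelastic setting: instability grows the response until nonlinear dissipation balances the energy input, fixing the LCO amplitude.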
ContributorsSong, Pengchao (Author) / Mignolet, Marc P (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created2015
Description
The football helmet is a device used to help mitigate the occurrence of impact-related traumatic brain injuries (TBI) and mild traumatic brain injuries (mTBI) in the game of American football. The current design methodology of using a hard shell with an energy-absorbing liner may be adequate for minimizing TBI; however, it has been less effective in minimizing mTBI. The latest research into brain injury mechanisms has established that the current design methodology has produced a helmet that reduces linear acceleration of the head. However, angular accelerations also have an adverse effect on the brain response and must be investigated as a contributor to brain injury.

To help better understand how football helmet design features affect the brain response during impact, this research develops a validated football helmet model and couples it with a full LS-DYNA human body model developed by the Global Human Body Models Consortium (v4.1.1). The human body model is a conglomeration of several validated models of different sections of the body. Of particular interest for this research is the Wayne State University Head Injury Model for modeling the brain. These human body models were validated using a combination of cadaveric and animal studies. In this study, the football helmet was validated by laboratory testing using drop tests on the crown of the helmet. By coupling the two models into one finite element model, the brain response to impact loads caused by helmet design features can be investigated. In the present research, LS-DYNA is used to study a helmet crown impact with a rigid steel plate so as to obtain the strain rate, strain, and stress experienced in the corpus callosum, midbrain, and brain stem, as these anatomical regions are areas of concern with respect to mTBI.
ContributorsDarling, Timothy (Author) / Rajan, Subramaniam D. (Thesis advisor) / Muthuswamy, Jitendran (Thesis advisor) / Oswald, Jay (Committee member) / Mignolet, Marc (Committee member) / Arizona State University (Publisher)
Created2014
Description
Monte Carlo simulations are traditionally carried out for the determination of the amplification of the forced vibration response of turbomachine/jet engine blades due to mistuning. However, this effort can be computationally time consuming even when using the various reduced order modeling techniques. Accordingly, some investigations in the past have focused on obtaining simple approximate estimates of this amplification. In particular, two of these have proposed the use of harmonic patterns of the blade properties around the disk as an approximate alternative to the many random patterns of Monte Carlo analyses. These investigations, while quite encouraging, have relied solely on single-degree-of-freedom-per-sector models of the rotor.

In this light, the overall focus of the present effort is a revisit of harmonic mistuning of rotors, focusing first on the confirmation of the previously obtained findings with a more detailed model of the blisk in both conditions of an isolated blade-dominated resonance and of a veering between blade- and disk-dominated modes. The latter condition cannot be simulated by a single-degree-of-freedom-per-sector model. Further, the analysis considers the distinct cases of mistuning due to variations of material properties (Young's modulus) and of geometric properties (geometric mistuning). In the single-degree-of-freedom model the two mistuning types are equivalent, but they are not, as demonstrated here, in more realistic models. The difference arises because changes in geometry induce not only changes in the natural frequencies of the blades alone but also in their modes, and the importance of these two sources of variability is discussed with both Monte Carlo simulation and harmonic mistuning results.

The present investigation also focuses on the possible extension of the harmonic mistuning concept and of the quantitative information that can be derived from such analyses. From it, a novel measure of blade-disk coupling is introduced and assessed in comparison with the coupling index introduced in the past. In conclusion, the low cost of harmonic mistuning computations in comparison with full Monte Carlo simulations is demonstrated to be worthwhile for elucidating the basic behavior of the mistuned rotor in a random setting.
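The cost gap between the two approaches is visible already at the level of pattern generation: Monte Carlo analyses require many random blade-to-blade deviation patterns, while the harmonic approach needs only a handful of deterministic spatial harmonics. The sketch below only generates such patterns; evaluating the resulting response amplification would require a rotor model such as the blisk model used here, and the function names and parameter values are illustrative assumptions.

```python
import math, random

def harmonic_pattern(n_blades, harmonic, amplitude):
    """Blade property deviations following a pure spatial harmonic around
    the disk: delta_s = A * cos(2*pi*h*s / N) for sector s."""
    return [amplitude * math.cos(2.0 * math.pi * harmonic * s / n_blades)
            for s in range(n_blades)]

def monte_carlo_patterns(n_blades, n_samples, std, seed=0):
    """Random mistuning patterns, e.g. Gaussian deviations of Young's modulus."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, std) for _ in range(n_blades)]
            for _ in range(n_samples)]

N = 24
# A handful of deterministic patterns (harmonics 1..4) versus thousands of
# random ones: this is the computational gap the harmonic approach exploits.
harmonics = [harmonic_pattern(N, h, 0.02) for h in range(1, 5)]
randoms = monte_carlo_patterns(N, 1000, 0.02)
print(len(harmonics), len(randoms))   # 4 1000
```

Each pattern, harmonic or random, would be applied to the blade properties of the rotor model and the forced response solved once per pattern; the harmonic case therefore needs orders of magnitude fewer solves.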
ContributorsSahoo, Saurav (Author) / Mignolet, Marc Paul (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created2014
Description
Stream processing has emerged as an important model of computation especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movements.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE), which is responsible for iteratively executing a stream kernel. The SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in an SE is locked to a Reservation Station (RS) and revitalizes itself after execution, never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to the functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution mode that efficiently exploits the instruction-level parallelism (ILP) and data-level parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC) and communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the task-level parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted for any multi-core architecture that uses software-managed local memories.
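Split-join semantics can be sketched with plain in-memory queues. This is a conceptual illustration only, not the proposed FIFO virtualization mechanism, which operates on software-managed scratchpad memories; the class and method names are invented for this sketch.

```python
from collections import deque

class SplitFIFO:
    """One producer, many consumers: every pushed token is replicated into
    each consumer's queue. A sketch of split semantics only."""
    def __init__(self, n_consumers):
        self.queues = [deque() for _ in range(n_consumers)]

    def push(self, token):
        for q in self.queues:          # replicate the token to all consumers
            q.append(token)

    def pop(self, consumer):
        return self.queues[consumer].popleft()

class JoinFIFO:
    """Many producers, one consumer: tokens are merged round-robin so the
    consumer observes a deterministic interleaving."""
    def __init__(self, n_producers):
        self.queues = [deque() for _ in range(n_producers)]
        self.turn = 0

    def push(self, producer, token):
        self.queues[producer].append(token)

    def pop(self):
        q = self.queues[self.turn]     # service producers in fixed order
        self.turn = (self.turn + 1) % len(self.queues)
        return q.popleft()

split = SplitFIFO(2)
split.push('a'); split.push('b')
print(split.pop(0), split.pop(1))   # a a

join = JoinFIFO(2)
join.push(0, 'x'); join.push(1, 'y')
print(join.pop(), join.pop())       # x y
```

In a stream graph, split distributes a kernel's output to parallel downstream kernels and join merges their results, which is how the mechanism exposes task-level parallelism.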

The performance and energy efficiency of the StreamWorks architecture are evaluated for stream kernel and application benchmarks by implementing the architecture in 45 nm TSMC technology and comparing it with a low-power RISC core and a contemporary accelerator.
ContributorsPanda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2014
Description
A benchmark suite that is representative of the programs a processor typically executes is necessary to understand a processor's performance or energy consumption characteristics. The first contribution of this work addresses this need for mobile platforms with MobileBench, a selection of representative smartphone applications. In smartphones, as in any other portable computing system, energy is a limited resource. Based on the energy characterization of a widely used commercial smartphone, application cores are found to consume a significant part of the total energy consumption of the device. With this insight, the subsequent part of this thesis focuses on the portion of energy that is spent moving data from the memory system to the application core's internal registers. The primary motivation for this work comes from the relatively higher power consumption associated with a data movement instruction compared to that of an arithmetic instruction. The data movement energy cost is worsened especially in a system on chip (SoC), because the amount of data received and exchanged in an SoC-based smartphone increases at an explosive rate. A detailed investigation is performed to quantify the impact of data movement on the overall energy consumption of a smartphone device. To aid this study, microbenchmarks that generate desired data movement patterns between different levels of the memory hierarchy are designed. Energy costs of data movement are then computed by measuring the instantaneous power consumption of the device while the microbenchmarks execute. This work makes extensive use of hardware performance counters to validate the memory access behavior of the microbenchmarks and to characterize the energy consumed in moving data. Finally, the calculated energy costs of data movement are used to characterize the portion of energy that MobileBench applications spend in moving data. The results of this study show that a significant 35% of the total device energy is spent on data movement alone. Energy is an increasingly important criterion in the context of designing architectures for future smartphones, and this thesis offers insights into data movement energy consumption.
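The microbenchmark idea can be sketched as a strided traversal over working sets sized to fit in successive levels of the memory hierarchy, so that accesses hit mostly in L1, in the last-level cache, or in DRAM. The sketch below only illustrates the access pattern; on an interpreted host, interpreter overhead masks most of the cache effects, and the thesis instead runs native code on the device itself with hardware performance counters and instantaneous power measurement. All sizes and strides here are illustrative choices.

```python
import array
import time

def traversal_time(n_bytes, stride=16, reps=5):
    """Best-of-reps time per access for a strided read over n_bytes.

    Small working sets stay cache-resident; large ones spill to DRAM,
    raising the per-access cost (and, on a real device, the energy).
    """
    n = n_bytes // 8                       # 8-byte double elements
    data = array.array('d', range(n))
    step = max(1, stride // 8)             # element stride from byte stride
    best = float('inf')
    for _ in range(reps):
        acc = 0.0
        t0 = time.perf_counter()
        for i in range(0, n, step):
            acc += data[i]                 # the measured memory accesses
        dt = time.perf_counter() - t0
        best = min(best, dt / (n // step or 1))
    return best                            # seconds per access

# Sweep working sets that would target L1, L2/LLC, and DRAM on many cores.
for kb in (16, 256, 4096):
    print(kb, traversal_time(kb * 1024))
```

On the target device, the same sweep (in native code) is paired with performance counters to confirm where the accesses hit and with power sampling to convert the per-access cost into energy.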
ContributorsPandiyan, Dhinakaran (Author) / Wu, Carole-Jean (Thesis advisor) / Shrivastava, Aviral (Committee member) / Lee, Yann-Hang (Committee member) / Arizona State University (Publisher)
Created2014