This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 70
Description

In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices such as cell phones, GPS devices, and luxury car features. Highly optimized MPSoCs are well suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs), including general-purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than supporting cache coherency. The overall architecture integrates the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire ever more complex applications on their embedded devices, the performance demands on these devices continue to increase, and designers have turned to MPSoCs to meet them. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique \cite{leary1_intro} to address the synthesis of the interconnect architecture, particularly the Network-on-Chip (NoC). We then address the synthesis of the memory architecture of an MPSoC sub-system \cite{leary2_intro}. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique are demonstrated through extensive experimentation.
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S. (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The implications of a changing climate have a profound impact on human life, society, and policy making. The need for accurate climate prediction becomes increasingly important as we better understand these implications. Currently, the most widely used climate predictions rely on the synthesis of climate model simulations organized by the Coupled Model Intercomparison Project (CMIP); these simulations are ensemble-averaged to construct projections of the 21st-century climate. However, a significant degree of bias and variability in the model simulations of the 20th-century climate is well known at both global and regional scales. Based on that insight, this study provides an alternative approach for constructing climate projections that incorporates knowledge of model bias. This approach is demonstrated to be a viable alternative that can be easily implemented by water resource managers for potentially more accurate projections. Tests of the new approach are provided on a global scale, with an emphasis on semiarid regions chosen for their particular vulnerability to water resource changes, using both the former CMIP Phase 3 (CMIP3) and current Phase 5 (CMIP5) model archives. This investigation is accompanied by a detailed analysis of the dynamical processes and water budget to understand the behaviors and sources of model biases. Sensitivity studies of selected CMIP5 models are also performed with an atmospheric component model by testing the relationship between climate change forcings and the model-simulated response. The information derived from each study is used to determine the progressive quality of coupled climate models in simulating the global water cycle by rigorously investigating sources of model bias related to the moisture budget. As such, the conclusions of this project are highly relevant to model development and may potentially be used to further improve climate projections.
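The general idea of a bias-aware ensemble projection can be sketched in a few lines. The following is an illustrative, hypothetical weighting scheme (inverse-absolute-bias weights over an ensemble), not the specific method developed in the thesis; the function name and inputs are assumptions made for the sake of the example.

```python
# Illustrative sketch: weight each model's 21st-century projection by the
# inverse of its 20th-century bias, so models that reproduce the observed
# historical climate more faithfully count for more in the ensemble mean.
# Hypothetical scheme for exposition, not the thesis's actual method.

def bias_weighted_projection(projections, historical_sims, observed):
    """Combine ensemble projections using inverse-absolute-bias weights."""
    biases = [abs(sim - observed) for sim in historical_sims]
    weights = [1.0 / (b + 1e-9) for b in biases]  # small epsilon avoids /0
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, projections)) / total

# A model with zero historical bias dominates the weighted mean:
print(bias_weighted_projection([2.0, 4.0], [15.0, 17.0], 15.0))
```

With equal historical biases the scheme degenerates to the plain ensemble average, which is the baseline approach the abstract contrasts against.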
Contributors: Baker, Noel C. (Author) / Huang, Huei-Ping (Thesis advisor) / Trimble, Steve (Committee member) / Anderson, James (Committee member) / Clarke, Amanda (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

A vortex tube is a device of simple structure with no moving parts that can be used to separate a compressed gas into a hot stream and a cold stream. Many studies have been carried out to find the mechanisms of the energy separation in the vortex tube. The recent rapid development of computational fluid dynamics provides a powerful tool to investigate the complex flow in the vortex tube. However, various issues remain in these numerical simulations, such as choosing the most suitable turbulence model, as well as a lack of systematic comparative analysis. LES models have hardly been used for vortex tube simulation in the current literature, and the influence of parameters on the performance of the vortex tube has scarcely been studied. This study aims to find the influence of various parameters on the performance of the vortex tube, the best geometric values for the vortex tube, and a realizable method to reach the required cold outflow rate of 40 kg/s. First, an original 3-D simulation model of the vortex tube is set up. By comparing experimental results reported in the literature with our simulation results, the most suitable model for the simulation of the vortex tube is obtained. Second, we perform simulations to optimize parameters that can deliver a set of desired outputs, such as cold stream pressure, temperature, and flow rate. We also discuss the use of the cold air flow for petroleum engineering applications.
Contributors: Cang, Ruijin (Author) / Chen, Kangping (Thesis advisor) / Huang, Hueiping (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Modern-day gas turbine designers face the problem of hot mainstream gas ingestion into rotor-stator disk cavities. To counter this ingestion, seals are installed on the rotor and stator disk rims, and purge air, bled off from the compressor, is injected into the cavities. It is desirable to reduce the supply of purge air, as this bleed decreases the net power output as well as the efficiency of the gas turbine. Since the purge air influences the disk cavity flow field and, effectively, the amount of ingestion, the aim of this work was to study the cavity velocity field experimentally using Particle Image Velocimetry (PIV). Experiments were carried out in a model single-stage axial flow turbine set-up that featured blades as well as vanes, with purge air supplied at the hub of the rotor-stator disk cavity. Along with the rotor and stator rim seals, an inner labyrinth seal was provided which split the disk cavity into a rim cavity and an inner cavity. First, the static gage pressure distribution was measured to ensure that nominally steady flow conditions had been achieved. The PIV experiments were then performed to map the velocity field on the radial-tangential plane within the rim cavity at four axial locations. Instantaneous velocity maps obtained by PIV were analyzed sector-by-sector to understand the rim cavity flow field. It was observed that the tangential velocity dominated the cavity flow at low purge air flow rates, its dominance decreasing with increasing purge air flow rate. Radially inboard of the rim cavity, negative radial velocity near the stator surface and positive radial velocity near the rotor surface indicated the presence of a recirculation region in the cavity whose radial extent increased with increasing purge air flow rate. Qualitative flow streamline patterns are plotted within the rim cavity for different experimental conditions by combining the PIV map information with ingestion measurements within the cavity, as reported in Thiagarajan (2013).
Contributors: Pathak, Parag (Author) / Roy, Ramendra P. (Thesis advisor) / Calhoun, Ronald (Committee member) / Lee, Taewoo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for the prediction of global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th-century climate against observations to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features of the atmosphere: the jet streams over the North Pacific and Atlantic Oceans; the low-level jet (LLJ) stream over central North America, which affects the weather in the United States; and the near-surface wind field over North America, which is relevant to energy applications. The errors in the model simulations of those features are systematically quantified, and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in climate models. The results reject the popular idea that errors in the sea surface temperature, due to an inaccurate ocean circulation, contribute to the errors in major atmospheric jet streams.
Contributors: Kulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robust estimation problem of Metric Temporal Logic (MTL) formulas regarding a finite trace of time stated sequence. This algorithm not only tests if the MTL specification is satisfied by the given input which is a finite system trajectory, but also quantifies to what extend does the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robust computing engine of S-TALIRO which is a tool for MATLAB searching for trajectories of minimal robustness in Simulink/ Stateflow. DP-TALIRO is expected to have near linear running time and constant memory requirement depending on the structure of the MTL formula. DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO such as parameter replacement, most related iteration and most related predicate. A derivative of DP-TALIRO which is DP-T-TALIRO is also addressed in this thesis which applies dynamic programming algorithm for time robustness computation. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
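The notion of robustness the algorithm computes can be illustrated on the simplest cases. The sketch below is not DP-TALIRO (a MATLAB toolbox handling full MTL); it only shows how the robustness of "always" and "eventually" over a finite trace reduces to a min/max over pointwise signed distances, which is the base case a dynamic program over the formula structure builds on. Function names and the predicate form are assumptions for exposition.

```python
# Minimal sketch of discrete-time robustness for two simple MTL formulas
# over a finite trace, in the spirit of (but not the code of) DP-TALIRO.
# The predicate "x > c" has robustness x[t] - c: positive when satisfied,
# negative when violated, with magnitude saying by how much.

def rob_predicate(trace, c):
    """Pointwise robustness of the predicate x > c at every sample."""
    return [x - c for x in trace]

def rob_always(trace, c):
    """Robustness of 'always (x > c)': the worst (minimum) pointwise margin."""
    return min(rob_predicate(trace, c))

def rob_eventually(trace, c):
    """Robustness of 'eventually (x > c)': the best (maximum) pointwise margin."""
    return max(rob_predicate(trace, c))

trace = [1.0, 3.0, 2.5, 0.5, 4.0]
print(rob_always(trace, 2.0))      # -1.5: the trace dips 1.5 below the threshold
print(rob_eventually(trace, 2.0))  # 2.0: at best the trace exceeds it by 2.0
```

A negative robustness value certifies violation and a positive one satisfaction, which is exactly the quantification the abstract describes.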
Contributors: Yang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and time segmentation parameters affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the ``Great Compromise.'' We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it decreases as grouping size goes up. For features whose size depends on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed by the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
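The two grouping-size trends described above can be captured in a toy model. The constants below are illustrative placeholders, not measured values from the thesis; the point is only the shape of the two curves.

```python
# Toy model of the energy-vs-grouping-size trends described above.
# k_scalar and k_fft are illustrative constants, not measured values.
import math

def scalar_feature_energy(group_size, k_scalar=1.0):
    # Scalar features: energy per sample is inversely proportional to
    # grouping size, so larger groups amortize the cost.
    return k_scalar / group_size

def fft_feature_energy(group_size, k_fft=1.0):
    # Size-dependent features such as FFT: energy grows like log2 of the
    # grouping size, so it climbs slowly as groups get larger.
    return k_fft * math.log2(group_size)

# Doubling the grouping size halves the scalar-feature energy but adds
# only one log2 step to the FFT-feature energy.
print(scalar_feature_energy(32), scalar_feature_energy(64))  # 0.03125 0.015625
print(fft_feature_energy(32), fft_feature_energy(64))        # 5.0 6.0
```

This makes concrete why, for scalar features, larger groups always help, while for FFT-style features the transmission savings must be weighed against a slowly growing computation cost.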
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Stream processing has emerged as an important model of computation, especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movement.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE), which is responsible for iteratively executing a stream kernel. The SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in an SE is locked to a Reservation Station (RS) and revitalizes itself after execution, never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to the functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution mode that efficiently exploits the instruction-level parallelism (ILP) and data-level parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC), in which the SEs communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the task-level parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted to any multi-core architecture that uses software-managed local memories.
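The split-join idea behind the FIFO virtualization can be illustrated abstractly. This sketch is not the StreamWorks mechanism (which virtualizes FIFOs in scratchpad memory across SEs); it only shows the ordering contract that split-join stream communication must preserve, using hypothetical split/join helpers.

```python
# Abstract illustration of split-join stream communication: a producer's
# token stream is split round-robin across several consumer FIFOs, and a
# join stage reassembles the original order. Hypothetical helpers, not the
# StreamWorks implementation.
from collections import deque

def split(tokens, n_consumers):
    """Distribute tokens round-robin into one FIFO per consumer."""
    fifos = [deque() for _ in range(n_consumers)]
    for i, token in enumerate(tokens):
        fifos[i % n_consumers].append(token)
    return fifos

def join(fifos):
    """Drain the FIFOs round-robin, reconstructing the original stream."""
    out = []
    while any(fifos):
        for fifo in fifos:
            if fifo:
                out.append(fifo.popleft())
    return out

# The split-join pair is order-preserving end to end:
print(join(split(list(range(8)), 2)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Preserving this end-to-end order while the per-consumer FIFOs proceed independently is what lets split-join expose task-level parallelism without changing the stream's semantics.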

The performance and energy efficiency of the StreamWorks architecture are evaluated on stream kernel and application benchmarks by implementing the architecture in a 45nm TSMC process and comparing it with a low-power RISC core and a contemporary accelerator.
Contributors: Panda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

A municipal electric utility in Mesa, Arizona, with a peak load of approximately 85 megawatts (MW), was analyzed to determine how the implementation of renewable resources (both wind and solar) would affect the overall cost of energy purchased by the utility. The utility currently purchases all of its energy through long-term energy supply contracts and does not own any generation assets, so optimization was achieved by minimizing the overall cost of energy while adhering to specific constraints on how much energy the utility could purchase from the short-term energy market. Scenarios were analyzed for five percent and ten percent penetrations of renewable energy in the years 2015 and 2025. Demand Side Management measures (through thermal storage in the City's district cooling system, electric vehicles, and customers' air conditioning improvements) were evaluated to determine whether they would mitigate some of the cost increases that resulted from the addition of renewable resources.

In the 2015 simulation, wind energy was less expensive than solar to integrate into the supply mix. Meeting five percent of the utility's 2015 energy requirements with wind caused a 3.59% increase in the overall cost of energy, while meeting that five percent with solar was estimated to cause a 3.62% increase. A mix of wind and solar in 2015 caused a smaller increase in the overall cost of energy of 3.57%. At the ten percent implementation level in 2015, solar, wind, and a mix of solar and wind caused increases of 7.28%, 7.51%, and 7.27%, respectively, in the overall cost of energy.

In 2025, at the five percent implementation level, wind and solar caused increases in the overall cost of energy of 3.07% and 2.22% respectively. In 2025, at the ten percent implementation level, wind and solar caused increases in the overall cost of energy of 6.23% and 4.67% respectively.

Demand Side Management reduced the overall cost of energy by approximately 0.6%, mitigating some of the cost increase from adding renewable resources.
Contributors: Cadorin, Anthony (Author) / Phelan, Patrick (Thesis advisor) / Calhoun, Ronald (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

A benchmark suite that is representative of the programs a processor typically executes is necessary to understand a processor's performance or energy consumption characteristics. The first contribution of this work addresses this need for mobile platforms with MobileBench, a selection of representative smartphone applications. In smartphones, as in any other portable computing system, energy is a limited resource. Based on the energy characterization of a widely used commercial smartphone, application cores are found to consume a significant part of the total energy consumption of the device. With this insight, the subsequent part of this thesis focuses on the portion of energy that is spent moving data from the memory system to the application core's internal registers. The primary motivation for this work comes from the relatively higher power consumption associated with a data movement instruction compared to that of an arithmetic instruction. The data movement energy cost is worsened, especially in a System on Chip (SoC), because the amount of data received and exchanged in an SoC-based smartphone increases at an explosive rate. A detailed investigation is performed to quantify the impact of data movement on the overall energy consumption of a smartphone device. To aid this study, microbenchmarks that generate desired data movement patterns between different levels of the memory hierarchy are designed. Energy costs of data movement are then computed by measuring the instantaneous power consumption of the device when the microbenchmarks are executed. This work makes extensive use of hardware performance counters to validate the memory access behavior of the microbenchmarks and to characterize the energy consumed in moving data. Finally, the calculated energy costs of data movement are used to characterize the portion of energy that MobileBench applications spend in moving data. The results of this study show that a significant 35% of the total device energy is spent on data movement alone. Energy is an increasingly important criterion in the context of designing architectures for future smartphones, and this thesis offers insights into data movement energy consumption.
Contributors: Pandiyan, Dhinakaran (Author) / Wu, Carole-Jean (Thesis advisor) / Shrivastava, Aviral (Committee member) / Lee, Yann-Hang (Committee member) / Arizona State University (Publisher)
Created: 2014