This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description
This study is about Thai English (ThaiE), a variety of World Englishes presently spoken in Thailand as a result of the spread of English and recent Thai government policies towards English communication in Thailand. In the study, I examined the linguistic data of spoken ThaiE, collected from multiple sources in both the U.S.A. and Thailand. The study made use of a qualitative approach in examining the data, which came from (i) English interviews and questionnaires with 12 highly educated Thai speakers of English during my fieldwork in the Southwestern U.S.A., Central Thailand, and Northeastern Thailand, (ii) English speech samples from the media in Thailand, i.e. television programs, a news report, and a talk radio program, and (iii) research articles on English used by Thai speakers of English. This study describes the typology of ThaiE in terms of its morpho-syntax, phonology, and sociolinguistics, with the main focus placed on the structural characteristics of ThaiE. Based on the data, the results show that some ThaiE features are shared with other World Englishes, while others are unique to ThaiE. Therefore, I argue that ThaiE is structurally a new variety of World Englishes at the present time. The findings also revealed an interesting result regarding the fieldwork participants' views of ThaiE. The majority of these participants (n=6) denied the existence of ThaiE, a minority (n=5) believed ThaiE existed, and one participant was reluctant to give an answer. The study suggests that the participants' academic backgrounds, the unfamiliarity of the notion of ThaiE, and the level of the participants' social interaction with everyday speakers may have influenced their answers to the main research question.
Contributors: Rogers, Uthairat (Author) / Gelderen, Elly van (Thesis advisor) / Mailhammer, Robert (Committee member) / Adams, Karen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well suited to tackle the complex application demands of the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs): general purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than support cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: the demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire ever more complex applications on their embedded devices, the performance demands on these devices continue to increase, and designers have turned to MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique \cite{leary1_intro} to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)).
We then address the synthesis of the memory architecture of an MPSoC sub-system \cite{leary2_intro}. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Despite the vast research on language carried out by the generative linguistics of Noam Chomsky and his followers since the 1950s, for theoretical reasons (mainly their attention to the mental abstraction of language structure rather than language as a performed product), historical linguistics from the start lay outside their research interest. This study is an attempt to bridge the gap between the formalism and theoretical constructs introduced by generative grammar, whose ultimate goal is to provide not only a description but also an explanation of linguistic phenomena, and historical linguistics, which studies the evolution of language over time. This main objective is met by providing a formal account of the changes hwæðer undergoes throughout the Old English (OE) period. This seemingly inconspicuous word presents itself as a case of particular investigative interest in that it reflects the different stages predicted by the theoretical assumptions implemented in the study, namely the economy principles responsible for what has become known as the CP cycle: the Head Preference Principle and the Late Merge Principle, whereby pronominal hwæðer would raise to the specifier position for topicalization purposes, then, after frequent use in that position, be base-generated there under Late Merge, until later reanalysis as the head of the Complementizer Phrase (CP) under Head Preference. Thus, I set out to classify the diverse functions of OE hwæðer by identifying and analyzing all instances as recorded in the diachronic part of the Helsinki Corpus.
Both quantitative and qualitative analyses of the data have yielded the following results: 1) a fully satisfactory functional and chronological classification has been obtained by analyzing the data under investigation following a formal theoretical approach; and 2) a step-by-step historical analysis proves to be indispensable for understanding, from a historical point of view, how language works at the abstract level. This project is part of a growing body of research on language change which attempts to describe and explain the evolution of certain words as they change in form and function.
Contributors: Parra-Guinaldo, Víctor (Author) / Gelderen, Elly van (Thesis advisor) / Bjork, Robert (Committee member) / Nilsen, Don L. F. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Linguistic subjectivity and subjectification are fields of research that are relatively new to those working in English linguistics. After a discussion of linguistic subjectivity and subjectification as they relate to English, I investigate the subjectification of a specific English adjective, and how its usage has changed over time. Subjectivity is held by many linguists today to be the major governing factor behind the ordering of English prenominal adjectives. Through the use of a questionnaire, I investigate the effect of subjectivity on English prenominal adjective order from the perspective of the native English speaker. I then discuss the results of the questionnaire, what they mean in relation to how subjectivity affects that order, and a few of the patterns that emerged as I analyzed the data.
Contributors: Skarstedt, Luke (Author) / Gelderen, Elly van (Thesis advisor) / Bjork, Robert (Committee member) / Adams, Karen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Rapid technology scaling, the main driver of the power and performance improvements of computing solutions, has also rendered our computing systems extremely susceptible to transient errors called soft errors. Among the arsenal of techniques to protect computation from soft errors, Control Flow Checking (CFC) based techniques have gained a reputation as an effective, yet low-cost, protection mechanism. The basic idea is that there is a high probability that a soft fault in program execution will eventually alter the control flow of the program; therefore, just by making sure that the control flow of the program is correct, significant protection can be achieved. More than a dozen techniques for CFC have been developed over the last several decades, ranging from hardware techniques and software techniques to hardware-software hybrid techniques. Our analysis shows that existing CFC techniques are not only ineffective in protecting from soft errors, but also cause additional power and performance overheads. For this analysis, we develop and validate a simulation-based experimental setup to accurately and quantitatively estimate the architectural vulnerability of a program execution on a processor micro-architecture. We model the protection achieved by various state-of-the-art CFC techniques in this quantitative vulnerability estimation setup, and find that software-only CFC protection schemes (CFCSS, CFCSS+NA, CEDA) increase system vulnerability by 18% to 21% with 17% to 38% performance overhead. Hybrid CFC protection (CFEDC) increases vulnerability by 5%, while the vulnerability remains almost the same for hardware-only CFC protection (CFCET), notwithstanding the design cost, area, and power overheads incurred by the hardware modifications required for their implementations.
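The signature-matching idea behind software CFC schemes such as CFCSS can be sketched as follows. This is an illustrative toy, not the thesis's implementation or the exact CFCSS algorithm: the block signatures and the control-flow graph are invented for the example.

```python
# Simplified sketch of signature-based control flow checking. Each basic
# block gets a static signature; for every legal edge pred -> succ a
# "signature difference" d is precomputed so that XOR-updating the
# runtime signature G lands on succ's signature only when the edge
# taken was legal.

SIGS = {"A": 0b0001, "B": 0b0010, "C": 0b0100, "D": 0b1000}  # per-block signatures
EDGES = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}     # legal control-flow edges

def diff(pred, succ):
    # Precomputed at compile time for each legal edge.
    return SIGS[pred] ^ SIGS[succ]

def run(path):
    """Walk a sequence of blocks, checking each transition; return True
    if every transition matches a legal control-flow edge."""
    G = SIGS[path[0]]  # runtime signature starts at the entry block
    for pred, succ in zip(path, path[1:]):
        # An illegal jump (e.g. caused by a soft error) has no
        # precomputed difference, so the XOR update cannot reach
        # succ's signature and the check below fires.
        d = diff(pred, succ) if (pred, succ) in EDGES else 0
        G ^= d
        if G != SIGS[succ]:
            return False  # control-flow error detected
    return True
```

For instance, `run(["A", "B", "D"])` follows legal edges and passes, while `run(["A", "D"])` models a fault that skips block B and is flagged.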
Contributors: Rhisheekesan, Abhishek (Author) / Shrivastava, Aviral (Thesis advisor) / Colbourn, Charles Joseph (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
We are expecting hundreds of cores per chip in the near future; however, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides a single image of memory at any time in execution to all the cores, yet coherent cache architectures are believed not to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of die area. Therefore, a more scalable architecture is needed for manycore architectures. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access to only its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in the hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after its use. However, doing this adds a lot of complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed a Complete Circular Stack Management, which manages stack frames between the local memory and the main memory, and addresses the stack pointer problem as well. Though it works, we found we could further optimize the management for most cases. Thus a Smart Stack Data Management (SSDM) is provided.
In this work, we formulate the stack data management problem and propose a greedy algorithm for it. Later on, we propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage. We provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partition methodology.
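The flavor of circular stack management can be sketched with a toy model. This is a sketch only: the capacity, frame sizes, and the plain lists standing in for DMA transfers to main memory are invented, and a real scheme must also handle pointer validity and transfer granularity, which this model ignores.

```python
# Toy model of circular stack management for a scratchpad-based core:
# frames are pushed into a small local memory; when space runs out, the
# oldest frames are evicted to main memory (standing in for a DMA-out),
# and fetched back on demand (DMA-in) when the stack unwinds to them.

from collections import deque

LOCAL_CAPACITY = 64  # bytes of scratchpad reserved for the stack (invented)

class StackManager:
    def __init__(self):
        self.local = deque()   # (name, size) frames resident in the scratchpad
        self.main = []         # frames evicted to main memory, oldest first
        self.used = 0

    def push(self, name, size):
        # Evict oldest frames (DMA out) until the new frame fits.
        while self.used + size > LOCAL_CAPACITY:
            old_name, old_size = self.local.popleft()
            self.main.append((old_name, old_size))
            self.used -= old_size
        self.local.append((name, size))
        self.used += size

    def pop(self):
        name, size = self.local.pop()
        self.used -= size
        # If the local stack drained, fetch the caller's frame back (DMA in).
        if not self.local and self.main:
            back = self.main.pop()
            self.local.append(back)
            self.used += back[1]
        return name
```

With a 64-byte scratchpad, pushing two 40-byte frames evicts the first to main memory; popping the second transparently brings the first back.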
Contributors: Bai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The speech of non-native (L2) speakers of a language contains phonological rules that differentiate them from native speakers. These phonological rules characterize or distinguish accents in an L2. The Shibboleth program creates combinatorial rule-sets to describe the phonological pattern of these accents and classifies L2 speakers by their native language. Training and classification are done in Shibboleth by support vector machines using a Gaussian radial basis kernel. In one experiment run using Shibboleth, the program correctly identified the native language (L1) of a speaker of unknown origin 42% of the time when there were six possible L1s in which to classify the speaker. This rate is significantly better than the 17% chance classification rate (chi-squared(1, N=24) = 10.800, p = .0010). In a second experiment, Shibboleth was not able to determine the native language family of a speaker of unknown origin at a rate better than chance (33-44%) when the L1 was not in the transcripts used for training the language-family rule-set (chi-squared(1, N=18) = 1.000, p = .3173). The 318 participants for both experiments were from the Speech Accent Archive (Weinberger, 2013), and ranged in age from 17 to 80 years old. Forty percent of the speakers were female and 60% were male. The factor that most influenced correct classification was a higher age of onset for the L2. A higher number of years spent living in an English-speaking country did not have the expected positive effect on classification.
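The reported chi-squared statistic is consistent with a simple goodness-of-fit computation. The sketch below assumes 10 of the 24 test speakers were classified correctly (42% of 24) against a one-in-six chance expectation; the count of 10 is inferred from the reported percentages, not stated in the abstract.

```python
# Chi-squared goodness-of-fit for the classification result: with six
# candidate L1s, chance predicts 24/6 = 4 correct out of N = 24 trials.
# Assuming 10 correct classifications (42% of 24), the statistic
# reproduces the reported value of 10.800.

def chi_squared(observed, expected):
    """One-dimensional goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n_trials = 24
n_correct = 10                   # assumed from the reported 42% rate
expected_correct = n_trials / 6  # one-in-six chance with six L1 classes

stat = chi_squared(
    [n_correct, n_trials - n_correct],                  # observed correct / incorrect
    [expected_correct, n_trials - expected_correct],    # expected under chance
)
# stat == 10.8, matching the reported chi-squared(1, N=24) = 10.800
```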
Contributors: Frost, Wende (Author) / Gelderen, Elly van (Thesis advisor) / Perzanowski, Dennis (Committee member) / Gee, Elisabeth (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas over a finite timed state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, which is a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robustness computing engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow. DP-TALIRO is expected to have near-linear running time and a constant memory requirement, depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, called DP-T-TALIRO, which applies the dynamic programming algorithm to time robustness computation, is also addressed in this thesis. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
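The robustness idea can be illustrated on two elementary temporal operators. This is a sketch in the spirit of, but far simpler than, DP-TALIRO: the trace, the threshold predicate, and the formulas "eventually (x >= 3)" and "always (x >= 3)" are invented for illustration, and a single backward pass over the trace illustrates the near-linear running time.

```python
# Sketch of robustness computation over a finite trace: the robustness
# value is positive when the formula is satisfied and negative when it
# is violated, and its magnitude says by how much.

def robustness_eventually(trace, threshold):
    """Robustness of F(x >= threshold): the best margin achieved
    anywhere on the trace, computed by folding backward."""
    best = float("-inf")
    for x in reversed(trace):
        best = max(best, x - threshold)
    return best

def robustness_always(trace, threshold):
    """Robustness of G(x >= threshold): the worst margin on the trace."""
    worst = float("inf")
    for x in reversed(trace):
        worst = min(worst, x - threshold)
    return worst

trace = [1.0, 2.5, 4.0, 2.0]
# F(x >= 3) is satisfied (x reaches 4.0), with robustness 1.0;
# G(x >= 3) is violated (x dips to 1.0), with robustness -2.0.
```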
Contributors: Yang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
There are many parts of speech and morphological items in a linguistic lexicon that may be optional for a cohesive language with a complete range of expression. Negation is not one of them. Negation appears to be absolutely essential from a linguistic (and indeed a psychological) point of view within any human language: humans need to be able to say "No" in some fashion and to express our not doing things in various ways. In the discussions that appear in this thesis, I expound upon the historical changes that can be seen within three different language branches - North Germanic (with Gothic, Old Saxon, Old Norse, Swedish, and Icelandic), West Germanic (with English), and Celtic (with Welsh) - focusing on negation particles in particular and their position within these languages. I also examine how each of these chosen languages has seen negation shift over time in relation to Jespersen's negation cycle. Finally, I compare and contrast the results from these languages, demonstrating that all three follow a distinct negation cycle. I also explain how these three negation cycles are not chronologically in sync with one another and changed at different rates. This appears to be the case even within the different branches of the Germanic family.
Contributors: Loewenhagen, Angela C (Author) / Gelderen, Elly van (Committee member) / Bjork, Robert (Committee member) / Gillon, Carrie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Software has a great impact on the energy efficiency of any computing system: it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy; in general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without reducing accuracy much. We introduce the log-likelihood ratio test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition, and we discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform.
We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it falls as grouping size goes up. For features whose size depends on the grouping size, such as the FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
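The two grouping-size trends described above can be captured in a toy model. The constants, units, and function names below are invented for illustration; the thesis's actual energy models are more detailed.

```python
# Toy model of per-sample energy versus grouping size g: a scalar
# feature sends one value per group of g samples, so per-sample energy
# falls as 1/g; an FFT-like feature's per-sample cost is modeled as
# growing with log2(g), so it rises slowly as g increases.

import math

E_PER_VALUE = 1.0  # energy to store/transmit one feature value (invented unit)

def scalar_energy_per_sample(g):
    # One scalar feature summarizes g samples.
    return E_PER_VALUE / g

def fft_energy_per_sample(g):
    # Per-sample cost grows with the logarithm of the group size.
    return E_PER_VALUE * math.log2(g)

# Doubling the group size halves the scalar cost but only adds a
# constant increment to the logarithmic FFT cost.
```

Under this model, going from g = 32 to g = 64 halves the scalar feature's per-sample energy while raising the FFT feature's by a fixed step, matching the qualitative trends in the abstract.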
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014