This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic copies in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
Simultaneously culture heroes and stumbling buffoons, Tricksters bring cultural tools to the people and make the world more habitable. There are common themes in these figures that remain fruitful for the advancement of culture, theory, and critical praxis. This dissertation develops a method for opening a dialogue with Trickster figures. It draws from established literature to present a newly conceived and more flexible Trickster archetype. This archetype is more than a collection of traits; it builds on itself processually to form a method for analysis. The critical Trickster archetype includes the fundamental act of crossing borders; the twin ontologies of ambiguity and liminality; the particular tactics of humor, duplicity, and shape shifting; and the overarching cultural roles of culture hero and stumbling buffoon. Running parallel to each archetypal element, though, are Trickster's overarching critical spirit of Quixotic utopianism and underlying telos of manipulating human relationships. The character 'Q' from Star Trek: The Next Generation is used to demonstrate the critical Trickster archetype. To be more useful for critical cultural studies, Trickster figures must also be connected to their socio-cultural and historical contexts. Thus, this dissertation offers a second set of analytics, a dialogical method that connects Tricksters to the worlds they make more habitable. This dialogical method, developed from the work of M. M. Bakhtin and others, consists of three analytical tools: utterance, intertextuality, and chronotope. Utterance bounds the text for analysis. Intertextuality connects the utterance, the text, to its context. Chronotope suggests particular spatio-temporal relationships that help reveal the cultural significance of a dialogical performance. Performance artists Andre Stitt, Ann Liv Young, and Steven Leyba are used to demonstrate the method of Trickster dialogics. A concluding discussion of Trickster's unique chronotope reveals its contributions to conceptions of utopia and futurity. This dissertation offers theoretical advancements about the significance and tactics of subversive communication practices. It offers a new and unique method for cultural and performative analyses that can be expanded into different kinds of dialogics. Trickster dialogics can also be used generatively to direct and guide the further development of performative praxis.
Contributors: Salinas, Chema (Author) / de la Garza, Amira (Thesis advisor) / Carlson, Cheree (Committee member) / Olson, Clark (Committee member) / Ellsworth, Angela (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Many longitudinal studies, especially in clinical trials, suffer from missing data. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. Suppose, for example, that an investigator is examining the effect of a treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that particular visit. Data that are not missing at random, that is, data for which the missingness mechanism is related to the unobserved responses, can produce biased results if the mechanism is not taken into account. Missing data are said to be non-ignorable if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing-data mechanism: the data are stratified according to the observed missingness patterns and a model is specified for each stratum. However, this usually leads to under-identifiability, because many stratum-specific parameters must be estimated even though interest usually centers on the marginal parameters. Pattern-mixture models also have the drawback that a large sample is usually required. This thesis presents two studies. The first is motivated by an open problem in pattern-mixture models. Simulations in this part show that the information in the missing-data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing-data patterns may be accounted for by a simple latent factor. These findings lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of the missing values and the longitudinal outcomes and remains feasible even for small-sample applications. Detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance is evaluated through designed simulations and three applications. The simulation and application settings range from correctly specified to misspecified missing-data mechanisms and cover a variety of sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing-data mechanism and serve as a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimators, particularly of intercept- and slope-related parameters, than Roy's latent class model and the classical linear mixed model. This advantage is most evident with small samples, where Roy's model has difficulty with estimation convergence. The CLFM is also shown to be robust when missing data are ignorable, as demonstrated in the Growth of Language and Early Literacy Skills in Preschoolers study.
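
A minimal sketch (not drawn from the dissertation) of the distinction this abstract describes, contrasting missingness driven by an always-observed baseline (MAR) with missingness driven by the current, unobserved response (non-ignorable); all names and parameter values below are illustrative assumptions:

```python
# Illustrative simulation only; variable names and parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n, t = 500, 4                                    # subjects, scheduled visits

# Latent depression scores; higher = more severe symptoms
y = rng.normal(loc=10.0, scale=3.0, size=(n, t))

# MAR: missingness at visits 2..4 depends only on the always-observed baseline
p_mar = 0.4 / (1.0 + np.exp(-(y[:, [0]] - 10.0) / 3.0))
mar = np.zeros((n, t), dtype=bool)
mar[:, 1:] = rng.random((n, t - 1)) < p_mar

# Non-ignorable: missingness depends on the CURRENT, unobserved score, as when
# severely depressed patients skip the very visit that would record the severity
p_mnar = 0.4 / (1.0 + np.exp(-(y - 10.0) / 3.0))
mnar = rng.random((n, t)) < p_mnar

y_mar = np.where(mar, np.nan, y)
y_mnar = np.where(mnar, np.nan, y)

print(f"true mean, visits 2-4:    {y[:, 1:].mean():.2f}")
print(f"observed mean under MAR:  {np.nanmean(y_mar[:, 1:]):.2f}")   # ~unbiased
print(f"observed mean under MNAR: {np.nanmean(y_mnar[:, 1:]):.2f}")  # biased low
```

Because the sickest visits are the ones most likely to vanish under the non-ignorable mechanism, the observed mean underestimates the true mean, which is exactly the bias a model ignoring the mechanism inherits.
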
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods. Because many products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetimes. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study shows how the model parameters affect the resulting designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples; several graphical tools are also developed for evaluating candidate designs. Finally, designs for model checking are discussed for situations where more than one candidate model is available.
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work presents two complementary studies that propose heuristic methods to capture characteristics of data using the ensemble learning method of random forest. The first study is motivated by the problem in education of determining teacher effectiveness in student achievement. Value-added models (VAMs), constructed as linear mixed models, use students’ test scores as outcome variables and teachers’ contributions as random effects to ascribe changes in student performance to the teachers who have taught them. The VAM teacher score is the empirical best linear unbiased predictor (EBLUP). This approach is limited by the adequacy of the assumed model specification with respect to the unknown underlying model. In that regard, this study proposes alternative ways to rank teacher effects that are not dependent on a given model by introducing two variable importance measures (VIMs), the node-proportion and the covariate-proportion. These VIMs are novel because they take into account the final configuration of the terminal nodes in the constitutive trees in a random forest. In a simulation study, under a variety of conditions, true rankings of teacher effects are compared with estimated rankings obtained from three sources: the newly proposed VIMs, existing VIMs, and EBLUPs from the assumed linear model specification. The newly proposed VIMs outperform all others in various scenarios where the model is misspecified. The second study develops two novel interaction measures. These measures could be used within, but are not restricted to, the VAM framework. The distribution-based measure is constructed to identify interactions in a general setting where a model specification is not assumed in advance. In turn, the mean-based measure is built to estimate interactions when the model specification is assumed to be linear. Both measures are unique in their construction; they take into account not only the outcome values but also the internal structure of the trees in a random forest. In a separate simulation study, under a variety of conditions, the proposed measures are found to identify and estimate second-order interactions.
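
For context, a hedged sketch of the kind of existing VIM the study uses as a comparison point: permutation importance from a random forest fit to simulated teacher-assignment data. The node-proportion and covariate-proportion measures proposed in the dissertation inspect terminal-node configurations and are not reproduced here; the data-generating setup is an illustrative assumption.

```python
# Illustrative baseline only: a standard permutation-importance VIM, not the
# node-proportion or covariate-proportion measures proposed in the dissertation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n_students, n_teachers = 600, 6

# Random teacher assignments, one-hot encoded, plus a true effect per teacher
assignments = rng.integers(0, n_teachers, size=n_students)
X = np.eye(n_teachers)[assignments]
true_effects = np.linspace(-2.0, 2.0, n_teachers)
scores = true_effects[assignments] + rng.normal(scale=1.0, size=n_students)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, scores)
vim = permutation_importance(forest, X, scores, n_repeats=20, random_state=0)

# Importance tracks the magnitude of a teacher's influence, not its sign
for teacher in np.argsort(-vim.importances_mean):
    print(f"teacher {teacher}: importance {vim.importances_mean[teacher]:.3f}, "
          f"true effect {true_effects[teacher]:+.1f}")
```
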
Contributors: Valdivia, Arturo (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Statistics is taught at every level of education, yet teachers often must assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from an interest in exploring educational applications of augmented reality (AR), delivered via mobile technology, that could provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, the study compared the learning gains of 252 undergraduate and graduate students on a pretest and posttest given before and after interacting with one of three types of AR experience: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level with mobile devices. The moderating variable was prior knowledge (high, average, or low), as measured by the student's pretest score. Students with low prior knowledge assigned to either the high or the low AR experience had statistically significantly higher learning gains than those assigned to no AR experience. The results showed no statistically significant difference between students working individually and those working in pairs. Students assigned to both the high and the low AR experience perceived a statistically significantly higher level of engagement than their no-AR counterparts, and students with low prior knowledge benefited the most from the high AR condition in terms of learning gains. Overall, the AR application served well in providing a hands-on experience for working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
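
A minimal sketch of the Monte Carlo logic described above: simulate many datasets under a known dimensionality and record how often a test rejects unidimensionality. The eigenvalue rule below is a crude stand-in, not the SGDDM or DIMTEST procedure, and the factor-model generator is an illustrative assumption; the N = 250, J = 12 condition mirrors the sample sizes discussed above.

```python
# Illustrative Monte Carlo skeleton only; the rejection rule is a stand-in.
import numpy as np

def rejects_unidimensionality(data):
    # Crude stand-in rule: flag a second dominant eigenvalue in the
    # inter-item correlation matrix (NOT the SGDDM or DIMTEST statistic).
    eigvals = np.linalg.eigvalsh(np.corrcoef(data.T))
    return eigvals[-2] > 1.5          # arbitrary illustrative cutoff

def rejection_rate(n_reps, n, j, n_factors, rng):
    count = 0
    for _ in range(n_reps):
        # Simple-structure linear factor model as a stand-in item generator
        loadings = np.zeros((j, n_factors))
        for item in range(j):
            loadings[item, item % n_factors] = rng.uniform(0.4, 0.8)
        theta = rng.normal(size=(n, n_factors))
        data = theta @ loadings.T + rng.normal(size=(n, j))
        count += rejects_unidimensionality(data)
    return count / n_reps

rng = np.random.default_rng(0)
print(rejection_rate(200, 250, 12, 1, rng))  # unidimensional: Type I error, near zero
print(rejection_rate(200, 250, 12, 2, rng))  # two-dimensional: power, near one
```
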
Contributors: Reichenberg, Ray E (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
MOVE was a choreographic project that investigated content in conjunction with the creative process. The yearlong collaborative creative process utilized improvisational and compositional experiments to research the movement potential of the human body, as well as movement's ability to be an emotional catalyst. Multiple showings were held to receive feedback from a variety of viewers. Production elements were designed in conjunction with the development of the evening-length dance work. As a result of discussion and research, several process-revealing sections were created to provide clear relationships between pedestrian/daily functional movement and technical movement. Each section within MOVE addressed movement as an emotional catalyst, resulting in a variety of emotional textures. The sections were placed in a non-linear structure in order for the audience to have the space to create their own connections between concepts. Community was developed in rehearsal via touch/weight sharing, and translated to the performance of MOVE via a communal, instinctive approach to the performance of the work. Community was also created between the movers and the audience via the design of the performance space. The production elements all revolved around the human body, and offered different viewpoints into various body parts. The choreographer, designers, and movers all participated in the creation of the production elements, resulting in a clear understanding of MOVE by the entire community involved. The overall creation, presentation, and reflection of MOVE was a view into the choreographer's growth as a dance artist, and her values of people and movement.
Contributors: Peterson, Britta Joy (Author) / Fitzgerald, Mary (Thesis advisor) / Schupp, Karen (Committee member) / Mcneal Hunt, Diane (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel generated streams. The first applies serial tests to the individual streams and then tests the resulting P-values for uniformity. The second turns all the parallel generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation motivated multivariate tests and vector time series based tests. These methods have been implemented in an extension to TestU01 built in C++ and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suites of tests.
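
A minimal sketch (in Python rather than TestU01's C) of the two basic schemes described above, with a Kolmogorov-Smirnov uniformity check standing in for TestU01's serial test batteries:

```python
# Illustrative re-creation; a KS uniformity check stands in for TestU01's
# serial test batteries.
import numpy as np
from scipy.stats import kstest

def serial_test_pvalue(stream):
    """Stand-in serial test: KS test of the stream against Uniform(0, 1)."""
    return kstest(stream, "uniform").pvalue

def scheme_one(streams):
    """Test each stream separately, then test the p-values for uniformity."""
    pvals = [serial_test_pvalue(s) for s in streams]
    return kstest(pvals, "uniform").pvalue

def scheme_two(streams):
    """Concatenate all streams into one long vector and test it serially."""
    return serial_test_pvalue(np.concatenate(streams))

rng = np.random.default_rng(1)
streams = [rng.random(10_000) for _ in range(64)]
print(scheme_one(streams), scheme_two(streams))
```

Neither scheme examines values across streams at matching positions, which is how cross-stream dependence can slip past both and what motivates the correlation-motivated multivariate tests and vector time series based tests investigated here.
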
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study compares the Hummel Concertos in A Minor, Op. 85 and B Minor, Op. 89 and the Chopin Concertos in E Minor, Op. 11 and F Minor, Op. 21. On initial hearing of Hummel's rarely played concertos, one immediately detects similarities with Chopin's concerto style. Upon closer examination, one discovers a substantial number of interesting and significant parallels with Chopin's concertos, many of which are highlighted in this research project. Hummel belongs to a generation of composers who made a shift away from the Classical style, and Chopin, as an early Romantic, absorbed much from his immediate predecessors in establishing his highly unique style. I have chosen to focus on Chopin's concertos to demonstrate this association. The essay begins with a discussion of the historical background of Chopin's formative years as it pertains to the formation of his compositional style, Hummel's role and influence in the contemporary musical arena, as well as interactions between the two composers. It then provides the historical background of the aforementioned concertos leading to a comparative analysis, which includes structural, melodic, harmonic, and motivic parallels. With a better understanding of his stylistic influences, and of how Chopin assimilated them in the creation of his masterful works, the performer can adopt a more informed approach to the interpretation of these two concertos, which are among the most beloved masterpieces in piano literature.
Contributors: Yam, Jessica (Author) / Hamilton, Robert (Thesis advisor) / Levy, Benjamin (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Imitation is the genesis of change. One basic principle of human nature is that people imitate what they see and hear. In the professional choral arena, musicians extend the high art of imitation through fine-tuning and creative reinterpretation. Stimulated by this cycle, the color of the twenty-first-century professional choir shifted compared to that of professional choirs from the 1950s through the 1970s, causing an evolution in choral sound. In a series of interviews with iconic composers and conductors of professional choirs, the subjects involved in the study conveyed comprehensive and personal accounts outlining how professional choirs have refined the standard of choral sound. The paper is organized into three sections: (1) where have we been, (2) where are we now, and (3) where are we going? It explores various conductors' perceptions of how and why today's choirs are unique compared with those of earlier generations and what they believe caused the shift in choral tone. Paired with this perspective is the role of modern composers, whose progressive compositional techniques helped shape the modern choral sound. The subjects further theorize how current inclinations may shape the future of professional choral music. Although the subjects expressed differing opinions about the quality of twenty-first-century choral tone, many agree that there have been specific transformations since the 1970s. The shift in choral tone occurred due to developments in vocal technique, exploration of contemporary compositional extended techniques, adherence to historically informed performance practice, imitation of vocal colors from numerous cultures, incorporation of technology, and emulation of sound perceived on recordings. Additionally, choral music subtly became prominent in film scores, and innovative conductors created progressive concert programming and developed novel approaches to entertain audiences. Contributors to this study include John Rutter, Harry Christophers, Charles Bruffy, Nigel Short, Craig Hella Johnson, Alice Parker, Michael McGlynn, Phillip Brunelle, Craig Jessop, Libby Larsen, Ola Gjeilo, Cecilia McDowall, Jaakko Mäntyjärvi and Stephen Paulus.
Contributors: Rugen, Kira Zeeman (Author) / Rugen, Kira Z (Thesis advisor) / Reber, William (Committee member) / Saucier, Catherine (Committee member) / Doan, Jerry (Committee member) / Bailey, Wayne (Committee member) / Arizona State University (Publisher)
Created: 2013