This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
Many longitudinal studies, especially in clinical trials, suffer from missing data issues. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption leads to unrealistic simplification and is implausible in many cases. For example, consider an investigator examining the effect of treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients who are experiencing severe depression are more likely to miss an appointment, leaving the data missing for that particular visit. Data that are not missing at random may produce biased results if the missing-data mechanism is not taken into account; in other words, the missingness is related to the unobserved responses. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing-data mechanism: the data are stratified according to the observed missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though interest usually centers on the marginal parameters, and pattern-mixture models consequently tend to require large samples. In this thesis, two studies are presented. The first study is motivated by an open problem arising from pattern-mixture models. Simulation studies in this part show that the information in the missing-data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing-data patterns may be accounted for by a simple latent factor. The findings of the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of missing values and longitudinal outcomes and remains feasible even for small-sample applications. Detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance and evaluation are studied through designed simulations and three applications. Simulation and application settings range from a correctly specified to a misspecified missing-data mechanism and include a variety of sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing-data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used to examine robustness. The CLFM is shown to provide more precise estimators, particularly for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is most evident with small samples, where Roy's model has difficulty with estimation convergence. The proposed CLFM is also robust when missing data are ignorable, as demonstrated through the study on Growth of Language and Early Literacy Skills in Preschoolers.
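As a rough illustration of the core idea in the first study (summarizing many missing-data patterns with a single continuous latent score), the sketch below uses synthetic data to condense each subject's missing-data indicators into a first principal component and add it as a covariate in a linear mixed model. This is a loose stand-in, not the author's CLFM; the data-generating model and variable names are assumptions for illustration only.

```python
# A loose, hypothetical sketch (not the author's CLFM): summarize each subject's
# missing-data indicators with one continuous score (first principal component)
# and include it as a covariate in a linear mixed model. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 100, 5

# Synthetic longitudinal outcome with subject-specific intercepts
subj_eff = rng.normal(0, 1, n_subj)
rows = []
for i in range(n_subj):
    for t in range(n_visits):
        y = 2.0 + 0.5 * t + subj_eff[i] + rng.normal(0, 1)
        # Higher outcomes (e.g., worse depression) make a missed visit more likely
        missing = rng.random() < 1 / (1 + np.exp(-(y - 5)))
        rows.append({"subj": i, "visit": t, "y": np.nan if missing else y})
df = pd.DataFrame(rows)

# Missingness indicator matrix: one row per subject, one column per visit
R = df.pivot(index="subj", columns="visit", values="y").isna().astype(float)

# One continuous summary of the missing-data patterns (first principal component)
Rc = R - R.mean(axis=0)
u, s, vt = np.linalg.svd(Rc, full_matrices=False)
scores = pd.DataFrame({"subj": R.index, "miss_factor": u[:, 0] * s[0]})
df = df.merge(scores, on="subj")

# Mixed model for the observed outcomes, adjusting for the missingness summary
fit = smf.mixedlm("y ~ visit + miss_factor", data=df.dropna(), groups="subj").fit()
print(fit.summary())
```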
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done on ALT, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show how the parameters affect the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and demonstrated with several examples. Several graphical tools are also developed for evaluating candidate designs. Finally, designs for model checking when more than one candidate model is available are discussed.
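As a loose illustration of how candidate ALT designs can be compared under a GLM-style lifetime model, the hypothetical sketch below scores three allocations of test units to stress levels with a D-optimality criterion (the log-determinant of the information matrix). The exponential-lifetime model, censoring time, weights, and numbers are illustrative assumptions, not the dissertation's PH/GLM formulation.

```python
# A minimal, hypothetical sketch of comparing candidate accelerated-life-test
# designs by D-optimality for a simple log-linear (GLM-style) lifetime model.
import numpy as np

def d_criterion(stress_levels, allocation, beta=(-2.0, 1.5)):
    """log det of X'WX for an exponential-lifetime GLM with log link,
    where W_ii is the expected number of failures at each stress level."""
    x = np.column_stack([np.ones_like(stress_levels), stress_levels])
    rate = np.exp(beta[0] + beta[1] * np.asarray(stress_levels))  # failure rate
    test_time = 1.0
    p_fail = 1 - np.exp(-rate * test_time)        # right censoring at test_time
    w = np.asarray(allocation) * p_fail           # expected failures per level
    info = x.T @ (w[:, None] * x)
    return np.linalg.slogdet(info)[1]

stress = np.array([0.5, 0.75, 1.0])               # standardized stress levels
for alloc in ([40, 30, 30], [20, 30, 50], [50, 0, 50]):
    print(alloc, round(float(d_criterion(stress, alloc)), 3))
```

A larger design (higher log-determinant) is preferred under this criterion; the same skeleton could be repeated over a grid of allocations to search for an optimum.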
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work presents two complementary studies that propose heuristic methods to capture characteristics of data using the ensemble learning method of random forest. The first study is motivated by the problem in education of determining teacher effectiveness in student achievement. Value-added models (VAMs), constructed as linear mixed models, use students' test scores as outcome variables and teachers' contributions as random effects to ascribe changes in student performance to the teachers who have taught them. The VAM teacher score is the empirical best linear unbiased predictor (EBLUP). This approach is limited by the adequacy of the assumed model specification with respect to the unknown underlying model. In that regard, this study proposes alternative ways to rank teacher effects that are not dependent on a given model by introducing two variable importance measures (VIMs), the node-proportion and the covariate-proportion. These VIMs are novel because they take into account the final configuration of the terminal nodes in the constitutive trees in a random forest. In a simulation study, under a variety of conditions, true rankings of teacher effects are compared with estimated rankings obtained from three sources: the newly proposed VIMs, existing VIMs, and EBLUPs from the assumed linear model specification. The newly proposed VIMs outperform all others in various scenarios where the model is misspecified. The second study develops two novel interaction measures. These measures could be used within, but are not restricted to, the VAM framework. The distribution-based measure is constructed to identify interactions in a general setting where a model specification is not assumed in advance. In turn, the mean-based measure is built to estimate interactions when the model specification is assumed to be linear. Both measures are unique in their construction; they take into account not only the outcome values, but also the internal structure of the trees in a random forest. In a separate simulation study, under a variety of conditions, the proposed measures are found to identify and estimate second-order interactions.
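For flavor, the sketch below computes a simple forest-based importance ranking: the share of internal split nodes that use each covariate across a random forest, compared with scikit-learn's built-in impurity importance. This is a hypothetical stand-in, not the dissertation's node-proportion or covariate-proportion VIMs, which are built from the configuration of terminal nodes.

```python
# A rough, hypothetical sketch (not the dissertation's VIMs): for each covariate,
# count the share of internal split nodes in a random forest that use it, as a
# simple model-free ranking of covariates on a synthetic benchmark dataset.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor

X, y = make_friedman1(n_samples=500, n_features=8, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

counts = np.zeros(X.shape[1])
total = 0
for est in forest.estimators_:
    feat = est.tree_.feature            # value -2 marks leaf nodes
    used = feat[feat >= 0]
    for f in used:
        counts[f] += 1
    total += used.size

split_share = counts / total
print("split-share ranking:      ", np.argsort(split_share)[::-1])
print("built-in impurity ranking:", np.argsort(forest.feature_importances_)[::-1])
```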
Contributors: Valdivia, Arturo (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile technology that could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, this study compared learning gains of 252 undergraduate and graduate students on a pretest and posttest given before and after interacting with one of three types of augmented reality experiences: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low) as measured by the student's pretest score. Taking prior knowledge into account, students with low prior knowledge assigned to either the high or low AR experience had significantly higher learning gains than those assigned to no AR experience. On the other hand, the results showed no statistically significant difference between students assigned to work individually versus in pairs. Students assigned to both the high and low AR experiences perceived a significantly higher level of engagement than their no-AR counterparts. Students with low prior knowledge benefited the most from the high AR condition in terms of learning gains. Overall, the AR application did well at providing a hands-on experience for working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
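The abstract does not name its statistical tests; one plausible way to analyze a 3 x 2 design like the one described is a two-way ANOVA on pre-to-post gain scores. The sketch below runs such an analysis on synthetic data, with all cell sizes and effect sizes assumed for illustration only.

```python
# Hypothetical sketch of a two-factor analysis for a 3 x 2 design (AR level x
# collaboration) using synthetic gain scores; not the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
rows = []
for ar in ("high", "low", "none"):
    for collab in ("pairs", "individual"):
        n = 42                                         # ~252 students over 6 cells
        ar_bump = {"high": 6, "low": 5, "none": 0}[ar] # assumed AR benefit
        gains = rng.normal(10 + ar_bump, 8, n)
        rows.extend({"ar": ar, "collab": collab, "gain": g} for g in gains)
df = pd.DataFrame(rows)

model = smf.ols("gain ~ C(ar) * C(collab)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and interaction
```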
Contributors: Conley, Quincy (Author) / Atkinson, Robert K (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Purpose: Exercise interventions often result in less-than-predicted weight loss, or even weight gain in some individuals, with over half of the weight that is lost often being regained within one year. The current study hypothesized that one year following a 12-week supervised exercise intervention, women who continued to exercise regularly but initially gained weight would lose the weight gained, reverting back to baseline with no restoration of set-point, or continue to lose weight if weight was initially lost. Conversely, those who discontinued purposeful exercise at the conclusion of the study were expected to continue to gain or regain weight. Methods: Twenty-four women who completed the initial 12-week exercise intervention (90 min/week of supervised treadmill walking at 70% VO2peak) participated in a follow-up study one year after the conclusion of the exercise intervention. Subjects underwent dual-energy X-ray absorptiometry at baseline, 12 weeks, and 15 months, and filled out physical activity questionnaires at 15 months. Results: A considerable amount of heterogeneity was observed in body weight and fat mass changes among subjects, but there was no significant overall change in weight or fat mass from baseline to follow-up. Fifteen women were categorized as compensators and as a group gained weight (+0.94 ± 3.26 kg) and fat mass (+0.22 ± 3.25 kg), compared with the 9 non-compensators, who lost body weight (-0.26 ± 3.59 kg) and had essentially no change in fat mass (+0.01 ± 2.61 kg) from 12 weeks to follow-up. There was a significant between-group difference (p = .003) in change in fat mass from 12 weeks to follow-up between subjects who continued to exercise vigorously on a regular basis (-2.205 ± 3.070 kg) and those who did not (+1.320 ± 2.156 kg). Additionally, energy compensation from baseline to 12 weeks and early body weight and composition changes during the intervention were moderate predictors of body weight and composition changes from baseline to follow-up. Conclusion: The main finding of this study is that following a 12-week supervised exercise intervention, women displayed a net loss of fat mass during the follow-up period if regular vigorous exercise was continued, regardless of whether they were classified as compensators or non-compensators during the initial intervention.
Contributors: Cabbage, Clarissa Marie (Author) / Gaesser, Glenn (Thesis advisor) / Chisum, Jack (Committee member) / Campbell, Kathryn (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
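DIMTEST and the SGDDM statistic are not reimplemented here, but the Monte Carlo bookkeeping the study describes (simulate data under a known structure, apply a dimensionality check, tally rejections) can be sketched as follows. A generic parallel-analysis check on the inter-item correlation matrix stands in for the actual procedures, and the sample size, test length, and replication count are illustrative.

```python
# A minimal Monte Carlo skeleton in the spirit of the study's design: simulate
# dichotomous responses from a unidimensional 2PL model and estimate the
# rejection rate of a generic dimensionality check (parallel analysis on the
# inter-item correlation matrix). DIMTEST/SGDDM are NOT reimplemented here.
import numpy as np

rng = np.random.default_rng(1)

def simulate_2pl(n_persons, n_items):
    theta = rng.normal(size=(n_persons, 1))          # unidimensional ability
    a = rng.uniform(0.8, 2.0, size=n_items)          # discriminations
    b = rng.normal(size=n_items)                     # difficulties
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return (rng.random(p.shape) < p).astype(int)

def rejects_unidimensionality(data, n_shuffles=50):
    # Reject if the 2nd eigenvalue exceeds its null distribution from shuffled data
    ev2 = np.linalg.eigvalsh(np.corrcoef(data.T))[-2]
    null = []
    for _ in range(n_shuffles):
        shuffled = np.column_stack([rng.permutation(col) for col in data.T])
        null.append(np.linalg.eigvalsh(np.corrcoef(shuffled.T))[-2])
    return ev2 > np.quantile(null, 0.95)

n_reps, n_persons, n_items = 100, 250, 12            # a "small-scale" condition
rejections = sum(
    rejects_unidimensionality(simulate_2pl(n_persons, n_items)) for _ in range(n_reps)
)
print("empirical rejection rate under unidimensionality:", rejections / n_reps)
```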
Contributors: Reichenberg, Ray E (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
INTRODUCTION: Exercise performed at moderate to vigorous intensities has been shown to generate a post-exercise hypotensive response. Whether this response is observed with very low exercise intensities is unclear. PURPOSE: To compare the post-physical-activity ambulatory blood pressure (ABP) response to a single worksite walking day and a normal sedentary work day in pre-hypertensive adults. METHODS: Participants were 7 pre-hypertensive (127 ± 8 mmHg / 83 ± 8 mmHg) adults (3 male, 4 female, age = 42 ± 12 yr) who participated in a randomized, cross-over study that included a control and a walking treatment. Only those who indicated regularly sitting at least 8 hours/day and no structured physical activity were enrolled. Treatment days were randomly assigned and were performed one week apart. The walking treatment consisted of periodically increasing walk time up to 2.5 hours over the course of an 8-hour work day on a walking workstation (Steelcase Company, Grand Rapids, MI). Walk speed was set at 1 mph. Participants wore an ambulatory blood pressure cuff (Oscar 2, SunTech Medical, Morrisville, NC) for 24 hours on both treatment days. Participants maintained normal daily activities on the control day. ABP data collected from 9:00 am until 10:00 pm of the same day were included in statistical analyses. Linear mixed models were used to detect differences in systolic (SBP) and diastolic blood pressure (DBP) by treatment condition over the whole day and post workday, for the period between 4 and 10 pm when participants were no longer at work. RESULTS: BP was significantly lower in response to the walking treatment compared to the control day (mean SBP 126 ± 7 mmHg vs. 124 ± 7 mmHg, p = .043; DBP 80 ± 3 mmHg vs. 77 ± 3 mmHg, p = .001, respectively). Post workday (4:00 to 10:00 pm), SBP decreased 3 mmHg (p = .017) and DBP decreased 4 mmHg (p < .001) following walking. CONCLUSION: Even low-intensity exercise such as walking on a walking workstation is effective for significantly reducing acute BP when compared to a normal work day.
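A minimal sketch of the kind of linear mixed model described, with repeated ambulatory SBP readings nested within participants, a fixed effect for treatment day, and a random intercept per participant, is given below on synthetic data; the effect size and noise levels are fabricated for illustration only.

```python
# Hedged, synthetic-data sketch of a mixed model for repeated ABP readings;
# not the study's data, and the assumed treatment effect is made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
participants, readings_per_day = 7, 20

rows = []
for pid in range(participants):
    base = rng.normal(127, 6)                       # participant-specific baseline SBP
    for day in ("control", "walking"):
        effect = -2.0 if day == "walking" else 0.0  # assumed ~2 mmHg reduction
        for r in range(readings_per_day):
            rows.append({"pid": pid, "day": day,
                         "sbp": base + effect + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept per participant, fixed effect for treatment day
fit = smf.mixedlm("sbp ~ C(day, Treatment('control'))", data=df, groups="pid").fit()
print(fit.summary())
```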
Contributors: Zeigler, Zachary (Author) / Swan, Pamela (Thesis advisor) / Buman, Matthew (Committee member) / Gaesser, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel generated streams. The first applies serial tests to the individual streams and then tests the resulting P-values for uniformity. The second turns all the parallel generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation motivated multivariate tests and vector time series based tests. These methods have been implemented in an extension to TestU01 built in C++ and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suites of tests.
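The two TestU01 testing schemes described above, and the cross-stream dependence that either one can miss, can be illustrated outside of TestU01 itself. The sketch below uses SciPy's Kolmogorov-Smirnov test as a stand-in serial test; TestU01 and the dissertation's C++ extension are not called here, and the streams are ordinary NumPy generators.

```python
# Illustration of the two TestU01-style schemes, with a simple multivariate
# cross-stream check that neither scheme directly performs. Stand-in tests only.
import numpy as np
from scipy import stats

n_streams, n_per_stream = 64, 10_000
streams = [np.random.default_rng(seed).random(n_per_stream)
           for seed in range(n_streams)]

# Scheme 1: serial test on each stream, then test the p-values for uniformity
pvals = [stats.kstest(s, "uniform").pvalue for s in streams]
print("scheme 1 (two-level) p-value:", stats.kstest(pvals, "uniform").pvalue)

# Scheme 2: concatenate all streams into one long vector and test it serially
concatenated = np.concatenate(streams)
print("scheme 2 (concatenated) p-value:",
      stats.kstest(concatenated, "uniform").pvalue)

# Neither scheme directly probes cross-stream correlation; a crude multivariate
# check is to examine pairwise correlations between streams.
corr = np.corrcoef(np.vstack(streams))
off_diag = corr[~np.eye(n_streams, dtype=bool)]
print("max |pairwise correlation| between streams:", np.abs(off_diag).max())
```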
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Dietary protein is known to increase postprandial thermogenesis more than carbohydrates or fats, probably because amino acids have no immediate form of storage in the body and can become toxic if not readily incorporated into body tissues or excreted. It is also well documented that subjects report greater satiety on high- versus low-protein diets and that subject compliance tends to be greater on high-protein diets, thus contributing to their popularity. What is not as well known is how a high-protein diet affects resting metabolic rate over time, and what is even less well known is whether resting metabolic rate changes significantly when a person consuming an omnivorous diet suddenly adopts a vegetarian one. This pilot study sought to determine whether subjects adopting a vegetarian diet would report decreased satiety or demonstrate a decreased metabolic rate due to a change in protein intake and a possible increase in carbohydrates. Further, this study sought to validate a new device, the SenseWear Armband (SWA), to determine whether it might be sensitive enough to detect subtle changes in metabolic rate related to diet. Subjects were tested twice on all variables, at baseline and post-test. Independent- and related-samples tests revealed no significant differences between or within groups for any variable at any time point in the study. The SWA had a strong positive correlation with the Oxycon Mobile metabolic cart, but due to a lack of change in metabolic rate, its sensitivity was undetermined. These data do not support the theory that adopting a vegetarian diet results in a long-term change in metabolic rate.
Contributors: Moore, Amy (Author) / Johnston, Carol (Thesis advisor) / Appel, Christy (Thesis advisor) / Gaesser, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The use of bias indicators in psychological measurement has been contentious, with some researchers questioning whether they actually suppress or moderate the ability of substantive psychological indicators to discriminate (McGrath, Mitchell, Kim, & Hough, 2010). Bias indicators on the MMPI-2-RF (F-r, Fs, FBS-r, K-r, and L-r) were tested for suppression or moderation of the ability of the RC1 and NUC scales to discriminate between Epileptic Seizures (ES) and Non-epileptic Seizures (NES, a conversion disorder that is often misdiagnosed as ES). RC1 and NUC had previously been found to be the best scales on the MMPI-2-RF for differentiating between ES and NES, with optimal cut scores of 65 for RC1 (classification rate of 68%) and 85 for NUC (classification rate of 64%; Locke et al., 2010). The MMPI-2-RF was completed by 429 inpatients on the Epilepsy Monitoring Unit (EMU) at the Scottsdale Mayo Clinic Hospital, all of whom had confirmed diagnoses of ES or NES. Moderated logistic regression was used to test for moderation, and logistic regression was used to test for suppression. Classification rates of RC1 and NUC were calculated at different levels of the bias indicators to evaluate clinical utility for diagnosticians. No moderation was found. Suppression was found for F-r, Fs, K-r, and L-r with RC1, and for all variables with NUC. For F-r and Fs, the optimal RC1 and NUC cut scores increased at higher levels of bias, but tended to decrease at higher levels of K-r, L-r, and FBS-r. K-r provided the greatest suppression for RC1, as well as the greatest increases in classification rates at optimal cut scores, given different levels of bias. It was concluded that, consistent with expectations, taking account of bias indicator suppression on the MMPI-2-RF can improve discrimination of ES and NES. At higher levels of negative impression management, higher cut scores on substantive scales are needed to attain optimal discrimination, whereas at higher levels of positive impression management and FBS-r, lower cut scores are needed. Using these new cut scores resulted in modest improvements in discrimination accuracy. These findings are consistent with prior research in showing the efficacy of bias indicators, and they extend the findings to a psycho-medical context.
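Using synthetic data and hypothetical scale names, the sketch below shows the general shape of the two analyses described: a logistic regression that adds a bias indicator to a substantive scale (suppression) and the same model with an interaction term (moderation), plus a classification rate at a fixed cut score. It does not reproduce MMPI-2-RF scoring or the study's data.

```python
# Schematic sketch only: synthetic T scores, fabricated coefficients, and
# hypothetical variable names standing in for RC1 and K-r.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "rc1": rng.normal(60, 10, n),     # substantive scale (T scores)
    "k_r": rng.normal(50, 10, n),     # bias indicator (T scores)
})
# Synthetic diagnosis: substantive scale raises NES probability; the bias
# indicator contributes a fabricated suppressing effect
logit = 0.12 * (df["rc1"] - 60) - 0.05 * (df["k_r"] - 50)
df["nes"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

suppression = smf.logit("nes ~ rc1 + k_r", data=df).fit(disp=False)
moderation = smf.logit("nes ~ rc1 * k_r", data=df).fit(disp=False)
print(suppression.params)
print("interaction p-value:", moderation.pvalues["rc1:k_r"])

# Classification rate at a given cut score on the substantive scale
cut = 65
pred = (df["rc1"] >= cut).astype(int)
print("classification rate at cut score", cut, ":", (pred == df["nes"]).mean())
```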
Contributors: Wershba, Rebecca E (Author) / Lanyon, Richard I (Thesis advisor) / Barrera, Manuel (Committee member) / Karoly, Paul (Committee member) / Millsap, Roger E (Committee member) / Arizona State University (Publisher)
Created: 2013