Description
Many longitudinal studies, especially in clinical trials, suffer from missing data issues. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. For example, suppose an investigator is examining the effect of a treatment on depression. Subjects are scheduled to see doctors on a regular basis and asked questions about recent emotional situations. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that particular visit. Data that are not missing at random may produce biased results if the missing-data mechanism is not taken into account; in other words, the missingness mechanism is related to the unobserved responses. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing-data mechanism: the data are stratified according to a variety of missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters; as a result, pattern-mixture models have the drawback that a large sample is usually required. In this thesis, two studies are presented. The first study is motivated by an open problem from pattern-mixture models. Simulation studies in this part show that the information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor. The findings from the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which is used to model the joint distribution of missing values and longitudinal outcomes; the proposed model is feasible even for small-sample applications. The detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance and evaluation are studied through designed simulations and three applications. The simulation and application settings range from a correctly specified missing data mechanism to a mis-specified one and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has fully complete data and is used to conduct a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is more pronounced for small sample sizes, where Roy's model experiences convergence problems in estimation. The proposed CLFM is also robust when missing data are ignorable, as demonstrated through the study on Growth of Language and Early Literacy Skills in Preschoolers.
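A minimal simulation sketch (Python with numpy; all variable names and parameter values are illustrative assumptions, not the thesis's setup) of the kind of non-ignorable mechanism described above: a single continuous latent factor drives both the longitudinal outcome and the probability of a missed visit, so the missing data indicators carry information that a simple latent structure can summarize.

```python
import numpy as np

rng = np.random.default_rng(42)
n, t = 200, 5  # subjects, scheduled visits

# One continuous latent factor per subject (the "simple latent structure").
u = rng.normal(0.0, 1.0, size=n)

# Longitudinal outcome: random intercept/slope model driven in part by u.
time = np.arange(t)
intercept = 2.0 + 0.8 * u + rng.normal(0, 0.3, size=n)
slope = -0.5 + 0.4 * u + rng.normal(0, 0.1, size=n)
y = intercept[:, None] + slope[:, None] * time + rng.normal(0, 0.5, size=(n, t))

# Non-ignorable missingness: the same latent factor raises the odds of a
# missed visit, so missingness is tied to the unobserved responses.
logit = -1.5 + 1.2 * u
p_miss = 1.0 / (1.0 + np.exp(-logit))
observed = rng.random((n, t)) > p_miss[:, None]
y_obs = np.where(observed, y, np.nan)

# The naive complete-case mean is biased relative to the truth; this is the
# bias a joint model of outcomes and missingness is designed to correct.
print("true mean at last visit:    ", y[:, -1].mean().round(3))
print("observed mean at last visit:", np.nanmean(y_obs[:, -1]).round(3))
```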
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
It is common in the analysis of data to provide a goodness-of-fit test to assess the performance of a model. In the analysis of contingency tables, goodness-of-fit statistics are frequently employed when modeling social science, educational or psychological data where the interest is often directed at investigating the association among multi-categorical variables. Pearson's chi-squared statistic is well known in goodness-of-fit testing, but it is sometimes considered an omnibus test because it gives little guidance to the source of poor fit once the null hypothesis is rejected. However, its components can provide powerful directional tests. In this dissertation, orthogonal components are used to develop goodness-of-fit tests for models fit to the counts obtained from the cross-classification of multi-category dependent variables. Ordinal categories are assumed. Orthogonal components defined on marginals are obtained when analyzing multi-dimensional contingency tables through the use of the QR decomposition. A subset of these orthogonal components can be used to construct limited-information tests that allow one to identify the source of lack of fit and provide an increase in power compared to Pearson's test. These tests can address the adverse effects that arise when data are sparse. The tests rely on the set of first- and second-order marginals jointly, on the set of second-order marginals only, and on the random forest method, a popular algorithm for modeling large complex data sets. The performance of these tests is compared to that of the likelihood ratio test as well as to tests based on orthogonal polynomial components. The derived goodness-of-fit tests are evaluated with studies for detecting two- and three-way associations that are not accounted for by a categorical-variable factor model with a single latent variable. In addition, the tests are used to investigate the case when the model misspecification involves parameter constraints for large and sparse contingency tables. The methodology proposed here is applied to data from the 38th round of the State Survey conducted by the Institute for Public Policy and Social Research at Michigan State University (2005). The results illustrate the use of the proposed techniques in the context of a sparse data set.
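A small numerical sketch (Python with numpy; not the dissertation's exact marginal-based construction) of the identity underlying component tests: orthonormal contrasts obtained from a QR decomposition, applied to the standardized cell residuals, give components whose squares sum back to Pearson's X^2, and subsets of components then form directed, limited-information tests. The contrast matrix here is arbitrary; in the dissertation its columns are tied to marginal distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed counts for a 3x3 table and expected counts under a fitted
# independence model (a stand-in for the models in the dissertation).
observed = np.array([[30, 18, 12], [22, 25, 13], [10, 17, 28]], dtype=float)
n = observed.sum()
p_row = observed.sum(axis=1) / n
p_col = observed.sum(axis=0) / n
expected = n * np.outer(p_row, p_col)

# Pearson's X^2 as the squared length of the standardized residual vector.
r = ((observed - expected) / np.sqrt(expected)).ravel()
x2 = (r ** 2).sum()

# Orthonormal basis via QR of a full-rank contrast matrix; each column of q
# defines one orthogonal component of X^2.
c = rng.normal(size=(9, 9))
q, _ = np.linalg.qr(c)
components = q.T @ r

# The squared components recover X^2 exactly; a subset of them gives a
# limited-information statistic directed at particular sources of misfit.
print(np.isclose(x2, (components ** 2).sum()))  # True
```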
Contributors: Milovanovic, Jelena (Author) / Young, Dennis (Thesis advisor) / Reiser, Mark R. (Thesis advisor) / Wilson, Jeffrey (Committee member) / Eubank, Randall (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Designing a hazard intelligence platform enables public agencies to organize diversity and manage complexity in collaborative partnerships. To maintain the integrity of the platform while preserving the prosocial ethos, understanding the dynamics of “non-regulatory supplements” to central governance is crucial. Conceptually, social responsiveness is shaped by communicative actions, in which coordination is attained through negotiated agreements reached by evaluating validity claims. The dynamic processes involve information processing and knowledge sharing. The access to and use of collaborative intelligence can be examined through the notions of traceability and intelligence cohort. Empirical evidence indicates that social traceability is statistically significant and positively associated with the improvement of collaborative performance. Moreover, social traceability positively contributes to the efficacy of technical traceability, but not vice versa. Furthermore, technical traceability significantly contributes to both moderate and high performance improvement, while social traceability is significant only for moderate performance improvement. The social effect is therefore limited and contingent. The results further suggest three strategic considerations. Social significance: social traceability is the fundamental consideration for high cohort performance. Cocktail therapy: high cohort performance involves an integrative strategy with both high social traceability and high technical traceability. Servant leadership: public agencies should exercise limited authority and play a supporting role in the provision of appropriate technical traceability, while actively promoting social traceability in the system.
Contributors: Wang, Chao-shih (Author) / Van Fleet, David (Thesis advisor) / Grebitus, Carola (Committee member) / Wilson, Jeffrey (Committee member) / Shultz, Clifford (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The Pearson and likelihood ratio statistics are well known in goodness-of-fit testing and are commonly used for models applied to multinomial count data. When data come from a table formed by the cross-classification of a large number of variables, these goodness-of-fit statistics may have lower power and an inaccurate Type I error rate due to sparseness. Pearson's statistic can be decomposed into orthogonal components associated with the marginal distributions of the observed variables, and an omnibus fit statistic can be obtained as a sum of these components. When the statistic is a sum of components for lower-order marginals, it has a good Type I error rate and statistical power even when applied to a sparse table. In this dissertation, goodness-of-fit statistics using orthogonal components based on second-, third- and fourth-order marginals were examined. If lack of fit is present in higher-order marginals, then a test that incorporates the higher-order marginals may have higher power than a test that incorporates only first- and/or second-order marginals. To this end, two new statistics based on the orthogonal components of Pearson's chi-square that incorporate third- and fourth-order marginals were developed, and the Type I error, empirical power, and asymptotic power under different sparseness conditions were investigated. Individual orthogonal components as test statistics to identify lack of fit were also studied, and their performance was compared to that of other popular lack-of-fit statistics. When the number of manifest variables grows beyond 20, most of the statistics based on marginal distributions run into limitations in computer resources and CPU time. For this situation, with 20 or more manifest variables, the performance of a bootstrap-based method for obtaining p-values for the Pearson-Fisher statistic, fit to a confirmatory dichotomous-variable factor analysis model, and the performance of the Tollenaar and Mooijaart (2003) statistic were investigated.
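A hedged sketch (Python with numpy) of the parametric-bootstrap idea mentioned above for a fit statistic on sparse multinomial data: resample tables from the fitted null probabilities and take the p-value as the proportion of bootstrap statistics at least as large as the observed one. The equal-probability null used here is a simple stand-in for the factor model; the counts are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def pearson_x2(counts, probs, n):
    # Pearson's X^2 = sum over cells of (O - E)^2 / E.
    expected = n * probs
    return ((counts - expected) ** 2 / expected).sum()

# Sparse observed table, flattened to cell counts.
observed = np.array([5, 0, 1, 3, 0, 0, 2, 1, 0, 4, 0, 2])
n = observed.sum()
null_probs = np.full(observed.size, 1.0 / observed.size)  # stand-in model

stat_obs = pearson_x2(observed, null_probs, n)

# Parametric bootstrap: the chi-square reference distribution is unreliable
# under sparseness, so simulate the statistic's null distribution directly.
b = 5000
stats = np.array([
    pearson_x2(rng.multinomial(n, null_probs), null_probs, n)
    for _ in range(b)
])
p_value = (stats >= stat_obs).mean()
print(f"X^2 = {stat_obs:.2f}, bootstrap p = {p_value:.3f}")
```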
Contributors: Dassanayake, Mudiyanselage Maduranga Kasun (Author) / Reiser, Mark R. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St. Louis, Robert (Committee member) / Kamarianakis, Ioannis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This research compares shifts in a SuperSpec titanium nitride (TiN) kinetic inductance detector's (KID's) resonant frequency with accepted models for other KIDs. SuperSpec, which is being developed at the University of Colorado Boulder, is an on-chip spectrometer designed with a multiplexed readout of multiple KIDs that is set up for broadband transmission of these measurements. It is useful for detecting radiation at mm and sub-mm wavelengths, which is significant because absorption and re-emission of photons by dust causes radiation from distant objects to reach us in the infrared and far-infrared bands. In preparation for testing, our team installed stages designed previously by Paul Abers and his group into our cryostat and designed and installed the other parts necessary for the cryostat to test devices on the 250 mK stage. This work included the design and construction of additional parts, a new setup for the wiring in the cryostat, the assembly, testing, and installation of several stainless steel coaxial cables for measurements through the devices, and other cryogenic and low-pressure considerations. The SuperSpec KID was successfully tested on this 250 mK stage, confirming that the new setup is functional. Our results agree with existing models, which predict that the breaking of Cooper pairs in the detector's superconductor, occurring in response to temperature, optical load, and readout power, decreases the resonant frequencies. A negative linear relationship appears in our results, as expected, since the parameters are varied only slightly, so a linear approximation is appropriate. We compared the rate at which the resonant frequency responded to temperature and found it to be close to the expected value.
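A short sketch (Python with numpy; the sweep values are invented for illustration, not the thesis data) of the final analysis step described above: fitting a linear approximation to resonant frequency versus bath temperature and reading off the response rate df/dT, which pair breaking predicts to be negative.

```python
import numpy as np

# Hypothetical sweep: bath temperature (mK) vs. measured resonant frequency (MHz).
temp_mk = np.array([250.0, 260.0, 270.0, 280.0, 290.0, 300.0])
f0_mhz = np.array([302.110, 302.104, 302.097, 302.089, 302.080, 302.070])

# Over a small parameter range the frequency shift is well approximated as
# linear, so an ordinary least-squares line captures the response rate.
slope, intercept = np.polyfit(temp_mk, f0_mhz, deg=1)

# slope is in MHz/mK; convert to Hz/mK for readability.
print(f"df/dT ~ {slope * 1e6:.0f} Hz/mK")  # negative, as pair breaking predicts
```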
Contributors: Diaz, Heriberto Chacon (Author) / Mauskopf, Philip (Thesis director) / McCartney, Martha (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This paper considers what factors influence student interest, motivation, and continued engagement. Anticipated extrinsic rewards for activity participation have been shown to reduce intrinsic value for that activity, which suggests that grade point average (GPA) may have a similar effect on academic interests. Further, when incentives such as scholarships, internships, and careers are GPA-oriented, students must adopt performance goals in courses to guarantee success; however, performance goals have not been shown to correlate with continued interest in a topic. Current literature proposes that student involvement in extracurricular activities, focused study groups, and mentored research is crucial to student success. Further, students may express either a fixed or growth mindset, which influences their approach to challenges and opportunities for growth. The purpose of this study was to collect individual cases of students' experiences in college. The interview method was chosen to collect complex information that could not be gathered from standard surveys. To accomplish this, questions were developed based on content areas related to education and motivation theory: activities and meaning, motivation, vision, and personal development. The interview method relied on broad questions followed by specific "probing" questions; we hypothesized that this approach would result in participant-led discussions and unique narratives. Initial findings suggest that some of the questions were effective in eliciting detailed responses, though results depended on the interviewer. From the interviews we find that students value their group involvements, leadership opportunities, and relationships with mentors, which parallels results found in other studies.
Contributors: Abrams, Sara (Author) / Hartwell, Lee (Thesis director) / Correa, Kevin (Committee member) / Department of Psychology (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This study estimates the capitalization effect of golf courses in Maricopa County using the hedonic pricing method. It draws upon a dataset of 574,989 residential transactions from 2000 to 2006 to examine how the aesthetic, non-golf benefits of golf courses capitalize across a gradient of proximity measures. The measures of amenity value extend beyond home adjacency and include homes within a range of discrete walkability buffers around golf courses. The models also distinguish between public and private golf courses as a proxy for the level of golf course access perceived by non-golfers. Unobserved spatial characteristics of the neighborhoods around golf courses are controlled for by increasing the extent of spatial fixed effects from city, to census tract, and finally to 2000-meter golf course ‘neighborhoods.’ The estimation results support two primary conclusions. First, golf course proximity is highly valued for adjacent homes and homes up to 50 meters away from a course, still evident but minimal between 50 and 150 meters, and insignificant at all other distance ranges. Second, private golf courses do not command higher proximity premia than public courses, with the exception of homes within 25 to 50 meters of a course, indicating that the non-golf benefits of courses capitalize similarly regardless of course type. The results of this study motivate further investigation into golf course features that signal access or add value to homes in the range of capitalization, particularly for near-adjacent homes between 50 and 150 meters previously thought not to capitalize.
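A condensed sketch (Python with pandas and statsmodels; column names, buffer bands, and the synthetic data are illustrative assumptions, not the study's variables) of the hedonic specification described above: log sale price regressed on discrete proximity buffers interacted with course type, with spatial fixed effects absorbing unobserved neighborhood characteristics.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical transaction-level data standing in for the Maricopa dataset.
df = pd.DataFrame({
    "sale_price": rng.lognormal(mean=12.5, sigma=0.3, size=n),
    "sqft": rng.normal(1800, 400, size=n).round(),
    "dist_band": rng.choice(["adjacent", "0-50m", "50-150m", "150m+"], size=n),
    "course_type": rng.choice(["public", "private"], size=n),
    "tract": rng.choice([f"tract_{i}" for i in range(10)], size=n),
})

# Hedonic regression: proximity buffers interacted with course type, with
# census-tract fixed effects; the far band is the omitted reference group.
model = smf.ols(
    "np.log(sale_price) ~ C(dist_band, Treatment('150m+'))"
    " * C(course_type) + sqft + C(tract)",
    data=df,
)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust SEs
print(result.summary())
```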
Contributors: Joiner, Emily (Author) / Abbott, Joshua (Thesis director) / Smith, Kerry (Committee member) / Economics Program in CLAS (Contributor) / School of Sustainability (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The objective of this paper is to provide an educational diagnostic of blockchain technology and its application to the supply chain. Education on the topic is important to prevent misinformation about the capabilities of blockchain. As a new technology, blockchain can be confusing to grasp given the wide range of possibilities it provides, and definitions that are too broad only convolute the topic. Instead, the focus will be maintained on explaining the technical details of how and why this technology works to improve the supply chain. The scope of explanation will not be limited to solutions; current problems will also be detailed. Both public and private blockchain networks will be explained, along with the solutions they provide in supply chains. In addition, other non-blockchain systems will be described that provide important pieces of supply chain operations that blockchain cannot. When applied to the supply chain, blockchain provides improved consumer transparency, management of resources, logistics, trade finance, and liquidity.
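Since the paper's aim is educational, a minimal hash-chain sketch (Python, standard library only; the shipment records are invented) shows the core mechanism that makes a blockchain ledger tamper-evident: each block commits to the hash of its predecessor, so altering any past record, such as a shipment entry, invalidates every later hash.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash the block's canonical JSON form, which includes the previous hash.
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "record": record, "prev_hash": prev}
    block["hash"] = block_hash(
        {k: block[k] for k in ("index", "record", "prev_hash")})
    chain.append(block)

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "record", "prev_hash")}
        if block["hash"] != block_hash(body):
            return False  # a block's contents were altered after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

chain: list = []
append_block(chain, {"shipment": "SKU-1042", "from": "plant A", "to": "DC 7"})
append_block(chain, {"shipment": "SKU-1042", "from": "DC 7", "to": "store 33"})
print(verify(chain))                  # True
chain[0]["record"]["to"] = "DC 9"     # tamper with an early record
print(verify(chain))                  # False: the chain detects the change
```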
Contributors: Krukar, Joel Michael (Author) / Oke, Adegoke (Thesis director) / Duarte, Brett (Committee member) / Hahn, Richard (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The Super Catalan numbers are a known family of numbers which have so far eluded a combinatorial interpretation. Several weighted interpretations have appeared since their discovery, one of which was found by William Kuszmaul in 2017. In this paper, we connect the weighted Super Catalan structure created previously by Kuszmaul with a natural $q$-analogue of the Super Catalan numbers. We do this by creating a statistic $\sigma$ for which the $q$-Super Catalan numbers satisfy $S_q(m,n)=\sum_X (-1)^{\mu(X)} q^{\sigma(X)}$. In doing so, we take a step towards finding a strict combinatorial interpretation for the Super Catalan numbers.
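A small sympy sketch of one natural $q$-analogue of the kind referenced above; the definition used here, replacing each factorial in $S(m,n)=\frac{(2m)!\,(2n)!}{m!\,n!\,(m+n)!}$ by a $q$-factorial, is an assumption for illustration, not necessarily the paper's. The classical Super Catalan numbers are recovered at $q=1$.

```python
import math
from sympy import symbols, cancel, Integer

q = symbols("q")

def q_factorial(k):
    # [k]_q! = product over j = 1..k of the q-integer [j]_q = 1 + q + ... + q^(j-1)
    result = Integer(1)
    for j in range(1, k + 1):
        result *= sum(q**i for i in range(j))
    return result

def super_catalan_q(m, n):
    # Assumed q-analogue: replace each factorial in S(m,n) by a q-factorial.
    num = q_factorial(2 * m) * q_factorial(2 * n)
    den = q_factorial(m) * q_factorial(n) * q_factorial(m + n)
    return cancel(num / den)  # simplifies to a polynomial in q

def super_catalan(m, n):
    # Classical S(m,n) = (2m)! (2n)! / (m! n! (m+n)!)
    return (math.factorial(2 * m) * math.factorial(2 * n)
            // (math.factorial(m) * math.factorial(n) * math.factorial(m + n)))

for m, n in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    poly = super_catalan_q(m, n)
    assert poly.subs(q, 1) == super_catalan(m, n)  # recovers S(m,n) at q = 1
    print(f"S_q({m},{n}) = {poly}")
```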
Contributors: House, John Douglas (Author) / Fishel, Susanna (Thesis director) / Childress, Nancy (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Construction is a defining characteristic of geometry classes. In a traditional classroom, teachers and students use physical tools (i.e., a compass and straightedge) in their constructions. However, with modern technology, construction is possible through the use of digital applications such as GeoGebra and Geometer’s SketchPad.
Many other studies have researched the benefits of digital manipulatives and digital environments through student completion of tasks and testing. This study instead examines students’ use of digital tools and manipulatives, along with their interactions with the digital environment. To this end, I conducted exploratory teaching experiments with two Calculus I students.
In the exploratory teaching experiments, students were introduced to a GeoGebra application developed by Fischer (2019), which includes instructional videos and corresponding quizzes, as well as exercises and interactive notepads, where students could use digital tools to construct line segments and circles (corresponding to the physical straight-edge and compass). The application built up the students’ foundational knowledge, culminating in the construction and verbal proof of Euclid’s Elements, Proposition 1 (Euclid, 1733).
The central findings of this thesis concern the students’ interactions with the digital environment, including observed changes in their conceptions of radii and circles and in their use of tools. The students were observed to hold conceptions of radii as a process, a geometric shape, and a geometric object. I observed the students’ conception of a circle change from a geometric shape to a geometric object, and with that change, their use of tools shift from a measuring focus to a property focus.
I report a summary of the students’ work, classifying their reasoning and actions into the above categories, along with an analysis of how the digital environment impacts the students’ conceptions. I also briefly discuss the implications of the findings for pedagogy and future research.
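A short numeric sketch (Python, standard library only) of the construction the students completed, Euclid's Elements, Proposition 1: intersect the two circles centered at a segment's endpoints with radius equal to the segment, and check that the resulting triangle is equilateral.

```python
import math

def euclid_prop_1(ax, ay, bx, by):
    """Apex of the equilateral triangle on segment AB (Euclid I.1)."""
    r = math.hypot(bx - ax, by - ay)          # compass opened to |AB|
    # The circles circle(A, r) and circle(B, r) intersect at the midpoint of
    # AB offset perpendicularly by the triangle's height r * sqrt(3) / 2.
    mx, my = (ax + bx) / 2, (ay + by) / 2
    h = r * math.sqrt(3) / 2
    ux, uy = (by - ay) / r, -(bx - ax) / r    # unit vector perpendicular to AB
    return mx + h * ux, my + h * uy

a, b = (0.0, 0.0), (4.0, 1.0)
c = euclid_prop_1(*a, *b)
sides = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
print([round(s, 9) for s in sides])  # all equal: the triangle is equilateral
```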
Contributors: Sakauye, Noelle Marie (Author) / Roh, Kyeong Hah (Thesis director) / Zandieh, Michelle (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05