Description
The health benefits of physical activity are widely accepted. Emerging research also indicates that sedentary behaviors can carry negative health consequences regardless of physical activity level. This dissertation comprised four projects that examined measurement properties of physical activity and sedentary behavior monitors. Project one identified the oxygen costs of four other-care activities in seventeen adults. Pushing a wheelchair and pushing a stroller were identified as moderate-intensity activities; minutes spent engaged in these activities therefore count toward meeting the 2008 Physical Activity Guidelines. Project two identified the oxygen costs of common cleaning activities in sixteen adults. Mopping a floor was identified as moderate-intensity physical activity, while cleaning a kitchen and cleaning a bathtub were identified as light-intensity physical activity; minutes spent mopping a floor therefore also count toward meeting the 2008 Physical Activity Guidelines. Project three evaluated differences in the number of minutes spent in activity levels when different epoch lengths are used in accelerometry. Shorter epoch lengths (1 second, 5 seconds) accumulated significantly more minutes of sedentary behavior than a longer epoch length (60 seconds), while the longer epoch length identified significantly more time engaged in light-intensity activities than the shorter epoch lengths. Future research needs to account for epoch length selection when conducting physical activity and sedentary behavior assessments. Project four investigated the accuracy of four activity monitors in assessing activities that were either sedentary behaviors or light-intensity physical activities. The ActiGraph GT3X+ assessed the activities least accurately, while the SenseWear Armband and ActivPAL assessed them with equal accuracy. The monitor used to assess physical activity and sedentary behaviors may thus influence the accuracy with which a construct is measured.
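Project three's epoch-length effect can be shown mechanically. The sketch below is illustrative only (not the dissertation's code): the per-second counts and the 100 counts/min cutpoint are hypothetical values chosen to expose the mechanism by which short epochs capture brief rest pauses that a 60-second epoch averages away.

```python
# Illustrative sketch (not the dissertation's code) of how epoch length changes
# accumulated sedentary time. Counts and the 100 counts/min cutpoint are
# hypothetical, chosen only to show the mechanism.

def sedentary_seconds(counts_per_sec, epoch_len, cutpoint_per_min=100):
    """Seconds falling in epochs whose count rate sits below the cutpoint."""
    total = 0
    for i in range(0, len(counts_per_sec) - epoch_len + 1, epoch_len):
        epoch = counts_per_sec[i:i + epoch_len]
        rate_per_min = sum(epoch) * 60 / epoch_len   # counts per minute
        if rate_per_min < cutpoint_per_min:
            total += epoch_len
    return total

# Two minutes of wear: a 30 s rest pause embedded in light activity.
counts = [0] * 30 + [6] * 90          # counts per second (hypothetical)
short = sedentary_seconds(counts, epoch_len=1)    # 1 s epochs see the pause
long_ = sedentary_seconds(counts, epoch_len=60)   # 60 s epochs average it away
```

With 1-second epochs the 30-second pause is counted as sedentary time; with 60-second epochs it is absorbed into an "active" minute, mirroring the direction of the reported finding.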
ContributorsMeckes, Nathanael (Author) / Ainsworth, Barbara E (Thesis advisor) / Belyea, Michael (Committee member) / Buman, Matthew (Committee member) / Gaesser, Glenn (Committee member) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Arizona State University (Publisher)
Created2012
Description
ABSTRACT

This study examines validity evidence for a state policy-directed teacher evaluation system implemented in Arizona during the 2012-2013 school year. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers play in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of the metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives with facets implicit in the evaluation framework. Validity examinations include the assembly of criterion, content, reliability, consequential, and construct-articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) Weak associations between VAM and PP metrics; (2) Unstable VAM measures across time and between tested content areas; (3) Less than adequate scale reliabilities; (4) Lack of coherence between theorized and empirical PP factor structures; (5) Omission or underrepresentation of important instructional attributes and effects; (6) Stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) Negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) Concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) A general lack of empowerment and the exclusion of educators from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
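For readers unfamiliar with the reliability index behind finding (3), here is a minimal sketch of Cronbach's alpha, one common scale-reliability estimate of the kind such validity examinations report. The item scores below are invented toy data, not the study's data.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Toy data only; not drawn from the study described above.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents."""
    k = len(items)
    def pvar(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(resp) for resp in zip(*items)]
    return (k / (k - 1)) * (1 - sum(pvar(it) for it in items) / pvar(totals))

perfect = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]   # items move in lockstep -> 1.0
noisy = [[1, 2, 3], [2, 1, 3]]                # items partly disagree -> lower
```

Scales whose alpha falls below conventional thresholds (often .70) are the "less than adequate" case the finding refers to.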
ContributorsSloat, Edward F. (Author) / Wetzel, Keith (Thesis advisor) / Amrein-Beardsley, Audrey (Thesis advisor) / Ewbank, Ann (Committee member) / Shough, Lori (Committee member) / Arizona State University (Publisher)
Created2015
Description
Dimensional metrology is the branch of science that determines length, angular, and geometric relationships within manufactured parts and compares them with required tolerances. The measurements can be made using either manual methods or sampled coordinate metrology (coordinate measuring machines, or CMMs). Manual measurement methods have long been practiced and are well accepted in industry, but they are too slow for present-day manufacturing. CMMs, on the other hand, are relatively fast, but their methods are not yet well established. The major problem that needs to be addressed is the type of feature-fitting algorithm used for evaluating tolerances. In a CMM, applying different feature-fitting algorithms to the same feature gives different values, and no standard prescribes which feature-fitting algorithm to use for a specific tolerance. Our research focuses on identifying the feature-fitting algorithm best suited to each type of tolerance. Each algorithm is selected as the one that best represents the interpretation of geometric control as defined by the ASME Y14.5 standard and the manual methods used to measure that tolerance type. Using these algorithms, normative procedures for verifying tolerances with CMMs are proposed. The proposed normative procedures are implemented as software, and the procedures are then verified by comparing the software's results with those of manual measurements.

To aid this research, a library of feature-fitting algorithms was developed in parallel. The library consists of least-squares, Chebyshev, and one-sided fits applied to line, plane, circle, and cylinder features. The proposed normative procedures are useful for evaluating tolerances in CMMs: the evaluated results accord with the standard, the ambiguity in choosing among algorithms is removed, and the software developed can be used for inspection purposes in quality control.
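To make the least-squares side of such a library concrete, here is a hedged sketch of one generic textbook method, the algebraic (Kåsa) least-squares circle fit, which is linear in its parameters. It is not necessarily the implementation in the library described above.

```python
# Algebraic (Kåsa) least-squares circle fit: rewrite the circle equation as
# x^2 + y^2 = a*x + b*y + c, which is linear in (a, b, c), and solve by
# least squares. Center = (a/2, b/2), radius = sqrt(c + (a/2)^2 + (b/2)^2).
import numpy as np

def fit_circle_ls(points):
    """Fit a circle to 2-D probe points in the least-squares sense."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    center = (a / 2.0, b / 2.0)
    radius = float(np.sqrt(c + center[0] ** 2 + center[1] ** 2))
    return center, radius

# Probe points sampled exactly on a circle of center (1, 2) and radius 3.
center, radius = fit_circle_ls([(4, 2), (1, 5), (-2, 2), (1, -1)])
```

A Chebyshev fit, by contrast, minimizes the maximum deviation rather than the sum of squares, which is exactly why two fitting algorithms can report different values for the same probed feature.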
ContributorsVemulapalli, Prabath (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Takahashi, Timothy (Committee member) / Arizona State University (Publisher)
Created2014
Description
This thesis research attempts to observe, measure, and visualize the communication patterns among developers of an open source community and to analyze what they reveal about the progress of that open source project. I analyzed the Ubuntu open source project's email data (9 subproject log archives spanning five years), focusing on drawing more precise metrics from different perspectives on the communication data. I also addressed the scalability issue by using the Apache Pig libraries, which run on a Hadoop cluster's MapReduce framework. I describe four metrics on which the observation and analysis are based, and present results showing the patterns and anomalies needed to better understand and interpret the communication. I also describe the experience of using Pig Latin (the scripting language of the Apache Pig libraries) in this research, and how it brought scalability, simplicity, and visibility to this data-intensive work. These approaches are useful in project monitoring, to augment human observation and reporting, and in social network analysis, to track individual contributions.
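As a toy illustration of the kind of communication metric described, the sketch below is a plain-Python analogue (hypothetical; not the thesis's Pig scripts) of the GROUP ... BY aggregation Pig Latin expresses: messages per developer per month from (sender, date) records of a mailing-list archive.

```python
# Toy analogue of a Pig Latin GROUP ... BY over mailing-list records.
# Records and names are invented for illustration.
from collections import Counter

def messages_per_dev_month(records):
    """records: iterable of (sender, 'YYYY-MM-DD') pairs."""
    return Counter((sender, date[:7]) for sender, date in records)

log = [("alice", "2010-01-03"), ("alice", "2010-01-20"), ("bob", "2010-02-01")]
activity = messages_per_dev_month(log)
```

On real archives the same grouping runs as a distributed MapReduce job under Pig, which is where the scalability benefit comes from.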
ContributorsMotamarri, Lakshminarayana (Author) / Santanam, Raghu (Thesis advisor) / Ye, Jieping (Thesis advisor) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2011
Description
In a properly controlled environment, such as industrial metrology, it is possible to push past the usual constraints on image-based position measurement using image-registration techniques and achieve repeatable feature measurements on the order of 0.3% of a pixel, roughly an order of magnitude better than conventional real-world performance. These measurements are then used as inputs to a model-optimal, model-agnostic smoothing procedure for calibrating a laser scribe and for online tracking of a velocimeter from video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Applying a suitable negative offset to the template function produces a convolution with higher local curvature than either the template or the target function, which improves center-finding. Using the Akaike Information Criterion with a smoothing-spline function, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An empirical derivation of the parameters of a rudimentary Kalman filter from these results is then provided and tested. Using the techniques of exploratory data analysis and the "Formulize" genetic-algorithm tool to convert the spline models into more accessible analytic forms yielded a stable, properly generalized Kalman filter whose performance and simplicity exceed "textbook" implementations. Validation showed that, in the analytic case, the method yields arbitrary precision in locating a feature; in a reasonable test case it achieved a consistent maximum error of about 0.3% of a pixel's length; and in practice, with pixels 700 nm in size, feature position was located to within ± 2 nm.
Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter.
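The curvature-based center-finding idea can be illustrated with a standard parabolic refinement of a discrete correlation peak, a generic image-registration technique (not the dissertation's code): fit a parabola through the peak sample and its two neighbors, and take the parabola's vertex as the sub-pixel position.

```python
# Parabolic sub-pixel refinement of a discrete peak. Generic technique,
# shown on an invented correlation curve.

def subpixel_peak(y, i):
    """Refine the peak at integer index i by fitting a parabola through
    y[i-1], y[i], y[i+1]; sharper local curvature tightens the estimate."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# Samples of a correlation curve whose true maximum lies at x = 2.3.
y = [-(x - 2.3) ** 2 for x in range(5)]
peak = subpixel_peak(y, max(range(5), key=lambda i: y[i]))
```

For an exactly parabolic peak the refinement is exact; for real correlation surfaces, higher local curvature around the peak reduces the sensitivity of the vertex estimate to sample noise, which is the effect the negative template offset exploits.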
ContributorsMunroe, Michael R (Author) / Phelan, Patrick (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Arizona State University (Publisher)
Created2012
Description
This research addressed concerns regarding the measurement of cyberbullying and aimed to develop a reliable and valid measure of cyberbullying perpetration and victimization. Despite the growing body of literature on cyberbullying, several measurement concerns were identified and addressed in two pilot studies. These concerns included the most appropriate time frame for behavioral recall, the use of the term "cyberbullying" in questionnaire instructions, whether to refer to power in instances of cyberbullying, and best practices for designing self-report measures that reflect how young adults understand and communicate about cyberbullying. Mixed methodology was employed in the two pilot studies to address these concerns and to determine how best to design a measure that participants could respond to accurately and honestly. Pilot study one was an experimental examination of the effects of recall time frame and use of the term on the outcomes of honesty, accuracy, and social desirability. Pilot study two was a qualitative examination of several measurement concerns through focus groups held with young adults. Results suggested using one academic year as the time frame for behavioral recall, avoiding the term "cyberbullying" in questionnaire instructions, and including references to power, along with other methodological refinements for the main study to bolster participants' attention. These findings informed the development of a final measure in the main study that aimed to be both practical in its ability to capture prevalence and precise in its ability to measure frequency. The main study examined the psychometric properties, reliability, and validity of the final measure. Results indicated that the final measure exhibited qualities of an index and was assessed as such. Structural equation modeling techniques and test-retest procedures indicated the measure had good reliability, and good predictive validity and satisfactory convergent validity were established. Results derived from the measure concerning prevalence, frequency, and chronicity are presented within the scope of findings in the cyberbullying literature. Implications for practice and future directions for research with the measure developed here are discussed.
ContributorsSavage, Matthew (Author) / Roberto, Anthony J (Thesis advisor) / Palazzolo, Kellie E (Committee member) / Thompson, Marilyn S (Committee member) / Arizona State University (Publisher)
Created2012
Description
Two-dimensional vision-based measurement is an ideal choice for measuring small or fragile parts that could be damaged by conventional contact measurement methods. Such systems, however, can be quite expensive, putting the technology out of reach of inventors and others. The vision-based measurement tool designed in this thesis is a low-cost alternative that can be built for less than US$500 from off-the-shelf parts and free software. The design is based on the USB microscope. The USB microscope was once considered a toy, much like the telescopes and microscopes of the 17th century, but has recently found applications in industry, laboratories, and schools. Converting the USB microscope into a measurement tool required research in the following areas: currently available vision-based measurement systems, machine vision technologies, microscope design, photographic methods, digital imaging, illumination, edge detection, and computer-aided drafting applications. The result was a two-dimensional vision-based measurement system that is extremely versatile, easy to use, and, best of all, inexpensive.
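The core arithmetic of such a measurement can be sketched in a few lines: calibrate a scale factor from a reference of known size, detect two edges in an intensity profile by threshold crossing, and convert the pixel span to millimeters. This is a hedged illustration, not the thesis's software; all numbers below are invented.

```python
# Minimal 2-D vision-measurement arithmetic: calibration + edge detection.
# Invented data; not the thesis's implementation.

def edge_positions(profile, threshold):
    """Indices where intensity crosses the threshold (rising or falling)."""
    return [i for i in range(1, len(profile))
            if (profile[i - 1] < threshold) != (profile[i] < threshold)]

def measure_mm(profile, threshold, mm_per_px):
    """Distance between the outermost detected edges, in millimeters."""
    edges = edge_positions(profile, threshold)
    return (edges[-1] - edges[0]) * mm_per_px

# Calibration: a 10.00 mm gauge block spans 400 px -> 0.025 mm per pixel.
mm_per_px = 10.0 / 400
profile = [200] * 50 + [20] * 120 + [200] * 50   # dark part on a bright stage
width = measure_mm(profile, threshold=110, mm_per_px=mm_per_px)
```

Real systems refine this with sub-pixel edge interpolation and careful illumination, but the calibrate-then-convert structure is the same.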
ContributorsGraham, Linda L. (Author) / Biekert, Russell (Thesis advisor) / Macia, Narciso (Committee member) / Meitz, Robert (Committee member) / Arizona State University (Publisher)
Created2011
Description
Modern systems that measure dynamical phenomena often limit how many sensors can operate at any given time step. This thesis considers a sensor scheduling problem in which the source of a diffusive phenomenon is to be localized using single-point measurements of its concentration. With a linear diffusion model, and in the absence of noise, classical observability theory describes whether the system's initial state can be deduced from a given set of linear measurements; it does not, however, describe to what degree the system is observable. Different metrics of observability have been proposed in the literature to address this issue, many of them based on choosing optimal or sub-optimal sensor schedules from a predetermined collection of possibilities. This thesis proposes two greedy algorithms for one-dimensional and two-dimensional discrete diffusion processes. The first algorithm considers a deterministic linear dynamical system with deterministic linear measurements. The second considers noise on the measurements and is compared to a Kalman filter scheduling method described in published work.
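A minimal sketch of a greedy schedule of the general kind described: at each time step, pick the single point sensor that most increases the log-determinant of a (regularized) observability Gramian, one common degree-of-observability metric. The 3-node diffusion chain, sensor set, and regularization constant below are invented for illustration; this is not the thesis's algorithm.

```python
# Greedy one-sensor-per-step scheduling by log-det of the observability
# Gramian W = sum_k Phi_k^T c^T c Phi_k. Invented example system.
import numpy as np

def greedy_schedule(A, sensor_rows, horizon, eps=1e-6):
    n = A.shape[0]
    W = eps * np.eye(n)          # regularized Gramian (keeps log-det finite)
    Phi = np.eye(n)              # state-transition matrix up to this step
    schedule = []
    for _ in range(horizon):
        # Gain of sensor c at this step: rank-one update v^T v, v = c @ Phi.
        gains = []
        for c in sensor_rows:
            v = (c @ Phi).reshape(1, n)
            gains.append(np.linalg.slogdet(W + v.T @ v)[1])
        best = int(np.argmax(gains))
        v = (sensor_rows[best] @ Phi).reshape(1, n)
        W = W + v.T @ v
        schedule.append(best)
        Phi = A @ Phi
    return schedule, W

# 1-D diffusion on a 3-node chain; one point concentration sensor per step.
A = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.8]])
sensors = [np.eye(3)[i] for i in range(3)]
schedule, W = greedy_schedule(A, sensors, horizon=3)
```

The greedy rule is myopic (it never revisits earlier choices), which is what makes it cheap relative to searching all sensor sequences.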
ContributorsNajam, Anbar (Author) / Cochran, Douglas (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Chao (Committee member) / Arizona State University (Publisher)
Created2016
Description
Self-control has been shown to predict both health-risk and health-protective outcomes. Although top-down or "good" self-control is typically examined as a unidimensional construct, research on "poor" self-control suggests that multiple dimensions may be necessary to capture its aspects. The current study sought to create a new brief survey measure of top-down self-control that differentiates between self-control capacity, internal motivation, and external motivation. Items were adapted from the Brief Self-Control Scale (BSCS; Tangney, Baumeister, & Boone, 2004) and were administered through two online surveys to 347 undergraduate students enrolled in introductory psychology courses at Arizona State University. The Self-Control Motivation and Capacity Survey (SCMCS) showed strong evidence of validity and reliability. Exploratory and confirmatory factor analyses supported a 3-factor structure of the scale consistent with the underlying theoretical model. The final 15-item measure demonstrated excellent model fit: chi-square = 89.722, p = .077; CFI = .989; RMSEA = .032; SRMR = .045. Despite several limitations, including the cross-sectional nature of most analyses, self-control capacity, internal motivation, and external motivation related uniquely to various self-reported behavioral outcomes and accounted for variance beyond that explained by the BSCS. Future studies are needed to establish the stability of multiple dimensions of self-control and to develop state-like and domain-specific measures of self-control. While more research in this area is needed, the current study demonstrates the importance of studying multiple aspects of top-down self-control and may ultimately facilitate tailoring interventions to individuals' unique profiles of self-control capacity and motivation.
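One of the fit indices reported above, RMSEA, is a direct function of the chi-square statistic. Here is a minimal sketch of the standard point estimate; the degrees of freedom and sample size in the example are hypothetical, not the study's values.

```python
# RMSEA point estimate from a model chi-square:
#   RMSEA = sqrt(max(chisq - df, 0) / (df * (n - 1)))
# Example inputs are hypothetical.
import math

def rmsea(chisq, df, n):
    """Root Mean Square Error of Approximation for a fitted model."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

value = rmsea(100.0, 50, 201)   # hypothetical chi-square, df, and sample size
```

When chi-square does not exceed its degrees of freedom, RMSEA is zero, which is why well-fitting models with small chi-square report very low values.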
ContributorsPapova, Anna (Author) / Corbin, William R. (Thesis advisor) / Karoly, Paul (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created2016
Description
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Information about the mediating processes can be used to make interventions more powerful by enhancing successful program components and by dropping components that do not significantly change the outcome. Identifying mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets, because the general definition of the construct and its facets might relate differently to external criteria. Current methods, however, do not allow researchers to study the relationships of both the general aspect and the specific facets of a construct to an external criterion simultaneously. This study proposes a bifactor measurement model for the mediating construct as a way to represent the general aspect and specific facets simultaneously. Monte Carlo simulation results are presented to help determine under what conditions researchers can detect the mediated effect when one facet of the mediating construct is the true mediator but the mediator is treated as unidimensional. Results indicate that parameter bias and detection of the mediated effect depend on the facet variance represented in the mediation model. This study contributes to the largely unexplored area of measurement issues in statistical mediation analysis.
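The single-mediator model underlying such studies estimates the mediated (indirect) effect as the product a*b of the X-to-M and M-to-Y paths. The tiny simulation below sketches that idea; the effect sizes and noise levels are invented, and because the simulated model has no direct X-to-Y path, simple-regression slopes estimate both paths consistently (in general, Y would be regressed on both M and X).

```python
# Toy Monte Carlo sketch of the single-mediator model: X -> M -> Y.
# Invented parameters; not the study's simulation design.
import random

def slope(x, y):
    """Simple-regression (OLS) slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

random.seed(0)
n, a, b = 20000, 0.5, 0.7
x = [random.gauss(0, 1) for _ in range(n)]
m = [a * xi + random.gauss(0, 0.5) for xi in x]    # X -> M path (a)
y = [b * mi + random.gauss(0, 0.5) for mi in m]    # M -> Y path (b)
indirect = slope(x, m) * slope(m, y)               # estimate of a*b = 0.35
```

A bifactor extension of this model would let M carry both a general factor and specific facets, so the indirect effect can be attributed to the facet that truly transmits it.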
ContributorsGonzález, Oscar (Author) / Mackinnon, David P (Thesis advisor) / Grimm, Kevin J. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created2016