Matching Items (19)

Self-control motivation and capacity scale: a new measure of multiple facets of self-control

Description

Self-control has been shown to predict both health risk and health protective outcomes. Although top-down or “good” self-control is typically examined as a unidimensional construct, research on “poor” self-control suggests that multiple dimensions may be necessary to capture aspects of self-control. The current study sought to create a new brief survey measure of top-down self-control that differentiates between self-control capacity, internal motivation, and external motivation. Items were adapted from the Brief Self-Control Scale (BSCS; Tangney, Baumeister, & Boone, 2004) and were administered through two online surveys to 347 undergraduate students enrolled in introductory psychology courses at Arizona State University. The Self-Control Motivation and Capacity Survey (SCMCS) showed strong evidence of validity and reliability. Exploratory and confirmatory factor analyses supported a 3-factor structure of the scale consistent with the underlying theoretical model. The final 15-item measure demonstrated excellent model fit, chi-square = 89.722, p = .077, CFI = .989, RMSEA = .032, SRMR = .045. Despite several limitations, including the cross-sectional nature of most analyses, self-control capacity, internal motivation, and external motivation were uniquely related to various self-reported behavioral outcomes and accounted for additional variance beyond that explained by the BSCS. Future studies are needed to establish the stability of multiple dimensions of self-control and to develop state-like and domain-specific measures of self-control. While more research in this area is needed, the current study demonstrates the importance of studying multiple aspects of top-down self-control and may ultimately facilitate the tailoring of interventions to the needs of individuals based on unique profiles of self-control capacity and motivation.
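
As a rough illustration of the kind of factor analysis described above, the sketch below runs a three-factor exploratory factor analysis in Python with the factor_analyzer package. The file name, item columns, and factor labels are hypothetical stand-ins, not the study's data or code.

```python
# Minimal sketch (not the study's code): exploratory factor analysis of a
# hypothetical 15-item self-control survey using the factor_analyzer package.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# items.csv is a hypothetical file with one column per survey item (item_01..item_15).
items = pd.read_csv("items.csv")

# Extract three correlated factors (capacity, internal motivation, external motivation).
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["capacity", "internal_motivation", "external_motivation"])
print(loadings.round(2))
print("Proportion of variance per factor:", fa.get_factor_variance()[1].round(3))
```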

Date Created
2016

Developing a measure of cyberbullying perpetration and victimization

Description

This research addressed concerns regarding the measurement of cyberbullying and aimed to develop a reliable and valid measure of cyberbullying perpetration and victimization. Despite the growing body of literature on cyberbullying, several measurement concerns were identified and addressed in two pilot studies. These concerns included the most appropriate time frame for behavioral recall, use of the term "cyberbullying" in questionnaire instructions, whether to refer to power in instances of cyberbullying, and best practices for designing self-report measures to reflect how young adults understand and communicate about cyberbullying. Mixed methodology was employed in two pilot studies to address these concerns and to determine how to best design a measure that participants could respond to accurately and honestly. Pilot study one was an experimental examination of the effects of recall time frame and use of the term on honesty, accuracy, and social desirability. Pilot study two involved a qualitative examination of several measurement concerns through focus groups held with young adults. Results suggested using one academic year as the time frame for behavioral recall, avoiding the term "cyberbullying" in questionnaire instructions, including references to power, and making other refinements to the main-study method to bolster participants' attention. These findings informed the development of a final measure in the main study, which aimed to be both practical in its ability to capture prevalence and precise in its ability to measure frequency. The main study involved examining the psychometric properties, reliability, and validity of the final measure. Results of the main study indicated that the final measure exhibited qualities of an index and was assessed as such. Further, structural equation modeling techniques and test-retest procedures indicated the measure had good reliability, and good predictive validity and satisfactory convergent validity were established for the final measure. Results derived from the measure concerning prevalence, frequency, and chronicity are presented within the scope of findings in the cyberbullying literature. Implications for practice and future directions for research with the measure developed here are discussed.
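
One piece of the reliability evidence mentioned above is test-retest consistency. The sketch below shows, with entirely hypothetical files and column names, how a test-retest correlation for a perpetration score could be computed in Python; it illustrates the concept rather than the study's actual procedure.

```python
# Minimal sketch (hypothetical data, not the study's): test-retest reliability of
# a cyberbullying perpetration score as the correlation between two administrations.
import pandas as pd
from scipy.stats import pearsonr

# wave1.csv / wave2.csv are hypothetical files keyed by participant id, with one
# column per behavior item answered on a frequency scale.
wave1 = pd.read_csv("wave1.csv", index_col="participant_id")
wave2 = pd.read_csv("wave2.csv", index_col="participant_id")

# Total perpetration score per participant at each time point, aligned on id.
total1 = wave1.filter(like="perp_").sum(axis=1)
total2 = wave2.filter(like="perp_").sum(axis=1)
common = total1.index.intersection(total2.index)

r, p = pearsonr(total1.loc[common], total2.loc[common])
print(f"Test-retest r = {r:.2f} (p = {p:.3f}) over {len(common)} participants")
```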

Date Created
2012

Large scale analytical insights of email communication patterns

Description

This thesis research attempts to observe, measure, and visualize the communication patterns among developers of an open source community and to analyze what those patterns indicate about the progress of the open source project. I analyzed the Ubuntu open source project's email data (9 subproject log archives over a period of five years) and focused on drawing more precise metrics from different perspectives of the communication data. I also addressed the scalability issue by using the Apache Pig libraries, which run on a Hadoop cluster based on the MapReduce framework. I describe four metrics used to observe and analyze the data and present results that show patterns and anomalies that help in understanding and interpreting the communication. I also describe the experience of using Pig Latin (the scripting language of the Apache Pig libraries) for this research and how it brought scalability, simplicity, and visibility to this data-intensive work. These approaches are useful in project monitoring to augment human observation and reporting, and in social network analysis to track individual contributions.
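
The thesis ran its analyses as Pig Latin scripts over the full mailing-list archives on a Hadoop cluster; the sketch below is a deliberately simplified, single-machine Python analogue of one such metric (message volume per sender per month). The file name and column names are hypothetical.

```python
# Hypothetical, simplified analogue in pandas of the kind of metric the thesis
# computed at scale with Pig Latin on Hadoop: message volume per sender per month.
import pandas as pd

# emails.csv is an illustrative flat export of a mailing-list archive with at
# least 'sender', 'date', and 'list' (subproject) columns.
emails = pd.read_csv("emails.csv", parse_dates=["date"])

monthly = (emails
           .assign(month=emails["date"].dt.to_period("M"))
           .groupby(["list", "month", "sender"])
           .size()
           .rename("messages")
           .reset_index())

# Top contributors per subproject: one simple view of communication patterns.
top = (monthly.groupby(["list", "sender"])["messages"].sum()
              .groupby(level="list", group_keys=False).nlargest(5))
print(top)
```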

Date Created
2011

Repurposing technology: an innovative low cost two-dimensional noncontact measurement tool

Description

Two-dimensional vision-based measurement is an ideal choice for measuring small or fragile parts that could be damaged using conventional contact measurement methods. Two-dimensional vision-based measurement systems can be quite expensive, putting the technology out of reach of inventors and others. The vision-based measurement tool design developed in this thesis is a low-cost alternative that can be made for less than US$500 from off-the-shelf parts and free software. The design is based on the USB microscope. The USB microscope was once considered a toy, similar to the telescopes and microscopes of the 17th century, but has recently started finding applications in industry, laboratories, and schools. In order to convert the USB microscope into a measurement tool, research in the following areas was necessary: currently available vision-based measurement systems, machine vision technologies, microscope design, photographic methods, digital imaging, illumination, edge detection, and computer-aided drafting applications. The result of the research was a two-dimensional vision-based measurement system that is extremely versatile, easy to use, and, best of all, inexpensive.
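
To give a concrete feel for how such a noncontact measurement might work, the sketch below uses OpenCV to estimate a part width from a USB-microscope image after a pixels-per-millimeter calibration. The file name, calibration value, and thresholds are hypothetical; this is a generic machine-vision illustration, not the thesis design.

```python
# Minimal sketch (illustrative, not the thesis design): measuring a part width
# from a USB-microscope image using OpenCV, after calibrating pixels-per-mm
# with an image of a known reference scale.
import cv2

PIXELS_PER_MM = 42.0  # hypothetical value obtained by imaging a calibrated ruler

img = cv2.imread("part.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Take the largest external contour as the part outline and measure its width.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(part)
print(f"Measured width: {w / PIXELS_PER_MM:.3f} mm")
```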

Date Created
2011

Measurement and analysis of ergonomic loads on mechanical system installers

Description

Construction work is ergonomically hazardous, as it requires numerous awkward postures, heavy lifting, and other forceful exertions. Prolonged repetition and overexertion have a cumulative effect on workers, often resulting in work-related musculoskeletal disorders (WMSDs). The United States spends approximately $850 billion a year on WMSDs. Mechanical installation workers experience serious overexertion injuries at rates exceeding the national average for all industries and all construction workers, and second only to laborers. The main contributing factors of WMSDs are ergonomic loads and extreme stresses due to incorrect postures. The motivation for this study is to reduce WMSDs among mechanical system (HVAC system) installation workers. To achieve this goal, it is critical to reduce the ergonomic loads and extreme postures of these installers. This study has the following specific aims: (1) to measure the ergonomic loads on specific body regions (shoulders, back, neck, and legs) for different HVAC installation activities; and (2) to investigate how different activity parameters (material characteristics, equipment, workers, etc.) affect the severity and duration of ergonomic demands. The study focuses on the following activities: (1) layout, (2) ground assembly of ductwork, and (3) installation of duct and equipment at ceiling height using different methods. The researcher observed and analyzed 15 HVAC installation activities among three Arizona mechanical contractors. Ergonomic analysis of the activities was performed using a postural guide developed from the RULA and REBA methods. The simultaneous analysis of the production tasks and the ergonomic loads identified the tasks with the highest postural loads for different body regions and the influence of the different work variables on extreme body postures. Based on this analysis, the results support recommendations to mitigate long-duration activities and exposure to extreme postures. These recommendations can potentially reduce risk, improve productivity, and lower injury costs in the long term.
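
The sketch below illustrates, with hypothetical data, one way observation records of this kind could be summarized: aggregating timed posture ratings by task and body region and flagging time spent in extreme postures. The file layout, column names, and score threshold are assumptions, not the study's postural guide.

```python
# Hypothetical sketch (not the study's data or scoring guide): summarizing
# observed postural ratings by task and body region to flag high-load tasks.
import pandas as pd

# observations.csv is illustrative: one row per timed observation with a task
# name, body region, a RULA/REBA-style posture score, and duration in seconds.
obs = pd.read_csv("observations.csv")

EXTREME_SCORE = 5  # hypothetical threshold for an "extreme" posture rating
obs["extreme_s"] = obs["duration_s"].where(obs["posture_score"] >= EXTREME_SCORE, 0)

summary = (obs.groupby(["task", "body_region"])
              .agg(total_min=("duration_s", "sum"), extreme_min=("extreme_s", "sum"))
              .div(60))  # convert seconds to minutes
summary["pct_time_extreme"] = 100 * summary["extreme_min"] / summary["total_min"]
print(summary.sort_values("pct_time_extreme", ascending=False).head(10))
```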

Date Created
2011

Application of methods in physical activity measurement

Description

It is broadly accepted that physical activity provides substantial health benefits. Despite strong evidence, approximately 60% to 95% of US adults are insufficiently active to obtain these health benefits. This dissertation explored five projects that examined the measurement properties and methodology for a variety of physical activity assessment methods. Project one identified validity evidence for the new MyWellness Key accelerometer in sixteen adults. The MyWellness Key demonstrated acceptable validity evidence when compared to a criterion accelerometer during graded treadmill walking and in free-living settings. This supports the use of the MyWellness Key accelerometer to measure physical activity. Project two evaluated validity (study 1) and test-retest reliability evidence (study 2) of the Global Physical Activity Questionnaire (GPAQ) in a two-part study. The GPAQ was compared to direct and indirect criterion measures, including objective and subjective physical activity instruments. These data provided preliminary validity and reliability evidence for the GPAQ that support its use to assess physical activity. Project three investigated the optimal number of hours per day (h·d⁻¹) of accelerometer wear time needed to assess daily physical activity. Using a semi-simulation approach, data from 124 participants were used to compare 10-13 h·d⁻¹ to the criterion 14 h·d⁻¹. This study suggested that a minimum accelerometer wear time of 13 h·d⁻¹ is needed to provide a valid measure of daily physical activity. Project four evaluated validity and reliability evidence of a novel method (Movement and Activity in Physical Space [MAPS] score) that combines accelerometer and GPS data to assess person-environment interactions. Seventy-five healthy adults wore an accelerometer and GPS receiver for three days to provide MAPS scores. This study provided evidence for use of a MAPS score for future research and clinical use. Project five used accelerometer data from 1,000 participants from the 2005-2006 National Health and Nutrition Examination Survey. A semi-simulation approach was used to assess the effect of accelerometer wear time (10-14 h·d⁻¹) on physical activity data. These data showed that wearing the accelerometer for 12 h·d⁻¹ or less may underestimate time spent at various intensities of physical activity.
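
Projects three and five rely on a semi-simulation of wear time: truncating each day's accelerometer record to shorter wear windows and comparing the resulting estimates against the full criterion window. The sketch below is a hypothetical, simplified version of that idea in Python; the file, column names, cut point, and 07:00 wear-start assumption are all illustrative, not the dissertation's protocol.

```python
# Illustrative semi-simulation sketch (not the dissertation's code): truncate
# minute-level accelerometer data to shorter daily wear windows and compare the
# resulting activity estimates against a 14 h/day criterion window.
import pandas as pd

MVPA_COUNTS = 1952   # hypothetical counts-per-minute cut point for moderate activity
WEAR_START_HOUR = 7  # assume each monitored day starts at 07:00

# minutes.csv is illustrative: one row per worn minute with participant, timestamp, counts.
mins = pd.read_csv("minutes.csv", parse_dates=["timestamp"])
mins["hours_worn"] = mins["timestamp"].dt.hour - WEAR_START_HOUR

def mean_daily_mvpa(df, wear_hours):
    """Mean MVPA minutes per participant-day when only `wear_hours` hours are kept."""
    win = df[df["hours_worn"] < wear_hours]
    is_mvpa = win["counts"] >= MVPA_COUNTS
    return is_mvpa.groupby([win["participant"], win["timestamp"].dt.date]).sum().mean()

criterion = mean_daily_mvpa(mins, 14)
for h in range(10, 14):
    est = mean_daily_mvpa(mins, h)
    print(f"{h} h/day window: {est:.1f} MVPA min/day ({100 * est / criterion:.0f}% of criterion)")
```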

Date Created
2011

Standardization of CMM algorithms and development of inspection maps for geometric tolerances

Description

The essence of this research is the reconciliation and standardization of feature fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature fitting algorithms available in the literature was performed, and then a case study was done to reverse-engineer the feature fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least squares technique are most commonly used for GD&T inspection, and that this incorrect choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on the results, a standardization of fitting algorithms is proposed in light of the definition provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, the standardized algorithms developed for substitute feature fitting are used to develop i-Maps for size, orientation, and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored. The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed. This also includes computation of basic SPC and process capability metrics.
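
For context on the least squares fitting that the case study found to dominate commercial CMM software, the sketch below fits a substitute plane to a point cloud by total least squares (SVD of the centered points) and reports the implied zone width. The data are synthetic and the code is a generic illustration, not the software examined in the study.

```python
# Minimal sketch (illustrative): a least-squares plane fit of the kind found to
# dominate commercial CMM software, using the SVD of centered measurement points.
import numpy as np

def fit_plane_least_squares(points):
    """Fit a plane to an (n, 3) array of CMM points; return (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal
    # of the best-fit (total least squares) plane through the centroid.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic nominally flat patch, 200 points with small out-of-plane deviations.
points = np.random.default_rng(0).normal(size=(200, 3)) * [10.0, 10.0, 0.02]
centroid, normal = fit_plane_least_squares(points)

# Signed distances of every point from the fitted plane; their range is the
# flatness-style zone width implied by this least squares substitute feature.
dist = (points - centroid) @ normal
print("Zone width from LS fit:", dist.max() - dist.min())
```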

Date Created
2011

Reconciling the differences between tolerance specification and measurement methods

Description

Dimensional metrology is the branch of science that determines length, angular, and geometric relationships within manufactured parts and compares them with required tolerances. The measurements can be made using either manual methods or sampled coordinate metrology (coordinate measuring machines). Manual measurement methods have been in practice for a long time and are well accepted in industry, but are slow for present-day manufacturing. On the other hand, CMMs are relatively fast, but these methods are not yet well established. The major problem that needs to be addressed is the type of feature fitting algorithm used for evaluating tolerances. In a CMM, applying different feature fitting algorithms to the same feature gives different values, and there is no standard that specifies which feature fitting algorithm is to be used for a specific tolerance. Our research is focused on identifying the feature fitting algorithm that is best used for each type of tolerance. Each algorithm is identified as the one that best represents the interpretation of geometric control as defined by the ASME Y14.5 standard and the manual methods used for the measurement of a specific tolerance type. Using these algorithms, normative procedures for CMMs are proposed for verifying tolerances. The proposed normative procedures are implemented as software. The procedures are then verified by comparing the results from the software with those of manual measurements.

To aid this research, a library of feature fitting algorithms was developed in parallel. The library consists of least squares, Chebyshev, and one-sided fits applied to line, plane, circle, and cylinder features. The proposed normative procedures are useful for evaluating tolerances in CMMs, and the results they produce are in accordance with the standard. The ambiguity in choosing the algorithms is thereby removed. The software developed can be used in quality control for inspection purposes.
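
As a rough illustration of one of the fit types named above, the sketch below poses a Chebyshev (minimax) fit of a 2D line to measured points as a small linear program solved with SciPy. The data are synthetic and the formulation is a textbook one offered under stated assumptions, not the thesis library's implementation.

```python
# Illustrative sketch (not the thesis library): a Chebyshev (minimax) fit of a
# 2D line y = a*x + b to measured points, posed as a linear program in (a, b, t)
# that minimizes the half-width t of the enclosing zone.
import numpy as np
from scipy.optimize import linprog

def chebyshev_line_fit(x, y):
    n = len(x)
    # Variables: [a, b, t]; minimize t subject to |y_i - (a*x_i + b)| <= t.
    c = np.array([0.0, 0.0, 1.0])
    A_ub = np.vstack([np.column_stack([x, np.ones(n), -np.ones(n)]),     #  a*x + b - t <=  y
                      np.column_stack([-x, -np.ones(n), -np.ones(n)])])  # -a*x - b - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    a, b, t = res.x
    return a, b, 2 * t  # slope, intercept, total zone width

x = np.linspace(0, 50, 25)
y = 0.02 * x + 1.0 + np.random.default_rng(1).normal(scale=0.005, size=x.size)
print(chebyshev_line_fit(x, y))
```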

Date Created
2014

Development and verification of a library of feature fitting algorithms for CMMs

Description

Conformance of a manufactured feature to the applied geometric tolerances is verified by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud and the results are assessed to see whether the fitted feature lies within the specified tolerance limits or not. Coordinate Measuring Machines (CMMs) use feature fitting algorithms that incorporate least squares estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This work addresses linear, planar, circular, and cylindrical features only. The set of algorithms described conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular, and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. The algorithms are divided into three major categories: least squares, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification. Secondary datums require one-sided constrained fits for their verification. For size and other tolerance verifications, both unconstrained and constrained fits are required.
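
The convex hull pruning step mentioned above can be illustrated directly with SciPy: only hull vertices can act as contact or extreme points for the one-sided and Chebyshev fits, so the rest of the point cloud can be discarded first. The sketch below uses synthetic data and illustrates the idea, not the verified library.

```python
# Illustrative sketch: using a convex hull to prune the measured point cloud to
# the candidate points that can govern minimum/maximum/Chebyshev fits of a
# planar feature. Data here are synthetic.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
points = rng.normal(size=(5000, 3)) * [20.0, 20.0, 0.05]  # a nominally flat patch

hull = ConvexHull(points)
candidates = points[hull.vertices]  # only hull vertices can be extreme contact points

print(f"Reduced {len(points)} measured points to {len(candidates)} hull vertices "
      f"before running the one-sided or Chebyshev fit.")
```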

Date Created
2014

Examining the validity of a state policy-directed framework for evaluating teacher instructional quality: informing policy, impacting practice

Description

This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during school year 2012-2013. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added model (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers have in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include assembly of criterion, content, reliability, consequential, and construct articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) Weak associations between VAM and PP metrics; (2) Unstable VAM measures across time and between tested content areas; (3) Less than adequate scale reliabilities; (4) Lack of coherence between theorized and empirical PP factor structures; (5) Omission/underrepresentation of important instructional attributes/effects; (6) Stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) Negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) Concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) A general lack of empowerment and educator exclusion from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
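
Finding (3) concerns scale reliability. As a purely illustrative aside, the sketch below computes Cronbach's alpha for a set of rubric item scores in Python; the file and columns are hypothetical, and this is the standard textbook formula rather than the study's analysis.

```python
# Illustrative sketch (hypothetical data): Cronbach's alpha for a professional
# practice rubric scale, the kind of reliability check referenced in finding (3).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per rubric element, one row per rated teacher."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# ratings.csv is a hypothetical export of observation rubric scores.
ratings = pd.read_csv("ratings.csv")
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```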

Date Created
2015