Matching Items (3)

Formulation of Logic Model & Evaluation Plan for CPLC Insurance Program: A Collaboration Between Chicanos Por La Causa and the C.A.R.E. Program at the T. Denny Sanford School of Social and Family Dynamics, Arizona State University

Description

There is widespread inequality in health care access and insured rates among the Latino, Spanish-speaking population in Arizona, resulting in poor health outcomes and economic burden. The passage of the Affordable Care Act in 2010 provided mechanisms to alleviate this disparity; however, many Latino communities lack accessible information and the means to gain access to health insurance enrollment. Chicanos Por La Causa (CPLC) is a community-based organization that provides many services to low-income communities across Arizona, one of which is the CPLC Insurance Program. In collaboration with the Community Action Research Experiences (CARE) program at Arizona State University, the program was studied to address its need for a logic model and evaluation plan to determine its effectiveness. Interviews with three executives within CPLC were conducted, in conjunction with a literature review, to determine the inputs, strategies, outputs, and outcomes of the logic model that drive CPLC Insurance's mission. Evaluation measures were then created to provide the quantitative data that can best show to what degree the program is achieving its goals. Specifically, the results identified the key outcomes that drive the logic model, and an evaluation plan designed to provide indicators of these outcomes was produced. The implications of this study are that the suggested data collection can verify how effectively the program's actions are creating positive change, as well as show where further improvements may be necessary to maximize effectiveness.

Date Created
2016-05

Evaluations in the City of Phoenix Head Start Agencies

Description

There is a serious need for early childhood intervention practices for children who are living at or below the poverty line. Since 1965, Head Start has provided a federally funded, free preschool program for children in this population. The City of Phoenix Head Start program consists of nine delegate agencies, seven of which reside in school districts. These agencies do not currently conduct local longitudinal evaluations of their preschool graduates. The purpose of this study was to recommend initial steps that the City of Phoenix grantee and the delegate agencies can take to begin a longitudinal evaluation process for their Head Start programs. Seven City of Phoenix Head Start agency directors were interviewed. These interviews provided information about the directors' attitudes toward longitudinal evaluations and about how Head Start already evaluates its programs through internal assessments. The researcher also took notes on the Third Grade Follow-Up to the Head Start Executive Summary in order to make recommendations to the City of Phoenix Head Start programs about best practices for longitudinal student evaluations.

Date Created
2014-05

Examining the validity of a state policy-directed framework for evaluating teacher instructional quality: informing policy, impacting practice

Description

This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during the 2012-2013 school year. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added model (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers play in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of the metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives with facets implicit in the evaluation framework. Validity examinations include the assembly of criterion, content, reliability, consequential, and construct articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) weak associations between VAM and PP metrics; (2) unstable VAM measures across time and between tested content areas; (3) less-than-adequate scale reliabilities; (4) lack of coherence between theorized and empirical PP factor structures; (5) omission or underrepresentation of important instructional attributes/effects; (6) stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) the general lack of empowerment and educator exclusion from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.

Date Created
2015