Matching Items (5)
Description
There is a serious need for early childhood intervention practices for children who are living at or below the poverty line. Since 1965, Head Start has provided a federally funded, free preschool program for children in this population. The City of Phoenix Head Start program consists of nine delegate agencies, seven of which reside in school districts. These agencies are currently not conducting local longitudinal evaluations of their preschool graduates. The purpose of this study was to recommend initial steps the City of Phoenix grantee and the delegate agencies can take to begin a longitudinal evaluation process for their Head Start programs. Seven City of Phoenix Head Start agency directors were interviewed. These interviews provided information about the directors' attitudes toward longitudinal evaluations and about how Head Start already evaluates its programs through internal assessments. The researcher also took notes on the Third Grade Follow-Up to the Head Start Executive Summary in order to make recommendations to the City of Phoenix Head Start programs about best practices for longitudinal student evaluations.
Created: 2014-05
Description
The purpose of this study was to provide a foundation for a plan to evaluate the impact of the Learning Center on elementary school children with respect to academic achievement and school-related behaviors. Exploratory pre- and posttest data were collected and analyzed, and recommendations were provided for a broader evaluation plan to be used in the future. The experience from the exploratory evaluation, its limitations, and the recommendations in this study can be used by Chicanos Por La Causa to strengthen the Learning Center and thereby optimize the benefit to the children served within the San Marina residential community.
Contributors: Lodhi, Osman Sultan (Author) / Roosa, Mark (Thesis director) / Dumka, Larry (Committee member) / Perez, Norma (Committee member) / Barrett, The Honors College (Contributor) / Department of Chemistry and Biochemistry (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own. To supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.
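As a rough illustration of the idea (not the dissertation's actual system), a semantic network can be stored as (subject, relation, object) triples, with relation-specific templates turning each triple into an open-response question and its key answer. The triples and templates below are invented for the sketch.

```python
# Hypothetical sketch: generating open-response biology questions from a
# tiny semantic network of (subject, relation, object) triples.
TRIPLES = [
    ("mitochondrion", "produces", "ATP"),
    ("ribosome", "synthesizes", "protein"),
    ("chloroplast", "performs", "photosynthesis"),
]

# One question template per relation type; {s} is filled with the subject.
TEMPLATES = {
    "produces": "What does the {s} produce?",
    "synthesizes": "What does the {s} synthesize?",
    "performs": "What process does the {s} perform?",
}

def generate_questions(triples):
    """Turn each triple into an open-response question with a key answer."""
    questions = []
    for s, rel, o in triples:
        template = TEMPLATES.get(rel)
        if template:
            questions.append({"question": template.format(s=s), "answer": o})
    return questions

qs = generate_questions(TRIPLES)
```

A real generator would also need answer-key variants and distractor knowledge, but the triple-plus-template pattern is the core of most semantic-network question generation.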

To boost students' learning experience, adaptive selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of each student's current competence so that a suitable question could be selected based on the student's previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with questions in a mal-adaptive order. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
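The standard Bayesian Knowledge Tracing update behind such embedded assessment can be sketched in a few lines: a Bayesian posterior over mastery given the observed response, followed by a learning transition. The parameter values here are illustrative defaults, not the study's fitted values.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step.

    p_know: prior probability the student has mastered the skill.
    correct: whether the observed answer was correct.
    Returns the updated mastery estimate after observation and learning.
    """
    if correct:
        # Posterior given a correct answer (could be knowledge or a lucky guess).
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # Posterior given an incorrect answer (could be a slip).
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: the student may acquire the skill on this step.
    return posterior + (1 - posterior) * p_learn

p = 0.3
p_after_correct = bkt_update(p, True)
p_after_wrong = bkt_update(p, False)
```

An adaptive selector would then pick the next question whose skill estimate sits in a target range (e.g., near the edge of mastery) rather than one already mastered or far too hard.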

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs for a textbook, the goal of the task was to map the related paragraphs to each question. An algorithm was developed whose performance was comparable to human annotators.
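A minimal baseline for this mapping task (not the algorithm developed in the dissertation) is lexical similarity: represent the question and each paragraph as bags of words and return the paragraph with the highest cosine similarity. The example texts are invented.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector over lowercase alphabetic tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_question(question, paragraphs):
    """Return the index of the paragraph most lexically similar to the question."""
    qv = vectorize(question)
    scores = [cosine(qv, vectorize(p)) for p in paragraphs]
    return max(range(len(paragraphs)), key=scores.__getitem__)

paragraphs = [
    "Mitosis divides a cell nucleus into two identical nuclei.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]
idx = map_question("How do plants convert light into chemical energy?", paragraphs)
```

Stronger approaches would add stemming, stop-word removal, TF-IDF weighting, or semantic embeddings, which is where an algorithm can approach human-annotator performance.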

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In an evaluation with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
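Difficulty and discrimination are standard classical-test-theory item statistics; a simple version of each can be computed directly from a response matrix. This is a generic sketch with made-up data, not the study's analysis: difficulty is the proportion answering an item correctly, and discrimination here is the upper-lower index (proportion correct in the top-scoring group minus the bottom-scoring group).

```python
def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (1 = correct)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, total_scores, top_frac=0.27):
    """Upper-lower discrimination index.

    Compares the proportion correct on this item between the top and
    bottom `top_frac` of examinees ranked by total test score.
    """
    ranked = sorted(range(len(responses)), key=lambda i: total_scores[i])
    k = max(1, int(len(responses) * top_frac))
    low, high = ranked[:k], ranked[-k:]
    p_high = sum(responses[i] for i in high) / k
    p_low = sum(responses[i] for i in low) / k
    return p_high - p_low

# Illustrative data: one item's responses for 10 examinees and their totals.
responses = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]
total_scores = [9, 8, 8, 7, 6, 5, 4, 3, 7, 2]
diff = item_difficulty(responses)
disc = item_discrimination(responses, total_scores)
```

A discrimination near 1.0 means the item cleanly separates strong from weak examinees; values near zero (or negative) flag a weak item or a misleading distractor.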
Contributors: Zhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
There is widespread inequality in health care access and insured rates among the Latino, Spanish-speaking population in Arizona, resulting in poor health measures and economic burden. The passage of the Affordable Care Act in 2010 provided mechanisms to alleviate this disparity; however, many Latino communities lack accessible information and means to gain access to health insurance enrollment. Chicanos Por La Causa (CPLC) is a community-based organization that provides many services to low-income communities across Arizona, one of which is the CPLC Insurance Program. In collaboration with the Community Action Research Experiences (CARE) program at Arizona State University, the program was studied to address its need for a logic model and an evaluation plan to determine its effectiveness. Interviews with three executives within CPLC were conducted in conjunction with a literature review to determine the inputs, strategies, outputs, and outcomes of the logic model that drive CPLC Insurance's mission. Evaluation measures were then created to provide the quantitative data that can best show to what degree the program is achieving its goals. Specifically, the results identified the key outcomes that drive the logic model, and an evaluation plan designed to provide indicators of these outcomes was produced. The implications of this study are that the suggested data collection can verify how effectively the program's actions are creating positive change, as well as show where further improvements may be necessary to maximize effectiveness.
Contributors: Cunningham, Matthew Lee (Author) / Fey, Richard (Thesis director) / Dumka, Larry (Committee member) / School of Molecular Sciences (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2016-05
Description
In natural language processing, language models have achieved remarkable success over the last few years, with Transformers at the core of most of these models. Their success can be attributed mainly to the enormous amounts of curated data they are trained on. Yet even models trained on massive curated data often lack the specific knowledge needed to understand and reason well, because relevant knowledge may be implicit in, or missing from, that data, which hampers machine reasoning. Moreover, manual knowledge curation is time-consuming and error-prone. Hence, finding fast and effective methods to extract such knowledge from data is important for improving language models, as is finding good ways to utilize that knowledge by incorporating it into the models. Successful knowledge extraction and integration in turn raise the question of knowledge evaluation: developing tools and challenging test suites that reveal these models' limitations and guide further improvement. To improve transformer-based models, then, understanding the role of knowledge becomes important.

In this dissertation, I study three broad research directions spanning the natural language, biomedical, and cybersecurity domains: (1) Knowledge Extraction (KX): how can transformer-based language models be leveraged to extract knowledge from data? (2) Knowledge Integration (KI): how can such specific knowledge be used to improve these models? (3) Knowledge Evaluation (KE): how can language models be evaluated for specific skills and their limitations understood? I propose methods to extract explicit textual, implicit structural, missing textual, and missing structural knowledge from natural language and binary programs using transformer-based language models. I develop ways to improve a language model's multi-step and commonsense reasoning abilities using external knowledge. Finally, I develop challenging datasets that assess models' numerical reasoning skills in both in-domain and out-of-domain settings.
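The knowledge-evaluation direction typically works by building targeted probe sets and scoring a model against gold answers. As a hedged, stdlib-only illustration (the facts and the stand-in "model" below are invented, not the dissertation's datasets or models), a cloze-style probe harness looks like this:

```python
# Each entry pairs a cloze prompt (with a [MASK] slot) and its gold answer.
FACTS = [
    ("Water boils at [MASK] degrees Celsius at sea level.", "100"),
    ("DNA is composed of [MASK] different nucleotide bases.", "four"),
]

def make_probe(facts):
    """Package facts as a probe set of prompt/gold-answer pairs."""
    return [{"prompt": text, "gold": answer} for text, answer in facts]

def accuracy(probe, predict):
    """Fraction of cloze prompts the prediction function fills correctly."""
    correct = sum(predict(item["prompt"]) == item["gold"] for item in probe)
    return correct / len(probe)

def toy_predict(prompt):
    # Stand-in for a real language model, just to exercise the harness.
    return "100" if "Celsius" in prompt else "four"

score = accuracy(make_probe(FACTS), toy_predict)
```

In practice the `predict` function would wrap an actual language model, and the probe set would be large and stratified (e.g., in-domain vs. out-of-domain numerical reasoning) so that accuracy differences expose specific skill gaps.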
Contributors: Pal, Kuntal Kumar (Author) / Baral, Chitta (Thesis advisor) / Wang, Ruoyu (Committee member) / Blanco, Eduardo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023