are constantly changing, and adapting to these changes in an academic curriculum
can be challenging. Given a specific aspect of a domain, there are various levels of
proficiency that students can achieve. Considering this wide array of needs,
diverse groups need customized course curricula. The need for an archetype
for designing a course around its outcomes paved the way for Outcome-based
Education (OBE). OBE focuses on the outcomes, as opposed to the traditional approach of
following a process [23]. According to D. Clark, the major reason for the creation of
Bloom’s taxonomy was to stimulate and inspire a higher quality of thinking
in academia, going beyond basic fact-learning and application to the evaluation
and analysis of facts and their applications [7]. The Instructional Module
Development System (IMODS) is the culmination of these two models, Bloom’s
Taxonomy and OBE. It is an open-source, web-based software tool
built on the principles of OBE and Bloom’s Taxonomy. It guides an instructor,
step by step, through an outcomes-based process as they define the learning
objectives and the content to be covered, and develop an instruction and assessment plan.
The tool also provides the user with a repository of techniques based on the level of
learning they choose while defining the objectives, which helps
maintain alignment among all the components of the course design. The tool
also generates documentation to support the course design and provides feedback
when the course is lacking in certain aspects.
It is not enough to come up with a model that theoretically facilitates
effective, result-oriented course design; there should be facts, experiments, and proof
that the model succeeds in achieving what it aims to achieve. Thus, there are two
research objectives of this thesis: (i) design a feature for course design feedback and
evaluate its effectiveness; (ii) evaluate the usefulness of a tool like IMODS on various
aspects: (a) the effectiveness of the tool in educating instructors on OBE; (b) the
effectiveness of the tool in providing appropriate and efficient pedagogy and
assessment techniques; (c) the effectiveness of the tool in building learning
objectives; (d) the effectiveness of the tool in document generation; (e) the usability of the
tool; (f) the effectiveness of OBE on course design and expected student outcomes.
The thesis presents a detailed algorithm for course design feedback, its pseudocode, a
description and proof of the correctness of the feature, the methods used to evaluate
the tool, the evaluation experiments, and an analysis of the obtained results.
To boost students’ learning experience, adaptive question selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of each student’s current competence, so that a suitable question could be selected based on the student’s previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a maladaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
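The embedded-assessment step above can be sketched with the standard Bayesian Knowledge Tracing update (Corbett and Anderson's formulation); the parameter values below are illustrative placeholders, not the ones fitted in the study:

```python
def bkt_update(p_mastery, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """One Bayesian Knowledge Tracing step: posterior over mastery given
    one observed response, followed by the learning-transition update.

    p_mastery: prior probability the student has mastered the skill
    correct:   whether the observed response was correct
    """
    if correct:
        # P(mastered | correct response)
        cond = (p_mastery * (1 - p_slip)) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        # P(mastered | incorrect response)
        cond = (p_mastery * p_slip) / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # Chance of transitioning to the mastered state after this opportunity
    return cond + (1 - cond) * p_transit
```

An adaptive selector would then run this update after every answer and pick the next question targeting the skill whose mastery estimate is lowest.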
To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs from a textbook, the goal of the task was to map each question to its related paragraphs. An algorithm was developed whose performance was comparable to that of human annotators.
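The abstract does not describe the mapping algorithm itself, so the following is only an illustrative baseline for the task: bag-of-words cosine similarity between each question and every paragraph, returning the indices of the top-scoring paragraphs. The tokenizer, similarity measure, and function names are assumptions, not the thesis's method:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and keep purely alphabetic tokens (simplistic on purpose)
    return [w for w in text.lower().split() if w.isalpha()]

def cosine(a, b):
    # Cosine similarity between two term-count vectors (Counters)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_questions(questions, paragraphs, top_k=1):
    # Map each question to the indices of its top_k most similar paragraphs
    para_vecs = [Counter(tokenize(p)) for p in paragraphs]
    mapping = {}
    for q in questions:
        qv = Counter(tokenize(q))
        ranked = sorted(range(len(paragraphs)),
                        key=lambda i: cosine(qv, para_vecs[i]),
                        reverse=True)
        mapping[q] = ranked[:top_k]
    return mapping
```

A production system would likely use stronger representations (TF-IDF weighting or learned embeddings), but the interface, questions and paragraphs in, paragraph indices out, stays the same.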
A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than open-response questions. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In a study with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very closely on all three measures.
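The three measures named above are standard item-analysis statistics. A minimal sketch of how they are conventionally computed (the exact procedures in the study may differ): difficulty as the proportion correct, discrimination as the point-biserial correlation between item correctness and total score, and distractor usefulness as the fraction of responses each incorrect option attracts:

```python
import math
import statistics
from collections import Counter

def item_difficulty(item_scores):
    # Proportion of participants who answered the item correctly (0/1 scores)
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores):
    # Point-biserial correlation between item correctness and total test score
    p = item_difficulty(item_scores)
    if p in (0.0, 1.0):
        return 0.0  # undefined when everyone (or no one) answers correctly
    correct = [t for s, t in zip(item_scores, total_scores) if s == 1]
    wrong = [t for s, t in zip(item_scores, total_scores) if s == 0]
    spread = statistics.pstdev(total_scores)
    return ((statistics.mean(correct) - statistics.mean(wrong)) / spread
            * math.sqrt(p * (1 - p)))

def distractor_usefulness(choices, correct_option):
    # Fraction of all responses attracted by each incorrect option
    n = len(choices)
    counts = Counter(c for c in choices if c != correct_option)
    return {opt: counts[opt] / n for opt in counts}
```

Higher discrimination means the item separates strong from weak test-takers; a distractor that attracts almost no responses is usually considered non-functional.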
Preventing heat-associated morbidity and mortality is a public health priority in Maricopa County, Arizona (United States). The objective of this project was to evaluate Maricopa County cooling centers and gain insight into their capacity to provide relief for the public during extreme heat events. During the summer of 2014, 53 cooling centers were evaluated to assess facility and visitor characteristics. Maricopa County staff collected data by directly observing daily operations and by surveying managers and visitors. The cooling centers in Maricopa County were often housed within community, senior, or religious centers, which offered various services for at least 1500 individuals daily. Many visitors were unemployed and/or homeless. Many learned about a cooling center by word of mouth or by having seen the cooling center’s location. The cooling centers provide a valuable service and reach some of the region’s most vulnerable populations. This project is among the first to systematically evaluate cooling centers from a public health perspective and provides helpful insight to community leaders who are implementing or improving their own network of cooling centers.