Matching Items (4)

Description
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own. To supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.

To improve students’ learning experience, adaptive question selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of each student’s current competence, so that a suitable question could be selected based on the student’s previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied the questions in a mal-adaptive order. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
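The abstract does not give the dissertation's exact parameterization, but the standard Bayesian Knowledge Tracing update it refers to can be sketched as follows (the parameter values here are illustrative, not the study's fitted values):

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One Bayesian Knowledge Tracing step.

    p_know    -- prior probability the student has mastered the skill
    correct   -- whether the student answered the last question correctly
    p_slip    -- probability of a wrong answer despite mastery
    p_guess   -- probability of a correct answer without mastery
    p_transit -- probability of learning the skill after this opportunity
    Returns the posterior probability of mastery after observing the answer.
    """
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Account for learning that may occur at this practice opportunity.
    return posterior + (1 - posterior) * p_transit
```

An adaptive selector of the kind described would run this update after every answer and pick the next question whose difficulty matches the current mastery estimate.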

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs from a textbook, the goal of the task was to map the related paragraphs to each question. An algorithm was developed whose performance was comparable to that of human annotators.
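The abstract does not describe the mapping algorithm itself; as an assumption, a common baseline for this kind of question-paragraph mapping is lexical-overlap ranking, sketched here with a simple cosine similarity over word counts:

```python
import math
from collections import Counter


def _tokens(text):
    # Lowercase and strip surrounding punctuation so "ATP?" matches "ATP".
    return [w.strip(".,?!;:").lower() for w in text.split()]


def cosine(a, b):
    """Cosine similarity between two texts' word-count vectors."""
    ca, cb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0


def map_question(question, paragraphs, top_k=2):
    """Return the top_k paragraphs most lexically similar to the question."""
    return sorted(paragraphs, key=lambda p: cosine(question, p),
                  reverse=True)[:top_k]
```

The dissertation's actual algorithm may use richer features; this sketch only illustrates the task's input-output shape (questions in, ranked paragraphs out).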

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In an evaluation with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
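Difficulty and discrimination are classical test-theory measures; the dissertation's exact formulas are not given here, so the point-biserial form below is an assumption, shown as a minimal sketch:

```python
def item_difficulty(responses):
    """Classical difficulty: proportion of examinees answering correctly.

    responses -- list of 0/1 scores for one item across all examinees.
    """
    return sum(responses) / len(responses)


def item_discrimination(item, totals):
    """Point-biserial correlation between an item's 0/1 score and total score.

    item   -- 0/1 scores for the item, one per examinee
    totals -- each examinee's total test score
    Higher values mean the item better separates strong from weak examinees.
    """
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    p = sum(item) / n
    if sd_t == 0 or p in (0, 1):
        return 0.0  # degenerate: no score variance or all same answer
    mean_correct = sum(t for i, t in zip(item, totals) if i) / sum(item)
    return (mean_correct - mean_t) / sd_t * (p / (1 - p)) ** 0.5
```

Under these definitions, "performed very closely" means the machine- and human-generated items yielded similar difficulty and discrimination distributions across the participant pool.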
ContributorsZhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created2015
Description

Businesses, like other sectors of society, are not yet taking sufficient action towards achieving sustainability. The United Nations recently agreed upon a set of Sustainable Development Goals (SDGs) which, if properly harnessed, provide a hitherto lacking framework for businesses to meaningfully drive transformations to sustainability. This paper proposes to operationalize the SDGs for businesses through a progressive framework for action with three discrete levels: communication, tactical, and strategic. Within the tactical and strategic levels, several innovative approaches are discussed and illustrated. The challenges of design and measurement, as well as the opportunities for accountability and the social side of sustainability, together call for transdisciplinary, collective action. This paper demonstrates feasible pathways and approaches for businesses to take corporate social responsibility to the next level and to utilize the SDG framework, informed by sustainability science, to support transformations towards sustainability.

ContributorsRedman, Aaron (Author)
Created2018-06-30
Description
In natural language processing, language models have achieved remarkable success over the last few years, with Transformers at the core of most of these models. Their success can be attributed mainly to the enormous amount of curated data they are trained on. Yet even language models trained on massive curated data often need specific extracted knowledge to understand and reason better, because relevant knowledge may be implicit or missing, which hampers machine reasoning. Manual knowledge curation, meanwhile, is time-consuming and error-prone. Hence, finding fast and effective methods to extract such knowledge from data is important for improving language models, as is finding effective ways to incorporate that knowledge into them. Successful knowledge extraction and integration in turn raise the question of knowledge evaluation: developing tools and challenging test suites to learn about these models' limitations and improve them further. To improve transformer-based models, then, understanding the role of knowledge becomes important. In this pursuit, in this dissertation I study three broad research directions spanning the natural language, biomedical, and cybersecurity domains: (1) Knowledge Extraction (KX) - How can transformer-based language models be leveraged to extract knowledge from data? (2) Knowledge Integration (KI) - How can such specific knowledge be used to improve such models? (3) Knowledge Evaluation (KE) - How can language models be evaluated for specific skills, and how can their limitations be understood? I propose methods to extract explicit textual, implicit structural, missing textual, and missing structural knowledge from natural language and binary programs using transformer-based language models. I develop ways to improve language models' multi-step and commonsense reasoning abilities using external knowledge. Finally, I develop challenging datasets that assess their numerical reasoning skills in both in-domain and out-of-domain settings.
ContributorsPal, Kuntal Kumar (Author) / Baral, Chitta (Thesis advisor) / Wang, Ruoyu (Committee member) / Blanco, Eduardo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2023
Description
Making significant progress on the U.N. Sustainable Development Goals (SDGs) requires change agents equipped with key competencies in sustainability. While thousands of sustainability programs have emerged at various educational levels over the past decade, there is as yet no reliable way to assess whether these programs successfully convey key competencies in sustainability. This dissertation contributes to addressing this gap in three ways. First, it reviews the body of work on key competencies in sustainability. Based on broad agreement around five key competencies, as well as an emerging set of three, an extended framework is outlined that can be used as a unified set of learning objectives across sustainability programs. The next chapter reviews the scholarly work on assessing sustainability competencies. Based on this review, a typology of assessment tools is proposed, offering guidance to both educators and researchers. Finally, drawing on the experience of the four-year “Educating Future Change Agents” project, the last chapter explores results from a diverse set of competency assessments in numerous courses. The study appraises assessment practices and results to demonstrate the opportunities and challenges in the current state of assessing key competencies in sustainability. The results of this doctoral thesis are expected to make a practical and scholarly contribution to teaching and learning in sustainability programs, particularly with regard to reliably assessing key competencies in sustainability.
ContributorsRedman, Aaron (Author) / Wiek, Arnim (Thesis advisor) / Barth, Matthias (Committee member) / Basile, George (Committee member) / Fischer, Daniel (Committee member) / Mochizuki, Yoko (Committee member) / Arizona State University (Publisher)
Created2020