Matching Items (12)
Description

The subliminal impact of framing of social, political, and environmental issues such as climate change has been studied for decades in political science and communications research. Media framing offers an “interpretative package” for average citizens on how to make sense of climate change and its consequences for their livelihoods, how to deal with its negative impacts, and which mitigation or adaptation policies to support. A line of related work has used bag-of-words and word-level features to detect frames automatically in text. Such work faces limitations because standard keyword-based features may not generalize well to surface variations in text, where different keywords are used for similar concepts.

This thesis develops a new type of textual feature that generalizes triplets extracted from text by clustering them into high-level concepts. These concepts are utilized as features to detect frames in text. Compared to unigram- and bigram-based models, classification and clustering using generalized concepts yield better discriminating features, a 12% boost in classification accuracy (from 74% to 83% F-measure), and a clustering purity of 0.91 for Frame/Non-Frame detection.
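As a rough illustration of the concept-feature idea (not the thesis's actual pipeline; the triplet format, vectorizer, cluster count, and classifier below are all assumptions), one might cluster extracted triplets and represent each document by a histogram over the resulting concept clusters:

```python
# Sketch: cluster (subject, relation, object) triplets into high-level
# concepts and use concept histograms as features for Frame/Non-Frame
# classification. The triplet extractor, cluster count, and classifier
# are illustrative assumptions, not the thesis's exact method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def concept_features(docs_triplets, n_concepts=50):
    """docs_triplets: list of documents, each a list of 'subj rel obj' strings."""
    all_triplets = [t for doc in docs_triplets for t in doc]
    vec = TfidfVectorizer()
    X = vec.fit_transform(all_triplets)
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit(X)

    # Represent each document as a normalized histogram over concept clusters.
    feats = np.zeros((len(docs_triplets), n_concepts))
    i = 0
    for d, doc in enumerate(docs_triplets):
        for _ in doc:
            feats[d, km.labels_[i]] += 1
            i += 1
        if doc:
            feats[d] /= len(doc)
    return feats, vec, km

# Usage (hypothetical loader):
# docs_triplets, labels = load_annotated_corpus()
# X, _, _ = concept_features(docs_triplets)
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```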

The automatic discovery of complex causal chains among interlinked events and their participating actors has not yet been thoroughly studied. Previous studies on extracting causal relationships from text relied on laborious and incomplete hand-developed lists of explicit causal verbs, such as “causes” and “results in.” Such approaches yield limited recall because standard causal verbs may not generalize well to surface variations in text, where different keywords and phrases are used to express similar causal effects. Therefore, I present a system that utilizes generalized concepts to extract causal relationships. The proposed algorithms overcome surface variations in written expressions of causal relationships and discover the domino effects between climate events and human security. This semi-supervised approach alleviates the need for labor-intensive keyword-list development and annotated datasets. Experimental evaluations by domain experts achieve an average precision of 82%. Qualitative assessments of the causal chains show that the results are consistent with the 2014 IPCC report, illuminating causal mechanisms underlying the linkages between climatic stresses and social instability.
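A companion sketch, again under assumed inputs (pairwise causal links with confidence scores produced by an upstream extractor), shows how such links could be chained into the domino-effect paths the abstract describes:

```python
# Sketch: chain pairwise causal links into "domino effect" paths.
# The (cause, effect, confidence) triples are assumed to come from an
# upstream concept-based extractor; thresholds are illustrative.
import networkx as nx

def causal_chains(links, min_conf=0.5, max_len=4):
    """links: iterable of (cause, effect, confidence)."""
    g = nx.DiGraph()
    for cause, effect, conf in links:
        if conf >= min_conf:
            g.add_edge(cause, effect, conf=conf)
    chains = []
    for src in g.nodes:
        for dst in g.nodes:
            if src != dst:
                for path in nx.all_simple_paths(g, src, dst, cutoff=max_len):
                    chains.append(path)
    return chains

# Example chain: drought -> crop failure -> food insecurity -> migration
# chains = causal_chains([("drought", "crop failure", 0.9),
#                         ("crop failure", "food insecurity", 0.8),
#                         ("food insecurity", "migration", 0.7)])
```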
Contributors: Alashri, Saud (Author) / Davulcu, Hasan (Thesis advisor) / Desouza, Kevin C. (Committee member) / Maciejewski, Ross (Committee member) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

The technological revolution has caused the entire world to migrate to a digital environment, and health care is no exception. Electronic Health Records (EHR) or Electronic Medical Records (EMR) are the digital repository for patients' health data. Nationwide efforts have been made by the federal government to promote the usage of EHRs, as they have been found to improve the quality of health services. Although EHR systems have been implemented almost everywhere, active use of EHR applications has not replaced paper documentation. Rather, they are often used to store data transcribed from paper documentation after each clinical procedure. This process is prone to errors such as data omission and incomplete documentation, and it is also time-consuming. This research aims to improve the adoption of real-time EHR usage during documentation by improving the usability of an iPad-based EHR application used during the resuscitation process in the intensive care unit. Using cognitive theories and HCI frameworks, this research identified areas of improvement and the customizations required to match the workflow of the resuscitation team at the Mayo Clinic. In addition, a Handwriting Recognition Engine (HRE) was integrated into the application to support stylus-based information input into the EHR, which resembles the target users' traditional pen-and-paper documentation process. The EHR application was updated and then evaluated with end users at the Mayo Clinic. The users found the application to be efficient and usable, and they preferred it over paper-based documentation.
Contributors: Subbiah, Naveen Kumar (Author) / Patel, Vimla L. (Thesis advisor) / Hsiao, Sharon (Thesis advisor) / Sen, Ayan (Committee member) / Atkinson, Robert K (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

This research starts by utilizing an efficient sparse inverse covariance matrix (precision matrix) estimation technique to identify a set of highly correlated, discriminative perspectives that separate radical and counter-radical groups. A ranking system was developed that uses these ranked perspectives to map Islamic organizations onto a set of socio-cultural, political, and behavioral scales based on their website corpora. In parallel, a gold-standard ranking of these organizations was created by domain experts; expert-to-expert agreements were computed, and experimental results are presented comparing the performance of the QUIC-based scaling system against a baseline method. The QUIC-based algorithm not only outperforms the baseline, it is also the only system that consistently performs at area-expert-level accuracy across all scales. A multi-scale ideological model was also developed to investigate the correlates of Islamic extremism in Indonesia, Nigeria, and the UK. This analysis demonstrates that violence does not correlate strongly with broad Muslim theological or sectarian orientations; rather, intolerance of religious diversity is the only consistent and statistically significant ideological correlate of Islamic extremism in these countries, alongside a desire for political change in the UK and Indonesia, and for social change in Nigeria. Next, a dynamic issue and community tracking system based on a non-negative matrix factorization (NMF) co-clustering algorithm was built to better understand the dynamics of virtual communities. The system was applied to the Iran-Saudi Arabia context to build a multi-party agent-based model that demonstrates the role of wedges and spoilers in a complex environment where coalitions are dynamic. Lastly, a visual intelligence platform called LookingGlass was developed for tracking the diffusion of online social movements: it tracks the geographical footprint, shifting positions, and flows of individuals, topics, and perspectives between groups. The algorithm utilizes large amounts of text collected from a wide variety of organizations' media outlets to discover their hotly debated topics and the discriminative perspectives voiced by opposing camps, organized into multiple scales. These discriminative perspectives are then used to classify and map individual Twitter users' message content to social movements based on the perspectives expressed in their tweets.
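For readers unfamiliar with the technique, the sketch below illustrates sparse precision-matrix estimation using scikit-learn's GraphicalLasso, which optimizes the same l1-penalized objective that QUIC solves; the feature matrix, regularization strength, and threshold are placeholders, not values from this research:

```python
# Sketch: estimate a sparse precision matrix over perspective indicators and
# read off strongly coupled perspective pairs. GraphicalLasso solves the same
# l1-penalized objective as QUIC; alpha and the threshold are illustrative.
from sklearn.covariance import GraphicalLasso

def correlated_perspectives(X, names, alpha=0.05, threshold=0.1):
    """X: (n_documents, n_perspectives) matrix of perspective scores."""
    model = GraphicalLasso(alpha=alpha).fit(X)
    precision = model.precision_
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(precision[i, j]) > threshold:
                pairs.append((names[i], names[j], precision[i, j]))
    # Strongest partial correlations first.
    return sorted(pairs, key=lambda p: -abs(p[2]))
```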
Contributors: Kim, Nyunsu (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Hsiao, Sharon (Committee member) / Corman, Steven (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

An old proverb claims that “two heads are better than one.” Crowdsourcing research and practice have taken this to heart, attempting to show that thousands of heads can be even better. This is not limited to leveraging a crowd's knowledge, but also their creativity: the ability to generate something not only useful, but also novel. In practice, initiatives such as Free and Open Source Software communities develop innovative software. In research, the field of crowdsourced creativity, which attempts to design scalable support mechanisms, is blooming. However, both contexts still present many opportunities for advancement.

In this dissertation, I seek to advance both the knowledge of limitations in technologies currently used in practice and the mechanisms that can be used for large-scale support. The overall research question I explore is: “How can we support large-scale creative collaboration in distributed online communities?” I first advance existing support techniques by evaluating the impact of active support on brainstorming performance. Furthermore, I leverage existing theoretical models of individual idea generation, as well as recommender system techniques, to design CrowdMuse, a novel adaptive large-scale idea generation system. CrowdMuse models users in order to adapt itself to each individual. I evaluate the system's efficacy through two large-scale studies. I also advance knowledge of current large-scale practices by examining common communication channels under the lens of Creativity Support Tools, yielding a list of creativity bottlenecks brought about by the affordances of these channels. Finally, I connect both ends of this dissertation by deploying CrowdMuse in an Open Source online community for two weeks, evaluating the community's usage of the system as well as its perceived benefits and issues compared to traditional communication tools.
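As an illustration only (CrowdMuse's actual user model is not specified here; the TF-IDF profile and the similarity band are assumptions), an adaptive inspiration selector might favor candidate ideas that are related to, but not identical with, a user's recent output:

```python
# Sketch: adapt inspiration delivery to each ideator by recommending ideas
# whose content is close, but not identical, to the user's recent output.
# The representation and the similarity band are assumptions, not CrowdMuse's
# exact adaptation policy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_inspirations(user_ideas, candidate_ideas, k=3, low=0.2, high=0.8):
    vec = TfidfVectorizer().fit(user_ideas + candidate_ideas)
    user_profile = vec.transform([" ".join(user_ideas)])
    cand = vec.transform(candidate_ideas)
    sims = cosine_similarity(user_profile, cand)[0]
    # Related enough to be relevant, distant enough to spark new directions.
    ranked = [(s, idea) for s, idea in zip(sims, candidate_ideas) if low <= s <= high]
    return [idea for _, idea in sorted(ranked, reverse=True)[:k]]
```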

This dissertation makes the following contributions to the field of large-scale creativity: 1) the design and evaluation of a first-of-its-kind adaptive brainstorming system; 2) the evaluation of the effects of active inspirations compared to simple idea exposure; 3) the development and application of a set of creativity support design heuristics to uncover creativity bottlenecks; and 4) an exploration of large-scale brainstorming systems’ usefulness to online communities.
Contributors: da Silva Girotto, Victor Augusto (Author) / Walker, Erin A (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Maciejewski, Ross (Committee member) / Hsiao, Sharon (Committee member) / Bigham, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Visual processing in social media platforms is a key step in gathering and understanding information in the era of the Internet and big data. Online data is rich in content, but processing it faces many challenges, including varying scales for objects of interest, unreliable and/or missing labels, the inadequacy of single-modal data, and the difficulty of analyzing high-dimensional data. Towards facilitating the processing and understanding of online data, this dissertation focuses on three challenges of great practical importance: handling scale differences in computer vision tasks such as facial component detection and face retrieval; developing efficient classifiers from partially labeled and noisy data; and employing multi-modal models and feature selection to improve multi-view data analysis. For the first challenge, I propose a scale-insensitive algorithm to expedite and accurately detect facial landmarks. For the second challenge, I propose two algorithms for learning from partially labeled data and noisy data, respectively. For the third challenge, I propose a new framework that incorporates feature selection modules into LDA models.
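As a point of reference for the second challenge, the sketch below shows a standard semi-supervised baseline for learning from partially labeled data; it is not the dissertation's algorithm, and the kernel choice and label encoding are illustrative:

```python
# Sketch: a standard semi-supervised baseline for learning from partially
# labeled data, where unlabeled examples are marked with -1. The dissertation
# proposes its own algorithms; this only illustrates the problem setting.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def fit_partially_labeled(X, y_partial):
    """y_partial: array of class labels, with -1 for unlabeled samples."""
    model = LabelSpreading(kernel="knn", n_neighbors=7)
    model.fit(X, y_partial)
    return model

# Usage:
# y = np.where(mask_labeled, y_true, -1)   # hide labels for most samples
# model = fit_partially_labeled(X, y)
# predictions = model.predict(X_new)
```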
Contributors: Zhou, Xu (Author) / Li, Baoxin (Thesis advisor) / Hsiao, Sharon (Committee member) / Davulcu, Hasan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Research has shown that the cheat sheet preparation process helps students perform better in exams. However, results have been inconclusive in determining the most effective guiding principles for creating and using cheat sheets. The traditional method of collecting and annotating cheat sheets is time-consuming and laborious, and it fails to capture students' preparation process. This thesis examines the development and usage of a new web-based cheat sheet creation tool, Study Genie, and its effects on student performance in an introductory computer science and programming course. Results suggest that actions associated with editing and organizing cheat sheets are positively correlated with exam performance, and that there is a significant difference between the activity of high-performing and low-performing students. Through these results, Study Genie presents itself as an opportunity for mass data collection and for providing insight into the assembly process rather than just the finished product of cheat sheet creation.
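The kind of analysis described, correlating logged cheat-sheet actions with exam scores, can be sketched as follows; the column names are placeholders for whatever Study Genie actually logs:

```python
# Sketch: correlate logged cheat-sheet actions with exam scores, in the spirit
# of the analysis described above. Column names are placeholders, not the
# tool's actual log schema.
import pandas as pd
from scipy.stats import pearsonr

def action_score_correlations(log_df, score_col="exam_score"):
    """log_df: one row per student, action-count columns plus an exam score."""
    results = {}
    for col in log_df.columns:
        if col == score_col:
            continue
        r, p = pearsonr(log_df[col], log_df[score_col])
        results[col] = (r, p)
    return results

# Example with hypothetical action counts:
# df = pd.DataFrame({"edits": [...], "reorganizations": [...], "exam_score": [...]})
# print(action_score_correlations(df))
```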
Contributors: Wu, Jiaqi (Co-author) / Wen, Terry (Co-author) / Hsiao, Sharon (Thesis director) / Walker, Erin (Committee member) / Computer Science and Engineering Program (Contributor) / School of Life Sciences (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Proliferation of social media websites and discussion forums in the last decade has resulted in social media mining emerging as an effective mechanism for extracting consumer patterns. Most research on social media and pharmacovigilance has concentrated on Adverse Drug Reaction (ADR) identification. Such methods employ a drug-search step followed by classification of the associated text as containing an ADR or not. Although this method works efficiently for ADR classification, when ADR evidence is spread across a user's posts over time, drug mentions alone fail to capture it. It also fails to record additional user information that could provide an opportunity for in-depth analysis of lifestyle habits and possible reasons for medical problems.

Pre-market clinical trials for drugs generally do not include pregnant women, so the drugs' effects on pregnancy outcomes are not discovered early. This thesis presents a thorough, alternative strategy for assessing the safety profiles of drugs during pregnancy by utilizing user timelines from social media. I explore the use of a variety of state-of-the-art social media mining techniques, including rule-based and machine learning techniques, to identify pregnant women, monitor their drug usage patterns, categorize their birth outcomes, and attempt to discover associations between drugs and bad birth outcomes.
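Two pieces of such a pipeline can be sketched for illustration: a rule-based check for pregnancy announcements and an odds-ratio measure of drug-outcome association. The regular expressions and counts below are illustrative, not the thesis's actual rules or data:

```python
# Sketch: one rule-based step (spotting pregnancy announcements in a timeline)
# and one association measure (odds ratio between drug exposure and a bad
# birth outcome). Patterns and counts are illustrative only.
import re

PREGNANCY_PATTERNS = [
    re.compile(r"\bi'?m\s+\d+\s+weeks\s+pregnant\b", re.I),
    re.compile(r"\bwe'?re\s+expecting\b", re.I),
]

def is_pregnancy_announcement(post: str) -> bool:
    return any(p.search(post) for p in PREGNANCY_PATTERNS)

def odds_ratio(exposed_bad, exposed_ok, unexposed_bad, unexposed_ok):
    """2x2 contingency counts of (drug exposure) x (bad birth outcome)."""
    return (exposed_bad * unexposed_ok) / (exposed_ok * unexposed_bad)

# Example: OR > 1 suggests the exposed group reports bad outcomes more often.
# print(odds_ratio(exposed_bad=8, exposed_ok=92, unexposed_bad=20, unexposed_ok=880))
```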

The technique models user timelines as longitudinal patient networks, which provide key information about pregnancy, drug usage, and post-birth reactions. I evaluate the distinct parts of the pipeline separately, validating the usefulness of each step. Using user timelines in this fashion has produced very encouraging results, and the approach can be employed for a range of other important tasks where users or patients must be followed over time to derive population-based measures.
Contributors: Chandrashekar, Pramod Bharadwaj (Author) / Davulcu, Hasan (Thesis advisor) / Gonzalez, Graciela (Thesis advisor) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Personalized learning is gaining popularity in online computer science education because it paces learning progress and adapts the instructional approach to each individual learner from a diverse background. Among the various instructional methods in computer science education, hands-on labs have unique requirements for understanding learners' behavior and assessing learners' performance for personalization. Hands-on labs are a critical learning approach for cybersecurity education: they provide complex, real-world problem scenarios and help learners develop a deeper understanding of knowledge and concepts while solving real-world problems. But using hands-on labs for cybersecurity education presents unique challenges. Existing hands-on lab exercise materials are usually managed in a problem-centric fashion, and there is no coherent way to manage existing labs and provide productive lab exercise plans for cybersecurity learners. To address these challenges, a personalized learning platform called ThoTh Lab, specifically designed for computer science hands-on labs in a cloud environment, was established. ThoTh Lab can identify a student's learning style from their activities and adapt learning material accordingly. With awareness of student learning styles, instructors are able to use techniques more suitable for each specific student and hence improve the speed and quality of the learning process. ThoTh Lab also provides student performance prediction, which allows instructors to adjust the learning progress and take other measures to help students in a timely manner. A knowledge graph in the cybersecurity domain was also constructed using natural language processing (NLP) technologies, including word embedding and hyperlink-based concept mining. This knowledge graph is then utilized during the regular learning process to build a personalized lab recommendation system that suggests relevant labs based on students' past learning history to maximize their learning outcomes. To evaluate ThoTh Lab, several in-class experiments were carried out in cybersecurity classes for both graduate and undergraduate students at Arizona State University, and data was collected over several semesters. The case studies show that, by leveraging the personalized lab platform, students tend to be more absorbed in a lab project, show more interest in the cybersecurity area, spend more effort on the project, and achieve better learning outcomes.
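A minimal sketch of the recommendation step follows, assuming only lab descriptions and a student's completed-lab history; a TF-IDF profile stands in for the knowledge-graph and word-embedding machinery described above:

```python
# Sketch: recommend the next hands-on lab by comparing lab descriptions against
# a student's completed-lab history. A TF-IDF profile is a stand-in for the
# knowledge-graph-based recommender the dissertation describes.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_labs(lab_descriptions, completed_ids, top_k=3):
    """lab_descriptions: dict mapping lab_id -> description text."""
    ids = list(lab_descriptions)
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform([lab_descriptions[i] for i in ids])
    done_rows = [ids.index(i) for i in completed_ids]
    profile = np.asarray(X[done_rows].mean(axis=0))   # averaged student profile
    sims = cosine_similarity(profile, X)[0]
    ranked = sorted(zip(sims, ids), reverse=True)
    return [lab_id for _, lab_id in ranked if lab_id not in completed_ids][:top_k]
```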
Contributors: Deng, Yuli (Author) / Huang, Dijiang (Thesis advisor) / Li, Baoxin (Committee member) / Zhao, Ming (Committee member) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

The nonprofit organization I Am Zambia works to provide supplemental education to young women in Lusaka. I Am Zambia is creating sustainable change by educating these women, who can then lift their families and communities out of poverty. The ultimate goal of this thesis was to explore and implement high-level systematic problem solving through a basic and specialized computational thinking curriculum at I Am Zambia, in order to give these women an even larger stepping stone toward a successful future.

To do this, a 4-week pilot curriculum was created, implemented, and tested through an optional class at I Am Zambia, available to women who had already graduated from the year-long I Am Zambia Academy program. A total of 18 women ages 18-24 chose to enroll in the course. There were a total of 10 lessons, taught over 20 class periods. These lessons covered four main computational thinking frameworks: introduction to computational thinking, algorithmic thinking, pseudocode, and debugging. Knowledge retention was tested through the use of a CS educational tool, QuizIt, created by the CSI Lab of the School of Computing, Informatics and Decision Systems Engineering at Arizona State University. Furthermore, pre- and post-tests were given to assess how successfully the curriculum taught the aforementioned concepts. 14 of the 18 students successfully completed both the pre- and post-tests.
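A minimal sketch of the pre/post comparison, assuming paired scores for the students who completed both tests (the scores themselves are placeholders, not the study's data):

```python
# Sketch: compare paired pre- and post-test scores for students who completed
# both, in the spirit of the evaluation described above.
from scipy.stats import ttest_rel

def pre_post_gain(pre_scores, post_scores):
    """Return mean gain plus a paired t-test over matched pre/post scores."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    t_stat, p_value = ttest_rel(post_scores, pre_scores)
    return sum(gains) / len(gains), t_stat, p_value

# Usage with placeholder scores:
# mean_gain, t, p = pre_post_gain(pre_scores=[5, 7, 4], post_scores=[8, 9, 6])
```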

Limitations of this study and suggestions for how to improve this curriculum in order to extend it into a year-long course are also presented at the conclusion of this paper.
Contributors: Griffin, Hadley Meryl (Author) / Hsiao, Sharon (Thesis director) / Mutsumi, Nakamura (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Computer science education is an increasingly vital area of study with various challenges that raise the difficulty level for new students, resulting in higher attrition rates. As part of an effort to address this issue, a new visual programming language environment was developed for this research: the Visual IoT and Robotics Programming Language Environment (VIPLE). VIPLE is based on computational thinking and flowcharts, which reduces the need to memorize the detailed syntax of text-based programming languages. VIPLE has been used at Arizona State University (ASU) across multiple years and sections of FSE100, as well as in universities worldwide. Another major issue with teaching large programming classes is the potential lack of qualified teaching assistants to grade and offer insight into students' programs at a level beyond output analysis.

In this dissertation, I propose a novel framework for performing semantic autograding, which analyzes student programs at a semantic level to give students additional, systematic help. A general autograder is not practical for general programming languages due to the flexibility of their semantics. A practical autograder is possible in VIPLE because of its simplified syntax and restricted semantic options. The design of this autograder is based on the concept of theorem provers. To achieve this goal, I employ a modified version of Pi-Calculus to represent VIPLE programs and Hoare Logic to formalize program requirements. By building on the inference rules of Pi-Calculus and Hoare Logic, I construct a theorem prover that can perform automated semantic analysis. Furthermore, building on this theorem prover enables me to develop a self-learning algorithm that can learn the conditions for a program's correctness according to a given solution program.
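For reference, two standard Hoare logic rules of the kind such a prover builds on, in their textbook forms (not necessarily the dissertation's exact formalization):

```latex
% Assignment axiom: to establish Q after x := e, Q with e substituted for x
% must hold beforehand.
\{Q[e/x]\}\; x := e \;\{Q\}

% Sequencing rule: compose proofs of consecutive statements through a
% midcondition R.
\frac{\{P\}\, S_1 \,\{R\} \qquad \{R\}\, S_2 \,\{Q\}}{\{P\}\, S_1;\, S_2 \,\{Q\}}
```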
Contributors: De Luca, Gennaro (Author) / Chen, Yinong (Thesis advisor) / Liu, Huan (Thesis advisor) / Hsiao, Sharon (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2020