Matching Items (29)
Does School Participatory Budgeting Increase Students’ Political Efficacy? Bandura’s “Sources,” Civic Pedagogy, and Education for Democracy
Description

Does school participatory budgeting (SPB) increase students’ political efficacy? SPB, which is implemented in thousands of schools around the world, is a democratic process of deliberation and decision-making in which students determine how to spend a portion of the school’s budget. We examined the impact of SPB on political efficacy in one middle school in Arizona. Our participants’ (n = 28) responses on survey items designed to measure self-perceived growth in political efficacy indicated a large effect size (Cohen’s d = 1.46), suggesting that SPB is an effective approach to civic pedagogy, with promising prospects for developing students’ political efficacy.
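For reference, Cohen's d for a pre/post comparison is the mean difference divided by the pooled standard deviation. A minimal sketch of the computation, using illustrative numbers rather than the study's actual survey data:

```python
import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    var_a = statistics.variance(sample_a)  # sample variance (n-1 denominator)
    var_b = statistics.variance(sample_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(sample_b) - statistics.mean(sample_a)) / pooled_sd

# Hypothetical pre/post survey scores on a 1-5 scale (not the study's data)
pre = [2, 3, 3, 2, 4, 3, 2, 3]
post = [4, 4, 5, 3, 5, 4, 4, 4]
print(round(cohens_d(pre, post), 2))  # → 2.04
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), the d = 1.46 reported above is a large effect.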

Contributors: Gibbs, Norman P. (Author) / Bartlett, Tara Lynn (Author) / Schugurensky, Daniel, 1958- (Author)
Created: 2021-05-01
Description
Although there are many forms of organization on the Web, one of the most prominent ways to organize web content and websites is through tags. Tags are keywords or terms assigned to a specific piece of content to help users understand the common relationships between pieces of content. Tags can be assigned by an algorithm, the author, or the community. They can also be organized into tag clouds, which are visual representations of the structure and organization contained implicitly within the tags. Importantly, little is known about how we use these different tagging structures to understand the content and structure of a given site. This project examines two characteristics of tagging structures: font size and spatial orientation. To examine how these characteristics might interact with individual differences in attentional control, a measure of working memory capacity (WMC) was included. The results showed that spatial relationships affect how well users understand the structure of a website. WMC did not have a significant effect; neither did varying the font size. These results should better inform how tags and tag clouds are used on the Web, and also provide an estimation of what properties to include when designing and implementing a tag cloud on a website.
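As an illustration of the font-size characteristic studied here, tag clouds conventionally map a tag's frequency to a font size, often on a logarithmic scale. A minimal sketch of that common convention (not the specific scheme used in this thesis):

```python
import math

def tag_font_sizes(tag_counts, min_px=12, max_px=36):
    """Map tag frequencies to font sizes by linear interpolation on log counts."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    sizes = {}
    for tag, count in tag_counts.items():
        # Guard against division by zero when all tags share one frequency
        t = (math.log(count) - lo) / (hi - lo) if hi > lo else 0.5
        sizes[tag] = round(min_px + t * (max_px - min_px))
    return sizes

# Hypothetical tag frequencies for illustration
print(tag_font_sizes({"python": 120, "hci": 30, "tags": 5}))
```

The logarithm compresses the range so that a tag used 120 times does not render 24 times larger than one used 5 times; the most and least frequent tags land at the maximum and minimum sizes, respectively.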
Contributors: Banas, Steven (Author) / Sanchez, Christopher A (Thesis advisor) / Branaghan, Russell (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The wide adoption and continued advancement of information and communications technologies (ICT) have made it easier than ever for individuals and groups to stay connected over long distances. These advances have dramatically changed the dynamics of the modern-day workplace, to the point where it is now commonplace to see large, distributed multidisciplinary teams working together on a daily basis. In this environment, however, motivating, understanding, and valuing the diverse contributions of individual workers in collaborative enterprises become challenging. To address these issues, this thesis presents the goals, design, and implementation of Taskville, a distributed workplace game played by teams on large, public displays. Taskville uses a city-building metaphor to represent the completion of individual and group tasks within an organization. Promising results from two usability studies and two longitudinal studies at a multidisciplinary school demonstrate that Taskville supports personal reflection and improves team awareness through an engaging workplace activity.
Contributors: Nikkila, Shawn (Author) / Sundaram, Hari (Thesis advisor) / Byrne, Daragh (Committee member) / Davulcu, Hasan (Committee member) / Olson, Loren (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The Game As Life - Life As Game (GALLAG) project investigates how people might change their lives if they think of and/or experience their life as a game. The GALLAG system aims to help people reach their personal goals through the use of context-aware computing and tailored games and applications. To accomplish this, the GALLAG system uses a combination of sensing technologies, remote audio/video feedback, mobile devices, and an application programming interface (API) to empower users to create their own context-aware applications. However, the API requires programming through source code, a task that is too complicated and abstract for many users. This thesis presents GALLAG Strip, a novel approach to programming sensor-based context-aware applications that combines the Programming With Demonstration technique and a mobile device to let users experience their applications as they program them. GALLAG Strip lets users create sensor-based context-aware applications in an intuitive and appealing way without the need for computer programming skills; instead, they program their applications by physically demonstrating their envisioned interactions within a space, using the same interface they will later use to interact with the system, that is, GALLAG-compatible sensors and mobile devices. GALLAG Strip was evaluated through a study with end users in a real-world setting, measuring their ability to program simple and complex applications accurately and in a timely manner. The evaluation also includes a benchmark in which expert GALLAG system programmers created the same applications. Data and feedback collected from the study show that GALLAG Strip allows users to create sensor-based context-aware applications easily and accurately, without the prior programming skills currently required by the GALLAG system, and enables them to create almost all of their envisioned applications.
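The Programming With Demonstration idea can be illustrated in the abstract: record a demonstrated sequence of sensor events, then treat that sequence as the trigger for a rule. A minimal sketch in which the class, event names, and matching policy are all hypothetical, not GALLAG's actual interface:

```python
class DemoRecorder:
    """Record a demonstrated sequence of sensor events, then match it live."""

    def __init__(self):
        self.recorded = []

    def demonstrate(self, events):
        # Demonstration phase: the user physically triggers sensors in a space;
        # we store the observed sequence verbatim.
        self.recorded = list(events)

    def matches(self, live_events):
        # The rule fires when the demonstrated events occur in the same order;
        # unrelated events may be interleaved between them.
        it = iter(live_events)
        return all(ev in it for ev in self.recorded)

rule = DemoRecorder()
rule.demonstrate(["door_open", "light_on"])               # demonstration phase
print(rule.matches(["door_open", "motion", "light_on"]))  # True
print(rule.matches(["light_on", "door_open"]))            # False (wrong order)
```

The `ev in it` idiom consumes the live-event iterator, which is what makes this an in-order subsequence match rather than a set-membership check.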
Contributors: Garduno Massieu, Luis (Author) / Burleson, Winslow (Thesis advisor) / Hekler, Eric (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern technology has ensured that the learning of skills and behavior can be both widely disseminated and cheaply available. An example is virtual reality (VR) training, which can be provided often and in a safe simulated setting, and can be delivered in a manner that is engaging while negating the need to purchase special equipment. This thesis presents a case study in the form of a time-critical, team-based medical scenario known as Advanced Cardiac Life Support (ACLS). A framework and methodology associated with the design of a VR trainer for ACLS are detailed. In addition, to provide an engaging experience, the simulator was designed to incorporate immersive elements and a multimodal interface (haptic, visual, and auditory). A study was conducted to test two primary hypotheses: first, that a meaningful transfer of skill is achieved from virtual reality training to real-world mock codes, and second, that the presence of immersive components in virtual reality leads to an increase in the performance gained. The participant pool consisted of 54 clinicians divided into 9 teams of 6 members each. The teams were categorized into three treatment groups: immersive VR (3 teams), minimally immersive VR (3 teams), and control (3 teams). The study was conducted in 4 phases, from a real-world mock code pretest to assess baselines, through a 30-minute VR training session, culminating in a final mock code to assess the performance change from the baseline. The minimally immersive group was treated as a control for the immersive components. The teams were graded, in both the VR and mock code sessions, using the evaluation metric used in real-world mock codes. The study revealed that the immersive VR groups saw a greater performance gain from pretest to posttest than the minimally immersive and control groups in the VFib/VTach scenario (~20% vs. ~5%). The immersive VR groups also had a greater performance gain than the minimally immersive groups from the first to the final session of VFib/VTach (29% vs. -13%) and PEA (27% vs. 15%).
Contributors: Vankipuram, Akshay (Author) / Li, Baoxin (Thesis advisor) / Burleson, Winslow (Committee member) / Kahol, Kanav (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In the modern age, where teams consist of people in disparate locations, remote team training is highly desired. Moreover, team members' overlapping schedules force their mentors to focus on individual training instead of team training, even though team training is an integral part of collaborative team work. With the advent of modern technologies such as Web 2.0 and cloud computing, it is possible to revolutionize the delivery of time-critical team training in varied domains such as healthcare, the military, and education. Collaborative Virtual Environments (CVEs), also known as virtual worlds, together with the existing worldwide footprint of high-speed internet, could make remote team training ubiquitous. Such an integrated system could help actual mentors overcome the challenges of team training. ACLS is a time-critical activity that requires a high-performance team effort. This thesis proposes a system that leverages a virtual world (VW) to provide an integrated learning platform for Advanced Cardiac Life Support (ACLS) case scenarios. The system integrates feedback devices, such as a haptic device, so that real-time feedback can be provided. Participants can log in remotely and work in a team to diagnose the given scenario, and can be trained and tested for ACLS within the virtual world. The system is equipped with persuasive elements that aid in learning. The simulated training in this system was validated for teaching novices the procedural aspects of ACLS. Sixteen participants were divided into four groups (two control groups and two experimental groups) of four participants each. All four groups went through a didactic session in which they learned about ACLS and its procedures; a quiz after the didactic session revealed that all four groups had equal knowledge of ACLS. The two experimental groups then went through training and testing in the virtual world. Experimental group 2, which was aided by the persuasive elements, performed better than the control groups. To validate the training capabilities of the virtual world system, a final transfer test was conducted in a real-world setting at the Banner Simulation Center on high-fidelity mannequins. The test revealed that the experimental groups (average score 65/100) performed better than the control groups (average score 16/100), and that experimental group 2, aided by the persuasive elements (average score 70/100), performed better than experimental group 1 (average score 55/100). This suggests that persuasive technology can be useful for training purposes.
Contributors: Parab, Sainath (Author) / Kahol, Kanav (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Introduction chapter to the book, Educating for Democracy: The Case for Participatory Budgeting in Schools
Contributors: Bartlett, Tara Lynn (Author) / Schugurensky, Daniel, 1958- (Author)
Created: 2024-01-28
Description
Traditional sports coaching involves face-to-face instruction with athletes or playing back 2D videos of athletes' training. However, if the coach is not in the same area as the athlete, the coach cannot see the athlete's full body and thus cannot give precise guidance, limiting the athlete's improvement. To address these challenges, this paper proposes Augmented Coach, an augmented reality platform on which coaches can view, manipulate, and comment on volumetric video data of athletes' movement remotely over the network. In particular, this work includes: (a) capturing the athlete's movement with Kinect sensors and converting it into point cloud format; (b) transmitting the point cloud data to the coach's Oculus headset via a 5G or wireless network; and (c) allowing the coach to comment on the athlete's joints. The evaluation of Augmented Coach includes not only an assessment of its performance on five metrics under wireless and 5G network environments, but also the coaches' and athletes' experience of using it. The results show that Augmented Coach enables coaches to instruct athletes from a distance and to provide effective feedback for correcting athletes' motions over the network.
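The transmission step in a pipeline like this requires serializing each point cloud frame into bytes for the network. A minimal sketch of one simple wire format (a count header plus packed XYZ floats); this layout is illustrative, not the format Augmented Coach actually uses:

```python
import struct

def pack_point_cloud(points):
    """Serialize (x, y, z) float triples: 4-byte little-endian count header,
    then 12 bytes (three float32 values) per point."""
    payload = struct.pack("<I", len(points))
    for x, y, z in points:
        payload += struct.pack("<fff", x, y, z)
    return payload

def unpack_point_cloud(payload):
    """Inverse of pack_point_cloud: read the count, then each XYZ triple."""
    (count,) = struct.unpack_from("<I", payload, 0)
    return [struct.unpack_from("<fff", payload, 4 + i * 12) for i in range(count)]

cloud = [(0.0, 1.5, 2.0), (0.5, 0.5, 3.0)]
frame = pack_point_cloud(cloud)
print(len(frame))                          # 4-byte header + 2 points * 12 bytes = 28
print(unpack_point_cloud(frame) == cloud)  # True (these values are float32-exact)
```

At 12 bytes per point, a dense capture of hundreds of thousands of points per frame quickly adds up, which is why bandwidth (and hence 5G vs. ordinary wireless) matters for streaming volumetric video.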
Contributors: Qiao, Yunhan (Author) / LiKamWa, Robert (Thesis advisor) / Bansal, Ajay (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Research has shown that learning processes can be enriched and enhanced by the presence of affective interventions. The goal of this dissertation was to design, implement, and evaluate an affective agent that provides affective support in real time in order to enrich the student's learning experience and performance by inducing and/or maintaining a productive learning path. This work combined research and best practices from affective computing, intelligent tutoring systems, and educational technology to address the design and implementation of an affective agent and corresponding pedagogical interventions. It included the incorporation of the affective agent into an Exploratory Learning Environment (ELE) adapted for this research.

A gendered, three-dimensional, animated, human-like character accompanied by text- and speech-based dialogue visually represented the proposed affective agent. The agent's pedagogical interventions considered inputs from the ELE (interface, model-building, and performance events) and from the user (emotional and cognitive events). The user's emotional events, captured by biometric sensors and processed by a decision-level fusion algorithm for a multimodal system, informed, in combination with the events from the ELE, the production-rule-based behavior engine that defined and triggered pedagogical interventions. The pedagogical interventions focused on affective dimensions and took the form of affective dialogue prompts and animations.

An experiment was conducted to assess the impact of the affective agent, Hope, on the student’s learning experience and performance. In terms of the student’s learning experience, the effect of the agent was analyzed in four components: perception of the instructional material, perception of the usefulness of the agent, ELE usability, and the affective responses from the agent triggered by the student’s affective states.

Additionally, in terms of the student’s performance, the effect of the agent was analyzed in five components: tasks completed, time spent solving a task, planning time while solving a task, usage of the provided help, and attempts to successfully complete a task. The findings from the experiment did not provide the anticipated results related to the effect of the agent; however, the results provided insights to improve diverse components in the design of affective agents as well as for the design of the behavior engines and algorithms to detect, represent, and handle affective information.
Contributors: Chavez Echeagaray, Maria Elena (Author) / Atkinson, Robert K (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Graesser, Arthur C. (Committee member) / VanLehn, Kurt (Committee member) / Walker, Erin A (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Reading partners’ actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as the self-serving and correspondence biases, lead people to misinterpret their partners’ actions and falsely assign blame after an unexpected event. These biases further influence people’s trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans, but these improvements may interfere with people’s ability to accurately calibrate their trust in machines and their capabilities, which calls for an understanding of attribution biases’ effects on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and by people’s assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and to report the appropriate level of information about external conditions.
Contributors: Hsiung, Chi-Ping (M.S.) (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019