Human team members show a remarkable ability to infer the state of their partners and anticipate their needs and actions. Prior research demonstrates that an artificial system can make some accurate predictions about artificial agents. This study investigated whether an artificial system could generate a robust Theory of Mind of human teammates. An urban search and rescue (USAR) task environment was developed to elicit human teamwork and evaluate inference and prediction about team members by software agents and humans. The task varied team members' roles and skills, types of task synchronization and interdependence, task risk and reward, completeness of mission planning, and information asymmetry. The task was implemented in Minecraft™ and applied in a study of 64 teams, each with three remotely distributed members. An evaluation of six Artificial Social Intelligence (ASI) agents and several human observers addressed the accuracy with which each predicted team performance, inferred experimentally manipulated knowledge of team members, and predicted member actions. All agents performed above chance; humans slightly outperformed ASI agents on some tasks and significantly outperformed them on others; no single ASI agent reliably outperformed the others; and the accuracy of both ASI agents and human observers improved rapidly, though modestly, during the brief trials.
Using a 2 × 3 factorial design, this study compared learner outcomes and motivation across technologies (audio-only, video, AR) and groupings (individuals, dyads) with 182 undergraduate and graduate students who were self-identified art novices. Learner outcomes were measured by post-activity spoken responses to a painting reproduction, with the pre-activity response as a moderating variable. Motivation was measured by the sum score of a reduced version of the Instructional Materials Motivational Survey (IMMS), accounting for attention, relevance, confidence, and satisfaction, with total time spent in the learning activity as the moderating variable. Information on participant demographics, technology usage, and art experience was also collected.
Participants were randomly assigned to one of six conditions that differed by technology and grouping before completing a learning activity in which they viewed four high-resolution, printed-to-scale painting reproductions in a gallery-like setting while listening to audio-recorded conversations of two experts discussing the actual paintings. All participants listened to the expert conversations, but participants in the video and AR conditions also received visual supports via a mobile device.
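The random assignment above crosses two grouping levels with three technology levels to yield six conditions. A minimal sketch of how balanced assignment to such a 2 × 3 design could be implemented is shown below; the condition labels and function name are illustrative assumptions, not taken from the study's materials.

```python
import random

# Illustrative labels for the 2 x 3 factorial design
# (grouping x technology); not the study's actual materials.
TECHNOLOGIES = ["audio-only", "video", "AR"]
GROUPINGS = ["individual", "dyad"]

def assign_conditions(participant_ids, seed=0):
    """Shuffle participants, then deal them round-robin into the
    six crossed conditions so that cell sizes stay balanced."""
    conditions = [(g, t) for g in GROUPINGS for t in TECHNOLOGIES]
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)]
            for i, pid in enumerate(ids)}

# With 182 participants, each of the six cells receives 30 or 31 people.
assignment = assign_conditions(range(182))
```

Round-robin dealing after a shuffle keeps cell sizes within one of each other, which is one common way to approximate equal group sizes under random assignment.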
Though no main effects were found for technology or grouping, findings did include statistically significantly higher learner outcomes for the AR conditions than for the audio-only conditions on the elements of design subscale (the characteristics most represented by the visual supports of the AR application). When participants saw digital representations of line, shape, and color directly on the paintings, they were more likely to identify those same features in the post-activity painting. Seeing what the experts see, in a situated environment, produced evidence that participants began to view paintings in a manner similar to the experts. This is evidence of the value of the temporal and spatial contiguity afforded by AR in cognitive modeling learning environments.