Matching Items (23)
152310-Thumbnail Image.png
Description
The wide adoption and continued advancement of information and communications technologies (ICT) have made it easier than ever for individuals and groups to stay connected over long distances. These advances have contributed to dramatically changing the dynamics of the modern-day workplace, to the point where it is now commonplace to see large, distributed multidisciplinary teams working together on a daily basis. However, in this environment, motivating, understanding, and valuing the diverse contributions of individual workers in collaborative enterprises becomes challenging. To address these issues, this thesis presents the goals, design, and implementation of Taskville, a distributed workplace game played by teams on large, public displays. Taskville uses a city-building metaphor to represent the completion of individual and group tasks within an organization. Promising results from two usability studies and two longitudinal studies at a multidisciplinary school demonstrate that Taskville supports personal reflection and improves team awareness through an engaging workplace activity.
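As a loose illustration of that city-building metaphor (not Taskville's actual implementation; the team names, task names, and growth rule below are purely hypothetical), each completed task could simply add a building to its team's district:

```python
from collections import defaultdict

# Illustrative completed tasks: (team, task title).
completed_tasks = [
    ("Design Team", "wireframes"),
    ("Design Team", "usability report"),
    ("Engineering", "build pipeline"),
]

city = defaultdict(list)  # district name -> list of buildings
for team, task in completed_tasks:
    # Each new task adds a building; later buildings are taller (arbitrary rule).
    city[team].append({"building": task, "floors": 1 + len(city[team])})

for district, buildings in city.items():
    skyline = " ".join(f"{b['building']}({b['floors']}F)" for b in buildings)
    print(f"{district}: {skyline}")
```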
Contributors: Nikkila, Shawn (Author) / Sundaram, Hari (Thesis advisor) / Byrne, Daragh (Committee member) / Davulcu, Hasan (Committee member) / Olson, Loren (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As the application of interactive media systems expands to address broader problems in health, education, and creative practice, these systems fall within a higher-dimensional space that is inherently more complex to design for. In response to this need, an emerging area of interactive system design, referred to as experiential media systems, applies hybrid knowledge synthesized across multiple disciplines to address challenges relevant to daily experience. Interactive neurorehabilitation (INR) aims to enhance functional movement therapy by integrating detailed motion capture with interactive feedback in a manner that facilitates engagement and sensorimotor learning for those who have suffered neurologic injury. While INR shows great promise to advance the current state of therapies, a cohesive media design methodology for INR is missing due to the present lack of substantial evidence within the field. Using an approach based on experiential media to draw knowledge from external disciplines, this dissertation proposes a compositional framework for authoring visual media for INR systems across contexts and applications within upper extremity stroke rehabilitation. The compositional framework is applied across systems for supervised training, unsupervised training, and assisted reflection, which reflect the collective work of the Adaptive Mixed Reality Rehabilitation (AMRR) Team at Arizona State University, of which the author is a member. Formal structures and a methodology for applying them are described in detail for the visual media environments designed by the author. Data collected from studies conducted by the AMRR team to evaluate these systems in both supervised and unsupervised training contexts are also discussed, in terms of the extent to which the application of the compositional framework is supported and which aspects require further investigation. The potential broader implications of the proposed compositional framework and methodology are the dissemination of interdisciplinary information to accelerate the informed development of INR applications and the demonstration of the potential benefit of generalizing integrative approaches that merge arts- and science-based knowledge to other complex problems related to embodied learning.
Contributors: Lehrer, Nicole (Author) / Rikakis, Thanassis (Committee member) / Olson, Loren (Committee member) / Wolf, Steven L. (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
153158-Thumbnail Image.png
Description
Stroke is a leading cause of disability, with effects that vary across stroke survivors, necessitating comprehensive approaches to rehabilitation. Interactive neurorehabilitation (INR) systems represent promising technological solutions that can provide an array of sensing, feedback, and analysis tools which hold the potential to maximize clinical therapy as well as extend therapy to the home. Currently, there are a variety of approaches to INR design, which, coupled with minimal large-scale clinical data, has led to a lack of cohesion in INR design. INR design presents an inherently complex space, as these systems have multiple users, including stroke survivors, therapists, and designers, each with their own user experience needs. This dissertation proposes that comprehensive INR design, which can address this complex user space, requires and benefits from the application of interdisciplinary research that spans motor learning and interactive learning. A methodology for integrated and iterative design approaches to INR task experience, assessment, hardware, software, and interactive training protocol design is proposed within the comprehensive example of the design and implementation of a mixed reality rehabilitation system for minimally supervised environments. This system was tested with eight stroke survivors, who showed promising results in both functional and movement-quality improvement. The results of testing the system with stroke survivors, along with observations of user experiences, are presented together with suggested improvements to the proposed design methodology. This integrative design methodology is proposed to benefit not only comprehensive INR design but also complex interactive system design in general.
Contributors: Baran, Michael (Author) / Rikakis, Thanassis (Thesis advisor) / Olson, Loren (Thesis advisor) / Wolf, Steven L. (Committee member) / Ingalls, Todd (Committee member) / Arizona State University (Publisher)
Created: 2014
149977-Thumbnail Image.png
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing various key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed that automatically detects and learns non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
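The decision-level product-rule fusion mentioned above can be sketched briefly. The following is a minimal illustration of combining per-camera class posteriors with the product rule; the camera outputs and class counts are made up for illustration and are not data from the dissertation:

```python
import numpy as np

def product_rule_fusion(per_camera_posteriors):
    """Fuse per-camera class posteriors with the product rule.

    per_camera_posteriors: array of shape (num_cameras, num_classes), where each
    row is one camera's posterior distribution over gesture classes.
    Returns the fused arg-max class index and the fused distribution.
    """
    posteriors = np.asarray(per_camera_posteriors, dtype=float)
    fused = np.prod(posteriors, axis=0)   # element-wise product across cameras
    fused /= fused.sum()                  # renormalize to a proper distribution
    return int(np.argmax(fused)), fused

# Illustrative posteriors from three uncalibrated cameras over four gesture classes.
cameras = [
    [0.60, 0.20, 0.10, 0.10],
    [0.30, 0.40, 0.20, 0.10],
    [0.55, 0.15, 0.20, 0.10],
]
label, fused = product_rule_fusion(cameras)
print(label, fused)
```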
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
149780-Thumbnail Image.png
Description
The demand for handheld portable computing in education, business, and research has resulted in advanced mobile devices with powerful processors and large multi-touch screens. Such devices are capable of handling tasks of moderate computational complexity such as word processing, complex Internet transactions, and even human motion analysis. Apple's iOS devices, including the iPhone, iPod touch, and the latest in the family, the iPad, are among the most well-known and widely used mobile devices today. Their advanced multi-touch interface and improved processing power can be exploited for engineering and STEM demonstrations. Moreover, these devices have become a part of everyday student life. Hence, the design of exciting mobile applications and software represents a great opportunity to build student interest and enthusiasm in science and engineering. This thesis presents the design and implementation of portable interactive signal processing simulation software on the iOS platform. The iOS-based object-oriented application is called i-JDSP and is based on the award-winning Java-DSP concept. It is implemented in Objective-C and C as a native Cocoa Touch application that can run on any iOS device. i-JDSP offers basic signal processing simulation functions such as the Fast Fourier Transform, filtering, and spectral analysis on a compact and convenient graphical user interface, and provides a very compelling multi-touch programming experience. Built-in modules also demonstrate concepts such as pole-zero placement. i-JDSP also incorporates sound capture and playback options that can be used in near real-time analysis of speech and audio signals. All simulations can be constructed visually by forming interactive block diagrams through multi-touch and drag-and-drop. Computations are performed on the mobile device when necessary, making block diagram execution fast. Furthermore, the extensive support for user interactivity provides scope for improved learning. The results of an i-JDSP assessment among senior undergraduate and first-year graduate students revealed that the software created a significant positive impact and increased the students' interest and motivation in understanding basic DSP concepts.
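To give a flavor of the kind of block-diagram chain a student might assemble in i-JDSP (signal generator, filter, FFT), here is a minimal Python sketch. It is not i-JDSP's Objective-C/C implementation; the sampling rate, test tone, and filter settings are illustrative assumptions:

```python
import numpy as np
from scipy import signal

fs = 8000                                   # sampling rate in Hz (illustrative)
n = np.arange(256)
# "Signal generator" block: a 440 Hz tone plus additive noise.
x = np.sin(2 * np.pi * 440 * n / fs) + 0.5 * np.random.randn(n.size)

# "Filter" block: 4th-order low-pass Butterworth, applied with zero-phase filtering.
b, a = signal.butter(4, 1000, btype="low", fs=fs)
y = signal.filtfilt(b, a, x)

# "FFT" block: magnitude spectrum of the filtered signal.
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(y.size, d=1 / fs)
print(freqs[np.argmax(spectrum)])           # dominant frequency after filtering
```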
Contributors: Liu, Jinru (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Qian, Gang (Committee member) / Arizona State University (Publisher)
Created: 2011
149922-Thumbnail Image.png
Description
Bridging the semantic gap is one of the fundamental problems in multimedia computing and pattern recognition. The challenge of associating low-level signals with their high-level semantic interpretations is mainly due to the fact that semantics are often conveyed implicitly in a context, relying on interactions among multiple levels of concepts or low-level data entities. Also, additional domain knowledge may often be indispensable for uncovering the underlying semantics, but in most cases such domain knowledge is not readily available from the acquired media streams. Thus, making use of various types of contextual information and leveraging corresponding domain knowledge are vital for associating high-level semantics with low-level signals more accurately in multimedia computing problems. In this work, novel computational methods are explored and developed for incorporating contextual information and domain knowledge, in different forms, into multimedia computing and pattern recognition problems. Specifically, a novel Bayesian approach with statistical-sampling-based inference is proposed for incorporating a special type of domain knowledge, a spatial prior on the underlying shapes; cross-modality correlations are explored via Kernel Canonical Correlation Analysis, and the learned space is then used for associating multimedia content in different forms; and contextual information modeled as a graph is leveraged for regulating interactions among high-level semantic concepts (e.g., category labels) and the low-level input signal (e.g., spatial/temporal structure). Four real-world applications, including visual-to-tactile face conversion, photo tag recommendation, wild web video classification, and unconstrained consumer video summarization, are selected to demonstrate the effectiveness of the approaches. These applications range from classic research challenges to emerging tasks in multimedia computing. Results from experiments on large-scale real-world data, with comparisons to other state-of-the-art methods and subjective evaluations with end users, confirmed that the developed approaches exhibit salient advantages, suggesting that they are promising for leveraging contextual information and domain knowledge across a wide range of multimedia computing and pattern recognition problems.
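The cross-modality association step can be sketched with ordinary (linear) CCA as a simplified stand-in for the Kernel Canonical Correlation Analysis used in the dissertation; the two modalities and their features below are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
shared = rng.randn(200, 3)                                        # latent content shared by both modalities
visual = shared @ rng.randn(3, 20) + 0.1 * rng.randn(200, 20)     # e.g. image features
textual = shared @ rng.randn(3, 10) + 0.1 * rng.randn(200, 10)    # e.g. tag features

# Learn a shared, maximally correlated space from paired samples of the two modalities.
cca = CCA(n_components=3)
cca.fit(visual, textual)
vis_proj, txt_proj = cca.transform(visual, textual)

# In the learned space, matching items from the two modalities are highly correlated,
# so nearest neighbours across the projections can drive association or recommendation.
corr = [np.corrcoef(vis_proj[:, k], txt_proj[:, k])[0, 1] for k in range(3)]
print(corr)
```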
Contributors: Wang, Zhesheng (Author) / Li, Baoxin (Thesis advisor) / Sundaram, Hari (Committee member) / Qian, Gang (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2011
133899-Thumbnail Image.png
Description
Emerging technologies, such as augmented reality (AR), are growing in popularity and accessibility at a fast pace. Developers are building more and more games and applications with this technology, but few have stopped to think about the best practices for creating a good user experience (UX). Currently, there are no universally accepted human-computer interaction guidelines for augmented reality because it is still relatively new. This paper examines three features - virtual content scale, indirect selection, and virtual buttons - in an attempt to discover their impact on the user experience in augmented reality. A Battleship game was developed using the Unity game engine with Vuforia, an augmented reality platform, and built as an iOS application to test these features. The hypothesis was that both virtual content scale and indirect selection would result in a more enjoyable and engaging user experience, whereas virtual buttons would be too confusing for users to fully appreciate. Usability testing was conducted to gauge participants' responses to these features. After playing a base version of the game with no additional features and then a second version with one of the three features, participants rated their experiences and provided feedback in a four-part survey. It was observed during testing that people did not naturally move their devices around the augmented space and needed guidance to navigate the game. Most users were fascinated by the visuals of the game and by two of the tested features. It was found that movement around the augmented space and feedback from the virtual content were critical aspects in creating a good user experience in augmented reality.
Contributors: Bauman, Kirsten (Co-author) / Benson, Meera (Co-author) / Olson, Loren (Thesis director) / LiKamWa, Robert (Committee member) / School of the Arts, Media and Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
133046-Thumbnail Image.png
Description
This study explores the results of an event hosted for undergraduate students in the Arts, Media and Engineering (AME) department at Arizona State University. Eighteen students were asked to sit and eat lunch with one another and share their opinions on personal and school-related topics. A follow-up survey consisting of eight questions was sent out to gauge how effective the event was in getting students to build stronger relationships with each other. Statistical analysis showed that 89% of the students who attended would participate again and would consider collaborating on future projects with another student from the event. From these results, a series of future interventions like the one described in this paper could promote stronger relationships among students and add value to the department. A positive response from the students who participated could imply that students might be more inclined to reach out to classmates in a setting made for that purpose.
Contributors: Wheeler, Hannah M (Author) / Tinapple, David (Thesis director) / Olson, Loren (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
149461-Thumbnail Image.png
Description
This thesis investigates the role of activity visualization tools in increasing group awareness in the workplace. Today, electronic calendaring tools are widely used in the workplace. Their primary function is to enable each person to maintain a work schedule; they are also used to schedule meetings and share work details when appropriate. However, a key limitation of current tools is that they do not enable people in the workplace to understand the activity of the group as a whole. A tool that increases group awareness would promote reflection; it would enable thoughtful engagement with one's co-workers. I have developed two tools: the first enables a worker to examine detailed information about his/her own tasks within the context of peers' anonymized task data; the second is a public display to promote group reflection. I have used an iterative design methodology to refine the tools. I developed the ActivityStream desktop tool, which enables users to examine the detailed information of their own activities and the aggregate information of their peers' activities. ActivityStream uses a client-server architecture. The server collects activity data from each user on a daily basis by parsing RSS feeds associated with their preferred online calendaring and task management tool. The client software displays personalized aggregate data and user-specific tasks, including task types, and visualizes the activity data at multiple time scales. The activity data for each user is represented through discrete blocks; interacting with a block reveals task details. The activity of the rest of the group is anonymized, aggregated, and visualized via Bezier curves. I also developed the ActivityStream public display, which shows how a group of people's activity levels change over time, to promote group reflection. In particular, the public display shows the anonymized task activity data over the course of one year, visualizing the data for each user with a Bezier curve. The display shows data from all users simultaneously, enabling users to reflect on the relationships across group members over the course of one year. The survey results revealed that users became more aware of their peers' activities in the workplace.
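A minimal sketch of the server-side collection step described above, assuming a standard RSS 2.0 feed; the feed URL and tag names are assumptions, since the actual calendaring tools' feed schemas are not specified here:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def fetch_daily_tasks(feed_url):
    """Pull a user's calendar/task RSS feed and reduce each item to
    (title, published date). Standard RSS 2.0 element names are assumed;
    the original ActivityStream server's schema may differ."""
    with urlopen(feed_url) as response:
        root = ET.fromstring(response.read())
    tasks = []
    for item in root.iter("item"):              # RSS 2.0 <item> elements
        title = item.findtext("title", default="")
        published = item.findtext("pubDate", default="")
        tasks.append((title, published))
    return tasks

# tasks = fetch_daily_tasks("https://example.org/user/tasks.rss")  # hypothetical feed
```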
Contributors: Zhang, Lu (Author) / Sundaram, Hari (Thesis advisor) / Qian, Gang (Thesis advisor) / Kelliher, Aisling (Committee member) / Arizona State University (Publisher)
Created: 2010
149621-Thumbnail Image.png
Description
Social situational awareness, or attentiveness to one's social surroundings, including the people present, their interactions, and their behaviors, is a complex sensory-cognitive-motor task that requires one to be thoroughly engaged in understanding one's social interactions. These interactions are formed out of the elements of human interpersonal communication, including both verbal and non-verbal cues. While verbal cues are instructive and delivered through speech, non-verbal cues are mostly interpretive and require the full attention of the participants to understand, comprehend, and respond to appropriately. Unfortunately, certain situations are not conducive to a person having complete access to their social surroundings, especially the non-verbal cues. For example, a person who is blind or visually impaired may find that non-verbal cues like smiling, head nods, eye contact, body gestures, and the facial expressions of their interaction partners are not accessible due to this sensory deprivation. The same could be said of people who are remotely engaged in a conversation and physically separated, without visual access to one another's body and facial mannerisms. This dissertation describes novel multimedia technologies to aid situations where it is necessary to mediate social situational information between interacting participants. As an example of the proposed system, an evidence-based model for understanding the accessibility problem faced by people who are blind or visually impaired is described in detail. From the derived model, a suite of sensing and delivery technologies that use state-of-the-art computer vision algorithms in combination with novel haptic interfaces is developed towards a) a Dyadic Interaction Assistant, capable of helping individuals who are blind to access important head- and face-based non-verbal communicative cues during one-on-one dyadic interactions, and b) a Group Interaction Assistant, capable of providing situational awareness about the interaction partners and their dynamics to a user who is blind, while also providing important social feedback about the user's own body mannerisms. The goal is to increase the effective social situational information that one has access to, with the conjecture that a good awareness of one's social surroundings gives one the ability to better understand and empathize with one's interaction partners. Extending the work from this social interaction assistive technology, the need for enriched social situational awareness in everyday professional situations is also discussed, including a) enriched remote interactions between physically separated interaction partners, and b) enriched communication between medical professionals during critical care procedures, towards enhanced patient safety. In the concluding remarks, this dissertation engages the reader in a science and technology policy discussion on the potential effect of a new technology like the social interaction assistant on society. Along these policy lines, social disability is highlighted as an important area that requires special attention from researchers and policy makers. Given that the proposed technology relies on wearable, inconspicuous cameras, the discussion of privacy policies is extended to encompass newly evolving interpersonal interaction recorders, like the one presented in this dissertation.
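As a rough sketch of the sensing side only (not the dissertation's system or its haptic delivery layer), OpenCV's stock Haar cascades can detect one head/face-based non-verbal cue, a smile, in a single camera frame; the image path is hypothetical:

```python
import cv2

# Stock cascades shipped with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smiles(frame_bgr):
    """Return bounding boxes of faces judged to be smiling in one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smiling_faces = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]                  # search for a smile inside the face
        if len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0:
            smiling_faces.append((x, y, w, h))
    return smiling_faces

# frame = cv2.imread("interaction_partner.jpg")   # hypothetical captured frame
# print(detect_smiles(frame))
```

A delivery layer (for example, a haptic interface as described above) could then translate such detections into cues a user who is blind can perceive.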
Contributors: Krishna, Sreekar (Author) / Panchanathan, Sethuraman (Thesis advisor) / Black, John A. (Committee member) / Qian, Gang (Committee member) / Li, Baoxin (Committee member) / Shiota, Michelle (Committee member) / Arizona State University (Publisher)
Created: 2011