Matching Items (14)
Description
Mobile apps have improved human life in many aspects, ranging from instant messaging to tele-health. In the current app development paradigm, apps are developed individually and agnostic of each other. The goal of this thesis is to enable a world in which multiple apps communicate with each other to achieve synergistic benefits. Today, integrating apps requires manual communication between developers, which can be problematic on many levels. To promote app integration, a systematic approach to data sharing between multiple apps is essential. However, current approaches to app integration require large code modifications to reap the benefits of shared data, such as requiring developers to provide APIs or to adopt large, invasive middleware. In this thesis, a data sharing framework was developed that provides a non-invasive interface between mobile apps for data sharing and integration. A separate app acts as a registry, allowing apps to register database tables to be shared and to query this information. Two health monitoring apps were developed to evaluate the sharing framework and different methods of data integration between apps to promote synergistic feedback. The health monitoring apps have shown that non-invasive solutions can provide data sharing functionality without large code modifications or manual communication between developers.
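
A minimal sketch of the registry idea this abstract describes: a separate app lets producers register shared database tables and lets consumers discover them. The class and method names (SharedTableRegistry, register_table, find_tables) are illustrative assumptions, not the thesis's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TableEntry:
    app_id: str          # app that owns the data
    table_name: str      # database table exposed for sharing
    columns: list        # column names consumers can query

@dataclass
class SharedTableRegistry:
    entries: list = field(default_factory=list)

    def register_table(self, app_id, table_name, columns):
        """Called by a data-producing app to expose one of its tables."""
        self.entries.append(TableEntry(app_id, table_name, columns))

    def find_tables(self, column):
        """Called by a data-consuming app to discover tables offering a column."""
        return [e for e in self.entries if column in e.columns]

# Example: a step-counter app shares its data, a diet app discovers it.
registry = SharedTableRegistry()
registry.register_table("step_counter", "daily_steps", ["date", "steps"])
print([e.table_name for e in registry.find_tables("steps")])  # ['daily_steps']
```
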
Contributors: Milazzo, Joseph (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Varsamopoulos, Georgios (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Many web search improvements have been developed since the advent of the modern search engine, but one underrepresented area is the application of specific customizations to search results for educational web sites. To address this issue and improve the relevance of search results in automated learning environments, this work integrates context-aware search principles with preference-based re-ranking and query modification. The research investigates context-sensitive, preference-based re-ranking of results, which takes user input about preferred content and combines it with query modifications that automatically search for a variety of modified terms derived from the given query, integrating those results into the overall re-ranking for the context. The result of this work is a novel web search algorithm that could be applied to any online learning environment attempting to collect relevant resources for learning about a given topic. The algorithm was evaluated through user studies comparing traditional search results with the context-aware results returned by the algorithm for a given topic. These studies explore how this integration of methods can improve the relevance of the returned results when compared against other modern search engines.
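
A rough sketch of the kind of combination the abstract describes: results fetched for automatically modified queries are merged, then boosted by user preferences before ranking. The weights and the modify_query() helper are assumptions for illustration only, not the thesis's actual algorithm.

```python
def modify_query(query):
    """Generate hypothetical query variations (e.g., appended context terms)."""
    return [query, query + " tutorial", query + " example"]

def rerank(results_by_query, preferred_terms, preference_weight=0.5):
    """results_by_query maps each query variant to a list of (url, base_score)."""
    scores = {}
    for query, results in results_by_query.items():
        for url, base_score in results:
            # Boost results that match the user's stated content preferences.
            boost = sum(1 for t in preferred_terms if t in url.lower())
            score = base_score + preference_weight * boost
            scores[url] = max(scores.get(url, 0.0), score)
    return sorted(scores, key=scores.get, reverse=True)

queries = modify_query("binary search")
results_by_query = {q: [("https://example.edu/binary-search-tutorial", 1.0),
                        ("https://example.com/news", 0.9)] for q in queries}
print(rerank(results_by_query, preferred_terms=["tutorial", "edu"]))
```
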
Contributors: Van Egmond, Eric (Author) / Burleson, Winslow (Thesis advisor) / Syrotiuk, Violet (Thesis advisor) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The technological revolution has caused the entire world to migrate to a digital environment, and health care is no exception. Electronic Health Records (EHRs), or Electronic Medical Records (EMRs), are the digital repositories for patients' health data. Nationwide efforts have been made by the federal government to promote the use of EHRs, as they have been found to improve the quality of health services. Although EHR systems have been implemented almost everywhere, active use of EHR applications has not replaced paper documentation. Rather, they are often used to store data transcribed from paper documentation after each clinical procedure. This process is prone to errors such as data omission and incomplete documentation, and it is also time consuming. This research aims to improve the adoption of real-time EHR use during documentation by improving the usability of an iPad-based EHR application used during the resuscitation process in the intensive care unit. Using cognitive theories and HCI frameworks, this research identified areas of improvement and customizations in the application that were required to match the workflow of the resuscitation team at the Mayo Clinic. In addition, a Handwriting Recognition Engine (HRE) was integrated into the application to support stylus-based information input into the EHR, which resembles the target users' traditional pen-and-paper documentation process. The EHR application was updated and then evaluated with end users at the Mayo Clinic. The users found the application efficient and usable, and they preferred it over paper-based documentation.
Contributors: Subbiah, Naveen Kumar (Author) / Patel, Vimla L. (Thesis advisor) / Hsiao, Sharon (Thesis advisor) / Sen, Ayan (Committee member) / Atkinson, Robert K (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
One of the core components of many video games is their artificial intelligence. Through AI, a game can tell stories, generate challenges, and create encounters for the player to overcome. Even though AI has continued to advance through the implementation of neural networks and machine learning, game AI instead tends to implement a series of states or decisions to give the illusion of intelligence. Despite this limitation, games can still generate a wide range of experiences for the player. The Hybrid Game AI Framework is an AI system that combines the benefits of two commonly used approaches to developing game AI: Behavior Trees and Finite State Machines. Developed in the Unity Game Engine and the C# programming language, this AI framework represents the research that went into studying modern approaches to game AI and my own attempt at implementing the techniques learned. Object-oriented programming concepts such as inheritance, abstraction, and low coupling are used with the intent of creating game AI that is easy to implement and expand upon. The final goal was to create a flexible yet structured AI data structure while minimizing drawbacks by combining Behavior Trees and Finite State Machines.
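
A minimal sketch of the general technique named here, combining Behavior Trees with Finite State Machines: a behavior-tree leaf whose behavior is driven by a small FSM. The actual framework is written in C# for Unity; this Python version and its class names are illustrative only.

```python
class Node:
    def tick(self, agent): raise NotImplementedError  # returns "success"/"failure"/"running"

class Selector(Node):
    """Behavior-tree composite: runs children until one does not fail."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        for child in self.children:
            status = child.tick(agent)
            if status != "failure":
                return status
        return "failure"

class StateMachineNode(Node):
    """A behavior-tree leaf whose internal logic is a finite state machine."""
    def __init__(self, transitions, initial):
        self.transitions, self.state = transitions, initial
    def tick(self, agent):
        action, next_state = self.transitions[self.state](agent)
        self.state = next_state
        return action  # the FSM decides whether the leaf succeeded or is still running

# Example FSM leaf: patrol until a player is seen, then report success.
def patrol(agent):
    return ("success", "chase") if agent.get("sees_player") else ("running", "patrol")
def chase(agent):
    return ("success", "chase")

ai = Selector(StateMachineNode({"patrol": patrol, "chase": chase}, "patrol"))
print(ai.tick({"sees_player": False}))  # running
print(ai.tick({"sees_player": True}))   # success
```
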
Contributors: Ramirez Cordero, Erick Alberto (Author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Can a skill taught in a virtual environment be utilized in the physical world? This idea is explored by creating a Virtual Reality game for the HTC Vive to teach users how to play the drums. The game focuses on developing the user's muscle memory, improving the user's ability to play music as they hear it in their head, and refining the user's sense of rhythm. Several features were included to achieve this, such as a score, different levels, a demo feature, and a metronome. The game was tested for its ability to teach and for its overall enjoyability using a small sample group. Most participants in the sample group noted that they felt their sense of rhythm and drumming skill would improve by playing the game. Through the findings of this project, it can be concluded that while a virtual environment should not be considered a complete replacement for traditional instruction, it can be successfully used as a learning aid and practice tool.
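
A hypothetical sketch of the kind of timing-based scoring a rhythm game of this sort might use: compare when the user strikes a drum against the metronome's beat grid. The timing windows and point values below are assumptions for illustration, not the project's actual parameters.

```python
def beat_times(bpm, n_beats):
    """Times (in seconds) at which beats occur for a given tempo."""
    return [i * 60.0 / bpm for i in range(n_beats)]

def score_hit(hit_time, beats, perfect_window=0.05, good_window=0.12):
    """Return points for a single hit based on distance to the nearest beat."""
    error = min(abs(hit_time - b) for b in beats)
    if error <= perfect_window:
        return 100
    if error <= good_window:
        return 50
    return 0

beats = beat_times(bpm=120, n_beats=8)   # a beat every 0.5 s
print(score_hit(1.02, beats))            # 100: within 20 ms of the beat at 1.0 s
print(score_hit(1.40, beats))            # 50: 100 ms early for the beat at 1.5 s
print(score_hit(1.25, beats))            # 0: halfway between beats
```
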
Contributors: Dinapoli, Allison (Co-author) / Tuznik, Richard (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
This thesis investigates students' learning behaviors through their interaction with an educational technology, the Web Programming Grading Assistant. The technology was developed to facilitate the grading of paper-based examinations in large lecture-based classrooms and to provide richer, more meaningful feedback to students. A classroom study was designed, and data was gathered from an undergraduate computer-programming course in the fall of 2016. Analysis of the data revealed a negative correlation between the time lag of the first review attempt and performance. A survey was developed and disseminated that gave insight into how students felt about the technology and what they normally do to study for programming exams. In conclusion, the knowledge gained in this study aids the effort to better educate students in computer programming in large in-person classrooms.
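
A sketch of the type of analysis mentioned above, correlating how long students waited before their first review attempt with their performance. The data values below are fabricated placeholders purely to show the computation, not the study's results.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

days_until_first_review = [1, 2, 3, 5, 8, 13]    # placeholder values
exam_scores             = [95, 90, 85, 80, 70, 60]
print(round(pearson_r(days_until_first_review, exam_scores), 2))  # strongly negative on this toy data
```
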
Contributors: Murphy, Hannah (Author) / Hsiao, Ihan (Thesis director) / Nelson, Brian (Committee member) / School of Computing, Informatics, and Decision Systems Engineering (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
For this master's thesis, an open learner model is integrated with Quinn, a teachable robotic agent developed at Arizona State University. The model is presented as a feedback system that aims to improve a student's understanding of a subject. It also helps in understanding the effect of a learner model when it is represented by the performance of the teachable agent. The feedback system represents the performance of the teachable agent, not of the student; data in the feedback system is thus updated according to the student's understanding of the subject. This gives students an opportunity to enhance their understanding of a subject by analyzing their performance. To test the effectiveness of the feedback system, student understanding is analyzed under two conditions: in the first condition no feedback report is provided to the students, while in the second condition the feedback report is provided in the form of the agent's performance.
Contributors: Upadhyay, Abha (Author) / Walker, Erin (Thesis advisor) / Nelson, Brian (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The American Heart Association recommended in 1997 the data elements that should be collected from resuscitations in hospitals (15). Currently, data documentation of resuscitation events in hospitals, termed ‘code blue’ events, uses a paper form that is institution-specific. Problems with data capture and transcription exist, due to the challenges of dynamically documenting patient, event, and outcome variables as the code blue event unfolds.

This thesis is based on the hypothesis that an electronic version of real-time code blue data capture would lead to improved transcription of resuscitation data and enable clinicians to address deficiencies in quality of care. The primary goal of this thesis is to create an iOS-based application, designed primarily for iPads, for code blue events at the Mayo Clinic Hospital. The secondary goal is to build an open-source software development framework for converting paper-based hospital protocols into digital form.

The tool created in this study enabled resuscitation outcome data to be documented electronically rather than on paper. The tool was evaluated for usability with twenty nurses, the end users, at the Mayo Clinic in Phoenix, Arizona. The results showed that users preferred the iPad application. Furthermore, a qualitative survey showed that clinicians perceived the electronic version to be more accurate and efficient than paper-based documentation, both of which are essential for an emergency code blue resuscitation procedure.
Contributors: Bokhari, Wasif (Author) / Patel, Vimla L. (Thesis advisor) / Amresh, Ashish (Thesis advisor) / Nelson, Brian (Committee member) / Sen, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment used to measure students' success in achieving the course objectives. The overall grade obtained on paper exams and assignments does not necessarily present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academic and financial, which prevents its adoption and thereby the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates into students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond numbers and reveals the areas that need their focus? In this research I focus on bringing the capability of insightful analysis to paper exams with a less intrusive learning analytics approach that taps into generic classrooms with minimal introduction of technology. Specifically, the work focuses on the automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing semantic information to illustrate the areas that need a student's focus, and by enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
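
An illustrative sketch of indexing exam questions with ontological concepts: keyword patterns are mapped to concepts in a tiny, hand-made programming ontology, and each concept's ancestors are included. The ontology, keywords, and index structure below are assumptions for demonstration only, not the thesis's actual ontology or indexing pipeline.

```python
import re

ONTOLOGY = {                      # concept -> parent concept
    "for-loop": "iteration",
    "while-loop": "iteration",
    "iteration": "control-flow",
    "array": "data-structures",
}
KEYWORDS = {                      # surface keyword pattern -> concept
    r"\bfor\b": "for-loop",
    r"\bwhile\b": "while-loop",
    r"\barray\b|\[\]": "array",
}

def index_question(text):
    """Return the set of concepts (plus their ancestors) found in a question."""
    concepts = {c for pattern, c in KEYWORDS.items() if re.search(pattern, text)}
    closure = set(concepts)
    for c in concepts:            # walk up the ontology to include parent concepts
        while c in ONTOLOGY:
            c = ONTOLOGY[c]
            closure.add(c)
    return closure

print(index_question("Write a for loop that sums the values in an int[] array."))
# e.g. {'for-loop', 'iteration', 'control-flow', 'array', 'data-structures'}
```
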
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst a vast network of potential applications, little research has been done to provide real-time collaboration capability between users of VR and MR. The idea of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works similarly to a Google document, where two or more users can see what the others are doing, i.e. writing, modifying, viewing, etc. In the same way, the system developed during this study enables users in VR and MR to collaborate in real time.

In developing a real-time, cross-platform collaboration system between VR and MR, the study considers a scenario in which users of multiple devices are connected to a multiplayer network and guided to perform various tasks concurrently.

Usability testing was conducted to evaluate participants' perceptions of the system. Users were required to assemble a chair in alternating turns; thereafter, they filled out a survey and gave an audio interview. Results collected from the participants showed positive feedback towards using VR and MR for collaboration. However, several limitations of the current generation of devices hinder mass adoption; devices with better performance will lead to wider adoption.
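
A rough sketch of the kind of state message a turn-based VR/MR collaboration session like this might exchange over a multiplayer network. The message fields and helper functions below are assumptions for illustration, not the system's actual protocol.

```python
import json

def make_update(user_id, part_id, position, rotation, turn_complete):
    """Serialize one user's manipulation of an assembly part for the other device."""
    return json.dumps({
        "user": user_id,              # e.g. "vr_headset" or "mr_headset"
        "part": part_id,              # chair part being moved
        "position": position,         # world-space coordinates
        "rotation": rotation,         # quaternion (x, y, z, w)
        "turn_complete": turn_complete,
    })

def apply_update(scene, message):
    """Mirror the remote user's action in the local scene state."""
    data = json.loads(message)
    scene[data["part"]] = {"position": data["position"], "rotation": data["rotation"]}
    return data["turn_complete"]      # True -> hand the turn to the local user

scene = {}
msg = make_update("vr_headset", "chair_leg_1", [0.2, 0.0, 0.5], [0, 0, 0, 1], True)
if apply_update(scene, msg):
    print("Your turn:", scene)
```
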
Contributors: Seth, Nayan Sateesh (Author) / Nelson, Brian (Thesis advisor) / Walker, Erin (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017