Description

When the average person uses a computer, they interact with two main groups of devices: computer input, which consists of a keyboard and mouse, and computer output, which consists of a monitor and speakers. For those with physical disabilities, these traditional input and output methods can be difficult or uncomfortable to use. I believe VR technology can make using computers much more accessible for those individuals, and my application demonstrates that belief.

Contributors: Garcia, Mario (Author) / Johnson-Glenberg, Mina (Thesis director) / Bunch, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

Video playback is currently the primary method coaches and athletes use in sports training to give feedback on the athlete’s form and timing. Athletes will commonly record themselves using a phone or camera when practicing a sports movement, such as shooting a basketball, to then send to their coach for feedback on how to improve. In this work, we present Augmented Coach, an augmented reality tool for coaches to give spatiotemporal feedback through a 3-dimensional point cloud of the athlete. The system allows coaches to view a pre-recorded video of their athlete in point cloud form, and provides them with the proper tools in order to go frame by frame to both analyze the athlete’s form and correct it. The result is a fundamentally new concept of an interactive video player, where the coach can remotely view the athlete in a 3-dimensional form and create annotations to help improve their form. We then conduct a user study with subject matter experts to evaluate the usability and capabilities of our system. As indicated by the results, Augmented Coach successfully acts as a supplement to in-person coaching, since it allows coaches to break down the video recording in a 3-dimensional space and provide feedback spatiotemporally. The results also indicate that Augmented Coach can be a complete coaching solution in a remote setting. This technology will be extremely relevant in the future as coaches look for new ways to improve their feedback methods, especially in a remote setting.
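
The frame-by-frame review described above can be pictured as a playback loop over recorded point-cloud frames that a coach can pause and step through. The C# sketch below is only an illustration of that idea; the class and member names (PointCloudFrame, PointCloudPlayer, StepForward) are hypothetical and not taken from the Augmented Coach codebase, and rendering is left as a stub.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical container for one recorded frame of athlete points.
public class PointCloudFrame
{
    public Vector3[] Points;
    public Color[] Colors;
}

// Minimal frame-by-frame player: the coach can pause and step through
// recorded frames, mirroring the "interactive video player" idea.
public class PointCloudPlayer : MonoBehaviour
{
    public List<PointCloudFrame> RecordedFrames = new List<PointCloudFrame>();
    public float FramesPerSecond = 30f;
    public bool Paused;

    private int currentFrame;
    private float timer;

    void Update()
    {
        if (Paused || RecordedFrames.Count == 0) return;

        timer += Time.deltaTime;
        if (timer >= 1f / FramesPerSecond)
        {
            timer = 0f;
            currentFrame = (currentFrame + 1) % RecordedFrames.Count;
            Render(RecordedFrames[currentFrame]);
        }
    }

    // Step controls a coach UI could call while paused.
    public void StepForward()
    {
        currentFrame = Mathf.Min(currentFrame + 1, RecordedFrames.Count - 1);
        Render(RecordedFrames[currentFrame]);
    }

    public void StepBackward()
    {
        currentFrame = Mathf.Max(currentFrame - 1, 0);
        Render(RecordedFrames[currentFrame]);
    }

    private void Render(PointCloudFrame frame)
    {
        // Rendering is application-specific (e.g., a particle system or mesh);
        // omitted here to keep the sketch short.
    }
}
```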

Contributors: Channar, Sameer (Author) / Dbeis, Yasser (Co-author) / Richards, Connor (Co-author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Barrett, The Honors College (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

The goal of this project was to determine whether the chosen research and testing method would produce a game in which students practice math effectively. To do this, a video game was created in Unity that followed key principles for designing a math game and for how students should practice math in general. Participants were tested to identify the strategies they used to play the game, and these strategies were then defined and categorized based on their effectiveness and how well they met the learning principles. Participants were also asked a question before and after playing to determine whether the game improved their overall attitude towards math, ensuring the game helped them learn rather than hindering them. Participants' feelings towards math improved overall after playing the game, and beneficial strategies were observed, so the research and testing method was a success.

Contributors: Vaillancourt, Tyler (Author) / Kobayashi, Yoshihiro (Thesis director) / Amresh, Ashish (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Party on Wall Street is a web-based video game developed by Maroon and Gold Game Studios. As an educational entrepreneurship video game, Party on Wall Street provides a refreshing and exciting new experience for the tycoons in society who want a little more of that entrepreneurial lifestyle. Backed by research on customer demographics, Maroon and Gold Game Studios' brand identity is a modern game with multiple use cases. With strong partnerships with multiple creatives and game development built from scratch, Party on Wall Street implements a fun, high-intensity, competitive business environment for players and students to engage in. This thesis consists of building an interactive experience using AirConsole, a third-party platform that hosts the game and allows players to join by connecting to the same website on their mobile devices. The primary user can host a game, which can be cast to a larger screen, typically a television. When a game is hosted, a room code is generated that players type in on their mobile devices to connect. Once all players have joined, the host can start the game. Players go through six rounds of pitch-style investing presentations and have the opportunity to invest in other products, with the ultimate goal of earning the most money. In the end, the game was successfully implemented, extensively user tested, and is under review by the AirConsole game team. Over the last year, the team successfully brought an idea through the entire product development process, learned to build a game in Unity, applied extensive testing and validation methods, and leveraged customer research and feedback to design a game that is ultimately both enjoyable and educational.
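
The host-and-join flow described above (a room code on the shared screen, players joining from their phones, six rounds of pitching) can be sketched as a small state machine. The example below does not use the real AirConsole API; all type and method names (GameFlow, PlayerJoined, GenerateRoomCode) are hypothetical stand-ins meant only to illustrate the flow.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative lobby/round flow for a host-and-join party game.
// This does NOT use the real AirConsole API; names here are hypothetical.
public class GameFlow : MonoBehaviour
{
    private enum State { Lobby, Pitching, Results }

    private State state = State.Lobby;
    private readonly List<string> players = new List<string>();
    private string roomCode;
    private int round;
    private const int TotalRounds = 6; // six rounds of pitch-style presentations

    void Start()
    {
        roomCode = GenerateRoomCode();
        Debug.Log($"Host screen shows room code: {roomCode}");
    }

    // Called when a player enters the room code on their mobile device.
    public void PlayerJoined(string playerName)
    {
        if (state == State.Lobby) players.Add(playerName);
    }

    // Called by the host once everyone has joined.
    public void StartGame()
    {
        if (state != State.Lobby || players.Count == 0) return;
        state = State.Pitching;
        round = 1;
    }

    public void EndRound()
    {
        if (state != State.Pitching) return;
        if (round >= TotalRounds) state = State.Results; // rank players by money earned
        else round++;
    }

    private static string GenerateRoomCode()
    {
        const string alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
        var code = new char[4];
        for (int i = 0; i < code.Length; i++)
            code[i] = alphabet[Random.Range(0, alphabet.Length)];
        return new string(code);
    }
}
```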

Contributors: Waters, Eric (Author) / Wood, Collin (Co-author) / Khan, Shaheer (Co-author) / Byrne, Jared (Thesis director) / Pierce, John (Committee member) / Balven, Rachel (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Party on Wall Street is a web-based video game developed by Maroon and Gold Game Studios. As an educational entrepreneurship video game, Party on Wall Street provides a refreshing and exciting new experience for the tycoons in society who want a little more of that entrepreneurial lifestyle. Backed by research on customer demographics, Maroon and Gold Game Studios' brand identity is a modern game with multiple use cases. With strong partnerships with multiple creatives and game development built from scratch, Party on Wall Street implements a fun, high-intensity, competitive business environment for players and students to engage in. This thesis consists of building an interactive experience using AirConsole, a third-party platform that hosts the game and allows players to join by connecting to the same website on their mobile devices. The primary user can host a game, which can be cast to a larger screen, typically a television. When a game is hosted, a room code is generated that players type in on their mobile devices to connect. Once all players have joined, the host can start the game. Players go through six rounds of pitch-style investing presentations and have the opportunity to invest in other products, with the ultimate goal of earning the most money. In the end, the game was successfully implemented, extensively user tested, and is under review by the AirConsole game team. Over the last year, the team successfully brought an idea through the entire product development process, learned to build a game in Unity, applied extensive testing and validation methods, and leveraged customer research and feedback to design a game that is ultimately both enjoyable and educational.

Contributors: Wood, Collin (Author) / Waters, Eric (Co-author) / Khan, Shaheer (Co-author) / Byrne, Jared (Thesis director) / Pierce, John (Committee member) / Balven, Rachel (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

The objective of this creative project was to gain experience in digital modeling, animation, coding, shader development and implementation, model integration techniques, and the application of gaming principles and design by developing a professional educational game. The team collaborated with Glendale Community College (GCC) to produce an interactive product intended to supplement educational instruction on nutrition. The educational game developed, "Nutribots," features the player acting as a nutrition-based nanobot sent to the small intestine to help the body. Throughout the game the player is asked nutrition-based questions to test their knowledge of proteins, carbohydrates, and lipids. If the player is unable to answer a question, they must use game mechanics to progress and receive the information as a reward. The level is completed as soon as the question is answered correctly. If the player answers questions incorrectly twenty times over the course of the game, the team loses faith in the player, and the player must restart from the title screen. This limits guessing and ensures the player retains the information through repetition once it is clear they do not know the answers. The team was split into two groups for the development of this game. The first group developed models, animations, and textures using Autodesk Maya 2016 and Marvelous Designer. The second group developed code and shaders and implemented the first group's assets using Unity and Visual Studio. Once a prototype of the game was developed, it was showcased among peers to gain feedback, and the team implemented the desired changes accordingly. Development for this project began in November 2015 and ended in April 2017. Special thanks to Laura Avila, Department Chair, and Jennifer Nolz from the Glendale Community College Technology and Consumer Sciences, Food and Nutrition Department.
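
The twenty-wrong-answers rule described above amounts to a single counter tracked across the whole play-through. A minimal sketch follows, assuming a scene named "TitleScreen"; the MistakeTracker class is illustrative and not taken from the Nutribots source.

```csharp
using UnityEngine;

// Counts wrong answers across the whole play-through; reaching the limit
// sends the player back to the title screen, as described in the abstract.
public class MistakeTracker : MonoBehaviour
{
    public int MaxWrongAnswers = 20;

    private int wrongAnswers;

    public void RecordAnswer(bool correct)
    {
        if (correct) return;

        wrongAnswers++;
        if (wrongAnswers >= MaxWrongAnswers)
        {
            // The team "loses faith" in the player: restart from the title screen.
            wrongAnswers = 0;
            UnityEngine.SceneManagement.SceneManager.LoadScene("TitleScreen");
        }
    }
}
```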

Contributors: Nolz, Daisy (Co-author) / Martin, Austin (Co-author) / Quinio, Santiago (Co-author) / Armstrong, Jessica (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Valderrama, Jamie (Committee member) / School of Arts, Media and Engineering (Contributor) / School of Film, Dance and Theatre (Contributor) / Department of English (Contributor) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Herberger Institute for Design and the Arts (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

Virtual reality gives users the opportunity to immerse themselves in an accurately simulated, computer-generated environment. These environments are accurately simulated in that they provide the appearance of, and allow users to interact with, the simulated environment. Using head-mounted displays, controllers, and auditory feedback, virtual reality provides a convincing simulation of interactive virtual worlds (Wikipedia, "Virtual reality"). The many worlds of virtual reality are often expansive, colorful, and detailed. However, there is one great flaw among them, an emotion evoked in many users as they explore such worlds: loneliness.

The content in these worlds is impressive, immersive, and entertaining. Without other people to share in these experiences, however, one can find oneself lonely. Users discover that no matter how many objects and colors surround them in countless virtual worlds, every world feels empty. As humans are social beings by nature, they feel lost without a sense of human connection and interaction. Multiplayer experiences bring this missing element into the immersion of virtual reality worlds. Multiplayer offers users the opportunity to interact with other live people in a virtual simulation, which creates lasting memories and deeper, more meaningful immersion.

Contributors: Jorgensen, Nicholas Keith (Co-author) / Jorgensen, Caitlin Nicole (Co-author) / Selgrad, Justin (Thesis director) / Ehgner, Arnaud (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

This paper details the process of designing both a simulation of the board game Jaipur and an artificial intelligence (AI) agent that can play the game against a human player. When designing an AI for a card game, two major problems can arise. The first is the difficulty of using a search space to analyze every possible set of future moves: due to the randomized nature of the deck of cards, the search space leads to an exponentially growing set of potential game states to analyze when one tries to look more than one turn ahead. The second is the element of uncertainty that arises from opponent feedback: certain moves are weak to specific opponent reactions, which are difficult to predict due to hidden information. To circumvent these problems, the AI uses a greedy approach to decision making, attempting to maximize the value of its plays immediately rather than playing for future turns. The agent uses conditional statements to evaluate the game state and choose the action it deems optimal: a heuristic places an expected value (EV) on each of the goods it can choose from, and the agent selects the best option based on this evaluation. The initial implementation of the simulation was a C++ terminal application, which was then translated to a graphical interface using Unity and C#.
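
The greedy, one-turn-lookahead policy described above reduces to scoring each legal move with its expected value and picking the maximum. Here is a minimal C# sketch; the Move type and its fields are hypothetical and are not the thesis's actual code.

```csharp
using System.Collections.Generic;
using System.Linq;

// Greedy decision making: assign an expected value to each available move
// and take the best one immediately, ignoring future turns entirely.
public class GreedyAgent
{
    public class Move
    {
        public string Description;     // e.g. "take all camels", "sell 3 spice"
        public double ExpectedValue;   // EV assigned by the heuristic
    }

    // Returns the move with the highest expected value right now,
    // or null if no legal moves are available.
    public Move ChooseMove(IEnumerable<Move> legalMoves)
    {
        return legalMoves.OrderByDescending(m => m.ExpectedValue).FirstOrDefault();
    }
}
```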

Contributors: Orr, James Christopher (Author) / Kobayashi, Yoshihiro (Thesis director) / Selgrad, Justin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Natural Language Processing and Virtual Reality are currently hot topics. How can we synthesize them into a cohesive experience? The game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. To recognize vocal commands, the IBM Watson API for Natural Language Processing was incorporated into our game system. User experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures. Creating these elements was streamlined through the VRTK library in Unity. The game has two segments. The first is a tutorial level where the user learns to perform motions and in-game actions. The second is a game where the user must correctly create a structure using vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game, and a survey was also created to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
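
The System Usability Scale mentioned above is scored with a fixed, well-known formula: for the ten 1-5 responses, odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. The helper below implements that standard calculation; it is not the analysis script used in the thesis.

```csharp
using System;

// Standard System Usability Scale (SUS) scoring for one respondent.
public static class SusScore
{
    public static double Compute(int[] responses)
    {
        if (responses == null || responses.Length != 10)
            throw new ArgumentException("SUS requires exactly 10 responses on a 1-5 scale.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            bool oddItem = (i + 1) % 2 == 1;
            // Odd items: response - 1; even items: 5 - response.
            sum += oddItem ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5; // final score on a 0-100 scale
    }
}
```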

Contributors: Ortega, Excel (Co-author) / Ryan, Alexander (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computing and Informatics Program (Contributor) / School of Art (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

The instruction of students in computer science concepts can be enhanced by creating programmable simulations and games. ASU VIPLE, a framework used to control simulations and robots and to build IoT applications, can serve as an educational tool, and the Unity engine allows the creation of 2D and 3D games. Developing basic minigames in Unity provides simulations for students to program. Each Unity minigame can be run alongside a corresponding VIPLE script that controls it over a network connection or locally, and the minigames conform to the robot output and robot input interfaces supported by VIPLE. With this goal in mind, a snake game, a space shooter game, and a runner game have been created as Unity simulations that can be controlled by scripts made using VIPLE. These games provide simulated environments with movement outputs and sensor inputs that students can program simply and externally from VIPLE, helping them learn robotics and computer science principles.
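
The abstract does not describe VIPLE's wire format, so the sketch below assumes a simple newline-delimited text command (for example "forward" or "left") sent over TCP to the Unity minigame; the port number, command names, and class name are all hypothetical. It only illustrates how a minigame could expose a movement interface to an external controller such as a VIPLE script.

```csharp
using System.IO;
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Unity side of a network-controlled minigame object: accepts one TCP
// connection and applies simple text movement commands each frame.
public class NetworkControlledPlayer : MonoBehaviour
{
    public int Port = 1350;      // hypothetical port
    public float Speed = 2f;

    private TcpListener listener;
    private StreamReader reader;

    void Start()
    {
        listener = new TcpListener(IPAddress.Any, Port);
        listener.Start();
    }

    void Update()
    {
        // Accept a controller connection when one arrives.
        if (reader == null && listener.Pending())
        {
            TcpClient client = listener.AcceptTcpClient();
            reader = new StreamReader(client.GetStream());
        }

        // Apply any pending movement command this frame.
        if (reader != null && reader.BaseStream is NetworkStream ns && ns.DataAvailable)
        {
            string command = reader.ReadLine();
            switch (command)
            {
                case "forward": transform.Translate(Vector3.forward * Speed * Time.deltaTime); break;
                case "back":    transform.Translate(Vector3.back * Speed * Time.deltaTime);    break;
                case "left":    transform.Rotate(0f, -90f * Time.deltaTime, 0f);               break;
                case "right":   transform.Rotate(0f, 90f * Time.deltaTime, 0f);                break;
            }
        }
    }

    void OnDestroy()
    {
        reader?.Dispose();
        listener?.Stop();
    }
}
```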

Contributors: Christensen, Collin Riley (Author) / Chen, Yinong (Thesis director) / Kobayashi, Yoshihiro (Committee member) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05