Matching Items (74)
Description
Speech recognition is rarely seen in games. This work presents "The Emblems," a 2D computer game that uses speech recognition as its input. The game itself is a two-person strategy game whose goal is to defeat the opposing player's army. This report focuses on the speech-recognition aspect of the project. The players interact on a turn-by-turn basis by speaking commands into the computer's microphone; when the computer recognizes a command, it responds by having the player's unit perform the corresponding action on screen.
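As a hedged illustration of the command-dispatch pattern the abstract describes (recognized phrase mapped to a unit action), here is a minimal Python sketch; the command vocabulary and unit methods are hypothetical placeholders, not taken from the thesis.

```python
# Minimal sketch: dispatch a recognized speech command to a unit action.
# Command names and Unit methods are hypothetical placeholders.

class Unit:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        print(f"{self.name} moves to ({self.x}, {self.y})")

    def attack(self):
        print(f"{self.name} attacks!")

COMMANDS = {
    "move up":   lambda u: u.move(0, 1),
    "move down": lambda u: u.move(0, -1),
    "attack":    lambda u: u.attack(),
}

def handle_recognized_text(text, selected_unit):
    """Look up the recognized phrase and apply the matching action to the selected unit."""
    action = COMMANDS.get(text.lower().strip())
    if action is None:
        print(f"Unrecognized command: {text!r}")
    else:
        action(selected_unit)

if __name__ == "__main__":
    knight = Unit("Knight", 2, 3)
    handle_recognized_text("Move Up", knight)
    handle_recognized_text("attack", knight)
```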
Contributors: Nguyen, Jordan Ngoc (Author) / Kobayashi, Yoshihiro (Thesis director) / Maciejewski, Ross (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05
Description
Methane (CH4) is important in the environment both as a greenhouse gas and in the degradation of organic matter. During the last 200 years, the atmospheric concentration of CH4 has tripled. Methanogens are methane-producing microbes from the Archaea domain that complete the final step in breaking down organic matter to generate methane through a process called methanogenesis. They contribute about 74% of the CH4 present in the Earth's atmosphere, producing 1 billion tons of methane annually. The purpose of this work is to generate preliminary metabolic reconstruction models of two methanogens: Methanoregula boonei 6A8 and Methanosphaerula palustris E1-9c. M. boonei and M. palustris belong to the order Methanomicrobiales and perform hydrogenotrophic methanogenesis, meaning they reduce CO2 to CH4 using H2 as their major electron donor. Metabolic models are frameworks for understanding a cell as a system, and they provide the means to assess changes in gene regulation in response to various environmental and physiological constraints. The Pathway-Tools software v16 was used to generate these draft models. The models were manually curated using literature searches, the KEGG database, and homology methods with the Methanosarcina acetivorans strain, the closest methanogen strain with a nearly complete metabolic reconstruction. These preliminary models attempt to complete the pathways required for amino acid biosynthesis, methanogenesis, and the major cofactors related to methanogenesis. The M. boonei reconstruction currently includes 99 pathways and has 82% of its reactions completed, while the M. palustris reconstruction includes 102 pathways and has 89% of its reactions completed.
Contributors: Mahendra, Divya (Author) / Cadillo-Quiroz, Hinsby (Thesis director) / Wang, Xuan (Committee member) / Stout, Valerie (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / School of Life Sciences (Contributor) / Biomedical Informatics Program (Contributor)
Created: 2014-05
Description
The project "The Emblems: OpenGL" is a 2D strategy game that incorporates speech recognition for control and OpenGL for computer graphics. Players control their own armies through voice commands and try to eliminate the opponent's army. This report focuses on the 2D art and visual aspects of the project. There are different sprites for the player's army units and icons within the game. The game also has a grid for easy unit placement.
Contributors: Hsia, Allen (Author) / Kobayashi, Yoshihiro (Thesis director) / Maciejewski, Ross (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05
Description
Valley Fever, also known as coccidioidomycosis, is a respiratory disease that affects 10,000 people annually, primarily in Arizona and California. Due to a lack of gene annotation, diagnosis and treatment of Valley Fever are severely limited. In turn, gene annotation efforts are also hampered by incomplete genome sequencing. We intend to use proteogenomic analysis to reannotate the Coccidioides posadasii str. Silveira genome from protein-level data. Protein samples extracted from both phases of Silveira were fragmented into peptides, sequenced, and compared against databases of known and predicted protein sequences, as well as a de novo six-frame translation of the genome. A total of 288 unique peptides were located that did not match a known Silveira annotation, and of those, 169 were associated with another Coccidioides strain. Additionally, 17 peptides were found at the boundary of, or outside of, the current gene annotation, comprising four distinct clusters. For one of these clusters, we were able to calculate a lower bound and an estimate for the size of the gap between two Silveira contigs using the Coccidioides immitis RS transcript associated with that cluster's peptides; these predictions were consistent with the current annotation's scaffold structure. Three peptides were associated with an actively translated transposon, and a putative active site was located within an intact LTR retrotransposon. We note that gene annotation is necessarily hindered by the quality and level of detail in prior genome sequencing efforts, and we recommend that future studies involving reannotation include additional sequencing as well as gene annotation via proteogenomics or other methods.
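For readers unfamiliar with the de novo six-frame translation step mentioned above, here is a minimal Python sketch, assuming a standard codon table (only a subset is shown); it is illustrative only and is not the pipeline used in the thesis.

```python
# Sketch: six-frame translation of a DNA sequence.
# Only a subset of the standard codon table is included; unknown codons become 'X'.
# Illustrative only; real proteogenomic pipelines use full codon tables and ORF logic.

CODON_TABLE = {
    "TTT": "F", "TTC": "F", "TTA": "L", "TTG": "L",
    "ATG": "M", "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "TAA": "*", "TAG": "*", "TGA": "*",
    # remaining codons omitted for brevity
}

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def translate(seq):
    """Translate one reading frame, using 'X' for codons not in the subset table."""
    return "".join(CODON_TABLE.get(seq[i:i + 3], "X")
                   for i in range(0, len(seq) - 2, 3))

def six_frame_translation(dna):
    """Return the six translations: three forward frames plus three reverse-complement frames."""
    rc = dna.translate(COMPLEMENT)[::-1]
    return ([translate(dna[offset:]) for offset in range(3)] +
            [translate(rc[offset:]) for offset in range(3)])

if __name__ == "__main__":
    for i, protein in enumerate(six_frame_translation("ATGGCTTTAGCATGA"), start=1):
        print(f"frame {i}: {protein}")
```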
Contributors: Sherrard, Andrew (Author) / Lake, Douglas (Thesis director) / Grys, Thomas (Committee member) / Mitchell, Natalie (Committee member) / Computing and Informatics Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The objective of this creative project was to gain experience in digital modeling, animation, coding, shader development and implementation, model integration techniques, and the application of gaming principles and design through developing a professional educational game. The team collaborated with Glendale Community College (GCC) to produce an interactive product intended to supplement educational instruction regarding nutrition. The educational game developed, "Nutribots," features the player acting as a nutrition-based nanobot sent to the small intestine to help the body. Throughout the game the player is asked nutrition-based questions to test their knowledge of proteins, carbohydrates, and lipids. If the player is unable to answer a question, they must use the game mechanics to progress and receive the information as a reward. The level is completed as soon as the question is answered correctly. If the player answers questions incorrectly twenty times over the course of the game, the team loses faith in the player, and the player must restart from the title screen. This limits guessing and ensures that the player retains the information through repetition once it is demonstrated that they do not know the answers. The team was split into two groups for the development of this game. The first group developed models, animations, and textures using Autodesk Maya 2016 and Marvelous Designer. The second group developed code and shaders and implemented products from the first group using Unity and Visual Studio. Once a prototype of the game was developed, it was showcased among peers to gain feedback. Upon receiving feedback, the team implemented the desired changes accordingly. Development for this project began in November 2015 and ended in April 2017. Special thanks to Laura Avila, Department Chair, and Jennifer Nolz from the Glendale Community College Technology and Consumer Sciences, Food and Nutrition Department.
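As a small sketch of the game-wide wrong-answer limit described above (twenty incorrect answers trigger a reset), here is a Python illustration; the class and method names are hypothetical, and the actual project was implemented in Unity with C#.

```python
# Sketch of the game-wide wrong-answer limit: after 20 incorrect answers the run resets.
# Names are hypothetical; the actual project was implemented in Unity/C#.

MAX_WRONG_ANSWERS = 20

class QuizTracker:
    def __init__(self):
        self.wrong_answers = 0

    def submit_answer(self, given, correct):
        """Return 'correct', 'retry', or 'reset' based on the running wrong-answer count."""
        if given == correct:
            return "correct"          # the level is completed once answered correctly
        self.wrong_answers += 1
        if self.wrong_answers >= MAX_WRONG_ANSWERS:
            self.wrong_answers = 0    # player must restart from the title screen
            return "reset"
        return "retry"

if __name__ == "__main__":
    tracker = QuizTracker()
    print(tracker.submit_answer("lipids", "proteins"))    # retry
    print(tracker.submit_answer("proteins", "proteins"))  # correct
```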
Contributors: Nolz, Daisy (Co-author) / Martin, Austin (Co-author) / Quinio, Santiago (Co-author) / Armstrong, Jessica (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Valderrama, Jamie (Committee member) / School of Arts, Media and Engineering (Contributor) / School of Film, Dance and Theatre (Contributor) / Department of English (Contributor) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Herberger Institute for Design and the Arts (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Learning to program is no easy task, and many students have their first programming experience during their university education. Unfortunately, programming classes have a large number of students enrolled, so it is nearly impossible for professors to engage with students at an individual level and provide the personal attention each student needs. This project aims to provide professors with a tool for quickly gauging and responding to students' current understanding. This web-based application lets professors quickly pose Java programming questions and view aggregate data on how many of the students have successfully completed the assigned questions. With this system, students are provided with extra programming practice in a controlled environment, and if there is an error in their program, the system provides feedback describing what the error means and what steps the student can take to fix it.
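A hedged sketch of the error-feedback idea described above: take a raw Java compiler message and translate it into a friendlier hint for the student. The error patterns and hint texts below are hypothetical examples, not the tool's actual rules.

```python
# Sketch: translate raw Java compiler errors into student-friendly feedback.
# Error patterns and hints are illustrative only, not the tool's actual rules.

import re

FEEDBACK_RULES = [
    (r"';' expected",
     "You are missing a semicolon at the end of a statement."),
    (r"cannot find symbol",
     "You are using a variable or method that has not been declared; check its spelling."),
    (r"incompatible types",
     "A value is being assigned to a variable of a different type; check your declarations."),
]

def explain_error(compiler_output):
    """Return a friendly hint for the first recognized error, or a generic fallback."""
    for pattern, hint in FEEDBACK_RULES:
        if re.search(pattern, compiler_output):
            return hint
    return "The compiler reported an error; read the message carefully and check the indicated line."

if __name__ == "__main__":
    sample = "Main.java:4: error: ';' expected\n        int x = 5\n                 ^"
    print(explain_error(sample))
```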
Contributors: Villela, Daniel Linus (Author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Hsiao, Sharon (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The LeapMax Gestural Interaction System is a project that utilizes the Leap Motion controller and the visual programming language Max to extract complex and accurate skeletal hand-tracking data from a performer in a global 3-D context. The goal of this project was to develop a simple and efficient architecture for designing dynamic and compelling digital gestural interfaces. At the core of this work is a Max external object which uses a custom API to extract data from the Leap Motion service and retrieve it in Max. From this data, a library of Max objects for determining more complex gesture and posture information was generated and refined. These objects are highly flexible and modular and can be used to create complex control schemes for a variety of systems. To demonstrate the use of this system in a performance context, an experimental musical instrument was designed in which the Leap is combined with an absolute orientation sensor and mounted on the head of a performer. This setup leverages the head-mounted Leap Motion paradigm used in VR systems to construct an interactive sonic environment within the user's physical surroundings. The user's gestures are mapped to the controls of a synthesis engine which utilizes several forms of synthesis, including granular synthesis, frequency modulation, and delay modulation.
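As a rough illustration of the gesture-to-synthesis mapping idea, here is a Python sketch that scales normalized hand-tracking features into synthesis parameters; the feature names, parameter names, and ranges are hypothetical, and the actual mapping in the thesis lives inside Max.

```python
# Sketch: map normalized hand-tracking values to synthesis parameters.
# Feature names, parameter names, and ranges are hypothetical; the real mapping is built in Max.

def scale(value, out_min, out_max):
    """Linearly map a value in [0, 1] to [out_min, out_max], clamping out-of-range input."""
    value = max(0.0, min(1.0, value))
    return out_min + value * (out_max - out_min)

def map_hand_to_synth(palm_height, pinch_strength, palm_roll):
    """Turn three normalized hand features into granular-synthesis and delay controls."""
    return {
        "grain_pitch_hz":  scale(palm_height, 110.0, 880.0),
        "grain_density":   scale(pinch_strength, 1.0, 50.0),
        "delay_mod_depth": scale(palm_roll, 0.0, 1.0),
    }

if __name__ == "__main__":
    print(map_hand_to_synth(palm_height=0.75, pinch_strength=0.2, palm_roll=0.5))
```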
Contributors: Jones, George Cooper (Author) / Hayes, Lauren (Thesis director) / Lahey, Byron (Committee member) / School of Arts, Media and Engineering (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12
Description
Natural Language Processing and Virtual Reality are currently hot topics. How can we synthesize the two into a cohesive experience? The game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. To obtain proper vocal commands, the IBM Watson API for Natural Language Processing was incorporated into our game system. User-experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures. The process of creating these elements was streamlined through the VRTK library in Unity. The game has two segments. The first segment is a tutorial level where the user learns to perform motions and in-game actions. The second segment is a game where the user must correctly create a structure by utilizing vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game. A survey was also created to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
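To illustrate the downstream step of turning a recognized utterance into an in-game build instruction, here is a small Python sketch; the block shapes, colors, and parsing rules are hypothetical, and the project itself relied on the IBM Watson API for the language-understanding step rather than this kind of keyword matching.

```python
# Sketch: turn a recognized utterance into a build instruction for the VR scene.
# Shapes, colors, and parsing rules are hypothetical; the project used IBM Watson for NLP.

SHAPES = {"cube", "sphere", "pyramid"}
COLORS = {"red", "blue", "green", "yellow"}

def parse_build_command(utterance):
    """Extract a (color, shape) pair from a recognized phrase such as 'place the red cube'."""
    words = utterance.lower().split()
    color = next((w for w in words if w in COLORS), None)
    shape = next((w for w in words if w in SHAPES), None)
    if shape is None:
        return None  # nothing buildable was recognized
    return {"action": "place", "color": color or "default", "shape": shape}

if __name__ == "__main__":
    print(parse_build_command("Place the red cube next to the sphere"))
    print(parse_build_command("Jump around"))  # -> None
```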
Contributors: Ortega, Excel (Co-author) / Ryan, Alexander (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computing and Informatics Program (Contributor) / School of Art (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This study observes two fanfiction speech communities, Danny Phantom and Detective Conan. The members of these communities write stories based upon the canon within these two animated cartoons and interact with one another through reviews, author's notes, and story summaries. Using the speech community model, each community's unique practices and communicative repertoire are identified and analyzed. Both of these fandoms show similarities with the overarching general fanfiction speech community, but they also possess key differences that define them as their own separate communities. Fan jargon is used frequently in author's notes, reviews, and summaries to indicate fan expertise and membership within the fandom, as well as to exclude newcomers from understanding the information. This jargon remains largely the same across languages, and using it properly is important to being considered a true fan. Furthermore, many stories share similar elements that are not present within the source material, indicating that the fandoms possess a shared communicative repertoire. Review practices also show strong cultural norms that demand that reviewers offer praise and encouragement to the writers. Most criticism is phrased extremely kindly to avoid breaking cultural norms. Those who do not follow these cultural norms are shunned by the community and required to apologize to maintain proper fan membership. Fan hierarchy is also examined, including the ways that big name fans and reviewers exert centripetal and centrifugal forces upon the language, simultaneously pushing it towards standardization and variation. Authors also use many face-saving techniques to demonstrate their own lack of knowledge within the community, especially if they are new or inexperienced. The members of these communities share a deep cultural connection that is strengthened by their practices and repertoires.
Contributors: Dial, Ashtyn Nicole (Author) / Friedrich, Patricia (Thesis director) / O'Connor, Brendan (Committee member) / Computing and Informatics Program (Contributor) / Department of English (Contributor) / School of Music (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
One of the core components of many video games is their artificial intelligence. Through AI, a game can tell stories, generate challenges, and create encounters for the player to overcome. Even though AI research has continued to advance through neural networks and machine learning, game AI instead tends to implement a series of states or decisions to give the illusion of intelligence. Despite this limitation, games can still generate a wide range of experiences for the player. The Hybrid Game AI Framework is an AI system that combines the benefits of two commonly used approaches to developing game AI: Behavior Trees and Finite State Machines. Developed in the Unity Game Engine with the C# programming language, this AI framework represents the research that went into studying modern approaches to game AI and my own attempt at implementing the techniques learned. Object-oriented programming concepts such as inheritance, abstraction, and low coupling are utilized with the intent of creating game AI that is easy to implement and expand upon. The final goal was to create a flexible yet structured AI data structure while also minimizing drawbacks by combining Behavior Trees and Finite State Machines.
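A minimal sketch of the general idea of embedding a finite state machine inside a behavior-tree leaf follows; the node and state names are hypothetical, and the actual framework described in the thesis is written in C# for Unity, so this Python version only illustrates the combination of the two techniques.

```python
# Sketch: a behavior-tree selector whose leaf wraps a small finite state machine.
# Node and state names are hypothetical; the actual framework is written in C# for Unity.

class Node:
    def tick(self, agent):
        raise NotImplementedError

class Selector(Node):
    """Composite node: run children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self, agent):
        return any(child.tick(agent) for child in self.children)

class Condition(Node):
    """Leaf node that succeeds when its predicate is true."""
    def __init__(self, predicate):
        self.predicate = predicate

    def tick(self, agent):
        return self.predicate(agent)

class FSMLeaf(Node):
    """Leaf node that advances a simple patrol/chase state machine each tick."""
    def tick(self, agent):
        if agent["state"] == "patrol" and agent["enemy_visible"]:
            agent["state"] = "chase"
        elif agent["state"] == "chase" and not agent["enemy_visible"]:
            agent["state"] = "patrol"
        print(f"agent is now in state: {agent['state']}")
        return True

if __name__ == "__main__":
    tree = Selector(Condition(lambda a: a["health"] <= 0), FSMLeaf())
    guard = {"health": 10, "enemy_visible": True, "state": "patrol"}
    tree.tick(guard)   # enemy visible -> switches to chase
    guard["enemy_visible"] = False
    tree.tick(guard)   # enemy gone -> back to patrol
```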
Contributors: Ramirez Cordero, Erick Alberto (Author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05