Matching Items (29)
Description
The video game graphics pipeline has traditionally rendered the scene using a polygonal approach. Advances in modern graphics hardware now allow parametric, smooth surfaces to be rendered directly. This thesis explores various smooth surface rendering methods that can be integrated into a video game graphics engine. Moving from the polygonal domain to parametric or smooth surfaces has its share of issues, and various rendering bottlenecks that could hamper such a move must be addressed. The game engine needs to choose an appropriate method based on the in-game characteristics of its objects: characters and animated objects need more sophisticated methods, whereas static objects can use simpler techniques. Scaling the polygon count across hardware platforms also becomes an important factor. Considerable control over the tessellation levels, whether imposed by hardware limitations or by the application, is needed to render the mesh adaptively without significant loss in performance. This thesis explores several methods that help game engine developers make sound design choices by optimally balancing these trade-offs when rendering the scene using smooth surfaces. It proposes a novel technique for adaptive tessellation of triangular meshes that vastly improves speed and tessellation count, and it develops an approximate method for rendering Loop subdivision surfaces on tessellation-enabled hardware. A taxonomy and evaluation of the methods is provided, and a unified rendering system that provides automatic level of detail by switching between the methods is proposed.
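As a purely illustrative sketch (not the thesis's technique), one common way to control tessellation levels adaptively is to derive a per-edge tessellation factor from the projected screen-space edge length, clamped to the hardware limit, so that neighbouring triangles sharing an edge agree on its factor and cracks are avoided. The constants below are assumptions chosen for the example.

```python
# Illustrative sketch only; not code from the thesis. Per-edge tessellation
# factors are chosen from projected screen-space edge length and clamped to a
# typical hardware limit, so two triangles sharing an edge compute the same factor.
import math

MAX_TESS_LEVEL = 64          # typical hardware tessellation limit (assumed)
PIXELS_PER_SEGMENT = 8.0     # target edge length per tessellated segment (assumed)

def edge_tess_factor(p0, p1):
    """Tessellation factor for the edge between two screen-space points."""
    length = math.dist(p0, p1)                    # projected edge length in pixels
    level = length / PIXELS_PER_SEGMENT
    return max(1.0, min(MAX_TESS_LEVEL, level))   # clamp to the valid range

def triangle_tess_factors(a, b, c):
    """Per-edge factors plus an interior factor for one screen-space triangle."""
    e0 = edge_tess_factor(b, c)
    e1 = edge_tess_factor(c, a)
    e2 = edge_tess_factor(a, b)
    inside = (e0 + e1 + e2) / 3.0                 # simple average for the interior
    return (e0, e1, e2), inside

# Example: a large on-screen triangle gets finer tessellation than a small one.
print(triangle_tess_factors((0, 0), (400, 0), (0, 300)))
print(triangle_tess_factors((0, 0), (20, 0), (0, 15)))
```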
Contributors: Amresh, Ashish (Author) / Farin, Gerald (Thesis advisor) / Razdan, Anshuman (Thesis advisor) / Wonka, Peter (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Night vision goggles (NVGs) are widely used by helicopter pilots for flight missions at night, but the equipment can present visually confusing images, especially in urban areas. A simulation tool with realistic nighttime urban images would help pilots practice and train for flight with NVGs. However, there is a lack of tools for visualizing urban areas at night, mainly due to difficulties in gathering light system data, placing light systems at suitable locations, and rendering millions of lights with complex light intensity distributions (LID). Unlike daytime images, a city at night can have millions of light sources, including street lights, illuminated signs, and light shed from building interiors through windows. In this paper, a Procedural Lighting tool (PL), which predicts the positions and properties of street lights, is presented. The PL tool is used to accomplish three aims: (1) to generate vector data layers for geographic information systems (GIS) with statistically estimated information on lighting designs for streets, as well as the locations, orientations, and models of millions of streetlights; (2) to generate geo-referenced raster data suitable for use as light maps covering a large-scale urban area, so that the effect of millions of street lights can be accurately rendered in real time; and (3) to extend existing 3D models by generating detailed light maps that can be used as UV-mapped textures when rendering the model. An interactive graphical user interface (GUI) for configuring and previewing lights from a Light System Database (LDB) is also presented. The GUI includes physically accurate information about LID as well as the lights' spectral power distributions (SPDs), so that a light map can be generated for use with any sensor whose luminosity function is known. Finally, for areas where more detail is required, a tool has been developed for editing and visualizing light effects on a 3D building from many light sources, including area lights and windows. These components are integrated in the PL tool to produce a nighttime urban view not only of a large-scale area but also of an individual city building in detail.
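As an illustrative sketch only (not the Procedural Lighting tool itself), the first aim can be pictured as statistically placing streetlights along a street centerline at a fixed spacing and curb offset, producing GIS-style point records; the spacing, offset, and light-model name below are assumed values for the example.

```python
# Illustrative sketch only; not the thesis's PL tool. Streetlight positions and
# orientations are estimated along a 2D street centerline at a fixed spacing,
# offset toward the curb, and emitted as simple GIS-style point records.
import math

def place_street_lights(centerline, spacing=30.0, offset=4.0, model="cobra_head"):
    """Return point records (x, y, heading_deg, model) along a 2D polyline."""
    lights, carry = [], 0.0
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        if seg_len == 0.0:
            continue
        ux, uy = (x1 - x0) / seg_len, (y1 - y0) / seg_len   # direction along the street
        nx, ny = -uy, ux                                     # left-hand normal (curb side)
        d = spacing - carry
        while d <= seg_len:
            x, y = x0 + ux * d + nx * offset, y0 + uy * d + ny * offset
            heading = math.degrees(math.atan2(uy, ux))
            lights.append({"x": x, "y": y, "heading": heading, "model": model})
            d += spacing
        carry = seg_len - (d - spacing)   # distance already covered since the last light
    return lights

# Example: lights every 30 m along an L-shaped street, offset 4 m from the centerline.
for rec in place_street_lights([(0, 0), (100, 0), (100, 80)]):
    print(rec)
```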
Contributors: Chuang, Chia-Yuan (Author) / Femiani, John (Thesis advisor) / Razdan, Anshuman (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Distant is a Game Design Document describing an original game by the same name. The game was designed around the principle of core aesthetics, where the user experience is defined first and then the game is built from that experience. Distant is an action-exploration game set on a huge megastructure floating in the atmosphere of Saturn. Players take on the role of HUE, an artificial intelligence trapped in the body of a maintenance robot, as he explores this strange world and uncovers its secrets. Using acrobatic movement abilities, players will solve puzzles, evade enemies, and explore the world from top to bottom. The world, known as the Strobilus Megastructure, is conical in shape, with living quarters and environmental systems in the upper sections and factories and resource mining in the lower sections. The game world is split into 10 major areas and countless minor and connecting areas. Special movement abilities like wall running and anti-gravity allow players to progress further down in the world. These abilities also allow players to solve more complicated puzzles and to find more difficult-to-reach items. The story revolves around six artificial intelligences that were created to maintain the station. Many centuries ago, these AI helped humankind maintain their day-to-day lives and helped researchers working on new scientific breakthroughs. This led to the discovery of faster-than-light travel, and humanity left the station and our solar system to explore the cosmos. HUE, the AI in charge of human relations, fell into depression and shut down. Awakening several hundred years in the future, HUE sets out to find the other AI. Along the way he helps them reconnect and discovers the history and secrets of the station. Distant is intended for players looking for three things: a fantastic world full of discovery, a rich, character-driven narrative, and challenging acrobatic gameplay. Players of any age or background are recommended to give it a try, but it will require investment and a willingness to improve. Distant is intended to change players, to force them to confront difficulty and different perspectives. Most games involve upgrading a character; Distant is a game that upgrades the player.
Contributors: Garttmeier, Colin Reiser (Author) / Collins, Daniel (Thesis director) / Amresh, Ashish (Committee member) / School of Arts, Media and Engineering (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The purpose of the Oculus Exercise research project we conducted was to find a way to entice individuals to attend a gym more often and for longer periods of time. We found that many activities are being augmented by increasingly popular virtual reality technology, and within that space "gamifying" an activity seems to attract more users. Given the idea of making activities more entertaining through "gamification", we decided to incorporate virtual reality, using the Oculus Rift, to immerse users within a simulated environment and potentially drive the factors previously identified with respect to gym utilization. To start, we surveyed potential users to gauge interest in virtual reality and its use in physical exercise. Based on the initial responses, we saw that there was definite interest in "gamifying" physical exercise using virtual reality, and proceeded to design a prototype using Unreal Engine 4 -- an engine for creating high-quality video games with support for virtual reality -- to experiment with how it would affect a standard workout routine. After considering several options, we decided to design our prototype to augment a spin machine with virtual reality because of its common use within a gym, the consistent cardiovascular exercise it entails, and the safety intrinsic to a mostly stationary device. By analyzing the results of a survey administered after testing with a user group, we can begin to assess the benefits and drawbacks of using virtual reality in physical exercise, and the feasibility of doing so.
Contributors: Carney, Nicholas (Co-author) / West, Andrew (Co-author) / Dobkins, Jacob (Co-author) / Amresh, Ashish (Thesis director) / Gray, Robert (Committee member) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Young adults do not know basic emergency preparedness skills. Although materials exist, such as printed and online resources from the Centers for Disease Control, it is unlikely that college-age people will take the time to read them. Some individuals have addressed young adults' reluctance to read such materials by creating a fun interactive game in the San Francisco area, but since that game must be played in person, a solution like it can only reach so far. Studies suggest that virtual worlds are effective in teaching people new skills, so I have created a virtual world that teaches basic emergency preparedness skills in a way that is memorable and appealing to a college-age audience. The logic used to teach players the concepts of emergency preparedness is case-based reasoning, the process of solving new problems by recalling similar solutions from the past. By experiencing a simulated emergency situation in a virtual world, young adults are more likely to know what to do in an actual emergency.
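As a minimal, hypothetical sketch of case-based reasoning (not the project's actual game logic), a new emergency situation can be matched against remembered cases by feature overlap, and the closest case's solution reused; the cases and features below are invented for illustration.

```python
# Minimal sketch of case-based reasoning (CBR) retrieval; illustrative only,
# not the logic used in the thesis project. A new situation is matched against
# remembered cases by feature overlap, and the closest case's solution is reused.
cases = [
    {"features": {"earthquake", "indoors"},     "solution": "Drop, cover, and hold on."},
    {"features": {"fire", "indoors", "smoke"},  "solution": "Stay low and evacuate."},
    {"features": {"flood", "outdoors"},         "solution": "Move to higher ground."},
]

def similarity(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def retrieve(new_situation):
    """Return the stored case most similar to the new situation."""
    return max(cases, key=lambda case: similarity(case["features"], new_situation))

# Example: a novel situation is solved by recalling the most similar past case.
best = retrieve({"earthquake", "indoors", "night"})
print(best["solution"])   # -> "Drop, cover, and hold on."
```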
Contributors: Teplik, Julie Rachel (Author) / Craig, Scotty (Thesis director) / Amresh, Ashish (Committee member) / WPC Graduate Programs (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The 2010s have seen video games rise to prominence as platforms for game developers, entertainers and advertisers to broadcast their ideas. This paper looks at the major steps in gaming history that led to games becoming a global mass communication tool, the way the Internet has created an industry built around broadcasting games, and the potential future ramifications that competitive gaming, emerging technology and intellectual property law hold for the world of video games.
Contributors: Chesler, Jayson Daniel (Author) / Hill, Retha (Thesis director) / Amresh, Ashish (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Keyboard input biometric authentication systems are software systems that record keystroke information and use it to identify a typist. The primary statistics used to determine the accuracy of such a system are the false acceptance rate (FAR) and false rejection rate (FRR), both of which should be as low as possible [1]. However, even if a system has a low FAR and FRR, nothing stops an attacker from monitoring an individual's typing habits in the same way a legitimate authentication system would, and then using that knowledge to recreate virtual keyboard events that type arbitrary text with precise timing mimicking those habits, theoretically spoofing a legitimate keyboard biometric authentication system into believing the intended user is typing. A proof of concept of this attack, called keyboard input biometric authentication spoofing, is the focus of this paper; its purpose is to show that even a reasonably accurate biometric authentication system, with a low FAR and FRR, can still be very vulnerable to a well-crafted spoofing system. A rudimentary keyboard input biometric authentication system was written in C and C++, drawing on existing methods and attempting new methods of authentication as well. A spoofing system was then built that exploited the authentication system's statistical representation of a user's typing habits to recreate keyboard events as described above. This proof of concept is aimed at raising doubts about relying too heavily on keyboard input based biometric authentication, since a user's typing can demonstrably be spoofed in this way if an attacker has full access to the system, even if the system itself is accurate. The authentication system built for this study, when run on a database of typing event logs recorded from 15 users in 4 sessions, had a 0% FAR and FRR (a more detailed analysis of FAR and FRR is also presented), yet it was still very susceptible to being spoofed, with a 44% to 71% spoofing success rate in some instances.
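The thesis systems were written in C and C++; as an illustrative sketch only (not their code), the core statistical idea can be shown in a few lines of Python: learn per-digraph timing from recorded keystrokes, then sample from those distributions to synthesize key events whose timing mimics the observed habits. The log data and constants below are invented for the example.

```python
# Illustrative sketch only; not the systems built for the thesis. Per-key-pair
# (digraph) timing statistics are learned from observed keystroke logs, then
# sampled to generate synthetic key events with mimicked timing.
import random
from collections import defaultdict
from statistics import mean, stdev

def learn_digraph_timing(keystroke_log):
    """keystroke_log: list of (key, press_time_ms). Returns {(k1, k2): (mean, std)}."""
    samples = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(keystroke_log, keystroke_log[1:]):
        samples[(k1, k2)].append(t2 - t1)
    return {pair: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for pair, v in samples.items()}

def spoof_events(text, model, fallback=(180.0, 40.0)):
    """Generate (key, press_time_ms) events for `text` with mimicked timing."""
    events, t = [(text[0], 0.0)], 0.0
    for k1, k2 in zip(text, text[1:]):
        mu, sigma = model.get((k1, k2), fallback)
        t += max(10.0, random.gauss(mu, sigma))   # avoid implausibly fast gaps
        events.append((k2, t))
    return events

# Example: learn from a recorded session, then synthesize events for new text.
log = [("t", 0), ("h", 140), ("e", 260), ("t", 500), ("h", 645), ("e", 760)]
model = learn_digraph_timing(log)
print(spoof_events("the", model))
```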
Contributors: Johnson, Peter Thomas (Author) / Nelson, Brian (Thesis director) / Amresh, Ashish (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper will explore what makes ‘good’ virtual reality, that is, what constitutes the virtual reality threshold. It will explain what this has to do with the temporary death of virtual reality, and argue that that threshold has now been crossed and true virtual reality is now possible, as evidenced by the current wave of virtual reality catalyzed by the Oculus Rift. The Rift will be used as a case study for examining specific aspects of the virtual reality threshold.
Contributors: Little, Rebecca Ann (Author) / Amresh, Ashish (Thesis director) / Ghazarian, Arbi (Committee member) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
For this master's thesis, an open learner model is integrated with Quinn, a teachable robotic agent developed at Arizona State University. The model is presented as a feedback system that aims to improve a student's understanding of a subject and to reveal the effect of a learner model when it is represented by the performance of the teachable agent rather than that of the student. Because the agent's performance reflects what the student has taught it, the data in the feedback system are updated according to the student's understanding of the subject, giving students an opportunity to enhance their understanding by analyzing the agent's performance. To test the effectiveness of the feedback system, student understanding is analyzed under two conditions: in the first, no feedback report is provided to the students; in the second, the feedback report is provided in the form of the agent's performance.
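As a hypothetical sketch (not the system integrated with Quinn), an open learner model of this kind can be reduced to tracking the teachable agent's per-skill performance as the student teaches it and reporting that performance back to the student; the skill names and report format below are assumptions for illustration.

```python
# Illustrative sketch only; not the feedback system built for the thesis. The
# teachable agent's per-skill performance (not the student's) is updated as the
# student teaches it, and a feedback report shows that performance to the student.
class TeachableAgentModel:
    def __init__(self, skills):
        # Correct answers and attempts per skill, based on what the agent was taught.
        self.records = {skill: {"correct": 0, "attempts": 0} for skill in skills}

    def record_attempt(self, skill, agent_answered_correctly):
        rec = self.records[skill]
        rec["attempts"] += 1
        rec["correct"] += int(agent_answered_correctly)

    def feedback_report(self):
        """Open learner model: show the agent's performance per skill."""
        lines = []
        for skill, rec in self.records.items():
            pct = 100.0 * rec["correct"] / rec["attempts"] if rec["attempts"] else 0.0
            lines.append(f"{skill}: agent answered {pct:.0f}% correctly "
                         f"({rec['correct']}/{rec['attempts']})")
        return "\n".join(lines)

# Example: the agent's mistakes point the student at gaps in their own teaching.
model = TeachableAgentModel(["fractions", "ratios"])
model.record_attempt("fractions", True)
model.record_attempt("fractions", False)
model.record_attempt("ratios", True)
print(model.feedback_report())
```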
Contributors: Upadhyay, Abha (Author) / Walker, Erin (Thesis advisor) / Nelson, Brian (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In 1997, the American Heart Association recommended the data elements that should be collected from resuscitations in hospitals (15). Currently, data documentation for in-hospital resuscitation events, termed 'code blue' events, uses an institution-specific paper form. Problems with data capture and transcription exist because of the challenge of dynamically documenting patient, event, and outcome variables as the code blue event unfolds.

This thesis is based on the hypothesis that an electronic version of real-time code blue data capture would improve resuscitation data transcription and enable clinicians to address deficiencies in quality of care. The primary goal of this thesis is to create an iOS-based application, designed primarily for iPads, for code blue events at the Mayo Clinic Hospital. The secondary goal is to build an open-source software development framework for converting paper-based hospital protocols into digital format.

The tool created in this study enabled resuscitation outcome data to be documented electronically rather than on paper. Its usability was evaluated with twenty nurses, the end users, at Mayo Clinic in Phoenix, Arizona. The results showed that users preferred the iPad application. Furthermore, a qualitative survey showed that clinicians perceived the electronic version to be more accurate and efficient than paper-based documentation, both of which are essential for an emergency code blue resuscitation procedure.
Contributors: Bokhari, Wasif (Author) / Patel, Vimla L. (Thesis advisor) / Amresh, Ashish (Thesis advisor) / Nelson, Brian (Committee member) / Sen, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2015