Matching Items (19)
Description
Visualizations are an integral component for communicating and evaluating modern networks. As data becomes more complex, infographics require a balance between visual noise and effective storytelling that is often restricted by layouts unsuitable for scalability. The challenge then rests upon researchers to structure their information in a way that allows for flexible, transparent illustration. We propose network graphing as an operative alternative to traditional charts, which cannot look past numeric data, for demonstrating community behavior. In this paper, we explore methods for manipulating, processing, cleaning, and aggregating data in Python, a programming language well suited to handling structured data, which can then be formatted for analysis and modeling of social network tendencies in Gephi. We apply the Fruchterman-Reingold force-directed layout algorithm to datasets of Arizona State University’s research and collaboration network. The result is a visualization that analyzes the university’s infrastructure by providing insight into community behaviors between colleges. Furthermore, we highlight how the flexibility of this visualization provides a foundation for specific use cases by demonstrating centrality measures to find important liaisons that connect distant communities.
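
As an illustrative sketch of the pipeline this abstract describes (not the thesis's actual code), the Python snippet below builds a graph with networkx, whose spring_layout is an implementation of the Fruchterman-Reingold force-directed algorithm, computes betweenness centrality to surface liaison nodes, and exports to GEXF for styling in Gephi. The file name and column names are assumptions.

```python
import networkx as nx
import pandas as pd

# Hypothetical edge list of collaboration ties; the "source"/"target"
# columns are an assumed schema, not the thesis's dataset.
edges = pd.read_csv("collaborations.csv")
G = nx.from_pandas_edgelist(edges, "source", "target")

# networkx's spring_layout implements the Fruchterman-Reingold
# force-directed layout used in the paper.
pos = nx.spring_layout(G, iterations=100, seed=42)
print(f"Laid out {len(pos)} nodes")

# Betweenness centrality flags "liaison" nodes that bridge
# otherwise-distant communities.
liaisons = nx.betweenness_centrality(G)
for node, score in sorted(liaisons.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{node}: {score:.3f}")

# Export so the graph can be opened and styled in Gephi.
nx.write_gexf(G, "collaboration_network.gexf")
```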
Contributors: McMichael, Jacob Andrew (Author) / LiKamWa, Robert (Thesis director) / Anderson, Derrick (Committee member) / Goshert, Maxwell (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Emerging technologies, such as augmented reality (AR), are growing in popularity and accessibility at a fast pace. Developers are building more and more games and applications with this technology, but few have stopped to think about what the best practices are for creating a good user experience (UX). Currently, there are no universally accepted human-computer interaction guidelines for augmented reality because it is still relatively new. This paper examines three features - virtual content scale, indirect selection, and virtual buttons - in an attempt to discover their impact on the user experience in augmented reality. A Battleship game was developed using the Unity game engine with Vuforia, an augmented reality platform, and built as an iOS application to test these features. The hypothesis was that both virtual content scale and indirect selection would result in a more enjoyable and engaging user experience, whereas virtual buttons would be too confusing for users to fully appreciate the feature. Usability testing was conducted to gauge participants' responses to these features. After playing a base version of the game with no additional features and then a second version with one of the three features, participants rated their experiences and provided feedback in a four-part survey. It was observed during testing that people did not inherently move their devices around the augmented space and needed guidance to navigate the game. Most users were fascinated with the visuals of the game and two of the tested features. It was found that movement around the augmented space and feedback from the virtual content were critical aspects in creating a good user experience in augmented reality.
Contributors: Bauman, Kirsten (Co-author) / Benson, Meera (Co-author) / Olson, Loren (Thesis director) / LiKamWa, Robert (Committee member) / School of the Arts, Media and Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Acoustic ecology is an undervalued field that studies the relationship between the environment and sound. This project aims to educate people on this topic and demonstrate its importance by immersing them in virtual reality scenes. The scenes were created using VR180 content as well as 360° spatial audio.
Contributors: Neel, Jordan Tanner (Author) / LiKamWa, Robert (Thesis director) / Feisst, Sabine (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Dale and Edna is a hybrid animated film and videogame experienced in virtual reality, with dual storylines that increase in potential meaning through player interaction. Developed and played within Unreal Engine 4 using the HTC Vive, Oculus, or PlayStation VR, Dale and Edna allows players to passively enjoy the film element of the project or partake in the active videogame portion. Exploration of the virtual story world yields more information about that world, which may or may not alter the audience’s perception of it. The film portion of the project is a static narrative with a plot that cannot be altered by players within the virtual world. In the static plot, the characters Dale and Edna discover and subsequently combat an alien invasion that appears to have the objective of demolishing Dale’s prize pumpkin. However, the aliens in the film plot are merely projections created by AR headsets that are reflecting Jimmy’s gameplay on his tablet. The audience is thus invited to question their perception of reality through the combined use of VR and AR. The game element is a dynamic narrative scaffold that does not unfold as a traditional narrative might. Instead, what players observe and interact with within the sandbox level determines the meaning they take away from the project. Both elements of the project feature modular code construction so developers can return to the film and game portions and make additions. This paper analyzes the chronological development of the project along with the guiding philosophy revealed in the result.
Keywords: virtual reality, film, videogame, sandbox
Contributors: Kemp, Adam Lee (Co-author) / Kemp, Bradley (Co-author) / Kemp, Claire (Co-author) / LiKamWa, Robert (Thesis director) / Gilfillan, Daniel (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Thunderbird School of Global Management (Contributor) / School of Film, Dance and Theatre (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Java Mission-planning and Analysis for Remote Sensing (JMARS) is geospatial software that provides mission-planning and data-analysis tools with access to orbital data for planetary bodies like Mars and Venus. Using JMARS, terrain scenes can be prepared with an assortment of data layers along with any additional data sets. These scenes can then be exported into the JMARS extended reality platform, which includes both augmented reality and virtual reality experiences. JMARS VR Viewer is a virtual reality experience that allows users to view three-dimensional terrain data in a fully immersive and interactive way. This tool also provides a collaborative environment where users can host a terrain scene and analyze the data together. The purpose of this project is to design a set of interactions in virtual reality that address three questions: (1) how do we make sense of large, complex geospatial datasets; (2) how can we design interactions that assist users in understanding layered data in both individual and collaborative work environments; and (3) what are the effects of these interfaces on the user’s cognitive load?

Contributors: Wang, Olivia (Author) / LiKamWa, Robert (Thesis director) / Gold, Lauren (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
As the prevalence of augmented reality (AR) technology continues to increase, so too do methods for improving the appearance and behavior of computer-generated objects. This is especially significant as AR applications now expand to territories outside of the entertainment sphere and can be utilized for numerous purposes, including but not limited to education, specialized occupational training, retail and online shopping, design, marketing, and manufacturing. Due to the nature of AR technology, where computer-generated objects are placed into a real-world environment, a decision has to be made regarding the visual connection between the tangible and the intangible. Should the objects blend seamlessly into their environment or purposefully stand out? It is not purely a stylistic choice. A developer must consider how their application will be used — in many instances an optimal user experience is facilitated by mimicking the real world as closely as possible; even simpler applications, such as those built primarily for mobile devices, can benefit from realistic AR. The struggle here lies in creating an immersive user experience that is not reliant on computationally expensive graphics or heavy-duty models. The research contained in this thesis provides several ways to achieve photorealistic rendering in AR applications using a range of techniques, all of which are supported on mobile devices. These methods can be employed within the Unity game engine and incorporate shaders, render pipelines, node-based editors, post-processing, and light estimation.
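
As a toy illustration of the light-estimation technique named above (a sketch of the core idea only; AR frameworks and Unity's render pipelines do considerably more), the hypothetical function below derives a virtual light intensity from a camera frame's average luminance:

```python
import numpy as np

def estimate_ambient_intensity(rgb_frame: np.ndarray) -> float:
    """Crude ambient-light estimate: mean luma of a camera frame.

    rgb_frame: (H, W, 3) uint8 image. Returns an intensity in [0, 1]
    that could drive a virtual light source's brightness so rendered
    objects roughly match the real scene's lighting.
    """
    # Rec. 709 luma weights approximate perceived brightness.
    luma = rgb_frame @ np.array([0.2126, 0.7152, 0.0722])
    return float(luma.mean() / 255.0)
```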
Contributors: Schanberger, Schuyler Catherine (Author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Augmented Reality (AR) is a tool increasingly available to young learners and educators. This paper documents and analyzes the creation of an AR app used as a tool to teach fractions to young learners and enhance their engagement in the classroom. As an emerging technology diffusing into the general populace, AR presents a unique opportunity to engage users in the digital and real worlds. Additionally, AR can be enabled on most modern phones and tablets; it is therefore extremely accessible and has a low barrier to entry. To integrate AR into the classroom in an affordable way, I created leARn, an AR application intended to help young learners understand fractions. leARn is intended to be used alongside traditional teaching methods in order to enhance the engagement of students in the classroom. Throughout the development of the product, I considered not only usability and design but also the effectiveness of the app in the classroom. Moreover, through collaboration with Arizona State University Research Enterprises, I tested the application in a classroom with sixth-, seventh-, and eighth-grade students. This paper presents the findings from that testing period and an analysis of the educational effectiveness of the concept based on data received from students.
Contributors: Van Dobben, Maureen Veronica (Author) / LiKamWa, Robert (Thesis director) / Swisher, Kimberlee (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Commonly, image processing is handled on a CPU that is connected to the image sensor by a wire. In these far-sensor processing architectures, there is energy loss associated with sending data across an interconnect from the sensor to the CPU. In an effort to increase energy efficiency, near-sensor processing architectures have been developed, in which the sensor and processor are stacked directly on top of each other. This reduces energy loss associated with sending data off-sensor. However, processing near the image sensor causes the sensor to heat up. Reports of thermal noise in near-sensor processing architectures motivated us to study how temperature affects image quality on a commercial image sensor and how thermal noise affects computer vision task accuracy. We analyzed image noise across nine different temperatures and three sensor configurations to determine how image noise responds to an increase in temperature. Ultimately, our team used this information, along with transient analysis of a stacked image sensor’s thermal behavior, to advise thermal management strategies that leverage the benefits of near-sensor processing and prevent accuracy loss at problematic temperatures.
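
A minimal sketch of the kind of measurement described, computing temporal (frame-to-frame) noise from a stack of frames captured at each temperature; the data layout and synthetic numbers below are stand-ins for real captures, not the thesis's data:

```python
import numpy as np

def temporal_noise(frames: np.ndarray) -> float:
    """Mean per-pixel standard deviation across a stack of frames
    captured under identical settings; this rises as the sensor heats up."""
    return float(np.std(frames, axis=0).mean())

# Synthetic stand-in data: (n_frames, H, W) stacks keyed by deg C.
rng = np.random.default_rng(0)
frames_by_temp = {
    t: rng.normal(128.0, 1.0 + 0.2 * t, size=(32, 64, 64))
    for t in (25, 45, 65)
}
for t, frames in sorted(frames_by_temp.items()):
    print(f"{t} C: temporal noise = {temporal_noise(frames):.2f}")
```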
Contributors: Jones, Britton Steele (Author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Watts College of Public Service & Community Solut (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
Augmented Reality (AR), especially when used with mobile devices, enables the creation of applications that can help chemistry students learn anything from basic to more advanced concepts. In chemistry specifically, the 3D representation of molecules and chemical structures is of vital importance to students, and yet when printed in 2D, as in textbooks and lecture notes, those vital 3D concepts can be quite hard to understand. ARsome Chemistry is an app that aims to utilize AR to display complex and simple molecules in 3D and actively teach students these concepts through quizzes and other features. The ARsome Chemistry app uses image target recognition to allow students to hand-draw or print line-angle structures or chemical formulas of molecules and then scan those targets to get a 3D representation of the molecule. Students can use their fingers on the touch screen to zoom, rotate, and highlight different portions of the molecule to gain a better understanding of its 3D structure. The app also uses image recognition to let students quiz themselves on drawing line-angle structures: they show their drawing to the camera, and the app checks their work. ARsome Chemistry is an accessible and cost-effective study aid offering on-demand, interactive, 3D representations of complex molecules.
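
As a generic stand-in for the image-target recognition the app relies on (the app's actual implementation is not shown here), the sketch below counts ORB feature matches between a reference target and a camera frame using OpenCV; the file paths and match threshold are assumptions:

```python
import cv2

def count_feature_matches(target_path: str, frame_path: str) -> int:
    """Count ORB descriptor matches between a target image and a frame."""
    target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    if target is None or frame is None:
        raise FileNotFoundError("could not read input images")
    orb = cv2.ORB_create()
    _, target_desc = orb.detectAndCompute(target, None)
    _, frame_desc = orb.detectAndCompute(frame, None)
    if target_desc is None or frame_desc is None:
        return 0  # no detectable features in one of the images
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(target_desc, frame_desc)
    # Keep only reasonably close descriptor matches (threshold assumed).
    return sum(1 for m in matches if m.distance < 40)

# A high match count suggests the drawn structure matches a known target.
```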

Contributors: Evans, Brandon (Author) / LiKamWa, Robert (Thesis director) / Johnson, Mina (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
Spatial audio can be especially useful for directing human attention. However, delivering spatial audio through speakers, rather than through headphones that deliver audio directly to the ears, produces the issue of crosstalk, where sound from each of the two speakers reaches the opposite ear, inhibiting the spatialized effect. A research team at Meteor Studio has developed an algorithm called Xblock that solves this issue using a crosstalk cancellation technique. This thesis project expands upon the existing Xblock IoT system by providing a way to test the directional accuracy of sounds generated with spatial audio. More specifically, the objective is to determine whether using Xblock with smart speakers can provide generalized audio localization, i.e., the ability to detect the general direction a sound is coming from. The project also extends the existing Xblock technique with voice commands: users can speak the name of a lost item using the phrase “Find [item]”, and the IoT system will use spatial audio to guide them to it.
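
To make the crosstalk-cancellation idea concrete, below is a textbook frequency-domain canceller in Python: invert the 2x2 speaker-to-ear transfer matrix at each frequency bin so each ear receives only its intended signal. This is a generic sketch with an assumed data layout, not Xblock's actual algorithm:

```python
import numpy as np

def cancel_crosstalk(binaural: np.ndarray, H: np.ndarray,
                     eps: float = 1e-3) -> np.ndarray:
    """Textbook crosstalk canceller (not Xblock itself).

    binaural: (2, n_bins) desired left/right ear spectra.
    H: (n_bins, 2, 2) speaker-to-ear transfer matrices, where
       H[f][ear][speaker] is the path gain at frequency bin f.
    Returns (2, n_bins) spectra to drive the two speakers with.
    """
    speakers = np.empty_like(binaural, dtype=complex)
    eye = np.eye(2)
    for f in range(binaural.shape[1]):
        # Regularized inverse keeps the filter stable where the two
        # ear paths are nearly identical (H close to singular).
        speakers[:, f] = np.linalg.solve(H[f] + eps * eye, binaural[:, f])
    return speakers
```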
Contributors: Song, Lucy (Author) / LiKamWa, Robert (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05