Matching Items (35)
Description
Generating real-world content for VR is challenging in terms of capturing and processing at high resolution and high frame rates. The content needs to represent a truly immersive experience, where the user can look around in a 360-degree view and perceive the depth of the scene. Existing solutions only capture on the device and offload the compute load to a server. But offloading large volumes of raw camera feeds incurs long latencies and poses difficulties for real-time applications. By capturing and computing on the edge, we can closely integrate the systems and optimize for low latency. However, moving traditional stitching algorithms to a battery-constrained device requires at least a three-order-of-magnitude reduction in power. We believe that close integration of the capture and compute stages will lead to reduced overall system power.

We approach the problem by building a hardware prototype and characterizing the end-to-end system bottlenecks in power and performance. The prototype has six IMX274 cameras and uses an NVIDIA Jetson TX2 development board for capture and computation. We found that capture is bottlenecked by sensor power and data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computation lead to high power, a large memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across interfaces and expensive computations within individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency, emphasizing co-design of the subsystems to reduce and reuse data. For example, reusing the motion vectors of the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching.
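To illustrate the co-design idea, here is a minimal Python sketch (illustrative only, not the thesis's implementation) of how ISP motion vectors could seed a block-based stereo correspondence search, shrinking its search window and memory footprint:

    import numpy as np

    def stereo_match(left, right, mv, block=16, radius=4):
        # left, right: rectified grayscale frames (H x W numpy arrays)
        # mv: per-block ISP motion vectors, shape (H//block, W//block, 2)
        h, w = left.shape
        disparity = np.zeros((h // block, w // block))
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                ref = left[y:y+block, x:x+block].astype(np.float32)
                dx = int(mv[by, bx][1])   # reuse ISP hint; horizontal offset only
                best_cost, best_d = np.inf, 0
                # Search only a small window around the hinted offset.
                for d in range(dx - radius, dx + radius + 1):
                    if not 0 <= x + d <= w - block:
                        continue
                    cand = right[y:y+block, x+d:x+d+block].astype(np.float32)
                    cost = float(np.abs(ref - cand).sum())   # SAD matching cost
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[by, bx] = best_d
        return disparity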
ContributorsGunnam, Sridhar (Author) / LiKamWa, Robert (Thesis advisor) / Turaga, Pavan (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created2018
Description
Vision processing on traditional architectures is inefficient due to energy-expensive off-chip data movements. Many researchers advocate pushing processing close to the sensor to substantially reduce data movements. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks.

The work characterizes the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. The characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold, situationally determined by application needs. Fortunately, the characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and rapidly increase capture quality on demand.

Based on the characterization, the work proposes and investigates two thermal management strategies -- stop-capture-go and seasonal migration -- for imaging-aware thermal management. The work presents parameters that govern the policy decisions and explores the trade-offs between system power and policy overhead. The evaluation shows that these dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing with minimal performance impact and without compromising image fidelity.
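As a rough illustration of how a stop-capture-go policy could be structured, the following sketch suspends near-sensor processing at a fidelity threshold and resumes after cooling; the thresholds and the sensor/task interfaces are assumptions, not the thesis's actual values or API:

    import time

    T_FIDELITY = 55.0   # assumed deg C; above this, image quality degrades
    T_RESUME   = 45.0   # assumed deg C; safe to restart near-sensor compute

    def stop_capture_go(sensor, vision_task, frames=100):
        # sensor/vision_task are assumed interfaces exposing temperature(),
        # capture(), suspend(), resume(), and process().
        for _ in range(frames):
            if sensor.temperature() >= T_FIDELITY:
                vision_task.suspend()                # "stop": halt near-sensor work
                while sensor.temperature() > T_RESUME:
                    time.sleep(0.1)                  # idle while the 3D stack cools
                vision_task.resume()                 # "go": processing restarts
            vision_task.process(sensor.capture())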
ContributorsKodukula, Venkatesh (Author) / LiKamWa, Robert (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Brunhaver, John (Committee member) / Arizona State University (Publisher)
Created2019
Description
Mixed reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM optionally operates across multiple mobile mixed-reality devices, leveraging collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimation depending on the situational needs of the virtual scene and physical environment.
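A hedged sketch of how such prioritization policies might be encoded (the parameter values and policy names are illustrative assumptions, not GLEAM's actual settings):

    # Each policy trades estimation resolution and viewpoint coverage
    # against update interval (values assumed for illustration).
    POLICIES = {
        "fast":     {"face_px": 16, "viewpoints": 1, "interval_ms": 15},
        "balanced": {"face_px": 32, "viewpoints": 2, "interval_ms": 60},
        "quality":  {"face_px": 64, "viewpoints": 4, "interval_ms": 200},
    }

    def choose_policy(scene_is_dynamic, devices_available):
        # Dynamic scenes (moving lights, reflections) need frequent updates;
        # extra devices make high-coverage, high-resolution estimation affordable.
        if scene_is_dynamic:
            return POLICIES["fast"]
        if devices_available > 1:
            return POLICIES["quality"]
        return POLICIES["balanced"]

    print(choose_policy(scene_is_dynamic=False, devices_available=3))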

To evaluate the runtime performance and perceptual efficacy of the system, GLEAM was implemented on the Unity 3D Game Engine. The implementation was deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference towards GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled lighting user study on 18 participants revealed a general preference for policies that strike a balance between resolution and update rate.
ContributorsPrakash, Siddhant (Author) / LiKamWa, Robert (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created2018
Description
Visualizations are an integral component for communicating and evaluating modern networks. As data becomes more complex, infographics require a balance between visual noise and effective storytelling that is often restricted by layouts unsuitable for scalability. The challenge then rests upon researchers to structure their information in a way that allows for flexible, transparent illustration. We propose network graphs as an effective alternative to traditional charts, which are unable to look past numeric data, for demonstrating community behavior. In this paper, we explore methods for manipulating, processing, cleaning, and aggregating data in Python, a programming language tailored for handling structured data, which can then be formatted for analysis and modeling of social network tendencies in Gephi. We apply the Fruchterman-Reingold force-directed layout algorithm to datasets of Arizona State University's research and collaboration network. The result is a visualization that analyzes the university's infrastructure by providing insight into community behavior between colleges. Furthermore, we highlight how the flexibility of this visualization provides a foundation for specific use cases by demonstrating centrality measures that find important liaisons connecting distant communities.
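A minimal sketch of this workflow in Python, using toy stand-in data rather than the actual ASU collaboration dataset: networkx's spring_layout implements the Fruchterman-Reingold algorithm, betweenness centrality surfaces liaison nodes, and the result is exported in a Gephi-readable format.

    import networkx as nx

    # Toy stand-in for the collaboration network (hypothetical edges/weights).
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("Engineering", "Arts+Media", 12),
        ("Engineering", "Sciences", 30),
        ("Sciences", "Sustainability", 8),
        ("Arts+Media", "Design", 15),
        ("Design", "Business", 5),
    ])

    # Force-directed layout: spring_layout implements Fruchterman-Reingold.
    pos = nx.spring_layout(G, weight="weight", seed=42)

    # Betweenness centrality surfaces "liaison" nodes bridging communities.
    centrality = nx.betweenness_centrality(G, weight="weight")
    liaisons = sorted(centrality, key=centrality.get, reverse=True)

    # Attach coordinates as node attributes and export for Gephi (GEXF).
    nx.set_node_attributes(G, {n: {"x": float(p[0]), "y": float(p[1])}
                               for n, p in pos.items()})
    nx.write_gexf(G, "collaboration.gexf")
    print(liaisons[:3])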
ContributorsMcMichael, Jacob Andrew (Author) / LiKamWa, Robert (Thesis director) / Anderson, Derrick (Committee member) / Goshert, Maxwell (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
Emerging technologies, such as augmented reality (AR), are growing in popularity and accessibility at a fast pace. Developers are building more and more games and applications with this technology, but few have stopped to think about the best practices for creating a good user experience (UX). Currently, there are no universally accepted human-computer interaction guidelines for augmented reality because the medium is still relatively new. This paper examines three features - virtual content scale, indirect selection, and virtual buttons - in an attempt to discover their impact on the user experience in augmented reality. A Battleship game was developed using the Unity game engine with Vuforia, an augmented reality platform, and built as an iOS application to test these features. The hypothesis was that virtual content scale and indirect selection would result in a more enjoyable and engaging user experience, whereas virtual buttons would be too confusing for users to fully appreciate. Usability testing was conducted to gauge participants' responses to these features. After playing a base version of the game with no additional features and then a second version with one of the three features, participants rated their experiences and provided feedback in a four-part survey. It was observed during testing that people did not instinctively move their devices around the augmented space and needed guidance to navigate the game. Most users were fascinated with the visuals of the game and two of the tested features. It was found that movement around the augmented space and feedback from the virtual content were critical to creating a good user experience in augmented reality.
ContributorsBauman, Kirsten (Co-author) / Benson, Meera (Co-author) / Olson, Loren (Thesis director) / LiKamWa, Robert (Committee member) / School of the Arts, Media and Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Acoustic ecology is an undervalued field that studies the relationship between the environment and sound. This project aims to educate people on the topic and demonstrate its importance by immersing them in virtual reality scenes. The scenes were created using VR180 content as well as 360° spatial audio.
ContributorsNeel, Jordan Tanner (Author) / LiKamWa, Robert (Thesis director) / Feisst, Sabine (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Dale and Edna is a hybrid animated film and videogame experienced in virtual reality, with dual storylines whose potential meanings increase through player interaction. Developed and played within Unreal Engine 4 using the HTC Vive, Oculus, or PlayStation VR, Dale and Edna allows players to passively enjoy the film element of the project or partake in the active videogame portion. Exploration of the virtual story world yields more information about that world, which may or may not alter the audience's perception of it. The film portion of the project is a static narrative with a plot that cannot be altered by players within the virtual world. In the static plot, the characters Dale and Edna discover and subsequently combat an alien invasion that appears to have the objective of demolishing Dale's prize pumpkin. However, the aliens in the film plot are merely projections created by AR headsets reflecting Jimmy's gameplay on his tablet. The audience is thus invited to question their perception of reality through the combined use of VR and AR. The game element is a dynamic narrative scaffold that does not unfold as a traditional narrative might. Instead, what a player observes and interacts with within the sandbox level determines the meaning those players take away from the project. Both elements feature modular code construction so developers can return to either the film or the game portion and make additions. This paper analyzes the chronological development of the project along with the guiding philosophy revealed in the result.
Keywords: virtual reality, film, videogame, sandbox
ContributorsKemp, Adam Lee (Co-author) / Kemp, Bradley (Co-author) / Kemp, Claire (Co-author) / LiKamWa, Robert (Thesis director) / Gilfillan, Daniel (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Thunderbird School of Global Management (Contributor) / School of Film, Dance and Theatre (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
In applications such as UAV imaging and parking-lot surveillance, it is typical to first collect an enormous number of pixels using conventional imagers, then employ expensive compression methods to throw away redundant data before transmitting it to a ground station. The past decade has seen the emergence of novel imagers called spatial-multiplexing cameras, which offer compression at the sensing level itself by providing arbitrary linear measurements of the scene instead of pixel-based sampling. In this dissertation, I discuss various approaches for effective information extraction from spatial-multiplexing measurements and present the trade-offs between performance reliability and the computational/storage load of the system. In the first part, I present a reconstruction-free approach to high-level inference in computer vision, considering the specific case of activity analysis, and show that using correlation filters one can perform effective action recognition and localization directly from a class of spatial-multiplexing cameras, called compressive cameras, even at measurement rates as low as 1%. In the second part, I outline a deep-learning-based, non-iterative, real-time algorithm to reconstruct images from compressively sensed (CS) measurements, which can outperform traditional iterative CS reconstruction algorithms in both reconstruction quality and time complexity, especially at low measurement rates. Because compressive cameras operate with random measurements that are not tuned to any particular task, in the third part of the dissertation I propose a method to design spatial-multiplexing measurements that are tuned to facilitate easy extraction of features useful in computer vision tasks like object tracking. The work presented in this dissertation provides strong evidence that high-level inference in computer vision is feasible at extremely low measurement rates, and hence allows us to think about the possibility of revamping current computer vision systems.
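As a worked illustration of spatial-multiplexing capture at a 1% measurement rate (the sensing matrix here is a generic random one, not the dissertation's designed measurements, and the pseudo-inverse is a baseline stand-in for its reconstruction network):

    import numpy as np

    n = 32 * 32                        # a 32x32 scene has N = 1024 pixels
    m = max(1, int(0.01 * n))          # 1% measurement rate -> M = 10 measurements
    rng = np.random.default_rng(0)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    x = rng.random(n)                                # flattened scene (unknown)
    y = Phi @ x                                      # camera reports only y = Phi x

    # Pseudo-inverse baseline for intuition only; iterative CS solvers or the
    # non-iterative deep network described above would replace this step.
    x_hat = np.linalg.pinv(Phi) @ y
    print(y.shape, x_hat.shape)        # (10,) (1024,)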
ContributorsKulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Li, Baoxin (Committee member) / Chakrabarti, Chaitali (Committee member) / Sankaranarayanan, Aswin (Committee member) / LiKamWa, Robert (Committee member) / Arizona State University (Publisher)
Created2017
Description
Nowadays, demand from the Internet of Things (IoT), automotive networking, and video applications is driving the transformation of Ethernet toward time-sensitive operation. As traffic volumes grow, so does the number of errors in the network. A Time-Sensitive Network (TSN) provides deterministic service for time-sensitive traffic in a time-synchronized Ethernet environment. Managing these errors efficiently requires countermeasures; a system that maintains its function even in the event of an internal fault or failure is called a fault-tolerant system. To this end, after configuring the network environment in the OMNeT++ simulator, machine learning was used to estimate the optimal alternate routing path in case an error occurred during transmission. By setting an alternate path before an error occurs, I propose a method to minimize delay and data loss when an error does occur. Four configurations were compared: no replication, ideal replication, random replication, and ML-guided replication. Ideal replication showed the best results, as expected, since everything in that environment is optimal; setting that ideal case aside, the proposed ML-based replication prediction performed best. These results suggest that the proposed method is effective, though efficiency and error-control issues remain, and directions for further improvement are outlined.
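A hedged sketch of the ML-guided replication decision (the features, training data, and classifier choice are hypothetical stand-ins, not the thesis's actual setup):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-flow features: [link utilization, recent error rate,
    # queue depth]; label 1 means an error later occurred on the primary path.
    X_train = np.array([[0.20, 0.00,  3],
                        [0.90, 0.05, 40],
                        [0.50, 0.01, 10],
                        [0.95, 0.08, 55]])
    y_train = np.array([0, 1, 0, 1])

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train, y_train)

    def should_replicate(utilization, error_rate, queue_depth):
        # Replicate frames over the precomputed alternate path when the model
        # predicts an error is likely, before any error actually occurs.
        return bool(clf.predict([[utilization, error_rate, queue_depth]])[0])

    print(should_replicate(0.92, 0.06, 48))   # a stressed link: likely True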
ContributorsLee, Sang hee (Author) / Reisslein, Martin (Thesis advisor) / LiKamWa, Robert (Committee member) / Thyagaturu, Akhilesh (Committee member) / Arizona State University (Publisher)
Created2022
Description
Java Mission-planning and Analysis for Remote Sensing (JMARS) is geospatial software that provides mission-planning and data-analysis tools with access to orbital data for planetary bodies like Mars and Venus. Using JMARS, terrain scenes can be prepared with an assortment of data layers along with any additional data sets. These scenes can then be exported into the JMARS extended reality platform, which includes both augmented reality and virtual reality experiences. JMARS VR Viewer is a virtual reality experience that allows users to view three-dimensional terrain data in a fully immersive and interactive way. The tool also provides a collaborative environment where users can host a terrain scene and analyze the data together. The purpose of the project is to design a set of interactions in virtual reality that address three questions: (1) how do we make sense of large, complex geospatial datasets, (2) how can we design interactions that assist users in understanding layered data in both individual and collaborative work environments, and (3) what are the effects of these interfaces on the user's cognitive load.

ContributorsWang, Olivia (Author) / LiKamWa, Robert (Thesis director) / Gold, Lauren (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05