In this thesis, I explored the interconnected ways in which human experience can shape, and be shaped by, the environments of the future: interactive spaces embedded with sensors and enlivened by advanced algorithms for sensor data processing. I developed an abstract, representational experience of the vast and continual journey through life, conveyed through sensory immersion. The experimental work was housed in the iStage, an advanced black-box space in the School of Arts, Media, and Engineering equipped with video cameras, motion capture, spatial audio, and controllable lighting and projection systems. This malleable, interactive space became a reflective tool for gaining insight into an emotional odyssey that is shared yet deeply individual. I also surveyed participants after the experience to better understand their perceptions and interpretations of it. Drawing on participants' responses and on collective reflection about the project, I consider future iterations and their potential applications in health and wellness.
We approach the problem by building a hardware prototype and characterizing the end-to-end power and performance bottlenecks of the system. The prototype has six IMX274 cameras and uses an Nvidia Jetson TX2 development board for capture and computation. We find that capture is bottlenecked by sensor power and by data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computation lead to high power consumption, a large memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across interfaces and expensive computations within individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency, emphasizing co-design of the subsystems to reduce and reuse data. For example, reusing the motion vectors computed in the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching.
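The motion-vector reuse can be made concrete with a small sketch. This is not the prototype's implementation; the block size, search slack, and function names are illustrative assumptions. The point is that per-block disparity hints taken from vectors the ISP/encoder stage already computed shrink the stereo search window, and with it the memory traffic:

```python
import numpy as np

BLOCK = 16        # ISP motion-estimation block size (assumed)
SLACK = 4         # disparities searched on each side of the MV hint

def block_sad(left, right, y, x, d):
    """SAD cost between a left-image block and its candidate match in the
    right image at horizontal disparity d."""
    lb = left[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
    rb = right[y:y + BLOCK, x - d:x - d + BLOCK].astype(np.int32)
    return int(np.abs(lb - rb).sum())

def disparity_from_mv_hints(left, right, mv_x):
    """mv_x holds per-block horizontal motion vectors already computed by
    the ISP/encoder stage; here they are reused as disparity guesses, so
    only a small window around each hint is searched."""
    h, w = left.shape
    disp = np.zeros((h // BLOCK, w // BLOCK), dtype=np.int32)
    for by in range(h // BLOCK):
        for bx in range(1, w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            hint = min(abs(int(mv_x[by, bx])), x)  # keep candidates in-bounds
            lo, hi = max(0, hint - SLACK), min(x, hint + SLACK)
            costs = {d: block_sad(left, right, y, x, d)
                     for d in range(lo, hi + 1)}
            disp[by, bx] = min(costs, key=costs.get)
    return disp
```

Searching 2·SLACK+1 candidates per block instead of the full disparity range is what turns the ISP's by-product into a memory and latency saving downstream.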
First, this work presents an application of mixture of experts models to quality-robust visual recognition. It is first shown that human subjects outperform deep neural networks on classification of distorted images; a model, MixQualNet, is then proposed that is more robust to distortions. The proposed model consists of "experts" each trained on a particular type of image distortion. The final output of the model is a weighted sum of the expert models, where the weights are determined by a separate gating network. The proposed model also incorporates weight sharing to reduce the number of parameters and to increase performance.
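As a rough illustration of this architecture (not the actual MixQualNet configuration; the layer sizes and expert count are assumptions), a minimal PyTorch sketch might look like:

```python
import torch
import torch.nn as nn

class DistortionMoE(nn.Module):
    def __init__(self, num_experts=4, num_classes=10):
        super().__init__()
        # Early features shared by all experts (the weight-sharing idea).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # One expert head per distortion type (e.g., blur, noise, ...).
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_classes),
            )
            for _ in range(num_experts)
        ])
        # Gating network: predicts one weight per expert from the image.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_experts),
        )

    def forward(self, x):
        f = self.shared(x)
        weights = torch.softmax(self.gate(f), dim=1)        # (B, E)
        logits = torch.stack([e(f) for e in self.experts])  # (E, B, C)
        # Final output: gating-weighted sum of expert predictions.
        return torch.einsum('eb,ebc->bc', weights.t(), logits)
```

The shared trunk carries the weight-sharing idea: only the expert heads and the gate hold per-expert parameters, which keeps the total parameter count close to a single-network baseline.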
Second, an application of mixture of experts models to visual saliency prediction is presented. A computational saliency model attempts to predict where humans will look in an image. In the proposed model, each expert network is trained to predict saliency for a set of closely related images. The final saliency map is computed as a weighted mixture of the expert networks' outputs, with weights determined by a separate gating network. The proposed model achieves better performance than several other visual saliency models and a baseline non-mixture model.
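The mixing step itself is simple; a minimal sketch (with hypothetical names) of the gating-weighted combination of expert saliency maps:

```python
import numpy as np

def mix_saliency(expert_maps, gate_logits):
    """expert_maps: (E, H, W) array, one saliency map per expert network.
    gate_logits: (E,) unnormalized scores from the gating network."""
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()                                  # softmax over the experts
    return np.tensordot(w, expert_maps, axes=1)   # (H, W) weighted mixture
```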
Finally, this work introduces a saliency model that is a weighted mixture of models trained for different levels of saliency. Levels of saliency include high saliency, corresponding to regions where almost all subjects look, and low saliency, corresponding to regions where some, but not all, subjects look. The weighted mixture improves on baseline models because of the diversity of the individual models' predictions.
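One plausible way to construct such level-specific training targets, assuming a per-pixel fixation-density map and illustrative thresholds (both are assumptions, not the thesis's procedure):

```python
import numpy as np

def saliency_levels(fixation_density, hi_thresh=0.8, lo_thresh=0.2):
    """fixation_density: (H, W) fraction of subjects fixating each region.
    Returns binary target maps for the high- and low-saliency models."""
    high = fixation_density >= hi_thresh               # almost all subjects
    low = (fixation_density >= lo_thresh) & ~high      # some, but not all
    return high.astype(np.float32), low.astype(np.float32)
```

Each level-specific model is then trained against its own target map, and the outputs are blended with a weighted mixture as above.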
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent advances in NLOS reconstruction have been achieved using time-resolved measurements, but acquiring such measurements requires expensive and specialized detectors and laser sources. This work proposes a data-driven approach for NLOS 3D localization that requires only a conventional camera and projector. Localization is performed using a voxelization of the hidden volume and, separately, a regression formulation. Accuracy greater than 90% is achieved in localizing an NLOS object to a 5 cm × 5 cm × 5 cm volume on real data; with the regression approach, an object of width 10 cm is localized to within approximately 1.5 cm. To generalize to line-of-sight (LOS) scenes with non-planar surfaces, an adaptive lighting algorithm is adopted. This algorithm, based on radiosity, identifies and illuminates the scene patches in the LOS that contribute most to the NLOS light paths, and can factor in system power constraints. Improvements of 6%-15% in accuracy with a non-planar LOS wall are reported using adaptive lighting, demonstrating the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
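As a hedged sketch of the adaptive-lighting idea (the contribution score below is a placeholder stand-in, not the work's actual radiosity computation, and all names are hypothetical), LOS wall patches can be scored by a form-factor-like term toward the hidden volume and selected greedily under a power budget:

```python
import numpy as np

def select_patches(patch_normals, hidden_dir, power_budget, patch_cost=1.0):
    """patch_normals: (N, 3) unit normals of candidate LOS wall patches.
    hidden_dir: (3,) assumed direction from the wall toward the hidden
    volume. Greedily illuminates the highest-scoring patches until the
    power budget is spent."""
    d = hidden_dir / np.linalg.norm(hidden_dir)
    # Cosine between each patch normal and the hidden-volume direction
    # stands in for the radiosity form factor of the NLOS light path.
    scores = np.clip(patch_normals @ d, 0.0, None)
    chosen, spent = [], 0.0
    for i in np.argsort(-scores):
        if spent + patch_cost > power_budget:
            break
        chosen.append(int(i))
        spent += patch_cost
    return chosen   # indices of patches the projector should illuminate
```

The greedy loop is where the system power constraint enters: illumination is allocated patch by patch to the strongest NLOS contributors until the budget is exhausted.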