ASU Electronic Theses and Dissertations
This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.
In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.
Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.
The proposed representation. The efficacy of the proposed descriptor was explored on three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. The descriptor achieves favorable high-level recognition performance and reduced time-complexity compared to other baseline methods.
sampling for both spatial and angular dimensions. Single-shot light field cameras
sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing
incoming rays onto a 2D sensor array. While this resolution can be recovered using
compressive sensing, these iterative solutions are slow in processing a light field. We
present a deep learning approach using a new two-branch network architecture,
consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution
4D light field from a single coded 2D image. This network decreases reconstruction
time significantly while achieving average PSNR values of 26-32 dB on a variety of
light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7
minutes as compared to the dictionary method for equivalent visual quality. These
reconstructions are performed at small sampling/compression ratios as low as 8%,
allowing for cheaper coded light field cameras. We test our network reconstructions
on synthetic light fields, simulated coded measurements of real light fields captured
from a Lytro Illum camera, and real coded images from a custom CMOS diffractive
light field camera. The combination of compressive light field capture with deep
learning allows the potential for real-time light field video acquisition systems in the
future.
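The coded capture described above can be illustrated with a minimal NumPy sketch. This is not the authors' optical design or network; it only simulates the general idea of multiplexing a 4D light field's angular views onto a single 2D sensor through a random binary mask, and shows how 12 angular views yields a sampling ratio near the 8% figure quoted in the abstract. All dimensions and the mask model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3x4 angular viewpoints of a 32x32 spatial scene.
U, V, H, W = 3, 4, 32, 32
light_field = rng.random((U, V, H, W))  # synthetic 4D light field

# Random binary mask: decides which rays from each angular view reach the sensor.
mask = (rng.random((U, V, H, W)) < 0.5).astype(float)

# Single coded 2D measurement: masked angular views summed onto one sensor plane.
coded_image = (light_field * mask).sum(axis=(0, 1))

# Sampling ratio: 2D measurements per 4D light field sample.
ratio = coded_image.size / light_field.size
print(f"coded image shape: {coded_image.shape}, sampling ratio: {ratio:.1%}")
```

With 12 angular views the ratio is 1/12 (about 8.3%), consistent with the "as low as 8%" compression the abstract reports; a reconstruction network would then invert this forward model to recover the full 4D light field from `coded_image`.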