Theses and Dissertations
Description
Fisheye cameras are special cameras that have a much larger field of view compared to
conventional cameras. The large field of view comes at a price of non-linear distortions
introduced near the boundaries of the images captured by such cameras. Despite this
drawback, they are increasingly used in computer vision, robotics, reconnaissance,
astrophotography, surveillance, and automotive applications.
The images captured from such cameras can be corrected for their distortion if the
cameras are calibrated and the distortion function is determined. Calibration also allows
fisheye cameras to be used in tasks involving metric scene measurement, metric
scene reconstruction and other simultaneous localization and mapping (SLAM) algorithms.
This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for fisheye camera calibration in one package. This enables an inexperienced user to calibrate their own camera without a theoretical understanding of computer vision and camera calibration. The thesis also explores applications of calibration such as distortion correction and 3D reconstruction.
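To illustrate the kind of distortion function such a calibration determines, the sketch below uses the equidistant fisheye model, one commonly assumed model (the specific models implemented in FisheyeCDC Toolbox are not stated here). A pinhole camera maps a ray at incidence angle θ to image radius r = f·tan(θ), while an equidistant fisheye maps it to r = f·θ; inverting the fisheye mapping and reprojecting through the pinhole model corrects the distortion. The focal length value is hypothetical.

```python
import numpy as np

def fisheye_radius(theta, f):
    # equidistant fisheye model: image radius grows linearly with angle
    return f * theta

def undistort_radius(r_fish, f):
    # invert the fisheye mapping to recover the incidence angle,
    # then reproject the ray through an ideal pinhole model
    theta = r_fish / f
    return f * np.tan(theta)

f = 300.0                      # hypothetical focal length in pixels
theta = np.deg2rad(40.0)       # ray 40 degrees off the optical axis
r_fish = fisheye_radius(theta, f)
r_pinhole = undistort_radius(r_fish, f)
# the fisheye radius is smaller: the lens compresses the periphery,
# which is the non-linear distortion near the image boundary
```

Under this model the correction stretches points radially outward, which is why undistorted fisheye images lose resolution near their edges.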
Contributors: Kashyap Takmul Purushothama Raju, Vinay (Author) / Karam, Lina (Thesis advisor) / Turaga, Pavan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
For a system of autonomous vehicles functioning together in a traffic scene, 3D
understanding of the participants in the field of view or surroundings is essential for
assessing the safe operation of all involved. This problem can be decomposed into online
pose and shape estimation, which has been a core research area of computer vision for over
a decade. This work supports and improves the joint estimation of the pose
and shape of vehicles from monocular cameras. Jointly estimating vehicle pose and shape
online is enabled by what is called an offline reconstruction pipeline. In the offline
reconstruction step, an approach is formulated to obtain 3D vehicle shapes with labeled
keypoints.
This work proposes a multi-view reconstruction pipeline using images and masks
that creates an approximate vehicle shape for use as a shape prior. A 3D model-fitting
optimization approach is then developed to refine the shape prior using high-quality
computer-aided design (CAD) models of vehicles. A dataset of such 3D
vehicles, each annotated with 20 keypoints, is prepared and named the AvaCAR dataset. The
AvaCAR dataset can be used to estimate vehicle shape and pose without the need to
collect the significant amounts of data required to adequately train a neural
network. The online reconstruction can use this synthetic dataset to generate novel
viewpoints and simultaneously train a neural network for pose and shape estimation. Most
methods in the current literature that train deep neural networks to estimate object
pose from a single image are inherently biased toward the viewpoints of the training
images. This approach addresses that limitation by providing the online estimator with
a shape prior that can generate novel views to account for viewpoint bias. The dataset
is provided with ground-truth extrinsic parameters
and compact vector-based shape representations, which, along with the multi-view
dataset, can be used to efficiently train neural networks for vehicle pose and shape
estimation. The vehicles in this library are evaluated with standard metrics to ensure
they are capable of aiding online estimation and model-based tracking.
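The ground-truth extrinsic parameters described above allow annotated 3D keypoints to be projected into any camera view, which is the supervision signal such a dataset provides for training pose estimators. The sketch below shows this standard pinhole projection; the function name, intrinsics, and keypoint values are illustrative assumptions, not the AvaCAR pipeline itself.

```python
import numpy as np

def project_keypoints(X_world, R, t, K):
    """Project (N, 3) world-frame keypoints into pixel coordinates.

    R (3x3) and t (3,) are the camera extrinsics (world -> camera),
    K (3x3) is the intrinsic matrix.
    """
    X_cam = X_world @ R.T + t        # transform into the camera frame
    uv_h = X_cam @ K.T               # apply intrinsics (homogeneous pixels)
    return uv_h[:, :2] / uv_h[:, 2:3]  # perspective divide

# hypothetical camera: 500 px focal length, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera aligned with the world axes
t = np.array([0.0, 0.0, 5.0])        # keypoints sit 5 units in front

# three illustrative vehicle keypoints in the world frame
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
uv = project_keypoints(X, R, t, K)
# the origin keypoint lands on the principal point (320, 240)
```

Rendering the same labeled keypoints from many such extrinsics is one way a synthetic dataset can supply the novel viewpoints that reduce viewpoint bias during training.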
Contributors: Dutta, Prabal Bijoy (Author) / Yang, Yezhou (Thesis advisor) / Berman, Spring (Committee member) / Lu, Duo (Committee member) / Arizona State University (Publisher)
Created: 2022