Matching Items (4)

Land use and land cover classification using deep learning techniques

Description

Large datasets of sub-meter aerial imagery, represented as orthophoto mosaics, are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to reveal several types of features, such as forests, parking lots, airports, residential areas, and freeways. However, the appearance of these features varies with many factors, including the time at which the image was captured, the sensor settings, the processing applied to rectify the image, and the geographical and cultural context of the region captured. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of massive amounts of VHR imagery in the visible red, green, and blue (RGB) spectral bands, and this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection and mapping. Recent work has shown the potential of deep learning approaches for land use classification; this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited to geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and the resulting accuracy levels are presented to show that the results generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often trained with very large sets of labeled data.
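The geospatially motivated augmentation mentioned above can be illustrated with a small sketch. Overhead imagery has no privileged orientation, so the eight symmetries of the square (four rotations, each optionally mirrored) all yield label-preserving training examples. The abstract does not list the thesis's exact augmentation set, so this NumPy function (the name `dihedral_augmentations` is hypothetical) is only an illustrative sketch:

```python
import numpy as np

def dihedral_augmentations(tile):
    """Return the 8 dihedral symmetries (4 rotations x optional mirror)
    of a square image tile of shape (H, W) or (H, W, C).

    For overhead imagery, all 8 variants are equally plausible views of
    the same land-cover class, so each keeps the original label."""
    outputs = []
    for k in range(4):                      # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(tile, k)
        outputs.append(rotated)
        outputs.append(np.fliplr(rotated))  # mirrored counterpart
    return outputs
```

Applied to every labeled tile, this multiplies the effective training set by eight at no labeling cost.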
Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers, without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a convolutional neural network (CNN) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
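The 5-fold cross-validation protocol behind the reported mean accuracy can be sketched in plain Python: shuffle the samples, split them into five disjoint folds, and average the per-fold test accuracy. The function names here (`k_fold_indices`, `mean_cv_accuracy`) are illustrative, not from the thesis:

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds.
    Yields (train_indices, test_indices) pairs, one per fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fi, fold in enumerate(folds) if fi != i for j in fold]
        yield train, test

def mean_cv_accuracy(fit, predict, X, y, k=5):
    """Train on k-1 folds, test on the held-out fold, and average the
    per-fold accuracies (the 'mean accuracy' figure of merit)."""
    accs = []
    for train, test in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / k
```

Each sample appears in exactly one test fold, so every image contributes to the reported mean exactly once.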

Date Created
  • 2016

Augmented image classification using image registration techniques

Description

Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration techniques to improve image classification. The method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in their characteristics; registration therefore produces an image that more closely matches the previously obtained image, enabling better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm improves classification with naïve Bayes, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets used.
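A minimal 2D point-to-point ICP of the kind used in the registration stage can be sketched with NumPy. This is the generic textbook formulation (nearest-neighbor matching plus an SVD-based rigid fit), not the thesis's exact implementation:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B
    (both (N, 2) arrays with rows in correspondence), via the SVD method."""
    cA, cB = A.mean(0), B.mean(0)
    H = (A - cA).T @ (B - cB)            # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iters=30):
    """Iteratively align `source` to `target`: match each source point to
    its nearest target point, then solve for the best rigid transform.
    Returns the accumulated (R, t) and the aligned source points."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(1)]    # nearest-neighbor correspondences
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Once the new image is aligned to its predecessor, the classifier sees inputs with consistent geometry, which is the source of the error-rate reduction reported above.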

Date Created
  • 2011

Development and applications of a multispectral microscopic imager for the in situ exploration of planetary surfaces

Description

Future robotic and human missions to the Moon and Mars will need in situ capabilities to characterize the mineralogy of rocks and soils within a microtextural context. Such spatially correlated information is considered crucial for correct petrogenetic interpretations and will be a key observation for assessing the potential for past habitability on Mars. These data will also enable the selection of the highest-value samples for further analysis and potential caching for return to Earth. The Multispectral Microscopic Imager (MMI), similar to a geologist's hand lens, advances the capabilities of current microimagers by providing multispectral, microscale reflectance images of geological samples, where each image pixel comprises a 21-band spectrum ranging from 463 to 1735 nm. To better understand the capabilities of the MMI in future surface missions to the Moon and Mars, geological samples comprising a range of Mars-relevant analog environments, as well as 18 lunar rocks and four soils from the Apollo collection, were analyzed with the MMI. Results indicate that the MMI images resolve the fine-scale microtextural features of the samples and provide important information to help constrain mineral composition. Spectral end-member mapping revealed the distribution of Fe-bearing minerals (silicates and oxides), along with the presence of hydrated minerals. In the case of the lunar samples, the MMI observations also revealed the presence of opaques, glasses, and, in some cases, the effects of space weathering. MMI-based petrogenetic interpretations compare favorably with laboratory observations (including VNIR spectroscopy, XRD, and thin-section petrography) and with previously published analyses in the literature (for the lunar samples).
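End-member mapping of the kind described above can be sketched with a simple nearest-end-member rule. The spectral angle, which compares the shape of a pixel's 21-band spectrum to each reference spectrum independently of overall brightness, is a common choice for this; the abstract does not specify the MMI team's actual mapping method, so this NumPy sketch is purely illustrative:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra. A small angle means
    similar spectral shape, regardless of overall brightness/illumination."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def map_endmembers(cube, endmembers):
    """Assign each pixel of a (H, W, bands) image cube to the index of the
    end-member spectrum with the smallest spectral angle."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    angles = np.array([[spectral_angle(px, em) for em in endmembers]
                       for px in flat])
    return angles.argmin(1).reshape(H, W)
```

Because the angle ignores brightness, shadowed and well-lit pixels of the same mineral map to the same end-member, which is what makes per-pixel mineral distribution maps possible.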
The MMI was also deployed as part of the 2010 ILSO-ISRU field test on the slopes of Mauna Kea, Hawaii and inside the GeoLab as part of the 2011 Desert RATS field test at the Black Point Lava Flow in northern Arizona to better assess the performance of the MMI under realistic field conditions (including daylight illumination) and mission constraints to support human exploration. The MMI successfully imaged rocks and soils in outcrops and samples under field conditions and mission operation scenarios, revealing the value of the MMI to support future rover and astronaut exploration of planetary surfaces.

Date Created
  • 2012

3D rooftop detection and modeling using orthographic aerial images

Description

Automatic detection of extruded features such as rooftops and trees in aerial images is a very active area of research. Elevated features identified from aerial imagery have potential applications in urban planning and in identifying cover for military or flight training. Detecting such features using commonly available geospatial data, such as orthographic aerial imagery, is very challenging because rooftop and tree textures are often camouflaged by similar-looking features such as roads, bare ground, and grass. For this reason, additional data such as LIDAR, multispectral imagery, and multiple viewpoints are often exploited for more accurate detection. However, such data are often unavailable, improperly registered, or inaccurate. In this thesis, we discuss a novel framework that uses only orthographic images for the detection and modeling of rooftops. A segmentation scheme is proposed that is initialized by assigning either foreground (rooftop) or background labels to certain pixels in the image based on shadows; it then employs GrabCut to assign one of those two labels to the remaining pixels based on the initial labeling. Parametric model fitting is performed on the segmented results to create a 3D scene and to facilitate roof-shape and height estimation. The framework can also benefit from additional geospatial data such as street maps and LIDAR, if available.
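The shadow-based initialization can be sketched as follows: threshold dark pixels as shadow (background seeds), then shift the shadow mask toward the sun, where the roof that cast the shadow should lie (foreground seeds). The fixed threshold, the wrap-around `np.roll` shift, and the function name are illustrative assumptions, and the subsequent GrabCut refinement step is omitted here:

```python
import numpy as np

def shadow_seeds(gray, shadow_thresh=0.2, sun_offset=(0, 5)):
    """Derive segmentation seed labels from shadow evidence in a grayscale
    aerial image with values in [0, 1].

    Pixels darker than `shadow_thresh` become shadow/background seeds.
    Shifting the shadow mask by `sun_offset` (dy, dx), i.e. toward the sun,
    marks likely rooftop pixels, since buildings cast shadows away from
    the sun.  Returns an int mask: 0 = unknown, 1 = rooftop seed,
    2 = shadow/background seed.  (np.roll wraps at the borders; a real
    implementation would handle image edges explicitly.)"""
    shadow = gray < shadow_thresh
    seeds = np.zeros(gray.shape, dtype=int)
    seeds[shadow] = 2
    shifted = np.roll(shadow, sun_offset, axis=(0, 1))  # toward the building
    seeds[shifted & ~shadow] = 1
    return seeds
```

These sparse seeds play the role of the user strokes in interactive GrabCut: they pin down definite foreground and background regions, and the graph-cut optimization labels everything else.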

Date Created
  • 2013