Matching Items (7)

Connectivity control for quad-dominant meshes

Description

Quad-dominant (QD) meshes, i.e., three-dimensional, 2-manifold polygonal meshes comprising mostly four-sided faces (quads), are a popular choice for many applications such as polygonal shape modeling, computer animation, base meshes for spline and subdivision surfaces, simulation, and architectural design. This thesis investigates the topic of connectivity control, i.e., exploring different choices of mesh connectivity to represent the same 3D shape or surface. One key concept of QD mesh connectivity is the distinction between regular and irregular elements: a vertex with valence 4 is regular; otherwise, it is irregular. Similarly, a face with four sides is regular; otherwise, it is irregular. For QD meshes, the placement of irregular elements is especially important, since it largely determines the achievable geometric quality of the final mesh.
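
As a minimal illustration of these definitions (a hypothetical helper, not code from the thesis), the sketch below classifies the vertices and faces of a polygonal mesh, given as faces listing vertex indices, into regular and irregular elements:

```python
from collections import defaultdict

def count_irregular_elements(faces):
    """Classify mesh elements: a vertex is regular if its valence is 4, and a
    face is regular if it has exactly 4 sides (illustrative sketch only).

    `faces` is a list of faces, each a list of vertex indices. Valence is
    approximated as the number of distinct neighbors, assuming a closed
    2-manifold mesh without boundary.
    """
    neighbors = defaultdict(set)
    for face in faces:
        n = len(face)
        for i, v in enumerate(face):
            neighbors[v].add(face[(i - 1) % n])
            neighbors[v].add(face[(i + 1) % n])

    irregular_vertices = [v for v, nbrs in neighbors.items() if len(nbrs) != 4]
    irregular_faces = [f for f in faces if len(f) != 4]
    return irregular_vertices, irregular_faces

# Example: a cube is all quads but has eight valence-3 (irregular) vertices.
cube = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 5, 4],
        [1, 2, 6, 5], [2, 3, 7, 6], [3, 0, 4, 7]]
verts, faces = count_irregular_elements(cube)
print(len(verts), len(faces))  # 8 irregular vertices, 0 irregular faces
```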

Traditionally, research on QD meshes has focused on the automatic generation of pure quadrilateral or QD meshes from a given surface, and the placement of irregular elements can be controlled only indirectly. To fill this gap, in this thesis, we make the following contributions. First, we formulate the theoretical background on the fundamental combinatorial properties of irregular elements in QD meshes. Second, we develop algorithms for the explicit control of irregular elements and for the exhaustive enumeration of QD mesh connectivities. Finally, we demonstrate the importance of connectivity control for QD meshes in a wide range of applications.

Date Created
  • 2014

Generating and exploring design variations for architectural and urban layouts

Description

This thesis focuses on generating and exploring design variations for architectural and urban layouts. I propose to study this general problem in three selected contexts.

First, I introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a given size.
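
To make the idea of size-dependent variations concrete, here is a rough one-dimensional sketch (my own illustration, not the framework from the thesis): elements marked repeatable may be duplicated, and the strip is then uniformly scaled to the target width, with a placeholder stretch limit standing in for the user-modeled constraints.

```python
from itertools import product

def facade_variations(elements, target_width, max_repeats=3, max_stretch=1.2):
    """Enumerate simple width variations of a 1D facade strip (illustrative only).

    `elements` is a list of (label, width, repeatable) tuples. Repeatable
    elements may appear 1..max_repeats times; each candidate strip is then
    uniformly scaled to `target_width` and kept only if the stretch factor
    stays within [1/max_stretch, max_stretch].
    """
    options = [range(1, max_repeats + 1) if rep else [1] for _, _, rep in elements]
    variations = []
    for counts in product(*options):
        natural = sum(w * c for (_, w, _), c in zip(elements, counts))
        scale = target_width / natural
        if 1 / max_stretch <= scale <= max_stretch:
            layout = []
            for (label, w, _), c in zip(elements, counts):
                layout.extend([(label, w * scale)] * c)
            variations.append(layout)
    return variations

strip = [("door", 2.0, False), ("window", 1.5, True), ("wall", 1.0, True)]
for layout in facade_variations(strip, target_width=8.0):
    print([(lbl, round(w, 2)) for lbl, w in layout])
```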

Second, I propose a method for a user to understand and systematically explore good building layouts. Starting from a discrete set of good layouts, I analytically characterize the local shape space of good layouts around each initial layout, compactly encode these spaces, and link them to support transitions across the different local spaces. I represent such transitions in the form of a portal graph. The user can then use the portal graph, along with the family of local shape spaces, to globally and locally explore the space of good building layouts.
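
A minimal data-structure sketch of the portal-graph idea (names and structure are my own illustration, not the thesis implementation): each node holds a local shape space, modeled here as a simple parameter box, and each portal edge carries a mapping that transfers a layout point into a neighboring local space.

```python
import random

class PortalGraph:
    """Toy portal graph (illustrative only): nodes are local shape spaces,
    modeled as axis-aligned boxes over layout parameters; portal edges carry
    a mapping that transfers a parameter point into the neighboring space."""

    def __init__(self):
        self.spaces = {}   # name -> (lower_bounds, upper_bounds)
        self.portals = {}  # name -> list of (neighbor_name, transfer_fn)

    def add_space(self, name, lower, upper):
        self.spaces[name] = (lower, upper)
        self.portals.setdefault(name, [])

    def add_portal(self, src, dst, transfer_fn):
        self.portals[src].append((dst, transfer_fn))

    def explore(self, start, point, steps, rng=random.Random(0)):
        """Random walk: jitter the point inside the current local space and
        occasionally cross a portal into a neighboring local space."""
        name = start
        for _ in range(steps):
            lower, upper = self.spaces[name]
            point = [min(max(p + rng.uniform(-0.1, 0.1), lo), hi)
                     for p, lo, hi in zip(point, lower, upper)]
            if self.portals[name] and rng.random() < 0.3:
                name, transfer = rng.choice(self.portals[name])
                point = transfer(point)
            yield name, point

g = PortalGraph()
g.add_space("layout_A", [0.0, 0.0], [1.0, 1.0])
g.add_space("layout_B", [0.0, 0.0], [2.0, 1.0])
g.add_portal("layout_A", "layout_B", lambda p: [p[0] * 2.0, p[1]])
for name, p in g.explore("layout_A", [0.5, 0.5], steps=5):
    print(name, [round(x, 2) for x in p])
```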

Finally, I propose an algorithm to computationally design street networks that balance competing requirements such as quick travel time and reduced through traffic in residential neighborhoods. The user simply provides high-level functional specifications for a target neighborhood, and my algorithm solves for both the connectivity and the geometric layout of the network so as to best satisfy that specification.
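
To make the competing requirements concrete, the following hedged sketch (my own illustrative objective, not the thesis algorithm) scores a candidate street network by combining shortest-path travel times with a penalty for traffic routed over streets tagged as residential.

```python
import networkx as nx

def score_network(G, origin_dest_pairs, through_traffic_penalty=2.0):
    """Score a candidate street network (illustrative objective only).

    G is an undirected graph whose edges carry a 'time' attribute (travel
    time) and a 'residential' flag. Lower scores are better: fast routes are
    rewarded, while routing through residential edges is penalized.
    """
    total_time, residential_hits = 0.0, 0
    for src, dst in origin_dest_pairs:
        path = nx.shortest_path(G, src, dst, weight="time")
        for u, v in zip(path, path[1:]):
            total_time += G[u][v]["time"]
            residential_hits += int(G[u][v].get("residential", False))
    return total_time + through_traffic_penalty * residential_hits

G = nx.Graph()
G.add_edge("A", "B", time=1.0, residential=False)   # arterial
G.add_edge("B", "C", time=1.0, residential=False)
G.add_edge("A", "C", time=1.5, residential=True)    # residential shortcut
print(score_network(G, [("A", "C")]))  # the shortcut is faster but penalized
```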

Date Created
  • 2014

Vectorization in analyzing 2D/3D data

Description

Vectorization is an important process in the fields of graphics and image processing. In computer-aided design (CAD), drawings are scanned, vectorized, and written as CAD files in a process called paper-to-CAD conversion or drawing conversion. In geographic information systems (GIS), satellite or aerial images are vectorized to create maps. In graphic design and photography, raster graphics can be vectorized for easier usage and resizing, and vector art is popular as online content. Vectorization takes raster images, point clouds, or scattered data samples in space and outputs graphic elements of various types, including points, lines, curves, polygons, parametric curves, and surface patches. The vectorized representations consist of a different set of components and elements than the inputs; this change of representation is the key difference between vectorization and practices such as smoothing and filtering. Compared to the inputs, the vector outputs provide a higher order of control and attributes such as smoothness; their curvatures or gradients are scale invariant, and they are more robust data sources for downstream applications and analysis.

This dissertation explores and broadens the scope of vectorization in various contexts. I propose a novel vectorization algorithm for raster images along with several new applications of the vectorization mechanism in processing and analyzing both 2D and 3D data sets. The main components of the research are: using vectorization to generate 3D models from 2D floor plans; a novel raster image vectorization method and its applications in computer vision, image processing, and animation; and vectorization for visualization and information extraction in 3D laser scan data. I also apply vectorization analysis to human body scans and rock surface scans to reveal insights otherwise difficult to obtain.
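
As a small, generic example of the change of representation described above (not the algorithm developed in the dissertation), the sketch below converts a dense sequence of 2D samples along a stroke into a compact polyline using Ramer-Douglas-Peucker simplification.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification (illustrative sketch).

    `points` is an ordered list of (x, y) samples along a curve; the result
    is a subset of those points whose polyline stays within `epsilon` of the
    input, i.e., a compact vector representation of the samples.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

samples = [(x / 10.0, round(math.sin(x / 10.0), 3)) for x in range(0, 63)]
print(len(samples), "->", len(rdp(samples, 0.01)))  # many samples -> few vertices
```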

Date Created
  • 2016

Land use and land cover classification using deep learning techniques

Description

Large datasets of sub-meter aerial imagery represented as orthophoto mosaics are widely available today, and these datasets may hold a great deal of untapped information. Such imagery has the potential to reveal several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies with many factors, including the time at which the image is captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured by the image. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible band multispectral imagery. Recent technological and commercial applications have driven the collection of massive amounts of VHR imagery in the visible red, green, blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible band VHR LULC classification may include applications such as automatic change detection and mapping.

Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited to geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and their accuracy levels are reported to show that the results generalize beyond the small benchmarks used in training. Deep networks have many parameters and are therefore usually trained with very large sets of labeled data. Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training data sets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a convolutional neural network (CNN) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
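
The transfer-learning setup described above can be sketched roughly as follows, using PyTorch/torchvision; the dataset path, backbone, class count, and hyperparameters are placeholder assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder of VHR image tiles arranged as one subfolder per LULC class.
data = datasets.ImageFolder(
    "lulc_tiles/",  # placeholder path
    transform=transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),   # flips suit overhead imagery
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from ImageNet weights and retrain ("refinement learning") for LULC classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # placeholder epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```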

Date Created
  • 2016

Smooth surfaces for video game development

Description

The video game graphics pipeline has traditionally rendered the scene using a polygonal approach. Advances in modern graphics hardware now allow the rendering of parametric methods. This thesis explores various smooth surface rendering methods that can be integrated into the video game graphics engine. Moving from the polygonal domain to parametric or smooth surfaces has its share of issues, and there is an inherent need to address various rendering bottlenecks that could hamper such a move. The game engine needs to choose an appropriate method based on in-game characteristics of the objects; character and animated objects need more sophisticated methods, whereas static objects can use simpler techniques. Scaling the polygon count across various hardware platforms becomes an important factor. Fine control over the tessellation levels, whether imposed by hardware limitations or by the application, is needed to adaptively render the mesh without significant loss in performance. This thesis explores several methods that help game engine developers make correct design choices by optimally balancing the trade-offs while rendering the scene using smooth surfaces. It proposes a novel technique for adaptive tessellation of triangular meshes that vastly improves speed and tessellation counts. It develops an approximate method for rendering Loop subdivision surfaces on tessellation-enabled hardware. A taxonomy and evaluation of the methods are provided, and a unified rendering system that provides automatic level of detail by switching between the methods is proposed.
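
The kind of adaptive control discussed above can be illustrated with a small sketch (my own simplification, not the thesis technique) that derives a per-edge tessellation factor from approximate screen-space edge length, the quantity a tessellation control (hull) shader would typically evaluate.

```python
def edge_tess_factor(p0, p1, view_projection, screen_height,
                     pixels_per_segment=16.0, max_factor=64.0):
    """Per-edge tessellation factor from approximate screen-space edge length
    (illustrative sketch; a real engine would compute this in a hull shader).

    p0, p1: edge endpoints in homogeneous world space (x, y, z, 1).
    view_projection: 4x4 row-major matrix as a list of lists.
    """
    def project(p):
        clip = [sum(view_projection[r][c] * p[c] for c in range(4)) for r in range(4)]
        w = clip[3] or 1e-6
        return clip[0] / w, clip[1] / w  # normalized device coordinates

    x0, y0 = project(p0)
    x1, y1 = project(p1)
    # NDC spans [-1, 1]; crudely convert the edge length to pixels.
    pixels = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * 0.5 * screen_height
    return max(1.0, min(max_factor, pixels / pixels_per_segment))

# Identity "camera": an edge 0.5 NDC units long on a 1080-pixel-tall screen.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(edge_tess_factor((0, 0, 0, 1), (0.5, 0, 0, 1), identity, 1080))
```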

Date Created
  • 2011

Lighting prediction and simulation in large nighttime urban scenes

Description

Night vision goggles (NVGs) are widely used by helicopter pilots for flight missions at night, but the equipment can present visually confusing images, especially in urban areas. A simulation tool with realistic nighttime urban images would help pilots practice and train for flight with NVGs. However, there is a lack of tools for visualizing urban areas at night, mainly due to the difficulties of gathering light system data, placing the light systems at suitable locations, and rendering millions of lights with complex light intensity distributions (LIDs). Unlike daytime scenes, a city at night can have millions of light sources, including street lights, illuminated signs, and light shed from building interiors through windows.

In this work, a Procedural Lighting (PL) tool, which predicts the positions and properties of street lights, is presented. The PL tool is used to accomplish three aims: (1) to generate vector data layers for geographic information systems (GIS) with statistically estimated information on lighting designs for streets, as well as the locations, orientations, and models of millions of street lights; (2) to generate geo-referenced raster data suitable for use as light maps that cover a large-scale urban area, so that the effect of millions of street lights can be accurately rendered in real time; and (3) to extend existing 3D models by generating detailed light maps that can be used as UV-mapped textures to render the models. An interactive graphical user interface (GUI) for configuring and previewing lights from a Light System Database (LDB) is also presented. The GUI includes physically accurate information about LIDs as well as the lights' spectral power distributions (SPDs), so that a light map can be generated for use with any sensor whose luminosity function is known. Finally, for areas where more detail is required, a tool has been developed for editing and visualizing light effects on a 3D building from many light sources, including area lights and windows. These components are integrated into the PL tool to produce nighttime urban views not only of a large-scale area but also of individual city buildings in detail.
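
The light-map baking step can be illustrated with a rough sketch (my own simplification; the cosine falloff stands in for a measured LID, and all units are placeholders, not the PL tool's): each street light contributes illuminance to ground cells according to an angular intensity term and inverse-square falloff.

```python
import math

def bake_light_map(lights, grid_size, cell_size):
    """Accumulate ground illuminance from point lights into a 2D grid
    (illustrative sketch only; a real light map would use measured LIDs/SPDs).

    Each light is (x, y, height, peak_intensity); intensity falls off with the
    cosine of the angle from straight down (a crude stand-in for a real LID)
    and with the inverse square of the distance to the ground cell.
    """
    rows, cols = grid_size
    light_map = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            gx, gy = (c + 0.5) * cell_size, (r + 0.5) * cell_size
            for lx, ly, h, intensity in lights:
                d2 = (gx - lx) ** 2 + (gy - ly) ** 2 + h * h
                cos_down = h / math.sqrt(d2)          # angle from nadir
                light_map[r][c] += intensity * cos_down / d2
    return light_map

# Two hypothetical street lights over a 10 m x 10 m patch at 1 m resolution.
lm = bake_light_map([(2.0, 2.0, 6.0, 8000.0), (8.0, 8.0, 6.0, 8000.0)], (10, 10), 1.0)
print(round(max(max(row) for row in lm), 1))
```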

Date Created
  • 2011

3D rooftop detection and modeling using orthographic aerial images

Description

Automatic detection of extruded features such as rooftops and trees in aerial images is a very active area of research. Elevated features identified from aerial imagery have potential applications in urban planning and in identifying cover for military or flight training. Detecting such features using commonly available geospatial data like orthographic aerial imagery is very challenging, because rooftop and tree textures are often camouflaged by similar-looking features such as roads, ground, and grass. Consequently, additional data such as LIDAR, multispectral imagery, and multiple viewpoints are exploited for more accurate detection. However, such data are often unavailable, improperly registered, or inaccurate. In this thesis, we discuss a novel framework that uses only orthographic images for the detection and modeling of rooftops. A segmentation scheme is proposed that first assigns foreground (rooftop) or background labels to certain pixels in the image based on shadows, and then employs GrabCut to assign one of those two labels to the remaining pixels based on the initial labeling. Parametric model fitting is performed on the segmented results in order to create a 3D scene and to facilitate roof-shape and height estimation. The framework can also benefit from additional geospatial data such as street maps and LIDAR, if available.
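
The shadow-seeded segmentation step can be sketched roughly as follows with OpenCV; the shadow-offset heuristic, thresholds, and labels here are placeholder assumptions, not the method developed in the thesis.

```python
import cv2
import numpy as np

def segment_rooftops(image_bgr, shadow_mask, shadow_offset=(15, 0)):
    """Rough rooftop segmentation seeded by shadows (illustrative sketch).

    Pixels inside detected shadows are marked definite background; pixels a
    fixed offset away from shadows (toward the presumed sun direction) are
    marked probable foreground; GrabCut then labels the remaining pixels.
    `shadow_offset` is a placeholder for the sun-direction displacement.
    """
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[shadow_mask > 0] = cv2.GC_BGD
    shifted = np.roll(shadow_mask, shift=shadow_offset, axis=(0, 1))
    mask[(shifted > 0) & (shadow_mask == 0)] = cv2.GC_PR_FGD

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```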

Date Created
  • 2013