Matching Items (12)

Description
Over 2 billion people are using online social network services, such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post their photos, share their information, and chat with others on these social network sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing information on Facebook. An application named "Visual Friends Income Map" has been created on Facebook to collect social network data and explore geodemographic properties that link it to publicly available data, such as US census data. Multiple predictors are implemented to link the data sets and extrapolate missing Facebook information with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age-based and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone records of all people in the US with the same surname as the user to create a likelihood distribution of the user's location. This is combined with the user's IP-level information in order to narrow the probability estimate down to a local regional constraint.
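
As an illustration of the surname-based location predictor described above, here is a minimal sketch of a kernel density estimate over phone-record coordinates combined with an IP-derived regional constraint. This is not the thesis's code; the Gaussian kernel, the array layout, and all names (surname_location_density, predict_location, ip_region_mask) are assumptions made for illustration.

```python
import numpy as np

def surname_location_density(phone_coords, query, bandwidth=1.0):
    """Gaussian kernel density estimate at `query` (lat, lon), built from
    an (N, 2) array of coordinates of phone-record entries that share the
    user's surname. Hypothetical data layout."""
    sq_dist = np.sum((phone_coords - query) ** 2, axis=1)
    kernels = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    return kernels.mean() / (2.0 * np.pi * bandwidth ** 2)

def predict_location(phone_coords, candidate_cells, ip_region_mask, bandwidth=1.0):
    """Score each candidate grid cell by the surname KDE, zero out cells
    outside the region implied by the user's IP, and return the best cell."""
    scores = np.array([surname_location_density(phone_coords, c, bandwidth)
                       for c in candidate_cells])
    scores[~ip_region_mask] = 0.0  # apply the IP-level regional constraint
    return candidate_cells[np.argmax(scores)]
```
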
Contributors: Mao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the widely studied problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; the solution proposed in this document instead uses Microsoft's general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in real time. The solution presented here uses the Smoothed Particle Hydrodynamics algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing the overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh as defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at rates of 900 and 400 frames per second, excluding and including the Marching Cubes surface reconstruction steps, respectively.
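
The density evaluation at the heart of an SPH step can be pictured with a short CPU sketch; the thesis implements this per-particle loop as a DirectCompute shader, and the poly6 smoothing kernel used here is a common choice in real-time SPH rather than something the abstract specifies.

```python
import numpy as np

def sph_density(positions, masses, h):
    """Brute-force SPH density: rho_i = sum_j m_j * W(|x_i - x_j|, h),
    using the poly6 smoothing kernel (an assumption; the abstract does not
    name its kernel). A compute shader would run one particle per thread
    instead of this O(N^2) Python loop."""
    poly6 = 315.0 / (64.0 * np.pi * h ** 9)
    rho = np.zeros(len(positions))
    for i in range(len(positions)):
        r2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.where(r2 < h * h, poly6 * (h * h - r2) ** 3, 0.0)
        rho[i] = np.sum(masses * w)
    return rho
```
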
Contributors: Figueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Conformance of a manufactured feature to the applied geometric tolerances is checked by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud and the results are assessed to see whether the fitted feature lies within the specified tolerance limits or not. Coordinate Measuring Machines (CMMs) use feature fitting algorithms that incorporate least square estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This paper addresses linear, planar, circular, and cylindrical features only. The set of algorithms described here conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular, and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. The algorithms are divided into three major categories: least square, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification; secondary datums require one-sided constrained fits. For size and other tolerance verifications, both unconstrained and constrained fits are required.
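
For the least square category mentioned above, a planar feature fit has a standard closed form via the singular value decomposition; the following sketch assumes that formulation (the library's actual implementation details are not given in the abstract).

```python
import numpy as np

def fit_plane_least_squares(points):
    """Least-squares plane through an (N, 3) point cloud: the plane passes
    through the centroid, and its normal is the singular vector of the
    centered points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # direction of least variance
    return centroid, normal

def plane_deviations(points, centroid, normal):
    """Signed point-to-plane distances, the quantities compared against
    tolerance limits during verification."""
    return (points - centroid) @ normal
```
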
Contributors: Mohan, Prashant (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis focuses on generating and exploring design variations for architectural and urban layouts. I propose to study this general problem in three selected contexts.

First, I introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a certain size.

Second, I propose a method for a user to understand and systematically explore good building layouts. Starting from a discrete set of good layouts, I analytically characterize the local shape space of good layouts around each initial layout, compactly encode these spaces, and link them to support transitions across the different local spaces. I represent such transitions in the form of a portal graph. The user can then use the portal graph, along with the family of local shape spaces, to globally and locally explore the space of good building layouts.

Finally, I propose an algorithm to computationally design street networks that balance competing requirements such as quick travel time and reduced through traffic in residential neighborhoods. The user simply provides high-level functional specifications for a target neighborhood, while my algorithm best satisfies the specification by solving for both connectivity and geometric layout of the network.
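
The portal graph from the second part above can be pictured as a small data structure in which each node stores a compact local shape space and each edge maps parameters of one local space into an adjacent one. The sketch below is purely illustrative; the class and method names are hypothetical and the actual encoding in the thesis is richer.

```python
class PortalGraph:
    """Illustrative sketch of portal-graph exploration: nodes hold local
    shape spaces (here just a parameter box around one good layout), and
    directed edges ('portals') map parameters between adjacent spaces."""

    def __init__(self):
        self.spaces = {}   # node id -> (center_layout, param_bounds)
        self.portals = {}  # (src, dst) -> callable: src params -> dst params

    def add_space(self, node_id, center_layout, param_bounds):
        self.spaces[node_id] = (center_layout, param_bounds)

    def add_portal(self, src, dst, mapping):
        self.portals[(src, dst)] = mapping

    def cross(self, src, dst, params):
        """Transition from one local shape space into a neighboring one."""
        return self.portals[(src, dst)](params)
```
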
Contributors: Bao, Fan (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Razdan, Anshuman (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Dimensional metrology is the branch of science that determines length, angular, and geometric relationships within manufactured parts and compares them with required tolerances. The measurements can be made using either manual methods or sampled coordinate metrology (coordinate measuring machines). Manual measurement methods have been in practice for a long time and are well accepted in industry, but they are slow for present-day manufacturing. CMMs, on the other hand, are relatively fast, but their methods are not yet well established. The major problem that needs to be addressed is the type of feature fitting algorithm used for evaluating tolerances. In a CMM, applying different feature fitting algorithms to the same feature gives different values, and there is no standard that prescribes the type of feature fitting algorithm to be used for a specific tolerance. Our research is focused on identifying the feature fitting algorithm best suited to each type of tolerance. Each algorithm is chosen as the one that best represents the interpretation of geometric control as defined by the ASME Y14.5 standard and the manual methods used to measure that tolerance type. Using these algorithms, normative procedures for verifying tolerances on CMMs are proposed. The proposed normative procedures are implemented as software, and the procedures are then verified by comparing the results from the software with those of manual measurements.

To aid this research, a library of feature fitting algorithms is developed in parallel. The library consists of least squares, Chebyshev, and one-sided fits applied to line, plane, circle, and cylinder features. The proposed normative procedures are useful for evaluating tolerances on CMMs; the evaluated results are in accordance with the standard, and the ambiguity in choosing algorithms is removed. The software developed can be used in quality control for inspection purposes.
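
Among the fit types in the library, the Chebyshev (minimax) fit has a convenient linear-programming formulation in the 2D line case, minimizing the largest residual t of y = a + b*x. The sketch below assumes SciPy's linprog; it illustrates the fit type, not the library's own implementation.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_line_fit(x, y):
    """Minimax fit of y = a + b*x: minimize t subject to
    |y_i - a - b*x_i| <= t, posed as an LP in the variables (a, b, t)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    ones = np.ones((n, 1))
    # Inequalities: a + b*x_i - t <= y_i  and  -a - b*x_i - t <= -y_i
    A_ub = np.block([[ones, x[:, None], -ones],
                     [-ones, -x[:, None], -ones]])
    b_ub = np.concatenate([y, -y])
    c = np.array([0.0, 0.0, 1.0])  # objective: minimize t only
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    a, b, t = res.x
    return a, b, t  # intercept, slope, maximum deviation
```
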
Contributors: Vemulapalli, Prabath (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Takahashi, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Quad-dominant (QD) meshes, i.e., three-dimensional, 2-manifold polygonal meshes comprising mostly four-sided faces (i.e., quads), are a popular choice for many applications such as polygonal shape modeling, computer animation, base meshes for spline and subdivision surfaces, simulation, and architectural design. This thesis investigates the topic of connectivity control, i.e., exploring different choices of mesh connectivity to represent the same 3D shape or surface. One key concept of QD mesh connectivity is the distinction between regular and irregular elements: a vertex with valence 4 is regular; otherwise, it is irregular. In a similar sense, a face with four sides is regular; otherwise, it is irregular. For QD meshes, the placement of irregular elements is especially important, since it largely determines the achievable geometric quality of the final mesh.

Traditionally, the research on QD meshes focuses on the automatic generation of pure quadrilateral or QD meshes from a given surface. Explicit control of the placement of irregular elements can only be achieved indirectly. To fill this gap, in this thesis, we make the following contributions. First, we formulate the theoretical background about the fundamental combinatorial properties of irregular elements in QD meshes. Second, we develop algorithms for the explicit control of irregular elements and the exhaustive enumeration of QD mesh connectivities. Finally, we demonstrate the importance of connectivity control for QD meshes in a wide range of applications.
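
The regular/irregular classification used throughout the abstract is easy to make concrete: given faces as lists of vertex indices, count face sides and vertex valences. A minimal sketch (ignoring the mesh boundary, where the regular valence differs) follows.

```python
from collections import defaultdict

def classify_elements(faces):
    """Count irregular faces (not four-sided) and irregular vertices
    (valence != 4) in a quad-dominant mesh given as lists of vertex
    indices per face. Boundary effects are ignored for simplicity."""
    valence = defaultdict(int)
    irregular_faces = 0
    for face in faces:
        if len(face) != 4:
            irregular_faces += 1
        for v in face:
            valence[v] += 1
    irregular_vertices = sum(1 for d in valence.values() if d != 4)
    return irregular_faces, irregular_vertices

# Example: one quad plus one triangle sharing an edge.
print(classify_elements([[0, 1, 2, 3], [1, 4, 2]]))  # -> (1, 5)
```
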
Contributors: Peng, Chi-Han (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Farin, Gerald (Committee member) / Razdan, Anshuman (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Detecting anatomical structures, such as the carina, the pulmonary trunk, and the aortic arch, is an important step in designing a CAD system for detecting pulmonary embolism. The presented CAD system dispenses with high-level, predefined knowledge, making it a system that can easily be extended to detect other anatomic structures. The system is based on a machine learning algorithm, AdaBoost, and a general feature type, Haar features. This study emphasizes off-line and on-line AdaBoost learning, and for on-line AdaBoost the thesis further deals with extremely imbalanced conditions. The thesis first reviews several knowledge-based detection methods, which rely on human understanding of the relationships between anatomic structures. It then introduces classic off-line AdaBoost learning and applies a different cascading scheme, namely a multi-exit cascade; a comparison between the two methods is provided and discussed. Both off-line AdaBoost methods have problems with memory usage and training time: they must store all the training samples, the dataset must be fixed before training and cannot be enlarged dynamically, and a different training dataset requires retraining the whole process, which is very time consuming and often not realistic. To deal with these shortcomings of off-line learning, the study exploits an on-line AdaBoost learning approach. The thesis proposes a novel pool-based on-line method with Kalman filters and histograms to better represent the distribution of the samples' weights. Analyses of the performance, the stability, and the computational complexity are provided in the thesis. Furthermore, the original on-line AdaBoost performs badly in imbalanced conditions, which occur frequently in medical image processing: in image datasets, positive samples are limited while negative samples are countless. A novel Self-Adaptive Asymmetric On-line Boosting method is therefore presented. The method utilizes a new asymmetric loss criterion that adapts to the ratio of exposed positive and negative samples, and it has an advanced rule for updating a sample's importance weight that takes account of both the classification result and the sample's label. Compared to the traditional on-line AdaBoost learning method, the new method achieves far better accuracy in imbalanced conditions.
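
The abstract's asymmetric weight-update rule is specific to the thesis; the sketch below only conveys the flavor of such a rule, scaling the exponential up- and down-weighting by the running negative-to-positive ratio so that scarce positives carry more weight. All details here are assumptions.

```python
import math

class AsymmetricWeightUpdate:
    """Illustrative on-line importance-weight update (not the thesis's exact
    rule): misclassified samples are up-weighted and correctly classified
    samples are down-weighted, with the step size for positives amplified
    by the running negative-to-positive ratio of the sample stream."""

    def __init__(self):
        self.n_pos = 0
        self.n_neg = 0

    def update(self, weight, label, prediction, step=0.5):
        # Track how imbalanced the sample stream has been so far.
        if label > 0:
            self.n_pos += 1
        else:
            self.n_neg += 1
        imbalance = (self.n_neg + 1) / (self.n_pos + 1)
        # Asymmetric loss: steps on the rare positive class are amplified.
        k = step * imbalance if label > 0 else step
        if label == prediction:
            return weight * math.exp(-k)  # correct: down-weight
        return weight * math.exp(k)       # wrong: up-weight
```
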
Contributors: Wu, Hong (Author) / Liang, Jianming (Thesis advisor) / Farin, Gerald (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The Dual Marching Tetrahedra (DMT) algorithm is a generalization of the Dual Marching Cubes algorithm, used to build a boundary surface around points that have been assigned a particular scalar density value, such as the data produced by a Magnetic Resonance Imaging or Computed Tomography scanner. This boundary acts as a skin between points that are determined to be "inside" and "outside" of an object. However, the DMT is vague with regard to exactly where each vertex of the boundary should be placed, which will not necessarily produce smooth results. Mesh smoothing algorithms that ignore the DMT data structures may distort the output mesh so that it incorrectly includes or excludes density points. Thus, an algorithm is presented here which is designed to smooth the output mesh while obeying the underlying data structures of the DMT algorithm.
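
A minimal sketch of the idea, with an axis-aligned box per vertex standing in for the thesis's actual DMT-derived constraint region: each Laplacian smoothing step pulls a vertex toward its neighbors' centroid, then clamps it back into its cell so the surface cannot drift across density points.

```python
import numpy as np

def constrained_laplacian_smooth(verts, neighbors, cell_min, cell_max,
                                 alpha=0.5, iters=10):
    """Laplacian smoothing of a DMT output mesh where each vertex stays
    inside an axis-aligned box around its originating cell (a stand-in for
    the thesis's data-structure-aware constraint). `neighbors[i]` lists the
    indices of the vertices adjacent to vertex i."""
    v = verts.copy()
    for _ in range(iters):
        for i, nbrs in enumerate(neighbors):
            centroid = v[nbrs].mean(axis=0)
            moved = v[i] + alpha * (centroid - v[i])  # pull toward neighbors
            v[i] = np.clip(moved, cell_min[i], cell_max[i])  # stay in cell
    return v
```
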
Contributors: Johnson, Sean (Author) / Farin, Gerald (Thesis advisor) / Richa, Andrea (Committee member) / Nallure Balasubramanian, Vineeth (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Manufacturing tolerance charts are mostly used these days for manufacturing tolerance transfer, but they have the limitation of being one-dimensional only. Some research has been undertaken on three-dimensional geometric tolerances, but it is too theoretical and not yet ready for operator-level usage. In this research, a new three-dimensional model for tolerance transfer in manufacturing process planning is presented that is user friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any decent manufacturing facility. This model can take care of datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++, and MFC is laid out to facilitate its implementation for automation of the model. A totally new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for statistical tolerance analysis, based upon the joint probability distribution of trivariate normally distributed variables, is presented. 4D probability maps have been developed in which the probability value of a point in space is represented by the size of the marker and the associated color. Points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerances is highlighted by comparing the resulting pass percentage with the pass percentage for size tolerance only. Delaunay triangulation and ray tracing algorithms have been used to automate the process of identifying the points inside and outside the part map. Proof-of-concept software has been implemented to demonstrate this model and to determine pass percentages for various cases. The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. A map generated by using Minkowski sum techniques on the individual part maps is superimposed on the probability point cloud resulting from convolution. Delaunay triangulation and ray tracing algorithms are employed to determine the assembleability percentages for the assembly.
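
The pass-percentage computation can be pictured with a Monte Carlo sketch: sample deviations from the trivariate normal and count the fraction that falls inside the tolerance zone. A spherical zone and the numbers below are hypothetical stand-ins for the part map that the thesis tests with Delaunay triangulation and ray tracing.

```python
import numpy as np

def pass_percentage(mean, cov, zone_radius, n_samples=100_000, seed=0):
    """Monte Carlo pass percentage: draw deviations from a trivariate
    normal and count how many land inside a spherical tolerance zone."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    inside = np.linalg.norm(samples, axis=1) <= zone_radius
    return 100.0 * inside.mean()

# Example: independent deviations with sigma = 0.01 per axis and a
# spherical zone of radius 0.02 (hypothetical values).
print(pass_percentage(np.zeros(3), np.eye(3) * 1e-4, 0.02))
```
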
Contributors: Khan, M Nadeem Shafi (Author) / Phelan, Patrick E (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Farin, Gerald (Committee member) / Roberts, Chell (Committee member) / Henderson, Mark (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Current trends in Computer Aided Engineering (CAE) involve the integration of legacy mesh-based finite element software with newer solid-modeling kernels or full CAD systems in order to simplify laborious or highly specialized tasks in engineering analysis. In particular, mesh generation is becoming increasingly automated. In addition, emphasis is increasingly placed on full assembly (multi-part) models, which in turn necessitates an automated approach to contact analysis. This task is challenging due to increases in algebraic system size, as well as increases in the number of distorted elements, both of which necessitate manual intervention to maintain accuracy and conserve computer resources. In this investigation, it is demonstrated that the use of a mesh-free B-spline finite element basis for structural contact problems results in significantly smaller algebraic systems than mesh-based approaches for similar grid spacings. The relative error in calculated contact pressure is evaluated for simple two-dimensional smooth domains at discrete points within the contact zone and compared to the analytical Hertz solution, as well as to traditional mesh-based finite element solutions for similar grid spacings. For smooth curved domains, the relative error in contact pressure is shown to be less than that for bi-quadratic Serendipity elements. The finite element formulation draws on some recent innovations, in which the domain to be analyzed is integrated with the use of transformed Gauss points within the domain, and boundary conditions are applied via distance functions (R-functions). However, the basis is stabilized through a novel selective normalization procedure. In addition, a novel contact algorithm is presented in which the B-spline support grid is re-used for contact detection. The algorithm is demonstrated for two simple two-dimensional assemblies. Finally, a modified Penalty Method is demonstrated for connecting elements with incompatible bases.
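
The B-spline basis underlying the formulation can be evaluated with the standard Cox-de Boor recursion, sketched below; the thesis's selectively normalized, stabilized basis builds on such functions but is not reproduced here.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of
    degree p at parameter u over the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

# Example: quadratic basis on an open uniform knot vector.
print(bspline_basis(2, 2, 0.5, [0, 0, 0, 0.5, 1, 1, 1]))  # -> 0.5
```
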
Contributors: Grishin, Alexander (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joe (Committee member) / Hjelmstad, Keith (Committee member) / Huebner, Ken (Committee member) / Farin, Gerald (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2010