Matching Items (8)

Description
Dimensional metrology is the branch of science that determines length, angular, and geometric relationships within manufactured parts and compares them with required tolerances. The measurements can be made using either manual methods or sampled coordinate metrology (coordinate measuring machines, CMMs). Manual measurement methods have been in practice for a long time and are well accepted in industry, but they are slow for present-day manufacturing. CMMs, on the other hand, are relatively fast, but their methods are not yet well established. The major problem that needs to be addressed is the type of feature fitting algorithm used for evaluating tolerances. On a CMM, applying different feature fitting algorithms to the same feature gives different values, and no standard prescribes which algorithm to use for a specific tolerance. Our research is focused on identifying the feature fitting algorithm best suited to each type of tolerance. Each algorithm is chosen to best represent the interpretation of geometric control as defined by the ASME Y14.5 standard and the manual methods used to measure that tolerance type. Using these algorithms, normative procedures for verifying tolerances on CMMs are proposed. The proposed normative procedures are implemented as software, and the procedures are then verified by comparing the software results with those of manual measurements.

To aid this research, a library of feature fitting algorithms is developed in parallel. The library consists of least-squares, Chebyshev, and one-sided fits applied to line, plane, circle, and cylinder features. The proposed normative procedures are useful for evaluating tolerances on CMMs: the evaluated results are in accordance with the standard, and the ambiguity in choosing among algorithms is removed. The software developed can be used for inspection in quality control.
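For context, a Gaussian (orthogonal) least-squares plane fit of the kind such a library would contain can be written in a few lines of numpy. The sketch below is illustrative only and is not the thesis's library code; the function name is hypothetical:

```python
import numpy as np

def fit_plane_least_squares(points):
    """Fit a plane to an (N, 3) array of measured points.

    Returns (centroid, unit_normal). The least-squares plane passes
    through the centroid; its normal is the singular direction of the
    centered data with the smallest singular value, i.e. the direction
    of least variance.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered point cloud; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example: points sampled from a slightly noisy horizontal plane.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 50),
                       rng.uniform(0, 10, 50),
                       0.01 * rng.standard_normal(50)])
c, n = fit_plane_least_squares(pts)
print("centroid:", c, "normal:", n)   # normal should be close to (0, 0, 1)
```

Chebyshev (minimax) and one-sided fits, by contrast, require constrained optimization rather than a closed-form solution, which is part of why different algorithm choices can yield different measured tolerance values.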
Contributors: Vemulapalli, Prabath (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Takahashi, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
The video game graphics pipeline has traditionally rendered the scene using a polygonal approach. Advances in modern graphics hardware now allow the rendering of parametric methods. This thesis explores various smooth surface rendering methods that can be integrated into a video game graphics engine. Moving from the polygonal domain to parametric or smooth surfaces has its share of issues, and the various rendering bottlenecks that could hamper such a move must be addressed. The game engine needs to choose an appropriate method based on in-game characteristics of the objects; character and animated objects need more sophisticated methods, whereas static objects can use simpler techniques. Scaling the polygon count across hardware platforms becomes an important factor. Much control is needed over the tessellation levels, whether imposed by hardware limitations or by the application, to adaptively render the mesh without significant loss in performance. This thesis explores several methods that help game engine developers make correct design choices by optimally balancing the trade-offs when rendering the scene using smooth surfaces. It proposes a novel technique for adaptive tessellation of triangular meshes that vastly improves speed and tessellation count. It develops an approximate method for rendering Loop subdivision surfaces on tessellation-enabled hardware. A taxonomy and evaluation of the methods are provided, and a unified rendering system that provides automatic level of detail by switching between the methods is proposed.
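For background, adaptive tessellation schemes commonly derive a per-edge tessellation factor from the projected screen-space length of the edge. The sketch below shows that generic heuristic, not the specific technique proposed in the thesis; all names and constants are illustrative:

```python
import numpy as np

def edge_tess_factor(p0, p1, view_proj, viewport_px,
                     px_per_segment=8.0, max_factor=64.0):
    """Pick a tessellation factor for one triangle edge.

    Projects both endpoints to screen space and allots roughly one
    subdivision segment per `px_per_segment` pixels of edge length,
    clamped to the hardware maximum.
    """
    def to_screen(p):
        clip = view_proj @ np.append(p, 1.0)     # homogeneous transform
        ndc = clip[:2] / clip[3]                 # perspective divide
        return 0.5 * (ndc + 1.0) * viewport_px   # NDC -> pixel coordinates

    length_px = np.linalg.norm(to_screen(p1) - to_screen(p0))
    return float(np.clip(length_px / px_per_segment, 1.0, max_factor))

# Example: a short edge seen through an identity "camera" on a 1080p target.
vp = np.eye(4)
f = edge_tess_factor(np.array([0.0, 0.0, 0.0]),
                     np.array([0.2, 0.0, 0.0]),
                     vp, np.array([1920.0, 1080.0]))
print(f"tess factor: {f:.1f}")   # distant/short edges get lower factors
```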
Contributors: Amresh, Ashish (Author) / Farin, Gerald (Thesis advisor) / Razdan, Anshuman (Thesis advisor) / Wonka, Peter (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Mixed reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM optionally operates across multiple mobile mixed-reality devices, leveraging collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimate, depending on the situational needs of the virtual scene and physical environment.

To evaluate the runtime performance and perceptual efficacy of the system, GLEAM was implemented in the Unity 3D game engine and deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms, or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference for GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled lighting user study with 18 participants revealed a general preference for policies that strike a balance between resolution and update rate.
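As a rough illustration of what such a quality-versus-freshness policy trade-off might look like in code (a hypothetical sketch, not GLEAM's actual implementation; the resolutions and intervals merely bracket the 15 ms and 200 ms figures above):

```python
from dataclasses import dataclass

@dataclass
class EstimationPolicy:
    """Trade-off between illumination quality and freshness.

    `face_resolution` is the per-face size of the estimated environment
    cubemap; `update_interval_ms` is how often the estimate is rebuilt.
    All values here are illustrative placeholders.
    """
    face_resolution: int
    update_interval_ms: float

def choose_policy(scene_is_dynamic: bool, reflective_materials: bool):
    """Pick a policy from the scene's situational needs.

    Fast-changing lighting favors a short update interval; mirror-like
    virtual materials favor a high-resolution environment map.
    """
    if scene_is_dynamic and not reflective_materials:
        return EstimationPolicy(face_resolution=16, update_interval_ms=15.0)
    if reflective_materials and not scene_is_dynamic:
        return EstimationPolicy(face_resolution=128, update_interval_ms=200.0)
    return EstimationPolicy(face_resolution=64, update_interval_ms=60.0)  # balance

print(choose_policy(scene_is_dynamic=True, reflective_materials=False))
```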
Contributors: Prakash, Siddhant (Author) / LiKamWa, Robert (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2018

Description
Metal castings are selectively machined based on dimensional control requirements. To ensure that all the finished surfaces are fully machined, each as-cast part needs to be measured and then adjusted optimally in its fixture. The topics of this thesis address two parts of this process: data translation and feature fitting of point clouds measured on each cast part. For the first, a CAD model of the finished part must be communicated to the machine shop for performing the various machining operations on the metal casting. The data flow must include GD&T specifications along with any special notes required to communicate with the machinist. Current data exchange among digital applications is limited to translation of CAD geometry alone via STEP AP203. Therefore, an algorithm is developed to read, store, and translate the data from a CAD file (for example, SolidWorks or CREO) to a standard, machine-readable format (ACIS format - *.sat).

Second, the geometry of cast parts varies from piece to piece, and hence fixture set-up parameters for each part must be adjusted individually. To predictively determine these adjustments, the datum surfaces and to-be-machined surfaces are scanned individually and the point clouds reduced to feature fits. The scanned data are stored as separate point cloud files. The labels associated with the datum and to-be-machined (TBM) features are extracted from the *.sat file. These labels are then matched with the file names of the point cloud data to identify the data for the respective features. The point cloud data and the CAD model are then used to fit the appropriate features (features at maximum material condition (MMC) for datums and features at least material condition (LMC) for TBM features) using the existing normative feature fitting (nFF) algorithm. Once the feature fitting is complete, a global datum reference frame (GDRF) is constructed based on the locating method that will be used to machine the part. The locating method is extracted from a fixture library that specifies the type of fixturing used to machine the part. All entities are transformed from their local coordinate systems into the GDRF. The nominal geometry, fitted features, and GD&T information are then stored in a neutral file format called the Constraint Tolerance Feature (CTF) graph. The final outputs are used to identify the locations of the critical features on each part; these locations establish the adjustments for the part's setup prior to machining, in a separate module that is not part of this thesis.
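As a small illustration of the final transformation step, a fitted feature carried by a point and a direction (a plane's normal or a cylinder's axis) can be moved from its local scan frame into the GDRF with a rigid map. This sketch is hypothetical and not the thesis's code:

```python
import numpy as np

def transform_feature(R, t, point, direction):
    """Express a fitted feature in the global datum reference frame.

    A fitted plane or cylinder is carried by a point and a unit
    direction; a rigid map (R, t) from the local scan frame to the
    GDRF moves the point affinely and rotates the direction.
    """
    point = R @ np.asarray(point, float) + t
    direction = R @ np.asarray(direction, float)
    return point, direction / np.linalg.norm(direction)

# Example: a datum-plane fit from one scan, rotated 90 degrees about z
# and shifted along x into the global frame (numbers are illustrative).
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t = np.array([100.0, 0.0, 0.0])
p, n = transform_feature(Rz, t, point=[5.0, 0.0, 0.0],
                         direction=[0.0, 0.0, 1.0])
print("GDRF point:", p, "GDRF normal:", n)
```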
Contributors: Ramnath, Satchit (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
Structural magnetic resonance imaging analysis is a vital component in the study of Alzheimer's disease pathology, and several analysis techniques have been developed in prior research. In particular, volumetric approaches in this field are known to be beneficial because they express morphological characteristics better than manifold methods. To aid in the improvement of the field, this paper proposes an intrinsic volumetric conic system that can be applied to bounded volumetric meshes to enable more effective study of subjects. The computation of the metric involves the use of heat kernel theory and conformal parameterization on genus-0 surfaces, extended to a volumetric domain. Additionally, this paper explores the use of the 'TetCNN' architecture for the classification of hippocampal tetrahedral meshes to detect features that correspond to Alzheimer's indicators. The tested model achieved a classification accuracy above 90% in the task of differentiating between subjects diagnosed with Alzheimer's and normal control subjects.
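For background on the heat kernel machinery referenced above (not the thesis's implementation), the discrete heat kernel on a mesh can be assembled from the eigenpairs of a Laplace-Beltrami operator. The toy example below uses a small path-graph Laplacian as a stand-in for the mesh operator:

```python
import numpy as np

def heat_kernel(eigvals, eigvecs, t):
    """Heat kernel K_t = sum_i exp(-lambda_i * t) * phi_i phi_i^T.

    `eigvals` (k,) and `eigvecs` (n, k) are eigenpairs of a discrete
    Laplace-Beltrami operator on the mesh; `t` is the diffusion time.
    Returns the (n, n) kernel matrix on the mesh vertices.
    """
    weights = np.exp(-eigvals * t)
    return (eigvecs * weights) @ eigvecs.T   # scale columns, recombine

# Example on a tiny path-graph Laplacian standing in for a mesh operator.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
vals, vecs = np.linalg.eigh(L)
K = heat_kernel(vals, vecs, t=0.5)
print(K.round(3))   # each row sums to 1 for any t: heat is conserved
```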
Contributors: George, John Varghese (Author) / Wang, Yalin (Thesis advisor) / Hansford, Dianne (Committee member) / Gupta, Vikash (Committee member) / Arizona State University (Publisher)
Created: 2023

Description
This dissertation constructs a new computational processing framework to robustly and precisely quantify retinotopic maps based on their angle-distortion properties. More generally, this framework solves the problem of how to robustly and precisely quantify (angle) distortions of noisy or incomplete boundary-enclosed 2-dimensional surface-to-surface mappings. The framework builds upon the Beltrami Coefficient (BC) description of quasiconformal mappings, which directly quantifies local mapping distortions (circles to ellipses) between diffeomorphisms of boundary-enclosed plane domains homeomorphic to the unit disk. A new map, the Beltrami Coefficient Map (BCM), was constructed to describe distortions in retinotopic maps; the BCM can be used to fully reconstruct the original target surface (retinal visual field) of a retinotopic map.

This dissertation also compared retinotopic maps in the visual processing cascade, a series of connected retinotopic maps responsible for processing the visual data of physical images captured by the eyes. By comparing BCM results from a large Human Connectome Project (HCP) retinotopic dataset (N=181), a new computational quasiconformal description of the transformed retinal image as it passes through the cascade is proposed, one not present in the current literature. Applied to the HCP data, the description provided directly visible and quantifiable geometric properties of the cascade that had not been observed before.

Because retinotopic maps are generated from noisy in vivo functional magnetic resonance imaging (fMRI), quantifying them comes with a certain degree of uncertainty. To quantify the uncertainties in the quantification results, it is necessary to generate statistical models of retinotopic maps from their BCMs and raw fMRI signals. Because estimating retinotopic maps from real noisy fMRI time series data using the population receptive field (pRF) model is a time-consuming process, a convolutional neural network (CNN) was constructed and trained to predict pRF model parameters from real noisy fMRI data.
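For background, the Beltrami coefficient of a planar map f = u + iv is mu = f_zbar / f_z, computed from the Wirtinger derivatives. The numpy sketch below (illustrative, not the dissertation's code) evaluates it on a grid:

```python
import numpy as np

def beltrami_coefficient(u, v, dx=1.0, dy=1.0):
    """Pointwise Beltrami coefficient mu of a planar map f = u + i*v.

    Uses the Wirtinger derivatives
        f_z    = ((u_x + v_y) + i*(v_x - u_y)) / 2
        f_zbar = ((u_x - v_y) + i*(v_x + u_y)) / 2
    and returns mu = f_zbar / f_z. For an orientation-preserving
    quasiconformal map, |mu| < 1 everywhere; mu = 0 where f is conformal.
    """
    u_y, u_x = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    v_y, v_x = np.gradient(v, dy, dx)
    f_z = 0.5 * ((u_x + v_y) + 1j * (v_x - u_y))
    f_zbar = 0.5 * ((u_x - v_y) + 1j * (v_x + u_y))
    return f_zbar / f_z

# Example: the stretch f(x + iy) = 2x + iy maps circles to 2:1 ellipses
# and has constant Beltrami coefficient mu = 1/3.
y, x = np.mgrid[0:32, 0:32].astype(float)
mu = beltrami_coefficient(2.0 * x, y)
print(np.allclose(mu, 1.0 / 3.0))   # True
```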
Contributors: Ta, Duyan Nguyen (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Hansford, Dianne (Committee member) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
In the last decade, the immense growth of computational power, enhanced data storage capabilities, and the increasing popularity of online learning systems have made adaptive learning systems more widely available. Parallel to these infrastructure enhancements, more researchers have begun to study adaptive task selection systems, concluding that suggesting tasks appropriate to students' needs may increase their learning gains.

This work built an adaptive task selection system for undergraduate organic chemistry students using a deep learning algorithm. The proposed model is based on a recurrent neural network (RNN) architecture built with Long Short-Term Memory (LSTM) cells that recommends organic chemistry practice questions to students based on their previous question selections.

For this study, educational data were collected from the Organic Chemistry Practice Environment (OPE), which is used in the organic chemistry course at Arizona State University. The OPE has more than three thousand questions. Each question is linked to one or more knowledge components (KCs) to enable recommendations that precisely address the knowledge students need; subject matter experts made the connections between questions and their related KCs.

A linear model derived from students' exam results was used to identify skilled students. The neural-network-based recommendation system was trained on those skilled students' problem-solving attempt sequences, so that the trained system recommends questions likely to improve learning gains the most. The model was evaluated by measuring prediction accuracy against learners' actual task selections. The proposed model accurately predicted not only the learners' actual task selections but also the correctness of their answers.
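A minimal sketch of this class of architecture, assuming a question-ID vocabulary and a next-item objective (illustrative PyTorch, not the thesis's actual model or hyperparameters):

```python
import torch
import torch.nn as nn

class QuestionRecommender(nn.Module):
    """Next-question prediction over students' selection sequences.

    Each question ID is embedded, an LSTM summarizes the attempt
    history, and a linear head scores every question in the bank as
    the candidate next selection.
    """
    def __init__(self, num_questions, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_questions, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_questions)

    def forward(self, question_ids):          # (batch, seq_len) int64
        h, _ = self.lstm(self.embed(question_ids))
        return self.head(h[:, -1])            # scores for the next question

# Example: score candidates after a hypothetical 5-question history.
model = QuestionRecommender(num_questions=3000)
history = torch.randint(0, 3000, (1, 5))
scores = model(history)
recommended = scores.topk(k=3).indices        # top-3 recommended questions
print(recommended)
```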
Contributors: Koseler Emre, Refika (Author) / VanLehn, Kurt A. (Thesis advisor) / Davulcu, Hasan (Committee member) / Hsiao, Sharon (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
While significant qualitative, user-study-focused research has been done on augmented reality, relatively few studies have examined multiple co-located, synchronously collaborating users in augmented reality. Recognizing the need for more collaborative user studies in augmented reality and the value such studies present, a user study of collaborative decision-making in augmented reality was conducted to investigate the following research question: "Does presenting data visualizations in augmented reality influence the collaborative decision-making behaviors of a team?" The study evaluates how viewing data visualizations with augmented reality headsets impacts collaboration in small teams, compared against a baseline of viewing together on a single 2D desktop monitor. Teams of two participants performed closed- and open-ended evaluation tasks to collaboratively analyze data visualized both in augmented reality and on a desktop monitor. Multiple means of collecting and analyzing data were employed to develop a well-rounded context for results and conclusions, including software logging of participant interactions, qualitative analysis of video recordings of participant sessions, and pre- and post-study participant questionnaires. The results indicate that augmented reality does not significantly change the quantity of team-member communication but does impact the means and strategies participants use to collaborate.
Contributors: Kintscher, Michael (Author) / Bryan, Chris (Thesis advisor) / Amresh, Ashish (Thesis advisor) / Hansford, Dianne (Committee member) / Johnson, Erik (Committee member) / Arizona State University (Publisher)
Created: 2020