Matching Items (31)

Optimal Co-Design of Structure Topology and Sensor Deployment for Balanced System Performance and Observability

Description

As technology grows more capable, its purposes become multifaceted: a single design must satisfy multiple requirements rather than just one. High-speed airplane wings, for example, must be strong enough to withstand high loads, light enough for the aircraft to fly, and thermally conductive enough to survive high temperatures. Two design objectives in particular, structural topology and sensor deployment, matter for structures such as robots that depend on accurate sensor readings, a property known as observability. To explore how these two dissimilar objectives interact, this project sought an optimal balance between them: a design that remains strong and light while also being monitored by sensors with a high degree of accuracy. The main focus of the project was to compare the levels of observability produced by two known sources of input estimation error. The first system uses a structure topologically optimized for compliance minimization, which increases input estimation error. The second system places sensors at random within the structure, where input estimation error grows as the average distance from load to sensor increases. These two effects on observability were compared to determine which was more direct. The main finding was that changes in topology had a much more direct effect on observability than changes in sensor placement. Results also show that theoretical input estimation time is significantly reduced compared to previous systems.
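
One standard way to quantify the observability compared here is the observability Gramian of a linearized system; a larger Gramian means sensor readings carry more information about the internal states. The toy 2-state system and sensor matrices below are illustrative assumptions, not taken from the thesis:

```python
# Toy sketch: comparing observability of two sensor placements via the
# finite-horizon discrete-time observability Gramian
#   W_o = sum_k (A^T)^k C^T C A^k.
# The dynamics A and sensor matrices C below are invented examples.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def gramian_trace(A, C, horizon=50):
    """Trace of the finite-horizon observability Gramian (larger = more observable)."""
    n = len(A)
    W = [[0.0] * n for _ in range(n)]
    Ak = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0
    for _ in range(horizon):
        term = mat_mul(mat_mul(transpose(Ak), transpose(C)), mat_mul(C, Ak))
        W = [[W[i][j] + term[i][j] for j in range(n)] for i in range(n)]
        Ak = mat_mul(A, Ak)  # advance to A^(k+1)
    return sum(W[i][i] for i in range(n))

A = [[0.9, 0.1], [0.0, 0.8]]   # stable toy dynamics
C_near = [[1.0, 0.0]]          # sensor measuring the first state directly
C_far = [[0.0, 0.2]]           # weakly coupled sensor placement

print(gramian_trace(A, C_near) > gramian_trace(A, C_far))  # the direct sensor wins
```

Comparing Gramian traces across candidate designs is one way to rank how strongly topology or sensor placement changes affect observability.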

Date Created
  • 2018-05

Big Data Analytics for Pipe Damage and Risk Identification

Description

In this thesis, Inception V3, a convolutional neural network model from Google, was partially retrained to categorize pipeline images based on their damage modes. Images of the pipeline's different damage modes were simulated in MATLAB to represent image data collected from in-line pipe inspection. The final convolutional layer of the model was retrained with the simulated pipeline images using TensorFlow as the base platform. First, a small-scale retraining was done with real and simulated images to compare the differences in performance. Then, using simulated images, a 2^5 full factorial design of experiment and individual parametric studies were performed on five chosen parameters: training steps, learning rate, batch size, training data size, and image noise. The effect of each parameter on the performance of the model was evaluated and analyzed. Due to the nature of the experiment, these findings may not transfer to neural network models trained for other tasks. After analyzing the results, the effects and trade-offs of each parameter are discussed in detail. In addition, a method of predicting the training time was proposed. Based on the findings, an optimized model was proposed for this training exercise, with 1180 training steps, a learning rate of 0.01, a batch size of 100, and a training data set of 200 images. The optimized model reached 87.2% accuracy with a training time of 2 minutes and 6 seconds. This study enhances our understanding of how to apply machine learning techniques to damage and risk identification.
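
The thesis's own training-time prediction method is not detailed here, but one simple way such a predictor can be built is a least-squares fit of measured wall-clock time against training steps. The timing data below are made up for illustration:

```python
# Hypothetical sketch: predict training time from the number of training steps
# with an ordinary least-squares line fit. The (steps, seconds) measurements
# below are invented, not the thesis's experimental data.

def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

steps = [100, 250, 500, 1000, 2000]     # training steps (illustrative)
seconds = [18, 34, 61, 115, 222]        # measured wall-clock time (illustrative)

m, b = fit_line(steps, seconds)
predicted = m * 1180 + b                # estimate for a 1180-step run
print(round(predicted))
```

Per-step cost also depends on batch size and image resolution, so a real predictor would fit over those factors as well.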

Date Created
  • 2018-05

Squeezing Out Electricity: Computer-Aided Design and Optimization of Electrodes of Solid Oxide Fuel Cells

Description

Solid oxide fuel cells have become a promising candidate in the development of high-density clean energy sources for rapidly increasing energy demands and global sustainability. An important first step in understanding solid oxide fuel cells is learning how to model heterogeneous materials. Heterogeneous materials are abundant in nature and are also created in various manufacturing processes. The diverse properties these materials exhibit result from their complex microstructures, which also make them difficult to model. Microstructure modeling and reconstruction at the meso-scale is needed to produce heterogeneous models without shaving and imaging every slice of the physical material, a destructive and irreversible process. Yeong and Torquato [1] introduced a stochastic optimization technique that generates a model of the material from correlation functions. Spatial correlation functions of each phase within the heterogeneous structure are computed from a two-dimensional micrograph representing a slice of a solid oxide fuel cell. The assumption is that the two-dimensional image contains key structural information representative of the full three-dimensional microstructure. The collected spatial correlation functions, a combination of one-point and two-point correlation functions, are then output as a statistical representation of the material. In the reconstruction process, the characteristic two-point correlation function is fed through a series of computational modeling codes and software to generate a three-dimensional visual model that is statistically similar to the original two-dimensional micrograph. Furthermore, the temperature cooling schedule and the number of pixel exchanges per temperature stage were varied to observe which parameter has the greater impact on the reconstruction results.
Stochastic optimization techniques that produce three-dimensional visual models from two-dimensional micrographs are therefore a statistically reliable route to understanding heterogeneous materials.
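
The two-point correlation function that drives this kind of reconstruction has a simple definition: the probability that two points a distance r apart both fall in the same phase. A minimal one-dimensional sketch, with an invented toy "micrograph" and periodic boundaries as simplifying assumptions:

```python
# Minimal sketch of the two-point correlation function S2(r) for a binary
# (two-phase) image, the statistic matched in Yeong-Torquato style
# reconstruction. The 1-D "micrograph" and periodic sampling are toy choices.

def two_point_correlation(image, r):
    """Probability that two pixels a distance r apart (periodic) are both phase 1."""
    n = len(image)
    hits = sum(1 for i in range(n) if image[i] == 1 and image[(i + r) % n] == 1)
    return hits / n

phase = [1, 1, 0, 0, 1, 0, 1, 1]  # invented 1-D slice: 1 = solid, 0 = pore
s2 = [two_point_correlation(phase, r) for r in range(4)]
print(s2[0])  # S2(0) equals the phase volume fraction: 5/8 = 0.625
```

In the reconstruction itself, pixels are swapped under a simulated-annealing schedule until the trial image's S2 matches the target micrograph's, which is why the cooling schedule and swaps-per-stage are the tuning parameters studied here.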

Date Created
  • 2016-05

Evaluation of an Original Design for a Cost-Effective Wheel-Mounted Dynamometer for Road Vehicles

Description

This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether or not a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results that are comparable to results obtained using a conventional chassis dynamometer. Torque curves were generated via the experimental method under a variety of circumstances and also obtained professionally by a precision engine testing company. Metrics were created to measure the precision of the experimental device's ability to consistently generate torque curves and also to compare the similarity of these curves to the professionally obtained torque curves. The results revealed that although the test device does not quite provide the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer torque curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance but probably lacks the level of accuracy and precision needed to appeal to professionals.
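
The core conversion behind such a device is straightforward rigid-body mechanics: tangential acceleration measured at radius r gives angular acceleration alpha = a_t / r, and torque = I * alpha. The inertia, radius, and sample values below are illustrative assumptions, not the thesis's calibration:

```python
# Hedged sketch of the wheel-edge accelerometer-to-torque conversion.
# All numeric constants are invented for illustration.

def wheel_torque(a_tangential, radius, inertia):
    """Torque (N*m) from tangential acceleration (m/s^2) measured at the wheel edge."""
    alpha = a_tangential / radius      # angular acceleration, rad/s^2
    return inertia * alpha

samples = [2.0, 3.5, 5.1, 4.2]         # m/s^2, made-up accelerometer readings
r = 0.25                               # m, accelerometer mounting radius (assumed)
I = 1.5                                # kg*m^2, effective rotational inertia (assumed)
curve = [wheel_torque(a, r, I) for a in samples]
print(curve[0])  # 1.5 * (2.0 / 0.25) = 12.0
```

A real device must also estimate the effective inertia of the wheel and drivetrain and filter vibration out of the raw accelerometer signal, which is where road roughness enters the comparison.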

Date Created
  • 2018-05

Machine Learning of Real and Pseudo Physics: Modeling Dynamical Systems

Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics from observations of that system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game dictating a two-car uncontrolled-intersection scenario. The paper investigates how learning architectures can be tailored to problem-specific geometry, and the results show that such problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

To properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to all other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research in augmenting the symplectic gradient of a Hamiltonian energy function to obtain a generalized, non-conservative physics model.

A problem-specific architecture was also used to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], though this comparison should be applied lightly due to slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This work creates new opportunities for developing physics models, and the sample cases should serve as a guide for modeling other real and pseudo physics. Although the models presented here are not generalizable, these cases provide direction for future research.
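
The non-conservative pendulum being learned can itself be sketched in a few lines; the key feature is a damping term that dissipates energy, which an ideal (Hamiltonian) pendulum model cannot capture. Parameter values here are made up:

```python
# Sketch of the damped (non-conservative) pendulum that a real-pendulum model
# must capture, integrated with semi-implicit Euler. Constants are illustrative.
import math

def simulate(theta0, omega0, damping=0.2, g_over_l=9.81, dt=0.01, steps=1000):
    theta, omega = theta0, omega0
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega  # damping breaks conservation
        omega += alpha * dt
        theta += omega * dt
    return theta, omega

def energy(theta, omega, g_over_l=9.81):
    """Normalized mechanical energy; decays over time because of damping."""
    return 0.5 * omega ** 2 + g_over_l * (1 - math.cos(theta))

e0 = energy(1.0, 0.0)
theta, omega = simulate(1.0, 0.0)
print(energy(theta, omega) < e0)  # energy strictly dissipates
```

A symplectic-gradient architecture trained on trajectories like these would, by construction, conserve energy; the thesis's modification is what lets the network represent the dissipative term.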

Date Created
  • 2021-05

Learning Scalable Dynamical Models for Predicting Atomic Structures of High-Entropy Alloys

Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many different fields due to its ability to generalize well to different problems and produce computationally efficient, accurate predictions regarding the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics that are relevant to high-entropy alloy simulation. We show these models are effective at learning nonlinear dynamics for single and multi-particle cases and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of machine learning applied to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.
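
A toy case of the flavor described, a single particle with nonlinear dynamics, can be sketched as follows: generate trajectory observations and recover an unknown physical parameter from them by least squares. The Duffing-type force and all constants are invented for illustration:

```python
# Toy sketch: simulate a single particle in a nonlinear (Duffing-type)
# potential, then "learn" the unknown damping coefficient from the observed
# one-step velocity updates. All constants are illustrative.

def step(x, v, c, dt=0.01):
    a = x - x ** 3 - c * v          # nonlinear restoring force plus damping
    return x + v * dt, v + a * dt

true_c = 0.3
data = []
x, v = 1.5, 0.0
for _ in range(500):
    nx, nv = step(x, v, true_c)
    data.append((x, v, nv))
    x, v = nx, nv

# Least-squares estimate of c: from nv = v + dt*(x - x^3 - c*v),
# c*v = x - x^3 - (nv - v)/dt, solved over all samples.
num = sum(v * (x - x ** 3 - (nv - v) / 0.01) for x, v, nv in data)
den = sum(v * v for x, v, nv in data)
estimated_c = num / den
print(abs(estimated_c - true_c) < 1e-6)
```

Noise-free data makes the recovery essentially exact; multi-particle and chaotic cases are where, as the thesis notes, simple learned models start to break down.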

Date Created
  • 2021-05

Large-Scale Rapid Prototyping Utilizing Adaptive Slicing Techniques

Description

A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts based on 3D model data, has been shown to be an effective method for quickly producing parts of a high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well-suited to creating small-scale parts that require a fine layer resolution. However, it starts to become impractical for large-scale objects due to build volume and print speed limitations. The proposed layered manufacturing technique builds up models from layers of much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were utilized to vary layer thickness based on surface complexity to minimize both the cost and error of the layered model. This was realized as a multi-objective optimization problem where the number of layers used represented the cost and the geometric difference between the sliced model and the CAD model defined the error. This problem was approached with two different methods, one of which was a procedural process of placing layers from a set of discrete thicknesses based on the Boolean Exclusive OR (XOR) area difference between adjacent layers. The other method implemented an optimization solver to calculate the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
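
The procedural variant can be sketched greedily: at each height, take the thickest available sheet whose geometric deviation from the layer below stays within a tolerance. The hemisphere profile, thickness set, and tolerance below are illustrative stand-ins for a CAD model, and the radius change is used as a simple proxy for the XOR area difference:

```python
# Sketch of procedural adaptive slicing with a discrete set of sheet
# thicknesses. The hemisphere profile and all constants are invented.
import math

def radius(z, R=50.0):
    """Cross-section radius of a hemisphere of radius R at height z."""
    return math.sqrt(max(R * R - z * z, 0.0))

def adaptive_slice(height, thicknesses=(10.0, 5.0, 2.5), tol=4.0):
    """Greedy slicing: prefer thick sheets where the surface changes slowly."""
    layers, z = [], 0.0
    while z < height:
        for t in sorted(thicknesses, reverse=True):
            # Accept the thickest sheet within tolerance; fall back to thinnest.
            if abs(radius(z + t) - radius(z)) <= tol or t == min(thicknesses):
                layers.append(t)
                z += t
                break
    return layers

layers = adaptive_slice(50.0)
print(layers)  # thick sheets near the flat base, thin sheets toward the curved pole
```

The optimization variant replaces the discrete choice with a solver over continuous per-layer thicknesses minimizing total volumetric XOR error, trading more computation for a better cost-error front.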

Date Created
  • 2016-05

Design and Fabrication of a Low-Cost Gripper for a Swarm Robotic Platform

Description

This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. The project required expertise from several fields, including robot design, programming, rapid prototyping, and control theory. An electronic inertial measurement unit and a DC motor were used along with 3D-printed plastic components and an electronic motor control board to develop a functional open-loop-controlled gripper for use in collective transportation experiments. Code was developed that acquires and filters rate-of-rotation data, alongside code that allows straightforward control of the DC motor through experimentally derived relationships between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
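
Open-loop torque control of this kind amounts to inverting the experimentally derived voltage-torque relationship. A hypothetical sketch, assuming a linear fit with a dead-band offset (the calibration constants are made up, not the thesis's bench data):

```python
# Hypothetical open-loop torque command: invert an assumed linear
# voltage-torque fit and clamp to the motor driver's range.

V_MAX = 6.0       # V, driver limit (assumed)
K_T = 0.05        # N*m per volt, slope of the assumed linear bench-test fit
V_OFFSET = 0.5    # V, stiction/dead-band offset (assumed)

def voltage_for_torque(torque):
    """Command voltage for a desired torque, clamped to the driver's range."""
    v = V_OFFSET + torque / K_T
    return max(0.0, min(V_MAX, v))

print(voltage_for_torque(0.1))   # 0.5 + 0.1/0.05 = 2.5
print(voltage_for_torque(1.0))   # saturates at V_MAX = 6.0
```

Because the control is open-loop, any error in the fitted constants appears directly as torque error; the IMU data here serves measurement and filtering rather than feedback.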

Date Created
  • 2019-05

Moving Target Defense: Defending against Adversarial Defense

Description

A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations, applied within this framework, were most effective at maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
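
The core of defense-by-randomization is simple: each query is answered by a classifier drawn at random from a pool, so an attacker cannot tailor an adversarial input to one fixed model. A minimal sketch with stand-in functions in place of real networks:

```python
# Minimal sketch of a moving-target / randomized defense. The two "models"
# are toy stand-in functions, not trained networks.
import random

def model_a(x):
    return "cat" if x[0] > 0.5 else "dog"

def model_b(x):
    return "cat" if x[1] > 0.5 else "dog"

def randomized_classify(x, pool, rng=random.Random(0)):
    """Answer each query with a model drawn at random from the pool."""
    return rng.choice(pool)(x)

pool = [model_a, model_b]
x = (0.9, 0.8)  # an input on which both models agree
print(randomized_classify(x, pool))  # "cat" whichever model is drawn
```

The experimental question then becomes which pool compositions keep clean accuracy high while making any single adversarial perturbation unlikely to fool the model actually drawn.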

Date Created
  • 2019-05

Problem map: a framework for investigating the role of problem formulation in creative design

Description

Design problem formulation is believed to influence creativity, yet it has received only modest attention in the research community. Past studies of problem formulation are scarce and often have small sample sizes. The main objective of this research is to understand how problem formulation affects creative outcome. Three research areas are investigated: the development of a model that captures differences among designers' problem formulations; the representation and implications of those differences; and the relation between problem formulation and creativity.

This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
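
As a structure, a P-map of this kind can be pictured as a small typed graph. The sketch below is a hypothetical illustration of the described entity groups, within-group hierarchy, and cross-group links; the entity names are invented, not from the experiments:

```python
# Hypothetical sketch of a P-map as a typed graph: entities in six groups,
# parent-child hierarchy within a group, links across groups.

GROUPS = {"requirement", "use scenario", "function", "artifact", "behavior", "issue"}

class PMap:
    def __init__(self):
        self.entities = {}    # name -> group
        self.hierarchy = []   # (parent, child) pairs within one group
        self.links = []       # (name, name) pairs across groups

    def add(self, name, group):
        assert group in GROUPS
        self.entities[name] = group

    def refine(self, parent, child):
        assert self.entities[parent] == self.entities[child]  # same group only
        self.hierarchy.append((parent, child))

    def link(self, a, b):
        assert self.entities[a] != self.entities[b]  # different groups only
        self.links.append((a, b))

p = PMap()
p.add("carry load", "function")
p.add("grip object", "function")
p.add("max weight 5 kg", "requirement")
p.refine("carry load", "grip object")      # hierarchy within the function group
p.link("carry load", "max weight 5 kg")    # link across groups
print(len(p.entities), len(p.hierarchy), len(p.links))  # 3 1 1
```

Counts and orderings over such a structure (entities per group, hierarchy depth, link density) are the kind of variables that characterize a formulation.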

Three experiments were conducted. The first experiment was to study the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do and novices are more likely to add entities in a specific order. Experts also discover more issues.

The second experiment examined how problem formulation relates to creativity, using ideation metrics to characterize creative outcome. Results include positive correlations between adding more issues in an unorganized way and both quantity and variety; between more use scenarios and functions and novelty; between more behaviors and identified conflicts and quality; and between depth-first exploration and all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.

The third experiment tested whether problem formulation can predict creative outcome. Models based on one problem were used to predict the creativity of another, and predicted scores were compared to assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.

P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.

Date Created
  • 2015