Description
Damage detection in heterogeneous material systems is a complex problem and requires an in-depth understanding of the material characteristics and response under varying load and environmental conditions. A significant amount of research has been conducted in this field to enhance the fidelity of damage assessment methodologies, using a wide range of sensors and detection techniques, for both metallic materials and composites. However, detecting damage at the microscale is not possible with commercially available sensors. A probable way to approach this problem is through accurate and efficient multiscale modeling techniques, which are capable of tracking damage initiation at the microscale and propagation across the length scales. The output from these models will provide an improved understanding of damage initiation; this knowledge can be used in conjunction with information from physical sensors to reduce the minimum size of detectable damage. In this research, effort has been dedicated to developing multiscale modeling approaches and associated damage criteria for the estimation of damage evolution across the relevant length scales. Important issues such as length and time scales, anisotropy and variability in material properties at the microscale, and response under mechanical and thermal loading are addressed. Two different material systems have been studied: a metallic material and a novel stress-sensitive epoxy polymer.

For the metallic material (Al 2024-T351), the methodology initiates at the microscale, where extensive material characterization is conducted to capture the microstructural variability. A statistical volume element (SVE) model is constructed to represent the material properties. Geometric and crystallographic features, including grain orientation, misorientation, size, shape, principal axis direction, and aspect ratio, are captured. This SVE model provides a computationally efficient alternative to traditional techniques using representative volume element (RVE) models while maintaining statistical accuracy. A physics-based multiscale damage criterion is developed to simulate fatigue crack initiation; the crack growth rate and probable directions are estimated simultaneously.
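
As a loose illustration of the SVE idea, the sketch below samples per-grain features from assumed distributions to build one statistical realization. The distribution families and parameters are hypothetical placeholders, not the measured Al 2024-T351 characterization data used in the work.

```python
# Minimal sketch: build a statistical volume element (SVE) by sampling
# grain features from assumed distributions (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)

def sample_grains(n_grains):
    """Draw per-grain microstructural features for one SVE realization."""
    return {
        # Crystallographic orientation as random Euler angles (radians).
        "euler_angles": rng.uniform(0.0, 2.0 * np.pi, size=(n_grains, 3)),
        # Grain size is commonly modeled as lognormal.
        "size_um": rng.lognormal(mean=3.0, sigma=0.4, size=n_grains),
        # Aspect ratio and principal-axis direction of an ellipsoid-like grain.
        "aspect_ratio": rng.normal(loc=1.5, scale=0.3, size=n_grains).clip(1.0),
        "principal_axis_deg": rng.uniform(0.0, 180.0, size=n_grains),
    }

# One SVE is a small statistical sample; many realizations together capture
# statistics an RVE would need a much larger domain to represent.
sve = sample_grains(n_grains=50)
print(sve["size_um"].mean(), sve["aspect_ratio"].max())
```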

Mechanically sensitive materials that exhibit specific chemical reactions upon external loading are currently being investigated for self-sensing applications. The "smart" polymer modeled in this research consists of epoxy resin, hardener, and a stress-sensitive material called a mechanophore. The mechanophore activation is based on covalent bond-breaking induced by external stimuli; this feature can be used for material-level damage detection. In this work, Tris-(Cinnamoyl oxymethyl)-Ethane (TCE) is used as the cyclobutane-based mechanophore (stress-sensitive) material in the polymer matrix. The TCE-embedded polymers have shown promising results in early damage detection through mechanically induced fluorescence. A spring-bead based network model, which bridges nanoscale information to higher length scales, has been developed to model this material system. The material is partitioned into discrete mass beads which are linked using linear springs at the microscale. A series of MD simulations was performed to define the spring stiffness in the statistical network model. By integrating multiple spring-bead models, a network model has been developed to represent the material properties at the mesoscale. The model captures the statistical distribution of the crosslinking degree of the polymer to represent the heterogeneous material properties at the microscale. The developed multiscale methodology is computationally efficient and provides a possible means to bridge multiple length scales (from 10 nm in MD simulation to 10 mm in the FE model) without significant loss of accuracy. Parametric studies have been conducted to investigate the influence of the crosslinking degree on the material behavior. The developed methodology has been used to evaluate damage evolution in the self-sensing polymer.
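
A minimal sketch of the spring-bead idea, reduced to one dimension for clarity: mass beads linked by linear springs whose stiffnesses are drawn from an assumed distribution, standing in for the MD-calibrated values; the actual model is a higher-dimensional network.

```python
# Minimal sketch: 1D spring-bead chain with statistically varying stiffness
# (mimicking variable crosslinking), assembled and solved like a tiny FE model.
import numpy as np

rng = np.random.default_rng(1)
n_beads = 20
# Spring stiffness drawn from a distribution to mimic variable crosslinking.
k = rng.normal(loc=100.0, scale=15.0, size=n_beads - 1).clip(min=10.0)

# Assemble the global stiffness matrix of the chain.
K = np.zeros((n_beads, n_beads))
for i, ki in enumerate(k):
    K[i, i] += ki
    K[i + 1, i + 1] += ki
    K[i, i + 1] -= ki
    K[i + 1, i] -= ki

# Boundary conditions: bead 0 fixed, tensile force on the last bead.
f = np.zeros(n_beads)
f[-1] = 1.0
u = np.zeros(n_beads)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # displacements of free beads

print("tip displacement:", u[-1])
```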
ContributorsZhang, Jinjun (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Jiang, Hanqing (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created2014
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to improve image classification. The method reduces the classification error rate by registering newly acquired images against previously obtained images before performing classification. The motivation is that images obtained in the same region will not differ significantly in their characteristics; registration therefore produces an image that matches the previously obtained image more closely, leading to better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration does improve naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets.
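
A minimal sketch of the register-then-classify idea, using a basic 2D point-set ICP (nearest neighbors plus a Kabsch rotation fit) and scikit-learn's Gaussian naïve Bayes. The data is synthetic and the pipeline is far simpler than the imagery processing in the thesis; scipy and scikit-learn are assumed dependencies.

```python
# Minimal sketch: align incoming points to a reference with ICP, then
# classify with Gaussian naive Bayes on the aligned coordinates.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.naive_bayes import GaussianNB

def icp_2d(src, ref, iters=20):
    """Rigidly align src (N,2) to ref (M,2); returns the transformed src."""
    cur = src.copy()
    tree = cKDTree(ref)
    for _ in range(iters):
        _, idx = tree.query(cur)             # nearest reference point
        matched = ref[idx]
        mu_s, mu_r = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_r)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                       # Kabsch rotation
        if np.linalg.det(R) < 0:             # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - mu_s) @ R.T + mu_r
    return cur

rng = np.random.default_rng(2)
ref = rng.uniform(0, 10, size=(200, 2))
theta = 0.2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
src = ref @ rot.T + np.array([0.5, -0.3]) + rng.normal(0, 0.01, (200, 2))

aligned = icp_2d(src, ref)

# After registration, the inputs line up with the reference, so a simple
# classifier sees consistent features.
labels = (ref[:, 0] > 5).astype(int)          # toy terrain labels
clf = GaussianNB().fit(ref, labels)
print("accuracy on aligned points:", clf.score(aligned, labels))
```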
ContributorsMuralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength-to-weight and stiffness-to-weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This technique, known as schedule-based maintenance, is expensive, time-consuming, and often limited to easily accessible structural elements. Therefore, there is an increased demand for robust and efficient Structural Health Monitoring (SHM) techniques that can be used for condition-based monitoring, in which structural components are inspected based upon damage metrics as opposed to flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, offering not only a reduced number of schedule-based inspections but also better useful-life estimates. SHM frameworks require the development of sensing technologies, algorithms, and procedures to detect, localize, quantify, and characterize damage in aerospace structures, as well as to assess its effect on the remaining useful life. The use of piezoelectric transducers along with guided Lamb waves is a method that has received considerable attention due to the low weight and cost of the resulting systems. The research in this thesis investigates the ability of Lamb waves to detect damage in feature-dense anisotropic composite panels. Most current research removes the effects of experimental variability by performing tests on structurally simple isotropic plates that serve as both baseline and damaged specimens. In actual applications, however, variability cannot be neglected, and the effects of complex sample geometries, environmental operating conditions, and variability in material properties must be studied. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel, localizing delamination damage caused by impact. The overall goal was a correlative approach that used only the damage feature produced by the delamination as the damage index. This approach offers a simple way to determine the existence and location of damage without conducting a more complex wave propagation analysis or accounting for the geometric complexities of the test specimen. Results showed that, even in a complex structure, if the damage feature can be extracted and measured, then an appropriate damage index can be associated with it and the location of the damage can be inferred using a dense sensor array. The second experiment in this research studies the effects of temperature on damage detection when one test specimen provides the benchmark data set and another is used for damage data collection, extending the first experiment to explore not only variable temperature but also high experimental variability. Results from this work show that the damage feature is extractable at higher temperatures, and that data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison, owing to the linearity of the collected data.
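
A minimal sketch of a correlation-style damage index over actuator-sensor paths: compare each path's current signal to its baseline and flag paths whose similarity drops. The signals, path names, and damage injection below are synthetic stand-ins, not the experimental Lamb-wave data from the panel tests.

```python
# Minimal sketch: damage index = 1 - zero-lag normalized correlation between
# baseline and current signals; the largest index localizes the damaged path.
import numpy as np

def damage_index(baseline, current):
    """0 for identical signals, larger as the signals decorrelate."""
    b = (baseline - baseline.mean()) / baseline.std()
    c = (current - current.mean()) / current.std()
    return 1.0 - np.dot(b, c) / len(b)

rng = np.random.default_rng(3)
t = np.linspace(0, 1e-4, 2000)
paths = {}
for path in ["A1-A2", "A1-A3", "A2-A3"]:
    # Toy tone-burst wave packet as the baseline signal.
    base = np.sin(2 * np.pi * 3e5 * t) * np.exp(-((t - 3e-5) / 1e-5) ** 2)
    sig = base.copy()
    if path == "A1-A3":                  # pretend delamination on this path
        sig += 0.3 * np.roll(base, 120)  # scattered/delayed wave packet
    sig += rng.normal(0, 0.01, t.size)
    paths[path] = damage_index(base, sig)

print(max(paths, key=paths.get), paths)
```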
ContributorsVizzini, Anthony James, II (Author) / Chattopadhyay, Aditi (Thesis advisor) / Fard, Masoud (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2012
Description
Topological methods for data analysis present opportunities for enforcing certain invariances of broad interest in computer vision, including viewpoint in activity analysis, articulation in shape analysis, and measurement invariance in non-linear dynamical modeling. The increasing success of these methods is attributed to the complementary information that topology provides, as well as the availability of tools for computing topological summaries such as persistence diagrams. However, persistence diagrams are multi-sets of points, and hence it is not straightforward to fuse them with features used by contemporary machine learning tools like deep nets. In this paper, theoretically well-grounded approaches are presented for developing novel perturbation-robust topological representations, with the long-term view of making them amenable to fusion with contemporary learning architectures. The proposed representation lives on a Grassmann manifold and hence can be used efficiently in machine learning pipelines.

The efficacy of the proposed descriptor was explored in three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. The results show favorable recognition performance and reduced time complexity compared to baseline methods.
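
For context on the fusion problem the abstract raises, the sketch below shows one common way to turn a persistence diagram (a multi-set of birth-death points) into a fixed-length vector by smoothing it onto a grid. This is a persistence-image-style summary for illustration only, not the Grassmannian representation developed in the thesis.

```python
# Minimal sketch: vectorize a persistence diagram by placing a persistence-
# weighted Gaussian bump per point on a fixed grid.
import numpy as np

def diagram_to_grid(diagram, grid_size=16, sigma=0.05, span=(0.0, 1.0)):
    """diagram: (N,2) array of (birth, death) pairs -> (grid_size, grid_size) image."""
    births = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]   # persistence = death - birth
    xs = np.linspace(*span, grid_size)
    ys = np.linspace(*span, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros_like(gx)
    for b, p in zip(births, pers):
        # Weight by persistence so short-lived (noisy) features matter less.
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma**2))
    return img / (img.max() + 1e-12)

dgm = np.array([[0.1, 0.8], [0.2, 0.35], [0.5, 0.55]])
vec = diagram_to_grid(dgm).ravel()          # fixed-length feature vector
print(vec.shape)
```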
ContributorsThopalli, Kowshik (Author) / Turaga, Pavan Kumar (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2017
Description
Deep learning architectures have been widely explored in computer vision and have depicted commendable performance in a variety of applications. A fundamental challenge in training deep networks is the requirement of large amounts of labeled training data. While gathering large quantities of unlabeled data is cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. Thus, developing algorithms that minimize the human effort in training deep models is of immense practical importance. Active learning algorithms automatically identify salient and exemplar samples from large amounts of unlabeled data and can augment maximal information to supervised learning models, thereby reducing the human annotation effort in training machine learning models. The goal of this dissertation is to fuse ideas from deep learning and active learning and design novel deep active learning algorithms. The proposed learning methodologies explore diverse label spaces to solve different computer vision applications. Three major contributions have emerged from this work: (i) a deep active framework for multi-class image classification, (ii) a deep active model with and without label correlation for multi-label image classification and (iii) a deep active paradigm for regression. Extensive empirical studies on a variety of multi-class, multi-label and regression vision datasets corroborate the potential of the proposed methods for real-world applications. Additional contributions include: (i) a multimodal emotion database consisting of recordings of facial expressions, body gestures, vocal expressions and physiological signals of actors enacting various emotions, (ii) four multimodal deep belief network models and (iii) an in-depth analysis of the effect of transfer of multimodal emotion features between source and target networks on classification accuracy and training time. These related contributions help comprehend the challenges involved in training deep learning models and motivate the main goal of this dissertation.
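
A minimal sketch of pool-based active learning with entropy-based uncertainty sampling. Logistic regression stands in for the deep models, and the query rule is a generic baseline rather than the specific algorithms proposed in the dissertation; scikit-learn is an assumed dependency.

```python
# Minimal sketch: iteratively query the most uncertain unlabeled samples
# (highest predictive entropy) for annotation, then retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_classes=3,
                           n_informative=6, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Query the k most uncertain pool samples for (simulated) annotation.
    query = np.argsort(entropy)[-10:]
    for q in sorted(query, reverse=True):   # pop high indices first
        labeled.append(pool.pop(q))
    print(f"round {round_}: labeled={len(labeled)} acc={clf.score(X, y):.3f}")
```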
ContributorsRanganathan, Hiranmayi (Author) / Sethuraman, Panchanathan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Li, Baoxin (Committee member) / Chakraborty, Shayok (Committee member) / Arizona State University (Publisher)
Created2018
Description
The original version of Helix, the one I pitched when first deciding to make a video game for my thesis, is an action-platformer, with the intent of metroidvania-style progression and an interconnected world map.

The current version of Helix is a turn-based role-playing game, with the intent of roguelike gameplay and a dark fantasy theme. We will first explore the challenges that came with programming my own game - not quite from scratch, but also without a prebuilt engine - then transition into game design and how Helix has evolved from its original form to what we see today.
ContributorsDiscipulo, Isaiah K (Author) / Meuth, Ryan (Thesis director) / Kobayashi, Yoshihiro (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties, such as color, density, and even shape, based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
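
A toy sketch of rule-driven expansion on a grid, where each step is a purely local decision. The thesis's particle model is richer (computation, bonding, movement primitives), so this only illustrates the flavor of enclosing space through local rules.

```python
# Minimal sketch: grow a connected region of particles one cell at a time,
# each step expanding from a randomly chosen occupied cell into an empty
# neighbor (a local decision, no global coordinator).
import random

random.seed(0)
occupied = {(0, 0)}                      # start from a single seed particle
target_size = 30
neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]

while len(occupied) < target_size:
    # All (cell, empty-neighbor) pairs on the current boundary.
    frontier = [(p, (p[0] + dx, p[1] + dy))
                for p in occupied for dx, dy in neighbors
                if (p[0] + dx, p[1] + dy) not in occupied]
    _, new_cell = random.choice(frontier)
    occupied.add(new_cell)

print(f"{len(occupied)} particles occupy the region")
```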
ContributorsLaff, Miles (Author) / Richa, Andrea (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
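
A minimal sketch of the coverage test that post-optimization depends on: a set of permutations covers a t-subsequence if the t symbols appear in that relative order in at least one permutation, and a permutation is non-essential exactly when the rest of the set still covers everything. Shown for a tiny case (n = 3, t = 2) that is easy to verify by hand.

```python
# Minimal sketch: check whether a set of permutations covers every
# t-subsequence of {0..n-1}, and test which members are removable.
from itertools import permutations as perms

def covers(perm, subseq):
    """True if subseq appears as a (not necessarily contiguous) subsequence."""
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(subseq, subseq[1:]))

def is_covering(pset, n, t):
    """Does pset cover every ordered t-tuple of distinct symbols?"""
    return all(any(covers(p, s) for p in pset)
               for s in perms(range(n), t))

pset = [(0, 1, 2), (2, 1, 0)]        # a permutation and its reversal
print(is_covering(pset, n=3, t=2))   # True: every ordered pair appears
# Neither permutation is non-essential here: dropping one loses coverage.
print([is_covering([q for q in pset if q != p], 3, 2) for p in pset])
```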
ContributorsMurray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2014-12
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a vastly untapped focus in bot detection: the diffusion of extremist ideals through bots.
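
A toy sketch of heuristic bot scoring: combine a few account-level signals into a score and threshold it. The features, thresholds, and weights below are invented for illustration and are not the five heuristics measured in the thesis.

```python
# Minimal sketch: score an account on hypothetical bot-like signals and
# threshold the total (features and weights are illustrative assumptions).
def bot_score(account):
    score = 0.0
    if account["tweets_per_day"] > 100:         # inhuman posting rate
        score += 1.0
    if account["duplicate_tweet_ratio"] > 0.5:  # repeats the same content
        score += 1.0
    if account["followers"] < 0.01 * account["friends"]:
        score += 1.0                            # follows many, followed by few
    if account["default_profile_image"]:
        score += 0.5
    return score

account = {"tweets_per_day": 240, "duplicate_tweet_ratio": 0.8,
           "followers": 3, "friends": 900, "default_profile_image": True}
print("likely bot" if bot_score(account) >= 2.0 else "likely human")
```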
ContributorsKarlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative, vs. mildly positive or negative, the ELM can flag knots that lead to large changes in beam deflection while not over-emphasizing knots that may not be a problem. Using the ELM with a regression-fit Young's modulus on three-point bending of Douglas Fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
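
A minimal sketch of the underlying mechanics: an Euler-Bernoulli beam in three-point bending with a locally reduced stiffness EI(x) near a knot, solved by double integration of the curvature M/EI. The knot position and the 50% stiffness knockdown are illustrative assumptions, not the fitted two-ellipse model; numpy and scipy are assumed dependencies.

```python
# Minimal sketch: deflection of a simply supported beam with a midspan point
# load, where EI(x) drops locally over a knot-affected segment.
import numpy as np
from scipy.integrate import cumulative_trapezoid

L, P, EI0 = 2.0, 1000.0, 4.0e5          # span (m), load (N), nominal EI (N m^2)
x = np.linspace(0.0, L, 2001)

# Bending moment for a simply supported beam, point load at midspan.
M = np.where(x <= L / 2, P * x / 2, P * (L - x) / 2)

# Knot: local EI knockdown over a short segment (an ellipse-like influence zone).
EI = np.full_like(x, EI0)
EI[(x > 0.6) & (x < 0.75)] *= 0.5       # assumed 50% stiffness loss at the knot

# v'' = M / EI; integrate twice, then enforce v(0) = v(L) = 0 by removing the
# linear part (adding a linear function leaves v'' unchanged).
theta = cumulative_trapezoid(M / EI, x, initial=0.0)
v = cumulative_trapezoid(theta, x, initial=0.0)
v -= x * v[-1] / L

print("max deflection (mm):", 1000 * v.max())
```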
Created2015-05