Matching Items (5)
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization and their applications in medical imaging and source localization using sensor array processing. With the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity, and traditional interior point methods encounter computational difficulties when applied to CS problems. In the first part of this work, a fast algorithm based on the augmented Lagrangian method is proposed for solving the large-scale TV-$\ell_1$ regularized inverse problem. Specifically, by taking advantage of the separable structure, the original problem can be approximated by the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz blocks (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance of the proposed algorithm and its robustness over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. Numerical experiments show that the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely spaced sources with higher accuracy than other existing methods.
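For reference, the sketch below gives a common form of the TV-$\ell_1$ regularized model and the splitting idea outlined above; the exact operators, weights, and multiplier conventions used in the thesis may differ.

```latex
% A common TV-l1 regularized inverse problem (operators A, \Psi and weights
% \alpha, \beta are assumed for illustration):
\begin{equation*}
  \min_{u}\ \tfrac{1}{2}\|Au-b\|_2^2 + \alpha\,\mathrm{TV}(u) + \beta\|\Psi u\|_1 .
\end{equation*}
% Variable splitting w = Du, with D the discrete gradient, yields separable
% augmented Lagrangian subproblems; the w-update has the closed-form
% (isotropic shrinkage) solution
\begin{equation*}
  w_i \;=\; \max\Bigl\{\|(Du+\lambda)_i\|_2 - \tfrac{\alpha}{\mu},\,0\Bigr\}\,
            \frac{(Du+\lambda)_i}{\|(Du+\lambda)_i\|_2},
\end{equation*}
% where \lambda collects the scaled multipliers and \mu is the penalty parameter.
```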
Contributors: Shen, Wei (Author) / Mittelmann, Hans D. (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probability distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using this experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, in a deterministic optimization problem, even though the structure is cost-effective, it can be highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability methods followed by simulation techniques, which are used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation and a sensitivity analysis that removes highly reliable constraints from the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element models of 2D truss problems and a planar beam problem is presented and discussed.
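The sketch below illustrates the Monte Carlo workflow described above: sample material properties from fitted probability distributions and propagate them through repeated containment simulations. The distribution choices, parameter values, and `run_containment_simulation()` are illustrative assumptions standing in for the thesis's LS-DYNA/UMAT model.

```python
# Hedged Monte Carlo sketch: probabilistic material inputs -> response statistics.
import numpy as np

rng = np.random.default_rng(0)

def sample_material_properties(n):
    """Draw n realizations of (strength [MPa], modulus [GPa]) from assumed fits."""
    strength = 3600.0 * rng.weibull(a=20.0, size=n)      # assumed Weibull fit
    modulus = rng.normal(loc=112.0, scale=5.0, size=n)   # assumed normal fit
    return strength, modulus

def run_containment_simulation(strength, modulus, impact_velocity=250.0):
    """Placeholder for one deterministic FEA run; returns fragment exit velocity."""
    capacity = 1e-3 * strength * modulus            # toy response surface
    return max(impact_velocity - capacity, 0.0)     # 0.0 => fragment contained

n_runs = 1000
strengths, moduli = sample_material_properties(n_runs)
exit_velocities = np.array([run_containment_simulation(s, e)
                            for s, e in zip(strengths, moduli)])
print("estimated P(containment failure):", np.mean(exit_velocities > 0.0))
```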
Contributors: Deivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Tall building developments are spreading across the globe at an ever-increasing rate (www.ctbuh.org). In 1982, the number of ‘tall buildings’ in North America was merely 1,701; this number rose to 26,053 in 2006. The global number of buildings 200 m or more in height has risen from 286 to 602 in the last decade alone. This dissertation concentrates on the design optimization of such about-to-be-modular structures by implementing the AISC 2010 design requirements. Along with a discussion on and classification of lateral load resisting systems, a few design optimization cases are studied. Design optimization results for full-scale three-dimensional buildings subject to multiple design criteria, including stress, serviceability, and dynamic response, are discussed. The tool used for optimization is GS-USA Frame3D© (henceforth referred to as Frame3D). The types of analyses verified against an Abaqus 6.11-1 baseline are stress analysis, modal analysis, and buckling analysis.

The provisions in AISC 2010 allow us to bypass the limit state of flexural buckling in compression checks when a satisfactory buckling analysis is available, relieving us of the long and tedious effective-length-factor computations. In addition to the AISC design checks, an empirical equation for checking beams under high shear and flexure is also enforced.

In this study, we present the details of a tool that can be useful in design optimization: finite element modeling that translates the AISC 2010 design code requirements into components of the FE and design optimization models. A comparative study of designs based on AISC 2010 versus fixed allowable stresses (regardless of the shape of the cross section) is also carried out.
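As an illustration of how a design-code requirement enters the optimization model as a constraint, the sketch below encodes the AISC 360-10 combined axial-flexure interaction check (Eq. H1-1) as a normalized constraint g(x) <= 0. The thesis enforces a much broader set of checks (shear, serviceability, buckling, etc.), and the function names here are assumptions.

```python
# Hedged sketch: one AISC 2010 strength check expressed as an optimization constraint.
def aisc_h1_interaction(Pr, Pc, Mrx, Mcx, Mry=0.0, Mcy=1.0):
    """Return the AISC 360-10 Eq. H1-1 interaction ratio (<= 1.0 means the member passes)."""
    if min(Pc, Mcx, Mcy) <= 0.0:
        raise ValueError("capacities must be positive")
    axial = Pr / Pc
    if axial >= 0.2:                                   # Eq. H1-1a
        return axial + (8.0 / 9.0) * (Mrx / Mcx + Mry / Mcy)
    return axial / 2.0 + (Mrx / Mcx + Mry / Mcy)       # Eq. H1-1b

def strength_constraint(demands, capacities):
    """g(x) <= 0 when every member passes; demands[i] = (Pr, Mrx, Mry),
    capacities[i] = (Pc, Mcx, Mcy)."""
    ratios = [aisc_h1_interaction(Pr, Pc, Mrx, Mcx, Mry, Mcy)
              for (Pr, Mrx, Mry), (Pc, Mcx, Mcy) in zip(demands, capacities)]
    return max(ratios) - 1.0
```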
Contributors: Unde, Yogesh (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Laminated composite materials are used in aerospace, civil, and mechanical structural systems due to their superior material properties compared to those of the constituent materials, as well as in comparison to traditional materials such as metals. Laminate structures are composed of multiple orthotropic material layers bonded together to form a single performing part; as such, the layup design of the material largely influences the structural performance. Optimization techniques such as the Genetic Algorithm (GA), Differential Evolution (DE), the Method of Feasible Directions (MFD), and others can be used to determine the optimal laminate composite material layup. In this thesis, sizing, shape, and topology design optimization of laminated composites is carried out. Sizing optimization (e.g., the layer thicknesses), topology optimization (e.g., the layer orientations and materials and the number of layers present), and shape optimization of the overall composite part all contribute to the design optimization process for laminates. An optimization host program written in C++ has been developed to implement both population-based and numerical gradient-based optimization methods. The performance of the composite structural system is evaluated through explicit finite element analysis of shell elements carried out using LS-DYNA. Results from numerical examples demonstrate that the optimization design process can significantly improve composite part performance through implementation of the optimum material layup and part shape.
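The sketch below shows a minimal genetic algorithm for stacking-sequence design, in the spirit of the population-based methods named above; the fitness function is a stand-in, since the thesis evaluates each candidate layup with an explicit LS-DYNA shell-element analysis through its C++ host program.

```python
# Hedged GA sketch for laminate layup optimization (illustrative fitness only).
import random

PLY_ANGLES = [0, 45, -45, 90]      # allowed ply orientations (degrees)
N_PLIES = 8                        # plies in the designable half-laminate

def fitness(layup):
    """Placeholder objective rewarding balanced, dispersed layups."""
    balance = -abs(layup.count(45) - layup.count(-45))
    dispersion = len(set(layup))
    return balance + dispersion

def evolve(pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.choice(PLY_ANGLES) for _ in range(N_PLIES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PLIES)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                # ply-angle mutation
                child[random.randrange(N_PLIES)] = random.choice(PLY_ANGLES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```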
Contributors: Mika, Krista (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Dimensionality reduction methods are examined for large-scale discrete problems, specifically for the solution of three-dimensional geophysics problems: the inversion of gravity and magnetic data. Under mild assumptions, the matrices for the associated forward problems have beneficial structure for each depth layer of the volume domain, which facilitates the use of the two-dimensional fast Fourier transform for evaluating forward and transpose matrix operations, providing considerable savings in both computational costs and storage requirements. Application of this approach to the magnetic problem is new in the geophysics literature. Further, the approach is extended to padded volume domains.
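A minimal sketch of the FFT-based matrix-vector product referred to above is given below, assuming the BTTB generating coefficients are already available as a (2m-1) x (2n-1) array; constructing that kernel for the gravity and magnetic forward operators (and the extension to padded domains) is specific to the thesis.

```python
# Hedged sketch: BTTB matrix-vector product via circulant embedding and the 2D FFT.
import numpy as np

def bttb_matvec(kernel, x, m, n):
    """Apply the (m*n) x (m*n) BTTB matrix generated by `kernel` to the vector x.

    kernel has shape (2m-1, 2n-1); kernel[p, q] is the entry coupling grid points
    whose row/column index offsets are (p - (m-1), q - (n-1)).
    """
    X = x.reshape(m, n)
    K = np.fft.fft2(kernel, s=(2 * m - 1, 2 * n - 1))   # spectrum of the embedding
    Xh = np.fft.fft2(X, s=(2 * m - 1, 2 * n - 1))       # zero-padded input
    C = np.fft.ifft2(K * Xh)                            # circular = linear convolution here
    return np.real(C[m - 1:, n - 1:]).ravel()           # extract the BTTB product
```

The transpose product follows the same pattern with the kernel flipped in both dimensions.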

Stabilized inversion is obtained efficiently by applying novel randomization techniques within each update of the iteratively reweighted scheme. For a general rectangular linear system, a randomization technique combined with preconditioning is introduced and investigated, and is shown to provide well-conditioned inversion, stabilized through truncation. Applying this approach, while implementing matrix operations using the two-dimensional fast Fourier transform, yields inversion that is computationally efficient in both memory and cost. Validation is provided via synthetic data sets, and the approach is contrasted with the well-known LSRN algorithm when applied to these data sets. The results demonstrate a significant reduction in computational cost with the new algorithm. Further, the new algorithm produces results for the inversion of real magnetic data that are consistent with those reported in the literature.
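A generic sketch-and-precondition step for a rectangular least-squares system is sketched below, in the spirit of the randomized preconditioning discussed above (and of LSRN, which serves as the comparison point); the sketch size, the SVD-based preconditioner, and the truncation tolerance are illustrative choices rather than the thesis's exact scheme.

```python
# Hedged sketch: Gaussian sketching + truncated SVD preconditioner + LSQR.
import numpy as np
from scipy.sparse.linalg import lsqr

def sketch_precondition_solve(A, b, oversample=4, rel_tol=1e-8):
    """Approximately solve min ||Ax - b||_2 for a tall matrix A (m >> n)."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    s = oversample * n
    G = rng.standard_normal((s, m)) / np.sqrt(s)        # Gaussian sketching matrix
    _, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)
    k = int(np.sum(sig > rel_tol * sig[0]))             # truncation for stabilization
    N = Vt[:k].T / sig[:k]                              # right preconditioner (n x k)
    y = lsqr(A @ N, b, atol=1e-10, btol=1e-10)[0]       # well-conditioned LSQR solve
    return N @ y
```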

Typically, the iteratively reweighted least squares algorithm depends on a standard Tikhonov formulation. Here, this is solved using both a randomized singular value decomposition and the iterative LSQR Krylov algorithm. The results demonstrate that the new algorithm is competitive with these approaches and offers the advantage that no regularization parameter needs to be found at each outer iteration.
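The sketch below illustrates one way the IRLS outer iteration can wrap a Tikhonov-type subproblem solved with LSQR; the weighting scheme, regularization parameter handling, and stopping rules are simplified placeholders, and the randomized SVD variant mentioned above would replace the inner solver.

```python
# Hedged IRLS sketch with an LSQR-solved Tikhonov subproblem at each outer step.
import numpy as np
from scipy.sparse.linalg import lsqr

def irls_tikhonov(A, b, lam=1e-2, n_outer=10, eps=1e-4):
    """Approximate a sparsity-promoting solution of Ax ~= b via IRLS."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_outer):
        w = 1.0 / np.sqrt(np.abs(x) + eps)   # reweighting approximating an l1 penalty
        # Tikhonov subproblem: min ||Ax - b||^2 + lam^2 ||W x||^2, solved as an
        # augmented least-squares system with LSQR.
        A_aug = np.vstack([A, lam * np.diag(w)])
        b_aug = np.concatenate([b, np.zeros(n)])
        x = lsqr(A_aug, b_aug, atol=1e-10, btol=1e-10)[0]
    return x
```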

Given its efficiency, investigating the new algorithm for the joint inversion of these data sets may be fruitful. Initial research on joint inversion using the two-dimensional fast Fourier transform has recently been submitted and provides the basis for future work. Several alternative directions for dimensionality reduction are also discussed, including iteratively applying an approximate pseudo-inverse and obtaining an approximate Kronecker product decomposition via randomization for a general matrix. These are also topics for future consideration.
Contributors: Hogue, Jarom David (Author) / Renaut, Rosemary A. (Thesis advisor) / Jackiewicz, Zdzislaw (Committee member) / Platte, Rodrigo B. (Committee member) / Ringhofer, Christian (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2020