This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 109
Description
Ultrasound imaging is one of the major medical imaging modalities. It is inexpensive, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely, directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity. Thus, bilinear interpolation is chosen for our system.
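The scan-conversion step described above can be sketched in a few lines: the Cartesian pixel grid is mapped back to polar coordinates, and the four nearest polar samples are blended with bilinear weights. This is a hypothetical minimal sketch (the function name, grid sizes, and parameters are illustrative, not from the thesis):

```python
import numpy as np

def bilinear_scan_convert(polar_img, radii, angles, nx=64, nz=64):
    """Scan-convert a polar (depth x beam-angle) image to a Cartesian
    grid by blending the four nearest polar samples (bilinear weights).
    Hypothetical sketch; not the thesis implementation."""
    x = np.linspace(-radii[-1], radii[-1], nx)
    z = np.linspace(radii[0], radii[-1], nz)
    X, Z = np.meshgrid(x, z)
    r, th = np.hypot(X, Z), np.arctan2(X, Z)
    # fractional indices into the polar sample grid (clamped at edges)
    ri = np.interp(r, radii, np.arange(len(radii)))
    ti = np.interp(th, angles, np.arange(len(angles)))
    r0 = np.clip(np.floor(ri).astype(int), 0, len(radii) - 2)
    t0 = np.clip(np.floor(ti).astype(int), 0, len(angles) - 2)
    fr, ft = ri - r0, ti - t0
    return ((1 - fr) * (1 - ft) * polar_img[r0, t0]
            + fr * (1 - ft) * polar_img[r0 + 1, t0]
            + (1 - fr) * ft * polar_img[r0, t0 + 1]
            + fr * ft * polar_img[r0 + 1, t0 + 1])
```

Because the bilinear weights always sum to one, a constant polar image maps to a constant Cartesian image, which is a quick sanity check on the weighting.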
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The main objective of this study is to investigate the mechanical behaviour of cementitious composites subjected to dynamic tensile loading, including the effects of strain rate, temperature, and the addition of short fibres. A fabric pullout model and a tension stiffening model based on a finite difference scheme, previously developed at Arizona State University, were used to study the bonding mechanism between fibre and matrix and the tension stiffening produced by the addition of fibres and textiles. Uniaxial tension tests were conducted on strain-hardening cement-based composites (SHCC) and textile reinforced concrete (TRC), with and without short fibres, at strain rates ranging from 25 s-1 to 100 s-1. Historical data from quasi-static tests of the same materials were used to demonstrate rate effects, including increases in average tensile strength, strain capacity, and work-to-fracture at high strain rates. Polyvinyl alcohol (PVA), glass, and polypropylene were employed as concrete reinforcements. A state-of-the-art Phantom v7 high-speed camera was set up to record video at a frame rate of 10,000 fps. A random speckle pattern was applied to the surface of the specimens for image analysis. An optical, non-contacting deformation measurement technique referred to as digital image correlation (DIC) was used to carry out the image analysis by tracking the displacement field through comparison of the reference image with the deformed images. DIC successfully obtained full-field strain distributions and strain-versus-time responses, demonstrated the bonding mechanism from the perspective of the strain field, and corrected the stress-strain responses.
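The core DIC idea, tracking a speckle subset from the reference image to a deformed image, can be illustrated with an integer-pixel normalized cross-correlation search. This is a simplified sketch (real DIC adds sub-pixel interpolation and a full grid of subsets); the function name and parameters are hypothetical:

```python
import numpy as np

def dic_subset_shift(ref, deformed, center, half=8, search=5):
    """Integer-pixel DIC: find the shift (dr, dc) of a speckle subset by
    maximizing the normalized cross-correlation between the reference
    subset and candidate windows in the deformed image."""
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    sub = (sub - sub.mean()) / (sub.std() + 1e-12)
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = deformed[r + dr - half:r + dr + half + 1,
                           c + dc - half:c + dc + half + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float((sub * win).mean())
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift
```

Applying this at every subset of a grid yields the full-field displacement (and, by differentiation, strain) maps described above.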
Contributors: Yao, Yiming (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Distributed inference has applications in a wide range of fields such as source localization, target detection, environment monitoring, and healthcare. In this dissertation, distributed inference schemes which use bounded transmit power are considered. The performance of the proposed schemes is studied for a variety of inference problems. In the first part of the dissertation, a distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent are considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also favorably compared with existing amplify-and-forward (AF) and detect-and-forward (DF) schemes. The effect of fading is shown to be detrimental to the detection performance, and simulations are provided to corroborate the analytical results. The second part of the dissertation studies a distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel. The conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variances of the noise samples are bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique and its robustness to impulsive sensing noise distributions is highlighted. It is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate our analytical results. In the third part of this dissertation, the problem of estimating the average of samples distributed at the nodes of a sensor network is considered. A distributed average consensus algorithm in which every sensor transmits with bounded peak power is proposed. In the presence of communication noise, it is shown that the nodes reach consensus asymptotically to a finite random variable whose expectation is the desired sample average of the initial observations, with a variance that depends on the step size of the algorithm and the variance of the communication noise. The asymptotic performance is characterized by deriving the asymptotic covariance matrix using results from stochastic approximation theory. It is shown that using bounded transmissions results in slower convergence compared to the linear consensus algorithm based on the Laplacian heuristic. Simulations corroborate our analytical findings. Finally, a robust distributed average consensus algorithm in which every sensor performs nonlinear processing at the receiver is proposed. It is shown that nonlinearity at the receiver nodes makes the algorithm robust to a wide range of channel noise distributions, including impulsive ones. It is shown that the nodes reach consensus asymptotically, and results similar to the case of transmit nonlinearity are obtained. Simulations corroborate our analytical findings and highlight the robustness of the proposed algorithm.
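The first-part scheme, constant-modulus transmissions over a Gaussian multiple access channel, can be illustrated with a small Monte Carlo experiment that estimates an empirical deflection coefficient. All parameter values here are hypothetical, chosen only to show that the test statistic separates the two hypotheses:

```python
import numpy as np

rng = np.random.default_rng(1)

def cm_mac_statistic(x, omega=1.0, channel_noise_std=1.0):
    """Each sensor sends a constant-modulus signal exp(j*omega*x_i);
    the fusion center observes their coherent sum over a Gaussian
    multiple access channel plus receiver noise, and uses the
    normalized in-phase component as the test statistic."""
    w = channel_noise_std * (rng.standard_normal() + 1j * rng.standard_normal())
    y = np.exp(1j * omega * np.asarray(x)).sum() + w
    return y.real / len(x)

def run(theta, sensors=100, trials=500, sensing_noise_std=0.5):
    """Monte Carlo trials with the mean-shift sensing model
    x_i = theta + n_i (hypothetical parameters)."""
    return np.array([cm_mac_statistic(theta + sensing_noise_std
                                      * rng.standard_normal(sensors))
                     for _ in range(trials)])

t0, t1 = run(0.0), run(1.0)   # H0: theta = 0, H1: theta = 1
# empirical deflection coefficient: mean separation over H0 variance
deflection = (t1.mean() - t0.mean()) ** 2 / t0.var()
```

The statistic's mean under each hypothesis is governed by the characteristic function of the sensing noise evaluated at omega, which is the dependence the dissertation analyzes and optimizes.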
Contributors: Dasarathan, Sivaraman (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Reisslein, Martin (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement-origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but a loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures, the electromechanically coupled LISA/SIM model was used to simulate temperature effects. The numerical results showed that this approach would be capable of estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C, where error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
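The time-of-flight localization idea can be sketched as a grid search over candidate damage locations, with the scattered path length (actuator to damage to sensor) divided by the wave velocity. This is a deliberately simplified stand-in for the Bayesian framework above; the geometry and velocity values are hypothetical:

```python
import numpy as np

def localize_damage(actuator, sensors, tofs, velocity,
                    grid=np.linspace(0.0, 0.2, 81)):
    """Grid search over candidate damage locations, minimizing the
    squared mismatch between measured and predicted scattered-wave
    times of flight (path: actuator -> damage -> sensor)."""
    best_err, best_xy = np.inf, None
    for x in grid:
        for y in grid:
            d = np.array([x, y])
            pred = (np.linalg.norm(d - actuator)
                    + np.linalg.norm(sensors - d, axis=1)) / velocity
            err = np.sum((pred - tofs) ** 2)
            if err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy
```

With three sensors and exact synthetic times of flight, the search recovers the damage coordinates to within the grid resolution; the framework in the dissertation additionally replaces the known velocity with a temperature-dependent estimate.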
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability analyses, followed by simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation incorporating sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.
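The Monte Carlo estimation of a probability of failure, and the corresponding reliability index, can be sketched for a toy limit state g = R - S with normal resistance and load, so the simulation can be checked against the closed-form answer. All numbers are illustrative, not from the study:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)

# Toy limit state g = R - S (failure when g < 0). Normal R and S are
# illustrative choices so the Monte Carlo estimate of the probability
# of failure can be checked against the closed-form value.
mu_R, sig_R = 10.0, 1.0   # resistance mean / std (hypothetical)
mu_S, sig_S = 6.0, 1.0    # load mean / std (hypothetical)

N = 1_000_000
R = rng.normal(mu_R, sig_R, N)
S = rng.normal(mu_S, sig_S, N)
pf_mc = np.mean(R - S < 0.0)            # Monte Carlo failure probability

beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2)   # reliability index
pf_exact = 0.5 * erfc(beta / sqrt(2.0))            # Phi(-beta)
```

In an RBDO formulation, a constraint of the form beta >= beta_target (equivalently, pf <= pf_target) is imposed alongside the usual performance constraints.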
Contributors: Deivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability for assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage; this limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied, and the results indicate that this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure, and the model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
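As a point of reference for what RUC-level homogenization computes, the simplest micromechanics estimates of effective moduli are the Voigt and Reuss bounds; GMC-type analyses refine these with subcell-level stress/strain coupling. A sketch with hypothetical fiber/matrix values:

```python
def effective_moduli(Ef, Em, Vf):
    """Voigt (rule of mixtures) and Reuss (inverse rule) estimates of
    the axial and transverse moduli of a unidirectional RUC.
    Inputs: fiber modulus, matrix modulus, fiber volume fraction."""
    E1 = Vf * Ef + (1.0 - Vf) * Em            # axial (Voigt bound)
    E2 = 1.0 / (Vf / Ef + (1.0 - Vf) / Em)    # transverse (Reuss bound)
    return E1, E2
```

For hypothetical carbon/epoxy values (Ef = 230 GPa, Em = 3.5 GPa, Vf = 0.6) this gives roughly E1 of 139 GPa and E2 of 8.6 GPa, bracketing what a subcell analysis would predict.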
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The main objective of this study is to develop an innovative system in the form of a sandwich-panel-type composite with textile reinforced skins and an aerated concrete core. Existing theoretical concepts along with extensive experimental investigations were utilized to characterize the behavior of cement based systems in the presence of individual fibers and textile yarns. Part of this thesis is based on a material model developed here at Arizona State University to simulate the experimental flexural response and back-calculate the tensile response. This concept is based on a constitutive law consisting of a tri-linear tension model with residual strength and a bilinear elastic perfectly plastic compression stress-strain model. This parametric model was used to characterize Textile Reinforced Concrete (TRC) with aramid, carbon, alkali resistant glass, and polypropylene reinforcement, as well as hybrid systems of aramid and polypropylene. The same material model was also used to characterize long-term durability issues with glass fiber reinforced concrete (GFRC), using historical data on the effect of temperature on the aging of GFRC composites. An experimental study was conducted to understand the behavior of aerated concrete systems under high strain rate impact loading. The test setup was based on the free fall drop of an instrumented hammer in a three-point bending configuration. Two types of aerated concrete, autoclaved aerated concrete (AAC) and polymeric fiber-reinforced aerated concrete (FRAC), were tested and compared in terms of their impact behavior. The effect of impact energy on the mechanical properties was investigated for various drop heights and different specimen sizes. Both materials showed similar flexural load carrying capacity under impact; however, the flexural toughness of fiber-reinforced aerated concrete proved to be several times higher than that of plain autoclaved aerated concrete. The effect of specimen size and drop height on the impact response of AAC and FRAC was studied and discussed. The results were compared to the performance of sandwich beams with AR glass textile skins and an aerated concrete core under similar impact conditions. From this extensive study it was concluded that this type of sandwich composite could be effectively used in low-cost, sustainable infrastructure projects.
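Two of the basic reductions in such a drop-weight study, the delivered impact energy and the flexural toughness as the area under the load-deflection curve, can be sketched as follows (the values in the usage note are hypothetical):

```python
import numpy as np

def impact_energy(mass_kg, drop_height_m, g=9.81):
    """Potential energy delivered by a free-fall instrumented hammer."""
    return mass_kg * g * drop_height_m

def flexural_toughness(load_N, deflection_m):
    """Flexural toughness as the area under the load-deflection curve,
    computed by trapezoidal integration of the recorded points."""
    load = np.asarray(load_N, dtype=float)
    defl = np.asarray(deflection_m, dtype=float)
    return float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(defl)))
```

For example, a hypothetical 10 kg hammer dropped from 0.5 m delivers about 49 J, and a linear load ramp from 0 to 100 N over 10 mm of deflection corresponds to a toughness of 0.5 J.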
Contributors: Dey, Vikram (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Nuclear magnetic resonance (NMR) is an important phenomenon involving nuclear magnetic moments in a magnetic field, which can provide much information about a wide range of materials, including their chemical composition, chemical environments and nuclear spin interactions. The NMR spectrometer has been extensively developed and used in many areas of research. In this thesis, studies in two different areas using NMR are presented. First, a new kind of nanoparticle, Gd(DTPA)-intercalated layered double hydroxide (LDH), has been successfully synthesized in the laboratory of Prof. Dey in SEMTE at ASU. In Chapter II, NMR relaxation studies of two types of LDH (Mg,Al-LDH and Zn,Al-LDH) are presented, and the results show that when intercalated with Gd(DTPA) they have a higher relaxivity than current commercial magnetic resonance imaging (MRI) contrast agents, such as DTPA in water solution. So this material may be useful as an MRI contrast agent. Several conditions were examined, such as nanoparticle size, pH and intercalation percentage, to determine the optimal relaxivity of this nanoparticle. Further NMR studies and simulations were conducted to provide an explanation for the high relaxivity. Second, fly ash is a cementitious material which has been of great interest because, when activated by an alkaline solution, it is capable of replacing ordinary Portland cement as a concrete binder. However, the reaction of activated fly ash is not fully understood. In Chapter III, pore structure and NMR studies of activated fly ash using different activators, including NaOH and KOH (4 M and 8 M) and Na/K silicate, are presented. The pore structure, degree of order and proportion of different components in the reaction product were obtained, which reveal much about the reaction and the makeup of the final product.
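The relaxivity comparison mentioned above is typically reduced from a linear fit of the observed relaxation rate 1/T1 against contrast-agent concentration; the slope is the relaxivity r1. A sketch with synthetic (hypothetical) data:

```python
import numpy as np

# Relaxivity r1 is the slope of the observed relaxation rate versus
# contrast-agent concentration: 1/T1_obs = 1/T1_solvent + r1 * [Gd].
# The numbers below are synthetic, purely to illustrate the reduction.
conc_mM = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # Gd concentration (mM)
r1_true, R1_solvent = 5.0, 0.3                  # s^-1 mM^-1 and s^-1
rates = R1_solvent + r1_true * conc_mM          # measured 1/T1 (s^-1)

r1_fit, R1_fit = np.polyfit(conc_mM, rates, 1)  # least-squares line
```

A higher fitted slope at the same field strength is what "higher relaxivity than current commercial contrast agents" refers to.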
Contributors: Peng, Zihui (Author) / Marzke, Robert F (Thesis advisor) / Dey, Sandwip Kumar (Committee member) / Neithalath, Narayanan (Committee member) / Chamberlin, Ralph Vary (Committee member) / McCartney, Martha Rogers (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Tall buildings are spreading across the globe at an ever-increasing rate (www.ctbuh.org). The global number of buildings 200m or more in height has risen from 286 to 602 in the last decade alone. The increasing complexity of building architecture poses unique challenges in the structural design of modern tall buildings. Hence, innovative structural systems need to be evaluated to create an economical design that satisfies multiple design criteria. Design using a traditional trial-and-error approach can be extremely time-consuming, and the resultant design may be uneconomical. Thus, there is a need for an efficient numerical optimization tool that can explore and generate several design alternatives in the preliminary design phase, leading to a more desirable final design. In this study, we present the details of a tool that can be very useful in preliminary design optimization: finite element modeling, design optimization, translation of design code requirements into components of the FE and design optimization models, and pre- and post-processing to verify the veracity of the model. Emphasis is placed on the development and deployment of various FE models (static, modal and dynamic analyses; linear, beam and plate/shell finite elements), design optimization problem formulations (sizing, shape, topology and material selection optimization) and numerical optimization tools (gradient-based and evolutionary optimization methods) [Rajan, 2001]. The design optimization results for full-scale three-dimensional buildings subject to multiple design criteria, including stress, serviceability and dynamic response, are discussed.
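The flavor of sizing optimization can be shown on the smallest possible example: the minimum-weight cross-section of an axially loaded bar, where the optimum is set by whichever constraint (stress or displacement) is active. This is a hand-solvable sketch of what the gradient-based tools described above do for full 3D buildings; all values are hypothetical:

```python
def optimal_bar_area(P, L, E, rho, sigma_allow, defl_allow):
    """Minimum-weight sizing of an axially loaded bar: the optimal
    cross-sectional area is set by the active constraint, either
    stress (P/A <= sigma_allow) or axial displacement
    (P*L/(A*E) <= defl_allow). Returns (area, weight)."""
    A_stress = P / sigma_allow           # smallest area passing stress
    A_defl = P * L / (E * defl_allow)    # smallest area passing deflection
    A = max(A_stress, A_defl)            # both constraints must hold
    return A, rho * A * L
```

For hypothetical steel values (P = 100 kN, L = 2 m, E = 200 GPa, allowable stress 250 MPa, allowable deflection 1 mm), the displacement constraint governs and the optimal area is 10 cm².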
Contributors: Sirigiri, Mamatha (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This dissertation introduces stochastic ordering of instantaneous channel powers of fading channels as a general method to compare the performance of a communication system over two different channels, even when a closed-form expression for the metric may not be available. Such a comparison is made with respect to a variety of performance metrics, such as error rates, outage probability and ergodic capacity, which share common mathematical properties such as monotonicity, convexity or complete monotonicity. Complete monotonicity of a metric, such as the symbol error rate, in conjunction with the stochastic Laplace transform order between two fading channels, implies the ordering of the two channels with respect to the metric. While it has been established previously that certain modulation schemes have convex symbol error rates, there has been no study of their complete monotonicity, which helps in establishing stronger channel ordering results. Toward this goal, the current research proves, for the first time, that all 1-dimensional and 2-dimensional modulations have completely monotone symbol error rates. Furthermore, it is shown that the parametric fading distributions frequently used for modeling line of sight exhibit a monotonicity in the line-of-sight parameter with respect to the Laplace transform order. While the Laplace transform order can also be used to order fading distributions based on ergodic capacity, several distributions exist which are not Laplace transform ordered although they have ordered ergodic capacities. To address this gap, a new stochastic order called the ergodic capacity order is proposed herein, which can be used to compare channels based on the ergodic capacity. Using stochastic orders, the average performance of systems involving multiple random variables is compared over two different channels. These systems include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise. This research also addresses the problem of unifying fading distributions. This unification is based on infinite divisibility, which subsumes almost all known fading distributions, and provides simplified expressions for performance metrics, in addition to enabling stochastic ordering.
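The central ordering argument can be checked numerically: exponential channel powers (Rayleigh fading) with different average SNRs are Laplace-transform ordered, so any completely monotone metric, such as the SER-like proxy exp(-x) used here purely for illustration, inherits the ordering:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential channel powers (Rayleigh fading) with average SNRs 1 and 2
# are Laplace-transform ordered; a completely monotone metric such as
# the illustrative proxy g(x) = exp(-x) then inherits the ordering.
X = rng.exponential(1.0, 200_000)   # weaker channel
Y = rng.exponential(2.0, 200_000)   # stronger channel

def empirical_laplace(samples, s):
    """Empirical Laplace transform E[exp(-s * power)] at s > 0."""
    return np.mean(np.exp(-s * samples))

# X <=Lt Y means E[exp(-s X)] >= E[exp(-s Y)] for all s > 0
lt_gaps = [empirical_laplace(X, s) - empirical_laplace(Y, s)
           for s in (0.5, 1.0, 2.0)]
ser_X = np.mean(np.exp(-X))   # average completely monotone metric
ser_Y = np.mean(np.exp(-Y))
```

Closed forms confirm the simulation: E[exp(-sX)] = 1/(1+s) exceeds E[exp(-sY)] = 1/(1+2s) for every s > 0, and the average metric is correspondingly larger (worse) on the weaker channel.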
Contributors: Rajan, Adithya (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014