Description
Damage assessment and residual useful life estimation (RULE) are essential for aerospace, civil and naval structures. Structural Health Monitoring (SHM) attempts to automate the process of damage detection and identification. Multiscale modeling is a key element in SHM: it not only provides important information on the physics of failure, such as damage initiation and growth, but its output can also be used as "virtual sensing" data for detection and prognosis. The current research is part of an ongoing multidisciplinary effort to develop an integrated SHM framework for metallic aerospace components. In this thesis, a multiscale model is developed by bridging the relevant length scales: micro, meso and macro (or structural scale). Microstructural representations obtained from material characterization studies are used to define the length scales and to capture the size and orientation of the grains at the micro level. Parametric studies are conducted to estimate the material parameters used in the constitutive model. Numerical and experimental studies are performed to investigate the effects of Representative Volume Element (RVE) size, defect area fraction and defect distribution. A multiscale damage criterion accounting for crystal orientation effects is developed and applied to predict the initial stage of fatigue cracking. A damage evolution rule based on strain energy density is modified to incorporate crystal plasticity at the microscale (local level). Optimization approaches are used to calculate a global damage index, which is used to predict RVE failure; the damage criterion simultaneously provides potential cracking directions. A wave propagation model is coupled with the damage model to detect changes in sensing signals due to plastic deformation and damage growth.
ContributorsLuo, Chuntao (Author) / Chattopadhyay, Aditi (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jiang, Hanqing (Committee member) / Dai, Lenore (Committee member) / Li, Jian (Committee member) / Arizona State University (Publisher)
Created2011
Description
Polymers and polymer matrix composite (PMC) materials are used extensively in civil and mechanical engineering applications. The behavior of epoxy resin polymers under different types of loading conditions must be understood before the mechanical behavior of PMCs can be accurately predicted. In many structural applications, PMC structures are subjected to large flexural loadings; examples include the repair of structures against earthquakes and engine fan cases. It is therefore important to characterize and model the flexural mechanical behavior of epoxy resin materials. In this thesis, a comprehensive research effort was undertaken, combining experiments and theoretical modeling, to investigate the mechanical behavior of epoxy resins subjected to different loading conditions. Epoxy resin E 863 was tested at different strain rates. Samples with dog-bone geometry were used in the tension tests, and small cubic, prismatic, and cylindrical samples were used in the compression tests. Flexural tests were conducted on samples with different sizes and loading conditions. Strains were measured using the digital image correlation (DIC) technique, extensometers, strain gauges, and actuators. The effects of stress triaxiality were studied. Cubic, prismatic, and cylindrical compression samples all undergo a stress drop at yield, but only the cubic samples were found to experience strain hardening before failure. Characteristic points of the tensile and compressive stress-strain relations and of the flexural load-deflection curve were measured, and their variation with strain rate was studied. Two different stress-strain models were used to investigate the effect of out-of-plane loading on the uniaxial stress-strain response of the epoxy resin material. The first is a strain-softening model with plastic flow in tension and compression. The influence of softening localization on material behavior was investigated using the DIC system. It was found that compressive plastic flow has a negligible influence on flexural behavior in epoxy resins, which are stronger in compression than in tension in both the pre-peak and post-peak softening regimes. The second model was a piecewise-linear stress-strain curve with a simplified post-peak response. Beams and plates with different boundary conditions were tested and analytically studied. The flexural over-strength factor for epoxy resin polymeric materials was also evaluated.
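As an illustration of the second modeling approach, the sketch below evaluates a generic piecewise-linear stress-strain curve with linear softening after the peak; the peak and ultimate values are hypothetical placeholders, not the calibrated E 863 parameters from this work.

```python
import numpy as np

def piecewise_linear_stress(strain, eps_peak=0.04, sigma_peak=95.0,
                            eps_ult=0.09, sigma_ult=60.0):
    """Sketch of a piecewise-linear stress-strain curve (stress in MPa):
    linear up to the peak, linear post-peak softening, zero stress after
    failure. All parameter values are illustrative, not measured properties."""
    strain = np.asarray(strain, dtype=float)
    rising_slope = sigma_peak / eps_peak
    softening_slope = (sigma_ult - sigma_peak) / (eps_ult - eps_peak)
    stress = np.where(
        strain <= eps_peak,
        rising_slope * strain,
        sigma_peak + softening_slope * (strain - eps_peak),
    )
    return np.where(strain > eps_ult, 0.0, stress)

# Example: evaluate the curve over a strain sweep
strains = np.linspace(0.0, 0.10, 11)
print(piecewise_linear_stress(strains))
```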
ContributorsYekani Fard, Masoud (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Li, Jian (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and ideal temperatures for many of these processes are unknown. Through the use of fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes such as BODIPY, rose Bengal, Rhodamine dyes 6G, 700, and 800, and Nile Blue A and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while particles are heated from 25 °C to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. Photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots are generated in MATLAB to describe each dye's intensity vs temperature. Regression curves are created to describe change in fluorescence intensity over temperature. Dyes are compared as nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R² values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G only yielded an R² value of 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve yielded a much different shape from the other dyes. It is recommended that future work describe a broader range of dyes and nanoparticle sizes, and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. Further research into the effects of nanoparticle size on fluorescence intensity levels should be considered, as the particles used here greatly exceed 2 µm. In addition, Nile Blue A should be further investigated as to why its fluorescence-temperature curve did not take on a characteristic shape for a temperature-sensitive dye in these experiments.
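A minimal sketch of the intensity-temperature regression described above is shown below; the calibration numbers are made up for illustration, and np.polyfit stands in for whatever fitting routine was actually used in MATLAB.

```python
import numpy as np

# Hypothetical calibration data: temperature (°C) vs. normalized fluorescence intensity
temperature = np.array([25, 30, 35, 40, 45, 50, 55, 60, 65, 70], dtype=float)
intensity = np.array([1.00, 0.96, 0.91, 0.85, 0.80, 0.74, 0.69, 0.63, 0.58, 0.52])

# Second-order polynomial regression, as used for the Nile Red curves
coeffs = np.polyfit(temperature, intensity, deg=2)
fitted = np.polyval(coeffs, temperature)

# Coefficient of determination R^2
ss_res = np.sum((intensity - fitted) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"fit coefficients: {coeffs}, R^2 = {r_squared:.4f}")
```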
ContributorsTomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created2011
Description
Gene therapy is a promising technology for the treatment of various nonheritable and genetically acquired diseases. It involves delivery of a therapeutic gene into target cells to induce cellular responses against disease. Successful gene therapy requires an efficient gene delivery vector to deliver genetic materials into target cells. There are two major classes of gene delivery vectors: viral and non-viral vectors. Recently, non-viral vectors such as cationic polymers have attracted more attention than viral vectors because they are versatile and non-immunogenic. However, cationic polymers suffer from poor gene delivery efficiency due to biological barriers. The objective of this research is to develop strategies to overcome these barriers and enhance polymer-mediated transgene expression. This study aimed to (i) develop new polymer vectors for gene delivery, (ii) investigate the intracellular barriers in polymer-mediated gene delivery, and (iii) explore new approaches to overcome the barriers. A cationic polymer library was developed by employing a parallel synthesis and high-throughput screening method. Lead polymers were identified from the library based on relative levels of transgene expression and toxicity in PC3-PSMA prostate cancer cells. However, transgene expression levels were found to depend on the intracellular localization of polymer-gene complexes (polyplexes): transgene expression was higher when polyplexes were dispersed rather than localized in the cytoplasm. Combination treatments using small-molecule chemotherapeutic drugs, e.g. histone deacetylase inhibitors (HDACi) or an Aurora kinase inhibitor (AKI), increased dispersion of polyplexes in the cytoplasm and significantly enhanced transgene expression. The combination treatment using polymer-mediated delivery of the p53 tumor-suppressor gene and AKI increased p53 expression in PC3-PSMA cells, inhibited cell proliferation by ~80%, and induced apoptosis. Polymer-mediated p53 gene delivery in combination with AKI offers a promising treatment strategy for in vivo and clinical studies of cancer gene therapy.
ContributorsBarua, Sutapa (Author) / Rege, Kaushal (Thesis advisor) / Dai, Lenore (Committee member) / Meldrum, Deirdre R. (Committee member) / Sierks, Michael (Committee member) / Voelkel-Johnson, Christina (Committee member) / Arizona State University (Publisher)
Created2011
Description
In this work, we present approximate adders and multipliers to reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency and power consumption compared to their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) with good performance but low area. Our implementation of the 2D DCT has PSNR performance comparable to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of the proposed approximate multiplier. We show that applying the proposed approximate multiplier improves the PSNR performance of a 32x32 FFT implementation by 4.7 dB compared to an implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks, and the design is synthesized using 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation using accurate parallel adders and multipliers is 31.33 dB, and that using approximate parallel adders and multipliers is 30.86 dB, when evaluated against the original image. The PSNR performance of both designs is comparable to that of the double-precision floating-point MATLAB implementation of the algorithm.
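Since the designs above are compared by PSNR against the exact result, a small sketch of that metric may help; the 8-bit image data here are randomly generated placeholders, not outputs of the approximate datapaths discussed in the thesis.

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    test image, e.g. one produced by an approximate datapath."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example: compare a perturbed "approximate" result against the exact one
rng = np.random.default_rng(0)
exact = rng.integers(0, 256, size=(64, 64))
approx = np.clip(exact + rng.integers(-2, 3, size=exact.shape), 0, 255)
print(f"PSNR = {psnr(exact, approx):.2f} dB")
```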
ContributorsVasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created2013
Description
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
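To make the sequential Bayesian idea concrete, the sketch below runs a plain bootstrap particle filter with a hypothetical Paris-law-like growth model and a Gaussian measurement model; it is a simplified stand-in, not the interacting multiple model and adaptive observation-mode selection developed in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(a, C=1e-3, m=1.3, dt=1.0):
    # Hypothetical Paris-law-like crack growth with additive process noise
    return a + C * a**m * dt + rng.normal(0.0, 0.01, size=a.shape)

def likelihood(z, a, sigma=0.05):
    # Gaussian measurement model: observed feature ~ true crack length
    return np.exp(-0.5 * ((z - a) / sigma) ** 2)

def particle_filter(measurements, n_particles=500, a0=1.0):
    particles = np.full(n_particles, a0) + rng.normal(0, 0.05, n_particles)
    estimates = []
    for z in measurements:
        particles = propagate(particles)                  # predict
        w = likelihood(z, particles)                      # update
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Example: track a slowly growing crack from noisy length observations
true_length = 1.0 + 0.02 * np.arange(30)
observations = true_length + rng.normal(0, 0.05, true_length.size)
print(particle_filter(observations)[-1])
```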
ContributorsHuff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created2013
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the unavailability of sufficient amounts of representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases to cluster the ECG data, which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
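A minimal sketch of Dirichlet process Gaussian mixture clustering on unlabeled beat features is given below, using scikit-learn's truncated variational implementation; the two-dimensional synthetic features are placeholders for the estimated ECG model parameters and are not drawn from PhysioNet data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical per-beat feature vectors (e.g., estimated model parameters), unlabeled
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([0.0, 1.0], 0.1, size=(100, 2)),   # one beat morphology
    rng.normal([2.0, -1.0], 0.2, size=(80, 2)),   # another morphology
])

# Truncated Dirichlet process mixture: n_components is only an upper bound;
# the number of effective clusters is inferred from the data
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
active_clusters = np.sum(dpgmm.weights_ > 1e-2)
print(f"effective clusters: {active_clusters}")
```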
ContributorsEdla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2012
Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires division and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared; among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
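The sketch below illustrates envelope detection and log compression on one RF line; it uses SciPy's FFT-based analytic signal rather than the FIR-based Hilbert transform or quadrature demodulation studied here, and the synthetic decaying echo is only a placeholder.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_log_compress(rf_line, dynamic_range_db=60.0):
    """Envelope detection of one RF scan line via the analytic signal
    (Hilbert transform), followed by log compression for display."""
    analytic = hilbert(rf_line)               # complex analytic signal
    envelope = np.abs(analytic)               # magnitude = envelope
    envelope /= envelope.max() + 1e-12        # normalize so the peak is 0 dB
    log_env = 20.0 * np.log10(envelope + 1e-12)
    return np.clip(log_env, -dynamic_range_db, 0.0)

# Example: a decaying 5 MHz sinusoid standing in for an RF echo line
t = np.linspace(0, 1e-3, 2000)
rf = np.exp(-3e3 * t) * np.sin(2 * np.pi * 5e6 * t)
display_line = envelope_log_compress(rf)
```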
ContributorsWei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2013
Description
Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to the market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue culture plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.
ContributorsTaylor, David (Author) / Rege, Kaushal (Thesis advisor) / Jayaraman, Arul (Committee member) / Nielsen, David (Committee member) / Kodibagkar, Vikram (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created2013
Description
Developing new non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation model inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also provide a more promising path toward reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types of neurons can be combined to build an arbitrary SNP neuron, which leads to the conclusion that the two basic neuron types are Turing complete, since SNP systems have been proved Turing complete. The two basic types of neurons are further used as elements to construct general-purpose arithmetic circuits, such as an adder, a subtractor and a comparator. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is shown to be equivalent to threshold error, which makes the discussion of threshold error more general. To improve reliability, a new structure called a motif is proposed. Compared to a Triple Modular Redundancy improvement, the motif design demonstrates its efficiency and effectiveness in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic types of neurons, and the functionality of the basic neuron types is verified using SPICE simulations. The motif-improved adder and comparator, compared to conventional Boolean logic designs, are much more reliable, with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
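As a purely illustrative sketch (the thesis's exact LP/HP firing rules are not reproduced here), one can model the two neuron types as threshold tests on the number of incoming spikes and compose them into a simple gate:

```python
def lp_neuron(spikes, threshold=2):
    """Hypothetical low-pass neuron: fires (returns 1) only when the number of
    incoming spikes is at or below the threshold. This firing rule is an
    assumption for illustration, not the thesis's exact definition."""
    return 1 if sum(spikes) <= threshold else 0

def hp_neuron(spikes, threshold=2):
    """Hypothetical high-pass neuron: fires only when the number of incoming
    spikes exceeds the threshold."""
    return 1 if sum(spikes) > threshold else 0

# A two-input AND-like gate from a single HP neuron with threshold 1:
# it fires only when both inputs spike in the same step.
def and_gate(a, b):
    return hp_neuron([a, b], threshold=1)

assert and_gate(1, 1) == 1 and and_gate(1, 0) == 0 and and_gate(0, 0) == 0
```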
ContributorsAn, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2013