Matching Items (324)
Description
Sparse learning is a machine learning technique for feature selection and dimensionality reduction that seeks a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant from the irrelevant has been a topic of sustained focus. In supervised learning such as regression, the data consist of many features, and only a subset of them may be responsible for the result. The features may also carry structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported: features may be grouped, hierarchies and overlapping groups may exist among them, and the most relevant groups may need to be selected. Even though the solutions obtained are sparse, they are not guaranteed to be robust. To make the selection robust, certain techniques provide theoretical justification for why particular features are selected. Stability selection is one such method: it allows an existing sparse learning method to select a stable set of features for a given training sample by assigning a probability to each feature, obtained by sub-sampling the training data, applying the sparse learning technique to learn the relevant features, repeating this a large number of times, and counting the fraction of runs in which each feature is selected. Cross-validation is then used to determine the best parameter value over a range of values by selecting the one that gives the maximum accuracy score. With this combination of algorithms, good convergence guarantees, stable feature selection, and support for various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebra subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and they can be applied in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used to select the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
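The stability-selection procedure described above can be summarized in a short sketch. The following Python fragment uses scikit-learn's Lasso as a stand-in for a SLEP sparse solver; the penalty, sub-sampling fraction, number of rounds, and the 0.8 threshold are illustrative assumptions rather than values from the thesis or the SLEP package.

```python
# Minimal sketch of stability selection: repeatedly sub-sample the training data,
# fit a sparse learner, and count how often each feature receives a nonzero weight.
# Lasso stands in for a SLEP solver; alpha, frac, and the threshold are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_rounds=100, frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(frac * n), replace=False)   # sub-sample rows
        coef = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx]).coef_
        counts += np.abs(coef) > 1e-8                            # selected this round?
    return counts / n_rounds                                     # selection probabilities

# Features whose selection probability exceeds a threshold form the stable set:
#   probs = stability_selection(X, y)
#   stable_features = np.where(probs > 0.8)[0]
```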
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this could be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method that can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, which is the key to this approach, is the process of analyzing the necessity of each element in a solution. The post-optimization algorithm presented here utilizes the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from the solution. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be presented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previously best known results, but it can also be added as a refinement step to any construction technique, and in most cases further improvements are expected. The post-optimization algorithm is then modified to accommodate every k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. This generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored, which can further improve the quality of the solution.
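As a rough, illustrative sketch of the necessity-analysis idea for the covering-array case (not the dissertation's algorithm), the fragment below treats a row as unnecessary when every t-way interaction it covers is also covered by another row, and removes such rows one at a time.

```python
# Toy sketch of necessity analysis for a covering array, given as a list of rows:
# a row is unnecessary if every t-way interaction it covers also appears in some
# other row; such rows can be removed while the array remains a covering array.
from itertools import combinations

def interaction_cover(rows, t):
    """Map each (columns, values) interaction to the set of row indices covering it."""
    cover = {}
    for r, row in enumerate(rows):
        for cols in combinations(range(len(row)), t):
            key = (cols, tuple(row[c] for c in cols))
            cover.setdefault(key, set()).add(r)
    return cover

def post_optimize(rows, t):
    rows = [list(r) for r in rows]
    removed = True
    while removed:
        removed = False
        cover = interaction_cover(rows, t)
        for r in range(len(rows)):
            # necessity analysis: is row r the sole cover of any interaction?
            if all(len(cov) > 1 for cov in cover.values() if r in cov):
                del rows[r]          # unnecessary row: drop it and re-analyze
                removed = True
                break
    return rows
```

A scalable implementation would update coverage counts incrementally rather than recomputing them after each removal, as this naive version does.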
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Damage assessment and residual useful life estimation (RULE) are essential for aerospace, civil, and naval structures. Structural Health Monitoring (SHM) attempts to automate the process of damage detection and identification. Multiscale modeling is a key element in SHM: it not only provides important information on the physics of failure, such as damage initiation and growth, but its output can also be used as "virtual sensing" data for detection and prognosis. The current research is part of an ongoing multidisciplinary effort to develop an integrated SHM framework for metallic aerospace components. In this thesis, a multiscale model has been developed by bridging the relevant length scales: micro, meso, and macro (or structural). Microstructural representations obtained from material characterization studies are used to define the length scales and to capture the size and orientation of the grains at the micro level. Parametric studies are conducted to estimate the material parameters used in the constitutive model. Numerical and experimental simulations are performed to investigate the effects of Representative Volume Element (RVE) size, defect area fraction, and defect distribution. A multiscale damage criterion accounting for crystal orientation effects is developed and applied to predict the initial stage of fatigue cracking. A damage evolution rule based on strain energy density is modified to incorporate crystal plasticity at the microscale (local level). Optimization approaches are used to calculate a global damage index, which is used to predict RVE failure, and the damage criterion simultaneously provides potential cracking directions. A wave propagation model is coupled with the damage model to detect changes in sensing signals due to plastic deformation and damage growth.
Contributors: Luo, Chuntao (Author) / Chattopadhyay, Aditi (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jiang, Hanqing (Committee member) / Dai, Lenore (Committee member) / Li, Jian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Polymers and polymer matrix composite (PMC) materials are used extensively in civil and mechanical engineering applications. The behavior of epoxy resin polymers under different types of loading conditions must be understood before the mechanical behavior of PMCs can be accurately predicted. In many structural applications, PMC structures are subjected to large flexural loadings; examples include the repair of structures against earthquakes and engine fan cases. It is therefore important to characterize and model the flexural mechanical behavior of epoxy resin materials. In this thesis, a comprehensive research effort was undertaken, combining experiments and theoretical modeling, to investigate the mechanical behavior of epoxy resins subjected to different loading conditions. Epoxy resin E 863 was tested at different strain rates. Samples with dog-bone geometry were used in the tension tests, and small cubic, prismatic, and cylindrical samples were used in the compression tests. Flexural tests were conducted on samples with different sizes and loading conditions. Strains were measured using the digital image correlation (DIC) technique, extensometers, strain gauges, and actuators. Effects of the triaxial state of stress were studied. Cubic, prismatic, and cylindrical compression samples all undergo a stress drop at yield, but it was found that only cubic samples experience strain hardening before failure. Characteristic points of the tensile and compressive stress-strain relations and of the flexural load-deflection curve were measured, and their variation with strain rate was studied. Two different stress-strain models were used to investigate the effect of out-of-plane loading on the uniaxial stress-strain response of the epoxy resin material. The first is a strain-softening model with plastic flow for tension and compression. The influence of softening localization on material behavior was investigated using the DIC system. It was found that compressive plastic flow has a negligible influence on flexural behavior in epoxy resins, which are stronger in pre-peak and post-peak softening in compression than in tension. The second model is a piecewise-linear stress-strain curve with a simplified post-peak response. Beams and plates with different boundary conditions were tested and studied analytically. The flexural over-strength factor for epoxy resin polymeric materials was also evaluated.
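The second, piecewise-linear idealization mentioned above can be sketched in a few lines. The breakpoint strains and stresses below are hypothetical placeholders, not measured properties of epoxy resin E 863.

```python
# Sketch of a piecewise-linear idealization of the uniaxial stress-strain response,
# with a simplified linear post-peak softening branch. Breakpoint values are made up.
import numpy as np

# (strain, stress in MPa) breakpoints: initial linear rise, peak, softened plateau
breakpoint_strain = np.array([0.000, 0.030, 0.060, 0.200])
breakpoint_stress = np.array([0.0,   75.0,  55.0,  50.0])

def stress(strain):
    """Piecewise-linear uniaxial stress for a given strain (illustration only)."""
    return np.interp(strain, breakpoint_strain, breakpoint_stress)

# Example: stress(0.045) interpolates on the post-peak softening segment.
```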
Contributors: Yekani Fard, Masoud (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Li, Jian (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and the ideal temperatures for many of these processes are unknown. Through fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes, such as BODIPY, rose Bengal, Rhodamine 6G, Rhodamine 700, Rhodamine 800, Nile Blue A, and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while the particles are heated from 25 to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. The photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots are generated in MATLAB to describe each dye's intensity versus temperature. Regression curves are created to describe the change in fluorescence intensity with temperature. The dyes are compared as the nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R² values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G yielded an R² value of only 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve had a markedly different shape from those of the other dyes. It is recommended that future work cover a broader range of dyes and nanoparticle sizes and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. The effect of nanoparticle size on fluorescence intensity levels should also be investigated further, as the particles used here greatly exceed 2 µm. In addition, Nile Blue A should be investigated further to determine why its fluorescence-temperature curve did not take on the characteristic shape of a temperature-sensitive dye in these experiments.
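The calibration step described above amounts to regressing intensity on temperature and then inverting the fit. Below is a minimal sketch with hypothetical intensity values; the original analysis was performed in MATLAB, and Python is used here only for illustration.

```python
# Sketch of fluorescence-thermometry calibration: fit a second-order polynomial of
# intensity versus temperature, compute R^2, and invert the fit to estimate
# temperature from a new intensity reading. Intensity values are hypothetical.
import numpy as np

temps = np.arange(25.0, 71.0, 5.0)                       # calibration temperatures, °C
intensity = np.array([1.00, 0.95, 0.89, 0.84, 0.78,
                      0.73, 0.67, 0.62, 0.57, 0.52])     # normalized intensities (made up)

coeffs = np.polyfit(temps, intensity, deg=2)             # second-order fit, as for Nile Red
fit = np.polyval(coeffs, temps)
r_squared = 1.0 - np.sum((intensity - fit) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

def intensity_to_temperature(i_obs):
    """Invert the calibration polynomial, keeping the root inside the calibrated range."""
    roots = np.roots([coeffs[0], coeffs[1], coeffs[2] - i_obs])
    real = roots[np.isreal(roots)].real
    return real[(real >= temps.min()) & (real <= temps.max())][0]
```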
Contributors: Tomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Gene therapy is a promising technology for the treatment of various non-heritable and genetically acquired diseases. It involves the delivery of a therapeutic gene into target cells to induce cellular responses against disease. Successful gene therapy requires an efficient gene delivery vector to carry genetic material into target cells. There are two major classes of gene delivery vectors: viral and non-viral. Recently, non-viral vectors such as cationic polymers have attracted more attention than viral vectors because they are versatile and non-immunogenic. However, cationic polymers suffer from poor gene delivery efficiency due to biological barriers. The objective of this research is to develop strategies to overcome these barriers and enhance polymer-mediated transgene expression. This study aimed to (i) develop new polymer vectors for gene delivery, (ii) investigate the intracellular barriers in polymer-mediated gene delivery, and (iii) explore new approaches to overcome those barriers. A cationic polymer library was developed by employing a parallel synthesis and high-throughput screening method. Lead polymers were identified from the library based on relative levels of transgene expression and toxicity in PC3-PSMA prostate cancer cells. However, transgene expression levels were found to depend on the intracellular localization of polymer-gene complexes (polyplexes): transgene expression was higher when polyplexes were dispersed rather than localized in the cytoplasm. Combination treatments using small-molecule chemotherapeutic drugs, such as histone deacetylase inhibitors (HDACi) or an Aurora kinase inhibitor (AKI), increased dispersion of polyplexes in the cytoplasm and significantly enhanced transgene expression. The combination treatment using polymer-mediated delivery of the p53 tumor-suppressor gene and AKI increased p53 expression in PC3-PSMA cells, inhibited cell proliferation by ~80%, and induced apoptosis. Polymer-mediated p53 gene delivery in combination with AKI offers a promising treatment strategy for in vivo and clinical studies of cancer gene therapy.
Contributors: Barua, Sutapa (Author) / Rege, Kaushal (Thesis advisor) / Dai, Lenore (Committee member) / Meldrum, Deirdre R. (Committee member) / Sierks, Michael (Committee member) / Voelkel-Johnson, Christina (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since the fatigue loading of a structural component can change while it is in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion, and observations are obtained using the predicted optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, so an approximation method is proposed. With this approach, the real-time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
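As a highly simplified illustration of the sequential Bayesian idea, the sketch below implements a bootstrap particle filter that tracks crack length under an assumed Paris-law-type growth model with direct, noisy crack-length observations. The interacting-multiple-model loading estimation, time-frequency feature observation models, and observation-mode selection described above are not reproduced, and all constants are assumed example values.

```python
# Bare-bones bootstrap particle filter for crack-length tracking; constants and
# measurement model are illustrative assumptions, not values from the dissertation.
import numpy as np

rng = np.random.default_rng(1)
C, m, dsigma, cycles = 1e-11, 3.0, 100.0, 1000          # assumed Paris-law parameters

def propagate(a):
    """Grow each particle's crack length over one block of load cycles, plus process noise."""
    dK = dsigma * np.sqrt(np.pi * a)                     # stress-intensity factor range
    return a + cycles * C * dK ** m + rng.normal(0.0, 1e-6, a.shape)

def update(particles, weights, z, sigma_z=5e-5):
    """One predict/update step given a noisy crack-length observation z (meters)."""
    particles = propagate(particles)
    weights = weights * np.exp(-0.5 * ((z - particles) / sigma_z) ** 2)
    weights /= weights.sum()
    idx = rng.choice(particles.size, size=particles.size, p=weights)  # resample
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Usage sketch (measured_crack_lengths is hypothetical data):
#   particles = rng.normal(1e-3, 1e-4, 2000); weights = np.full(2000, 1 / 2000)
#   for z in measured_crack_lengths:
#       particles, weights = update(particles, weights, z)
```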
Contributors: Huff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian based methods are used to appropriately model and adaptively select the corresponding model parameters of ECG signals. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated by using real ECG signals. A supervised Bayesian maximum-likelihood (ML) based classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the unavailability of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases in order to cluster the ECG data, and this number often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. This algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
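A minimal sketch of the Dirichlet-process mixture clustering step is given below, using scikit-learn's truncated variational approximation (BayesianGaussianMixture with a Dirichlet-process prior) on placeholder per-beat features; the sequential Markov chain Monte Carlo formulation and the Bayesian ML labeling step described above are not reproduced.

```python
# Sketch of Dirichlet-process Gaussian mixture clustering of per-beat ECG features.
# The feature matrix below is random placeholder data, standing in for estimated
# per-beat model parameters.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

features = np.random.default_rng(0).normal(size=(500, 6))   # placeholder per-beat features

dpgmm = BayesianGaussianMixture(
    n_components=20,                                  # upper bound; surplus components vanish
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(features)

labels = dpgmm.predict(features)                      # cluster assignment per beat
n_classes = np.unique(labels).size                    # inferred number of beat classes
```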
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The use of electromyography (EMG) signals to characterize muscle fatigue has been widely accepted. Initial work on characterizing muscle fatigue during isometric contractions demonstrated that the EMG signal's frequency decreases while its amplitude increases with the onset of fatigue. More recent work has concentrated on developing techniques to characterize dynamic contractions for use in clinical and training applications. Studies demonstrated that as fatigue progresses the EMG signal undergoes a shift in frequency, and different physiological mechanisms were considered as possible causes of the shift. Time-frequency processing, using the Wigner distribution or the spectrogram, is one of the techniques used to estimate the instantaneous mean frequency and instantaneous median frequency of the EMG signal. However, these time-frequency methods suffer either from cross-term interference when processing signals with multiple components or from loss of time-frequency resolution due to the use of windowing. This study proposes the use of the matching pursuit decomposition (MPD) with a Gaussian dictionary to process EMG signals produced during both isometric and dynamic contractions. In particular, the MPD obtains unique time-frequency features that represent the EMG signal's time-frequency dependence without suffering from cross-terms or loss of time-frequency resolution. Because the MPD does not depend on an analysis window like the spectrogram, it is more robust in applying the time-frequency features to identify the spectral time-variation of the EMG signal.
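A compact sketch of matching pursuit with a Gaussian (Gabor-like) dictionary is shown below; the dictionary grid of time centers, widths, and normalized frequencies is an illustrative assumption, not the dictionary used in the study.

```python
# Matching pursuit sketch: greedily select the Gaussian atom most correlated with
# the residual, record its parameters as a time-frequency feature, and subtract it.
import numpy as np

def gaussian_atom(n, center, width, freq):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / (np.linalg.norm(g) + 1e-12)

def mpd(signal, n_atoms=10):
    n = len(signal)
    params = [(c, w, f)
              for c in range(0, n, max(1, n // 16))         # time centers
              for w in (n / 64.0, n / 16.0)                  # Gaussian widths
              for f in np.linspace(0.01, 0.45, 24)]          # normalized frequencies
    atoms = np.array([gaussian_atom(n, *p) for p in params])
    residual, features = np.asarray(signal, dtype=float).copy(), []
    for _ in range(n_atoms):
        corr = atoms @ residual
        k = int(np.argmax(np.abs(corr)))
        features.append((*params[k], corr[k]))    # (center, width, frequency, amplitude)
        residual = residual - corr[k] * atoms[k]  # remove the selected component
    return features, residual
```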
Contributors: Austin, Hiroko (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Ultrasound imaging is one of the major medical imaging modalities. It is inexpensive, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires division and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation and has lower computational complexity; thus, bilinear interpolation is chosen for our system.
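For reference, a minimal sketch of the lag-one autocorrelation (Kasai-type) estimator commonly used for conventional velocity estimation is given below; the carrier frequency, PRF, and sound speed are assumed example values, and neither the directional estimator nor the proposed division and square-root approximations are shown.

```python
# Conventional (autocorrelation) axial velocity estimate: the phase of the lag-one
# slow-time autocorrelation maps to blood velocity. Parameter values are examples.
import numpy as np

def cve_axial_velocity(iq, f0=5e6, prf=5e3, c=1540.0):
    """iq: complex baseband ensemble for one range gate, shape (n_pulses,)."""
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))        # lag-one autocorrelation estimate
    return c * prf * np.angle(r1) / (4.0 * np.pi * f0)
```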
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013