Matching Items (222)

Description
In recent years, the widespread use of deep neural networks (DNNs) has facilitated great improvements in performance for computer vision tasks like image classification and object recognition. In most realistic computer vision applications, an input image undergoes some form of image distortion such as blur and additive noise during image acquisition or transmission. Deep networks trained on pristine images perform poorly when tested on such distortions. DNN predictions have also been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, so-called universal adversarial perturbations are image-agnostic perturbations that can be added to any image and can fool a target network into making erroneous predictions. This work proposes selective DNN feature regeneration to improve the robustness of existing DNNs to image distortions and universal adversarial perturbations.

In the context of common, naturally occurring image distortions, a metric is proposed to identify the most susceptible DNN convolutional filters and rank them by the gain in classification accuracy obtained upon correcting them. The proposed approach, called DeepCorrect, applies small stacks of convolutional layers with residual connections at the outputs of these ranked filters and trains them to correct the most distortion-affected filter activations, while leaving the rest of the pre-trained filter outputs in the network unchanged. Performance results show that applying DeepCorrect models to common vision tasks significantly improves the robustness of DNNs against distorted images and outperforms alternative approaches.
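As a rough illustration of the DeepCorrect idea, the sketch below (PyTorch; the layer widths, depth, and the correct_ranked helper are illustrative assumptions, not the thesis's exact configuration) attaches a small residual convolutional stack to a ranked subset of filter outputs while passing all other activations through unchanged:

```python
import torch
import torch.nn as nn

class CorrectionUnit(nn.Module):
    # A small residual stack of convolutions appended at the outputs of the
    # most distortion-susceptible filters. Width and depth here are
    # illustrative assumptions, not the thesis's exact configuration.
    def __init__(self, channels: int, hidden: int = 64, depth: int = 2):
        super().__init__()
        layers, in_ch = [], channels
        for _ in range(depth):
            layers += [nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = hidden
        layers.append(nn.Conv2d(in_ch, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # residual: learn a correction, not a replacement

def correct_ranked(activations: torch.Tensor, ranked: torch.Tensor,
                   unit: CorrectionUnit) -> torch.Tensor:
    # Correct only the ranked channels (unit must be built with
    # channels=len(ranked)); all other pre-trained activations pass through.
    out = activations.clone()
    out[:, ranked] = unit(activations[:, ranked])
    return out
```

The residual form means the unit only has to learn a correction term on top of the pre-trained activations rather than replacing them.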

In the context of universal adversarial perturbations, departing from existing defense strategies that work mostly in the image domain, a novel and effective defense that operates solely in the DNN feature domain is presented. This approach identifies the pre-trained convolutional features that are most vulnerable to adversarial perturbations and deploys trainable feature regeneration units that transform these DNN filter activations into resilient features robust to universal perturbations. Regenerating only the top 50% of adversarially susceptible activations in at most six DNN layers, while leaving all remaining DNN activations unchanged, outperforms existing defense strategies across different network architectures and various universal attacks.
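A minimal sketch of how one might rank filters by susceptibility, assuming access to clean and perturbed activations; the mean-absolute-change score below is an illustrative stand-in, not the thesis's exact metric:

```python
import torch

def rank_susceptible(act_clean: torch.Tensor, act_adv: torch.Tensor, k: int) -> torch.Tensor:
    # One plausible susceptibility score: the mean absolute change of each
    # filter's activation map when the universal perturbation is added to the
    # input. The top-k movers become candidates for feature regeneration.
    delta = (act_adv - act_clean).abs()        # (N, C, H, W)
    per_filter = delta.mean(dim=(0, 2, 3))     # (C,): one score per filter
    return torch.topk(per_filter, k).indices
```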
Contributors: Borkar, Tejas Shyam (Author) / Karam, Lina J. (Thesis advisor) / Turaga, Pavan (Committee member) / Jayasuriya, Suren (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The Human Gut Microbiome (GM) modulates a variety of structural, metabolic, and protective functions to benefit the host. A few recent studies also support a role for the gut microbiome in the regulation of bone health. The relationship between the GM and bone health was analyzed based on data collected from a group of twenty-three adolescent boys and girls who participated in a controlled feeding study, during which two different doses (0 g/d and 12 g/d) of Soluble Corn Fiber (SCF) were added to their diet. The analysis was performed by building a machine learning regression model that predicts measures of Bone Mineral Density (BMD) and Bone Mineral Content (BMC), which are indicators of bone strength, from the sequenced proportions of 178 microbes collected from the 23 subjects. The model was evaluated by calculating performance metrics such as Root Mean Squared Error, Pearson's correlation coefficient, and Spearman's rank correlation coefficient under cross-validation. A noticeable correlation was observed between the GM and bone health, and the overall prediction correlation was higher with the SCF intervention (r ~ 0.51). The genera of microbes that played an important role in this relationship were identified: Eubacterium (g), Bacteroides (g), Megamonas (g), Acetivibrio (g), Faecalibacterium (g), and Paraprevotella (g) were among the microbes that showed an increase in proportion with the SCF intervention.
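The evaluation protocol described above can be sketched as follows; the metrics and cross-validation mirror the abstract, while the RandomForestRegressor choice and the synthetic data are assumptions:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder data standing in for the study: 23 subjects x 178 microbe
# proportions as predictors, with hypothetical BMD values as the response.
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(178), size=23)   # each row of proportions sums to 1
y = rng.normal(1.0, 0.1, size=23)          # hypothetical BMD values (g/cm^2)

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

rmse = np.sqrt(mean_squared_error(y, pred))
r, _ = pearsonr(y, pred)
rho, _ = spearmanr(y, pred)
print(f"RMSE={rmse:.3f}, Pearson r={r:.3f}, Spearman rho={rho:.3f}")
```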
Contributors: Ketha Hazarath, Pravallika Reddy (Author) / Bliss, Daniel (Thesis advisor) / Whisner, Corrie (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Autonomic closure is a recently proposed subgrid closure methodology for large eddy simulation (LES) that replaces the prescribed subgrid models used in traditional LES closure with highly generalized representations of subgrid terms and the solution of a local system identification problem, allowing the simulation itself to determine the local relation between each subgrid term and the resolved variables at every point and time. The present study demonstrates, for the first time, practical LES based on a fully dynamic implementation of autonomic closure for the subgrid stress and the subgrid scalar flux. It leverages the inherent computational efficiency of tensorally-correct generalized representations in terms of parametric quantities, and uses the fundamental representation theory of Smith (1971) to develop complete and minimal tensorally-correct representations for the subgrid stress and scalar flux. It then assesses the accuracy of these representations via a priori tests, and compares it with the corresponding accuracy from nonparametric representations and from traditional prescribed subgrid models. It then assesses the computational stability of autonomic closure with these tensorally-correct parametric representations via forward simulations with a high-order pseudo-spectral code, including the extent to which any added stabilization is needed to ensure computational stability, and compares this with the added stabilization needed in traditional closure with prescribed subgrid models. Further, it conducts a posteriori tests based on forward simulations of turbulent conserved scalar mixing with the same pseudo-spectral code, in which velocity and scalar statistics from autonomic closure with these representations are compared with corresponding statistics from traditional closure using prescribed models, and with corresponding statistics of filtered fields from direct numerical simulation (DNS). These comparisons show substantially greater accuracy from autonomic closure than from traditional closure. This study demonstrates that fully dynamic autonomic closure is a practical approach for LES applications that require accuracy even at the smallest resolved scales.
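A minimal sketch of the local system-identification step at the heart of autonomic closure, with random placeholders standing in for the locally sampled basis terms and subgrid values (the sizes are assumptions):

```python
import numpy as np

# At each point and time, autonomic closure solves a local system
# identification problem: find coefficients c relating the subgrid term tau
# to a set of tensorally-correct basis terms built from resolved variables.
rng = np.random.default_rng(1)
V = rng.normal(size=(256, 12))   # 256 local samples x 12 basis terms (assumed sizes)
tau = V @ rng.normal(size=12) + 0.01 * rng.normal(size=256)

c, *_ = np.linalg.lstsq(V, tau, rcond=None)   # local coefficients
tau_model = V @ c                             # closure evaluated at this point/time
```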
Contributors: Stallcup, Eric Warren (Author) / Dahm, Werner J.A. (Thesis advisor) / Herrmann, Marcus (Committee member) / Calhoun, Ronald (Committee member) / Kim, Jeonglae (Committee member) / Kostelich, Eric J. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Ultra High Performance (UHP) cementitious binders are a class of cement-based materials with high strength and ductility, designed for use in precast bridge connections, bridge superstructures, high load-bearing structural members such as columns, and in structural repair and strengthening. This dissertation aims to elucidate the chemo-mechanical relationships in complex UHP binders to facilitate better microstructure-based design of these materials and to develop machine learning (ML) models that predict their scale-relevant properties from microstructural information.

To establish the connection between micromechanical properties and constituent materials, nanoindentation and scanning electron microscopy experiments are performed on several cementitious pastes. Following Bayesian statistical clustering, mixed reaction products with scattered nanomechanical properties are observed, attributable to the low degree of reaction of the constituent particles, enhanced particle packing, and the very low water-to-binder ratio of UHP binders. Relating the phase chemistry to the micromechanical properties, the chemical intensity ratios of Ca/Si and Al/Si are found to be important parameters influencing the incorporation of Al into the C-S-H gel.
ML algorithms for the classification of cementitious phases are found to require only the intensities of Ca, Si, and Al as inputs to generate accurate predictions for the more homogeneous cement pastes. When applied to more complex UHP systems, the overlapping chemical intensities of the three dominant phases (Ultra High Stiffness (UHS) products, unreacted cementitious replacements, and clinker) lead the ML models to misidentify these three phases. Similarly, the reduced amount of data available on the hard and stiff UHS phases prevents accurate ML regression predictions of microstructural phase stiffness from chemical information alone. The use of generic virtual two-phase microstructures coupled with finite element analysis is also adopted to train ML models to predict composite mechanical properties. This approach, applied to three different representations of composite materials, produces accurate predictions, thus providing an avenue for image-based microstructural characterization of multi-phase composites such as UHP binders. This thesis provides insights into the microstructure of complex, heterogeneous UHP binders and the utilization of big-data methods such as ML to predict their properties. These results are expected to provide a means for the rational, first-principles design of UHP mixtures.
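A minimal sketch of the phase-classification setup described above, with placeholder data; the classifier choice is an assumption, since the abstract states only that the Ca, Si, and Al intensities suffice as inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training set: one row of (Ca, Si, Al) EDS intensities per
# indentation point, with a phase label per point.
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(300, 3))   # columns: Ca, Si, Al intensity
y = rng.integers(0, 3, size=300)           # e.g., C-S-H / UHS / clinker labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.6, 0.3, 0.1]]))      # phase label for a new measurement
```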
Contributors: Ford, Emily Lucile (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Chawla, Nikhilesh (Committee member) / Hoover, Christian G. (Committee member) / Maneparambil, Kailas (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Over the past decade, machine learning research has made great strides and had significant impact in several fields. Its success is greatly attributed to the development of effective machine learning algorithms like deep neural networks (a.k.a. deep learning), the availability of large-scale databases, and access to specialized hardware like Graphics Processing Units. When designing and training machine learning systems, researchers often assume access to large quantities of data that capture different possible variations. Variation in the data is needed to incorporate desired invariance and robustness properties in the machine learning system, especially in the case of deep learning algorithms. However, it is very difficult to gather such data in a real-world setting. For example, in certain medical/healthcare applications, it is very challenging to obtain data from all possible scenarios or with the amount of variation required to train the system. Additionally, the over-parameterized and unconstrained nature of deep neural networks can cause them to be poorly trained and, in many cases, over-confident, which in turn can hamper their reliability and generalizability. This dissertation is a compendium of my research efforts to address the above challenges. I propose building invariant feature representations by wedding concepts from topological data analysis and Riemannian geometry that automatically incorporate the desired invariance properties for different computer vision applications. I discuss how deep learning can be used to address some of the common challenges faced when working with topological data analysis methods. I describe alternative learning strategies based on unsupervised learning and transfer learning to address issues like dataset shifts and limited training data. Finally, I discuss my preliminary work on applying simple orthogonal constraints to deep learning feature representations to help develop more reliable and better calibrated models.
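A minimal sketch of the orthogonal constraint mentioned in the last point, expressed as a soft penalty on a feature or weight matrix W; the penalty form and its weighting are assumptions:

```python
import torch

def orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    # Soft orthogonality constraint on a (d x k) matrix W: penalize
    # || W^T W - I ||_F^2, to be added to the task loss with some weight.
    gram = W.t() @ W
    eye = torch.eye(gram.shape[0], device=W.device, dtype=W.dtype)
    return ((gram - eye) ** 2).sum()

# Usage sketch: loss = task_loss + lambda_orth * orthogonality_penalty(W)
```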
Contributors: Som, Anirudh (Author) / Turaga, Pavan (Thesis advisor) / Krishnamurthi, Narayanan (Committee member) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Students seldom spontaneously collaborate with each other. A system that can measure collaboration in real time could be useful, for example, by helping the teacher locate a group requiring guidance. To address this challenge, the research presented here focuses on building and comparing collaboration detectors for different types of classroom problem-solving activities, such as card sorting and handwriting.

Transfer learning using different representations was also studied, with the goal that collaboration detectors built for one task can be reused on a new task. Data for building such detectors were collected in the form of verbal interaction and user action logs from students' tablets. Three qualitative levels of interactivity were distinguished: Collaboration, Cooperation, and Asymmetric Contribution. Machine learning was used to induce a classifier that assigns one of these codes to each episode based on a set of features. The results indicate that the machine-learned classifiers were reliable and could transfer.
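A minimal sketch of the detector-plus-transfer setup, with placeholder features and labels; the classifier choice, feature dimensionality, and data are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder episode features (speech + tablet-log descriptors) with the
# three interactivity codes as labels: 0 = Collaboration, 1 = Cooperation,
# 2 = Asymmetric Contribution.
rng = np.random.default_rng(3)
X_cardsort, y_cardsort = rng.normal(size=(200, 12)), rng.integers(0, 3, 200)
X_handwriting, y_handwriting = rng.normal(size=(80, 12)), rng.integers(0, 3, 80)

# Train on one task's episodes, then test transfer on the other task.
clf = LogisticRegression(max_iter=1000).fit(X_cardsort, y_cardsort)
print("transfer accuracy:", clf.score(X_handwriting, y_handwriting))
```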
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Hsiao, Ihan (Committee member) / Walker, Erin (Committee member) / D'Angelo, Cynthia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Infants born before 37 weeks of pregnancy are considered preterm. Typically, preterm infants have to be strictly monitored since they are highly susceptible to health problems like hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower-than-expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant.

The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data are preprocessed and then analyzed to detect the peaks in the ECG signals, following which different kernels are implemented to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed.

It is observed that the performance of the algorithm with respect to the kernels used is consistent with the theoretical performance of the kernels presented in previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
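A minimal sketch of the kernel-density idea applied to onset prediction, using hypothetical event times; the Gaussian kernel, the bandwidth, and the use of inter-event gaps are assumptions for illustration:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical onset times (seconds) of past bradycardia events for one infant.
onsets = np.array([120.0, 410.0, 695.0, 1010.0, 1290.0])
gaps = np.diff(onsets).reshape(-1, 1)          # intervals between events

kde = KernelDensity(kernel="gaussian", bandwidth=30.0).fit(gaps)

# Score a grid of candidate gaps; the most likely gap predicts the next onset.
grid = np.linspace(gaps.min() - 60, gaps.max() + 60, 500).reshape(-1, 1)
best_gap = grid[np.argmax(kde.score_samples(grid)), 0]
print(f"predicted next onset near t = {onsets[-1] + best_gap:.0f} s")
```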
Contributors: Mitra, Sinjini (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Cameras have become commonplace, with wide-ranging applications in phone photography, computer vision, and medical imaging. With a growing need to reduce size and cost while maintaining image quality, the need to look beyond the traditional style of camera is becoming more apparent. Several non-traditional cameras have been shown to be promising options for size-constrained applications, and while they may offer several advantages, they are also usually limited by image-quality degradation due to optical limitations or the need to reconstruct a captured image. In this thesis, we take a look at three of these non-traditional cameras: a pinhole camera, a diffusion-mask lensless camera, and an under-display camera (UDC).

For each of these cases, I present a feasible image restoration pipeline to correct for the camera's particular limitations. For the pinhole camera, I present an early pipeline to allow for practical pinhole photography by reducing the noise caused by low-light imaging, enhancing exposure levels, and sharpening the blur caused by the pinhole. For lensless cameras, we explore a neural network architecture that performs joint image reconstruction and point spread function (PSF) estimation to robustly recover images captured with multiple PSFs from different cameras. Using adversarial learning, this approach achieves improved reconstruction results that do not require explicit knowledge of the PSF at test time, and it shows an added improvement in the reconstruction model's ability to generalize to variations in the camera's PSF. This allows lensless cameras to be utilized in a wider range of applications that require multiple cameras, without the need to explicitly train a separate model for each new camera. For UDCs, we utilize a multi-stage approach to correct for low light transmission, blur, and haze. This pipeline uses a PyNET deep neural network architecture to perform the majority of the restoration, while additionally using a traditional optimization approach whose output is fused in a learned manner in the second stage to improve high-frequency features. I show results from this novel fusion approach that are on par with the state of the art.
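A minimal sketch of the pinhole restoration steps described above, using OpenCV; all parameter values (denoising strengths, gamma, sharpening weights) are illustrative assumptions:

```python
import cv2
import numpy as np

def restore_pinhole(img_bgr: np.ndarray) -> np.ndarray:
    # Denoise the low-light noise, lift exposure with a gamma curve, then
    # unsharp-mask the soft pinhole blur.
    denoised = cv2.fastNlMeansDenoisingColored(img_bgr, None, 10, 10, 7, 21)
    gamma = 0.6                                  # < 1 brightens shadows
    lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], np.uint8)
    bright = cv2.LUT(denoised, lut)
    blur = cv2.GaussianBlur(bright, (0, 0), sigmaX=3)
    return cv2.addWeighted(bright, 1.5, blur, -0.5, 0)   # unsharp masking
```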
Contributors: Rego, Joshua D. (Author) / Jayasuriya, Suren (Thesis advisor) / Blain Christen, Jennifer (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Elucidation of antigen-antibody (Ag-Ab) interactions is critical to the understanding of humoral immune responses to pathogenic infection. B cells are crucial components of the immune system that generate highly specific antibodies, such as IgG, towards epitopes on antigens. Serum IgG molecules carry specific molecular recognition information concerning the antigens that initiated their production. If one could read it, this information could be used to predict B cell epitopes on target antigens in order to design effective epitope-driven vaccines, therapies, and serological assays. Immunosignature technology captures the specific information content of serum IgG from infected and uninfected individuals on high-density microarrays containing ~10^5 nearly random peptide sequences. Although the sequences of the peptides are chosen to evenly cover amino acid sequence space, the pattern of serum IgG binding to the array contains a consistent signature associated with each specific disease (e.g., Valley fever, influenza) among many individuals. Here, the disease-specific but agnostic behavior of the technology has been explored by profiling molecular recognition information for five pathogens causing life-threatening infectious diseases (DENV, WNV, HCV, HBV, and T. cruzi). This was done with models developed using a machine learning algorithm to capture the sequence dependence of the humoral immune responses as measured by the peptide arrays. It was shown that the disease-specific binding information could be accurately related to the peptide sequences used on the array by the machine learning (ML) models. Importantly, it was demonstrated that the ML models could identify or predict known linear epitopes on antigens of the four viruses. Moreover, the models identified potential novel linear epitopes on antigens of the four viruses (each has 4-10 proteins in its proteome) and of T. cruzi (a eukaryotic parasite with over 12,000 proteins in its proteome). Finally, the predicted epitopes were tested in serum IgG binding assays such as ELISAs. Unfortunately, the assay results were inconsistent due to problems with peptide/surface interactions.

In a separate study on the development of antibody-recruiting molecules (ARMs) to combat microbial infections, 10 peptides from the high-density peptide arrays were tested in IgG binding assays using sera from healthy individuals to find a set of antibody binding termini (ABT, a ligand that binds to a variable region of the IgG). It was concluded that one peptide (peptide 7) may be used as a potential ABT. Overall, these findings demonstrate applications of immunosignature technology ranging from developing tools to predict linear epitopes on pathogens with small to large proteomes to the identification of an ABT for ARMs.
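A minimal sketch of the sequence-to-binding modeling idea, with hypothetical peptides and binding values; the Ridge regressor, the amino-acid-composition featurization, and the 9-mer tiling of the antigen are all assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

AAS = "ACDEFGHIKLMNPQRSTVWY"

def composition(pep: str) -> np.ndarray:
    # Amino-acid-composition features: an assumed stand-in for the sequence
    # featurization actually used with the array data.
    return np.array([pep.count(a) / len(pep) for a in AAS])

# Hypothetical array peptides with measured serum-IgG binding values.
peptides = ["GSHKLVAAR", "WWNPDEKQS", "KKLRSTGVA"]
binding = np.array([1.2, 0.3, 0.9])
model = Ridge(alpha=1.0).fit(np.stack([composition(p) for p in peptides]), binding)

# Tile a placeholder antigen into 9-mers; windows with high predicted
# binding flag candidate linear epitopes.
antigen = "MKTLLVLAVVAAAWPTHAQDNSRGLKV"
windows = [antigen[i:i + 9] for i in range(len(antigen) - 8)]
scores = model.predict(np.stack([composition(w) for w in windows]))
print("top candidate epitope:", windows[int(np.argmax(scores))])
```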
Contributors: Chowdhury, Robayet (Author) / Woodbury, Neal (Thesis advisor) / LaBaer, Joshua (Committee member) / Sulc, Petr (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how well the training data set represents the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier depends mainly on the size and quality of the training data set. With small sample sizes and unbalanced data, classifiers developed in this context may end up keying on incidental differences in the training data set rather than on emotion (e.g., focusing on gender, age, and dialect).

This thesis evaluates several sampling methods and a non-parametric approach for determining the sample sizes required to minimize the effect of these nuisance variables on classification performance. The work focuses specifically on speech analysis applications and was therefore carried out with speech features such as Mel-Frequency Cepstral Coefficients (MFCC) and Filter Bank Cepstral Coefficients (FBCC). The non-parametric divergence measure (D_p divergence) was used to study the difference between sampling schemes (stratified and multistage sampling) and the changes due to sentence type in the sampled set.
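A minimal sketch of an MST-based (Friedman-Rafsky) estimate of the D_p divergence between two samples of feature vectors, following the commonly used estimator; treat the exact normalization as an assumption:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def dp_divergence(X: np.ndarray, Y: np.ndarray) -> float:
    # Build a minimum spanning tree over the pooled points (e.g., MFCC
    # vectors from two sampling schemes) and count edges that join points
    # from different samples; few cross edges means the samples differ.
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
    cross = np.sum(labels[mst.row] != labels[mst.col])
    m, n = len(X), len(Y)
    return max(0.0, 1.0 - cross * (m + n) / (2.0 * m * n))
```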
Contributors: Mariajohn, Aaquila (Author) / Berisha, Visar (Thesis advisor) / Spanias, Andreas (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2020