Matching Items (1,425)

Description
While understanding of failure mechanisms for polymeric composites has improved vastly over recent decades, the ability to monitor early failure and subsequently prevent it has attracted much interest in recent years. One such method to detect these failures involves the use of mechanochemistry, a field of chemistry in which chemical reactions are initiated by deforming highly strained bonds present in certain moieties. Mechanochemistry is applied in polymeric composites as a means of stress sensing, using weak, force-responsive chemical bonds to activate signals when embedded in a composite material. These signals can then be detected to determine the amount of stress applied to a composite and the potential damage that has occurred as a result. Among mechanophores, the cinnamoyl moiety is capable of stress response through a fluorescent signal under mechanical load. The cinnamoyl group is fluorescent in its initial state and capable of undergoing photocycloaddition in the presence of ultraviolet (UV) light, followed by reversion under mechanical load. Signal generation before the yield point of the material provides a form of damage precursor detection. This dissertation explores the implementation of mechanophores in novel approaches to overcome some of the many challenges within the mechanochemistry field. First, new methods of mechanophore detection were developed by utilizing Fourier transform infrared (FTIR) spectroscopy signals and in-situ stress sensing. Developing an in-situ testing method provided the two-fold advantage of higher resolution and greater time efficiency over current methods involving image analysis with a fluorescence microscope. Second, bonding mechanophores covalently into the backbone of an epoxy matrix mitigated the property loss caused by mechanophore incorporation. This approach was accomplished by functionalizing either the resin or the hardener component of the matrix. Finally, surface functionalization of fibers allowed composite layup fabrication procedures to remain unaltered while providing increased adhesion at the fiber-matrix interphase. The developed materials could enable a simple, non-invasive, and non-detrimental structural health monitoring approach.
ContributorsGunckel, Ryan Patrick (Author) / Dai, Lenore (Thesis advisor) / Chattopadhyay, Aditi (Thesis advisor) / Lind Thomas, Mary Laura (Committee member) / Liu, Yongming (Committee member) / Forzani, Erica (Committee member) / Arizona State University (Publisher)
Created2021
Description
The use of spatial data has become fundamental in today's world. From fitness trackers to food delivery services, almost all applications record users' location information and require clean geospatial data to enhance various application features. As spatial data flows in from heterogeneous sources, various problems arise. The study of entity matching has been a central step in the process of producing clean, usable data. Entity matching is an amalgamation of various sub-processes, including blocking and matching. At the end of an entity matching pipeline, we get deduplicated records of the same real-world entity. Identifying various mentions of the same real-world location is known as spatial entity matching. While entity matching has received significant interest for relational data, the same cannot be said about spatial entity matching. In this dissertation, I build an end-to-end Geospatial Entity Matching framework, GEM, exploring spatial entity matching from a novel perspective. In current state-of-the-art systems, spatial entity matching is only performed on one type of geometry. Instead of confining matching to spatial entities of only the point geometry type, I extend the boundaries of spatial entity matching to the more generic polygon geometry entities as well. I propose a methodology that supports three entity matching scenarios across different geometry types: point X point, point X polygon, and polygon X polygon. As mentioned above, entity matching consists of various steps, but blocking, feature vector creation, and classification are the core steps of the system. GEM comprises an efficient and lightweight blocking technique, GeoPrune, that uses the geohash encoding mechanism to prune away obvious non-matching spatial entities. Geohashing is a technique that converts point location coordinates into an alphanumeric code string, and it proves to be very effective and swift for the blocking mechanism. I leverage the Apache Sedona engine to create the feature vectors. Apache Sedona is a spatial database management system capable of processing spatial SQL queries with multiple geometry types without compromising their original coordinate vector representation. In this step, I re-purpose the spatial proximity operators (SQL queries) in Apache Sedona to create spatial feature dimensions that capture the proximity between a geospatial entity pair. The last step of an entity matching process is matching, or classification. The classification step in GEM is a pluggable component, which consumes the feature vector for a spatial entity pair and determines whether the geolocations match or not. The component provides three machine learning models that consume the same feature vector and provide a label for the test data based on the training. I conduct experiments with the three classifiers on multiple large-scale geospatial datasets consisting of both spatial and relational attributes. The data considered for the experiments arrive from heterogeneous sources, and I pre-align their schemas manually. GEM achieves an F-measure of 1.0 for a point X point dataset with 176k total pairs, which is 42% higher than a state-of-the-art spatial EM baseline. It achieves F-measures of 0.966 and 0.993 for the point X polygon dataset with 302M total pairs and the polygon X polygon dataset with 16M total pairs, respectively.
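The geohash-based blocking idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration of GeoPrune-style blocking (it assumes the pygeohash package; the function names and precision value are assumptions, not GEM's actual implementation): points sharing a geohash prefix fall in the same block, and candidate pairs are generated only within a block.

```python
# Minimal sketch of geohash-based blocking (GeoPrune-style), assuming the
# pygeohash package; names and the precision value are illustrative only.
from collections import defaultdict
import pygeohash as pgh

def block_by_geohash(points, precision=6):
    """Group (id, lat, lon) records by geohash cell so that only entities
    sharing a cell are considered as candidate matching pairs."""
    blocks = defaultdict(list)
    for pid, lat, lon in points:
        blocks[pgh.encode(lat, lon, precision=precision)].append(pid)
    return blocks

def candidate_pairs(blocks):
    """Emit candidate pairs only within each block, pruning obvious non-matches."""
    for ids in blocks.values():
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                yield ids[i], ids[j]

# Example: two nearby cafes share a block; the distant point is pruned away.
pts = [("a", 33.4255, -111.9400), ("b", 33.4256, -111.9401), ("c", 40.7128, -74.0060)]
print(list(candidate_pairs(block_by_geohash(pts))))  # [('a', 'b')]
```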
ContributorsShah, Setu Nilesh (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Boscovic, Dragan (Committee member) / Arizona State University (Publisher)
Created2021
Description
The meteoric rise of Deep Neural Networks (DNN) has led to the development of various Machine Learning (ML) frameworks (e.g., TensorFlow, PyTorch). Every ML framework has a different way of handling DNN models, data types, the operations involved, and the internal representations stored on disk or in memory. There have been initiatives such as the Open Neural Network Exchange (ONNX) for a more standardized approach to machine learning, enabling better interoperability between the various popular ML frameworks. Model Serving Platforms (MSP) (e.g., TensorFlow Serving, Clipper) are used for serving DNN models to applications and edge devices. These platforms have gained widespread use for their flexibility in serving DNN models created by various ML frameworks, and they offer additional capabilities such as caching, automatic ensembling, and scheduling. However, few of these frameworks focus on optimizing the storage of these DNN models, some of which may take up to ∼130 GB of storage space (“Turing-NLG: A 17-billion-parameter language model by Microsoft,” 2020). These MSPs leave it to the ML frameworks to optimize the DNN model with various model compression techniques, such as quantization and pruning. This thesis investigates the viability of automatic cross-model compression using traditional deduplication techniques and storage optimizations. Scenarios are identified where different DNN models have shareable model weight parameters. “Chunking” a model into smaller pieces is explored as an approach for deduplication. This thesis also proposes a design for storage in a Relational Database Management System (RDBMS) that allows for automatic cross-model deduplication.
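A minimal sketch of the chunking-and-hashing idea follows; the chunk size, hash choice, and in-memory "store" are illustrative assumptions, not the thesis's actual design or RDBMS schema. Weight tensors are flattened, split into fixed-size chunks, and stored under content hashes so that identical chunks shared across models are kept only once.

```python
# Minimal sketch of chunk-level deduplication of model weights; chunk size,
# hashing scheme, and the in-memory "store" are illustrative assumptions.
import hashlib
import numpy as np

CHUNK_SIZE = 4096          # number of float32 values per chunk (assumption)
chunk_store = {}           # content hash -> chunk bytes (stands in for an RDBMS table)

def store_model(weights):
    """Split each weight tensor into chunks and store only unseen chunks.
    Returns a manifest of (tensor name -> list of chunk hashes) for reassembly."""
    manifest = {}
    for name, tensor in weights.items():
        flat = np.asarray(tensor, dtype=np.float32).ravel()
        hashes = []
        for start in range(0, flat.size, CHUNK_SIZE):
            chunk = flat[start:start + CHUNK_SIZE].tobytes()
            h = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(h, chunk)   # duplicate chunks are stored once
            hashes.append(h)
        manifest[name] = hashes
    return manifest

# Two models sharing a layer deduplicate to a single stored copy of its chunks.
shared = np.random.rand(8192)
m1 = store_model({"encoder.w": shared, "head.w": np.random.rand(1024)})
m2 = store_model({"encoder.w": shared, "head.w": np.random.rand(1024)})
print(len(chunk_store))  # fewer unique chunks than if both models were stored in full
```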
ContributorsDas, Amitabh (Author) / Zou, Jia (Thesis advisor) / Zhao, Ming (Thesis advisor) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created2021
Description
To optimize solar cell performance, it is necessary to properly design the doping profile in the absorber layer of the solar cell. For CdTe solar cells, Cu is used to provide p-type doping. Hence, having an estimator that, given the diffusion parameter set (time and temperature) and the doping concentration at the junction, gives the junction depth of the absorber layer is essential in the design process of CdTe solar cells (and other cell technologies). In this work, this is called a forward (direct) estimation process. The backward (inverse) problem is then the one in which, given the junction depth and the desired concentration of Cu doping at the CdTe/CdS heterointerface, the estimator gives the time and/or the temperature needed to achieve the desired doping profile. This is called a backward (inverse) estimation process. Such estimators, both forward and backward, do not exist in the literature for solar cell technology. To train the Machine Learning (ML) estimator, it is necessary to first generate a large dataset using the PVRD-FASP Solver, which has been validated via comparison with experimental values. Note that this big dataset needs to be generated only once. Next, Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI) are used to extract the actual Cu doping profiles that result from the processes of diffusion, annealing, and cool-down in the fabrication sequence of CdTe solar cells. Two deep learning neural network models are used: (1) a Multilayer Perceptron Artificial Neural Network (MLPANN) model using the Keras Application Programming Interface (API) with a TensorFlow backend, and (2) a Radial Basis Function Network (RBFN) model, to predict the Cu doping profiles for different temperatures and durations of the annealing process. Excellent agreement between the simulated results obtained with the PVRD-FASP Solver and the predicted values is obtained. It is important to mention here that generating the Cu doping profiles from the initial conditions with the PVRD-FASP Solver takes a significant amount of time, because solving the drift-diffusion-reaction model is mathematically a stiff problem that leads to numerical instabilities if the time steps are not small enough, which in turn lengthens each simulation run. Generating the same profiles with Machine Learning (ML) is almost instantaneous, and the estimator can serve as an excellent simulation tool to guide the future fabrication of optimal doping profiles in CdTe solar cells.
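A minimal Keras sketch of the forward estimator is shown below, assuming a generic dataset layout: the input columns (anneal time, temperature, depth), the output scaling, and the random training arrays are placeholders, not the PVRD-FASP data or the exact MLPANN architecture used in this work.

```python
# Minimal sketch of a forward MLP estimator (inputs: anneal time, temperature,
# depth; output: Cu concentration). Data arrays below are placeholders, not
# the PVRD-FASP training set.
import numpy as np
import tensorflow as tf

# X: [time, temperature, depth] (normalized), y: scaled Cu concentration (assumed)
X = np.random.rand(1000, 3).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),          # predicted (scaled) doping concentration
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# Forward query: doping at a given depth for one (time, temperature) setting
print(model.predict(np.array([[0.5, 0.7, 0.1]], dtype="float32")))
```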
ContributorsSalman, Ghaith (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M. (Thesis advisor) / Ringhofer, Christian (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created2021
Description
Visual question answering (VQA) is the task of answering questions about a given image, and thus requires both language and vision methods to solve, making VQA a frontier interdisciplinary field. In recent years, with the great progress made on simple question tasks (e.g., object recognition), researchers have started to shift their interest to questions that require knowledge and reasoning. Knowledge-based VQA requires answering questions with external knowledge in addition to the content of images. The dataset most commonly used for evaluating knowledge-based VQA is OK-VQA, but it lacks a gold-standard knowledge corpus for retrieval. Existing work leverages different knowledge bases (e.g., ConceptNet and Wikipedia) to obtain external knowledge. Because of the varying knowledge bases, it is hard to fairly compare models' performance. To address this issue, this thesis collects a natural language knowledge base that can be used for any question answering (QA) system. Moreover, a Visual Retriever-Reader pipeline is proposed to approach knowledge-based VQA, where the visual retriever aims to retrieve relevant knowledge, and the visual reader seeks to predict answers based on the given knowledge. The retriever is constructed in two versions: a term-based retriever that uses Best Matching 25 (BM25), and a neural retriever based on the latest Dense Passage Retriever (DPR). To encode the visual information, the image and the caption are encoded separately in two kinds of neural retriever: Image-DPR and Caption-DPR. There are also two styles of reader: a classification reader and an extraction reader. Both the retriever and the reader are trained with weak supervision. The experimental results show that a good retriever can significantly improve the reader's performance on the OK-VQA challenge.
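A minimal sketch of the term-based retrieval step is given below, assuming the rank_bm25 package and toy passages (not the thesis's collected knowledge corpus); the idea of concatenating the question with the image caption to form the query is an illustrative assumption about how visual context enters the term-based retriever.

```python
# Minimal sketch of a BM25 term-based retriever for knowledge-based VQA,
# assuming the rank_bm25 package; the corpus and query below are toy examples.
from rank_bm25 import BM25Okapi

corpus = [
    "The Statue of Liberty was a gift from France to the United States.",
    "Golden retrievers are a breed of dog originally bred in Scotland.",
    "The Eiffel Tower is located in Paris, France.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

# Query = question text plus the image caption (an assumption about how the
# visual context is injected into the term-based retriever).
question, caption = "Which country gifted this statue?", "a large green statue on an island"
query = (question + " " + caption).lower().split()

top_passages = bm25.get_top_n(query, corpus, n=2)   # knowledge handed to the reader
print(top_passages)
```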
ContributorsZeng, Yankai (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Committee member) / Ghayekhloo, Samira (Committee member) / Arizona State University (Publisher)
Created2021
Description
Soft thermal interface materials (TIMs) are critical for improving the thermal management of advanced microelectronic devices. Despite containing high-thermal-conductivity filler materials, TIM performance is limited by filler-filler and filler-matrix thermal resistances as well as external contact resistance. Recently, room-temperature liquid metals (LMs) have started to be adopted as alternative TIMs for their low thermal resistance and fluidic nature. However, LM-based TIMs face challenges due to their low viscosity, non-wetting qualities, chemical reactivity, and corrosiveness towards aluminum. To address these concerns, this dissertation research investigates fundamental LM properties and assesses their utility for developing multiphase LM composites with strong thermal properties. Augmenting LM with gallium oxide and air capsules leads to LM-based foams with improved spreading and patterning. Gallium oxides are responsible for stabilizing LM foam structures, which is observed through electron microscopy, revealing a temporal evolution of air voids after shear mixing in air. The presence of air bubbles and oxide fragments in LM decreases its thermal conductivity and increases its viscosity as the shear mixing time is prolonged. An overall mechanism for foam generation in LM is presented in two stages: 1) oxide fragment accumulation and 2) air bubble entrapment and propagation. To avoid the low-thermal-conductivity air content, mixing non-reactive particles of tungsten or silicon carbide (SiC) into LM forms paste-like LM-based mixtures that exhibit tunable, high thermal conductivity 2-3 times that of the matrix material. These filler materials remain chemically stable and do not react with LM over time while suspended. Gallium oxide-mediated wetting mechanisms for these non-wetting fillers are elucidated in oxygen-rich and oxygen-deficient environments. Three-phase composites consisting of LM and Ag-coated SiC fillers dispersed in a non-curing silicone oil matrix address LM-corrosion-related issues. Ag-coated SiC particles enable improved wetting by the LM, and the results show that applied pressure is necessary for bridging these LM-coated particles to improve filler thermal resistance. Compositional tuning between the fillers leads to thermal improvements in this multiphase composite. The results of this dissertation work aim to advance the current understanding of LMs and of how to design LM-based composite materials for improved TIMs and other soft thermal applications.
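To put the reported 2-3x conductivity enhancement in context, the sketch below evaluates the classical Maxwell effective-medium estimate for spherical fillers in a liquid-metal matrix. It is a textbook approximation with assumed property values, not the model or the measurements from this dissertation, and it neglects the interfacial resistances that the work identifies as limiting factors.

```python
# Classical Maxwell effective-medium estimate for spherical fillers in a
# liquid-metal matrix -- a textbook approximation with assumed property
# values, not the measurements or model from this dissertation.
def maxwell_k_eff(k_matrix, k_filler, phi):
    """Effective thermal conductivity (W/m-K) at filler volume fraction phi."""
    num = k_filler + 2 * k_matrix + 2 * phi * (k_filler - k_matrix)
    den = k_filler + 2 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

k_lm = 25.0     # assumed liquid-metal (EGaIn-like) conductivity, W/m-K
k_w = 170.0     # assumed tungsten filler conductivity, W/m-K

for phi in (0.1, 0.3, 0.5):
    print(f"phi={phi:.1f}: k_eff ~ {maxwell_k_eff(k_lm, k_w, phi):.1f} W/m-K")
```

At a 50% filler loading this idealized estimate already gives roughly 2.5x the matrix conductivity, consistent in magnitude with the enhancement described above.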
ContributorsKong, Wilson (Author) / Wang, Robert Y (Thesis advisor) / Rykaczewski, Konrad (Thesis advisor) / Green, Matthew D (Committee member) / Tongay, Sefaattin (Committee member) / Arizona State University (Publisher)
Created2021
Description
Thermal management is a critical aspect of microelectronics packaging and often centers around preventing central processing units (CPUs) and graphics processing units (GPUs) from overheating. As the power delivered to these processors increases, so too does the need for more effective thermal management strategies. One such strategy, and the focus of this thesis, is to use additive manufacturing to fabricate heat sinks with bio-inspired and cellular structures. In this study, a process was developed for manufacturing the copper alloy CuNi2SiCr on the 100 W Concept Laser Mlab laser powder bed fusion 3D printer to obtain parts that were 94% dense, while dealing with the challenges of copper's low laser absorptivity and high potential for oxidation. The developed process was then used to manufacture and test heat sinks with traditional pin and fin designs to establish a baseline cooling effect, as determined from tests conducted on a substrate, CPU, and heat spreader assembly. Two additional heat sinks were designed, the first bio-inspired and the second incorporating Triply Periodic Minimal Surface (TPMS) cellular structures, with the aim of improving the cooling effect relative to commercial heat sinks. The results showed that the pure copper commercial pin-design heat sink outperformed the additively manufactured (AM) pin-design heat sink under both natural and forced convection conditions due to its approximately tenfold higher thermal conductivity. However, the gap in performance could be bridged using the bio-inspired and Schwarz-P heat sink designs developed in this work, an encouraging indicator that further improvements could be obtained with improved alloys, heat treatments, and even more innovative designs.
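The Schwarz-P structure referenced above can be described by a simple implicit equation. The sketch below, with an assumed unit-cell size and level-set threshold unrelated to the actual heat sink geometry, samples that field on a voxel grid, which is a common starting point for generating a TPMS lattice for printing.

```python
# Minimal sketch of a Schwarz-P (TPMS) level-set field; the unit-cell size and
# threshold are illustrative assumptions, not the printed heat sink geometry.
import numpy as np

def schwarz_p(x, y, z, cell=5.0):
    """Schwarz-P implicit field; the surface is the zero level set."""
    k = 2 * np.pi / cell
    return np.cos(k * x) + np.cos(k * y) + np.cos(k * z)

# Sample a 20 mm cube on a coarse grid and keep material where the field is
# below a threshold; lowering t thins the walls (higher porosity).
n, size, t = 64, 20.0, 0.4
ax = np.linspace(0.0, size, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
solid = schwarz_p(X, Y, Z) <= t           # boolean voxel model of the lattice

print(f"relative density ~ {solid.mean():.2f}")  # fraction of the cube filled
```

Extracting the level-set boundary from such a voxel field (for example with marching cubes) yields a surface mesh that can be exported for printing.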
ContributorsYaple, Jordan Marie (Author) / Bhate, Dhruv (Thesis advisor) / Azeredo, Bruno (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created2021
Description
Complex systems appear when interaction among system components creates emergent behavior that is difficult to predict from component properties. The growth of the Internet of Things (IoT) and embedded technology has increased complexity across several sectors (e.g., automotive, aerospace, agriculture, city infrastructures, home technologies, healthcare), where the paradigm of cyber-physical systems (CPSs) has become a standard. While CPSs enable unprecedented capabilities, they raise new challenges in system design, certification, control, and verification. When optimizing system performance, computationally expensive simulation tools are often required, and search algorithms that sequentially interrogate a simulator to learn promising solutions are in great demand. This class of algorithms comprises black-box optimization techniques. However, the generality that makes black-box optimization desirable also causes computational efficiency difficulties when applied to real problems. This thesis focuses on Bayesian optimization, a prominent black-box optimization family, and proposes new principles, translated into implementable algorithms, to scale Bayesian optimization to highly expensive, large-scale problems. Four problem contexts are studied and approaches are proposed for practically applying Bayesian optimization concepts, namely: (1) increasing the sample efficiency of a highly expensive simulator in the presence of other sources of information, where multi-fidelity optimization is used to leverage complementary information sources; (2) accelerating global optimization in the presence of local searches by avoiding over-exploitation with adaptive restart behavior; (3) scaling optimization to high-dimensional input spaces by integrating game-theoretic mechanisms with traditional techniques; (4) accelerating optimization by embedding function structure when the reward function is a minimum of several functions. In the first context, this thesis produces two multi-fidelity algorithms, a sample-driven and a model-driven approach, implemented to optimize a serial production line; in the second context, the Stochastic Optimization with Adaptive Restart (SOAR) framework is produced and analyzed with multiple applications to CPS falsification problems; in the third context, the Bayesian optimization with sample fictitious play (BOFiP) algorithm is developed with an implementation in high-dimensional neural network training; in the last problem context, the minimum surrogate optimization (MSO) framework is produced and combined with both Bayesian optimization and the SOAR framework, with applications in the simultaneous falsification of multiple CPS requirements.
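For reference, the sketch below shows the basic Bayesian-optimization loop that these frameworks build on: a Gaussian-process surrogate and an expected-improvement acquisition on a toy 1-D minimization problem. It is a generic illustration using scikit-learn and SciPy, not an implementation of SOAR, BOFiP, or MSO, and the toy objective stands in for an expensive simulator.

```python
# Generic Bayesian-optimization loop (GP surrogate + expected improvement) on a
# toy 1-D minimization problem; illustrative only, not SOAR/BOFiP/MSO.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_sim(x):                      # stand-in for a costly simulator
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4, 1))        # initial design
y = expensive_sim(X).ravel()
grid = np.linspace(-3, 3, 500).reshape(-1, 1)

for _ in range(15):                        # sequential interrogation of the simulator
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, 1)             # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_sim(x_next).ravel())

print(f"best x ~ {X[np.argmin(y)][0]:.3f}, best value ~ {y.min():.3f}")
```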
ContributorsMathesen, Logan (Author) / Pedrielli, Giulia (Thesis advisor) / Candan, Kasim (Committee member) / Fainekos, Georgios (Committee member) / Gel, Esma (Committee member) / Montgomery, Douglas (Committee member) / Zabinsky, Zelda (Committee member) / Arizona State University (Publisher)
Created2021
Description
Artificial intelligence is one of the leading technologies that mimics the problem-solving and decision-making capabilities of the human brain. Machine learning algorithms, especially deep learning algorithms, are leading the way in terms of performance and robustness. They are used for various purposes, mainly for computer vision, speech recognition, and object detection. The algorithms are usually tested for accuracy, and they utilize full floating-point precision (32 bits). The hardware would require a large amount of power and area to accommodate many parameters at full precision. In this exploratory work, a convolutional autoencoder is quantized to work with an event-based camera. The model is designed so that the autoencoder can work on-chip, which would sufficiently decrease the processing latency. Different quantization methods are used to quantize and binarize the weights and activations of this neural network model to make it portable and power efficient. A sparsity term is added to make the model as robust and energy-efficient as possible. The network model was able to recoup the accuracy lost from binarizing the weights and activations by selectively quantizing the layers of the encoder. This method of recouping the accuracy gives enough flexibility to put the network on-chip to get real-time processing from systems like event-based cameras. Lately, computer vision, and especially object detection, has made strides in detection accuracy. The algorithms can sufficiently detect and predict objects in real time. However, end-to-end detection is challenging due to the algorithms' large parameter and processing requirements. A change in the Non-Maximum Suppression algorithm in SSD (Single Shot Detector)-MobileNet-V1 resulted in lower computational complexity without a change in the output quality metric. The calculated Mean Average Precision (mAP) suggests that this method can be implemented in the post-processing of other networks.
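Since the final contribution modifies the Non-Maximum Suppression step, the sketch below shows the standard greedy NMS baseline for reference; it is a generic implementation with an assumed IoU threshold, not the modified algorithm developed in this work.

```python
# Standard greedy Non-Maximum Suppression baseline (generic reference
# implementation, not the modified NMS proposed in this work).
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop heavily overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```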
ContributorsKuzhively, Ajay Balu (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Fan, Delian (Committee member) / Arizona State University (Publisher)
Created2021
Description
Automation has become a staple in high-volume manufacturing, where the consistency and quality of a product carry as much importance as the quantity produced. The aerospace industry has a vested interest in expanding the application of automation beyond manufacturing alone. In this project, the process of systems engineering has been applied to the conceptual design phase of product development; specifically, the preliminary structural design of a composite wing for an Unmanned Air Vehicle (UAV). Automated structural analysis can be used to develop a composite wing structure that can be directly rendered in Computer Aided Drafting (CAD) and validated using Finite Element Analysis (FEA). This concept provides the user with the ability to quickly iterate designs and demonstrates how different the “optimal lightweight” composite structure must look for UAV systems of varied weight, range, and flight maneuverability.
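A toy sketch of the kind of automated design iteration described above follows, with a simple beam-bending stress check standing in for the full FEA validation; the loads, allowables, and geometry are placeholder assumptions, not values from the UAV study.

```python
# Toy automated sizing loop: thicken a rectangular spar cap until the root
# bending stress passes a simple beam check (a stand-in for the FEA step).
# All loads, allowables, and geometry below are placeholder assumptions.

SEMI_SPAN = 2.0          # m
CAP_WIDTH = 0.15         # m, width of the spar cap (assumed)
LIFT_PER_SPAN = 1500.0   # N/m, uniform lift on the semi-span (assumed)
SPAR_DEPTH = 0.06        # m, distance between upper and lower caps (assumed)
ALLOWABLE = 400e6        # Pa, composite laminate allowable (assumed)
DENSITY = 1600.0         # kg/m^3, CFRP (assumed)

def root_cap_stress(t_cap):
    """Axial stress in the spar caps from the root bending moment (beam theory)."""
    m_root = LIFT_PER_SPAN * SEMI_SPAN ** 2 / 2.0          # cantilever, uniform load
    cap_area = CAP_WIDTH * t_cap
    return m_root / (cap_area * SPAR_DEPTH)                # force couple in the caps

t = 0.0005                                                  # start at 0.5 mm
while root_cap_stress(t) > ALLOWABLE:
    t += 0.0005                                             # iterate the design
mass = 2 * CAP_WIDTH * t * SEMI_SPAN * DENSITY              # both caps, one semi-span
print(f"cap thickness {t * 1000:.1f} mm, cap mass {mass:.2f} kg")
```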
ContributorsBlair, Martin Caceres (Author) / Takahashi, Timothy (Thesis advisor) / Murthy, Raghavendra (Committee member) / Perez, Ruben (Committee member) / Arizona State University (Publisher)
Created2021