Matching Items (214)
Description
Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to diagnosis and treatment of a brain disorder and can also identify the area of the brain from where the disorder has originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to an adaptive and improved brain disorder diagnosis, as these algorithms can track activities that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are nonlinearly related to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Due to the large number of sensors required to record EEG/MEG measurements for use in the particle filter over long recording periods, a large amount of power is required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean squared estimation error as the performance metric under a maximum power consumption constraint. The performance metric of the second method uses the distance between the locations of the sensors and the location estimate of the dipole source at the previous time step; this sensor scheduling scheme results in maximizing the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both methods provide good estimation results with a significant reduction in the number of activated sensors at each time step.
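
To make the second sensor-scheduling idea concrete, the following is a minimal Python sketch, not the thesis's implementation: it simply activates the sensors closest to the previous dipole-location estimate, the rule the abstract associates with maximizing the overall signal-to-noise ratio. The sensor layout, the budget parameter, and the function names are illustrative assumptions.

    import numpy as np

    def select_active_sensors(sensor_xyz, prev_source_estimate, budget):
        # Hypothetical rule: activate the `budget` sensors nearest to the dipole
        # location estimated at the previous time step (a proxy for high SNR).
        distances = np.linalg.norm(sensor_xyz - prev_source_estimate, axis=1)
        return np.argsort(distances)[:budget]

    rng = np.random.default_rng(0)
    sensor_xyz = rng.uniform(-0.1, 0.1, size=(128, 3))  # assumed 128 EEG/MEG sensor positions (m)
    x_prev = np.array([0.02, -0.01, 0.05])               # dipole location estimate at time t-1
    active = select_active_sensors(sensor_xyz, x_prev, budget=16)
    print("activated sensor indices:", active)

In a full particle filter, only the measurements from these activated channels would enter the likelihood evaluation at the current time step.
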
Contributors: Michael, Stefanos (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength-to-weight and stiffness-to-weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This technique, known as schedule-based maintenance, is expensive, time-consuming, and often limited to easily accessible structural elements. Therefore, there is an increased demand for robust and efficient Structural Health Monitoring (SHM) techniques that can be used for Condition Based Monitoring, in which structural components are inspected based upon damage metrics rather than flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, offering not only a reduced number of schedule-based inspections but also better useful-life estimates. SHM frameworks require the development of different sensing technologies, algorithms, and procedures to detect, localize, quantify, characterize, and assess overall damage in aerospace structures so that reliable estimates of the remaining useful life can be made. The use of piezoelectric transducers along with guided Lamb waves is a method that has received considerable attention due to the weight, cost, and function of systems based on these elements. The research in this thesis investigates the ability of Lamb waves to detect damage in feature-dense anisotropic composite panels. Most current research negates the effects of experimental variability by performing tests on structurally simple isotropic plates that are used as the baseline and damaged specimens. In actual applications, however, variability cannot be negated, and there is therefore a need to research the effects of complex sample geometries, environmental operating conditions, and variability in material properties. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel, and localizes delamination damage caused by impact. The overall goal was to utilize a correlative approach that used only the damage feature produced by the delamination as the damage index. This approach was adopted because it offered a simple way to determine the existence and location of damage without having to conduct a more complex wave propagation analysis or take into account the geometric complexities of the test specimen. Results showed that even in a complex structure, if the damage feature can be extracted and measured, then an appropriate damage index can be associated with it and the location of the damage can be inferred using a dense sensor array. The second experiment presented in this research studies the effects of temperature on damage detection when one test specimen is used for the benchmark data set and another for damage data collection. This expands the previous experiment by exploring not only the effects of variable temperature but also the effects of high experimental variability. Results from this work show that the damage feature in the data is not only extractable at higher temperatures, but also that data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison, owing to the linearity of the collected data.
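
As an illustration of the correlative damage-index idea described above, here is a small Python sketch under stated assumptions: the baseline and test waveforms, the tone-burst excitation, and the choice of one minus the peak normalized cross-correlation as the index are placeholders rather than the thesis's actual feature extraction.

    import numpy as np

    def damage_index(baseline, signal):
        # 1 - peak normalized cross-correlation: 0 for identical signals,
        # approaching 1 as the sensed waveform departs from the baseline.
        b = (baseline - baseline.mean()) / (baseline.std() + 1e-12)
        s = (signal - signal.mean()) / (signal.std() + 1e-12)
        xcorr = np.correlate(b, s, mode="full") / len(b)
        return 1.0 - np.max(np.abs(xcorr))

    t = np.linspace(0.0, 1e-4, 2000)                             # assumed 100-microsecond record
    baseline = np.sin(2 * np.pi * 3.0e5 * t) * np.hanning(2000)  # 300 kHz windowed tone burst
    rng = np.random.default_rng(1)
    damaged = 0.8 * np.roll(baseline, 40) + 0.02 * rng.normal(size=2000)
    print("damage index:", round(damage_index(baseline, damaged), 3))

Computing such an index for every actuator-sensor path in a dense array and mapping the largest values is the spirit of the localization approach described above.
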
Contributors: Vizzini, Anthony James, II (Author) / Chattopadhyay, Aditi (Thesis advisor) / Fard, Masoud (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Continuous underwater observation is a challenging engineering task that could be accomplished by the development and deployment of a sensor array that can survive harsh underwater conditions. One approach to this challenge is a swarm of micro underwater robots, known as Sensorbots, that are equipped with biogeochemical sensors and can relay information among themselves in real time. This innovative method for underwater exploration can contribute to a more comprehensive understanding of the ocean by not limiting sampling to a single point and time. In this thesis, Sensorbot Beta, a low-cost, fully enclosed Sensorbot prototype for bench-top characterization and short-term field testing, is presented in a modular format that provides flexibility and the potential for rapid design. Sensorbot Beta is designed around a microcontroller-driven platform composed of commercial off-the-shelf components for all hardware to reduce cost and development time. The primary sensor incorporated into Sensorbot Beta is an in situ fluorescent pH sensor. Design considerations have been made for the easy adoption of other fluorescent or phosphorescent sensors, such as dissolved oxygen or temperature sensors. The optical components are designed in a format that enables additional sensors. A real-time data acquisition system utilizing Bluetooth allows for characterization of the sensor in bench-top experiments. Sensorbot Beta demonstrates rapid calibration, and future work will include deployment for large-scale experiments in a lake or ocean.
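
The rapid calibration mentioned above could, in the simplest case, amount to fitting a curve that maps measured fluorescence intensity to pH over the sensor's working range. The sketch below is a generic illustration only; the buffer values, intensity readings, and linear model are assumptions, not the actual Sensorbot Beta calibration routine.

    import numpy as np

    buffer_ph = np.array([6.0, 6.5, 7.0, 7.5, 8.0])       # assumed calibration buffers
    intensity = np.array([0.91, 0.74, 0.58, 0.41, 0.26])   # assumed fluorescence readings (a.u.)

    # Fit a first-order calibration curve over the working range.
    slope, intercept = np.polyfit(intensity, buffer_ph, deg=1)

    def intensity_to_ph(reading):
        return slope * reading + intercept

    print("estimated pH at a reading of 0.50:", round(intensity_to_ph(0.50), 2))
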
Contributors: Johansen, John (Civil engineer) (Author) / Meldrum, Deirdre R (Thesis advisor) / Chao, Shih-hui (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Camera calibration has applications in the fields of robotic motion, geographic mapping, semiconductor defect characterization, and many more. This thesis considers camera calibration for the purpose of high-accuracy three-dimensional reconstruction when characterizing ball grid arrays within the semiconductor industry. Bouguet's calibration method is applied following a set of criteria in order to study the method's performance according to newly proposed standards. The performance of a camera calibration method is currently measured using standards such as pixel error and computational time. This thesis proposes the use of the standard deviation of the intrinsic parameter estimates within a Monte Carlo simulation as a new standard of performance measurement. It specifically shows that this standard deviation decreases as the number of images input into the calibration routine increases. It is also shown that the default thresholds of the non-linear maximum likelihood estimation problem in the calibration method must be changed in order to improve computational time; however, the accuracy lost is negligible even for high-accuracy requirements such as ball grid array characterization.
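
The proposed performance measure can be illustrated with a toy Monte Carlo experiment in Python. The stand-in "calibration" below simply averages noisy per-image focal-length estimates; it is not Bouguet's routine, and the noise level, trial count, and image counts are assumptions chosen only to show the standard deviation shrinking as more images are used.

    import numpy as np

    rng = np.random.default_rng(42)
    true_focal_px = 1200.0      # assumed ground-truth focal length in pixels
    image_noise_px = 8.0        # assumed per-image estimation noise
    trials = 500                # Monte Carlo repetitions

    for n_images in (5, 10, 20, 40):
        estimates = []
        for _ in range(trials):
            per_image = true_focal_px + image_noise_px * rng.normal(size=n_images)
            estimates.append(per_image.mean())   # stand-in for one calibration run
        print(f"{n_images:2d} images -> std of intrinsic estimate: {np.std(estimates):.2f} px")
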
Contributors: Stenger, Nickolas Arthur (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is widely used in the fields of computer vision, robotics, and 3D reconstruction, and it is the first step in extracting 3D data from a 2D image. It plays a crucial role in computer vision and 3D reconstruction because the accuracy of the reconstruction and of the 3D coordinate determination relies to a great extent on the accuracy of the camera calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages and issues of existing state-of-the-art methods are discussed and overcome in this work. The implemented system consists of local adaptive segmentation, ellipse fitting, projection, and optimization techniques. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error as compared to the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
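
To illustrate the front end of such a pipeline (local adaptive segmentation followed by ellipse fitting), the following Python sketch uses OpenCV on a synthetic circular target. OpenCV 4 is assumed, and the target geometry, block size, and area filter are illustrative choices; the projection and optimization stages of the proposed method are not shown.

    import cv2
    import numpy as np

    # Synthetic calibration target: dark circular marks on a white background.
    img = np.full((240, 320), 255, dtype=np.uint8)
    for cx in range(40, 320, 60):
        for cy in range(40, 240, 60):
            cv2.circle(img, (cx, cy), 12, 0, -1)

    # Local adaptive segmentation, then contour extraction and ellipse fitting.
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    centers = []
    for c in contours:
        if len(c) >= 5 and cv2.contourArea(c) > 50:   # fitEllipse needs at least 5 points
            (ex, ey), axes, angle = cv2.fitEllipse(c)
            centers.append((ex, ey))
    print(f"detected {len(centers)} candidate circle centers")
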
Contributors: Prakash, Charan Dudda (Author) / Karam, Lina J (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique that exploits the characteristics of the quantized coefficients after the zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing the dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute differences (SAD) scheme based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of the motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction, and dynamic range reduction that further reduce energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only a 0.5 dB to 1.5 dB loss in PSNR.
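
A hedged sketch of the truncated-SAD idea is shown below: per-pixel absolute differences are saturated to a small number of low-order bits before accumulation, which rarely changes the result because most differences are small. The bit width, block size, and saturation (rather than wrap-around) behavior are assumptions for illustration, not the dissertation's exact hardware scheme.

    import numpy as np

    def sad_msb_truncated(block_a, block_b, bits=4):
        # Absolute differences saturated to `bits` low-order bits, then accumulated.
        ad = np.abs(block_a.astype(np.int16) - block_b.astype(np.int16))
        ad_truncated = np.minimum(ad, (1 << bits) - 1)
        return int(ad_truncated.sum())

    rng = np.random.default_rng(3)
    current = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    reference = np.clip(current.astype(np.int16) + rng.integers(-3, 4, size=(16, 16)), 0, 255)

    full_sad = int(np.abs(current.astype(np.int16) - reference).sum())
    print("full SAD:     ", full_sad)
    print("truncated SAD:", sad_msb_truncated(current, reference))
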
Contributors: Emre, Yunus (Author) / Chakrabarti, Chaitali (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas such as novel sensing techniques, robust algorithms for damage interrogation, high-fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is through the use of Lamb wave propagation, which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave-based approaches, as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions, which may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered so that PZTs are placed in regions where they do not fail before the host structure. In order to process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures. To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure. A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low-velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site.
Contributors: Coelho, Clyde (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Wu, Tong (Committee member) / Das, Santanu (Committee member) / Rajadas, John (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of multiple agents working in its work environment. A robot must also learn to adapt to different agents in the workspace and conduct its interaction based on the presence of these agents. A theoretical framework that performs interaction learning from demonstrations in a two-agent work environment, called Interaction Primitives, was previously introduced.

This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures the correlation between more than two agents while performing a single task. The new framework is an intuitive, generic, easy-to-install, and easy-to-use Python library for applying Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document.
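
A minimal Python sketch of the Gaussian conditioning at the heart of Interaction Primitives is given below; it illustrates the underlying idea, not the API of the library described in this document. The weight dimensions, synthetic demonstrations, and function name are assumptions; a real implementation would obtain the weights by projecting demonstrated trajectories onto basis functions.

    import numpy as np

    rng = np.random.default_rng(7)
    n_demos, d_h, d_r = 50, 8, 8                   # demos, human/robot weight dimensions
    w_h = rng.normal(size=(n_demos, d_h))
    w_r = 0.5 * w_h + 0.1 * rng.normal(size=(n_demos, d_r))   # correlated partner motion
    W = np.hstack([w_h, w_r])

    # Joint Gaussian over both agents' trajectory weights, learned from demonstrations.
    mu = W.mean(axis=0)
    cov = np.cov(W, rowvar=False) + 1e-6 * np.eye(d_h + d_r)

    def infer_robot_weights(observed_human_w):
        # Condition the joint Gaussian on the observed human weights.
        mu_h, mu_r = mu[:d_h], mu[d_h:]
        S_hh, S_rh = cov[:d_h, :d_h], cov[d_h:, :d_h]
        return mu_r + S_rh @ np.linalg.solve(S_hh, observed_human_w - mu_h)

    print(infer_robot_weights(w_h[0]).round(2))

The returned weights are the expected motion of the second agent given the first agent's observed motion, which can then be decoded back into a trajectory.
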
Contributors: Kumar, Ashish, M.S (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Computer Vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation some novel approaches to representation learning and task learning are studied.

Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning approach is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem.

In the case of representation learning, the generality of neural features is investigated first. This investigation leads to some critical understanding of, and results on, feature generality among datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks.
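
The "dark knowledge" distillation objective referenced above can be sketched as follows. This numpy illustration shows the standard distillation loss (hard-label cross-entropy plus a temperature-softened KL term) as a general technique; the temperature, mixing weight, and logits are placeholders, not the dissertation's exact training setup.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.3):
        hard = -np.log(softmax(student_logits)[label])                   # cross-entropy on labels
        p_t, p_s = softmax(teacher_logits / T), softmax(student_logits / T)
        soft = (T ** 2) * np.sum(p_t * (np.log(p_t) - np.log(p_s)))      # KL to the mentor's soft targets
        return alpha * hard + (1.0 - alpha) * soft

    teacher = np.array([2.0, 6.5, 0.5, 1.0])   # pre-trained mentor logits for one example
    student = np.array([1.0, 3.0, 0.2, 0.8])   # compressed student logits
    print("loss:", round(distillation_loss(student, teacher, label=1), 4))
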
Contributors: Venkatesan, Ragav (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
With the rise of the Big Data era, an exponential amount of network data is being generated at an unprecedented rate across a wide range of high-impact micro and macro areas of research, from protein interaction to social networks. The critical challenge is translating this large-scale network data into actionable information.

A key task in the data translation is the analysis of network connectivity via marked nodes, which is the primary focus of our research. We have developed a framework for analyzing network connectivity via marked nodes in large-scale graphs, utilizing novel algorithms in three interrelated areas: (1) analysis of a single seed node via its ego-centric network (AttriPart algorithm); (2) pathway identification between two seed nodes (K-Simple Shortest Paths Multithreaded and Search Reduced (KSSPR) algorithm); and (3) tree detection, defining the interaction between three or more seed nodes (Shortest Path MST algorithm).

In an effort to address both fundamental and applied research issues, we have developed the LocalForecasting algorithm to explore how network connectivity analysis can be applied to local community evolution and recommender systems. The goal is to apply the LocalForecasting algorithm to various domains, e.g., friend suggestions in social networks or future collaboration in co-authorship networks. This algorithm utilizes link prediction in combination with the AttriPart algorithm to predict future connections in local graph partitions.

Results show that our proposed AttriPart algorithm finds up to 1.6x denser local partitions, while running approximately 43x faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm demonstrates a significant improvement in the number of nodes and edges correctly predicted over baseline methods. Furthermore, results for the KSSPR algorithm demonstrate a speed-up of up to 2.5x over the standard k-simple shortest paths algorithm.
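
For reference, the baseline task that KSSPR accelerates, enumerating the k shortest simple paths between two seed nodes, can be run with networkx's standard Yen-style generator as sketched below. The toy graph and the value of k are assumptions; this is the single-threaded baseline, not the thesis's multithreaded, search-reduced implementation.

    from itertools import islice
    import networkx as nx

    # Small weighted toy graph with two seed nodes "a" and "e".
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("a", "b", 1), ("b", "c", 1), ("a", "d", 2),
        ("d", "c", 1), ("b", "d", 1), ("c", "e", 1),
    ])

    k = 3
    for path in islice(nx.shortest_simple_paths(G, "a", "e", weight="weight"), k):
        print(path)
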
Contributors: Freitas, Scott (Author) / Tong, Hanghang (Thesis advisor) / Maciejewski, Ross (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2018