Matching Items (8)
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is under a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach to using observability for sensor scheduling, employing the condition number of the observability matrix as the metric and using column subset selection to build an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
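As a rough illustration of the selection step described above (a minimal sketch, not the dissertation's algorithm; the system matrices, the horizon, and SciPy's column-pivoted QR standing in for a rank-revealing QR are all assumptions):

```python
import numpy as np
from scipy.linalg import qr

def schedule_sensors(A, C, k, horizon):
    """Sketch: pick k (time step, sensor) measurements by applying
    column-pivoted QR to the transpose of the observability matrix
    O = [C; C A; ...; C A^(horizon-1)].

    Rows of O are indexed by (time step, sensor); selecting well-
    conditioned rows keeps the condition number of the reduced
    observability matrix small."""
    n, m = A.shape[0], C.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(horizon):
        blocks.append(C @ Ak)      # measurements available at this step
        Ak = A @ Ak
    O = np.vstack(blocks)          # (horizon*m) x n observability matrix
    # Column-pivoted QR on O^T ranks the rows of O; SciPy's pivoted QR
    # is used here as a stand-in for a rank-revealing QR factorization.
    _, _, piv = qr(O.T, mode='economic', pivoting=True)
    return [(idx // m, idx % m) for idx in piv[:k]]   # (time, sensor) pairs

# Toy usage with a random system (illustrative only)
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((4, 4))
C = rng.standard_normal((3, 4))
print(schedule_sensors(A, C, k=4, horizon=5))
```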
ContributorsIlkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created2015
Description
In recent years, networked systems have become prevalent in communications, computing, sensing, and many other areas. In a network composed of spatially distributed agents, network-wide synchronization of information about the physical environment and the network configuration must be maintained using measurements collected locally by the agents. Registration is a process for connecting the coordinate frames of multiple sets of data. This poses numerous challenges, particularly due to the availability of direct communication only between neighboring agents in the network. These challenges are exacerbated by uncertainty in the measurements and by imperfect communication links. This research explored statistically based registration in a sensor network. The approach developed exploits measurements of offsets formed as differences of state values between pairs of agents that share a link in the network graph. It takes into account that the true offsets around any closed cycle in the network graph must sum to zero.
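To make the cycle-consistency idea concrete, here is a minimal sketch (an assumed least-squares formulation, not the thesis's statistical procedure) that estimates per-agent states from noisy pairwise offset measurements; the fitted edge offsets then sum to zero around every closed cycle by construction:

```python
import numpy as np

def estimate_offsets(n, edges, measurements):
    """Least-squares estimate of per-node states from noisy pairwise
    offset measurements y_ij ~ x_i - x_j on the graph edges."""
    B = np.zeros((len(edges), n))               # edge-node incidence matrix
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = 1.0, -1.0
    # States are identifiable only up to a common shift; pin node 0 to zero.
    x_hat, *_ = np.linalg.lstsq(B[:, 1:], measurements, rcond=None)
    return np.concatenate(([0.0], x_hat))

# Toy example: a triangle whose raw measurements violate cycle closure
edges = [(0, 1), (1, 2), (2, 0)]
y = np.array([1.1, 0.9, -2.2])                  # sums to -0.2, not 0
x = estimate_offsets(3, edges, y)
fitted = np.array([x[i] - x[j] for i, j in edges])
print(np.round(x, 3))
print(np.round(fitted.sum(), 6))                # ~0: cycle-consistent
```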
ContributorsPhuong, Shih-Ling (Author) / Cochran, Douglas (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2014-05
Description
The data explosion in the past decade is in part due to the widespread use of rich sensors that measure various physical phenomena -- gyroscopes that measure orientation in phones and fitness devices, the Microsoft Kinect, which measures depth information, etc. A typical application requires inferring the underlying physical phenomenon from data, which is done using machine learning. A fundamental assumption in training models is that the data is Euclidean, i.e., the metric is the standard Euclidean distance governed by the L-2 norm. However, in many cases this assumption is violated when the data lies on non-Euclidean spaces such as Riemannian manifolds. While the underlying geometry accounts for the non-linearity, accurate analysis of human activity also requires temporal information to be taken into account. Human movement has a natural interpretation as a trajectory on the underlying feature manifold, as it evolves smoothly in time. A commonly occurring theme in many emerging problems is the need to represent, compare, and manipulate such trajectories in a manner that respects the geometric constraints. This dissertation is a comprehensive treatise on modeling Riemannian trajectories to understand and exploit their statistical and dynamical properties. Such properties allow us to formulate novel representations for Riemannian trajectories. For example, the physical constraints on human movement are rarely considered, which results in an unnecessarily large space of features, making search, classification, and other applications more complicated. Exploiting statistical properties can help us understand the true space of such trajectories. In applications such as stroke rehabilitation, where there is a need to differentiate between very similar kinds of movement, dynamical properties can be much more effective. In this regard, we propose a generalization of the Lyapunov exponent to Riemannian manifolds and show its effectiveness for human activity analysis. The theory developed in this thesis naturally leads to several benefits in areas such as data mining, compression, dimensionality reduction, classification, and regression.
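A toy sketch of what comparing trajectories while respecting the geometry can mean, using the unit sphere as a stand-in manifold (purely illustrative; this is not one of the representations developed in the dissertation):

```python
import numpy as np

def sphere_geodesic(p, q):
    """Geodesic (great-circle) distance between unit vectors p and q."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def trajectory_distance(X, Y):
    """Compare two equal-length trajectories on the unit sphere by
    averaging pointwise geodesic distances, so the comparison respects
    the manifold rather than using the ambient Euclidean norm."""
    return np.mean([sphere_geodesic(x, y) for x, y in zip(X, Y)])

# Two toy trajectories of unit vectors in R^3
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = X + 0.1 * rng.standard_normal((20, 3)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
print(round(trajectory_distance(X, Y), 4))
```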
ContributorsAnirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Cochran, Douglas (Committee member) / Runger, George C. (Committee member) / Taylor, Thomas (Committee member) / Arizona State University (Publisher)
Created2016
Description
A distributed sensor network (DSN) is a set of spatially scattered intelligent sensors designed to obtain data across an environment. DSNs are becoming a standard architecture for collecting data over a large area. We need registration of nodal data across the network in order to properly exploit having multiple sensors. One major problem worth investigating is ensuring the integrity of the data received, such as time synchronization. Consider a group of matched-filter sensors. Each sensor collects the same data and compares the data collected to a known signal. In an ideal world, each sensor would be able to collect the data without offsets or noise in the system. Two models follow from this. First, each sensor could make a decision on its own, and the decisions could then be collected at a "fusion center," which decides whether the signal is present based on the number of true-or-false decisions that the sensors have made. Alternatively, each sensor could relay the data that it collects to the fusion center, which then makes a decision based on all of the data it receives. Since the fusion center has more information to base its decision on in the latter case than in the former, where it only receives a true or false from each sensor, one would expect the latter model to perform better. In fact, this would be the gold standard for detection across a DSN. However, there is random noise in the world that corrupts data collection, especially among sensors in a DSN. Each sensor does not collect the data in exactly the same way or with the same precision. We classify these imperfections in data collection as offsets, specifically the offset present in the data collected by one sensor with respect to the rest of the sensors in the network. Therefore, reconsider the two models for a DSN described above. We can naively implement either of these models for data collection. Alternatively, we can attempt to estimate the offsets between the sensors and compensate for them. One would expect that estimating the offsets within the DSN would provide better overall results than not estimating them. This thesis will be structured as follows. First, there will be an extensive investigation into detection theory and the impact that different types of offsets have on sensor networks. Following the theory, an algorithm for estimating and correcting for the data offsets will be proposed. Next, we will look at Monte Carlo simulation results to see the impact of data offsets on sensor performance in comparison to a sensor network without offsets present. The algorithm is then implemented, and further experiments will demonstrate sensor performance with offset detection.
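A hedged Monte Carlo sketch of the two fusion models described above (the toy signal model, thresholds, and offset value are all assumptions, not the thesis's simulation code):

```python
import numpy as np

def monte_carlo_fusion(n_sensors=5, n_trials=20000, snr=0.5, offset=0.0):
    """Compare hard-decision fusion (majority vote of per-sensor matched
    filter decisions) with soft data fusion (sum of matched filter
    statistics) when each sensor observes the same signal in noise,
    possibly with a common per-sensor offset. Returns both accuracies."""
    rng = np.random.default_rng(0)
    signal_present = rng.integers(0, 2, n_trials).astype(bool)
    # Per-sensor matched filter statistic: snr when the signal is present,
    # plus the offset and unit-variance noise.
    stats = (snr * signal_present[:, None] + offset
             + rng.standard_normal((n_trials, n_sensors)))
    hard = (stats > snr / 2).sum(axis=1) > n_sensors / 2   # majority vote
    soft = stats.sum(axis=1) > n_sensors * snr / 2         # fused statistic
    return ((hard == signal_present).mean(),
            (soft == signal_present).mean())

print(monte_carlo_fusion(offset=0.0))   # no offsets: soft fusion wins
print(monte_carlo_fusion(offset=0.3))   # a common offset degrades both
```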
ContributorsMonardo, Vincent James (Author) / Cochran, Douglas (Thesis director) / Kierstead, Hal (Committee member) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Divergence functions are both highly useful and fundamental to many areas in information theory and machine learning, but require either parametric approaches or prior knowledge of labels on the full data set. This paper presents a method to estimate the divergence between two data sets in the absence of fully labeled data. This semi-labeled case is common in many domains where labeling data by hand is expensive or time-consuming, or wherever large data sets are present. The theory derived in this paper is demonstrated on a simulated example, and then applied to a feature selection and classification problem from pathological speech analysis.
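For context on what a sample-based divergence estimate looks like, here is a classical 1-nearest-neighbor estimator of the Kullback-Leibler divergence (this is not the semi-labeled estimator developed in the paper; it assumes fully sampled, unlabeled sets X ~ P and Y ~ Q):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y):
    """1-nearest-neighbor estimate of KL(P || Q) from samples X ~ P, Y ~ Q."""
    n, d = X.shape
    m = Y.shape[0]
    # Distance from each x to its nearest neighbor in X (excluding itself)
    rho = cKDTree(X).query(X, k=2)[0][:, 1]
    # Distance from each x to its nearest neighbor in Y
    nu = cKDTree(Y).query(X, k=1)[0]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

# Toy check: the estimate grows as the two Gaussian samples move apart
rng = np.random.default_rng(0)
P = rng.standard_normal((2000, 2))
print(round(knn_kl_divergence(P, rng.standard_normal((2000, 2))), 3))
print(round(knn_kl_divergence(P, rng.standard_normal((2000, 2)) + 1.0), 3))
```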
ContributorsGilton, Davis Leland (Author) / Berisha, Visar (Thesis director) / Cochran, Douglas (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve that without sacrificing the interpretation of the results, the dissertation leverages the physics behind power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling data from the grid.

The first part of the dissertation introduces a new framework of graph signal processing (GSP) for the power grid, Grid-GSP, and applies it to voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications in which these Grid-GSP-based generative models are used, such as identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression, are explored.
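A minimal GSP-flavored sketch of one of the listed applications, interpolation of missing data, assuming only smoothness of the graph signal over an assumed grid graph (not the Grid-GSP generative models themselves):

```python
import numpy as np

def laplacian_interpolate(L, observed_idx, observed_vals):
    """Fill in missing entries of a graph signal by minimizing the
    Laplacian quadratic form x^T L x subject to the observed entries,
    i.e. assuming the signal is smooth over the graph."""
    n = L.shape[0]
    obs = np.asarray(observed_idx)
    miss = np.setdiff1d(np.arange(n), obs)
    x = np.empty(n)
    x[obs] = observed_vals
    # Smoothest completion solves L_mm x_m = -L_mo x_o
    x[miss] = np.linalg.solve(L[np.ix_(miss, miss)],
                              -L[np.ix_(miss, obs)] @ observed_vals)
    return x

# Toy path graph on 5 buses with the middle measurement missing
A = np.diag(np.ones(4), 1); A = A + A.T           # adjacency of a path
L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
print(laplacian_interpolate(L, [0, 1, 3, 4], np.array([1.0, 1.1, 1.3, 1.4])))
```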

The second part of the dissertation develops a model for a joint statistical description of solar photovoltaic (PV) power and the outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning load, and supply from solar power remain matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and to detect partial-shading faults in solar panels.
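A small sketch of how low-rank structure can be used to flag anomalous days in a solar PV power matrix (toy data and a plain truncated SVD; the dissertation's statistical model is not reproduced here):

```python
import numpy as np

def low_rank_fit(P, r):
    """Approximate a days x time-of-day matrix of PV power with a rank-r
    truncated SVD; days whose profiles deviate sharply from the fit are
    candidates for partial-shading faults."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    P_hat = (U[:, :r] * s[:r]) @ Vt[:r]
    residual = np.linalg.norm(P - P_hat, axis=1)   # per-day misfit
    return P_hat, residual

# Toy data: 30 days x 96 quarter-hour slots with a clear-sky bell shape
t = np.linspace(0, 1, 96)
clear_sky = np.exp(-((t - 0.5) / 0.15) ** 2)
rng = np.random.default_rng(0)
P = np.outer(0.8 + 0.2 * rng.random(30), clear_sky)
P[10, 40:60] *= 0.3                                # simulated shading dip
_, res = low_rank_fit(P, r=1)
print(int(np.argmax(res)))                         # flags day 10
```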
ContributorsRamakrishna, Raksha (Author) / Scaglione, Anna (Thesis advisor) / Cochran, Douglas (Committee member) / Spanias, Andreas (Committee member) / Vittal, Vijay (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2020
Description
Deforestation in the Amazon rainforest has the potential to have devastating effects on ecosystems on both a local and global scale, making it one of the most environmentally threatening phenomena occurring today. In order to minimize deforestation in the Amazon and its consequences, it is helpful to analyze its occurrence using machine learning architectures such as the U-Net. The U-Net is a type of Fully Convolutional Network that has shown significant capability in performing semantic segmentation. It is built upon a symmetric series of downsampling and upsampling layers that propagate feature information into higher spatial resolutions, allowing for the precise identification of features on the pixel scale. Such an architecture is well-suited for identifying features in satellite imagery. In this thesis, we construct and train a U-Net to identify deforested areas in satellite imagery of the Amazon through semantic segmentation.
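A miniature U-Net in PyTorch showing the symmetric downsampling/upsampling structure and a skip connection described above (far smaller than a practical model, with assumed channel sizes; not the network trained in the thesis):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the repeating unit of a U-Net."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net: a downsampling path, a bottleneck, and an
    upsampling path with a skip connection, ending in a one-channel
    deforested/forested logit map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)          # 32 = 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)        # per-pixel logit

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))  # skip connection
        return self.head(d)

# One forward pass on a fake 3-band satellite tile
logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)        # torch.Size([1, 1, 64, 64])
```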
ContributorsDouglas, Liam (Author) / Giel, Joshua (Co-author) / Espanol, Malena (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2024-05
Description
Deforestation in the Amazon rainforest has the potential to have devastating effects on ecosystems on both a local and global scale, making it one of the most environmentally threatening phenomena occurring today. In order to minimize deforestation in the Amazon and its consequences, it is helpful to analyze its occurrence using machine learning architectures such as the U-Net. The U-Net is a type of Fully Convolutional Network that has shown significant capability in performing semantic segmentation. It is built upon a symmetric series of downsampling and upsampling layers that propagate feature information into higher spatial resolutions, allowing for the precise identification of features on the pixel scale. Such an architecture is well-suited for identifying features in satellite imagery. In this thesis, we construct and train a U-Net to identify deforested areas in satellite imagery of the Amazon through semantic segmentation.
ContributorsGiel, Joshua (Author) / Douglas, Liam (Co-author) / Espanol, Malena (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Sustainability (Contributor)
Created2024-05