Matching Items (179)
Description
Our goals in this project are to enable management of distributed systems from one central location, to record system logs and audit the system based on those logs, and to demonstrate the feasibility of platform-independent management of distributed systems based on the CIM schema. To achieve these goals, we must overcome several research challenges: identifying meaningful CIM classes and attributes that support these goals, determining how to gather managed objects of these CIM classes so that such attributes can be collected on a given platform, and assessing whether a platform's implementation of CIM is complete or incomplete, so as to decide which platform would be best for implementing our solution. Even if a platform's implementation of CIM is incomplete, could we supply a missing attribute ourselves and perhaps provide our own extension of the implementation? One major practical accomplishment will be a tool that allows distributed systems management regardless of a target system's platform. Our research accomplishments will include identifying the CIM classes that are advantageous for system management and determining which platform is best suited for working with managed objects of these classes.
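As a concrete illustration of gathering managed objects of a CIM class, the sketch below uses the pywbem client library to enumerate instances of CIM_OperatingSystem from a WBEM endpoint. The library choice, host, credentials, and the selected properties are assumptions for illustration and are not drawn from the thesis.

```python
# Hedged sketch (not the thesis tool): enumerate managed objects of one candidate
# CIM class over WBEM using pywbem. Endpoint and credentials are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://cimserver.example.org",     # hypothetical CIM/WBEM endpoint
    creds=("admin", "password"),         # placeholder credentials
    default_namespace="root/cimv2",
)

for inst in conn.EnumerateInstances("CIM_OperatingSystem"):
    # Each instance is a managed object; absent or None-valued properties hint at
    # an incomplete CIM implementation that would need a custom extension.
    print(inst.path)
    for name in ("CSName", "Version", "LastBootUpTime"):
        prop = inst.properties.get(name)         # None if the provider omits it
        print(f"  {name} = {prop.value if prop is not None else '<not implemented>'}")
```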
Contributors: Trang, Patrick D (Author) / Ahn, Gail-Joon (Thesis director) / Chen, Yinong (Committee member) / Wilson, Adrian (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
We discuss processes involved in user-centric security design, including the synthesis of goals based on security and usability tasks. We suggest the use of implicit security and the facilitation of secure user actions. We propose a process for evaluating usability flaws by treating them as security threats and adapting traditional HCI methods, and we discuss how to correct these flaws once they are discovered. Finally, we discuss the Usable Security Development Model for developing usable secure systems.
Contributors: Jorgensen, Jan Drake (Author) / Ahn, Gail-Joon (Thesis director) / VanLehn, Kurt (Committee member) / Wilkerson, Kelly (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
A distributed sensor network (DSN) is a set of spatially scattered intelligent sensors designed to obtain data across an environment. DSNs are becoming a standard architecture for collecting data over a large area. To properly exploit having multiple sensors, the nodal data must be registered across the network, and one major problem worth investigating is ensuring the integrity of the data received, such as time synchronization.

Consider a group of matched-filter sensors. Each sensor collects the same data and compares the data collected to a known signal. In an ideal world, each sensor would collect the data without offsets or noise in the system. Two models follow from this. First, each sensor could make a decision on its own, and the decisions could be collected at a "fusion center," which decides whether the signal is present based on the number of true-or-false decisions the sensors have made. Alternatively, each sensor could relay the data it collects to the fusion center, which then makes a decision based on all of the data it receives. Since the fusion center has more information in the latter case, as opposed to the former case where it only receives a true or false from each sensor, one would expect the latter model to perform better; in fact, this is the gold standard for detection across a DSN. However, random noise corrupts data collection, especially among sensors in a DSN: each sensor does not collect the data in exactly the same way or with the same precision. We classify these imperfections in data collection as offsets, specifically the offset present in the data collected by one sensor with respect to the rest of the sensors in the network. Reconsidering the two models above, we can either implement them naively or attempt to estimate the offsets between the sensors and compensate for them. One would expect that estimating the offsets within the DSN provides better overall results than not estimating them.

This thesis is structured as follows. First, there is an extensive investigation into detection theory and the impact that different types of offsets have on sensor networks. Following the theory, an algorithm for estimating the data offsets is proposed to correct for them. Next, Monte Carlo simulation results show the impact of data offsets on sensor performance in comparison to a sensor network without offsets. The algorithm is then implemented, and further experiments demonstrate sensor performance with offset detection.
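To make the comparison between decision fusion and data fusion concrete, the following Monte Carlo sketch simulates a small DSN of matched-filter sensors with unknown per-sensor timing offsets; the signal model, offset range, and thresholds are illustrative assumptions, not the thesis's experimental setup.

```python
# Illustrative Monte Carlo sketch: majority-vote (decision) fusion vs. centralized
# (data) fusion when each sensor's data carries an unknown timing offset.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples, n_trials = 5, 64, 2000
s = np.sin(2 * np.pi * 3 * np.arange(n_samples) / n_samples)  # known signal (assumed)
sigma = 1.0                                                    # noise std (assumed)
energy = s @ s

def trial(signal_present):
    shifts = rng.integers(-3, 4, n_sensors)          # unknown per-sensor timing offsets
    x = rng.normal(0.0, sigma, (n_sensors, n_samples))
    if signal_present:
        x += np.stack([np.roll(s, int(k)) for k in shifts])
    stats = x @ s                                    # matched filter assumes zero offset
    local = stats > 0.5 * energy                     # local hard decisions (ad hoc threshold)
    hard = local.sum() > n_sensors // 2              # majority-vote fusion of decisions
    soft = stats.sum() > 0.5 * n_sensors * energy    # fusion of the raw statistics
    return hard, soft

pd = np.mean([trial(True) for _ in range(n_trials)], axis=0)
pfa = np.mean([trial(False) for _ in range(n_trials)], axis=0)
print(f"decision fusion: Pd={pd[0]:.2f}, Pfa={pfa[0]:.2f}")
print(f"data fusion:     Pd={pd[1]:.2f}, Pfa={pfa[1]:.2f}")
```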
Contributors: Monardo, Vincent James (Author) / Cochran, Douglas (Thesis director) / Kierstead, Hal (Committee member) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The Fourier representation of a signal or image is equivalent to its native representation in the sense that the signal or image can be reconstructed exactly from its Fourier transform. The Fourier transform is generally complex-valued, and each value of the Fourier spectrum thus possesses both magnitude and phase. Degradation of signals and images when Fourier phase information is lost or corrupted has been studied extensively in the signal processing research literature, as has reconstruction of signals and images using only Fourier magnitude information. This thesis focuses on the case of images, where it examines the visual effect of quantifiable levels of Fourier phase loss and, in particular, studies the merits of introducing varying degrees of phase information in a classical iterative algorithm for reconstructing an image from its Fourier magnitude.
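The sketch below illustrates the kind of experiment described: a classical error-reduction iteration that recovers an image from its Fourier magnitude while a chosen fraction of true phase samples is imposed as side information. The test image, iteration count, and phase-knowledge fraction are illustrative assumptions, not the thesis's data or code.

```python
# Hedged sketch: error-reduction iteration from Fourier magnitude with a fraction
# of the true Fourier phase supplied as side information.
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                 # stand-in for a real test image
F = np.fft.fft2(img)
mag, true_phase = np.abs(F), np.angle(F)

frac_known = 0.25                          # fraction of phases assumed known
known = rng.random(F.shape) < frac_known

x = rng.random(img.shape)                  # random initial estimate
for _ in range(200):
    X = np.fft.fft2(x)
    phase = np.angle(X)
    phase[known] = true_phase[known]       # impose the known phase samples
    x = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
    x = np.clip(x, 0, None)                # object-domain constraint: non-negativity

err = np.linalg.norm(x - img) / np.linalg.norm(img)
print(f"relative reconstruction error with {frac_known:.0%} phase known: {err:.3f}")
```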

Contributors: Shi, Yiting (Author) / Cochran, Douglas (Thesis director) / Jones, Scott (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Cloud computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces similar in spirit to existing grid and HPC resource management and programming systems. These types of systems offer a new programming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are proprietary and rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this research, the Xen Server Management API is employed to build a framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS): systems that give users the ability to run and control entire virtual machine instances deployed across a variety of physical resources. The goal of this research is to develop a cloud-based resource and service sharing platform for computer network security education, a.k.a. the Virtual Lab.
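A minimal sketch of the kind of IaaS primitive such a framework wraps, using the XenAPI Python bindings to enumerate and start virtual machines. The host, credentials, and start policy are placeholders; this is not the thesis implementation.

```python
# Hedged sketch: list VMs on a XenServer host and start any halted ones.
# Requires the XenAPI Python package; host URL and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.org")
session.xenapi.login_with_password("root", "password")
try:
    for vm in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(vm)
        if rec["is_a_template"] or rec["is_control_domain"]:
            continue                       # skip templates and dom0
        print(rec["name_label"], rec["power_state"])
        if rec["power_state"] == "Halted":
            # VM.start(vm, start_paused, force)
            session.xenapi.VM.start(vm, False, False)
finally:
    session.xenapi.session.logout()
```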
Contributors: Kadne, Aniruddha (Author) / Huang, Dijiang (Thesis advisor) / Tsai, Wei-Tek (Committee member) / Ahn, Gail-Joon (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This dissertation builds a clear understanding of the role of information in wireless networks and devises adaptive strategies to optimize the overall performance. The meaning of information ranges from channel/network states to the structure of the signal itself. Under the common thread of characterizing the role of information, this dissertation investigates opportunistic scheduling, relaying, and multicast in wireless networks.

To assess the role of channel state information, the problem of distributed opportunistic scheduling (DOS) with incomplete information is considered for ad-hoc networks in which many links contend for the same channel using random access. The objective is to maximize the system throughput. In practice, link state information is noisy and may result in throughput degradation. Refining the state information by additional probing can therefore improve the throughput, but at the cost of the probing overhead. Capitalizing on optimal stopping theory, the optimal scheduling policy is shown to be threshold-based and is characterized by either one or two thresholds, depending on network settings.

To understand the benefits of side information in cooperative relaying scenarios, a basic model is explored for two-hop transmissions of two information flows which interfere with each other. While the first hop is a classical interference channel, the second hop can be treated as an interference channel with transmitter side information. Various cooperative relaying strategies are developed to enhance the achievable rate. In another context, a simple sensor network is considered, where a sensor node acts as a relay and aids the fusion center in detecting an event. Two relaying schemes are considered: analog relaying and digital relaying. Sufficient conditions are provided for the optimality of analog relaying over digital relaying in this network.

To illustrate the role of information about the signal structure in joint source-channel coding, multicast of compressible signals over lossy channels is studied. The focus is on the network outage from the perspective of signal distortion across all receivers. Based on extreme value theory, the network outage is characterized in terms of key parameters. A new method using subblock network coding is devised, which prioritizes resource allocation based on the signal's information structure.
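For intuition about the threshold structure of such a policy, the sketch below simulates a pure-threshold stopping rule in which each contention/probe costs time tau and a transmission occupies time T at the probed rate, then sweeps the threshold to locate the throughput-maximizing value. The rate distribution and timing parameters are illustrative assumptions rather than the dissertation's model.

```python
# Hedged sketch: throughput of a threshold-based stopping rule for opportunistic
# scheduling, estimated by simulation and a renewal-reward argument.
import numpy as np

rng = np.random.default_rng(2)
tau, T = 0.1, 1.0                       # probe/contention cost and transmission time

def draw_rates(n):
    # Shannon rate over a Rayleigh-faded link with unit average SNR (an assumption).
    return np.log2(1.0 + rng.exponential(1.0, n))

def throughput(threshold, n_probes=200_000):
    r = draw_rates(n_probes)
    stop = r >= threshold               # transmit only when the probed rate clears the bar
    total_time = n_probes * tau + stop.sum() * T
    total_bits = r[stop].sum() * T
    return total_bits / total_time      # renewal-reward average throughput

grid = np.linspace(0.0, 4.0, 81)
rates = [throughput(lam) for lam in grid]
best = grid[int(np.argmax(rates))]
print(f"best threshold ~ {best:.2f}: throughput {max(rates):.3f} "
      f"vs {throughput(0.0):.3f} for always transmitting")
```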
Contributors: Paataguppe Suryanarayan Bhat, Chandrashekhar Thejaswi (Author) / Zhang, Junshan (Thesis advisor) / Cochran, Douglas (Committee member) / Duman, Tolga (Committee member) / Hui, Yu (Committee member) / Taylor, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Foveal sensors employ a small region of high acuity (the foveal region) surrounded by a periphery of lesser acuity. Consequently, the output map that describes their sensory acuity is nonlinear, rendering the vast corpus of linear system theory not immediately applicable to the state estimation of a target tracked by such a sensor. This thesis treats the adaptation of the Kalman filter, an iterative optimal estimator for linear-Gaussian dynamical systems, to enable its application to the nonlinear problem of foveal sensing. Results of simulations conducted to evaluate the effectiveness of this algorithm in tracking a target are presented, culminating in successful tracking for motion in two dimensions.
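One simple way to adapt the Kalman filter to a foveal output map, sketched below under assumed dynamics and an assumed acuity model, is to let the measurement-noise covariance grow with distance from the foveal center. This is an illustrative stand-in, not the thesis's algorithm.

```python
# Hedged sketch: Kalman filter for a 2-D constant-velocity target whose measurement
# noise is inflated outside a high-acuity foveal radius (all values assumed).
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q = 0.01 * np.eye(4)
fovea = np.zeros(2)                        # fovea pointed at the origin

def acuity_noise(pos):
    # Noise variance grows quadratically outside an assumed foveal radius of 1.
    r = np.linalg.norm(pos - fovea)
    return (0.05 + 0.5 * max(0.0, r - 1.0) ** 2) * np.eye(2)

x_true = np.array([5.0, -3.0, -0.2, 0.15])
x, P = np.zeros(4), 10.0 * np.eye(4)
for _ in range(50):
    x_true = F @ x_true
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), acuity_noise(H @ x_true))
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the acuity-dependent measurement-noise covariance
    R = acuity_noise(z)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("final position error:", np.linalg.norm((x - x_true)[:2]))
```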
Created: 2015-05
Description
Radar systems seek to detect targets in some search space (e.g., a volume of airspace or an area on the ground surface) by actively illuminating the environment with radio waves. This illumination yields a return from targets of interest as well as from highly reflective terrain features that may not be of interest (called clutter). Data-adaptive algorithms are therefore employed to provide robust detection of targets against a background of clutter and other forms of interference. The adaptive matched filter (AMF) is an effective, well-established detection statistic whose exact probability density function (PDF) is known under prevalent radar system model assumptions. Variations of this approach, however, lead to tests whose PDFs remain unknown or incalculable. This project studies the effectiveness of saddlepoint methods for approximating the known PDF of the clairvoyant matched filter, using MATLAB for the numerical calculations. Specifically, the approximation was used to compute tail probabilities for a range of thresholds, as well as the threshold and probability of detection for a specified probability of false alarm. These values were compared to the same quantities computed from the known exact PDF of the filter, and the comparison demonstrated high accuracy for the saddlepoint approximation. The results are encouraging and justify further study of the approximation as applied to more strained or complicated scenarios.
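The sketch below reproduces the flavor of this comparison in Python rather than MATLAB: a Lugannani-Rice saddlepoint approximation to the tail probability of a Gamma-distributed test statistic (an assumed stand-in for the clairvoyant matched filter statistic) evaluated against the exact survival function.

```python
# Hedged sketch: Lugannani-Rice saddlepoint tail approximation for a Gamma(a, 1)
# statistic, compared with the exact survival function from SciPy.
import numpy as np
from scipy import stats

def saddlepoint_tail_gamma(x, a):
    """Approximate P(X > x) for X ~ Gamma(a, scale=1); valid for x != a."""
    s_hat = 1.0 - a / x                          # solves K'(s) = x, K(s) = -a*log(1-s)
    w = np.sign(s_hat) * np.sqrt(2.0 * (x - a + a * np.log(a / x)))
    u = s_hat * np.sqrt(a / (1.0 - s_hat) ** 2)  # s_hat * sqrt(K''(s_hat))
    return stats.norm.sf(w) + stats.norm.pdf(w) * (1.0 / u - 1.0 / w)

a = 4.0                                          # shape of the assumed test statistic
thresholds = np.array([6.0, 8.0, 10.0, 14.0])
approx = saddlepoint_tail_gamma(thresholds, a)
exact = stats.gamma.sf(thresholds, a)
for t, p_hat, p in zip(thresholds, approx, exact):
    print(f"threshold {t:5.1f}: saddlepoint Pfa {p_hat:.3e}   exact Pfa {p:.3e}")
```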
Contributors: Rhoades, Rachel (Author) / Richmond, Christ (Thesis director) / Cochran, Douglas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The use of conventional weather radar in vulcanology leads to two problems: the radars often use wavelengths which are too long to detect the fine ash particles, and they cannot be field–adjusted to fit the wide variety of eruptions. Thus, to better study these geologic processes, a new radar must be developed that is easily reconfigurable to allow for flexibility and can operate at sufficiently short wavelengths.

This thesis investigates how to design a radar using a field–programmable gate array board to generate the radar signal, and process the returned signal to determine the distance and concentration of objects (in this case, ash). The purpose of using such a board lies in its reconfigurability—a design can (relatively easily) be adjusted, recompiled, and reuploaded to the hardware with none of the cost or time overhead required of a standard weather radar.

The design operates on the principle of frequency–modulated continuous–waves, in which the output signal frequency changes as a function of time. The difference in transmit and echo frequencies determines the distance of an object, while the magnitude of a particular difference frequency corresponds to concentration. Thus, by viewing a spectrum of frequency differences, one is able to see both the concentration and distances of ash from the radar.
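The sketch below simulates this FMCW principle at baseband: two assumed point reflectors produce beat frequencies proportional to range, and the FFT of the dechirped signal shows distance on one axis with peak magnitude standing in for concentration. All chirp parameters and target ranges are illustrative assumptions, not the thesis hardware design.

```python
# Hedged sketch: baseband FMCW simulation; beat frequency encodes range, peak
# magnitude encodes relative reflectivity (a stand-in for ash concentration).
import numpy as np

c = 3e8
B, T_chirp, fs = 50e6, 1e-3, 2e6            # sweep bandwidth, chirp time, sample rate
t = np.arange(0, T_chirp, 1 / fs)
slope = B / T_chirp

ranges = np.array([900.0, 2400.0])          # hypothetical reflector distances (m)
amps = np.array([1.0, 0.4])                 # relative reflectivity / concentration

# Dechirped (mixed) receive signal: each target is a sinusoid at f_b = slope * 2R/c.
beat = sum(a * np.cos(2 * np.pi * slope * (2 * r / c) * t) for a, r in zip(amps, ranges))
beat += 0.1 * np.random.default_rng(4).standard_normal(t.size)

spec = np.abs(np.fft.rfft(beat * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
range_axis = freqs * c / (2 * slope)        # convert beat frequency back to distance

for r in ranges:
    k = int(np.argmin(np.abs(range_axis - r)))
    print(f"target near {r:6.0f} m -> spectral peak at {range_axis[k]:6.0f} m, "
          f"magnitude {spec[k]:.1f}")
```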

The transmit signal data was created in MATLAB®, while the radar was designed with MATLAB® Simulink® using hardware IP blocks and implemented on the ROACH2 signal processing hardware, which utilizes a Xilinx® Virtex®–6 chip. The output is read from a computer linked to the hardware through Ethernet, using a Python™ script. Testing revealed minor flaws due to the usage of lower–grade components in the prototype. However, the functionality of the proposed radar design was proven, making this approach to radar a promising path for modern vulcanology.
Contributors: Lee, Byeong Mok (Co-author) / Xi, Andrew Jinchi (Co-author) / Groppi, Christopher (Thesis director) / Mauskopf, Philip (Committee member) / Baumann, Alicia (Committee member) / Cochran, Douglas (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The use of conventional weather radar in vulcanology leads to two problems: the radars often use wavelengths which are too long to detect the fine ash particles, and they cannot be field–adjusted to fit the wide variety of eruptions. Thus, to better study these geologic processes, a new radar must be developed that is easily reconfigurable to allow for flexibility and can operate at sufficiently short wavelengths.

This thesis investigates how to design a radar using a field–programmable gate array board to generate the radar signal, and process the returned signal to determine the distance and concentration of objects (in this case, ash). The purpose of using such a board lies in its reconfigurability—a design can (relatively easily) be adjusted, recompiled, and reuploaded to the hardware with none of the cost or time overhead required of a standard weather radar.

The design operates on the principle of frequency–modulated continuous–waves, in which the output signal frequency changes as a function of time. The difference in transmit and echo frequencies determines the distance of an object, while the magnitude of a particular difference frequency corresponds to concentration. Thus, by viewing a spectrum of frequency differences, one is able to see both the concentration and distances of ash from the radar.

The transmit signal data was created in MATLAB®, while the radar was designed with MATLAB® Simulink® using hardware IP blocks and implemented on the ROACH2 signal processing hardware, which utilizes a Xilinx® Virtex®–6 chip. The output is read from a computer linked to the hardware through Ethernet, using a Python™ script. Testing revealed minor flaws due to the usage of lower–grade components in the prototype. However, the functionality of the proposed radar design was proven, making this approach to radar a promising path for modern vulcanology.
Contributors: Xi, Andrew Jinchi (Co-author) / Lee, Matthew Byeongmok (Co-author) / Groppi, Christopher (Thesis director) / Mauskopf, Philip (Committee member) / Cochran, Douglas (Committee member) / Baumann, Alicia (Committee member) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05