Matching Items (82)
Description
Foveal sensors employ a small region of high acuity (the foveal region) surrounded by a periphery of lesser acuity. Consequently, the output map that describes their sensory acuity is nonlinear, rendering the vast corpus of linear system theory immediately inapplicable to the state estimation of a target being tracked by such a sensor. This thesis treats the adaptation of the Kalman filter, an iterative optimal estimator for linear-Gaussian dynamical systems, to enable its application to the nonlinear problem of foveal sensing. Results of simulations conducted to evaluate the effectiveness of this algorithm in tracking a target are presented, culminating in successful tracking for motion in two dimensions.
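The abstract does not reproduce the thesis's filter equations, so the following is only a minimal sketch of the underlying idea under assumed models: a constant-velocity target and a hypothetical nonlinear output map `h` standing in for a foveal acuity profile, handled by linearizing about the current estimate (an extended-Kalman-filter-style adaptation).

```python
import numpy as np

# Hypothetical constant-velocity model; the thesis's actual foveal output
# map is not given in the abstract, so h below is a stand-in nonlinear
# acuity profile that is sharpest at the fovea (position 0).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: position, velocity
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.1]])                   # measurement noise covariance

def h(x):
    # Stand-in nonlinear measurement map for a foveal sensor
    return np.array([x[0] / (1.0 + x[0] ** 2)])

def H_jac(x):
    # Jacobian of h, used to linearize about the current estimate
    p = x[0]
    return np.array([[(1.0 - p ** 2) / (1.0 + p ** 2) ** 2, 0.0]])

def ekf_step(x, P, z):
    x_pred = F @ x                       # predict
    P_pred = F @ P @ F.T + Q
    H = H_jac(x_pred)                    # update with linearized measurement
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

Each call to `ekf_step` performs one predict/update iteration; the linearization is what lets the Kalman recursion be applied to the nonlinear foveal output map.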
Created2015-05
Description
China's rapid growth was fueled by an unsustainable bargain: trading the environment for GDP. Air pollution has reached dangerous levels and has taken a serious toll on China's economic progress. The World Bank estimates that in 2013, China lost about 10% of its GDP to pollution. As the cost of burning fossil fuels and public dismay continue to mount, the government is taking steps to reduce carbon emissions and appease the people. The rapidly growing nuclear energy program is one of the energy solutions that China is using to address carbon emissions. While China has built a respectable amount of renewable energy capacity (such as wind and solar), much of that capacity is not connected to the power grid. Nuclear energy, on the other hand, provides a low-emission alternative that operates independently of weather and sunlight. However, the accelerated pace of reactor construction in recent years presents challenges for the safe operation of nuclear energy in China. It is in China's (and the world's) best interest that a repeat of the Fukushima accident does not occur. In the wake of the Fukushima nuclear accident, public support for nuclear energy in China took a serious hit. A major domestic nuclear accident would be detrimental to the development of nuclear energy in China and diminish the government's reliability in the eyes of the people. This paper will outline risk factors, including regulatory efforts, legal framework, technological issues, spent fuel disposal, and public perception, and provide suggestions to decrease the risk of a major nuclear accident.
ContributorsLiu, Haoran (Author) / Kelman, Jonathan (Thesis director) / Cochran, Douglas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The solution of the linear system of equations $Ax\approx b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel is considered. The solution by means of Tikhonov regularization, in which $x$ is found as the minimizer of $J(x)=\{ \|Ax -b\|_2^2 + \lambda^2 \|L x\|_2^2\}$, introduces the unknown regularization parameter $\lambda$, which trades off the fidelity of the data fit against the smoothing norm of the solution, the latter determined by the choice of $L$. The Generalized Discrepancy Principle (GDP) and Unbiased Predictive Risk Estimator (UPRE) are methods for finding $\lambda$ given prior conditions on the noise in the measurements $b$. Here we consider the case of $L=I$, and hence use the relationship between the singular value expansion and the singular value decomposition for square integrable kernels to prove that the GDP and UPRE estimates yield a convergent sequence for $\lambda$ with increasing problem size. Hence the estimate of $\lambda$ for a large problem may be found by down-sampling to a smaller problem, or to a set of smaller problems, and applying these estimators more efficiently on the smaller problems. In consequence, the large-scale problem can be solved immediately in a single step with the parameter found from the down-sampled problem(s).
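As a concrete illustration of the $L=I$ case, the Tikhonov minimizer can be written in terms of the SVD of $A$ with filter factors $s_i^2/(s_i^2+\lambda^2)$. The sketch below assumes a small dense matrix; it shows only the regularized solve, not the down-sampling procedure or the GDP/UPRE estimators of the thesis.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    # Tikhonov solution for L = I via the SVD of A:
    #   x = sum_i f_i (u_i^T b / s_i) v_i,  f_i = s_i^2 / (s_i^2 + lam^2)
    # Larger lam damps the small-singular-value components more strongly.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s ** 2 / (s ** 2 + lam ** 2)
    return Vt.T @ (f * (U.T @ b) / s)
```

At $\lambda = 0$ this reduces to the (pseudo)inverse solution; increasing $\lambda$ shrinks the solution norm, which is the trade-off the GDP and UPRE criteria are designed to resolve.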
ContributorsHorst, Michael Jacob (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Wang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2014-05
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, utilizing iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, converting the problem to that of trace minimization, allowing for the use of convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if instead the problem is considered under a power constraint, a unimodular signal can be constructed starting from such an eigenvector that will have a greater return.
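A minimal sketch of the first algorithm, alternating projections (in its classic error-reduction form), under assumptions not stated in the abstract: one-dimensional signals, measured Fourier magnitudes, and a known support constraint supplying the implicit regularization.

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=0):
    # Alternating projections for phase retrieval: on each iteration,
    # enforce the measured Fourier magnitudes in the frequency domain,
    # then enforce the known support in the signal domain.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(mag.shape) * support   # random feasible start
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))         # impose measured magnitudes
        x = np.real(np.fft.ifft(X)) * support      # impose support constraint
    return x
```

The recovered signal is determined only up to the usual trivial ambiguities (global sign/shift), which is why comparisons in practice are made on the Fourier-magnitude residual rather than on the signal directly.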
ContributorsJones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2014-05
Description
Passive radar can be used to reduce the demand for radio frequency spectrum bandwidth. This paper will explain how a MATLAB simulation tool was developed to analyze the feasibility of using passive radar with digitally modulated communication signals. The first stage of the simulation creates a binary phase-shift keying (BPSK) signal, quadrature phase-shift keying (QPSK) signal, or digital terrestrial television (DTTV) signal. A scenario is then created using user defined parameters that simulates reception of the original signal on two different channels, a reference channel and a surveillance channel. The signal on the surveillance channel is delayed and Doppler shifted according to a point target scattering profile. An ambiguity function detector is implemented to identify the time delays and Doppler shifts associated with reflections off of the targets created. The results of an example are included in this report to demonstrate the simulation capabilities.
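The ambiguity function detector described above can be sketched as a direct cross-ambiguity computation between the two channels. The delay/Doppler grid, circular delays, and signal model below are simulation conveniences, not the MATLAB tool's actual implementation.

```python
import numpy as np

def cross_ambiguity(ref, surv, delays, dopplers, fs):
    # Cross-ambiguity surface: correlate the surveillance channel against
    # delayed, Doppler-shifted copies of the reference channel. A peak at
    # (d, fd) indicates a target echo with that delay and Doppler shift.
    # Circular delays (np.roll) are a simulation convenience.
    n = np.arange(len(ref))
    amb = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, d in enumerate(delays):
        shifted = np.roll(ref, d)
        for j, fd in enumerate(dopplers):
            amb[i, j] = np.sum(surv * np.conj(shifted * np.exp(2j * np.pi * fd * n / fs)))
    return np.abs(amb)
```

With a surveillance channel built as a delayed, Doppler-shifted copy of the reference, the surface peaks at the true delay/Doppler cell, which is the detection principle the simulation exercises.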
ContributorsScarborough, Gillian Donnelly (Author) / Cochran, Douglas (Thesis director) / Berisha, Visar (Committee member) / Wang, Chao (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2014-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
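For a completely connected network (or a common fusion center), the GC estimate itself is straightforward to compute from the channel data records. The sketch below shows that step only, not the maximum-entropy completion of missing pairwise measurements developed in the thesis.

```python
import numpy as np

def generalized_coherence(X):
    # X: (M, N) array holding one length-N data record per channel.
    # The GC estimate is 1 - det(G), where G is the Gram matrix of the
    # unit-normalized channel vectors: near 0 for independent noise,
    # near 1 when a common signal is present across the channels.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    G = Xn @ Xn.conj().T
    return 1.0 - np.real(np.linalg.det(G))
```

Because every entry of $G$ is a pairwise inner product, the statistic requires measurements between all sensor pairs, which is precisely the limitation the maximum-entropy estimation technique is designed to remove.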
ContributorsCrider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2014-05
Description
In recent years, networked systems have become prevalent in communications, computing, sensing, and many other areas. In a network composed of spatially distributed agents, network-wide synchronization of information about the physical environment and the network configuration must be maintained using measurements collected locally by the agents. Registration is a process for connecting the coordinate frames of multiple sets of data. This poses numerous challenges, particularly because direct communication is available only between neighboring agents in the network. These challenges are exacerbated by uncertainty in the measurements and also by imperfect communication links. This research explored statistically based registration in a sensor network. The approach developed exploits measurements of offsets formed as differences of state values between pairs of agents that share a link in the network graph. It takes into account that the true offsets around any closed cycle in the network graph must sum to zero.
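A minimal illustration of the offset-based formulation, assuming scalar states: least-squares estimation against the graph's incidence matrix automatically yields fitted offsets that sum to zero around every cycle. The graph, measured offsets, and reference-node choice here are hypothetical.

```python
import numpy as np

def register(n_nodes, edges, offsets):
    # Least-squares registration: each measured offset approximates
    # x[j] - x[i] for edge (i, j) of the network graph. Node 0 is fixed
    # as the common reference frame; the fitted offsets B @ x then sum
    # to zero around every closed cycle by construction.
    B = np.zeros((len(edges), n_nodes))
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0
    x, *_ = np.linalg.lstsq(B[:, 1:], np.asarray(offsets), rcond=None)
    return np.concatenate([[0.0], x])
```

On a triangle graph with consistent measurements, the recovered states reproduce the offsets exactly; with noisy measurements, the least-squares fit distributes the cycle inconsistency across the edges.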
ContributorsPhuong, Shih-Ling (Author) / Cochran, Douglas (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2014-05
Description
In modern remote sensing, arrays of sensors, such as antennas in radio frequency (RF) systems and microphones in acoustic systems, provide a basis for estimating the direction of arrival of a narrow-band signal at the sensor array. A uniform linear array (ULA) is the most well-studied array geometry in that its performance characteristics and limitations are well known, especially for signals originating in the far field. In some instances, the geometry of an array may be perturbed by an environmental disturbance that changes its nominal geometry, as when an array is towed behind a moving vehicle. Additionally, sparse arrays have become of interest again due to recent work in co-prime arrays. These sparse arrays contain fewer elements than a ULA but maintain the array length. The effects of these alterations to a ULA are of interest. Given this motivation, theoretical and experimental (i.e., via computer simulation) processes are used to determine quantitative and qualitative effects of perturbation and sparsification on standard metrics of array performance. These metrics include main lobe gain, main lobe width, and main lobe to side lobe ratio. These effects are then juxtaposed with the performance of a ULA. When each element after the first is perturbed by a displacement drawn from a uniform distribution centered on its nominal position, both the theoretical mean and sample mean beam patterns remain similar to the beam pattern of the full array. Meanwhile, the sparsification method of retaining all the lags was found to be unnecessary: simply removing any three elements while maintaining the length of the array produces similar results. Some configurations of elements give better performance on the metrics of interest in comparison to the ULA.
These results demonstrate that a sparsified, perturbed, or sparsified and perturbed array can be used in place of a uniform linear array, depending on the application.
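The array factor computation behind such metrics can be sketched as follows. The element count, perturbation range, and removed elements below are illustrative choices, not the configurations studied in the thesis.

```python
import numpy as np

def beam_pattern(positions, angles_rad, wavelength=1.0):
    # Normalized array factor magnitude for isotropic elements located at
    # `positions` (in wavelengths) along the array axis, uniform weighting.
    k = 2 * np.pi / wavelength
    steering = np.exp(1j * k * np.outer(np.sin(angles_rad), positions))
    return np.abs(steering.sum(axis=1)) / len(positions)

angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
ula = 0.5 * np.arange(10)            # 10-element ULA, half-wavelength spacing
rng = np.random.default_rng(0)
# Perturb every element after the first by a uniform random displacement
perturbed = ula.copy()
perturbed[1:] += rng.uniform(-0.05, 0.05, 9)
# Sparsify: drop three interior elements while keeping the array length
sparse = np.delete(ula, [2, 5, 7])
```

Main lobe gain, main lobe width, and side lobe levels can all be read off the resulting patterns, allowing the perturbed and sparsified geometries to be compared directly against the nominal ULA.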
ContributorsSilbernagel, Drake Oliver (Author) / Cochran, Douglas (Thesis director) / Aberle, James (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The field of computed tomography involves reconstructing an image from lower dimensional projections. This is particularly useful for visualizing the inner structure of an object. Presented here is an imaging setup meant for use in computed tomography applications. This imaging setup relies on imaging electric fields through active interrogation. Models designed in Ansys Maxwell are used to simulate this setup and produce 2D images of an object from 1D projections to verify electric field imaging as a potential route for future computed tomography applications. The results of this thesis show reconstructed images that resemble the object being imaged using a filtered back projection method of reconstruction. This work concludes that electric field imaging is a promising option for computed tomography applications.
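A bare-bones version of filtered back projection, the reconstruction method named above: ramp-filter each 1D projection in the frequency domain, then back-project (smear) the filtered projections across the 2D image grid. The nearest-neighbor interpolation and detector-index coordinates are simplifications relative to production implementations.

```python
import numpy as np

def fbp(sinogram, angles):
    # sinogram: (n_angles, n_det) array of 1D projections of the object.
    # Step 1: ramp-filter each projection in the frequency domain.
    # Step 2: back-project each filtered projection across the image grid.
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    center = (n_det - 1) / 2.0
    xs = np.arange(n_det) - center
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, angles):
        # Detector coordinate hit by each pixel at view angle theta
        t = X * np.cos(theta) + Y * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        img += proj[idx]
    return img * np.pi / n_angles
```

For a point object at the center of the field of view, every projection is a spike at the central detector and the reconstruction peaks at the central pixel, which is the sanity check typically used for such pipelines.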
ContributorsDrummond, Zachary Daniel (Author) / Allee, David (Thesis director) / Cochran, Douglas (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
The explosive growth of data generated from different services has opened a new vein of research commonly called ``big data.'' The sheer volume of the information in this data has yielded new applications in a wide range of fields, but the difficulties inherent in processing the enormous amount of data, as well as the rate at which it is generated, also give rise to significant challenges. In particular, processing, modeling, and understanding the structure of online social networks is computationally difficult due to these challenges. The goal of this study is twofold: first to present a new networked data processing framework to model this social structure, and second to highlight the wireless networking gains possible by using this social structure.

The first part of the dissertation considers a new method for modeling social networks via probabilistic graphical models. Specifically, this new method employs the t-cherry junction tree, a recent advancement in probabilistic graphical models, to develop a compact representation and good approximation of an otherwise intractable probabilistic model. There are a number of advantages in this approach: 1) the best approximation possible via junction trees belongs to the class of t-cherry junction trees; 2) constructing a t-cherry junction tree can be largely parallelized; and 3) inference can be performed using distributed computation. To improve the quality of approximation, an algorithm to build a higher order tree gracefully from an existing one, without constructing it from scratch, is developed. This approach is applied to Twitter data containing 100,000 nodes to study the problem of recommending connections to new users.
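Constructing a full t-cherry junction tree is beyond a short sketch, but its second-order special case, the Chow-Liu tree (a maximum-weight spanning tree over pairwise mutual information), conveys the flavor of tree-structured approximation of an intractable joint distribution. The binary-variable assumption and the use of Prim's algorithm here are illustrative choices, not the dissertation's construction.

```python
import numpy as np
from itertools import combinations

def chow_liu_edges(samples):
    # Chow-Liu tree for binary variables: estimate pairwise mutual
    # information from samples, then grow a maximum-weight spanning
    # tree (Prim's algorithm). samples: (n_samples, n_vars) in {0, 1}.
    n_vars = samples.shape[1]

    def mi(i, j):
        m = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                p_a = np.mean(samples[:, i] == a)
                p_b = np.mean(samples[:, j] == b)
                if p_ab > 0:
                    m += p_ab * np.log(p_ab / (p_a * p_b))
        return m

    w = {(i, j): mi(i, j) for i, j in combinations(range(n_vars), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        best = max(((i, j) for (i, j) in w
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: w[e])
        edges.append(best)
        in_tree |= set(best)
    return edges
```

Higher-order t-cherry junction trees generalize this idea by clustering more than two variables per tree node, tightening the approximation at the cost of estimating higher-dimensional distributions.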

Next, the t-cherry junction tree framework is extended by considering the impact of estimating the distributions involved from a training data set. Understanding this impact is vital to real-world applications as distributions are not known perfectly, but rather generated from training data. First, the fidelity of the t-cherry junction tree approximation due to this estimation is quantified. Then the scaling behavior, in terms of the size of the t-cherry junction tree, is approximated to show that higher-order t-cherry junction trees, which with perfect information are higher fidelity approximations, may actually result in decreased fidelity due to the difficulties in accurately estimating higher-dimensional distributions. Finally, this part concludes by demonstrating these findings by considering a distributed detection situation in which the sensors' measurements are correlated.

Having developed a framework to model social structure in online social networks, the study then highlights two approaches for utilizing this social network data in existing wireless communication networks. The first approach is a novel application: using social networks to enhance device-to-device wireless communication. It is well known that wireless communication can be significantly improved by utilizing relays to aid in transmission. Rather than deploying dedicated relays, a system is designed in which users can relay traffic for other users if there is a shared social trust between them, e.g., they are ``friends'' on Facebook; for users that do not share social trust, a coalitional game framework motivates them to relay traffic for each other. This framework guarantees that all users improve their throughput via relaying while ensuring that each user will function as a relay only if there is a social trust relationship or, if there is no social trust, a cycle of reciprocity is established in which a set of users will agree to relay for each other. This new system shows significant throughput gain in simulated networks that utilize real-world social network traces.

The second application of social structure to wireless communication is an approach to reduce the congestion in cellular networks during peak times. This is achieved by two means: preloading and offloading. Preloading refers to the process of using social network data to predict user demand and serve some users early, before the cellular network traffic peaks. Offloading allows users that have already obtained a copy of the content to opportunistically serve other users using device-to-device communication, thus eliminating the need for some cellular traffic. These two methods work especially well in tandem, as preloading creates a base of users that can serve later users via offloading. These two processes can greatly reduce the peak cellular traffic under ideal conditions, and in a more realistic situation, the impact of uncertainty in human mobility and the social network structure is analyzed. Even with the randomness inherent in these processes, both preloading and offloading offer substantial improvement. Finally, potential difficulties in preloading multiple pieces of content simultaneously are highlighted, and a heuristic method to solve these challenges is developed.
ContributorsProulx, Brian (Author) / Zhang, Junshan (Thesis advisor) / Cochran, Douglas (Committee member) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2015