Matching Items (25)

Description
The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection on images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibility of detecting edges from the phase of the spectral data alone, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, also allowing for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector which is shown to be resilient under noisy, intermittent phase data. Lastly, a thresholding technique is applied to give an explicit enhanced edge detector ready to be used. Examples throughout are demonstrated on both signals and images.
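As background for this abstract, the sketch below illustrates the standard concentration factor method (using both magnitude and phase of the Fourier data), which the phase-only detector described above builds on. It is a minimal NumPy illustration assuming a polynomial concentration factor; the function names and the example signal are illustrative, not drawn from the thesis.

```python
import numpy as np

def concentration_edge_detect(f_hat, x, p=1):
    """Approximate the jump function of a 2*pi-periodic signal from its
    Fourier coefficients f_hat[k], k = -N..N, using a polynomial
    concentration factor sigma(eta) = p * pi * eta**p (one common choice;
    other admissible factors exist)."""
    N = (len(f_hat) - 1) // 2
    k = np.arange(-N, N + 1)
    sigma = p * np.pi * (np.abs(k) / N) ** p
    # S_N[f](x) = i * sum_k sgn(k) * sigma(|k|/N) * f_hat[k] * exp(i k x)
    return np.real(1j * np.exp(1j * np.outer(x, k)) @ (np.sign(k) * sigma * f_hat))

# Illustrative example: unit step on [0, pi), zero elsewhere (jump +1 at x = 0).
N = 64
k = np.arange(-N, N + 1)
f_hat = np.zeros(2 * N + 1, dtype=complex)
odd = (k % 2 != 0)
f_hat[odd] = 1.0 / (1j * np.pi * k[odd])
f_hat[k == 0] = 0.5
x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
jump = concentration_edge_detect(f_hat, x)   # peaks near x = 0 and x = +/- pi
```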
Contributors: Reynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
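A minimal NumPy sketch of the idea described above, assuming Tikhonov regularization with the identity operator and using generalized cross-validation (GCV) as one representative parameter-choice criterion; the function names, the choice of GCV, and the row-subsampling scheme are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)                  # filtered inverse singular values
    return Vt.T @ (filt * (U.T @ b))

def gcv(A, b, lam):
    """GCV score for a given lambda (constant factors do not affect the minimizer)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    phi = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
    resid = np.sum(((1 - phi) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
    return resid / (len(b) - np.sum(phi)) ** 2

def estimate_lambda_downsampled(A, b, step=4, grid=np.logspace(-6, 1, 50)):
    """Estimate lambda on a row-subsampled problem, for reuse on the full one."""
    As, bs = A[::step, :], b[::step]
    scores = [gcv(As, bs, lam) for lam in grid]
    return grid[int(np.argmin(scores))]
```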
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The world of a hearing-impaired person is much different from that of somebody capable of discerning different frequencies and magnitudes of sound waves via their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they're facing. The purpose of this project was to even the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created. This is a system that takes a surround sound input and illuminates LEDs around the periphery of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
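Purely as an illustration of how such a direction-to-light mapping might look, the sketch below converts one frame of multichannel surround audio into LED brightness levels based on per-channel loudness and nominal channel directions. The channel layout, LED count, and weighting scheme are hypothetical assumptions, not the design described in the thesis (which also encodes frequency).

```python
import numpy as np

# Hypothetical channel layout (degrees, 0 = straight ahead) for a 5-channel
# surround mix, and 8 LEDs spaced evenly around the glasses' rim.
CHANNEL_ANGLES = np.array([0.0, -30.0, 30.0, -110.0, 110.0])   # C, L, R, Ls, Rs
LED_ANGLES = np.linspace(-180.0, 180.0, 8, endpoint=False)

def frame_to_led_levels(frame, width=60.0):
    """Map one multichannel audio frame (samples x channels) to LED
    brightness levels in [0, 1] based on per-channel RMS amplitude."""
    rms = np.sqrt(np.mean(frame**2, axis=0))                    # loudness per channel
    rms = rms / (rms.max() + 1e-12)                             # normalize
    # Spread each channel's energy onto nearby LEDs with a Gaussian falloff.
    diff = (LED_ANGLES[:, None] - CHANNEL_ANGLES[None, :] + 180.0) % 360.0 - 180.0
    weights = np.exp(-0.5 * (diff / width) ** 2)
    levels = weights @ rms
    return np.clip(levels / (levels.max() + 1e-12), 0.0, 1.0)
```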
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
Foveal sensors employ a small region of high acuity (the foveal region) surrounded by a periphery of lesser acuity. Consequently, the output map that describes their sensory acuity is nonlinear, rendering the vast corpus of linear system theory immediately inapplicable to the state estimation of a target being tracked by such a sensor. This thesis treats the adaptation of the Kalman filter, an iterative optimal estimator for linear-Gaussian dynamical systems, to enable its application to the nonlinear problem of foveal sensing. Results of simulations conducted to evaluate the effectiveness of this algorithm in tracking a target are presented, culminating in successful tracking of motion in two dimensions.
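One standard way to adapt the Kalman filter to a nonlinear measurement map is the extended Kalman filter, which linearizes the map at the predicted state; the sketch below shows that generic approach with a hypothetical foveation-like measurement function. This is an illustration under assumed models, not necessarily the specific adaptation developed in the thesis.

```python
import numpy as np

def ekf_step(x, P, z, F, Q, h, H_jac, R):
    """One predict/update cycle of an extended Kalman filter: linear state
    transition F, nonlinear measurement map h linearized via its Jacobian."""
    x_pred = F @ x                           # predict state
    P_pred = F @ P @ F.T + Q                 # predict covariance
    H = H_jac(x_pred)                        # linearize measurement map
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))     # update with innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical foveal measurement map: position is compressed logarithmically
# with eccentricity from the gaze direction (acuity falls off in the periphery).
def h(x, gaze=0.0):
    e = x[0] - gaze
    return np.array([np.sign(e) * np.log1p(abs(e))])

def H_jac(x, gaze=0.0):
    H = np.zeros((1, len(x)))
    H[0, 0] = 1.0 / (1.0 + abs(x[0] - gaze))
    return H
```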
Created: 2015-05
Description
China's rapid growth was fueled by an unsustainable strategy: trading the environment for GDP. Air pollution has reached dangerous levels and has taken a serious toll on China's economic progress. The World Bank estimates that in 2013, China lost about 10% of its GDP to pollution. As the cost of burning fossil fuels and public dismay continue to mount, the government is taking steps to reduce carbon emissions and appease the people. The rapidly growing nuclear energy program is one of the energy solutions that China is using to address carbon emissions. While China has built a respectable amount of renewable energy capacity (such as wind and solar), much of that capacity is not connected to the power grid. Nuclear energy, on the other hand, provides a low-emission alternative that operates independently of weather and sunlight. However, the accelerated pace of reactor construction in recent years presents challenges for the safe operation of nuclear energy in China. It is in China's (and the world's) best interest that a repeat of the Fukushima accident does not occur. In the wake of the Fukushima nuclear accident, public support for nuclear energy in China took a serious hit. A major domestic nuclear accident would be detrimental to the development of nuclear energy in China and diminish the government's reliability in the eyes of the people. This paper will outline risk factors such as regulatory efforts, legal framework, technological issues, spent fuel disposal, and public perception, and provide suggestions to decrease the risk of a major nuclear accident.
Contributors: Liu, Haoran (Author) / Kelman, Jonathan (Thesis director) / Cochran, Douglas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The solution of the linear system of equations $Ax\approx b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel is considered. The solution by means of Tikhonov regularization, in which $x$ is found as the minimizer of $J(x)=\|Ax-b\|_2^2 + \lambda^2 \|Lx\|_2^2$, introduces the unknown regularization parameter $\lambda$, which trades off the fidelity of the data fit against the smoothing norm of the solution, the latter determined by the choice of $L$. The Generalized Discrepancy Principle (GDP) and Unbiased Predictive Risk Estimator (UPRE) are methods for finding $\lambda$ given prior conditions on the noise in the measurements $b$. Here we consider the case of $L=I$, and hence use the relationship between the singular value expansion and the singular value decomposition for square integrable kernels to prove that the GDP and UPRE estimates yield a convergent sequence for $\lambda$ with increasing problem size. Hence the estimate of $\lambda$ for a large problem may be found by down-sampling to a smaller problem, or to a set of smaller problems, and applying these estimators more efficiently on the smaller problems. In consequence, the large-scale problem can be solved immediately in a single step with the parameter found from the down-sampled problem(s).
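A minimal NumPy sketch, assuming the $L=I$ case above with known noise level $\sigma$: both criteria can be written in terms of the SVD of the discretized operator. The function names, the use of the plain (rather than generalized) discrepancy principle, and the grid/bracketing details are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def upre(lam, s, beta, sigma, m):
    """Unbiased Predictive Risk Estimator for Tikhonov with L = I, in terms
    of singular values s and projected data beta = U^T b (assumes the data
    lie in the subspace spanned by U)."""
    phi = s**2 / (s**2 + lam**2)                     # Tikhonov filter factors
    resid = np.sum(((1 - phi) * beta) ** 2)          # predictive residual
    return resid + 2 * sigma**2 * np.sum(phi) - m * sigma**2

def discrepancy_gap(lam, s, beta, sigma, m):
    """Zero of this gap gives the (plain) discrepancy-principle parameter:
    ||A x_lam - b||^2 = m * sigma^2."""
    phi = s**2 / (s**2 + lam**2)
    return np.sum(((1 - phi) * beta) ** 2) - m * sigma**2

# Usage sketch, with A, b, sigma assumed given:
# U, s, Vt = np.linalg.svd(A, full_matrices=False)
# beta = U.T @ b
# grid = np.logspace(-8, 2, 200)
# lam_upre = grid[np.argmin([upre(l, s, beta, sigma, len(b)) for l in grid])]
# lam_dp = brentq(discrepancy_gap, 1e-10, 1e3, args=(s, beta, sigma, len(b)))
```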
Contributors: Horst, Michael Jacob (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Wang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, which utilizes iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, which converts the problem to one of trace minimization, allowing convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal constructed starting from such an eigenvector can yield a greater return.
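A minimal NumPy sketch of the first of these, error-reduction-style alternating projections between the measured Fourier magnitudes and a known-support constraint in the signal domain; the support and nonnegativity constraints and the example signal are illustrative assumptions (recovery is only up to the usual trivial ambiguities).

```python
import numpy as np

def alternating_projections(mag, support, n_iter=500, seed=0):
    """Recover a signal from the magnitude of its Fourier transform by
    alternating between the measured magnitudes (Fourier domain) and a
    known-support, nonnegativity constraint (signal domain)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(mag.shape) * support           # random start on the support
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))                 # impose measured magnitudes
        x = np.real(np.fft.ifft(X))
        x = x * support                                     # impose support constraint
        x[x < 0] = 0.0                                      # optional nonnegativity
    return x

# Usage sketch: a nonnegative signal supported on its first quarter.
n = 256
true = np.zeros(n); true[:n // 4] = np.random.rand(n // 4)
mag = np.abs(np.fft.fft(true))
support = np.zeros(n); support[:n // 4] = 1.0
est = alternating_projections(mag, support)
```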
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Passive radar can be used to reduce the demand for radio frequency spectrum bandwidth. This paper will explain how a MATLAB simulation tool was developed to analyze the feasibility of using passive radar with digitally modulated communication signals. The first stage of the simulation creates a binary phase-shift keying (BPSK) signal, quadrature phase-shift keying (QPSK) signal, or digital terrestrial television (DTTV) signal. A scenario is then created using user-defined parameters that simulates reception of the original signal on two different channels, a reference channel and a surveillance channel. The signal on the surveillance channel is delayed and Doppler shifted according to a point-target scattering profile. An ambiguity function detector is implemented to identify the time delays and Doppler shifts associated with reflections off the simulated targets. The results of an example are included in this report to demonstrate the simulation capabilities.
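The thesis's tool is written in MATLAB; the sketch below shows the same cross-ambiguity idea in NumPy for illustration. The circular-shift delay handling, the random-phase stand-in waveform, and the unscaled Doppler axis are simplifying assumptions.

```python
import numpy as np

def cross_ambiguity(ref, surv, max_delay, n_doppler=None):
    """Cross-ambiguity surface between the reference and surveillance
    channels: magnitude peaks indicate the delay (range) and Doppler bin
    (velocity) of echoes present in the surveillance signal."""
    n_doppler = n_doppler or len(ref)
    caf = np.zeros((max_delay, n_doppler), dtype=complex)
    for d in range(max_delay):
        # Circular shift approximates the true (linear) delay for d << len(ref).
        prod = surv * np.conj(np.roll(ref, d))
        caf[d, :] = np.fft.fftshift(np.fft.fft(prod, n_doppler))
    return np.abs(caf)

# Usage sketch: a single point target at delay 25 samples and a small Doppler.
n = 4096
ref = np.exp(1j * 2 * np.pi * np.random.rand(n))   # stand-in for a BPSK/QPSK/DTTV waveform
surv = 0.5 * np.roll(ref, 25) * np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))
surface = cross_ambiguity(ref, surv, max_delay=64)  # peak at delay 25, Doppler bin for 0.01
```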
Contributors: Scarborough, Gillian Donnelly (Author) / Cochran, Douglas (Thesis director) / Berisha, Visar (Committee member) / Wang, Chao (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
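As background, the generalized coherence estimate mentioned above can be written directly in terms of the Gram matrix of the channel data; a minimal NumPy sketch follows. The maximum-entropy completion of the missing pairwise entries for incompletely connected networks (the thesis's contribution) is not shown, and the function name and data layout are illustrative.

```python
import numpy as np

def generalized_coherence(X):
    """Generalized coherence (GC) estimate for M channels (rows of X), each
    holding N complex samples: gamma^2 = 1 - det(G) / prod(diag(G)), where
    G = X X^H collects all pairwise inner products between channels."""
    G = X @ X.conj().T
    return 1.0 - np.real(np.linalg.det(G)) / np.prod(np.real(np.diag(G)))

# Usage sketch: a common signal buried in independent noise on 4 channels
# yields a noticeably larger GC value than noise-only data.
rng = np.random.default_rng(0)
M, N = 4, 256
noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
signal = rng.standard_normal(N) + 1j * rng.standard_normal(N)
gc_h0 = generalized_coherence(noise)                 # channels independent
gc_h1 = generalized_coherence(noise + signal)        # common component present
```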
Contributors: Crider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
In recent years, networked systems have become prevalent in communications, computing, sensing, and many other areas. In a network composed of spatially distributed agents, network-wide synchronization of information about the physical environment and the network configuration must be maintained using measurements collected locally by the agents. Registration is a process for connecting the coordinate frames of multiple sets of data. This poses numerous challenges, particularly due to the availability of direct communication only between neighboring agents in the network. These challenges are exacerbated by uncertainty in the measurements and by imperfect communication links. This research explored statistically based registration in a sensor network. The approach developed exploits measurements of offsets formed as differences of state values between pairs of agents that share a link in the network graph. It takes into account that the true offsets around any closed cycle in the network graph must sum to zero.
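A minimal NumPy sketch of the kind of cycle-consistent offset estimation described above: least-squares fitting of node states to noisy pairwise offsets via the graph incidence matrix, whose fitted offsets sum to zero around every cycle by construction. The scalar-state setting, function names, and example graph are illustrative assumptions, not the thesis's exact statistical formulation.

```python
import numpy as np

def register_offsets(n_nodes, edges, z):
    """Least-squares registration of scalar node states from noisy pairwise
    offset measurements z[e] ~ x[j] - x[i] for each directed edge e = (i, j).
    The fitted offsets B @ x_hat sum to zero around every cycle."""
    B = np.zeros((len(edges), n_nodes))
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = -1.0, 1.0               # incidence matrix row
    x_hat, *_ = np.linalg.lstsq(B, z, rcond=None)  # defined up to a constant shift
    x_hat -= x_hat[0]                              # anchor node 0 at zero
    return x_hat, B @ x_hat                        # node states and consistent offsets

# Example: a 3-node cycle with slightly inconsistent measurements.
edges = [(0, 1), (1, 2), (2, 0)]
z = np.array([1.0, 2.1, -2.9])                     # true offsets would sum to 0 around the cycle
x_hat, z_fit = register_offsets(3, edges, z)
```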
Contributors: Phuong, Shih-Ling (Author) / Cochran, Douglas (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2014-05