
Stereo based visual odometry

Description

The exponential rise in unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of a vehicle's position and orientation from analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO approach, stereo-based visual odometry (SVO), in the presence of various deterrent factors such as shadows, extremely bright outdoor light, and wet conditions. To allow the implementation of VO on any generic vehicle, a discussion of porting the VO algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed using the sum of absolute differences (SAD) technique for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected with the Harris corner detector and matched between consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of the matched features are computed from the disparity map obtained in the first step; the two point sets are related by a rotation and a translation, which are computed by least-squares minimization with the aid of singular value decomposition (SVD), with random sample consensus (RANSAC) used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the ground-truth position obtained from GPS.
The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% for path lengths of 20 m and 100 m respectively.
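The rigid-motion step of the third stage, estimating a rotation and translation between two matched 3D point sets by least squares with an SVD, can be sketched as follows. This is a generic Kabsch-style solver, not the thesis's exact implementation:

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    computed via SVD (the Kabsch/Procrustes method)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In the actual pipeline this solver would run inside a RANSAC loop, fitting on random minimal subsets of the matched features and keeping the transform with the largest inlier set.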

Date Created
2010


Radiation detection and imaging: neutrons and electric fields

Description

The work presented in this manuscript has the overarching theme of radiation. The two forms of radiation of interest are neutrons, i.e. nuclear radiation, and electric fields. The ability to detect such forms of radiation has significant security implications and could also be extended to very practical industrial applications. The goal is therefore to detect, and even image, such radiation sources.

The method to do so revolved around the concept of building large-area sensor arrays. By covering a large area, we can increase the probability of detection and gather more data to build a more complete and clearer view of the environment. Large-area circuitry can be achieved cost-effectively by leveraging the thin-film transistor process of the display industry. As display production increases with the explosion of mobile devices and the continued growth in sales of flat-panel monitors and televisions, the cost to build a unit continues to decrease.
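The benefit of a larger sensing area can be illustrated with a simple Poisson counting model (an assumed model for illustration, not taken from the thesis): under arrivals at a fixed flux, the probability of registering at least one count grows with detector area.

```python
import math

def detection_probability(flux, area, time):
    """P(at least one count) under Poisson arrivals with mean flux*area*time.
    flux in counts/cm^2/s, area in cm^2, time in s (illustrative units)."""
    return 1.0 - math.exp(-flux * area * time)
```

With a flux of 0.01 counts/cm^2/s over 10 s, a 10 cm^2 detector sees at least one count with probability 1 - e^-1 (about 63%), while a 100 cm^2 detector is nearly certain to.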

Using a thin-film process also allows for flexible electronics, which could be taken advantage of in-house at the Flexible Electronics and Display Center. Flexible electronics enable new form factors and applications that would not otherwise be possible with their single-crystal counterparts. To use thin-film technology effectively, however, novel ways of overcoming the drawbacks of the thin-film process, namely its lower performance, must be developed.

The two deliverable devices that underwent development are a preamplifier used in an active pixel sensor for neutron detection and a passive electric field imaging array. This thesis will cover the theory and process behind realizing these devices.

Date Created
2015


Underwater optical sensorbot for in situ pH monitoring

Description

Continuous underwater observation is a challenging engineering task that could be accomplished by the development and deployment of a sensor array able to survive harsh underwater conditions. One approach to this challenge is a swarm of micro underwater robots, known as Sensorbots, equipped with biogeochemical sensors that can relay information among themselves in real time. This innovative method for underwater exploration can contribute to a more comprehensive understanding of the ocean by not limiting sampling to a single point and time. In this thesis, Sensorbot Beta, a low-cost, fully enclosed Sensorbot prototype for bench-top characterization and short-term field testing, is presented in a modular format that provides flexibility and the potential for rapid design. Sensorbot Beta is designed around a microcontroller-driven platform built from commercial off-the-shelf components for all hardware to reduce cost and development time. The primary sensor incorporated into Sensorbot Beta is an in situ fluorescent pH sensor. Design considerations have been made for easy adoption of other fluorescent or phosphorescent sensors, such as dissolved oxygen or temperature, and the optical components are designed in a format that enables such additional sensors. A real-time data acquisition system using Bluetooth allows for characterization of the sensor in bench-top experiments. Sensorbot Beta demonstrates rapid calibration, and future work will include deployment for large-scale experiments in a lake or ocean.
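The rapid calibration step typically amounts to fitting a response curve against measurements of known buffer solutions. A minimal sketch, assuming a linear relation between the measured fluorescence ratio and pH (the actual transfer function of the sensor chemistry is not given here, so both the model and the helper names are hypothetical):

```python
import numpy as np

def calibrate_ph(ratios, known_ph):
    """Fit pH = a * ratio + b by least squares from buffer measurements
    (assumed linear response; real fluorescent dyes may need a sigmoid)."""
    a, b = np.polyfit(ratios, known_ph, 1)
    return a, b

def predict_ph(ratio, a, b):
    """Convert a new fluorescence-ratio reading to a pH estimate."""
    return a * ratio + b
```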

Date Created
2012


Multi-user diversity systems with application to cognitive radio

Description

This thesis investigates the capacity and bit error rate (BER) performance of multi-user diversity systems with a random number of users, and considers their application to cognitive radio systems. Ergodic capacity, normalized capacity, outage capacity, and average bit error rate metrics are studied. It is found that randomizing the number of users reduces the ergodic capacity. A stochastic ordering framework, for example Laplace transform ordering, is adopted to order user distributions; the ergodic capacity under different user distributions follows the corresponding Laplace transform order. The scaling law of the ergodic capacity with the mean number of users under Poisson and negative binomial user distributions is studied for a large mean number of users, and these two distributions are ordered in the Laplace transform sense. The ergodic capacity per user is defined and shown to increase when the total number of users is randomized, the opposite of the behavior of the unnormalized ergodic capacity metric. Outage probability under slow fading is also considered and shown to decrease when the total number of users is randomized. The BER in a general multi-user diversity system has a completely monotonic derivative, which implies, by Jensen's inequality, that randomizing the total number of users decreases the average BER. The special case of a Poisson number of users and Rayleigh fading is studied; combining this with the theory of regular variation, the average BER is shown to achieve tightness in Jensen's inequality. This is followed by an extension to a negative binomial number of users, for which the BER is derived and shown to be decreasing in the number of users. A single-primary-user cognitive radio system with multi-user diversity at the secondary users is proposed. 
Compared to the general multi-user diversity system, there exists an interference constraint between the secondary and primary users that is independent of the secondary users' transmissions. The secondary user with the highest transmitted SNR that also satisfies the interference constraint is selected to communicate, so the active number of secondary users is a binomial random variable. This is followed by a derivation of the scaling law of the ergodic capacity with the mean number of users and a closed-form expression for the average BER in this setting. The ergodic capacity under the binomial user distribution is shown to outperform the Poisson case. Monte Carlo simulations are used to supplement the analytical results and compare the performance of different user distributions.
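The central claim, that randomizing the number of users reduces the ergodic capacity relative to a fixed user count with the same mean, can be checked with a small Monte Carlo sketch (max-SNR user selection over Rayleigh fading; all parameters are illustrative, not from the thesis):

```python
import numpy as np

def ergodic_capacity(draw_n, snr=1.0, trials=40000, seed=0):
    """Monte Carlo ergodic capacity (bits/s/Hz) of max-SNR user selection
    over Rayleigh fading; draw_n(rng) draws the user count for each slot."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        n = draw_n(rng)
        if n == 0:
            continue                        # no user to schedule: zero rate
        gains = rng.exponential(size=n)     # Rayleigh fading -> exp. power gains
        total += np.log2(1.0 + snr * gains.max())
    return total / trials
```

Comparing a fixed population of 8 users against a Poisson-distributed population with mean 8 shows the Jensen-type loss from randomization, consistent with the concavity argument in the abstract.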

Date Created
2012


Camera calibration using adaptive segmentation and ellipse fitting for localizing control points

Description

There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is widely used in the fields of computer vision, robotics, and 3D reconstruction, and it is the first step in extracting 3D data from a 2D image. It plays a crucial role because the accuracy of the reconstruction and 3D coordinate determination relies to a great extent on the accuracy of the calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages and issues with existing state-of-the-art methods are discussed and overcome in this work. The implemented system consists of local adaptive segmentation, ellipse fitting, projection, and optimization techniques. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error compared to the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
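The ellipse-fitting stage can be illustrated with a simple algebraic least-squares conic fit that recovers the centre of an imaged circular control point (a generic formulation for illustration, not the thesis's optimized pipeline):

```python
import numpy as np

def ellipse_center(x, y):
    """Fit a general conic A x^2 + B xy + C y^2 + D x + E y = 1 to the
    boundary points by least squares and return the centre, i.e. the point
    where the conic's gradient vanishes."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    # Gradient of the conic is zero at the centre:
    #   [2A  B][cx]   [-D]
    #   [ B 2C][cy] = [-E]
    cx, cy = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return cx, cy
```

Under perspective projection a circle images to an ellipse, so localizing control points reduces to exactly this kind of centre estimation, followed by the projection-aware corrections the thesis optimizes.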

Date Created
2012


Fractional focusing and the chirp scaling algorithm with real synthetic aperture radar data

Description

For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target-focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image formation is the fractional Fourier transform (FRFT). This transform has been introduced to the signal processing community relatively recently, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This is achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolutions between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data are first processed into imagery using the conventional chirp scaling algorithm. The results are then compared with those of a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, which enhances the algorithm's focusing ability and thereby increases the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
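The range-compression idea underlying chirp-based SAR processing can be sketched with a frequency-domain matched filter: correlating the received echo against a replica of the transmitted linear FM chirp collapses the long pulse into a sharp peak at the target delay. The parameters below are arbitrary illustration values, not RADARSAT-1's, and the CSA itself applies additional phase functions not shown here:

```python
import numpy as np

def pulse_compress(echo, chirp):
    """Matched-filter pulse compression: circular cross-correlation of the
    echo with the transmitted chirp replica, done in the frequency domain."""
    n = len(echo)
    return np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))
```

For a 256-sample chirp embedded at sample 300 of a 1024-sample echo, the compressed output peaks at index 300 with magnitude equal to the pulse energy.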

Date Created
2011


Augmented image classification using image registration techniques

Description

Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. The method reduces the classification error rate by registering each new image against previously obtained images before classifying it. The motivation is that images obtained in the same region will not differ significantly in their characteristics, so registration produces an image that matches the previously obtained image more closely and therefore classifies more reliably. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help in obtaining better classification with naïve Bayes, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
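The classification stage can be illustrated with a minimal Gaussian naïve Bayes classifier (a generic formulation with uniform class priors, standing in for the thesis's trained classifier and its image features):

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes: per-class feature means and variances,
    prediction by maximum log-likelihood (uniform class priors assumed)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # Sum log N(x; mu, var) over features, which are assumed
        # conditionally independent given the class (the "naive" step).
        diff = X[None, :, :] - self.mu[:, None, :]
        ll = -0.5 * (np.log(2.0 * np.pi * self.var[:, None, :])
                     + diff ** 2 / self.var[:, None, :]).sum(axis=2)
        return self.classes[np.argmax(ll, axis=0)]
```

In the proposed pipeline, ICP registration would be applied to the incoming image first, and the registered result fed to a classifier of this kind.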

Date Created
2011


Sensor placement and graphical user interface for photovoltaic array monitoring system

Description

With the increased usage of green energy, the number of photovoltaic (PV) arrays used in power generation is increasing rapidly. Many of these arrays are located at remote sites where faults within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify faults must spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and to detect faults. Such systems must monitor the DC side of the array in addition to the AC side in order to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring in an automated PV array monitoring system. The first part of the thesis quantifies the advantage that higher-resolution data from a PV array provides for fault detection. Data for the monitoring system can be gathered for the array as a whole or from additional points within the array, such as individual modules and the ends of strings. The fault detection and false positive rates are compared for array-level, string-level, and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array with module-level monitoring information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or about individual modules, and to locate faults in the array.
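The advantage of module-level over array-level monitoring can be sketched with a toy Monte Carlo model. The fault magnitude, noise level, and threshold below are assumptions for illustration, not the Simulink models used in the thesis:

```python
import numpy as np

def detection_rates(n_modules=24, fault_drop=0.3, noise=0.05,
                    threshold=0.85, trials=5000, seed=0):
    """Monte Carlo comparison of fault detection at array vs module level.
    Each trial faults one module (losing `fault_drop` of its output) and adds
    multiplicative Gaussian noise; a reading below `threshold` of nominal
    flags a fault. Returns (array-level rate, module-level rate)."""
    rng = np.random.default_rng(seed)
    hits_array = hits_module = 0
    for _ in range(trials):
        power = np.full(n_modules, 1.0)
        power[rng.integers(n_modules)] *= 1.0 - fault_drop
        meas = power * (1.0 + noise * rng.normal(size=n_modules))
        if meas.sum() < threshold * n_modules:   # array-level: total only
            hits_array += 1
        if meas.min() < threshold:               # module-level: per-module data
            hits_module += 1
    return hits_array / trials, hits_module / trials
```

In this toy model a 30% single-module drop barely moves the array total, so array-level monitoring misses it while module-level readings flag it almost every time, which is the qualitative point the simulations quantify.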

Date Created
2012


Dynamic waveform design for track-before-detect algorithms in radar

Description

In this thesis, an adaptive waveform selection technique for dynamic target tracking under low signal-to-noise ratio (SNR) conditions is investigated. The approach is integrated with a track-before-detect (TBD) algorithm and uses delay-Doppler matched filter (MF) outputs as raw measurements, without setting any threshold for extracting delay-Doppler estimates. The particle filter (PF) Bayesian sequential estimation approach is used with the TBD algorithm (PF-TBD) to estimate the dynamic target state. A waveform-agile TBD technique is proposed that integrates the PF-TBD with a waveform selection technique. The new approach predicts the waveform to transmit at the next time step by minimizing the predicted mean-squared error (MSE); as a result, the radar parameters are adaptively and optimally selected for superior performance. Building on previous work, this thesis highlights the applicability of the predicted covariance matrix to the lower-SNR waveform-agile tracking problem. The MSE performance of the adaptive waveform selection algorithm was compared against fixed waveforms using Monte Carlo simulations. It was found that the adaptive approach performed at least as well as the best fixed waveform when estimating only position or only velocity; when these estimates were weighted by different amounts, the adaptive performance exceeded that of all fixed waveforms. This improvement demonstrates the utility of the predicted covariance in waveform design under low-SNR conditions that are poorly handled by more traditional tracking algorithms.
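The PF-TBD backbone can be illustrated with a minimal bootstrap particle filter on a 1-D constant-velocity target observed in Gaussian noise. This is a simplified stand-in for the delay-Doppler measurement model, and all parameters are illustrative:

```python
import numpy as np

def particle_filter(measurements, n_particles=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D constant-velocity target with
    process noise q and measurement noise r; returns position estimates."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, size=(n_particles, 2))  # [position, velocity]
    estimates = []
    for z in measurements:
        # Propagate: x += v * dt + process noise (dt = 1).
        parts[:, 0] += parts[:, 1] + q * rng.normal(size=n_particles)
        parts[:, 1] += q * rng.normal(size=n_particles)
        # Weight by the Gaussian measurement likelihood, estimate, resample.
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)
        w /= w.sum()
        estimates.append(w @ parts[:, 0])
        parts = parts[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

In the waveform-agile scheme, the particle cloud at each step would additionally feed a predicted-MSE computation over the candidate waveforms, and the minimizer would be transmitted next.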

Date Created
2011


Scheduling neural sensors to estimate brain activity

Description

Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to the diagnosis and treatment of a brain disorder and can also identify the area of the brain from which the disorder originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to an adaptive and improved brain disorder diagnosis, as these algorithms can track activities that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are related non-linearly to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Because of the large number of sensors required to record EEG/MEG measurements for use in the particle filter over long recording periods, large amounts of power are required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean-squared estimation error as the performance metric under a constraint on maximum power consumption. 
The performance metric of the second method uses the distance between the sensor locations and the location estimate of the dipole source at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both show that good estimation results can be obtained with a significant reduction in the number of activated sensors at each time step.
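The second scheduling criterion, activating the sensors nearest the previous location estimate, can be sketched as follows (a hypothetical helper for illustration; the thesis works with full EEG/MEG forward models rather than plain Euclidean distance):

```python
import numpy as np

def schedule_sensors(sensor_xy, prev_estimate, k):
    """Return indices of the k sensors closest to the previous dipole
    location estimate. Nearer sensors see a stronger field, so this
    distance-based schedule tends to maximize the overall SNR."""
    d = np.linalg.norm(sensor_xy - prev_estimate, axis=1)
    return np.argsort(d)[:k]
```

At each filter step only the selected subset of sensors would be powered and read, with the remainder left idle to save power.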

Date Created
2012