Matching Items (71)

Fielding an Autonomous Cobot in a University Environment: Engineering and Evaluation

Description

Many researchers aspire to create robotic systems that assist humans with common office tasks, especially by taking over delivery and messaging duties. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to navigate the office environment successfully. While mobile robots are well suited to navigating and interacting with elements of a deterministic office environment, interacting with human beings in an office remains challenging because of the limited cost-efficient compute power available onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented a human-robot interaction system on a mobile robot using cloud APIs, enabling it to navigate the office environment, identify humans within the environment, and communicate with them.

Date Created
2017-05

Low-cost Image-assisted Inertial Navigation System for a Micro Air Vehicle

Description

The increasing civilian demand for autonomous aerial vehicle platforms in both hobby and professional markets has resulted in an abundance of inexpensive inertial navigation systems and hardware. Many of these systems lack full autonomy, relying on the pilot's input with the assistance of inertial sensors for guidance. Autonomous systems depend heavily on a Global Positioning System (GPS) receiver, which can be limited by satellite signal strength, low update rates, and poor positioning accuracy. For precise navigation of a micro air vehicle in locations where GPS signals are unobtainable, such as indoors or in a dense urban environment, additional sensors must complement the inertial sensors to provide improved navigation state estimates without GPS. By creating a system that allows for the rapid development of experimental guidance, navigation, and control algorithms on versatile, low-cost development platforms, improved navigation systems may be tested with relative ease and at reduced cost. Incorporating a downward-facing camera into this system may further improve vehicle autonomy in GPS-denied environments.
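
The attitude-estimation problem behind this kind of inertial navigation is often introduced with a complementary filter, which blends the smooth-but-drifting integrated gyro rate with the noisy-but-drift-free accelerometer angle. The sketch below is a minimal one-axis illustration with invented sensor values, not the navigation system described in this work:

```python
import math

def accel_pitch(ax, az):
    """Pitch angle implied by the gravity direction in the accelerometer frame."""
    return math.atan2(ax, az)

def complementary_filter(pitch, gyro_rate, acc_pitch, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer pitch (noisy but drift-free)."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * acc_pitch

# Level hover with a small gyro bias: pure integration would drift without
# bound, while the accelerometer term keeps the estimate bounded.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.01,
                                 acc_pitch=accel_pitch(0.0, 9.81), dt=0.01)
```

With a 0.01 rad/s gyro bias, pure integration over ten simulated seconds would accumulate 0.1 rad of error; the filtered estimate settles near a small bounded offset instead.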

Date Created
2014-12

Development of Automated Data-Collecting Processes for Current Factory Production Systems: An Investigation to Validate Computer Vision Model Outputs

Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated this project is that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and apply Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision models. In the future, there is an opportunity to continue developing this product and to expand the team's scope by applying further engineering analysis to the collected data to drive factory improvements.

Date Created
2021-05

Clinically relevant classification and retrieval of diabetic retinopathy images

Description

Diabetic retinopathy (DR) is a common cause of blindness occurring due to prolonged presence of diabetes. The risk of developing DR or having the disease progress is increasing over time. Despite advances in diabetes care over the years, DR remains a vision-threatening complication and one of the leading causes of blindness among American adults. Recent studies have shown that diagnosis based on digital retinal imaging has potential benefits over traditional face-to-face evaluation. Yet there is a dearth of computer-based systems that can match the level of performance achieved by ophthalmologists. This thesis takes a fresh perspective in developing a computer-based system aimed at improving diagnosis of DR images. These images are categorized into three classes according to their severity level. The proposed approach explores effective methods to classify new images and retrieve clinically-relevant images from a database with prior diagnosis information associated with them. Retrieval provides a novel way to utilize the vast knowledge in the archives of previously-diagnosed DR images and thereby improve a clinician's performance while classification can safely reduce the burden on DR screening programs and possibly achieve higher detection accuracy than human experts. To solve the three-class retrieval and classification problem, the approach uses a multi-class multiple-instance medical image retrieval framework that makes use of spectrally tuned color correlogram and steerable Gaussian filter response features. The results show better retrieval and classification performances than prior-art methods and are also observed to be of clinical and visual relevance.
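
The correlogram features this work builds on capture how often a color co-occurs with itself at a given spatial distance. The sketch below is a deliberately simplified grayscale autocorrelogram using only horizontal offsets; the thesis itself uses spectrally tuned color correlograms (plus steerable Gaussian filter responses), so treat this as an illustration of the statistic, not the proposed feature:

```python
import numpy as np

def autocorrelogram(img, n_bins=4, distances=(1, 3)):
    """Simplified autocorrelogram: for each quantized intensity bin c and
    horizontal offset d, the probability that a pixel d columns away from
    a bin-c pixel also falls in bin c."""
    q = np.minimum(img.astype(int) * n_bins // 256, n_bins - 1)
    feats = []
    for d in distances:
        left, right = q[:, :-d].ravel(), q[:, d:].ravel()
        for c in range(n_bins):
            mask = left == c
            feats.append(float((right[mask] == c).mean()) if mask.any() else 0.0)
    return np.array(feats)

# A flat image is perfectly self-correlated within its own bin and
# contributes nothing to the other bins.
flat = np.zeros((16, 16), dtype=np.uint8)
feat = autocorrelogram(flat)
```

The resulting feature vector (one probability per bin per distance) can then be fed to a retrieval or classification stage.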

Date Created
2012

Thermal noise analysis of near-sensor image processing

Description

Commonly, image processing is handled on a CPU that is connected to the image sensor by a wire. In these far-sensor processing architectures, there is energy loss associated with sending data across an interconnect from the sensor to the CPU. In an effort to increase energy efficiency, near-sensor processing architectures have been developed, in which the sensor and processor are stacked directly on top of each other. This reduces the energy lost in sending data off-sensor. However, processing near the image sensor causes the sensor to heat up. Reports of thermal noise in near-sensor processing architectures motivated us to study how temperature affects image quality on a commercial image sensor and how thermal noise affects computer vision task accuracy. We analyzed image noise across nine temperatures and three sensor configurations to determine how image noise responds to an increase in temperature. Ultimately, our team used this information, along with transient analysis of a stacked image sensor's thermal behavior, to recommend thermal management strategies that leverage the benefits of near-sensor processing while preventing accuracy loss at problematic temperatures.
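
A standard way to quantify the kind of temperature-dependent noise studied here is the temporal noise estimate: capture many frames of a static scene and average the per-pixel standard deviation. The sketch below simulates this with synthetic frames (the noise levels are invented, not measurements from this study):

```python
import numpy as np

def temporal_noise(frames):
    """Mean per-pixel standard deviation across repeated captures of a
    static scene -- a common estimate of temporal sensor noise."""
    return float(np.stack(frames).astype(np.float64).std(axis=0).mean())

# Synthetic example: the same scene captured at a cool and a hot sensor
# temperature, modeled as Gaussian noise of different strengths.
rng = np.random.default_rng(0)
scene = rng.uniform(50, 200, size=(64, 64))
cool = [scene + rng.normal(0.0, 2.0, scene.shape) for _ in range(20)]
hot = [scene + rng.normal(0.0, 6.0, scene.shape) for _ in range(20)]
```

Sweeping this estimate across sensor temperatures gives the noise-versus-temperature curve that thermal management decisions can be based on.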

Date Created
2020-12

The Investigation of Low Cost Computer Vision Application for First Responder Co-robotics

Description

The use of Artificial Intelligence in assistive systems is growing in application and efficiency. From self-driving cars to medical and surgical robots and unsupervised industrial co-robots, the use of AI and robotics to eliminate human error in high-stress environments and perform automated tasks is advancing society's status quo. Not only has the understanding of co-robotics exploded in the industrial world, but in research as well. The National Science Foundation (NSF) defines co-robots as follows: “...a robot whose main purpose is to work with people or other robots to accomplish a goal” (NSF, 1). The latest iteration of their National Robotics Initiative, NRI-2.0, focuses on creating co-robots optimized for ‘scalability, customizability, lowering barriers to entry, and societal impact’ (NSF, 1). While many avenues have been explored for the implementation of co-robotics to create more efficient processes and sustainable lifestyles, this project focused on societal-impact co-robotics in the field of human safety and well-being. Introducing a co-robotics and computer vision AI solution for first responder assistance would bring greater awareness and efficiency to public safety. The use of real-time identification techniques would create a greater range of awareness for first responders in high-stress situations. A combination of environmental features collected through sensors (camera and radar) could be used to identify people and objects in environments where visual impairments and obstructions are severe (e.g., burning buildings, smoke-filled rooms, etc.). Information about situational conditions (environmental readings, locations of other occupants, etc.) could be transmitted to first responders in emergency situations, maximizing situational awareness. This would not only aid first responders in the evaluation of emergency situations, but would also provide useful data to help determine the most effective course of action for a given situation.

Date Created
2020-12

Perceptual-based locally adaptive noise and blur detection

Description

The quality of real-world visual content is typically impaired by many factors including image noise and blur. Detecting and analyzing these impairments are important steps for multiple computer vision tasks. This work focuses on perceptual-based locally adaptive noise and blur detection and their application to image restoration.

In the context of noise detection, this work proposes perceptual-based full-reference and no-reference objective image quality metrics by integrating perceptually weighted local noise into a probability summation model. Results are reported on both the LIVE and TID2008 databases. The proposed metrics consistently achieve good performance across noise types and across databases compared with many of the best recent quality metrics. They are also able to predict with high accuracy the relative amount of perceived noise in images of different content.
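
The probability-summation idea mentioned above can be sketched as follows: each perceptually weighted local noise value is mapped to a detection probability through a psychometric function, and the per-location probabilities are pooled assuming independent detectors. The exponent and threshold below are generic placeholder values, not the parameters used in this work:

```python
import numpy as np

def pooled_visibility(noise, weights=1.0, threshold=1.0, beta=3.5):
    """Probability-summation pooling: convert each perceptually weighted
    local noise value into a detection probability with an exponential
    psychometric function, then pool assuming independent detectors."""
    p_local = 1.0 - np.exp(-np.abs(weights * noise / threshold) ** beta)
    return float(1.0 - np.prod(1.0 - p_local))

# Stronger local noise should yield a higher pooled visibility score.
faint = pooled_visibility(np.full(100, 0.1))
strong = pooled_visibility(np.full(100, 0.5))
```

The pooled score grows monotonically with local noise strength and saturates at 1, matching the intuition that many just-visible distortions together become clearly visible.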

In the context of blur detection, existing approaches are either computationally costly or cannot perform reliably when dealing with the spatially-varying nature of defocus blur. In addition, many existing approaches do not take human perception into account. This work proposes a blur detection algorithm that is capable of detecting and quantifying the level of spatially-varying blur by integrating directional edge spread calculation, probability of blur detection, and local probability summation. The proposed method generates a blur map indicating the relative amount of perceived local blurriness. In order to detect the flat and near-flat regions that do not contribute to perceivable blur, a perceptual model based on the Just Noticeable Difference (JND) is further integrated into the proposed blur detection algorithm to generate perceptually significant blur maps. We compare our proposed method with six other state-of-the-art blur detection methods. Experimental results show that the proposed method performs the best both visually and quantitatively.
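
A crude stand-in for a spatially-varying blur map is the block-wise variance of a discrete Laplacian: blurred (or flat) regions have weak high-frequency response. This is only a toy proxy; the method described here uses directional edge spread and a JND-based perceptual model, not a Laplacian statistic:

```python
import numpy as np

def local_blur_map(img, block=8):
    """Block-wise variance of a 5-point discrete Laplacian: low variance
    marks regions that are locally blurred (or flat)."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    h, w = lap.shape
    h -= h % block
    w -= w % block
    tiles = lap[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.var(axis=(1, 3))

def box3(img):
    """3x3 box blur with edge padding, used here to simulate defocus."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, size=(66, 66))
```

Blurring an image should lower every block's Laplacian variance, so the map of a blurred image sits below the map of the sharp original.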

This work further investigates the application of the proposed blur detection methods to image deblurring. Two selective perceptual-based image deblurring frameworks are proposed, to improve the image deblurring results and to reduce the restoration artifacts. In addition, an edge-enhanced super resolution algorithm is proposed, and is shown to achieve better reconstructed results for the edge regions.

Date Created
2016

Camera calibration using adaptive segmentation and ellipse fitting for localizing control points

Description

There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is used widely in the fields of computer vision, robotics, and 3D reconstruction. It is the first step in extracting 3D data from a 2D image, and it plays a crucial role because the accuracy of the reconstruction and of 3D coordinate determination relies to a great extent on the accuracy of the calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages and issues of existing state-of-the-art methods are discussed and overcome in this work. The implemented system consists of local adaptive segmentation, ellipse fitting, projection, and optimization techniques. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error compared with the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
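
The ellipse-fitting step at the heart of circular-pattern calibration can be illustrated with a basic algebraic conic fit. The sketch below fits a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 under the linear constraint a + c = 1 (a simpler normalization than the constrained fits used in practice) and recovers the conic's center, which is what control-point localization needs:

```python
import numpy as np

def conic_center(x, y):
    """Least-squares conic fit under the constraint a + c = 1, returning the
    center where both partial derivatives of the conic vanish."""
    # With c = 1 - a, the conic equation becomes linear in (a, b, d, e, f).
    A = np.column_stack([x**2 - y**2, x * y, x, y, np.ones_like(x)])
    a, b, d, e, f = np.linalg.lstsq(A, -y**2, rcond=None)[0]
    c = 1.0 - a
    # Center solves: 2a*cx + b*cy = -d  and  b*cx + 2c*cy = -e.
    return np.linalg.solve([[2.0 * a, b], [b, 2.0 * c]], [-d, -e])

# Synthetic control point: an axis-aligned ellipse centered at (3, 2).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
cx, cy = conic_center(3.0 + 4.0 * np.cos(t), 2.0 + 2.0 * np.sin(t))
```

On noise-free points the fit is exact; the thesis's contribution lies in making this localization robust to segmentation error, projection distortion, and blur.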

Date Created
2012

Compressive sensing for computer vision and image processing

Description

With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines some applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of an image classification system based on unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
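
A workhorse of sparse representation behind applications like these is a greedy recovery algorithm such as Orthogonal Matching Pursuit (OMP). The sketch below recovers a 2-sparse signal; it uses an orthonormal dictionary so recovery is exact by construction, whereas the dictionaries in this thesis are raw-sampled or trained (and typically overcomplete):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily add the dictionary atom most
    correlated with the residual, then re-fit all coefficients by least
    squares on the selected support."""
    residual, support = y.astype(np.float64), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Orthonormal dictionary from a QR decomposition of a random matrix.
rng = np.random.default_rng(1)
D = np.linalg.qr(rng.normal(size=(30, 30)))[0]
x_true = np.zeros(30)
x_true[[5, 17]] = [1.5, -2.0]
x_hat = omp(D, D @ x_true, k=2)
```

The same greedy loop underlies sparse coding over patch dictionaries for super-resolution and denoising; only the dictionary and the stopping rule change.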

Date Created
2011