Matching Items (50)
Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis assays some applications of compressive sensing and sparse representation with regard to image enhancement, restoration and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of an image classification system based on unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches.
Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
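The closing low-rank observation can be illustrated with a small sketch (a toy of my own, not the thesis's implementation; it assumes the collected similar patches stack into a matrix that is approximately rank-1):

```python
import numpy as np

def low_rank_denoise(patches, rank):
    """Denoise a stack of similar patches with a hard low-rank constraint:
    rows are vectorized, highly correlated patches, so the clean signal is
    assumed to live in a rank-`rank` subspace and the residual is noise."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s[rank:] = 0.0                      # keep only the dominant structure
    return U @ np.diag(s) @ Vt

# Toy demo: 20 noisy copies of one clean patch form a near rank-1 matrix.
rng = np.random.default_rng(0)
clean = rng.standard_normal(64)
noisy = np.tile(clean, (20, 1)) + 0.3 * rng.standard_normal((20, 64))
denoised = low_rank_denoise(noisy, rank=1)

err_noisy = np.linalg.norm(noisy - clean, axis=1).mean()
err_denoised = np.linalg.norm(denoised - clean, axis=1).mean()
```

In practice the patches would come from sparse-representation matching within the noisy image, and the completion step would be more sophisticated than a hard SVD truncation.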
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the classification error rate by registering images against previously obtained images before performing classification. The motivation is the fact that images obtained in the same region that need to be classified will not differ significantly in their characteristics. Hence, registration provides an image that matches the previously obtained image more closely, thus yielding better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and on a real-life data set, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naïve Bayes classify better, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
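The register-then-classify idea rests on ICP, which can be sketched on synthetic 2-D point sets (an illustrative toy, not the LAGR pipeline; the thesis registers images, but the same nearest-neighbour matching plus Kabsch fitting is the core of ICP):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve for the best rigid transform (Kabsch) and apply it."""
    # Nearest-neighbour correspondences.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation and translation between the matched sets.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Toy demo: undo a small rotation of 12 well-separated landmark points.
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
dst = np.c_[np.cos(angles), np.sin(angles)]
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = dst @ R_true.T                    # the misregistered point set
for _ in range(5):
    src = icp_step(src, dst)
residual = np.abs(src - dst).max()
```

After a few iterations the source points land on their destinations, at which point a classifier sees inputs aligned with its training imagery.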
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and modified to be practical on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude more processing time than classical SAR image formation.
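As a hedged illustration of sparse decomposition from undersampled measurements, here is a textbook orthogonal matching pursuit sketch (a generic greedy algorithm, not either of the thesis's two modified algorithms, and a random matrix standing in for a SAR measurement operator):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from
    measurements y = A @ x, where A has unit-norm columns."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares refit on the selected support, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy demo: recover a 3-sparse signal from 60 random measurements (of 100).
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, k=3)
recovery_error = np.linalg.norm(x_hat - x_true)
```

With comfortably more measurements than the sparsity level, the greedy selection finds the true support and the least-squares refit recovers the coefficients exactly.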
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Image processing in canals, rivers and other bodies of water has been a very important concern. This research used image processing to obtain photographic evidence of site conditions, which helps in monitoring the water body and its surroundings. Images are captured using a digital camera and stored on a datalogger; these images are retrieved using a cellular/satellite modem. A MATLAB program was designed to obtain the water level by simply entering the file name into the program, and a curve-fit model was created to determine the contrast parameters. The contrast parameters were obtained from the grayscale image data, mainly the mean and variance of the intensity values. The enhanced images are used to determine the water level by taking pixel intensity plots along the region of interest. The water level obtained is accurate to within 2% of the actual level observed from the image. High-speed imaging in microchannels has various applications in the industrial and medical fields; in the medical field it is tested using blood samples. The proposed experimental procedure determines the flow duration and the defects observed in these channels using fluids introduced into the microchannel, the fluids being a water-based dye and whole milk. The viscosity of the fluid reveals different flow patterns and defects in the microchannel. The defects observed vary from a small effect on the flow pattern to an extreme defect in the channel, such as obstruction of flow or deformation of the channel. The samples need to be further analyzed by SEM to gain better insight into the defects.
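Reading a water level off a pixel-intensity profile along the region of interest might be sketched as follows (a toy with synthetic intensities; the simple midpoint threshold is an assumption of mine, not the thesis's curve-fit contrast model):

```python
import numpy as np

def water_level_row(column, threshold=None):
    """Estimate the waterline as the first row (top to bottom) where the
    grayscale intensity along the column of interest falls below a
    threshold, by default midway between the column's extremes."""
    column = np.asarray(column, dtype=float)
    if threshold is None:
        threshold = 0.5 * (column.max() + column.min())
    below = np.flatnonzero(column < threshold)
    return int(below[0]) if below.size else None

# Toy profile: bright bank (~200) above darker water (~60), waterline at 120.
rng = np.random.default_rng(3)
profile = np.concatenate([np.full(120, 200.0), np.full(80, 60.0)])
profile += rng.normal(0, 5, profile.size)   # sensor noise
level = water_level_row(profile)
```

The detected row index would then be converted to a physical level using the camera's known geometry or a staff gauge visible in the frame.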
Contributors: Shasedhara, Abhijeet Bangalore (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Query expansion is a functionality of search engines that suggests a set of related queries for a user-issued keyword query. In the case of exploratory or ambiguous keyword queries, the main goal of the user is to identify and select a specific category of query results among the different categorical options, in order to narrow down the search and reach the desired result. Typical corpus-driven keyword query expansion approaches return popular words in the results as expanded queries. These empirical methods fail to cover all semantics of the categories present in the query results. More importantly, these methods do not consider the semantic relationship between the keywords featured in an expanded query. Contrary to a normal keyword search setting, these factors are non-trivial in an exploratory and ambiguous query setting, where the user's precise discernment of the different categories present in the query results is more important for making subsequent search decisions. In this thesis, I propose a new framework for keyword query expansion: generating a set of queries that correspond to a categorization of the original query results, referred to as categorizing query expansion. Two families of algorithms are proposed: one performs clustering as a pre-processing step and then generates categorizing expanded queries based on the clusters; the other handles the generation of quality expanded queries in the presence of imperfect clusters.
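The cluster-then-expand idea could be sketched as follows (an illustrative toy with bag-of-words k-means and deterministic seeding; the function name and scoring rule are mine, and the thesis's algorithms are more involved):

```python
import numpy as np
from collections import Counter

def categorizing_expansions(query, docs, n_clusters=2):
    """Cluster the result documents (bag-of-words k-means), then expand the
    query with the most frequent non-query term of each cluster, yielding
    one expanded query per category of results."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d.split():
            X[r, index[w]] += 1
    # Plain k-means on term-count vectors, seeded deterministically with
    # one document from each intended category for a stable toy demo.
    centers = X[[0, 2]].copy()
    for _ in range(20):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == c].mean(0) for c in range(n_clusters)])
    expansions = set()
    for c in range(n_clusters):
        counts = Counter(w for r in np.flatnonzero(labels == c)
                         for w in docs[r].split() if w not in query.split())
        expansions.add(f"{query} {counts.most_common(1)[0][0]}")
    return expansions

# An ambiguous query whose results fall into two categories (car vs. animal).
docs = ["jaguar car engine speed", "jaguar car dealer price",
        "jaguar cat jungle prey", "jaguar cat habitat jungle"]
exps = categorizing_expansions("jaguar", docs)
```

Each expanded query here names one category of the ambiguous results, which is the behaviour the framework asks of categorizing expansion.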
Contributors: Natarajan, Sivaramakrishnan (Author) / Chen, Yi (Thesis advisor) / Candan, Selcuk (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linearity or Gaussianity assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods in the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time-ordering of the data to extract features, and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem: one uses an out-of-bag sampling method and is called OOBForest; the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show that the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
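The permutation idea behind debiased importance, comparing an observed importance against a null obtained by permuting the predictor so that many-valued noise attributes are no longer favored, can be sketched generically (this is a plain permutation test with a correlation-based stand-in importance, not the pForest or OOBForest algorithms themselves):

```python
import numpy as np

def permutation_pvalue(importance_fn, X, y, feature, n_perm=200, seed=0):
    """Compare a feature's observed importance against its permutation null:
    permuting the feature column breaks its link to y while keeping its
    number of distinct values, so a many-valued noise predictor no longer
    looks important just because it offers many candidate split points."""
    rng = np.random.default_rng(seed)
    observed = importance_fn(X, y, feature)
    null = []
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        null.append(importance_fn(Xp, y, feature))
    # Add-one p-value: the fraction of null draws at least as large.
    return (1 + sum(v >= observed for v in null)) / (1 + n_perm)

# A crude stand-in importance: squared correlation of the feature with y.
def r2_importance(X, y, j):
    return np.corrcoef(X[:, j], y)[0, 1] ** 2

rng = np.random.default_rng(4)
n = 300
X = np.c_[rng.standard_normal(n),                 # informative feature 0
          rng.integers(0, 50, n).astype(float)]   # many-valued noise feature 1
y = X[:, 0] + 0.5 * rng.standard_normal(n)
p_informative = permutation_pvalue(r2_importance, X, y, 0)
p_noise = permutation_pvalue(r2_importance, X, y, 1)
```

The informative feature survives the test while the many-valued noise feature does not, which is the qualitative behaviour the debiased measures aim for.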
Contributors: Deng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Two-dimensional vision-based measurement is an ideal choice for measuring small or fragile parts that could be damaged by conventional contact measurement methods. However, two-dimensional vision-based measurement systems can be quite expensive, putting the technology out of reach of inventors and others. The vision-based measurement tool developed in this thesis is a low-cost alternative that can be built for less than US$500 from off-the-shelf parts and free software. The design is based on the USB microscope. The USB microscope was once considered a toy, similar to the telescopes and microscopes of the 17th century, but has recently started finding applications in industry, laboratories, and schools. In order to convert the USB microscope into a measurement tool, research in the following areas was necessary: currently available vision-based measurement systems, machine vision technologies, microscope design, photographic methods, digital imaging, illumination, edge detection, and computer-aided drafting applications. The result of the research was a two-dimensional vision-based measurement system that is extremely versatile, easy to use, and, best of all, inexpensive.
Contributors: Graham, Linda L. (Author) / Biekert, Russell (Thesis advisor) / Macia, Narciso (Committee member) / Meitz, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points, and the specifics of how they are used to define the interpolated values, influence how effectively the interpolation algorithm is able to estimate the underlying continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitatively assessing the new single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that more accurately reflect the underlying signals than less computationally demanding approaches, and do so with lower processing requirements and fewer restrictions than methods of comparable accuracy.
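Interpolation recast as registration can be illustrated in one dimension (a toy of my own with integer shifts and circular boundaries; the dissertation's control-grid formulation is far more general):

```python
import numpy as np

def registered_midpoint(row_a, row_b):
    """Interpolation recast as registration: estimate the integer shift
    between two neighbouring rows by circular cross-correlation, then build
    the in-between row by moving each row halfway along that shift before
    averaging, so a moving edge stays sharp instead of being blurred."""
    n = row_a.size
    corr = np.fft.ifft(np.conj(np.fft.fft(row_a)) * np.fft.fft(row_b)).real
    shift = int(np.argmax(corr))
    if shift > n // 2:
        shift -= n                      # interpret large shifts as negative
    half = shift // 2                   # toy demo: assume an even shift
    return 0.5 * (np.roll(row_a, half) + np.roll(row_b, -(shift - half)))

# A step edge that moves 4 samples between the two known rows.
s = np.zeros(64)
s[20:40] = 1.0
row_a, row_b = s, np.roll(s, 4)
true_mid = np.roll(s, 2)                # ground-truth in-between row

naive = 0.5 * (row_a + row_b)           # interpolation without registration
registered = registered_midpoint(row_a, row_b)
err_naive = np.abs(naive - true_mid).max()
err_registered = np.abs(registered - true_mid).max()
```

Averaging in place blurs the moving edge, while averaging along the estimated motion recovers the in-between row exactly; this is the intuition behind treating interpolation as a registration problem.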
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the condition of plaque buildup on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Due to the prognostic importance of early-stage identification, non-invasive assessment of plaque characteristics is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies in order to determine the energy dependency of tissue attenuation values for each voxel. They generate virtual monochromatic energy images as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since overlap between the CT attenuations measured in plaque components and contrast material shows that a single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity of separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT offer valuable information for separating lipid from fibrous plaque, since their attenuations change differently as the energy level changes. My second study proposed these data as the input to supervised learners for a more precise classification of lipid and fibrous plaques. My last study aimed at automatic segmentation of the coronary arteries and characterization of plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extraction of features from regions of interest. The study proposed feature extraction strategies and the selection of important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
Contributors: Yamak, Didem (Author) / Akay, Metin (Thesis advisor) / Muthuswamy, Jit (Committee member) / Akay, Yasemin (Committee member) / Pavlicek, William (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the increase in computing power and availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within the data. Thus, knowledge discovery via machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution for the case where there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy. aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the classes. An Asymmetric Random Forest (ARF) was proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. A Matched Random Forest (MRF) was proposed to find variables that are able to distinguish case from control without the restrictions that exist in linear models. MRF detects variables that distinguish case from control even in the presence of interactions and qualitative variables.
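The asymmetric-loss idea behind aSVM can be sketched with a class-weighted hinge loss (a minimal subgradient-descent linear SVM of my own, not the thesis's aSVM formulation; the weights and data are illustrative):

```python
import numpy as np

def asymmetric_svm(X, y, w_pos=1.0, w_neg=1.0, lr=0.05, lam=0.001, epochs=500):
    """Minimal linear SVM trained by batch subgradient descent on a
    class-weighted hinge loss (labels in {-1, +1}). Raising the weight on
    one class's margin violations shifts the boundary away from it, trading
    overall accuracy for precision on the other class."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    cost = np.where(y > 0, w_pos, w_neg)   # per-sample violation cost
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1         # margin violations
        grad_w = lam * w - (cost[viol, None] * y[viol, None] * X[viol]).sum(0) / n
        grad_b = -(cost[viol] * y[viol]).sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def precision(w, b, X, y):
    pred = X @ w + b > 0
    tp = (pred & (y > 0)).sum()
    fp = (pred & (y < 0)).sum()
    return tp / max(tp + fp, 1)

# Two overlapping Gaussian classes in 2-D.
rng = np.random.default_rng(5)
X = np.r_[rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))]
y = np.r_[-np.ones(200), np.ones(200)]
w1, b1 = asymmetric_svm(X, y)             # symmetric baseline
w2, b2 = asymmetric_svm(X, y, w_neg=5.0)  # penalize false positives 5x
```

Weighting negative-class violations more heavily pushes the boundary into the positive region, cutting false positives and raising precision on the positive class, which is the qualitative effect the thesis reports for aSVM over a regular SVM.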
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013