Description
Despite significant advances in digital pathology and automation sciences, current diagnostic practice for cancer detection relies primarily on qualitative manual inspection of tissue architecture and of cell and nuclear morphology in stained biopsies using low-magnification, two-dimensional (2D) brightfield microscopy. The efficacy of this process is limited by inter-operator variations in sample preparation and imaging, and by inter-observer variability in assessment. Over the past few decades, the predictive value of quantitative morphology measurements derived from computerized analysis of micrographs has been compromised by the inability of 2D microscopy to capture information in the third dimension, and by the anisotropic spatial resolution inherent to conventional microscopy techniques that generate volumetric images by stacking 2D optical sections to approximate 3D. To gain insight into the true 3D nature of cells, this dissertation explores the application of a new technology for single-cell optical computed tomography (optical cell CT), a promising 3D tomographic imaging technique that uses visible-light absorption to image stained cells individually with sub-micron, isotropic spatial resolution. This dissertation provides a scalable analytical framework for fully automated 3D morphological analysis of transmission-mode optical cell CT images of hematoxylin-stained cells. The developed framework performs rapid and accurate quantification of 3D cell and nuclear morphology, facilitates assessment of morphological heterogeneity, and generates shape- and texture-based biosignatures predictive of cell state. Custom 3D image segmentation methods were developed to precisely delineate volumes of interest (VOIs) from reconstructed cell images. Comparison with user-defined ground-truth assessments yielded an average agreement (Dice coefficient) of 94% for the cell and its nucleus. Seventy-nine biologically relevant morphological descriptors (features) were computed from the segmented VOIs, and statistical classification methods were implemented to determine the subset of features that best predicted cell health. The efficacy of our proposed framework was demonstrated on an in vitro model of multistep carcinogenesis in human Barrett's esophagus (BE), and classifier performance using our 3D morphometric analysis was compared against computerized analysis of 2D image slices that reflected conventional cytological observation. Our results enable sensitive and specific nuclear-grade classification for early cancer diagnosis and underline the value of the approach as an objective adjunctive tool to better understand the morphological changes associated with malignant transformation.
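As a point of reference for the 94% agreement figure, the Dice coefficient between a segmented volume and a ground-truth mask is straightforward to compute. The sketch below is a minimal Python illustration, not code from the dissertation; the function name and the test data are assumptions.

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary volumes (True = inside the VOI)."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    intersection = np.logical_and(seg, truth).sum()
    denom = seg.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Example: compare a hypothetical segmented nucleus against ground truth.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.5
seg = truth.copy()
seg[:4] = ~seg[:4]                    # corrupt a few slices
print(f"Dice = {dice_coefficient(seg, truth):.3f}")
```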
Contributors: Nandakumar, Vivek (Author) / Meldrum, Deirdre R. (Thesis advisor) / Nelson, Alan C. (Committee member) / Karam, Lina J. (Committee member) / Ye, Jieping (Committee member) / Johnson, Roger H. (Committee member) / Bussey, Kimberly J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Texture analysis plays an important role in applications such as automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging, and document processing. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Like the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is the proposal of an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the ability of the most popular visual attention models to predict visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps for images with exclusively texture content, a new eye-tracking database is systematically built. Using the visual saliency map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The proposed metric is based on the observation that VSM characteristics differ between textures of differing regularity. The proposed texture regularity metric is based on two texture regularity scores, namely a textural similarity score and a spatial distribution score. In order to evaluate the performance of the proposed regularity metric, a texture regularity database, called RegTEX, is built as a part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the mean opinion score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations, and it outperforms some of the popular texture regularity metrics in predicting perceived regularity. The ability of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated by building a synthesized-textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed as part of this work. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures. The perceived granularity is quantified through a new granularity metric that is proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.
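The two-score structure of the regularity metric (a textural similarity score plus a spatial distribution score) can be illustrated with a toy computation on a visual saliency map. The sketch below is only a loose illustration of that general idea, not the proposed metric; the patch-correlation similarity, the entropy-based distribution score, and their equal-weight combination are all assumptions.

```python
import numpy as np

def toy_regularity(vsm: np.ndarray, patch: int = 16) -> float:
    """Toy texture regularity score from a visual saliency map (VSM).

    Combines (i) a textural similarity score -- how alike the VSM's
    non-overlapping patches are -- and (ii) a spatial distribution
    score -- how evenly saliency mass spreads over the patches.
    Both lie in [0, 1]; higher means more regular. Illustrative only.
    """
    rows = (vsm.shape[0] // patch) * patch
    cols = (vsm.shape[1] // patch) * patch
    tiles = (vsm[:rows, :cols]
             .reshape(rows // patch, patch, cols // patch, patch)
             .swapaxes(1, 2)
             .reshape(-1, patch * patch)
             .astype(float))
    centered = tiles - tiles.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(centered, axis=1) + 1e-12
    corr = (centered @ centered.T) / np.outer(norms, norms)
    similarity = corr[np.triu_indices(len(tiles), k=1)].mean()

    mass = tiles.sum(axis=1)
    mass = mass / (mass.sum() + 1e-12)
    spread = -(mass * np.log(mass + 1e-12)).sum() / np.log(len(mass))
    return 0.5 * (max(similarity, 0.0) + spread)
```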
Contributors: Varadarajan, Srenivas (Author) / Karam, Lina J. (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Li, Baoxin (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
To uncover the neural correlates of goal-directed behavior, single-unit action potentials are considered fundamental computing units and have been examined with different analytical methodologies under a broad set of hypotheses. Using behaving rats performing a directional choice learning task, we aim to study changes in cortical neural patterns as the rats improved their task performance accuracy from chance to 80% or higher. Specifically, simultaneous multi-channel single-unit neural recordings from the rats' agranular medial (AGm) and agranular lateral (AGl) cortices were analyzed using joint peristimulus time histograms (JPSTHs), which effectively unveil firing coincidences in neural action potentials. My results, based on data from six rats, revealed that coincidences of pairwise neural action potentials were higher during the learning stage when the rats were performing the task than when they were not, and that this trend abated after the rats learned the task. Another finding is that the coincidences during the learning stage were stronger than those after the rats had learned the task, especially while they were performing the task. Therefore, this coincidence measure was highest when the rats were performing the task during the learning stage. This may suggest that neural coincidences play a role in the coordination and communication among populations of neurons engaged in a purposeful act. Additionally, attention and working memory may have contributed to the modulation of neural coincidences during the designed task.
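A JPSTH tallies, trial by trial, the joint firing of two neurons across pairs of time bins. The sketch below computes the raw (uncorrected) matrix under stated assumptions; full JPSTH analyses typically also subtract the PSTH outer product or a shift predictor to remove firing-rate covariation.

```python
import numpy as np

def raw_jpsth(spikes_a, spikes_b, t_max: float, bin_size: float) -> np.ndarray:
    """Raw joint peristimulus time histogram for a neuron pair.

    spikes_a, spikes_b: lists (one entry per trial) of spike-time arrays,
    aligned to stimulus onset. Returns an (n_bins, n_bins) matrix averaged
    over trials; entry (i, j) counts neuron A firing in bin i jointly with
    neuron B firing in bin j. Coincidences concentrate on the diagonal.
    """
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    n_bins = len(edges) - 1
    jpsth = np.zeros((n_bins, n_bins))
    for ta, tb in zip(spikes_a, spikes_b):
        counts_a, _ = np.histogram(ta, bins=edges)
        counts_b, _ = np.histogram(tb, bins=edges)
        jpsth += np.outer(counts_a, counts_b)
    return jpsth / len(spikes_a)
```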
Contributors: Cheng, Bing (Author) / Si, Jennie (Thesis advisor) / Chae, Junseok (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Photovoltaic (PV) systems are affected by converter losses, partial shading, and other mismatches in the panels. This dissertation introduces a sub-panel maximum power point tracking (MPPT) architecture, together with an integrated CMOS current-sensor circuit on a chip, to reduce mismatch effects and losses and to increase the efficiency of the PV system. The sub-panel MPPT increases the efficiency of the PV system during shading and replaces the bypass diodes in the panels with an integrated MPPT and DC-DC regulator. For the integrated MPPT and regulator, this research developed an integrated, standard-CMOS, low-power, high-common-mode-range Current-to-Digital Converter (IDC) circuit and applied it to the DC-DC regulator and MPPT. The proposed charge-based CMOS switched-capacitor circuit directly digitizes the output current of the DC-DC regulator without an analog-to-digital converter (ADC) and without the need for a high-voltage process technology. Compared to resistor-based current-sensing methods, which require a current-to-voltage circuit, a gain block, and an ADC, the proposed CMOS IDC is a low-power, efficient integrated circuit that achieves high resolution, lower complexity, and lower power consumption. The IDC circuit is fabricated in a 0.7 µm CMOS process, occupies 2 mm × 2 mm, and consumes less than 27 mW. The IDC circuit has been tested and used in a boost DC-DC regulator and MPPT for a photovoltaic system. The DC-DC converter has an efficiency of 95%. The sub-module-level power optimization improves the output power of a shaded panel by up to 20% compared to panel-level MPPT with bypass diodes.
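The abstract does not specify the tracking algorithm, so the sketch below shows a generic perturb-and-observe hill climber of the kind commonly paired with a sub-panel DC-DC regulator; the callables standing in for the digitized current/voltage readings and the duty-cycle control are assumptions.

```python
def perturb_and_observe(read_v, read_i, set_duty, duty: float = 0.5,
                        step: float = 0.01, n_iters: int = 1000) -> float:
    """Generic perturb-and-observe MPPT loop (illustrative sketch).

    read_v / read_i: callables returning PV voltage and current (e.g.
    from an on-chip current-to-digital converter); set_duty applies a
    new DC-DC duty cycle. Climbs toward the maximum power point.
    """
    set_duty(duty)
    prev_power = read_v() * read_i()
    for _ in range(n_iters):
        duty = min(max(duty + step, 0.05), 0.95)  # perturb, with limits
        set_duty(duty)
        power = read_v() * read_i()
        if power < prev_power:        # power dropped: reverse direction
            step = -step
        prev_power = power
    return duty
```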
Contributors: Marti-Arbona, Edgar (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Kitchen, Jennifer (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This work describes the development of automated flows to generate pad rings, mixed-signal power grids, and mega cells in a multi-project test chip. Three major design flows were created to build the test chip. The first was the pad ring flow, which produced the starting block for the test chip. This flow placed all of the chip's signals in the desired order along the outside of the die and created the power ring that supplies the chip with a robust power source.

The second flow assembled a flash block based on the Xilinx XCFXXP. This flow was somewhat similar to the pad ring flow, except that optimizations and a clock tree were added. A couple of design redos were required due to timing and orientation constraints.

Finally, the last flow was the top-level flow, in which all of the components were combined to create a finished test chip ready for fabrication. The main components were the finished flash block, HERMES, test structures, and a clock instance, along with the pad ring and power ring produced by the pad ring flow.

Also discussed is work done on a previous multi-project test chip: the creation of power gaters, which act as switches to turn the power to some flash modules on and off. To control the power gaters, the functionality of some pad drivers was changed so that they output a higher voltage than is seen in the core of the chip.
Contributors: Lieb, Christopher (Author) / Clark, Lawrence (Thesis advisor) / Holbert, Keith E. (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Switching converters (SCs) are an excellent choice for handheld devices due to their high power-conversion efficiency. However, they suffer from two major drawbacks. The first drawback is that their dynamic response is sensitive to variations in inductor (L) and capacitor (C) values. A cost-effective solution is implemented by designing a programmable digital controller: despite variations in L and C values, the target dynamic response can be achieved by computing and programming the filter coefficients for a particular L and C. Moreover, digital controllers have higher immunity to environmental changes such as temperature and the aging of components. The second drawback of SCs is their poor efficiency under low-load conditions when operated in pulse width modulation (PWM) mode; operating in pulse frequency modulation (PFM) mode instead achieves better efficiency. A mostly-digital way of detecting PFM mode is implemented. In addition, a slow serial interface to program the chip and a high-speed serial interface to characterize the mixed-signal blocks, as well as to ship data in or out for debug purposes, are designed. The chip is taped out in IBM's 0.18 µm radiation-hardened CMOS process technology. A test board is built with the chip, external power FETs, and a driver IC. At the time of this writing, PWM operation, PFM detection, transitions between PWM and PFM, and both serial interfaces have been validated on the test board.
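The PFM-detection logic is not spelled out in the abstract; one mostly-digital approach, sketched here purely as an assumption, is to monitor the duty-cycle command from the digital loop filter and declare light load once it stays below a threshold for a programmable number of switching cycles.

```python
def pfm_mode_detector(duty_stream, threshold: float = 0.12,
                      qualify_cycles: int = 64):
    """Yield (duty, in_pfm) per switching cycle (illustrative sketch).

    Enters PFM after `qualify_cycles` consecutive cycles with the duty
    command below `threshold` (a light-load signature); a single
    heavy-load cycle re-qualifies PWM immediately to preserve the
    transient response. Threshold and cycle count are placeholders.
    """
    count, in_pfm = 0, False
    for duty in duty_stream:
        if duty < threshold:
            count = min(count + 1, qualify_cycles)
        else:
            count, in_pfm = 0, False   # heavy load: back to PWM at once
        if count >= qualify_cycles:
            in_pfm = True
        yield duty, in_pfm
```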
Contributors: Mumma Reddy, Abhiram (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ogras, Umit Y. (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Register file (RF) memory is important in low-power systems on chip (SoC) due to its inherent low-voltage stability. Moreover, designs increasingly use compiled rather than custom memory blocks, which frequently employ static, rather than pre-charged dynamic, RFs. In this work, the various RFs designed for a microprocessor cache and register files are discussed. A comparison between static and dynamic RF power dissipation and timing characteristics is also presented. The relative timing and power advantages of the designs are shown to depend on the memory aspect ratio, i.e., the array width and height.
Contributors: Vashishtha, Vinay (Author) / Clark, Lawrence T. (Thesis advisor) / Seo, Jae-Sun (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Thousands of high-resolution images are generated each day. Detecting and analyzing variations in these images are key steps in image understanding. This work focuses on spatial and multi-temporal visual change detection and its applications to multi-temporal synthetic aperture radar (SAR) images.

The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance in terms of SNR and edge localization, and its single response to a single edge. In this work, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared to the original frame-level Canny algorithm. The resulting block-based algorithm has significantly reduced memory requirements and can achieve significantly reduced latency. Furthermore, the proposed algorithm can be easily integrated with other block-based image processing systems. In addition, quantitative evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than that of the original frame-based algorithm, especially when noise is present in the images.

In the context of multi-temporal SAR images for earth monitoring applications, one critical issue is the detection of changes occurring after a natural or anthropic disaster. In this work, we propose a novel similarity measure for automatic change detection using a pair of SAR images acquired at different times, and we apply it in both the spatial and wavelet domains. This measure is based on the evolution of the local statistics of the image between the two dates. The local statistics are modeled as a Gaussian mixture model (GMM), which is more suitable and flexible for approximating the local distribution of SAR images with distinct land-cover typologies. Tests on real datasets show that the proposed detectors outperform existing methods in terms of the quality of the similarity maps, which are assessed using receiver operating characteristic (ROC) curves, and in terms of the total error rates of the final change detection maps. Furthermore, we propose a new similarity measure for automatic change detection based on a divisive normalization transform (DNT) in order to reduce the computational complexity. Tests show that the proposed DNT-based change detector exhibits competitive detection performance while achieving lower computational complexity compared to previously suggested methods.
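As a rough illustration of the local-statistics idea (not the exact detector developed in this work), one can fit a GMM to a co-located window in each image and score change by how poorly each date's model explains the other date's samples; the component count, window handling, and cross-likelihood score below are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_gmm_dissimilarity(win1: np.ndarray, win2: np.ndarray,
                            n_components: int = 3) -> float:
    """Cross-likelihood change score for two co-located SAR windows.

    Fits a GMM to each window's pixel intensities and measures how much
    the average log-likelihood drops when each window is scored under
    the other date's model. Near 0 = unchanged; larger = more change.
    Illustrative sketch only.
    """
    x1 = win1.reshape(-1, 1).astype(float)
    x2 = win2.reshape(-1, 1).astype(float)
    g1 = GaussianMixture(n_components, random_state=0).fit(x1)
    g2 = GaussianMixture(n_components, random_state=0).fit(x2)
    self_ll = g1.score(x1) + g2.score(x2)    # score() = mean log-likelihood
    cross_ll = g2.score(x1) + g1.score(x2)
    return self_ll - cross_ll
```

Sliding such a window over the co-registered image pair yields a similarity map that can then be thresholded into a change detection map.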
Contributors: Xu, Qian (Author) / Karam, Lina J. (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Super-resolution (SR) techniques are widely developed to increase image resolution by fusing several low-resolution (LR) images of the same scene, overcoming sensor hardware limitations and reducing media impairments in a cost-effective manner. When choosing a solution for the SR problem, there is always a trade-off between computational efficiency and high-resolution (HR) image quality. Existing SR approaches suffer from extremely high computational requirements due to the large number of unknowns to be estimated in the solution of the SR inverse problem. This thesis proposes efficient iterative SR techniques based on visual attention (VA) and perceptual modeling of the human visual system. In the first part of this thesis, an efficient ATtentive-SELective Perceptual-based (AT-SELP) SR framework is presented, in which only a subset of perceptually significant active pixels is selected for processing by the SR algorithm, based on a local contrast sensitivity threshold model and a proposed low-complexity saliency detector. The proposed saliency detector utilizes a probability-of-detection rule inspired by concepts of luminance masking and visual attention. The second part of this thesis further improves the efficiency of selective SR approaches by presenting an ATtentive (AT) SR framework that is completely driven by VA region detectors. Additionally, different VA techniques that combine several low-level features, such as center-surround differences in intensity and orientation, patch luminance and contrast, bandpass outputs of patch luminance and contrast, and differences of Gaussians of luminance intensity, are integrated and analyzed to illustrate the effectiveness of the proposed selective SR frameworks. The proposed AT-SELP SR and AT-SR frameworks proved to be flexible by integrating a maximum a posteriori (MAP)-based SR algorithm as well as a fast two-stage fusion-restoration (FR) SR estimator. By adopting the proposed selective SR frameworks, simulation results show a significant reduction in computational complexity on average, with comparable visual quality in terms of quantitative metrics such as PSNR, SNR, or MAE gains, as well as subjective assessment. The third part of this thesis proposes a perceptually weighted (PW) SR technique that incorporates unequal weighting parameters in the cost function of iterative SR problems. The proposed approach is inspired by the human visual system's (HVS) unequal processing of different local image features. Simulation results show enhanced reconstruction quality and faster convergence rates when the approach is applied to the MAP-based and FR-based SR schemes.
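The flavor of a perceptually weighted SR cost function can be sketched as iterative weighted least squares, with a diagonal weight map emphasizing perceptually significant pixels. Everything below (the weight map, the box-blur degradation model, and the step size) is a placeholder for illustration, not the thesis's actual formulation.

```python
import numpy as np

def weighted_sr_step(x, lr_stack, weights, degrade, degrade_T, step=0.1):
    """One gradient step for weighted least-squares SR (sketch).

    Minimizes sum_k || W^(1/2) (A x - y_k) ||^2, where A (`degrade`)
    blurs and downsamples the HR estimate x, `degrade_T` is its adjoint,
    and `weights` is a per-pixel perceptual weight map W (e.g. larger on
    salient/active pixels, smaller elsewhere).
    """
    grad = np.zeros_like(x)
    for y in lr_stack:                    # one LR observation at a time
        residual = weights * (degrade(x) - y)
        grad += degrade_T(residual)       # back-project weighted residual
    return x - step * grad

# A toy degradation pair: 2x2 box blur + decimation, and its adjoint.
def degrade(x):
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def degrade_T(r):
    return np.kron(r, np.ones((2, 2))) / 4.0
```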
Contributors: Sadaka, Nabil (Author) / Karam, Lina J. (Thesis advisor) / Spanias, Andreas S. (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Abousleman, Glen P. (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is used widely in the fields of computer vision, robotics, and 3D reconstruction, and it is the first step in extracting 3D data from a 2D image. It plays a crucial role in computer vision and 3D reconstruction because the accuracy of the reconstruction and of 3D coordinate determination relies to a great extent on the accuracy of the camera calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages of and issues with existing state-of-the-art methods are discussed and overcome in this work. The implemented system consists of local adaptive segmentation, ellipse fitting, projection, and optimization techniques. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error compared to the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
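Of the pipeline stages, ellipse fitting is the most self-contained. Below is a minimal algebraic conic fit as an illustration; production calibrators typically use the ellipse-specific (Fitzgibbon-style) constraint and subpixel edge locations, which this sketch omits.

```python
import numpy as np

def fit_conic(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    Fits the imaged (elliptical) outline of a circular calibration mark.
    Returns unit-norm conic coefficients; the ellipse center, axes, and
    orientation can then be recovered from them.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The right-singular vector of the smallest singular value minimizes
    # ||D p|| subject to ||p|| = 1.
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# Example: noisy points on an ellipse centered at (3, -2).
t = np.linspace(0, 2 * np.pi, 200)
x = 3 + 4 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = -2 + 2 * np.sin(t) + 0.01 * np.random.randn(t.size)
print(fit_conic(x, y))
```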
Contributors: Prakash, Charan Dudda (Author) / Karam, Lina J. (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012