Matching Items (19)

Use of cleavable fluorescent antibodies for highly multiplexed single cell in situ protein analysis

Description

The ability to profile proteins allows us to gain a deeper understanding of the organization, regulation, and function of different biological systems. Many technologies are currently used to perform protein profiling, including mass spectrometry, microarray-based analysis, and fluorescence microscopy, but closer examination of these technologies has revealed limitations that compromise either the efficiency or the accuracy of the results. The objective of this project was to develop a technology in which highly multiplexed single-cell in situ protein analysis can be completed comprehensively, without loss of the protein targets. This is accomplished in a three-step procedure referred to as the immunofluorescence cycle. Antibodies carrying fluorophores attached through a novel azide-based cleavable linker are used to detect protein targets. Fluorescence imaging and data storage are then performed on the targets, and the fluorophores are cleaved from the antibodies without loss of the protein targets. Repeated cycles of this immunofluorescence procedure can build a comprehensive and quantitative protein profile. The development of such a technique will not only help us understand biological systems such as solid tumors, brain tissues, and developing embryos, but will also play a role in real-world applications such as signaling-network analysis, molecular diagnosis, and cellular targeted therapies.
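
The cyclic stain-image-cleave logic is easy to picture in a short sketch. The following Python loop is purely illustrative; the four-channels-per-cycle assumption and the step names are mine, not the authors':

```python
# Illustrative sketch of the immunofluorescence cycle (assumed 4 fluorophore
# channels per round; the per-step actions are hypothetical placeholders).

CHANNELS_PER_CYCLE = 4  # assumption: one protein target per fluorophore channel

def plan_cycles(targets, channels_per_cycle=CHANNELS_PER_CYCLE):
    """Plan reiterative staining rounds: stain, image, cleave, repeat."""
    profile = {}
    for cycle, start in enumerate(range(0, len(targets), channels_per_cycle)):
        batch = targets[start:start + channels_per_cycle]
        # 1) stain: apply antibodies carrying cleavable fluorophores
        # 2) image: record fluorescence and store the data
        # 3) cleave: remove the fluorophores, leaving protein targets intact
        for channel, target in enumerate(batch):
            profile[target] = {"cycle": cycle, "channel": channel}
    return profile

print(plan_cycles([f"protein_{i}" for i in range(10)]))
```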

Date Created
  • 2016-12

Multiplexed single-cell in situ RNA analysis by reiterative hybridization

Description

Currently, quantification of single-cell RNA species in their natural contexts is restricted by the small number of analyses that can be run in parallel. Here, we describe a method to increase the multiplexing capacity of RNA analysis for single cells in situ. RNA transcripts are first detected by fluorescence in situ hybridization (FISH). Once imaging and data storage are completed, the fluorescence signal is removed by photobleaching, and FISH is reinitiated to detect other RNA species residing in the same cell. Through reiterative cycles of hybridization, imaging, and photobleaching, the identities, positions, and copy numbers of a large number of distinct RNA species can be quantified in individual cells in situ. Using this approach, we measured seven different transcripts in single HeLa cells over five reiterative RNA FISH cycles. This method has the capacity to detect over 100 distinct RNA species in single cells in situ, and can be further applied in studies of systems biology, molecular diagnosis, and targeted therapies.
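
The multiplexing capacity of such a reiterative scheme grows with the number of cycles; a back-of-the-envelope sketch (the per-cycle species counts are my own illustrative assumptions, not figures from the abstract):

```python
def multiplex_capacity(cycles, species_per_cycle):
    """Distinct RNA species addressable by reiterative FISH:
    one batch of probes per hybridization cycle."""
    return cycles * species_per_cycle

# The reported experiment resolved 7 transcripts in 5 cycles (i.e., roughly
# 1-2 species per cycle); with more spectrally distinct dyes per cycle and
# more cycles, the 100-species mark becomes reachable, e.g.:
print(multiplex_capacity(5, 2))    # 10  -> comfortably covers 7 transcripts
print(multiplex_capacity(30, 4))   # 120 -> past 100 species
```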

Date Created
  • 2016-12

A Sparse Voxel Octree-Based Framework for Computing Solar Radiation Using 3D City Models

Description

An effective three-dimensional (3D) data representation is required to assess the spatial distribution of photovoltaic potential over urban building roofs and facades using 3D city models. Voxels have long been used as a spatial data representation, but practical applications of the voxel representation have been limited compared with rasters in traditional two-dimensional (2D) geographic information systems (GIS). We propose the sparse voxel octree (SVO) as a data representation to extend the GRASS GIS r.sun solar radiation model from 2D to 3D, nesting the r.sun model in an SVO-based computing framework. The presented 3D solar radiation computing framework was applied to 3D building groups of different geometric complexities to demonstrate its efficiency and scalability. We also present a method to explicitly compute diffuse shading losses in r.sun, and find that diffuse shading can reduce the annual global radiation by up to 10% under clear-sky conditions. Diffuse shading losses are therefore of significant importance, especially in complex urban environments.
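
For readers unfamiliar with the data structure, a minimal sparse voxel octree can be sketched in a few lines. This Python sketch illustrates only the structure (lazy subdivision of occupied cubes); it is not the paper's implementation, and the cubic power-of-two domain is an assumption:

```python
# Minimal sparse voxel octree (SVO) sketch. Points are assumed to lie in a
# cubic domain [0, size)^3 with power-of-two edge length.

class SVONode:
    __slots__ = ("origin", "size", "children", "occupied")

    def __init__(self, origin, size):
        self.origin = origin      # (x, y, z) corner of this cube
        self.size = size          # edge length of this cube
        self.children = None      # 8 children, allocated lazily (sparse)
        self.occupied = False

    def insert(self, point, leaf_size=1):
        """Mark the leaf voxel containing `point`, subdividing on demand."""
        self.occupied = True
        if self.size <= leaf_size:
            return
        half = self.size // 2
        if self.children is None:
            ox, oy, oz = self.origin
            self.children = [
                SVONode((ox + dx * half, oy + dy * half, oz + dz * half), half)
                for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)
            ]
        ix = (point[0] - self.origin[0]) >= half
        iy = (point[1] - self.origin[1]) >= half
        iz = (point[2] - self.origin[2]) >= half
        self.children[int(iz) * 4 + int(iy) * 2 + int(ix)].insert(point, leaf_size)

root = SVONode((0, 0, 0), 256)
root.insert((10, 200, 33))   # e.g., a voxel sampled from a building facade
```

Because empty children are never allocated, memory scales with the occupied surface (building shells) rather than the full volume, which is what makes voxel-based radiation computation practical at city scale.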

Date Created
  • 2017-03-31

A computational approach to relative image aesthetics

Description

Computational visual aesthetics has recently become an active research area. Existing state-of-the-art methods formulate this as a binary classification task in which a given image is predicted to be beautiful or not. In many applications, such as image retrieval and enhancement, it is more important to rank images by their aesthetic quality than to categorize them in binary fashion. Furthermore, in such applications all of the images may belong to the same category, making an aesthetic ranking the more appropriate output. To this end, the novel problem of ranking images with respect to their aesthetic quality is formulated in this work. A new dataset of image pairs with relative labels is constructed by carefully selecting images from the popular AVA dataset. Unlike in aesthetics classification, there is no single threshold that would determine the ranking order of the images across the entire dataset.

This problem is attacked using a deep neural network trained on image pairs by incorporating principles from relative learning. Results show that such a relative training procedure allows the network to rank the images with higher accuracy than a state-of-the-art network trained on the same set of images using binary labels. Further analysis of the results shows that training a model on image pairs yields better aesthetic features than training on the same number of individually binary-labelled images.
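
A common way to realize such relative training is a margin ranking loss over two shared-weight branches. The PyTorch sketch below illustrates that idea; the toy backbone, margin, and input sizes are my assumptions, not the thesis's exact network:

```python
import torch
import torch.nn as nn

class AestheticScorer(nn.Module):
    """Toy backbone mapping an image to a scalar aesthetic score
    (a stand-in for the thesis's network; architecture is assumed)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

scorer = AestheticScorer()
# Margin ranking loss: push score(better) above score(worse) by a margin.
criterion = nn.MarginRankingLoss(margin=0.1)

img_a = torch.randn(8, 3, 64, 64)   # images labelled "more aesthetic"
img_b = torch.randn(8, 3, 64, 64)   # images labelled "less aesthetic"
target = torch.ones(8)              # +1 means img_a should outrank img_b

loss = criterion(scorer(img_a), scorer(img_b), target)
loss.backward()
```

Because both branches share weights, the network learns a single scoring function whose outputs are directly comparable across images, which is exactly what ranking requires and what binary training does not provide.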

Additionally, an attempt is made to enhance the performance of the system by incorporating saliency-related information. Given an image, humans may fixate their vision on particular parts of the image by which they are subconsciously intrigued. I therefore tried to utilize the saliency information both stand-alone and in combination with the global and local aesthetic features, in two separate sets of experiments. In both cases, a standard saliency model is chosen and the generated saliency maps are convolved with the images prior to passing them to the network, giving higher importance to the salient regions than to the rest. The saliency images thus generated are used either independently or along with the global and local features to train the network. Empirical results show that the saliency-related aesthetic features may already be learnt by the network as a subset of the global features through automatic feature extraction, suggesting that the additional saliency module is redundant.
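
One simple reading of this preprocessing step is an element-wise weighting of the image by its saliency map; the sketch below assumes that interpretation and treats the saliency model itself as a given black box:

```python
import numpy as np

def saliency_weighted(image, saliency):
    """Emphasize salient regions by weighting pixels with the saliency map.

    image:    H x W x 3 float array in [0, 1]
    saliency: H x W float array in [0, 1] from any standard saliency model
    (Element-wise weighting is my interpretation of the preprocessing step.)
    """
    return image * saliency[..., None]

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
sal = rng.random((64, 64))
weighted = saliency_weighted(img, sal)   # fed to the network in place of img
```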

Date Created
  • 2016

Deep Temporal Clustering: Fully Unsupervised Learning of Time-Domain Features

Description

Unsupervised learning of time series data, also known as temporal clustering, is a challenging problem in machine learning. This thesis presents a novel algorithm, Deep Temporal Clustering (DTC), that naturally integrates dimensionality reduction and temporal clustering into a single, fully unsupervised, end-to-end learning framework. The algorithm utilizes an autoencoder for temporal dimensionality reduction and a novel temporal clustering layer for cluster assignment, and jointly optimizes the clustering objective and the dimensionality reduction objective. Depending on the requirements of the application, the temporal clustering layer can be customized with any temporal similarity metric; several similarity metrics and state-of-the-art algorithms are considered and compared. To gain insight into the temporal features that the network learns for clustering, a visualization method is applied that generates a region-of-interest heatmap for the time series. The viability of the algorithm is demonstrated using time series data from diverse domains, ranging from earthquakes to spacecraft sensor data, and in each case the proposed algorithm outperforms traditional methods. The superior performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
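
The joint optimization can be pictured as a weighted sum of a reconstruction loss and a clustering loss over the autoencoder's latent codes. The PyTorch sketch below uses a DEC-style KL clustering term as a stand-in for DTC's temporal clustering layer; the toy architecture, cluster count, and loss weighting are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTemporalAE(nn.Module):
    """Toy 1D-conv autoencoder for sequences (stand-in for DTC's encoder)."""
    def __init__(self, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(1, latent, 5, stride=2, padding=2),
                                 nn.ReLU())
        self.dec = nn.ConvTranspose1d(latent, 1, 5, stride=2,
                                      padding=2, output_padding=1)

    def forward(self, x):
        z = self.enc(x)               # (B, latent, T/2) latent sequence
        return z, self.dec(z)         # latent codes and reconstruction

def clustering_loss(z, centroids, alpha=1.0):
    """DEC-style soft-assignment KL loss on time-pooled latent codes."""
    z = z.mean(dim=2)                                   # (B, latent)
    d2 = torch.cdist(z, centroids).pow(2)               # (B, K) distances
    q = (1 + d2 / alpha).pow(-(alpha + 1) / 2)
    q = q / q.sum(dim=1, keepdim=True)                  # soft assignments
    p = q ** 2 / q.sum(dim=0)
    p = p / p.sum(dim=1, keepdim=True)                  # sharpened target
    return F.kl_div(q.log(), p.detach(), reduction="batchmean")

model = TinyTemporalAE()
centroids = nn.Parameter(torch.randn(3, 8))             # K=3 clusters (assumed)
x = torch.randn(16, 1, 64)                               # batch of 16 sequences
z, recon = model(x)
loss = F.mse_loss(recon, x) + 0.1 * clustering_loss(z, centroids)
loss.backward()   # reconstruction and clustering optimized jointly
```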

Date Created
  • 2018

Longitudinal morphometric study of genetic influence of APOE e4 genotype on hippocampal atrophy - An N=1925 surface-based ADNI study

Description

The apolipoprotein E (APOE) e4 genotype is the most prevalent known genetic risk factor for Alzheimer's disease (AD). In this paper, we examine the longitudinal effect of APOE e4 on hippocampal morphometry in the Alzheimer's Disease Neuroimaging Initiative (ADNI). In general, hippocampal atrophy is more likely to occur in AD patients who carry the APOE e4 allele than in noncarriers. Moreover, brain structure and function depend on APOE genotype not only in Alzheimer's disease patients but also in healthy elderly individuals, so APOE genotyping is considered critical in clinical trials of Alzheimer's disease. We used a large sample of elderly participants and a new automated surface registration system based on surface conformal parameterization with holomorphic 1-forms and surface fluid registration. In this system, hippocampal surfaces are automatically segmented and reconstructed from MR images at multiple time points (6-month, 1-year, and 2-year follow-up). High-order correspondences between pairs of hippocampal surfaces are computed using a novel inverse-consistent surface fluid registration method. At each time point, using Hotelling's T^2 test, we found significant morphological deformation in APOE e4 carriers relative to noncarriers in the entire cohort, as well as in the non-demented (pooled MCI and control) subjects, affecting the left hippocampus more than the right; this effect was more pronounced in e4 homozygotes than in heterozygotes.
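
For reference, a two-sample Hotelling's T^2 test on multivariate deformation features can be computed in a few lines. This NumPy/SciPy sketch is a generic implementation of the statistic, not the paper's pipeline; the simulated group data are illustrative only:

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 test.

    x: (n1, p) feature vectors for group 1 (e.g., e4 carriers)
    y: (n2, p) feature vectors for group 2 (e.g., noncarriers)
    Returns (T2, F statistic, p-value).
    """
    n1, p = x.shape
    n2, _ = y.shape
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance of the two groups
    s = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s, diff)
    # Convert to an F statistic with (p, n1 + n2 - p - 1) degrees of freedom
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    pval = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, f, pval

rng = np.random.default_rng(1)
carriers = rng.normal(0.3, 1.0, size=(40, 3))      # simulated deformation features
noncarriers = rng.normal(0.0, 1.0, size=(60, 3))
print(hotelling_t2(carriers, noncarriers))
```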

Date Created
  • 2015

Ensuring high-quality colonoscopy by reducing polyp miss-rates

Description

Colorectal cancer is the second-highest cause of cancer-related deaths in the United States, with approximately 50,000 estimated deaths in 2015. The advanced stages of colorectal cancer have a poor five-year survival rate of 10%, whereas diagnosis in the early stages of development has shown a far more favorable five-year survival rate of 90%. Early diagnosis of colorectal cancer is achievable if colorectal polyps, a possible precursor to cancer, are detected and removed before developing into malignancy.

The preferred method for polyp detection and removal is optical colonoscopy. A colonoscopic procedure consists of two phases: (1) an insertion phase, during which a flexible endoscope (a flexible tube with a tiny video camera at the tip) is advanced via the anus and then gradually to the end of the colon, called the cecum, and (2) a withdrawal phase, during which the endoscope is gradually withdrawn while colonoscopists examine the colon wall to find and remove polyps. Colonoscopy is an effective procedure and has led to a significant decline in the incidence and mortality of colon cancer. However, despite many screening and therapeutic advances, 1 out of every 4 polyps and 1 out of every 13 colon cancers are missed during colonoscopy.

There are many factors that contribute to missed polyps and cancers including poor colon preparation, inadequate navigational skills, and fatigue. Poor colon preparation results in a substantial portion of colon covered with fecal content, hindering a careful examination of the colon. Inadequate navigational skills can prevent a colonoscopist from examining hard-to-reach regions of the colon that may contain a polyp. Fatigue can manifest itself in the performance of a colonoscopist by decreasing diligence and vigilance during procedures. Lack of vigilance may prevent a colonoscopist from detecting the polyps that briefly appear in the colonoscopy videos. Lack of diligence may result in hasty examination of the colon that is likely to miss polyps and lesions.

To reduce polyp and cancer miss rates, this research presents a quality assurance system with three components. The first component is an automatic polyp detection system that highlights the regions with suspected polyps in colonoscopy videos. The goal is to encourage more vigilance during procedures. The suggested polyp detection system consists of several novel modules: (1) a new patch descriptor that characterizes image appearance around boundaries more accurately and more efficiently than widely used patch descriptors such as HoG, LBP, and Daisy; (2) a two-stage classification framework that is able to enhance low-level image features prior to classification. Unlike the traditional way of image classification, where a single patch undergoes the processing pipeline, our system fuses the information extracted from a pair of patches for more accurate edge classification; (3) a new vote accumulation scheme that robustly localizes objects with curvy boundaries in fragmented edge maps. Our voting scheme produces a probabilistic output for each polyp candidate but, unlike existing methods (e.g., the Hough transform), does not require any predefined parametric model of the object of interest; and (4) a unique three-way image representation coupled with convolutional neural networks (CNNs) for classifying the polyp candidates. Our image representation efficiently captures a variety of features such as color, texture, shape, and temporal information, and significantly improves the performance of the subsequent CNNs for candidate classification. This contrasts with the existing methods, which mainly rely on a subset of the above image features for polyp detection. Furthermore, this research is the first to investigate the use of CNNs for polyp detection in colonoscopy videos.
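
As a rough illustration of the pair-of-patches fusion in module (2), the classifier can see the concatenated descriptors of the two patches flanking a candidate edge rather than a single patch. Everything in this sketch (the toy histogram descriptor, the logistic-regression classifier, the simulated data) is an assumption for illustration, not the thesis's design:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_features(patch):
    """Toy descriptor: intensity histogram of a patch (a stand-in for the
    thesis's boundary-aware patch descriptor)."""
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0), density=True)
    return hist

def paired_features(patch_inner, patch_outer):
    """Fuse information from the pair of patches flanking a candidate edge."""
    return np.concatenate([patch_features(patch_inner),
                           patch_features(patch_outer)])

rng = np.random.default_rng(0)
# Simulated training pairs: label 1 = polyp boundary, 0 = spurious edge
X = np.stack([paired_features(rng.random((32, 32)), rng.random((32, 32)))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)
edge_classifier = LogisticRegression(max_iter=1000).fit(X, y)
```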

The second component of our quality assurance system is automatic image quality assessment for colonoscopy. The goal is to encourage more diligence during procedures by warning against hasty and low-quality colon examination. We detect a low-quality colon examination by identifying a number of consecutive non-informative frames in videos. We base our methodology for detecting non-informative frames on two key observations: (1) non-informative frames most often show an unrecognizable scene with few details and blurry edges, so their information can be locally compressed into a few Discrete Cosine Transform (DCT) coefficients, whereas informative images contain far more detail, and their information content cannot be summarized by a small subset of DCT coefficients; (2) in informative frames, information content is spread over the entire image, whereas in non-informative frames, depending on image artifacts and degradation factors, details may appear in only a few regions. We use the former observation in designing our global features and the latter in designing our local image features. We demonstrated that the suggested new features are superior to the existing features based on wavelet and Fourier transforms.
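
The energy-compaction intuition behind the global feature is easy to demonstrate. The sketch below is a generic illustration (the top-coefficient fraction is my own choice of feature, not necessarily the thesis's): it measures how much of a frame's DCT energy sits in its largest few coefficients, which is high for blurry, non-informative frames:

```python
import numpy as np
from scipy.fft import dctn

def dct_energy_concentration(gray_frame, k=32):
    """Fraction of total DCT energy captured by the k largest coefficients.
    Blurry, non-informative frames tend to score near 1.0."""
    coeffs = dctn(gray_frame, norm="ortho")
    energy = np.sort(np.abs(coeffs).ravel())[::-1] ** 2
    return energy[:k].sum() / energy.sum()

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))                              # detail-rich stand-in
blurry = np.full((128, 128), 0.5) + 0.01 * rng.random((128, 128))
print(dct_energy_concentration(sharp))    # low concentration: energy spread out
print(dct_energy_concentration(blurry))   # close to 1.0: a few coefficients suffice
```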

The third component of our quality assurance system is a 3D visualization system. The goal is to provide colonoscopists with feedback about the regions of the colon that have remained unexamined during colonoscopy, thereby helping them improve their navigational skills. The suggested system is based on a new 3D reconstruction algorithm that combines depth and position information. We propose to use a depth camera and a tracking sensor to obtain the depth and position information, in contrast with existing works, in which depth and position are unreliably estimated from the colonoscopy frames themselves. We conducted a use-case experiment demonstrating that the suggested 3D visualization system can determine the unseen regions of the navigated environment. However, due to technology limitations, we were not able to evaluate the 3D visualization system using a phantom model of the colon.
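
The core geometric step in such depth-plus-pose reconstruction is unprojecting each depth pixel into world coordinates. This sketch shows that step with an assumed pinhole intrinsic matrix and sensor pose (placeholder values, not the thesis's calibration):

```python
import numpy as np

def depth_to_world(depth, K, pose):
    """Unproject a depth image to 3D world points.

    depth: (H, W) metric depth from the depth camera
    K:     3x3 pinhole intrinsics (assumed known from calibration)
    pose:  4x4 camera-to-world transform from the tracking sensor
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                 # camera-frame rays
    pts_cam = rays * depth.reshape(-1, 1)           # scale rays by depth
    pts_hom = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_hom @ pose.T)[:, :3]                # transform into world frame

K = np.array([[525.0, 0, 64], [0, 525.0, 48], [0, 0, 1]])  # assumed intrinsics
pose = np.eye(4)                                            # assumed sensor pose
depth = np.full((96, 128), 0.08)                            # 8 cm to the wall
points = depth_to_world(depth, K, pose)                     # (96*128, 3) points
```

Accumulating these point clouds over time, and marking which parts of the colon surface they cover, is what lets the system report unexamined regions back to the colonoscopist.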

Date Created
  • 2015

Video2Vec: learning semantic spatio-temporal embedding for video representations

Description

High-level inference tasks in video applications such as recognition, video retrieval, and zero-shot classification have become an active research area in recent years. One fundamental requirement for such applications is to extract high-quality features that maintain high-level information in the videos.

Many video feature extraction algorithms have been proposed, such as STIP, HOG3D, and Dense Trajectories. These algorithms are often referred to as “handcrafted” features, as they were deliberately designed based on reasonable considerations. However, they may fail when dealing with high-level tasks or complex-scene videos. Due to the success of deep convolutional neural networks (CNNs) in extracting global representations for static images, researchers have been applying similar techniques to video content. Typical techniques first extract spatial features by processing raw frames with deep convolutional architectures designed for static image classification; then simple averaging, concatenation, or classifier-based fusion/pooling methods are applied to the extracted features. I argue that features extracted in such ways do not capture enough representative information, since videos, unlike images, should be characterized as temporal sequences of semantically coherent visual content and thus need to be represented in a manner that considers both semantic and spatio-temporal information.

In this thesis, I propose a novel architecture that learns a semantic spatio-temporal embedding for videos to support high-level video analysis. The proposed method encodes spatial and temporal information separately, employing a deep architecture consisting of two channels of convolutional neural networks (capturing appearance and local motion) followed by their corresponding Fully Connected Gated Recurrent Unit (FC-GRU) encoders, which capture the longer-term temporal structure of the CNN features. The resulting spatio-temporal representation (a vector) is used to learn a mapping via a Fully Connected Multilayer Perceptron (FC-MLP) into the word2vec semantic embedding space, leading to a semantic interpretation of the video vector that supports high-level analysis. I evaluate the usefulness and effectiveness of this new video representation by conducting experiments on action recognition, zero-shot video classification, and semantic video retrieval (word-to-video), using the UCF101 action recognition dataset.
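
In outline, the pipeline is per-frame CNN features, a GRU over time, and an MLP projecting into the word2vec space. The PyTorch sketch below is a single-channel schematic under my own choice of dimensions (the usual 300-d word2vec space is assumed); it is not the thesis's exact two-channel network:

```python
import torch
import torch.nn as nn

class Video2VecSketch(nn.Module):
    """Schematic: per-frame CNN -> GRU over time -> MLP into word2vec space."""
    def __init__(self, cnn_dim=128, hidden=256, w2v_dim=300):
        super().__init__()
        self.cnn = nn.Sequential(                    # toy appearance channel
            nn.Conv2d(3, cnn_dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gru = nn.GRU(cnn_dim, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, w2v_dim))

    def forward(self, frames):                       # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)                       # final hidden state
        return self.mlp(h[-1])                       # (B, 300) video vector

model = Video2VecSketch()
video = torch.randn(2, 16, 3, 112, 112)              # 2 clips of 16 frames each
vec = model(video)
# Zero-shot matching: cosine similarity to a class word's word2vec embedding
word_vec = torch.randn(300)                          # placeholder embedding
score = torch.cosine_similarity(vec, word_vec.expand_as(vec))
```

Projecting videos into the word embedding space is what enables zero-shot classification: an unseen class can be recognized by comparing the video vector against the embedding of the class name itself.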

Date Created
  • 2016

A unified framework based on convolutional neural networks for interpreting carotid intima-media thickness videos

Description

Cardiovascular disease (CVD) is the leading cause of mortality yet is largely preventable; the key to prevention is to identify at-risk individuals before adverse events occur. For predicting individual CVD risk, carotid intima-media thickness (CIMT), a noninvasive ultrasound method, has proven valuable, offering several advantages over the CT coronary artery calcium score. However, each CIMT examination includes several ultrasound videos, and interpreting each of these videos involves three operations: (1) selecting three end-diastolic ultrasound frames (EUF) in the video, (2) localizing a region of interest (ROI) in each selected frame, and (3) tracing the lumen-intima interface and the media-adventitia interface in each ROI to measure CIMT. These operations are tedious, laborious, and time-consuming, a serious limitation that hinders the widespread use of CIMT in clinical practice. To overcome this limitation, this paper presents a new system to automate CIMT video interpretation. Our extensive experiments demonstrate that the suggested system significantly outperforms state-of-the-art methods. The superior performance is attributable to our unified framework based on convolutional neural networks (CNNs), coupled with an informative image representation and effective post-processing of the CNN outputs, uniquely designed for each of the three operations above.
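
Once the two interfaces are traced, the CIMT itself is essentially the separation between them inside the ROI. This NumPy sketch shows the measurement step under simplifying assumptions of my own (both interfaces sampled at the same columns, and a placeholder pixel spacing):

```python
import numpy as np

# Hypothetical traced interfaces inside one ROI: for each image column,
# the row (depth) of the lumen-intima and media-adventitia boundaries.
columns = np.arange(100)
lumen_intima = 40.0 + 0.5 * np.sin(columns / 10.0)   # placeholder trace
media_adventitia = lumen_intima + 8.0                # wall ~8 px further down

PIXEL_SPACING_MM = 0.06   # assumed ultrasound pixel size in mm

def cimt_mm(li_rows, ma_rows, spacing=PIXEL_SPACING_MM):
    """Mean intima-media thickness: column-wise interface separation."""
    return float(np.mean(ma_rows - li_rows) * spacing)

print(f"CIMT = {cimt_mm(lumen_intima, media_adventitia):.2f} mm")  # ~0.48 mm
```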

Date Created
  • 2016

Informatics approach to improving surgical skills training

Description

Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluating surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency and their skills are judged subjectively. Although the value of this method of skills assessment cannot be ignored, novel methodologies for objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. Having a method to judge skills automatically in real-life conditions should nonetheless be the ultimate goal, since only with such capabilities would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because the system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
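
A plausible instance of such skill-measuring features is a set of summary statistics over the tracked tool trajectory (path length, smoothness) fed to a regression scoring model. The sketch below is illustrative only; the feature choices, the linear model, and the simulated data are my assumptions, not the thesis's composite scoring model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def trajectory_features(xy):
    """Motion metrics from a tracked tool trajectory of shape (T, 2) in pixels."""
    step = np.diff(xy, axis=0)
    path_length = np.linalg.norm(step, axis=1).sum()   # total distance traveled
    jerk = np.diff(xy, n=3, axis=0)                    # 3rd-difference proxy
    smoothness = np.linalg.norm(jerk, axis=1).mean()   # lower = smoother motion
    return np.array([path_length, smoothness])

rng = np.random.default_rng(0)
# Simulated trajectories and expert-assigned scores, for illustration only
X = np.stack([trajectory_features(np.cumsum(rng.normal(size=(200, 2)), axis=0))
              for _ in range(50)])
expert_scores = rng.uniform(1, 5, size=50)
composite_model = LinearRegression().fit(X, expert_scores)
print(composite_model.predict(X[:3]))                  # automated skill scores
```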

Date Created
  • 2013