Matching Items (20)

Reconciling the Differences Between a Bottom-Up and Inverse-Estimated FFCO2 Emissions Estimate in a Large U.S. Urban Area

Description

The INFLUX experiment has taken multiple approaches to estimate the carbon dioxide (CO2) flux in a domain centered on the city of Indianapolis, Indiana. One approach, Hestia, uses a bottom-up technique relying on a mixture of activity data, fuel statistics, direct flux measurement and modeling algorithms. A second uses a Bayesian atmospheric inverse approach constrained by atmospheric CO2 measurements, with the Hestia emissions estimate as a prior CO2 flux. The difference in the central estimates of the two approaches comes to 0.94 MtC (an 18.7% difference) over the eight-month period between September 1, 2012 and April 30, 2013, a statistically significant difference at the 2-sigma level. Here we explore possible explanations for this apparent discrepancy in an attempt to reconcile the flux estimates. We focus on two broad categories: 1) biases in the largest of the bottom-up flux contributions and 2) missing CO2 sources. Though there is some evidence for small biases in the Hestia fossil fuel carbon dioxide (FFCO2) flux estimate as an explanation for the calculated difference, we find more support for missing CO2 fluxes, with biological respiration the largest of these. Incorporating these differences brings the Hestia bottom-up and the INFLUX inversion flux estimates into statistical agreement, and the reconciled estimates are additionally consistent with wintertime measurements of atmospheric 14CO2. We conclude that comparisons of bottom-up and top-down approaches must consider all flux contributions, and we highlight the important contribution of animal and biotic respiration to urban carbon budgets. Incorporation of the missing CO2 fluxes reconciles the bottom-up and inverse-based approaches in the INFLUX domain.

Date Created
2017-08-03

Optimizing the Spatial Resolution for Urban CO2 Flux Studies Using the Shannon Entropy

Description

The ‘Hestia Project’ uses a bottom-up approach to quantify fossil fuel CO2(FFCO2) emissions spatially at the building/street level and temporally at the hourly level. Hestia FFCO2 emissions are provided in the form of a group of sector-specific vector layers with point, line, and polygon sources to support carbon cycle science and climate policy. Application to carbon cycle science, in particular, requires regular gridded data in order to link surface carbon fluxes to atmospheric transport models. However, the heterogeneity and complexity of FFCO2 sources within regular grids is sensitive to spatial resolution. From the perspective of a data provider, we need to find a balance between resolution and data volume so that the gridded data product retains the maximum amount of information content while maintaining an efficient data volume.

The Shannon entropy determines the minimum number of bits needed to encode an information source and can serve as a metric for effective information content. In this paper, we present an analysis of the Shannon entropy of gridded FFCO2 emissions at varying resolutions in four Hestia study areas, and find: (1) the Shannon entropy increases as the grid resolution becomes finer until it reaches a maximum value (the max-entropy resolution); (2) total emissions (the sum of several sector-specific emission fields) show a finer max-entropy resolution than any of the sector-specific fields; (3) residential emissions show a finer max-entropy resolution than commercial emissions; (4) the max-entropy resolution of the onroad emissions grid is closely correlated with the density of the road network. These findings suggest that the Shannon entropy can detect the information effectiveness of the spatial resolution of gridded FFCO2 emissions. Hence, the resolution-entropy relationship can be used to help determine an appropriate spatial resolution for urban CO2 flux studies. We conclude that the optimal spatial resolution for providing Hestia total FFCO2 emissions products is centered around 100 m, at which the FFCO2 emissions data can not only fully meet the requirements of urban flux integration, but also be used effectively in understanding the relationships between FFCO2 emissions and various socioeconomic variables at the U.S. census block group level.
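The resolution-entropy relationship described above can be illustrated with a small sketch (this is not the Hestia codebase; the synthetic field and function names are hypothetical): each grid cell's share of total emissions is treated as a probability, and the entropy of the resulting distribution is tracked as the grid is refined.

```python
import numpy as np

def shannon_entropy(grid):
    """Shannon entropy (bits) of a gridded emissions field.

    Each cell's share of total emissions is treated as a probability;
    zero-emission cells contribute nothing.
    """
    flat = grid.ravel().astype(float)
    total = flat.sum()
    if total <= 0:
        return 0.0
    p = flat[flat > 0] / total
    return float(-(p * np.log2(p)).sum())

def regrid(field, factor):
    """Aggregate a fine field into coarser cells by summing factor x factor blocks."""
    n = field.shape[0] // factor * factor
    f = field[:n, :n]
    return f.reshape(n // factor, factor, n // factor, factor).sum(axis=(1, 3))

# Hypothetical high-resolution field standing in for a gridded FFCO2 product.
rng = np.random.default_rng(0)
fine = rng.lognormal(mean=0.0, sigma=2.0, size=(128, 128))

for factor in (16, 8, 4, 2, 1):  # coarse -> fine
    print(factor, round(shannon_entropy(regrid(fine, factor)), 2))
```

For this synthetic field the entropy grows as the grid is refined; in real emissions data the growth levels off once cells become finer than the underlying source structure, which is the max-entropy resolution the paper identifies.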

Date Created
2017-05-19

A Sparse Voxel Octree-Based Framework for Computing Solar Radiation Using 3D City Models

Description

An effective three-dimensional (3D) data representation is required to assess the spatial distribution of photovoltaic potential over urban building roofs and facades using 3D city models. Voxels have long been used as a spatial data representation, but practical applications of the voxel representation have been limited compared with rasters in traditional two-dimensional (2D) geographic information systems (GIS). We propose to use a sparse voxel octree (SVO) as the data representation to extend the GRASS GIS r.sun solar radiation model from 2D to 3D. The GRASS GIS r.sun model is nested in an SVO-based computing framework. The presented 3D solar radiation computing framework was applied to 3D building groups of different geometric complexities to demonstrate its efficiency and scalability. We present a method to explicitly compute diffuse shading losses in r.sun, and find that diffuse shading losses can reduce annual global radiation by up to 10% under clear-sky conditions. Hence, diffuse shading losses are significant, especially in complex urban environments.
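To illustrate why an SVO is memory-efficient for city-scale geometry, here is a minimal sketch (not the paper's implementation; all class and function names are hypothetical) in which only occupied octants are allocated, so sparse geometry produces far fewer nodes than a dense voxel grid.

```python
class SVONode:
    """Sparse voxel octree node: only occupied octants are allocated."""
    __slots__ = ("children",)

    def __init__(self):
        self.children = {}  # octant index (0-7) -> SVONode

def insert(root, x, y, z, depth):
    """Insert a point from the unit cube [0, 1)^3, subdividing `depth` levels.

    The octant index at each level comes from one bit of each coordinate's
    binary expansion (x bit, y bit, z bit packed into 0-7).
    """
    gx, gy, gz = (int(c * (1 << depth)) for c in (x, y, z))
    node = root
    for level in reversed(range(depth)):
        octant = (((gx >> level) & 1) << 2) | (((gy >> level) & 1) << 1) | ((gz >> level) & 1)
        node = node.children.setdefault(octant, SVONode())
    return node

def count_nodes(node):
    """Total allocated nodes -- a proxy for the memory the SVO actually uses."""
    return 1 + sum(count_nodes(c) for c in node.children.values())
```

A single building corner inserted at depth 4 allocates only a root-to-leaf path of 5 nodes, whereas a dense grid at the same depth would allocate 16^3 voxels regardless of occupancy.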

Date Created
2017-03-31

Use of cleavable fluorescent antibodies for highly multiplexed single cell in situ protein analysis

Description

The ability to profile proteins allows us to gain a deeper understanding of the organization, regulation, and function of different biological systems. Many technologies are currently used to perform protein profiling, including mass spectrometry, microarray-based analysis, and fluorescence microscopy. Closer analysis of these technologies has revealed limitations that reduce either the efficiency or the accuracy of the results. The objective of this project was to develop a technology for comprehensive, highly multiplexed single-cell in situ protein analysis without loss of the protein targets. This was accomplished in three steps, together referred to as the immunofluorescence cycle. Antibodies carrying fluorophores attached through a novel azide-based cleavable linker are used to detect protein targets. Fluorescence imaging and data storage are performed on the targets, and the fluorophores are then cleaved from the antibodies without loss of the protein targets. Repeated cycles of this immunofluorescence procedure can build a comprehensive and quantitative profile of the proteins. The development of such a technique will not only help us understand biological systems such as solid tumors, brain tissues, and developing embryos, but will also play a role in real-world applications such as signaling network analysis, molecular diagnosis, and cellular targeted therapies.

Date Created
2016-12

Multiplexed single-cell in situ RNA analysis by reiterative hybridization

Description

Currently, quantification of single-cell RNA species in their natural contexts is restricted by the small number of species that can be analyzed in parallel. Here we describe a method to increase the multiplexing capacity of RNA analysis for single cells in situ. First, RNA transcripts are detected by fluorescence in situ hybridization (FISH). After imaging and data storage are completed, the fluorescence signal is removed by photobleaching, and FISH is then reinitiated to detect other RNA species residing in the same cell. Through reiterative cycles of hybridization, imaging, and photobleaching, the identities, positions, and copy numbers of a large number of distinct RNA species can be quantified in individual cells in situ. Using this approach, we analyzed seven different transcripts in single HeLa cells with five reiterative RNA FISH cycles. This method has the potential to detect over 100 distinct RNA species in single cells in situ, and can be further applied in studies of systems biology, molecular diagnosis, and targeted therapies.

Date Created
2016-12

Offline and online adaboost for detecting anatomic structures

Description

Detecting anatomical structures, such as the carina, the pulmonary trunk, and the aortic arch, is an important step in designing a CAD system for detecting pulmonary embolism. The presented CAD system dispenses with high-level, predefined knowledge so that it can be easily extended to detect other anatomic structures. The system is based on a machine learning algorithm, AdaBoost, and a general feature, the Haar feature. This study emphasizes off-line and on-line AdaBoost learning; for on-line AdaBoost, the thesis further addresses extremely imbalanced conditions. The thesis first reviews several knowledge-based detection methods, which rely on human understanding of the relationships between anatomic structures. It then introduces classic off-line AdaBoost learning and applies a different cascading scheme, namely a multi-exit cascading scheme. A comparison between the two methods is provided and discussed. Both off-line AdaBoost methods have problems with memory usage and running time: they must store all the training samples, and the dataset must be fixed before training and cannot be enlarged dynamically. A different training dataset requires retraining the whole process, which is very time consuming and often impractical. To address the shortcomings of off-line learning, the study exploits an on-line AdaBoost learning approach. The thesis proposes a novel pool-based on-line method that uses Kalman filters and histograms to better represent the distribution of the samples' weights. Analyses of the performance, stability, and computational complexity are provided in the thesis. Furthermore, the original on-line AdaBoost performs badly under imbalanced conditions, which occur frequently in medical image processing: positive samples are limited while negative samples are countless. A novel Self-Adaptive Asymmetric On-line Boosting method is therefore presented.
The method uses a new asymmetric loss criterion that adapts to the ratio of exposed positive and negative samples, and an improved rule for updating a sample's importance weight that accounts for both the classification result and the sample's label. Compared to the traditional on-line AdaBoost learning method, the new method achieves far higher accuracy under imbalanced conditions.
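For reference, the classic (off-line) AdaBoost re-weighting step that the on-line and asymmetric variants build upon can be sketched as follows. This is the textbook binary AdaBoost update, not the thesis's pool-based or self-adaptive asymmetric rule, and the names are illustrative.

```python
import math

def adaboost_round(weights, labels, preds):
    """One AdaBoost round.

    weights: current normalized sample weights
    labels, preds: true labels and weak-learner predictions in {-1, +1}

    Returns the learner weight alpha and the re-normalized sample weights;
    misclassified samples gain weight so the next learner focuses on them.
    """
    eps = sum(w for w, y, p in zip(weights, labels, preds) if y != p) / sum(weights)
    eps = min(max(eps, 1e-10), 1 - 1e-10)          # guard against 0/1 error
    alpha = 0.5 * math.log((1 - eps) / eps)        # learner weight
    new_w = [w * math.exp(-alpha * y * p)          # up-weight mistakes (y*p = -1)
             for w, y, p in zip(weights, labels, preds)]
    s = sum(new_w)
    return alpha, [w / s for w in new_w]
```

The asymmetric variants described in the thesis modify this loss so that missing a rare positive sample costs more than misclassifying one of the abundant negatives.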

Date Created
2011

Informatics approach to improving surgical skills training

Description

Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluating surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency to judge their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. Nevertheless, judging skills automatically in real-life conditions should be the ultimate goal, since only with such a capability would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because our system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills.
To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.

Date Created
2013

Ensuring high-quality colonoscopy by reducing polyp miss-rates

Description

Colorectal cancer is the second-highest cause of cancer-related deaths in the United States, with approximately 50,000 estimated deaths in 2015. The advanced stages of colorectal cancer have a poor five-year survival rate of 10%, whereas diagnosis in the early stages of development has shown a much more favorable five-year survival rate of 90%. Early diagnosis of colorectal cancer is achievable if colorectal polyps, a possible precursor to cancer, are detected and removed before developing into malignancy.

The preferred method for polyp detection and removal is optical colonoscopy. A colonoscopic procedure consists of two phases: (1) an insertion phase, during which a flexible endoscope (a flexible tube with a tiny video camera at the tip) is advanced via the anus and then gradually to the end of the colon, called the cecum, and (2) a withdrawal phase, during which the endoscope is gradually withdrawn while colonoscopists examine the colon wall to find and remove polyps. Colonoscopy is an effective procedure and has led to a significant decline in the incidence and mortality of colon cancer. However, despite many screening and therapeutic advantages, 1 out of every 4 polyps and 1 out of every 13 colon cancers are missed during colonoscopy.

There are many factors that contribute to missed polyps and cancers including poor colon preparation, inadequate navigational skills, and fatigue. Poor colon preparation results in a substantial portion of colon covered with fecal content, hindering a careful examination of the colon. Inadequate navigational skills can prevent a colonoscopist from examining hard-to-reach regions of the colon that may contain a polyp. Fatigue can manifest itself in the performance of a colonoscopist by decreasing diligence and vigilance during procedures. Lack of vigilance may prevent a colonoscopist from detecting the polyps that briefly appear in the colonoscopy videos. Lack of diligence may result in hasty examination of the colon that is likely to miss polyps and lesions.

To reduce polyp and cancer miss rates, this research presents a quality assurance system with three components. The first component is an automatic polyp detection system that highlights regions with suspected polyps in colonoscopy videos. The goal is to encourage more vigilance during procedures. The suggested polyp detection system consists of several novel modules: (1) a new patch descriptor that characterizes image appearance around boundaries more accurately and more efficiently than widely used patch descriptors such as HoG, LBP, and Daisy; (2) a two-stage classification framework that is able to enhance low-level image features prior to classification. Unlike the traditional way of image classification, where a single patch undergoes the processing pipeline, our system fuses the information extracted from a pair of patches for more accurate edge classification; (3) a new vote accumulation scheme that robustly localizes objects with curvy boundaries in fragmented edge maps. Our voting scheme produces a probabilistic output for each polyp candidate but, unlike the existing methods (e.g., the Hough transform), does not require any predefined parametric model of the object of interest; (4) a unique three-way image representation coupled with convolutional neural networks (CNNs) for classifying the polyp candidates. Our image representation efficiently captures a variety of features such as color, texture, shape, and temporal information and significantly improves the performance of the subsequent CNNs for candidate classification. This contrasts with the existing methods that mainly rely on a subset of the above image features for polyp detection. Furthermore, this research is the first to investigate the use of CNNs for polyp detection in colonoscopy videos.

The second component of our quality assurance system is automatic image quality assessment for colonoscopy. The goal is to encourage more diligence during procedures by warning against hasty and low-quality colon examination. We detect a low-quality colon examination by identifying a number of consecutive non-informative frames in videos. We base our methodology for detecting non-informative frames on two key observations: (1) non-informative frames most often show an unrecognizable scene with few details and blurry edges, so their information can be locally compressed into a few Discrete Cosine Transform (DCT) coefficients, whereas informative images include many more details whose information content cannot be summarized by a small subset of DCT coefficients; (2) information content is spread all over the image in informative frames, whereas in non-informative frames, depending on image artifacts and degradation factors, details may appear in only a few regions. We use the former observation in designing our global image features and the latter in designing our local image features. We demonstrated that the suggested new features are superior to the existing features based on wavelet and Fourier transforms.
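The first (global) observation can be sketched with a minimal, illustrative measure of how much of a frame's spectral energy is concentrated in a few DCT coefficients. This is not the paper's exact feature set, and the function names are hypothetical.

```python
import numpy as np

def dct2(block):
    """2D DCT-II of a square block via the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)   # DC row normalization
    return C @ block @ C.T

def energy_concentration(frame, top=8):
    """Fraction of spectral energy held by the `top` largest-magnitude
    DCT coefficients.

    Values near 1 suggest a blurry, low-detail (non-informative) frame;
    detailed, informative frames spread their energy across many coefficients.
    """
    energy = np.abs(dct2(frame.astype(float))) ** 2
    flat = np.sort(energy.ravel())[::-1]
    return float(flat[:top].sum() / flat.sum())
```

A flat, featureless frame puts essentially all of its energy into the DC coefficient, while a richly textured frame spreads energy widely, so thresholding this ratio separates the two regimes.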

The third component of our quality assurance system is a 3D visualization system. The goal is to provide colonoscopists with feedback about the regions of the colon that have remained unexamined during colonoscopy, thereby helping them improve their navigational skills. The suggested system is based on a new 3D reconstruction algorithm that combines depth and position information for 3D reconstruction. We propose to use a depth camera and a tracking sensor to obtain depth and position information. Our system contrasts with the existing works where the depth and position information are unreliably estimated from the colonoscopy frames. We conducted a use case experiment, demonstrating that the suggested 3D visualization system can determine the unseen regions of the navigated environment. However, due to technology limitations, we were not able to evaluate our 3D visualization system using a phantom model of the colon.

Date Created
2015

Longitudinal morphometric study of genetic influence of APOE e4 genotype on hippocampal atrophy - An N=1925 surface-based ADNI study

Description

The apolipoprotein E (APOE) e4 genotype is the most prevalent known genetic risk factor for Alzheimer's disease (AD). In this paper, we examined the longitudinal effect of APOE e4 on hippocampal morphometry in the Alzheimer's Disease Neuroimaging Initiative (ADNI). In general, hippocampal atrophy is more likely to occur in AD patients who carry the APOE e4 allele than in APOE e4 noncarriers. Moreover, brain structure and function depend on APOE genotype not only in Alzheimer's disease patients but also in healthy elderly individuals, so APOE genotyping is considered critical in clinical trials of Alzheimer's disease. We studied a large sample of elderly participants using a new automated surface registration system based on surface conformal parameterization with holomorphic 1-forms and surface fluid registration. In this system, we automatically segmented and constructed hippocampal surfaces from MR images at several time points, such as the 6-month, 1-year, and 2-year follow-ups. We established high-order correspondences between pairs of hippocampal surfaces using a novel inverse-consistent surface fluid registration method. At each time point, using Hotelling's T^2 test, we found significant morphological deformation in APOE e4 carriers relative to noncarriers in the entire cohort as well as in the non-demented (pooled MCI and control) subjects, affecting the left hippocampus more than the right; this effect was more pronounced in e4 homozygotes than heterozygotes.
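For reference, the two-sample Hotelling's T^2 statistic used for this kind of carrier-versus-noncarrier group comparison can be sketched as follows. This is a generic textbook implementation, not the study's surface-analysis pipeline, and the names are hypothetical.

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 comparing the multivariate means of
    groups X (n1 x p) and Y (n2 x p), with its F-distributed transformation.
    """
    n1, p = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)
    # pooled within-group covariance
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # F-transformation for significance testing: F(p, n1 + n2 - p - 1)
    f_stat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    return float(t2), float(f_stat)
```

In surface-based morphometry the test is typically applied vertex-wise to low-dimensional deformation features, with multiple-comparison correction across the surface.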

Date Created
2015

A unified framework based on convolutional neural networks for interpreting carotid intima-media thickness videos

Description

Cardiovascular disease (CVD) is the leading cause of mortality, yet it is largely preventable; the key to prevention is to identify at-risk individuals before adverse events occur. For predicting individual CVD risk, carotid intima-media thickness (CIMT), a noninvasive ultrasound method, has proven to be valuable, offering several advantages over the CT coronary artery calcium score. However, each CIMT examination includes several ultrasound videos, and interpreting each of these CIMT videos involves three operations: (1) select three end-diastolic ultrasound frames (EUF) in the video, (2) localize a region of interest (ROI) in each selected frame, and (3) trace the lumen-intima interface and the media-adventitia interface in each ROI to measure CIMT. These operations are tedious, laborious, and time consuming, a serious limitation that hinders the widespread utilization of CIMT in clinical practice. To overcome this limitation, this paper presents a new system to automate CIMT video interpretation. Our extensive experiments demonstrate that the suggested system significantly outperforms the state-of-the-art methods. The superior performance is attributable to our unified framework based on convolutional neural networks (CNNs), coupled with our informative image representation and effective post-processing of the CNN outputs, which are uniquely designed for each of the above three operations.

Date Created
2016