The title “Regents’ Professor” is the highest faculty honor awarded at Arizona State University. It is conferred on ASU faculty who have made pioneering contributions in their areas of expertise, who have achieved a sustained level of distinction, and who enjoy national and international recognition for these accomplishments. This collection contains primarily open-access works by ASU Regents’ Professors.

Description
Purpose: To evaluate a new method of measuring ocular exposure in the context of a natural blink pattern through analysis of the variables tear film breakup time (TFBUT), interblink interval (IBI), and tear film breakup area (BUA).
Methods: The traditional methodology (Forced-Stare [FS]) measures TFBUT and IBI separately. TFBUT is measured under forced-stare conditions by an examiner using a stopwatch, while IBI is measured as the subject watches television. The new methodology (video capture manual analysis [VCMA]) involves retrospective analysis of video data of fluorescein-stained eyes taken through a slit lamp while the subject watches television, and provides TFBUT and BUA for each IBI during the 1-minute video under natural blink conditions. The FS and VCMA methods were directly compared in the same set of dry-eye subjects. The VCMA method was evaluated for the ability to discriminate between dry-eye subjects and normal subjects. The VCMA method was further evaluated in the dry-eye subjects for the ability to detect a treatment effect before, and 10 minutes after, bilateral instillation of an artificial tear solution.
Results: Ten normal subjects and 17 dry-eye subjects were studied. In the dry-eye subjects, the two methods differed with respect to mean TFBUTs (5.82 seconds, FS; 3.98 seconds, VCMA; P = 0.002). The FS variables alone (TFBUT, IBI) were not able to successfully distinguish between the dry-eye and normal subjects, whereas the additional VCMA variables, both derived and observed (BUA, BUA/IBI, breakup rate), were able to successfully distinguish between the dry-eye and normal subjects in a statistically significant fashion. TFBUT (P = 0.034) and BUA/IBI (P = 0.001) were able to distinguish the treatment effect of artificial tears in dry-eye subjects.
Conclusion: The VCMA methodology provides a clinically relevant analysis of tear film stability measured in the context of a natural blink pattern.
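
The derived VCMA variables (BUA/IBI, breakup rate) are simple aggregates over the per-interval video measurements. Below is a minimal sketch of that aggregation step; it is not the authors' software, and the field names, the interpretation of BUA/IBI as total breakup area over total interblink time, and the example values are all assumptions for illustration.

```python
# Sketch of VCMA-style derived metrics from hypothetical per-IBI data.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InterblinkInterval:
    duration_s: float         # IBI length in seconds
    tfbut_s: Optional[float]  # time to first breakup; None if no breakup occurred
    breakup_area: float       # breakup area (BUA) as a fraction of the exposed cornea

def vcma_summary(intervals):
    """Aggregate per-IBI measurements into VCMA-style derived variables."""
    broken = [iv for iv in intervals if iv.tfbut_s is not None]
    total_ibi = sum(iv.duration_s for iv in intervals)
    return {
        "mean_tfbut_s": sum(iv.tfbut_s for iv in broken) / len(broken) if broken else float("nan"),
        "bua_per_ibi": sum(iv.breakup_area for iv in intervals) / total_ibi,
        "breakup_rate": len(broken) / len(intervals),  # fraction of IBIs with any breakup
    }

# Hypothetical per-interval data from one subject's 1-minute video
video = [
    InterblinkInterval(duration_s=4.2, tfbut_s=3.1, breakup_area=0.08),
    InterblinkInterval(duration_s=6.0, tfbut_s=4.0, breakup_area=0.15),
    InterblinkInterval(duration_s=2.5, tfbut_s=None, breakup_area=0.0),
]
print(vcma_summary(video))
```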
Created: 2011-09-21
Description
Purpose: To investigate use of an improved ocular tear film analysis protocol (OPI 2.0) in the Controlled Adverse Environment (CAE℠) model of dry eye disease, and to examine the utility of new metrics in the identification of subpopulations of dry eye patients.
Methods: Thirty-three dry eye subjects completed a single-center, single-visit, pilot CAE study. The primary endpoint was mean break-up area (MBA) as assessed by the OPI 2.0 system. Secondary endpoints included corneal fluorescein staining, tear film break-up time, and OPI 2.0 system measurements. Subjects were also asked to rate their ocular discomfort throughout the CAE. Dry eye endpoints were measured at baseline, immediately following a 90-minute CAE exposure, and again 30 minutes after exposure.
Results: The post-CAE measurements of MBA showed a statistically significant decrease from the baseline measurements. The decrease was relatively specific to those patients with moderate to severe dry eye, as measured by baseline MBA. Secondary endpoints, including palpebral fissure size, corneal staining, and redness, also showed significant changes when pre- and post-CAE measurements were compared. A correlation analysis identified specific associations between MBA, blink rate, and palpebral fissure size. Comparison of MBA responses allowed us to identify subpopulations of subjects who exhibited different compensatory mechanisms in response to CAE challenge. Of note, none of the measures of tear film break-up time showed statistically significant changes or correlations in pre- versus post-CAE measures.
Conclusion: This pilot study confirms that the tear film metric MBA can detect changes in the ocular surface induced by a CAE, and that these changes are correlated with other, established measures of dry eye disease. The observed decrease in MBA following CAE exposure demonstrates that compensatory mechanisms are initiated during the CAE exposure, and that this compensation may provide the means to identify and characterize clinically relevant subpopulations of dry eye patients.
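
The primary comparison in this design is a paired pre/post contrast within subjects, with correlations screened between endpoints. A minimal sketch of that style of analysis follows; it is not the study's analysis code, and every number in it is invented. Only standard scipy routines are used.

```python
# Sketch of a paired pre/post comparison and an endpoint correlation screen.
import numpy as np
from scipy import stats

# Hypothetical mean break-up area (MBA) per subject, before and after CAE exposure
mba_baseline = np.array([0.42, 0.35, 0.50, 0.28, 0.61, 0.33])
mba_post_cae = np.array([0.30, 0.29, 0.38, 0.27, 0.44, 0.31])

# Paired t-test: did MBA change significantly after the 90-minute exposure?
t_stat, p_value = stats.ttest_rel(mba_baseline, mba_post_cae)
print(f"mean change = {np.mean(mba_post_cae - mba_baseline):+.3f}, p = {p_value:.4f}")

# Associations between endpoints (e.g., MBA vs. blink rate) via Pearson correlation
blink_rate = np.array([12, 18, 9, 22, 7, 15])  # blinks/minute, hypothetical
r, p = stats.pearsonr(mba_baseline, blink_rate)
print(f"MBA vs. blink rate: r = {r:.2f}, p = {p:.3f}")
```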
Created: 2012-11-12
Description

Background:
Drosophila gene expression pattern images document the spatiotemporal dynamics of gene expression during embryogenesis. A comparative analysis of these images could provide a fundamentally important way for studying the regulatory networks governing development. To facilitate pattern comparison and searching, groups of images in the Berkeley Drosophila Genome Project (BDGP) high-throughput study were manually annotated with a variable number of anatomical terms from a controlled vocabulary. Considering that the number of available images is rapidly increasing, it is imperative to design computational methods to automate this task.

Results:
We present a computational method to annotate gene expression pattern images automatically. The method uses the bag-of-words scheme to capture existing pattern annotation information and annotates images with a model that exploits correlations among terms. It can annotate images individually or in groups (e.g., according to developmental stage), and it can integrate information from different two-dimensional views of embryos. Results on embryonic patterns from BDGP data demonstrate that our method significantly outperforms other methods.

Conclusion:
The proposed bag-of-words scheme is effective in representing a set of annotations assigned to a group of images, and the model employed to annotate images successfully captures the correlations among different controlled vocabulary terms. The integration of existing annotation information from multiple embryonic views improves annotation performance.
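
To illustrate the bag-of-words representation for group-level annotation, the sketch below turns a group of images, each carrying a few controlled-vocabulary terms, into a single term-count vector. The vocabulary, the terms, and the example group are all hypothetical, and the correlation-aware annotation model from the paper is beyond this sketch.

```python
# Sketch: bag-of-words vector over a toy controlled vocabulary for an image group.
from collections import Counter

VOCABULARY = ["ventral nerve cord", "brain", "midgut", "hindgut", "epidermis"]

def bag_of_words(group_annotations):
    """Count, over all images in a group, how often each vocabulary term appears."""
    counts = Counter(term for image_terms in group_annotations for term in image_terms)
    return [counts[term] for term in VOCABULARY]

# A hypothetical group of three images sharing a developmental stage
group = [
    ["brain", "ventral nerve cord"],
    ["brain", "midgut"],
    ["midgut", "hindgut"],
]
print(dict(zip(VOCABULARY, bag_of_words(group))))
# {'ventral nerve cord': 1, 'brain': 2, 'midgut': 2, 'hindgut': 1, 'epidermis': 0}
```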

Contributors: Ji, Shuiwang (Author) / Li, Ying-Xin (Author) / Zhou, Zhi-Hua (Author) / Kumar, Sudhir (Author) / Ye, Jieping (Author) / Biodesign Institute (Contributor) / Ira A. Fulton Schools of Engineering (Contributor) / School of Electrical, Computer and Energy Engineering (Contributor) / College of Liberal Arts and Sciences (Contributor) / School of Life Sciences (Contributor)
Created: 2009-04-21
Description
Background
Drosophila melanogaster has been established as a model organism for investigating developmental gene interactions. The spatio-temporal gene expression patterns of Drosophila melanogaster can be visualized by in situ hybridization and documented as digital images. Automated and efficient tools for analyzing these expression images will provide biological insights into gene functions, interactions, and networks. To facilitate pattern recognition and comparison, many web-based resources have been created to conduct comparative analysis based on body part keywords and the associated images. With the fast accumulation of images from high-throughput techniques, manual inspection of images will impose a serious impediment on the pace of biological discovery. It is thus imperative to design an automated system for efficient image annotation and comparison.
Results
We present a computational framework to perform anatomical keyword annotation for Drosophila gene expression images. The spatial sparse coding approach is used to represent local patches of images and is compared with the well-known bag-of-words (BoW) method. Three pooling functions are employed to transform the sparse codes to image features: max pooling, average pooling, and Sqrt (square root of mean squared statistics) pooling. Based on the constructed features, we develop both an image-level scheme and a group-level scheme to tackle the key challenges in annotating Drosophila gene expression pattern images automatically. To deal with the imbalanced data distribution inherent in image annotation tasks, the undersampling method is applied together with majority vote. Results on Drosophila embryonic expression pattern images verify the efficacy of our approach.
Conclusion
In our experiment, the three pooling functions perform comparably well in feature dimension reduction. The undersampling with majority vote is shown to be effective in tackling the problem of imbalanced data. Moreover, combining sparse coding and image-level scheme leads to consistent performance improvement in keywords annotation.
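
The three pooling functions named above are standard reductions over a matrix of sparse codes (one row per local image patch, one column per dictionary atom). A minimal sketch of just the pooling step follows, with synthetic codes; the sparse coding itself and the annotation schemes are out of scope here.

```python
# Sketch of the three pooling functions applied to a patch-by-atom code matrix.
import numpy as np

def max_pooling(codes):
    return codes.max(axis=0)                   # strongest response per atom

def average_pooling(codes):
    return codes.mean(axis=0)                  # mean response per atom

def sqrt_pooling(codes):
    return np.sqrt((codes ** 2).mean(axis=0))  # square root of mean squared statistics

rng = np.random.default_rng(0)
codes = np.abs(rng.normal(size=(50, 8)))  # 50 hypothetical patches, 8 dictionary atoms
for pool in (max_pooling, average_pooling, sqrt_pooling):
    print(pool.__name__, np.round(pool(codes), 2))
```

Each function maps the variable number of patches in an image to a fixed-length feature vector, which is what makes downstream annotation models applicable.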
Contributors: Sun, Qian (Author) / Muckatira, Sherin (Author) / Yuan, Lei (Author) / Ji, Shuiwang (Author) / Newfeld, Stuart (Author) / Kumar, Sudhir (Author) / Ye, Jieping (Author) / Biodesign Institute (Contributor) / Center for Evolution and Medicine (Contributor) / College of Liberal Arts and Sciences (Contributor) / School of Life Sciences (Contributor) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2013-12-03
Description
Background
Fruit fly embryogenesis is one of the best understood animal development systems, and the spatiotemporal gene expression dynamics in this process are captured by digital images. Analysis of these high-throughput images will provide novel insights into the functions, interactions, and networks of animal genes governing development. To facilitate comparative analysis, web-based interfaces have been developed to conduct image retrieval based on body part keywords and images. Currently, the keyword annotation of spatiotemporal gene expression patterns is conducted manually. However, this manual practice does not scale with the continuously expanding collection of images. In addition, existing image retrieval systems based on the expression patterns may be made more accurate using keywords.
Results
In this article, we adapt advanced data mining and computer vision techniques to address the key challenges in annotating and retrieving fruit fly gene expression pattern images. To boost the performance of image annotation and retrieval, we propose representations integrating spatial information and sparse features, overcoming the limitations of prior schemes.
Conclusions
We perform systematic experimental studies to evaluate the proposed schemes in comparison with current methods. Experimental results indicate that the integration of spatial information and sparse features leads to consistent performance improvement in image annotation, while for the task of retrieval, sparse features alone yield better results.
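
One common way to integrate spatial information into pooled features is to divide the image into a coarse grid, pool patch codes within each cell, and concatenate the results. The sketch below illustrates that generic idea with synthetic data; it is not necessarily the exact representation used in the article, and all sizes and names are assumptions.

```python
# Sketch of grid-based spatial pooling over patch-level sparse codes.
import numpy as np

def spatial_pool(codes, xy, image_size, grid=(2, 2)):
    """codes: (n_patches, n_atoms) sparse codes; xy: (n_patches, 2) patch centers."""
    h, w = image_size
    rows, cols = grid
    n_atoms = codes.shape[1]
    feature = []
    for r in range(rows):
        for c in range(cols):
            in_cell = (
                (xy[:, 0] >= r * h / rows) & (xy[:, 0] < (r + 1) * h / rows) &
                (xy[:, 1] >= c * w / cols) & (xy[:, 1] < (c + 1) * w / cols)
            )
            cell = codes[in_cell]
            # Max-pool within the cell; empty cells contribute zeros
            feature.append(cell.max(axis=0) if len(cell) else np.zeros(n_atoms))
    return np.concatenate(feature)  # length = rows * cols * n_atoms

rng = np.random.default_rng(1)
codes = np.abs(rng.normal(size=(100, 8)))
xy = rng.uniform([0, 0], [128, 320], size=(100, 2))  # patch centers in a 128x320 image
print(spatial_pool(codes, xy, image_size=(128, 320)).shape)  # (32,)
```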
Contributors: Yuan, Lei (Author) / Woodard, Alexander (Author) / Ji, Shuiwang (Author) / Jiang, Yuan (Author) / Zhou, Zhi-Hua (Author) / Kumar, Sudhir (Author) / Ye, Jieping (Author) / Biodesign Institute (Contributor) / Center for Evolution and Medicine (Contributor) / Ira A. Fulton Schools of Engineering (Contributor) / College of Liberal Arts and Sciences (Contributor) / School of Life Sciences (Contributor)
Created: 2012-05-23
Description
Background
Multicellular organisms consist of cells of many different types that are established during development. Each type of cell is characterized by the unique combination of expressed gene products as a result of spatiotemporal gene regulation. Currently, a fundamental challenge in regulatory biology is to elucidate the gene expression controls that generate the complex body plans during development. Recent advances in high-throughput biotechnologies have generated spatiotemporal expression patterns for thousands of genes in the model organism fruit fly Drosophila melanogaster. Enhancing existing qualitative methods with the quantitative analysis enabled by the computational tools we present in this paper would provide promising ways to address key scientific questions.
Results
We develop a set of computational methods and open source tools for identifying co-expressed embryonic domains and the associated genes simultaneously. To map the expression patterns of many genes into the same coordinate space and account for the embryonic shape variations, we develop a mesh generation method to deform a meshed generic ellipse to each individual embryo. We then develop a co-clustering formulation to cluster the genes and the mesh elements, thereby identifying co-expressed embryonic domains and the associated genes simultaneously. Experimental results indicate that the gene and mesh co-clusters can be correlated to key developmental events during the stages of embryogenesis we study. The open source software tool has been made available at http://compbio.cs.odu.edu/fly/.
Conclusions
Our mesh generation and machine learning methods and tools improve upon the flexibility, ease-of-use and accuracy of existing methods.
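
The core analysis step here is co-clustering a gene-by-mesh-element expression matrix so that gene clusters and embryonic domains emerge together. The authors' formulation is their own (see the linked software); as a hedged illustration of the general idea only, the sketch below applies scikit-learn's off-the-shelf SpectralCoclustering to a synthetic matrix with two planted blocks.

```python
# Sketch: co-clustering genes (rows) and mesh elements (columns) simultaneously.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(2)
# Synthetic expression matrix: 60 genes x 40 mesh elements, with two planted
# blocks of co-expressed genes/domains plus background noise.
X = rng.random((60, 40)) * 0.1
X[:30, :20] += 1.0   # block 1: genes 0-29 expressed in mesh elements 0-19
X[30:, 20:] += 1.0   # block 2: genes 30-59 expressed in mesh elements 20-39

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
print("gene cluster sizes:", np.bincount(model.row_labels_))
print("mesh-element cluster sizes:", np.bincount(model.column_labels_))
```

On this toy matrix the recovered row and column clusters correspond to the planted blocks, i.e., each gene cluster is paired with the embryonic domain in which those genes are co-expressed.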
Contributors: Zhang, Wenlu (Author) / Feng, Daming (Author) / Li, Rongjian (Author) / Chernikov, Andrey (Author) / Chrisochoides, Nikos (Author) / Osgood, Christopher (Author) / Konikoff, Charlotte (Author) / Newfeld, Stuart (Author) / Kumar, Sudhir (Author) / Ji, Shuiwang (Author) / Biodesign Institute (Contributor) / Center for Evolution and Medicine (Contributor) / College of Liberal Arts and Sciences (Contributor) / School of Life Sciences (Contributor)
Created: 2013-12-28