Matching Items (113)
130342-Thumbnail Image.png
Description
Background
Grading schemes for breast cancer diagnosis are predominantly based on pathologists' qualitative assessment of altered nuclear structure from 2D brightfield microscopy images. However, cells are three-dimensional (3D) objects with features that are inherently 3D and thus poorly characterized in 2D. Our goal is to quantitatively characterize nuclear structure in 3D, assess its variation with malignancy, and investigate whether such variation correlates with standard nuclear grading criteria.
Methodology
We applied micro-optical computed tomographic imaging and automated 3D nuclear morphometry to quantify and compare morphological variations between human cell lines derived from normal, benign fibrocystic or malignant breast epithelium. To reproduce the appearance and contrast in clinical cytopathology images, we stained cells with hematoxylin and eosin and obtained 3D images of 150 individual stained cells of each cell type at sub-micron, isotropic resolution. Applying volumetric image analyses, we computed 42 3D morphological and textural descriptors of cellular and nuclear structure.
Principal Findings
We observed four distinct nuclear shape categories, the predominant being a mushroom cap shape. Cell and nuclear volumes increased from normal to fibrocystic to metastatic type, but there was little difference in the volume ratio of nucleus to cytoplasm (N/C ratio) between the lines. Abnormal cell nuclei had more nucleoli, markedly higher density and clumpier chromatin organization compared to normal. Nuclei of non-tumorigenic, fibrocystic cells exhibited larger textural variations than metastatic cell nuclei. At p<0.0025 by ANOVA and Kruskal-Wallis tests, 90% of our computed descriptors statistically differentiated control from abnormal cell populations, but only 69% of these features statistically differentiated the fibrocystic from the metastatic cell populations.
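The descriptor-screening step described in the findings can be sketched as follows. This is a hypothetical illustration, assuming per-cell feature vectors for the three cell lines; the feature names and simulated values are stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(0)
n = 150  # cells imaged per cell type, as in the study

# Simulated per-cell descriptors for (normal, fibrocystic, metastatic);
# the feature names and distributions are hypothetical.
features = {
    "nuclear_volume": [rng.normal(mu, 30, n) for mu in (200, 260, 310)],
    "nc_ratio": [rng.normal(0.5, 0.05, n) for _ in range(3)],
}

ALPHA = 0.0025  # significance level used in the study
for name, groups in features.items():
    _, p_anova = f_oneway(*groups)   # one-way ANOVA across the three lines
    _, p_kw = kruskal(*groups)       # nonparametric Kruskal-Wallis check
    separates = p_anova < ALPHA and p_kw < ALPHA
    print(f"{name}: ANOVA p={p_anova:.2e}, KW p={p_kw:.2e}, "
          f"separates groups: {separates}")
```

A descriptor counts toward the reported 90% only if both tests fall below the threshold, which is how the dual ANOVA/Kruskal-Wallis criterion in the abstract would operate.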
Conclusions
Our results provide a new perspective on nuclear structure variations associated with malignancy and point to the value of automated quantitative 3D nuclear morphometry as an objective tool to enable development of sensitive and specific nuclear grade classification in breast cancer diagnosis.
Created2012-01-05
134875-Thumbnail Image.png
Description
Productivity in the construction industry is an essential measure of production efficiency and economic progress, quantified by craft laborers' time spent directly adding value to a project. In order to better understand craft labor productivity as an aspect of lean construction, an activity analysis was conducted at the Arizona State University Palo Verde Main engineering dormitory construction site in December of 2016. The objective of this analysis of craft labor productivity in construction projects was to gather data on the efficiency of craft labor workers, draw conclusions about the effects of time of day and other site-specific factors on labor productivity, and suggest improvements to implement in the construction process. Analysis suggests that supporting tasks, such as traveling or materials handling, constitute the majority of craft laborers' efforts on the job site, with the highest percentages occurring at the beginning and end of the work day. Direct work and delays were approximately equal at about 20% each hour, with the highest peak occurring at lunchtime between 10:00 am and 11:00 am. The top suggestion to improve construction productivity would be to perform an extensive site utilization analysis due to the confined nature of this job site. Despite the limitations of an activity analysis in providing a complete perspective of all the factors that can affect craft labor productivity, as well as the small number of days of data acquisition, this analysis provides a basic overview of the productivity at the Palo Verde Main construction site. Through this research, construction managers can more effectively generate site plans and schedules to increase labor productivity.
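The hourly tallying behind an activity analysis of this kind can be sketched as follows; the observation records and category labels below are illustrative stand-ins, not the Palo Verde Main data:

```python
from collections import Counter, defaultdict

# Each record is (hour_of_day, category) for one worker sighting.
# Categories follow the study's breakdown: direct work, support tasks, delay.
observations = [
    (7, "support"), (7, "direct"), (8, "direct"), (8, "delay"),
    (10, "delay"), (10, "direct"), (16, "support"), (16, "support"),
]

# Tally sightings per hour and category.
by_hour = defaultdict(Counter)
for hour, category in observations:
    by_hour[hour][category] += 1

# Convert counts into the hourly percentage breakdown reported in
# activity analyses (e.g., "direct work was about 20% each hour").
for hour in sorted(by_hour):
    total = sum(by_hour[hour].values())
    shares = {c: 100 * n / total for c, n in by_hour[hour].items()}
    print(hour, shares)
```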
ContributorsFord, Emily Lucile (Author) / Grau, David (Thesis director) / Chong, Oswald (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
134662-Thumbnail Image.png
Description
The overall energy consumption around the United States has not been reduced even with the advancement of technology over the past decades. Deficiencies exist between design and actual energy performances. Energy Infrastructure Systems (EIS) are impacted when the amount of energy production cannot be accurately and efficiently forecasted. Inaccurate engineering assumptions can result when there is a lack of understanding on how energy systems can operate in real-world applications. Energy systems are complex, which results in unknown system behaviors, due to an unknown structural system model. Currently, there exists a lack of data mining techniques in reverse engineering, which are needed to develop efficient structural system models. In this project, a new type of reverse engineering algorithm has been applied to a year's worth of energy data collected from an ASU research building called MacroTechnology Works, to identify the structural system model. Developing and understanding structural system models is the first step in creating accurate predictive analytics for energy production. The associative network of the building's data will be highlighted to accurately depict the structural model. This structural model will enhance energy infrastructure systems' energy efficiency, reduce energy waste, and narrow the gaps between energy infrastructure design, planning, operation and management (DPOM).
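One simple way such an associative network could be recovered from building telemetry is by thresholding pairwise correlations between sensor series. This is only a hedged sketch under that assumption, with synthetic variable names; the project's actual reverse-engineering algorithm is not described here in enough detail to reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 8760  # one year of hourly readings

# Synthetic stand-ins for building energy variables; chiller load is
# deliberately coupled to outdoor temperature, lighting is independent.
outdoor_temp = rng.normal(25, 8, hours)
chiller_kw = 2.0 * outdoor_temp + rng.normal(0, 4, hours)
lighting_kw = rng.normal(50, 5, hours)

series = {"outdoor_temp": outdoor_temp,
          "chiller_kw": chiller_kw,
          "lighting_kw": lighting_kw}

THRESHOLD = 0.5  # assumed cutoff for declaring an association
names = list(series)
edges = []
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(series[a], series[b])[0, 1]
        if abs(r) > THRESHOLD:
            edges.append((a, b, round(r, 2)))
print(edges)  # edges of the inferred associative network
```

The surviving edges form the structural model's associative network: only the physically coupled pair remains.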
ContributorsCamarena, Raquel Jimenez (Author) / Chong, Oswald (Thesis director) / Ye, Nong (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
134706-Thumbnail Image.png
Description
Open source image analytics and data mining software are widely available but can be overly complicated and unintuitive for physicians and medical researchers to use. The ASU-Mayo Clinic Imaging Informatics Lab has developed an in-house pipeline to process medical images, extract imaging features, and develop multi-parametric models to assist disease staging and diagnosis. The tools have been extensively used in a number of medical studies including brain tumor, breast cancer, liver cancer, Alzheimer's disease, and migraine. Recognizing the need from users in the medical field for a simplified interface and streamlined functionalities, this project aims to democratize this pipeline so that it is more readily available to health practitioners and third party developers.
ContributorsBaer, Lisa Zhou (Author) / Wu, Teresa (Thesis director) / Wang, Yalin (Committee member) / Computer Science and Engineering Program (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
135209-Thumbnail Image.png
Description
Building construction, design and maintenance is a sector of engineering where improved efficiency will have immense impacts on resource consumption and environmental health. This research closely examines the Leadership in Energy and Environmental Design (LEED) rating system and the International Green Construction Code (IgCC). The IgCC is a model code, written with the same structure as many building codes. It is a standard that can be enforced if a city's government decides to adopt it. When the IgCC is enforced, a building either meets all of the requirements set forth in the document or fails to meet the code standards. The LEED rating system, on the other hand, is not a building code. LEED certified buildings are built according to the standards of their local jurisdiction, and in addition, building owners can choose to pursue a LEED certification. This is a rating system that awards points based on the sustainable measures achieved by a building. A comparison of these green building systems highlights their accomplishments in terms of reduced electricity usage, use of low-impact materials, indoor environmental quality and other innovative features. It was determined that, in general, the IgCC is a more holistic, stringent approach to green building, while the LEED rating system offers a wider variety of green building options. In addition, building data from LEED certified buildings were compiled and analyzed to understand important trends. Both of these methods are progressing toward low-impact, efficient infrastructure, and a side-by-side comparison, as done in this research, sheds light on the strengths and weaknesses of each method, allowing for future improvements.
ContributorsCampbell, Kaleigh Ruth (Author) / Chong, Oswald (Thesis director) / Parrish, Kristen (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
133914-Thumbnail Image.png
Description
This paper describes research done to quantify the relationship between external air temperature and energy consumption, and between internal air temperature and energy consumption. The study was conducted on a LEED Gold certified building, College Avenue Commons, located on Arizona State University's Tempe campus. It includes background on previous studies in the area, some that agree with the research hypotheses and some that take a different path. Real-time data were collected hourly for energy consumption and external air temperature. Intermittent internal air temperature data were collected by undergraduate researcher Charles Banke. Regression analysis was used to test the two research hypotheses. The authors found no correlation between external air temperature and energy consumption, nor did they find a relationship between internal air temperature and energy consumption. This paper also includes recommendations for future work to improve the study.
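The regression step can be sketched as follows, using synthetic stand-in readings rather than the College Avenue Commons data:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
# Synthetic hourly readings: external temperature and an energy series
# deliberately generated with no temperature dependence.
temp_c = rng.uniform(10, 42, 200)
energy_kwh = rng.normal(400, 25, 200)

# Ordinary least squares of consumption on external temperature.
fit = linregress(temp_c, energy_kwh)
print(f"slope={fit.slope:.2f} kWh/degC, r^2={fit.rvalue**2:.3f}, "
      f"p={fit.pvalue:.3f}")
```

A near-zero slope and r² of this kind would mirror the study's finding of no correlation between external air temperature and energy consumption.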
ContributorsBanke, Charles Michael (Author) / Chong, Oswald (Thesis director) / Parrish, Kristen (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
152126-Thumbnail Image.png
Description
Video object segmentation (VOS) is an important task in computer vision with many applications, e.g., video editing, object tracking, and object-based encoding. Unlike image object segmentation, video object segmentation must consider both spatial and temporal coherence for the object. Despite extensive previous work, the problem is still challenging. Usually, the foreground object in a video draws more attention from humans, i.e., it is salient. In this thesis we tackle the problem from the aspect of saliency, where saliency means a certain subset of visual information selected by a visual system (human or machine). We present a novel unsupervised method for video object segmentation that considers both low-level vision cues and high-level motion cues. In our model, video object segmentation can be formulated as a unified energy minimization problem and solved in polynomial time by employing the min-cut algorithm. Specifically, our energy function comprises a unary term and a pairwise interaction term, where the unary term measures region saliency and the interaction term smooths the mutual effects between object saliency and motion saliency. Object saliency is computed in the spatial domain from each discrete frame using multi-scale context features, e.g., color histogram, gradient, and graph-based manifold ranking. Meanwhile, motion saliency is calculated in the temporal domain by extracting phase information from the video. In the experimental section of this thesis, our proposed method has been evaluated on several benchmark datasets. On the MSRA 1000 dataset, the results demonstrate that our spatial object saliency detection is superior to state-of-the-art methods. Moreover, our temporal motion saliency detector achieves better performance than existing motion detection approaches on the UCF sports action analysis and Weizmann datasets.
Finally, we show the attractive empirical result and quantitative evaluation of our approach on two benchmark video object segmentation datasets.
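The unary-plus-pairwise energy minimization via min-cut can be illustrated with a toy example. The saliency values and smoothness weight below are illustrative assumptions, and a general-purpose min-cut routine stands in for the thesis's solver:

```python
import networkx as nx

# Toy 3x3 saliency map in [0, 1]; high values mark the salient object.
saliency = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.8],
    [0.1, 0.9, 0.2],
]
H, W = 3, 3
LAMBDA = 0.2  # assumed pairwise smoothness weight

G = nx.DiGraph()
for y in range(H):
    for x in range(W):
        s = saliency[y][x]
        # Unary term: cutting src->pixel (cost s) assigns it background;
        # cutting pixel->sink (cost 1-s) assigns it to the object.
        G.add_edge("src", (y, x), capacity=s)
        G.add_edge((y, x), "sink", capacity=1.0 - s)
        # Pairwise term: penalize label disagreement between 4-neighbors.
        for dy, dx in ((0, 1), (1, 0)):
            ny_, nx_ = y + dy, x + dx
            if ny_ < H and nx_ < W:
                G.add_edge((y, x), (ny_, nx_), capacity=LAMBDA)
                G.add_edge((ny_, nx_), (y, x), capacity=LAMBDA)

# The min-cut partitions pixels into object (source side) and background.
cut_value, (obj, bg) = nx.minimum_cut(G, "src", "sink")
segmentation = sorted(p for p in obj if p != "src")
print(segmentation)
```

Because the energy is submodular, the s-t min-cut yields the exact global minimum in polynomial time, which is the property the thesis relies on.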
ContributorsWang, Yilin (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Cleveau, David (Committee member) / Arizona State University (Publisher)
Created2013
152128-Thumbnail Image.png
Description
Learning from high dimensional biomedical data has attracted considerable attention recently. High dimensional biomedical data often suffer from the curse of dimensionality and have imbalanced class distributions. Both of these features of biomedical data, high dimensionality and imbalanced class distributions, are challenging for traditional machine learning methods and may affect the model performance. In this thesis, I focus on developing learning methods for high-dimensional imbalanced biomedical data. In the first part, a sparse canonical correlation analysis (CCA) method is presented. Penalty terms are used to control the sparsity of the projection matrices of CCA. The sparse CCA method is then applied to find patterns among biomedical data sets and labels, or to find patterns among different data sources. In the second part, I discuss several learning problems for imbalanced biomedical data. Note that traditional learning systems are often biased when the biomedical data are imbalanced. Therefore, traditional evaluations such as accuracy may be inappropriate for such cases. I then discuss several alternative evaluation criteria for assessing learning performance. For imbalanced binary classification problems, I use the undersampling-based classifiers ensemble (UEM) strategy to obtain accurate models for both classes of samples. A small sphere and large margin (SSLM) approach is also presented to detect rare abnormal samples from a large number of subjects. In addition, I apply multiple feature selection and clustering methods to deal with high-dimensional data and data with highly correlated features. Experiments on high-dimensional imbalanced biomedical data are presented which illustrate the effectiveness and efficiency of my methods.
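The undersampling-based ensemble idea can be sketched as follows; the base learner, ensemble size, and toy data are illustrative choices, not the thesis's exact UEM configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uem_fit_predict(X, y, X_test, n_members=11, seed=0):
    """Train classifiers on balanced undersamples; majority-vote predict."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    votes = np.zeros(len(X_test))
    for _ in range(n_members):
        # Undersample the majority class to match the minority class size.
        sampled = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sampled])
        clf = LogisticRegression().fit(X[idx], y[idx])
        votes += clf.predict(X_test)
    return (votes > n_members / 2).astype(int)

# Toy imbalanced data: 200 majority vs 20 minority samples in 5 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (20, 5))])
y = np.array([0] * 200 + [1] * 20)
print(uem_fit_predict(X, y, X[:5]))
```

Because every base classifier sees a balanced subset, the ensemble avoids the majority-class bias that a single classifier trained on the raw imbalanced data would exhibit.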
ContributorsYang, Tao (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2013
158113-Thumbnail Image.png
Description
The Chinese Construction Industry has grown to be one of the largest construction markets in the world within the last 10 years. The size of the Chinese Construction Industry is on par with those of many developed nations, despite China being a developing country. Despite this rapid growth, the productivity and profitability of the Chinese Construction Industry are low compared to similarly sized construction industries (United States, United Kingdom, etc.). In addition to the low efficiency of the Chinese Construction Industry, there is minimal documentation available showing its performance (projects completed on time, on budget, and customer satisfaction ratings).

The purpose of this research is to investigate potential solutions that could address the poor efficiency and performance of the Chinese Construction Industry. This research is divided into three phases: the first is a literature review to identify countries that have construction industries similar to the Chinese Construction Industry. The second phase is to compare the risks and identify solutions proposed to increase the performance of those similar construction industries and the Chinese Construction Industry. The third phase is to create a survey from the literature-based information to validate the concepts with Chinese Construction Industry professionals and stakeholders.
ContributorsChen, Yutian (Author) / Chong, Oswald (Thesis advisor) / Kashiwagi, Dean T. (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created2020
171764-Thumbnail Image.png
Description
This dissertation constructs a new computational processing framework to robustly and precisely quantify retinotopic maps based on their angle distortion properties. More generally, this framework solves the problem of how to robustly and precisely quantify (angle) distortions of noisy or incomplete (boundary enclosed) 2-dimensional surface to surface mappings. This framework builds upon the Beltrami Coefficient (BC) description of quasiconformal mappings that directly quantifies local mapping (circles to ellipses) distortions between diffeomorphisms of boundary enclosed plane domains homeomorphic to the unit disk. A new map called the Beltrami Coefficient Map (BCM) was constructed to describe distortions in retinotopic maps. The BCM can be used to fully reconstruct the original target surface (retinal visual field) of retinotopic maps. This dissertation also compared retinotopic maps in the visual processing cascade, which is a series of connected retinotopic maps responsible for visual data processing of physical images captured by the eyes. By comparing the BCM results from a large Human Connectome project (HCP) retinotopic dataset (N=181), a new computational quasiconformal mapping description of the transformed retinal image as it passes through the cascade is proposed, which is not present in any current literature. The description applied on HCP data provided direct visible and quantifiable geometric properties of the cascade in a way that has not been observed before. Because retinotopic maps are generated from in vivo noisy functional magnetic resonance imaging (fMRI), quantifying them comes with a certain degree of uncertainty. To quantify the uncertainties in the quantification results, it is necessary to generate statistical models of retinotopic maps from their BCMs and raw fMRI signals. 
Considering that estimating retinotopic maps from real noisy fMRI time series data using the population receptive field (pRF) model is a time-consuming process, a convolutional neural network (CNN) was constructed and trained to predict pRF model parameters from real noisy fMRI data.
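The Beltrami coefficient at the heart of the BCM can be sketched for a planar mapping sampled on a regular grid: mu = f_zbar / f_z, with f_z = (f_x - i f_y)/2 and f_zbar = (f_x + i f_y)/2. Real retinotopic maps come from triangulated cortical surfaces, so this finite-difference version is only an illustration:

```python
import numpy as np

def beltrami_coefficient(f, dx=1.0, dy=1.0):
    """mu for a complex-valued mapping f(x, y) sampled on a regular grid."""
    fy, fx = np.gradient(f, dy, dx)   # partials along rows (y) and cols (x)
    f_z = 0.5 * (fx - 1j * fy)
    f_zbar = 0.5 * (fx + 1j * fy)
    return f_zbar / f_z

x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
z = x + 1j * y
dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]

# A holomorphic (conformal) map: mu should be ~0 (circles map to circles).
conformal = z**2 + 3 * z
# An anisotropic stretch: circles map to ellipses, giving |mu| = 1/3.
stretched = 2 * x + 1j * y

mu_c = beltrami_coefficient(conformal, dx, dy)
mu_s = beltrami_coefficient(stretched, dx, dy)
print(np.abs(mu_c).max(), np.abs(mu_s).mean())
```

The magnitude of mu directly encodes the local circles-to-ellipses distortion that the BCM uses to quantify retinotopic maps, and setting mu = 0 recovers the conformal case.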
ContributorsTa, Duyan Nguyen (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Hansford, Dianne (Committee member) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2022