This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Five immunocompetent C57BL/6-cBrd/cBrd/Cr (albino C57BL/6) mice were injected with GL261-luc2 cells, a cell line sharing characteristics of human glioblastoma multiforme (GBM). The mice were imaged using magnetic resonance (MR) at five separate time points to characterize growth and development of the tumor. After 25 days, the final tumor volumes of the mice varied from 12 mm³ to 62 mm³, even though the mice were inoculated from the same tumor cell line under carefully controlled conditions. We generated hypotheses to explore the large variance in final tumor size and tested them with our simple reaction-diffusion model, using both a 3-dimensional (3D) finite difference method and a 2-dimensional (2D) level set method. The parameters obtained from a best-fit procedure, designed to yield simulated tumors as close as possible to the observed ones, vary by an order of magnitude between the three mice analyzed in detail. These differences may reflect morphological and biological variability in tumor growth, as well as errors in the mathematical model, perhaps from an oversimplification of the tumor dynamics or nonidentifiability of parameters. The fitted parameters are consistent with other experimental in vitro and in vivo measurements, and the calculated wave speeds match measurements in rats and humans.
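
The model described is of Fisher-KPP reaction-diffusion type. As an illustration of the finite-difference approach, here is a minimal 1D sketch; the diffusion and proliferation rates are placeholders for exposition, not the fitted values from the study.

```python
import numpy as np

# Minimal 1-D Fisher-KPP reaction-diffusion sketch: du/dt = D*u_xx + rho*u*(1 - u).
# D and rho are illustrative placeholders, not the study's fitted parameters.
D, rho = 0.01, 0.5            # diffusion (mm^2/day) and proliferation (1/day)
L, nx, dt, days = 10.0, 201, 0.01, 25.0
dx = L / (nx - 1)
u = np.zeros(nx)
u[nx // 2] = 1.0              # small seed of tumor cells at the domain center

for _ in range(int(days / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0    # crude no-flux boundaries
    u += dt * (D * lap + rho * u * (1 - u))

# The Fisher-KPP traveling-wave speed is 2*sqrt(D*rho); the paper compares
# fitted wave speeds against rat and human measurements.
print("theoretical wave speed:", 2 * np.sqrt(D * rho), "mm/day")
```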

ContributorsRutter, Erica (Author) / Stepien, Tracy (Author) / Anderies, Barrett (Author) / Plasencia, Jonathan (Author) / Woolf, Eric C. (Author) / Scheck, Adrienne C. (Author) / Turner, Gregory H. (Author) / Liu, Qingwei (Author) / Frakes, David (Author) / Kodibagkar, Vikram (Author) / Kuang, Yang (Author) / Preul, Mark C. (Author) / Kostelich, Eric (Author) / College of Liberal Arts and Sciences (Contributor)
Created2017-05-31
Description

Gompertz’s empirical equation remains the most popular model for describing cancer cell population growth in a wide spectrum of biomedical situations, owing to its good fit to data and its simplicity. Many efforts have been documented in the literature aimed at understanding the mechanisms that may support Gompertz’s elegant model equation. One of the most convincing efforts was carried out by Gyllenberg and Webb, who divided the cancer cell population into proliferative cells and quiescent cells. In their two-dimensional model, dead cells are assumed to be removed from the tumor instantly. In this paper, we modify their model by keeping track of the dead cells remaining in the tumor. We perform mathematical and computational studies on this three-dimensional model and compare its dynamics to those of the Gyllenberg-Webb model. Our mathematical findings suggest that if an avascular tumor grows according to our three-compartment model, then as the death rate of quiescent cells decreases to zero, the percentage of proliferative cells also approaches zero. Moreover, a slowly dying quiescent population will increase the size of the tumor. On the other hand, while the tumor size does not depend on the dead-cell removal rate, the early and intermediate growth stages are very sensitive to it.
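
The three-compartment structure can be sketched as an ODE system tracking proliferative, quiescent, and dead cells. The constant rates below are illustrative placeholders; in the paper the transition rates depend on the total tumor size.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a three-compartment tumor model: proliferative (P), quiescent (Q),
# and dead (D) cells. All rates are illustrative constants; the paper's
# transition rates depend on total tumor size N = P + Q + D.
b, r_pq, r_qp = 0.5, 0.3, 0.05   # birth, P->Q, and Q->P rates (1/day)
d_q, mu = 0.02, 0.1              # quiescent death rate, dead-cell removal rate

def rhs(t, y):
    P, Q, D = y
    dP = b * P - r_pq * P + r_qp * Q
    dQ = r_pq * P - r_qp * Q - d_q * Q
    dD = d_q * Q - mu * D        # dead cells removed at finite rate mu,
    return [dP, dQ, dD]          # rather than instantly as in Gyllenberg-Webb

sol = solve_ivp(rhs, (0, 50), [1.0, 0.0, 0.0])
P, Q, D = sol.y[:, -1]
print("fraction proliferative at t=50:", P / (P + Q + D))
```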

ContributorsAlzahrani, E. O. (Author) / Asiri, Asim (Author) / El-Dessoky, M. M. (Author) / Kuang, Yang (Author) / College of Liberal Arts and Sciences (Contributor)
Created2014-08-01
Description

Studies of the urban heat island (UHI) span more than a century since the phenomenon was first identified in the early 1800s. The UHI is a source of many urban environmental problems and degrades the living environment in cities. Under the challenges of increasing urbanization and future climate change, there is a pressing need for sustainable adaptation/mitigation strategies for UHI effects, one popular option being the use of reflective materials. While reflective materials are promoted as an effective way to reduce temperatures and energy consumption in cities, their impacts on environmental sustainability and their large-scale, non-local effects are inadequately explored. This paper provides a synthetic overview of the potential environmental impacts of reflective materials at a variety of scales, ranging from the energy load on a single building to regional hydroclimate. The review shows that the mitigation potential of reflective materials depends on a set of factors, including building characteristics, the urban environment, and meteorological and geographical conditions, among others. City planners and policy makers should exercise caution before large-scale deployment of reflective materials until their environmental impacts, especially on regional hydroclimates, are better understood. In general, the optimal UHI strategy should be determined on a city-by-city basis, rather than by adopting a “one-solution-fits-all” strategy.

ContributorsYang, Jiachuan (Author) / Wang, Zhi-Hua (Author) / Kaloush, Kamil (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-07-01
Description

Land surface energy balance in a built environment is widely modelled using urban canopy models that represent building arrays as big street canyons. Modifying this simplified geometric representation, however, leads to challenging numerical difficulties in improving physical parameterization schemes that are deterministic in nature. In this paper, we develop a stochastic algorithm, based on Monte Carlo simulation, to estimate view factors between canyon facets in the presence of shade trees, where the complex geometry prohibits an analytical formulation. The model is validated against analytical solutions of benchmark radiative problems as well as field measurements in real street canyons. In conjunction with the matrix method for resolving an infinite number of reflections, the proposed model is capable of predicting the radiative exchange inside the street canyon with good accuracy. Modeling the transient evolution of the thermal field inside the street canyon using the proposed method demonstrates the potential of shade trees to mitigate canyon surface temperatures and to reduce building energy use. This new numerical framework also deepens our insight into the fundamental physics of radiative heat transfer and surface energy balance for urban climate modeling.
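
To illustrate the Monte Carlo view-factor idea, the sketch below estimates the view factor between two parallel unit squares, a benchmark with a known analytical value (about 0.1998 at unit separation). The geometry and sample count are illustrative, not the paper's canyon configuration.

```python
import numpy as np

# Monte Carlo estimate of the view factor between two parallel unit squares
# separated by distance h — a benchmark with a known analytical solution,
# standing in for canyon facets whose geometry defeats closed-form formulas.
rng = np.random.default_rng(0)
h, n = 1.0, 200_000

# Emission points uniform on the lower square (z = 0).
x0, y0 = rng.random(n), rng.random(n)

# Cosine-weighted hemisphere directions: sin(theta) = sqrt(uniform).
phi = 2 * np.pi * rng.random(n)
sin_t = np.sqrt(rng.random(n))
cos_t = np.sqrt(1.0 - sin_t**2)

# Intersect each ray with the plane z = h of the upper square.
t = h / cos_t
x1 = x0 + t * sin_t * np.cos(phi)
y1 = y0 + t * sin_t * np.sin(phi)
hits = (x1 >= 0) & (x1 <= 1) & (y1 >= 0) & (y1 <= 1)

# With cosine-weighted sampling, the view factor is simply the hit fraction.
print("F12 ~", hits.mean())  # analytical value ~0.1998 for h = 1
```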

ContributorsWang, Zhi-Hua (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2014-12-01
Description

Background: Physical activity (PA) interventions typically include components or doses that are static across participants. Adaptive interventions are dynamic; components or doses change in response to short-term variations in a participant's performance. Emerging theory and technologies make adaptive goal-setting and feedback interventions feasible.

Objective: To test an adaptive intervention for PA based on operant and behavioral economic principles and a percentile-based algorithm. The adaptive intervention was hypothesized to result in greater increases in steps per day than the static intervention.

Methods: Participants (N = 20) were randomized to one of two 6-month treatments: 1) static intervention (SI) or 2) adaptive intervention (AI). Inactive overweight adults (85% women, M = 36.9 ± 9.2 years, 35% non-white) in both groups received a pedometer, email and text message communication, brief health information, and biweekly motivational prompts. The AI group received daily step goals that adjusted up and down based on the percentile-rank algorithm, plus micro-incentives for goal attainment. This algorithm adjusted goals based on a moving window, an approach that responded to each individual's performance and ensured goals were always challenging but within participants' abilities. The SI group received a static 10,000 steps/day goal, with incentives linked to uploading the pedometer's data.
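
A minimal sketch of a percentile-rank goal update follows. The 9-day window and 60th percentile are illustrative assumptions, not necessarily the study's exact settings.

```python
import numpy as np

# Sketch of a percentile-rank adaptive goal: tomorrow's step goal is a fixed
# percentile of the participant's most recent daily step counts. The 9-day
# window and 60th percentile are illustrative, not the study's exact values.
def adaptive_goal(recent_steps, window=9, pct=60):
    history = recent_steps[-window:]          # moving window of recent days
    return int(np.percentile(history, pct))   # goal tracks the participant

steps = [4200, 5100, 3800, 6000, 4500, 5600, 4900, 5300, 4700]
print(adaptive_goal(steps))  # goal rises after good days, eases after poor ones
```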

Results: A random-effects repeated-measures model accounted for 180 repeated measures and autocorrelation. After adjusting for covariates, the treatment phase showed greater steps/day relative to the baseline phase (p < .001), and a group-by-study-phase interaction was observed (p = .017). The SI group increased by 1,598 steps/day on average between baseline and treatment, while the AI group increased by 2,728 steps/day on average, a significant between-group difference of 1,130 steps/day (Cohen's d = .74).

Conclusions: The adaptive intervention outperformed the static intervention for increasing PA. The adaptive goal and feedback algorithm is a “behavior change technology” that could be incorporated into mHealth technologies and scaled to reach large populations.

ContributorsAdams, Marc (Author) / Sallis, James F. (Author) / Norman, Gregory J. (Author) / Hovell, Melbourne F. (Author) / Hekler, Eric (Author) / Perata, Elyse (Author) / College of Health Solutions (Contributor)
Created2013-12-09
Description

Background: An evidence-based steps/day translation of U.S. federal guidelines for youth to engage in ≥60 minutes/day of moderate-to-vigorous physical activity (MVPA) would help health researchers, practitioners, and lay professionals charged with increasing youth’s physical activity (PA). The purpose of this study was to determine the number of free-living steps/day (both raw and adjusted to a pedometer scale) that correctly classified children (6–11 years) and adolescents (12–17 years) as meeting the 60-minute MVPA guideline using the 2005–2006 National Health and Nutrition Examination Survey (NHANES) accelerometer data, and to evaluate the 12,000 steps/day recommendation recently adopted by the President’s Challenge Physical Activity and Fitness Awards Program.

Methods: Analyses were conducted among children (n = 915) and adolescents (n = 1,302) in 2011 and 2012. Receiver Operating Characteristic (ROC) curve plots and classification statistics revealed candidate steps/day cut points that discriminated meeting versus not meeting the MVPA threshold by age group, gender, and accelerometer activity cut point. The Evenson and two Freedson age-specific (3 and 4 METs) cut points were used to define minimum MVPA, and optimal steps/day values were examined both for raw steps and for steps adjusted to a pedometer scale to facilitate translation to lay populations.
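
The sketch below illustrates ROC-based cut-point selection on synthetic data, using Youden's J statistic to balance the two classification error types. The distributions are invented for illustration; the study itself used NHANES accelerometer data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Sketch of ROC-based cut-point selection: find the steps/day threshold that
# best discriminates meeting vs. not meeting the MVPA guideline. The step
# distributions here are synthetic placeholders.
rng = np.random.default_rng(1)
steps = np.concatenate([rng.normal(9_000, 2_000, 500),    # below guideline
                        rng.normal(13_000, 2_500, 500)])  # meets guideline
meets_mvpa = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(meets_mvpa, steps)
best = np.argmax(tpr - fpr)  # Youden's J balances sensitivity and specificity
print("optimal cut point:", round(thresholds[best]), "steps/day")
print("AUC:", roc_auc_score(meets_mvpa, steps))
```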

Results: For boys and girls (6–11 years) with ≥60 minutes/day of MVPA, a range of 11,500–13,500 uncensored steps/day was the optimal range that balanced classification errors. For adolescent boys and girls (12–17 years) with ≥60 minutes/day of MVPA, 11,500–14,000 uncensored steps/day was optimal. Translation to a pedometer scale reduced these minimum values by 2,500 steps/day, to 9,000 steps/day. The area under the curve was ≥84% in all analyses.

Conclusions: No single study has definitively identified a precise and unyielding steps/day value for youth. Considering the other evidence to date, we propose a reasonable ‘rule of thumb’ value of ≥ 11,500 accelerometer-determined steps/day for both children and adolescents (and both genders), accepting that more is better. For practical applications, 9,000 steps/day appears to be a more pedometer-friendly value.

ContributorsAdams, Marc (Author) / Johnson, William D. (Author) / Tudor-Locke, Catrine (Author) / College of Health Solutions (Contributor)
Created2013-04-21
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and to address challenges inherent in analyzing immunosignatures. These methods include exploratory and confirmatory factor analysis, classical significance testing, structural equation modeling, and mixture modeling.
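
As a sketch of the exploratory factor-analysis step, the following fits latent factors to a random samples-by-peptides matrix that stands in for real immunosignature intensities; the dimensions and factor count are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch of exploratory factor analysis on an immunosignature-like matrix
# (samples x peptides). Random data stands in for real array intensities.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))   # 60 sera, 500 random-sequence peptides

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(X)     # latent-factor scores for each sample
print(scores.shape)              # (60, 5): candidate latent biomarkers
```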

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for disease screening, including presymptomatic screening. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for disease screening, including presymptomatic screening, as well as for antibody discovery.

ContributorsBrown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the spot-level input data for downstream analysis, so it can have a potentially large impact on that and all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 − 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to effectively correct for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the loss of information is not too severe. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.
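
The key idea, treating saturated pixels as censored observations, can be sketched with a single censored normal (the paper uses a full mixture model): saturated pixels contribute a survival-function term to the likelihood instead of a density term.

```python
import numpy as np
from scipy import stats, optimize

# Sketch of censoring-aware intensity estimation. Pixel values are clipped at
# the scanner ceiling (2**16 - 1); saturated pixels enter the likelihood via
# the survival function (Tobit-style) rather than the density. A single
# censored normal, simplified from the paper's mixture model.
CEIL = 2**16 - 1
rng = np.random.default_rng(0)
true_pixels = rng.normal(60_000, 8_000, 400)  # latent spot pixel intensities
observed = np.minimum(true_pixels, CEIL)      # scanner clips at the ceiling
censored = observed >= CEIL

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(observed[~censored], mu, sigma).sum()
    ll += censored.sum() * stats.norm.logsf(CEIL, mu, sigma)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[50_000.0, np.log(5_000.0)])
print("estimated mean intensity:", res.x[0])  # recovers mean beyond the ceiling
```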

Conclusions: The presented method adjusts for signal saturation at the segmentation stage, which identifies a pixel as part of the foreground, background or other. A pixel's cluster membership can thus differ from what it would be if saturated values were treated as truly observed. As a result, the spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already-segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

ContributorsYang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created2011-11-30
Description

Background: Many studies used the older ActiGraph (model 7164) for physical activity measurement, but this model has been replaced with newer ones (e.g., GT3X+). The assumption that new-generation models are more accurate has been questioned, especially for measuring lower-intensity activity. The low-frequency extension (LFE) increases the low-intensity sensitivity of newer models, but its comparability with older models is unknown. This study compared step counts and physical activity collected with the 7164 and the GT3X+ using the Normal filter and the LFE (GT3X+N and GT3X+LFE, respectively).

Findings: Twenty-five adults wore the 2 accelerometer models simultaneously for 3 days and were instructed to engage in typical behaviors. Average daily step counts and minutes per day of nonwear, sedentary, light, moderate, and vigorous activity were calculated. Repeated-measures ANOVAs with post-hoc pairwise comparisons were used to compare mean values. Means for the GT3X+N and the 7164 were significantly different in 4 of the 6 categories (p < .05). The GT3X+N recorded 2,041 fewer steps per day and more sedentary time, less light activity, and less moderate activity than the 7164 (+25.6, −31.2, and −2.9 min/day, respectively). The GT3X+LFE showed non-significant differences in 5 of 6 categories but recorded significantly more steps (+3,597 steps/day; p < .001) than the 7164.
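
A sketch of the repeated-measures comparison follows, using synthetic data whose device offsets mirror the reported step-count differences; the study's analysis also handled autocorrelation and covariates, which are omitted here.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Sketch of a repeated-measures ANOVA across devices worn simultaneously.
# Synthetic steps/day for 25 subjects; the device offsets (-2,041 and +3,597)
# mirror the differences reported above, with invented noise.
rng = np.random.default_rng(0)
n = 25
base = rng.normal(8_000, 1_500, n)
df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 3),
    "device": np.repeat(["7164", "GT3X+N", "GT3X+LFE"], n),
    "steps": np.concatenate([base,
                             base - 2_041 + rng.normal(0, 400, n),
                             base + 3_597 + rng.normal(0, 400, n)]),
})

# One observation per subject per device condition, as AnovaRM requires.
print(AnovaRM(df, depvar="steps", subject="subject", within=["device"]).fit())
```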

Conclusion: Studies using the newer ActiGraphs should employ the LFE for greater sensitivity to lower intensity activity and more comparable activity results with studies using the older models. Newer generation ActiGraphs do not produce comparable step counts to the older generation devices with the Normal filter or the LFE.

ContributorsCain, Kelli L. (Author) / Conway, Terry L. (Author) / Adams, Marc (Author) / Husak, Lisa E. (Author) / Sallis, James F. (Author) / College of Health Solutions (Contributor)
Created2013-04-25
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically, one trains a classification system by gathering large amounts of probe-level data and selecting informative features, then classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable for other technologies with different binding characteristics. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
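
As a sketch of the winning approach, the following applies Gaussian Naive Bayes with cross-validation to synthetic immunosignature-like data; the sample sizes and class shift are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Sketch of Naive Bayes classification on immunosignature-like data
# (samples x peptide intensities). Random data with a small mean shift for
# the "disease" class stands in for real array measurements.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, (40, 1_000))
disease = rng.normal(0.3, 1.0, (40, 1_000))  # subtle shift on many peptides
X = np.vstack([healthy, disease])
y = np.array([0] * 40 + [1] * 40)

# Naive Bayes treats features as conditionally independent, which proved
# simple, fast, robust, and accurate on this many-to-many binding data.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```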

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data, owing to its fundamental mathematical properties.

ContributorsKukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created2012-06-21