This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.


Does School Participatory Budgeting Increase Students’ Political Efficacy? Bandura’s “Sources,” Civic Pedagogy, and Education for Democracy
Description

Does school participatory budgeting (SPB) increase students’ political efficacy? SPB, which is implemented in thousands of schools around the world, is a democratic process of deliberation and decision-making in which students determine how to spend a portion of the school’s budget. We examined the impact of SPB on political efficacy in one middle school in Arizona. Our participants’ (n = 28) responses on survey items designed to measure self-perceived growth in political efficacy indicated a large effect size (Cohen’s d = 1.46), suggesting that SPB is an effective approach to civic pedagogy, with promising prospects for developing students’ political efficacy.
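
For readers unfamiliar with the statistic, the sketch below shows one common way to compute Cohen's d from two sets of scores using a pooled standard deviation. The scores are simulated and the formulation is illustrative; the study's own survey items and exact computation may differ.

```python
# Illustrative only: one common formulation of Cohen's d (pooled SD).
# The numbers below are made up, not the study's data.
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two samples, using the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical post- vs. pre-survey scores on a 5-point efficacy scale (n = 28).
rng = np.random.default_rng(0)
post = rng.normal(4.1, 0.6, 28)
pre = rng.normal(3.2, 0.6, 28)
print(f"d = {cohens_d(post, pre):.2f}")
```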

Contributors: Gibbs, Norman P. (Author) / Bartlett, Tara Lynn (Author) / Schugurensky, Daniel, 1958- (Author)
Created: 2021-05-01
Description

Five immunocompetent C57BL/6-cBrd/cBrd/Cr (albino C57BL/6) mice were injected with GL261-luc2 cells, a cell line sharing characteristics of human glioblastoma multiforme (GBM). The mice were imaged using magnetic resonance (MR) at five separate time points to characterize growth and development of the tumor. After 25 days, the final tumor volumes of the mice varied from 12 mm³ to 62 mm³, even though mice were inoculated from the same tumor cell line under carefully controlled conditions. We generated hypotheses to explore the large variance in final tumor size and tested them with our simple reaction-diffusion model in both a 3-dimensional (3D) finite difference method and a 2-dimensional (2D) level set method. The parameters obtained from a best-fit procedure, designed to yield simulated tumors as close as possible to the observed ones, vary by an order of magnitude between the three mice analyzed in detail. These differences may reflect morphological and biological variability in tumor growth, as well as errors in the mathematical model, perhaps from an oversimplification of the tumor dynamics or nonidentifiability of parameters. The fitted parameters are consistent with other experimental in vitro and in vivo measurements. Additionally, the calculated wave speed agrees with other rat and human measurements.
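
The sketch below illustrates the kind of reaction-diffusion model referenced above, using a 2-D Fisher-KPP equation solved with explicit finite differences. The diffusion and proliferation values, grid, and boundary conditions are assumed for illustration and are not the paper's fitted parameters.

```python
# Minimal 2-D Fisher-KPP reaction-diffusion sketch: u_t = D*Laplacian(u) + rho*u*(1-u),
# solved with explicit finite differences and periodic boundaries (via np.roll).
import numpy as np

D, rho = 0.05, 0.5            # diffusion (mm^2/day) and proliferation (1/day), assumed
nx, dx, dt, steps = 101, 0.2, 0.02, 1000

u = np.zeros((nx, nx))
u[nx // 2, nx // 2] = 1.0      # small initial tumor seed at the center

for _ in range(steps):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    u = u + dt * (D * lap + rho * u * (1 - u))

# For Fisher-KPP, the asymptotic traveling-wave speed is c = 2*sqrt(D*rho),
# the kind of quantity compared against experimental wave-speed estimates.
print("predicted wave speed:", 2 * np.sqrt(D * rho), "mm/day")
print("normalized tumor burden:", u.sum() * dx**2)
```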

Contributors: Rutter, Erica (Author) / Stepien, Tracy (Author) / Anderies, Barrett (Author) / Plasencia, Jonathan (Author) / Woolf, Eric C. (Author) / Scheck, Adrienne C. (Author) / Turner, Gregory H. (Author) / Liu, Qingwei (Author) / Frakes, David (Author) / Kodibagkar, Vikram (Author) / Kuang, Yang (Author) / Preul, Mark C. (Author) / Kostelich, Eric (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2017-05-31
Description

Recently fabricated two-dimensional phosphorene crystal structures have demonstrated great potential in applications of electronics. In this paper, the strain effect on the electronic band structure of phosphorene was studied using first-principles methods including density functional theory (DFT) and hybrid functionals. It was found that phosphorene can withstand a tensile stress and strain up to 10 N/m and 30%, respectively. The band gap of phosphorene experiences a direct-indirect-direct transition when axial strain is applied. A moderate −2% compression in the zigzag direction can trigger this gap transition. With sufficient expansion (+11.3%) or compression (−10.2% strain), the gap can be tuned from indirect to direct again. Five strain zones with distinct electronic band structure were identified, and the critical strains for the zone boundaries were determined. Although the DFT method is known to underestimate the band gap of semiconductors, it was shown in this work to correctly predict the strain effect on the electronic properties, with validation from a hybrid functional method. The origin of the gap transition was revealed, and a general mechanism was developed to explain energy shifts with strain according to the bond nature of near-band-edge electronic orbitals. Effective masses of carriers in the armchair direction are an order of magnitude smaller than those in the zigzag direction, indicating that the armchair direction is favored for carrier transport. In addition, the effective masses can be dramatically tuned by strain, with sharp jumps or drops occurring at the zone boundaries of the direct-indirect gap transition.
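
As a rough illustration of how carrier effective masses are extracted from a computed band structure, the sketch below fits a parabola to hypothetical band-edge E(k) points and applies m* = ħ²/(d²E/dk²). The numbers are a made-up toy band, not the phosphorene values reported here.

```python
# Illustrative effective-mass extraction: parabolic fit of E(k) near a band edge.
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J
angstrom = 1e-10         # m

# Hypothetical band-edge samples: k in 1/Angstrom, E in eV.
k = np.linspace(-0.05, 0.05, 11)          # 1/Angstrom
E = 0.90 + 2.0 * k**2                     # eV (toy parabolic band)

a = np.polyfit(k, E, 2)[0]                # curvature coefficient, eV*Angstrom^2
d2E_dk2 = 2 * a * eV * angstrom**2        # second derivative in J*m^2
m_eff = hbar**2 / d2E_dk2
print(f"effective mass ~ {m_eff / m_e:.2f} m_e")
```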

Contributors: Peng, Xihong (Author) / Wei, Qun (Author) / Copple, Andrew (Author) / College of Integrative Sciences and Arts (Contributor)
Created: 2014-08-04
Description

The role of ambiguity tolerance in career decision making was examined in a sample of college students (n = 275). Three hypotheses were proposed regarding the direct prediction of ambiguity tolerance on career indecision, the indirect prediction of ambiguity tolerance on career indecision through environmental and self explorations, and the moderation effect of ambiguity tolerance on the link of environmental and self explorations with career indecision. Results supported the significance of ambiguity tolerance with respect to career indecision, finding that it directly predicted general indecisiveness, dysfunctional beliefs, lack of information, and inconsistent information, and moderated the prediction of environmental exploration on inconsistent information. The implications of this study are discussed and suggestions for future research are provided.
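
For readers who want a concrete picture of a moderation test, the sketch below fits an ordinary least squares model with an interaction term on simulated data. The variable names, effect sizes, and data are hypothetical; the study's actual measures and modeling may have differed.

```python
# Minimal moderation (interaction) sketch with simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 275
df = pd.DataFrame({
    "ambiguity_tolerance": rng.normal(0, 1, n),
    "env_exploration": rng.normal(0, 1, n),
})
df["indecision"] = (-0.4 * df["ambiguity_tolerance"]
                    + 0.2 * df["env_exploration"]
                    + 0.15 * df["ambiguity_tolerance"] * df["env_exploration"]
                    + rng.normal(0, 1, n))

# A*B expands to A + B + A:B; the A:B coefficient tests moderation.
model = smf.ols("indecision ~ ambiguity_tolerance * env_exploration", data=df).fit()
print(model.summary().tables[1])
```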

Contributors: Xu, Hui (Author) / Tracey, Terence (Author) / College of Integrative Sciences and Arts (Contributor)
Created: 2014-08-01
Description

Background: Physical activity (PA) interventions typically include components or doses that are static across participants. Adaptive interventions are dynamic; components or doses change in response to short-term variations in participants' performance. Emerging theory and technologies make adaptive goal setting and feedback interventions feasible.

Objective: To test an adaptive intervention for PA based on operant and behavioral economic principles and a percentile-based algorithm. The adaptive intervention was hypothesized to result in greater increases in steps per day than the static intervention.

Methods: Participants (N = 20) were randomized to one of two 6-month treatments: 1) static intervention (SI) or 2) adaptive intervention (AI). Inactive overweight adults (85% women, M = 36.9±9.2 years, 35% non-white) in both groups received a pedometer, email and text message communication, brief health information, and biweekly motivational prompts. The AI group received daily step goals that adjusted up and down based on the percentile-rank algorithm and micro-incentives for goal attainment. This algorithm adjusted goals based on a moving window, an approach that responded to each individual's performance and ensured goals were always challenging but within participants' abilities. The SI group received a static 10,000 steps/day goal with incentives linked to uploading the pedometer's data.
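
The sketch below illustrates percentile-rank goal setting over a moving window, assuming tomorrow's goal is set at the 60th percentile of the previous nine days of steps. The window length, percentile, and incentive rules are assumptions for illustration; the trial's exact algorithm may differ.

```python
# Minimal percentile-rank adaptive goal sketch over a moving window.
import numpy as np

def adaptive_goal(step_history, window=9, percentile=60):
    """Return the next day's step goal from a moving window of recent days."""
    recent = np.asarray(step_history[-window:], dtype=float)
    return int(np.percentile(recent, percentile))

steps = [4200, 5100, 4800, 6100, 5500, 4700, 6900, 5300, 5800]  # hypothetical daily steps
print(f"tomorrow's goal: {adaptive_goal(steps)} steps")

# Because the window moves, goals drift upward as performance improves but stay
# attainable after low-step days, unlike a fixed 10,000 steps/day target.
```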

Results: A random-effects repeated-measures model accounted for 180 repeated measures and autocorrelation. After adjusting for covariates, the treatment phase showed greater steps/day relative to the baseline phase (p<.001) and a group by study phase interaction was observed (p = .017). The SI group increased by 1,598 steps/day on average between baseline and treatment, while the AI group increased by 2,728 steps/day, a significant between-group difference of 1,130 steps/day (Cohen's d = .74).

Conclusions: The adaptive intervention outperformed the static intervention for increasing PA. The adaptive goal and feedback algorithm is a “behavior change technology” that could be incorporated into mHealth technologies and scaled to reach large populations.

Contributors: Adams, Marc (Author) / Sallis, James F. (Author) / Norman, Gregory J. (Author) / Hovell, Melbourne F. (Author) / Hekler, Eric (Author) / Perata, Elyse (Author) / College of Health Solutions (Contributor)
Created: 2013-12-09
Description

Background: An evidence-based steps/day translation of U.S. federal guidelines for youth to engage in ≥60 minutes/day of moderate-to-vigorous physical activity (MVPA) would help health researchers, practitioners, and lay professionals charged with increasing youth’s physical activity (PA). The purpose of this study was to determine the number of free-living steps/day (both raw and adjusted to a pedometer scale) that correctly classified children (6–11 years) and adolescents (12–17 years) as meeting the 60-minute MVPA guideline using the 2005–2006 National Health and Nutrition Examination Survey (NHANES) accelerometer data, and to evaluate the 12,000 steps/day recommendation recently adopted by the President’s Challenge Physical Activity and Fitness Awards Program.

Methods: Analyses were conducted among children (n = 915) and adolescents (n = 1,302) in 2011 and 2012. Receiver Operating Characteristic (ROC) curve plots and classification statistics revealed candidate steps/day cut points that discriminated meeting/not meeting the MVPA threshold by age group, gender, and accelerometer activity cut points. The Evenson and two Freedson age-specific (3 and 4 METs) cut points were used to define minimum MVPA, and optimal steps/day were examined both for raw steps and for steps adjusted to a pedometer scale to facilitate translation to lay populations.
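
As an illustration of ROC-based cut-point selection, the sketch below picks a steps/day threshold that maximizes Youden's J on synthetic data. The distributions are made up, and the study's own criteria for balancing classification errors differ from this toy example.

```python
# Illustrative ROC cut-point selection on synthetic steps/day data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
meets_mvpa = rng.integers(0, 2, n)                 # 1 = meets the >=60 min/day MVPA guideline
steps = np.where(meets_mvpa == 1,
                 rng.normal(13000, 2500, n),       # hypothetical distributions
                 rng.normal(9000, 2500, n))

fpr, tpr, thresholds = roc_curve(meets_mvpa, steps)
best = np.argmax(tpr - fpr)                        # maximize Youden's J = sens + spec - 1
print(f"optimal cut point ~ {thresholds[best]:.0f} steps/day, "
      f"AUC = {roc_auc_score(meets_mvpa, steps):.2f}")
```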

Results: For boys and girls (6–11 years) with ≥60 minutes/day of MVPA, 11,500–13,500 uncensored steps/day was the optimal range that balanced classification errors. For adolescent boys and girls (12–17 years) with ≥60 minutes/day of MVPA, 11,500–14,000 uncensored steps/day was optimal. Translation to a pedometer scale reduced these minimum values by 2,500 steps/day, to 9,000 steps/day. Area under the curve was ≥84% in all analyses.

Conclusions: No single study has definitively identified a precise and unyielding steps/day value for youth. Considering the other evidence to date, we propose a reasonable ‘rule of thumb’ value of ≥ 11,500 accelerometer-determined steps/day for both children and adolescents (and both genders), accepting that more is better. For practical applications, 9,000 steps/day appears to be a more pedometer-friendly value.

Contributors: Adams, Marc (Author) / Johnson, William D. (Author) / Tudor-Locke, Catrine (Author) / College of Health Solutions (Contributor)
Created: 2013-04-21
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. These methods include exploratory and confirmatory factor analysis, classical significance testing, structural equation modeling, and mixture modeling.
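
The sketch below shows a minimal exploratory factor analysis of a simulated samples-by-peptides matrix, one of the methods listed above. The data, scaling, and number of factors are assumptions, and the confirmatory factor, structural equation, and mixture models are not reproduced.

```python
# Minimal exploratory factor analysis sketch on a simulated peptide-intensity matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_samples, n_peptides, n_factors = 120, 500, 5

# Simulate data with a low-rank latent structure plus noise.
latent = rng.normal(size=(n_samples, n_factors))
loadings = rng.normal(size=(n_factors, n_peptides))
X = latent @ loadings + rng.normal(scale=2.0, size=(n_samples, n_peptides))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(StandardScaler().fit_transform(X))

print("factor scores shape:", scores.shape)  # (samples, factors)
print("top-loading peptides for factor 0:", np.argsort(-np.abs(fa.components_[0]))[:5])
```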

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for screening and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for screening and presymptomatic screening of disease as well as antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on those and all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 - 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustments for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analysis that relies on these data. Hence, it is crucial to effectively correct for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the loss of information is not too severe. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.
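
To illustrate the censoring idea in isolation, the sketch below fits a single normal component to simulated pixel intensities by maximum likelihood, treating values at the 16-bit ceiling as right-censored rather than observed. The published procedure embeds such a component in a full segmentation mixture model, which is not reproduced here.

```python
# Simplified censored-likelihood sketch: saturated pixels contribute the
# survival function at the ceiling instead of the density at their clipped value.
import numpy as np
from scipy import stats, optimize

SAT = 2**16 - 1
rng = np.random.default_rng(7)
true = rng.normal(60000, 6000, 2000)       # hypothetical spot pixel intensities
pixels = np.minimum(true, SAT)             # scanner clips at the 16-bit ceiling
censored = pixels >= SAT

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(pixels[~censored], mu, sigma).sum()
    ll += censored.sum() * stats.norm.logsf(SAT, mu, sigma)
    return -ll

res = optimize.minimize(neg_loglik, x0=[pixels.mean(), np.log(pixels.std())])
mu_hat = res.x[0]
print(f"naive mean = {pixels.mean():.0f}, censored-MLE mean = {mu_hat:.0f}")
```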

Conclusions: The presented method adjusts for signal saturation at the segmentation stage, which identifies each pixel as foreground, background, or other. A pixel's cluster membership can therefore differ from what it would be if saturated values were treated as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

Contributors: Yang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-30
Description

Background: Many studies have used the older ActiGraph (7164) for physical activity measurement, but this model has been replaced with newer ones (e.g., GT3X+). The assumption that new generation models are more accurate has been questioned, especially for measuring lower intensity levels. The low-frequency extension (LFE) increases the low-intensity sensitivity of newer models, but its comparability with older models is unknown. This study compared step counts and physical activity collected with the 7164 and GT3X+ using the Normal filter and the LFE (GT3X+N and GT3X+LFE, respectively).

Findings: Twenty-five adults wore two accelerometer models simultaneously for 3 days and were instructed to engage in typical behaviors. Average daily step counts and minutes per day in nonwear, sedentary, light, moderate, and vigorous activity were calculated. Repeated measures ANOVAs with post-hoc pairwise comparisons were used to compare mean values. Means for the GT3X+N and 7164 were significantly different in 4 of the 6 categories (p < .05). The GT3X+N showed 2041 fewer steps per day and more sedentary time, less light activity, and less moderate activity than the 7164 (+25.6, -31.2, and -2.9 min/day, respectively). The GT3X+LFE showed non-significant differences in 5 of 6 categories but recorded significantly more steps (+3597 steps/day; p < .001) than the 7164.
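
The sketch below shows a minimal repeated-measures comparison of simultaneously worn devices using statsmodels' AnovaRM on simulated long-format data. Subject-level step counts are made up (the device offsets borrow the mean differences reported above), and the post-hoc pairwise comparisons are not shown.

```python
# Minimal within-subjects (repeated measures) ANOVA sketch across device conditions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
subjects = np.arange(25)
base = rng.normal(9000, 2000, 25)                 # hypothetical 7164 steps/day

long = pd.DataFrame({
    "subject": np.tile(subjects, 3),
    "device": np.repeat(["7164", "GT3X+N", "GT3X+LFE"], 25),
    "steps": np.concatenate([base,
                             base - 2041 + rng.normal(0, 400, 25),
                             base + 3597 + rng.normal(0, 400, 25)]),
})

print(AnovaRM(long, depvar="steps", subject="subject", within=["device"]).fit())
```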

Conclusion: Studies using the newer ActiGraphs should employ the LFE for greater sensitivity to lower intensity activity and more comparable activity results with studies using the older models. Newer generation ActiGraphs do not produce comparable step counts to the older generation devices with the Normal filter or the LFE.

Contributors: Cain, Kelli L. (Author) / Conway, Terry L. (Author) / Adams, Marc (Author) / Husak, Lisa E. (Author) / Sallis, James F. (Author) / College of Health Solutions (Contributor)
Created: 2013-04-25
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear as being completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear what the optimal classification algorithm is for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
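
As a basic illustration of the winning approach, the sketch below cross-validates a Gaussian Naive Bayes classifier on a synthetic samples-by-peptides matrix. The data and preprocessing are assumptions and do not reproduce the study's 17-algorithm comparison.

```python
# Minimal Gaussian Naive Bayes classification sketch on immunosignature-like data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_per_class, n_peptides = 40, 1000

# Two simulated sample classes differing on a small subset of peptides.
healthy = rng.normal(0, 1, (n_per_class, n_peptides))
disease = rng.normal(0, 1, (n_per_class, n_peptides))
disease[:, :25] += 1.5                     # informative peptides

X = np.vstack([healthy, disease])
y = np.array([0] * n_per_class + [1] * n_per_class)

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```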

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21