This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.


Description

Objective: This cross-sectional study aims to determine the effects of gender and parental perception of safety at school on children’s physical activity (PA) levels.

Materials and Methods: Parents of school-aged Mexican children residing in Guadalajara, Mexico City, and Puerto Vallarta completed surveys about their children’s PA measures. The physical activity indicators were evaluated using linear and logistic regression models.

Results: Analysis did not indicate that gender moderated the relationship between parental perception of safety and PA measures, but significant gender differences exist, with girls participating less than boys on all three measures of PA in this study (p<0.001).

Conclusion: Results suggest the need for additional interventions promoting physical activity in girls in Mexico.
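The modelling step described above, logistic regression of a binary PA indicator on gender and perceived safety, can be sketched on simulated data. The variable names, effect sizes, and sample below are hypothetical illustrations, not the study’s data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictors mirroring the study design:
# gender (1 = girl) and a standardized parental-safety score.
girl = rng.integers(0, 2, n)
safety = rng.normal(0, 1, n)

# Simulate a binary PA indicator (e.g. "meets activity guideline"):
# girls less active, safety positively associated (invented effect sizes).
logit = 0.5 - 0.9 * girl + 0.4 * safety
p = 1 / (1 + np.exp(-logit))
active = rng.random(n) < p

# Fit logistic regression by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), girl, safety])
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (active - mu)          # score vector
    hess = (X * W[:, None]).T @ X       # observed information
    beta += np.linalg.solve(hess, grad)

print(beta)  # intercept, girl, and safety coefficients
```

A gender-moderation test like the one reported would add a girl × safety interaction column to `X` and examine its coefficient.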

Created: 2016-01
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant produces for. Regression analysis, artificial neural networks and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; causes for energy surges are not easily determined; and potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large complex energy data and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are developed that have a direct relation to cause-and-effect variables. The research findings of this paper include tests of different sets of reverse-engineering algorithms, analysis of their output patterns, and modifications of the algorithms to improve the accuracy of their outputs.

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of the U.S. building stock’s energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ energy consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions against a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
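The instantaneous-frequency idea behind this framework can be illustrated with a minimal numpy sketch: compute the analytic signal of a simulated hourly load series via an FFT-based Hilbert transform, then look for hours whose instantaneous frequency departs from the daily baseline. The load series and the injected anomaly are invented for illustration; the paper’s full method additionally decomposes the signal into intrinsic mode functions with EEMD first.

```python
import numpy as np

# Simulated hourly consumption: a clean 24-hour cycle over 30 days,
# with a short high-frequency disturbance injected as the "hidden" anomaly.
t = np.arange(24 * 30)                             # 720 hours (even length)
x = np.sin(2 * np.pi * t / 24)                     # daily load cycle
x[400:410] += np.sin(2 * np.pi * t[400:410] / 3)   # hypothetical anomaly

# Analytic signal via an FFT-based Hilbert transform.
N = len(x)
h = np.zeros(N)
h[0] = h[N // 2] = 1   # N is even here
h[1:N // 2] = 2
z = np.fft.ifft(np.fft.fft(x) * h)

# Instantaneous frequency (cycles/hour) from the unwrapped phase.
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) / (2 * np.pi)

# Hours whose frequency deviates most from the 1/24 cycles/hour
# baseline point at the injected disturbance.
dev = np.abs(inst_freq - 1 / 24)
print(int(np.argmax(dev)))
```

The largest deviation falls inside the anomalous window, which is the property an automated pre-processing stage would exploit to flag suspect periods for review.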

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques for managing large sets of complex energy supply and demand data for buildings, organizations and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. This is done with deep machine learning techniques that train machines to semi-supervise the learning, understanding and management of energy losses. The Semi-Supervised Energy Model (SSEM) aims at understanding the demand-supply characteristics of a building cluster and utilizes confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses, and the paper presents the output, which provides an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The conclusion of this paper provides a proposed energy model that can deliver accurate numbers on energy demand, which in turn helps suppliers adopt such a model to optimize their supply strategies.
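The semi-supervised step, absorbing unlabelled measurements whose labels the model is confident about, can be sketched with a simple self-training loop on simulated loss-factor data. The features, class structure, and confidence threshold below are hypothetical illustrations, not the SSEM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature: a loss-factor measurement for a building cluster,
# with two latent classes (normal vs excessive loss), mostly unlabelled.
normal = rng.normal(0.05, 0.01, (200, 1))
excess = rng.normal(0.15, 0.01, (200, 1))
X = np.vstack([normal, excess])
y_true = np.array([0] * 200 + [1] * 200)

# Only a handful of labelled examples; the rest are unlabelled (-1).
y = np.full(400, -1)
y[[0, 1, 200, 201]] = y_true[[0, 1, 200, 201]]

# Self-training: fit class centroids on the labelled data, then absorb
# unlabelled points whose distance ratio makes the label confident.
for _ in range(10):
    c0 = X[y == 0].mean()
    c1 = X[y == 1].mean()
    for i in np.where(y == -1)[0]:
        d0, d1 = abs(X[i, 0] - c0), abs(X[i, 0] - c1)
        if d0 < 0.5 * d1:
            y[i] = 0          # confidently "normal loss"
        elif d1 < 0.5 * d0:
            y[i] = 1          # confidently "excessive loss"

accuracy = (y[y != -1] == y_true[y != -1]).mean()
print(accuracy)
```

Each pass enlarges the labelled pool with only high-confidence points, which is the sense in which unlabelled loss-factor data can still sharpen the model.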

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

To address the need to study frozen clinical specimens using next-generation RNA, DNA, chromatin immunoprecipitation (ChIP) sequencing and protein analyses, we developed a biobank work flow to prospectively collect biospecimens from patients with renal cell carcinoma (RCC). We describe our standard operating procedures and work flow to annotate pathologic results and clinical outcomes. We report quality control outcomes and nucleic acid yields of our RCC submissions (N=16) to The Cancer Genome Atlas (TCGA) project, as well as newer discovery platforms, by describing mass spectrometry analysis of albumin oxidation in plasma and 6 ChIP sequencing libraries generated from nephrectomy specimens after histone H3 lysine 36 trimethylation (H3K36me3) immunoprecipitation. From June 1, 2010, through January 1, 2013, we enrolled 328 patients with RCC. Our mean (SD) TCGA RNA integrity numbers (RINs) were 8.1 (0.8) for papillary RCC, with a 12.5% overall rate of sample disqualification for RIN <7. Banked plasma had significantly less albumin oxidation (by mass spectrometry analysis) than plasma kept at 25°C (P<.001). For ChIP sequencing, the FastQC score for average read quality was at least 30 for 91% to 95% of paired-end reads. In parallel, we analyzed frozen tissue by RNA sequencing; after genome alignment, only 0.2% to 0.4% of total reads failed the default quality check steps of Bowtie2, which was comparable to the disqualification ratio (0.1%) of the 786-O RCC cell line that was prepared under optimal RNA isolation conditions. The overall correlation coefficients for gene expression between Mayo Clinic vs TCGA tissues ranged from 0.75 to 0.82. These data support the generation of high-quality nucleic acids for genomic analyses from banked RCC. Importantly, the protocol does not interfere with routine clinical care. Collections over defined time points during disease treatment further enhance collaborative efforts to integrate genomic information with outcomes.

Contributors: Ho, Thai H. (Author) / Nunez Nateras, Rafael (Author) / Yan, Huihuang (Author) / Park, Jin (Author) / Jensen, Sally (Author) / Borges, Chad (Author) / Lee, Jeong Heon (Author) / Champion, Mia D. (Author) / Tibes, Raoul (Author) / Bryce, Alan H. (Author) / Carballido, Estrella M. (Author) / Todd, Mark A. (Author) / Joseph, Richard W. (Author) / Wong, William W. (Author) / Parker, Alexander S. (Author) / Stanton, Melissa L. (Author) / Castle, Erik P. (Author) / Biodesign Institute (Contributor)
Created: 2015-07-16
Description

Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a performance rating of >1,000 samples/day and the capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via a Bland-Altman method comparison test, resulting in a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.
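The Bland-Altman comparison mentioned above summarizes method agreement as the mean of the paired differences (the bias) with 95% limits of agreement around it, each difference being plotted against the pair’s mean. A minimal numeric sketch on simulated paired IGF1 values; the concentrations, bias, and noise are invented for illustration and are not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired IGF1 measurements (μg/L) from two assays:
# a reference method, and a second method with a multiplicative bias.
ref = rng.uniform(50, 400, 100)            # e.g. ELISA-like values
new = ref * 1.16 + rng.normal(0, 5, 100)   # e.g. MSIA-like, ~16% high

diff = new - ref
mean_pair = (new + ref) / 2   # x-axis of the Bland-Altman plot

bias = diff.mean()                  # mean difference between methods
loa = 1.96 * diff.std(ddof=1)       # half-width of 95% limits of agreement
print(f"bias = {bias:.1f} μg/L, limits = [{bias - loa:.1f}, {bias + loa:.1f}]")
```

A proportional bias like the simulated one shows up as a trend of `diff` against `mean_pair`, which is why the plot, not just the summary numbers, is usually inspected.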

Contributors: Oran, Paul (Author) / Trenchevska, Olgica (Author) / Nedelkov, Dobrin (Author) / Borges, Chad (Author) / Schaab, Matthew (Author) / Rehder, Douglas (Author) / Jarvis, Jason (Author) / Sherma, Nisha (Author) / Shen, Luhui (Author) / Krastins, Bryan (Author) / Lopez, Mary F. (Author) / Schwenke, Dawn (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2014-03-24
Description

Resource-poor social environments predict poor health, but the mechanisms and processes linking the social environment to psychological health and well-being remain unclear. This study explored psychosocial mediators of the association between the social environment and mental health in African American adults. African American men and women (n = 1467) completed questionnaires on the social environment, psychosocial factors (stress, depressive symptoms, and racial discrimination), and mental health. Multiple-mediator models were used to assess direct and indirect effects of the social environment on mental health. Low social status in the community (p < .001) and U.S. (p < .001) and low social support (p < .001) were associated with poor mental health. Psychosocial factors significantly jointly mediated the relationship between the social environment and mental health in multiple-mediator models. Low social status and social support were associated with greater perceived stress, depressive symptoms, and perceived racial discrimination, which were associated with poor mental health. Results suggest that the relationship between the social environment and mental health is mediated by psychosocial factors, and they reveal potential mechanisms through which social status and social support influence the mental health of African American men and women. Findings from this study provide insight into the differential effects of stress, depression and discrimination on mental health. Ecological approaches that aim to improve the social environment and psychosocial mediators may enhance health-related quality of life and reduce health disparities in African Americans.
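A mediator analysis of the kind described decomposes the total effect of a social-environment variable on mental health into a direct effect and an indirect effect through each mediator. A single-mediator sketch on simulated standardized scores; the path coefficients here are invented, not the study’s estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1467  # sample size matching the abstract

# Simulated standardized scores (hypothetical causal structure):
# low social support -> higher stress -> poorer mental health.
support = rng.normal(0, 1, n)
stress = -0.4 * support + rng.normal(0, 1, n)                  # path a
mental = -0.5 * stress + 0.1 * support + rng.normal(0, 1, n)   # path b, direct c'

def ols(X, y):
    """Slope coefficients (intercept dropped) from least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(support, stress)[0]                              # X -> mediator
b = ols(np.column_stack([stress, support]), mental)[0]   # mediator -> Y, X held fixed
indirect = a * b                                         # mediated effect
total = ols(support, mental)[0]                          # total effect c
print(indirect, total)
```

In a linear model the total effect equals the direct effect plus the indirect effect, so `total - indirect` recovers the direct path; a multiple-mediator model simply sums an `a * b` product over each mediator.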

Created: 2016-04-27
Description

Serum Amyloid A (SAA) is an acute phase protein complex consisting of several abundant isoforms. The N-terminus of SAA is critical to its function in amyloid formation. SAA is frequently truncated, either missing an arginine or an arginine-serine dipeptide, resulting in isoforms that may influence the capacity to form amyloid. However, the relative abundance of truncated SAA in diabetes and chronic kidney disease is not known.

Methods: Using mass spectrometric immunoassay, the abundance of SAA truncations relative to the native variants was examined in plasma of 91 participants with type 2 diabetes and chronic kidney disease and 69 participants without diabetes.

Results: The ratio of SAA 1.1 (missing N-terminal arginine) to native SAA 1.1 was lower in diabetics compared to non-diabetics (p = 0.004), and in males compared to females (p<0.001). This ratio was negatively correlated with glycated hemoglobin (r = −0.32, p<0.001) and triglyceride concentrations (r = −0.37, p<0.001), and positively correlated with HDL cholesterol concentrations (r = 0.32, p<0.001).

Conclusion: The relative abundance of the N-terminal arginine truncation of SAA1.1 is significantly decreased in diabetes and negatively correlates with measures of glycemic and lipid control.

Contributors: Yassine, Hussein N. (Author) / Trenchevska, Olgica (Author) / He, Huijuan (Author) / Borges, Chad (Author) / Nedelkov, Dobrin (Author) / Mack, Wendy (Author) / Kono, Naoko (Author) / Koska, Juraj (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2015-01-21
Description

Background: Interaction in the form of cooperation, communication, and friendly competition theoretically precedes the development of group cohesion, which often precedes adherence to health promotion programs. The purpose of this manuscript was to explore longitudinal relationships among dimensions of group cohesion and group-interaction variables to inform and improve group-based strategies within programs aimed at promoting physical activity.

Methods: Ethnic minority women completed a group dynamics-based physical activity promotion intervention (N = 103; 73% African American; 27% Hispanic/Latina; Mage = 47.89 ± 8.17 years; MBMI = 34.43 ± 8.07 kg/m²) and assessments of group cohesion and group-interaction variables at baseline, 6 months (post-program), and 12 months (follow-up).

Results: All four dimensions of group cohesion had significant (ps < 0.01) relationships with the group-interaction variables. Competition was a consistently strong predictor of cohesion, while cooperation did not demonstrate consistent patterns of prediction.

Conclusions: Facilitating a sense of friendly competition may increase engagement in physical activity programs by bolstering group cohesion.

Created: 2014-04-09