This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 27

Description

Neural progenitor cells (NPCs) derived from human pluripotent stem cells (hPSCs) are a multipotent cell population that is capable of nearly indefinite expansion and subsequent differentiation into the various neuronal and supporting cell types that comprise the CNS. However, current protocols for differentiating NPCs toward neuronal lineages result in a mixture of neurons from various regions of the CNS. In this study, we determined that endogenous WNT signaling is a primary contributor to the heterogeneity observed in NPC cultures and neuronal differentiation. Furthermore, exogenous manipulation of WNT signaling during neural differentiation, through either activation or inhibition, reduces this heterogeneity in NPC cultures, thereby promoting the formation of regionally homogeneous NPC and neuronal cultures. The ability to manipulate WNT signaling to generate regionally specific NPCs and neurons will be useful for studying human neural development and will greatly enhance the translational potential of hPSCs for neural-related therapies.

Contributors: Moya, Noel (Author) / Cutts, Joshua (Author) / Gaasterland, Terry (Author) / Willert, Karl (Author) / Brafman, David (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-12-09
Description

Although the majority of late-onset Alzheimer's disease (AD) patients are labeled sporadic, multiple genetic risk variants have been identified, the most powerful and prevalent of which is the e4 variant of the Apolipoprotein E (APOE) gene. Here, we generated human induced pluripotent stem cell (hiPSC) lines from the peripheral blood mononuclear cells (PBMCs) of a clinically diagnosed AD patient [ASUi003-A] and a non-demented control (NDC) patient [ASUi004-A] homozygous for the APOE4 risk allele. These hiPSCs maintained their original genotype, expressed pluripotency markers, exhibited a normal karyotype, and retained the ability to differentiate into cells representative of the three germ layers.

Contributors: Brookhouser, Nicholas (Author) / Zhang, Ping (Author) / Caselli, Richard (Author) / Kim, Jean J. (Author) / Brafman, David (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-07-10
Description

The estimation of energy demand by power plants has traditionally relied on historical energy use data for the region(s) a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data, yet such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts, the causes of energy surges are not easily determined, and the potential energy use reduction from energy efficiency solutions is usually not translated into actual reductions. This paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large, complex energy datasets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that have a direct relation to the cause and effect variables. The research findings include tests of different sets of reverse-engineering algorithms, an analysis of their output patterns, and algorithm modifications that improve the accuracy of the outputs.
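The traditional regression baseline that this abstract critiques can be sketched as a one-variable ordinary-least-squares fit of historical demand against time. This is a hypothetical illustration of the conventional approach, not code or data from the paper:

```python
# Fit demand = intercept + slope * time by ordinary least squares over
# historical readings, then extrapolate. All names and data are illustrative.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

def predict(model, x):
    intercept, slope = model
    return intercept + slope * x
```

Extrapolating such a fit is precisely the step the paper argues becomes unreliable when the drivers of demand change faster than the historical record reflects.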

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption, yet owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology identified measures capable of saving about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ energy consumption is driven by multiple factors, including occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected with, and influenced by, external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior, and such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. This paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions against a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings also include comparisons of Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
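EEMD and the Hilbert-Huang transform are too involved for a short sketch; as a simplified stand-in for the automated anomaly-flagging step described above, a rolling-statistics detector can flag readings that deviate sharply from recent history. The window size, threshold, and data here are illustrative choices, not the paper's method:

```python
# Flag readings that deviate from the trailing-window mean by more than
# `threshold` standard deviations of that window.
import statistics

def rolling_anomalies(readings, window=24, threshold=3.0):
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]          # trailing window only
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged
```

A detector like this is purely statistical; the appeal of the EEMD-based approach in the paper is that it decomposes the signal into intrinsic modes first, so anomalies hidden inside overlapping periodic components can still surface.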

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Nonsense-mediated RNA decay (NMD) is a highly conserved pathway that selectively degrades specific subsets of RNA transcripts. Here, we provide evidence that NMD regulates early human developmental cell fate. We found that NMD factors tend to be expressed at higher levels in human pluripotent cells than in differentiated cells, raising the possibility that NMD must be downregulated to permit differentiation. Loss- and gain-of-function experiments in human embryonic stem cells (hESCs) demonstrated that, indeed, NMD downregulation is essential for efficient generation of definitive endoderm. RNA-seq analysis identified NMD target transcripts induced when NMD is suppressed in hESCs, including many encoding signaling components. This led us to test the role of TGF-β and BMP signaling, which we found NMD acts through to influence definitive endoderm versus mesoderm fate. Our results suggest that selective RNA decay is critical for specifying the developmental fate of specific human embryonic cell lineages.

Contributors: Lou, Chih-Hong (Author) / Dumdie, Jennifer (Author) / Goetz, Alexandra (Author) / Shum, Eleen Y. (Author) / Brafman, David (Author) / Liao, Xiaoyan (Author) / Mora-Castilla, Sergio (Author) / Ramaiah, Madhuvanthi (Author) / Cook-Andersen, Heidi (Author) / Laurent, Louise (Author) / Wilkinson, Miles F. (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-06-14
Description

There are many data mining and machine learning techniques for managing large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address its increasing complexity. Using data on the energy losses between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse the loss factors of a building cluster. Deep machine learning techniques are trained to semi-supervise the learning, understanding, and management of energy losses. The SSEM aims to understand the demand-supply characteristics of a building cluster and utilizes the unlabelled data (loss factors) about which the model is confident. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of the loss factors that contribute to total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers for energy demand, which in turn helps suppliers optimize their supply strategies.
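The "confident unlabelled data" idea behind semi-supervised learning can be illustrated with a toy self-training loop: a two-class nearest-centroid model labels unlabelled values and adopts only those that sit clearly closer to one centroid than the other. This is a sketch of the general technique under stated assumptions, not the SSEM itself:

```python
# Toy 1-D self-training: recompute class centroids, adopt only unlabelled
# points with a large relative margin between the two centroid distances.
def self_train(labeled, unlabeled, confidence=0.8, rounds=5):
    data = list(labeled)            # (value, class_id) pairs, class_id in {0, 1}
    pool = list(unlabeled)
    for _ in range(rounds):
        # Recompute class centroids from everything labelled so far.
        centroids = []
        for k in (0, 1):
            vals = [v for v, y in data if y == k]
            centroids.append(sum(vals) / len(vals))
        adopted, rest = [], []
        for v in pool:
            d0, d1 = abs(v - centroids[0]), abs(v - centroids[1])
            near, far = min(d0, d1), max(d0, d1)
            # "Confident" = clearly closer to one centroid than the other.
            if far > 0 and 1 - near / far >= confidence:
                adopted.append((v, 0 if d0 < d1 else 1))
            else:
                rest.append(v)
        if not adopted:
            break
        data.extend(adopted)
        pool = rest
    return centroids
```

Ambiguous points (roughly equidistant from both centroids) are never absorbed, which is the same principle the SSEM applies when it restricts itself to confidently classified loss factors.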

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Species turnover, or β diversity, is a conceptually attractive surrogate for conservation planning. However, there has been only 1 attempt to determine how well sites selected to maximize β diversity represent species, and that test was done at a scale too coarse (2,500-km² sites) to inform most conservation decisions. We used 8 plant datasets, 3 bird datasets, and 1 mammal dataset to evaluate whether sites selected to span β diversity efficiently represent species at finer scales (site sizes from < 1 ha to 625 km²). We used ordinations to characterize dissimilarity in species assemblages (β diversity) among plots (inventory data) or among grid cells (atlas data). We then selected sites to maximize β diversity and used the Species Accumulation Index (SAI) to evaluate how efficiently the surrogate (selecting sites for maximum β diversity) represented species in the same taxon. Across all 12 datasets, sites selected for maximum β diversity represented species with a median efficiency of 24% (i.e., the surrogate was 24% more effective than random selection of sites) and an interquartile range of 4% to 41%. β diversity was a better surrogate for bird datasets than for plant datasets, and for atlas datasets with 10-km to 14-km grid cells than for those with 25-km grid cells. We conclude that β diversity is more than a mere descriptor of how species are distributed on the landscape; in particular, β diversity might be useful for maximizing the complementarity of a set of sites. Because we tested only within-taxon surrogacy, our results do not prove that β diversity is useful for conservation planning, but they do justify further investigation to identify the circumstances in which β diversity performs well and to evaluate it as a cross-taxon surrogate.
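One common formulation of the Species Accumulation Index is SAI = (S − R)/(O − R), where S is the number of species represented by the surrogate-selected sites, O the number represented by an equally sized set chosen to maximize coverage, and R the expectation under random selection. The sketch below assumes that formulation; the data layout and greedy "optimal" are illustrative, not the study's code:

```python
# Score a surrogate ordering of sites with SAI = (S - R) / (O - R).
import random

def species_covered(sites, site_species):
    covered = set()
    for s in sites:
        covered |= site_species[s]
    return len(covered)

def greedy_optimal(site_species, k):
    # Greedy complementarity: repeatedly pick the site adding the most new species.
    chosen, covered = [], set()
    remaining = set(site_species)
    for _ in range(k):
        best = max(sorted(remaining), key=lambda s: len(site_species[s] - covered))
        chosen.append(best)
        covered |= site_species[best]
        remaining.discard(best)
    return chosen

def expected_random(site_species, k, trials=2000, seed=0):
    rng = random.Random(seed)
    sites = list(site_species)
    return sum(species_covered(rng.sample(sites, k), site_species)
               for _ in range(trials)) / trials

def sai(surrogate_order, site_species, k):
    S = species_covered(surrogate_order[:k], site_species)
    O = species_covered(greedy_optimal(site_species, k), site_species)
    R = expected_random(site_species, k)
    return (S - R) / (O - R) if O != R else 0.0
```

A surrogate ordering that matches the greedy optimum scores 1, random performance scores 0, and a surrogate worse than random scores below 0, which is how the 24% median efficiency above should be read.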

Created: 2016-03-04
Description

A major challenge for biogeographers and conservation planners is to identify where best to locate or distribute high-priority areas for conservation and to explore whether these areas are well represented by conservation actions such as protected areas (PAs). We aimed to identify high-priority areas for conservation, expressed as hotspots of rarity-weighted richness (HRR) (sites that efficiently represent species), for birds across EU countries, and to explore whether HRRs are well represented by the Natura 2000 network. Natura 2000 is an evolving network of PAs that seeks to conserve biodiversity through the persistence of the most patrimonial species and habitats across Europe. This network includes Sites of Community Importance (SCI) and Special Areas of Conservation (SAC); the latter regulated the designation of Special Protected Areas (SPA). Distribution maps for 416 bird species and complementarity-based approaches were used to map geographical patterns of rarity-weighted richness (RWR) and HRRs for birds. We used the species accumulation index to evaluate whether RWR was an efficient surrogate for identifying HRRs for birds. The results of our analysis support the proposition that prioritizing sites in order of RWR is a reliable way to identify sites that efficiently represent birds. HRRs were concentrated in the Mediterranean Basin and in the alpine and boreal biogeographical regions of northern Europe. The cells with high RWR values did not correspond to cells where Natura 2000 was present. We suggest that patterns of RWR could become a focus for conservation biogeography. Our analysis demonstrates that identifying HRRs is a robust approach for prioritizing management actions and reveals the need for more conservation actions, especially in HRRs.
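Rarity-weighted richness is conventionally computed by weighting each species by the inverse of the number of sites it occupies, so range-restricted species count for more. A minimal sketch assuming that definition (the site-to-species data layout is illustrative, not the study's):

```python
# RWR(site) = sum over species present of 1 / occupancy(species).
def rarity_weighted_richness(site_species):
    occupancy = {}
    for species_set in site_species.values():
        for sp in species_set:
            occupancy[sp] = occupancy.get(sp, 0) + 1
    return {site: sum(1.0 / occupancy[sp] for sp in spp)
            for site, spp in site_species.items()}

def hotspots(site_species, n):
    # The n highest-scoring sites are candidate hotspots of rarity-weighted richness.
    rwr = rarity_weighted_richness(site_species)
    return sorted(rwr, key=rwr.get, reverse=True)[:n]
```

Ranking cells by this score and taking the top fraction is what "prioritizing sites in order of RWR" amounts to in the abstract above.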

Created: 2017-04-05
Description

In the decade since Yamanaka and colleagues described methods to reprogram somatic cells into a pluripotent state, human induced pluripotent stem cells (hiPSCs) have demonstrated tremendous promise in numerous disease modeling, drug discovery, and regenerative medicine applications. More recently, the development and refinement of advanced gene transduction and editing technologies have further accelerated the potential of hiPSCs. In this review, we discuss the various gene editing technologies that are being implemented with hiPSCs. Specifically, we describe the emergence of technologies including zinc-finger nuclease (ZFN), transcription activator-like effector nuclease (TALEN), and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 that can be used to edit the genome at precise locations, and discuss the strengths and weaknesses of each of these technologies. In addition, we present the current applications of these technologies in elucidating the mechanisms of human development and disease, developing novel and effective therapeutic molecules, and engineering cell-based therapies. Finally, we discuss the emerging technological advances in targeted gene editing methods.

Contributors: Brookhouser, Nicholas (Author) / Raman, Sreedevi (Author) / Potts, Chris (Author) / Brafman, David (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-02-06