Matching Items (412)
Description

The partitioning of available solar energy into different fluxes at the Earth's surface is important in determining physical processes such as turbulent transport, subsurface hydrology, and land-atmosphere interactions. Direct measurements of these turbulent fluxes are carried out using eddy-covariance (EC) towers. However, the distribution of EC towers is sparse due to their relatively high cost and the practical difficulties of logistics and deployment. As a result, the data are temporally and spatially limited and inadequate for research at large scales, such as regional and global climate modeling. Besides field measurements, an alternative is to estimate turbulent fluxes from the intrinsic relations between surface energy budget components, largely through thermodynamic equilibrium. These relations, referred to as relative efficiencies, have been incorporated in several models to estimate the magnitude of turbulent fluxes in the surface energy budget, such as latent heat and sensible heat. In this study, three theoretical models, based respectively on a lumped heat transfer model, linear stability analysis, and the maximum entropy principle, were investigated. Model predictions of relative efficiencies were compared with turbulent flux data over different land covers, viz. lake, grassland, and suburban surfaces. Similar results were observed over the lake and suburban surfaces, but significant deviations were found over the vegetated surface. The relative efficiency of outgoing longwave radiation was found to deviate from theoretical predictions by orders of magnitude. Results also show that the energy partitioning process is strongly influenced by surface water availability. The study provides insight into which surface properties determine the energy partitioning process over different land covers and offers suggestions for future models.
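As a point of reference for the partitioning discussed above, here is a minimal sketch that computes available energy, the Bowen ratio, and the fractions of available energy carried by the turbulent fluxes from a half-hourly EC record. It is an illustration only, not one of the three models investigated in the thesis, and the column names and sample values are assumptions.

```python
# Illustrative only: partition available energy at the surface from half-hourly
# eddy-covariance (EC) records. Column names and values are assumptions.
import pandas as pd

def partition_energy(ec: pd.DataFrame) -> pd.DataFrame:
    """Add available energy, Bowen ratio, and turbulent-flux fractions (fluxes in W m^-2)."""
    out = ec.copy()
    out["available"] = out["net_radiation"] - out["ground_heat"]   # Rn - G
    out["bowen"] = out["sensible_heat"] / out["latent_heat"]       # H / LE
    out["f_sensible"] = out["sensible_heat"] / out["available"]    # H / (Rn - G)
    out["f_latent"] = out["latent_heat"] / out["available"]        # LE / (Rn - G)
    return out

# One hypothetical half-hour record over a well-watered surface
ec = pd.DataFrame({"net_radiation": [500.0], "ground_heat": [50.0],
                   "sensible_heat": [120.0], "latent_heat": [300.0]})
print(partition_energy(ec)[["available", "bowen", "f_sensible", "f_latent"]])
```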
Contributors: Yang, Jiachuan (Author) / Wang, Zhihua (Thesis advisor) / Huang, Huei-Ping (Committee member) / Vivoni, Enrique (Committee member) / Mays, Larry (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The North American Monsoon System (NAMS) contributes ~55% of the annual rainfall in the Chihuahuan Desert during the summer months. Relatively frequent, intense storms during the NAMS increase soil moisture, reduce surface temperature, and lead to runoff in ephemeral channels. Quantifying these processes, however, is difficult due to the sparse nature of coordinated observations. In this study, I present results from a field network of rain gauges (n = 5), soil probes (n = 48), channel flumes (n = 4), and meteorological equipment in a small desert shrubland watershed (~0.05 km²) in the Jornada Experimental Range. Using this high-resolution network, I characterize the temporal and spatial variability of rainfall, soil conditions, and channel runoff within the watershed from June 2010 to September 2011, covering two NAMS periods. In addition, CO2, water, and energy measurements at an eddy covariance tower quantify seasonal, monthly, and event-scale changes in land-atmosphere states and fluxes. Results from this study indicate a strong seasonality in water and energy fluxes, with a reduction in the Bowen ratio (B, the ratio of sensible to latent heat flux) from winter (B = 14) to summer (B = 3.3). This reduction is tied to shallow soil moisture availability during the summer (s = 0.040 m³/m³) as compared to the winter (s = 0.004 m³/m³). During the NAMS, I analyzed four consecutive rainfall-runoff events to quantify the soil moisture and channel flow responses and how water availability impacted the land-atmosphere fluxes. Spatial hydrologic variations during events occur over distances as short as ~15 m. The field network also allowed comparisons of several approaches to estimate evapotranspiration (ET). I found a more accurate ET estimate (a reduction of mean absolute error by 38%) when using distributed soil moisture data, as compared to a standard water balance approach based on the tower site. In addition, use of spatially-varied soil moisture data yielded a more reasonable relationship between ET and soil moisture, an important parameterization in many hydrologic models. These analyses illustrate the value of high-resolution sampling for quantifying seasonal fluxes in desert shrublands and for improving closure of the water balance in small watersheds.
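To illustrate the water-balance ET comparison mentioned above, the sketch below estimates event-scale ET as precipitation minus runoff minus the change in soil water storage, using either a single probe or a spatially averaged set of probes. It is not the study's code; the numbers, the active soil depth, and the variable names are assumptions.

```python
# Illustrative event-scale water balance: ET = P - Q - dS. Inputs are hypothetical.
import numpy as np

def et_water_balance(precip_mm, runoff_mm, theta_before, theta_after, depth_mm=300.0):
    """ET (mm) from rainfall, channel runoff, and the change in volumetric soil
    moisture over an assumed active soil depth; theta_* may be scalars (one probe)
    or arrays (a distributed network, averaged spatially)."""
    d_storage_mm = (np.mean(theta_after) - np.mean(theta_before)) * depth_mm
    return precip_mm - runoff_mm - d_storage_mm

# Tower-site probe only vs. a distributed set of probes (values are made up)
print(et_water_balance(20.0, 2.0, 0.05, 0.09))
print(et_water_balance(20.0, 2.0,
                       np.array([0.04, 0.05, 0.06]),
                       np.array([0.08, 0.10, 0.09])))
```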
Contributors: Templeton, Ryan (Author) / Vivoni, Enrique R (Thesis advisor) / Mays, Larry (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Water and energy resources are intrinsically linked, yet they are managed separately even in the water-scarce American Southwest. This study develops a spatially explicit model of water-energy interdependencies in Arizona and assesses the potential for co-beneficial conservation programs. The interdependent benefits of investments in eight conservation strategies are assessed within the context of legislated renewable energy portfolio and energy efficiency standards. The co-benefits of conservation are found to be significant. Water conservation policies have the potential to reduce statewide electricity demand by 1.0-3.0%, satisfying 3.3-10% of the state's mandated energy efficiency standard. Adoption of energy-efficiency measures and renewable generation portfolios can reduce non-agricultural water demand by 2.3-12%. These conservation co-benefits are typically not included in conservation plans or benefit-cost analyses. Many co-benefits offer negative costs of saved water and energy, indicating that these measures provide water and energy savings at no net cost. Because the ranges of costs and savings for water-energy conservation measures are somewhat uncertain, future studies should investigate the co-benefits of individual conservation strategies in detail. Although this study focuses on Arizona, the analysis can be extended elsewhere as renewable portfolio and energy efficiency standards become more common nationally and internationally.
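The co-benefit accounting behind such estimates reduces to multiplying saved water by an energy intensity and saved electricity by a water intensity. The sketch below shows that arithmetic with hypothetical intensities and volumes; none of the numbers are taken from the study.

```python
# Hypothetical co-benefit arithmetic: energy embedded in saved water and water
# embedded in saved electricity. Intensities and volumes are made up.
def electricity_cobenefit(water_saved_m3, kwh_per_m3):
    """Electricity (kWh) avoided by not supplying, treating, or heating saved water."""
    return water_saved_m3 * kwh_per_m3

def water_cobenefit(electricity_saved_kwh, m3_per_kwh):
    """Water (m^3) avoided by not generating the saved electricity."""
    return electricity_saved_kwh * m3_per_kwh

print(electricity_cobenefit(1_000_000, 1.5))   # 1 Mm^3 of water saved at 1.5 kWh/m^3
print(water_cobenefit(10_000_000, 0.002))      # 10 GWh saved at 0.002 m^3/kWh
```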
Contributors: Bartos, Matthew D. (Author) / Chester, Mikhail (Thesis director) / Mays, Larry (Committee member) / Barrett, The Honors College (Contributor)
Created: 2013-12
Description

Fluctuating flow releases on regulated rivers destabilize downstream riverbanks, causing unintended, unnatural, and uncontrolled geomorphologic changes. These flow releases, usually a result of upstream hydroelectric dam operations, create manmade tidal effects that cause significant environmental damage; harm fish, vegetation, mammal, and avian habitats; and destroy riverbank camping and boating areas. This work focuses on rivers that are regulated by hydroelectric dams and have banks formed by sediment processes. For these systems, bank failures can be reduced, but not eliminated, by modifying flow release schedules. Unfortunately, comprehensive mitigation can only be accomplished with expensive rebuilding floods, which release trapped sediment back into the river. The contribution of this research is to optimize weekly hydroelectric dam releases to minimize the cost of annually mitigating downstream bank failures. Physical process modeling of dynamic seepage effects is achieved through a new analytical unsaturated porewater response model that allows arbitrary periodic stage loading by Fourier series. This model is incorporated into a derived bank failure risk model that utilizes stochastic parameters identified through a meta-analysis of more than 150 documented slope failures. The risk model is then expanded to the river reach level by a Monte Carlo simulation and nonlinear regression of measured attenuation effects. Finally, the comprehensive risk model is subjected to a simulated annealing (SA) optimization scheme that accounts for physical, environmental, mechanical, operational, and flow constraints. The complete risk model is used to optimize the weekly flow release schedule of the Glen Canyon Dam, which regulates flow in the Colorado River within the Grand Canyon. A solution was obtained that reduces downstream failure risk, allows annual rebuilding floods, and predicts a hydroelectric revenue increase of more than 2%.
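For readers unfamiliar with simulated annealing, the skeleton below shows the general form of such a search over a weekly release schedule. The objective (penalizing hour-to-hour fluctuation subject to a volume target) and all bounds are placeholders, not the dissertation's risk and revenue model.

```python
# Generic simulated-annealing skeleton with a placeholder objective; a real
# application would substitute the bank-failure risk and hydropower revenue terms.
import math
import random

def anneal(x0, objective, neighbor, t0=1.0, cooling=0.995, steps=5000):
    x, best = list(x0), list(x0)
    temp = t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = objective(cand) - objective(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
            if objective(x) < objective(best):
                best = list(x)
        temp *= cooling
    return best

def objective(q):
    """Penalize hourly fluctuation and deviation from a weekly volume target."""
    fluctuation = sum(abs(q[i] - q[i - 1]) for i in range(1, len(q)))
    volume_penalty = abs(sum(q) - 168 * 400.0)
    return fluctuation + 10.0 * volume_penalty

def neighbor(q):
    """Perturb one hourly release within assumed operating bounds."""
    cand = list(q)
    i = random.randrange(len(cand))
    cand[i] = max(200.0, min(700.0, cand[i] + random.uniform(-20.0, 20.0)))
    return cand

schedule = anneal([400.0] * 168, objective, neighbor)   # 168 hourly releases in a week
print(round(objective(schedule), 1))
```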
Contributors: Travis, Quentin Brent (Author) / Mays, Larry (Thesis advisor) / Schmeeckle, Mark (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

Pay-for-performance (PFP) is a relatively new approach to agricultural conservation that attaches an incentive payment to quantified reductions in nutrient runoff from a participating farm. Similar to a payment for ecosystem services approach, PFP lends itself to providing incentives for the most beneficial practices at the field level. To date, PFP conservation in the U.S. has only been applied in small pilot programs. Because monitoring conservation performance for each field enrolled in a program would be cost-prohibitive, field-level modeling can provide cost-effective estimates of anticipated improvements in nutrient runoff. We developed a PFP system that uses a unique application of one of the leading agricultural models, the USDA’s Soil and Water Assessment Tool, to evaluate the nutrient load reductions of potential farm practice changes based on field-level agronomic and management data. The initial phase of the project focused on simulating individual fields in the River Raisin watershed in southeastern Michigan. Here we present development of the modeling approach and results from the pilot year, 2015-2016. These results stress that (1) there is variability in practice effectiveness both within and between farms, and thus there is not one “best practice” for all farms, (2) conservation decisions are made most effectively at the scale of the farm field rather than the sub-watershed or watershed level, and (3) detailed, field-level management information is needed to accurately model and manage on-farm nutrient loadings.
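The payment logic of a PFP program is simple once a model supplies field-level loads: pay a fixed rate per unit of modeled reduction. The sketch below shows that calculation; the loads, rate, and field names are hypothetical, and in practice the load estimates would come from the SWAT setup described in the article.

```python
# Hypothetical pay-for-performance payment calculation; load estimates would come
# from field-level modeling (e.g., SWAT), not from this script.
def pfp_payment(baseline_load_kg, practice_load_kg, rate_per_kg):
    """Return (modeled reduction in kg, payment in $) for one field and practice."""
    reduction = max(0.0, baseline_load_kg - practice_load_kg)
    return reduction, reduction * rate_per_kg

fields = {
    "field_A": (12.0, 7.5),   # baseline vs. practice-change scenario, kg/yr (made up)
    "field_B": (4.0, 3.8),
}
for name, (base, scen) in fields.items():
    reduction, pay = pfp_payment(base, scen, rate_per_kg=50.0)
    print(f"{name}: {reduction:.1f} kg reduced, ${pay:.2f}")
```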

Supplemental information mentioned in the article is attached as a separate document.

Contributors: Muenich, Rebecca (Author) / Kalcic, M. M. (Author) / Winsten, J. (Author) / Fisher, K. (Author) / Day, M. (Author) / O'Neil, G. (Author) / Wang, Y.-C. (Author) / Scavia, D. (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017
Description

The emerging field of neuroprosthetics is focused on the development of new therapeutic interventions that will be able to restore some lost neural function by selective electrical stimulation or by harnessing activity recorded from populations of neurons. As more and more patients benefit from these approaches, interest in neural interfaces has grown significantly, and a new generation of penetrating microelectrode arrays is providing unprecedented access to the neurons of the central nervous system (CNS). These microelectrodes have active tip dimensions that are similar in size to neurons, and because they penetrate the nervous system, they provide selective access to these cells (within a few microns). However, the very long-term viability of chronically implanted microelectrodes and the capability of recording the same spiking activity over long time periods still remain to be established and confirmed in human studies. Here we review the main responses to acute implantation of microelectrode arrays, and emphasize that it will become essential to control the neural tissue damage induced by these intracortical microelectrodes in order to realize the high clinical potential of this technology.

Contributors: Fernandez, Eduardo (Author) / Greger, Bradley (Author) / House, Paul A. (Author) / Aranda, Ignacio (Author) / Botella, Carlos (Author) / Albisua, Julio (Author) / Soto-Sanchez, Cristina (Author) / Alfaro, Arantxa (Author) / Normann, Richard A. (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-07-21
Description

In this study, a low-cycle fatigue experiment was conducted on printed wiring boards (PWB). A Weibull regression model and a computational Bayesian analysis method were applied to analyze the failure time data and to identify important factors that influence PWB lifetime. The analysis shows that both the shape and scale parameters of the Weibull distribution are affected by the supplier factor and the preconditioning method. Based on the energy equivalence approach, a 6-cycle reflow precondition can be replaced by a 5-cycle IST precondition, so the total testing time can be greatly reduced. This conclusion was validated by a likelihood ratio test on two datasets collected under the two different preconditioning methods. Therefore, Weibull regression modeling is an effective approach for accounting for the variation of experimental settings in PWB lifetime prediction.
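A minimal sketch of the likelihood-ratio comparison mentioned above, using maximum-likelihood Weibull fits rather than the paper's Bayesian analysis; the failure-time data are synthetic and the group labels are only illustrative.

```python
# Synthetic data; maximum-likelihood Weibull fits compared with a likelihood ratio test.
import numpy as np
from scipy import stats

group1 = stats.weibull_min.rvs(1.8, scale=300, size=40, random_state=1)  # e.g., reflow precondition
group2 = stats.weibull_min.rvs(1.8, scale=310, size=40, random_state=2)  # e.g., IST precondition

def weibull_loglik(data):
    """Log-likelihood of a two-parameter Weibull fit (location fixed at zero)."""
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    return stats.weibull_min.logpdf(data, shape, loc, scale).sum()

# Null: one Weibull for the pooled data; alternative: separate Weibulls per group
ll_pooled = weibull_loglik(np.concatenate([group1, group2]))
ll_separate = weibull_loglik(group1) + weibull_loglik(group2)
lr_stat = 2 * (ll_separate - ll_pooled)        # asymptotically chi-square, 2 extra parameters
p_value = stats.chi2.sf(lr_stat, df=2)
print(lr_stat, p_value)
```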

Contributors: Pan, Rong (Author) / Xu, Xinyue (Author) / Juarez, Joseph (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-11-12
Description

Studies about the data quality of the National Bridge Inventory (NBI) reveal missing, erroneous, and logically conflicting data. Existing data quality programs lack a focus on detecting the logical inconsistencies within NBI and between NBI and external data sources. For example, within NBI, the structural condition ratings of some bridges improve over a period during which no improvement activity or maintenance funds are recorded in the relevant NBI attributes. An example of logical inconsistencies between NBI and external data sources is that some bridges are not located within 100 meters of any roads extracted from Google Maps. Manual detection of such logical errors is tedious and error-prone. This paper proposes a systematic “hypothesis testing” approach for automatically detecting logical inconsistencies within NBI and between NBI and external data sources. Using this framework, the authors detected logical inconsistencies in the NBI data of two sample states to reveal suspicious data items. The results showed that about 1% of bridges were not located within 100 meters of any actual roads, and a few bridges showed improvements in the structural evaluation without any reported maintenance records.
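The sketch below illustrates the flavor of such rule-based consistency checks on a toy table; the column names and values are hypothetical and are not actual NBI item codes.

```python
# Toy consistency checks; columns and values are hypothetical, not NBI item codes.
import pandas as pd

nbi = pd.DataFrame({
    "bridge_id":        ["B1", "B2", "B3"],
    "condition_2014":   [5, 6, 4],
    "condition_2015":   [7, 6, 5],
    "maintenance_cost": [0.0, 0.0, 250000.0],   # reported improvement spending
    "dist_to_road_m":   [12.0, 340.0, 8.0],     # distance to nearest mapped road
})

# Rule 1: condition rating improves with no recorded improvement activity or funds
rule1 = (nbi["condition_2015"] > nbi["condition_2014"]) & (nbi["maintenance_cost"] == 0)
# Rule 2: bridge not located within 100 meters of any mapped road
rule2 = nbi["dist_to_road_m"] > 100.0

print(nbi.loc[rule1 | rule2, "bridge_id"].tolist())   # flags B1 and B2 as suspicious
```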

Contributors: Din, Zia Ud (Author) / Tang, Pingbo (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Quorum-sensing networks enable bacteria to sense and respond to chemical signals produced by neighboring bacteria. They are widespread: over 100 morphologically and genetically distinct species of eubacteria are known to use quorum sensing to control gene expression. This diversity suggests the potential to use natural protein variants to engineer parallel, input-specific, cell–cell communication pathways. However, only three distinct signaling pathways, Lux, Las, and Rhl, have been adapted for and broadly used in engineered systems. The paucity of unique quorum-sensing systems and their propensity for crosstalk limit the usefulness of our current quorum-sensing toolkit. This review discusses the need for more signaling pathways, roadblocks to using multiple pathways in parallel, and strategies for expanding the quorum-sensing toolbox for synthetic biology.

Contributors: Daer, Rene (Author) / Muller, Ryan Yue (Author) / Haynes, Karmella (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-03-10
Description

Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and it will provide new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performance of most machine-learning-based algorithms has mainly been evaluated on predicting off-target interactions within the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale.

Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested on a reliable, extensive, cross-gene-family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable: it can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and side effect prediction. The software and benchmark are available at https://github.com/hansaimlim/REMAP.
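As a rough illustration of the one-class collaborative filtering idea behind REMAP, the toy sketch below factorizes a small 0/1 chemical-protein association matrix and scores the unobserved pairs. The dual regularization by chemical and protein similarity networks described in the paper is omitted, and all data and parameters are made up.

```python
# Toy one-class (implicit-feedback) matrix factorization; the similarity-network
# regularization used by REMAP is omitted. Data and parameters are made up.
import numpy as np

def one_class_mf(R, rank=2, w_neg=0.1, lam=0.1, iters=50):
    """Alternating least squares on a 0/1 association matrix R, down-weighting
    the unobserved (zero) entries by w_neg."""
    n, m = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n, rank))   # chemical factors
    V = rng.normal(scale=0.1, size=(m, rank))   # protein factors
    W = np.where(R > 0, 1.0, w_neg)             # confidence weights
    for _ in range(iters):
        for i in range(n):
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + lam * np.eye(rank), V.T @ Wi @ R[i])
        for j in range(m):
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + lam * np.eye(rank), U.T @ Wj @ R[:, j])
    return U @ V.T                              # predicted association scores

R = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 1]], dtype=float)
print(np.round(one_class_mf(R), 2))   # high scores at zero entries suggest candidate off-targets
```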

Contributors: Lim, Hansaim (Author) / Poleksic, Aleksandar (Author) / Yao, Yuan (Author) / Tong, Hanghang (Author) / He, Di (Author) / Zhuang, Luke (Author) / Meng, Patrick (Author) / Xie, Lei (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-10-07