This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Previous studies in building energy assessment clearly state that to meet sustainable energy goals, existing buildings, as well as new buildings, will need to improve their energy efficiency. Thus, meeting energy goals relies on retrofitting existing buildings. Most building energy models are bottom-up engineering models, meaning these models calculate energy demand of individual buildings through their physical properties and energy use for specific end uses (e.g., lighting, appliances, and water heating). Researchers then scale up these model results to represent the building stock of the region studied.

Studies reveal that there is a lack of information about the building stock and associated modeling tools, and this lack of knowledge affects the assessment of building energy efficiency strategies. The literature suggests that the level of complexity of energy models needs to be limited. The accuracy of these energy models can be improved by reducing the number of input parameters, alleviating the need for users to make many assumptions about building construction and occupancy, among other factors. To mitigate the need for assumptions and the resulting model inaccuracies, the authors argue buildings should be described in a regional stock model with a restricted number of input parameters. One commonly accepted method of identifying critical input parameters is sensitivity analysis, which requires a large number of model runs that are time consuming and may demand high processing capacity.

This paper utilizes the Energy, Carbon and Cost Assessment for Building Stocks (ECCABS) model, which calculates the net energy demand of buildings and presents aggregated and individual-building-level demand for specific end uses, e.g., heating, cooling, lighting, hot water, and appliances. The model has already been validated using Swedish, Spanish, and UK building stock data. This paper discusses potential improvements to this model by assessing the feasibility of using stepwise regression to identify the most important input parameters, using data from the UK residential sector. The paper presents the results of stepwise regression and compares them to sensitivity analysis; finally, it documents the advantages and challenges associated with each method.
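The stepwise selection idea can be illustrated with a minimal sketch. The greedy forward variant, the synthetic data, and the parameter count below are illustrative assumptions, not the ECCABS inputs or the authors' actual implementation:

```python
import numpy as np

def forward_stepwise(X, y, max_features=3):
    """Greedy forward stepwise regression: repeatedly add the input
    parameter whose inclusion most reduces the residual sum of squares."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(max_features):
        best_rss, best_j = None, None
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic "building" data: 5 candidate inputs, only 0 and 2 matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)
print(forward_stepwise(X, y, max_features=2))  # parameters 0 and 2 are kept
```

Each round keeps the parameter that most reduces the residual sum of squares, which is the intuition behind using stepwise regression as a cheaper substitute for a full sensitivity analysis over many model runs.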

Contributors: Arababadi, Reza (Author) / Naganathan, Hariharan (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Construction waste management has become extremely important due to stricter disposal and landfill regulations and a shrinking number of available landfills. Extensive work has been done on waste treatment and management in the construction industry. Concepts like deconstruction, recyclability, and Design for Disassembly (DfD) are examples of better construction waste management methods. Although some authors and organizations have published rich guides addressing DfD principles, only a few buildings have so far been developed in this area. This study aims to find the challenges in the current practice of deconstruction activities and the gaps between its theory and implementation. Furthermore, it aims to provide insights into how DfD can create opportunities to turn these concepts into strategies that can be widely adopted by construction industry stakeholders in the near future.

Contributors: Rios, Fernanda (Author) / Chong, Oswald (Author) / Grau, David (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2015-09-14
Description

We study the so-called Descent, or Q̄, Equation for the null polygonal supersymmetric Wilson loop in the framework of the pentagon operator product expansion. To properly address this problem, one needs to restore the cyclicity of the loop broken by the choice of OPE channels. In the course of the study, we unravel a phenomenon of twist enhancement when passing to a cyclically shifted channel. Here, we focus on the consistency of the all-order Descent Equation for the particular case relating the NMHV heptagon to the MHV hexagon. We find that the equation establishes a relation between contributions of different twists, and we successfully verify it in perturbation theory, making use of available bootstrap predictions for elementary pentagons.

Contributors: Belitsky, Andrei (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-10-24
Description

The United States generates the most waste among OECD countries, and this waste generation has adverse effects. One of the most serious is greenhouse gas emission, especially CH4, which contributes to global warming. However, the amount of waste generated is not decreasing, and the United States recycling rate, which could reduce waste generation, is only 26%, lower than in other OECD countries. Thus, waste generation and greenhouse gas emissions should decrease, and in order for that to happen, identifying the causes should be made a priority. The research objective is to verify whether the Environmental Kuznets Curve relationship holds for waste generation and GDP across the U.S. Moreover, the study examines whether total waste generation and recycled waste influence carbon dioxide emissions from the waste sector. Annual U.S. data from 1990 to 2012 were used. The data were collected from various sources, and the Granger causality test was applied to identify the causal relationships. The results showed that there is no causality between GDP and waste generation, but that total waste generation and recycling significantly increase and decrease greenhouse gas emissions from the waste sector, respectively. This implies that waste generation will not decrease even if GDP increases, and that if waste generation decreases or the recycling rate increases, greenhouse gas emissions will decrease. Based on these results, it is expected that waste generation and carbon dioxide emissions from the waste sector can be reduced more efficiently.
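The Granger causality test applied here can be sketched as an F-test comparing a restricted autoregression against one augmented with lags of the candidate cause. The synthetic series, lag count, and effect sizes below are illustrative assumptions, not the study's waste and GDP data:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for whether lags of x improve an autoregression of y:
    the restricted-vs-unrestricted comparison behind a Granger test."""
    n = len(y)
    rows = n - lags
    Y = y[lags:]
    # Restricted model: intercept plus lags of y only.
    Xr = np.column_stack([np.ones(rows)] +
                         [y[lags - k:n - k] for k in range(1, lags + 1)])
    # Unrestricted model: additionally include lags of x.
    Xu = np.column_stack([Xr] +
                         [x[lags - k:n - k] for k in range(1, lags + 1)])
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.sum((Y - A @ beta) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df1, df2 = lags, rows - Xu.shape[1]
    return ((rss_r - rss_u) / df1) / (rss_u / df2)

# Synthetic pair of series where x leads y by one period.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x), granger_f(x, y))  # the first F should dwarf the second
```

A large F in one direction and a small one in the other is what supports the paper's asymmetric conclusions (waste causes emissions, but GDP does not cause waste).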

Contributors: Lee, Seungtaek (Author) / Kim, Jonghoon (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

As construction continues to be a leading industry in the number of injuries and fatalities annually, several organizations and agencies are working avidly to ensure the number of injuries and fatalities is minimized. The Occupational Safety and Health Administration (OSHA) is one such effort to assure safe and healthful working conditions for working men and women by setting and enforcing standards and by providing training, outreach, education, and assistance. Given the large databases of OSHA historical events and reports, manual analysis of the fatality and catastrophe investigation content is a time-consuming and expensive process. This paper aims to evaluate the strength of unsupervised machine learning and Natural Language Processing (NLP) in supporting safety inspections and reorganizing accident databases at the state level. After collecting construction accident reports from the OSHA Arizona office, the methodology consists of preprocessing the accident reports and weighting terms in order to apply a data-driven, unsupervised, K-Means-based clustering approach. The proposed method classifies the collected reports into four clusters, each representing a type of accident. The results show the construction accidents in the state of Arizona to be caused by falls (42.9%), struck-by objects (34.3%), electrocutions (12.5%), and trench collapses (10.3%). The findings of this research empower state and local agencies with a customized presentation of the accidents fitting their regulations and weather conditions. What is applicable to one climate might not be suitable for another; therefore, such rearrangement of the accident database at the state level is a necessary prerequisite to enhancing local safety applications and standards.
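The term-weighting plus K-Means pipeline can be sketched as follows. The toy report snippets, the plain term-frequency weighting (standing in for a scheme such as TF-IDF), and the deterministic initialization are simplifying assumptions, not the paper's corpus or exact preprocessing:

```python
import numpy as np
from collections import Counter

# Toy accident report snippets (hypothetical, for illustration only).
reports = [
    "worker fell from scaffold ladder fall",
    "fall from roof ladder worker",
    "worker struck by falling object crane",
    "struck by vehicle object on site",
    "electrocution contact with power line",
    "electrical shock power line contact",
]

# Bag-of-words term-frequency matrix, one row per report.
vocab = sorted({w for r in reports for w in r.split()})
X = np.array([[Counter(r.split())[w] for w in vocab] for r in reports], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalise each report

def kmeans(X, k, iters=50):
    # Deterministic initialisation (every other report) for reproducibility.
    centers = X[::2][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

print(kmeans(X, k=3))  # reports about the same accident type share a label
```

The paper's version of this loop, run on the real OSHA Arizona reports with four clusters, is what yields the fall / struck-by / electrocution / trench-collapse grouping.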

Contributors: Chokor, Abbas (Author) / Naganathan, Hariharan (Author) / Chong, Oswald (Author) / El Asmar, Mounir (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; causes of energy surges are not easily determined; and potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large, complex energy data sets and then presents a set of computationally efficient data mining algorithms for reverse engineering. In order to develop a structural system model for reverse engineering, two focus groups are developed that have a direct relation with cause and effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
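The traditional regression baseline that the paper critiques can be sketched as an ordinary least squares fit of demand on historical predictors. The synthetic load series and the temperature and working-hours predictors below are illustrative assumptions, not utility data:

```python
import numpy as np

# Hypothetical hourly history: demand driven by temperature and a
# working-hours indicator, the kind of data traditional models rely on.
rng = np.random.default_rng(2)
hours = np.arange(24 * 60) % 24
temp = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
work = ((9 <= hours) & (hours <= 17)).astype(float)
load = 50 + 2.5 * temp + 4.0 * work + rng.normal(0, 2, hours.size)

# Ordinary least squares fit of demand on the historical predictors.
A = np.column_stack([np.ones(hours.size), temp, work])
coef, *_ = np.linalg.lstsq(A, load, rcond=None)
rmse = np.sqrt(np.mean((load - A @ coef) ** 2))
print(coef.round(2), round(float(rmse), 2))
```

A fit like this recovers aggregate trends but, as the paper argues, cannot attribute demand to individual equipment or buildings; hence the proposed combination with bottom-up energy use models.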

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

We present a new approach to computing event shape distributions or, more precisely, charge flow correlations in a generic conformal field theory (CFT). These infrared finite observables are familiar from collider physics studies and describe the angular distribution of global charges in outgoing radiation created from the vacuum by some source. The charge flow correlations can be expressed in terms of Wightman correlation functions in a certain limit. We explain how to compute these quantities starting from their Euclidean analogues by means of a nontrivial analytic continuation which, in the framework of CFT, can be performed elegantly in Mellin space. The relation between the charge flow correlations and Euclidean correlation functions can be reformulated directly in configuration space, bypassing the Mellin representation, as a certain Lorentzian double discontinuity of the correlation function integrated along the cuts. We illustrate the general formalism in N = 4 SYM, making use of the well-known results on the four-point correlation function of half-BPS scalar operators. We compute the double scalar flow correlation in N = 4 SYM at weak and strong coupling and show that it agrees with known results obtained by different techniques. One of the remarkable features of the N = 4 theory is that the scalar and energy flow correlations are proportional to each other. Imposing natural physical conditions on the energy flow correlations (finiteness, positivity and regularity), we formulate additional constraints on the four-point correlation functions in N = 4 SYM that should be valid at any coupling and away from the planar limit.
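For orientation, the flow operators behind these observables are conventionally built by integrating a conserved current at null infinity. The expressions below sketch the standard definitions for the energy flow case, not formulas taken from this paper:

```latex
% Energy flow (calorimeter) operator: the stress tensor integrated over
% working time at null infinity in the detector direction n
\mathcal{E}(n) \;=\; \lim_{r\to\infty} r^2 \int_0^{\infty} dt\; n^i\, T_{0i}(t, r\,n)

% Flow correlation in the state created by a source operator O
\langle \mathcal{E}(n_1)\,\mathcal{E}(n_2)\rangle_O
  \;=\; \frac{\langle 0|\,O^\dagger\, \mathcal{E}(n_1)\,\mathcal{E}(n_2)\, O\,|0\rangle}
             {\langle 0|\,O^\dagger O\,|0\rangle}
```

Charge flow correlations replace the stress tensor by the R-symmetry current or scalar operator; the Wightman ordering of the operators inside the numerator is what the Mellin-space analytic continuation from the Euclidean correlator must reproduce.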

Contributors: Belitsky, Andrei (Author) / Hohenegger, S. (Author) / Korchemsky, G. P. (Author) / Sokatchev, E. (Author) / Zhiboedov, A. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-04-30
Description

We analyze the near-collinear limit of the null polygonal hexagon super Wilson loop in the planar N = 4 super-Yang–Mills theory. We focus on its Grassmann components which are dual to next-to-maximal helicity-violating (NMHV) scattering amplitudes. The kinematics in question is studied within a framework of the operator product expansion that encodes propagation of excitations on the background of the color flux tube stretched between the sides of the Wilson loop contour. While their dispersion relation is known to all orders in 't Hooft coupling from previous studies, we find their form factor couplings to the Wilson loop. This is done by making use of a particular tessellation of the loop where pentagon transitions play a fundamental role. Being interested in NMHV amplitudes, the corresponding building blocks carry a nontrivial charge under the SU(4) R-symmetry group. Restricting the current consideration to twist-two accuracy, we analyze two-particle contributions with a fermion as one of the constituents in the pair. We demonstrate that these nonsinglet pentagons obey bootstrap equations that possess consistent solutions for any value of the coupling constant. To confirm the correctness of these predictions, we calculate their contribution to the super Wilson loop, demonstrating agreement with recent results to four-loop order in 't Hooft coupling.

Contributors: Belitsky, Andrei (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-03-05
Description

We study event shapes in N = 4 SYM describing the angular distribution of energy and R-charge in the final states created by the simplest half-BPS scalar operator. Applying the approach developed in the companion paper arXiv:1309.0769, we compute these observables using the correlation functions of certain components of the N = 4 stress-tensor supermultiplet: the half-BPS operator itself, the R-symmetry current and the stress tensor. We present master formulas for the all-order event shapes as convolutions of the Mellin amplitude defining the correlation function of the half-BPS operators, with a coupling-independent kernel determined by the choice of the observable. We find remarkably simple relations between various event shapes following from N = 4 superconformal symmetry. We perform thorough checks at leading order in the weak coupling expansion and show perfect agreement with the conventional calculations based on amplitude techniques. We extend our results to strong coupling using the correlation function of half-BPS operators obtained from the AdS/CFT correspondence.

Contributors: Belitsky, Andrei (Author) / Hohenegger, S. (Author) / Korchemsky, G. P. (Author) / Sokatchev, E. (Author) / Zhiboedov, A. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-04-30
Description

We compute one-loop renormalization group equations for non-singlet twist-four operators in QCD. The calculation heavily relies on the light-cone gauge formalism in the momentum fraction space that essentially reduces the analysis of all two-to-two and two-to-three transition kernels to purely algebraic manipulations, both for non- and quasipartonic operators. This is the first brute force calculation of this sector available in the literature. Fourier transforming our findings to coordinate space, we checked them against available results obtained within a conformal symmetry-based formalism that bypasses explicit diagrammatic calculations, and confirmed agreement with the latter.

Contributors: Ji, Yao (Author) / Belitsky, Andrei (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-03-06