Matching Items (6)

Semi-supervised Energy Modeling (SSEM) for Building Clusters Using Machine Learning Techniques

Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy lost between supply (energy production sources) and demand (consumption by buildings and cities), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse the different loss factors of a building cluster. This is done by training deep machine learning models to semi-supervise the learning, understanding, and management of energy losses. SSEM aims at understanding the demand-supply characteristics of a building cluster and exploits confident predictions on the unlabelled data (loss factors) using deep machine learning techniques. The research findings draw on sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of the loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers adopt the model to optimize their supply strategies.
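The semi-supervised idea described above, training on a small labelled subset and folding in confident predictions on unlabelled data, can be sketched with scikit-learn's self-training wrapper. The feature names, loss threshold, and synthetic data below are illustrative assumptions, not the paper's actual SSEM implementation.

```python
# Hedged sketch of the SSEM idea: classify meter readings as high-loss or
# not, with most labels withheld; self-training adopts only confident
# pseudo-labels from the unlabelled rows. All names/values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Synthetic demand-supply features: [supply_kWh, demand_kWh, line_temp_C]
X = rng.normal(loc=[100.0, 90.0, 30.0], scale=[10.0, 10.0, 5.0], size=(200, 3))
loss_ratio = (X[:, 0] - X[:, 1]) / X[:, 0]
y_true = (loss_ratio > 0.1).astype(int)   # 1 = significant loss (assumed threshold)

# Pretend only ~20% of the data is labelled; -1 marks unlabelled rows
y = y_true.copy()
y[rng.random(200) > 0.2] = -1

model = SelfTrainingClassifier(
    RandomForestClassifier(n_estimators=50, random_state=0),
    threshold=0.9,                        # only adopt confident pseudo-labels
)
model.fit(X, y)
print(model.predict(X[:5]))
```

In this sketch the confidence threshold plays the role of the paper's "confident unlabelled data": rows the base classifier is unsure about are simply never pseudo-labelled.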

Date Created
  • 2015-09-14

Determining the Feasibility of Statistical Techniques to Identify the Most Important Input Parameters of Building Energy Models

Description

Previous studies in building energy assessment clearly state that to meet sustainable energy goals, existing buildings, as well as new buildings, will need to improve their energy efficiency. Thus, meeting energy goals relies on retrofitting existing buildings. Most building energy models are bottom-up engineering models, meaning these models calculate energy demand of individual buildings through their physical properties and energy use for specific end uses (e.g., lighting, appliances, and water heating). Researchers then scale up these model results to represent the building stock of the region studied.
Studies reveal that there is a lack of information about the building stock and its associated modeling tools, and this lack of knowledge affects the assessment of building energy efficiency strategies. The literature suggests that the complexity of energy models needs to be limited. The accuracy of these energy models can be improved by reducing the number of input parameters, alleviating the need for users to make many assumptions about building construction and occupancy, among other factors. To mitigate the need for assumptions and the resulting model inaccuracies, the authors argue buildings should be described in a regional stock model with a restricted number of input parameters. One commonly accepted method of identifying critical input parameters is sensitivity analysis, which requires a large number of model runs that are time consuming and may require high processing capacity.
This paper utilizes the Energy, Carbon and Cost Assessment for Building Stocks (ECCABS) model, which calculates the net energy demand of buildings and presents aggregated and individual-building-level demand for specific end uses, e.g., heating, cooling, lighting, hot water, and appliances. The model has already been validated using Swedish, Spanish, and UK building stock data. This paper discusses potential improvements to this model by assessing the feasibility of using stepwise regression to identify the most important input parameters, using data from the UK residential sector. The paper presents the results of stepwise regression and compares these to sensitivity analysis; finally, the paper documents the advantages and challenges associated with each method.
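Stepwise regression as described above, adding input parameters one at a time and keeping those that best explain demand, can be sketched with scikit-learn's forward sequential feature selector. The parameter names and synthetic demand function below are hypothetical stand-ins, not ECCABS inputs.

```python
# Illustrative forward stepwise selection of building-stock input
# parameters for a linear energy-demand model (not the ECCABS code).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
params = ["wall_U_value", "window_area", "setpoint_C", "occupancy", "roof_albedo"]
X = rng.normal(size=(n, len(params)))

# Synthetic demand: only the first three parameters actually matter
y = 5.0 * X[:, 0] + 3.0 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5
)
selector.fit(X, y)
selected = [p for p, keep in zip(params, selector.get_support()) if keep]
print(selected)   # expected: the three influential parameters
```

The appeal over full sensitivity analysis is cost: each forward step only refits the regression with one candidate added, rather than perturbing every input across its range.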

Date Created
  • 2015-09-14

Learning Energy Consumption and Demand Models through Data Mining for Reverse Engineering

Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; the causes of energy surges are not easily determined; and the potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses samples from large, complex energy data sets, and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that relate directly to the cause and effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
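One way to make the reverse-engineering notion concrete is load disaggregation: given an aggregate demand curve and candidate equipment load profiles, recover how much each end use contributes. The sketch below uses non-negative least squares for this; the equipment signatures and mix are invented, and the paper's actual algorithms are more elaborate.

```python
# Hedged sketch of reverse engineering an aggregate demand curve into
# equipment-level contributions via non-negative least squares (scipy).
import numpy as np
from scipy.optimize import nnls

hours = np.arange(24)

# Candidate equipment signatures (per-unit load shapes; hypothetical)
hvac     = np.clip(np.sin((hours - 6) / 24 * 2 * np.pi), 0, None)
lighting = ((hours >= 8) & (hours <= 18)).astype(float)
baseload = np.ones(24)
A = np.column_stack([hvac, lighting, baseload])

# Observed aggregate demand = hidden mix of the signatures + noise
true_mix = np.array([40.0, 25.0, 10.0])
rng = np.random.default_rng(2)
demand = A @ true_mix + rng.normal(scale=0.5, size=24)

mix, residual = nnls(A, demand)
print(dict(zip(["hvac", "lighting", "baseload"], mix.round(1))))
```

The non-negativity constraint encodes the physical fact that no end use can contribute negative load, which plain regression on historical totals cannot guarantee.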

Date Created
  • 2015-12-09

Analyzing Arizona OSHA Injury Reports Using Unsupervised Machine Learning

Description

As the construction industry continues to lead in the number of injuries and fatalities annually, several organizations and agencies are working avidly to ensure the number of injuries and fatalities is minimized. The Occupational Safety and Health Administration (OSHA) is one such effort to assure safe and healthful working conditions for working men and women by setting and enforcing standards and by providing training, outreach, education, and assistance. Given the large databases of OSHA historical events and reports, manual analysis of the fatality and catastrophe investigation content is a time-consuming and expensive process. This paper aims to evaluate the strength of unsupervised machine learning and Natural Language Processing (NLP) in supporting safety inspections and reorganizing an accident database at the state level. After collecting construction accident reports from the OSHA Arizona office, the methodology preprocesses the accident reports and weights terms in order to apply a data-driven, unsupervised K-Means-based clustering approach. The proposed method classifies the collected reports into four clusters, each reporting a type of accident. The results show the construction accidents in the state of Arizona to be caused by falls (42.9%), struck-by objects (34.3%), electrocutions (12.5%), and trench collapses (10.3%). The findings of this research empower state and local agencies with a customized presentation of the accidents fitting their regulations and weather conditions. What is applicable to one climate might not be suitable for another; therefore, such a rearrangement of the accident database at the state level is a necessary prerequisite to enhancing local safety applications and standards.
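The pipeline described above, term weighting followed by K-Means clustering into four accident types, can be sketched in a few lines with scikit-learn. The toy report snippets below are invented; real OSHA narratives are far longer and require heavier preprocessing.

```python
# Minimal sketch of the paper's pipeline on toy report snippets:
# TF-IDF term weighting, then K-Means clustering into 4 accident types.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "worker fell from scaffold ladder fall injury",
    "employee fell from roof fall protection missing",
    "worker struck by object crane load struck",
    "employee struck by vehicle struck by equipment",
    "electrocution contact with power line electrical shock",
    "worker electrocuted live wire electrical shock",
    "trench collapse buried excavation cave in",
    "excavation wall collapse trench not shored",
]

X = TfidfVectorizer(stop_words="english").fit_transform(reports)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.labels_)   # reports about the same hazard should share a label
```

TF-IDF down-weights terms that appear in every narrative (e.g. "worker"), so clusters form around the hazard-specific vocabulary: fall, struck-by, electrical, trench.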

Date Created
  • 2016-05-20

A Non-stationary Analysis Using Ensemble Empirical Mode Decomposition to Detect Anomalies in Building Energy Consumption

Description

Commercial buildings’ consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. This paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions for a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
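A simplified stand-in for the Hilbert-Huang-based detection can be sketched with scipy's Hilbert transform: compute the instantaneous amplitude of a consumption signal and flag hours that deviate strongly. The synthetic signal, injected fault, and z-score threshold below are all assumptions; the paper's full pipeline would first decompose the signal into intrinsic mode functions via EEMD.

```python
# Sketch: Hilbert-transform envelope of a daily consumption signal,
# with a z-score rule flagging an injected anomaly. Not the paper's
# full EEMD pipeline; values are synthetic.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
t = np.arange(24 * 14)                       # two weeks of hourly data
signal = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=1.0, size=t.size)
signal[100] += 40                            # injected fault (e.g. stuck chiller)

analytic = hilbert(signal - signal.mean())
envelope = np.abs(analytic)                  # instantaneous amplitude

z = (envelope - envelope.mean()) / envelope.std()
anomalies = np.where(z > 3)[0]
print(anomalies)                             # should flag around hour 100
```

The envelope of a clean daily cycle is roughly constant, so a one-off spike stands out sharply; EEMD extends this by separating trends and cycles at multiple time scales before the Hilbert step.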

Date Created
  • 2016-05-20

Energy analytics for infrastructure: an application to institutional buildings

Description

Commercial buildings in the United States account for 19% of the total energy consumption annually. The Commercial Building Energy Consumption Survey (CBECS), which serves as the benchmark for all commercial buildings, provides critical input for EnergyStar models. Smart energy management technologies, sensors, innovative demand response programs, and updated versions of certification programs create the opportunity to mitigate energy-related problems (blackouts and overproduction) and guide energy managers in optimizing consumption characteristics. With increasing advancements in technologies relying on ‘Big Data,' codes and certification programs such as those of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and Leadership in Energy and Environmental Design (LEED) evaluate buildings during the pre-construction phase. This evaluation is mostly carried out with assumed quantitative and qualitative values calculated from energy models such as EnergyPlus and eQUEST. However, energy consumption analysis through Knowledge Discovery in Databases (KDD) is not commonly used by energy managers in a complete implementation, creating the need for a better energy analytics framework.

The dissertation utilizes Interval Data (ID) and establishes three different frameworks to identify electricity losses, predict electricity consumption, and detect anomalies using data mining, deep learning, and mathematical models. The process of energy analytics integrates with computational science and contributes to the following objectives:

1. Develop a framework to identify both technical and non-technical losses using clustering and semi-supervised learning techniques.

2. Develop an integrated framework to predict electricity consumption using wavelet based data transformation model and deep learning algorithms.

3. Develop a framework to detect anomalies using ensemble empirical mode decomposition and isolation forest algorithms.

Following a thorough research background, the first phase details the data analytics performed on the demand-supply database to determine potential energy loss reductions. The data preprocessing and electricity prediction framework in the second phase integrates mathematical models and deep learning algorithms to accurately predict consumption. The third phase employs a data decomposition model and data mining techniques to detect anomalies in institutional buildings.
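The isolation forest stage of the third framework can be sketched with scikit-learn: after decomposition, per-day feature vectors are scored, and the most easily isolated points are flagged as anomalies. The features, values, and contamination rate below are illustrative assumptions, not the dissertation's configuration.

```python
# Sketch of the third framework's final stage: an Isolation Forest
# flags anomalous daily consumption profiles. Synthetic, hypothetical data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)

# Normal daily feature vectors: [mean_kW, peak_kW]
normal = rng.normal(loc=[120.0, 180.0], scale=[5.0, 8.0], size=(200, 2))
# A handful of anomalous days (e.g. equipment left running overnight)
anomalous = rng.normal(loc=[200.0, 300.0], scale=[5.0, 8.0], size=(5, 2))
X = np.vstack([normal, anomalous])

clf = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = clf.predict(X)                      # -1 = anomaly, 1 = normal
print(np.where(labels == -1)[0])
```

Isolation forests need no labelled faults, which matches the setting here: anomalies are rare, diverse, and rarely annotated in building meter databases.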

Date Created
  • 2017