Matching Items (60)
Description

There are many data mining and machine learning techniques for managing large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (consumption by buildings and cities), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. SSEM uses deep machine learning to semi-supervise the learning, understanding, and management of energy losses. It aims to understand the demand-supply characteristics of a building cluster and exploits confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an output that estimates the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers adopt such a model to optimize their supply strategies.
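The core self-training idea the abstract describes — fit on labelled data, then absorb only the confidently predicted unlabelled points — can be sketched generically. This is a minimal illustration of the general semi-supervised technique, not the authors' SSEM; the nearest-centroid classifier and the softmax confidence score are illustrative choices:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_rounds=10):
    """Generic self-training loop: fit a nearest-centroid classifier on the
    labelled data, then move unlabelled points predicted with confidence
    above `threshold` into the labelled set and refit."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        if len(X_unlab) == 0:
            break
        classes = np.unique(y_lab)
        centroids = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])
        # distance of each unlabelled point to each class centroid
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
        # softmax over negative distances as a crude confidence score
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), classes[p.argmax(axis=1)]
        keep = conf >= threshold
        if not keep.any():
            break  # no confident points left to absorb
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pred[keep]])
        X_unlab = X_unlab[~keep]
    return X_lab, y_lab
```

The threshold plays the role of the "confident unlabelled data" cut-off: points whose predicted label is uncertain are simply left out of the training set.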

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Zeolitic Imidazolate Frameworks (ZIFs) are potential candidates for highly conducting, high-surface-area networks that can serve as catalyst supports. In the present study, a highly active state-of-the-art Pt-NCNTFs catalyst was synthesized by pyrolyzing ZIF-67 along with a Pt precursor under flowing Ar-H2 (90-10%) gas at 700 °C. XRD analysis indicated the formation of a Pt-Co alloy on the surface of the nanostructured catalyst support. High-resolution TEM examination showed a particle size range of 7 to 10 nm. Proton exchange membrane fuel cell performance was evaluated by fabricating membrane electrode assemblies with Nafion-212 electrolyte using H2/O2 gases (100% RH) at various temperatures. A peak power density of 630 mW/cm2 was obtained with the Pt-NCNTFs cathode catalyst and a commercial Pt/C anode catalyst at 70 °C at ambient pressure.

Created: 2017-11-16
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Brazil has had difficulty efficiently providing the required amount of electricity to its citizens at a low cost. One of the main causes of its declining energy performance is recurring droughts that decrease the power generated by hydroelectric facilities. To compensate for the decrease, Brazil brought thermal power plants into use, which are on average 23.7% more expensive than hydroelectric plants. Wind energy is a potential alternative source of energy to compensate for the energy decrease during droughts. Brazil has invested in wind farms recently, but, due to issues with the delivery method, only 34% of its wind farms are operational. This paper reviews the potential benefit Brazil could receive from investing more resources into developing and operating wind farms. It also proposes that utilizing the best value approach in delivering wind farms could produce operational wind farms more quickly and efficiently than previously experienced.

Contributors: Oliveira, Carlos (Author) / Zulanas, Charles (Author) / Kashiwagi, Dean (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Background: The cytokine MIF (Macrophage Migration Inhibitory Factor) has diverse physiological roles and is present at elevated concentrations in numerous disease states. However, its molecular heterogeneity has not been previously investigated in biological samples. Mass Spectrometric Immunoassay (MSIA) may help elucidate MIF post-translational modifications existing in vivo and provide additional clarity regarding its relationship to diverse pathologies.

Results: In this work, we have developed and validated a fully quantitative MSIA assay for MIF, and used it in the discovery and quantification of different proteoforms of MIF in serum samples, including cysteinylated and glycated MIF. The MSIA assay had a linear range of 1.56-50 ng/mL, and exhibited good precision, linearity, and recovery characteristics. The new assay was applied to a small cohort of human serum samples, and benchmarked against an MIF ELISA assay.

Conclusions: The quantitative MIF MSIA assay provides a sensitive, precise and high throughput method to delineate and quantify MIF proteoforms in biological samples.

Contributors: Sherma, Nisha (Author) / Borges, Chad (Author) / Trenchevska, Olgica (Author) / Jarvis, Jason W. (Author) / Rehder, Douglas (Author) / Oran, Paul (Author) / Nelson, Randall (Author) / Nedelkov, Dobrin (Author) / Biodesign Institute (Contributor)
Created: 2014-10-14
Description

Background: Cystatin C (CysC) is an endogenous cysteine protease inhibitor that can be used to assess the progression of kidney function. Recent studies demonstrate that CysC is a more specific indicator of glomerular filtration rate (GFR) than creatinine. CysC in plasma exists in multiple proteoforms. The goal of this study was to clarify the association of native CysC, CysC missing N-terminal Serine (CysC des-S), and CysC without three N-terminal residues (CysC des-SSP) with diabetic chronic kidney disease (CKD).

Results: Using mass spectrometric immunoassay, the plasma concentrations of native CysC and the two CysC truncation proteoforms were examined in 111 individuals from three groups: 33 non-diabetic controls, 34 participants with type 2 diabetes (DM) and without CKD, and 44 participants with diabetic CKD. Native CysC concentrations were 1.4-fold greater in CKD compared to the DM group (p = 0.02) and 1.5-fold greater in CKD compared to the control group (p = 0.001). CysC des-S concentrations were 1.55-fold greater in CKD compared to the DM group (p = 0.002) and 1.9-fold greater in CKD compared to the control group (p = 0.0002). CysC des-SSP concentrations were 1.8-fold greater in CKD compared to the DM group (p = 0.008) and 1.52-fold greater in CKD compared to the control group (p = 0.002). In addition, the concentrations of CysC proteoforms were greater in the setting of albuminuria. The truncated CysC proteoform concentrations were associated with estimated GFR independent of native CysC concentrations.

Conclusion: Our findings demonstrate a greater amount of CysC proteoforms in diabetic CKD. We therefore suggest assessing the role of cystatin C proteoforms in the progression of CKD.

Contributors: Yassine, Hussein N. (Author) / Trenchevska, Olgica (Author) / Dong, Zhiwei (Author) / Bashawri, Yara (Author) / Koska, Juraj (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Nedelkov, Dobrin (Author) / Biodesign Institute (Contributor)
Created: 2016-03-25
Description

Today, we use resources faster than they can be replaced. Construction consumes more resources than any other industry and has one of the largest waste streams. Resource consumption and waste generation are expected to grow as the global population increases. The circular economy (CE) is based on the concept of a closed-loop cycle (CLC) and proposes a solution that, in theory, can eliminate the environmental impacts caused by construction and demolition (C&D) waste and increase the efficiency of resource use. In a CLC, building materials are reused, remanufactured, recycled, and reintegrated into other buildings (or into other sectors) without creating any waste.

Designing out waste is the core principle of the CE. Design for disassembly, or design for deconstruction (DfD), is the practice of planning the future deconstruction of a building and the reuse of its materials. Concepts like DfD, CE, and product-service systems (PSS) can work together to promote CLCs in the built environment. PSS are business models based on stewardship instead of ownership. CE combines DfD, PSS, material durability, and material reuse across multiple life cycles to promote a low-carbon, regenerative economy. CE prioritizes reuse over recycling. Dealing with resource scarcity demands that we think beyond the incremental changes from recycling waste; it demands an urgent, systemic, and radical change in the way we design, build, and procure construction materials.

This dissertation aims to answer three research questions: 1) How can researchers estimate the environmental benefits of reusing building components? 2) Which variables can affect the environmental impact assessment of reuse? 3) What are the barriers and opportunities for DfD and materials reuse in current design practice in the United States?

The first part of this study investigated how different life cycle assessment (LCA) methods (i.e., hybrid LCA and process-based LCA), assumptions (e.g., reuse rates, transportation distances, number of reuses), and LCA timelines can affect the results of a closed-loop LCA. The second part of this study built on interviews with architects in the United States to understand why DfD is not part of the current design practice in the country.
Contributors: Cruz Rios, Fernanda (Author) / Grau, David (Committee member) / Chong, Oswald (Committee member) / Parrish, Kristen (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

This thesis presents a literature review analyzing cost overrun in the construction industry worldwide, exploring documented causes of cost overrun and the parties documented as responsible for the inefficiency. The analysis compares metrics of construction projects across different continents and regions. Multiple publication databases were used to review over 300 papers. It is shown that although construction demand is increasing, cost overrun on these projects is not decreasing at the same rate around the world. This thesis also presents a possible solution for reducing cost overrun in the construction industry: the Best Value Performance Information Procurement System (BV PIPS). This system has been utilized in various countries around the world, and there is documented evidence that it may be able to alleviate the overrun occurring in the construction industry.
Contributors: Goyal, Abhinav (Author) / Kashiwagi, Jacob (Thesis advisor) / Kashiwagi, Dean (Committee member) / Chong, Oswald (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a performance rating of >1,000 samples/day and a capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via a Bland-Altman method comparison test, resulting in a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays, and is on par with that of the enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.

Contributors: Oran, Paul (Author) / Trenchevska, Olgica (Author) / Nedelkov, Dobrin (Author) / Borges, Chad (Author) / Schaab, Matthew (Author) / Rehder, Douglas (Author) / Jarvis, Jason (Author) / Sherma, Nisha (Author) / Shen, Luhui (Author) / Krastins, Bryan (Author) / Lopez, Mary F. (Author) / Schwenke, Dawn (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2014-03-24
Description

The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP.

For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to analyze how effectively each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data for a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical solutions for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the maximum size of double-precision floating-point values.

To create a multivariate fit, the multilinear solution was created as the simplest solution to the multivariate regression problem. This solution was then extended to solve nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for the multilinear regression, and then numerically using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion.

For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution was created by addressing concerns associated with simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, and combining the ability of each of these methods to resolve those issues. The loess smoothing method involves weighting each point in a partition of the data set, and then fitting either a line or a polynomial within that partition. Both linear and quadratic methods were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more effective for analyzing the experimental data.

Finally, the EDP program itself was explored to consider its current functionality for processing data, as described by shear tests on carbon fiber data, and the future functionality to be developed. The probabilistic and raw-data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R. As the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full streamlined structural analysis on experimental data.
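The QR route to the multilinear fit mentioned in the abstract can be sketched as follows. This is a generic illustration of the technique (not EDP's actual C++ implementation, and the synthetic data are invented for the example): factoring X = QR and back-solving R·β = Qᵀy recovers the least-squares coefficients without forming or inverting XᵀX.

```python
import numpy as np

def multilinear_fit(X, y):
    """Least-squares coefficients for y ≈ X @ beta via QR decomposition,
    avoiding the instability of inverting X.T @ X directly."""
    Q, R = np.linalg.qr(X)            # X = Q R; Q orthonormal, R upper triangular
    return np.linalg.solve(R, Q.T @ y)

# Recover y = 2 + 3*x1 - x2 from noiseless synthetic data.
rng = np.random.default_rng(0)
x1, x2 = rng.random(50), rng.random(50)
X = np.column_stack([np.ones(50), x1, x2])   # intercept column
y = 2 + 3 * x1 - x2
beta = multilinear_fit(X, y)                 # ≈ [2, 3, -1]
```

The same routine handles the "linearized nonlinear" case described above: any model that is linear in its coefficients (e.g., polynomial terms) can be fit by placing each separable term in a column of X.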
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton Schools of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05