Matching Items (479)

Description
Solar power generation is the most promising technology for shifting energy consumption from fossil fuels to renewable sources. Concentrated solar power generation concentrates sunlight from a larger area onto a smaller one, and the collected sunlight is converted more efficiently through two types of technologies: concentrated solar photovoltaics (CSPV) and concentrated solar thermal power (CSTP) generation. In this thesis, these two technologies were evaluated in terms of system construction, performance characteristics, design considerations, cost-benefit analysis, and field experience. The two concentrated solar power generation systems were implemented with similar solar concentrators and solar tracking systems but with different energy collection and conversion components: the CSPV system uses high-efficiency multi-junction solar cell modules, while the CSTP system uses a boiler-turbine-generator setup. The performance of each system was characterized through experiments and evaluation analysis.
ContributorsJin, Zhilei (Author) / Hui, Yu (Thesis advisor) / Ayyanar, Raja (Committee member) / Rodriguez, Armando (Committee member) / Arizona State University (Publisher)
Created2013
Description
This thesis research focuses on developing a single-cell gene expression analysis method for the marine diatom Thalassiosira pseudonana and constructing a chip-level tool to realize single-cell RT-qPCR analysis. This chip will serve as a conceptual foundation for future deployable ocean monitoring systems. T. pseudonana, a common surface-water microorganism, has been detected in the deep ocean, as confirmed by phylogenetic and microbial community functional studies. Six-fold copy number differences between 23S rRNA and 23S rDNA were observed by RT-qPCR, demonstrating moderate functional activity of the detected photosynthetic microbes, including T. pseudonana, in the deep ocean. Because of its ubiquity, T. pseudonana is a good candidate for an early warning system for monitoring ocean environmental perturbations. Such a system depends on identifying outlier gene expression at the single-cell level and is expected to detect environmental perturbations earlier than population-level analysis, in which changes can be observed only after a whole community has reacted. Preliminary work using tube-based, two-step RT-qPCR revealed, for the first time, gene expression heterogeneity of T. pseudonana under different nutrient conditions: individual cells showed different gene expression activity under identical conditions. The single-cell measurements followed a skewed, lognormal distribution and helped to identify outlier cells. The results indicate that the geometric average is more representative of the whole population than the arithmetic average, in contrast with population-level analysis, which is limited to arithmetic averages; this highlights the value of single-cell analysis. To develop a sensor deployable in the ocean, a chip-level device was constructed.
The chip contains surface-adhering droplets, defined by hydrophilic patterning, that serve as real-time PCR reaction chambers when immersed in oil. The chip demonstrated single-cell-level sensitivity for both DNA and RNA, with a success rate of around 85% for the chip-based reactions. The chip's sensitivity was equivalent to that of published microfluidic devices with complicated designs and protocols, but its production process was simple and its materials are all easily accessible in conventional environmental and/or biology laboratories. On-chip tests provided heterogeneity information about the whole population and were validated by comparison with conventional tube-based methods and by p-value analysis. The statistical power of the chip-based single-cell analyses was mainly between 65% and 90%, which is acceptable and can be further increased with higher-throughput devices. With this chip and single-cell analysis approaches, a new paradigm for robust early warning systems for ocean environmental perturbation is possible.
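The abstract's point about geometric versus arithmetic averages for lognormal expression data can be illustrated with a short sketch; the simulated values below are illustrative only, not the thesis's measurements:

```python
import math
import random

random.seed(42)

# Simulated single-cell expression levels drawn from a lognormal
# distribution, mimicking the skewed per-cell distribution reported.
cells = [random.lognormvariate(2.0, 1.0) for _ in range(500)]

arithmetic_mean = sum(cells) / len(cells)

# Geometric mean: exponential of the mean of the logs; for lognormal
# data this tracks the typical cell rather than the long right tail.
geometric_mean = math.exp(sum(math.log(x) for x in cells) / len(cells))

# The arithmetic mean is inflated by outlier cells in the tail, so the
# geometric mean is the more representative population summary.
print(f"arithmetic mean: {arithmetic_mean:.2f}")
print(f"geometric mean:  {geometric_mean:.2f}")
```

On data like this the arithmetic mean sits well above the geometric mean, which is why a population-level (arithmetic) average misrepresents the typical cell.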
ContributorsShi, Xu (Author) / Meldrum, Deirdre R. (Thesis advisor) / Zhang, Weiwen (Committee member) / Chao, Shih-hui (Committee member) / Westerhoff, Paul (Committee member) / Arizona State University (Publisher)
Created2013
Description
This work focuses on a generalized assessment of source zone natural attenuation (SZNA) at chlorinated aliphatic hydrocarbon (CAH) impacted sites. Given the number of sites and the technical challenges of cleanup, there is a need for an SZNA assessment method at CAH-impacted sites. The method anticipates that decision makers will be interested in the following questions: (1) Is SZNA occurring, and what processes contribute? (2) What are the current SZNA rates? (3) What are the longer-term implications? The approach is macroscopic and uses multiple lines of evidence. An in-depth application of the generalized, non-site-specific method over multiple site events, with sampling refinement approaches applied to improve SZNA estimates, is presented for three CAH-impacted sites, with a focus on discharge rates for four events over approximately three years (Site 1: 2.9, 8.4, 4.9, 2.8 kg/yr as PCE; Site 2: 1.6, 2.2, 1.7, 1.1 kg/yr as PCE; Site 3: 570, 590, 250, 240 kg/yr as TCE). When applying the generalized CAH-SZNA method, it is likely that different practitioners will not sample a site similarly, especially regarding sampling density on a groundwater transect. Calculation of SZNA rates is affected by contaminant spatial variability relative to transect sampling intervals and density; variations in either result in different mass discharge estimates. The effects of varied sampling densities and spacings on discharge estimates were examined to develop heuristic sampling guidelines with practical site sampling densities. The guidelines aim to reduce the variability in discharge estimates due to different sampling approaches and to improve confidence in SZNA rates, allowing decision makers to place the rates in perspective and determine a course of action based on remedial goals. Finally, bench-scale testing was used to address the longer-term questions, specifically the nature and extent of source architecture. A rapid in-situ disturbance method was developed using a bench-scale apparatus.
The approach allows for rapid identification of the presence of DNAPL using several common pilot-scale technologies (ISCO, air sparging, water injection) and can identify relevant source architectural features (ganglia, pools, dissolved source). Understanding source architecture and identifying DNAPL-containing regions greatly enhances site conceptual models, improving estimated time frames for SZNA and potentially improving the design of remedial systems.
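The mass discharge rates quoted above (kg/yr across a groundwater transect) are, in essence, a flux-weighted sum over transect sampling cells. A minimal sketch with made-up numbers, not data from the three sites:

```python
# Each transect sampling cell contributes concentration x Darcy flux x
# cross-sectional area; summing over cells gives the mass discharge
# through the transect.
transect = [
    # (concentration kg/m^3, Darcy flux m/yr, cell area m^2)
    (1.0e-5, 10.0, 4.0),
    (5.0e-5, 12.0, 4.0),
    (2.0e-5, 8.0, 4.0),
]

mass_discharge = sum(c * q * a for c, q, a in transect)  # kg/yr
print(f"mass discharge: {mass_discharge:.2e} kg/yr")
```

Coarsening the transect (merging or dropping cells) changes this sum, which is exactly the sampling-density sensitivity the heuristic guidelines are meant to control.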
ContributorsEkre, Ryan (Author) / Johnson, Paul Carr (Thesis advisor) / Rittmann, Bruce (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created2013
Description
A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into the Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. These physical systems are represented by mathematical models using a state-variable lumped-parameter approach. A steady-state control program for the reactor and simple turbine and governor models are also developed. The adequacy of the isolated reactor core, the isolated steam generator, and the complete PWR models is tested in Matlab/Simulink, and the dynamic responses are compared with test results obtained from the H. B. Robinson NPP. The test results illustrate that the developed models represent the dynamic features of the real physical systems and are capable of predicting responses to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run for a sudden loss of generation in a small power system that includes hydroelectric and natural gas power plants in addition to the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design or safety limits if appropriate operational conditions are met, such as achieving NPP turbine control by adjusting the speed of the steam valve. In other words, the NPP can participate in the control of system frequency and improve overall power system performance.
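The state-variable lumped-parameter style described above can be sketched with a one-delayed-group point reactor kinetics model integrated by forward Euler. The parameters below are generic illustrative values, not those of the thesis's NSSS model:

```python
# One-delayed-group point reactor kinetics, forward-Euler integration:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * c
#   dc/dt = (beta/Lambda) * n - lam * c
beta = 0.0065     # delayed neutron fraction
Lambda = 1.0e-4   # prompt neutron generation time (s)
lam = 0.08        # effective precursor decay constant (1/s)
rho = 0.0005      # small positive external reactivity step

n = 1.0                        # relative power, starting at equilibrium
c = beta * n / (Lambda * lam)  # precursor level consistent with n
dt, steps = 1.0e-4, 50_000     # 5 s of simulated time

for _ in range(steps):
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    n += dt * dn
    c += dt * dc

# A reactivity step well below prompt critical gives a prompt jump
# followed by a slow, stable rise on the delayed-neutron time scale.
print(f"relative power after 5 s: {n:.3f}")
```

This is the same "small perturbation of external reactivity" scenario the model validation describes, reduced to its simplest state-variable form.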
ContributorsArda, Samet Egemen (Author) / Holbert, Keith E. (Thesis advisor) / Undrill, John (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created2013
Description
The object of this study was a 26-year-old residential photovoltaic (PV) monocrystalline silicon (c-Si) power plant, called Solar One, built by developer John F. Long in Phoenix, Arizona (a hot-dry field condition). The task for the Arizona State University Photovoltaic Reliability Laboratory (ASU-PRL) graduate students was to evaluate the power plant through visual inspection, electrical performance testing, and infrared thermography. The purpose of this evaluation was to measure and understand the extent of system degradation and to identify the failure modes in this hot-dry climate. This 4000-module bipolar system was originally installed with a 200 kW DC PV array output (17-degree fixed tilt) and an AC output of 175 kVA. The system was shown to degrade at a rate of approximately 2.3% per year, with no apparent potential induced degradation (PID) effect. The power plant is made up of two arrays, the north array and the south array. Due to the limited time frame for this large project, the work was performed by two master's students (Jonathan Belmont and Kolapo Olakonu), and the test results are presented in two master's theses: this thesis presents the results obtained on the north array, and the other presents the results obtained on the south array. The study showed that PV module design, array configuration, vandalism, installation methods, and Arizona environmental conditions have all affected this system's longevity and reliability. Ultimately, encapsulation browning, higher series resistance (potentially due to solder bond fatigue), and breakage of non-cell interconnect ribbons outside the modules were determined to be the primary causes of the power loss.
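The reported 2.3%-per-year degradation over 26 years can be sanity-checked with simple arithmetic. The sketch below assumes compound (year-on-year) degradation; the thesis may instead report a linear rate, so treat the numbers as a rough bound:

```python
# Compound degradation: each year retains (1 - rate) of the prior
# year's output.
rated_kw_dc = 200.0    # original DC array rating from the abstract
rate, years = 0.023, 26

remaining_fraction = (1.0 - rate) ** years
estimated_kw = rated_kw_dc * remaining_fraction

print(f"remaining fraction after {years} yr: {remaining_fraction:.1%}")
print(f"estimated DC output: {estimated_kw:.0f} kW")
```

Roughly half the original rating survives after 26 years at this rate, which frames why encapsulant browning and series-resistance growth matter for long-term reliability.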
ContributorsBelmont, Jonathan (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Henderson, Mark (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created2013
Description
This dissertation explores the use of bench-scale batch microcosms in the remedial design of contaminated aquifers, presents an alternative methodology for conducting such treatability studies, and - from technical, economic, and social perspectives - examines real-world application of this new technology. In situ bioremediation (ISB) is an effective remedial approach for many contaminated groundwater sites. However, site-specific variability necessitates small-scale treatability studies prior to full-scale implementation. The most common methodology is the batch microcosm, whose potential limitations and suitable technical alternatives are explored in this thesis. In a critical literature review, I discuss how continuous-flow conditions stimulate microbial attachment and biofilm formation, and identify unique microbiological phenomena largely absent in batch bottles yet potentially relevant to contaminant fate. Following up on this theoretical evaluation, I experimentally produce pyrosequencing data and perform beta diversity analysis to demonstrate that batch and continuous-flow (column) microcosms foster distinctly different microbial communities. Next, I introduce the In Situ Microcosm Array (ISMA), which took approximately two years to design, develop, build, and iteratively improve. The ISMA can be deployed down-hole in groundwater monitoring wells of contaminated aquifers to autonomously conduct multiple parallel continuous-flow treatability experiments. The ISMA stores all samples generated in the course of each experiment, thereby preventing the release of chemicals into the environment. Detailed results are presented from an ISMA demonstration evaluating ISB for the treatment of hexavalent chromium and trichloroethene. In a technical and economic comparison to batch microcosms, I demonstrate that the ISMA is both effective in informing remedial design decisions and cost-competitive.
Finally, I report on a participatory technology assessment (pTA) workshop, attended by diverse stakeholders of the Phoenix 52nd Street Superfund Site, evaluating the ISMA's ability to address a real-world problem. In addition to receiving valuable feedback on perceived ISMA limitations, I conclude from the workshop that pTA can facilitate mutual learning even among entrenched stakeholders. In summary, my doctoral research (i) pinpointed limitations of current remedial design approaches, (ii) produced a novel alternative approach, and (iii) demonstrated the technical, economic, and social value of this novel remedial design tool, the In Situ Microcosm Array technology.
ContributorsKalinowski, Tomasz (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created2013
Description
Nitrate is the most prevalent water pollutant limiting the use of groundwater as a potable water source. The overarching goal of this dissertation was to leverage advances in nanotechnology to improve nitrate photocatalysis and transition treatment to the full-scale. The research objectives were to (1) examine commercial and synthesized photocatalysts, (2) determine the effect of water quality parameters (e.g., pH), (3) conduct responsible engineering by ensuring detection methods were in place for novel materials, and (4) develop a conceptual framework for designing nitrate-specific photocatalysts. The key issues for implementing photocatalysis for nitrate drinking water treatment were efficient nitrate removal at neutral pH and by-product selectivity toward nitrogen gases, rather than by-products that pose a human health concern (e.g., nitrite). Photocatalytic nitrate reduction was found to follow a series of proton-coupled electron transfers. The nitrate reduction rate was limited by the electron-hole recombination rate, and the addition of an electron donor (e.g., formate) was necessary to reduce the recombination rate and achieve efficient nitrate removal. Nano-sized photocatalysts with high surface areas mitigated the negative effects of competing aqueous anions. The key water quality parameter impacting by-product selectivity was pH. For pH < 4, the by-product selectivity was mostly N-gas with some NH4+, but this shifted to NO2- above pH = 4, which suggests the need for proton localization to move beyond NO2-. Co-catalysts that form a Schottky barrier, allowing for localization of electrons, were best for nitrate reduction. Silver was optimal in heterogeneous systems because of its ability to improve nitrate reduction activity and N-gas by-product selectivity, and graphene was optimal in two-electrode systems because of its ability to shuttle electrons to the working electrode. 
"Environmentally responsible use of nanomaterials" is to ensure that detection methods are in place for the nanomaterials tested. While methods exist for the metals and metal oxides examined, there are currently none for carbon nanotubes (CNTs) and graphene. Acknowledging that risk assessment encompasses dose-response and exposure, new analytical methods were developed for extracting and detecting CNTs and graphene in complex organic environmental (e.g., urban air) and biological matrices (e.g. rat lungs).
ContributorsDoudrick, Kyle (Author) / Westerhoff, Paul (Thesis advisor) / Halden, Rolf (Committee member) / Hristovski, Kiril (Committee member) / Arizona State University (Publisher)
Created2013
Description
Through manipulation of the adaptable opportunities available within a given environment, individuals become active participants in managing their personal comfort requirements, exercising control over their comfort without the assistance of mechanical heating and cooling systems. Similarly, continuous manipulation of a building skin's form, insulation, porosity, and transmissivity exerts control over the energy exchanged between indoor and outdoor environments. This research uses four adaptive response variables in a modified software algorithm to explore an adaptive building skin's potential for reacting to environmental stimuli so as to minimize energy use without sacrificing occupant comfort. Results illustrate that adaptive envelopes can realize significant energy savings over static building envelopes even under extreme summer and winter climate conditions; that the magnitude of these savings depends on climate and orientation; and that occupant thermal comfort can be consistently improved over the comfort levels achieved by optimized static building envelopes. The resulting adaptive envelope's unique climate-specific behavior could inform designers in creating an intelligent kinetic aesthetic that helps facilitate adaptability and resiliency in architecture.
ContributorsErickson, James (Author) / Bryan, Harvey (Thesis advisor) / Addison, Marlin (Committee member) / Kroelinger, Michael D. (Committee member) / Reddy, T. Agami (Committee member) / Arizona State University (Publisher)
Created2013
Description
According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvements. Energy benchmarking offers an initial assessment of building energy performance without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, in which a relationship between energy use intensity (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the parameters that most influence building energy use intensities, and significant correlations between EUIs and CBECS variables were identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model widely used by building owners and designers, ENERGY STAR's Portfolio Manager. That tool relies on standard linear regression methods, which can handle only continuous variables; the proposed model uses data mining techniques and was found to perform slightly better than Portfolio Manager.
The broader impact of the new benchmarking methodology is that it allows important categorical variables to be identified and then incorporated in a local, as opposed to a global, model framework for the EUI pertinent to the building type. The ability to identify and rank the important variables is of great value in the practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
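The core of the tree-based benchmarking idea - ranking variables by how much splitting on them reduces EUI variance - can be sketched on synthetic data. The variables and numbers below are illustrative, not actual CBECS fields:

```python
import random

random.seed(1)

# Synthetic building records: (floor area m^2, workers, EUI kWh/m^2/yr).
# By construction, EUI here is driven by worker count, not floor area.
data = []
for _ in range(200):
    area = random.uniform(500, 5000)
    workers = random.uniform(10, 500)
    eui = 100.0 + 0.4 * workers + random.gauss(0, 5)
    data.append((area, workers, eui))

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split_gain(xs, ys):
    """Max variance reduction over candidate split points on one variable,
    the criterion each regression tree in a random forest uses."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    base, best = variance(ys), 0.0
    for k in range(10, len(order) - 10):   # skip tiny partitions
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        weighted = (len(left) * variance(left)
                    + len(right) * variance(right)) / len(ys)
        best = max(best, base - weighted)
    return best

areas = [d[0] for d in data]
workers = [d[1] for d in data]
euis = [d[2] for d in data]

gain_area = best_split_gain(areas, euis)
gain_workers = best_split_gain(workers, euis)

# The informative variable wins, mirroring how the forest ranks
# variables such as number of workers above uninformative ones.
print(gain_workers > gain_area)
```

A full random forest aggregates such gains over many randomized trees, but the ranking principle is the same single-split variance reduction shown here.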
ContributorsKaskhedikar, Apoorva Prakash (Author) / Reddy, T. Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and a limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified, with emphasis on its use for controller design purposes. Control design examples are given using a Glover-McFarlane controller, a gain-scheduled Glover-McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain-scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover-McFarlane controllers and shows a clear advantage of the Glover-McFarlane approach.
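For reference, the standard droop control law that the thesis compares against lowers the inverter's frequency and voltage setpoints linearly with measured real and reactive power. The gains and operating point below are illustrative, not the hardware values used in the thesis:

```python
# Conventional P-f / Q-V droop law:
#   f = f0 - m_p * (P - P0)
#   V = V0 - n_q * (Q - Q0)
f0, V0 = 60.0, 1.0       # nominal frequency (Hz) and voltage (pu)
m_p, n_q = 0.0005, 0.02  # illustrative droop gains
P0, Q0 = 0.0, 0.0        # operating point

def droop(P, Q):
    """Return the (frequency, voltage) setpoints for measured P and Q."""
    return f0 - m_p * (P - P0), V0 - n_q * (Q - Q0)

# Illustrative load: 100 kW real power, 2 kvar reactive power.
f, V = droop(100.0, 2.0)
print(f"f = {f:.3f} Hz, V = {V:.3f} pu")
```

Because the gains m_p and n_q are constant, the controller's behavior cannot adapt to the operating point; gain-scheduled MIMO designs such as the Glover-McFarlane controllers address exactly that limitation.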
ContributorsSteenis, Joel (Author) / Ayyanar, Raja (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created2013