Matching Items (357)
Description
Dynamic loading is the term used for one way of optimally loading a transformer. Dynamic loading means the utility takes into account the thermal time constant of the transformer, along with the cooling-mode transitions, loading profile, and ambient temperature, when determining the time-varying loading capability of the transformer. Knowing the maximum dynamic loading rating can increase utilization of the transformer without reducing its life expectancy, thereby delaying its replacement. This document presents progress on the transformer dynamic loading project sponsored by Salt River Project (SRP). A software application which performs dynamic loading for substation distribution transformers with appropriate transformer thermal models is developed in this project. Two kinds of hottest-spot temperature (HST) and top-oil temperature (TOT) thermal models that will be used in the application--the ASU HST/TOT models and the ANSI models--are presented. Brief validations of the ASU models are presented, showing that the ASU models are accurate in simulating the thermal processes of the transformers. For this production-grade application, both the ANSI and the ASU models are built and tested to select the most appropriate models for the dynamic loading calculations. An existing application for building and selecting the TOT model was used as a starting point for the enhancements developed in this work. These enhancements include:
- adding the ability to develop HST models to the existing application,
- adding metrics to evaluate model accuracy and to select which model will be used in the dynamic loading calculation,
- adding the capability to perform dynamic loading calculations,
- producing a maximum dynamic load profile that the transformer can tolerate without acceleration of insulation aging, and
- providing suitable output (plots and text) for the results of the dynamic loading calculation.
Other challenges discussed include modification of the input data format, data-quality control, and cooling-mode estimation. Efforts to overcome these challenges are discussed in this work.
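The ANSI-type TOT model referred to above follows, in its standard IEEE C57.91 form, a first-order exponential response of the top-oil temperature rise to the load. The sketch below illustrates only that general form; the parameter values, load profile, and function name are illustrative assumptions, not the models or code developed in this work.

```python
import math

def top_oil_rise(load_pu_profile, dt_hours, theta_rated=55.0, loss_ratio=3.2,
                 n=0.8, tau_hours=3.0, theta_init=20.0):
    """Toy IEEE C57.91-style top-oil rise model (illustrative parameters).

    Each step drives the top-oil rise exponentially toward its ultimate value
    for the current per-unit load K:
        theta_ult   = theta_rated * ((K**2 * R + 1) / (R + 1)) ** n
        theta(t+dt) = theta_ult + (theta(t) - theta_ult) * exp(-dt / tau)
    """
    theta, rises = theta_init, []
    for k in load_pu_profile:
        theta_ult = theta_rated * ((k**2 * loss_ratio + 1) / (loss_ratio + 1)) ** n
        theta = theta_ult + (theta - theta_ult) * math.exp(-dt_hours / tau_hours)
        rises.append(theta)
    return rises

# Hourly load profile in per unit (made-up values)
print(top_oil_rise([0.6] * 8 + [1.1] * 8 + [0.8] * 8, dt_hours=1.0))
```

A dynamic loading calculation of the kind described above essentially inverts this relationship: it searches for the largest load profile whose predicted hottest-spot temperature stays below the aging limit for the given ambient temperature and cooling mode.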
Contributors: Liu, Yi (Author) / Tylavsky, Daniel J. (Thesis advisor) / Karady, George G. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this dissertation. The DC-DC stage is based on a quad-active-bridge (QAB) converter which provides isolation not only for the load but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single-phase rectifier. A unified gyrator-based average model is developed for a general multi-active-bridge (MAB) converter controlled through phase-shift modulation (PSM). Expressions to determine the power rating of the MAB ports are also derived. The developed gyrator-based average model is applied to the QAB converter for faster simulations of the proposed SST during the control design process as well as for deriving the state-space representation of the plant. Both linear quadratic regulator (LQR) and single-input-single-output (SISO) types of controllers are designed for the DC-DC stage. A novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The QAB demanded power is calculated in the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models. The experimental results presented herein show that the transient responses for each control strategy match those from the simulation results, thus validating them.
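For context, the average power exchanged between two phase-shift-modulated bridges is commonly written as P_jk = V_j V_k d (π − |d|) / (2 π² f_s L_jk), which leads directly to a gyrator form I_j = Σ_k g_jk V_k. The snippet below is only a minimal sketch of that textbook average model; the port count, parameter values, and sign convention are illustrative assumptions and do not reproduce the dissertation's QAB model.

```python
import numpy as np

def mab_port_currents(voltages, phases, fs, L):
    """Gyrator-form average model of a phase-shift-modulated MAB (sketch).

    voltages : per-port DC voltages referred to a common side (V)
    phases   : per-port bridge phase shifts (rad)
    fs       : switching frequency (Hz)
    L        : matrix of equivalent link inductances between ports (H)

    Uses P_jk = V_j * V_k * d * (pi - |d|) / (2 * pi**2 * fs * L_jk) with
    d = phases[j] - phases[k], i.e. gyrator gain g_jk = d*(pi - |d|)/(2*pi**2*fs*L_jk),
    and returns the average current delivered by each port, I_j = sum_k g_jk * V_k.
    """
    n = len(voltages)
    currents = np.zeros(n)
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            d = phases[j] - phases[k]
            g_jk = d * (np.pi - abs(d)) / (2 * np.pi**2 * fs * L[j, k])
            currents[j] += g_jk * voltages[k]
    return currents

# Four-port (QAB-like) example with illustrative values
V = np.array([400.0, 400.0, 380.0, 390.0])   # port voltages (V)
phi = np.array([0.0, 0.3, 0.2, -0.1])        # phase shifts (rad)
Lm = np.full((4, 4), 60e-6)                  # link inductances (H)
print(mab_port_currents(V, phi, fs=20e3, L=Lm))
```

In an average-model simulation these port currents feed the DC-link capacitor equations at each port, which is what makes the gyrator form convenient for deriving a state-space plant model for control design.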
Contributors: Falcones, Sixifo Daniel (Author) / Ayyanar, Raja (Thesis advisor) / Karady, George G. (Committee member) / Tylavsky, Daniel (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the classification error rate by registering each image against previously obtained images before performing classification. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in their characteristics. Hence, registration provides an image that matches the previously obtained image more closely, yielding better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life data set, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naïve Bayes achieve better classification, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
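As a toy illustration of the register-then-classify idea (not the thesis implementation, which operates on LAGR imagery), the sketch below aligns a set of 2-D feature points to a previously obtained reference with point-to-point ICP and then classifies them with a Gaussian naïve Bayes model trained on the reference; all data, labels, and parameters here are synthetic placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def icp_2d(src, dst, iterations=20):
    """Point-to-point ICP: rigidly align src (N,2) to dst (M,2)."""
    cur = src.copy()
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force for brevity)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        # Best-fit rotation and translation via SVD (Procrustes)
        mu_s, mu_d = cur.mean(axis=0), matches.mean(axis=0)
        H = (cur - mu_s).T @ (matches - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 2))             # previously obtained data
rot = np.array([[0.98, -0.17], [0.17, 0.98]])     # ~10 degree rotation
observed = reference @ rot.T + 0.5                # newly obtained, misaligned data
aligned = icp_2d(observed, reference)             # register before classifying

labels = (reference[:, 0] > 0).astype(int)        # toy ground-truth classes
clf = GaussianNB().fit(reference, labels)
print("accuracy on registered data:", clf.score(aligned, labels))
```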
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Papago Park in Tempe, Arizona (USA) is host to several buttes composed of landslide breccias. The focus of this thesis is a butte called “Contact Hill,” which is composed of metarhyolitic debris flows, granitic debris flows, and Barnes Butte Breccia. The Barnes Butte Breccia can be broken down into several compositional categories that can be placed in order of relative age. The depositional timeline of these rocks is explored through their mineral and physical properties. The rhyolitic debris flow is massively bedded and dips at 26° to the southeast. The granitic debris flow is not bedded and exhibits a mixture of granite clasts of different grain sizes. In thin-section analysis, five mineral types were identified: opaque inclusions, white quartz, anhedral and subhedral biotite, yellow-stained K-feldspar, and gray plagioclase. It is hypothesized that regional stretching and compression of the crust, accompanied by magmatism, helped bring the metarhyolite and granite to the surface. Domino-like fault blocks caused large-scale brecciation, and collapse of a nearby quartzite and granite mountain helped create the Barnes Butte Breccia: a combination of quartzite, metarhyolite, and granite clasts. Evidence of Papago Park’s ancient terrestrial history is seen in metarhyolite clasts containing sand grains. These geologic events, in addition to erosion, are responsible for Papago Park’s unique appearance today.

Contributors: Scheller, Jessica Rose (Author) / Reynolds, Stephen (Thesis director) / Johnson, Julia (Committee member) / School of Earth and Space Exploration (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Stardust grains can provide useful information about the Solar System environment before the Sun was born. Stardust grains show distinct isotopic compositions that indicate their origins, such as the atmospheres of red giant stars, asymptotic giant branch stars, and supernovae (e.g., Bose et al. 2010). It has been argued that some stardust grains likely condensed in classical nova outbursts (e.g., Amari et al. 2001). These nova candidate grains are rich in 13C, 15N, and 17O, nuclides which are produced by proton burning. However, these nuclides alone cannot constrain the stellar source of nova candidate grains. Nova ejecta are also rich in 7Be, which decays to 7Li with a half-life of ~53 days. I want to measure 6,7Li isotopes in nova candidate grains using the NanoSIMS 50L (nanoscale secondary ion mass spectrometry) to establish their nova origins without ambiguity. Several nova candidate grains were identified in the meteorite Acfer 094 on the basis of their oxygen isotopes. The identified silicate and oxide stardust grains are <500 nm in size and exist in the meteorite surrounded by meteoritic silicates. Therefore, 6,7Li isotopic measurements on these grains are hindered by the large 300-500 nm oxygen ion beam of the NanoSIMS. I devised a methodology to isolate stardust grains by performing Focused Ion Beam milling with the FIB – Nova 200 NanoLab (FEI) instrument. We showed that the current FIB instrument cannot be used to prepare stardust grains smaller than 1 μm because of its limited capabilities. For future analyses, we could either use the same milling technique with the new and improved FIB – Helios 5 UX or use the recently constructed duoplasmatron on the NanoSIMS, which can achieve an oxygen ion beam size of ~75 nm.
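For reference, the ingrowth of 7Li from 7Be in the ejecta follows the standard radioactive decay law (7Be decays to 7Li by electron capture with the ~53-day half-life noted above); a compact statement of that relation, added here only for context, is:

```latex
\[
  N_{^{7}\mathrm{Li}}(t) \;=\; N_{^{7}\mathrm{Be}}(0)\,\bigl(1 - e^{-\lambda t}\bigr),
  \qquad
  \lambda \;=\; \frac{\ln 2}{T_{1/2}} \;\approx\; \frac{\ln 2}{53\ \mathrm{d}}.
\]
```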

Contributors: Duncan, Ethan Jay (Author) / Bose, Maitrayee (Thesis director) / Starrfield, Sumner (Committee member) / Desch, Steve (Committee member) / School of Earth and Space Exploration (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In this study, the influence of fluid mixing on the temperature and geochemistry of hot spring fluids is investigated. Yellowstone National Park (YNP) is home to a diverse range of hot springs with varying temperature and chemistry. The mixing zone of interest in this paper, located in Geyser Creek, YNP, has been a point of interest since at least the 1960s (Raymahashay, 1968). Two springs, one basic (~pH 7) and one acidic (~pH 3), mix together down an outflow channel. Visible bands of different photosynthetic pigments suggest the creation of temperature and chemical gradients as the fluids mix. In this study, to determine whether fluid mixing drives these changes in temperature and chemistry, a model that factors in evaporation and cooling was developed and compared to measured temperature and chemical data collected downstream. Comparison of the modeled temperature and chemistry to the measured values at the downstream mixture shows that many of the ions, such as Cl⁻, F⁻, and Li⁺, behave conservatively with respect to mixing. This indicates that the influence of mixing accounts for a large proportion of the variation in the chemical composition of the system. However, some chemical constituents, such as CH₄, H₂, and NO₃⁻, were not conserved; their concentrations were either depleted or enriched in the downstream mixture. Some of these constituents are known to be used by microorganisms. The mixing model developed here can be used as a tool for predicting biological activity as well as for building the framework for future geochemical and computational models that can be used to understand the energy availability and the microbial communities that are present.
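A minimal sketch of the conservative two-endmember mixing calculation described above follows; the concentrations are made-up placeholders, Cl⁻ is used as the conservative tracer that fixes the mixing fraction, and the evaporation and cooling corrections included in the actual model are omitted.

```python
def mixing_fraction(c_tracer_a, c_tracer_b, c_tracer_mix):
    """Fraction of fluid A in the mixture, from a conservative tracer (e.g. Cl-)."""
    return (c_tracer_mix - c_tracer_b) / (c_tracer_a - c_tracer_b)

def predicted_mixture(conc_a, conc_b, f_a):
    """Conservative two-endmember mixing: C_mix = f_a * C_A + (1 - f_a) * C_B."""
    return {ion: f_a * conc_a[ion] + (1 - f_a) * conc_b[ion] for ion in conc_a}

# Illustrative (made-up) concentrations in mg/L for the two source springs
spring_a = {"Cl": 300.0, "F": 12.0, "Li": 2.0, "CH4": 0.10}
spring_b = {"Cl": 40.0,  "F": 1.0,  "Li": 0.2, "CH4": 0.05}

f_a = mixing_fraction(spring_a["Cl"], spring_b["Cl"], c_tracer_mix=170.0)
print(f"fraction of spring A in the mixture: {f_a:.2f}")
print(predicted_mixture(spring_a, spring_b, f_a))
# Constituents whose measured downstream values deviate from these predictions
# (e.g. CH4, H2, NO3-) are candidates for biological or other non-conservative processes.
```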

Contributors: Orrill, Brianna Isabel (Author) / Shock, Everett (Thesis director) / Howells, Alta (Committee member) / School of Life Sciences (Contributor) / School of Earth and Space Exploration (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This research endeavor explores the 1964 reasoning of Northern Irish physicist John Bell and how it pertains to the provocative Einstein-Podolsky-Rosen (EPR) paradox. It is first necessary to establish the workings of formalisms ranging from conservation laws to quantum mechanical principles. The notion that locality cannot be reconciled with the quantum paradigm is upheld through analysis and by the subsequent Aspect experiments of 1980-1982. No matter the complexity, any local hidden variable theory is incompatible with the formulation of standard quantum mechanics. A number of strikingly ambiguous and abstract concepts are addressed in this pursuit to assess the validity of quantum mechanics, including separability and reality. 'Elements of reality' characteristic of unique spaces are defined using basis terminology and logic from EPR. The discussion draws directly from Bell's succinct 1964 paper in Physics 1 as well as numerous other useful sources. The fundamental principle and insight gleaned is that quantum physics is indeed nonlocal; the door into its metaphysical and philosophical implications has long since been opened. Yet the nexus of information pertaining to Bell's inequality and EPR logic does nothing but affirm the impeccable success of quantum physics in describing nature.
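For reference, the inequality derived in Bell's 1964 paper, together with the quantum-mechanical singlet-state prediction that violates it, can be written as follows; the 60°/120° settings worked in the comments are just one standard choice that exhibits the violation.

```latex
% With theta_ab = theta_bc = 60 deg and theta_ac = 120 deg:
%   |P(a,b) - P(a,c)| = |-1/2 - 1/2| = 1  >  1 + P(b,c) = 1 - 1/2 = 1/2,
% so no local hidden-variable assignment reproduces the quantum prediction.
\[
  \bigl|P(\vec{a},\vec{b}) - P(\vec{a},\vec{c})\bigr| \;\le\; 1 + P(\vec{b},\vec{c}),
  \qquad
  P_{\mathrm{QM}}(\vec{a},\vec{b}) \;=\; -\,\vec{a}\cdot\vec{b} \;=\; -\cos\theta_{ab}.
\]
```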

Contributors: Rapp, Sean R (Author) / Foy, Joseph (Thesis director) / Martin, Thomas (Committee member) / School of Earth and Space Exploration (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Stellar mass loss has a high impact on the overall evolution of a star. The amount of mass lost during a star’s lifetime dictates which remnant will be left behind and how the circumstellar environment will be affected. Several rates of mass loss have been proposed for use in stellar evolution codes, yielding discrepant results from codes using different rates. In this paper, I compare the effect of varying the mass loss rate in the stellar evolution code TYCHO on the initial-final mass relation. I computed four sets of models with varying mass loss rates and metallicities. Due to a large number of models reaching the luminous blue variable stage, only the two lower metallicity groups were considered. Their mass loss was analyzed using Python. Luminosity, temperature, and radius were also compared. The initial-final mass relation plots showed that in the 1/10 solar metallicity case, reducing the mass loss rate tended to increase the dependence of final mass on initial mass. The limited nature of these results implies a need for further study into the effects of using different mass loss rates in the code TYCHO.
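As a small illustration of the kind of Python comparison described above, the snippet below fits a line to an initial-final mass relation for two mass-loss prescriptions and compares the slopes; the mass values are placeholders, not the TYCHO results reported here.

```python
import numpy as np

# Placeholder initial/final masses (solar masses) for two mass-loss-rate scalings;
# illustrative numbers only, not the TYCHO model outputs described above.
initial = np.array([15.0, 20.0, 25.0, 30.0, 40.0])
final_nominal = np.array([4.2, 6.0, 7.5, 9.1, 12.0])   # nominal mass loss rate
final_reduced = np.array([5.0, 7.4, 9.8, 12.5, 17.0])  # reduced mass loss rate

# "Dependence of final mass on initial mass" taken here as the slope of a
# linear fit to each initial-final mass relation.
slope_nominal = np.polyfit(initial, final_nominal, 1)[0]
slope_reduced = np.polyfit(initial, final_reduced, 1)[0]
print(f"IFMR slope, nominal mass loss: {slope_nominal:.2f}")
print(f"IFMR slope, reduced mass loss: {slope_reduced:.2f}")
```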

Contributors: Auchterlonie, Lauren (Author) / Young, Patrick (Thesis director) / Shkolnik, Evgenya (Committee member) / Starrfield, Sumner (Committee member) / School of Earth and Space Exploration (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Recent changes in the energy market structure combined with continuous load growth have caused power systems to be operated under more stressed conditions. In addition, the nature of power systems has also grown more complex and dynamic because of the increasing use of long inter-area tie-lines and the high motor loads, especially those composed mainly of residential single-phase A/C motors. Therefore, delayed voltage recovery, fast voltage collapse, and short-term voltage stability issues in general have gained significant importance in reliability studies. Shunt VAr injection has been used as a countermeasure for voltage instability. However, the dynamic and fast nature of short-term voltage instability requires fast and sufficient VAr injection, and therefore dynamic VAr devices such as Static VAr Compensators (SVCs) and STATic COMpensators (STATCOMs) are used. The location and size of such devices are optimized in order to improve their efficiency and reduce initial costs. In this work, time-domain dynamic analysis was used to evaluate trajectory voltage sensitivities at each time step. Linear programming was then performed to determine the optimal amount of required VAr injection at each bus, using voltage sensitivities as weighting factors. Optimal VAr injection values from different operating conditions were weighted and averaged in order to obtain a final setting of the VAr requirement. Some buses under consideration were assigned either very small VAr injection values or no value at all. Therefore, the approach used in this work was found to be useful not only in determining the optimal size of SVCs, but also their location.
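One plausible, simplified formulation of the linear program described above is sketched below with scipy: minimize the total VAr injection subject to sensitivity-based voltage-recovery constraints. The sensitivity matrix, voltage targets, bounds, and cost vector are illustrative assumptions; the thesis formulation additionally uses the trajectory sensitivities as weighting factors and averages results over multiple operating conditions.

```python
import numpy as np
from scipy.optimize import linprog

# Trajectory voltage sensitivities dV_i/dQ_j (pu voltage per MVAr) at a critical
# time step, and the voltage improvement required at each monitored bus.
# Values are illustrative placeholders, not data from this work.
S = np.array([[0.0040, 0.0010, 0.0005],
              [0.0010, 0.0050, 0.0010],
              [0.0005, 0.0020, 0.0060]])
dv_required = np.array([0.05, 0.04, 0.06])      # pu voltage recovery needed

# Minimize total VAr injection subject to S @ q >= dv_required, 0 <= q <= 300 MVAr.
c = np.ones(S.shape[1])                          # equal cost per MVAr at every bus
res = linprog(c, A_ub=-S, b_ub=-dv_required,
              bounds=[(0.0, 300.0)] * S.shape[1], method="highs")
print("optimal VAr injection per candidate bus (MVAr):", res.x)
```

Buses that receive a near-zero injection in such a solution are natural candidates to drop from the SVC placement, which is how the optimization informs location as well as size.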
Contributors: Salloum, Ahmed (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Due to restructuring and open access to the transmission system, modern electric power systems are being operated closer to their operational limits. Additionally, the secure operational limits of modern power systems have become increasingly difficult to evaluate as the scale of the network and the number of transactions between utilities increase. To account for these challenges associated with the rapid expansion of electric power systems, dynamic equivalents have been widely applied for the purpose of reducing the computational effort of simulation-based transient security assessment. Dynamic equivalents are commonly developed using a coherency-based approach in which a retained area and an external area are first demarcated. Then the coherent generators in the external area are aggregated and replaced by equivalenced models, followed by network reduction and load aggregation. In this process, an improperly defined retained area can result in detrimental impacts on the effectiveness of the equivalents in preserving the dynamic characteristics of the original unreduced system. In this dissertation, a comprehensive approach has been proposed to determine an appropriate retained area boundary by including the critical generators in the external area that are tightly coupled with the initial retained area. Furthermore, a systematic approach has also been investigated to efficiently predict the variation in generator slow coherency behavior when the system operating condition is subject to change. Based on this determination, the critical generators in the external area that are tightly coherent with the generators in the initial retained area are retained, resulting in a new retained area boundary. Finally, a novel hybrid dynamic equivalent, consisting of both a coherency-based equivalent and an artificial neural network (ANN)-based equivalent, has been proposed and analyzed. The ANN-based equivalent complements the coherency-based equivalent at all the retained area boundary buses, and it is designed to compensate for the discrepancy between the full system and the conventional coherency-based equivalent. The approaches developed have been validated on a large portion of the Western Electricity Coordinating Council (WECC) system and on a test case including a significant portion of the Eastern Interconnection.
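As a simplified illustration of coherency identification (not the slow-coherency method based on the linearized system's slow eigen-subspace that underlies the approach above), the sketch below groups generators whose simulated rotor-angle swings are strongly correlated; the trajectories and clustering threshold are made-up placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def coherency_groups(angle_trajectories, threshold=0.05):
    """Group generators whose rotor-angle swings move together (toy sketch).

    angle_trajectories : array of shape (n_generators, n_time_steps) holding
        rotor-angle deviations from a disturbance simulation.
    Generators are clustered hierarchically on 1 - correlation of their swings,
    so tightly coherent machines end up in the same group.
    """
    dev = angle_trajectories - angle_trajectories.mean(axis=1, keepdims=True)
    corr = np.corrcoef(dev)
    # Condensed pairwise distance vector expected by scipy's linkage
    dist = np.clip(1.0 - corr[np.triu_indices_from(corr, k=1)], 0.0, None)
    tree = linkage(dist, method="average")
    return fcluster(tree, t=threshold, criterion="distance")

# Toy example: three machines swinging together, one swinging against them
t = np.linspace(0, 5, 200)
angles = np.vstack([np.sin(2 * t), 0.9 * np.sin(2 * t),
                    1.1 * np.sin(2 * t + 0.05), np.sin(5 * t)])
print(coherency_groups(angles))   # e.g. [1 1 1 2]
```

In a coherency-based equivalent, each such group in the external area would then be aggregated into a single equivalent machine before network reduction and load aggregation.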
Contributors: Ma, Feng (Author) / Vittal, Vijay (Thesis advisor) / Tylavsky, Daniel (Committee member) / Heydt, Gerald (Committee member) / Si, Jennie (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011