Matching Items (175)

Description

At present, almost 70% of the electric energy in the United States is produced using fossil fuels. Combustion of fossil fuels adds CO2 to the atmosphere, potentially exacerbating global warming. To make the electric power system (EPS) more sustainable for the future, there has been an emphasis on scaling up the generation of electric energy from wind and solar resources. These resources are renewable and their operation is pollution free. Various US states have set targets for the amount of electrical energy to be produced from renewable resources. The southwestern United States receives significant solar radiation throughout the year, which makes concentrated solar power and solar PV the most suitable means of renewable energy production in this region. However, the majority of the projects presently being developed are either residential or utility-owned solar PV plants. This research explores the impact of significant PV penetration on the steady-state voltage profile of the electric power transmission system. It also identifies the impact of PV penetration on the dynamic response of the transmission system, including rotor angle stability, frequency response, and voltage response after a contingency. The light-load case of spring 2010 and the peak-load case of summer 2018 are considered for analyzing the impact of PV. Where the impact is found to be detrimental to normal EPS operation, mitigation measures are devised and presented in the thesis. Commercially available software packages such as PSLF, PSS/E, and DSA Tools are used to analyze the power network and validate the results.
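
As an aside for readers unfamiliar with the dynamic phenomena named above, the sketch below integrates the classical single-machine swing equation through a brief fault to show how rotor angle and frequency respond to a contingency. It is purely illustrative: the machine constants, fault model, and clearing time are hypothetical, and the thesis itself performs these studies on full transmission models in PSLF, PSS/E, and DSA Tools.

    # Minimal single-machine-infinite-bus swing-equation sketch (illustrative only).
    # The machine constants below (H, D, E, V, X) are hypothetical round numbers.
    import numpy as np

    H, D = 4.0, 1.0          # inertia constant (s), damping (pu)
    ws = 2 * np.pi * 60.0    # synchronous speed (rad/s)
    E, V, X = 1.05, 1.0, 0.5 # internal EMF, bus voltage, reactance (pu)
    Pm = 0.8                 # mechanical input power (pu)

    def pe(delta, faulted):
        """Electrical output; the fault is modeled as a 70% drop in transfer capability."""
        return (0.3 if faulted else 1.0) * E * V / X * np.sin(delta)

    delta = np.arcsin(Pm * X / (E * V))  # pre-fault equilibrium rotor angle
    dw = 0.0                             # speed deviation (rad/s)
    dt, t_clear = 1e-3, 0.10             # time step, fault-clearing time (s)

    for k in range(int(2.0 / dt)):       # 2-second simulation, forward Euler
        t = k * dt
        faulted = t < t_clear
        ddw = (ws / (2 * H)) * (Pm - pe(delta, faulted) - D * dw / ws)
        delta += dt * dw
        dw += dt * ddw

    print(f"rotor angle after 2 s: {np.degrees(delta):.1f} deg, "
          f"frequency deviation: {dw / (2 * np.pi):.3f} Hz")
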
Contributors: Prakash, Nitin (Author) / Heydt, Gerald T. (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Magnetic resonance imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and an improvement in image quality.
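
The delay-estimation scheme above is specific to the modified stack-of-spirals acquisition; as a generic illustration of the underlying idea (aligning a measured gradient waveform with its programmed version), the sketch below estimates a single constant delay by cross-correlation with a parabolic sub-sample refinement. The waveform, noise level, and raster time are hypothetical, and this is not the thesis method.

    # Generic delay-estimation sketch: find the shift that best aligns a measured
    # waveform with its ideal (programmed) version via cross-correlation.
    # Illustration only; the thesis estimates time-varying, per-channel delays
    # from data collected on three orthogonal cylinders.
    import numpy as np

    dt = 4e-6                                   # hypothetical 4 us raster time
    t = np.arange(0, 2e-3, dt)
    ideal = np.sin(2 * np.pi * 3e3 * t) * np.exp(-t / 1e-3)  # toy gradient waveform

    true_delay = 3.2e-6                         # hypothetical hardware delay
    measured = np.interp(t - true_delay, t, ideal) + 0.01 * np.random.randn(t.size)

    # Cross-correlate, then refine the integer-sample peak with a parabolic fit
    # to obtain sub-sample resolution.
    xc = np.correlate(measured, ideal, mode="full")
    k = int(np.argmax(xc))
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabolic peak offset
    delay = (k - (ideal.size - 1) + frac) * dt

    print(f"estimated delay: {delay * 1e6:.2f} us (true {true_delay * 1e6:.2f} us)")
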
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual-energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual-energy CTA exam was performed on patients at dose levels equivalent to those of a single-energy CTA with a calcium-scoring exam. Calcium Agatston scores obtained from the dual-energy CTA exam were within ±11% of scores obtained with conventional calcium-scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual-energy CTA were able to measure percent coronary stenosis to within 5% of known stenosis values, which is not possible with single-energy CTA images because of the calcium blooming artifact. After fabricating an anthropomorphic beating-heart phantom with coronary plaques, characterization of soft-plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to the aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model using support vector machines was developed, with training data from the beating-heart phantom and plaques, to classify coronary soft-plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification exhibited a 17% error with single-energy CTA images, whereas dual-energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as for extending the use of coronary CTA to patients with highly attenuating calcium plaques.
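
As a rough illustration of the kind of support-vector-machine classifier described above, the sketch below trains scikit-learn's SVC on synthetic two-feature "pixels" (attenuation at two tube energies) and reports the held-out error. The features, class statistics, and resulting error rate are invented stand-ins; the thesis trains on dual-energy CT measurements from the beating-heart phantom.

    # Illustrative SVM plaque-pixel classifier in the spirit described above.
    # Features and data are synthetic stand-ins, not CT measurements.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical per-pixel features: attenuation (HU) at low and high tube energy.
    lipid = rng.normal(loc=[20.0, 35.0], scale=[12.0, 10.0], size=(n, 2))
    fibrous = rng.normal(loc=[70.0, 60.0], scale=[15.0, 12.0], size=(n, 2))
    X = np.vstack([lipid, fibrous])
    y = np.hstack([np.zeros(n), np.ones(n)])        # 0 = lipid, 1 = fibrous

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(f"held-out pixel classification error: {1 - clf.score(X_te, y_te):.3f}")
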
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Recent trends in the electric power industry have led to more attention to the optimal operation of power transformers. In a deregulated environment, optimal operation means minimizing maintenance and extending the life of this critical and costly equipment for the purpose of maximizing profits. Optimal utilization of a transformer can be achieved through dynamic loading. A benefit of dynamic loading is that it allows better utilization of the transformer capacity, thus increasing the flexibility and reliability of the power system. This document presents progress on a software application that can estimate the maximum time-varying loading capability of transformers. This information can be used to load devices closer to their limits without exceeding the manufacturer-specified operating limits. Maximally efficient dynamic loading of transformers requires a model that can accurately predict both top-oil temperatures (TOTs) and hottest-spot temperatures (HSTs). In previous work, two kinds of TOT and HST thermal models have been studied and used in the application: the IEEE TOT/HST models and the ASU TOT/HST models. Several metrics have been applied to evaluate model acceptability and determine the most appropriate models for use in the dynamic loading calculations. In this work, an investigation into improving the performance of the existing transformer thermal models is presented. Factors that may affect model performance, such as incorrect fan status and the error caused by the poor performance of the IEEE models, are discussed. Additional methods to determine the reliability of transformer thermal models, using metrics such as the time constant and the model parameters, are also provided. A new production-grade application for real-time dynamic loading operation is introduced. This application is developed using an existing planning application, TTeMP, designed for dispatchers and load specialists, as a starting point. To overcome the limitations of TTeMP, the new application can perform dynamic loading under emergency conditions, such as loss-of-transformer loading. It also has the capability to determine the emergency rating of the transformers in real time.
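
For context, the IEEE-style top-oil model mentioned above is commonly written as a first-order response whose ultimate temperature rise depends on the per-unit load. The sketch below steps such a model through a hypothetical daily load and ambient profile; the parameters (rated rise, loss ratio, oil exponent, time constant) are illustrative values, not those of any transformer studied in the thesis, and neither the ASU models nor the production application is reproduced here.

    # Sketch of a first-order IEEE-style top-oil temperature (TOT) model: the
    # ultimate top-oil rise follows the load ratio K, and the oil temperature
    # approaches it exponentially with time constant tau. Parameters are hypothetical.
    import numpy as np

    dtheta_rated = 45.0   # rated top-oil rise over ambient (deg C)
    R = 4.5               # ratio of load loss to no-load loss at rated load
    n = 0.9               # oil exponent
    tau = 3.0             # top-oil time constant (hours)

    def step_tot(theta_oil, theta_amb, K, dt):
        """Advance the top-oil temperature by dt hours at per-unit load K."""
        dtheta_ult = dtheta_rated * ((K**2 * R + 1.0) / (R + 1.0)) ** n
        theta_ult = theta_amb + dtheta_ult
        return theta_ult + (theta_oil - theta_ult) * np.exp(-dt / tau)

    # Hourly ambient and load profiles for one hypothetical day.
    hours = np.arange(24)
    ambient = 30.0 + 8.0 * np.sin((hours - 9) / 24 * 2 * np.pi)
    load = 0.6 + 0.5 * np.clip(np.sin((hours - 8) / 24 * 2 * np.pi), 0, None)

    theta = ambient[0] + dtheta_rated * (1 / (R + 1.0)) ** n  # near no-load steady state
    for h in hours:
        theta = step_tot(theta, ambient[h], load[h], dt=1.0)
        print(f"hour {h:2d}: load {load[h]:.2f} pu, TOT {theta:5.1f} C")
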
Contributors: Zhang, Ming (Author) / Tylavsky, Daniel J (Thesis advisor) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and a limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, a gain-scheduled Glover McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain-scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.
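
The Glover McFarlane designs themselves are beyond a short example, but the conventional droop law they are benchmarked against can be sketched compactly. The gains, ratings, and measured powers below are hypothetical illustration values, not taken from the hardware setup described above.

    # Sketch of the conventional P-f / Q-V droop law used as the benchmark above.
    # Droop slopes, setpoints, and measured powers are hypothetical.
    def droop_setpoints(p_meas, q_meas,
                        f_nom=60.0, v_nom=1.0,
                        p_rated=1.0, q_rated=0.5,
                        mp=0.01, nq=0.05):
        """Return the frequency and voltage references for one inverter.

        mp and nq are per-unit droop slopes: full rated active power pulls the
        frequency reference down by mp * f_nom, and full rated reactive power
        pulls the voltage reference down by nq * v_nom.
        """
        f_ref = f_nom - mp * f_nom * (p_meas / p_rated)
        v_ref = v_nom - nq * v_nom * (q_meas / q_rated)
        return f_ref, v_ref

    # Example: an inverter loaded at 80% rated active and 40% rated reactive power.
    f_ref, v_ref = droop_setpoints(p_meas=0.8, q_meas=0.2)
    print(f"frequency reference: {f_ref:.3f} Hz, voltage reference: {v_ref:.3f} pu")
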
Contributors: Steenis, Joel (Author) / Ayyanar, Raja (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Today, more and more substations are being built and reconstructed to satisfy growing electricity demand from both industrial and residential customers. A major concern is that the designed substation must guarantee the safety of persons in the substation area. Accordingly, the safety metrics (touch voltage, step voltage, and grounding resistance), which should be evaluated for the worst case, must remain below allowable values. To improve the accuracy of the safety-metric calculations, it is first necessary to have a relatively accurate soil model rather than a uniform soil model; hence, a two-layer soil model is employed in this thesis. New approximate finite equations for the soil parameters (upper-layer resistivity, lower-layer resistivity, and upper-layer thickness), developed from the traditional infinite-series expression, are used. Weighted-least-squares regression with a new bad-data detection method (an adaptive weighting function) is applied to fit the measurement data from the Wenner method. An error-analysis method is then used to obtain the error (variance) of each parameter. Once the soil parameters are obtained, a complex-images method is used to calculate the mutual (self) resistance, which is the voltage induced on one conductor/rod by a unit current injected from another conductor/rod. The calculation is based on the Green's function between two point current sources and can therefore be extended to the functions between point and line current sources, or between two line current sources. Finally, the grounding-system optimization is implemented with a three-step strategy using MATLAB solvers. The first step uses the "fmincon" solver to optimize the cost function subject to the differentiable constraint equations from the IEEE standard. The result of the first step provides the initial values for the second step, which uses the "patternsearch" solver so that non-differentiable, more accurate constraint calculations can be employed. The final step is a backup step using the "ga" solver, which is more robust but has a larger time cost.
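
As an illustration of the soil-model fitting step described above, the sketch below fits the two-layer parameters (rho1, rho2, h) to synthetic Wenner-method measurements using the classic truncated infinite-series expression for apparent resistivity and an ordinary scipy least-squares fit. It deliberately omits the thesis's approximate finite equations, weighting, and adaptive bad-data detection, so it should be read as a simplified stand-in rather than the method developed in the thesis.

    # Illustrative two-layer soil-model fit from Wenner-method measurements.
    # Classic infinite-series apparent-resistivity expression, truncated, with an
    # ordinary (unweighted) scipy least-squares fit on synthetic data.
    import numpy as np
    from scipy.optimize import least_squares

    def apparent_resistivity(a, rho1, rho2, h, terms=100):
        """Wenner apparent resistivity over two-layer soil (series truncated)."""
        K = (rho2 - rho1) / (rho2 + rho1)
        n = np.arange(1, terms + 1)[:, None]       # series index
        ratio = 2.0 * n * h / a                    # broadcast over probe spacings a
        series = (K**n * (1.0 / np.sqrt(1.0 + ratio**2)
                          - 1.0 / np.sqrt(4.0 + ratio**2))).sum(axis=0)
        return rho1 * (1.0 + 4.0 * series)

    # Synthetic "measurements": rho1 = 100, rho2 = 300 ohm-m, h = 2 m, plus noise.
    spacings = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    rng = np.random.default_rng(1)
    measured = apparent_resistivity(spacings, 100.0, 300.0, 2.0) \
               * (1 + 0.02 * rng.standard_normal(spacings.size))

    def residuals(x):
        rho1, rho2, h = x
        return apparent_resistivity(spacings, rho1, rho2, h) - measured

    fit = least_squares(residuals, x0=[150.0, 150.0, 1.0],
                        bounds=([1.0, 1.0, 0.1], [1e4, 1e4, 50.0]))
    print("estimated rho1, rho2, h:", np.round(fit.x, 1))
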
Contributors: Wu, Xuan (Author) / Tylavsky, Daniel (Thesis advisor) / Undrill, John (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and an expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to the market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue-culture-plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.
Contributors: Taylor, David (Author) / Rege, Kaushal (Thesis advisor) / Jayaraman, Arul (Committee member) / Nielsen, David (Committee member) / Kodibagkar, Vikram (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

This thesis concerns the flashover of substation insulators operating in a polluted environment. The outdoor insulation equipment used in the power delivery infrastructure encounters different types of pollutants under varied environmental conditions. Various methods have been developed by manufacturers and researchers to mitigate the flashover problem. The application of Room Temperature Vulcanized (RTV) silicone rubber is one such favorable method, as it can be applied over already installed units. Field experience has already shown that RTV silicone rubber coated insulators have a lower flashover probability than uncoated insulators. The scope of this research is to quantify the improvement in flashover performance. Artificial contamination tests were carried out on station post insulators to assess their performance. A factorial experiment design was used to model the flashover performance; the formulation included the contamination severity and the leakage distance of the insulator samples. Regression analysis was used to develop a mathematical model from the experimental data. The main conclusion drawn from the study is that the RTV-coated insulators withstood much higher levels of contamination even when the coating had lost its hydrophobicity. This improvement in flashover performance was found to be in the range of 20-40%. A much better flashover performance was observed when the coating recovered its hydrophobicity. The adhesion of the coating also remained excellent even after many tests involving substantial discharge activity.
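
As a toy version of the regression modeling described above, the sketch below fits a power-law relation between flashover voltage, leakage distance, and contamination severity (ESDD) in log space. The functional form is a common choice in the contamination-flashover literature, and the data and coefficients are synthetic; the thesis fits its own factorial-design model to measured data.

    # Illustrative regression of flashover voltage against contamination severity
    # (ESDD) and leakage distance, using the power-law form V = A * L^b * ESDD^(-c)
    # often assumed in the literature. Data and coefficients are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    esdd = rng.uniform(0.05, 0.5, size=40)        # contamination severity (mg/cm^2)
    leak = rng.uniform(0.5, 2.0, size=40)         # leakage distance (m)
    # Synthetic "measured" flashover voltages from assumed A=35, b=1.0, c=0.35.
    v_fo = 35.0 * leak**1.0 * esdd**-0.35 * (1 + 0.05 * rng.standard_normal(40))

    # Linear regression in log space: log V = log A + b*log L - c*log ESDD.
    X = np.column_stack([np.ones_like(esdd), np.log(leak), np.log(esdd)])
    coef, *_ = np.linalg.lstsq(X, np.log(v_fo), rcond=None)
    A, b, c = np.exp(coef[0]), coef[1], -coef[2]
    print(f"fitted model: V = {A:.1f} * L^{b:.2f} * ESDD^(-{c:.2f})")
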
Contributors: Gholap, Vipul (Author) / Gorur, Ravi S (Thesis advisor) / Karady, George G. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

As renewable energy becomes more prevalent in transmission and distribution systems, it is vital to understand the uncertainty and variability that accompany these resources. Microgrids have the potential to mitigate the effects of resource uncertainty. With the ability to operate in an islanded mode or maintain a connection with the main grid, a microgrid can increase reliability, defer T&D infrastructure investment, and effectively utilize demand response. This study presents a co-optimization framework for a microgrid with solar photovoltaic generation, emergency generation, and transmission switching. Today, unit commitment models ensure reliability with deterministic criteria, which can be insufficient to ensure reliability or can degrade economic efficiency for a microgrid with a large penetration of variable renewable resources. A stochastic mixed-integer linear program for day-ahead unit commitment is proposed to account for the uncertainty inherent in PV generation. The model incorporates the ability to trade energy and ancillary services with the main grid, including the designation of firm and non-firm imports, which captures the ability to share reserves between the two systems. To manage the computational complexity, a Benders decomposition approach is utilized. The commitment schedule was validated with solar scenario analysis; that is, Monte Carlo simulations were conducted to test the proposed dispatch solution. For this test case, there were few deviations in power imports, 0.007% of solar was curtailed, no load shedding occurred in the main grid, and 1.70% load shedding occurred in the microgrid.
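
To make the two-stage structure above concrete, the sketch below builds a toy scenario-based stochastic unit commitment with PuLP: binary commitments are shared across PV scenarios while dispatch and load shedding adapt per scenario, and the expected cost is minimized. It is solved as one monolithic MILP rather than by Benders decomposition, and the generators, scenarios, and costs are made-up illustration data, not the thesis's microgrid model.

    # Toy scenario-based stochastic unit commitment: first-stage commitments are
    # shared across PV scenarios, second-stage dispatch/shedding adapt per scenario.
    # Solved as one MILP with PuLP; all data are made up.
    import pulp

    gens = {"diesel": dict(pmax=4.0, cost=80.0, fixed=50.0),
            "microturbine": dict(pmax=2.0, cost=120.0, fixed=20.0)}
    scenarios = {"sunny": dict(prob=0.5, pv=3.0),
                 "cloudy": dict(prob=0.3, pv=1.5),
                 "overcast": dict(prob=0.2, pv=0.5)}
    demand, voll = 6.0, 5000.0                      # MW load, value of lost load

    prob = pulp.LpProblem("stochastic_uc", pulp.LpMinimize)
    u = {g: pulp.LpVariable(f"u_{g}", cat="Binary") for g in gens}
    p = {(g, s): pulp.LpVariable(f"p_{g}_{s}", lowBound=0) for g in gens for s in scenarios}
    shed = {s: pulp.LpVariable(f"shed_{s}", lowBound=0) for s in scenarios}

    # Expected cost: fixed commitment cost + probability-weighted dispatch and shedding cost.
    prob += (pulp.lpSum(gens[g]["fixed"] * u[g] for g in gens)
             + pulp.lpSum(scenarios[s]["prob"] * gens[g]["cost"] * p[g, s]
                          for g in gens for s in scenarios)
             + pulp.lpSum(scenarios[s]["prob"] * voll * shed[s] for s in scenarios))

    for s, sc in scenarios.items():
        # Power balance in every scenario; PV output is scenario data, not a decision.
        prob += pulp.lpSum(p[g, s] for g in gens) + sc["pv"] + shed[s] == demand
        for g, gd in gens.items():
            prob += p[g, s] <= gd["pmax"] * u[g]    # dispatch only committed units

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("commitments:", {g: int(u[g].value()) for g in gens})
    print({s: {g: round(p[g, s].value(), 2) for g in gens} for s in scenarios})
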
Contributors: Hytowitz, Robin Broder (Author) / Hedman, Kory W (Thesis advisor) / Heydt, Gerald T (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points, and the specifics of how they are used to define the interpolated values, influence how effectively an interpolation algorithm can estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitative assessment of the new single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications, including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that reflect the underlying signals more accurately than less computationally demanding approaches, while requiring less processing and imposing fewer restrictions than methods of comparable accuracy.
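
For comparison with the registration-based enlargement described above, the sketch below shows the kind of conventional separable interpolation baseline such methods are typically measured against, using scipy.ndimage.zoom; it does not implement the control-grid or edge-directed approach, and the input image is a random stand-in.

    # Conventional separable interpolation baseline for single-image enlargement.
    # Baseline only; this is not the registration-based method described above.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    low_res = rng.random((64, 64))                 # stand-in for a low-resolution image

    # 4x enlargement with cubic spline interpolation (order=3); order=0 or 1 would
    # give nearest-neighbor or bilinear estimates of the missing samples instead.
    high_res = ndimage.zoom(low_res, zoom=4, order=3)
    print(low_res.shape, "->", high_res.shape)     # (64, 64) -> (256, 256)
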
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013