Matching Items (1,182)
Description
Nearly all solar photovoltaic (PV) systems are designed with maximum power point tracking (MPPT) functionality to maximize the utilization of available power from the PV array throughout the day. In conventional PV systems, the MPPT function is handled by a power electronic device, such as a DC-AC inverter. However, given that most PV systems are designed to be grid-connected, there are several challenges in designing PV systems for DC-powered and off-grid applications. The first challenge is that all power electronic devices introduce some degree of power loss. Beyond the cost of the lost power, the upfront cost of power electronics also increases with the required power rating. Second, there are very few commercially available options for DC-DC converters that include MPPT functionality, and nearly all PV inverters are designed as "grid-following" devices, as opposed to "grid-forming" devices, meaning they cannot be used in off-grid applications.

To address the challenges of designing PV systems for high-power DC and off-grid applications, a load-managing photovoltaic (LMPV) system topology has been proposed. Instead of using power electronics, the LMPV system performs maximum power point tracking through load management. By implementing a load-management approach, the upfront costs and the power losses associated with the power electronics are avoided, both of which improve the economic viability of the PV system. This work introduces the concept of an LMPV system, provides in-depth analyses through both simulation and experimental validation, and explores several potential applications of the system, such as solar-powered commercial-scale electrolyzers for the production of hydrogen fuel or the production and purification of raw materials like caustic soda, copper, and zinc.
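As a rough illustration of the load-management idea, the Python sketch below runs a perturb-and-observe loop that switches discrete load units in or out instead of adjusting a converter duty cycle. The unit count, function name, and sampling interface are illustrative assumptions, not the LMPV system described in this work.

```python
# A minimal sketch of MPPT via load management, assuming the load is split
# into N_UNITS switchable units (e.g., electrolyzer cells) and that the PV
# array voltage and current can be sampled each control step. Illustrative
# only; not the dissertation's implementation.
N_UNITS = 20

def lmpv_step(v_pv, i_pv, state):
    """One perturb-and-observe iteration. state = (prev_power, direction, n_active)."""
    prev_power, direction, n_active = state
    power = v_pv * i_pv
    if power < prev_power:
        direction = -direction                     # last switch hurt: reverse
    n_active = max(0, min(N_UNITS, n_active + direction))
    return power, direction, n_active              # connect n_active load units
```

Switching in more load pulls the array toward higher current and lower voltage, so stepping the unit count up or down sweeps the operating point along the PV curve much as a duty-cycle perturbation would.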
Contributors: Azzolini, Joseph Anthony (Author) / Tao, Meng (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Qin, Jiangchao (Committee member) / Reno, Matthew J. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Power systems are undergoing a significant transformation as a result of the retirements of conventional coal-fired generation units and the increasing integration of converter-interfaced renewable resources. The instantaneous renewable generation penetration, as a percentage of the load served in megawatts (MW), sometimes exceeds 50 percent in some areas of the United States (U.S.). These changes have introduced new challenges for reliability studies considering the two functional reliability aspects, i.e., adequacy and dynamic security (operating reliability).

Adequacy assessment becomes more complex due to the variability introduced by renewable energy generation. The traditionally used reserve margin considers only projected peak demand and would be inadequate, since it omits the evaluation of off-peak conditions that could also become critical due to variable renewable generation. Therefore, in order to address the impact of variable renewable generation, a probabilistic evaluation that studies all hours of a year based on statistical characteristics is necessary to identify adequacy risks. On the other hand, the system dynamic behavior is also changing. Converter-interfaced generation resources have different dynamic characteristics from conventional synchronous units and inherently do not participate in grid regulation functions, such as frequency control and voltage control, that are vital to maintaining operating reliability. In order to evaluate these evolving grid characteristics, comprehensive reliability evaluation approaches that consider system stochasticity and evaluate both adequacy and dynamic security are important for identifying potential system risks in this transforming environment.
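As a hedged sketch of what an all-hours probabilistic adequacy evaluation can look like, the fragment below estimates a loss-of-load-style index by Monte Carlo over an 8760-hour year; the fleet, outage rate, and trial count are illustrative assumptions rather than the dissertation's models.

```python
# A minimal sketch of an all-hours probabilistic adequacy screen: Monte
# Carlo sampling of independent unit outages against hourly load net of
# renewable output. Fleet, outage rate, and trial count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 1000
UNITS_MW = np.array([400.0, 400.0, 300.0, 200.0])  # conventional unit ratings
UNIT_FOR = 0.05                                    # forced outage rate per unit

def expected_shortfall_hours(load_mw, renewable_mw):
    """Average hours/year with load above available generation (LOLE-style).

    load_mw and renewable_mw are arrays of length 8760 (one value per hour).
    """
    hours = len(load_mw)
    total = 0
    for _ in range(N_TRIALS):
        up = (rng.random((hours, UNITS_MW.size)) > UNIT_FOR).astype(float)
        available = up @ UNITS_MW + renewable_mw
        total += np.count_nonzero(load_mw > available)
    return total / N_TRIALS
```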
Contributors: Wang, Yingying (Author) / Vittal, Vijay (Thesis advisor) / Khorsand, Mojdeh (Thesis advisor) / Heydt, Gerald (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This thesis introduces a new robotic leg design with three degrees of freedom that can be adapted for both bipedal and quadrupedal locomotive systems, and serves as a blueprint for designers attempting to create low-cost robot legs capable of balancing and walking. Currently, bipedal leg designs are mostly rigid and have not strongly taken into account the advantages and disadvantages of using an active ankle, as opposed to a passive ankle, for balancing. This design uses low-cost compliant materials, but the materials used are thick enough to mimic rigid properties under low stresses, so this paper will treat the links as rigid materials. A new leg design has been created that contains three degrees of freedom and can be adapted to contain either a passive ankle using springs or an actively controlled ankle using an additional actuator. This thesis largely focuses on the ankle and foot design of the robot and the torque and speed requirements of the design for motor selection. The dynamics of the system, including height, foot width, weight, and resistances, will be analyzed to determine how to improve design performance. Model-based control techniques will be used to control the angle of the leg for balancing. In doing so, it will also be shown that it is possible to implement model-based control techniques on robots made of laminate materials.
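As a simple illustration of how static balance requirements drive motor selection, the sketch below sizes an ankle actuator by treating the leg as an inverted pendulum; the mass, center-of-mass height, and lean angle are hypothetical numbers, not values from the thesis.

```python
# A minimal sketch of sizing an ankle actuator from a static balance
# requirement, assuming an inverted-pendulum model of the leg. All numbers
# are illustrative assumptions, not design values from this work.
import math

ROBOT_MASS_KG = 5.0     # assumed total mass supported by the ankle
COM_HEIGHT_M = 0.40     # assumed height of the center of mass
MAX_LEAN_DEG = 10.0     # largest lean angle the ankle must recover from
G = 9.81

# Worst-case static ankle torque: gravity acting on the center of mass
# displaced horizontally by h * sin(theta).
tau_static = ROBOT_MASS_KG * G * COM_HEIGHT_M * math.sin(math.radians(MAX_LEAN_DEG))
print(f"required ankle torque >= {tau_static:.2f} N*m (plus a safety margin)")
```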
Contributors: Shafa, Taha A (Author) / Aukes, Daniel M (Thesis advisor) / Rogers, Bradley (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, like vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for this two-stage detector. In this study, the probability density function for the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
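For reference, the sketch below implements the two classical single-stage statistics the study builds on, computed over a local window of co-registered complex SAR patches; this is an illustrative rendering, not the authors' code, and the two-stage thresholding itself is omitted.

```python
# A minimal sketch of the two classical SAR change-detection statistics.
# f and g are co-registered complex SAR image patches flattened into 1-D
# arrays of window samples. Illustrative assumptions throughout.
import numpy as np

def variance_ratio(f, g):
    """Non-coherent detector: ratio of sample power in the two images."""
    pf = np.mean(np.abs(f) ** 2)
    pg = np.mean(np.abs(g) ** 2)
    return max(pf, pg) / min(pf, pg)      # large ratio -> significant change

def sample_coherence(f, g):
    """Classical coherence estimator: low coherence -> subtle change."""
    num = np.abs(np.sum(f * np.conj(g)))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den
```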
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This work is concerned with how best to reconstruct images from limited angle tomographic measurements. An introduction to tomography and to limited angle tomography will be provided, and a brief overview of the many fields to which this work may contribute will be given.

The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm will be discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations will be described. The Maximum Entropy reconstruction method will be derived and its performance in limited angular measurement scenarios will be examined.

Many new approaches become available once the reconstruction problem is placed in the algebraic form Ax = b, in which the measurement geometry and instrument response are defined as the matrix A, the measured object as the column vector x, and the resulting measurements as the column vector b. With complete angular coverage, it is straightforward to invert A. However, for the limited angle measurement scenarios of interest in this work, the inversion is highly underconstrained and has an infinite number of possible solutions x consistent with the measurements b in a high-dimensional space.

The algebraic formulation leads to the need for high-performing regularization approaches, which add constraints based on prior information about what is being measured. These constraints, beyond the measurement matrix A, are added with the goal of selecting the best image from this vast uncertainty space. It is well established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest pathological cases. There is a need to capture the "character" of the objects being measured.
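As one concrete (and deliberately simple) example of such a regularizing constraint, the sketch below applies Tikhonov regularization to pick a single solution out of the underconstrained system; the penalty weight lam is an illustrative assumption, and this is not the "character"-learning approach developed in this work.

```python
# A minimal sketch of regularized algebraic reconstruction for the
# underconstrained limited-angle problem Ax = b: a Tikhonov penalty selects
# one solution from the infinite solution space by preferring small-norm
# images. A, b, and lam are illustrative assumptions.
import numpy as np

def tikhonov_reconstruct(A, b, lam=1e-2):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```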

The novel result of this effort will be a reconstruction approach that matches whatever reconstruction approach has proven best for the types of objects being measured given full angular coverage. However, when confronted with limited angle tomographic situations, or early in a series of measurements, the approach will rely on a prior understanding of the "character" of the objects measured. This understanding will be learned by a parallel deep neural network from examples.
Contributors: Dallmann, Nicholas A. (Author) / Tsakalis, Konstantinos (Thesis advisor) / Hardgrove, Craig (Committee member) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Aortic aneurysms and dissections are life-threatening conditions addressed by replacing damaged sections of the aorta. Blood circulation must be halted to facilitate repairs. Ischemia places the body, especially the brain, at risk of damage. Deep hypothermic circulatory arrest (DHCA) is employed to protect patients and provide time for surgeons to complete repairs, on the basis that reducing body temperature suppresses the metabolic rate. Supplementary surgical techniques can be employed to reinforce the brain's protection and increase the duration for which circulation can be suspended. Even then, protection is not completely guaranteed. A medical condition that can arise early in recovery is postoperative delirium, which is correlated with poor long-term outcome. This study develops a methodology to intraoperatively monitor neurophysiology through electroencephalography (EEG) and anticipate postoperative delirium. The earliest opportunity to detect occurrences of complications through EEG is immediately following DHCA, during warming. The first observable electrophysiological activity after being completely suppressed is a phenomenon known as burst suppression, which is related to the brain's metabolic state and recovery of nominal neurological function. A metric termed burst suppression duty cycle (BSDC) is developed to characterize the changing electrophysiological dynamics. Predictions of postoperative delirium incidences are made by identifying deviations in the way these dynamics evolve. Sixteen cases are examined in this study. Accurate predictions can be made, where on average 89.74% of cases are correctly classified when burst suppression concludes and 78.10% when burst suppression begins. The best-case receiver operating characteristic curve has an area under its convex hull of 0.8988, whereas the worst-case area under the hull is 0.7889. These results demonstrate the feasibility of monitoring BSDC to anticipate postoperative delirium during burst suppression. They also motivate a further analysis on identifying footprints of causal mechanisms of neural injury within BSDC. Being able to raise warning signs of postoperative delirium early provides an opportunity to intervene and potentially avert neurological complications. Doing so would improve the success rate and quality of life after surgery.
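As a rough illustration of a BSDC-style computation, the sketch below measures the fraction of each window spent in suppression given a thresholded EEG amplitude; the threshold, window length, and suppression criterion are illustrative assumptions, not the study's definition.

```python
# A minimal sketch of a burst suppression duty cycle (BSDC) style metric:
# the fraction of time the EEG is suppressed within each sliding window.
# The amplitude threshold and window length are illustrative assumptions.
import numpy as np

def burst_suppression_duty_cycle(eeg, fs, thresh_uv=5.0, win_s=60.0):
    """Fraction of each window spent suppressed (|EEG| below threshold)."""
    suppressed = np.abs(eeg) < thresh_uv        # sample-wise suppression mask
    win = int(fs * win_s)
    n_win = len(eeg) // win
    return suppressed[: n_win * win].reshape(n_win, win).mean(axis=1)
```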
Contributors: Ma, Owen (Author) / Bliss, Daniel W (Thesis advisor) / Berisha, Visar (Committee member) / Kosut, Oliver (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Energy is one of the wheels on which the modern world runs. Therefore, standards and limits have been devised to maintain the stability and reliability of the power grid. This research shows a simple methodology for increasing the amount of Inverter-based Renewable Generation (IRG), also known as Inverter-based Resources (IBR), that considers the voltage and frequency limits specified by the Western Electricity Coordinating Council (WECC) Transmission Planning (TPL) criteria, as well as the tie-line power flow limits between the area under study and its neighbors under contingency conditions. A WECC power flow and dynamic file is analyzed and modified in this research to demonstrate the performance of the methodology. GE's Positive Sequence Load Flow (PSLF) software is used to conduct this research, and Python is used to analyze the output data.

The thesis explains in detail how the system with 11% IRG operated before conducting any adjustments (addition of IRG) and what procedures were modified to make the system run correctly. The adjustments made to the dynamic models are also explained in depth to give a clearer picture of how each adjustment affects system performance. A list of proposed IRG units along with their locations was provided by SRP, a power utility in Arizona, to be integrated into the power flow and dynamic files. In the process of finding the maximum IRG penetration threshold, three sensitivities were also considered, namely, momentary cessation due to low voltages, transmission- vs. distribution-connected solar generation, and stalling of induction motors. Finally, the thesis discusses how the system reacts to the aforementioned modifications and how the IRG penetration threshold is adjusted with regard to the different sensitivities applied to the system.
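As a hedged sketch of the kind of post-processing Python can do on simulation output, the fragment below flags time steps where monitored quantities leave an assumed voltage band or frequency floor; the limit values, file format, and column names are hypothetical and do not reproduce the actual WECC TPL criteria or PSLF's output format.

```python
# A minimal sketch of screening dynamic-simulation output against assumed
# voltage/frequency performance limits. Limits and column names are
# illustrative assumptions only.
import pandas as pd

V_MIN_PU, V_MAX_PU = 0.90, 1.10     # assumed post-contingency voltage band
F_MIN_HZ = 59.6                     # assumed transient frequency floor

def flag_violations(csv_path):
    """Return time steps where any monitored bus leaves the assumed limits."""
    df = pd.read_csv(csv_path)      # columns: time, V_* bus voltages (pu), freq_hz
    v_cols = [c for c in df.columns if c.startswith("V_")]
    bad_v = ((df[v_cols] < V_MIN_PU) | (df[v_cols] > V_MAX_PU)).any(axis=1)
    bad_f = df["freq_hz"] < F_MIN_HZ
    return df.loc[bad_v | bad_f, "time"]
```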
Contributors: Albhrani, Hashem A M H S (Author) / Pal, Anamitra (Thesis advisor) / Holbert, Keith E. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Reliable operation of modern power systems is ensured by an intelligent cyber layer that monitors and controls the physical system. Data collection and transmission are achieved by the supervisory control and data acquisition (SCADA) system, and data processing is performed by the energy management system (EMS). In recent decades, the development of phasor measurement units (PMUs) has enabled wide-area real-time monitoring and control. However, both SCADA-based and PMU-based cyber layers are prone to cyber attacks that can impact system operation and lead to severe physical consequences.

This dissertation studies false data injection (FDI) attacks that are unobservable to bad data detectors (BDD). Prior work has shown that an attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. However, the results were only demonstrated on small systems assuming that they are operated with DC optimal power flow (OPF). This dissertation is divided into four parts to thoroughly understand the consequences of these attacks as well as develop countermeasures.

The first part focuses on evaluating the vulnerability of large-scale power systems to FDI attacks. The solution technique introduced in prior work to solve the ADBLP is intractable on large-scale systems due to the large number of binary variables. Four new computationally efficient algorithms are presented to solve this problem.

The second part studies the vulnerability of N-1 reliable power systems operated by state-of-the-art EMSs commonly used in practice, specifically real-time contingency analysis (RTCA) and security-constrained economic dispatch (SCED). An ADBLP is formulated with detailed assumptions on the attacker's knowledge and system operations.

The third part considers FDI attacks on PMU measurements that have strong temporal correlations due to high data rate. It is shown that predictive filters can detect suddenly injected attacks, but not gradually ramping attacks.

The last part proposes a machine learning-based attack detection framework consisting of a support vector regression (SVR) load predictor that predicts loads by exploiting both spatial and temporal correlations, and a subsequent support vector machine (SVM) attack detector that determines the existence of attacks.
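A minimal sketch of such a two-stage pipeline, using scikit-learn, is shown below: an SVR learns to predict loads from correlated features, and an SVM classifies the prediction residual as attacked or normal. The feature construction and the availability of labeled residuals for training are illustrative assumptions, not the dissertation's exact design.

```python
# A minimal sketch of an SVR load predictor followed by an SVM attack
# detector operating on the prediction residual. Features and labels are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVR, SVC

def fit_detector(X_hist, loads_hist, residual_labels):
    """X_hist: (n_samples, n_features) of lagged/neighboring load features."""
    predictor = SVR(kernel="rbf").fit(X_hist, loads_hist)
    residuals = (loads_hist - predictor.predict(X_hist)).reshape(-1, 1)
    detector = SVC(kernel="rbf").fit(residuals, residual_labels)
    return predictor, detector

def is_attacked(predictor, detector, x_now, load_now):
    """Classify one new measurement given its feature vector x_now."""
    residual = np.array([[load_now - predictor.predict([x_now])[0]]])
    return bool(detector.predict(residual)[0])
```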
Contributors: Chu, Zhigang (Author) / Kosut, Oliver (Thesis advisor) / Sankar, Lalitha (Committee member) / Scaglione, Anna (Committee member) / Pal, Anamitra (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Precursors of carbon fibers include rayon, pitch, and polyacrylonitrile fibers that can be heat-treated to yield high-strength or high-modulus carbon fibers. Among them, polyacrylonitrile has been used most frequently due to its low viscosity for easy processing and excellent performance for high-end applications. To further explore polyacrylonitrile-based fibers as better precursors, in this study, carbon nanofillers were introduced into the polymer matrix to examine their reinforcement effects and influences on carbon fiber performance. Two-dimensional graphene nanoplatelets were mainly used for the polymer reinforcement, and one-dimensional carbon nanotubes were also incorporated in polyacrylonitrile as a comparison. Dry-jet wet spinning was used to fabricate the composite fibers. Hot-stage drawing and heat treatment were used to evolve the physical microstructures and molecular morphologies of the precursor and carbon fibers. As compared to traditionally used random dispersions, selective placement of nanofillers was effective in improving composite fiber properties and enhancing the mechanical and functional behaviors of the carbon fibers. The particular positioning of reinforcement fillers within polymer layers was enabled by the in-house-developed spinneret used for fiber spinning. The preferential alignment of graphitic planes yielded mechanical and functional behaviors superior to those of dispersed nanoparticles in polyacrylonitrile composites. The high in-plane modulus of graphene and its induction of polyacrylonitrile molecular carbonization/graphitization were the motivation for selectively placing graphene nanoplatelets between polyacrylonitrile layers. Mechanical, thermal, and electrical properties were characterized, along with scanning electron microscopy imaging. Applications such as volatile organic compound sensing and pressure sensing were demonstrated.
Contributors: Franklin, Rahul Joseph (Author) / Song, Kenan (Thesis advisor) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
With the rapid advancement in technologies related to renewable energies such as solar, wind, fuel cells, and many more, there is a definite need for new power-converting methods involving a data-driven methodology. Having adequate information is crucial for any innovative idea to fructify; accordingly, moving away from traditional methodologies is the most practical way of giving birth to new ideas. While working on a DC-DC buck converter, the input voltages considered for running the simulations are varied for research purposes. The critical aspect of the new data-driven methodology is to propose a machine learning algorithm. In this design, by solving for the inductor value and power switching losses, the parameters can be achieved while keeping the input-output ratio close to the necessary value. Thus, implementing machine learning algorithms with the traditional design of a non-isolated buck converter determines the optimal outcome for the inductor value and power loss, which is achieved by assimilating a DC-DC converter and a data-driven methodology.

The present thesis investigates the different outcomes from machine learning algorithms in comparison with the dynamic equations. Specifically, the thesis focuses on the DC-DC buck converter. In order to determine the most effective way of keeping the system in a steady state, simulations of buck converter circuits with different parameters have been performed.

At present, artificial intelligence plays a vital role in power system control and theory. Consequently, in this thesis, the approximation error estimation has been analyzed in a DC-DC buck converter model, with specific consideration of machine learning tools that can help detect and calculate the difference in terms of error. These tools, called models, are used to analyze the collected data. In the present thesis, models such as K-nearest neighbors (K-NN), specifically the weighted K-nearest neighbor (WKNN), are utilized for machine learning purposes. The machine learning concept introduced in the present thesis lays down the foundation for future research in this area, enabling further research on efficient ways to improve power electronic devices with reduced power switching losses and optimal inductor values.
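As an illustration of the WKNN idea in this setting, the sketch below predicts a buck converter's inductor value and switching loss from operating parameters by inverse-distance weighting of the k nearest training designs; the features, training points, and target values are made-up examples, not data from the thesis.

```python
# A minimal sketch of weighted K-nearest-neighbor (WKNN) regression for
# buck converter design targets. Training data and features (input voltage,
# duty cycle, load current, switching frequency) are illustrative.
import numpy as np

def wknn_predict(X_train, y_train, x_query, k=5, eps=1e-9):
    """y_train may be (n, 2): columns e.g. [inductance_H, switch_loss_W]."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)                  # closer neighbors weigh more
    return (w[:, None] * y_train[nearest]).sum(axis=0) / w.sum()

# Hypothetical designs: [V_in (V), duty cycle, I_load (A), f_sw (Hz)]
X = np.array([[24.0, 0.5, 2.0, 1e5], [12.0, 0.4, 1.0, 1e5], [48.0, 0.6, 3.0, 2e5]])
y = np.array([[33e-6, 0.8], [47e-6, 0.5], [22e-6, 1.4]])
print(wknn_predict(X, y, np.array([24.0, 0.45, 1.5, 1e5]), k=2))
```

The weighting makes nearby operating points dominate the estimate, which is the distinction between plain K-NN and the WKNN variant named in the abstract.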
Contributors: Alsalem, Hamad (Author) / Weng, Yang (Thesis advisor) / Lei, Qin (Committee member) / Kozicki, Michael (Committee member) / Arizona State University (Publisher)
Created: 2020