Matching Items (1,058)

Description
This study evaluates two photovoltaic (PV) power plants based on electrical performance measurements, diode checks, visual inspections, and infrared scanning. The purpose of this study is to measure degradation rates of performance parameters (Pmax, Isc, Voc, Vmax, Imax, and FF) and to identify the failure modes in a "hot-dry desert" climatic condition, along with quantitative determination of safety failure rates and reliability failure rates. The data obtained from this study can be used by module manufacturers in determining the warranty limits of their modules, and by banks, investors, project developers, and users in determining appropriate financing or decommissioning models. In addition, the data obtained in this study will be helpful in selecting appropriate accelerated stress tests that would replicate the field failures and predict the lifetime of new PV modules. The study was conducted at two single-axis-tracking monocrystalline silicon (c-Si) power plants, Site 3 and Site 4c of the Salt River Project (SRP). The Site 3 power plant is located in Glendale, Arizona, and the Site 4c power plant in Mesa, Arizona; both are considered "hot-dry" field conditions. The Site 3 power plant has 2,352 modules (designated Model-G) and is rated at 250 kW DC output. The mean and median degradation rates of these 12-year-old modules are 0.95%/year and 0.96%/year, respectively. The major cause of degradation at Site 3 is high series resistance (potentially due to solder-bond thermo-mechanical fatigue), and the failure mode is ribbon-ribbon solder-bond failure/breakage. The Site 4c power plant has 1,280 modules (designated Model-H) providing 243 kW DC output. The mean and median degradation rates of these 4-year-old modules are 0.96%/year and 1%/year, respectively. At Site 4c, practically no module failures were observed. The average soiling loss is 6.9% at Site 3 and 5.5% at Site 4c. The difference in soiling level is attributed to the rural and urban surroundings of the two power plants.
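A %/year degradation rate like those reported above can be sketched as a least-squares fit of peak power against module age. The numbers below are hypothetical; the study's actual rates came from measured I-V curve data.

```python
import numpy as np

def degradation_rate(years, pmax_watts):
    """Least-squares linear fit of measured Pmax vs. module age;
    returns the slope as a percentage of initial power per year."""
    slope, intercept = np.polyfit(years, pmax_watts, 1)
    return -100.0 * slope / intercept  # %/year decline

# Hypothetical Pmax readings declining roughly 1%/year over 12 years
years = np.array([0, 3, 6, 9, 12])
pmax = np.array([106.0, 102.8, 99.6, 96.4, 93.2])
rate = degradation_rate(years, pmax)
```

A linear fit is the simplest model; with more frequent field data, seasonal soiling effects would need to be separated from true degradation.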
ContributorsMallineni, Jaya Krishna (Author) / Govindasamy, Tamizhmani (Thesis advisor) / Devarajan, Srinivasan (Committee member) / Narciso, Macia (Committee member) / Arizona State University (Publisher)
Created2013
Description
In modern electric power systems, energy management systems (EMSs) are responsible for monitoring and controlling the generation system and transmission networks. State estimation (SE) is a critical "must run successful" component within the EMS software. This is dictated by the high reliability requirements and the need to represent the closest real-time model for market operations and other critical analysis functions in the EMS. Traditionally, SE is run with data obtained only from supervisory control and data acquisition (SCADA) devices and systems. However, more emphasis on improving the performance of SE drives the inclusion of phasor measurement units (PMUs) into SE input data. PMU measurements are claimed to be more accurate than conventional measurements, and PMUs "time stamp" measurements accurately. These widely distributed devices measure the voltage phasors directly; that is, phase information for measured voltages and currents is available. PMUs provide data time stamps to synchronize measurements. Considering the relatively small number of PMUs installed in contemporary power systems in North America, performing SE with only phasor measurements is not feasible. Thus a hybrid SE, including both SCADA and PMU measurements, is the reality for contemporary power system SE. The hybrid approach is the focus of a number of research papers. There are many practical challenges in incorporating PMUs into SE input data. The higher reporting rate of PMUs as compared with SCADA measurements is one of the salient problems. The disparity of reporting rates raises the question of whether buffering the phasor measurements helps to give better estimates of the states. The research presented in this thesis addresses the design of data buffers for PMU data as used in SE applications in electric power systems. The system theoretic analysis is illustrated using an operating electric power system in the southwest part of the USA. Various instances of state estimation data have been used for analysis purposes. The details of the research, results obtained, and conclusions drawn are presented in this document.
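A minimal sketch of one buffering strategy for the reporting-rate disparity: a PMU streaming at, say, 30 frames/s fills a fixed-length buffer that the slower SE cycle drains by averaging. This is only an illustration of the idea; the thesis studies buffer design far more rigorously.

```python
from collections import deque
import statistics

class PhasorBuffer:
    """Fixed-length buffer of PMU samples; returns the mean of the
    buffered samples when the (slower) state estimator polls."""
    def __init__(self, size):
        self.samples = deque(maxlen=size)  # oldest samples drop automatically

    def add(self, value):
        self.samples.append(value)

    def estimate_input(self):
        return statistics.fmean(self.samples)

# Hypothetical: 30 frames/s PMU feeding a 2 s SE cycle -> up to 60 samples
buf = PhasorBuffer(size=60)
for v in [1.02, 1.03, 1.01, 1.02]:  # per-unit voltage magnitudes
    buf.add(v)
mean_v = buf.estimate_input()
```

Averaging suppresses measurement noise but introduces a time skew when the state is moving, which is exactly the trade-off a buffer-design study must weigh.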
Contributors: Murugesan, Veerakumar (Author) / Vittal, Vijay (Committee member) / Heydt, Gerald (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The computation of the fundamental mode in structural moment frames provides valuable insight into the physical response of the frame to dynamic or time-varying loads. In standard practice, it is not necessary to solve for all n mode shapes in a structural system; it is therefore practical to limit the system to some determined number r of significant mode shapes. Current building codes, such as that of the American Society of Civil Engineers (ASCE), require certain classes of structures to obtain 90% effective mass participation as a way to estimate the accuracy of a solution for base shear motion. A parametric study was performed on data collected from the analysis of a large number of framed structures. The purpose of this study was the development of rules for the required number r of significant modes needed to meet the ASCE code requirements. The study was based on the implementation of an algorithm and a computer program developed in the past. The algorithm is based on Householder transformations, QR factorization, and inverse iteration, and it extracts a requested number s (s << n) of predominant mode shapes and periods. Only the first r (r < s) of these modes are accurate. To verify the accuracy of the algorithm, a variety of building frames were analyzed using commercially available structural software (RISA 3D) as a benchmark. The salient features of the algorithm are presented briefly in this study.
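The 90% effective-mass-participation criterion can be illustrated with a small lumped-mass sketch. The three-story shear frame below is hypothetical, and a dense eigensolver stands in for the thesis's Householder/QR/inverse-iteration algorithm.

```python
import numpy as np

def modes_for_mass_participation(M, K, target=0.90):
    """Solve the generalized eigenproblem K*phi = w^2*M*phi, order modes
    from lowest frequency up, and return the smallest r whose cumulative
    effective modal mass reaches the target fraction of total mass."""
    evals, Phi = np.linalg.eig(np.linalg.solve(M, K))
    Phi = Phi[:, np.argsort(evals)]        # lowest frequency first
    ones = np.ones(M.shape[0])             # influence vector (uniform excitation)
    eff = np.array([(p @ M @ ones) ** 2 / (p @ M @ p) for p in Phi.T])
    cum = np.cumsum(eff) / M.trace()       # total mass = trace for lumped M
    return int(np.searchsorted(cum, target) + 1)

# Hypothetical 3-story shear frame: lumped masses, uniform story stiffness
M = np.diag([2.0, 2.0, 2.0])
k = 100.0
K = np.array([[2 * k, -k, 0.0], [-k, 2 * k, -k], [0.0, -k, k]])
r = modes_for_mass_participation(M, K)
```

For a uniform shear frame the fundamental mode alone carries roughly 91% of the mass, so r = 1 here; taller or irregular frames need more modes, which is what motivates rules for choosing r.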
Contributors: Grantham, Jonathan (Author) / Fafitis, Apostolos (Thesis advisor) / Attard, Thomas (Committee member) / Houston, Sandra (Committee member) / Hjelmstad, Keith (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. In practice, however, RBAC deployments are complex, involving role management, role hierarchies with hundreds of roles, and their associated privileges and users, so systematically testing RBAC systems is crucial to ensure security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
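The positive/negative test-case idea can be illustrated on a toy role hierarchy, using plain sets rather than the thesis's MTBDD representation. The hospital roles and permissions below are hypothetical.

```python
def generate_rbac_tests(hierarchy, assignments):
    """Given a role hierarchy (senior role -> list of junior roles it
    inherits from) and direct permission assignments, derive each role's
    effective permissions and emit positive tests (role, perm, True)
    plus negative tests for every permission the role must NOT hold."""
    def effective(role, seen=None):
        seen = seen if seen is not None else set()
        perms = set(assignments.get(role, []))
        for junior in hierarchy.get(role, []):
            if junior not in seen:
                seen.add(junior)
                perms |= effective(junior, seen)
        return perms

    all_perms = {p for ps in assignments.values() for p in ps}
    tests = []
    for role in sorted(set(hierarchy) | set(assignments)):
        granted = effective(role)
        tests += [(role, p, True) for p in sorted(granted)]
        tests += [(role, p, False) for p in sorted(all_perms - granted)]
    return tests

# Hypothetical policy: doctor inherits the nurse's permissions
hierarchy = {"doctor": ["nurse"]}
assignments = {"nurse": ["read_record"], "doctor": ["write_record"]}
tests = generate_rbac_tests(hierarchy, assignments)
```

Negative tests are the key point: a policy engine that silently over-grants passes every positive test, so each role is also probed with permissions it should be denied.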
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The semiconductor field of Photovoltaics (PV) has experienced tremendous growth, requiring curricula to consider ways to promote student success. One major barrier to success students may face when learning PV is the development of misconceptions. The purpose of this work was to determine the presence and prevalence of misconceptions students may have for three PV semiconductor phenomena: diffusion, drift, and excitation. These phenomena are emergent: the individual entities in the phenomena interact and aggregate to form a self-organizing pattern that can be observed at a higher level. Learners develop a different type of misconception for such phenomena, an emergent misconception. Participants (N = 41) completed a written protocol. The pilot study utilized half of these protocols (n = 20) to determine the presence of both general and emergent misconceptions for the three phenomena. Once the presence of both was confirmed, all protocols (N = 41) were analyzed to determine the presence and prevalence of general and emergent misconceptions, and to note any relationships among these misconceptions (full study). Through written protocol analysis of participants' responses, numerous codes emerged from the data for both general and emergent misconceptions. General and emergent misconceptions were found in 80% and 55% of participants' responses, respectively. General misconceptions indicated limited understandings of chemical bonding, electricity and magnetism, energy, and the nature of science. Participants also described the phenomena using teleological, predictable, and causal traits, indicating that participants had misconceptions regarding the emergent aspects of the phenomena.
For both general and emergent misconceptions, relationships were observed between similar misconceptions within and across the three phenomena, and differences in misconceptions were observed across the phenomena. Overall, the presence and prevalence of both general and emergent misconceptions indicates that learners have limited understandings of the physical and emergent mechanisms for the phenomena. Even though additional work is required, the identification of specific misconceptions can be utilized to enhance semiconductor and PV course content. Specifically, changes can be made to curriculum in order to limit the formation of misconceptions as well as promote conceptual change.
Contributors: Nelson, Katherine G (Author) / Brem, Sarah K. (Thesis advisor) / Mckenna, Ann F (Thesis advisor) / Hilpert, Jonathan (Committee member) / Honsberg, Christiana (Committee member) / Husman, Jenefer (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Renewable portfolio standards prescribe penetration of high amounts of renewable energy sources (RES), which may change the structure of existing power systems. The load growth and changes in power flow caused by RES integration may result in requirements for new available transmission capabilities and upgrades of existing transmission paths. Construction difficulties of new transmission lines can become a problem in certain locations. Increasing transmission line thermal ratings by reconductoring with High Temperature Low Sag (HTLS) conductors is a comparatively new technology introduced to transmission expansion. A special design permits HTLS conductors to operate at high temperatures (e.g., 200 °C), thereby allowing passage of higher current. The higher temperature capability increases the steady-state and emergency thermal ratings of the transmission line. The main disadvantage of HTLS technology is its high cost, which places special emphasis on a thorough cost-to-benefit analysis of HTLS implementation. Increased transmission losses in HTLS conductors due to higher current may be a further disadvantage that can reduce the attractiveness of this method. Studies described in this thesis evaluate the expenditures for transmission line reconductoring using HTLS and the consequent benefits obtained from the potential decrease in operating cost for thermally limited transmission systems. The studies consider load growth and penetration of distributed renewable energy sources according to the renewable portfolio standards for power systems. An evaluation of payback period is suggested to assess the cost-to-benefit ratio of HTLS upgrades. The thesis also considers the probabilistic nature of transmission upgrades. The well-known Chebyshev inequality is discussed with an application to transmission upgrades. The Chebyshev inequality is proposed to calculate the minimum payback period obtained from the upgrades of certain transmission lines. The cost-to-benefit evaluation of HTLS upgrades is performed using a 225-bus equivalent of the 2012 summer peak Arizona portion of the Western Electricity Coordinating Council (WECC).
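One plausible reading of the Chebyshev-based bound can be sketched as follows, with hypothetical cost and benefit figures; the thesis's exact formulation may differ. Chebyshev's inequality, P(|X - mu| >= k*sigma) <= 1/k^2, caps how often the uncertain annual benefit can exceed mu + k*sigma, which yields a distribution-free lower bound on the payback period.

```python
import math

def chebyshev_min_payback(cost, mean_benefit, std_benefit, confidence=0.90):
    """Distribution-free lower bound on payback period (years).
    By Chebyshev, annual benefit exceeds mean + k*sigma with probability
    at most 1/k^2; choosing k so that 1/k^2 = 1 - confidence gives
    payback >= cost / (mean + k*sigma) at the stated confidence."""
    k = math.sqrt(1.0 / (1.0 - confidence))
    return cost / (mean_benefit + k * std_benefit)

# Hypothetical HTLS reconductoring: $2.0M cost, $250k/yr mean saving,
# $50k/yr standard deviation of the saving
years = chebyshev_min_payback(2.0e6, 2.5e5, 5.0e4, confidence=0.90)
```

Because Chebyshev assumes nothing about the benefit distribution, the bound is conservative; with a known distribution a tighter payback estimate would be possible.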
Contributors: Tokombayev, Askhat (Author) / Heydt, Gerald T. (Thesis advisor) / Sankar, Lalitha (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. However, a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because our system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a computing system composed of a series of computer vision algorithms, and provides on-screen real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills.
To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool, which automatically assesses surgical videos uploaded to the cloud.
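As one illustration of the kind of skill-measuring features such a system might extract, the sketch below computes two common kinematic metrics from a hypothetical tracked tool-tip trajectory; the thesis's actual features and scoring model are not reproduced here.

```python
import numpy as np

def motion_metrics(positions, dt):
    """Simple kinematic skill features from a tracked tool-tip path:
    total path length, and mean squared jerk (third derivative of
    position); smoother, lower-jerk motion is typical of experts."""
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    jerk = np.diff(positions, n=3, axis=0) / dt ** 3  # finite-difference jerk
    mean_sq_jerk = float(np.mean(np.sum(jerk ** 2, axis=1)))
    return path_length, mean_sq_jerk

# Hypothetical 2-D tool-tip track sampled at 30 frames/s
track = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
length, msj = motion_metrics(track, dt=1 / 30)
```

In a full pipeline, features like these would be computed per task from vision-based tracking and fed to a learned composite scoring model rather than interpreted individually.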
Contributors: Islam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study focuses on state estimation of nonlinear discrete-time systems with constraints. Physical processes have inherent constraints on inputs, outputs, states, and disturbances. These constraints can provide additional information to the estimator in estimating states from the measured output. Recursive filters such as Kalman Filters or Extended Kalman Filters are commonly used in state estimation; however, they do not allow inclusion of constraints in their formulation. On the other hand, the computational complexity of full-information estimation (using all measurements) grows with iteration and becomes intractable. One way of formulating the recursive state estimation problem with constraints is the Moving Horizon Estimation (MHE) approximation. Estimates of states are calculated from the solution of a constrained optimization problem of fixed size. Detailed formulation of this strategy is studied and properties of this estimation algorithm are discussed in this work. The drawback of the MHE formulation is that an optimization problem must be solved in each iteration, which is computationally intensive. State estimation with constraints can also be formulated as an Extended Kalman Filter (EKF) with a projection applied to the estimates. The states are estimated from the measurements using the standard EKF algorithm, and the estimated states are projected onto a constrained set. Detailed formulation of this estimation strategy is studied and the properties associated with this algorithm are discussed. Both state estimation strategies (MHE and EKF with projection) are tested with examples from the literature. The average estimation time and the sum of squared estimation errors are used to compare the performance of these estimators. Results of the case studies are analyzed and trade-offs are discussed.
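For simple box constraints, the projection step of projected EKF reduces to elementwise clipping, as in this minimal sketch; general constraint sets instead require solving a small optimization problem (e.g., a QP) at each step. The state values below are hypothetical.

```python
import numpy as np

def project_to_box(x_hat, lower, upper):
    """Euclidean projection of a raw (unconstrained) EKF state estimate
    onto elementwise box constraints: clip each component to its bounds."""
    return np.minimum(np.maximum(x_hat, lower), upper)

# Hypothetical 2-state example: the first state (a concentration, say)
# is physically nonnegative, but the raw EKF update drifted below zero
x_hat = np.array([-0.05, 2.30])          # raw EKF estimate
lower = np.array([0.0, -np.inf])
upper = np.array([np.inf, np.inf])
x_proj = project_to_box(x_hat, lower, upper)
```

Projection keeps the estimate feasible at negligible cost, which is the appeal of projected EKF over MHE when constraints are simple bounds.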
Contributors: Joshi, Rakesh (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Aluminum alloys and their composites are attractive materials for applications requiring high strength-to-weight ratios and reasonable cost. Many of these applications, such as those in the aerospace industry, undergo fatigue loading. An understanding of the microstructural damage that occurs in these materials is critical in assessing their fatigue resistance. Two distinct experimental studies were performed to further the understanding of fatigue damage mechanisms in aluminum alloys and their composites, specifically fracture and plasticity. Fatigue resistance of metal matrix composites (MMCs) depends on many aspects of composite microstructure. Fatigue crack growth behavior is particularly dependent on the reinforcement characteristics and matrix microstructure. The goal of this work was to obtain a fundamental understanding of fatigue crack growth behavior in SiC particle-reinforced 2080 Al alloy composites. In situ X-ray synchrotron tomography was performed on two samples at low (R=0.1) and at high (R=0.6) R-ratios. The resulting reconstructed images were used to obtain three-dimensional (3D) rendering of the particles and fatigue crack. Behaviors of the particles and crack, as well as their interaction, were analyzed and quantified. Four-dimensional (4D) visual representations were constructed to aid in the overall understanding of damage evolution. During fatigue crack growth in ductile materials, a plastic zone is created in the region surrounding the crack tip. Knowledge of the plastic zone is important for the understanding of fatigue crack formation as well as subsequent growth behavior. The goal of this work was to quantify the 3D size and shape of the plastic zone in 7075 Al alloys. X-ray synchrotron tomography and Laue microdiffraction were used to non-destructively characterize the volume surrounding a fatigue crack tip. The precise 3D crack profile was segmented from the reconstructed tomography data. 
Depth-resolved Laue patterns were obtained using differential-aperture X-ray structural microscopy (DAXM), from which peak-broadening characteristics were quantified. Plasticity, as determined by the broadening of diffracted peaks, was mapped in 3D. Two-dimensional (2D) maps of plasticity were directly compared to the corresponding tomography slices. A 3D representation of the plastic zone surrounding the fatigue crack was generated by superimposing the mapped plasticity on the 3D crack profile.
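For a sense of scale, a first-order analytic (Irwin) estimate of plastic zone size can be computed from standard fracture-mechanics quantities. This textbook estimate is not the thesis's diffraction-based 3D measurement, and the 7075-type values below are hypothetical.

```python
import math

def irwin_plastic_zone_radius(K, sigma_y, plane_strain=True):
    """First-order (Irwin) crack-tip plastic zone radius:
    r_p = (1 / (2*pi)) * (K / sigma_y)^2 in plane stress, commonly
    reduced by a factor of 3 for plane strain. Units: K in MPa*sqrt(m),
    sigma_y in MPa, result in meters."""
    r = (K / sigma_y) ** 2 / (2.0 * math.pi)
    return r / 3.0 if plane_strain else r

# Hypothetical values for a 7075-class alloy:
# stress intensity K = 10 MPa*sqrt(m), yield strength sigma_y = 500 MPa
r_m = irwin_plastic_zone_radius(10.0, 500.0)  # ~tens of micrometers
```

Estimates in the tens of micrometers explain why micrometer-resolution techniques such as tomography and depth-resolved Laue microdiffraction are needed to resolve the zone's true 3D shape.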
Contributors: Hruby, Peter (Author) / Chawla, Nikhilesh (Thesis advisor) / Solanki, Kiran (Committee member) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms which are capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools to address the shortcomings of the previous generation. One important example of this is the improved performance in the newer tools with the large class of machine learning algorithms which are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches which I attempted, but which did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach which uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance.
In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
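The greedy rank-one pursuit idea can be sketched in a few lines of NumPy. This toy stand-in fits one rank-one term at a time by least squares on the observed entries of a hypothetical rank-one "ratings" matrix; it is a simplification of OR1MP, not the thesis's Spark implementation.

```python
import numpy as np

def rank_one_pursuit(M, mask, iterations=50):
    """Greedy low-rank completion sketch: each step adds the best
    rank-one update (top singular pair of the observed residual),
    with its weight chosen by exact line search on observed entries."""
    X = np.zeros_like(M)
    for _ in range(iterations):
        R = (M - X) * mask                    # residual on observed entries
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        B = np.outer(U[:, 0], Vt[0])          # unit-norm rank-one basis
        denom = ((B * mask) ** 2).sum()
        if denom == 0.0:
            break
        alpha = (R * B).sum() / denom         # least-squares weight
        X = X + alpha * B
    return X

# Hypothetical 3x3 rank-one matrix with one "rating" hidden
M = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, 2.0])
mask = np.ones_like(M)
mask[1, 2] = 0.0                              # hide one entry
X = rank_one_pursuit(M, mask)
```

Each pass is an SVD plus elementwise updates, i.e., the kind of tight iterative loop that favors an in-memory engine like Spark over per-iteration MapReduce jobs.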
Contributors: Krouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014