Matching Items (9)

Item 151374
Description
ABSTRACT As the use of photovoltaic (PV) modules in large power plants continues to increase globally, more studies on the degradation, reliability, failure modes, and failure mechanisms of field-aged modules are needed to predict module life expectancy based on accelerated lifetime testing of PV modules. In this work, a 26+ year-old PV power plant in Phoenix, Arizona has been evaluated for performance, reliability, and durability. The PV power plant, called Solar One, is owned and operated by John F. Long's homeowners association. It is a 200 kWdc, standard test conditions (STC) rated power plant composed of 4,000 PV modules (frameless laminates) in 100 panel groups (rated at 175 kWac). The power plant is made of two center-tapped bipolar arrays, the north array and the south array. Due to the limited time frame for executing this large project, the work was performed by two master's students (Jonathan Belmont and Kolapo Olakonu) and the test results are presented in two master's theses. This thesis presents the results obtained on the south array, and the other thesis presents the results obtained on the north array. Each of the two arrays is made of four sub-arrays, the east sub-arrays (positive and negative polarities) and the west sub-arrays (positive and negative polarities), making eight sub-arrays in total. The evaluation and analysis of the power plant included in this thesis consist of visual inspection, electrical performance measurements, and infrared thermography. The possible presence of potential-induced degradation (PID) due to the potential difference between ground and the strings was also investigated. Some installation practices were also studied and found to contribute to the power loss observed in this investigation. The power output measured in 2011 for all eight sub-arrays at STC is approximately 76 kWdc, representing a power loss of 62% (from 200 kW to 76 kW) over 26+ years. The 2011 measured power output for the four south sub-arrays at STC is 39 kWdc, representing a power loss of 61% (from 100 kW to 39 kW) over 26+ years. Encapsulation browning and non-cell interconnect ribbon breakages were determined to be the primary causes of the power loss.
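The abstract above reports a 62% power loss (200 kWdc to 76 kWdc) over 26+ years. As a minimal illustration of how such a figure converts to an average annual degradation rate, here is a hedged Python sketch: the kW values come from the abstract, while the 26-year exposure span and the linear/compound conventions are assumptions made only for this example.

# Illustrative only: average annual degradation rate implied by the reported figures.
p_rated_kw = 200.0   # STC rating of the full plant (from the abstract)
p_2011_kw = 76.0     # output measured in 2011 (from the abstract)
years = 26.0         # assumed field exposure

total_loss = 1.0 - p_2011_kw / p_rated_kw                         # ~0.62
linear_rate = total_loss / years                                  # simple linear average
compound_rate = 1.0 - (p_2011_kw / p_rated_kw) ** (1.0 / years)   # compound convention

print(f"total loss:    {total_loss:.1%}")               # ~62%
print(f"linear rate:   {linear_rate * 100:.2f} %/yr")   # ~2.4 %/yr
print(f"compound rate: {compound_rate * 100:.2f} %/yr") # ~3.7 %/yr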
Contributors: Olakonu, Kolapo (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151533
Description
Memories play an integral role in today's advanced ICs. Technology scaling has enabled high-density designs, but at the price of increased variability and reliability concerns. It is imperative to have accurate methods to measure and extract the variability in the SRAM cell in order to produce accurate reliability projections for future technologies. This work presents a novel test measurement and extraction technique which is non-invasive to the actual operation of the SRAM memory array. The salient features of this work include: i) a single-ended SRAM test structure with no disturbance to SRAM operations; ii) a convenient test procedure that only requires quasi-static control of external voltages; and iii) a non-iterative method that extracts the VTH variation of each transistor from eight independent switch-point measurements. With present-day technology scaling, in addition to process variability, the impact of aging mechanisms becomes dominant. Aging mechanisms such as Negative Bias Temperature Instability (NBTI), Channel Hot Carrier (CHC), and Time Dependent Dielectric Breakdown (TDDB) are critical at present-day nano-scale technology nodes. In this work, we focus on the impact of NBTI aging in the SRAM cell and use a log(t) model based on trapping/de-trapping theory to explain the shift in threshold voltage (VTH). The aging section focuses on the following: i) statistical aging in the PMOS devices due to NBTI dominates the temporal shift of the SRAM cell; ii) besides static variations, the shift in VTH demands increased guard-banding margins at the design stage; iii) aging statistics remain constant during the shift, presenting a secondary effect in aging prediction; and iv) whether the aging mechanism can be used as a compensation technique to reduce mismatch due to process variations. Finally, the entire test setup has been verified in SPICE and validated in silicon, and the results are presented. The method also facilitates the study of design metrics such as the static, read, and write noise margins as well as the data retention voltage, and thus helps designers improve the cell stability of the SRAM.
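The abstract above refers to a trapping/de-trapping-theory-based log(t) model for the NBTI-induced VTH shift. The abstract does not give the thesis's exact model form or coefficients, so the Python sketch below only illustrates one common log(t) parameterization, dVTH(t) = A*log10(1 + t/t0), with placeholder values.

import numpy as np

def delta_vth_logt(t_s, a_mv_per_decade=4.0, t0_s=1.0):
    """One common trapping/de-trapping log(t) form: dVTH(t) = A*log10(1 + t/t0).
    A (mV/decade) and t0 (s) are illustrative placeholders, not fitted values."""
    return a_mv_per_decade * np.log10(1.0 + np.asarray(t_s, dtype=float) / t0_s)

# Projected VTH shift after 1 hour, 1 day, and ~1 year of stress.
for t in (3600.0, 86400.0, 3.15e7):
    print(f"t = {t:10.0f} s -> dVTH ~ {delta_vth_logt(t):.1f} mV")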
Contributors: Ravi, Venkatesa (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Clark, Lawrence (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 150421
Description
Photovoltaic (PV) modules undergo performance degradation depending on climatic conditions, applications, and system configurations. The performance degradation prediction of PV modules is primarily based on Accelerated Life Testing (ALT) procedures. In order to further strengthen the ALT process, additional investigation of the power degradation of field-aged PV modules in various configurations is required. A detailed investigation of 1,900 field-aged (12-18 years) PV modules deployed in a power plant application was conducted for this study. The analysis was based on individual current-voltage (I-V) measurements of all 1,900 modules. The I-V curve data of the individual modules formed the basis for calculating the performance degradation of the modules. The percentage performance degradation and degradation rates were compared to an earlier study done at the same plant. The current research was primarily focused on identifying the extent of potential-induced degradation (PID) of individual modules with reference to the negative ground potential. To investigate this, the arrangement and connection of the individual modules/strings were examined in detail. The study also examined the extent of underperformance of every series string due to performance mismatch of the individual modules in that string. The power loss due to individual module degradation and module mismatch at the string level was then compared to the rated value.
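The abstract above describes computing per-module degradation from I-V data and quantifying string-level underperformance due to module mismatch. The Python sketch below shows one common way to compute these quantities; all module and string power values are hypothetical placeholders, not data from the thesis.

# Illustrative only: separating module-level degradation from string-level mismatch.
module_pmax_rated_w = 53.0                       # hypothetical nameplate Pmax per module
module_pmax_meas_w = [44.1, 45.0, 42.3, 43.8]    # hypothetical measured Pmax of a 4-module string
string_pmax_meas_w = 168.0                       # hypothetical measured Pmax of the series string

degradation = [1 - p / module_pmax_rated_w for p in module_pmax_meas_w]
sum_of_modules = sum(module_pmax_meas_w)
mismatch_loss = 1 - string_pmax_meas_w / sum_of_modules   # loss beyond individual degradation
total_string_loss = 1 - string_pmax_meas_w / (module_pmax_rated_w * len(module_pmax_meas_w))

print("per-module degradation:", [f"{d:.1%}" for d in degradation])
print(f"mismatch loss at string level: {mismatch_loss:.1%}")
print(f"total string loss vs. rated:   {total_string_loss:.1%}")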
Contributors: Jaspreet Singh (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 150743
Description
Thanks to continuous technology scaling, intelligent, fast, and smaller digital systems are now available at affordable costs. As a result, digital systems have found use in a wide range of application areas that were not even imagined before, including medical (e.g., MRI, remote or post-operative monitoring devices, etc.), automotive (e.g., adaptive cruise control, anti-lock brakes, etc.), security systems (e.g., residential security gateways, surveillance devices, etc.), and in- and out-of-body sensing (e.g., capsules swallowed by patients to measure digestive-system pH, heart monitors, etc.). Such computing systems, which are completely embedded within the application, are called embedded systems, as opposed to general-purpose computing systems. In the design of such embedded systems, power consumption and reliability are indispensable system requirements. In battery-operated portable devices, the battery is the single largest factor contributing to device cost, weight, recharging time, recharging frequency, and ultimately usability. For example, in the Apple iPhone 4 smart-phone, the battery is 40% of the device weight, occupies 36% of its volume, and allows only 7 hours (over 3G) of talk time. As embedded systems find use in a range of sensitive applications, from bio-medical applications to safety and security systems, the reliability of the computations performed becomes a crucial factor. At the current technology node, portable embedded systems are expected to experience soft-error failures at a rate of about once per year; with aggressive technology scaling, the rate is predicted to increase exponentially to once per hour. Over the years, researchers have been successful in developing techniques, implemented at different layers of the design spectrum, to improve system power efficiency and reliability. Among the layers of design abstraction, I observe that the interface between the compiler and the processor micro-architecture possesses a unique potential for efficient design optimizations. A compiler designer is able to observe and analyze the application software at a finer granularity, while the processor architect analyzes the system output (power, performance, etc.) for each executed instruction. At the compiler-micro-architecture interface, if the system knowledge at the two design layers can be integrated, design optimizations at the two layers can be modified to efficiently utilize available resources and thereby achieve appreciable system-level benefits. To this effect, the thesis statement is that, "by merging system design information at the compiler and micro-architecture design layers, smart compilers can be developed that achieve reliable and power-efficient embedded computing through: i) pure compiler techniques, ii) hybrid compiler-micro-architecture techniques, and iii) compiler-aware architectures". This dissertation demonstrates, through contributions in each of the three compiler-based techniques, the effectiveness of smart compilers in achieving power efficiency and reliability in embedded systems.
Contributors: Jeyapaul, Reiley (Author) / Shrivastava, Aviral (Thesis advisor) / Vrudhula, Sarma (Committee member) / Clark, Lawrence (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 154078
Description
Photovoltaic (PV) module degradation is a well-known issue; however, understanding the mechanistic pathways by which modules degrade is still a major task for the PV industry. In order to study the mechanisms responsible for PV module degradation, the effects of these degradation mechanisms must be quantitatively measured to determine the severity of each degradation mode. In this thesis, multiple modules of a single glass/polymer module construction (Siemens M55) from three climate zones (Arizona, California, and Colorado) were investigated to determine the degree to which they had degraded and the main factors that contributed to that degradation. To explain the loss in power, various non-destructive and destructive techniques were used to indicate possible causes of the loss in performance. This is a two-part thesis: Part 1 presents the non-destructive test results and analysis, and Part 2 presents the destructive test results and analysis.
Contributors: Chicca, Matthew (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Rogers, Bradley (Committee member) / Srinivasan, Devarajan (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 156029
Description
With the application of reverse osmosis (RO) membranes in wastewater treatment and seawater desalination, the flux limitations and fouling problems of RO have gained more attention from researchers. Because of their tunable structure and physicochemical properties, nanomaterials are suitable for incorporation into RO membranes to modify membrane performance. Silver is biocidal and has been used in a variety of consumer products. Recent studies have shown that fabricating silver nanoparticles (AgNPs) on membrane surfaces can mitigate biofouling of the membrane. Studies have also shown that Ag released from the membrane, in the form of either Ag ions or AgNPs, enhances the antimicrobial activity of the membrane. However, silver release also lowers the silver loading on the membrane, which eventually shortens the membrane's antimicrobial lifetime. Therefore, the amount of silver leaching is a crucial parameter that needs to be determined for every type of Ag composite membrane.

This study compares four different silver leaching test methods in order to assess the silver leaching potential of silver-impregnated membranes and to evaluate the advantages and disadvantages of each leaching method. An RO membrane loaded with Ag by in-situ reduction was examined in this study. A custom water-jet test was established to create a high-velocity water flow and test silver leaching from the nanocomposite membrane in a relatively extreme environment. The batch leaching test, the most common leaching test method for silver composite membranes, was also examined. Cross-flow filtration and dead-end filtration tests were examined as well to compare silver leaching amounts.

The silver-coated membrane used in this experiment had an initial silver loading of 2.0 ± 0.51 µg/cm². A silver mass balance was conducted for all of the leaching tests. For the batch, water-jet, and dead-end filtration tests, the mass balances are all within 100 ± 25%, which is acceptable in this experiment given the variance of the initial silver loading on the membranes. A poor silver mass balance was observed for the cross-flow filtration test. Both AgNPs and Ag ions leached into solution were measured in this experiment. The concentrations of total silver leached into solution in all four leaching tests are below the Secondary Drinking Water Standard for silver, which is 100 ppb. The cross-flow test is the most aggressive leaching method, with more than 80% of the silver leached from the membrane after 50 hours of testing. The water-jet test (54 ± 6.9% of silver remaining) causes more silver leaching than the batch test (85 ± 1.2% of silver remaining) over one hour, and it also causes both AgNPs and Ag ions to leach from the membrane, which is closer to the leaching conditions of the cross-flow test.
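The abstract above closes a silver mass balance for each leaching test against an initial loading of 2.0 ± 0.51 µg/cm². The Python sketch below illustrates that bookkeeping only; the coupon area and recovered-silver amounts are hypothetical placeholders, not measurements from the thesis.

# Illustrative only: closing a silver mass balance for one leaching test.
initial_loading_ug_cm2 = 2.0     # reported initial loading (mean value from the abstract)
membrane_area_cm2 = 20.0         # hypothetical coupon area
ag_in_leachate_ug = 6.0          # hypothetical total Ag (ions + AgNPs) found in solution
ag_left_on_membrane_ug = 32.0    # hypothetical Ag recovered from the membrane afterwards

initial_ag_ug = initial_loading_ug_cm2 * membrane_area_cm2
mass_balance = (ag_in_leachate_ug + ag_left_on_membrane_ug) / initial_ag_ug
remaining = ag_left_on_membrane_ug / initial_ag_ug

print(f"mass balance: {mass_balance:.0%}")           # acceptable window in the study: 100 +/- 25%
print(f"Ag remaining on membrane: {remaining:.0%}")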
Contributors: Han, Bingru (Author) / Westerhoff, Paul (Thesis advisor) / Perreault, Francois (Committee member) / Sinha, Shahnawaz (Committee member) / Arizona State University (Publisher)
Created: 2017
Item 156589
Description
The volume of end-of-life photovoltaic (PV) modules is increasing as the global PV market grows, and global PV waste streams are expected to reach 250,000 metric tons by the end of 2020. If recycling processes are not put in place, there would be 60 million tons of end-of-life PV modules lying in landfills by 2050, which could make PV a not-so-sustainable way of sourcing energy, since all PV modules could contain a certain amount of toxic substances. Currently in the United States, PV modules are categorized as general waste and can be disposed of in landfills. However, potential leaching of toxic chemicals and materials, if any, from broken end-of-life modules may pose health or environmental risks. There is no standard procedure for removing samples from PV modules for chemical toxicity testing in Toxicity Characteristic Leaching Procedure (TCLP) laboratories per the EPA 1311 standard. The main objective of this thesis is to develop an unbiased sampling approach for the TCLP testing of PV modules. The TCLP testing concentrated only on the laminate part of the modules, as there are already existing recycling technologies for the frame and junction-box components of PV modules. Four different sample removal methods were applied to the laminates of five different module manufacturers: a coring approach, a cell-cut approach, a strip-cut approach, and a hybrid approach. The removed samples were sent to two different TCLP laboratories, and the TCLP results were tested for repeatability within a lab and reproducibility between the labs. The pros and cons of each sample removal method have been explored, and the influence of the sample removal method on the variability of TCLP results has been discussed. To reduce the variability of TCLP results to an acceptable level, additional improvements in the coring approach, the best of the four tested options, are still needed.
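The abstract above tests TCLP results for repeatability within a lab and reproducibility between labs. The abstract does not specify the thesis's statistical treatment, so the Python sketch below only shows one generic way to express both as relative standard deviations, using hypothetical replicate results.

import statistics as st

# Illustrative only: hypothetical replicate TCLP lead concentrations (mg/L) from two labs.
lab_a = [3.1, 2.8, 3.4]
lab_b = [4.0, 4.4, 3.7]

def rsd(values):
    """Relative standard deviation (%) of a set of replicate results."""
    return 100.0 * st.stdev(values) / st.mean(values)

within_lab = [rsd(lab_a), rsd(lab_b)]                 # repeatability, one value per lab
between_lab = rsd([st.mean(lab_a), st.mean(lab_b)])   # reproducibility across lab means

print(f"within-lab RSDs: {within_lab[0]:.1f}%, {within_lab[1]:.1f}%")
print(f"between-lab RSD: {between_lab:.1f}%")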
Contributors: Leslie, Joswin (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Kuitche, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 156829
Description
Advances in semiconductor technology have brought computer-based systems into virtually all aspects of human life. This unprecedented integration of semiconductor-based systems in our lives has significantly increased the domain and the number of safety-critical applications – applications with unacceptable consequences of failure. Software-level error resilience schemes are attractive because they can provide commercial off-the-shelf microprocessors with adaptive and scalable reliability.

Among all software-level error resilience solutions, in-application instruction-replication-based approaches have been widely used and are deemed to be the most effective. However, existing instruction-based replication schemes protect only part of the computation, i.e., arithmetic and logical instructions, and leave the rest unprotected. To improve the efficacy of instruction-level redundancy-based approaches, we developed several error detection and error correction schemes. nZDC (near Zero silent Data Corruption) is an instruction duplication scheme which protects the execution of the whole application. Rather than detecting errors on the register operands of memory and control-flow operations, nZDC checks the results of such operations. nZDC ensures the correct execution of memory write instructions by reloading the stored value and checking it against the redundantly computed value. nZDC also introduces a novel control-flow checking mechanism which replicates compare and branch instructions and detects both wrong-direction branches and unwanted jumps. Fault injection experiments show that nZDC can improve the error coverage of state-of-the-art schemes by more than 10x, without incurring any additional performance penalty.

Furthermore, we introduced two error recovery solutions. InCheck is our backward recovery solution, which makes lightweight error-free checkpoints at the basic-block granularity. In the case of an error, InCheck reverts the program execution to the beginning of the last executed basic block and resumes execution with the aid of the preserved information. NEMESIS is our forward recovery scheme, which runs three versions of the computation and detects errors by checking the results of all memory write and branch operations. In the case of a mismatch, the NEMESIS diagnosis routine decides whether the error is recoverable. If so, the NEMESIS recovery routine reverts the effect of the error from the program state and resumes normal program execution from the error detection point.
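The abstract above describes nZDC's store check: the stored value is reloaded and compared against a redundantly computed copy. nZDC itself is a compiler transformation on machine instructions, so the Python sketch below is only a conceptual illustration of the duplicate-compute, store-reload-and-compare idea, not the thesis's implementation.

# Conceptual illustration only: compute a value twice, perform the store with the
# primary copy, then reload it and check against the redundant copy before trusting it.
memory = {}

def checked_store(addr, compute):
    primary = compute()        # original computation
    shadow = compute()         # redundant (duplicated) computation
    memory[addr] = primary     # the actual store
    reloaded = memory[addr]    # reload the value that was stored...
    if reloaded != shadow:     # ...and compare it with the redundant copy
        raise RuntimeError(f"silent data corruption detected at {addr!r}")
    return reloaded

# Usage: protect a store of x * y to address "a".
x, y = 6, 7
checked_store("a", lambda: x * y)
print(memory["a"])             # 42 if no fault was detected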
Contributors: Didehban, Moslem (Author) / Shrivastava, Aviral (Thesis advisor) / Wu, Carole-Jean (Committee member) / Clark, Lawrence (Committee member) / Mahlke, Scott (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 168467
Description
Neural networks are increasingly becoming attractive solutions for automated systems within the automotive, aerospace, and military industries. Since many applications in such fields are both real-time and safety-critical, strict performance and reliability constraints must be considered. To achieve high performance, specialized architectures are required. Given that over 90% of the workload in modern neural network topologies is dominated by matrix multiplication, accelerating said algorithm becomes of paramount importance. Modern neural network accelerators, such as Xilinx's Deep Processing Unit (DPU), adopt efficient systolic-like architectures. Thanks to their high degree of parallelism and design flexibility, Field-Programmable Gate Arrays (FPGAs) are among the most promising devices for speeding up matrix multiplication and neural network computation. However, SRAM-based FPGAs are also known to suffer from radiation-induced upsets in their configuration memories. To achieve high reliability, hardening strategies must be put in place. However, traditional modular redundancy of inherently expensive modules is not always feasible due to limited resource availability on target devices. Therefore, more efficient and cleverly designed hardening methods become a necessity. For instance, Algorithm-Based Fault-Tolerance (ABFT) exploits algorithm characteristics to deliver error detection/correction capabilities at significantly lower costs. First, experimental results with Xilinx's DPU indicate that failure rates can be over twice as high as the limits specified for terrestrial applications. In other words, the undeniable need for hardening in the state-of-the-art neural network accelerator for FPGAs is demonstrated. Later, an extensive multi-level fault propagation analysis is presented, and an ultra-low-cost algorithm-based error detection strategy for matrix multiplication is proposed. By considering the specifics of FPGAs' fault model, this novel hardening method decreases implementation costs by over a polynomial degree when compared to state-of-the-art solutions. A corresponding architectural implementation is suggested, incurring area and energy overheads lower than 1% for the vast majority of systolic array dimensions. Finally, the impact of fundamental design decisions, such as data precision in processing elements and the overall degree of parallelism, on the reliability of hypothetical neural network accelerators is experimentally investigated. A novel way of predicting the compound failure rate of inherently inaccurate algorithms/applications in the presence of radiation is also provided.
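The abstract above mentions Algorithm-Based Fault Tolerance (ABFT) for matrix multiplication. The Python/NumPy sketch below illustrates the classic row/column-checksum idea only; it is a generic example, not the thesis's FPGA-specific, reduced-cost variant.

import numpy as np

def abft_matmul(a, b, tol=1e-9):
    """Checksum-augmented matrix multiply: returns the product and a consistency flag."""
    a_cs = np.vstack([a, a.sum(axis=0)])                  # append a column-checksum row to A
    b_rs = np.hstack([b, b.sum(axis=1, keepdims=True)])   # append a row-checksum column to B
    c_full = a_cs @ b_rs                                  # checksum-augmented product
    c = c_full[:-1, :-1]                                  # the actual result block
    rows_ok = np.allclose(c.sum(axis=1), c_full[:-1, -1], atol=tol)
    cols_ok = np.allclose(c.sum(axis=0), c_full[-1, :-1], atol=tol)
    return c, rows_ok and cols_ok

a = np.random.rand(4, 3)
b = np.random.rand(3, 5)
c, consistent = abft_matmul(a, b)
print("checksums consistent:", consistent)   # True unless a fault corrupted the computation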
Contributors: Libano, Fabiano (Author) / Brunhaver, John (Thesis advisor) / Clark, Lawrence (Committee member) / Quinn, Heather (Committee member) / Rech, Paolo (Committee member) / Arizona State University (Publisher)
Created: 2021