Matching Items (7)
Description
Memories play an integral role in today's advanced ICs. Technology scaling has enabled high-density designs, but at the price of increased variability and degraded reliability. It is therefore imperative to have accurate methods to measure and extract the variability of the SRAM cell in order to produce accurate reliability projections for future technologies. This work presents a novel test measurement and extraction technique that is non-invasive to the actual operation of the SRAM memory array. The salient features of this work are: i) a single-ended SRAM test structure that does not disturb SRAM operation; ii) a convenient test procedure that requires only quasi-static control of external voltages; and iii) a non-iterative method that extracts the VTH variation of each transistor from eight independent switch-point measurements. With present-day technology scaling, in addition to process variability, the impact of aging mechanisms becomes dominant. Aging mechanisms such as Negative Bias Temperature Instability (NBTI), Channel Hot Carrier (CHC), and Time Dependent Dielectric Breakdown (TDDB) are critical in present-day nano-scale technology nodes. In this work, we focus on the impact of NBTI aging on the SRAM cell and use a trapping/de-trapping-theory-based log(t) model to explain the shift in threshold voltage VTH. The aging study focuses on the following: i) statistical aging of the PMOS device due to NBTI dominates the temporal shift of the SRAM cell; ii) besides static variations, the shift in VTH demands increased guard-banding margins at the design stage; iii) aging statistics remain constant during the shift, presenting a secondary effect in aging prediction; and iv) we investigate whether the aging mechanism can be used as a compensation technique to reduce mismatch due to process variations. Finally, the entire test setup has been simulated in SPICE and validated in silicon, and the results are presented. The method also facilitates the study of design metrics such as the static, read, and write noise margins as well as the data retention voltage, and thus helps designers improve the cell stability of the SRAM.
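For reference, a common form of the trapping/de-trapping log(t) model for the NBTI-induced threshold-voltage shift is sketched below. This is a generic expression from the literature, not a formula quoted from the thesis; the prefactor and reference time are illustrative symbols.

```latex
% One common trapping/de-trapping form of the NBTI threshold-voltage shift:
% A depends on stress voltage and temperature; t_0 is a reference time
% constant. Both are illustrative symbols, not values from this work.
\Delta V_{TH}(t) = A(V_{\mathrm{stress}}, T)\,\log\!\left(1 + \frac{t}{t_0}\right)
```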
Contributors: Ravi, Venkatesa (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Clark, Lawrence (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Thanks to continuous technology scaling, intelligent, fast, and smaller digital systems are now available at affordable costs. As a result, digital systems have found use in a wide range of application areas that were not even imagined before, including medical (e.g., MRI, remote or post-operative monitoring devices, etc.), automotive (e.g., adaptive cruise control, anti-lock brakes, etc.), security systems (e.g., residential security gateways, surveillance devices, etc.), and in- and out-of-body sensing (e.g., capsules swallowed by patients to measure digestive-system pH, heart monitors, etc.). Such computing systems, which are completely embedded within the application, are called embedded systems, as opposed to general-purpose computing systems. In the design of such embedded systems, power consumption and reliability are indispensable system requirements. In battery-operated portable devices, the battery is the single largest factor contributing to device cost, weight, recharging time and frequency, and ultimately its usability. For example, in the Apple iPhone 4 smartphone, the battery accounts for 40% of the device weight, occupies 36% of its volume, and allows only 7 hours (over 3G) of talk time. As embedded systems find use in a range of sensitive applications, from bio-medical applications to safety and security systems, the reliability of the computations performed becomes a crucial factor. At the current technology node, portable embedded systems are expected to fail due to soft errors at a rate of about once per year; with aggressive technology scaling, this rate is predicted to increase exponentially to once per hour. Over the years, researchers have developed techniques, implemented at different layers of the design spectrum, to improve system power efficiency and reliability. Among the layers of design abstraction, the interface between the compiler and the processor micro-architecture possesses a unique potential for efficient design optimizations: a compiler designer can observe and analyze the application software at a fine granularity, while the processor architect analyzes the system output (power, performance, etc.) for each executed instruction. If the system knowledge at these two design layers can be integrated at the compiler/micro-architecture interface, design optimizations at the two layers can be modified to efficiently utilize available resources and thereby achieve appreciable system-level benefits. To this effect, the thesis statement is that "by merging system design information at the compiler and micro-architecture design layers, smart compilers can be developed that achieve reliable and power-efficient embedded computing through: i) pure compiler techniques, ii) hybrid compiler/micro-architecture techniques, and iii) compiler-aware architectures". This dissertation demonstrates, through contributions in each of the three compiler-based techniques, the effectiveness of smart compilers in achieving power efficiency and reliability in embedded systems.
Contributors: Jeyapaul, Reiley (Author) / Shrivastava, Aviral (Thesis advisor) / Vrudhula, Sarma (Committee member) / Clark, Lawrence (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Advances in semiconductor technology have brought computer-based systems into virtually all aspects of human life. This unprecedented integration of semiconductor-based systems in our lives has significantly increased the domain and the number of safety-critical applications, i.e., applications with unacceptable consequences of failure. Software-level error-resilience schemes are attractive because they can provide commercial off-the-shelf microprocessors with adaptive and scalable reliability.

Among all software-level error-resilience solutions, in-application instruction-replication-based approaches have been widely used and are deemed to be the most effective. However, existing instruction-replication schemes protect only part of the computation, i.e., arithmetic and logical instructions, and leave the rest unprotected. To improve the efficacy of instruction-level redundancy-based approaches, we developed several error detection and error correction schemes. nZDC (near Zero silent Data Corruption) is an instruction-duplication scheme that protects the execution of the whole application. Rather than detecting errors on the register operands of memory and control-flow operations, nZDC checks the results of such operations. nZDC ensures the correct execution of memory write instructions by reloading the stored value and checking it against the redundantly computed value. nZDC also introduces a novel control-flow checking mechanism that replicates compare and branch instructions and detects both wrong-direction branches and unwanted jumps. Fault-injection experiments show that nZDC can improve the error coverage of state-of-the-art schemes by more than 10x without incurring any additional performance penalty. Furthermore, we introduced two error recovery solutions. InCheck is our backward-recovery solution, which makes lightweight error-free checkpoints at basic-block granularity. In the case of an error, InCheck reverts program execution to the beginning of the last executed basic block and resumes execution with the aid of the preserved information. NEMESIS is our forward-recovery scheme, which runs three versions of the computation and detects errors by checking the results of all memory write and branch operations. In the case of a mismatch, the NEMESIS diagnosis routine decides whether the error is recoverable; if so, the NEMESIS recovery routine reverts the effects of the error on the program state and resumes normal program execution from the error-detection point.
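As a conceptual illustration of the nZDC store-checking idea described above, the Python sketch below recomputes a value redundantly, performs the store, reloads it, and compares. The real scheme operates on assembly instructions and registers, so the function and exception names here are hypothetical analogies, not the thesis's implementation.

```python
# Hedged sketch of the nZDC-style "check the result, not the operands" idea,
# transplanted from assembly-level instruction duplication into Python.
# Names (verified_store, SilentDataCorruption) are hypothetical.

class SilentDataCorruption(Exception):
    """Raised when the reloaded value disagrees with the shadow copy."""

def verified_store(memory: list, addr: int, compute) -> None:
    """Store the result of `compute`, then reload it and compare against
    a redundantly computed shadow value (checking the store's effect)."""
    value = compute()          # primary computation
    shadow = compute()         # redundant (duplicated) computation
    memory[addr] = value       # the actual memory write
    reloaded = memory[addr]    # reload what was really stored
    if reloaded != shadow:     # mismatch => an error escaped into memory
        raise SilentDataCorruption(f"addr {addr}: {reloaded} != {shadow}")

mem = [0] * 16
verified_store(mem, 3, lambda: 6 * 7)  # stores 42, then verifies it
```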
Contributors: Didehban, Moslem (Author) / Shrivastava, Aviral (Thesis advisor) / Wu, Carole-Jean (Committee member) / Clark, Lawrence (Committee member) / Mahlke, Scott (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The large-scale anthropogenic emission of carbon dioxide into the atmosphere leads to many unintended consequences, from rising sea levels to ocean acidification. While a clean energy infrastructure is growing, mid-term strategies that are compatible with the current infrastructure should be developed. Carbon capture and storage in fossil-fuel power plants is one way to avoid our current gigaton-scale emission of carbon dioxide into the atmosphere. However, for this to be possible, separation techniques are necessary to remove the nitrogen from air before combustion or from the flue gas after combustion. Metal-organic frameworks (MOFs) are a relatively new class of porous material that show great promise for adsorptive separation processes. Here, potential mechanisms of O2/N2 separation and CO2/N2 separation are explored.

First, a logical categorization of potential adsorptive separation mechanisms in MOFs is outlined by comparing existing data with previously studied materials. Size-selective adsorptive separation is investigated for both gas systems using molecular simulations. A correlation between size-selective equilibrium adsorptive separation capabilities and pore diameter is established in materials with complex pore distributions. A method of generating mobile extra-framework cations which drastically increase adsorptive selectivity toward nitrogen over oxygen via electrostatic interactions is explored through experiments and simulations. Finally, deposition of redox-active ferrocene molecules into systematically generated defects is shown to be an effective method of increasing selectivity towards oxygen.
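For context on the selectivity comparisons above, the standard definition of equilibrium adsorption selectivity used for such binary separations (a textbook relation, not a formula quoted from the thesis) is:

```latex
% Equilibrium adsorption selectivity of species A over B in a binary
% mixture: q_i are adsorbed-phase loadings, p_i are partial pressures.
S_{A/B} = \frac{q_A / q_B}{p_A / p_B}
```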
Contributors: McIntyre, Sean (Author) / Mu, Bin (Thesis advisor) / Green, Matthew (Committee member) / Lind, Marylaura (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Ethylene vinyl acetate (EVA) is the most commonly used encapsulant in photovoltaic modules. However, EVA degrades over time and causes performance losses in PV systems. EVA degradation is therefore a matter of concern from a durability point of view.

This work compares EVA encapsulant degradation in glass/backsheet and glass/glass field-aged PV modules. EVA was extracted from three field-aged modules (two glass/backsheet and one glass/glass) from three different manufacturers; samples were taken from various regions of each module (cell edges, cell centers, and the non-cell region), selected on the basis of their visual and UV fluorescence images. Characterization techniques such as I-V measurements, colorimetry, Differential Scanning Calorimetry (DSC), Thermogravimetric Analysis (TGA), Raman spectroscopy, and Fourier Transform Infrared (FTIR) spectroscopy were performed on the EVA samples.

The intensity of EVA discoloration was quantified using colorimetric measurements. Module performance parameters such as the Isc and Pmax degradation rates were calculated from the I-V measurements. Properties such as the degree of crystallinity, vinyl acetate content, and degree of crosslinking were calculated from the DSC, TGA, and Raman measurements, respectively. The polyenes responsible for EVA browning were identified in the FTIR spectra.
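As an illustration of two of these calculations, the sketch below encodes the textbook DSC and TGA relations for EVA. This is a hedged reconstruction using standard literature constants, not the thesis's actual analysis: the 293 J/g reference enthalpy for fully crystalline polyethylene and the acetic-acid-to-vinyl-acetate mass conversion are common assumptions, and the function names are hypothetical.

```python
# Hedged sketch: standard DSC/TGA relations often used for EVA encapsulant.
# Constants and function names are illustrative, not taken from the thesis.

DH_PE_100 = 293.0   # J/g, melting enthalpy of 100% crystalline polyethylene
M_VA = 86.09        # g/mol, vinyl acetate repeat unit
M_ACOH = 60.05      # g/mol, acetic acid released during deacetylation

def crystallinity_from_dsc(dh_melt: float, va_weight_frac: float) -> float:
    """Degree of crystallinity (%) from the DSC melting enthalpy (J/g),
    normalized to the polyethylene fraction of the copolymer."""
    return 100.0 * dh_melt / ((1.0 - va_weight_frac) * DH_PE_100)

def va_content_from_tga(acoh_mass_loss_frac: float) -> float:
    """Vinyl acetate content (wt%) from the fractional TGA mass loss of
    the acetic-acid (deacetylation) step, assuming full stoichiometry."""
    return 100.0 * acoh_mass_loss_frac * (M_VA / M_ACOH)

# Example: 30 J/g melting enthalpy at ~33 wt% VA; 23% mass loss in step 1.
print(crystallinity_from_dsc(30.0, 0.33))  # ~15.3 %
print(va_content_from_tga(0.23))           # ~33 wt%
```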

The results from the characterization techniques confirmed that when EVA degrades, crosslinking in the EVA increases beyond 90%, causing a decrease in the degree of crystallinity and an increase in the vinyl acetate content of the EVA. The presence of polyenes in the FTIR spectra of the degraded EVA confirmed the occurrence of the Norrish II reaction. Photobleaching occurred in the glass/backsheet modules because of their breathable backsheets, whereas no photobleaching occurred in the glass/glass module because it was hermetically sealed. Hence, the yellowness index, along with the Isc and Pmax degradation rates, of the EVA in the glass/glass module is higher than that in the glass/backsheet modules.

The results also implied that more acetic acid was produced in the non-cell region, with its double layer of EVA, than in the front EVA of the cell region. Since the glass/glass module is hermetically sealed, the acetic acid is trapped inside the module, further accelerating EVA degradation, whereas in glass/backsheet modules it diffuses out through the backsheet. Hence, EVA may be a good encapsulant for glass/backsheet modules, but the same cannot be said for glass/glass modules.
Contributors: Patel, Aesha Parimalbhai (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Green, Matthew (Committee member) / Mu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This is a two-part thesis assessing the long-term reliability of photovoltaic modules.

Part 1: Manufacturing-dependent reliability - Adapting FMECA for quality control in PV module manufacturing

This part introduces a statistical tool for quality assessment in PV module manufacturing. Developed jointly by ASU-PRL and Clean Energy Associates, this work adapts Failure Mode, Effects and Criticality Analysis (FMECA, IEC 60812) to quantify the impact of failure modes observed at the time of manufacturing. The method was developed through the analysis of nearly 9,000 modules at the pre-shipment evaluation stage in module manufacturing facilities across Southeast Asia. Numerous projects were analyzed to generate RPN (Risk Priority Number) scores, making it possible to quantitatively assess the risk carried by a project at the time the modules are shipped. The objective of this work was to develop a benchmarking system that would allow accurate quantitative estimation of risk mitigation and project bankability.
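For context, the conventional FMECA risk priority number is the product of severity, occurrence, and detection rankings; the sketch below shows that standard calculation. The failure modes and rankings in the example are hypothetical illustrations, not data from the audits described above.

```python
# Hedged sketch: the standard FMECA Risk Priority Number computation,
# RPN = Severity x Occurrence x Detection, each ranked on a 1-10 scale.
# The failure modes and rankings below are hypothetical examples.

failure_modes = [
    # (name, severity, occurrence, detection)
    ("cell micro-crack", 7, 5, 6),
    ("solder bond fatigue", 8, 3, 7),
    ("backsheet scratch", 4, 6, 3),
]

for name, sev, occ, det in failure_modes:
    rpn = sev * occ * det
    print(f"{name}: RPN = {rpn}")

# A project-level score could then aggregate the per-mode RPNs, e.g. by
# frequency-weighting them across the inspected module population.
```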

Part 2: Climate-dependent reliability - Activation energy determination for climate-specific degradation modes

This work attempts to model the parameter (Isc or Rs) degradation rate of modules as a function of the climatic parameters at the site (i.e., temperature, relative humidity, and ultraviolet radiation). The objective was to look beyond the power degradation rate and to model the performance parameter directly affected by the degradation mode under investigation (encapsulant browning or IMS degradation of solder bonds). Different physical models were tested and validated by comparing the activation energies obtained for each degradation mode. It was concluded that, for the degradation of the solder bonds within the module, Peck's equation (a function of temperature and relative humidity) modeled with the Rs increase was the best fit, with the activation energy ranging from 0.4 to 0.7 eV depending on the climate type. For encapsulant browning, the modified Arrhenius equation (a function of temperature and UV) appeared to be the best fit, yielding an activation energy of 0.3 eV. The work concludes by suggesting possible modifications to the models based on degradation pathways unaccounted for in the present work.
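For reference, the two physical models named above are conventionally written as follows. These are the standard forms from the reliability literature; the prefactors and exponents are fitted per degradation mode, not values quoted from the thesis.

```latex
% Peck's model for a humidity- and temperature-driven degradation rate R
% (RH = relative humidity, E_a = activation energy, k = Boltzmann constant):
R_{\mathrm{Peck}} = A \cdot RH^{\,n} \cdot \exp\!\left(-\frac{E_a}{kT}\right)

% Modified Arrhenius model with a UV-dose term (UV = ultraviolet irradiance):
R_{\mathrm{UV}} = A \cdot UV^{\,p} \cdot \exp\!\left(-\frac{E_a}{kT}\right)
```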
Contributors: Pore, Shantanu (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Green, Matthew (Thesis advisor) / Srinivasan, Devrajan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Neural networks are increasingly becoming attractive solutions for automated systems within the automotive, aerospace, and military industries. Since many applications in such fields are both real-time and safety-critical, strict performance and reliability constraints must be considered. To achieve high performance, specialized architectures are required. Given that over 90% of the workload in modern neural network topologies is dominated by matrix multiplication, accelerating that algorithm becomes of paramount importance. Modern neural network accelerators, such as Xilinx's Deep Processing Unit (DPU), adopt efficient systolic-like architectures. Thanks to their high degree of parallelism and design flexibility, Field-Programmable Gate Arrays (FPGAs) are among the most promising devices for speeding up matrix multiplication and neural network computation. However, SRAM-based FPGAs are also known to suffer from radiation-induced upsets in their configuration memories, so hardening strategies must be put in place to achieve high reliability. Traditional modular redundancy of inherently expensive modules is not always feasible due to limited resource availability on target devices; therefore, more efficient and cleverly designed hardening methods become a necessity. For instance, Algorithm-Based Fault Tolerance (ABFT) exploits algorithm characteristics to deliver error detection/correction capabilities at significantly lower cost. First, experimental results with Xilinx's DPU indicate that failure rates can be over twice as high as the limits specified for terrestrial applications; in other words, the undeniable need for hardening in the state-of-the-art neural network accelerator for FPGAs is demonstrated. Next, an extensive multi-level fault-propagation analysis is presented, and an ultra-low-cost algorithm-based error detection strategy for matrix multiplication is proposed. By considering the specifics of the FPGA fault model, this novel hardening method decreases implementation costs by over a polynomial degree compared to state-of-the-art solutions. A corresponding architectural implementation is suggested, incurring area and energy overheads below 1% for the vast majority of systolic array dimensions. Finally, the impact of fundamental design decisions, such as the data precision of the processing elements and the overall degree of parallelism, on the reliability of hypothetical neural network accelerators is experimentally investigated, and a novel way of predicting the compound failure rate of inherently inaccurate algorithms/applications in the presence of radiation is provided.
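To make the ABFT idea concrete, the sketch below shows the classic checksum scheme for matrix multiplication (the Huang-Abraham formulation): a column-sum row is appended to A and a row-sum column to B, and the checksums of the product verify the result. This is the textbook scheme, not the thesis's reduced-cost FPGA-specific variant.

```python
import numpy as np

# Hedged sketch: classic Huang-Abraham checksum ABFT for C = A @ B.

def abft_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Multiply with checksum rows/columns and verify the result."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row on A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column on B
    Cf = Ac @ Br                                       # checksum-augmented product
    C = Cf[:-1, :-1]                                   # the actual result block
    # The last row/column of Cf must equal the column/row sums of C.
    if not (np.allclose(Cf[-1, :-1], C.sum(axis=0)) and
            np.allclose(Cf[:-1, -1], C.sum(axis=1))):
        raise RuntimeError("ABFT checksum mismatch: fault detected")
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
C = abft_matmul(A, B)            # verified product
assert np.allclose(C, A @ B)
```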
Contributors: Libano, Fabiano (Author) / Brunhaver, John (Thesis advisor) / Clark, Lawrence (Committee member) / Quinn, Heather (Committee member) / Rech, Paolo (Committee member) / Arizona State University (Publisher)
Created: 2021