Matching Items (37)

Improving the reliability of NAND Flash, phase-change RAM and spin-torque transfer RAM

Description

Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase change random access memory (PRAM) and spin torque transfer random access memory (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits and material property shift. To address the reliability concerns of these NVM technologies, multi-level low cost solutions are proposed for each of them. My approach consists of first building a comprehensive error model. Next, the error characteristics are exploited to develop low cost multi-level strategies to compensate for the errors. For instance, for NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. Next, a flexible product code is designed to migrate to a stronger ECC scheme as the number of program/erase cycles increases. An adaptive data refresh scheme is also proposed to improve memory reliability at low energy cost for applications with different data update frequencies. For PRAM, soft error and hard error models are built based on shifts in the resistance distributions. Next, I developed a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level and programming current profile tuning at the device level. This approach reduced the error rate significantly, so that a low cost ECC scheme was then sufficient to satisfy the memory reliability constraint. I also studied the reliability of a PRAM+DRAM hybrid memory system and analyzed the tradeoffs between memory performance, programming energy and lifetime. For STT-MRAM, I first developed an error model based on process variations. I then developed a multi-level approach to reduce the error rates, consisting of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell and adjusting the current profile during the write operation. This approach enabled the use of a low cost BCH based ECC scheme to achieve very low block failure rates.
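
The abstract describes a flexible product code for NAND Flash that migrates to a stronger ECC scheme as program/erase cycles accumulate. As a minimal sketch of that idea (not the author's actual code construction), the example below picks a BCH correction capability t from the current P/E cycle count; the cycle thresholds and t values are hypothetical.

    # Minimal sketch: choose BCH correction strength from program/erase wear.
    # The thresholds and t values below are hypothetical illustrations, not the
    # parameters used in the thesis.

    def select_bch_strength(pe_cycles: int) -> int:
        """Return the number of correctable bits t for the current wear level."""
        if pe_cycles < 3_000:       # fresh blocks: raw bit error rate is low
            return 4
        elif pe_cycles < 10_000:    # moderate wear: migrate to a stronger code
            return 8
        else:                       # heavy wear: strongest (and costliest) code
            return 12

    for cycles in (1_000, 5_000, 20_000):
        print(cycles, "P/E cycles ->", select_bch_strength(cycles), "bit BCH")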

Date Created
2014

Electromigration in gold interconnects

Description

Electromigration in metal interconnects is the most pernicious failure mechanism in semiconductor integrated circuits (ICs). Early electromigration investigations were primarily focused on aluminum interconnects for silicon-based ICs. An alternative metallization compatible with gallium arsenide (GaAs) was required in the development of high-powered radio frequency (RF) compound semiconductor devices operating at higher current densities and elevated temperatures. Gold-based metallization was implemented on GaAs devices because it uniquely forms a very low resistance ohmic contact and gold interconnects have superior electrical and thermal conductivity properties. Gold (Au) was also believed to have improved resistance to electromigration due to its higher melting temperature, yet electromigration reliability data on passivated Au interconnects are scarce and inadequate in the literature. Therefore, the objective of this research was to characterize the electromigration lifetimes of passivated Au interconnects under precisely controlled stress conditions with statistically relevant quantities to obtain accurate model parameters essential for extrapolation to normal operational conditions. This research objective was accomplished through measurement of electromigration lifetimes of large quantities of passivated electroplated Au interconnects utilizing high-resolution in-situ resistance monitoring equipment. Application of moderate accelerated stress conditions with a current density limited to 2 MA/cm² and oven temperatures in the range of 300°C to 375°C avoided electrical overstress and severe Joule-heated temperature gradients. Temperature coefficients of resistance (TCRs) were measured to determine accurate Joule-heated Au interconnect film temperatures. A failure criterion of 50% resistance degradation was selected to prevent thermal runaway and the catastrophic metal ruptures that are problematic in open circuit failure tests. Test structure design was optimized to reduce resistance variation and facilitate failure analysis. Characterization of the Au microstructure yielded a median grain size of 0.91 µm. All Au lifetime distributions followed log-normal distributions and Black's model was found to be applicable. An activation energy of 0.80 ± 0.05 eV was measured from constant current electromigration tests at multiple temperatures. A current density exponent of 1.91 was extracted from multiple current densities at a constant temperature. Electromigration-induced void morphology along with these model parameters indicated that grain boundary diffusion is dominant and that the void nucleation mechanism controlled the failure time.
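
The lifetime analysis is based on Black's model, with a measured activation energy of 0.80 eV and a current density exponent of 1.91. As a rough illustration of how such parameters are used to extrapolate from accelerated stress to operating conditions, the sketch below computes the acceleration factor implied by Black's equation; the use-condition values chosen here are hypothetical, not from the study.

    # Sketch of Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)).
    # Ea and n are the values reported in the study; the "use" condition below is
    # a hypothetical operating point chosen only to illustrate the extrapolation.
    import math

    K_BOLTZMANN = 8.617e-5      # eV/K
    EA = 0.80                   # activation energy, eV (measured)
    N = 1.91                    # current density exponent (measured)

    def acceleration_factor(j_stress, t_stress_c, j_use, t_use_c):
        """MTTF(use) / MTTF(stress) for Black's model (prefactor A cancels)."""
        t_stress = t_stress_c + 273.15
        t_use = t_use_c + 273.15
        current_term = (j_stress / j_use) ** N
        thermal_term = math.exp(EA / K_BOLTZMANN * (1.0 / t_use - 1.0 / t_stress))
        return current_term * thermal_term

    # Stress: 2 MA/cm^2 at 300 C; hypothetical use: 0.5 MA/cm^2 at 125 C.
    print(acceleration_factor(2.0, 300.0, 0.5, 125.0))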

Date Created
2013

Determination of dominant failure modes using combined experimental and statistical methods and selection of best method to calculate degradation rates

Description

This is a two part thesis:

Part 1 of this thesis determines the most dominant failure modes of field aged photovoltaic (PV) modules using experimental data and a statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 5900 crystalline-Si glass/polymer modules fielded for 6 to 16 years in three different PV power plants with different mounting systems under the hot-dry desert climate of Arizona are evaluated. FMECA, a statistical reliability tool that uses the Risk Priority Number (RPN), is performed for each PV power plant to determine the dominant failure modes in the modules by ranking and prioritizing the modes. This study of the PV power plants considers all the failure and degradation modes from both safety and performance perspectives, and concludes that solder bond fatigue/failure, with or without gridline/metallization contact fatigue/failure, is the most dominant failure mode for these module types in the hot-dry desert climate of Arizona.
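
FMECA prioritizes failure modes by a Risk Priority Number. In the conventional formulation, the RPN is the product of severity, occurrence and detection scores; the sketch below ranks a few module failure modes that way. The failure modes and 1-10 scores are illustrative only, not the values derived in the thesis.

    # Conventional FMECA ranking: RPN = severity x occurrence x detection.
    # The failure modes and 1-10 scores below are hypothetical illustrations.
    failure_modes = {
        "solder bond fatigue":       {"severity": 7, "occurrence": 8, "detection": 5},
        "encapsulant discoloration": {"severity": 4, "occurrence": 6, "detection": 3},
        "backsheet cracking":        {"severity": 6, "occurrence": 3, "detection": 4},
    }

    ranked = sorted(
        ((name, s["severity"] * s["occurrence"] * s["detection"])
         for name, s in failure_modes.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, rpn in ranked:
        print(f"{name}: RPN = {rpn}")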

Part 2 of this thesis determines the best method to compute degradation rates of PV modules. Three different PV systems were evaluated to compute degradation rates using four methods: I-V measurement, metered kWh, performance ratio (PR) and performance index (PI). The I-V method, being an ideal method for degradation rate computation, was used as the reference against which the results from the other three methods were compared. The median degradation rates computed from the kWh method were within ±0.15% of the I-V measured degradation rates (0.9-1.37%/year for the three models). Degradation rates from the PI method were within ±0.05% of the I-V measured rates for two systems, but the calculated degradation rate was remarkably different (±1%) from the I-V method for the third system. The degradation rate from the PR method was within ±0.16% of the I-V measured rate for only one system and was remarkably different (±1%) from the I-V measured rates for the other two systems. Thus, it was concluded that the metered raw kWh method is the best practical method for degradation rate computation, after the I-V method and the PI method (if ground mounted POA insolation and other weather data are available), as it was found to be fairly accurate, easy, inexpensive, fast and convenient.
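
One simple way to obtain a degradation rate from metered energy is to normalize annual kWh to the first year and fit a linear trend, whose slope gives a rate in %/year; real analyses would also correct for weather. The sketch below illustrates only that basic calculation on fabricated yearly data, not the thesis's procedure or results.

    # Sketch: degradation rate (%/year) from metered annual kWh via a linear fit.
    # The yearly energy values below are fabricated for illustration only.
    years = [0, 1, 2, 3, 4, 5]
    annual_kwh = [152_000, 150_400, 148_900, 147_500, 146_200, 144_600]

    # Normalize to year-0 production so the slope is a fractional change per year.
    norm = [e / annual_kwh[0] for e in annual_kwh]

    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(norm) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, norm))
             / sum((x - mean_x) ** 2 for x in years))

    print(f"degradation rate ~ {-slope * 100:.2f} %/year")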

Date Created
2014

Harm during hospitalizations for heart failure: adverse events as a reliability measure of hospital policies and procedures

Description

For more than twenty years, clinical researchers have been publishing data regarding the incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The AE specifics (rates, SOPP failures, timing and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability through the three years of the study, including cumulative distribution and hazard functions of time between AEs and moving range statistical process control (SPC) charts for days between failures of each type. This is the first study, to our knowledge, to consider reliability of SOPP from both the patient's and the hospital administration's perspectives. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
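
The hospital-level analysis uses moving range statistical process control charts for days between failures. As a generic illustration of that chart type (not the study's data or exact charting choices), the sketch below computes individuals/moving-range (XmR) control limits with the standard constants 2.66 and 3.267.

    # Sketch: XmR (individuals / moving range) control limits for days between
    # adverse events. The event-interval data below are fabricated for illustration.
    days_between_failures = [3, 7, 2, 11, 5, 4, 9, 6, 1, 8]

    moving_ranges = [abs(b - a) for a, b in
                     zip(days_between_failures, days_between_failures[1:])]

    x_bar = sum(days_between_failures) / len(days_between_failures)
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    # Standard XmR chart constants for a subgroup size of 2.
    x_ucl = x_bar + 2.66 * mr_bar
    x_lcl = max(0.0, x_bar - 2.66 * mr_bar)
    mr_ucl = 3.267 * mr_bar

    print(f"individuals chart: center {x_bar:.1f}, limits [{x_lcl:.1f}, {x_ucl:.1f}]")
    print(f"moving range chart: center {mr_bar:.1f}, UCL {mr_ucl:.1f}")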

Date Created
2012

Investigation of 1,900 individual field aged photovoltaic modules for potential induced degradation (PID) in a positive biased power plant

Description

Photovoltaic (PV) modules undergo performance degradation depending on climatic conditions, applications, and system configurations. The performance degradation prediction of PV modules is primarily based on Accelerated Life Testing (ALT) procedures. In order to further strengthen the ALT process, additional investigation of the power degradation of field aged PV modules in various configurations is required. A detailed investigation of 1,900 field aged (12-18 years) PV modules deployed in a power plant application was conducted for this study. Analysis was based on the current-voltage (I-V) measurement of all the 1,900 modules individually. I-V curve data of individual modules formed the basis for calculating the performance degradation of the modules. The percentage performance degradation and rates of degradation were compared to an earlier study done at the same plant. The current research was primarily focused on identifying the extent of potential induced degradation (PID) of individual modules with reference to the negative ground potential. To investigate this, the arrangement and connection of the individual modules/strings was examined in detail. The study also examined the extent of underperformance of every series string due to performance mismatch of individual modules in that string. The power loss due to individual module degradation and module mismatch at string level was then compared to the rated value.
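
String-level mismatch is commonly quantified by comparing the sum of the individual module maximum power values with the string's measured maximum power. The sketch below is a simplified illustration of that comparison on fabricated wattages; it is not the analysis procedure or data used in this study.

    # Sketch: string-level mismatch loss as the gap between the sum of individual
    # module Pmax values and the measured string Pmax. All wattages are fabricated
    # for illustration.
    module_pmax_w = [168.2, 171.5, 159.8, 165.0, 170.1, 162.4]  # I-V measured modules
    string_pmax_w = 972.0                                        # I-V measured string

    ideal_w = sum(module_pmax_w)
    mismatch_loss_w = ideal_w - string_pmax_w
    print(f"ideal {ideal_w:.1f} W, measured {string_pmax_w:.1f} W, "
          f"mismatch loss {100 * mismatch_loss_w / ideal_w:.2f} %")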

Date Created
2011

Reliability associated with the estimation of soil resilient modulus at different hierarchical levels of pavement design

Description

Deterministic solutions are available to estimate the resilient modulus of unbound materials, but they are difficult to interpret because they do not incorporate the variability associated with inherent soil heterogeneity or with environmental conditions. This thesis presents the stochastic evaluation of the Enhanced Integrated Climatic Model (EICM), a model used in the Mechanistic-Empirical Pavement Design Guide to estimate the soil long-term equilibrium resilient modulus. The stochastic evaluation is accomplished by taking the deterministic equations in the EICM and applying stochastic procedures to obtain a mean and variance associated with the final design parameter, the resilient modulus at equilibrium condition. In addition to the stochastic evaluation, different statistical analyses were applied to establish that the use of hierarchical levels is valid in unbound pavement material design and that the climatic region has an impact on the final design resilient moduli at equilibrium. After establishing the validity of the climatic regions and the hierarchical levels, reliability analysis was applied to the resilient moduli at equilibrium. Finally, the American Association of State Highway and Transportation Officials (AASHTO) design concept based on the Structural Number (SN) was applied in order to illustrate the true implications that the hierarchical design levels and the variability associated with environmental effects and soil properties have for the design of pavement structures. The stochastic solutions developed as part of this thesis, together with the SN design concept, were applied to five soils with different resilient moduli at optimum compaction condition in order to evaluate the variability associated with the resilient moduli at equilibrium condition. These soils were evaluated in five different climatic regions ranging from arid to extremely wet conditions. The analysis showed that using the most accurate input parameters obtained from laboratory testing (hierarchical Level 1) instead of a Level 3 analysis could potentially save the State Department of Transportation up to 10.12 inches of asphalt in arid and semi-arid regions.
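
The stochastic evaluation propagates input uncertainty through the deterministic EICM equations to obtain a mean and variance of the equilibrium resilient modulus. One common way to do this kind of propagation is a first-order second-moment (FOSM) approximation; the sketch below applies FOSM to a generic function of independent inputs and is only an illustration of the idea under that assumption, not the EICM formulation or the thesis's procedure.

    # Sketch: first-order second-moment (FOSM) propagation of mean and variance
    # through a deterministic model y = f(x1, ..., xn) with independent inputs.
    # The model function and input statistics below are hypothetical.

    def fosm(f, means, variances, eps=1e-4):
        """Approximate mean and variance of f via a first-order Taylor expansion."""
        mean_y = f(means)
        var_y = 0.0
        for i, (m, v) in enumerate(zip(means, variances)):
            shifted = list(means)
            shifted[i] = m + eps
            dfdx = (f(shifted) - mean_y) / eps      # finite-difference sensitivity
            var_y += (dfdx ** 2) * v
        return mean_y, var_y

    # Hypothetical stand-in for a resilient-modulus model (not the EICM equation).
    model = lambda x: 10.0 * x[0] ** 0.6 * x[1] ** -0.3

    mean, var = fosm(model, means=[40.0, 0.25], variances=[16.0, 0.0025])
    print(f"mean ~ {mean:.1f}, std dev ~ {var ** 0.5:.2f}")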

Date Created
2011

Smart compilers for reliable and power-efficient embedded computing

Description

Thanks to continuous technology scaling, intelligent, fast and smaller digital systems are now available at affordable costs. As a result, digital systems have found use in a wide range of application areas that were not even imagined before, including medical (e.g., MRI, remote or post-operative monitoring devices, etc.), automotive (e.g., adaptive cruise control, anti-lock brakes, etc.), security systems (e.g., residential security gateways, surveillance devices, etc.), and in- and out-of-body sensing (e.g., capsules swallowed by patients to measure digestive system pH, heart monitors, etc.). Such computing systems, which are completely embedded within the application, are called embedded systems, as opposed to general purpose computing systems. In the design of such embedded systems, power consumption and reliability are indispensable system requirements. In battery operated portable devices, the battery is the single largest factor contributing to device cost, weight, recharging time, frequency and ultimately its usability. For example, in the Apple iPhone 4 smart-phone, the battery is 40% of the device weight, occupies 36% of its volume and allows only 7 hours (over 3G) of talk time. As embedded systems find use in a range of sensitive applications, from bio-medical applications to safety and security systems, the reliability of the computations performed becomes a crucial factor. At the current technology node, portable embedded systems can expect failures due to soft errors at a rate of about once per year; with aggressive technology scaling, the rate is predicted to increase exponentially to once per hour. Over the years, researchers have been successful in developing techniques, implemented at different layers of the design spectrum, to improve system power efficiency and reliability. Among the layers of design abstraction, I observe that the interface between the compiler and the processor micro-architecture possesses a unique potential for efficient design optimizations. A compiler designer is able to observe and analyze the application software at a finer granularity, while the processor architect analyzes the system output (power, performance, etc.) for each executed instruction. At the compiler micro-architecture interface, if the system knowledge at the two design layers can be integrated, design optimizations at the two layers can be modified to efficiently utilize available resources and thereby achieve appreciable system-level benefits. To this effect, the thesis statement is that, "by merging system design information at the compiler and micro-architecture design layers, smart compilers can be developed that achieve reliable and power-efficient embedded computing through: i) pure compiler techniques, ii) hybrid compiler micro-architecture techniques, and iii) compiler-aware architectures". This dissertation demonstrates, through contributions in each of the three compiler-based techniques, the effectiveness of smart compilers in achieving power-efficiency and reliability in embedded systems.

Date Created
2012

A study of evaluation methods centered on reliability for renewal of aging hydropower plants

Description

Hydropower generation is one of the clean renewable energy sources that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation of their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years, and consequently the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the reasonable timing of rehabilitation is missed, an abrupt long-term outage could occur and additional unnecessary repairs and inefficiencies would follow. On the contrary, replacement that comes too early wastes revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, and only forced outage faults are considered as raw data for the reliability analyses. The mean cumulative repair functions (MCF) of each facility are determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also obtained to represent the availability of representative equipment and of the system. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant. Two case studies are presented to enhance the understanding of the availability of each power plant and to present economic evaluations for modernization. Also, equipment in a hydropower plant is categorized into two groups based on reliability for determining modernization timing, and suitable replacement periods are obtained using simulation.
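
The mean cumulative function (MCF) for a fleet of repairable units can be estimated nonparametrically in the spirit of Nelson's graphical method: at each event age, the MCF increases by 1 divided by the number of units still under observation at that age. The sketch below illustrates that estimator on a small fabricated fleet observed from age zero; it is not the K-water data or the exact procedure used in the thesis.

    # Sketch: nonparametric mean cumulative function (MCF) estimate for a fleet of
    # repairable units, in the spirit of Nelson's graphical method. Event ages and
    # observation windows below are fabricated; all units are observed from age 0.

    # (unit_id, age_at_forced_outage) and the age at which each unit's history ends.
    events = [("A", 2.0), ("A", 7.5), ("B", 4.0), ("C", 3.0), ("C", 9.0)]
    end_of_observation = {"A": 10.0, "B": 6.0, "C": 12.0}

    mcf, cumulative = [], 0.0
    for unit, age in sorted(events, key=lambda e: e[1]):
        at_risk = sum(1 for t in end_of_observation.values() if t >= age)
        cumulative += 1.0 / at_risk          # each event adds 1 / (units at risk)
        mcf.append((age, cumulative))

    for age, value in mcf:
        print(f"age {age:4.1f} years: MCF = {value:.2f}")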

Date Created
2011

Investigation of sustainable and reliable design alternatives for water distribution systems

Description

Nowadays there is a pronounced interest in the need for sustainable and reliable infrastructure systems to address the challenges of future infrastructure development. This dissertation presents the research associated with understanding various sustainable and reliable design alternatives for water distribution systems. Although design of water distribution networks (WDN) is a thoroughly studied area, most researchers have focused on developing algorithms to solve the hard, non-linear optimization problems associated with WDN design. Cost has been the objective in most previous studies, with few models considering reliability as a constraint and even fewer accounting for the environmental impact of WDNs. The research presented in this dissertation combines all these important objectives into a multi-objective optimization framework. The model used in this research is an integration of a genetic algorithm optimization tool with a water network solver, EPANET. The objectives considered for the optimization are Life Cycle Costs (LCC) and Life Cycle Carbon Dioxide (CO2) Emissions (LCE), while system reliability is treated as a constraint. Three popularly used resilience metrics were investigated in this research for their efficiency in aiding the design of WDNs that are able to handle external natural and man-made shocks. The best performing resilience metric is incorporated into the optimization model as an additional objective. Various scenarios were developed for the design analysis in order to understand the trade-offs between the different critical parameters considered in this research. An approach is proposed and illustrated to identify the most sustainable and resilient design alternatives from the solution set obtained by the model employed in this research. The model is demonstrated using various benchmark networks that were studied previously. The size of the networks ranges from a simple 8-pipe system to a relatively large 2467-pipe one. The results from this research indicate that LCE can be reduced at a reasonable cost when a better design is chosen. Similarly, resilience could also be improved at an additional cost. The model used in this research is tailored to water distribution networks; however, the methodology could be adapted to other infrastructure systems as well.
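
The abstract does not name the three resilience metrics studied. As one example of a widely used WDN resilience metric (which may or may not be among those examined here), the sketch below computes Todini's resilience index in its basic form, ignoring pumps; all node and reservoir values are fabricated.

    # Sketch of Todini's resilience index, a widely used WDN resilience metric
    # (possibly, but not necessarily, one of the three metrics examined in this work).
    # Demand-node and reservoir data below are fabricated; pumps are ignored.

    def todini_resilience(demand_nodes, reservoirs):
        """demand_nodes: (demand, head, required_head); reservoirs: (inflow, head)."""
        surplus = sum(q * (h - h_req) for q, h, h_req in demand_nodes)
        supplied = sum(q_r * h_r for q_r, h_r in reservoirs)
        required = sum(q * h_req for q, _, h_req in demand_nodes)
        return surplus / (supplied - required)

    nodes = [(0.02, 35.0, 30.0), (0.015, 33.0, 30.0), (0.01, 31.0, 30.0)]  # m^3/s, m
    sources = [(0.045, 50.0)]                                              # m^3/s, m

    print(f"resilience index = {todini_resilience(nodes, sources):.3f}")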

Date Created
2012

Towards adaptive micro-robotic neural interfaces: autonomous navigation of microelectrodes in the brain for optimal neural recording

Description

Advances in implantable MEMS technology have made possible adaptive micro-robotic implants that can track and record from single neurons in the brain. The development of autonomous neural interfaces opens up exciting possibilities of micro-robots performing standard electrophysiological techniques for which researchers would previously need several hundred hours of training to achieve the desired skill level. It would result in more reliable and adaptive neural interfaces that could record optimal neural activity 24/7 with high fidelity signals, high yield and increased throughput. The main contribution here is validating adaptive strategies to overcome challenges in the autonomous navigation of microelectrodes inside the brain. The following issues pose significant challenges, as brain tissue is both functionally and structurally dynamic: a) time varying mechanical properties of the brain tissue-microelectrode interface due to the hyperelastic, viscoelastic nature of brain tissue; b) non-stationarities in the neural signal caused by mechanical and physiological events in the interface; and c) the lack of visual feedback of microelectrode position in brain tissue. A closed loop control algorithm is proposed here for autonomous navigation of microelectrodes in brain tissue while optimizing the signal-to-noise ratio of multi-unit neural recordings. The algorithm incorporates a quantitative understanding of the constitutive mechanical properties of soft viscoelastic tissue like the brain and is guided by models that predict the stresses developed in brain tissue during movement of the microelectrode. An optimal movement strategy is developed that achieves precise positioning of microelectrodes in the brain by minimizing the stresses developed in the surrounding tissue during navigation and maximizing the speed of movement. Results of testing the closed-loop control paradigm in short-term rodent experiments validated that it was possible to achieve a consistently high SNR throughout the duration of the experiment. At the systems level, a new generation of MEMS actuators for movable microelectrode arrays is characterized, and the MEMS device operation parameters are optimized for improved performance and reliability. Further, recommendations for packaging to minimize the form factor of the implant, and for the design of device mounting and implantation techniques for the MEMS microelectrode array to enhance the longevity of the implant, are also included in a top-down approach to achieving a reliable brain interface.
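
The abstract describes a closed-loop strategy that advances a microelectrode to maintain recording quality while limiting the stress induced in the surrounding tissue. The sketch below is a deliberately simplified, hypothetical control loop illustrating that trade-off; the thresholds, step sizes and travel-budget stress proxy are invented here and do not represent the thesis's stress models or algorithm.

    # Hypothetical sketch of a closed-loop microelectrode positioning policy:
    # advance only when recording quality drops, and cap movement by a crude
    # stress proxy. All thresholds and models are invented for illustration.

    def next_step_um(snr_db: float, recent_travel_um: float) -> float:
        """Return how far to advance the electrode (in micrometers) this cycle."""
        SNR_TARGET_DB = 10.0        # hypothetical acceptable recording quality
        MAX_STEP_UM = 9.0           # hypothetical actuator step limit
        TRAVEL_BUDGET_UM = 50.0     # crude proxy: limit travel per session to
                                    # bound accumulated tissue stress

        if snr_db >= SNR_TARGET_DB:
            return 0.0              # signal is good: hold position, let tissue relax
        if recent_travel_um >= TRAVEL_BUDGET_UM:
            return 0.0              # stress budget exhausted: wait before moving
        # Move further when the SNR deficit is larger, up to the step limit.
        deficit = SNR_TARGET_DB - snr_db
        return min(MAX_STEP_UM, 3.0 * deficit)

    print(next_step_um(snr_db=7.5, recent_travel_um=12.0))   # small corrective step
    print(next_step_um(snr_db=12.0, recent_travel_um=12.0))  # hold: SNR adequate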

Date Created
2013