Matching Items (80)
Description
The advent of threshold logic simplifies traditional Boolean logic to a single-level multi-input function. The threshold logic latch (TLL), among implementations of threshold logic, is functionally equivalent to a multi-input function with an edge-triggered flip-flop, and stands out for improving area and both dynamic and leakage power consumption, providing an appropriate design alternative. Accordingly, a TLL standard cell library is designed. Through technology mapping, a hybrid circuit is generated by absorbing the logic cone backward from each flip-flop to obtain the smallest remaining feeder. With the scan test methodology adopted, design for testability (DFT) is proposed, including scan element design and scan chain insertion. The test synthesis flow is then introduced, based on the Cadence tool RTL Compiler. Test application is the process of applying vectors and analyzing the responses, which centers on the testbench design. A parameterized, generic, self-checking Verilog testbench is designed for static fault detection. Test development refers to fault modeling and test generation. First, functional truth-table test generation for TLL cells is proposed. Before the truth-table test of the threshold function, the dependence on the sequence of applied vectors, i.e., the dependence of the current state on the previous state, must be eliminated. The transition test (dynamic pattern) on all weak inputs is proved able to test the reset function, which is supposed to erase this history in the reset phase before every evaluation phase. The remaining vectors in the truth table, excluding the weak inputs, are then applied statically (static patterns). Second, dynamic patterns for all weak inputs are proposed to detect structural transistor-level faults analyzed in the TLL cell, under a single-fault assumption and with stuck-at, stuck-on, and stuck-open faults under consideration. Containing those patterns, the functional test covers all testable structural faults inside the TLL. Third, at the scope of the whole hybrid netlist, a three-step test generation procedure is proposed: scan chain test; test of feeders and other scan elements except TLLs; and functional pattern test of TLL cells. Implementation of this procedure is discussed in the automatic test pattern generation (ATPG) chapter.
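For reference, a threshold function evaluates a weighted sum of its binary inputs against a fixed threshold. A minimal illustrative sketch follows; the weights and threshold shown are hypothetical, not values from the TLL library described above.

```python
# Illustrative threshold-logic evaluation: output is 1 when the weighted
# sum of the binary inputs meets or exceeds the threshold.
def threshold_gate(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Example: a 3-input majority function realized as a single threshold gate
# with unit weights and threshold 2 (hypothetical parameters).
for vector in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(vector, threshold_gate(vector, weights=(1, 1, 1), threshold=2))
```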
Contributors: Hu, Yang (Author) / Vrudhula, Sarma (Thesis advisor) / Barnaby, Hugh (Committee member) / Yu, Shimeng (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recent trends in the electric power industry have led to more attention to optimal operation of power transformers. In a deregulated environment, optimal operation means minimizing maintenance and extending the life of this critical and costly equipment for the purpose of maximizing profits. Optimal utilization of a transformer can be achieved through the use of dynamic loading. A benefit of dynamic loading is that it allows better utilization of the transformer capacity, thus increasing the flexibility and reliability of the power system. This document presents the progress on a software application which can estimate the maximum time-varying loading capability of transformers. This information can be used to load devices closer to their limits without exceeding the manufacturer-specified operating limits. Maximally efficient dynamic loading of transformers requires a model that can accurately predict both top-oil temperatures (TOTs) and hottest-spot temperatures (HSTs). In previous work, two kinds of TOT and HST thermal models were studied and used in the application: the IEEE TOT/HST models and the ASU TOT/HST models. Several metrics were applied to evaluate model acceptability and to determine the most appropriate models for use in the dynamic loading calculations. In this work, an investigation into improving the performance of the existing transformer thermal models is presented. Factors that may affect model performance, such as improper fan status and the error caused by the poor performance of the IEEE models, are discussed. Additional methods to determine the reliability of transformer thermal models, using metrics such as the time constant and the model parameters, are also provided. A new production-grade application for real-time dynamic loading operation is introduced. This application is developed using an existing planning application, TTeMP, which is designed for dispatchers and load specialists, as a starting point. To overcome the limitations of TTeMP, the new application can perform dynamic loading under emergency conditions, such as loss-of-transformer loading. It also has the capability to determine the emergency rating of the transformers for real-time estimation.
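To illustrate the kind of thermal model involved, the sketch below shows the exponential-response form used in IEEE-loading-guide-style top-oil models: the ultimate oil rise depends on the per-unit load and loss ratio, and the oil temperature approaches it with a first-order time constant. The parameter values here are hypothetical, and the exact equations of the thesis's IEEE and ASU models may differ.

```python
import math

def top_oil_rise(t_hours, K, delta_theta_rated=45.0, R=4.5, n=0.9,
                 tau_hours=3.0, initial_rise=20.0):
    """Exponential-response top-oil rise over ambient (deg C) for a constant
    per-unit load K; all parameter values here are hypothetical placeholders."""
    ultimate = delta_theta_rated * ((K**2 * R + 1) / (R + 1)) ** n
    return initial_rise + (ultimate - initial_rise) * (1 - math.exp(-t_hours / tau_hours))

# Example: top-oil rise 1, 2, and 4 hours into a 1.2 per-unit overload.
for t in (1, 2, 4):
    print(t, round(top_oil_rise(t, K=1.2), 1))
```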
Contributors: Zhang, Ming (Author) / Tylavsky, Daniel J (Thesis advisor) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The combined heat and power (CHP)-based distributed generation (DG) or distributed energy resources (DERs) are mature options available in the present energy market, considered to be an effective solution to promote energy efficiency. In the urban environment, the electricity, water, and natural gas distribution networks are becoming increasingly interconnected with the growing penetration of CHP-based DG. Subsequently, this emerging interdependence leads to new topics meriting serious consideration: how much CHP-based DG can be accommodated and where to locate these DERs, and, given preexisting constraints, how to quantify the mutual impacts on operating performance between these urban energy distribution networks and the CHP-based DG. The early research work was conducted to investigate the feasibility and design methods for one residential microgrid system based on the existing electricity, water, and gas infrastructures of a residential community, mainly focusing on economic planning. However, this proposed design method cannot determine the optimal DG sizing and siting for a larger test bed with the given information of energy infrastructures. In this context, a more systematic as well as generalized approach should be developed to solve these problems. In the later study, a model architecture that integrates urban electricity, water, and gas distribution networks with the CHP-based DG system was developed. The proposed approach addressed the challenge of identifying the optimal sizing and siting of the CHP-based DG on these urban energy networks, and the mutual impacts on operating performance were also quantified. For this study, the overall objective is to maximize the electrical output and recovered thermal output of the CHP-based DG units. The electricity, gas, and water system models were developed individually and coupled by the developed CHP-based DG system model. The resultant integrated system model is used to constrain the DG's electrical output and recovered thermal output, which are affected by multiple factors and thus analyzed in different case studies. The results indicate that the designed typical gas system is capable of supplying sufficient natural gas for normal DG operation, while the present water system cannot support the complete recovery of the exhaust heat from the DG units.
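In schematic form, the siting and sizing problem described above can be read as an optimization of the following kind; the notation is illustrative only and is not the thesis's own formulation.

```latex
\[
\max_{\text{DG sites and sizes}} \; \sum_{i \in \mathcal{D}} \bigl( P^{\mathrm{e}}_{i} + P^{\mathrm{th}}_{i} \bigr)
\quad \text{subject to} \quad
\begin{cases}
\text{electric network power-flow and voltage limits,}\\
\text{gas network flow and pressure limits,}\\
\text{water network flow limits (for heat recovery),}\\
0 \le P^{\mathrm{e}}_{i},\ P^{\mathrm{th}}_{i} \le \text{capacity of the unit at candidate site } i,
\end{cases}
\]
```

where $P^{\mathrm{e}}_{i}$ and $P^{\mathrm{th}}_{i}$ denote the electrical and recovered thermal outputs of a CHP unit at candidate site $i$ in the set of candidate locations $\mathcal{D}$.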
Contributors: Zhang, Xianjun (Author) / Karady, George G. (Thesis advisor) / Ariaratnam, Samuel T. (Committee member) / Holbert, Keith E. (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A distributed-parameter model is developed for a pressurized water reactor (PWR) in order to analyze the frequency behavior of the nuclear reactor. The model is built upon the partial differential equations describing heat transfer and fluid flow in the reactor core. As a comparison, a multi-lump reactor core model with five fuel lumps and ten coolant lumps using Mann's model is employed. The derivations of the different transfer functions in both models are also presented, with emphasis on the distributed-parameter model. In order to contrast the two models, Bode plots of the transfer functions are generated using data from the Palo Verde Nuclear Generating Station. Further, a detailed contradistinction between these two models is presented, from which the features of both models become apparent. The distributed-parameter model has the ability to offer an accurate transfer function at any location throughout the reactor core. In contrast, the multi-lump model can only provide the average value in a given region (lump). Also, in the distributed-parameter model only the feedback corresponding to the specific location under study is incorporated into the transfer function, whereas the transfer functions derived from the multi-lump model contain the average feedback effects occurring throughout the reactor core.
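To illustrate the kind of frequency-response comparison described, the sketch below generates a Bode plot for a generic first-order transfer function; the gain and time constant are hypothetical stand-ins, not the reactor transfer functions derived in the thesis.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Generic first-order lag G(s) = K / (tau*s + 1) as an illustrative stand-in.
K, tau = 2.0, 5.0
system = signal.TransferFunction([K], [tau, 1.0])

w = np.logspace(-3, 1, 200)                    # frequency grid, rad/s
w, mag_db, phase_deg = signal.bode(system, w)  # magnitude (dB) and phase (deg)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(w, mag_db)
ax1.set_ylabel("Magnitude (dB)")
ax2.semilogx(w, phase_deg)
ax2.set_ylabel("Phase (deg)")
ax2.set_xlabel("Frequency (rad/s)")
plt.show()
```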
Contributors: Zhang, Taipeng (Author) / Holbert, Keith E. (Thesis advisor) / Vittal, Vijay (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A fully automated logic design methodology for radiation hardened by design (RHBD) high-speed logic using fine-grained triple modular redundancy (TMR) is presented. The hardening techniques used in the cell library are described and evaluated, with a focus on both layout techniques that mitigate total ionizing dose (TID) and latchup issues and flip-flop designs that mitigate single event transient (SET) and single event upset (SEU) issues. The base TMR self-correcting master-slave flip-flop is described and compared to more traditional hardening techniques. Additional refinements are presented, including testability features that disable the self-correction to allow detection of manufacturing defects. The circuit approach is validated for hardness using both heavy-ion and proton broad-beam testing. For synthesis and auto place and route, the methodology and circuits leverage commercial logic design automation tools. These tools are glued together with custom CAD tools designed to enable easy conversion of standard single-redundant hardware description language (HDL) files into hardened TMR circuitry. The flow allows hardening of any synthesizable logic at clock frequencies comparable to unhardened designs and supports standard low-power techniques, e.g., clock gating and supply voltage scaling.
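As a behavioral illustration of the fine-grained TMR idea (not the thesis's circuit), a two-of-three majority vote masks a single upset copy, and writing the voted value back models the self-correction behavior.

```python
def majority(a: int, b: int, c: int) -> int:
    """Two-of-three majority vote over single-bit copies."""
    return (a & b) | (b & c) | (a & c)

class TmrBit:
    """Toy model of a self-correcting triplicated storage bit."""
    def __init__(self, value: int = 0):
        self.copies = [value, value, value]

    def upset(self, which: int):
        self.copies[which] ^= 1          # a single-event upset flips one copy

    def read(self) -> int:
        voted = majority(*self.copies)
        self.copies = [voted] * 3        # self-correction: scrub the upset copy
        return voted

bit = TmrBit(1)
bit.upset(0)          # one copy flips
print(bit.read())     # still reads 1; the flipped copy is repaired
```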
Contributors: Hindman, Nathan (Author) / Clark, Lawrence T (Thesis advisor) / Holbert, Keith E. (Committee member) / Barnaby, Hugh (Committee member) / Allee, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The dissolution of metal layers such as silver into chalcogenide glass layers such as germanium selenide changes the resistivity of the metal and chalcogenide films to a great extent. It is known that the incorporation of the metal can be achieved by ultraviolet light exposure or thermal processes. In this work, the use of metal dissolution by exposure to gamma radiation has been explored for radiation sensor applications. Test structures were designed and a process flow was developed for prototype sensor fabrication. The test structures were designed such that sensitivity to radiation could be studied. The focus is on the effect of gamma rays as well as ultraviolet light on silver dissolution in germanium selenide (Ge30Se70) chalcogenide glass. Ultraviolet radiation testing was used prior to gamma exposure to assess the basic mechanism. The test structures were electrically characterized before and after irradiation to assess the resistance change due to metal dissolution. A change in resistance was observed post irradiation and was found to be dependent on the radiation dose. The structures were also characterized using atomic force microscopy, and roughness measurements were made before and after irradiation. A change in roughness of the silver films on Ge30Se70 was observed following exposure. This indicated a loss of continuity of the film, which causes the increase in silver film resistance following irradiation. Recovery of the initial resistance in the structures was also observed after the radiation stress was removed. This recovery was explained by photo-stimulated deposition of silver from the chalcogenide at room temperature, confirmed by the reappearance of silver dendrites on the chalcogenide surface. The results demonstrate that it is possible to use the metal dissolution effect in radiation sensing applications.
Contributors: Chandran, Ankitha (Author) / Kozicki, Michael N (Thesis advisor) / Holbert, Keith E. (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Microprocessors are the processing heart of any digital system and are central to all the technological advancements of the age, including space exploration and monitoring. The demands of space exploration require a special class of microprocessors called radiation-hardened microprocessors, which are less susceptible to the radiation present outside the earth's atmosphere; in other words, their functioning is not disrupted even in the presence of such radiation. The presence of these particles forces designers to come up with design techniques at the circuit and chip levels to alleviate the errors which can be encountered in the functioning of microprocessors. Microprocessor evolution has been very rapid in terms of performance, but the same cannot be said of its rad-hard counterpart. With the total data processing capability increasing rapidly overall, the clear lack of performance of these processors manifests as a bottleneck in any processing system. To design high-performance rad-hard microprocessors, designers have to overcome difficult problems at various design stages, i.e., architecture, synthesis, floorplanning, optimization, routing, and analysis, all the while maintaining circuit radiation hardness. The reference design, HERMES, is targeted at the 90 nm IBM G process and is expected to reach 500 MHz, which is twice as fast as any such processor currently available. Chapter 1 discusses the mechanisms of radiation effects which cause upsets and degradation in the functioning of digital circuits. Chapter 2 gives a brief description of the components which are used in the design and are part of the consistent efforts at the ASU VLSI lab culminating in this chip-level implementation of the design. Chapter 3 explains the basic digital design ASIC flow and the changes made to it, leading to a rad-hard-specific ASIC flow used in implementing this chip. Chapter 4 discusses the triple-mode-redundant (TMR) specific flow used in the block implementation, delineating the challenges faced and the solutions proposed to make the flow work. Chapter 5 explains the challenges faced and the solutions arrived at while using the top-level flow described in Chapter 3. Chapter 6 puts together the results and analyzes the design in terms of basic integrated circuit design constraints.
Contributors: Ramamurthy, Chandarasekaran (Author) / Clark, Lawrence T (Thesis advisor) / Holbert, Keith E. (Committee member) / Barnaby, Hugh J (Committee member) / Mayhew, David (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the geometric growth in integrated circuit technology due to transistor scaling, along with the system-on-chip design strategy, the complexity of integrated circuits has increased manifold. Short time to market with high reliability and performance is one of the most competitive challenges. Both custom and ASIC design methodologies have evolved over time to cope with this, but the high manual labor in custom design and the statistical design in ASIC flows are still causes for concern. This work proposes a new circuit design strategy that focuses mostly on arrayed structures such as TLB, RF, cache, and IPCAM, reduces the manual effort to a great extent, and makes the design regular and repetitive while still achieving high performance. The method proposes making the complete design a custom schematic but using standard cells. This requires adding some custom cells to the already exhaustive library to optimize the design for performance. Once the schematic is finalized, the designer places these standard cells in a spreadsheet, placing the cells in the critical paths close together. A Perl script then generates a Cadence Encounter-compatible placement file. The design is then routed in Encounter. Since the designer is the best judge of the circuit architecture, placement by the designer allows the most optimal design to be achieved. Several designs, including IPCAM, issue logic, TLB, RF, and cache, were carried out, and their performance was compared against the fully custom and ASIC flows. The TLB, RF, and cache were part of the HERMES microprocessor.
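A minimal sketch of the kind of conversion the Perl script performs, reimplemented here in Python: it assumes a hypothetical CSV of instance name, cell master, and x/y location, and emits DEF-style COMPONENTS records, which is one placement format Encounter can consume. It is illustrative only and not the exact file format of the thesis flow.

```python
import csv

def csv_to_def_components(csv_path: str) -> str:
    """Convert a hypothetical placement spreadsheet (instance,cell,x_um,y_um)
    into DEF-style COMPONENTS records with fixed placements."""
    with open(csv_path) as f:
        rows = list(csv.DictReader(f))
    lines = [f"COMPONENTS {len(rows)} ;"]
    for r in rows:
        # DEF coordinates are in database units; assume 1000 DBU per micron here.
        x = int(float(r["x_um"]) * 1000)
        y = int(float(r["y_um"]) * 1000)
        lines.append(f"- {r['instance']} {r['cell']} + PLACED ( {x} {y} ) N ;")
    lines.append("END COMPONENTS")
    return "\n".join(lines)

# Example usage (with a hypothetical spreadsheet export):
# print(csv_to_def_components("placement.csv"))
```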
Contributors: Maurya, Satendra Kumar (Author) / Clark, Lawrence T (Thesis advisor) / Holbert, Keith E. (Committee member) / Vrudhula, Sarma (Committee member) / Allee, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The field of flexible displays and electronics has gained significant momentum in recent years due to their ruggedness, thinness, and flexibility, as well as low-cost, large-area manufacturability. Amorphous silicon has been the dominant material in the thin-film transistor (TFT) industry, but it can only be used to make N-type TFTs. Amorphous silicon is also unstable in low-temperature manufacturing processes, and having only one kind of transistor means high power consumption for circuit operation. This thesis covers three major research efforts on flexible TFTs and flexible electronic circuits. First, the characterization of both amorphous silicon TFTs and newly emerging mixed-oxide TFTs was performed and the stability of the two materials is compared. During the research, both TFT types were stress tested under various biasing conditions, and the threshold voltage was extracted to observe the shift in threshold, which indicates degradation of the material. Second, the design of the first flexible CMOS TFTs and CMOS gates is covered. The circuits were built using both inorganic and organic components (for nMOS and pMOS transistors, respectively), functionality tests were performed on basic gates such as inverter, NAND, and NOR gates, and the working results are documented. Third, a novel large-area sensor structure is demonstrated in the Electronic Textile project section. This project is based on the observation that flexible electronics are flexible in only one direction and cannot be used to conform to irregularly shaped objects or to create an electronic cloth for applications such as display or sensing. A laser detector sensor array is designed as a proof of concept and is laid out in strips that can be cut after manufacturing and woven together to create a truly flexible electronic textile. The circuit uses a unique architecture that pushes data onto a single line, reads the data back from the same line, and compares the signal to the original state to detect a sensor excitation. This architecture enables two-dimensional addressing through an external controller while eliminating the need for two-dimensional, active-matrix-style electrical connections between the fibers.
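Threshold-voltage extraction from measured transfer curves is commonly done by linear extrapolation at the point of maximum transconductance; the sketch below assumes that method (the abstract does not specify the extraction technique used) and uses synthetic data in place of measurements.

```python
import numpy as np

def vth_linear_extrapolation(vgs, ids):
    """Estimate Vth by extrapolating the Id-Vgs curve from the point of
    maximum transconductance (gm = dId/dVgs) down to Id = 0."""
    gm = np.gradient(ids, vgs)
    k = int(np.argmax(gm))              # steepest point of the transfer curve
    return vgs[k] - ids[k] / gm[k]      # x-intercept of the tangent line

# Synthetic transfer curve for illustration (hypothetical device values).
vgs = np.linspace(0.0, 10.0, 101)
vth_true, k_factor = 2.5, 1e-7
ids = np.where(vgs > vth_true, k_factor * (vgs - vth_true), 0.0)
print(round(vth_linear_extrapolation(vgs, ids), 2))   # close to 2.5
```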
Contributors: Kaftanoglu, Korhan (Author) / Allee, David R. (Thesis advisor) / Kozicki, Michael N (Committee member) / Holbert, Keith E. (Committee member) / Kaminski, Jann P (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Renewable energy has been a very hot topic in recent years due to the conventional energy crisis. Incentives that encourage renewables have been established all over the world, and ordinary homeowners are also seeking ways to exploit renewable energy. In this thesis, a residential PV system, a wind turbine system, and a hybrid wind/solar system are all investigated. The solar energy received by the PV panels varies with many factors, the most essential of which is the irradiance. When PV panels are installed with different orientations, the incident insolation received by the panel will also differ. The differing insolation corresponds to the different angles between the incident irradiance and the panel throughout the day. The results show that for PV panels in the northern hemisphere, those facing south obtain the highest insolation and thus generate the most electricity. However, under the two different electricity rate plans, the flat-rate plan and the TOU (time-of-use) plan, the value of the electricity that PV generates differs. For wind energy, the wind speed is the most significant variable determining the generation of a wind turbine. Unlike solar energy, wind energy is much more regionally dependent, and wind resources vary even between nearby locations. As expected, the results show that higher wind speed leads to more electricity generation and thus a shorter payback period. For the PV/wind hybrid system, two real cases are analyzed for Altamont and Midhill, CA. In this part, the impact of incentives, system cost, and system size are considered. With a hybrid system, homeowners may choose different size combinations of PV and wind turbines. It turns out that for these two locations, the system with larger PV output always achieves a shorter payback period due to its lower cost. However, over a longer term, a system with a larger wind turbine in locations with excellent wind resources may yield a higher return on investment. Meanwhile, the impacts of both wind and solar incentives (mainly utility rebates) are analyzed. Finally, the effects of the cost of both renewables are examined.
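As a rough illustration of the payback comparison described, the sketch below computes a simple payback period under a flat electricity rate; the costs, rebates, and rates are hypothetical placeholders, not the thesis's case-study numbers, and TOU pricing, degradation, and rate escalation are ignored.

```python
def simple_payback_years(installed_cost, rebates, annual_kwh, rate_per_kwh):
    """Years to recover the net system cost from avoided electricity purchases
    under a flat rate (hypothetical inputs; ignores TOU pricing and escalation)."""
    net_cost = installed_cost - rebates
    annual_savings = annual_kwh * rate_per_kwh
    return net_cost / annual_savings

# Hypothetical residential PV example.
print(round(simple_payback_years(installed_cost=15000, rebates=5000,
                                 annual_kwh=8000, rate_per_kwh=0.12), 1))
```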
Contributors: An, Wen (Author) / Holbert, Keith E. (Thesis advisor) / Karady, George G. (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2012