Matching Items (183)
Description
This thesis pursues a method to deregulate the electric distribution system and provide support to distributed renewable generation. A locational marginal price is used to determine prices across a distribution network in real time. The real-time pricing may provide benefits such as a reduced electricity bill, decreased peak demand, and lower emissions. This distribution locational marginal price (D-LMP) determines the cost of electricity at each node in the electrical network. The D-LMP comprises the cost of energy, the cost of losses, and a renewable energy premium. The renewable premium is an adjustable function to compensate 'green' distributed generation. A D-LMP is derived and formulated from the PJM model, along with several alternative formulations. The logistics and infrastructure of an implementation are briefly discussed. This study also takes advantage of the D-LMP real-time pricing to implement distributed storage technology. A storage schedule optimization is developed using linear programming. Day-ahead LMPs and historical load data are used to determine a predictive optimization. A test bed is created to represent a practical electric distribution system. Historical load, solar, and LMP data are used in the test bed to create a realistic environment. A power flow study and tabulation of the D-LMPs were conducted for twelve test cases. The test cases included various penetrations of solar photovoltaics (PV), system networking, and the inclusion of storage technology. Tables of the D-LMPs and network voltages are presented in this work. The final costs are summed and the basic economics are examined. The use of a D-LMP can lower costs across a system when advanced technologies are used. Storage improves system costs, decreases losses, improves system load factor, and bolsters voltage. Solar energy provides many of these same attributes at lower penetrations, but high penetrations have a detrimental effect on the system. System networking also increases these positive effects. The D-LMP has a positive impact on residential customer cost, while greatly increasing costs for the industrial sector. The D-LMP appears to have many positive impacts on the distribution system, but proper cost allocation needs further development.
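The storage-scheduling step lends itself to a compact linear program. The sketch below is a minimal illustration rather than the thesis's actual formulation: it schedules an ideal (lossless) battery against a hypothetical day-ahead price vector using scipy's LP solver, and all limits and prices are invented for the example.

```python
# A minimal sketch of day-ahead storage scheduling as a linear program,
# assuming a lossless battery and price arbitrage only. Names, limits, and
# the hourly price vector are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

prices = np.array([22, 20, 19, 21, 25, 32, 41, 38,
                   35, 33, 34, 36, 39, 45, 55, 60,
                   58, 50, 44, 40, 35, 30, 26, 24], dtype=float)  # $/MWh
T = len(prices)
p_max, e_max, soc0, dt = 1.0, 4.0, 2.0, 1.0  # MW, MWh, MWh, h

# Decision variable p[t]: charging power (negative = discharging).
# Objective: minimize energy cost sum(price[t] * p[t] * dt).
c = prices * dt

# State of charge after hour t: soc0 + dt * cumsum(p), kept in [0, e_max].
L = np.tril(np.ones((T, T))) * dt
A_ub = np.vstack([L, -L])                     # upper and lower SoC bounds
b_ub = np.concatenate([np.full(T, e_max - soc0),
                       np.full(T, soc0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-p_max, p_max)] * T,
              method="highs")
print("schedule (MW):", np.round(res.x, 2))
print("net cost ($):", round(res.fun, 2))
```

The optimum charges in the cheap early hours and discharges around the price peak, which is the mechanism by which real-time or day-ahead pricing turns storage into a cost-reducing asset.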
ContributorsKiefer, Brian Daniel (Author) / Heydt, Gerald T (Thesis advisor) / Shunk, Dan (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created2011
Description
Radiation-induced gain degradation in bipolar devices is considered to be the primary threat to linear bipolar circuits operating in the space environment. The damage is primarily caused by charged particles trapped in the Earth's magnetosphere, the solar wind, and cosmic rays. This constant radiation exposure leads to early end-of-life expectancies for many electronic parts. Exposure to ionizing radiation increases the density of oxide and interfacial defects in bipolar oxides, leading to an increase in base current in bipolar junction transistors. Radiation-induced excess base current is the primary cause of current gain degradation. Analysis of the base current response can enable the measurement of defects generated by radiation exposure. In addition to radiation, the space environment is also characterized by extreme temperature fluctuations. Temperature, like radiation, also has a very strong impact on base current. Thus, a technique for separating the effects of radiation from thermal effects is necessary in order to accurately measure radiation-induced damage in space. This thesis focuses on the extraction of radiation damage in lateral PNP bipolar junction transistors operating in the space environment. It also describes the measurement techniques used and provides a quantitative analysis methodology for separating radiation and thermal effects on the bipolar base current.
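The separation idea can be illustrated with a simple diode-law model of base current. In the hypothetical sketch below, a pre-irradiation measurement fixes the thermal baseline at a given temperature, and the radiation-induced excess base current is whatever the post-irradiation measurement adds on top of it; all parameter values are invented for illustration and are not from the thesis.

```python
# A hypothetical illustration of the separation idea: model the
# pre-irradiation base current with an ideal-diode fit at a given
# temperature, then attribute the remainder of the measured
# post-irradiation current to radiation-induced recombination
# (the "excess base current"). Parameter values are invented.
import numpy as np

k_q = 8.617e-5  # Boltzmann constant / electron charge, eV/K

def base_current(v_be, temp_k, i_s, n):
    """Simple diode-law base current model."""
    return i_s * np.exp(v_be / (n * k_q * temp_k))

v_be = np.linspace(0.4, 0.7, 7)
temp = 300.0

# Pre-irradiation baseline: thermal behavior captured by i_s(T), n ~ 1.
i_b_pre = base_current(v_be, temp, i_s=1e-16, n=1.0)

# Post-irradiation measurement: baseline plus a recombination component
# with ideality near 2, the usual signature of interface-trap recombination.
i_b_post = i_b_pre + base_current(v_be, temp, i_s=5e-12, n=2.0)

# The excess base current isolates the radiation-induced defect response
# from the thermally driven baseline refit at the same temperature.
delta_i_b = i_b_post - i_b_pre
for v, d in zip(v_be, delta_i_b):
    print(f"V_BE = {v:.2f} V   delta I_B = {d:.3e} A")
```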
ContributorsCampola, Michael J (Author) / Barnaby, Hugh J (Thesis advisor) / Holbert, Keith E. (Committee member) / Vasileska, Dragica (Committee member) / Arizona State University (Publisher)
Created2011
Description
This research work describes the design of a fault current limiter (FCL) using digital logic and a microcontroller-based data acquisition system for an ultra-fast pilot protection system. These systems have been designed according to the requirements of the Future Renewable Electric Energy Delivery and Management (FREEDM) system (or loop), a 1 MW green energy hub. The FREEDM loop merges advanced power electronics technology with information technology to form an efficient power grid that can be integrated with the existing power system. With the addition of loads to the FREEDM system, the level of fault current rises because of the increased energy flow to supply the loads, and this requires the design of a limiter which can hold this current to a level which the existing switchgear can interrupt. The FCL limits the fault current to around three times the rated current. A fast-switching insulated-gate bipolar transistor (IGBT) with its gate control logic implements a switching strategy which enables this operation. A complete simulation of the system was built in Simulink, and it was verified that the FCL limits the fault current to 1000 A, compared to more than 3000 A in the absence of an FCL. This setting is made user-defined. In the FREEDM system, there is a need to interrupt a fault faster or make intelligent decisions relating to fault events, to ensure maximum availability of power to the loads connected to the system. This necessitates fast acquisition of data, which is performed by the designed data acquisition system. The microcontroller acquires the data from a current transformer (CT). Measurements are made at different points in the FREEDM system and merged together as input to the intelligent protection algorithm that has been developed by another student on the project. The algorithm generates a tripping signal in the event of a fault. The developed hardware and the programmed software to accomplish data acquisition and transmission are presented here. The designed FCL ensures that the existing switchgear need not be replaced, thus aiding future power system expansion. The developed data acquisition system enables fast fault sensing, improving the reliability of the protection scheme.
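The limiting behavior reduces to a comparator with hysteresis on the measured current. The toy sketch below is an assumption-laden illustration rather than the thesis's actual gate logic: it clamps the let-through current at a user-defined multiple of rated current and re-closes once the current decays, with all thresholds invented.

```python
# A toy sketch of current-limiting gate logic: a comparator with hysteresis
# drives the IGBT gate so the let-through current is clamped to a
# user-defined multiple of rated current. Thresholds are illustrative.
RATED_A = 350.0
LIMIT_A = 3.0 * RATED_A      # user-defined limiting threshold (~1000 A)
RESET_A = 0.8 * LIMIT_A      # hysteresis band so the gate does not chatter

def gate_signal(samples):
    """Return the IGBT gate state per current sample (True = conducting)."""
    gate, states = True, []
    for i in samples:
        if gate and abs(i) > LIMIT_A:
            gate = False      # divert current into the limiting branch
        elif not gate and abs(i) < RESET_A:
            gate = True       # current has decayed: re-close the switch
        states.append(gate)
    return states

# Example: a fault ramps the current well past the limit, then decays.
samples = [300, 600, 1200, 3000, 2000, 900, 700, 400]
print(gate_signal(samples))
```

The hysteresis band is the design choice that keeps the gate from chattering around the threshold while the limited fault current decays.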
ContributorsThirumalai, Arvind (Author) / Karady, George G. (Thesis advisor) / Vittal, Vijay (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created2011
Description
The increased use of commercial complementary metal-oxide-semiconductor (CMOS) technologies in harsh radiation environments has resulted in a new approach to radiation effects mitigation. This approach utilizes simulation to support the design of integrated circuits (ICs) to meet targeted tolerance specifications. Modeling the deleterious impact of ionizing radiation on ICs fabricated in advanced CMOS technologies requires understanding and analyzing the basic mechanisms that result in the buildup of radiation-induced defects in specific sensitive regions. Extensive experimental studies have demonstrated that the sensitive regions are the shallow trench isolation (STI) oxides. Nevertheless, very little work has been done to model the physical mechanisms that result in the buildup of radiation-induced defects and the radiation response of devices fabricated in these technologies. A comprehensive study of the physical mechanisms contributing to the buildup of radiation-induced oxide trapped charges and the generation of interface traps in advanced CMOS devices is presented in this dissertation. The basic mechanisms contributing to the buildup of radiation-induced defects are explored using a physical model built on kinetic equations that capture total ionizing dose (TID) and dose rate effects in silicon dioxide (SiO2). These mechanisms are formulated into analytical models that calculate oxide trapped charge density (Not) and interface trap density (Nit) in the sensitive regions of deep-submicron devices. Experiments performed on field-oxide field-effect transistors (FOXFETs) and metal-oxide-semiconductor (MOS) capacitors permit investigating TID effects and provide a comparison for the radiation response of advanced CMOS devices. When used in conjunction with closed-form expressions for surface potential, the analytical models enable an accurate description of radiation-induced degradation of transistor electrical characteristics. In this dissertation, the incorporation of TID effects in advanced CMOS devices into surface-potential-based compact models is also presented. This incorporation is accomplished through modifications of the corresponding surface potential equations (SPE), allowing the inclusion of radiation-induced defects (i.e., Not and Nit) in the calculation of surface potential. Verification of the compact modeling approach is achieved via comparison with experimental data obtained from FOXFETs fabricated in a 90 nm low-standby-power commercial bulk CMOS technology and numerical simulations of fully depleted (FD) silicon-on-insulator (SOI) n-channel transistors.
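The flavor of the kinetic-equation approach can be conveyed with a single rate balance between hole trapping and annealing. The sketch below is a hypothetical first-order model with placeholder rate constants and units, not the dissertation's fitted model; it reproduces only the qualitative dose-rate effect, in which the same total dose delivered more slowly leaves less trapped charge because annealing has more time to act.

```python
# A minimal numerical sketch of a kinetic balance for oxide trapped charge:
# buildup from hole capture at available traps competing with first-order
# annealing. Rate constants, densities, and units are placeholders.
import numpy as np

def not_buildup(dose_rate, t_end, n_t=1e12, k_gen=2e-3, tau=1e5, dt=10.0):
    """Euler integration of dNot/dt = k_gen*D*(n_t - Not) - Not/tau.

    n_t   : available trap density (cm^-2), placeholder value
    k_gen : effective capture coefficient per unit dose (placeholder units)
    tau   : anneal time constant (s), placeholder value
    """
    n_ot = 0.0
    for _ in np.arange(0.0, t_end, dt):
        dn = k_gen * dose_rate * (n_t - n_ot) - n_ot / tau
        n_ot += dn * dt
    return n_ot

# Same total dose (1e4 rad) at a high and a low dose rate.
high = not_buildup(dose_rate=10.0, t_end=1e3)   # 10 rad/s for 1e3 s
low = not_buildup(dose_rate=0.1, t_end=1e5)     # 0.1 rad/s for 1e5 s
print(f"Not (high rate): {high:.3e} cm^-2")
print(f"Not (low rate):  {low:.3e} cm^-2")
```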
ContributorsSanchez Esqueda, Ivan (Author) / Barnaby, Hugh J (Committee member) / Schroder, Dieter K. (Thesis advisor) / Holbert, Keith E. (Committee member) / Gildenblat, Gennady (Committee member) / Arizona State University (Publisher)
Created2011
Description
With the advent of technologies such as web services, service-oriented architecture, and cloud computing, modern organizations have to deal with policies such as firewall policies to secure their networks and XACML (eXtensible Access Control Markup Language) policies for controlling access to critical information as well as resources. Management of these policies is an extremely important task for avoiding unintended security leakage via illegal access while maintaining proper access to services for legitimate users. Managing and maintaining access control policies manually over long periods of time is an error-prone task due to their inherently complex nature. Existing tools and mechanisms for policy management use different approaches for different types of policies. This thesis presents a generic framework that provides a unified approach for the analysis and management of different types of policies. The generic approach captures the common semantics and structure of different access control policies through the notion of a policy ontology. The policy ontology representation is then utilized for effectively analyzing and managing the policies. This thesis also discusses a proof-of-concept implementation of the proposed generic framework and demonstrates how efficiently this unified approach can be used for the analysis and management of different types of access control policies.
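As a hypothetical illustration of the unified-representation idea, the sketch below maps both a firewall rule and an XACML-style access rule onto one generic structure so that a single analysis routine (here, first-applicable evaluation) serves both. The field names and the crude matching logic are invented for the example, not the thesis's ontology.

```python
# A minimal sketch of a unified policy representation: one generic rule
# structure covers a firewall rule and an XACML-style access rule, so the
# same evaluation code analyzes both. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    subject: str    # source address for firewalls, role for XACML
    action: str     # port/protocol for firewalls, operation for XACML
    resource: str   # destination for firewalls, object for XACML
    effect: str     # "permit" or "deny"

def evaluate(policy, subject, action, resource):
    """First-applicable semantics: the first matching rule decides."""
    for rule in policy:
        if (rule.subject in (subject, "*") and
                rule.action in (action, "*") and
                rule.resource in (resource, "*")):
            return rule.effect
    return "deny"  # default-deny when no rule applies

firewall = [Rule("10.0.0.0/8", "tcp/22", "*", "deny"),
            Rule("*", "tcp/443", "web-server", "permit")]
xacml_like = [Rule("manager", "read", "payroll", "permit")]

print(evaluate(firewall, "10.0.0.0/8", "tcp/22", "web-server"))  # deny
print(evaluate(xacml_like, "manager", "read", "payroll"))        # permit
```

Once both policy types live in one structure, analyses such as conflict or redundancy detection can be written once instead of per policy language, which is the point of the unified approach.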
ContributorsKulkarni, Ketan (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created2011
Description
This dissertation is focused on building scalable Attribute Based Security Systems (ABSS), including efficient and privacy-preserving attribute based encryption schemes and applications to group communications and cloud computing. First, a Constant Ciphertext Policy Attribute Based Encryption (CCP-ABE) scheme is proposed. Existing Attribute Based Encryption (ABE) schemes usually incur large, linearly increasing ciphertext. The proposed CCP-ABE dramatically reduces the ciphertext to a small, constant size; it is the first ABE scheme to achieve constant ciphertext size. The proposed CCP-ABE scheme is also fully collusion-resistant, so that users cannot combine their attributes to elevate their decryption capacity. Next, efficient ABE schemes are applied to construct optimal group communication and broadcast encryption schemes. An attribute based Optimal Group Key (OGK) management scheme that attains communication-storage optimality without collusion vulnerability is presented. Then, a novel broadcast encryption model, Attribute Based Broadcast Encryption (ABBE), is introduced, which exploits the many-to-many nature of attributes to dramatically reduce the storage complexity from linear to logarithmic and to enable expressive attribute based access policies. Privacy issues are also considered and addressed in ABSS. First, a hidden-policy ABE scheme is proposed to protect receivers' privacy by hiding the access policy. Second, a new concept, Gradual Identity Exposure (GIE), is introduced to address the restrictions of hidden-policy ABE schemes. GIE's approach is to reveal the receivers' information gradually by allowing ciphertext recipients to decrypt the message using their possessed attributes one by one. If the receiver does not possess one attribute in this procedure, the remaining attributes stay hidden. Compared to hidden-policy solutions, GIE provides significant performance improvement in terms of reducing both computation and communication overhead. Last but not least, ABSS are incorporated into mobile cloud computing scenarios. In the proposed secure mobile cloud data management framework, lightweight mobile devices can securely outsource expensive ABE operations and data storage to untrusted cloud service providers. The reported scheme includes two components: (1) a Cloud-Assisted Attribute-Based Encryption/Decryption (CA-ABE) scheme and (2) an Attribute-Based Data Storage (ABDS) scheme that achieves information-theoretic optimality.
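The one-by-one decryption behavior of GIE can be mimicked with a layered toy cipher. The sketch below is NOT cryptographically secure and bears no relation to the pairing-based constructions real ABE schemes use; it only illustrates how a recipient peels one layer per attribute and how the first missing attribute halts any further exposure.

```python
# A toy (insecure) illustration of Gradual Identity Exposure: the payload is
# wrapped in one layer per required attribute, and a recipient peels layers
# one by one with the attributes it holds. Real GIE uses pairing-based ABE,
# not this hash-derived keystream trick.
import hashlib

def _xor_stream(data: bytes, attr: str) -> bytes:
    """Derive a keystream from the attribute and XOR it over the data."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(f"{attr}:{counter}".encode()).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def wrap(message: bytes, attributes):
    for attr in reversed(attributes):   # innermost layer = last attribute
        message = _xor_stream(message, attr)
    return message

def unwrap(ciphertext: bytes, attributes_held, attributes_required):
    for attr in attributes_required:
        if attr not in attributes_held:
            return None, attr           # exposure stops at the first gap
        ciphertext = _xor_stream(ciphertext, attr)
    return ciphertext, None

ct = wrap(b"quarterly report", ["employee", "finance", "manager"])
plain, missing = unwrap(ct, {"employee", "finance"},
                        ["employee", "finance", "manager"])
print(plain, "blocked at:", missing)    # None blocked at: manager
```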
ContributorsZhou, Zhibin (Author) / Huang, Dijiang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ahn, Gail-Joon (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created2011
Description
Hydropower generation is one of the clean renewable energy sources that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation of their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years. Consequently, the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the proper timing of rehabilitation is missed, an abrupt long-term outage could occur, followed by additional unnecessary repairs and inefficiencies. On the contrary, too early a replacement might waste revenue. The hydropower plants of the Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, within which only forced outage faults are considered as raw data for the reliability analyses. The mean cumulative repair functions (MCF) of each facility are determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, can also be fitted to represent equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant. Two case studies are presented to enhance the understanding of the availability of each power plant and to present economic evaluations for modernization. Also, equipment in a hydropower plant is categorized into two groups based on reliability for determining modernization timing, and suitable replacement periods are obtained using simulation.
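Nelson's graph method amounts to a simple nonparametric estimate: at each recurrence (forced-outage) time, the mean cumulative function increments by one over the number of units still under observation. The sketch below computes this MCF for a small invented fleet; the data are illustrative, not K-water records.

```python
# A minimal sketch of Nelson's nonparametric mean cumulative function (MCF)
# estimate for a fleet of repairable units. Each unit contributes its
# forced-outage times and an end-of-observation time; at each event the MCF
# increments by 1 / (units still under observation). Data are invented.
units = {
    "G1": {"failures": [2.1, 5.4, 8.8], "end": 10.0},   # times in years
    "G2": {"failures": [4.0, 9.5],      "end": 12.0},
    "G3": {"failures": [1.2],           "end": 7.0},
}

# Pool all recurrence times across the fleet, in time order.
events = sorted((t, uid) for uid, u in units.items() for t in u["failures"])

mcf = 0.0
for t, _ in events:
    at_risk = sum(1 for u in units.values() if u["end"] >= t)
    mcf += 1.0 / at_risk
    print(f"t = {t:4.1f} y   MCF = {mcf:.3f}")
```

Plotting the MCF against age makes the "graph" in Nelson's graph method: an upward-bending curve signals a worsening recurrence rate, which is the behavior a power law model is then fitted to.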
ContributorsKwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2011
Description
The electric transmission grid is conventionally treated as a fixed asset and is operated around a single topology. Though several instances of switching transmission lines for corrective mechanisms, congestion management, and minimization of losses can be found in the literature, the idea of co-optimizing transmission with generation dispatch has not been widely investigated. Network topology optimization exploits the redundancies that are an integral part of the network to allow for improvement in dispatch efficiency. Although the concept of a dispatchable network initially appears counterintuitive, questioning the wisdom of switching transmission lines on a regular basis, results obtained in previous research on transmission switching with a direct current optimal power flow (DCOPF) show significant cost reductions. This thesis on network topology optimization with the ACOPF emphasizes the need for additional research in this area. It examines the performance of network topology optimization in an alternating current (AC) setting and its impact on parameters such as active power loss and voltages that are ignored in the DC setting. An ACOPF model, with binary variables representing the status of transmission lines incorporated into the formulation, is written in AMPL, a mathematical programming language, and the optimization problem is solved using the solver KNITRO. The ACOPF is a nonconvex, nonlinear optimization problem, making it a very hard problem to solve. The introduction of binary variables makes the ACOPF a mixed-integer nonlinear programming problem, further increasing the complexity of the optimization problem. An iterative method of opening each transmission line individually before choosing the best solution has been proposed as a purely investigative approach to studying the impact of transmission switching with the ACOPF. Economic savings of up to 6% achieved using this approach indicate the potential of this concept. In addition, a heuristic has been proposed to improve the computational efficiency of network topology optimization. This research also makes a comparative analysis between transmission switching in a DC setting and switching in an AC setting. Results presented in this thesis indicate significant economic savings achieved by controlled topology optimization, thereby reconfirming the need for further examination of this idea.
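The investigative enumeration is a short loop: re-solve the ACOPF once per candidate line opening and keep the cheapest feasible result. In the sketch below, solve_acopf is a placeholder for the AMPL/KNITRO model described above, so this is a structural illustration under stated assumptions, not a working power flow.

```python
# A sketch of the single-line-opening enumeration: re-solve the ACOPF once
# with each line removed and keep the cheapest feasible topology.
# solve_acopf is a placeholder for the actual AMPL/KNITRO model; a real
# implementation supplies its own network data and solver interface.
def best_single_line_opening(lines, solve_acopf):
    """Try opening each transmission line individually; return the best."""
    base_cost = solve_acopf(open_lines=set())
    best_line, best_cost = None, base_cost
    for line in lines:
        cost = solve_acopf(open_lines={line})
        if cost is not None and cost < best_cost:  # None = infeasible solve
            best_line, best_cost = line, cost
    savings = 100.0 * (base_cost - best_cost) / base_cost
    return best_line, best_cost, savings

# Toy stand-in solver: pretend opening line "L7" relieves congestion.
fake_costs = {frozenset(): 1000.0, frozenset({"L7"}): 945.0}
solver = lambda open_lines: fake_costs.get(frozenset(open_lines), 1010.0)
print(best_single_line_opening(["L3", "L7", "L9"], solver))
# -> ('L7', 945.0, 5.5): a ~5.5% saving, of the order reported above
```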
ContributorsPotluri, Tejaswi (Author) / Hedman, Kory (Thesis advisor) / Vittal, Vijay (Committee member) / Heydt, Gerald (Committee member) / Arizona State University (Publisher)
Created2011
Description
Ever-shrinking time to market, along with short product lifetimes, has created a need to shorten the microprocessor design time. Verification of the design and its analysis are two major components of this design cycle. Design validation techniques can be broadly classified into two major categories: simulation-based approaches and formal techniques. Simulation-based microprocessor validation involves running millions of cycles using random or pseudorandom tests and allows verification of the register transfer level (RTL) model against an architectural model, i.e., that the processor executes instructions as required. The validation effort involves model checking against a high-level description or simulation of the design against the RTL implementation. Formal techniques exhaustively analyze parts of the design but do not verify the RTL against the architecture specification. The focus of this work is to implement a fully automated validation environment for a MIPS-based radiation-hardened microprocessor using simulation-based approaches. The basic framework uses the classical validation approach, in which the design to be validated is described in a hardware description language (HDL) such as VHDL or Verilog. To implement a simulation-based approach, a number of random or pseudorandom tests are generated. The output of the HDL-based design is compared against that obtained from a "perfect" model implementing similar functionality; a mismatch in the results would thus indicate a bug in the HDL-based design. Effort is made to design the environment in such a manner that it can support validation during different stages of the design cycle. The validation environment includes appropriate changes to support the architecture changes introduced by radiation hardening. The manner in which the validation environment is built is highly dependent on the specifications of the perfect model used for comparisons. This work implements the validation environment with two MIPS simulators as reference models. Two bugs have been discovered in the RTL model using this simulation-based validation environment.
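The core of such an environment is a lockstep comparison loop. The sketch below is a schematic illustration with placeholder simulator interfaces (run_rtl_step and run_iss_step are invented names), not the project's actual harness: after each retired instruction, the architectural registers from the RTL simulation are diffed against the reference simulator's.

```python
# A minimal sketch of the compare-against-reference flow: run the RTL
# simulation and a "perfect" instruction-set model on the same test program
# and diff the architectural state after every retired instruction.
# run_rtl_step / run_iss_step are placeholders for project-specific
# simulator interfaces.
def lockstep_compare(program, run_rtl_step, run_iss_step, n_regs=32):
    """Return the first mismatch as (pc, register, rtl, iss), or None."""
    for pc, _insn in enumerate(program):
        rtl_state = run_rtl_step()   # register file after this instruction
        iss_state = run_iss_step()
        for r in range(n_regs):
            if rtl_state[r] != iss_state[r]:
                return pc, r, rtl_state[r], iss_state[r]
    return None  # no divergence: RTL matches the reference model

# Toy usage with canned register snapshots standing in for the simulators.
rtl = iter([[0] * 32, [1] + [0] * 31])
iss = iter([[0] * 32, [2] + [0] * 31])
print(lockstep_compare(["nop", "addi"], lambda: next(rtl), lambda: next(iss)))
# -> (1, 0, 1, 2): first divergence at instruction 1, register 0
```

Any mismatch pinpoints the instruction and register where the RTL first diverges, which is the natural starting point for debugging the HDL.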
ContributorsSharma, Abhishek (Author) / Clark, Lawrence (Thesis advisor) / Holbert, Keith E. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2011
Description
Action language C+, which is based on nonmonotonic causal logic, is a formalism for describing properties of actions. The definite fragment of C+ is implemented in the Causal Calculator (CCalc), which is based on the reduction of nonmonotonic causal logic to propositional logic. This thesis describes the language of CCalc in terms of answer set programming (ASP), based on the translation of nonmonotonic causal logic to formulas under the stable model semantics. I designed a standard library that describes the constructs of the input language of CCalc in terms of ASP, allowing a simple, modular method to represent CCalc input programs in the language of ASP. Using the combination of the system F2LP and answer set solvers, this method achieves functionality close to that of CCalc while taking advantage of answer set solvers to yield efficient computation that is orders of magnitude faster than CCalc for many benchmark examples. In support of this, I created an automated translation system, Cplus2ASP, that implements the translation and encoding method and automatically invokes the necessary software to solve the translated input programs.
ContributorsCasolary, Michael (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created2011