Matching Items (32)

Description
Cyber-physical systems (CPS) are emerging as the underpinning technology for major industries in the 21st century. This dissertation focuses on two fundamental issues in cyber-physical systems: network interdependence and information dynamics. It consists of the following two main thrusts. The first thrust is targeted at understanding the impact of network interdependence. It is shown that a cyber-physical system built upon multiple interdependent networks is more vulnerable to attacks, since node failures in one network may result in failures in the other network, causing a cascade of failures that could potentially lead to the collapse of the entire infrastructure. There is thus a need to develop a new network science for modeling and quantifying cascading failures in multiple interdependent networks, and to develop network management algorithms that improve network robustness and ensure overall network reliability against cascading failures. To enhance system robustness, a "regular" allocation strategy is proposed that yields better resistance against cascading failures than existing strategies. Furthermore, in view of the load redistribution feature of many physical infrastructure networks, e.g., power grids, a CPS model is developed in which the threshold model and the giant connected component model capture node failures in the physical infrastructure network and the cyber network, respectively. The second thrust is centered around information dynamics in the CPS. One speculation is that the interconnections over multiple networks can facilitate information diffusion, since information propagation in one network can trigger further spread in the other network. With this insight, a theoretical framework is developed to analyze information epidemics across multiple interconnecting networks. It is shown that the conjoining among networks can dramatically speed up message diffusion.
Along a different avenue, many cyber-physical systems rely on wireless networks, which provide platforms for information exchange. To optimize the QoS of wireless networks, there is a need to develop high-throughput and low-complexity scheduling algorithms to control link dynamics. To that end, distributed link scheduling algorithms are explored for multi-hop MIMO networks, and two CSMA algorithms are devised under the continuous-time model and the discrete-time model, respectively.
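The cascade dynamics described above can be sketched in a few lines. The following toy simulation assumes two one-to-one interdependent networks in which, after an initial attack, a node survives only if it remains in the giant connected component of its own network and its interdependent partner is still alive; the 6-node ring topologies and the one-to-one coupling are illustrative assumptions, not the dissertation's exact model.

```python
# Toy cascade of failures across two one-to-one interdependent networks.

def giant_component(nodes, edges):
    """Largest connected component among the surviving `nodes`."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adj[n] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(edges_a, edges_b, n, attacked):
    """Iterate interdependent failures to a fixed point; return survivors."""
    alive_a = set(range(n)) - set(attacked)   # the attack hits network A
    alive_b = set(range(n))
    while True:
        alive_a = giant_component(alive_a, edges_a)
        alive_b = giant_component(alive_b, edges_b)
        common = alive_a & alive_b            # losing the partner kills a node
        if common == alive_a and common == alive_b:
            return common
        alive_a, alive_b = set(common), set(common)

# Example: two 6-node rings, coupled node-for-node; attacking node 0
# leaves a 5-node surviving path in each network.
ring = [(i, (i + 1) % 6) for i in range(6)]
survivors = cascade(ring, ring, 6, {0})
```

With denser or more fragile topologies the same loop exhibits the abrupt collapse that motivates the robustness analysis.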
ContributorsQian, Dajun (Author) / Zhang, Junshan (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created2012
Description
Transmission expansion planning (TEP) is a complex decision-making process that requires comprehensive analysis to determine the time, location, and number of electric power transmission facilities that are needed in the future power grid. This dissertation investigates the topic of solving TEP problems for large power systems. The dissertation can be divided into two parts. The first part focuses on developing a more accurate network model for TEP studies. First, a mixed-integer linear programming (MILP) based TEP model is proposed for solving multi-stage TEP problems. Compared with previous work, the proposed approach reduces the number of variables and constraints needed and improves the computational efficiency significantly. Second, the AC power flow model is applied to TEP models; relaxations and reformulations are proposed to make the AC-model-based TEP problem solvable. Third, a convexified AC network model is proposed for TEP studies, with reactive power and off-nominal bus voltage magnitudes included in the model. A MILP-based loss model and its relaxations are also investigated. The second part investigates uncertainty modeling in the TEP problem. A two-stage stochastic TEP model is proposed, and decomposition algorithms based on the L-shaped method and progressive hedging (PH) are developed to solve the stochastic model. Results indicate that the stochastic TEP model gives a more accurate estimate of the annual operating cost than the deterministic TEP model, which considers only the peak load.
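The core trade-off in TEP, investment cost against operating cost, can be shown on a deliberately tiny instance. The sketch below brute-forces the choice (standing in for the MILP) of how many identical candidate lines to build between a cheap remote generator and a load bus; all costs, capacities, and the simple capacity-only transport flow model are illustrative assumptions, not data or models from the dissertation.

```python
# Toy single-stage expansion planning by exhaustive search.

def total_cost(n_new, load=80.0, existing_cap=50.0,
               line_cap=30.0, line_cost=10.0,
               cheap_price=1.0, local_price=5.0):
    cap = existing_cap + n_new * line_cap   # transfer capability to the load
    imported = min(load, cap)               # served by the cheap remote unit
    local = load - imported                 # shortfall served by a costly local unit
    return n_new * line_cost + imported * cheap_price + local * local_price

def plan(max_new=3):
    """Return (best number of new lines, its total cost)."""
    return min(((n, total_cost(n)) for n in range(max_new + 1)),
               key=lambda t: t[1])
```

Here building one line (total cost 90) beats building none (200) or two (100); a MILP solver reaches the same answer on instances far too large to enumerate.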
ContributorsZhang, Hui (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald T (Thesis advisor) / Mittelmann, Hans D (Committee member) / Hedman, Kory W (Committee member) / Arizona State University (Publisher)
Created2013
Description
The rapid advances in wireless communications and networking have given rise to a number of emerging heterogeneous wireless and mobile networks along with novel networking paradigms, including wireless sensor networks, mobile crowdsourcing, and mobile social networking. While offering promising solutions to a wide range of new applications, their widespread adoption and large-scale deployment are often hindered by people's concerns about security, user privacy, or both. In this dissertation, we aim to address a number of challenging security and privacy issues in heterogeneous wireless and mobile networks in an attempt to foster their widespread adoption. Our contributions are mainly fivefold. First, we introduce a novel secure and loss-resilient code dissemination scheme for wireless sensor networks deployed in hostile and harsh environments. Second, we devise a novel scheme that enables mobile users to detect any inauthentic or unsound location-based top-k query result returned by an untrusted location-based service provider. Third, we develop a novel verifiable privacy-preserving aggregation scheme for people-centric mobile sensing systems. Fourth, we present a suite of privacy-preserving profile matching protocols for proximity-based mobile social networking, which can support a wide range of matching metrics with different privacy levels. Last, we present a secure combination scheme for crowdsourcing-based cooperative spectrum sensing systems that enables robust primary user detection even when malicious cognitive radio users constitute the majority.
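The hiding principle behind privacy-preserving aggregation can be illustrated with additive masking: each pair of users shares a random mask that cancels in the sum, so an untrusted aggregator recovers the total reading but no individual value. This sketch shows only that principle; the dissertation's scheme is additionally verifiable, which this toy is not, and the single shared seed stands in for pairwise shared keys.

```python
import random

# Additive-masking aggregation: the aggregator learns only the sum.

def mask_readings(readings, modulus=2**31, seed=0):
    rng = random.Random(seed)               # stand-in for pairwise shared keys
    n = len(readings)
    pad = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.randrange(modulus)
            pad[i] += r                     # user i adds the pairwise mask
            pad[j] -= r                     # user j subtracts it, so masks cancel
    return [(x + pad[i]) % modulus for i, x in enumerate(readings)]

def aggregate(masked, modulus=2**31):
    return sum(masked) % modulus            # masks cancel modulo `modulus`
```

Because every mask appears once with each sign, the sum of the masked reports equals the sum of the true readings modulo the modulus.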
ContributorsZhang, Rui (Author) / Zhang, Yanchao (Thesis advisor) / Duman, Tolga Mete (Committee member) / Xue, Guoliang (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2013
Description
The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications, and require different control algorithms. The focus of this thesis is to design scheduling and power control algorithms for wireless networks and to analyze their performance. In this thesis, we first study the multicast capacity of wireless ad hoc networks. Gupta and Kumar studied the scaling law of the unicast capacity of wireless ad hoc networks, deriving the order of the unicast throughput as the number of nodes in the network goes to infinity. In our work, we characterize the scaling of the multicast capacity of large-scale MANETs under a delay constraint D. We first derive an upper bound on the multicast throughput, and then establish a lower bound on the multicast capacity by proposing a joint coding-scheduling algorithm that achieves a throughput within a logarithmic factor of the upper bound. We then study the power control problem in ad hoc wireless networks. We propose a distributed power control algorithm based on the Gibbs sampler, and prove that the algorithm is throughput optimal. Finally, we consider scheduling algorithms in collocated wireless networks with flow-level dynamics. Specifically, we study the delay performance of a workload-based scheduling algorithm with SRPT as a tie-breaking rule. We demonstrate the superior flow-level delay performance of the proposed algorithm using simulations.
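The SRPT tie-breaking rule mentioned above is easy to demonstrate in a slot-by-slot simulation: in each slot one flow transmits one unit, chosen as the present flow with the Shortest Remaining Processing Time. The single-channel model and the arrival data below are illustrative assumptions, not the collocated-network model analyzed in the thesis.

```python
# Slot-by-slot SRPT scheduling of dynamically arriving flows.

def srpt_delays(arrivals):
    """arrivals: list of (arrival_slot, size); returns {flow: delay in slots}."""
    remaining = {i: size for i, (_, size) in enumerate(arrivals)}
    delays, t = {}, 0
    while remaining:
        present = [i for i in remaining if arrivals[i][0] <= t]
        if present:
            i = min(present, key=lambda k: remaining[k])  # SRPT choice
            remaining[i] -= 1
            if remaining[i] == 0:
                delays[i] = t + 1 - arrivals[i][0]  # completion minus arrival
                del remaining[i]
        t += 1
    return delays

# Three flows: a long one and two short ones; SRPT lets the short flows
# jump ahead, keeping the average delay low.
delays = srpt_delays([(0, 3), (0, 1), (1, 2)])
```

Here the size-1 and size-2 flows finish with delays 1 and 2 while the size-3 flow waits, which is exactly the flow-level delay advantage the simulations quantify.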
ContributorsZhou, Shan (Author) / Ying, Lei (Thesis advisor) / Zhang, Yanchao (Committee member) / Zhang, Junshan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2013
Description
Data centers connect a large number of servers, requiring IO and switches with low power and delay. Virtualization of IO and network is crucial for these servers, which run virtual processes for computing, storage, and applications. We propose using the PCI Express (PCIe) protocol and a new PCIe switch fabric for IO and switch virtualization. The switch fabric has little data buffering, allowing up to 512 physical 10 Gb/s PCIe 2.0 lanes to be connected via a switch fabric. The switch is scalable, with adapters running multiple adaptation protocols, such as Ethernet over PCIe, PCIe over Internet, or FibreChannel over Ethernet. Such adaptation protocols allow integration of IO often required for disjoint data center applications such as storage and networking. The novel switch fabric, based on space-time carrier sensing, facilitates high-bandwidth, low-power, and low-delay multi-protocol switching. To achieve Terabit switching, both time (high transmission speed) and space (multi-stage interconnection network) technologies are required. In this work, we present the design of a multistage crossbar switch fabric for the PCIe system, organized as a Clos network with up to 256 lanes. The switch core consists of 48 16x16 crossbar sub-switches. We also propose a new output contention resolution algorithm utilizing an out-of-band Request-To-Send (RTS) / Clear-To-Send (CTS) protocol before sending PCIe packets through the switch fabric. Preliminary power and delay estimates are provided.
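One arbitration round of the out-of-band RTS/CTS idea can be sketched as follows: each input lane sends a Request-To-Send to its desired output, and every output returns a Clear-To-Send to exactly one requester while the rest retry later. Granting the lowest-numbered input is an illustrative arbitration policy assumed for this sketch, not necessarily the policy of the actual fabric.

```python
# One RTS/CTS arbitration round for output contention in a crossbar.

def resolve_contention(requests):
    """requests: {input_lane: output_lane} -> {output_lane: granted input}."""
    rts = {}
    for inp, out in requests.items():
        rts.setdefault(out, []).append(inp)           # collect RTS per output
    return {out: min(inps) for out, inps in rts.items()}  # one CTS per output

# Inputs 0 and 1 contend for output 2; input 3 has output 5 to itself.
grants = resolve_contention({0: 2, 1: 2, 3: 5})
```

Because contention is resolved before any PCIe packet enters the fabric, the crossbar itself needs almost no data buffering, consistent with the low-buffer design above.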
ContributorsLuo, Haojun (Author) / Hui, Joseph (Thesis advisor) / Song, Hongjiang (Committee member) / Reisslein, Martin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2013
Description
In the deregulated power system, locational marginal prices are used in transmission engineering predominantly as near real-time pricing signals. This work extends this concept to distribution engineering so that a distribution-class locational marginal price (DLMP) might be used for real-time pricing and control of advanced control systems in distribution circuits. A formulation for the DLMP signal is presented that is based on power flow sensitivities in a distribution system. A Jacobian-based sensitivity analysis has been developed for application in the distribution pricing method. Increasing deployment of distributed energy sources is being seen at the distribution level, and this trend is expected to continue. To facilitate an optimal use of the distributed infrastructure, the control of the energy demand on a feeder node in the distribution system has been formulated as a multiobjective optimization problem and a solution algorithm has been developed. In multiobjective problems the Pareto optimality criterion is generally applied, and commonly used solution algorithms are decision-based and heuristic. In contrast, a mathematically robust technique called normal boundary intersection has been modeled for use in this work, and the control variable is solved via separable programming. The Roy Billinton Test System (RBTS) has predominantly been used to demonstrate the application of the formulation in distribution system control. A parallel processing environment has been used to replicate the distributed nature of controls at many points in the distribution system. Interactions between the real-time prices in a distribution feeder and the nodal prices at the aggregated load bus have been investigated. The application of the formulations in an islanded operating condition has also been demonstrated.
The DLMP formulation has been validated using the test bed systems and a practical framework for its application in distribution engineering has been presented. The multiobjective optimization yields excellent results and is found to be robust for finer time resolutions. The work shown in this report is applicable to, and has been researched under the aegis of the Future Renewable Electric Energy Delivery and Management (FREEDM) center, which is a generation III National Science Foundation engineering research center headquartered at North Carolina State University.
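The flavor of a sensitivity-based distribution price can be conveyed on a deliberately simplified two-node feeder: the load-node price equals the substation energy price marked up by marginal feeder losses, dlmp = lmp * (1 + dLoss/dP). The quadratic per-unit loss model Loss = r * P**2 and all numbers here are assumptions made for this sketch, not the Jacobian-based formulation of the dissertation.

```python
# Two-node illustration of a loss-sensitivity price markup.

def dlmp(lmp, r_pu, load_pu):
    """Load-node price given substation price and marginal feeder losses."""
    marginal_loss = 2.0 * r_pu * load_pu   # d(r * P**2)/dP at the operating point
    return lmp * (1.0 + marginal_loss)

# e.g. a 30 $/MWh substation price, r = 0.05 pu, load = 0.8 pu
price = dlmp(30.0, 0.05, 0.8)
```

In the full formulation the scalar sensitivity 2*r*P is replaced by entries of the power flow Jacobian, so the markup varies node by node along the feeder.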
ContributorsRanganathan Sathyanarayana, Bharadwaj (Author) / Heydt, Gerald T (Thesis advisor) / Vittal, Vijay (Committee member) / Ayyanar, Raja (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2012
Description
The Smart Grid initiative describes the collaborative effort to modernize the U.S. electric power infrastructure. Modernization efforts incorporate digital data and information technology to effectuate control, enhance reliability, encourage small, customer-sited distributed generation (DG), and better utilize assets. The Smart Grid environment is envisioned to include distributed generation, flexible and controllable loads, and bidirectional communications using smart meters and other technologies. Sensory technology may be utilized as a tool that enhances operation, including operation of the distribution system. Addressing this point, a distribution system state estimation algorithm is developed in this thesis. The state estimation algorithm developed here utilizes distribution system modeling techniques to calculate a vector of state variables for a given set of measurements. Measurements include active and reactive power flows, voltage and current magnitudes, and phasor voltages with magnitude and angle information. The state estimator is envisioned as a tool embedded in distribution substation computers as part of distribution management systems (DMS); the estimator acts as a supervisory layer for a number of applications including distribution automation (DA), energy management, control, and switching. The distribution system state estimator is developed in full three-phase detail, and the effect of mutual coupling and single-phase laterals and loads on the solution is calculated. The network model comprises a full three-phase admittance matrix and a subset of equations that relates measurements to system states. Network equations and variables are represented in rectangular form; thus a linear calculation procedure may be employed. When initialized to the vector of measured quantities and approximated non-metered load values, the calculation procedure is non-iterative.
This dissertation presents background information used to develop the state estimation algorithm, considerations for distribution system modeling, and the formulation of the state estimator. Estimator performance for various power system test beds is investigated. Sample applications of the estimator to Smart Grid systems are presented. Applications include monitoring, enabling demand response (DR), voltage unbalance mitigation, and enhancing voltage control. Illustrations of these applications are shown. Also, examples of enhanced reliability and restoration using a sensory based automation infrastructure are shown.
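The non-iterative estimation step described above comes from the model being linear in rectangular coordinates: z = H x + e, solved in one shot as x = (H'WH)^-1 H'W z. The following minimal weighted-least-squares instance uses a 2-state example (the real and imaginary parts of one node voltage) with made-up numbers, purely to show the one-shot solve.

```python
# One-shot 2-state weighted least squares via the normal equations.

def wls_2state(H, z, w):
    """Solve (H'WH) x = H'Wz directly for a 2-element state vector."""
    A = [[sum(w[k] * H[k][i] * H[k][j] for k in range(len(z)))
          for j in range(2)] for i in range(2)]            # A = H'WH (2x2)
    b = [sum(w[k] * H[k][i] * z[k] for k in range(len(z)))
         for i in range(2)]                                # b = H'Wz
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,       # Cramer's rule
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Three consistent measurements of the true state [1.0, -0.1]:
# direct reads of each component plus their sum.
x = wls_2state([[1, 0], [0, 1], [1, 1]], [1.0, -0.1, 0.9], [1, 1, 1])
```

With noisy, redundant measurements the same single linear solve returns the weighted best fit, which is why no Newton-style iteration is needed.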
ContributorsHaughton, Daniel Andrew (Author) / Heydt, Gerald T (Thesis advisor) / Vittal, Vijay (Committee member) / Ayyanar, Raja (Committee member) / Hedman, Kory W (Committee member) / Arizona State University (Publisher)
Created2012
Description
Optical Instrument Transformers (OIT) have been developed as an alternative to traditional instrument transformers (IT). The question "Can optical instrument transformers substitute for the traditional transformers?" is the main motivation of this study. Finding the answer to this question and developing complete models are the contributions of this work. Dedicated test facilities are developed so that the steady-state and transient performances of the analog outputs of a magnetic current transformer (CT) and a magnetic voltage transformer (VT) are compared with those of an optical current transformer (OCT) and an optical voltage transformer (OVT), respectively. Frequency response characteristics of the OIT outputs are obtained. Comparison results show that OITs achieve the specified accuracy of 0.3% in all cases. They are linear, and DC offset does not saturate the systems. The OIT output signal has a 40-60 μs time delay, but this is typically less than the equivalent phase difference permitted by the IEEE and IEC standards for protection applications. Analog outputs have significantly higher bandwidths (adjustable to 20 to 40 kHz) than the IT. The digital output signal bandwidth (2.4 kHz) of an OCT is significantly lower than the analog signal bandwidth (20 kHz) due to the sampling rates involved. The OIT analog outputs may carry significant white noise of 6%, but the white noise does not affect accuracy or protection performance. Temperatures up to 50 °C do not adversely affect the performance of the OITs. Three types of models are developed for the analog outputs: analog, digital, and complete models. Well-known mathematical methods, such as network synthesis and Jones calculus, are applied. The developed models are compared with experimental results and are verified with simulation programs.
Results show differences of less than 1.5% for the OCT and 2% for the OVT, indicating that the developed models can be used for power system simulations and that the method used for their development can be applied to other brands of optical systems. The communication and data transfer between all-digital protection systems is investigated by developing a dedicated test facility. Test results show that relays and transformers from different manufacturers, based on the IEC standard, can serve the power system successfully.
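The claim that a 40-60 μs output delay is acceptable for protection can be checked with a one-line conversion to an equivalent phase shift, phase_degrees = delay * f * 360. The 60 Hz system frequency used below is an assumption of this sketch.

```python
# Convert a time delay to an equivalent phase shift at power frequency.

def delay_to_phase_deg(delay_s, freq_hz=60.0):
    """Phase shift in degrees corresponding to a pure time delay."""
    return delay_s * freq_hz * 360.0

# A 50 microsecond delay at 60 Hz:
phase = delay_to_phase_deg(50e-6)
```

The result is on the order of one degree, which is why the delay is typically within the phase-difference allowance of the protection standards cited above.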
ContributorsKucuksari, Sadik (Author) / Karady, George G. (Thesis advisor) / Heydt, Gerald T (Committee member) / Holbert, Keith E. (Committee member) / Ayyanar, Raja (Committee member) / Farmer, Richard (Committee member) / Arizona State University (Publisher)
Created2010
Description
With the proliferation of mobile computing and the Internet-of-Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating massive volumes of data at the network edge. Driven by this trend, there is an urgent need to push the artificial intelligence (AI) frontiers to the network edge to fully unleash the potential of edge big data. This dissertation aims to comprehensively study collaborative learning and optimization algorithms to build a foundation for edge intelligence. Under this common theme, the dissertation is broadly organized into three parts. The first part focuses on model learning with limited data and limited computing capability at the network edge. A global model initialization is first obtained by running federated learning (FL) across many edge devices, based on which a semi-supervised algorithm is devised for an edge device to carry out quick adaptation, aiming to address the insufficiency of labeled data and to learn a personalized model efficiently. In the second part, collaborative learning between the edge and the cloud is studied to achieve real-time edge intelligence. More specifically, a distributionally robust optimization (DRO) approach is proposed to enable the synergy between local data processing and cloud knowledge transfer. Two uncertainty models corresponding to the cloud knowledge transfer are investigated: a distribution uncertainty set based on the cloud data distribution, and a prior distribution of the edge model conditioned on the cloud model. Collaborative learning algorithms are developed along this line. The final part focuses on developing an offline model-based safe Inverse Reinforcement Learning (IRL) algorithm for connected Autonomous Vehicles (AVs). A reward penalty is introduced to penalize unsafe states, and a risk-measure-based approach is proposed to mitigate the model uncertainty introduced by offline training.
The experimental results demonstrate the improvement of the proposed algorithm over the existing baselines in terms of cumulative rewards.
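The federated learning step used for the global model initialization can be sketched as one bare-bones federated averaging (FedAvg) round: each edge device takes a local gradient step on its own data, and the server averages the resulting models. The scalar least-squares task (y = w * x), the step size, and the data below are illustrative assumptions, not the dissertation's setup.

```python
# One federated averaging round on a scalar least-squares model.

def local_step(w, data, lr=0.1):
    """One gradient step on mean squared error for the model y = w * x."""
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg_round(w_global, devices, lr=0.1):
    """Each device updates locally; the server averages the results."""
    return sum(local_step(w_global, d, lr) for d in devices) / len(devices)

# Two devices whose private data both follow y = 2x; no raw data
# leaves a device, only the updated model parameters.
w = fedavg_round(0.0, [[(1, 2), (2, 4)], [(1, 2), (3, 6)]])
```

Repeating such rounds drives the shared parameter toward the common optimum while each device's raw data stays local, which is the starting point for the personalization step described above.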
ContributorsZhang, Zhaofeng (Author) / Zhang, Junshan (Thesis advisor) / Zhang, Yanchao (Thesis advisor) / Dasarathy, Gautam (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created2023
Description
Existing radio access networks (RANs) allow only very limited sharing of the communication and computation resources among wireless operators and heterogeneous wireless technologies. The introduced LayBack architecture facilitates communication and computation resource sharing among different wireless operators and technologies. LayBack organizes the RAN communication and multi-access edge computing (MEC) resources into layers, including a devices layer, a radio node (enhanced Node B and access point) layer, and a gateway layer. The LayBack optimization study addresses the problem of how a central SDN orchestrator can flexibly share the total backhaul capacity of the various wireless operators among their gateways and radio nodes (e.g., LTE enhanced Node Bs or Wi-Fi access points). In order to facilitate flexible network service virtualization and migration, network functions (NFs) are increasingly executed by software modules as so-called "softwarized NFs" on General-Purpose Computing (GPC) platforms and infrastructures. GPC platforms are not specifically designed to efficiently execute NFs with their typically intense Input/Output (I/O) demands. Recently, numerous hardware-based accelerations have been developed to augment GPC platforms and infrastructures, e.g., the central processing unit (CPU) and memory, to efficiently execute NFs. The computing capabilities of client devices are continuously increasing; at the same time, demands for ultra-low latency (ULL) services are increasing. These ULL services can be provided by migrating some micro-service container computations from the cloud and multi-access edge computing (MEC) to the client devices.
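The orchestrator's flexible backhaul sharing can be illustrated with a simple policy: the pooled capacity of all operators is split among gateways in proportion to their demands, and no gateway receives more than it asked for. This proportional policy is an assumed stand-in for illustration, not the LayBack optimization formulation itself, and the gateway names are hypothetical.

```python
# Proportional sharing of pooled backhaul capacity among gateways.

def share_backhaul(total_capacity, demands):
    """demands: {gateway: requested rate} -> {gateway: allocated rate}."""
    total_demand = sum(demands.values())
    if total_demand <= total_capacity:
        return dict(demands)                  # everyone fully served
    scale = total_capacity / total_demand     # proportional down-scaling
    return {g: d * scale for g, d in demands.items()}

# Two gateways over-request a pooled 100-unit backhaul.
alloc = share_backhaul(100.0, {'gw1': 80.0, 'gw2': 40.0})
```

Pooling pays off precisely when operators' peaks do not coincide: a gateway with slack implicitly lends capacity to a congested one, which a per-operator static split cannot do.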
ContributorsShantharama, Prateek (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Thyagaturu, Akhilesh (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2022