Matching Items (6)

Description
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in SBS. Current SBSs pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there are risks of unauthorized use of the users' sensitive data by service providers. Although there are many techniques for protecting users' data from outside attackers, there is currently no effective way to protect users' sensitive data from service providers. In this dissertation, an approach is presented for protecting the confidentiality of users' data from service providers and ensuring that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding the information of the owners of data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Since the approach to protecting users' data confidentiality includes software module decomposition and distributed execution, it is very important to effectively allocate the resources of servers in SBS to each of the software modules to manage the overall performance of workflows in SBS. An approach to resource allocation for SBS is presented that adaptively allocates the system resources of servers to their software modules at runtime in order to satisfy the performance requirements of multiple workflows in SBS. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.
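As a rough, hypothetical illustration of the kind of dynamic resource allocation described above (not the dissertation's actual polynomial-time algorithm), the Python sketch below gives each software module of a decomposed workflow a CPU share large enough to sustain a target request rate and spreads any leftover server capacity proportionally; the module names, service demands, and capacity figures are invented for the example.

```python
# A minimal sketch, assuming each module's per-request CPU demand and target
# request rate are known; illustrative only, not the dissertation's method.

def allocate_shares(modules, capacity):
    """modules: dict name -> (cpu_seconds_per_request, target_requests_per_sec).
    capacity: total CPU-seconds per second available on the server."""
    # Minimum share each module needs to sustain its target throughput.
    minimum = {m: demand * rate for m, (demand, rate) in modules.items()}
    total_min = sum(minimum.values())
    if total_min > capacity:
        raise ValueError("performance targets are infeasible on this server")
    slack = capacity - total_min
    # Give every module its minimum plus a proportional cut of the slack.
    return {m: minimum[m] + slack * minimum[m] / total_min for m in modules}

if __name__ == "__main__":
    # Hypothetical modules of a decomposed workflow (obfuscation, processing, storage).
    modules = {"obfuscate": (0.002, 500), "process": (0.010, 300), "store": (0.001, 800)}
    print(allocate_shares(modules, capacity=8.0))
```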
Contributors: An, Ho Geun (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Ahn, Gail-Joon (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Internet of Things (IoT) is emerging as part of the infrastructures for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitation of the computing resources of IoT devices, it is common to offload tasks of various applications requiring substantial computing resources to computing systems with sufficient computing resources, such as servers, cloud systems, and/or data centers for processing. However, this offloading method suffers from both high latency and network congestion in the IoT infrastructures.

Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and reduce network congestion. Yet, edge computing has its drawbacks, such as the limited computing resources of some edge computing devices and the unbalanced loads among these devices. In order to effectively explore the potential of edge computing to support IoT applications, it is necessary to have efficient task management and load balancing in edge computing networks.

In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of tasks. The QoS requirements include task completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, with consideration of tasks' priorities. The goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks that can be accommodated in the edge computing network and improves the efficiency of resource utilization.
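The sketch below is a greedy stand-in for the joint optimization described above, offered only to make the setting concrete: tasks are considered in priority order, and a task is admitted onto an edge node only if the node meets the task's security level and the estimated transfer-plus-compute time fits the deadline. All task and node fields, and the reservation rule, are hypothetical.

```python
# A simplified, hypothetical admission sketch; not the dissertation's model.

from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required
    data_mb: float     # input data to transfer (MB)
    deadline_s: float  # completion deadline (s)
    security: int      # minimum security level required
    priority: int      # larger value = more important

@dataclass
class EdgeNode:
    cpu_hz: float      # residual compute capacity
    bw_mbps: float     # residual link bandwidth
    security: int      # security level the node can guarantee

def admit(tasks, nodes):
    """Greedily place tasks, highest priority first; return the admitted tasks."""
    placed = []
    for t in sorted(tasks, key=lambda t: -t.priority):
        for n in nodes:
            latency = t.data_mb * 8 / n.bw_mbps + t.cycles / n.cpu_hz
            if n.security >= t.security and latency <= t.deadline_s:
                # Reserve compute and bandwidth at the rate needed over the period.
                n.cpu_hz -= t.cycles / t.deadline_s
                n.bw_mbps -= t.data_mb * 8 / t.deadline_s
                placed.append(t)
                break
    return placed

if __name__ == "__main__":
    nodes = [EdgeNode(cpu_hz=2e9, bw_mbps=100.0, security=2)]
    tasks = [Task(cycles=4e8, data_mb=5.0, deadline_s=1.0, security=1, priority=3),
             Task(cycles=3e9, data_mb=2.0, deadline_s=0.5, security=3, priority=5)]
    print(len(admit(tasks, nodes)), "task(s) admitted")
```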
Contributors: Song, Yaozhong (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The high R/X ratio of typical distribution systems makes the system voltage vulnerable to active power injection from distributed energy resources (DERs). Moreover, the intermittent and uncertain nature of DER generation brings new challenges to voltage management. As guided by the previous IEEE Standard 1547-2003, most of the existing photovoltaic (PV) systems in real distribution networks are equipped with conventional inverters, which only allow the PV systems to operate at unity power factor to generate active power. To utilize the voltage control capability of the existing PV systems following the guidelines of the revised IEEE Standard 1547-2018, this dissertation proposes a two-stage stochastic optimization strategy aimed at optimally placing PV smart inverters with Volt-VAr capability among the existing PV systems for distribution systems with high PV penetration to mitigate voltage violations. PV smart inverters are fast-response devices compared to conventional voltage control devices in the distribution system. Historically, distribution system planning and operation studies have mainly been based on quasi-static simulation, which ignores system dynamic transitions between static solutions. However, when high-penetration PV systems are present in the distribution system, the fast transients of the PV smart inverters cannot be ignored. A detailed dynamic model of the PV smart inverter with Volt-VAr control capability is developed as a dynamic link library (DLL) in OpenDSS to validate the system voltage stability with autonomous control of the optimally placed PV smart inverters. Static and dynamic verification is conducted on an actual 12.47 kV, 9 km-long Arizona utility feeder that serves residential customers. To achieve fast simulation and accommodate more complex PV models with the desired accuracy and efficiency, an integrative dynamic simulation framework for OpenDSS with adaptive step-size control is proposed. Based on the original fixed-step-size simulation framework in OpenDSS, the proposed framework adds a function to the OpenDSS main program to adjust its step size to meet the minimum step-size requirement from all the PV inverters in the system. Simulations are conducted using both the original and the proposed framework to validate the proposed simulation framework.
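As a small, self-contained illustration of the autonomous Volt-VAr behavior that such an inverter model exercises, the sketch below evaluates a piecewise-linear Q(V) characteristic; the breakpoints are generic illustrative values, not the dissertation's settings or the defaults mandated by IEEE 1547-2018.

```python
# A minimal sketch of a Volt-VAr droop curve; breakpoints are illustrative only.
import numpy as np

# (voltage_pu, q_pu) breakpoints: inject VArs at low voltage, absorb at high
# voltage, with a dead band around nominal. Positive Q = injection.
V_PTS = np.array([0.92, 0.98, 1.02, 1.08])
Q_PTS = np.array([0.44, 0.00, 0.00, -0.44])

def volt_var(v_pu: float) -> float:
    """Reactive power command (per unit of rated kVA) for a measured voltage."""
    return float(np.interp(v_pu, V_PTS, Q_PTS))  # clamps outside the curve

for v in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} pu -> Q = {volt_var(v):+.3f} pu")
```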
Contributors: Chen, Mengxi (Author) / Vittal, Vijay (Thesis advisor) / Ayyanar, Raja (Thesis advisor) / Hedman, Mojdeh (Committee member) / Wu, Meng (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Ensuring reliable operation of large power systems subjected to multiple outages is a challenging task because of the combinatorial nature of the problem. Traditional methods of steady-state security assessment in power systems involve contingency analysis based on AC or DC power flows. However, power flow based contingency analysis is not fast enough to evaluate all contingencies for real-time operations. Therefore, real-time contingency analysis (RTCA) only evaluates a subset of the contingencies (called the contingency list), and hence might miss critical contingencies that lead to cascading failures. This dissertation proposes a new graph-theoretic approach, called the feasibility test (FT) algorithm, for analyzing whether a contingency will create a saturated or overloaded cut-set in a meshed power network; a cut-set denotes a set of lines which, if tripped, separates the network into two disjoint islands. A novel feature of the proposed approach is that it lowers the solution time significantly, making the approach viable for an exhaustive real-time evaluation of the system. Detecting saturated cut-sets in the power system is important because they represent the vulnerable bottlenecks in the network. The robustness of the FT algorithm is demonstrated on a 17,000+ bus model of the Western Interconnection (WI). Following the detection of post-contingency cut-set saturation, a two-component methodology is proposed to enhance the reliability of large power systems during a series of outages. The first component combines the proposed FT algorithm with RTCA to create an integrated corrective action (iCA), whose goal is to secure the power system against post-contingency cut-set saturation as well as critical branch overloads. The second component only employs the results of the FT to create a relaxed corrective action (rCA) that quickly secures the system against saturated cut-sets. The first component is more comprehensive than the second, but the latter is computationally more efficient. The effectiveness of the two components is evaluated based upon the number of cascade-triggering contingencies alleviated and the computation time. Analysis of different case studies on the IEEE 118-bus and 2000-bus synthetic Texas systems indicates that the proposed two-component methodology enhances the scope and speed of power system security assessment during multiple outages.
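The toy example below is not the FT algorithm itself; it only illustrates the underlying max-flow / min-cut idea on an invented four-bus network: after removing an outaged line, if the maximum flow between a source and a sink falls below the required transfer, the surviving lines form a saturated cut-set. The bus numbers, line ratings, and transfer level are made up.

```python
# A hypothetical sketch using networkx; all network data are illustrative.
import networkx as nx

def saturated_cut_after_outage(lines, outage, source, sink, transfer_mw):
    """lines: iterable of (bus_a, bus_b, rating_mw). Returns (is_saturated, max_transfer)."""
    g = nx.DiGraph()
    for a, b, rating in lines:
        if {a, b} == set(outage):
            continue                       # simulate the line outage
        g.add_edge(a, b, capacity=rating)  # model each line as bidirectional capacity
        g.add_edge(b, a, capacity=rating)
    max_transfer, _ = nx.maximum_flow(g, source, sink)
    return max_transfer < transfer_mw, max_transfer

lines = [(1, 2, 100), (1, 3, 80), (2, 4, 90), (3, 4, 60), (2, 3, 50)]
print(saturated_cut_after_outage(lines, outage=(1, 2), source=1, sink=4, transfer_mw=120))
```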
Contributors: Sen Biswas, Reetam (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Committee member) / Undrill, John (Committee member) / Wu, Meng (Committee member) / Zhang, Yingchen (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Nowadays, wireless communications and networks have been widely used in our daily lives. One of the most important topics in networking research is using optimization tools to improve the utilization of network resources. In this dissertation, we concentrate on optimization for resource-constrained wireless networks, and study two fundamental resource-allocation problems: 1) distributed routing optimization and 2) anypath routing optimization. The study of the distributed routing optimization problem is composed of two main thrusts, targeted at understanding distributed routing and resource optimization for multihop wireless networks. The first thrust is dedicated to understanding the impact of full-duplex transmission on wireless network resource optimization. We propose two provably good distributed algorithms to optimize the resources in a full-duplex wireless network. We prove their optimality and also provide network status analysis using dual space information. The second thrust is dedicated to understanding the influence of network entity load constraints on network resource allocation and routing computation. We propose a provably good distributed algorithm to allocate wireless resources. In addition, we propose a new subgradient optimization framework, which can provide fine-grained convergence, optimality, and dual space information at each iteration. This framework can provide a useful theoretical foundation for many networking optimization problems. The study of the anypath routing optimization problem is composed of two main thrusts. The first thrust is dedicated to understanding the computational complexity of multi-constrained anypath routing and designing approximate solutions. We prove that this problem is NP-hard when the number of constraints is larger than one. We present two polynomial-time K-approximation algorithms: one is a centralized algorithm, while the other is a distributed algorithm. For the second thrust, we study directional anypath routing and present a cross-layer design of MAC and routing. For the MAC layer, we present a directional anycast MAC. For the routing layer, we propose two polynomial-time routing algorithms to compute directional anypaths based on two antenna models, and prove their optimality based on the packet delivery ratio metric.
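As a minimal, self-contained illustration of the dual subgradient iterations that such a framework generalizes (not the dissertation's algorithms), the sketch below has several flows sharing one link: each source picks the rate that maximizes its log-utility minus the link price, and the price is updated with a subgradient step on the capacity violation. The capacity, step size, and flow count are arbitrary.

```python
# A toy dual subgradient iteration for a single shared link; values are illustrative.
CAPACITY = 10.0   # link capacity
N_FLOWS = 3       # flows sharing the link
STEP = 0.05       # subgradient step size

price = 1.0
for _ in range(2000):
    # Each source's best response to the price: maximize log(x) - price*x  =>  x = 1/price.
    rates = [1.0 / price] * N_FLOWS
    # Subgradient of the dual function: the capacity violation at the link.
    violation = sum(rates) - CAPACITY
    price = max(1e-6, price + STEP * violation)

print("link price:", round(price, 4), "rates:", [round(r, 3) for r in rates])
# At convergence each rate approaches CAPACITY / N_FLOWS.
```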
Contributors: Fang, Xi (Author) / Xue, Guoliang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ye, Jieping (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this dissertation, a distribution system operator (DSO) framework is proposed to optimally coordinate distributed energy resource (DER) aggregators' comprehensive participation in the retail energy market as well as the wholesale energy and regulation markets. Various types of DER aggregators, including energy storage aggregators (ESAGs), dispatchable distributed generation aggregators (DDGAGs), electric vehicle charging stations (EVCSs), and demand response aggregators (DRAGs), are modeled in the proposed DSO framework. An important characteristic of a DSO is being capable of handling uncertainties in system operation. An appropriate method for a market operator to cover uncertainties is two-stage stochastic programming. To handle comprehensive retail and wholesale market participation of DER aggregators under uncertainty, a two-stage stochastic programming model for the DSO is proposed. To handle unbalanced distribution grids with single-phase aggregators, a DSO framework is proposed for unbalanced distribution networks based on a linearized unbalanced power flow, which coordinates with the wholesale market clearing process and ensures the DSO's non-profit characteristic. When proposing a DSO, coordination with the independent system operator (ISO) is important. A framework is proposed to coordinate the operation of the ISO and the DSO. The framework is compatible with the current practice of the U.S. wholesale market, enabling massive numbers of DERs to participate in the wholesale market. The DSO builds a bid-in cost function to be submitted to the ISO market through parametric programming. A pricing problem for the DSO is also proposed: after the ISO clears the wholesale market, the locational marginal price (LMP) at the ISO-DSO coupling substation is determined, and the DSO utilizes this price to solve the DSO pricing problem. The DSO pricing problem determines the distribution LMP (D-LMP) in the distribution system and calculates the payment to each aggregator. An efficient algorithm is proposed to solve the ISO-DSO coordination parametric programming problem. Notably, the proposed algorithm significantly improves the computational efficiency of solving the parametric programming DSO problem, which is computationally intensive. Various case studies are performed to analyze the market outcome of the proposed DSO framework and its coordination with the ISO.
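A deliberately small deterministic equivalent of a two-stage stochastic program, in the spirit of the DSO model described above but not its actual formulation: the first-stage variable is a DER aggregator dispatch fixed before uncertainty is revealed, and the second-stage variables are scenario-wise wholesale purchases. The costs, probabilities, demands, and the use of SciPy's linprog are all assumptions made for illustration.

```python
# Hypothetical two-stage stochastic dispatch, solved as its deterministic equivalent.
from scipy.optimize import linprog

der_cost = 30.0                 # $/MWh offered by the DER aggregator (first stage)
x_max = 8.0                     # MW of aggregated DER capacity
scenarios = [                   # (probability, wholesale price $/MWh, demand MW)
    (0.3, 20.0, 10.0),
    (0.5, 45.0, 12.0),
    (0.2, 90.0, 15.0),
]

# Variable order: [x, y_1, ..., y_S]; minimize der_cost*x + sum_s p_s * price_s * y_s.
c = [der_cost] + [p * price for p, price, _ in scenarios]
# Coverage per scenario: x + y_s >= demand_s, written as -x - y_s <= -demand_s.
A_ub = [[-1.0] + [-1.0 if j == s else 0.0 for j in range(len(scenarios))]
        for s in range(len(scenarios))]
b_ub = [-demand for _, _, demand in scenarios]
bounds = [(0.0, x_max)] + [(0.0, None)] * len(scenarios)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x, *ys = res.x
print(f"first-stage DER dispatch: {x:.2f} MW; scenario purchases (MW): {[round(y, 2) for y in ys]}")
```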
Contributors: Mousavi, Mohammad (Author) / Wu, Meng (Thesis advisor) / Khorsand, Mojdeh (Committee member) / Byeon, Geunyeong (Committee member) / Nguyen, Duong (Committee member) / Arizona State University (Publisher)
Created: 2023