
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in an SBS. Current SBSs pose serious limitations on protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there is a risk of unauthorized use of that data by the service providers. Although many techniques exist for protecting users' data from outside attackers, there is currently no effective way to protect users' sensitive data from service providers. This dissertation presents an approach to protecting the confidentiality of users' data from service providers and ensuring that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding information about the owners of the data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Because the approach includes software module decomposition and distributed execution, it is critical to allocate server resources in the SBS to each software module effectively in order to manage the overall performance of workflows. An approach to resource allocation for SBS is therefore presented that adaptively allocates the system resources of servers to their software modules at runtime so as to satisfy the performance requirements of multiple workflows. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.

Resource allocation in communication networks aims to assign various resources, such as power, bandwidth, and load, in a fair and economical fashion so that the networks can be better utilized and shared by the communicating entities. The design of efficient resource-allocation algorithms is, however, becoming increasingly challenging due to the rapidly growing scale of the networks. This thesis strives to understand how to design such low-complexity algorithms with performance guarantees.
In the first part, the link scheduling problem in wireless ad hoc networks is considered. The scheduler is in charge of finding a set of wireless data links to activate at each time slot, taking into account wireless interference, traffic dynamics, network topology, and quality-of-service (QoS) requirements. Two different yet essential scenarios are investigated: in the first, each packet has a specific deadline after which it is discarded; in the second, each packet traverses the network in multiple hops instead of leaving the network after a one-hop transmission. In both scenarios the links need to be carefully scheduled to avoid starvation of users and congestion on links. One greedy algorithm is analyzed in each of the two scenarios, and performance guarantees in terms of network throughput are derived.
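As an illustration of greedy link scheduling under interference constraints, the following sketch activates links in decreasing order of queue length on an assumed conflict graph. It is a generic greedy maximal scheduler, offered only to make the idea concrete; it is not necessarily the exact algorithm analyzed in the thesis.

```python
def greedy_schedule(queue_len, conflicts):
    """Greedily activate non-conflicting links in decreasing order of queue length.

    queue_len: dict mapping link -> queue length (used as the link weight)
    conflicts: dict mapping link -> set of links it interferes with
    """
    schedule = set()
    for link in sorted(queue_len, key=queue_len.get, reverse=True):
        if not (conflicts.get(link, set()) & schedule):   # no conflict with already-chosen links
            schedule.add(link)
    return schedule

# Hypothetical 4-link example: l0 interferes with l1, and l2 interferes with l3.
queue_len = {"l0": 5, "l1": 2, "l2": 4, "l3": 4}
conflicts = {"l0": {"l1"}, "l1": {"l0"}, "l2": {"l3"}, "l3": {"l2"}}
print(greedy_schedule(queue_len, conflicts))   # -> {'l0', 'l2'}
```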
In the second part, the load-balancing problem in parallel computing is studied. Tasks arrive in batches, and the duty of the load balancer is to place the tasks on the machines so that minimum queueing delay is incurred. Due to the huge size of modern data centers, sampling the status of all machines may incur significant overhead. Consequently, an algorithm based on limited queue information at the machines is examined; its asymptotic delay performance is characterized, and the proposed algorithm is shown to achieve the same delay as the well-known power-of-two-choices algorithm with remarkably less sampling overhead.
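For reference, the power-of-two-choices baseline mentioned above can be sketched as follows. This is the classical textbook scheme (sample d queues, join the shortest), not the thesis's limited-information algorithm.

```python
import random

def power_of_d_choices(queue_lens, d=2, rng=random):
    """Pick the machine an arriving task joins: sample d queues uniformly, join the shortest."""
    sampled = rng.sample(range(len(queue_lens)), d)
    return min(sampled, key=lambda i: queue_lens[i])

# Hypothetical usage with five machines and their current queue lengths.
queue_lens = [3, 0, 7, 2, 5]
target = power_of_d_choices(queue_lens, d=2)
queue_lens[target] += 1   # the task is placed on the least-loaded of the sampled machines
```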
Two messages of the thesis are the following: greedy algorithms can work well in a stochastic setting, and the fluid model can be useful in "derandomizing" the system and revealing the nature of the algorithm.

We live in a networked world with a multitude of networks, such as communication networks, the electric power grid, transportation networks, and water distribution networks, all around us. In addition to such physical (infrastructure) networks, recent years have seen a tremendous proliferation of social networks, such as Facebook, Twitter, LinkedIn, Instagram, Google+ and others. These powerful social networks are not only used for harnessing revenue from the infrastructure networks, but are also increasingly being used as “non-conventional sensors” for monitoring the infrastructure networks. Accordingly, nowadays, analyses of social and infrastructure networks go hand-in-hand. This dissertation studies resource allocation problems encountered in this set of diverse, heterogeneous, and interdependent networks. Three problems studied in this dissertation are encountered in the physical network domain, while the three other problems studied are encountered in the social network domain.
The first problem from the infrastructure network domain relates to a distributed file storage scheme whose goal is to enhance the robustness of data storage by making it tolerant to large-scale geographically correlated failures. The second problem relates to the placement of relay nodes in a deployment area with multiple sensor nodes, with the goal of augmenting the connectivity of the resulting network while staying within a budget specifying the maximum number of relay nodes that can be deployed. The third problem studied in this dissertation relates to the complex interdependencies that exist between infrastructure networks, such as the power grid and the communication network. The progressive recovery problem in an interdependent network is studied, whose goal is to maximize system utility over time as the recovery of failed entities takes place in a sequential manner.
The three problems studied from the social network domain relate to influence propagation in an adversarial environment and to political sentiment assessment across the states of a country, with the goal of creating a “political heat map” of the country. In the first problem of the influence propagation domain, the goal of the second player is to restrict the influence of the first player, while in the second problem the goal of the second player is to capture a larger market share with the least initial investment.

The critical infrastructures of the nation are a large and complex network of human, physical and cyber-physical systems. In recent times, it has become increasingly apparent that individual critical infrastructures, such as the power and communication networks, do not operate in isolation, but instead are part of a complex interdependent ecosystem where a failure involving a small set of network entities can trigger a cascading event resulting in the failure of a much larger set of entities through the failure propagation process.
Recognizing the need for a deeper understanding of the interdependent relationships between such critical infrastructures, researchers have proposed and analyzed several models in the last few years. However, most of these models are over-simplified and fail to capture the complex interdependencies that may exist between critical infrastructures. To overcome the limitations of existing models, this dissertation presents a new model, the Implicative Interdependency Model (IIM), that is able to capture such complex interdependency relations. As the potential for a failure cascade in critical interdependent networks poses several risks that can jeopardize the nation, this dissertation explores relevant research problems in the interdependent power and communication networks using the proposed IIM and lays the foundations for further study using this model.
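To make the notion of a failure cascade concrete, the following sketch propagates failures over hypothetical implicative dependency rules in which an entity fails once all entities in any one of its dependency sets have failed. The rule format is an assumption made purely for illustration and is not claimed to reproduce the IIM's exact semantics.

```python
def cascade(initial_failures, rules):
    """Propagate failures to a fixed point.

    rules: dict mapping entity -> list of dependency sets; the entity fails
    once every member of at least one of its dependency sets has failed.
    """
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for entity, dep_sets in rules.items():
            if entity not in failed and any(s <= failed for s in dep_sets):
                failed.add(entity)
                changed = True
    return failed

# Hypothetical example: power node p1 fails if both c1 and c2 fail, or if c3 fails;
# communication node c2 fails if p1 fails.
rules = {"p1": [{"c1", "c2"}, {"c3"}], "c2": [{"p1"}]}
print(cascade({"c3"}, rules))   # -> {'c3', 'p1', 'c2'}
```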
Apart from exploring problems in interdependent critical infrastructures, this dissertation also explores resource allocation techniques for environments enabled with cyber-physical systems. Specifically, the problem of efficient path planning for data collection using mobile cyber-physical systems is explored. Two such environments are considered: a Radio-Frequency IDentification (RFID) environment with mobile “Tags” and “Readers”, and a sensor data collection environment where both the sensors and the data mules (data collectors) are mobile.
Finally, from an applied research perspective, this dissertation presents Raptor, an advanced network planning and management tool for mitigating the impact of spatially correlated, or region-based, faults on infrastructure networks. Raptor consolidates a wide range of studies conducted in the last few years on region-based faults, and provides an interface for network planners, designers and operators to use the results of these studies for designing robust and resilient networks in the presence of spatially correlated faults.

Resource allocation in cloud computing determines the allocation of computer and network resources of service providers to the service requests of cloud users in order to meet the cloud users' service requirements. Efficient and effective resource allocation determines the success of cloud computing. However, it is challenging to satisfy the objectives of all service providers and all cloud users in an unpredictable environment with dynamic workloads, large shared resources, and complex policies to manage them.
Many studies propose centralized algorithms for achieving optimal resource-allocation solutions. However, centralized algorithms may face scalability problems when handling a large number of service requests within a realistically acceptable time. Hence, this dissertation presents two studies. One develops and tests heuristics for centralized resource allocation that produce near-optimal solutions in a scalable manner. The other looks into decentralized methods of performing resource allocation.
The first part of this dissertation defines the resource allocation problem as a centralized optimization problem in Mixed Integer Programming (MIP) and obtains the optimal solutions for various resource-service problem scenarios. Based on an analysis of the optimal solutions, various heuristics are designed for efficient resource allocation. Extensive experiments are conducted with larger numbers of user requests and service providers to evaluate the performance of the resource allocation heuristics. The experimental results show that the heuristics achieve performance comparable to the optimal solutions obtained by solving the optimization problem, while demonstrating better computational efficiency, and thus scalability, than solving the optimization problem directly.
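As a toy illustration of casting resource allocation as a MIP (not the dissertation's actual formulation), the following sketch assigns hypothetical user requests to provider resources using the open-source PuLP modeling library, minimizing total cost subject to capacity constraints. All data values and names are invented for the example.

```python
import pulp

# Hypothetical data: provider capacities, request demands, and assignment costs.
capacity = {"p1": 10, "p2": 8}
demand = {"r1": 4, "r2": 5, "r3": 6}
cost = {("r1", "p1"): 2, ("r1", "p2"): 3,
        ("r2", "p1"): 4, ("r2", "p2"): 1,
        ("r3", "p1"): 3, ("r3", "p2"): 5}

prob = pulp.LpProblem("resource_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(cost), cat="Binary")    # x[r, p] = 1 if request r is served by provider p

prob += pulp.lpSum(cost[k] * x[k] for k in cost)            # objective: total allocation cost
for r in demand:                                            # every request is served exactly once
    prob += pulp.lpSum(x[(r, p)] for p in capacity) == 1
for p in capacity:                                          # provider capacity is not exceeded
    prob += pulp.lpSum(demand[r] * x[(r, p)] for r in demand) <= capacity[p]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: round(x[k].value()) for k in cost}, pulp.value(prob.objective))
```

A heuristic counterpart of the kind studied in the dissertation would replace the solver call with, for example, a greedy assignment of requests to the cheapest provider with remaining capacity.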
The second part of this dissertation examines elements of service provider-user coordination, first in the formulation of the centralized resource allocation problem in MIP and then in a decentralized formulation of the optimization problem for various problem cases. By examining the differences between the centralized optimal solutions and the decentralized solutions for those problem cases, an analysis is performed of how decentralized service provider-user coordination breaks down the optimal solutions. Based on this analysis, strategies of decentralized service provider-user coordination are developed.

The dissertation consists of two essays in misallocation and development. In particular, the essays explore how government policies distort resource allocation across production units, and therefore affect aggregate economic and environmental outcomes.
The first chapter studies the aggregate consequences of misallocation in a firm dynamics model with multi-establishment firms. I calibrate my model to the US firm size distribution with respect to both the number of employees and the number of establishments, and use it to study distortions that are correlated with establishment size, or so-called size-dependent distortions to establishments, which are modeled as implicit output taxes. In contrast to previous studies, I find that size-dependent distortions are not more damaging to aggregate productivity and output than size-independent distortions, while the implicit tax revenue approximately summarizes the effects on aggregate output. I also use the model to compare the effects of size-dependent distortions to establishments and to firms, and find that they have different effects on firm size distribution, but have similar effects on aggregate output.
The second chapter studies the effects of product market frictions on the firm size distribution and their implications for industrial pollution in China. Using a unique micro-level manufacturing census, I find that larger firms generate and emit less pollution per unit of production. I also provide evidence suggesting the existence of size-dependent product market frictions that disproportionately affect larger firms. Using a model with firms heterogeneous in productivity and an endogenous choice of pollution treatment technology, I show that these frictions result in a lower adoption rate of clean technology, higher pollution, and lower aggregate output. I use the model to evaluate policies that eliminate size-dependent frictions and policies that increase environmental regulation. Quantitative results show that eliminating size-dependent frictions increases output by 30%. Meanwhile, the fraction of firms using clean technology increases by 27% and aggregate pollution decreases by 20%. In contrast, a regulatory policy that increases the clean technology adoption rate by the same 27% has no effect on aggregate output and leads to only a 10% reduction in aggregate pollution.

In this dissertation, I propose potential techniques to improve the quality-of-service (QoS) of real-time applications in cognitive radio (CR) systems. Unlike best-effort applications, real-time applications, such as audio and video, have QoS requirements that need to be met. There are two different frameworks used to study QoS in the literature, namely, the average-delay and the hard-deadline frameworks. In the former, the scheduling algorithm has to guarantee that the packets' average delay is below a prespecified threshold, while the latter imposes a hard deadline on each packet in the system. In this dissertation, I present joint power allocation and scheduling algorithms for each framework and show their applications in CR systems, which are known to have strict power limitations so as to protect the licensed users from interference.
A common aspect of the two frameworks is the packet service time. Thus, the effect of multiple channels on the service time is studied first. The problem is formulated as an optimal stopping rule problem in which it must be decided at which channel the secondary user (SU) should stop sensing and begin transmission. I provide a closed-form expression for this optimal stopping rule and for the optimal transmission power of the SU.
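The following toy sketch illustrates how such a stopping rule can be computed by backward induction under an assumed sequential-sensing model (it is not the dissertation's closed-form result): sensing each channel costs c, stopping at a channel yields its observed rate, and the SU stops as soon as the observed rate exceeds the expected value of continuing.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c = 5, 0.1                                # number of channels, sensing cost per channel
samples = rng.exponential(1.0, 100_000)      # assumed i.i.d. channel-rate distribution

# V[k] = expected reward of continuing past channel k; the SU must stop at channel N.
V = np.zeros(N + 1)
for k in range(N - 1, 0, -1):
    V[k] = np.mean(np.maximum(samples, V[k + 1])) - c

def stop_channel(rates):
    """Threshold rule: stop at the first channel whose rate beats the value of continuing."""
    for k, r in enumerate(rates, start=1):
        if k == len(rates) or r >= V[k]:
            return k

print("continuation thresholds:", np.round(V[1:N], 3))
print("stops at channel:", stop_channel(rng.exponential(1.0, N)))
```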
The average-delay framework is then studied in a single-channel CR system with a base station (BS) that schedules the SUs to minimize the average delay while protecting the primary users (PUs) from harmful interference. One of the contributions of the proposed algorithm is its suitability for systems with heterogeneous channels, where users with statistically low channel quality would otherwise suffer worse delay performance. The proposed algorithm guarantees the prespecified delay performance to each SU without violating the PU's interference constraint.
Finally, in the hard-deadline framework, I propose three algorithms that maximize the system's throughput while guaranteeing that the required percentage of packets is transmitted by their deadlines. The proposed algorithms work in heterogeneous systems where the BS serves different types of users having real-time (RT) data and non-real-time (NRT) data. I show that two of the proposed algorithms have low complexity: the power policies of both the RT and NRT users are given in closed-form expressions, and the scheduler itself is of low complexity.

This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work stems from the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demand at the eNodeB layer.
The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and is inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantees. The derived limits provide guidelines for ad-hoc solutions in the actual implementation of the protocol.
The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized.
The sensing-phase design has been carried out via a utility-maximization approach; the resulting scheme is therefore called Cognitive Utility Maximization Multiple Access (CUMMA).
The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics.
While convergence of the proposed approach remains an open problem, the numerical results presented here suggest that the algorithm can handle traffic fluctuations across operators while respecting different timing and economic constraints.
The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).

Android is currently the most widely used mobile operating system. The permission model in Android governs the resource access privileges of applications. The permission model however is amenable to various attacks, including re-delegation attacks, background snooping attacks and disclosure of private information. This thesis is aimed at understanding, analyzing and performing forensics on application behavior. This research sheds light on several security aspects, including the use of inter-process communications (IPC) to perform permission re-delegation attacks.
The Android permission system is app-driven rather than user-controlled: applications specify their permission requirements, and the only thing a user can do is choose not to install a particular application based on those requirements. Given this all-or-nothing choice, users succumb to pressure and need and accept the requested permissions. This thesis proposes several ways of providing users finer-grained control over application privileges. The same methods can be used to thwart the permission re-delegation attack.
This thesis also proposes and implements a novel methodology in Android that can be used to control the access privileges of an Android application, taking into consideration the context of the running application. This application-context-based permission usage is further used to analyze a set of sample applications. We found evidence of applications spoofing or divulging sensitive user information, such as location information, contact information, and phone IDs and numbers, in the background. Such activities can be used to track users for a variety of privacy-intrusive purposes. We have developed implementations that minimize several forms of privacy leaks that are routinely committed by stock applications.

After 30 years of development, China's commodity futures market has begun to acquire the functions of coordinating resource allocation and hedging operational risk. However, constrained by both the underlying industries and the development of the futures market itself, the market efficiency of individual futures products varies widely. As China's economy transitions from a stage of incremental growth to one of managing existing stock, futures are becoming increasingly important as a price-management and risk-control tool for enterprises, so studying the efficiency of China's commodity futures market has strong practical significance.
This thesis takes a novel perspective, starting from the basic function of futures, namely resource allocation, and proposes that an efficient market is one in which futures prices play a reasonable role in allocating social resources within the corresponding industry. In terms of structure, the thesis first summarizes the characteristics of mature international futures products and identifies four indicator hypotheses that reflect the ability of futures to allocate resources: futures-spot convergence, profit volatility, inventory volatility, and cash-flow variation. It then uses mathematical models to demonstrate the relationship between these indicators and product maturity, and finally applies the set of indicators to test the efficiency of China's commodity markets. Methodologically, the thesis first applies the Bai-Perron endogenous multiple structural break model to detect break points in the time series, then runs multivariate regressions on the segmented series; after removing seasonality and cyclicality, stationarity tests and ARCH-effect tests are used to select the appropriate GARCH model, which is then used to describe the volatility of the time series.
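As a small illustration of the volatility-modeling step, the GARCH(1,1) conditional-variance recursion can be sketched as follows; the parameter values and simulated returns are invented for demonstration and are unrelated to the thesis's estimates, and in practice the parameters would be estimated by maximum likelihood on each Bai-Perron segment.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)               # initialize with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Hypothetical daily futures returns and illustrative parameter values.
rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, 500)
sigma2 = garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90)
print("average conditional volatility:", float(np.sqrt(sigma2).mean()))
```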
The mathematical verification supports using these four indicators, futures-spot convergence, profit volatility, inventory volatility, and cash-flow variation, as tests of futures-market maturity. Applying this methodology to several actively traded domestic products, we find that Dalian soybean meal futures already exhibit the characteristics of a mature product, and this thesis therefore regards the soybean meal futures market as efficient. The four indicators for PTA and corn starch futures have shown improving time-series behavior in recent years, but the history is still too short for them to be stable, so these products can be regarded as close to mature. Rebar and aluminum futures perform poorly on most indicators, indicating a weak ability to allocate social resources, and this thesis therefore regards the rebar and aluminum futures markets as active but not efficient. Further analysis suggests that poor futures-spot convergence is the key factor constraining a product's resource-allocation capability, and that unclear underlying deliverables, difficulty in producing warehouse receipts, low industry participation, and other limitations in contract design are important causes of poor futures-spot convergence.