Matching Items (9)

Description
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in SBS. Current SBSs pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there are risks of unauthorized use of the users' sensitive data by service providers. Although there are many techniques for protecting users' data from outside attackers, there is currently no effective way to protect users' sensitive data from service providers. In this dissertation, an approach is presented for protecting the confidentiality of users' data from service providers and ensuring that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding information about the owners of the data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Since the approach to protecting users' data confidentiality includes software module decomposition and distributed execution, it is very important to effectively allocate the server resources of the SBS to each software module in order to manage the overall performance of workflows in the SBS. An approach to resource allocation for SBS is presented that adaptively allocates the system resources of servers to their software modules at runtime in order to satisfy the performance requirements of multiple workflows in the SBS. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.
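To make the allocation idea concrete, the following is a minimal sketch of one way throughput-optimal resource allocation can work for a pipeline of decomposed software modules; it is an illustration under simplified assumptions (a proportional-share CPU model and a linear pipeline), not the dissertation's algorithm.

```python
# Hypothetical sketch of adaptive resource allocation for decomposed
# software modules (not the dissertation's actual algorithm). Assume a
# workflow is a pipeline of modules; module i processes mu[i] requests
# per second per unit of CPU share, and the workflow's throughput is
# the minimum stage throughput. With a fixed CPU budget, throughput is
# maximized by giving each module a share inversely proportional to
# its per-share processing rate, equalizing the stage throughputs.

def allocate_shares(mu, budget=1.0):
    """Return CPU shares that maximize pipeline throughput."""
    weights = [1.0 / m for m in mu]           # slower stages need more CPU
    total = sum(weights)
    return [budget * w / total for w in weights]

mu = [200.0, 50.0, 100.0]                     # requests/sec per CPU share
shares = allocate_shares(mu)
throughput = min(s * m for s, m in zip(shares, mu))
print(shares, throughput)  # equal stage throughputs -> maximum pipeline rate
```

Any deviation from these shares lowers some stage below the common rate, so the equalized allocation is the pipeline optimum under this simplified model.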
Contributors: An, Ho Geun (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Ahn, Gail-Joon (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Internet of Things (IoT) is emerging as part of the infrastructures for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitation of the computing resources of IoT devices, it is common to offload tasks of various applications requiring substantial computing resources to computing systems with sufficient computing resources, such as servers, cloud systems, and/or data centers for processing. However, this offloading method suffers from both high latency and network congestion in the IoT infrastructures.

Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and reduce network congestion. Yet, edge computing has its drawbacks, such as the limited computing resources of some edge computing devices and the unbalanced loads among these devices. In order to effectively explore the potential of edge computing to support IoT applications, it is necessary to have efficient task management and load balancing in edge computing networks.

In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of the tasks. The QoS requirements include task completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, with consideration of the tasks' priorities. The goal is achieved through joint optimization of the computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks that can be accommodated in the edge computing network and improves the efficiency of resource utilization.
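To make the task-distribution idea concrete, here is a minimal greedy sketch assuming per-task CPU, bandwidth, security, and deadline requirements; it is an illustrative stand-in, not the dissertation's joint-optimization algorithm, and all names and parameters are hypothetical.

```python
# Hypothetical sketch of priority-aware task admission on edge nodes
# (a greedy stand-in for the dissertation's joint optimization, which
# is not reproduced here). Each task needs CPU, bandwidth, a security
# level, and a completion deadline; tasks are admitted in priority
# order onto the first node that satisfies all QoS requirements.
from dataclasses import dataclass

@dataclass
class Node:
    cpu: float          # remaining compute capacity (cycles/s)
    bw: float           # remaining link bandwidth (Mb/s)
    sec_level: int      # security level the node can guarantee

@dataclass
class Task:
    priority: int
    cycles: float       # total compute demand
    data_mb: float      # data to transfer (Mb)
    cpu_req: float      # compute reservation (cycles/s)
    bw_req: float       # bandwidth reservation (Mb/s)
    sec_level: int      # minimum required security level
    deadline: float     # seconds

def admit(tasks, nodes):
    placed = []
    for t in sorted(tasks, key=lambda t: -t.priority):
        for n in nodes:
            fits = (n.cpu >= t.cpu_req and n.bw >= t.bw_req
                    and n.sec_level >= t.sec_level)
            # estimated completion = transfer time + compute time
            finish = t.data_mb / t.bw_req + t.cycles / t.cpu_req
            if fits and finish <= t.deadline:
                n.cpu -= t.cpu_req
                n.bw -= t.bw_req
                placed.append((t, n))
                break
    return placed

nodes = [Node(cpu=2e9, bw=100.0, sec_level=2)]
tasks = [Task(priority=2, cycles=1e9, data_mb=40.0, cpu_req=1e9,
              bw_req=50.0, sec_level=1, deadline=2.0)]
print(admit(tasks, nodes))  # fits: 0.8 s transfer + 1.0 s compute
```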
Contributors: Song, Yaozhong (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
As integrated technologies are scaling down, there is an increasing trend in the process, voltage and temperature (PVT) variations of highly integrated RF systems. Accounting for these variations during the design phase requires a tremendous amount of time for predicting RF performance and optimizing it accordingly. Thus, there is an increasing gap between the need to relax the RF performance requirements at the design phase for rapid development and the need to provide high-performance and low-cost RF circuits that function under PVT variations. No matter how carefully designed, RF integrated circuits (ICs) manufactured with advanced technology nodes necessitate lengthy post-production calibration and test cycles with expensive RF test instruments. Hence, design-for-test (DFT) is proposed for low-cost and fast measurement of performance parameters during both post-production and in-field operation. For example, built-in self-test (BIST) is a DFT solution for low-cost on-chip measurement of RF performance parameters. In this dissertation, three aspects of automated test and calibration, including the DFT mathematical model, BIST hardware and built-in calibration, are covered for RF front-end blocks.

First, the theoretical foundation of a post-production test of RF integrated phased array antennas is proposed by developing the mathematical model to measure gain and phase mismatches between antenna elements without any electrical contact. The proposed technique is fast and cost-efficient and uses near-field measurement of the power radiated from the antennas; hence, it requires a single test setup, is easy to implement, and is short in test time, which makes it viable for industrial high-volume IC production test.

Second, a BIST model intended for the characterization of I/Q offset, gain and phase mismatch of IQ transmitters without relying on external equipment is introduced. The proposed BIST method is based on on-chip amplitude measurement as in prior works; however, here the variations in the BIST circuit do not affect the accuracy of the target parameter estimation, since the measurements are designed to be relative. The BIST circuit is implemented in 130nm technology and can be used for post-production and in-field calibration.

Third, a programmable low noise amplifier (LNA) is proposed which is adaptable to different application scenarios depending on the specification requirements. Its performance is optimized with regard to the required specifications, e.g. distance, power consumption, BER, data rate, etc. Statistical modeling is used to capture the correlations among measured performance parameters and calibration modes for fast adaptation. A machine learning technique is used to capture these non-linear correlations and build the probability distribution of a target parameter based on measurement results of the correlated parameters. The proposed concept is demonstrated by embedding built-in tuning knobs in an LNA design in 130nm technology. The tuning knobs are carefully designed to provide independent combinations of important performance parameters such as gain and linearity. A minimum number of switches is used to provide the desired tuning range without the need for an external analog input.
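As an illustration of the statistical-modeling step, here is a hedged sketch using Gaussian process regression to build the probability distribution of a target parameter from correlated on-chip measurements; the model choice, feature set, and all data are assumptions, not the dissertation's implementation.

```python
# Hypothetical sketch: predicting the distribution of a hard-to-measure
# performance parameter (e.g. linearity) from cheaper correlated on-chip
# measurements (e.g. gain, DC power), using Gaussian process regression.
# The dissertation's actual model and feature set are not specified here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "characterization" data: rows = sample chips,
# columns = easy-to-measure parameters (gain dB, power mW).
X_train = rng.normal([15.0, 4.0], [0.8, 0.3], size=(200, 2))
# Target parameter correlated (non-linearly) with the measurements.
y_train = (2.0 * X_train[:, 0] - 0.5 * X_train[:, 1] ** 2
           + rng.normal(0.0, 0.2, size=200))

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# For a new chip, predict the target parameter's distribution
# (mean and standard deviation) from its BIST measurements alone.
x_new = np.array([[15.2, 3.9]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted target: {mean[0]:.2f} +/- {std[0]:.2f}")
```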
Contributors: Shafiee, Maryam (Author) / Ozev, Sule (Thesis advisor) / Diaz, Rodolfo (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Emerging from years of research and development, the Internet-of-Things (IoT) has finally paved its way into our daily lives. From smart homes to Industry 4.0, IoT has been fundamentally transforming numerous domains with its unique superpower of interconnecting world-wide devices. However, the capability of IoT is largely constrained by the limited resources it can employ in various application scenarios, including computing power, network resources, dedicated hardware, etc. The situation is further exacerbated by the stringent quality-of-service (QoS) requirements of many IoT applications, such as delay, bandwidth, security, reliability, and more. This mismatch between resources and demands has greatly hindered the deployment and utilization of IoT services in many resource-intensive and QoS-sensitive scenarios like autonomous driving and virtual reality.

I believe that the resource issue in IoT will persist in the near future due to technological, economic and environmental factors. In this dissertation, I seek to address this issue by means of smart resource allocation. I propose mathematical models to formally describe various resource constraints and application scenarios in IoT. Based on these, I design smart resource allocation algorithms and protocols to maximize system performance in the face of resource restrictions. Different aspects are tackled, including networking, security, and the economics of the entire IoT ecosystem. For different problems, different algorithmic solutions are devised, including optimal algorithms, provable approximation algorithms, and distributed protocols. The solutions are validated with rigorous theoretical analysis and/or extensive simulation experiments.
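As one concrete example of a provable approximation algorithm in this setting, the sketch below applies the classic greedy-plus-best-item knapsack heuristic to admitting IoT service requests under a shared bandwidth budget; the problem instance is hypothetical and is not one of the dissertation's formulations.

```python
# Hypothetical illustration of a provable approximation algorithm of the
# kind the dissertation describes (the concrete problems differ): greedy
# knapsack selection of IoT service requests under a shared bandwidth
# budget. Taking the better of the density-ordered greedy solution and
# the single most valuable feasible request guarantees at least half of
# the optimal total value.

def select_requests(requests, capacity):
    """requests: list of (value, bandwidth). Returns (chosen, total_value)."""
    by_density = sorted(requests, key=lambda r: r[0] / r[1], reverse=True)
    chosen, value, used = [], 0.0, 0.0
    for v, b in by_density:
        if used + b <= capacity:
            chosen.append((v, b))
            value += v
            used += b
    # 1/2-approximation: compare against the best single feasible request
    feasible = [r for r in requests if r[1] <= capacity]
    best_single = max(feasible, key=lambda r: r[0], default=None)
    if best_single and best_single[0] > value:
        return [best_single], best_single[0]
    return chosen, value

print(select_requests([(60, 10), (100, 20), (120, 30)], capacity=50))
```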
Contributors: Yu, Ruozhou, Ph.D (Author) / Xue, Guoliang (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Internet of Things (IoT) has become a popular topic in industry in recent years; it describes an ecosystem of internet-connected devices, or things, that enrich everyday life by improving our productivity and efficiency. The primary components of the IoT ecosystem are hardware, software and services. While the software and services of an IoT system focus on data collection and processing to make decisions, the underlying hardware is responsible for sensing the information, preprocessing it and transmitting it to the servers. Since the IoT ecosystem is still in its infancy, there is a great need for rapid prototyping platforms that would help accelerate the hardware design process. However, depending on the target IoT application, different sensors are required to sense signals such as heart rate, temperature, pressure, acceleration, etc., and there is a great need for reconfigurable platforms that can prototype different sensor interfacing circuits.

This thesis primarily focuses on two important hardware aspects of an IoT system: (a) an FPAA-based reconfigurable sensing front-end system and (b) an FPGA-based reconfigurable processing system. To enable reconfiguration capability for any sensor type, Programmable ANalog Device Array (PANDA), a transistor-level analog reconfigurable platform, is proposed. CAD tools required for implementation of front-end circuits on the platform are also developed. To demonstrate the capability of the platform on silicon, a small-scale array of 24×25 PANDA cells is fabricated in 65nm technology. Several analog circuit building blocks, including amplifiers, bias circuits and filters, are prototyped on the platform, which demonstrates the effectiveness of the platform for rapid prototyping of IoT sensor interfaces.

IoT systems typically use machine learning algorithms that run on servers to process the data in order to make decisions. Recently, embedded processors have been used to preprocess the data at the energy-constrained sensor node or at the IoT gateway, which saves considerable energy for transmission and bandwidth. Using conventional CPU-based systems for implementing the machine learning algorithms is not energy-efficient. Hence, an FPGA-based hardware accelerator is proposed, and an optimization methodology is developed to maximize the throughput of any convolutional neural network (CNN) based machine learning algorithm on a resource-constrained FPGA.
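To illustrate the flavor of such a throughput optimization, here is a minimal roofline-style estimator for a CNN layer on an FPGA, assuming a simple two-MACs-per-DSP model; the thesis's actual analytical model and design-space exploration are not reproduced here.

```python
# Hypothetical roofline-style estimate in the spirit of the throughput
# optimization the thesis describes (its actual analytical model is not
# reproduced here). A CNN layer's attainable rate on an FPGA is capped
# by both the peak multiply-accumulate rate of the allocated DSPs and
# the external memory bandwidth divided by the bytes moved per image.

def layer_throughput(macs_per_image, bytes_per_image,
                     n_dsp, dsp_clock_hz, mem_bw_bytes_s):
    # two MAC units per DSP slice is a common (device-specific) assumption
    peak_mac_rate = 2 * n_dsp * dsp_clock_hz
    compute_bound = peak_mac_rate / macs_per_image      # images/s
    memory_bound = mem_bw_bytes_s / bytes_per_image     # images/s
    return min(compute_bound, memory_bound)

# Example: a conv layer with 100 M MACs and 4 MB of traffic per image
fps = layer_throughput(macs_per_image=100e6, bytes_per_image=4e6,
                       n_dsp=900, dsp_clock_hz=150e6, mem_bw_bytes_s=4e9)
print(f"{fps:.0f} images/s")  # whichever bound is tighter wins
```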
Contributors: Suda, Naveen (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Ozev, Sule (Committee member) / Yu, Shimeng (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Today, information technology systems have addresses, software stacks and other configurations that remain unchanged for long periods of time. This paves the way for malicious attacks on the system through unknown vulnerabilities, and the attacker can take advantage of this situation to plan attacks with sufficient time. To protect our systems from this threat, Moving Target Defense is required, where the attack surface is dynamically changed, making the system difficult to strike.

In this thesis, I incorporate live migration of Docker containers using CRIU (Checkpoint/Restore In Userspace) for moving target defense. There are 460K Dockerized applications, a 3100% growth over 2 years [1]. Over 4 billion containers have been pulled so far from Docker Hub. Docker is supported by a large and fast-growing community of contributors and users; as an example, there are 125K Docker Meetup members worldwide. As industry rapidly adopts Docker, a moving target defense solution involving containers is attractive for being robust and fast. A proof-of-concept implementation is included for studying the performance attributes of Docker migration.

Attack detection uses a scenario involving definitions of normal events on servers. By defining system activities and extracting syslog entries to a centralized server, an attack can be detected by extracting abnormal activities, and this detection can trigger the Docker migration.
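A hedged sketch of this trigger-and-migrate loop is shown below. It relies on Docker's experimental checkpoint support backed by CRIU; host names, paths, the anomaly test, and the overall flow are illustrative assumptions rather than the thesis's exact implementation.

```python
# Hypothetical sketch of the migration trigger (not the thesis's exact
# implementation). It uses Docker's *experimental* checkpoint support
# backed by CRIU: `docker checkpoint create` freezes the container
# state, the checkpoint directory is copied to the target host, and the
# container is resumed there with `docker start --checkpoint`.
import subprocess

CONTAINER = "web-app"                # hypothetical container name
CHECKPOINT = "mtd-ckpt"
TARGET_HOST = "edge-node-2"          # hypothetical standby host

def is_anomalous(syslog_line):
    # Placeholder detector: flag events outside the defined normal set.
    normal_events = ("sshd: session opened", "cron: job started")
    return not any(e in syslog_line for e in normal_events)

def migrate():
    # Freeze container state on the source host (CRIU does the dump).
    subprocess.run(["docker", "checkpoint", "create",
                    "--checkpoint-dir", "/tmp/ckpts",
                    CONTAINER, CHECKPOINT], check=True)
    # Ship the checkpoint to the target host.
    subprocess.run(["rsync", "-a", "/tmp/ckpts",
                    f"{TARGET_HOST}:/tmp/"], check=True)
    # Resume from the checkpoint on the target (a container from the
    # same image must already be created there).
    subprocess.run(["ssh", TARGET_HOST,
                    "docker", "start", "--checkpoint", CHECKPOINT,
                    "--checkpoint-dir", "/tmp/ckpts", CONTAINER],
                   check=True)

with open("/var/log/syslog") as log:
    for line in log:
        if is_anomalous(line):
            migrate()
            break
```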
Contributors: Bohara, Bhakti (Author) / Huang, Dijiang (Thesis advisor) / Doupe, Adam (Committee member) / Zhao, Ziming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify security vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. However, in practice, due to its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, systematically testing RBAC systems is crucial to ensuring security in various domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
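To illustrate positive and negative test generation from a role hierarchy, here is a minimal sketch using a plain transitive-closure traversal; the thesis's MTBDD/RHMTBDD machinery and XACML parsing are not reproduced, and the example policy is hypothetical.

```python
# Hypothetical sketch of positive/negative test-case generation from a
# role hierarchy (the thesis's MTBDD-based representation is not
# reproduced; this uses a plain transitive-closure traversal). A
# positive test asserts access that the policy grants, directly or via
# inheritance; a negative test asserts denial of everything else.

ROLE_HIERARCHY = {            # senior role -> junior roles it inherits from
    "admin": ["manager"],
    "manager": ["employee"],
    "employee": [],
}
DIRECT_PERMS = {
    "admin": {"delete_record"},
    "manager": {"approve_order"},
    "employee": {"view_record"},
}
ALL_PERMS = set().union(*DIRECT_PERMS.values())

def effective_perms(role):
    """Permissions of a role including all inherited ones."""
    perms, stack = set(), [role]
    while stack:
        r = stack.pop()
        perms |= DIRECT_PERMS[r]
        stack.extend(ROLE_HIERARCHY[r])
    return perms

positive = [(r, p) for r in DIRECT_PERMS for p in effective_perms(r)]
negative = [(r, p) for r in DIRECT_PERMS
            for p in ALL_PERMS - effective_perms(r)]
print("grant:", sorted(positive))
print("deny: ", sorted(negative))
```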
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The field of cyber-defenses has played catch-up in the cat-and-mouse game of finding vulnerabilities followed by the invention of patches to defend against them. With the complexity and scale of modern-day software, it is difficult to ensure that all known vulnerabilities are patched; moreover, the attacker, with reconnaissance on their side, will eventually discover and leverage them. To take away the attacker's inherent advantage of reconnaissance, researchers have proposed the notion of proactive defenses such as Moving Target Defense (MTD) in cyber-security. In this thesis, I make three key contributions that help to improve the effectiveness of MTD.

First, I argue that naive movement strategies for MTD systems, designed based on intuition, are detrimental to both security and performance. To answer the question of how to move, I (1) model MTD as a leader-follower game and formally characterize the notion of optimal movement strategies, (2) leverage expert-curated public data and formal representation methods used in cyber-security to obtain parameters of the game, and (3) propose optimization methods to infer strategies at Strong Stackelberg Equilibrium, addressing issues pertaining to scalability and switching costs. Second, when one cannot readily obtain the parameters of the game-theoretic model but can interact with a system, I propose a novel multi-agent reinforcement learning approach that finds the optimal movement strategy. Third, I investigate the novel use of MTD in three domains: cyber-deception, machine learning, and critical infrastructure networks. I show that the question of what to move poses non-trivial challenges in these domains. To address them, I propose methods for patch-set selection in the deployment of honey-patches, characterize the notion of differential immunity in deep neural networks, and develop optimization problems that guarantee differential immunity for dynamic sensor placement in power networks.
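As a sketch of how a Strong Stackelberg Equilibrium strategy can be computed for a small MTD game, the following applies the standard multiple-LPs method (one linear program per candidate attacker best response); the payoff matrices are illustrative, not the dissertation's game parameters.

```python
# Hypothetical sketch of the "multiple LPs" method for a Strong
# Stackelberg Equilibrium in a small MTD game (illustrative payoffs).
# The defender commits to a mixed strategy over system configurations;
# for each candidate attacker best response we solve an LP maximizing
# the defender's value, then keep the best result across candidates.
import numpy as np
from scipy.optimize import linprog

# Rows: defender configurations, columns: attacker exploits.
U_def = np.array([[ 2.0, -5.0],
                  [-4.0,  3.0]])
U_atk = np.array([[-2.0,  5.0],
                  [ 4.0, -3.0]])

best_value, best_x = -np.inf, None
n_conf, n_atk = U_def.shape
for a_star in range(n_atk):
    # maximize x . U_def[:, a_star]  ==  minimize its negation
    c = -U_def[:, a_star]
    # a_star must be an attacker best response:
    # x . U_atk[:, a] <= x . U_atk[:, a_star] for every other exploit a
    A_ub = np.array([U_atk[:, a] - U_atk[:, a_star]
                     for a in range(n_atk) if a != a_star])
    b_ub = np.zeros(len(A_ub))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n_conf)), b_eq=[1.0],
                  bounds=[(0, 1)] * n_conf)
    if res.success and -res.fun > best_value:
        best_value, best_x = -res.fun, res.x

print("defender mixed strategy:", best_x, "value:", best_value)
```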
Contributors: Sengupta, Sailik (Author) / Kambhampati, Subbarao (Thesis advisor) / Bao, Tiffany (Youzhi) (Committee member) / Huang, Dijiang (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Machine learning tutorials often employ an application- and runtime-specific solution for a given problem, in which users are expected to have a broad understanding of data analysis and software programming. This thesis focuses on designing and implementing a new, hands-on approach to teaching machine learning by streamlining the process of generating Inertial Measurement Unit (IMU) data from multirotor flight sessions, training a linear classifier, and applying said classifier to solve Multi-rotor Activity Recognition (MAR) problems in an online lab setting. MAR labs leverage cloud computing and data storage technologies to host a versatile environment capable of logging, orchestrating, and visualizing the solution for an MAR problem through a user interface. MAR labs extend Arizona State University's Visual IoT/Robotics Programming Language Environment (VIPLE) as a control platform for the multirotors used in data collection. VIPLE is a platform developed for teaching computational thinking, visual programming, Internet of Things (IoT) and robotics application development. As a part of this education platform, this work also develops a 3D simulator capable of simulating the programmable behaviors of a robot within a maze environment and builds a physical quadrotor for use in MAR lab experiments.
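A minimal sketch of the learning step in such a lab is shown below: windowed IMU samples are reduced to simple statistical features and fed to a linear classifier. The feature set, labels, and synthetic data are assumptions for illustration, not the MAR labs' actual pipeline.

```python
# Hypothetical end-to-end sketch of the MAR learning step: windowed IMU
# samples -> simple statistical features -> a linear classifier
# (logistic regression here; the labs' exact feature set and model are
# assumptions). Labels might be maneuvers such as hover, ascend, turn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def featurize(window):
    """window: (n_samples, 6) of accel xyz + gyro xyz -> feature vector."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic flight windows standing in for logged flight-session data.
X, y = [], []
for label, (a_bias, g_bias) in enumerate([(0.0, 0.0), (0.5, 0.0), (0.0, 0.8)]):
    for _ in range(100):
        accel = rng.normal(a_bias, 0.3, size=(50, 3))
        gyro = rng.normal(g_bias, 0.3, size=(50, 3))
        X.append(featurize(np.hstack([accel, gyro])))
        y.append(label)  # 0=hover, 1=ascend, 2=turn (illustrative)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```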
Contributors: De La Rosa, Matthew Lee (Author) / Chen, Yinong (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2018