Matching Items (44)

Description
Federated Learning (FL) is envisaged as a promising solution for collaboratively training a machine learning model while keeping the training data decentralized and private. Instead of sharing raw data with the central entity, the participating client devices share focused updates for aggregation to ensure global convergence of the model. Owing to the shortcomings of manually handcrafted neural network architectures, the research community is striving to develop Neural Architecture Search (NAS) approaches that automatically search for optimal networks fitting the clients' data. Despite the inaccessibility of clients' data in an FL setting, the federated NAS literature has recently made great progress in applying these NAS techniques to FL. However, one of the key bottlenecks of Federated Learning is the cost of communication between clients and the server, and state-of-the-art federated NAS techniques search for networks with millions of parameters that require several rounds of communication to find the optimal weight parameters. Moreover, deploying a network with millions of parameters on edge devices (the typical participants in an FL process) is infeasible due to their computational limitations and the resulting latency. Thus, this work proposes Weight-Agnostic Federated Neural Architecture Search (WFNAS), a novel evolutionary framework to search for well-performing and minimally connected weight-agnostic network architectures in an FL setting. Because the connectivity of the networks itself is the solution, there is no need for weight training or hyperparameter tuning, reducing the communication overhead significantly. The experiments indicate a gain of nearly 40% for orthogonal (vertical FL) data distributions compared to local training. This work is the first federated NAS technique in the literature for vertical FL. Although the experiments are performed in a resource-constrained environment, the aim of this thesis is to show a new direction of research to the FL community.
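To make the weight-agnostic idea concrete, here is a minimal sketch in the spirit of Gaier and Ha's weight-agnostic networks, which this line of work builds on: a candidate topology is scored by sweeping a single shared weight, so no weight training is needed. This is an invented illustration, not the thesis's WFNAS implementation; all names are hypothetical.

```python
# Illustration in the spirit of weight-agnostic networks; NOT the thesis's
# implementation. A topology is a 0/1 adjacency matrix over nodes listed in
# topological order; every edge carries the same shared weight.
import numpy as np

def forward(adjacency, inputs, shared_weight):
    """Propagate inputs through the DAG; nodes use tanh activations."""
    n = adjacency.shape[0]
    acts = np.zeros(n)
    acts[:len(inputs)] = inputs
    for node in range(len(inputs), n):
        incoming = adjacency[:, node] * acts      # 0/1 mask times activations
        acts[node] = np.tanh(shared_weight * incoming.sum())
    return acts[-1]                               # last node is the output

def fitness(adjacency, dataset):
    """Score a topology by its mean accuracy over several shared weights:
    a good weight-agnostic network performs well for any of them."""
    scores = []
    for w in (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0):
        correct = sum((forward(adjacency, x, w) > 0) == (y > 0)
                      for x, y in dataset)
        scores.append(correct / len(dataset))
    return float(np.mean(scores))
```

In an FL round, a client would then only need to communicate the small connectivity pattern and its fitness, which is where the communication savings described in the abstract come from.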
ContributorsThakkar, Om (Author) / Bazzi, Rida (Thesis advisor) / Li, Baoxin (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2021
Description
For systems in which computers are a significant component, it is a critical task to identify the potential threats that users of the system can present, whether those users are inside or outside the system. One of the most important factors differentiating an insider from an outsider is that the insider, being a part of the system, holds privileges that grant him or her access to the resources and processes of the system through valid capabilities. An insider with malicious intent can potentially be more damaging than an outsider. These differences help delineate the notion and scope of an insider.

The significant losses that organizations suffer from failing to detect and mitigate insider threats have resulted in increased interest in insider threat detection. The well-studied, effective techniques proposed for defending against attacks by outsiders have not proven successful against insider attacks. Although a number of security policies and models to deal with the insider threat have been developed, the approach taken by most organizations is to examine audit logs after the attack has taken place. Such approaches are inspired by academic research proposals to address the problem by tracking the insider's activities in the system. Although tracking and logging are important, this thesis argues that they are not sufficient; predicting the potential damage an insider can cause is needed to build a stronger evaluation and mitigation strategy for insider attacks. The question this thesis seeks to answer is the following: 'Considering the relationships that exist between insiders and their roles, their access to the resources, and the resource set, what is the potential damage that an insider can cause?'

A general system model is introduced that can capture general insider attacks, including those documented by the Computer Emergency Response Team (CERT) at the Software Engineering Institute (SEI). Further, initial formulations of the damage potential for leakage and availability in the model are introduced. The model's usefulness is shown by expressing 14 actual attacks in it and showing how, in each case, the attack could have been mitigated.
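As a loose illustration of the kind of quantity the thesis formalizes, the toy sketch below bounds an insider's leakage damage by the total value of the resources reachable through his or her roles. The thesis's actual formulations for leakage and availability are richer; every name, role, and value here is invented for the example.

```python
# Toy illustration of a damage-potential computation; NOT the thesis's model.
roles = {
    "dba": {"customer_db", "audit_log"},
    "dev": {"source_repo"},
}
resource_value = {"customer_db": 10, "audit_log": 3, "source_repo": 5}

def damage_potential(insider_roles):
    """Upper bound on leakage damage: total value of every resource the
    insider can reach through valid capabilities granted by any role."""
    reachable = set().union(*(roles[r] for r in insider_roles))
    return sum(resource_value[r] for r in reachable)

print(damage_potential({"dba", "dev"}))  # 18: all three resources reachable
```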
ContributorsNolastname, Sharad (Author) / Bazzi, Rida (Thesis advisor) / Sen, Arunabha (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created2019
Description
Adaptive capacity to climate change is the ability of a system to mitigate or take advantage of climate change effects. Research on adaptive capacity to climate change suffers from fragmentation, partly because there is no clear consensus on a precise definition of adaptive capacity. The aim of this thesis is to place definitions of adaptive capacity into a formal framework. I formalize adaptive capacity as a computational model written in the Idris 2 programming language. The model uses types to constrain how the elements of the model fit together. To achieve this, I analyze nine existing definitions of adaptive capacity, focusing on the important factors that shape the definitions and on the elements they share. The model is able to describe an adaptive capacity study and guide a user toward concepts that lack clarity in the study, which shows that the model is useful as a tool for thinking about adaptive capacity. In the future, one could refine the model by forming an ontology for adaptive capacity, review the literature more systematically, or turn to qualitative research methods for reviewing the literature.
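The thesis's model is written in Idris 2, whose dependent types can rule out ill-formed study descriptions outright. As a rough analogue only, the Python sketch below shows the weaker version of the idea: encoding the elements of an adaptive-capacity definition as types so a study can only be assembled from compatible pieces. All names are invented.

```python
# Rough Python analogue only; the actual model is written in Idris 2, whose
# dependent types enforce much stronger constraints. All names are invented.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class System:
    name: str              # the entity whose adaptive capacity is studied

@dataclass(frozen=True)
class Stressor:
    name: str              # a climate change effect acting on the system

@dataclass(frozen=True)
class AdaptiveCapacity:
    system: System         # who adapts
    stressor: Stressor     # what they adapt to
    actions: List[str]     # options to mitigate or take advantage of it

# A study description type-checks only if its pieces fit together:
study = AdaptiveCapacity(System("coastal city"), Stressor("sea level rise"),
                         ["levee construction", "managed retreat"])
```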
ContributorsManuel, Jason (Author) / Bazzi, Rida (Thesis director) / Pavlic, Theodore (Committee member) / Middel, Ariane (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2022-05
Description
Data breaches and software vulnerabilities are increasingly severe problems that incur both monetary and reputational costs for companies, as well as societal impacts. While companies have clear monetary and legal incentives to mitigate the risk of data breaches, they have significantly less incentive to mitigate software product vulnerabilities, and their existing incentive is widely considered insufficient. In this thesis, I initially set out to perform a statistical analysis correlating company characteristics and behavior with the characteristics of the data breaches they suffer, as well as a meta-analysis of existing literature. While the attempted statistical analysis was hindered by the lack of sufficiently comprehensive free company datasets, I have recorded my efforts to find suitable databases. I have also performed an exploratory review of 15 papers in the field of improving cybersecurity, identifying four blockers to security and three elements of solutions addressed by the papers, and deriving insights from how these blockers and solution elements are distributed across the papers reviewed.
ContributorsMac, Anthony (Author) / Bazzi, Rida (Thesis director) / Shoshitaishvili, Yan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2022-05
Description
Distributed systems are prone to attacks, called Sybil attacks, wherein an adversary may generate an unbounded number of bogus identities to gain control over the system. In this thesis, an algorithm for mitigating such attacks, DownhillFlow, is presented and tested experimentally. The trust rankings produced by the algorithm are significantly better than those of the distributed SybilGuard protocol and only slightly worse than those of the best-known Sybil defense algorithm, ACL. The results obtained for ACL are consistent with those obtained in previous studies. The running times of the algorithms are also tested, and two results are obtained. First, DownhillFlow's running time is significantly faster than that of any existing algorithm, including ACL, terminating in slightly over one second on the 300,000-node DBLP graph; this allows it to be used as-is in settings such as dynamic networks, with no additional functionality needed. Second, when ACL is configured to match DownhillFlow's speed, it fails to recognize large portions of the input graphs, and its accuracy on the portion of the graphs it does recognize becomes lower than that of DownhillFlow.
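The abstract does not describe DownhillFlow's internals, but flow-based Sybil defenses generally share a common shape: trust is seeded at known-honest nodes, propagated along edges, and degree-normalized, so regions sparsely connected to the honest core rank low. The sketch below illustrates that general shape only; it is not DownhillFlow itself, and the graph encoding and parameters are assumptions.

```python
# Generic flow-based trust ranking in the shape shared by SybilRank/ACL-style
# defenses; this is NOT DownhillFlow, whose details the abstract omits.
# `graph` maps each node to a list of its neighbors (undirected).
def trust_rank(graph, trusted_seeds, iterations=10):
    """Seed trust at known-honest nodes and spread it along edges; nodes
    sparsely connected to the honest region (typical of Sybils) rank low."""
    trust = {v: 0.0 for v in graph}
    for s in trusted_seeds:
        trust[s] = 1.0 / len(trusted_seeds)
    for _ in range(iterations):
        nxt = {v: 0.0 for v in graph}
        for v, neighbors in graph.items():
            if neighbors:
                share = trust[v] / len(neighbors)
                for u in neighbors:
                    nxt[u] += share
        trust = nxt
    # degree-normalize so high-degree honest nodes are not unduly favored
    return {v: t / max(len(graph[v]), 1) for v, t in trust.items()}
```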
ContributorsBradley, Michael (Author) / Bazzi, Rida (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2018
Description
Cryptographic voting systems such as Helios rely heavily on a trusted party to maintain privacy or verifiability. This tradeoff can be done away with by using distributed substitutes for the components that require a trusted party. By replacing the encryption, shuffle, and decryption steps described by Helios with Pedersen threshold encryption and the Neff shuffle, it is possible to obtain a distributed voting system that achieves both privacy and verifiability without trusting any of the contributors. This thesis examines existing approaches to this problem and their shortcomings, and provides empirical metrics for comparing working solutions in detail.
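The key primitive that removes the single trusted party is threshold cryptography: the election secret is shared among n trustees so that any t of them can decrypt the tally, while no smaller coalition learns anything. The toy sketch below shows the underlying Shamir secret sharing over a prime field; real Pedersen distributed key generation works in a discrete-log group and adds verifiability, so treat this as an illustration of the threshold idea only.

```python
# Toy Shamir (t, n) secret sharing over a prime field, illustrating the
# threshold idea behind Pedersen-style distributed key generation.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` with a random degree-(t-1) polynomial, f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 trustees suffice
```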
ContributorsBouck, Spencer Joseph (Author) / Bazzi, Rida (Thesis advisor) / Boscovic, Dragan (Committee member) / Shoshitaishvili, Yan (Committee member) / Arizona State University (Publisher)
Created2021
Description
Data from a total of 282 online web applications was collected, and accounts were created for 230 of those web applications in order to gather data about authentication practices, multistep authentication practices, security question practices, fallback authentication practices, and other security practices for online accounts. The account creation and data collection were done between June 2016 and April 2017. The password strengths for online accounts were analyzed and compared to existing data. Security questions used by online accounts were evaluated for security and usability, and fallback authentication practices were assessed based on their adherence to best practices. Alternative authentication schemes were examined, and other security considerations, such as the use of HTTPS and CAPTCHAs, were explored. Based on existing data, password policies for web applications required stronger passwords in 2017 than in 2010. Nevertheless, password policies for many accounts are still not adequate. About a quarter of the online web applications examined use security questions, and many of the questions have usability and security concerns. Security mechanisms such as HTTPS and continuous authentication are generally not used in conjunction with security questions, which reduces the overall security of the web application. A majority of web applications use email addresses as both the login credential and the password recovery credential, and do not follow best practices. About a quarter of accounts use multistep authentication, and a quarter employ continuous authentication, yet most accounts fail to combine security measures for defense in depth. The overall conclusion is that some online web applications are using secure practices; however, a majority fail to properly implement and utilize them.
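As a hypothetical example of the kind of policy check involved in such a comparison, the sketch below scores a password against a minimum length and a required number of character classes, a common shape for the stronger 2017-era policies; the study's actual scoring criteria are not given in the abstract.

```python
# Hypothetical policy check of the kind used when comparing password
# requirements across sites; the study's actual criteria are not given here.
import re

def meets_policy(password, min_length=8, required_classes=3):
    """Require a minimum length and a minimum number of character classes."""
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    present = sum(bool(re.search(c, password)) for c in classes)
    return len(password) >= min_length and present >= required_classes

print(meets_policy("Tr0ub4dor!"))   # True: length 10, four classes present
print(meets_policy("password"))     # False: only one character class
```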
ContributorsGutierrez, Garrett (Author) / Bazzi, Rida (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created2017
Description
Machine learning (ML) and deep neural networks (DNNs) have achieved great success in a variety of application domains; however, despite significant effort to make these networks robust, they remain vulnerable to adversarial attacks, in which input that is perceptually indistinguishable from natural data can be erroneously classified with high prediction confidence. Works on defending against adversarial examples can be broadly classified as correcting or detecting, which aim, respectively, at negating the effects of the attack and correctly classifying the input, or at detecting and rejecting the input as adversarial. In this work, a new approach for detecting adversarial examples is proposed. The approach takes advantage of the robustness of natural images to noise: as noise is added to a natural image, the prediction probability of its true class drops, but the drop is not sudden or precipitous. The same does not seem to hold for adversarial examples. In other words, the stress-response profile of natural images appears to differ from that of adversarial examples, so adversarial examples could be detected by their stress-response profiles. An evaluation of this approach is performed on the MNIST, CIFAR-10, and ImageNet datasets. Experimental data shows that the approach is effective at detecting some adversarial examples on small-scale images with simple content, with little sacrifice in benign accuracy.
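A minimal sketch of the stress-response idea follows: perturb the input at increasing noise levels, record the probability assigned to the originally predicted class, and flag inputs whose confidence collapses under mild noise. The `model` callable, noise levels, and threshold are assumptions for illustration, not the thesis's exact procedure.

```python
# Sketch of the stress-response profile described above. `model` is any
# callable returning a probability vector over classes; the noise levels
# and threshold are illustrative assumptions, not tuned values.
import numpy as np

def stress_profile(model, image, noise_levels=(0.0, 0.02, 0.05, 0.1, 0.2)):
    """Probability of the originally predicted class at each noise level."""
    top = int(np.argmax(model(image)))
    profile = []
    for sigma in noise_levels:
        noisy = np.clip(image + np.random.normal(0.0, sigma, image.shape),
                        0.0, 1.0)
        profile.append(float(model(noisy)[top]))
    return profile

def looks_adversarial(profile, drop_threshold=0.5):
    """Flag inputs whose confidence collapses under mild noise."""
    return (profile[0] - min(profile)) > drop_threshold
```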
ContributorsSun, Lin (Author) / Bazzi, Rida (Thesis advisor) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created2019
Description
Since its introduction with the iPhone X in 2017, Apple's Face ID has been regarded as more accurate than the facial recognition systems used by competitors, due to its use of depth information and infrared images to capture accurate face data. The goal of this thesis is to explore the usability of current smartphone facial recognition systems as represented by the latest generation of Apple's Face ID. To that end, a research study was conducted to test the usability of Apple's Face ID on the iPhone XR under diverse, simulated conditions designed to replicate real-life scenarios under which a consumer may need to use Face ID. The goal of the study was to make observations on Face ID usability and create a preliminary understanding of areas in which the technology may struggle or fail. According to the results of the study, Face ID on the iPhone XR generally performed well under low-light conditions and adapted to minor changes in the conditions under which a face capture is done, but did not do as well when the user did not maintain full eye contact with the camera or when the capture was done at an angle.
ContributorsTang, Xina (Author) / Bazzi, Rida (Thesis director) / Ulrich, Jon (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-12
Description
A web server is a program that responds to your browser's requests. Often, the response is an HTML document that the browser renders in a way that looks pleasant to humans. The manner in which the server responds is generally determined before it is started up; it is static. The content may change arbitrarily, but the actual logic that the server follows resists change while the server is still running. The goal of this thesis is to explore the possibility of removing this restriction, allowing a web server's logic to be modified arbitrarily at runtime by select users. This is why the term "Federated" appears in the title: my goal is to create a system that can be developed in a decentralized manner, by multiple entities with similar high-level goals but different ideas at the lower level.
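A minimal sketch of the core mechanism might look like the following: route handlers live in a mutable registry that the running server consults on every request, so authorized users could swap logic in without a restart. A real federated version would need authentication and sandboxing; everything here, including names, is an invented illustration.

```python
# Invented minimal illustration: handlers live in a mutable registry the
# server consults per request, so logic can change while the server runs.
from http.server import BaseHTTPRequestHandler, HTTPServer

routes = {"/": lambda: "<h1>hello</h1>"}          # swappable at runtime

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        handler = routes.get(self.path)
        body = handler() if handler else "<h1>404</h1>"
        self.send_response(200 if handler else 404)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

# Another thread, or an authenticated upload endpoint, could later run:
#   routes["/news"] = lambda: "<p>new logic, no restart needed</p>"
HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```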
ContributorsKulkarni, Sidharth (Author) / Bazzi, Rida (Thesis director) / Doupe, Adam (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2019-05