Matching Items (44)
Description
There are potential risks when individuals choose to share information on social media platforms such as Facebook. With over 2.20 billion monthly active users, Facebook holds the largest collection of user information of any social media site. Because of this large collection of data, Facebook has repeatedly received criticism for its data privacy policies, and it has repeatedly changed those policies in an effort to protect both itself and its end users. However, changes in privacy policy may not translate into users changing their privacy controls. The goal of Facebook's privacy controls is to let users take charge of their data privacy. The goal of this study was to determine whether a gap exists between users' perceived privacy and reality and, if so, whether certain information about a user is related to their ability to implement their settings successfully. We gathered information from ASU college students, including gender, field of study, political affiliation, leadership involvement, privacy settings, and online behaviors. After collecting the data, we reviewed each participant's Facebook profile to examine whether a gap existed between their privacy settings and the information visible to a stranger. We found that a difference between settings and reality did exist, and that it was not related to any of the users' background information.
ContributorsPascua, Raphael Matthew Bustos (Author) / Bazzi, Rida (Thesis director) / Dasgupta, Partha (Committee member) / Computer Science and Engineering Program (Contributor) / W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
As computers become a more embedded aspect of daily life, the importance of communicating ideas in computing and technology to the general public has become increasingly apparent. One such growing technology is electronic voting. The feasibility of explaining electronic voting protocols was directly investigated through the creation of a presentation based on journal articles and papers identified by the investigator. Extensive use of analogies and visual aids was made to explain various cryptographic concepts. The presentation was then given to a classroom of ASU freshmen, followed by a feedback survey. A self-evaluation of the presentation methods was conducted, and a procedure for explaining subjects in computer science is proposed based on the researcher's personal process.
ContributorsReniewicki, Peter Josef (Author) / Bazzi, Rida (Thesis director) / Childress, Nancy (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Error-correcting codes are fundamental in modern digital communication, with applications in data storage and data transmission. Interest in a class of error-correcting codes called low-density parity-check (LDPC) codes has been growing since their recent rediscovery because of their low decoding complexity and high performance. However, practical applications have been limited by the difficulty of finding good LDPC codes for practical parameters. This paper proposes an exhaustive and a randomized algorithm for constructing a family of LDPC codes with practical parameters whose matrix representations meet the following requirements: every pair of rows shares exactly one common nonzero element, each row has an odd weight of at least one, and each column has a weight of at least two. These conditions improve the performance of the resulting codes and simplify their conversion into codes for quantum systems. Both algorithms use a maximal-clique algorithm to construct LDPC matrices from graphs whose vertices are possible rows within said matrices and whose edges join rows satisfying the first condition. While these algorithms were found to be suitable for small parameters, future work that optimizes the resulting codes for their expected applications could also dramatically increase the performance of the algorithms themselves.
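
The clique-based construction lends itself to a short illustration. The following Python sketch is an assumption-laden toy, not the paper's implementation: it builds the row-compatibility graph for small, invented parameters and extracts one maximal clique with networkx; the column-weight condition is omitted for brevity.

```python
from itertools import combinations
import networkx as nx

def candidate_rows(n_cols, weight):
    """Enumerate candidate rows of a given (odd) weight over n_cols columns,
    each represented by the frozenset of its nonzero column indices."""
    return [frozenset(c) for c in combinations(range(n_cols), weight)]

def compatibility_graph(rows):
    """Vertices are candidate rows; edges join rows that share exactly one
    nonzero column (the pairwise overlap condition stated above)."""
    g = nx.Graph()
    g.add_nodes_from(rows)
    for r1, r2 in combinations(rows, 2):
        if len(r1 & r2) == 1:
            g.add_edge(r1, r2)
    return g

# Any maximal clique is a set of mutually compatible rows that can be stacked
# into a candidate parity-check matrix (column-weight checks omitted here).
rows = candidate_rows(7, 3)
g = compatibility_graph(rows)
best = max(nx.find_cliques(g), key=len)
print(len(best), "mutually compatible rows")
```
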
ContributorsShurman, Andrew Christian (Author) / Colbourn, Charles (Thesis director) / Bazzi, Rida (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2018-12
Description
Implementing a distributed algorithm is more complicated than implementing a non-distributed one. Distributed systems involve the coordination of different processes, each of which has only a partial view of the global system state, and the only way to share information in a distributed system is by message passing. Tasks that are straightforward in a non-distributed system, like deciding on the value of a global system state, can be quite complicated to achieve in a distributed system [1]. On top of the difficulties caused by the distributed nature of the computation, distributed systems typically need to operate normally even if some of the nodes in the system are faulty, which further adds to the uncertainty that processes have about the global state.

Many factors make the implementation of a distributed algorithm difficult. Design patterns [2] are useful in simplifying the development of general algorithms. A design pattern describes a high-level solution to a common, abstract problem that many systems may face. Common structural, creational, and behavioral problems are identified and elegantly solved by design patterns. By identifying the features that an algorithm uses, and framing each feature as one of the common problems that a specific design pattern solves, designing a robust implementation of an algorithm becomes more manageable. In this way, design patterns can aid the implementation of algorithms. Unfortunately, design patterns are typically not discussed when developing distributed algorithms. Because correctly developing a distributed algorithm is difficult, many papers (e.g., [1], [3], [4]) focus on verifying the correctness of the developed algorithm. More practical papers ([5], [6]) establish both that their algorithm is correct and that it is efficient enough to be practical. However, papers on distributed algorithms usually make little mention of design patterns.

The goal of this work was to gain experience implementing distributed systems, including learning the application of design patterns and of related technical topics. This was achieved by implementing a currently unpublished algorithm tentatively called Bakery Consensus. Bakery Consensus is a replicated state-machine protocol that can tolerate servers with Byzantine faults, but assumes non-faulty clients. The algorithm also establishes non-skipping timestamps for each operation completed by the replicated state machine. The design of the structure, communication, and creation of the different system parts depended heavily upon the book Design Patterns [2]. After implementing the system, the success in implementing its various parts was judged by their ability to satisfy the SOLID principles [7] as well as to establish low coupling and high cohesion [8].

The rest of this paper is organized as follows. We begin by providing background information about distributed algorithms, including replicated state-machine protocols and the Bakery Consensus algorithm. Section 3 gives background on several design patterns and software engineering principles that were used in the development process. Section 4 discusses the well-designed parts of the system that used design patterns, and how these design patterns were chosen. Section 5 discusses well-designed system parts that relied upon other technical topics. Section 6 discusses system parts that need redesign. The conclusion summarizes what was accomplished by the implementation process and the lessons learned about design patterns for distributed algorithms.
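
To make the role of design patterns concrete, here is a minimal Python sketch, entirely illustrative and not drawn from the thesis code, of how a behavioral pattern such as Strategy can decouple message dispatch from message handling in a replicated state-machine server:

```python
from abc import ABC, abstractmethod

class MessageHandler(ABC):
    """Strategy interface: one concrete handler per protocol message type."""
    @abstractmethod
    def handle(self, payload: dict) -> None: ...

class RequestHandler(MessageHandler):
    def handle(self, payload: dict) -> None:
        print("queueing client request", payload)

class CommitHandler(MessageHandler):
    def handle(self, payload: dict) -> None:
        print("applying committed operation", payload)

class Replica:
    """Routes incoming messages to handlers without knowing their internals,
    so new message types can be added without touching the dispatch loop."""
    def __init__(self) -> None:
        self._handlers = {"request": RequestHandler(), "commit": CommitHandler()}

    def dispatch(self, msg_type: str, payload: dict) -> None:
        self._handlers[msg_type].handle(payload)

replica = Replica()
replica.dispatch("request", {"op": "put", "key": "x", "value": 1})
replica.dispatch("commit", {"seq": 42})
```

Adding a new protocol message then means registering a new handler rather than modifying the dispatch loop, which is exactly the kind of low-coupling, open-for-extension property the SOLID principles reward.
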
ContributorsStoutenburg, Tristan Kaleb (Author) / Bazzi, Rida (Thesis director) / Richa, Andrea (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Ethereum smart contracts are susceptible not only to vulnerabilities common to all software development domains, but also to those arising from the peculiar execution model of the Ethereum Virtual Machine. One of these vulnerabilities, susceptibility to re-entrancy attacks, has been at the center of several high-profile contract exploits. Many tools currently exist to detect these vulnerabilities, as do languages that preempt the creation of contracts exhibiting these issues, but there is no mechanism to address them in an automated fashion. One possible approach to filling this gap is direct patching of source files. The process of applying such patches to contracts written in Solidity, the primary Ethereum contract language, is discussed. Toward this end, a survey of deployed contracts is conducted, focusing on the prevalence of language features and compiler versions. A heuristic approach to mitigating a particular class of re-entrancy vulnerability is developed, implemented as the SolPatch tool, and examined with respect to its limitations. As a proof of concept and illustrative example, a simplified version of the contract featured in a high-profile exploit is patched in this manner.
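
The abstract does not spell out SolPatch's transformation, but a toy sketch of one common mitigation for this class of bug, a mutex guard that makes re-entrant calls revert, can illustrate the source-patching idea; the regex, the guard text, and the withdraw snippet below are all hypothetical, and a real patch would also need to declare the `locked` state variable in the contract.

```python
import re

# Naive marker for Solidity external calls that can hand control to an attacker.
EXTERNAL_CALL = re.compile(r"\.(call|send|transfer)\b")

def add_reentrancy_guard(func_lines):
    """Wrap a function body containing an external call in a lock/unlock pair
    so that a re-entrant invocation fails the require() and reverts."""
    body = func_lines[1:-1]
    if not any(EXTERNAL_CALL.search(line) for line in body):
        return func_lines  # nothing to patch
    return ([func_lines[0], "    require(!locked); locked = true;"]
            + body
            + ["    locked = false;", func_lines[-1]])

# Hypothetical vulnerable function: the external call precedes the state update.
withdraw = [
    "function withdraw(uint amount) public {",
    "    require(balances[msg.sender] >= amount);",
    "    msg.sender.call.value(amount)();",
    "    balances[msg.sender] -= amount;",
    "}",
]
print("\n".join(add_reentrancy_guard(withdraw)))
```
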
ContributorsLehman, Maxfield Chance Christian (Author) / Bazzi, Rida (Thesis director) / Doupe, Adam (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-12
Description
\English is a programming language: a method of allowing programmers to write instructions such that a computer may understand and execute those instructions in the form of a program. Though many programming languages exist, this particular language is designed for ease of development and heavy optimizability in ways that no other programming language is. Building on the principles of Assembly-level efficiency, referential integrity, and higher-order functionality, the language is able to produce extremely efficient code; meanwhile, programmatically defined English-based reusable syntax and a strong, static type system make \English easier to read and write than many existing programming languages. Its generalization of all language structures and components to operators leaves the language syntax open to project-specific syntactical structuring, making it more easily applicable in more cases. The thesis project requirements came in three parts: a compiler to compile \English code into NASM Assembly to produce a final program product; a standard library to define many of the basic operations of the language, including the creation of lists; and a C translation library that would use \English properties to compile C code with the \English compiler. Though designed and partially coded, the compiler remains incomplete. The standard library, the C translation library, and the design of the language were completed. Additional tools regarding the language design and implementation were also created, including a Gedit syntax-highlighting configuration file, usage documentation describing in tutorial style the basic usage of the language, and more. Though the thesis project itself may be complete, the \English project will continue in order to produce a new language capable of the abilities made possible by this design.
ContributorsDavey, Connor (Author) / Gupta, Sandeep (Thesis director) / Bazzi, Rida (Committee member) / Calliss, Debra (Committee member) / Barrett, The Honors College (Contributor)
Created2016-05
Description
On Android, existing security procedures require apps to request permissions for access to sensitive resources; only when the user approves the requested permissions will the app be installed. However, permissions are an incomplete security mechanism. In addition to users' limited understanding of permissions, the mechanism does not account for the possibility that different permissions used together can be more dangerous than any single permission alone. Even if users did understand the nature of an app's requested permissions, this mechanism is still not enough to guarantee that a user's information is protected: applications can potentially send or receive sensitive information from other applications without the required permissions by using intents. In other words, applications can potentially collaborate in ways unforeseen by the user, even if the user understands the permissions of each app independently.

In this thesis, we present several graph-based approaches to address these issues. We determine the permissions of an app and generate scores based on our assigned value of certain resources. We analyze these scores overall, as well as in the context of the app's category as determined by Google Play, and show that these scores can be used to identify overzealous apps as well as apps that do not properly fit within their category. We analyze potential interactions between different applications using intents, and identify several promiscuous apps with low permission scores, showing that permissions alone are not sufficient to evaluate the security risks of an app. Our analyses can form the basis of a system to assist users in identifying apps that can potentially compromise user privacy.
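
As a hedged sketch of the scoring idea, not the thesis implementation, the following Python assigns invented sensitivity values to permissions, scores each app, and tracks intent edges between apps; every name and number here is an assumption made for illustration.

```python
PERMISSION_VALUES = {          # assumed sensitivity weights, not real data
    "READ_CONTACTS": 5,
    "ACCESS_FINE_LOCATION": 4,
    "INTERNET": 3,
    "VIBRATE": 0,
}

def permission_score(requested):
    """Sum the assigned values of an app's requested permissions."""
    return sum(PERMISSION_VALUES.get(p, 1) for p in requested)

apps = {
    "flashlight": ["VIBRATE", "READ_CONTACTS", "INTERNET"],  # overzealous
    "maps":       ["ACCESS_FINE_LOCATION", "INTERNET"],
}
scores = {name: permission_score(perms) for name, perms in apps.items()}

# Intent edges: sender -> receiver. An app with a low score but high fan-in
# can still leak data it receives from a better-privileged collaborator.
intent_edges = [("maps", "flashlight")]
fan_in = {name: sum(1 for _, dst in intent_edges if dst == name) for name in apps}
for name in apps:
    print(name, "score:", scores[name], "intent fan-in:", fan_in[name])
```
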
ContributorsGibson, Aaron (Author) / Bazzi, Rida (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created2015
Description
When scientific software is written to specify processes, it takes the form of a workflow, and it is often written in an ad-hoc manner in a dynamic programming language. There is a proliferation of legacy workflows implemented by non-expert programmers due to the accessibility of dynamic languages. Unfortunately, ad-hoc workflows lack the structured description provided by specialized management systems, making ad-hoc workflow maintenance and reuse difficult and motivating the need for analysis methods. Analyzing ad-hoc workflows with compiler techniques does not address dynamic languages: such a program has so few constraints that its behavior cannot be predicted statically. In contrast, workflow provenance tracking has had success using run-time techniques to record data. The aim of this work is to develop a new analysis method for extracting workflow structure at run-time, thus avoiding the issues posed by dynamic languages.

The method captures the dataflow of an ad-hoc workflow through its execution and abstracts it with a process for simplifying repetition. An instrumentation system first processes the workflow to produce an instrumented version, capable of logging events, which is then executed on an input to produce a trace. The trace undergoes dataflow construction to produce a provenance graph. The dataflow is examined for equivalent regions, which are collected into a single unit. The workflow is thus characterized in terms of its treatment of an input. Unlike other methods, a run-time approach characterizes the workflow's actual behavior; including elements which static analysis cannot predict (for example, code dynamically evaluated based on input parameters). This also enables the characterization of dataflow through external tools.
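
A minimal sketch of the run-time capture step, assuming instrumentation takes the form of a logging decorator (the actual system processes the workflow source itself), is shown below; all functions and inputs are invented for illustration.

```python
import functools

TRACE = []  # event log: (function name, inputs, output)

def instrumented(fn):
    """Record every invocation so a dataflow graph can be rebuilt afterwards."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        TRACE.append((fn.__name__, args, result))
        return result
    return wrapper

@instrumented
def clean(record):
    return record.strip().lower()

@instrumented
def classify(record):
    return "long" if len(record) > 5 else "short"

# Run the ad-hoc "workflow" on an input to produce a trace ...
for rec in ["  Alpha ", "  Betamax "]:
    classify(clean(rec))

# ... then recover provenance edges by matching producers' outputs to
# consumers' inputs; repeated equivalent regions collapse into one edge.
edges = {(src, dst) for src, _, out in TRACE for dst, args, _ in TRACE if out in args}
print(edges)  # {('clean', 'classify')}
```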

The contributions of this work are: a run-time method for recording a provenance graph from an ad-hoc Python workflow, and a method for analyzing the structure of a workflow from its provenance. Both methods are implemented in Python and are demonstrated on real-world Python workflows. These contributions enable users to derive graph structure from workflows. Empowered by a graphical view, users can better understand a legacy workflow. This makes the wealth of legacy ad-hoc workflows accessible, enabling workflow reuse instead of investing time and resources in creating a workflow from scratch.
ContributorsAcuña, Ruben (Author) / Bazzi, Rida (Thesis advisor) / Lacroix, Zoé (Thesis advisor) / Candan, Kasim (Committee member) / Arizona State University (Publisher)
Created2015
Description
Robotic technology is advancing to the point where it will soon be feasible to deploy massive populations, or swarms, of low-cost autonomous robots to collectively perform tasks over large domains and time scales. Many of these tasks will require the robots to allocate themselves around the boundaries of regions or features of interest and achieve target objectives that derive from their resulting spatial configurations, such as forming a connected communication network or acquiring sensor data around the entire boundary. We refer to this spatial allocation problem as boundary coverage. Possible swarm tasks that will involve boundary coverage include cooperative load manipulation for applications in construction, manufacturing, and disaster response.

In this work, I address the challenges of controlling a swarm of resource-constrained robots to achieve boundary coverage, which I refer to as the problem of stochastic boundary coverage. I first examined an instance of this behavior in the biological phenomenon of group food retrieval by desert ants, and developed a hybrid dynamical system model of this process from experimental data. Subsequently, with the aid of collaborators, I used a continuum abstraction of swarm population dynamics, adapted from a modeling framework used in chemical kinetics, to derive stochastic robot control policies that drive a swarm to target steady-state allocations around multiple boundaries in a way that is robust to environmental variations.

Next, I determined the statistical properties of the random graph that is formed by a group of robots, each with the same capabilities, that have attached to a boundary at random locations. I also computed the probability density functions (pdfs) of the robot positions and inter-robot distances for this case.

I then extended this analysis to cases in which the robots have heterogeneous communication/sensing radii and attach to a boundary according to non-uniform, non-identical pdfs. I proved that these more general coverage strategies generate random graphs whose probability of connectivity is #P-hard to compute. Finally, I investigated possible approaches to validating our boundary coverage strategies in multi-robot simulations with realistic Wi-Fi communication.
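
Since the exact connectivity probability is intractable in general, a Monte Carlo estimate is a natural fallback. The sketch below is illustrative rather than taken from the dissertation: it samples uniform attachment points on a unit circle and checks connectivity of the chord-distance communication graph for an assumed common radius.

```python
import math
import random

def connectivity_probability(n, comm_radius, trials=1000):
    """Estimate P(connected) for n robots attached uniformly at random to a
    unit-radius circular boundary, with a common communication radius."""
    hits = 0
    for _ in range(trials):
        pts = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
        seen, stack = {0}, [0]          # graph search from robot 0
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen:
                    gap = abs(pts[i] - pts[j])
                    chord = 2.0 * math.sin(min(gap, 2.0 * math.pi - gap) / 2.0)
                    if chord <= comm_radius:   # within communication range
                        seen.add(j)
                        stack.append(j)
        hits += (len(seen) == n)
    return hits / trials

print(connectivity_probability(n=20, comm_radius=0.6))
```
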
ContributorsPeruvemba Kumar, Ganesh (Author) / Berman, Spring M (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Bazzi, Rida (Committee member) / Syrotiuk, Violet (Committee member) / Taylor, Thomas (Committee member) / Arizona State University (Publisher)
Created2016
Description
Several decades of transistor technology scaling have brought the threat of soft errors to modern embedded processors. Several techniques have been proposed to protect these systems from soft errors. However, their effectiveness in protecting the computation cannot be ascertained without accurate and quantitative estimation of system reliability. Vulnerability -- a metric that defines the probability of system failure (reliability) through analytical models -- is the most effective mechanism for our current estimation and early design-space-exploration needs. Previous vulnerability estimation tools are based around the Sim-Alpha simulator, which has been shown to have several limitations. In this thesis, I present gemV, an accurate and comprehensive vulnerability estimation tool based on gem5, a popular cycle-accurate micro-architectural simulator that can model several different processor models in close-to-real-hardware form. gemV can be used for fast and early design space exploration and also to evaluate the protection afforded by commodity processors. gemV is comprehensive, since it models almost all sequential components of the processor. gemV is accurate because of fine-grain vulnerability tracking, accurate vulnerability modeling of squashed instructions, and accurate vulnerability modeling of shared data structures in gem5. gemV has been thoroughly validated against extensive fault injection experiments and achieves 97% accuracy with 95% confidence. A micro-architect can use gemV to discover micro-architectural variants of a processor that minimize vulnerability for an allowed performance penalty. A software developer can use gemV to explore the performance-vulnerability trade-off by choosing different algorithms and compiler optimizations, while a system designer can use gemV to explore the performance-vulnerability trade-offs of choosing different Instruction Set Architectures (ISAs).
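
The underlying metric admits a compact illustration: a bit contributes to vulnerability only during the cycles in which it holds data that can still affect the program outcome. The Python below is a toy under that assumption, with invented interval and size numbers, and is not gemV itself.

```python
def vulnerability(live_intervals, structure_bits, total_cycles):
    """Vulnerable bit-cycles (bits holding live data) over total bit-cycles."""
    vulnerable = sum((end - start) * bits for start, end, bits in live_intervals)
    return vulnerable / (structure_bits * total_cycles)

# A hypothetical 128-bit structure observed for 100 cycles: a 32-bit field
# live during cycles 10-60 and a 64-bit field live during cycles 20-30.
print(f"{vulnerability([(10, 60, 32), (20, 30, 64)], 128, 100):.2%}")  # 17.50%
```
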
ContributorsTanikella, Srinivas Karthik (Author) / Shrivastava, Aviral (Thesis advisor) / Bazzi, Rida (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created2016